Physical justification for Uncertainty principle

May 10, 2006 #1
Most popular or elementary textbook-level expositions advance the following physical argument in favor of the uncertainty principle: in order to observe a state we must disturb it. Thus we have changed the state by our very act of observation, and uncertainty creeps in. Now this explanation is obviously wrong to me for more than one reason, but what is the real explanation? Does an explanation exist, or is it taken as an unexplainable axiom? Thanks.

May 10, 2006 #2
It's a logical consequence of the 5+1 axioms of QM.

May 10, 2006 #3
Don't stick too much to the 'axioms' or to the 'observation' stuff. The wave-packet picture is in itself already very clear and much simpler: if a wave packet has a sharp localisation it cannot have a sharp wavevector spectrum, and vice versa. This picture covers only a small part of the applications of the uncertainty principle, but it shows that the "axiom" or "observation" points of view are not absolutely necessary. However, there is a link. Of course, if you want to 'prepare' or to 'choose' a wave system that has some localisation in space, then you know that, because it is a wave system, it will have an indefinite wavevector. Now, there is still something to clarify: the relation between momentum and wavevector. Saying that this is simply Planck's relation tells us rather little, except that a universal constant is involved. But where does this relation come from, physically? What is its interpretation? There is none, I think, only that QM is more general than CM. I find it fascinating how Euler, Lagrange, Hamilton, and others were able to put classical mechanics in the so-called 'canonical form' without knowing anything about the quantum. This is so impressive that I feel guilty for not being able to explain the path from momentum to wavevector.

May 10, 2006 #4
Loom, I dunno what your level of knowledge on the subject is, but mine is way below the level of the two above answers. They're of no help to me. I'll attempt to answer it in layperson's terms; forgive me if I underestimate your level of knowledge. First, be careful of the term "obvious". Nothing should be obvious. We're dealing with atomic and subatomic particles here, something that cannot be compared to macroscopic objects like rocks and people. Second, be careful of the assumptions you make when talking about "observing" something. In the macro world, we use an observing medium (light) that is WAY smaller than the objects we are trying to observe, so it has no noticeable effect on them. But in the atomic world, our observing methods are on the scale of the things being observed. Consider what would happen if you scaled your atomic observations up to macro size. Say you wanted to observe the speed and position of a grapefruit-sized atom across the table. Not too hard to do, unless your only source of information about its speed and position was by pelting it with apples! Further, you don't get to actually SEE the apples OR the grapefruit; all you get to do is record where the APPLES stopped after they bounced. What's worse, slow-moving apples do a lousy job of telling you where the grapefruit is, or is going. Faster-moving apples will give you a better idea, but how do you think these faster-moving apples will affect the velocity and position of the grapefruit? They'll knock the grapefruit around even MORE.
Now, after the first few apples have ricocheted off the grapefruit, do you think it still has the same position or velocity?

May 10, 2006 #5
The physical justification is that it works. Dave, what you've said would imply that the Uncertainty Principle is a purely experimental phenomenon. In fact, it is absolutely necessary for our current formulation of quantum mechanics, and is a consequence of the formal structure, not just some appendix to it. What the OP said is more or less correct: to observe we must disturb, and unlike in classical mechanics, in quantum mechanics there is no omniscient observer measuring every dynamical quantity. The observer is part of the system, so the order of measurements that an observer chooses directly affects the system being measured, and there's no way around it.

May 10, 2006 #6
I will have to concur with the previous post. You are implying that the HUP is a "measurement uncertainty". This is not correct. If it were, would that mean that if I had a better instrument, I could actually change the HUP? The uncertainty that one gets in a single measurement, which is contained in what you have described above, is not the HUP. Again, take note that the HUP is a natural consequence of the formulation and not a consequence of measurement instrumentation. The formulation provides no way around it.

May 10, 2006 #7
Yes, it was not meant to be my implication that it was merely a measurement thing. What I was trying to get across is that there is no way of making an observation without affecting the thing being observed.

May 10, 2006 #8
Is the canonical commutation relation one of dextercioby's "5+1 postulates"?

May 10, 2006 #9
Here's a different formulation of Heisenberg based on Fourier analysis. Consider the quantum wavefunction as describing the probability of finding the particle at each location, and take the Fourier transform of the wavefunction. Suppose you have determined the position of the particle to great accuracy. The quantum wavefunction then "looks" like a spike at the location where the particle is and zero elsewhere. What does the Fourier transform look like? It's spread all over the place, since the range of frequencies that must be added up to get the wavefunction is huge; the spread in wave number of the quantum wavefunction is gigantic. Since wave number is related to momentum, position and momentum are therefore mutually uncertain. Similarly, suppose you have determined the wave number (i.e., momentum) to great certainty. The Fourier transform of the wavefunction is then a spike, indicating that the wavefunction has few frequency components, so the quantum wavefunction is all spread out, meaning there is uncertainty in position. Finally, some knowledge of Fourier analysis says the minimum uncertainty happens when the quantum wavefunction is a Gaussian. Its Fourier transform is then also a Gaussian, and the product of the two spreads is 1/2. This can be converted into the familiar form by plugging in the momentum in place of the wave number, and saying that 1/2 is the absolute minimum (which is where the greater-than-or-equals sign comes in).

May 10, 2006 #10
If I am not mistaken, this was Heisenberg's original justification for his principle.
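The Fourier-transform argument in post #9 is easy to check numerically. Below is a minimal sketch in Python/numpy; the Gaussian widths and the grid parameters are illustrative choices, not anything taken from the thread.

```python
import numpy as np

# Numerical check of the Fourier argument: a Gaussian wavefunction of width
# sigma in position space has a wavenumber spectrum of width 1/(2*sigma),
# so the product of the two spreads is 1/2.
def spreads(sigma, N=2**14, L=200.0):
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    dx = x[1] - x[0]
    psi = np.exp(-x**2 / (4 * sigma**2))
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)          # normalise |psi|^2

    k = 2 * np.pi * np.fft.fftfreq(N, d=dx)              # wavenumber grid
    dk = k[1] - k[0]
    phi = np.fft.fft(psi)
    phi /= np.sqrt(np.sum(np.abs(phi)**2) * dk)          # normalise |phi|^2

    def rms(u, w, du):
        p = np.abs(w)**2 * du                            # probability weights
        mean = np.sum(u * p)
        return np.sqrt(np.sum((u - mean)**2 * p))

    return rms(x, psi, dx), rms(k, phi, dk)

for sigma in (0.5, 1.0, 2.0):
    dx_, dk_ = spreads(sigma)
    print(f"sigma={sigma}:  Delta_x={dx_:.3f}  Delta_k={dk_:.3f}  product={dx_*dk_:.3f}")
```

For a Gaussian the product of the two spreads stays at 1/2 whatever the width; any other pulse shape gives a larger product, which is where the greater-than-or-equals sign comes from.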
May 10, 2006 #11
I'm not exactly sure if it was his original, but I remember reading that Heisenberg initially came to the same wrong conclusion, that it is an error introduced by observation, and was convinced by Born (or possibly Bohr?) to look at his work again, after which he determined it was really a fundamental uncertainty.

May 10, 2006 #12
Molu, uncertainty has nothing to do with disturbances. To see this, we just have to remember that the quantum-mechanical probability algorithm is time-symmetric. It allows us to assign probabilities not only to the possible outcomes of future measurements (on the basis of actual outcomes of present measurements) but also to the possible outcomes of past measurements (on the same basis). (In both cases we use the Born rule. We can even assign probabilities to the possible outcomes of present measurements on the basis of past and future measurements if we use the Aharonov-Bergmann-Lebowitz rule instead.) Suppose that a measurement at t1 yields the value v1 and a measurement at the later time t2 yields the value v2. If the measurements are of the repeatable kind and the Hamiltonian is 0, then according to the usual time-asymmetric interpretation of the formalism, the observable measured possesses the value v1 from t1 to t2, at which time its value changes to v2. If this is a valid interpretation, then so is the following: the observable measured possesses the value v2 from t1 (at which time its value suddenly changes from v1) until the time t2. If the latter interpretation is harebrained, then so is the first. If there is no collapse backward in time, then there is no collapse forward in time. And if there is no collapse, the question of disturbance does not arise. Observables have values only if, when, and to the extent that they are measured. If you want to say something half testable about the values of observables between measurements, you can say it in terms of the probabilities of the possible outcomes of unperformed measurements.

May 10, 2006 #13
Daniel, right you are, but there is nothing sacrosanct about those axioms. You can choose others as long as they give you the same probability assignments. These days, post string theory and Susskind's landscape, anthropic arguments are again en vogue. Quoting Jeffrey Bub: "the fact that we find ourselves in a quantum world where measurement is possible... will surely involve the same sort of explanation as the fact that we find ourselves in a world where we are able to exist as carbon-based life forms." In this spirit, require the existence of objects that
- have spatial extent (they "occupy" space),
- are composed of a (large but) finite number of objects that lack spatial extent, and
- are stable: they neither collapse nor explode the moment they are formed,
and find that a precondition for the existence of such objects is the fuzziness of both the relative positions and the relative momenta of their components, as I have explained in https://www.physicsforums.com/showthread.php?t=116582. For a stable equilibrium between
- the tendency of interatomic relative positions to become less fuzzy due to electrostatic attraction, and
- the tendency of interatomic relative positions to become more fuzzy due to the fuzziness of the corresponding momenta,
we need a relation between Δx and Δp such that a decrease in one implies an increase in the other. It stands to reason that Δp goes to infinity as Δx goes to zero.
A sharp position then implies a maximally fuzzy momentum (that is, a flat distribution containing no information whatever). This suffices to derive the free-particle propagator and the free Schrödinger equation. Once we have that, the possibilities of incorporating interactions into the resulting formalism are very limited and largely determined by requiring the existence of measurement devices or, in other words, the consistency of the formalism, inasmuch as this presupposes the existence of measurement devices.

May 11, 2006 #14
Michel, the concepts that feature in classical physics have their roots in the mathematical features of the quantum-mechanical probability algorithms. The classical concepts of energy and momentum are rooted in the time and space dependence of the phases of quantum-mechanical probability amplitudes. (These amplitudes are complex numbers. While a real number has an absolute value and a sign, a complex number has an absolute value and a phase, which is a real number.)

May 11, 2006 #15
The explanation you give, a highly popular one first given by Heisenberg himself, is nevertheless a wrong one, as pointed out by others and as I said myself. In fact, post-EPR it is my understanding that it is entirely possible to avoid the observer effect. If Bob measures one of two entangled particles, he is also in fact measuring Alice's particle without in any way 'disturbing' Alice's particle. Because of this, any uncertainty principle constructed on the basis of the observer effect would hold in classical mechanics but break down when the Bell inequality is violated.

May 11, 2006 #16
I think that's the best explanation I've seen here. It explained things at the right level for me. Thanks :) Though I'm afraid I didn't take the trouble to understand the Fourier analysis, I can get the essence of the argument.

Jun 13, 2006 #17
There's actually not much magical stuff to the whole "uncertainty principle." "It comes from the axioms of QM." Well... it can... but you can also get it just from arguments involving the Fourier transform of a finite wave pulse. As such it can just as easily be obtained classically as it can quantum mechanically. Even classically, if you look at a pulse of light and take its Fourier transform, you'll find that it behaves as though it has a distribution of frequencies, with an average frequency that corresponds exactly to the frequency that normally appears in Planck's E = hν. If you then say that with each frequency there is an associated energy of the same form as above, then you can obtain, via classical arguments, the "uncertainty principle." It's not really an uncertainty principle per se, since this is classical, but you can find that the length of the time pulse and said energy distribution obey exactly the same inequality. See? Oooooo! QM from classical! Of course the wave description of "particles" requires quantum mechanics, but the principle can be obtained in a similar way. If you want a better feel for the uncertainty principle, screw elementary texts and ad hoc hand-waving explanations like the one you mentioned and study Fourier transforms and some wave theory. I always found that elementary texts just made physics more confusing by replacing valid mathematics with the suspicious invocation of intuition and ad hoc explanations anyway. EDIT: Oh! It looks like someone got to the Fourier transform point before I did.
I would still just like to emphasize that in that argument, whose gist you already have, there was NO mention of quantum mechanical phenomena aside from the wave function associated with the "particle." This raises the question: what other supposedly quantum mechanical phenomena can be obtained from classical arguments? You'd be surprised how much. Think about it some. Whenever you encounter something funny in quantum mechanics, ask "can this be obtained with equivalently classical methods?" There's been a lot of work in stochastic mechanics by people like Nelson and Boyer showing that classical particles subject to stochastically varying potentials or fields appear to obey quantum mechanical laws. Thought provoking, no? Sorry to get off topic.
Quantitative strain mapping at the nanometer scale (Vol. 43 No. 1)
[Image: (a) Scheme and (b) dark-field (004) electron hologram of a Si/Si0.79Ge0.21 superlattice. (c) Phase of the hologram. (d) Strain map.]
As strain is now used routinely in transistor devices to increase the mobility of the charge carriers, the microelectronics industry needs ways to map the strain with nanometer resolution. Recently, a powerful TEM (transmission electron microscopy) based technique called dark-field electron holography was invented by Martin Hÿtch at CEMES in Toulouse. To map the strain, it is necessary to thin a sample to electron transparency using a focused ion beam tool. Then, to form a dark-field electron hologram, electron beams that have been diffracted by both the region of interest (a device, a layer) and a region of reference (usually the substrate) are interfered using an electron biprism. When grown on a Si substrate by epitaxy, SiGe layers are tensilely strained in the growth direction, as shown in figure (a). Due to the presence of strain, variations of the hologram fringe spacing can be seen (b). Using Fourier-space processing, the phase of the electrons can be retrieved from the hologram (c), and by taking the gradient of the phase, the strain map can then be calculated (d). As the technique is quantitative, one can directly correlate the results with simulations to get information about the composition of the layers. As an example, we have investigated the variation of the substitutional carbon content in annealed Si/SiGeC superlattices. Carbon is used to control the strain and avoid plastic relaxation. However, during annealing the formation of β-SiC clusters reduces the effect of the C atoms. By combining holography and finite-element simulation, we have shown that after annealing at 1050°C, a SiGeC structure behaves like pure SiGe from the point of view of strain.
The reduction of the substitutional C content in annealed Si/SiGeC superlattices studied by dark-field electron holography, T. Denneulin, J. L. Rouvière, A. Béché, M. Py, J. P. Barnes, N. Rochat, J. M. Hartmann and D. Cooper, Semicond. Sci. Technol. 26, 125010 (2011)

Vector Correlators in Lattice QCD (Vol. 43 No. 1)
[Image: Comparison of the vacuum polarisation calculated from the e+e- annihilation cross section with recent lattice simulations.]
Vector Correlators in Lattice QCD: Methods and Applications, D. Bernecker and H. Meyer, Eur. Phys. J. A 47, 11, 1 (2011)

Leidenfrost propulsion on a ratchet (Vol. 43 No. 1)
[Image: Levitating Leidenfrost drop on a hot ratchet: a force acts on the drop, which deflects a fiber plunging into it.]
The Leidenfrost phenomenon is observed when depositing liquids on solids much hotter than their boiling point. Liquids then levitate on a cushion of their own vapour and slowly evaporate without boiling, due to the absence of contact with the substrate. The vapour cushion also makes liquids ultra-mobile, and Linke discovered in 2006 that Leidenfrost drops on a hot ratchet self-propel, in the direction of "climbing" the teeth steps. The corresponding forces were found to be 10 to 100 µN, much smaller than the liquid weight, yet enough to generate velocities of order 10 cm/s. The origin of the motion was not really clear, despite stimulating propositions in Linke's original paper. As a first step, it was reported in 2011 by Lagubeau et al. that disks of sublimating dry ice also levitate and self-propel on hot ratchets: liquid deformations are not responsible for the motion.
However, the levitating object in all these experiments squeezes the vapour below, and the resulting flow might be rectified by the asymmetric profile of the ratchet. The key question was not only to check this assumption, but also to determine in which privileged direction the vapour flows. By tracking tiny glass beads in the vapour, it was shown that rectification indeed takes place, along the descending slope of the teeth, the vapour escaping laterally once it reaches the step of the teeth. Hence the levitating body is entrained by the viscous drag arising from this directional vapour flow. Goldstein et al. reached a similar conclusion in a paper to appear in the Journal of Fluid Mechanics. Many questions however remain: ratchets also generate particular friction (the liquid hits the teeth as it progresses), and the optimal ratchet (maximizing the speed of these little hovercraft) has not yet been designed.
Viscous mechanism for Leidenfrost propulsion on a ratchet, G. Dupeux, M. Le Merrer, G. Lagubeau, C. Clanet, S. Hardt and D. Quéré, EPL 96, 58001 (2011)

Yellow-green and amber InGaN micro-LED arrays (Vol. 43 No. 1)
[Image: Representative amber micro-LED array and controllable emission pattern.]
Longer-wavelength InGaN emitters (~600 nm) are important for some potential applications such as optoelectronic tweezers and visible light communication. The primary obstacle to developing InGaN light-emitting diodes (LEDs) at longer wavelengths, however, is that it is difficult to incorporate a high indium composition (for extending the emission wavelength) while maintaining good epitaxial quality. Indium in high proportion tends to aggregate. High-indium InGaN structures also show strong piezoelectric fields, which in turn induce a reduced wavefunction overlap between electrons and holes, and a consequently weakened emission. To overcome these effects, new epitaxial InGaN structures of high indium content are grown, in which an electron reservoir layer is introduced to enhance the indium incorporation and the light emission, but which retain the conventional (0001) orientation. Photoluminescence measurements reveal that the emission wavelengths of these high-In quantum well structures can be tuned from 560 nm to 600 nm, depending on the actual indium composition. Yellow-green and amber devices in an array format are developed based on these wafer structures, where each LED pixel can be individually addressed. Power measurements indicate that the power density of the yellow-green (amber) device per pixel is up to 8 W/cm² (4.4 W/cm²), much higher than that of conventional broad-sized LEDs made under the same conditions, and nearly an order of magnitude higher than that required by optoelectronic tweezers, validating the feasibility of using these micro-LEDs for tweezing. Nevertheless, it is found that the emission wavelength is strongly blueshifted upon injection-current increase, by up to ~50 nm. Numerical simulations reveal that this is caused by screening of the quantum-confined Stark effect and a band-filling effect, so further optimisation of the growth conditions and epitaxial structures is needed.
Z. Gong, N.Y. Liu, Y.B. Tao, D. Massoubre, E.Y. Xie, X.D. Hu, Z.Z. Chen, G.Y. Zhang, Y.B. Pan, M.S. Hao, I.M. Watson, E. Gu and M.D. Dawson, Semicond. Sci. Technol. 27, 015003 (2012)
Effects of electric field-induced versus conventional heating (Vol. 43 No. 1)
[Image: Study of the mobile phone electric field shows no extraordinary heating.]
The effect of microwave heating and cell phone radiation on sample material is no different from a temperature increase, according to the present work. Richert and coworkers attempted for the first time to systematically quantify the difference between microwave-induced heating and conventional heating using a hotplate or an oil bath, with thin liquid glycerol samples. The authors measured molecular mobility and reactivity changes induced by electric fields in these samples, which can be gauged by what is known as the configurational temperature. They realised that thin samples exposed to low-frequency electric-field heating can have a considerably higher mobility and reactivity than samples exposed to standard heating, even if they are exactly at the same temperature. They also found that at frequencies exceeding several megahertz and for samples thicker than one millimetre, the type of heating does not have a significant impact on the level of molecular mobility and reactivity, which is mainly dependent on the sample temperature. Actually, the configurational temperatures are then only marginally higher than the real, measurable one. Previous studies were mostly fundamental in nature and did not establish a connection between microwaves and mobile-phone heating effects. These findings imply that for heating with microwave or cell phone radiation operating in the gigahertz frequency range, no effect other than a temperature increase should be expected. Since the results are based on averaged temperatures, future work will be required to quantify local overheating, which can, for example, occur in biological tissue subjected to a microwave field, and to better assess the risks linked to using both microwaves and mobile phones.
Heating liquid dielectrics by time dependent fields, A. Khalife, U. Pathak and R. Richert, Eur. Phys. J. B 83, 429-435 (2011)

Ordered Si oxide nanodots at atmospheric pressure (Vol. 43 No. 1)
[Image: SEM image of the surface morphology of SiOxHyCz nano-islands localized at the center of nano-indents.]
It is now possible to simultaneously create highly reproducible three-dimensional silicon oxide nanodots on micrometric-scale silicon films in only a few seconds. The present study shows that one can create a square array of such nanodots, using regularly spaced nanoindents on the deposition layer, that could ultimately find applications as biosensors for genomics or bio-diagnostics. A process called atmospheric-pressure plasma-enhanced chemical vapour deposition is used. This approach is a much faster alternative to methods such as nanoscale lithography, which only permits the deposition of one nanodot at a time. It also allows the growth of a well-ordered array of nanodots, which is not the case for many growth processes. In addition, it can be carried out at atmospheric pressure, which decreases its cost compared to low-pressure deposition processes. One goal was to understand the self-organization mechanisms leading to a preferential deposition of the nanodots in the indents. By varying the indents' interspacing, the authors made it comparable to the average distance travelled by the silicon oxide particles of the deposited material. Thus, by adapting both the indents' spacing and the silicon substrate temperature, they observed optimum self-ordering inside the indents using atomic force microscopy. The next step will be to investigate how such nanoarrays could be used as nanosensors.
It is planned to develop similar square arrays on metallic substrates in order to better control the driving forces producing the highly ordered self-organisation of nanodots. Further research will be needed to give sensing ability to individual nanodots by associating them with probe molecules designed to recognise target molecules to be detected.
Ordering of SiOxHyCz islands deposited by atmospheric pressure microwave plasma torch on Si(100) substrates patterned by nanoindentation, X. Landreau, B. Lanfant, T. Merle, E. Laborde, C. Dublanche-Tixier and P. Tristant, Eur. Phys. J. D 65/3, 421 (2011)

Dynamics of interacting edge defects in copolymer lamellae

Can metals remember their shape at nanoscale, too?
[Image: Simulation of the thermally induced austenitic phase transition in NiTi nanoparticles.]

First quantum machine to produce four clones (Vol. 43 No. 1)
[Image: These intertwined rays of light are an artistic metaphor for quantum cloning.]
This article presents a theory for a quantum cloning machine able to produce several copies of the quantum state of a particle at the atomic or sub-atomic scale. It could have implications for quantum information processing methods used, for example, in message encryption systems. Quantum cloning is difficult because the laws of quantum mechanics only allow an approximate copy (not an exact copy) of an original quantum state to be made, as measuring such a state prior to its cloning would alter it. The present work shows that it is theoretically possible to create four approximate copies of an initial quantum state, in a process called asymmetric cloning. The authors have extended previous work that was limited to quantum cloning providing only two or three copies of the original state. One key challenge was that the quality of the approximate copy decreases as the number of copies increases. It appears possible to optimise the quality of the cloned copies, thus yielding four good approximations of the initial quantum state. The present quantum cloning machine is shown to have the advantage of being universal and therefore able to work with any quantum state, ranging from a photon to an atom. Asymmetric quantum cloning has applications in analysing the security of message encryption systems based on shared secret quantum keys. Two people will know whether their communication is secure by analysing the quality of each copy of their secret key. Any third party trying to gain knowledge of that key would be detected, as measuring it would disturb the state of that key.
Optimal asymmetric 1 → 4 quantum cloning in arbitrary dimension, X.J. Ren, Y. Xiang and H. Fan, Eur. Phys. J. D (2011)

Energy transport for biomolecular networks (Vol. 43 No. 1)
[Image: Probability density of the average energy transfer time τ as a function of the de-phasing rate γ. As indicated by the median (white line), for a typical configuration τ is reduced by the de-phasing up to an optimal rate γ, where the transfer becomes most efficient. However, these noise-assisted transfer times are longer than the minimum transfer time achieved by an optimized configuration for vanishing de-phasing (minimum of the dot-dashed line on the left).]
Recent experimental demonstrations of long-lived quantum coherence in certain photosynthetic light-harvesting structures have launched a flurry of controversy over the role of coherence in biological function.
An ongoing investigation into the astonishingly high energy-transport efficiency of these structures suggests that nature takes advantage of quantum coherent dynamics. We inquire into the fundamental principles of quantum coherent energy transport in ensembles of spatially disordered molecular networks subjected to dephasing noise. De-phasing reduces the coherence between individual network nodes and has already been shown to assist transport substantially, provided that quantum coherence is disadvantageous by reason of destructive interference, e.g. in the presence of disorder and quantum localization. In a statistical survey, we map the probability landscape of transport efficiency for the whole ensemble of disordered networks, in search of specially adapted molecular conformations that nature may select in order to facilitate energy transport. We thus find certain optimal molecular configurations that, by virtue of constructive quantum interference, yield the highest transport efficiencies in the absence of dephasing noise. Moreover, the transport efficiencies realized by these optimal configurations are systematically higher than the noise-assisted efficiencies mentioned above. As discussed in the article, this defines a clear incentive to select configurations for which quantum coherence can be harnessed.
The optimization topography of exciton transport, T. Scholak, T. Wellens and A. Buchleitner, EPL 96, 10001 (2011)

Double ionization with absorbers (Vol. 43 No. 1)
[Image: The system is initially described by a two-particle wave function (top). As a particle is ionized and subsequently hits the absorber (Τ), the remaining electron is described by a one-particle density matrix (middle), which may be thought of as an ensemble of several one-particle wave functions. As the one-particle system may also be absorbed, the vacuum state may also be populated (bottom).]
In quantum dynamics, unbound systems, such as atoms being ionised, are typically very costly to describe numerically, as their extension is not limited. This problem would be reduced if one could settle for a description of the remainder of the system and disregard the escaping particles. Removing the escaping particles may be achieved by introducing absorbers close to the boundary of the numerical grid. The problem is, however, that when such "interactions" are combined with the Schrödinger equation, all information about the system is lost as a particle is absorbed. Thus, if we wish to still describe the remaining particles, a generalization of the formalism is called for. As it turns out, this generalisation is provided by the Lindblad equation. This generalised formalism has been applied to calculate two-photon double-ionisation probabilities for a model helium atom exposed to laser fields. In the simulations, the state of the remaining electron was reconstructed as the first electron was absorbed. Since there was a finite probability for the second electron also to hit the absorber at some point, the system could, with a certain probability, end up in the vacuum state, i.e. the state with no particles. As this probability was seen to converge, it was interpreted as the probability of double ionisation. The validity of this approach was verified by comparing its predictions with those of a more conventional method applying a large numerical grid.
A master equation approach to double ionization of helium, S. Selstø, T. Birkeland, S. Kvaal, R. Nepstad and M. Førre, J. Phys. B: At. Mol. Opt. Phys.
44, 215003 (2011)

Entanglement or separability: factorisation of density matrix algebra (Vol. 43 No. 1)
[Image: The separable quantum states form the blue double pyramid, while the entangled states are located in the remaining tetrahedron cones.]
The present theoretical description of teleportation phenomena in sub-atomic-scale physical systems proves that the mathematical tools leave one free to choose how to separate out the constituent parts of a complex physical system when analysing its so-called quantum state, that is, the state in which the system is found when performing a measurement, being either entangled or not. A state of entanglement corresponds to a complex physical system being in a definite (pure) state while its parts, taken individually, are not. This concept of entanglement used in quantum information theory applies when a measurement in laboratory A (called Alice) depends on the definite measurement in laboratory B (called Bob), as both measurements are correlated. This phenomenon cannot be observed in larger-scale physical systems. The findings show that the entanglement or separability of a quantum state (whether its sub-states are separable or not, i.e., whether Alice and Bob are able to find independent measurements) depends on the perspective used to assess its status. This is applied to model physical systems of quantum information, including the theoretical study of teleportation, which is the transportation of a single quantum state. Other practical applications include gaining a better understanding of K-meson creation and decay in particle physics, and of the quantum Hall effect, where the electric conductivity takes quantised values.
Entanglement or separability: the choice of how to factorise the algebra of a density matrix, W. Thirring, R.A. Bertlmann, P. Köhler and H. Narnhofer, Eur. Phys. J. D 64/2, 181 (2011)
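The central claim of this last item, that whether a state counts as entangled depends on how the total system is split into subsystems, can be illustrated with a small numerical sketch. The sketch below uses the standard two-qubit partial-transpose (Peres-Horodecki) test rather than the algebraic formulation of the paper, and the Bell state and the alternative factorisation are illustrative choices, not taken from the article.

```python
import numpy as np

# Two-qubit sketch: the same state is entangled in one factorisation and
# separable in another. The partial-transpose (Peres-Horodecki) test detects
# entanglement exactly for two qubits.

def partial_transpose_B(rho):
    """Transpose the second qubit of a 4x4 two-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2)          # indices: a, b, a', b'
    return r.transpose(0, 3, 2, 1).reshape(4, 4)

def entangled(rho):
    """NPT (a negative eigenvalue after partial transpose) <=> entangled, for two qubits."""
    return np.min(np.linalg.eigvalsh(partial_transpose_B(rho))) < -1e-12

# Bell state in the "laboratory" factorisation A (x) B.
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(phi_plus, phi_plus)
print("entangled in the lab factorisation:    ", entangled(rho))       # True

# A different factorisation: a global unitary U whose first column is
# |phi_plus> defines new "virtual" subsystems. In that tensor-product
# structure the very same vector is |0>|0>, i.e. a product state.
# (An overall sign on the column does not matter.)
M = np.column_stack([phi_plus, np.eye(4)[:, 1:]])
U, _ = np.linalg.qr(M)
rho_new = U.T @ rho @ U
print("entangled in the rotated factorisation:", entangled(rho_new))   # False
```

The same density matrix is entangled with respect to the laboratory split but a product state with respect to the rotated split, which is the "choice of how to factorise" referred to in the title.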
Are quanta particles or waves?
Written by: Christoph Adami. Primary source: Spherical Harmonics

The title of this post is an age-old question, isn't it? Particle or wave? Wave or particle? Many have rightly argued that the so-called "wave-particle duality" is at the very heart of quantum weirdness, and hence of all of quantum mechanics. Einstein said it. Bohr said it. Feynman said it. Two out of those three are physics heroes of mine, so that's a majority right there.

[Image: Richard Feynman (1918-1988). Source: Wikimedia]

Feynman, when talking about what we now call the wave-particle duality, was referring to the famous "double-slit experiment". He wrote (in his famous Feynman Lectures, Chapter 37 of Volume 1, to be precise): [quotation shown as an image in the original post]

So what is Feynman talking about here? Instead of launching on a lengthy exposition of the double-slit experiment, as luck would have it I've already done that, in a blog post about the quantum eraser. That post, incidentally, was No. 6 in the "Quantum measurement" series that starts here. You don't necessarily have to have read all those posts to follow this one, but believe me, it would help a lot. At the minimum, start at No. 6 if you're not already familiar with the double-slit experiment. But you'll get a succinct introduction to the double-slit experiment below anyway.

Alright, back to quantum mechanics. Actually, step back a little bit more, to classical mechanics. In classical physics, there is no duality between waves and particles. Waves are waves, and they would never behave like particles. For example, you can't kick a wave, really, no matter what the surfer types tell you. Particles, on the other hand, do not interfere with each other as waves do. You can kick particles (kinda), and you can count them. You can't count waves. What Bohr, Einstein, and Feynman are trying to tell you is that in quantum mechanics (meaning the real world, because as I have told you before, classical mechanics is an illusion, it does not exist) the same stuff can be either particle OR wave. Not both, mind you. Here's what Einstein said about this, and to tell you the truth, this statement sounds like he's been hanging out with Bohr far too much:

[Image: A. Einstein (1879-1955). Source: Wikimedia]

I've used a picture of Einstein in 1904 here, because you've seen far too many pics of him sticking out his tongue and hair disheveled. He wasn't like that most of the time when he made his most important contributions.

Lest you think that the trouble these 20th-century physicists had with quantum mechanics is the stuff of history, think again. In 2012, a mere 5 years ago, experimenters from Germany (in the lab of the very eminent Wolfgang Schleich) claimed that they had collected evidence that a quantum system can be both particle and wave at the same time. Such an observation, if true, would run afoul of Bohr's "duality principle", which declared that a quantum system can only be one or the other, depending on the type of experiment used to examine the system. One or the other, but never both. Rest assured though, analyzing the results of the Schleich experiment in a different way reveals that all is well with complementarity after all, as was pointed out by a team at the University of Ottawa, led by the equally eminent Robert Boyd. (You can read an excellent summary of that controversy in Tom Siegfried's piece here.) What all this fighting about duality should teach you is that this is not at all a solved problem.
As recently as a few days ago, Steven Weinberg (who, full disclosure, has also been in my pantheon of physicists ever since I read his "First Three Minutes" at a very tender age) wrote about the particle-wave duality in the New York Review of Books. I hope that he reads this post, because it may alleviate some of his troubles. In this piece, entitled "The Trouble with Quantum Mechanics", Weinberg admits to being as puzzled as his predecessors Einstein, Bohr, and Feynman about the true nature of quantum physics. How can we understand, he muses, that quantum dynamics is governed by a deterministic equation (the Schrödinger equation), yet when we try to measure something, all we can muster is probabilities? "So we still have to ask", Weinberg writes, "how do probabilities get into quantum mechanics?" How indeed.

You know of course, from reading my diatribes, that this is a question I am interested in myself. I have obliquely hinted that I think I know where the probabilities are coming from (if you can find the relevant post), and that one day I'll write a detailed account of that idea (it's 3/4 written already, actually). But today is not that day. Having convinced you that the particle-wave duality is still a very hot topic in quantum physics, let me take on that particular subject first.

What I want to do in this blog post is to make you think differently about the complementarity principle. What I'm going to tell you is that you should stop thinking in terms of "particle or wave". It is a false dichotomy. It is a false dilemma because quantum systems are neither particle nor wave. Those two are classical concepts, after all. Strictly speaking, quantum systems are quantum fields. But this is not the time to delve into quantum field theory, so instead I will try to marshal the tools of quantum information theory to tell you what is really complementary in quantum measurement, what it is that you can have "only one of", and what it is that is being "traded off". You don't exchange a bit of particle for a bit of wave, this much I can tell you right here.

To do this, I have to introduce you to some very counter-intuitive quantum stuff. Now, you might argue: "All quantum stuff is counter-intuitive", and I'd have to agree with you if all your intuition is classical. What I am going to tell you is stuff that even baffles seasoned quantum physicists. I'm going to tell you about quantum experiments where the "nature" of the quantum experiment that you perform can be changed after you've already completed the experiment! Let me remind you right here that the (also very eminent) Niels Bohr tried to teach us that whether a quantum system appears as a particle or as a wave depends on the type of experiment you subject it to. Here I'm telling you that this is a bunch of hogwash, because I'll show you that when you do an experiment, you can change whether it is a "particle" or a "wave" experiment long after the data have been collected! I know you're not shocked at my dissing Bohr, as I have a habit of doing so. But I'm in good company, by the way, if you read what Feynman wrote about Bohr in his "Surely You're Joking" series.

"Alright, I bite", one of you readers exclaimed just now, "how do you retroactively change the type of experiment you make?" Glad you asked. Because now I can talk about John Archibald Wheeler.
Wheeler was not a conventional physicist: even though his early career as a nuclear physicist led to several important contributions to the Manhattan project, he was also interested in many other areas of physics. Indeed, he was a central figure in the "revival" of general relativity theory. (That theory had gone a bit out of fashion when people realized that many predictions of the theory were difficult to measure.) Wheeler co-authored what many (including myself) think is the best book on the topic: "Gravitation" (with Charles Misner and Kip Thorne). That book is often just referred to as "MTW".

[Image: John Archibald Wheeler (1911-2008). Source: University of Texas]

I never got to meet Wheeler, perhaps because I entered the field of quantum gravity too late. While Wheeler has been influential in the field of quantum information, it really was his gravity work that had the most lasting impact. He invented the terms "black hole" and "wormhole", after all. His most influential contribution to quantum information science is, undoubtedly, the "delayed choice" gedankenexperiment. Let me explain that to you.

Wheeler's thought experiment examines the question of whether a photon, say, takes on wave or particle nature before it interacts with the experiment, sensing (in a way) what kind of experiment is going to be performed on it. In the simplest version of the delayed-choice experiment, the nature of the experiment would be changed "after the photon had made up its mind" whether it was going to play the role of particle, or whether it would make an appearance as a wave. Needless to say, this is of course not how quantum mechanics works, and Wheeler was fully aware of it. His interpretation was that a photon is neither wave nor particle, and that it takes on one of the two "coats" only when it is being observed. I'm going to tell you that I agree with the first part (the photon is neither wave nor particle), but I disagree with the second part: it does not in fact take on either particle or wave nature after it is observed. It never ever takes on such a role. If you think about it, the idea that a system only "comes into being by being observed" is preposterous (however, such a thought was quite in line with some other of Wheeler's philosophies). Measurements are interactions with other systems just as much as any other interactions are: there is nothing special about measurement. This is, in essence, what I'm going to try to convince you of.

Even though the reasoning behind the delayed-choice experiment is preposterous, it has generated an enormous amount of work. Let's first look at how we may set up such an experiment. Below is an illustration of a double-slit experiment from Feynman's famous lecture, where he replaced photons by electrons shot out of an electron gun (such devices are perfectly reasonable and feasible). Note that Caltech, where Feynman spent the majority of his career, has made these lectures freely available. The particular chapter can be accessed here.

Fig. 1: An interference experiment with electrons. (Source: Feynman Lectures on Physics)

Later on, we're going to be using photons instead of electrons for the quantum system, because experiments are much easier with photon beams as opposed to electron beams. In that case, we are going to assume that any light is going to be so faint that it can't be thought of as the classical light waves that give rise to Young's interference fringes.
Then, at any point in time, there will be at most one photon between the double slit and the detector, so you have to think about single photons either taking one or the other, or both paths, through the double-slit experiment. Quantum mechanics predicts that a single electron takes both paths to create the interference pattern in the figure above at (c). Thus, it must somehow interfere with itself, which is difficult to imagine if you think of the electron as a particle. (Which, of course, it is not.) Can we force it to behave as a particle? Suppose you put a particle detector between the wall and the backstop: one behind slit 1, and one behind slit 2. If you get a "hit" on either detector, then you know which path the electron travelled. (You can do this experiment without actually removing the electron, so that you can still get patterns on the screen.) When you obtain this "which-path" information, the interference pattern disappears: you've forced the electron to behave as a particle.

Wheeler's idea was this: suppose the distance between the wall and the backstop is very, very large. If you do not put the contraption that will measure which path the electron took (the "which-path detector") into the experiment, the electron would have no choice but to go along both paths, ready to interfere with itself and create the interference pattern on the screen. But suppose you bring in the "which-path" apparatus after the electron has passed the slit, but before it is going to hit the screen. Is the electron wave function that is on the "other path" going to "change its mind", or go backwards? What would happen? The thought experiment very nicely illustrates how preposterous the idea is that the experiment itself determines "what the quantum system is", as changing the experiment mid-flight cannot possibly change the nature of the electron.

The experiment I'm going to describe to you (the delayed-choice quantum eraser experiment) has in fact been carried out several times now, and drives Wheeler's idea to the extreme. The choice of experiment (insert the "which-path" detector or not) can be made after the electron has hit the screen! If you are a reader for whom this is immediately obvious, then congratulations (and consider a career in quantum physics, if this is not already your career). It is indeed completely obvious if you understand quantum mechanics, but let me walk you through it anyway. First, if it were the experiment that determines the nature of the quantum system (particle or wave), how could you change the experiment after it has already occurred? That this is possible is also due to the peculiarities of quantum physics, and it is also the hardest part to explain. I'll do it with photons rather than electrons, as this is the experiment that was carried out, and it is also the description I used in the paper that I'm really writing about. You knew this was coming, didn't you?

We can do double-slit experiments with photons just as with electrons: we just have to turn down the intensity of the light such that individual photons can be registered on a phosphorescent screen. When you see the screen light up at a particular spot (or, in more modern times, a pixel on a CCD detector lights up), you interpret it as a photon having hit there. Often, the double slit is replaced by a Mach-Zehnder interferometer, but you shouldn't worry about such technicalities: you can in fact use either.
To pull off this feat of changing the experiment after the fact, you have to create an entangled pair of photons first. You already know what an entangled pair (a "Bell state") is, because I wrote about it several times: for example in the context of black holes here, and in the context of quantum teleportation and superdense coding here. This pair of photons is also sometimes called an Einstein-Podolsky-Rosen (EPR) pair, because that trio first described a similar entangled state in a very famous paper in 1935. Let's create such a pair by entangling the "polarization" degree of freedom of the photon. This is the part that is a bit more complicated: to understand it, you have to understand polarization. Every photon can come in two different polarization states, but what these states are depends on how you decide to measure them. This will be crucial, because this is in fact how you change the measurement after the fact. The thing to know about an entangled pair is that it is in a superposition of those two states.

Suppose we use as basis for the photon polarization the "horizontal/vertical" basis. That means that if a photon is polarized horizontally, and you put a filter in front of it that only allows vertical polarization to go through, then out comes nothing. Polarization is, if you will, a photon's way of wiggling. Below is a picture which shows the photon wiggling in the "vertical" and in the "horizontal" way. But photons can also wiggle in the "circular-left" and "circular-right" way. In fact, a photon can wiggle in an infinite number of "opposing ways", and these are related to each other by a unitary transformation.

Fig. 2: One way of depicting photon polarization.

The way a photon is polarized can be changed by an optical element (a "wave plate"), and this ability will be key in the experiment. Suppose we begin with a pair of photons A and B in a Bell state, written in terms of the horizontal $|h\rangle$ and vertical $|v\rangle$ polarization eigenstates:

$$|\Psi\rangle_{AB} = \frac{1}{\sqrt{2}}\left(|h\rangle_A |v\rangle_B + |v\rangle_A |h\rangle_B\right) \qquad (1)$$

You notice that neither of the photons has a defined state, but if I measure one of them (say A) and find that my detector says it is in an $|h\rangle$ state, then I can be sure that measuring B will give you "v", no matter whether you do the measurement now, or a year later with a detector placed a light year away. This is precisely what Einstein could not stomach, calling this mysterious bond "spooky action at a distance", but a careful analysis reveals that there is no "action" at all: signals cannot be sent using this bond. But here's the thing: I can measure photon B either in the $h,v$ coordinate system, or in another one. This will become crucial, so keep this in mind. But for the moment let's forget that a "copy" of photon A (the entangled partner) is flying out there, possibly to a measurement device a light year away. Actually, there is nothing a light year away from us, so let's say we are far in the future and the detector is on Proxima Centauri, about four and a quarter light years away. It'll just be a longer experiment.

Photon A now goes through a double slit, just as the electrons in Figure 1. Now we'll do the "are you a particle or a wave" measurement. We do this by putting so-called "quarter-wave plates" in the path of the photons. When you do this, you entangle the polarization of the photon with the spatial degree of freedom (namely "left slit" or "right slit"). Once you've done this, you only have to measure the polarization of photon A to know whether it went through the left or right slit.
In a way, you’ve tagged the photon’s path by the polarization. After doing this, you will lose the interference pattern. You can either have an interference pattern (and we say that the photon wavefunction is “coherent”), or you can have “which-path” information, which makes the wavefunction incoherent. Or so people thought for a long time. It turns out that you can also have a a little bit of both, but you can’t have both full which-path information, and full coherence: there is a tradeoff. And that tradeoff depends on the angle by which you rotate the polarization basis. In the description above, we used “quarter-wave” plates, which give you full information, and zero coherence. Choose something other than 45 degrees (that’s the quarter wave), and you can get a little bit of both. It turns out that there is a simple relationship that quantifies this tradeoff in terms of the angle you choose to do the tagging with. Let’s call this angle ϕ. We can then define the “distinguishability” D and the “visibility” V, where D2 measures how well you can distinguish the photon paths (a measure of which-path information), while V2 quantifies the visibility of the interference fringes (a measure of the coherence of the wavefunction). A celebrated inequality (due to Greenberger and Yasin [1]) states that D2+V21 (2) Now, according to what I just wrote, choosing the angle of the wave plate when performing the which-path entangling operation chooses the experiment for you: Set it at 0 degree and you do not entangle at all, so that no which-path information is obtained (then D2=0 and V2=1). Set it at ϕ=π/4, and you get perfect which-path information, and no visibility. How can you choose the experiment after the fact, when you have to choose the angle when setting up the experiment? How? So the following is what makes quantum mechanics so beautiful. You can actually do this because when I described the experiment to you, I did not (it turns out) use an entangled EPR pair as the input, I used a photon in a defined polarization state, such as |h. I did not tell you about this because it would have confused you. I needed you to understand how to extract which-path information first, and how doing it gradually will gradually destroy coherence. Now take a deep breath, and read very slowly. If the input to the two-slits (and therefore to the “which-path” detector that entangles polarization and path) is the EPR state Eq. (1), you actually do not get any which-path information using the quarter-wave plate. This is because when the photon “comes in”, it is not in a defined polarization state. If it was not in a defined state, you extract nothing. So for that setup, V2=1 even though ϕ=π/4. Now one more deep breath after you digested this bit. Maybe take two, just to be safe. Whether the state that comes in to the two slits is indeed Eq. (1) is up to the person at Proxima Centauri, a year after that data was recorded on the CCD screen on Earth. This is because of what is |h and what is |v is determined by how you measure it. A quantum system does not have a state until you say how you measure it. It will be in the h,v basis if that is the basis of your measurement device. It will be in the R,L (right-circular, left-circular) basis, if that is instead what you will choose to examine it with. Or it could be anything in between. I wrote about this at length in the blog post about the collapse of the wavefunction, within the “On quantum measurement” series. 
Please go back to that post if the two breaths did not help. There is also an intriguing parallel to how Shannon entropy is not defined until you determine how you will be measuring it, as I wrote about in "What is Information, Part 1". The deeper reason for this is that all of physics is about the relative state of measurement devices. Mark my words.

The reason our person at Proxima Centauri handling photon B actually prepares the state is because photon A is not "projected" at any point of the experiment. This could be done, of course, but that is a different experiment. So now we can see how the delayed-choice experiment works: if the Proxima Centauri person (PCP, for short) measures at an angle $\theta = 0$ with respect to the preparation Eq. (1), then the photon is in a defined state (no matter whether the outcome is h or v) and only then do you actually extract which-path information. In that case, the visibility $V^2 = 0$. If PCP measures at $\theta = \pi/4$, on the other hand, the entanglement operation (the "tagging") does not work: it is as if the measurement by PCP "erased" the tagging, and $V^2 = 1$ instead. So indeed, a measurement far in the future (well, here more than four years in the future) will determine what kind of an experiment is done on the photon. The event far in the future will determine whether the photon appeared as a particle, or a wave. Weird, right?

What is that you ask? How can an event far in the future affect the data that are stored on a device far in the past? I didn't say it did, did I? Of course it does not. The truth is much more magical. Without going into all the details here (but which you can read about in any paper about the Bell-state quantum eraser, or indeed my own paper referenced below), the result of the measurement by PCP in the future contains crucial information about how to decode the data in the past, information that is akin to the key in a cryptography procedure. Yes, cryptographic. That is indeed what I wrote. You will only be able to decipher $D^2$ and $V^2$ when the measurement in the future (which is really a state preparation in the past) is available to you. That is the true magic of quantum mechanics. Without it, you won't be able to see any fringes in the data. But with it, you may be able to reconstruct them to full visibility, if that is how the photon was measured at Proxima Centauri.

How do I know any of this is true? Because we (my student Jennifer Glick and I) analyzed the entire experiment in terms of quantum information theory, and ultimately were able to write down the equations that describe discrimination and visibility (coherence) entirely in terms of entropies and information, in [2] (Jennifer did all the calculations and wrote the first draft of the manuscript). Clearly, "which-path information" should have an obvious information-theoretic rendering, but it turns out that this is actually a little bit tricky because it really is a "conditional information". But it turns out that "coherence" (or "visibility") can also be measured information-theoretically. And lo and behold, the two are related. In our description, they are related by a common information-theoretic identity: the chain rule for entropies. According to that identity, the information $I$ and coherence $C$ (as functions of the PCP angle $\theta$) are related so that

$$I(\theta) + C(\theta) = 1 \qquad (3)$$
In a simple qubit model, the information and coherence take on extremely simple forms, namely I(θ) = H[sin²(θ + π/4)] with C(θ) = 1 − H[sin²(θ + π/4)], where H[p] is the standard Shannon entropy function H[p] = −p log(p) − (1 − p) log(1 − p). And take a look at how our information-theoretic quantities compare to the quantum optical measures of distinguishability and visibility in Fig. 3 below. It almost looks like distinguishability and visibility (coherence) should have been defined information-theoretically from the outset, doesn’t it?

Fig. 3: Top: Which-path information (solid line) and coherence (dashed line) in terms of quantum information theory. Bottom: Distinguishability (solid) and visibility (dashed) in quantum optics. Q refers to the quantum state at the beam-splitter, and DA and DB refer to polarization detectors. From [2].

So what does all this teach us about quantum mechanics in the end (besides, of course, that quantum mechanics is awesome)? We have learned at least two things. Quantum systems are not either particle or wave. They are in fact neither, because both concepts are classical in nature. This, to some extent, I stipulate we knew already. Wheeler knew it. (Bohr, I contend, not so much). But what I’ve shown you is that quantum systems don’t “change their colors” after measurement either, as Wheeler had advocated. They remain “neither”, even when we think we pinned them down, because what I’ve shown you is that you can have them take on this coat or that, or any in between, years after the ink has dried (I mean, after the data were recorded). They (the photons, electrons, etc.) are not one or the other. They appear to you the way you choose to see them, when you interrogate a quantum state with classical devices. Those devices cannot reveal to you the reality of the quantum state, because the devices are classical. Don’t hate them because of their limitations. Instead, use them wisely, because what I just showed you is that, if used in a clever manner, they enable you to learn something about the true nature of quantum physics after all. As, for example, the experiment in [3] does.

[1] D.M. Greenberger and A. Yasin, “Simultaneous wave and particle knowledge in a neutron interferometer.” Physics Letters A 128 (1988) 391-394.

[2] J.R. Glick and C. Adami, “Quantum information theory of the Bell-state quantum eraser.” Phys. Rev. A 95 (2017) 012105. Full text also on arXiv. Note: Jennifer Glick is first author on this paper because she performed all calculations in it and wrote the first draft.

[3] Y.H. Kim, R. Yu, S.P. Kulik, Y.H. Shih, and M.O. Scully, “Delayed ‘choice’ quantum eraser.” Phys. Rev. Lett. 84 (2000) 1-5.
Amplitudes and statistics

When re-reading Feynman’s ‘explanation’ of Bose-Einstein versus Fermi-Dirac statistics (Lectures, Vol. III, Chapter 4), and my own March 2014 post summarizing his argument, I suddenly felt his approach raises as many questions as it answers. So I thought it would be good to re-visit it, which is what I’ll do here. Before you continue reading, however, I should warn you: I am not sure I’ll manage to do a better job now, as compared to a few months ago. But let me give it a try.

Setting up the experiment

The (thought) experiment is simple enough: what’s being analyzed is the (theoretical) behavior of two particles, referred to as particle a and particle b respectively, that are being scattered into two detectors, referred to as 1 and 2. That can happen in two ways, as depicted below: situation (a) and situation (b). [And, yes, it’s a bit confusing to use the same letters a and b here, but just note the brackets and you’ll be fine.] It’s an elastic scattering and it’s seen in the center-of-mass reference frame in order to ensure we can analyze it using just one variable, θ, for the angle of incidence. So there is no interaction between those two particles in a quantum-mechanical sense: there is no exchange of spin (spin flipping) nor is there any exchange of energy–like in Compton scattering, in which a photon gives some of its energy to an electron, resulting in a Compton shift (i.e. the wavelength of the scattered photon is different than that of the incoming photon). No, it’s just what it is: two particles deflecting each other. […] Well… Maybe. Let’s fully develop the argument to see what’s going on.

[Figures: identical particles, situation (a) and situation (b)]

First, the analysis is done for two non-identical particles, say an alpha particle (i.e. a helium nucleus) and then some other nucleus (e.g. oxygen, carbon, beryllium,…). Because of the elasticity of the ‘collision’, the possible outcomes of the experiment are binary: if particle a gets into detector 1, it means particle b will be picked up by detector 2, and vice versa. The first situation (particle a gets into detector 1 and particle b goes into detector 2) is depicted in (a), i.e. the illustration on the left above, while the opposite situation, exchanging the role of the particles, is depicted in (b), i.e. the illustration on the right-hand side. So these two ‘ways’ are two different possibilities which are distinguishable not only in principle but also in practice, for non-identical particles that is (just imagine a detector which can distinguish helium from oxygen, or whatever other substance the other particle is). Therefore, strictly following the rules of quantum mechanics, we should add the probabilities of both events to arrive at the total probability of some particle (and with ‘some’, I mean particle a or particle b) ending up in some detector (again, with ‘some’ detector, I mean detector 1 or detector 2).

Now, this is where Feynman’s explanation becomes somewhat tricky. The whole event (i.e. some particle ending up in some detector) is being reduced to two mutually exclusive possibilities that are both being described by the same (complex-valued) wave function f, which has that angle of incidence as its argument. To be precise: the angle of incidence is θ for the first possibility and it’s π–θ for the second possibility. That being said, it is obvious, even if Feynman doesn’t mention it, that both possibilities actually represent a combination of two separate things themselves:
1. For situation (a), we have particle a going to detector 1 and particle b going to detector 2. Using Dirac’s so-called bra-ket notation, we should write 〈1|a〉〈2|b〉 = f(θ), with f(θ) a probability amplitude, which should yield a probability when taking its absolute square: P(θ) = |f(θ)|².

2. For situation (b), we have particle b going to detector 1 and particle a going to 2, so we have 〈1|b〉〈2|a〉, which Feynman equates with f(π–θ), so we write 〈1|b〉〈2|a〉 = 〈2|a〉〈1|b〉 = f(π–θ).

Now, Feynman doesn’t dwell on this–not at all, really–but this casual assumption–i.e. the assumption that situation (b) can be represented by using the same wave function f–merits some more reflection. As said, Feynman is very brief on it: he just says situation (b) is the same situation as (a), but with detector 1 and detector 2 being switched (so we exchange the role of the detectors, I’d say). Hence, the relevant angle is π–θ and, of course, it’s a center-of-mass view again so if a goes to 2, then b has to go to 1. There’s no Third Way here.

In short, a priori it would seem to be very obvious indeed to associate only one wave function (i.e. that (complex-valued) f(θ) function) with the two possibilities: that wave function f yields a probability amplitude for θ and, hence, it should also yield some (other) probability amplitude for π–θ, i.e. for the ‘other’ angle. So we have two probability amplitudes but one wave function only. You’ll say: Of course! What’s the problem? Why are you being fussy? Well… I think these assumptions about f(θ) and f(π–θ) representing the underlying probability amplitudes are all nice and fine (and, yes, they are very reasonable indeed), but I also think we should take them for what they are at this moment: assumptions.

Huh? Yes. At this point, I would like to draw your attention to the fact that the only thing we can measure are real-valued probabilities. Indeed, when we do this experiment like a zillion times, it will give us some real number P for the probability that a goes to 1 and b goes to 2 (let me denote this number as P(θ) = Pa→1 and b→2), and then, when we change the angle of incidence by switching detector 1 and 2, it will also give us some (other) real number for the probability that a goes to 2 and b goes to 1 (i.e. a number which we can denote as P(π–θ) = Pa→2 and b→1). Now, while it would seem to be very reasonable that the underlying probability amplitudes are the same, we should be honest with ourselves and admit that the probability amplitudes are something we cannot directly measure.

At this point, let me quickly say something about Dirac’s bra-ket notation, just in case you haven’t heard about it yet. As Feynman notes, we have to get away from thinking too much in terms of wave functions traveling through space because, in quantum mechanics, all sorts of stuff can happen (e.g. spin flipping) and not all of it can be analyzed in terms of interfering probability amplitudes. Hence, it’s often more useful to think in terms of a system being in some state and then transitioning to some other state, and that’s why that bra-ket notation is so helpful. We have to read these bra-kets from right to left: the part on the right, e.g. |a〉, is the ket and, in this case, that ket just says that we’re looking at some particle referred to as particle a, while the part on the left, i.e. 〈1|, is the bra, i.e. a shorthand for particle a having arrived at detector 1.
If we’d want to be complete, we should write: 〈1|a〉 = 〈particle a arrives at detector 1|particle a leaves its source〉.

Note that 〈1|a〉 is some complex-valued number (i.e. a probability amplitude) and so we multiply it here with some other complex number, 〈2|b〉, because it’s two things happening together. As said, don’t worry too much about it. Strictly speaking, we don’t need wave functions and/or probability amplitudes to analyze this situation because there is no interaction in the quantum-mechanical sense: we’ve got a scattering process indeed (implying some randomness in where those particles end up, as opposed to what we’d have in a classical analysis of two billiard balls colliding), but we do not have any interference between wave functions (probability amplitudes) here. We’re just introducing the wave function f because we want to illustrate the difference between this situation (i.e. the scattering of non-identical particles) and what we’d have if we’d be looking at identical particles being scattered.

At this point, I should also note that this bra-ket notation is more in line with Feynman’s own so-called path integral formulation of quantum mechanics, which is actually implicit in his line of argument: rather than thinking about the wave function as representing the (complex) amplitude of some particle to be at point x in space at point t in time, we think about the amplitude as something that’s associated with a path, i.e. one of the possible itineraries from the source (its origin) to the detector (its destination). That explains why this f(θ) function doesn’t mention the position (x) and time (t) variables. What x and t variables would we use anyway? Well… I don’t know. It’s true the position of the detectors is fully determined by θ, so we don’t need to associate any x or t with them. Hence, if we’d be thinking about the space-time variables, then we should be talking the position in space and time of both particle a and particle b. Indeed, it’s easy to see that only a slight change in the horizontal (x) or vertical (y) position of either particle would ensure that both particles do not end up in the detectors. However, as mentioned above, Feynman doesn’t even mention this. Hence, we must assume that any randomness in any x or t variable is captured by that wave function f, which explains why this is actually not a classical analysis: so, in short, we do not have two billiard balls colliding here.

Hmm… You’ll say I am a nitpicker. You’ll say that, of course, any uncertainty is indeed being incorporated in the fact that we represent what’s going on by a wave function f which we cannot observe directly but whose absolute square represents a probability (or, to use precise statistical terminology, a probability density), which we can measure: P = |f(θ)|² = f(θ)·f*(θ), with f* the complex conjugate of the complex number f. So… […] What? Well… Nothing. You’re right. This thought experiment describes a classical situation (like two billiard balls colliding) and then it doesn’t, because we cannot predict the outcome (i.e. we can’t say where the two billiard balls are going to end up): we can only describe the likely outcome in terms of probabilities Pa→1 and b→2 = |f(θ)|² and Pa→2 and b→1 = |f(π–θ)|². Of course, needless to say, the normalization condition should apply: if we add all probabilities over all angles, then we should get 1, so we can write: ∫|f(θ)|²dθ = ∫f(θ)·f*(θ)dθ = 1. So that’s it, then? No. Let this sink in for a while. I’ll come back to it.
Let me first make a bit of a detour to illustrate what this thought experiment is supposed to yield, and that’s a more intuitive explanation of Bose-Einstein statistics and Fermi-Dirac statistics, which we’ll get out of the experiment above if we repeat it using identical particles. So we’ll introduce the terms Bose-Einstein statistics and Fermi-Dirac statistics. Hence, there should also be some term for the reference situation described above, i.e. a situation in which non-identical particles are ‘interacting’, so to say, but then with no interference between their wave functions. So, when everything is said and done, it’s a term we should associate with classical mechanics. It’s called Maxwell-Boltzmann statistics.

Huh? Why would we need ‘statistics’ here? Well… We can imagine many particles engaging like this–just colliding elastically and, thereby, interacting in a classical sense, even if we don’t know where exactly they’re going to end up, because of uncertainties in initial positions and what have you. In fact, you already know what this is about: it’s the behavior of particles as described by the kinetic theory of gases (often referred to as statistical mechanics) which, among other things, yields a very elegant function for the distribution of the velocities of gas molecules, as shown below for various gases (helium, neon, argon and xenon) at one specific temperature (25º C), i.e. the graph on the left-hand side, or for the same gas (oxygen) at different temperatures (–100º C, 20º C and 600º C), i.e. the graph on the right-hand side. Now, all these density functions and what have you are, indeed, referred to as Maxwell-Boltzmann statistics, by physicists and mathematicians that is (you know they always need some special term in order to make sure other people (i.e. people like you and me, I guess) have trouble understanding them).

[Graphs: Maxwell-Boltzmann speed distributions for various gases at 25º C, and for oxygen at various temperatures]

In fact, we get the same density function for other properties of the molecules, such as their momentum and their total energy. It’s worth elaborating on this, I think, because I’ll later compare with Bose-Einstein and Fermi-Dirac statistics.

Maxwell-Boltzmann statistics

Kinetic gas theory yields a very simple and beautiful theorem. It’s the following: in a gas that’s in thermal equilibrium (or just in equilibrium, if you want), the probability (P) of finding a molecule with energy E is proportional to e^(–E/kT), so we have:

P ∝ e^(–E/kT)

Now that’s a simple function, you may think. If we treat E as just a continuous variable, and T as some constant indeed–hence, if we just treat (the probability) P as a function of (the energy) E–then we get a function like the one below (with the blue, red and green curves using three different values for T). So how do we relate that to the nice bell-shaped curves above? The very simple graphs above seem to indicate the probability is greatest for E = 0, and then just goes down, instead of going up initially to reach some maximum around some average value and then drop down again. Well… The fallacy here, of course, is that the constant of proportionality is itself dependent on the temperature. To be precise, the probability density function for velocities is given by:

[Formula: the Maxwell-Boltzmann probability density function for velocities]

The function for energy is similar. To be precise, we have the following function:

[Formula: the Maxwell-Boltzmann probability density function for energy]

This (and the velocity function too) is a so-called chi-squared distribution, and ϵ is the energy per degree of freedom in the system.
Now these functions will give you such nice bell-shaped curves, and so all is alright. In any case, don’t worry too much about it. I have to get back to that story of the two particles and the two detectors. However, before I do so, let me jot down two (or three) more formulas. The first one is the formula for the expected number 〈Ni〉 of particles occupying energy level εi (and the brackets here, 〈Ni〉, have nothing to do with the bra-ket notation mentioned above: it’s just a general notation for some expected value):

[Formula: 〈Ni〉 for Maxwell-Boltzmann statistics]

This formula has the same shape as the ones above, but we brought the exponential function down, into the denominator, so the minus sign disappears. And then we also simplified it by introducing that gi factor, which I won’t explain here, because the only reason why I wanted to jot this down is to allow you to compare this formula with the equivalent formula when (a) Fermi-Dirac and (b) Bose-Einstein statistics apply:

[Formula: 〈Ni〉 for Bose-Einstein and Fermi-Dirac statistics]

Do you see the difference? The only change in the formula is the ±1 term in the denominator: we have a minus one (–1) for Bose-Einstein statistics and a plus one (+1) for Fermi-Dirac statistics indeed. That’s all. That’s the difference with Maxwell-Boltzmann statistics. Huh? Yes. Think about it, but don’t worry too much. Just make a mental note of it, as it will be handy when you’d be exploring related articles. [And, of course, please don’t think I am trivializing the difference between Maxwell-Boltzmann, Bose-Einstein and Fermi-Dirac statistics here: that ±1 term in the denominator is, obviously, a very important difference, as evidenced by the consequences of formulas like the one above: just think about the crowding-in effect in lasers as opposed to the Pauli exclusion principle, for example. :-)]

Setting up the experiment (continued)

Let’s get back to our experiment. As mentioned above, we don’t really need probability amplitudes in the classical world: ordinary probabilities, taking into account uncertainties about initial conditions only, will do. Indeed, there’s a limit to the precision with which we can measure the position in space and time of any particle in the classical world as well and, hence, we’d expect some randomness (as captured in the scattering phenomenon) but, as mentioned above, ordinary probabilities would do to capture that. Nevertheless, we did associate probability amplitudes with the events described above in order to illustrate the difference with the quantum-mechanical world. More specifically, we distinguished:

1. Situation (a): particle a goes to detector 1 and b goes to 2, versus
2. Situation (b): particle a goes to 2 and b goes to 1.

In our bra-ket notation:

1. 〈1|a〉〈2|b〉 = f(θ), and
2. 〈1|b〉〈2|a〉 = f(π–θ).

The f(θ) function is a quantum-mechanical wave function. As mentioned above, while we’d expect to see some space (x) and time (t) variables in it, these are, apparently, already captured by the θ variable. What about f(π–θ)? Well… As mentioned above also, that’s just the same function as f(θ) but using the angle π–θ as the argument. So, the following remark is probably too trivial to note but let me do it anyway (to make sure you understand what we’re modeling here really): while it’s the same function f, the values f(θ) and f(π–θ) are, of course, not necessarily equal and, hence, the corresponding probabilities are also not necessarily the same. Indeed, some angles of scattering may be more likely than others.
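If you want to play with that ±1 term, here is a small numerical sketch of the three occupation-number formulas. I am writing them with a degeneracy factor g and a chemical potential μ, which is one common convention (the exact notation in the formulas referred to above may differ), so treat the parameter names as illustrative assumptions rather than the post’s own.

```python
import numpy as np

kB = 1.380649e-23          # Boltzmann constant (J/K)

def occupation(eps, T, mu=0.0, g=1.0, kind="MB"):
    """Expected number <N> at energy eps: MB, or BE (-1) / FD (+1) in the denominator."""
    x = np.exp((eps - mu) / (kB * T))
    if kind == "MB":
        return g / x
    if kind == "BE":
        return g / (x - 1.0)       # Bose-Einstein: minus one
    if kind == "FD":
        return g / (x + 1.0)       # Fermi-Dirac: plus one
    raise ValueError(kind)

T = 300.0
for eps_over_kT in (0.5, 5.0):
    eps = eps_over_kT * kB * T
    print(eps_over_kT, [round(occupation(eps, T, kind=k), 4) for k in ("MB", "BE", "FD")])
# At eps ~ 0.5 kT the three formulas differ wildly (crowding-in vs. exclusion);
# at eps ~ 5 kT the exponential dominates and the +/-1 hardly matters anymore.
```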
However, note that we assume that the function f itself is exactly the same for the two situations (a) and (b), as evidenced by that normalization condition we assume to be respected: if we add all probabilities over all angles, then we should get 1, so ∫|f(θ)|²dθ = ∫f(θ)·f*(θ)dθ = 1. So far so good, you’ll say. However, let me ask the same critical question once again: why would we use the same wave function f for the second situation?

Huh? You’ll say: why wouldn’t we? Well… Think about it. Again, how do we find that f(θ) function? The assumption here is that we just do the experiment a zillion times while varying the angle θ and, hence, that we’ll find some average corresponding to P(θ), i.e. the probability. Now, the next step then is to equate that average value to |f(θ)|², obviously, because we have this quantum-mechanical theory saying probabilities are the absolute square of probability amplitudes. And, so… Well… Yes. We then just take the square root of the P function to find the f(θ) function, isn’t it? Well… No. That’s where Feynman is not very accurate when it comes to spelling out all of the assumptions underpinning this thought experiment. We should obviously watch out here, as there’s all kinds of complications when you do something like that. To a large extent (perhaps all of it), the complications are mathematical only.

First, note that any number (real or complex, but note that |f(θ)|² is a real number) has two distinct real square roots: a positive and a negative one: x = ±√x². Secondly, we should also note that, if f(θ) is a regular complex-valued wave function of x and t and θ (and with ‘regular’, we mean, of course, that it’s some solution to a Schrödinger (or Schrödinger-like) equation), then we can multiply it with some random factor shifting its phase Θ (usually written as Θ = kx–ωt+α) and the square of its absolute value (i.e. its squared norm) will still yield the same value. In mathematical terms, such a factor is just a complex number with a modulus (or length or norm–whatever terminology you prefer) equal to one, which we can write as a complex exponential: e^(iα), for example. So we should note that, from a mathematical point of view, any function e^(iα)f(θ) will yield the same probabilities as f(θ). Indeed,

|e^(iα)f(θ)|² = (|e^(iα)||f(θ)|)² = |e^(iα)|²·|f(θ)|² = 1²·|f(θ)|² = |f(θ)|²

Likewise, while we assume that this function f(π–θ) is the same function f as that f(θ) function, from a mathematical point of view, the function e^(iβ)f(π–θ) would do just as well, because its absolute square yields the very same (real) probability |f(π–θ)|². So the question as to what wave function we should take for the probability amplitude is not as easy to answer as you may think. Huh? So what function should we take then? Well… We don’t know. Fortunately, it doesn’t matter, for non-identical particles that is. Indeed, when analyzing the scattering of non-identical particles, we’re interested in the probabilities only and we can calculate the total probability of particle a ending up in detector 1 or 2 (and, hence, particle b ending up in detector 2 or 1) as the following sum: |e^(iα)f(θ)|² + |e^(iβ)f(π–θ)|² = |f(θ)|² + |f(π–θ)|². In other words, for non-identical particles, these phase factors (e^(iα) or e^(iβ)) don’t matter and we can just forget about them. However, and that’s the crux of the matter really, we should mention them, of course, in case we’d have to add the probability amplitudes, which is exactly what we’ll have to do when we’re looking at identical particles, of course. In fact, in that case (i.e.
when these phase factors e^(iα) and e^(iβ) will actually matter), you should note that what matters really is the phase difference, so we could replace α and β with some δ (which is what we’ll do below). However, let’s not put the cart before the horse and conclude our analysis of what’s going on when we’re considering non-identical particles: in that case, this phase difference doesn’t matter. And the remark about the positive and negative square root doesn’t matter either. In fact, if you want, you can subsume it under the phase difference story by writing e^(iα) as e^(iα) = ±1. To be more explicit: we could say that –f(θ) is the probability amplitude, as |–f(θ)|² is also equal to that very same real number |f(θ)|². OK. Done.

Bose-Einstein and Fermi-Dirac statistics

As I mentioned above, the story becomes an entirely different one when we’re doing the same experiment with identical particles. At this point, Feynman’s argument becomes rather fuzzy and, in my humble opinion, that’s because he refused to be very explicit about all of those implicit assumptions I mentioned above. What I can make of it, is the following:

1. We know that we’ll have to add probability amplitudes, instead of probabilities, because we’re talking one event that can happen in two indistinguishable ways. Indeed, for non-identical particles, we can, in principle (and in practice) distinguish situation (a) and (b) – and so that’s why we only have to add some real-valued numbers representing probabilities – but we cannot do that for identical particles.

2. Situation (a) is still being described by some probability amplitude f(θ). We don’t know what function exactly, but we assume there is some unique wave function f(θ) out there that accurately describes the probability amplitude of particle a going to 1 (and, hence, particle b going to 2), even if we can’t tell which is a and which is b. What about the phase factor? Well… We just assume we’ve chosen our t such that α = 0. In short, the assumption is that situation (a) is represented by some probability amplitude (or wave function, if you prefer that term) f(θ).

3. However, a (or some) particle (i.e. particle a or particle b) ending up in a (some) detector (i.e. detector 1 or detector 2) may come about in two ways that cannot be distinguished one from the other. One is the way described above, by that wave function f(θ). The other way is by exchanging the role of the two particles. Now, it would seem logical to associate the amplitude f(π–θ) with the second way. But we’re in the quantum-mechanical world now. There’s uncertainty, in position, in momentum, in energy, in time, whatever. So we can’t be sure about the phase. That being said, the wave function will still have the same functional form, we must assume, as it should yield the same probability when squaring. To account for that, we will allow for a phase factor, and we know it will be important when adding the amplitudes. So, while the probability for the second way (i.e. the square of its absolute value) should be the same, its probability amplitude does not necessarily have to be the same: we have to allow for positive and negative roots or, more generally, a possible phase shift. Hence, we’ll write the probability amplitude as e^(iδ)f(π–θ) for the second way. [Why do I use δ instead of β? Well… Again: note that it’s the phase difference that matters. From a mathematical point of view, it’s the same as inserting an e^(iβ) factor: δ can take on any value.]

4. Now it’s time for the Big Trick.
Nature doesn’t care about our labeling of particles. If we have to multiply the wave function (i.e. f(π–θ), or f(θ)–it’s the same: we’re talking a complex-valued function of some variable (i.e. the angle θ) here) with a phase factor e^(iδ) when exchanging the roles of the particles (or, what amounts to the same, exchanging the role of the detectors), we should get back to our point of departure (i.e. no exchange of particles, or detectors) when doing that two times in a row, isn’t it? So we exchange the role of particle a and b in this analysis (or the role of the detectors), and then we exchange their roles once again; then there’s no exchange of roles really and we’re back at the original situation. So we must have e^(iδ)e^(iδ)f(θ) = f(θ) (and e^(iδ)e^(iδ)f(π–θ) = f(π–θ) of course, which is exactly the same statement from a mathematical point of view).

5. However, that means (e^(iδ))² = +1, which, in turn, implies that e^(iδ) is plus or minus one: e^(iδ) = ±1. So that means the phase difference δ must be equal to 0 or π (or –π, which is the same as +π). In practical terms, that means we have two ways of combining probability amplitudes for identical particles: we either add them or, else, we subtract them. Both cases exist in reality, and lead to the dichotomy between Bose and Fermi particles:

1. For Bose particles, we find the total probability amplitude for this scattering event by adding the two individual amplitudes: f(θ) + f(π–θ).
2. For Fermi particles, we find the total probability amplitude for this scattering event by subtracting the two individual amplitudes: f(θ) – f(π–θ).

As compared to the probability for non-identical particles which, you’ll remember, was equal to |f(θ)|² + |f(π–θ)|², we have the following Bose-Einstein and Fermi-Dirac statistics:

1. For Bose particles: the combined probability is equal to |f(θ) + f(π–θ)|². For example, if θ is 90°, then we have a scattering probability that is exactly twice the probability for non-identical particles. Indeed, if θ is 90°, then f(θ) = f(π–θ), and then we have |f(π/2) + f(π/2)|² = |2f(π/2)|² = 4|f(π/2)|². Now, that’s two times |f(π/2)|² + |f(π/2)|² = 2|f(π/2)|² indeed.

2. For Fermi particles (e.g. electrons), we have a combined probability equal to |f(θ) – f(π–θ)|². Again, if θ is 90°, f(θ) = f(π–θ), and so it would mean that we have a combined probability which is equal to zero! Now, that‘s a strange result, isn’t it? It is. Fortunately, the strange result has to be modified because electrons will also have spin and, hence, in half of the cases, the two electrons will actually not be identical but have opposite spin. That changes the analysis substantially (see Feynman’s Lectures, III-3-12). To be precise, if we take the spin factor into account, we’ll find a total probability (for θ = 90°) equal to |f(π/2)|², so that’s half of the probability for non-identical particles.

Hmm… You’ll say: Now that was a complicated story! I fully agree. Frankly, I must admit I feel like I still don’t quite ‘get‘ the story with that phase shift e^(iδ), in an intuitive way that is (and so that’s the reason for going through the trouble of writing out this post). While I think it makes somewhat more sense now (I mean, more than when I wrote a post on this in March), I still feel I’ve only brought some of the implicit assumptions to the fore. In essence, what we’ve got here is a mathematical dichotomy (or a mathematical possibility if you want) corresponding to what turns out to be an actual dichotomy in Nature: in quantum mechanics, particles are either bosons or fermions.
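The θ = 90° result is easy to check numerically. The trial amplitude below is an arbitrary smooth complex function of my own choosing; any f will do for this particular check, because only the fact that f(θ) = f(π–θ) at θ = π/2 matters.

```python
import numpy as np

def f(theta):
    """Arbitrary trial amplitude (purely illustrative)."""
    return (1 + 0.3 * np.cos(theta)) * np.exp(1j * theta / 2)

theta = np.pi / 2                                         # 90 degree scattering
p_distinct = abs(f(theta))**2 + abs(f(np.pi - theta))**2  # non-identical: add probabilities
p_bose     = abs(f(theta) + f(np.pi - theta))**2          # bosons: add the amplitudes
p_fermi    = abs(f(theta) - f(np.pi - theta))**2          # fermions: subtract the amplitudes

print(p_bose / p_distinct)   # 2.0 -> twice the probability for non-identical particles
print(p_fermi)               # 0.0 -> identical (spinless) fermions never scatter at 90 degrees
```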
There is no Third Way, in quantum mechanics that is (there is a Third Way in reality, of course: that’s the classical world!). I guess it will become more obvious as I’ll get somewhat more acquainted with the real arithmetic involved in quantum-mechanical calculations over the coming weeks. In short, I’ve analyzed this thing over and over again, but it’s still not quite clear to me. I guess I should just move on and accept that:

1. This explanation ‘explains’ the experimental evidence, and that’s different probabilities for identical particles as compared to non-identical particles.
2. This explanation ‘complements’ analyses such as that 1916 analysis of blackbody radiation by Einstein (see my post on that), which approaches interference from an angle that’s somewhat more intuitive.

A numerical example

I’ve learned that, when some theoretical piece feels hard to read, an old-fashioned numerical example often helps. So let’s try one here. We can experiment with many functional forms but let’s keep things simple. From the illustration (which I copy below for your convenience), that angle θ can take any value between −π and +π, so you shouldn’t think detector 1 can only be ‘north’ of the collision spot: it can be anywhere.

[Figure: identical particles, situation (a)]

Now, it may or may not make sense (and please work out other examples than this one here), but let’s assume particle a and b are more likely to go in a line that’s more or less straight. In other words, the assumption is that both particles deflect each other only slightly, or even not at all. After all, we’re talking ‘point-like’ particles here and so, even when we try hard, it’s hard to make them collide really. That would amount to a typical bell-shaped curve for that probability density curve P(θ): one like the blue curve below. That one shows that the probability of particle a and b just bouncing back (i.e. θ ≈ ±π) is (close to) zero, while it’s highest for θ ≈ 0, and some intermediate value for any angle in between. The red curve shows P(π–θ), which can be found by mirroring the P(θ) curve around the vertical axis, which yields the same function because the function is symmetrical: P(θ) = P(–θ), and then shifting it horizontally over a distance π. It should: it’s the second possibility, remember? Particle a ending up in detector 2. But detector 2 is positioned at the angle π–θ and, hence, if π–θ is close to ±π (so if θ ≈ 0), that means particle a is basically bouncing back also, which we said is unlikely. On the other hand, if detector 2 is positioned at an angle π–θ ≈ 0, then we have the highest probability of particle a going right to it. In short, the red curve makes sense too, I would think. [But do think about it yourself: you’re the ultimate judge!]

[Graph: P(θ) (blue) and P(π–θ) (red)]

The harder question, of course, concerns the choice of some wave function f(θ) to match those P curves above. Remember that these probability densities P are real numbers and any real number is the absolute square (aka the squared norm) of an infinite number of complex numbers! So we’ve got l’embarras du choix, as they say in French. So… What to do? Well… Let’s keep things simple and stupid and choose a real-valued wave function f(θ), such as the blue function below. Huh? You’ll wonder if that’s legitimate. Frankly, I am not 100% sure, but why not? The blue f(θ) function will give you the blue P(θ) above, so why not go along with it? It’s based on a cosine function but it’s only half of a full cycle. Why? Not sure.
I am just trying to match some sinusoidal function with the probability density function here, so… Well… Let’s take the next step.

[Graph: the real-valued f(θ) (blue) and f(π–θ) (red), with their summed squares (green)]

The red graph above is the associated f(π–θ) function. Could we choose another one? No. There’s no freedom of choice here, I am afraid: if we choose a functional form for f(θ), then our f(π–θ) function is fixed too. So it is what it is: negative between –π and 0, and positive between 0 and +π. Now that is definitely not good, because f(π–θ) for θ = –π is not equal to f(π–θ) for θ = +π: they’re opposite values. That’s nonsensical, isn’t it? Both the f(θ) and the f(π–θ) function should be something cyclical… But, again, let’s go along with it for now: note that the green horizontal line is the sum of the squared (absolute) values of f(θ) and f(π–θ), and note that it’s some constant. Now, that’s a funny result, because I assumed both particles were more likely to go in some straight line, rather than recoil with some sharp angle θ. It again indicates I must be doing something wrong here. However, the important thing for me here is to compare with the Bose-Einstein and Fermi-Dirac statistics. What’s the total probability there if we take that blue f(θ) function? Well… That’s what’s shown below. The horizontal blue line is the same as the green line in the graph above: a constant probability for some particle (a or b) ending up in some detector (1 or 2). Note that the surface, when added, of the two rectangles above the x-axis (i.e. the θ-axis) should add up to 1. The red graph gives the probability when the experiment is carried out for (identical) bosons (or Bose particles as I like to call them). It’s weird: it makes sense from a mathematical point of view (the surface under the curve adds up to the same surface under the blue line, so it adds up to 1) but, from a physics point of view, what does this mean? A maximum at θ = π/2 and a minimum at θ = –π/2? Likewise, how to interpret the result for fermions? Is this OK? Well… To some extent, I guess. It surely matches the theoretical results I mentioned above: we have twice the probability for bosons for θ = 90° (red curve), and a probability equal to zero for the same angle when we’re talking fermions (green curve). Still, this numerical example triggers more questions than it answers. Indeed, my starting hypothesis was very symmetrical: both particle a and b are likely to go in a straight line, rather than being deflected in some sharp(er) angle. Now, while that hypothesis gave a somewhat unusual but still understandable probability density function in the classical world (for non-identical particles, we got a constant for P(θ) + P(π–θ)), we get this weird asymmetry in the quantum-mechanical world: we’re much more likely to catch a boson in a detector ‘north’ of the line of firing than ‘south’ of it, and vice versa for fermions. That’s weird, to say the least. So let’s go back to the drawing board and take another function for f(θ) and, hence, for f(π–θ). This time, the two graphs below assume that (i) f(θ) and f(π–θ) have a real as well as an imaginary part and (ii) that they go through a full cycle, instead of a half-cycle only. This is done by equating the real part of the two functions with cos(θ) and cos(π–θ) respectively, and their imaginary part with sin(θ) and sin(π–θ) respectively. [Note that we conveniently forget about the normalization condition here.] What do we see? Well… The imaginary part of f(θ) and f(π–θ) is the same, because sin(π–θ) = sin(θ).
We also see that the real parts of f(θ) and f(π–θ) are the same except for a phase difference equal to π: cos(π–θ) = cos[–(θ–π)] = cos(θ–π). More importantly, we see that the absolute square of both f(θ) and f(π–θ) yields the same constant, and so their sum P = |f(θ)|² + |f(π–θ)|² = 2|f(θ)|² = 2|f(π–θ)|² = 2P(θ) = 2P(π–θ). So that’s another constant. That’s actually OK because, this time, I did not favor one angle over the other (so I did not assume both particles were more likely to go in some straight line rather than recoil). Now, how does this compare to Bose-Einstein and Fermi-Dirac statistics? That’s shown below. For Bose-Einstein (left-hand side), the sum of the real parts of f(θ) and f(π–θ) yields zero (blue line), while the sum of their imaginary parts (i.e. the red graph) yields a sine-like function, but it has double the amplitude of sin(θ). That’s logical: sin(θ) + sin(π–θ) = 2sin(θ). The green curve is the more interesting one, because that’s the total probability we’re looking for. It has two maxima now, at +π/2 and at –π/2. That’s good, as it does away with that ‘weird asymmetry’ we got when we used a ‘half-cycle’ f(θ) function.

[Graphs: Bose-Einstein (left) and Fermi-Dirac (right) for the full-cycle f(θ)]

Likewise, the Fermi-Dirac probability density function looks good as well (right-hand side). We have the imaginary parts of f(θ) and f(π–θ) that ‘add’ to zero: sin(θ) – sin(π–θ) = 0 (I put ‘add’ between brackets because, with Fermi-Dirac, we’re subtracting of course), while the real parts ‘add’ up to a double cosine function: cos(θ) – cos(π–θ) = cos(θ) – [–cos(θ)] = 2cos(θ). We now get a minimum at +π/2 and at –π/2, which is also in line with the general result we’d expect. The (final) graph below summarizes our findings. It gives the three ‘types’ of probabilities, i.e. the probability of finding some particle in some detector as a function of the angle –π < θ < +π, using:

1. Maxwell-Boltzmann statistics: that’s the green constant (non-identical particles, and the probability does not vary with the angle θ).
2. Bose-Einstein statistics: that’s the blue graph below. It has two maxima, at +π/2 and at –π/2, and two minima, at 0 and at ±π (+π and –π are the same angle obviously), with the maxima equal to twice the value we get under Maxwell-Boltzmann statistics.
3. Finally, the red graph gives the Fermi-Dirac probabilities. Also two maxima and two minima, but at different places: the maxima are at θ = 0 and θ = ±π, while the minima are at +π/2 and at –π/2.

Funny, isn’t it? These probability density functions are all well-behaved, in the sense that they add up to the same total (which should be 1 when applying the normalization condition). Indeed, the surfaces under the green, blue and red lines are obviously the same. But so we get these weird fluctuations for Bose-Einstein and Fermi-Dirac statistics, favoring two specific angles over all others, while there’s no such favoritism when the experiment involves non-identical particles. This, of course, just follows from our assumption concerning f(θ). What if we double the frequency of f(θ), i.e. from one cycle to two cycles between –π and +π? Well… Just try it: take f(θ) = cos(2θ) + i·sin(2θ) and do the calculations. You should get the following probability graphs: we have the same green line for non-identical particles, but interference with four maxima (and four minima) for the Bose-Einstein and Fermi-Dirac probabilities.

[Graph: the same comparison for the doubled frequency, f(θ) = cos(2θ) + i·sin(2θ)]

Again… Funny, isn’t it? So… What to make of this? Frankly, I don’t know.
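For what it’s worth, the curves described above take only a few lines to reproduce. The choice f(θ) = cos(n·θ) + i·sin(n·θ) = e^(i·n·θ) follows the example (and, like the post, it conveniently ignores the normalization condition); numpy/matplotlib are simply my choice of tools here.

```python
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(-np.pi, np.pi, 1000)

def curves(n):
    f1 = np.exp(1j * n * theta)              # f(theta) = cos(n*theta) + i*sin(n*theta)
    f2 = np.exp(1j * n * (np.pi - theta))    # f(pi - theta)
    mb = abs(f1)**2 + abs(f2)**2             # non-identical particles: a flat line
    be = abs(f1 + f2)**2                     # Bose-Einstein: add the amplitudes
    fd = abs(f1 - f2)**2                     # Fermi-Dirac: subtract the amplitudes
    return mb, be, fd

for n, style in ((1, "-"), (2, "--")):       # the full cycle, and the doubled frequency
    mb, be, fd = curves(n)
    plt.plot(theta, mb, "g" + style, label=f"non-identical (n={n})")
    plt.plot(theta, be, "b" + style, label=f"Bose-Einstein (n={n})")
    plt.plot(theta, fd, "r" + style, label=f"Fermi-Dirac (n={n})")
plt.xlabel("theta (rad)")
plt.legend()
plt.show()
# n=1: B-E peaks at +/- pi/2, F-D peaks at 0 and +/- pi, both twice the flat line;
# n=2: four maxima and four minima each, as described above.
```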
But one last graph makes for an interesting observation: if the angular frequency of f(θ) takes on larger and larger values, the Bose-Einstein and Fermi-Dirac probability density functions also start oscillating wildly. For example, the graphs below are based on an f(θ) function equal to f(θ) = cos(25θ) + i·sin(25θ). The explosion of color hurts the eye, doesn’t it? 🙂 But, apart from that, do you now see why physicists say that, at high frequencies, the interference pattern gets smeared out? Indeed, if we move the detector just a little bit (i.e. we change the angle θ just a little bit) in the example below, we hit a maximum instead of a minimum, and vice versa. In short, the granularity may be such that we can only measure that green line, in which case we’d think we’re dealing with Maxwell-Boltzmann statistics, while the underlying reality may be different.

[Graph: the same comparison for f(θ) = cos(25θ) + i·sin(25θ)]

That explains another quote in Feynman’s famous introduction to quantum mechanics (Lectures, Vol. III, Chapter 1): “If the motion of all matter—as well as electrons—must be described in terms of waves, what about the bullets in our first experiment? Why didn’t we see an interference pattern there? It turns out that for the bullets the wavelengths were so tiny that the interference patterns became very fine. So fine, in fact, that with any detector of finite size one could not distinguish the separate maxima and minima. What we saw was only a kind of average, which is the classical curve. In the Figure below, we have tried to indicate schematically what happens with large-scale objects. Part (a) of the figure shows the probability distribution one might predict for bullets, using quantum mechanics. The rapid wiggles are supposed to represent the interference pattern one gets for waves of very short wavelength. Any physical detector, however, straddles several wiggles of the probability curve, so that the measurements show the smooth curve drawn in part (b) of the figure.”

[Figure: Feynman’s illustration of interference with bullets]

But that should really conclude this post. It has become way too long already. One final remark, though: the ‘smearing out’ effect also explains why those three equations for 〈Ni〉 sometimes do amount to more or less the same thing: the Bose-Einstein and Fermi-Dirac formulas may approximate the Maxwell-Boltzmann equation. In that case, the ±1 term in the denominator does not make much of a difference. As we said a couple of times already, it all depends on scale. 🙂

Concluding remarks

1. The best I can do in terms of interpreting the above is to tell myself that we cannot fully ‘fix’ the functional form of the wave function for the second or ‘other’ way the event can happen if we’re ‘fixing’ the functional form for the first of the two possibilities. We have to allow for a phase shift e^(iδ) indeed, which incorporates all kinds of considerations of uncertainty in regard to both time and position and, hence, in regard to energy and momentum also (using both the ΔEΔt ≥ ħ/2 and ΔxΔp ≥ ħ/2 expressions)–I assume (but that’s just a gut instinct). And the symmetry of the situation then implies e^(iδ) can only take on one of two possible values: –1 or +1 which, in turn, implies that δ is equal to 0 or π.

2. For those who’d think I am basically doing nothing but re-write a chapter out of Feynman’s Lectures, I’d refute that. One point to note is that Feynman doesn’t seem to accept that we should introduce a phase factor in the analysis for non-identical particles as well.
To be specific: just switching the detectors (instead of the particles) also implies that one should allow for the mathematical possibility of the phase of that f function being shifted by some random factor δ. The only difference with the quantum-mechanical analysis (i.e. the analysis for identical particles) is that the phase factor doesn’t make a difference as to the final result, because we’re not adding amplitudes but their absolute squares and, hence, a phase shift doesn’t matter.

3. I think all of the reasoning above makes not only for a very fine but also a very beautiful theoretical argument, even if I feel like I don’t fully ‘understand’ it, in an intuitive way that is. I hope this post has made you think. Isn’t it wonderful to see that the theoretical or mathematical possibilities of the model actually correspond to realities, both in the classical as well as in the quantum-mechanical world? In fact, I can imagine that most physicists and mathematicians would shrug this whole reflection off like… Well… Like: “Of course! It’s obvious, isn’t it?” I don’t think it’s obvious. I think it’s deep. I would even qualify it as mysterious, and surely as beautiful. 🙂

The Complementarity Principle

Unlike what you might think when seeing the title of this post, it is not my intention to enter into philosophical discussions here: many authors have been writing about this ‘principle’, most of whom–according to eminent physicists–don’t know what they are talking about. So I have no intention to make a fool of myself here too. However, what I do want to do here is explore, in an intuitive way, how the classical and quantum-mechanical explanations of the phenomenon of the diffraction of light are different from each other–and fundamentally so–while, necessarily, having to yield the same predictions. It is in that sense that the two explanations should be ‘complementary’.

The classical explanation

I’ve done a fairly complete analysis of the classical explanation in my posts on Diffraction and the Uncertainty Principle (20 and 21 September), so I won’t dwell on that here. Let me just repeat the basics. The model is based on the so-called Huygens-Fresnel Principle, according to which each point in the slit becomes a source of a secondary spherical wave. These waves then interfere, constructively or destructively, and, hence, by adding them, we get the form of the wave at each point of time and at each point in space behind the slit. The animation below illustrates the idea. However, note that the mathematical analysis does not assume that the point sources are neatly separated from each other: instead of only six point sources, we have an infinite number of them and, hence, adding up the waves amounts to solving some integral (which, as you know, is an infinite sum). We know what we are supposed to get: a diffraction pattern. The intensity of the light on the screen at the other side depends on (1) the slit width (d), (2) the wavelength of the light (λ), and (3) the angle of incidence (θ), as shown below. One point to note is that we have smaller bumps left and right. We don’t get that if we’d treat the slit as a single point source only, like Feynman does when he discusses the double-slit experiment for (physical) waves. (A quick numerical sketch of that Huygens-Fresnel summation follows below.)
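The Huygens-Fresnel summation is easy to mimic numerically: replace the integral over the slit by a sum over a few thousand point sources and you get the central peak plus the smaller secondary bumps. The slit width, wavelength and screen distance below are numbers I picked for illustration only.

```python
import numpy as np

lam = 500e-9                   # wavelength (m) -- illustrative value
d = 5e-6                       # slit width (m)
L = 1.0                        # distance from slit to screen (m)
k = 2 * np.pi / lam

sources = np.linspace(-d/2, d/2, 2000)      # Huygens point sources across the slit
x_screen = np.linspace(-0.4, 0.4, 801)      # positions on the screen

intensity = []
for x in x_screen:
    r = np.sqrt(L**2 + (x - sources)**2)    # distance from each secondary source
    field = np.sum(np.exp(1j * k * r) / r)  # superpose the secondary spherical waves
    intensity.append(abs(field)**2)
intensity = np.array(intensity)
intensity /= intensity.max()

# Central maximum at x = 0; first zero near x = L*lam/d = 0.1 m; smaller bumps beyond it.
print(intensity[400])                                # 1.0 (the central peak)
print(intensity[np.argmin(abs(x_screen - 0.1))])     # close to 0 (the first minimum)
print(intensity[np.argmin(abs(x_screen - 0.15))])    # a few percent: the first secondary bump
```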
Indeed, look at the image below: each of the slits acts as one point source only and, hence, the intensity curves I1 and I2 do not show a diffraction pattern. They are just nice Gaussian “bell” curves, albeit somewhat adjusted because of the angle of incidence (we have two slits above and below the center, instead of just one on the normal itself). So we have an interference pattern on the screen and, now that we’re here, let me be clear on terminology: I am going along with the widespread definition of diffraction being a pattern created by one slit, and the definition of interference as a pattern created by two or more slits. I am noting this just to make sure there’s no confusion.

[Figure: interference of water waves in the two-slit set-up]

That should be clear enough. Let’s move on to the quantum-mechanical explanation.

The quantum-mechanical explanation

There are several formulations of quantum mechanics: you’ve heard about matrix mechanics and wave mechanics. Roughly speaking, in matrix mechanics “we interpret the physical properties of particles as matrices that evolve in time”, while the wave mechanics approach is primarily based on these complex-valued wave functions–one for each physical property (e.g. position, momentum, energy). Both approaches are mathematically equivalent. There is also a third approach, which is referred to as the path integral formulation, which “replaces the classical notion of a single, unique trajectory for a system with a sum, or functional integral, over an infinity of possible trajectories to compute an amplitude” (all definitions here were taken from Wikipedia). This approach is associated with Richard Feynman but can also be traced back to Paul Dirac, like most of the math involved in quantum mechanics, it seems. It’s this approach which I’ll try to explain–again, in an intuitive way only–in order to show the two explanations should effectively lead to the same predictions.

The key to understanding the path integral formulation is the assumption that a particle–and a ‘particle’ may refer to either bosons (e.g. photons) or fermions (e.g. electrons)–can follow any path from point A to B, as illustrated below. Each of these paths is associated with a (complex-valued) probability amplitude, and we have to add all these probability amplitudes to arrive at the probability amplitude for the particle to move from A to B. You can find great animations illustrating what it’s all about in the relevant Wikipedia article but, because I can’t upload video here, I’ll just insert two illustrations from Feynman’s 1985 QED, in which he does what I try to do, and that is to approach the topic intuitively, i.e. without too much mathematical formalism. So probability amplitudes are just ‘arrows’ (with a length and a direction, just like a complex number or a vector), and finding the resultant or final arrow is a matter of just adding all the little arrows to arrive at one big arrow, which is the probability amplitude, which he denotes as P(A, B), as shown below.

This intuitive approach is great and actually goes a very long way in explaining complicated phenomena, such as iridescence for example (the wonderful patterns of color on an oil film!), or the partial reflection of light by glass (anything between 0 and 16%!). All his tricks make sense. For example, different frequencies are interpreted as slower or faster ‘stopwatches’ and, as such, they determine the final direction of the arrows which, in turn, explains why blue and red light are reflected differently. And so on and so on. It all works. […] Up to a point.
Indeed, Feynman does get in trouble when trying to explain diffraction. I’ve reproduced his explanation below. The key to the argument is the following:

1. If we have a slit that’s very wide, there are a lot of possible paths for the photon to take. However, most of these paths cancel each other out, and so that’s why the photon is likely to travel in a straight line. Let me quote Feynman: “When the gap between the blocks is wide enough to allow many neighboring paths to P and Q, the arrows for the paths to P add up (because all the paths to P take nearly the same time), while the paths to Q cancel out (because those paths have a sizable difference in time). So the photomultiplier at Q doesn’t click.” (QED, p. 54)

2. However, “when the gap is nearly closed and there are only a few neighboring paths, the arrows to Q also add up, because there is hardly any difference in time between them, either (see Fig. 34). Of course, both final arrows are small, so there’s not much light either way through such a small hole, but the detector at Q clicks almost as much as the one at P! So when you try to squeeze light too much to make sure it’s going only in a straight line, it refuses to cooperate and begins to spread out.” (QED, p. 55)

[Figures from QED: many neighboring paths through a wide gap, few neighboring paths through a narrow gap]

This explanation is as simple and intuitive as Feynman’s ‘explanation’ of diffraction using the Uncertainty Principle in his introductory chapter on quantum mechanics (Lectures, I-38-2), which is illustrated below. I won’t go into the detail (I’ve done that before) but you should note that, just like the explanation above, such explanations do not explain the secondary, tertiary etc. bumps in the diffraction pattern.

[Figure: diffraction of electrons, as illustrated in Feynman’s Lectures]

So what’s wrong with these explanations? Nothing much. They’re simple and intuitive, but essentially incomplete, because they do not incorporate all of the math involved in interference. Incorporating the math means doing these integrals for:

1. Electromagnetic waves in classical mechanics: here we are talking ‘wave functions’ with some real-valued amplitude representing the strength of the electric and magnetic field; and
2. Probability waves: these are complex-valued functions, with the complex-valued amplitude representing probability amplitudes.

The two should, obviously, yield the same result, but a detailed comparison between the approaches is quite complicated, it seems. Now, I’ve googled a lot of stuff, and I duly note that diffraction of electromagnetic waves (i.e. light) is conveniently analyzed by summing up complex-valued waves too, and, moreover, they’re of the same familiar type: ψ = A·e^(i(kx–ωt)). However, these analyses also duly note that it’s only the real part of the wave that has an actual physical interpretation, and that it’s only because working with natural exponentials (addition, multiplication, integration, derivation, etc.) is much easier than working with sine and cosine waves that such complex-valued wave functions are used (also) in classical mechanics. In fact, note the fine print in Feynman’s illustration of interference of physical waves (Fig. 37-2): he calculates the intensities I1 and I2 by taking the square of the absolute amplitudes ĥ1 and ĥ2, and the hat indicates that we’re also talking some complex-valued wave function here. Hence, we must be talking the same mathematical waves in both explanations, aren’t we? In other words, we should get the same psi functions ψ = A·e^(i(kx–ωt)) in both explanations, don’t we? Well… Maybe. But… Probably not.
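As an aside, Feynman’s ‘many paths versus few paths’ argument quoted above can itself be mimicked with a few lines of code: give every path through the gap a unit arrow e^(i·k·path length) and compare the resultant arrow at a point P straight ahead with the one at a point Q off to the side. The geometry and the numbers below are mine, purely for illustration.

```python
import numpy as np

lam = 500e-9                  # wavelength (m) -- illustrative
k = 2 * np.pi / lam
L = 1.0                       # source-to-gap and gap-to-detector distance (m)

def resultant(gap, x_detector, n_paths=20001):
    """Length of the final arrow (normalized) after adding one arrow per path through the gap."""
    y = np.linspace(-gap/2, gap/2, n_paths)          # where each path crosses the gap
    path = np.sqrt(L**2 + y**2) + np.sqrt(L**2 + (x_detector - y)**2)
    return abs(np.sum(np.exp(1j * k * path))) / n_paths

for gap in (1e-3, 1e-6):                             # wide gap vs. nearly closed gap
    P = resultant(gap, x_detector=0.0)               # detector straight ahead
    Q = resultant(gap, x_detector=0.1)               # detector well off to the side
    print(f"gap = {gap:g} m:  P = {P:.3f},  Q = {Q:.3f}")
# Wide gap: the arrows to P line up while those to Q cancel out (Q << P).
# Tiny gap: hardly any path-length differences remain, so Q 'clicks' almost as much as P.
```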
As far as I know–but I must be wrong–we cannot just re-normalize the E and B vectors in these electromagnetic waves in order to establish an equivalence with probability waves. I haven’t seen that being done (but I readily admit I still have a lot of reading to do) and so I must assume it’s not very clear-cut at all. So what? Well… I don’t know. So far, I did not find a ‘nice’ or ‘intuitive’ explanation of a quantum-mechanical approach to the phenomenon of diffraction yielding the same grand diffraction equation, referred to as the Fresnel-Kirchhoff diffraction formula (see below), or one of its more comprehensible (because simplified) representations, such as the Fraunhofer diffraction formula, or the even easier formula which I used in my own post (you can google them: they’re somewhat less monstrous and–importantly–they work with real numbers only, which makes them easier to understand).

[Formula: the Fresnel-Kirchhoff diffraction formula]

[…] That looks pretty daunting, isn’t it? You may start to understand it a bit better by noting that (n, r) and (n, s) are angles, so that’s OK in a cosine function. The other variables also have fairly standard interpretations, as shown below, but… Admit it: ‘easy’ is something else, isn’t it? So… Where are we here? Well… As said, I trust that both explanations are mathematically equivalent – just like matrix and wave mechanics 🙂 – and, hence, that a quantum-mechanical analysis will indeed yield the same formula. However, I think I’ll only understand physics truly if I’ve gone through all of the motions here. Well then… I guess that should be some kind of personal benchmark that should guide me on this journey, isn’t it? 🙂 I’ll keep you posted.

Post scriptum: To be fair to Feynman, and demonstrating his talent as a teacher once again, he actually acknowledges that the double-slit thought experiment uses simplified assumptions that do not include diffraction effects when the electrons go through the slit(s). He does so, however, only in one of the first chapters of Vol. III of the Lectures, where he comes back to the experiment to further discuss the first principles of quantum mechanics. I’ll just quote him: “Incidentally, we are going to suppose that the holes 1 and 2 are small enough that when we say an electron goes through the hole, we don’t have to discuss which part of the hole. We could, of course, split each hole into pieces with a certain amplitude that the electron goes to the top of the hole and the bottom of the hole and so on. We will suppose that the hole is small enough so that we don’t have to worry about this detail. That is part of the roughness involved; the matter can be made more precise, but we don’t want to do so at this stage.” So here he acknowledges that he omitted the intricacies of diffraction. I noted this only later. Sorry.

A Royal Road to quantum physics?

It is said that, when Ptolemy asked Euclid to quickly explain geometry to him, Euclid told the King that there was no ‘Royal Road’ to it, by which he meant it’s just difficult and takes a lot of time to understand. Physicists will tell you the same about quantum physics. So, I know that, at this point, I should just study Feynman’s third Lectures Volume and shut up for a while. However, before I get lost while playing with state vectors, S-matrices, eigenfunctions, eigenvalues and what have you, I’ll try that Royal Road anyway, building on my previous digression on Hamiltonian mechanics. So… What was that about?
Well… If you understood anything from my previous post, it should be that both the Lagrangian and the Hamiltonian function use the equations for kinetic and potential energy to derive the equations of motion for a system. The key difference between the Lagrangian and the Hamiltonian approach is that the Lagrangian approach yields one differential equation, which has to be solved to yield a functional form for x as a function of time, while the Hamiltonian approach yields two differential equations, which have to be solved to yield a functional form for both position (x) and momentum (p). In other words, Lagrangian mechanics is a model that focuses on the position variable(s) only, while, in Hamiltonian mechanics, we also keep track of the momentum variable(s). Let me briefly explain the procedure again, so we're clear on it:

1. We write down a function referred to as the Lagrangian function. The function is L = T – V, with T and V the kinetic and potential energy respectively. T has to be expressed as a function of velocity (v) and V has to be expressed as a function of position (x). You'll say: of course! However, it is an important point to note, because otherwise the following step doesn't make sense. So we take the equations for kinetic and potential energy and combine them to form a function L = L(x, v).

2. We then write down the so-called Lagrangian equation, in which we use that function L. To be precise: what we have to do is calculate its partial derivatives and insert these in the following equation:

d/dt(∂L/∂v) – ∂L/∂x = 0

It should be obvious now why I stressed we should write L as a function of velocity and position, i.e. as L = L(x, v): otherwise those partial derivatives don't make sense. As to where this equation comes from, don't worry about it: I did not explain why this works. I didn't do that here, and I also didn't do it in my previous post. What we're doing here is just explaining how it goes, not why.

3. If we've done everything right, we should get a second-order differential equation which, as mentioned above, we should then solve for x(t). That's what 'solving' a differential equation is about: finding a functional form that satisfies the equation.

Let's now look at the Hamiltonian approach.

1. We write down a function referred to as the Hamiltonian function. It looks similar to the Lagrangian, except that we sum kinetic and potential energy, and that T has to be expressed as a function of the momentum p. So we have a function H = T + V = H(x, p).

2. We then write down the so-called Hamiltonian equations, which is a set of two equations, rather than just one. [We have two for the one-dimensional situation that we are modeling here: it's a different story (i.e. we will have more equations) if we have more degrees of freedom, of course.] It's the same as in the Lagrangian approach: it's just a matter of calculating partial derivatives and inserting them in the equations below. Again, note that I am not explaining why this Hamiltonian hocus-pocus actually works. I am just saying how it works.

dx/dt = ∂H/∂p and dp/dt = –∂H/∂x

3. If we've done everything right, we should get two first-order differential equations, which we should then solve for x(t) and p(t). Now, solving a set of equations may or may not be easy, depending on your point of view. If you wonder how it's done, there's excellent stuff on the Web that will show you how (such as, for instance, Paul's Online Math Notes).
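To make the Hamiltonian recipe concrete, here's a minimal numerical sketch–my own toy example–for the mass-on-a-spring system, with H = p²/2m + kx²/2, so that ∂H/∂p = p/m and ∂H/∂x = kx. The numbers are arbitrary; the point is just that stepping the two first-order equations forward in time reproduces the x(t) = A·cos(ωt) solution we know from Newtonian mechanics.

import numpy as np

# Hamilton's equations for a mass on a spring, with H = p^2/(2m) + k*x^2/2:
#   dx/dt = dH/dp = p/m   and   dp/dt = -dH/dx = -k*x
# integrated with a simple kick-drift-kick stepper, then compared with the
# known analytical solution x(t) = A*cos(omega*t). All numbers are arbitrary.
m, k = 1.0, 4.0
omega = np.sqrt(k/m)
x, p = 1.0, 0.0                  # start at x = A = 1 with zero momentum
dt = 1e-4
n_steps = 50000                  # so T = n_steps*dt = 5.0
for _ in range(n_steps):
    p -= 0.5*dt*k*x              # half step for dp/dt = -dH/dx
    x += dt*p/m                  # full step for dx/dt =  dH/dp
    p -= 0.5*dt*k*x              # second half step
T = n_steps*dt
print("numerical  x(T), p(T):", round(x, 5), round(p, 5))
print("analytical x(T), p(T):", round(np.cos(omega*T), 5), round(-m*omega*np.sin(omega*T), 5))

If you run it, the two lines agree to four or five decimals, which is all we need here.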
Now, I mentioned in my previous post that the Hamiltonian approach to modeling mechanics is very similar to the approach that’s used in quantum mechanics and that it’s therefore the preferred approach in physics. I also mentioned that, in classical physics, position and momentum are also conjugate variables, and I also showed how we can calculate the momentum as a conjugate variable from the Lagrangian: p = ∂L/∂v. However, I did not dwell on what conjugate variables actually are in classical mechanics. I won’t do that here either. Just accept that conjugate variables, in classical mechanics, are also defined as pairs of variables. They’re not related through some uncertainty relation, like in quantum physics, but they’re related because they can both be obtained as the derivatives of a function which I haven’t introduced as yet. That function is referred to as the action, but… Well… Let’s resist the temptation to digress any further here. If you really want to know what action is–in physics, that is… 🙂 Well… Google it, I’d say. What you should take home from this digression is that position and momentum are also conjugate variables in classical mechanics. Let’s now move on to quantum mechanics. You’ll see that the ‘similarity’ in approach is… Well… Quite relative, I’d say. 🙂 Position and momentum in quantum mechanics As you know by now (I wrote at least a dozen posts on this), the concept of position and momentum in quantum mechanics is very different from that in classical physics: we do not have x(t) and p(t) functions which give a unique, precise and unambiguous value for x and p when we assign a value to the time variable and plug it in. No. What we have in quantum physics is some weird wave function, denoted by the Greek letters φ (phi) or ψ (psi) or, using Greek capitals, Φ and Ψ. To be more specific, the psi usually denotes the wave function in the so-called position space (so we write ψ = ψ(x)), and the phi will usually denote the wave function in the so-called momentum space (so we write φ = φ(p)). That sounds more complicated than it is, obviously, but I just wanted to respect terminology here. Finally, note that the ψ(x) and φ(p) wave functions are related through the Uncertainty Principle: they’re conjugate variables, and we have this ΔxΔp = ħ/2 equation, in which the Δ is some standard deviation from some mean value. I should not go into more detail here: you know that by now, don’t you? While the argument of these functions is some real number, the wave functions themselves are complex-valued, so they have a real and complex amplitude. I’ve also illustrated that a couple of times already but, just to make sure, take a look at the animation below, so you know what we are sort of talking about: 1. The A and B situations represent a classical oscillator: we know exactly where the red ball is at any point in time. 2. The C to H situations give us a complex-valued amplitude, with the blue oscillation as the real part, and the pink oscillation as the imaginary part. QuantumHarmonicOscillatorAnimationSo we have such wave function both for x and p. Note that the animation above suggests we’re only looking at the wave function for x but–trust me–we have a similar one for p, and they’re related indeed. [To see how exactly, I’d advise you to go through the proof of the so-called Kennard inequality.] So… What do we do with that? 
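Before answering that, a quick numerical aside. The statement that ψ(x) and φ(p) form a conjugate pair–related by a Fourier transform, with ΔxΔp ≥ ħ/2 (the Kennard inequality, with equality for a Gaussian packet)–is something you can check on your computer. The sketch below is my own little check, in units where ħ = 1, with an arbitrary packet width and an arbitrary mean wave number.

import numpy as np

# Build a Gaussian wave packet psi(x), Fourier-transform it to momentum space,
# and compute the two spreads. For a Gaussian the product should come out right
# at hbar/2 (the Kennard bound). Units: hbar = 1; packet parameters are arbitrary.
hbar  = 1.0
N     = 2**15
x     = np.linspace(-100.0, 100.0, N)
dx    = x[1] - x[0]
sigma = 1.7
psi   = np.exp(-x**2/(4*sigma**2)) * np.exp(1j*3.0*x)   # Gaussian envelope times a plane wave
psi  /= np.sqrt(np.sum(np.abs(psi)**2)*dx)              # normalize

p   = 2*np.pi*hbar*np.fft.fftfreq(N, d=dx)              # momentum grid
dp  = 2*np.pi*hbar/(N*dx)
phi = np.fft.fft(psi)*dx/np.sqrt(2*np.pi*hbar)          # phi(p), up to an overall phase

def spread(v, prob):
    mean = np.sum(v*prob)
    return np.sqrt(np.sum((v - mean)**2*prob))

Dx = spread(x, np.abs(psi)**2*dx)
Dp = spread(p, np.abs(phi)**2*dp)
print("Delta x =", round(Dx, 4), "  Delta p =", round(Dp, 4),
      "  product =", round(Dx*Dp, 4), "  hbar/2 =", hbar/2)

For the Gaussian, the product comes out at essentially 0.5, i.e. right at ħ/2; any other packet shape gives you something larger. OK. That was the aside. Back to the question: what do we do with these wave functions?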
The position and momentum operators When we want to know where a particle actually is, or what its momentum is, we need to do something with this wave function ψ or φ. Let’s focus on the position variable first. While the wave function itself is said to have ‘no physical interpretation’ (frankly, I don’t know what that means: I’d think everything has some kind of interpretation (and what’s physical and non-physical?), but let’s not get lost in philosophy here), we know that the square of the absolute value of the probability amplitude yields a probability density. So |ψ(x)|gives us a probability density function or, to put it simply, the probability to find our ‘particle’ (or ‘wavicle’ if you want) at point x. Let’s now do something more sophisticated and write down the expected value of x, which is usually denoted by 〈x〉 (although that invites confusion with Dirac’s bra-ket notation, but don’t worry about it): expected value of x Don’t panic. It’s just an integral. Look at it. ψ* is just the complex conjugate (i.e. a – ib if ψ = a + ib) and you will (or should) remember that the product of a complex number with its (complex) conjugate gives us the square of its absolute value: ψ*ψ = |ψ(x)|2. What about that x? Can we just insert that there, in-between ψ* and ψ ? Good question. The answer is: yes, of course! That x is just some real number and we can put it anywhere. However, it’s still a good question because, while multiplication of complex numbers is commutative (hence,  z1z2 = z2z1), the order of our operators – which we will introduce soon – can often not be changed without consequences, so it is something to note. For the rest, that integral above is quite obvious and it should really not puzzle you: we just multiply a value with its probability of occurring and integrate over the whole domain to get an expected value 〈x〉. Nothing wrong here. Note that we get some real number. [You’ll say: of course! However, I always find it useful to check that when looking at those things mixing complex-valued functions with real-valued variables or arguments. A quick check on the dimensions of what we’re dealing helps greatly in understanding what we’re doing.] So… You’ve surely heard about the position and momentum operators already. Is that, then, what it is? Doing some integral on some function to get an expected value? Well… No. But there’s a relation. However, let me first make a remark on notation, because that can be quite confusing. The position operator is usually written with a hat on top of the variable – like ẑ – but so I don’t find a hat with every letter with the editor tool for this blog and, hence, I’ll use a bold letter x and p to denote the operator. Don’t confuse it with me using a bold letter for vectors though ! Now, back to the story. Let’s first give an example of an operator you’re already familiar with in order to understand what an operator actually is. To put it simply: an operator is an instruction to do something with a function. For example: ∂/∂t is an instruction to differentiate some function with regard to the variable t (which usually stands for time). The ∂/∂t operator is obviously referred to as a differentiation operator. When we put a function behind, e.g. f(x, t), we get ∂f(x, t)/∂t, which is just another function in x and t. So we have the same here: x in itself is just an instruction: you need to put a function behind in order to get some result. So you’ll see it as xψ. 
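Just to convince you that nothing mysterious is going on in that ⟨x⟩ integral, here's a quick numerical version of it–an arbitrary normalized Gaussian packet of mine, centred at x = 2, with a plane-wave factor thrown in so the function is genuinely complex-valued.

import numpy as np

# <x> = integral of psi*(x) * x * psi(x) dx, evaluated numerically for a made-up
# but normalized wave function: a Gaussian bump centred at x = 2 (sigma = 1),
# multiplied by a plane-wave factor so that psi is genuinely complex-valued.
x   = np.linspace(-30.0, 30.0, 100001)
dx  = x[1] - x[0]
psi = (2*np.pi)**-0.25 * np.exp(-(x - 2.0)**2/4) * np.exp(1j*5*x)

print("normalization check:", round(np.sum(np.abs(psi)**2)*dx, 6))   # ~1
exp_x = np.sum(np.conj(psi)*x*psi)*dx
print("<x> =", np.round(exp_x, 5))   # ~2, with a vanishing imaginary part

The result is a plain real number (the tiny imaginary part is numerical round-off), which is the sanity check I mentioned.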
Back to the notation: it would probably be useful to use brackets, like x[ψ], especially because I can't put those hats on the letters here, but I'll stick to the usual notation, which does not use brackets. Likewise, we have a momentum operator: p = –iħ∂/∂x. […] Let it sink in. […] What's this? Don't worry about it. I know: that looks like a very different animal than that x operator. I'll explain later. Just note, for the moment, that the momentum operator (also) involves a (partial) derivative and, hence, we refer to it as a differential operator (as opposed to a differentiation operator). The instruction p = –iħ∂/∂x basically means: differentiate the function with regard to x and multiply the result with –iħ (i.e. with minus the product of the (reduced) Planck constant and the imaginary unit i). Nothing wrong with that. Just calculate a derivative and multiply with a tiny imaginary (complex) number. Now, back to the position operator x. As you can see, that's a very simple operator–much simpler than the momentum operator in any case. The position operator applied to ψ yields, quite simply, the xψ(x) factor in the integrand above. So we just get a new function xψ(x) when we apply x to ψ, of which the values are simply the product of x and ψ(x). Hence, we write xψ = xψ. Really? Is it that simple? Yes. For now at least. 🙂 Back to the momentum operator. Where does that come from? That story is not so simple. [Of course not. It can't be. Just look at it.] Because we have to avoid talking about eigenvalues and all that, my approach to the explanation will be quite intuitive. [As for 'my' approach, let me note that it's basically the approach as used in the Wikipedia article on it. :-)] Just stay with me for a while here. Let's assume ψ is given by ψ = e^i(kx–ωt). So that's a nice periodic function, albeit complex-valued. Now, we know that functional form doesn't make all that much sense, because it corresponds to the particle being everywhere: the square of its absolute value is some constant. In fact, we know it doesn't even respect the normalization condition: all probabilities have to add up to 1. However, that being said, we also know that we can superimpose an infinite number of such waves (all with different k and ω) to get a more localized wave train, and then re-normalize the result to make sure the normalization condition is met. Hence, let's just go along with this idealized example and see where it leads. We know the wave number k (i.e. its 'frequency in space', as it's often described) is related to the momentum p through the de Broglie relation: p = ħk. [Again, you should think about a whole bunch of these waves and, hence, some spread in k corresponding to some spread in p, but just go along with the story for now and don't try to make it even more complicated.] Now, if we differentiate with regard to x, we get ∂ψ/∂x = ∂[e^i(kx–ωt)]/∂x = ik·e^i(kx–ωt) = ikψ or, substituting k = p/ħ:

∂ψ/∂x = i(p/ħ)ψ

So what is this? Well… On the left-hand side, we have the (partial) derivative of a complex-valued function (ψ) with regard to x. Now, that derivative is, more likely than not, also some complex-valued function. And if you don't believe me, just look at the right-hand side of the equation, where we have that i and ψ. In fact, the equation just shows that, when we take that derivative, we get our original function ψ back, but multiplied by ip/ħ. Hey! We've got a differential equation here, don't we? Yes. And the solution for it is… Well… The natural exponential. Of course!
That should be no surprise, because we started out with a natural exponential as functional form! So that's not the point. What is the point, then? Well… If we bring that i/ħ factor to the other side–i.e. if we multiply both sides with –iħ–we get:

–iħ(∂ψ/∂x) = pψ

[If you're confused about the –i, remember that 1/i = –i.] So… We've got pψ on the right-hand side now. So… Well… That's like xψ, isn't it? Yes. 🙂 If we define the momentum operator as p = –iħ(∂/∂x), then we get pψ = pψ. So that's the same thing as for the position operator. It's just that p is… Well… A more complex operator, as it has that –iħ factor in it. And, yes, of course it also involves an instruction to differentiate, which also sets it apart from the position operator, which is just an instruction to multiply the function with its argument. I am sure you'll find this funny–perhaps even fishy–business. And, yes, I have the same questions: what does it all mean? I can't answer that here. As for now, just accept that this position and momentum operator are what they are, and that I can't do anything about that. But… I hear you sputter: what about their interpretation? Well… Sorry… I could say that the functions xψ and pψ are so-called linear maps, but that is not likely to help you much in understanding what these operators really do. You – and I for sure 🙂 – will indeed have to go through that story of eigenvalues to get to a somewhat deeper understanding of what these operators actually are. That's just how it is. As for now, I just have to move on. Sorry for letting you down here. 🙂

Energy operators

Now that we sort of 'understand' those position and momentum operators (or their mathematical form at least), it's time to introduce the energy operators. Indeed, in quantum mechanics, we've also got an operator for (a) kinetic energy and (b) potential energy. These operators are also denoted with a hat above the T and V symbol. All quantum-mechanical operators are like that, it seems. However, because of the limitations of the editor tool here, I'll also use a bold T and V respectively. Now, I am sure you've had enough of these operators, so let me just jot them down:

1. V = V, so that's just an instruction to multiply a function with V = V(x, t). That's easy enough, because it's just like the position operator.

2. As for T, that's more complicated. It involves that momentum operator p, which was also more complicated, remember? Let me just give you the formula: T = p·p/2m = p²/2m. So we multiply the operator p with itself here. What does that mean? Well… Because the operator involves a derivative, it means we have to take the derivative twice and… No ! Well… Let me correct myself: yes and no. 🙂 That p·p product is, strictly speaking, a dot product between two vectors, and so it's not just a matter of differentiating twice. Now that we are here, we may just as well extend the analysis a bit and assume that we also have a y and z coordinate, so we'll have a position vector r = (x, y, z). [Note that r is a vector here, not an operator. !?! Oh… Well…] Extending the analysis to three (or more) dimensions means that we should replace the differentiation operator by the so-called gradient or del operator: ∇ = (∂/∂x, ∂/∂y, ∂/∂z). And now that dot product p·p will, among other things, yield another operator which you're surely familiar with: the Laplacian.
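Before we go three-dimensional, let me do a quick symbolic check–using Python's sympy, just because it's handy; nothing here is specific to that tool–that the one-dimensional operators behave as advertised on our idealized wave function ψ = e^i(kx–ωt): applying p = –iħ∂/∂x once should give ħk (i.e. the momentum p) times ψ, and applying it twice and dividing by 2m should give the classical kinetic energy p²/2m times ψ.

import sympy as sp

# Symbolic check that the momentum operator p = -i*hbar*d/dx does what it should
# on a plane wave psi = exp(i*(k*x - omega*t)): one application gives (hbar*k)*psi,
# and p applied twice, divided by 2m, gives (hbar*k)^2/(2m) times psi.
x, t, k, omega, hbar, m = sp.symbols('x t k omega hbar m', positive=True)
psi = sp.exp(sp.I*(k*x - omega*t))

p_psi = -sp.I*hbar*sp.diff(psi, x)              # momentum operator applied once
T_psi = -hbar**2/(2*m)*sp.diff(psi, x, 2)       # (p.p)/2m applied to psi

print(sp.simplify(p_psi/psi))   # -> hbar*k            (i.e. the momentum p)
print(sp.simplify(T_psi/psi))   # -> hbar**2*k**2/(2*m) (i.e. p^2/2m)

If you run it, you get hbar*k and hbar**2*k**2/(2*m), as expected.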
Let me remind you of what that Laplacian looks like:

∇² = ∇·∇ = ∂²/∂x² + ∂²/∂y² + ∂²/∂z²

Hence, we can write the kinetic energy operator T as:

T = p·p/2m = –(ħ²/2m)∇²

[You'll have to imagine the hats on the T and the p here: the editor tool doesn't let me put them on.] In case you're despairing, hang on ! We're almost there. 🙂 We can, indeed, now define the Hamiltonian operator that's used in quantum mechanics. While the Hamiltonian function was the sum of the potential and kinetic energy functions in classical physics, in quantum mechanics we add the two energy operators. You'll grumble and say: that's not the same as adding energies. And you're right: adding operators is not the same as adding energy functions. Of course it isn't. 🙂 But just stick to the story, please, and stop criticizing. [Oh – just in case you wonder where that minus sign comes from: i² = –1, of course.] Adding the two operators together yields the following:

H = T + V = –(ħ²/2m)∇² + V

So. Yes. That's the famous Hamiltonian operator. OK. So what? Yes…. Hmm… What do we do with that operator? Well… We apply it to the function and so we write Hψ = … Hmm… Well… What? Well… I am not writing this post just to give some definitions of the type of operators that are used in quantum mechanics and then just do obvious stuff by writing it all out. No. I am writing this post to illustrate how things work. OK. So how does it work then? Well… It turns out that, in quantum mechanics, we have equations that are similar to those of classical mechanics. Remember that I just wrote down the set of (two) differential equations when discussing Hamiltonian mechanics? Here I'll do the same. The Hamiltonian operator appears in an equation you've surely heard of and which, just like me, you'd love to understand–and then I mean: understand it fully, completely, and intuitively. […] Yes. It's the Schrödinger equation:

iħ·∂ψ/∂t = Hψ

Note, once again, I am not saying anything about where this equation comes from. It's like jotting down that Lagrange equation, or the set of Hamiltonian equations: I am not saying anything about the why of all this hocus pocus. I am just saying how it goes. So we've got another differential equation here, and we have to solve it. If we write it all out using the above definition of the Hamiltonian operator, we get:

iħ·∂ψ(r, t)/∂t = –(ħ²/2μ)∇²ψ(r, t) + V(r, t)ψ(r, t)

If you're still with me, you'll immediately wonder about that μ. Well… Don't. It's the mass really, but the so-called reduced mass. Don't worry about it. Just google it if you want to know more about this concept of a 'reduced' mass: it's a fine point which doesn't matter here really. The point is the grand result. But… So… What is the grand result? What are we looking at here? Well… Just as I said above: that Schrödinger equation is a differential equation, just like those equations we got when applying the Lagrangian and Hamiltonian approach to modeling a dynamic system in classical mechanics, and, hence, just like what we (were supposed to) do there, we have to solve it. 🙂 Of course, it looks much more daunting than our Lagrangian or Hamiltonian differential equations, because we've got complex-valued functions here, and you're probably scared of that iħ factor too. But you shouldn't be. When everything is said and done, we've got a differential equation here that we need to solve for ψ. In other words, we need to find functional forms for ψ that satisfy the above equation. That's it. Period. So what do these solutions look like?
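Before I show you, let me convince you (and myself) that this operator machinery actually produces sensible numbers. The sketch below–my own, with an arbitrary grid and box size–discretizes H = –(ħ²/2m)d²/dx² + V(x) for the harmonic potential V = mω²x²/2 (our mass-on-a-spring again) and asks numpy for the lowest eigenvalues of the resulting matrix, i.e. it solves the time-independent problem Hψ = Eψ, whose solutions are the stationary states. The textbook answer is E_n = (n + 1/2)ħω, so, in units where ħ = m = ω = 1, we should see 0.5, 1.5, 2.5, …

import numpy as np

# Discretize H = -(hbar^2/2m) d^2/dx^2 + V(x) on a grid (finite differences) and
# compute its lowest eigenvalues for the harmonic potential V = m*omega^2*x^2/2.
# Expected: E_n = (n + 1/2)*hbar*omega. Units: hbar = m = omega = 1.
hbar = m = omega = 1.0
N  = 800
x  = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]

V = 0.5*m*omega**2*x**2
# second derivative as a tridiagonal matrix: (psi[i-1] - 2*psi[i] + psi[i+1]) / dx^2
main = np.full(N, -2.0)
off  = np.ones(N - 1)
D2 = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / dx**2
H  = -hbar**2/(2*m)*D2 + np.diag(V)

E = np.linalg.eigvalsh(H)
print("lowest eigenvalues:", np.round(E[:4], 4))   # expect ~0.5, 1.5, 2.5, 3.5

And indeed, that's what comes out, to three or four decimals. Now, back to the question of what the solutions themselves look like.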
Well, they look like those complex-valued oscillating things in the very first animation above. Let me copy them again: So… That’s it then? Yes. I won’t say anything more about it here, because (1) this post has become way too long already, and so I won’t dwell on the solutions of that Schrödinger equation, and because (2) I do feel it’s about time I really start doing what it takes, and that’s to work on all of the math that’s necessary to actually do all that hocus-pocus. 🙂 Post scriptum: As for understanding the Schrödinger equation “fully, completely, and intuitively”, I am not sure that’s actually possible. But I am trying hard and so let’s see. 🙂 I’ll tell you after I mastered the math. But something inside of me tells me there’s indeed no Royal Road to it. 🙂 Post scriptum 2 (dated 16 November 2015): I’ve added this post scriptum, more than a year later after writing all of the above, because I now realize how immature it actually is. If you really want to know more about quantum math, then you should read my more recent posts, like the one on the Hamiltonian matrix. It’s not that anything that I write above is wrong—it isn’t. But… Well… It’s just that I feel that I’ve jumped the gun. […] But then that’s probably not a bad thing. 🙂 Newtonian, Lagrangian and Hamiltonian mechanics I. Newtonian mechanics F = –kx (i.e. Hooke’s law) x(t) = A·cos(ωt + α) II. Lagrangian mechanics III. Hamiltonian mechanics Complex Fourier analysis: an introduction […] OK, you’ll say. So what? Fourier transform function The Uncertainty Principle for energy and time In all of my posts on the Uncertainty Principle, I left a few points open or rather vague, and that was usually because I didn’t have a clear understanding of them. As I’ve read some more in the meanwhile, I think I sort of ‘get’ these points somewhat better now. Let me share them with you in this and my next posts. This post will focus on the Uncertainty Principle for time and energy. Indeed, most (if not all) experiments illustrating the Uncertainty Principle (such as the double-slit experiment with electrons for example) focus on the position (x) and momentum (p) variables: Δx·Δp = h. But there is also a similar relationship between time and energy: ΔE·Δt = h These pairs of variables (position and momentum, and energy and time) are so-called conjugate variables. I think I said enough about the Δx·Δp = h equation, but what about the ΔE·Δt = h equation? Indeed, we can sort of imagine what ΔE stands for, but what about Δt? It must also be some uncertainty: about time obviously–but what time are we talking about? I found one particularly appealing explanation in a small booklet that I bought–long time ago– in Berlin: the dtv-Atlas zur Atomphysik. First, note that the uncertainty about the position (Δx) of our ‘wavicle’ (let’s say an electron) is to be related to the length of the (complex-valued) wave-train that represents the ‘particle’ (or ‘wavicle’ if you prefer that term) in space (and in time). In turn, the length of that wave-train is determined by the spread in the frequencies of the component waves that make up that wave-train, as illustrated below. [However, note that the illustration assumes the amplitudes are real-valued only, so there’s no imaginary part. I’ll come back to this point in my next post.] 
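That reciprocal relation between the length of the wave train and the spread in the frequencies of its component waves is easy to check numerically. The little sketch below is my own toy example, with arbitrary units and real-valued amplitudes only (just like the illustration): it superposes cosine waves with a Gaussian spread of frequencies and measures how long the resulting train lasts.

import numpy as np

# Superpose cosine waves whose frequencies are spread around nu0 = 100 (arbitrary
# units) with a Gaussian weight of width delta_nu, and measure the RMS duration of
# the resulting wave train's intensity. The exact proportionality constant depends
# on how the two widths are defined, so ignore it; the reciprocity is the point.
t = np.linspace(-1.0, 1.0, 8001)

def train_duration(delta_nu, nu0=100.0):
    f = np.zeros_like(t)
    for nu in np.arange(nu0 - 5*delta_nu, nu0 + 5*delta_nu, 0.1):
        f += np.exp(-(nu - nu0)**2/(2*delta_nu**2)) * np.cos(2*np.pi*nu*t)
    P = f**2/np.sum(f**2)                       # normalized intensity profile
    return np.sqrt(np.sum(P*t**2) - np.sum(P*t)**2)

for dnu in (2.0, 4.0, 8.0):
    print(f"spread in frequency = {dnu:3.1f}  ->  duration of the wave train ~ {train_duration(dnu):.4f}")

Double the spread in frequency and the duration of the wave train halves: that's the reciprocity we'll be exploiting below.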
Now, we can use the de Broglie relation (λ = h/p) to relate the uncertainty about the position to the spread in the wavelengths (and, hence, the frequencies) of the component waves: p = h/λ and, hence, Δp = Δ(h/λ) = hΔ(1/λ). In case you wonder why I can simply take h out of the brackets, i.e. why I can write Δ(h/λ) = hΔ(1/λ), just remember that the delta symbol here (Δ) refers to a measure like the standard deviation of a variable, so Δx represents σx. Now, there are a number of rules for how such standard deviations behave, and the one we need here is that the standard deviation scales with the scale of the variable: Δ(kx) = |k|Δ(x). Now, Δx equals h/Δp according to the Uncertainty Principle—if we take it as an equality, rather than as an inequality, that is. Therefore, Δx must equal:

Δx = h/Δp = h/[hΔ(1/λ)] = 1/Δ(1/λ)

That's obvious, but so what? We cannot write Δx = Δλ, because there's no rule that says that Δ(1/λ) = 1/Δλ and, therefore, h/Δp ≠ Δλ. Indeed, suppose we define Δλ as an interval or a length defined by the difference between its upper bound and its lower bound. Then we can write Δλ as Δλ = λ2 – λ1 and, hence, we can then write Δp as Δp = Δ(h/λ) = h/λ1 – h/λ2 = h(1/λ1 – 1/λ2) = h[λ2 – λ1]/λ1λ2. Now, that's obviously something very different than h/Δλ = h/(λ2 – λ1). So we should surely not write that Δp = h/Δλ. Never ever. Having said that, the Δx = 1/Δ(1/λ) = λ1λ2/(λ2 – λ1) relationship that emerges here is quite interesting. I encourage you to explore it yourself, as I need to move on here. So… We're kinda stuck. What to do? How do we get that energy-time relationship? The de Broglie relation tells us that E = hν, so we can write that ΔE = Δ(hν) = hΔν. But we need to get ΔE = Δ(hν) = hΔν = h/Δt. How do we get Δν = 1/Δt, which is – obviously – the relationship that we need to get ΔE = h/Δt? To get the answer to that question, we need to ask ourselves another one: what's Δt here? What are we talking about? The answer is remarkably mundane: Δt is the measurement time. What measurement time? Relax. You'll understand in a moment. Let's go through it. We know there's a universal relationship between the propagation speed of a wave (which I'll denote by c for the time being, but don't confuse this variable with the speed of light: it can be any speed) and the wavelength and frequency. More specifically, c = λν and, hence, 1/λ = ν/c. So we can now write Δ(1/λ) as Δ(ν/c) = Δ(ν)/c. We also know that the frequency of the wave is the reciprocal of the so-called period of the wave, i.e. the time that's needed to go through one oscillation: τ = 1/ν and, hence, ν = 1/τ. Hence, we can write Δ(ν) = Δ(1/τ). OK. That's stating the obvious. So what? Where do we go from here? First, note that, for a wave train, there's no precise frequency or period, nor is there any precise number of oscillations. That's the essence of the Uncertainty Principle in its most ubiquitous form (Δx = h/Δp). But so we can try to measure. Now, to measure something, we need some time. More in particular, to measure the frequency of a wave, we'll need to look at that wave and register (i.e. measure) at least a few oscillations, as shown in the illustration below, which I took from the above-mentioned German booklet (hence the German labels). That should not deter you from following the remarkably simple argument, which is the following:

[Illustration: a wave train observed during a finite measurement time (Meßzeit), with the corresponding error (Meßfehler) on the measured frequency]

1. The error in our measurement of the frequency (i.e. the Meßfehler, denoted by Δν) is related to the measurement time (i.e. the Meßzeit, denoted by Δt in the diagram above).
Indeed, if τ represents the actual period of the oscillation – which is the reciprocal of the frequency: τ = 1/ν (both τ and ν are obviously unknown to us: otherwise we wouldn't be trying to measure the frequency) – then we can write Δt as some multiple of τ. More specifically, in the example above we assume that Δt ≈ 4τ = 4/ν. [Note that we use an almost-equal sign (≈) rather than an equality sign (=) because we don't know τ (or ν). That's the whole point about it, indeed.]

2. During that time, we measure four oscillations in our example and, hence, we are tempted to write that ν = 4/Δt. However, because of the measurement error, we should interpret the value for our measurement not as 4 exactly but as 4 plus or minus one: 4 ± 1. Indeed, it's like measuring the length of something: if our yardstick has millimeter marks, then we'll measure someone's length as some number plus or minus 1 mm. Here we are counting the number of oscillations. Hence, the result of our measurement should be written as ν ± Δν = (4 ± 1)/Δt = 4/Δt ± 1/Δt. If you have trouble following the argument, just put in some numbers in order to gain a better understanding. For example, imagine an oscillation of 100 Hz (i.e. 100 oscillations per second), and a measurement time of four hundredths of a second (i.e. Δt = 4×10⁻² s). Suppose, then, we do indeed measure 4 ± 1 oscillations during that time. Then the frequency of this wave must be equal to ν ± Δν = (4 ± 1)/Δt = 4/(4×10⁻² s) ± 1/(4×10⁻² s) = 100 ± 25 Hz. In other words, we here accept that we have a measurement error of Δν/ν = 25/100 = 25%. That's a relatively large error, because the measurement time was relatively short. [Note that 'relatively short' means 'short relative to the period of the oscillation', i.e. the measurement spans only a few periods. Indeed, 4×10⁻² s is obviously not short in any absolute sense: in fact, it is like an eternity when we're talking light waves, which have frequencies measured in terahertz.]

3. The example makes it clear that Δν, i.e. the error in our measurement of the frequency, is related to the measurement time as follows: Δν = 1/Δt. Hence, if we double the measurement time, we halve the error in the measurement of the frequency. The relationship is quite straightforward indeed: let's take the example of that 100 Hz wave once again and assume that our measurement time Δt is equal to Δt = 10τ = 10×10⁻² s = 10⁻¹ s. In that case, we get Δν = 1/10⁻¹ s = 10 Hz. Hence, the measurement error is now Δν/ν = 10/100 = 10%.

4. How long should the measurement time be in order to get a 1% error only? Let's write the error as a percentage first: Δν/ν = x % = x/100. But Δν = 1/Δt. Hence, we have Δν/ν = (1/Δt)/ν = 1/(Δt·ν) = x/100 or Δt = 100/(x·ν). So, for x = 1 (i.e. an error of 1%), we get Δt = 100/(1·100) = 1 second; for x = 5 (i.e. an error of 5%), we get Δt = 100/(5·100) = 0.2 seconds. Finally, for x = 25 (i.e. an error of 25%), we get Δt = 100/(25·100) = 0.04 seconds, or 4×10⁻² s, which is what this example started out with.

You'll say: so what? We're still nowhere… Well… No. We've got a formula with the frequency variable here, so we can now derive the Uncertainty Principle for time and energy from the other de Broglie relation (E = hν), which relates the energy of a 'wavicle' to the de Broglie frequency.
Hence, the uncertainty about the energy must be related to the measurement time as follows:

E = hν ⇒ ΔE = Δ(hν) = hΔν = h(1/Δt) = h/Δt ⇔ ΔE·Δt = h

So, what this expression of the Uncertainty Principle says is the following: if we increase the measurement time, we'll reduce the uncertainty in our knowledge of the energy of our 'wavicle'. Conversely, if we only have a very short measurement time, we'll not be able to say much about its energy. A final note needs to be made on the value of h: it's very tiny. Indeed, a value of (about) 6.6×10⁻³⁴ J·s or, using the smaller eV unit for energy, some 4.1×10⁻¹⁵ eV·s, is unimaginably small, especially because we need to take into account that the energy concept as used in the de Broglie equation includes the rest mass of a particle. Now, anything that has any rest mass has enormous energy according to Einstein's mass-energy equivalence relationship: E = mc². Let's consider, for example, a hydrogen atom. Its atomic mass can be expressed in eV/c², using the same E = mc² relationship but written as m = E/c², although you will usually find it expressed in so-called unified atomic mass units (u). The mass of our hydrogen atom is approximately 1 u ≈ 931.5×10⁶ eV/c². That means its energy is about 931.5×10⁶ eV. In plain language, that's 931.5 million eV. Hence, if we'd be happy with an uncertainty of plus or minus one million eV, then it's obvious that even very small values for Δt (i.e. very short measurements) will give us what we want. However, it is likely that we'll want to reduce the measurement error to much less than plus or minus one million eV, so that means that our measurement time Δt will have to go up. Having said that, the point is still quite clear: we don't need much time to measure the mass (or the energy) of this hydrogen atom very accurately. The corollary of this is that the de Broglie frequency f = E/h of such a particle is very high. To be precise, the frequency will be in the order of (931.5×10⁶ eV)/(4.1×10⁻¹⁵ eV·s) ≈ 0.2×10²⁴ Hz. In practice, this means that the wavelength is so tiny that there's no detector which will actually measure the 'oscillation': any physical detector will straddle most – in fact, I should say: all – of the wiggles of the probability curve. All these facts basically state the same: a hydrogen atom occupies a very precisely determined position in time and space. Hence, we will see it as a 'hard' particle, not as a 'wavicle'. That's why the interference experiment mentions electrons, rather than hydrogen atoms or other 'big stuff', even if I should immediately add that interference patterns have been observed using much larger particles as well. However, I wrote about that before, so I won't repeat myself here. The point was to make that energy-time relationship somewhat more explicit, and I hope I've been successful at that at least. You can play with some more numbers yourself now. 🙂

Post scriptum: The Breit-Wigner distribution

The Uncertainty Principle applied to time and energy has an interesting application: it's used to assign a lifetime to very short-lived particles. In essence, the 'spread' around their mean energy (ΔE) is used to calculate their lifetime through the ΔE·Δt = ħ/2 equation. I won't say much about this, because Georgia State University's HyperPhysics website gives an excellent quick explanation of it, and so I just copied that below.

[HyperPhysics panel on the Breit-Wigner distribution]
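To close this off, two quick back-of-the-envelope numbers to go with that ΔE·Δt story–just my own arithmetic, using the standard values of h and ħ in eV·s, and treating the particle width quoted below as approximate:

# Two quick numbers to go with the Delta-E * Delta-t story. The constants are the
# usual textbook values; the Z-boson width is the measured value quoted from memory,
# so treat it as approximate.
h    = 4.136e-15      # Planck's constant in eV*s
hbar = 6.582e-16      # reduced Planck's constant in eV*s

# 1. The hydrogen-atom example from the text: how short a measurement still pins the
#    energy down to +/- one million eV, if we take Delta-E * Delta-t = h?
dE = 1.0e6            # eV
print("measurement time needed:", h/dE, "s")     # ~4e-21 s: 'not much time' indeed

# 2. The Breit-Wigner use of the same idea: a short-lived particle with an energy
#    width Gamma has a lifetime of roughly hbar/Gamma (give or take the factor of 2
#    mentioned above). For a width of ~2.5e9 eV, roughly that of the Z boson:
Gamma = 2.5e9         # eV
print("lifetime ~", hbar/Gamma, "s")             # ~2.6e-25 s

Which is, indeed, how the lifetimes of very short-lived particles are quoted: nobody times them with a stopwatch; the lifetime is read off from the spread in energy.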
Archive for the 'Nuclear Physics' Category Aug 14 2011 Topology of the Vacuum No responses yet Jun 09 2011 Internal Conversion No responses yet Feb 22 2011 LHC Motto In an earlier entry it was lamented how some physicists seem to make a transition from special to general relativity as though the two are somehow linked.  I don’t know if the Wikipedia article on Albert Einstein was written by a physicist, however it goes one step further and gets relativity completely mixed up.  It calls Einstein “a German born theoretical physicist who discovered the theory of general relativity effecting a revolution in physics.” [1] For young people studying math and science, please note that it was special relativity that advanced physics by a giant leap, not general relativity.  In a recent article on CERN’s startup of the Large Hadron Collider after a 10-week shutdown, Robert Evans of Reuters, and the Toronto Sun, got it right when it was said: “New Physics, the motto of the LHC, refers to knowledge that will take research beyond the “Standard Model” of how the universe works that emerged from the work of Albert Einstein and his 1905 Theory of Special Relativity.” [2] No responses yet Dec 20 2010 Research Integrity Published by under Nuclear Physics Here is one instance where responsibility recently prevailed: No responses yet Oct 07 2010 The Standard Model Published by under Nuclear Physics Even if gravitons are the fundamental quantum unit, it is beyond obvious that many other electromagnetic waves and particles have their own functions in the physical world. Electrons, along with protons, alpha particles, and a few other nuclei are stable, whether part of atoms or free.  They are synchronized with a gravitational field providing barrier pressure and conjugate wave functions.  Several other particles in the standard model typically have lifetimes shorter than a micro second, found often through scattering experiments.  Since we cannot call a time so short an ‘existence’, one occasionally finds statements by physicists that these particles have never been found in the free state. With gravitons coming in at all polar angles to a subatomic particle or to a nucleus, combinations of mass, angular momentum, parity, isospin, and charge, to maintain a nucleus or to produce intermediate particles, are needed in order to produce the four fundamental forces of nature.  In other words, understanding in terms of internal and free gravitons, with their wave functions and conjugates, is simply not enough.  The uncertainty principle prevents us from obtaining a clear picture of the inside of an atomic nucleus or an electron at any instant in time. Many of the gamma rays emitted through nuclear fission or a scattering experiment are in the tens and hundreds of keV in terms of energy.  For nuclear fission, it is nuclear multipole vibrations and moments, and internal wave functions that provide the spring action to eject particles from a nucleus.  In scattering we can add incident particle energies. As far as internal gravitons, no matter how many are involved with an ejection, recoil energy expenditure and other effects can cause the 312.76 MeV gamma rays to reduce in energy down into the keV range when emitted, – which are of course then no longer gravitons.  
Internal wave functions with transverse momentum to the ejection of a particle from a nucleus, which would be most of them, may acquire angular momentum or circular or elliptic polarity in the process, in both the remaining nucleus and in the resultant scattered particles, whether it be alpha or beta decay, or the short lived quark, gluon, pion, ω meson, ρ meson, kaon, W± boson, Z boson, or the hyperon and other strange baryons, to name a few. As far as free gravitons captured as they enter a nucleus, some may be used to manufacture W± and Z0 bosons in order to maintain the weak nuclear force. We cannot go forward with physics by throwing out the standard model.  It must stay, and only be revised by agreement of the physics community as a whole. No responses yet Mar 29 2010 According to what one reads, the CERN LHC is set to start colliding 3.5 TeV proton beams tomorrow, if they can get the beams, running for days now, lined up by then.  The LHC will obtain a variety of downscattered energies, however photons of the same energy and phase can add together in the same measurement.  In terms of discrete photons, it should be a continuous spectrum with a peak near the highest energy that the instrument can effectively measure. With particle masses on the other hand, nobody knows for sure.  It could be an extension upon the “zoo of elementary particles that the experimentalists were discovering in their particle accelerators” * prior to the development of the Standard Model of particle physics. In some ways it would appear easier to work with one elementary particle instead of sixteen or more. * Smolin, Lee, The Trouble with Physics, Houghton Mifflin Company, c. 2006, p. 54 (The Standard Model of particle physics was not the main focus of Smolin’s book.) No responses yet Apr 10 2008 The Nucleus and Gravitons There is another possibility, as opposed to total deflection, for what happens when gravitons encounter a nucleus.  Since the makings of an electron exist inside a neutron, and neutrons and protons have their own spin functions going on inside, it is not outside the realm of possibilities that atomic nuclei, each when part of its own functional atom, absorb gravitons. What a nucleus in a gravitational field, the earth’s let’s say, would do with all this absorbed energy is not too hard to imagine.  The Coulomb field of an atom is full of electromagnetic wave activity involved with keeping electrons in orbit, and some of that energy may escape.  Therefore we would have gravitational energy replenishing Coulomb energy through both the electrons and the nucleus of an atom. The synchronization involved as a graviton enters a nucleus would have to be just as smooth as when one enters an electron in a quantum atomic orbital in order for the nucleus to not have been pushed by the absorbed gravitons.  Also, inflow of energy must balance outflow in order for there to be a steady-state, steady-flow process.  In this case it is only those gravitons which pass near the edge of a nucleus that would have to be deflected by its local magnetic field. If you have read it and remember, my April 2007 paper already has the Coulomb force as the final mediator of the gravitational force. No responses yet Oct 30 2007 The Strong Nuclear Force I think of the strong nuclear force as being due to the vortices* from an electron setting up standing waves within the protons and neutrons within the nucleus of an atom.  For anyone willing to make a try at the math, the Schrödinger equation may be a good place to start.  
These standing waves, it is presumed, set up quite nicely within an alpha particle since an alpha particle is very stable. As a possible consequence, it may be that all nucleons within an intact nucleus have roughly equal positive charges.  In the case where a neutron is ejected from a nucleus, it would gather all the vortices it needs as it takes off and becomes of neutral charge. *  Kadin, A. M., “Circular Polarization and Quantum Spin: A Unified Real-Space Picture of Photons and Electrons”, ArXiv Quantum Physics preprint, 2005: No responses yet
Another Higgs Update from CERN At the CERN laboratory today, there’s an ongoing report to the CERN council that oversees the lab, and this includes talks from the Large Hadron Collider [LHC] accelerator operators, and from the experimentalists who built and operate the detectors (ATLAS, CMS, LHCb, ALICE, TOTEM) that are designed to detect and interpret the debris from the LHC’s proton-proton collisions.  Among the results being presented today are some measurements of the properties of the Higgs-like particle whose discovery was announced in July, including ones that were notably missing from the HCP conference presentations in Kyoto last month. Here are some highlights, to be fleshed out in more detail later, if warranted. LHC accelerator operations report: An excellent year.  In the best week the LHC produced 1.35 inverse femtobarn (fb) of data for both ATLAS and CMS; as of December 5, LHC produced 23.2 inverse fb for the year per experiment (note each experiment will have somewhat less recorded, due to normal losses), slightly above the target for the year. Biggest problems: beam instability (much bigger problem in 2012 than 2011); stray high-energy particles affecting electronics in the tunnel; dust falling out of the beampipe into the beam, potentially a significant problem for 2015. After the 2013-2014 shutdown, what will be the likely running conditions in 2015?  The current intention is to go to 25 nanoseconds between collisions (the design) rather than the 50 nanoseconds used in 2012, and to start at 13 TeV per collision. ATLAS: New two-photon and four-lepton measurements; new spin and parity measurements. Two photons: new categories of events added, with either 1 lepton or of two jets at low invariant mass, characteristic of production of a Higgs with a Z or W. This channel now shows 6.1 standard deviation significance (3.3 expected) by itself: discovery of a Higgs-like particle in a single decay mode.  Mass 126.6±0.3±0.7 GeV/c² Signal remains high: 1.8 ± 0.3 + 0.29 – 0.21 times the Standard Model expectation.  (But note the expectation depends somewhat on the assumed mass of the Higgs; not sure yet which mass was taken here.  126.6 GeV was assumed.) Four leptons: 4.1 standard deviations, signal strength 1.3 ± 0.4 times Standard Model expectation. Mass 123.5±0.9+0.4-0.2 GeV/c² — notably lower than two photons. The two mass measurements are 3 GeV apart (surprising but not impossible given the amount of data; or perhaps there is a technical problem somewhere, though I’m sure they looked very, very hard for one).  They are compatible only at 2.7 standard deviations.  Combined mass: 125.2 +- 0.3 +- 0.6 GeV/c² Spin (from photons): spin 2 disfavored at the 91% level; compatible with spin 0. Spin (from leptons): spin 2 disfavored only at the 85% level; compatible with spin 0. Parity (from leptons): Exclusion of odd-parity spin-0 particle at 99%. First limit on Higgs decay to Z + photon: although still ~20 times the Standard Model expectation, this limit is good enough to rule out various non-standard interpretations of the Higgs-like particle. No update of Higgs decay to two photons.  This is too bad.  It’s somewhat exciting that ATLAS’s result on Higgs decay to two photons remains somewhat high compared to expectations.  But the excess is still not yet 3 standard deviations.  Deviations of this size do come and go.  And we don’t have confirmation from CMS.  So the situation remains tantalizing but unfortunately not yet very convincing.  
We may not learn anything more from CMS or ATLAS til March, when they have analyzed the full 2012 data set. I did not catch anything new from LHCb or ALICE; generally I haven’t had time to cover ALICE’s research program here.  TOTEM, a special purpose detector for measuring things I haven’t discussed on this website, is still getting rolling. 60 responses to “Another Higgs Update from CERN 1. Hi does it compare with fermilab history? as we need a yardstick – to know if is an £/knowledge improvement – thanks 2. Does the 1.8 x signal strength for a 126.6 GeV Higgs in the diphoton channel work out to a roughly correct signal strength for a 125 GeV one? I thought that went the other way; lighter Higgs means weaker diphoton signal, right? • No, you’ve got it backwards, I’m afraid. Around 126 GeV, lighter Higgs means greater production rate and almost the same diphoton decay probability; the probability is near its maximum here. So the product of the production rate and the probability to decay to two photons — which gives you the total size of the two photon signal — grows as you decrease the Higgs mass from 126 to 123. It doesn’t grow very fast though. 3. They are so coward at CMS, to not update the data about the decay to two photons 😦 I want to know this … ! 4. Is there any chance the 3 Gev difference is due to a DM particle ? • How would a dark matter particle cause some other particle to decay with one mass when it decays in one way, and to decay with a different mass when it decays another way? I don’t see any logical connection. Remember, a particle is a ripple in a field; its mass is a resonant frequency, and its decays are dissipation. Adding another particle is not going to cause the field to ripple with two resonant frequencies, one for one type of dissipation and one for another. How would you make a bell that has one tone when it loses energy to friction, and another tone when it loses energy to the air around it? A bell simply has a tone, period, even though its vibrations dissipate in many ways. • Sorry my question was just dumb. I was thinking perhaps during the Higgs decay to two photons, if a light undetected DM particle was also being created then it might appear to the experiment that the Higgs was a little heavier than it really is. But with a little more thought on my part, I realize it was a silly question. • Oh, *that’s* what you had in mind. It would go the other way, incidentally: if the Higgs decayed to four leptons + an additional particle, then the four leptons would have a lower mass than the two photons. But the real problem with the idea is that the invariant mass of the four leptons would not, in that case, form a peak. Instead it would form a broader distribution. The only way around that would be to have a second Higgs, call it H2, with a mass of 123 GeV, so that you have H1 –> H2 + X, where X is invisible, and then H2 decays to four leptons, giving a peak. However, with two Higgses, one that decays to photons and the other to four leptons (via Z particles), at that point you’d have mucked up the theory so thoroughly that nothing should look anything like the Standard Model at all. 5. Pingback: New Higgs Results Tomorrow? | Not Even Wrong 6. the Siliconopolitan They should have gone with the Large Hamster Collider instead. That would have been self-cleaning. • They did some trials but the experiments complained about the messy collisions. Also, PETA was a PITA. 7. Pingback: [Reblog]: Another Higgs Update from CERN « Io Non Faccio Niente 8. 
Discovery of radium paved the way for Gamma ray detection. It is photons of high frequency and resonance. When gamma energies exceeding 5MeV, intereacting with the electric field of nucleus, the energy of incident photon is converted into the “mass” of an electron-positron pair. A particle is a ripple in a field; its “mass” is a resonant frequency, and its decays are dissipation. If a photon or photon(s) atmosphere travel in opposite directions, roaring around or “returning around” in the closed system – at the point of “return” it LOSSES ITS KINETIC ENERGY, DUE TO BREMSSTRAHLUNG MECHANISM – due to conservation of energy it continues to travel at “c”- but the change in energy would have already emmited as radiation?. Where this black body radiation or dark matter particle go?. There is no difference in resonant frequency, but an unaccounted extra energy is involved from “zero rhythm” ?- I mean zero rhythm, will not change anything, but it will make Physical information enter into Physical paradox – because of arbitrary quantum phase(or resonance) transitions?? 9. can the falling out of dust in the beam pipe have something to do with the use of the room in the tunnel close to the beam-pipe to shoot some low-budget horror movies? 10. http:// www. perimeterinstitute. ca/videos/new-electroweak-states-plain-sight 11. Pingback: Two Higgs Bosons? CERN Scientists Revisit Large Hadron Collider Particle Data | Screw Cable 12. Pingback: Two Higgs Bosons? CERN Scientists Revisit Large Hadron Collider Particle Data « Go to News! 13. Pingback: JTGDeals.Com News » Two Higgs Bosons? CERN Scientists Revisit Large Hadron Collider Particle Data 14. Pingback: Two Higgs Bosons? CERN Scientists Revisit Large Hadron Collider Particle Data | Breaking News - Ali Office 15. Pingback: WHOA: Have Scientists Found Two Higgs Bosons? - FourTech Plus 16. Pingback: Two Higgs Bosons? CERN Scientists Revisit Large Hadron Collider Particle Data | ABC Updater 17. Pingback: Two Higgs Bosons? CERN Scientists Revisit Large Hadron Collider Particle Data :: Newspri 18. Pingback: ATLAS Results: One Higgs Or Two? | BirchIndigo 19. Pingback: Two Higgs Bosons? CERN Scientists Revisit Large Hadron Collider Particle Data :: iShoutLoud 20. Pingback: Two Higgs Bosons? CERN Scientists Revisit Large Hadron Collider Particle Data | download free music to mp3download free music to mp3 21. What science behind this?: Elementary school schooting on Friday. Within minutes, 26 people were dead at Sandy Hook Elementary School — 20 of them children. • physics junkie No science involved in this at all, but my first response is why you brought this up in this forum and your question about science? It is a tragedy that has nothing to do with science. So what is your real question, or is it a sarcastic remark about science? What does a harmless forum on particle physics have to do with a tragedy of such porportions? We should be praying, and helping out, not boardening the spectre of it for no good reason. 22. Pingback: Where does mass come from? « The Gauge Connection 23. First Iam sorry and my tears for those beautiful and innocent children. Why we go to particle physics – mother of all basis? First our thanks to Professor Strassler – try to teach correct science. There is some answer…some not…may be never…. Particles are reverberating(into energy) and dissipate(into nothing). 
This dance have a pattern, rhythm, follow some rule – we call it conservation law(Nature Believed (for deep reasons) To Be Exact), the three quarks rule(Rules of Nature Believed (for less deep reasons) To Be Very Nearly Exact). Some follow Bose–Einstein condensate, again dissipation become reverberation under absolute zero. We consider under Higgs world, all the particles are massless and Higgs vev gives mass – also answerable to this innocent children’s life. I mean the pattern must protect some ethic also- the question asked by Buddha also. why photons are stable why electrons are stable why protons are stable or very long-lived why at least one type of neutrino is stable or very long-lived Why neutrons are stable only in atomic nuclei? The dissipation of Higgs vev into nothing occur during proton-proto collision?.A particle is stable because it cannot decay further. The influence of gravity is very feeble. The effect of dynamite explosion is same under earth’s gravity and in outer space. There will be no effect of expansion of space even at the level of weakforce interaction distance of 3×10−17 during nuclear explosion(at zero gravity). The neutron decay cannot overcome the nuclear binding energy- but the proton decay can overcome both gravity and nuclear binding energy – so its decay is prominent than neutron. Why we call Higgs vev(resonance) as particle, if it dissipate at proton-proton collision. It is having intrinsic conservation, if it is not, it could not be transcendental. It is not following the rule of closed system of matter – formed during Big bang. It follows its own statistics at will, we call it as physical information. It does make physical paradox – having a long statistical pattern, we identify as different particles of different chiralities. This pattern is phenomenological, and vary according to the democracy???. 24. physics anarchist When a wave front(or ripple) is reverberated upto rest mass, it is a stable particle(may be containing zillions of quark-antiquark pairs, dancing, bouncing and flailing to a DJ – like proton). They carry energy proportional to its frequency and its polarization as spin. If two of this wave front(protons) collide at high energy, like if you put your finger right on the edge of the water ripple, it dissipate suddenly. There is no analogy, at the point of collision, the water itself dissapear- means, the total energy(frequency) of so called decayed particles not equal to the energy of two protons. Something(ripple or vev) disappear as water(Higgs field) or nothing? Why there is two ripples(Higgh particles), did the explotion of proto-proton collision overcome the binding energy of Higgs(h) and in short while, it try to restore its energy conservation? • I am not certain I understand your questions. When the two protons collide, and a Higgs particle is formed, the two protons do not disappear. A small portion of their collision energy is used to make the Higgs particle, but the remainder is used to make a large number of hadrons, which carry motion-energy as well as mass-energy. If you look at the picture which is at the upper left of my webpages, you will see this: there are two photons from the decaying Higgs, but there are also many particles (purple tracks) which are post-collision debris from the smashed protons. So the collision process is proton + proton –> Higgs + many hadrons (pions, protons, anti-protons, etc.) followed somewhat later, and completely independently, by the decay Higgs –> two photons. Have I answered your question? 
25. physics anarchist Yes Professor, thank you. / In quantum computation, entangled quantum states are used to perform computations in parallel, which may allow certain calculations to be performed much more quickly than they ever could be with classical computers. So instantaneous wave function collapse does occur(keep locality)? If there is no continuous wave function field and phase transition, information can be transmitted faster than the speed of light – violating causality? . But the Schrödinger equation could not explain why this correlation occurs instantaneously even when the separation distance is large. Which shows, there’s no slower-than-light influence that can pass between the entangled particles./ ‘Hidden’ variables are responsible for random measurement results in LHC? Please compare EPR paradox with the appeared two Higgs particles with 3Gev difference. • I can compare them very easily; there’s no relation whatsoever. And by the way, the Schrodinger equation is not the equation used in quantum field theory; it is a non-relativistic equation, and relativistic equations have to be used if you are going to state the EPR paradox properly. • I cannot understand difference between general relativity and Schrödinger equation. Schrödinger’s equation developed from classical mechanics(Newton’s second law), General relativity developed from Newton’s law of gravitation. Quantum field theory also developed from classical mechanics- in which Schrödinger’s equation is the analogue of Newton’s law for a quantum system. Without gravity(or quantum gravity) general relativity cannot be connected with QFT. The only problem is causality – which is absent both in Schrödinger’s cat and general relativity’s “c^2” ? Both of them only explain local realism(with imaginary constants). Experimental results always contradict with both of them ? – regardless it is Schrödinger’s equation or relativistic equations ? • Your remark is very puzzling. You say : “Quantum field theory also developed from classical mechanics- in which Schrödinger’s equation is the analogue of Newton’s law for a quantum system.” That’s not true; it developed from Dirac’s (special-)relativistic quantum mechanics, and from Maxwell’s theory of electrodynamics, which was made (special-)relativistic by Einstein. [General relativity indeed has no role in this step.] Causality is encoded into the theory at the very outset. • Thank you Professor, special relativity also developed from classical mechanics and electro magnetism – connected to quantum mechanics thru Dirac equation. It is relativistic but Schrödinger’s equation is not? The causality was encoded at Dirac’s resolute faith in the logic of mathematics as a means to physical reasoning – which had xenophobia with gravity by its own logic? Physical reasoning cannot avoid gravity, so it was included in general relativity? Experiments were made in local realism, so it can avoid gravity in mathematical models? If we believe bigbang, the temperature made to lock quarks inside hadrons(physical informations) can have quantum entanglement(boson level) with our piece of universe, if the informations travel more than “c” ? The fermion level(local realism) created by gravitons can interact with those physical informations thru Yukawa interaction. If the energy difference is considerable, then the experimental results in local realism will have different causality ? So the constancy of Heavens is changed ? Flying carpets and Moses split the Red Sea is not causal now. 
If you rewind to the 17th century, aeroplanes were not causal. Maybe anti-gravity vehicles will become causal in the near future? There is anarchism in physical reasoning, but the theories are extremely good?

• Try asking one sensible question at a time. Then maybe I can answer you. Right now, your logic does not make sense; you are connecting many things that are simply unrelated.

33. I had not heard of the TOTEM experiment before; I looked it up and it seems that in its simplicity (regarding the complexity and size of its detectors) it is measuring a fascinating and apparently unexplained phenomenon, namely the fact that the p-p cross section increases with energy. Apparently this is not predicted by the SM. Have I misunderstood something, or could this be a hint of, if not "new physics", then at least a better understanding of the structure of the proton? Would this be something the ILC, if realized, could investigate?

• Yes, I'm afraid you're mistaken – the increase in the proton-proton cross section that is observed in the data (and has been observed for decades) *is* predicted to occur within the Standard Model. The prediction is subtle and involves a phenomenon called "pomeron physics", too complex to explain in a comment. But there's nothing about it which poses a challenge for the Standard Model… and it reveals more about how protons interact with each other than it does about the structure of the proton. (Meanwhile the ILC is a bad place to study it – no protons!)

34. I see, thanks for clearing it up. It would still be interesting to know a little bit about what it does… (On the ILC suggestion: I was not intending that it be used to study the p-p cross section per se, rather to examine the internal structure of the proton with higher-energy electrons than ever before, assuming something would be lurking there.)

• The ILC will be uniquely designed for electron-positron collisions; it will not be useful for other purposes. Electron-proton colliders can study proton structure, and in a relatively clean fashion, but the ILC won't be able to do this. And the last electron-proton collider, HERA at the DESY lab, couldn't go anywhere near as high in energy as the LHC – nor will any machine in coming decades. So for now the LHC is the place to be, for this particular investigation.

35. physics anarchist If general relativity is applied at quantum-gravity distances, is there a change (or fluctuation) in the spacetime metric at the creation of mass by the Higgs vev, which is a pseudoscalar (in pion exchange) during the Yukawa interaction, whereas the graviton is a scalar? If there is no negative pressure of dark energy, improper rotation will not occur, and its combination with spin 2 will not be spin 0? So is there no conservation of energy (for a while) at quantum-gravity distances, due to a change in the coupling constant of the pions?

• Sorry, this makes no sense. I can't answer nonsensical questions. Try to ask one sensible question, and I'll try to answer it.
• Thank you Professor for your tolerance and open-mindedness; my burst of intuitions could not find words and failed to ask sensible questions. /It is a veritable cliché, borne of quantum theory's founding fathers, that to utter an understanding of atomic reality based on quantum theory is to reveal one's ignorance. Under such circumstances the most fruitful approach, it seems to me, is to be very receptive to new ideas. Possibly the ultimate explanation will be of a less mathematical character and a more artistic or intuitive character. Maybe all the baggage carried along with the word "particle" is an obstacle to a deeper understanding. Maybe the problem has to do with our failure to fully understand gravity./ – Richard Benish | December 10, 2012.
Molecular orbital imprint in laser-driven electron recollision Science Advances 04 May 2018: Vol. 4, no. 5, eaap8148 DOI: 10.1126/sciadv.aap8148 Electrons released by strong-field ionization from atoms and molecules or in solids can be accelerated in the oscillating laser field and driven back to their ion core. The ensuing interaction, phase-locked to the optical cycle, initiates the central processes underlying attosecond science. A common assumption assigns a single, well-defined return direction to the recolliding electron. We study laser-induced electron rescattering associated with two different ionization continua in the same, spatially aligned, polyatomic molecule. We show by experiment and theory that the electron return probability is molecular frame–dependent and carries structural information on the ionized orbital. The returning wave packet structure has to be accounted for in analyzing strong-field spectroscopy experiments that critically depend on the interaction of the laser-driven continuum electron, such as laser-induced electron diffraction. The essence of attosecond strong-field spectroscopies and attosecond pulse generation is captured by the well-known and widely used three-step model (1, 2). The three steps consist of laser-driven tunnel ionization, propagation of the electron in the continuum, and interaction with the ion core upon recollision, all of which occur consecutively within a fraction of a laser cycle. While the first two steps are common to all recollision processes, the interaction step can take the form of radiative recombination in high harmonic generation (HHG) or of inelastic or elastic scattering, as is the case in nonsequential double ionization (3–6) and laser-induced electron diffraction (LIED) (7–15). While many experimental observations to date were attributed to the molecular-frame angle dependence of the strong-field ionization (SFI) and the electron-ion interaction steps, one of the central assumptions underlying attosecond strong-field spectroscopies is that the propagation step is largely system-independent. The propagation is usually assumed to "wash out" the phase structure of the initial state, so that the returning wave packet has a well-defined direction of return [for an exception, see the studies of Lein (16) and Meckel et al. (17)]. This assumption is implicitly enforced by the most commonly used quantum-mechanical implementation of the three-step model – the stationary-phase realization of the strong-field approximation (SFA) (18, 19). It is also often assumed that recollision occurs for the same fraction of ionization events, regardless of the molecular orientation [see, however, the study of Morishita and Tolstikhin (20)]. Here, we examine this central proposition both experimentally and theoretically. We use laser-driven electron rescattering (9) associated with two different SFI continuum channels (21, 22) in the same, spatially laser-aligned 1,3-butadiene molecules. The two channels correspond to different states of the cation formed by SFI and are separated by means of a coincidence measurement. We determine the channel-resolved yields of electrons in the rescattering tail of the photoelectron kinetic energy spectrum in the partially reconstructed molecular frame. We extract their dependence on the alignment of the laser field relative to the molecule, exploring the polar angle θ, while averaging over the molecular azimuthal angle φ.
Both angles are defined in the molecular coordinate system as illustrated in Fig. 1. Access to molecular-frame information is experimentally enabled by deconvolution of laboratory-frame observables that are measured for aligned molecules with the independently determined distribution of molecular alignment. The molecular-frame rescattering yield is given by the product of the molecular-frame ionization probability S(θ) (23, 24) and the rescattering probability RQ(θ), which is almost generally assumed to factorize into the return probability R(θ) and the high-angle scattering probability Q(θ), following the three-step model. Moreover, in the analysis of strong-field spectroscopy experiments, a common approach is to regard R(θ) as a constant that does not, hence, depend on θ. Q(θ) then contains the structural information on the molecule. It captures the molecular-frame dependence of the rescattering differential cross section (DCS), which is extracted in LIED experiments (11, 13, 15, 25). Our approach contains a number of features that are ideally suited for exploration of the returning continuum electron wave packet: (i) The molecular-frame SFI probability S(θ) is separately determined for the two channels by measuring the yield of direct electrons in the same experiment (24). (ii) The continuum electron wave packets originate from states associated with a different, characteristic nodal structure. (iii) In the interaction step, the channel-dependent returning electron wave packet scatters off the total electron density of the respective cation. Because the electronic structure of the cation differs by the contribution of only one valence electron, Q(θ) is expected to be nearly the same for the two channels. We explicitly confirm this expectation by a numerical calculation (see below). (iv) The alignment distribution of the sample is the same for the two channels, as these occur in the same molecule. Fig. 1 Channel-specific continuum wave packets in 1,3-butadiene. (A) The upper part shows the D0 Dyson orbital. The lower part displays a snapshot of the simulated electron density isosurface at a fixed isovalue (expressed in terms of the Bohr radius a0) for the D0 channel 0.7 fs after the peak electric field, obtained for perpendicular laser polarization (θ = 90°, φ = 90°). The polar angle θ is defined with respect to the principal axis with the lowest moment of inertia, and φ is the azimuthal angle (see the coordinate system on top). The color visualizes the phase of the continuum wave function. (B) The same as (A) but for the D1 channel. The SFI of 1,3-butadiene molecules under the conditions used in this study is dominated by two channels accessing the electronic ground (D0, 90% yield) and the first electronically excited (D1, 10% yield) state of the 1,3-butadiene cation, as previously established for similar experimental parameters (22, 24). These channels are tagged by formation of the parent ion C4H6+ (D0) and two fragment ions (D1), respectively, with the latter resulting from unimolecular dissociation of the excited cation. The D0 and D1 continua correspond to ionization from the HOMO (highest occupied molecular orbital) and HOMO-1. Different SFI continua are understood to be associated with Dyson orbitals φD = 〈ψI|ψN〉, where ψN and ψI are the N- and (N − 1)-electron wave functions of the neutral and the ion, respectively. The Dyson orbitals are hence one-electron wave functions, which are depleted in the SFI step.
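As an illustrative aside (not part of the original analysis), the classical "propagation" step of the three-step model described above can be sketched numerically with the simple-man model: an electron released at rest into an oscillating field is driven back to the ion, and the return energy of the fastest trajectory approaches the familiar ~3.17 Up. The 800-nm, 5 × 10¹³ W/cm² field below is an assumed example, not a parameter of the experiment.

```python
import numpy as np

# Simple-man (classical) sketch of the propagation step of the three-step model.
# Atomic units; the laser parameters below are assumed for illustration only.
wavelength_nm = 800.0
intensity_Wcm2 = 5.0e13

omega = 45.56 / wavelength_nm              # laser angular frequency (a.u.)
E0 = np.sqrt(intensity_Wcm2 / 3.51e16)     # peak electric field (a.u.)
Up = E0**2 / (4.0 * omega**2)              # ponderomotive energy (a.u.)

def first_return_energy(phase0):
    """Kinetic energy (in units of Up) at the first return to the ion (x = 0)
    for an electron released at rest at field phase phase0 of E = E0*cos(wt)."""
    t0 = phase0 / omega
    t = t0 + np.linspace(1e-3, 2.0, 40000) * (2 * np.pi / omega)
    v = -(E0 / omega) * (np.sin(omega * t) - np.sin(omega * t0))
    x = (E0 / omega**2) * (np.cos(omega * t) - np.cos(omega * t0)) \
        + (E0 / omega) * np.sin(omega * t0) * (t - t0)
    sign_change = np.where(x[:-1] * x[1:] < 0)[0]
    if len(sign_change) == 0:
        return None                        # trajectory that never revisits x = 0
    return 0.5 * v[sign_change[0]]**2 / Up

birth_phases = np.linspace(0.02, np.pi / 2, 200)   # release after the field peak
energies = [e for e in (first_return_energy(p) for p in birth_phases) if e is not None]
print(f"Up = {27.211 * Up:.2f} eV; maximum return energy = {max(energies):.2f} Up")
# Expected: the well-known ~3.17 Up cutoff of the returning electron.
```

This classical picture only fixes the timing and energy of the return; the molecular-frame return probability discussed in this work is a property of the quantum continuum wave packet and is not captured by such a trajectory sketch.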
For the D0 and D1 SFI of 1,3-butadiene, the associated Dyson orbitals are depicted in Fig. 1 (A and B). While highly relevant for the structure of the continuum wave packet, the channel-dependent Dyson orbitals contribute only one hole to the overall otherwise-identical electron density distribution of the two states of the 1,3-butadiene molecular ion, which contains a total of 29 electrons. Figure 1 (A and B) also displays the continuum wave packets accompanying D0 and D1 SFI. These wave packets are strongly channel-dependent. We will show in the following how the initial nodal structure, which stems from the associated Dyson orbitals, is preserved during propagation. For the presented experiment, we extended the channel-resolved above-threshold ionization technique (22) to laser-induced rescattering. We used a reaction microscope (26), adjusted to a high photoelectron kinetic energy detection cutoff, to measure the three-dimensional momenta of electrons originating from the SFI of 1,3-butadiene molecules in coincidence with the ion mass. To drive SFI and recollision, linearly polarized laser pulses (λ = 1290 nm wavelength, 40 fs pulse duration) were used. From the recorded tuples of electron momentum and ion mass, photoelectron kinetic energy spectra coincident with the formation of the parent ion (D0) and of the two fragment ions (D1) were derived, such as shown in Fig. 2A for a measurement with randomly aligned molecules. The depicted spectra are normalized to the same area. Although the measured yield decreases exponentially with increasing kinetic energy, all distributions contain two regions with different, well-distinguishable slopes for ranges of approximately 0 to 10 eV and above 20 eV. This is the expected behavior separating direct electrons from the rescattering tail. The position of the inflection point located at 16 eV is interpreted as the 2 Up cutoff for direct electrons and allows us to derive the used intensity, which was I = (5.2 ± 1.3) × 10¹³ W/cm² for the angle-dependent data (see below). The determined intensity is in good quantitative agreement with the intensity calculated from the laser parameters (see Materials and Methods and the Supplementary Materials). The coincidence spectra in Fig. 2A differ significantly in the relative contribution of the rescattering tail. The rescattering yield is much larger for the electrons coincident with the ions tagging the D1 as opposed to the ions tagging the D0 SFI channel. Reassuringly, the measured distributions are the same for the two signature fragments marking D1, whose yield was combined in the analysis that follows to improve statistics. Fig. 2 Experimental data. (A) Electron kinetic energy distributions acquired in coincidence with different ion species, allowing one to distinguish between the D0 and D1 ionization channels. The distributions are normalized to the same total yield for better comparison. The green and orange areas indicate integration limits for the determination of the yields of direct (MD) and rescattered (MR) electrons, respectively, in the angle-dependent data in (C). (B) Delay scan of the nonadiabatic alignment, showing the 1,3-butadiene parent ion yield around the rotational half-revival. The error bars are given by counting statistics. Shown as a red solid line and shaded area are the best fits and confidence intervals from the extraction procedure of the alignment distribution, which uses a symmetric-top approximation.
The experimental data shown in (C) were acquired at the delay Δt corresponding to peak alignment. (C) Channel-resolved yields of direct (MD) and rescattered (MR) electrons for the D0 and D1 channels as a function of the angle α′ between the polarizations of the alignment and SFI laser fields, using the integration limits shown in (A). Solid lines denote best fits from the deconvolution of the molecular-frame ionization and rescattering probabilities using the known alignment distribution. Error bars, given by counting statistics, are smaller than the plotted symbols for all but one of the data sets. To obtain molecular-frame information, we used nonadiabatic laser alignment (27) for one-dimensional field-free spatial alignment of the supersonically cooled molecules. 1,3-Butadiene molecules can be reasonably well described by a symmetric top approximation. Our experiment averages over the azimuthal molecular-frame angle φ. A revival trace indicating the measured parent ion yield as a function of Δt, the time delay of the SFI probe pulse with respect to the preceding copolarized alignment laser pulse, is depicted in Fig. 2B. The yield maximizes at Δt = 58 ps, in line with the expectation for the rotational half-revival. To quantify the degree of alignment, we used a previously established self-consistent procedure in which measured experimental data act as a maximum likelihood predictor for numerical alignment simulations (28). The most probable axis distribution of the aligned molecules is determined together with a confidence interval, which allows the determination of molecular frame–dependent parameters from a deconvolution of laboratory-frame observables that are measured for the aligned molecules. At the delay leading to the maximum SFI yield (Δt = 58 ps), the peak alignment is reached, characterized by the value of 〈cos²(θ)〉 given in the Supplementary Materials (see there for further details). Measurements of the coincidence photoelectron spectrum as a function of α′, the angle between the linear polarizations of the alignment and SFI pulses, were performed at peak alignment. α′ is labeled with a prime to distinguish this laboratory-frame angle from the unprimed molecular-frame angles. Using the described procedure, the laboratory-frame angle-dependent yield of direct [MD(α′), 0 to 16 eV] and rescattered [MR(α′), 25 to 55 eV] photoelectrons was determined for the D0 and D1 channels and is presented in Fig. 2C. Error bars reflect the statistical confidence limits. Molecular-frame (that is, θ-dependent) parameters were determined from the laboratory-frame (that is, α′-dependent) observables by fitting the latter with a series of low-order Legendre polynomials (see solid lines) incorporating the previously determined alignment distribution. The fit of MR(α′) results in the determination of S(θ) × R(θ) × Q(θ), the product of the molecular-frame polar angle–dependent ionization, return, and high-angle scattering probabilities. S(θ) is obtained directly from the deconvolution of MD(α′), the yield of direct electrons. The extracted S(θ) distributions (see Fig. 3A) resemble the ones reported previously (24), where small quantitative differences are attributed to the different SFI wavelengths used [λ = 1290 nm versus λ = 800 nm in the work of Mikosch et al. (24)]. Use of this S(θ) permits the determination of the R(θ) × Q(θ) distributions from MR(α′). These distributions are plotted in Fig. 3C for the two SFI channels.
The solid lines represent the most likely distribution resulting from the experiment, whereas the shaded areas reflect the confidence intervals that arise from the alignment deconvolution and from the propagation of the statistical errors (see the Supplementary Materials). Fig. 3 Effect of the shape of the continuum wave packet on rescattering. (A and C) Channel-resolved molecular-frame polar angle–dependent ionization probability S(θ) (A) and rescattering probability R(θ) × Q(θ) (C), as obtained from the deconvolution of the experimental data shown in Fig. 2C with the alignment distribution of the molecular ensemble. The shaded areas delimit the confidence intervals arising from the uncertainty of the alignment distribution (vertical shading) and from the statistical error of the measurements in Fig. 2C (horizontal shading) (see the Supplementary Materials). The latter error contributes significantly only to the D1 rescattering probability. (B and D) The same observables as in (A) and (C) obtained from the TD-RIS calculation. Note that while the ionization probability of the D0 channel in (A) and (B) has been scaled arbitrarily, the relative scale between D0 and D1 reflects their contributions to ionization. Figure 3C shows unambiguously that the experimental channel-specific product of the return probability R(θ) and the high-angle scattering probability Q(θ) differs significantly between D0 and D1, both in shape and in amplitude. Note that the probability R(θ) × Q(θ) is plotted on an absolute scale. Depending on the molecular-frame polar angle along which the laser is polarized, between 0.75 and 1.30% of the continuum electrons rescatter in the D0 SFI channel. The rescattering probability is significantly higher in the D1 SFI channel, namely, between 1.85 and 3.05%. In both channels, R(θ) × Q(θ) displays a pronounced minimum for polarization of the strong laser field along the long axis of the molecule (θ = 0°). In this direction, the respective ionization probabilities (see Fig. 3A) are both maximal. However, for the D0 SFI channel, the rescattering probability distribution peaks at θ ≈ 60°/120°, whereas for the D1 channel, we find R(θ) × Q(θ) to be maximal for orthogonal polarization (θ = 90°), where the D0 channel displays a shallow minimum. To independently corroborate our experimental results, we performed numerical calculations using the time-dependent resolution-in-ionic-states (TD-RIS) theory method (29). Perfectly one-dimensionally aligned molecules were used in the model while averaging over the azimuthal angle φ. Because of the very substantial computational cost, the simulations had to be conducted for dedicated half-cycle (ionization) and few-cycle (recollision) laser pulses at λ = 800 nm (see the Supplementary Materials). The derived angle- and channel-resolved ionization [S(θ)] and the derived rescattering [R(θ) × Q(θ)] probabilities are plotted in Fig. 3 (B and D, respectively) and can be compared with their experimental counterparts in Fig. 3, A and C. There is good qualitative agreement between the experimentally and computationally obtained ionization probabilities S(θ) for both SFI channels, in accordance with the earlier SFI study at λ = 800 nm (24). The calculations confirm the observation from experiment that the R(θ) × Q(θ) distributions for D0 and D1 display clear differences. Although the calculated distributions, plotted in Fig. 3D, are outside the confidence intervals of the experiment plotted in Fig. 
3C, there is good qualitative agreement, especially keeping in mind the limitations of the theoretical model in considering the laser pulse (see the Supplementary Materials). The theory results for R(θ) × Q(θ) reproduce all key features found in the experiment: the pronounced minimum in the distribution for θ = 0° in both the D0 and the D1 channel, the maxima at θ ≈ 60°/120° in the D0 channel and at θ = 90° in the D1 channel, and the dip at θ = 90° in the D0 channel. To assist in the interpretation of our results, we performed another TD-RIS calculation: We simulated laser-induced electron scattering where, at the outer turning point of the laser-driven continuum electron motion, the actual channel-dependent continuum wave packet was substituted by the same artificial Gaussian continuum wave packet for both SFI channels. This Gaussian wave packet is then driven back toward the cation where scattering occurs. Figure 4A shows an isosurface of the Gaussian electron wave packet as created at the outer turning point and shortly after the electron-ion scattering event, for a molecule aligned perpendicular to the laser polarization axis. The phase of the wave packet is depicted as a color scale, so that a color gradient indicates a moving wave packet. The transverse and longitudinal width of the initial Gaussian wave packet was set to 6.3 a0, similar to the spatial separation of the outermost carbon atoms in the 1,3-butadiene molecule. This choice ensures that (i) most of the wave packet recollides and that (ii) upon recollision, the wavefront is, to a good approximation, planar, as can be seen from the parallel stripes indicating the phase gradient. Our model is, hence, reminiscent of the conventional diffraction of electron beams (30) but ensures that, under the influence of the laser field, the colliding electrons feature a kinetic energy distribution very similar to the one in the recollision experiment presented below. Because the return probability R is independent of θ by design in this simulation, we can determine the high-angle scattering probability Q(θ) as a function of the molecular-frame polar angle θ of the laser polarization, averaging over the azimuthal molecular-frame angle φ. Q(θ) is plotted in Fig. 4B for the D0 (blue) and the D1 (red) cation (see the Supplementary Materials for further details). Although a characteristic distribution is observed, almost no difference is found between the two ionic states. This observation confirms our expectation that Q(θ) does not depend on the state of the 1,3-butadiene cations (see Introduction). Fig. 4 Channel indepedence of the high-angle scattering probability. Simulated strong laser field–driven scattering of a three-dimensional Gaussian continuum wave packet (replacing the actual channel-dependent wave packet launched by SFI) off the Embedded Image 1,3-butadiene ion in the electronic ground state (D0) and in the first excited state (D1). (A) Snapshots of the electron density isosurfaces (of value Embedded Image, where a0 is the Bohr radius) at two instants in time during the 2.7-fs period laser field (see the Supplementary Materials), for a laser polarization perpendicular to the plane of the molecule (θ = 90°, φ = 90°) and the ion in the D0 ground state. For the definition of θ and φ, see the coordinate system in Fig. 1. The maximum of the oscillatory electric field defines zero time. The color scale describing the phase of the electron wave function is the same as in Fig. 1. 
(B) Comparison of the computationally obtained θ-dependent, φ-averaged scattering probability R(θ) × Q(θ) for a Gaussian wave packet scattering off the C4H6+ ion in the D0 and the D1 state. Because the return probability R(θ) is channel- and angle-independent by design in this calculation [that is, R(θ) = R], the calculation proves that the high-angle scattering probability Q(θ) is near-independent of the electronic state of the 1,3-butadiene molecular cation, as expected. Given the identical high-angle scattering probability distributions Q(θ) for both channels, the channel dependence of the molecular-frame rescattering probability R(θ) × Q(θ), observed both experimentally and in the TD-RIS calculations, is a remarkable result. Note that the computed Q(θ) in Fig. 4B features a rather different angular dependence compared to both of the channel-dependent R(θ) × Q(θ) distributions. These findings imply that the molecular-frame return probability distribution R(θ) depends both on the ionization channel and on the angle of the laser polarization with respect to the molecular frame. In other words, contrary to the expectations of the simple SFA theories, R(θ) is a distribution that is strongly system-dependent. The molecular-frame and channel dependence of R(θ) is understood in terms of the Dyson orbitals associated with the two continuum channels (Fig. 1, A and B). The nodal planes of the Dyson orbitals are much more strongly reflected in the channel-specific rescattering probability distributions (Fig. 3, C and D) than in the ionization probability distributions (Fig. 3, A and B). We argue that this is the result of the continuum propagation, during which the structure of the respective Dyson orbital is preserved and expanded. To strong-field ionize a molecule with a field directed along the nodal plane of a Dyson orbital, the outgoing photoelectron must acquire nonzero transverse momentum (16, 31). The momentum increases when the spacing between the lobes separated by the nodal plane decreases. The transverse spread of the wave packet strongly decreases the probability of recollision, particularly that of recollision leading to high-angle scattering, for which small impact parameters are required. In 1,3-butadiene, the pronounced minimum of the rescattering probability found for both D0 and D1 for molecular alignment along the laser polarization axis (θ = 0°) (see Fig. 3, C and D) coincides with a node in the plane of the molecule for both Dyson orbitals. In contrast, the molecular-frame ionization probabilities feature maxima for a laser field pointing in this direction (see Fig. 3, A and B). For parallel alignment, the continuum wave packets are expected to appear as narrow bimodal distributions for both D0 and D1, due to the structure of the Dyson orbitals. Because of their high transverse velocity, these wave packets will largely miss the molecule after propagation. For perpendicular polarization of the strong laser field (θ = 90°), on the other hand, the two SFI channels feature a rather different Dyson orbital structure. There is a nodal plane for the D0 Dyson orbital for all azimuthal angles φ (see Fig. 1A). During laser-driven propagation, the two lobes associated with maximal electron density (shown in Fig. 1A for a laser polarization perpendicular to the plane of the molecule) broaden and evolve away from the molecule. Hence, we expect a local minimum in the rescattering probability, in agreement with the observation (see Fig. 3, C and D).
Because the separation between the lobes in the D0 Dyson orbital is larger for θ = 90° than for θ = 0°, the transverse velocity of the continuum wave packet is smaller for perpendicular than for parallel polarization. Consequently, a somewhat higher fraction of the wave packet returns to the core for perpendicular polarization, and the minimum in the rescattering probability at θ = 90° is less pronounced than the minimum at θ = 0°, in line with the experimental finding. In contrast, for the D1 Dyson orbital, maxima are found in the φ-dependent wave function amplitude when the laser is polarized perpendicular to the plane of the molecule, and nodes are found when the laser is polarized in the plane of the molecule (see Fig. 1B). The φ-averaged, θ-dependent integrated amplitude therefore features a maximum for perpendicular polarization, and the recollision probability peaks for θ = 90° (see Fig. 3, C and D). We note that different continuum propagation time intervals are encoded in the electron momentum maps originating from rescattering (32), which will be exploited in future work. While 1,3-butadiene allows us to separate different SFI channels in a convenient fashion, we believe that there is nothing special about the molecule used regarding its recollision dynamics. 1,3-Butadiene can be regarded as a model system representative for molecules featuring nodal planes. We hence anticipate that our findings of molecular orbital imprint in laser-driven electron recollision are of general nature. Our results caution that the relative contribution of different SFI channels to HHG and LIED probes of molecular dynamics (25, 33–35) is not simply reflected by the SFI yield and by the interaction cross sections associated with these channels. Moreover, our findings are highly relevant for the interpretation of both LIED and HHG experiments within the SFA, including the quantitative rescattering theory (36). For instance, in LIED, the momentum distribution of electrons undergoing a single rescattering event along the long trajectory is given in the molecular frame by D(pf) ∝ W(pr, θ) × σ(pr, ϑr). Here, pr denotes the momentum upon return, ϑr indicates the scattering angle, W denotes the amplitude of the returning wave packet, and σ indicates the desired DCS containing the structural information of the molecule (11). The final momentum of the electron, pf, differs from the electron momentum after scattering (defined by pr and ϑr) by the vector potential at the instant of recollision. Thus far, a separation ansatz W(pr, θ) = S(θ) × W′(pr) is usually made when analyzing LIED photoelectron momentum distributions measured for unaligned and aligned molecular samples (11, 13, 15). This ansatz separates the amplitude of the returning wave packet into the angular-dependent ionization probability S(θ) and an angle-independent return probability W′(pr) that depends exclusively on the return momentum. S(θ) is determined by a calculation (11, 13). Our findings show that the return probability is both molecular frame– and SFI channel–dependent. An approximation that is presently made in the interpretation of LIED and HHG experiments is not universally valid and has to be complemented by an experimental or theoretical analysis of the continuum propagation as presented here. Note that, from the perspective of formal theory (see section S7), the factorization ansatz S(θ) × R(θ) × Q(θ) itself can come under scrutiny near nodal planes owing to the wave packet coherence preserved in the propagation and interaction.
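As a quick, editor-added consistency check on the numbers quoted earlier (the 16 eV inflection point read as the 2 Up cutoff of the direct electrons at λ = 1290 nm and I ≈ 5.2 × 10¹³ W/cm²), the ponderomotive-energy scaling can be evaluated directly. The 9.33 × 10⁻¹⁴ prefactor below is the standard conversion Up[eV] ≈ 9.33 × 10⁻¹⁴ I[W/cm²] λ²[μm²]; it is not a number taken from this paper.

```python
# Ponderomotive energy and the 2*Up / 10*Up cutoffs for the quoted parameters.
wavelength_um = 1.29          # 1290 nm SFI wavelength quoted in the text
intensity_Wcm2 = 5.2e13       # intensity derived from the 2*Up inflection point

Up_eV = 9.33e-14 * intensity_Wcm2 * wavelength_um**2
print(f"Up           = {Up_eV:5.1f} eV")
print(f"2*Up cutoff  = {2 * Up_eV:5.1f} eV  (direct electrons)")
print(f"10*Up cutoff = {10 * Up_eV:5.1f} eV  (backscattered electrons, approx.)")
# Up ~ 8 eV, so 2*Up ~ 16 eV reproduces the inflection point of Fig. 2A,
# and the 25-55 eV window used for M_R lies below the ~10*Up rescattering cutoff.
```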
The fact that the return probability is molecular frame– and SFI channel–dependent, as established here, is inherent to continuum propagation and hence expected to be universal. We anticipate that the observed effect increases in importance for mid-infrared laser fields (37), where subcycle excursion times are longer than those for laser fields in the near-infrared, allowing more time for continuum evolution. Experimental details In the experiment, we used a home-built reaction microscope (REMI) (26) to measure the photoelectron momentum from the SFI of 1,3-butadiene molecules in coincidence with the ion mass. The applied electric and magnetic field strengths were E = 30 V/cm and B = 17 G, respectively. We limited the detected ionization rate to ≤0.2 events per shot to minimize the occurrence of false coincidences. In addition, we implemented further measures to filter out uncorrelated events in the acquired data (see the Supplementary Materials). In this way, we ensured that the overwhelming majority of electrons and ions detected in coincidence originated from the same ionization event. Linearly polarized laser pulses with a central wavelength of λ = 1290 nm, a pulse duration of 40 fs, an energy per pulse of 23 μJ, and a repetition rate of 10 kHz were used. These were derived by frequency conversion of pulses from an amplified Ti:Sa laser system (Amplitude Technologies) in a commercial optical parametric amplifier (TOPAS, Light Conversion). For one-dimensional field-free spatial alignment of the supersonically cooled molecules, a nonadiabatic scheme was used (38). Fundamental pulses of the 800-nm wavelength Ti:Sa laser system were temporally stretched to a pulse duration of 0.55 ps and applied 58 ps before the 1290-nm pulses, such that the recollision experiment took place at the rotational half-revival. No ionization yield resulted from the alignment pulses alone. A motorized waveplate in the alignment laser arm was used to vary α′, the laboratory-frame angle between the linear polarizations of the alignment and SFI pulses. To obtain enough statistics in the weak rescattering tail for both SFI channels, data were acquired at just five different α′ angles. The data were accumulated in random order over many short measuring intervals for the different angles to minimize the effects of potential small variations of the laser parameters. In total, each photoelectron spectrum contains the events of 3.8 × 10⁸ laser shots (10.5 hours of data acquisition at the 10-kHz repetition rate of the laser). Further experimental details can be found in the Supplementary Materials. Deconvolution of molecular-frame properties from laboratory-frame observables Laboratory-frame experimental observables measured for aligned molecules are derived from a convolution of the molecular-frame molecular properties with the probability distribution of molecular axis alignment angles. We used primed (unprimed) characters for laboratory-frame (molecular-frame) parameters. Assuming azimuthal symmetry, the equations MX(α′) = ∫∫ A(θ′) X(θ) sin(θ′) dθ′ dφ′ (1), with cos(θ) = cos(α′) cos(θ′) + sin(α′) sin(θ′) cos(φ′) (2), were used to deconvolve the molecular-frame property X(θ) by fitting the experimental observable MX(α′), using the most probable alignment distribution A(θ′) (see section S6). θ′ and φ′ are the polar and azimuthal laboratory-frame angles, respectively; θ is the polar molecular-frame angle; and α′ is the relative laboratory-frame angle between the polarizations of the alignment and the SFI laser pulses.
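To make this procedure concrete, here is a minimal, editor-added sketch of the forward model of Eqs. 1 and 2: it averages a trial molecular-frame property X(θ), written as a sum of even Legendre polynomials as described next, over an assumed laboratory-frame axis distribution A(θ′) to produce the observable MX(α′). The Gaussian-like A(θ′) and the coefficient values are illustrative placeholders, not the distributions extracted in the experiment.

```python
import numpy as np
from numpy.polynomial import legendre

# Forward model of Eqs. 1 and 2: M_X(alpha') from X(theta) and A(theta').
# All numerical inputs below are illustrative assumptions.
def forward_model(alpha_p, b_even, kappa=4.0, n=200):
    """b_even = [b0, b2, b4]; A(theta') ~ exp(kappa*cos^2 theta') (assumed)."""
    th_p = np.linspace(0.0, np.pi, n)          # lab-frame polar angle theta'
    ph_p = np.linspace(0.0, 2 * np.pi, n)      # lab-frame azimuth phi'
    TH, PH = np.meshgrid(th_p, ph_p, indexing="ij")

    # Eq. 2: molecular-frame angle theta between the SFI polarization and the axis
    cos_th = (np.cos(alpha_p) * np.cos(TH)
              + np.sin(alpha_p) * np.sin(TH) * np.cos(PH))

    # X(theta) as a sum of even Legendre polynomials (cf. Eq. 3 below)
    coeffs = np.zeros(2 * len(b_even) - 1)
    coeffs[::2] = b_even                        # only P0, P2, P4 contribute
    X = legendre.legval(cos_th, coeffs)

    # assumed alignment distribution A(theta'), averaged over the sphere
    A = np.exp(kappa * np.cos(TH) ** 2)
    w = np.sin(TH)
    return np.sum(A * X * w) / np.sum(A * w)    # Eq. 1, discretized and normalized

for a in np.linspace(0, np.pi / 2, 5):
    print(f"alpha' = {np.degrees(a):4.0f} deg  M_X = {forward_model(a, [1.0, 0.4, -0.1]):.3f}")
# Fitting b0, b2, b4 so that M_X(alpha') matches the measured angle-dependent
# yields (e.g. with scipy.optimize.curve_fit) then recovers X(theta) up to k_max = 2.
```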
X(θ) was parameterized by a sum of even Legendre polynomials (even, because we used aligned, not oriented molecules) according to X(θ) = Σk b2k P2k(cos θ), k = 0 … kmax (3), where the b2k are fit parameters. For the laboratory-frame observables in this study (see Fig. 2C), the polynomial was terminated at kmax = 2 to not overparameterize the fits to the limited number of experimental data points. For the determination of the confidence intervals shown in Fig. 2, see the Supplementary Materials. Coupled channels rescattering simulations The strong field–driven recollision dynamics were simulated using the TD-RIS method (29). Here, we present the first application of TD-RIS to the simulation of three-dimensional photoelectron spectra including recollision. TD-RIS combines ab initio quantum chemistry for the computation of multielectron wave functions with single-particle time-dependent numerical grid solutions. The field-free n-electron neutral state (|N〉) and the lowest few (n − 1)-electron singly ionized states (|Im〉) were used as the basis. The wave function of the nth continuum electron associated with each ionic state was represented on a three-dimensional Cartesian numerical grid. The coupling to the laser field was treated in the dipole approximation and length gauge. In TD-RIS, the ansatz for the n-electron wave function is |Ψ(t)〉 = b(t)|Ñ〉 + Σm Â[|Im〉 ⊗ |χm(t)〉] (4), where |χm(t)〉 is the one-electron continuum wave packet associated with ionic state m, Â denotes antisymmetrization of the continuum electron with the ionic core, and |φ̃m〉 is the normalized source orbital, which is related to the Dyson orbital |φDm〉 = 〈Im|N〉 by |φ̃m〉 = |φDm〉/am, with am = 〈φDm|φDm〉^(1/2). Finally, |Ñ〉 is the normalized component of the neutral ground state |N〉 orthogonal to the set of source ion states Â[|Im〉 ⊗ |φ̃m〉]. Inserting the ansatz (Eq. 4) into the time-dependent Schrödinger equation leads to (after some manipulations) coupled equations for the amplitudes b(t), am(t), and the respective one-electron continuum wave packets |χm(t)〉, which were represented on three-dimensional Cartesian grids. For further details, see the study of Spanner and Patchkovskii (29). The coupled equations are solved numerically using the leap-frog algorithm. For computational details, the reader is referred to the Supplementary Materials. Supplementary material for this article is available online. section S1. Computational details to the coupled channels rescattering simulations section S2. Effects of adiabatic polarization on the electronic structure section S3. Experimental methods section S4. Data analysis section S5. Determination of the alignment distribution section S6. Confidence interval of the deconvoluted molecular-frame properties section S7. SFA analysis of photoelectron rescattering in the presence of symmetries fig. S1. Electric field (black line) of the 800-nm pulse with a total duration of 4.4 fs and a peak intensity of 3 × 10¹³ W/cm². fig. S2. Channel- and polarization-resolved rescattering probability in the molecular frame for the D0 channel (blue) and the D1 channel (red) of 1,3-butadiene. fig. S3. Electric field mimicking a half-cycle 800-nm pulse with a peak intensity of 3 × 10¹³ W/cm² used for the SFI simulations (see Fig. 3B in the main text). fig. S4. Influence of adiabatic polarization on the Dyson orbitals for the two lowest ionization channels of 1,3-butadiene (D0 and D1). fig. S5. Measured parent ion (C4H6+) yield as a function of the laboratory-frame angle α′ between the linear polarizations of the alignment and the SFI beams, for maximally aligned molecules (Δt = 58 ps). fig. S6.
Number of counts on the ion detector as a function of the ion time of flight and the spatial impact position in molecular beam direction (x). fig. S7. Determination of the alignment distribution present in the experiment. table S1. Overlap of adiabatically polarized cationic many-electron wave functions [D0(F), D1(F)] in 1,3-butadiene with the corresponding field-free wave functions [D0, D1]. References (39–42) Acknowledgments: We thank R. Peslin from the A2 mechanical workshop at the Max-Born-Institut for excellent support. Funding: This work was financially supported by the German Science Foundation (DFG SCHU 645/8-1). Author contributions: J.M., S.P., and C.P.S. designed the experiment. T.B. extended TD-RIS simulations to the recollision regime and performed the calculations. F.S., C.P.S., and J.M. performed the measurements and analyzed the data. T.B. and S.P. analyzed the TD-RIS simulations. All authors discussed the results and contributed to the manuscript. Competing interests: The authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. Additional data related to this paper may be requested from the authors.
Thursday, March 04, 2010 Quantum Interactions Create Space-time The notion that spacetime is an emergent phenomenon is, by my reckoning, being proposed by an increasing number of thinkers. Physicists and philosophers working in quantum gravity and quantum foundations are turning to the idea that the spacetime of relativity is not fundamental, but rather something which arises from a more fundamental world of quantum mechanical systems and their interactions. I just saw a reference to one such argument which was made a few years ago in an article by Avshalom C. Elitzur and Shahar Dolev called “Quantum Phenomena Within a New Theory of Time”. This was published in the 2005 collection Quo Vadis Quantum Mechanics?, Avshalom C. Elitzur, Shahar Dolev, Nancy Kolenda, Eds. Elitzur and Dolev examine several puzzles over the nature of time in quantum mechanics and are led to the hypothesis that quantum interactions (measurements) themselves are responsible for the creation of spacetime. A couple of quotes from section 17.10, titled “An Outline of the Spacetime Dynamics Theory”: Could it be, then, that the two phenomena – time’s passage and wave-function collapse – are not only real, but the latter is the very manifestation of the former? A wave function, after all, is a sum of many equally possible outcomes, while the measurement brings about the realization of one out of them, the others vanishing. Is this not the very difference between future and past? And is collapse not elusive because it creates the elusive ‘now’? Suppose that there is indeed a ‘now’ front, on the one side of which there are past events, adding up as the ‘now’ progresses, while on its other side there are no events, and hence, according to Mach, not even spacetime. Spacetime thus ‘grows’ into the future as history unfolds. What role does the wave function play in this creation of new events? The dynamically evolving spacetime allows a radical possibility. Rather than conceiving of some empty spacetime with which the wave function evolves, the reverse may be the case. The wave function evolves beyond the ‘now’, i.e., outside of spacetime, and its ‘collapse’ due to the interaction with other wave functions creates not only the events, but also the spacetime within which they are located in relation to one another. The famous peculiarities of the quantum interaction – nonlocality, the coexistence of mutually exclusive states, backward causation and the inconsistent histories presented in the previous sections, thus become more natural.    Can the reciprocal effects of spacetime and matter – the celebrated lesson of general relativity – thus possible gain a quantum mechanical explanation? Perhaps it is the wave function, we submit, that is more primitive than spacetime, and the spacetime connecting the two events is the product of their interacting wave functions. Thomas J McFarlane said... On the face of it, there seems to be a problem with this proposal due to the fact that the Schrödinger equation has time built into it already. How can time emerge from wave functions that are themselves solutions of an equation that presupposes time already as a background? Steve said... Hi. Good question. There are two levels. They're making a distinction between the emergent spacetime of relativity, which is a purely geometric structure derived from the distribution of the measurement events, and the underlying background time of the wave functions. 
The reason I think this is worthy of consideration is that the spacetime of relativity has no concept of the flow of time anyway, so under this vision that is accounted for by the underlying, more fundamental quantum time.

Doru said... I find the Space-time Dynamics Theory to be a very valid argument. Even from a philosophical perspective, the wave function seems to be transcending the space-time bound as a more fundamental ontological explanation. The real problem we have here is the problem of mind and consciousness. Space and time is the only solution we have for this problem.

Steve said... Hi Doru: with reference to your last sentence, I think we need time for consciousness, but I'm not sure we need space (as we have normally understood it).

TechTonics said... John Baez: ..."there's no "time operator" in quantum mechanics!" Since the underlying quantum formalism has no time operator, the Schrödinger equation, which is super-structure, adds time to bridge QM to General Relativity, or microcosm to macrocosm, where humans insist upon time recognition. I don't think the objection offered by McFarlane is fundamental. Refer to quantum gravity.

TechTonics said... I group my interests in philosophical musings under the category Causality and Machine Learning, or the case of how Causality evolved/emerged a time-perceiving consciousness. Here is a really good paper on the problem of time within quantum gravity. Prima Facie Questions in Quantum Gravity Authors: C.J.Isham "I wish now to consider in more detail three exceptionally important questions that can be asked of any approach to quantum gravity. These are: (i) what background structure is assumed?; (ii) what role is played by the spacetime diffeomorphism group?; (iii) how is the concept of 'time' to be understood? One of the major issues in quantum gravity is the so-called 'problem of time'. This arises from the very different roles played by the concept of time in quantum theory and in general relativity. Let us start by considering standard quantum theory."

Steve said... Thanks very much for the link. I looked at what Baez said about time not being an operator in QM. This is the idea that t isn't like the other observables, and there is some debate about whether the time/energy uncertainty pair is unlike the position/momentum pair, etc. I think regardless of this, a time parameter is still a fundamental part of QM, as it indexes our experiences in the situations QM is used to describe.

TechTonics said... QM is not a universal theory. GR is a universal theory in which time is considered fundamental. In QM time is not an operator, meaning it is not an observable. The time parameter in the S.E. is arbitrary, so that is why I say it is not fundamental. In the chapter under discussion, Zeh is cited as a reference, and within the article Aharonov is mentioned, which is why I selected them as authoritative resources. Zeh, Time in Quantum Theory ... "For this reason, von Neumann [3] referred to the time-dependent Schrödinger equation as a 'second intervention', since Schrödinger had invented it solely to describe consequences of time-dependent external 'perturbations' of a quantum system. In non-relativistic quantum mechanics, the time parameter t that appears in the Schrödinger wave function ψ(q,t) is identified with Newton's absolute time."
[SH: This is how the gap is bridged between the not-universal theory of QM and the classical/GR regime, which is considered universal, because although time is not observable in GR either, time is considered fundamental: "Spacetime is usually interpreted with space being three-dimensional and time playing the role of a fourth dimension that is of a different sort from the spatial dimensions."] Time in the Quantum Theory and the Uncertainty Relation for Time and Energy, Y. Aharonov and D. Bohm: "Because time does not appear in Schrödinger's equation as an operator but only as a parameter, the time-energy uncertainty relation must be formulated in a special way. First, it is not consistent with the general principles of the quantum theory, which require that all uncertainty relations be expressible in terms of the mathematical formalism, i.e. by means of operators [observables], wave functions, etc."

Steve said... Hi. Thanks, TechTonics, for your comments. Re: "QM is not a universal theory. GR is a universal theory...": I've been interested in proposals that turn this 180 degrees around. In these, GR is seen as an emergent, not fundamental theory; a network of quantum mechanical systems and their interactions would be fundamental. You say the time parameter in QM is arbitrary, and I understand the rationale for saying this, but it's drawn from our experience. Whereas with GR (and some quantum gravity theories like loop quantum gravity) you can't get our intuitive experience of time back out! The idea of time as one of the dimensions in GR isn't enough – there is no preferred time direction picked out by the theory; it is really just geometry. A list of posts on theories of this type is at the bottom of this post.

TechTonics said... Causality First: Rafael Sorkin's Causal Sets and Fotini Markopoulou's Quantum Causal Histories. She finishes with a note on time: Just as the emergent locality has nothing to do with the fundamental micro-locality, time and causality will also be unrelated macro vs. micro. So, the theory "puts in" time at the micro-level (via its causality constraints), but emergent spacetime will have no preferred time slice – as required in general relativity.

TT: I haven't read as many of those theories as you have. I did read Fotini and was impressed. But you quote her as saying "time and causality will also be unrelated macro vs. micro." I think GR is still received as a universal theory because humans perceive causality as events unfolding through time. The resistance to accepting QM as a universal theory arises from not understanding the physical result of the double-slit experiment, which Feynman described as the "central mystery" of quantum mechanics. Or the causality involved with EPR and non-locality. Classically, the speed of light is precisely determined in terms of a meter, which limits causality. In the 13 or so major interpretations of what QM describes about reality, one has time considered as instantaneous, which wreaks a bit of havoc with a traditional grasp of causality, and other various QM interpretations about the speed of light. I'm not going to buy a theory which doesn't make testable predictions. I like most what Penrose had to say in the Foreword to that Quo Vadis book. The theories are incomplete, either QM or GR or both. What bothers me about Fotini's idea is that both QM and GR have experiments which confirm the theories to 99.9999+ accuracy.
It makes me think that there should be a reconciliation possible; but it doesn't seem Fotini is forecasting such a reconciliation with that remark: "time and causality will also be unrelated macro vs. micro." I read her paper over a year ago so my memory is fuzzy. I recall that the root of the debate is over whether the turtle's back is discrete or continuous.

TechTonics said... I just found this newer paper by Loll: Coupling Point-Like Masses to Quantum Gravity with Causal Dynamical Triangulations, Authors: Igor Khavkine, Renate Loll, Paul Reska (Submitted on 24 Feb 2010). I have a fondness for Loll's approach because I like fractals. Though Fotini mentions CDT, the paper of hers you linked to seemed to conflict with CDT in too many ways.

Steve said... I haven't read that paper yet, but I do feel that the CDT results are "telling us something" about quantum gravity. I think that we probably will get a better motivated and more insightful micro-theory than the CDT one in a future theory.

TechTonics said... I also found that post by Scott interesting. I've heard it said, like a complaint, that everything is emergent. But maybe that is really an insight. The Big Bang or the Big Bounce, stars forming and then reforming so that they create more complex elements when they nova, planets forming, life, intelligence and consciousness. This all seems quite unpredictable from the point of origin. Becoming more particular, von Neumann purposely designed an automaton capable of universal computation (UC). Was that emergent? Then Conway designed the Game of Life in 1970. He proved Life capable of UC in 1982. I've read Life (UC) described as an emergent phenomenon. The cases appear to be different in my view. There are complex patterns whose rules are so complex that it has been proven that no computer can find the rule. How can you distinguish that type of rule from random, as in Rule 30, which is a pseudo-random generator which re-cycles after a "billion billion lifetimes of the universe" (Wolfram)? I found your website after looking into CAs and emergence. I've really enjoyed reading some, so far, of your many insightful posts. I can read a paper and compare what I get out of it (which never seems to be enough) with your analysis; quite a treasure. Also, my first impression of Fotini's paper has improved with aging.

TechTonics said... I have been reading your past posts and taking notes. I was hoping to find a consistent explanation between the ideas explored which shared a common philosophical perspective. Maybe you can connect all the dots in preparation for writing your book :-) Philosophers usually dismiss the notion of "strong emergence" because it is magical, transcending causality. Chalmers adopts a type of dualism in suggesting that the only type of strong emergence might be consciousness. Most philosophers describe weak emergence in terms of knowledge (or lack thereof). Weak emergence assumes causality although the rule of the pattern which connects cause and effect is unknown or irreducible. Apparently the objective universe existed before humans evolved who describe it in theories which can never be proved, thus epistemological constraints. One such theory suggests that the universe expanded at a super-luminal velocity, which I think is faster than causality propagates. At what point can one distinguish some causing event/property as fundamental, from the created event (effect) which follows, and is now described as emergent?
The universe was evolving before sentience arose which could experience such epistemological constraints. The Limit of the Bayesian Interpretation: "I read the paper 'Subjective probability and quantum certainty' ... To bring the point home, this particular paper features the discussion of quantum experiments with a certain outcome: the authors show that this outcome is to be interpreted as a certainty of epistemic belief on the part of the observer, not an objective certainty." Causality: Models, Reasoning, and Inference (2000) by Judea Pearl, Preface: "Ten years ago, when I began writing Probabilistic Reasoning in Intelligent Systems (1988), I was working within the empiricist tradition. In this tradition, probabilistic relationships constitute the foundations of human knowledge, whereas causality simply provides useful ways of abbreviating and organizing intricate patterns of probabilistic relationships. Today, my view is quite different. I now take causal relationships to be the fundamental building blocks both of physical reality and of human understanding of that reality, and I regard probabilistic relationships as but the surface phenomena of the causal machinery that underlies and propels our understanding of the world."

Tech: What primitive or fundamental concept of causality can exist which doesn't also entail the idea of events existing in space which create the following (t+1) next event? If causality is primitive, how then are space and time considered emergent? It would seem to me that causality, space and time must all have emerged from a common cause, or they are all fundamental, since causality is defined in terms of, or exists in terms of, space and time and events. Isn't gravity essentially causal? Is everything emergent when viewed from the first few nanoseconds of expansion? Do quantum fluctuations include and explain the plasma conditions?

Steve said... Thanks for your kind words about the blog. I always appreciate dialogue with others who think about these things. I also appreciate your comment that it's hard to discern a coherent philosophical story from this format – I should attempt to restate or summarize some of the themes which run through this stuff. Re: emergence. This is a tricky subject. I had convinced myself a while back that most examples of emergence in nature were epistemological or explanatory, not truly ontological – with one exception: the collapse of a quantum system when measured. The idea is that the ordered sequence of measurement events (like the causal set idea) emerges from the underlying dimensionless quantum world of possibilia. So causality, and also time, which is the index of causality, are primitive. Space is not primitive. Space (and the geometric spacetime picture of GR) are derived from the primitive relations between and among the events ("geometrogenesis"). With regard to mind, the speculative idea is that panexperientialism is true, and so there is a mote of first person experience present in every event. So the emergence of human consciousness is not an additional ontological type of emergence beyond the first type.

Steve said... Instead of "dimensionless" quantum world, I should perhaps have said "hyper-dimensioned" or something like that.

TechTonics said... I was talking to a French AIT expert about the term "algorithmically incompressible" as an incorrect description (given its definition) of CAs that must unfold in a simulation: there is no predictive shortcut.
Wolfram's term, "computationally irreducible", is correct. Anyway, he mentioned in passing that QM thoughts about emergence would depend upon which interpretation one chose. For instance, the Schrödinger equation gives a deterministic evolution of the wave-function. In the "traditional interpretation" of quantum mechanics a measurement "collapses the wave-function." This collapse DOES NOT obey Schrödinger’s equation. After the collapse Schrödinger’s equation again governs the wavefunction evolution. Randomness enters during the measurement. "One of the central problems in interpretation of quantum theory is the duality of time evolution of physical systems: 1. Unitary evolution by the Schrödinger equation 2. Nondeterministic, nonunitary change during measurement of physical observables, at which time the system "selects" a single value in the range of possible values for the observable. This process is known as wavefunction collapse. Moreover, the process of observation occurs outside the system, which presents a problem on its own if one considers the universe itself to be a quantum system. This is known as the measurement problem. SH: So the standard interpretation postulates, I think, that there are myriad wave-function collapses in this universe. However the Everett, Many-Worlds, and Many-Minds interpretations postulate a universal wave-function: "However Albert and Loewer suggest that the mental does not supervene on the physical, because individual minds have trans-temporal identity of their own. The mind selects one of these identities to be its non-random reality, while the universe itself is unaffected." With regard to "panexperientialism is true", isn't that a type of dualism which entails supervenience? Steve said... No; panexperientialism is meant to avoid dualism by specifying that mental events and physical events are the same things. Neither supervenes on the other. The only dualism here is a dualism of perspectives. Mental events are ones we participate in directly; physical events are the others we infer exist via a third person perspective.
PHYS 401: Quantum Mechanics
Term: Spring
Credits: 4
Degree Requirements: Introduction to topics in quantum physics, including observables and measurement, position and momentum representations, wave mechanics, the time-dependent Schrödinger equation, Hilbert space vectors and operators, the Hamiltonian, potential wells and the harmonic oscillator, introduction to Dirac notation, scattering theory, the hydrogen atom, angular momentum, and spin. (Course offered Spring semester of even-numbered years.)
Prerequisites: Modern Physics; Mathematical Methods of Physics.
My Hidden Weirdness
My definition of the accompanying text (below) is that the text is a fine art picture. So I can frame it and hang it in my salon (a picture) and I can read it like the page of a book (a text). There is nothing special about it in our everyday (classical) world. But when people visit my salon they ask me “from where did you get this picture?” and when they take a closer look they discover the text and read… and suddenly they ask me “what is it, a picture or a text on quantum mechanics?”… and my answer is “both”… That’s weird… and now, in an oblique manner, the text below is weirder, because in the quantum world the word “both” has a precise meaning: wave and particle and all that follows from it… but I was told that I am a part of the quantum world… So am I weird?
Quantum mechanics states that you cannot precisely measure both position and momentum. Just because you can’t measure it doesn’t mean it doesn’t have position and momentum at the same time. The theory seems based on this principle, but why?
Viktor T. Toth, IT pro, part-time physicist
No, quantum mechanics does not state that you cannot simultaneously measure both position and momentum precisely. It is a consequence of the theory, but it is not what the theory is based on. Quantum mechanics states that a classical position, classical momentum, or other classical observables do not exist except in the rare cases when the quantum object interacts with something classical (such as an instrument). When you look at the mathematics (and you have to look at the mathematics; quantum mechanics cannot be intuited) something amazing emerges. The formal equations of quantum mechanics, such as the Schrödinger equation, can be “derived” easily from classical physics. However, this equation offers many more solutions than its classical counterpart. Quantum mechanics begins when we look at these solutions and accept them as valid descriptions of reality, despite the fact that they seemingly make no intuitive sense, certainly not in the context of classical physics. Now you may wonder, what on Earth possesses us to go down this rabbit hole? Very simple: physics is based on experiment and observation. And we found that this is how the physical world works. When we look at this much richer world of quantum solutions, we find that indeed, most of the time that particle does not have a classical position or a classical momentum. Moreover, the math tells us that when it is confined to a classical position by a measurement, its classical momentum does not exist; it remains in a superposition of states. So when you think of an electron inside a cathode ray tube, going from the cathode to the screen while mysteriously going through two holes at the same time, and ask yourself, “What was the electron’s path?”, unfortunately the only legitimate answer sounds just as mysterious as the little boy telling Neo in the film The Matrix that there is no spoon: there is no (classical) path. It’s not that we cannot measure it. It truly does not exist. And whether we like it or not, that’s the way Nature works. But there is one advantage that we have over a piece of fiction like The Matrix: our outlandish statement is grounded in firm mathematics that leads to testable predictions, through which our outlandish claims can be (and have been, countless times) verified and validated.
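As a numerical footnote to the answer above (my own illustration, not part of Toth's text; it assumes Python with NumPy, and all variable names are mine), the sketch below builds Gaussian wave packets of several widths, takes the position spread from |ψ(x)|² and the momentum spread from the packet's Fourier transform, and prints the product Δx·Δp, which stays at the Heisenberg bound ħ/2.

```python
import numpy as np

# Gaussian wave packet psi(x) ~ exp(-x^2 / (4 sigma^2)): position spread sigma,
# momentum spread hbar / (2 sigma), so Delta_x * Delta_p sits at the bound hbar/2.
hbar = 1.0                                   # work in units where hbar = 1
x = np.linspace(-50.0, 50.0, 4096)
dx = x[1] - x[0]

for sigma in (0.5, 1.0, 2.0, 4.0):
    psi = np.exp(-x**2 / (4.0 * sigma**2))
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)          # normalize

    # position statistics from |psi(x)|^2
    prob_x = np.abs(psi)**2
    mean_x = np.sum(x * prob_x) * dx
    spread_x = np.sqrt(np.sum((x - mean_x)**2 * prob_x) * dx)

    # momentum statistics from the Fourier transform, p = hbar * k
    phi = np.fft.fftshift(np.fft.fft(psi))
    k = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))
    p = hbar * k
    dp = p[1] - p[0]
    prob_p = np.abs(phi)**2
    prob_p /= np.sum(prob_p) * dp                         # normalize the density
    mean_p = np.sum(p * prob_p) * dp
    spread_p = np.sqrt(np.sum((p - mean_p)**2 * prob_p) * dp)

    print(f"sigma = {sigma:3.1f}:  dx = {spread_x:.3f}, dp = {spread_p:.3f}, "
          f"dx*dp = {spread_x * spread_p:.3f}  (bound hbar/2 = {hbar / 2})")
```

Narrower packets (small sigma) show a larger momentum spread and vice versa, which is the wave-packet statement of the trade-off discussed above.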
Discrete & Continuous Dynamical Systems - B
2008, Volume 9, Issue 1

Bayesian online algorithms for learning in discrete hidden Markov models
Roberto C. Alamino and Nestor Caticha
2008, 9(1): 1-10. doi: 10.3934/dcdsb.2008.9.1

Monotonicity properties of the blow-up time for nonlinear Schrödinger equations: Numerical evidence
Christophe Besse, Rémi Carles, Norbert J. Mauser and Hans Peter Stimming
2008, 9(1): 11-36. doi: 10.3934/dcdsb.2008.9.11
We consider focusing nonlinear Schrödinger equations (NLS), in the $L^2$-critical and supercritical cases. We present a systematic numerical investigation of the dependence of the blow-up time on properties of the data or on the (parameters of the) equation in three cases: dependence on the strength of the nonlinearity in the equation when the initial data is fixed; dependence on the strength of a damping term in the equation when the initial data is fixed; and dependence upon the strength of a quadratic oscillation in the initial data when the equation and the initial profile are fixed. For some cases, analytic results are available and presented. In most situations our numerical counterexamples show that monotonicity in the evolution of the blow-up time does not occur. In addition they show that in certain regimes the blow-up time is very sensitive to the different parameters that we modulate. Our numerical solutions are very reliable since not only do we test independence of the precise setting of the numerical problem (size of the periodic domain, discretization, etc.) but we also compare the same simulations with two different methods in two independent codes: a spectral time-splitting code and a relaxation method, with results identical at the order of precision.

Deterministic walks in rigid environments with aging
Leonid A. Bunimovich and Alex Yurchenko
2008, 9(1): 37-46. doi: 10.3934/dcdsb.2008.9.37
Aging is an abundant property of materials, populations, and networks. We consider some classes of cellular automata (Deterministic Walks in Random Environments) where the process of aging is described by a time dependent function, called a rigidity of the environment. Asymptotic laws for the dynamics of perturbations propagating in such environments with aging are obtained.

Graeme D. Chalmers and Desmond J. Higham
2008, 9(1): 47-64. doi: 10.3934/dcdsb.2008.9.47
Stochastic differential equations with Poisson driven jumps of random magnitude are popular as models in mathematical finance. Strong, or pathwise, simulation of these models is required in various settings and long time stability is desirable to control error growth. Here, we examine strong convergence and mean-square stability of a class of implicit numerical methods, proving both positive and negative results. The analysis is backed up with numerical experiments.
On a nonlocal reaction-diffusion population model
Keng Deng
2008, 9(1): 65-73. doi: 10.3934/dcdsb.2008.9.65
In this paper, we consider a nonlocal parabolic initial value problem that models a single species which is diffusing, aggregating, reproducing and competing for space and resources. We establish a comparison principle and construct monotone sequences to show the existence and uniqueness of the solution to the problem. We also analyze the long-time behavior of the solution.

Transitivity of a Lotka-Volterra map
Juan Luis García Guirao and Marek Lampart
2008, 9(1): 75-82. doi: 10.3934/dcdsb.2008.9.75
The dynamics of the transformation $F: (x,y)\rightarrow (x(4-x-y),xy)$ defined on the plane triangle $\Delta$ of vertices $(0,0)$, $(0,4)$ and $(4,0)$ plays an important role in the behaviour of the Lotka–Volterra map. In 1993, A. N. Sharkovskiĭ (Proc. Oberwolfach 20/1993) stated some problems on it, in particular a question about the transitivity of $F$ was posed. The main aim of this paper is to prove that for every non-empty open set $\mathcal{U} \subset \Delta$ there is an integer $n_{0}$ such that for each $n>n_{0}$ it is $F^{n}(\mathcal{U}) \supseteq \Delta \setminus P_{\varepsilon}$, where $P_{\varepsilon} = \{ (x,y) \in D : y<\beta, \text{ where } F(t,\varepsilon)=(\alpha,\beta) \text{ and } t \in[0,2] \}$ and $\varepsilon \rightarrow 0$ as $n \rightarrow \infty$. Consequently, we show that the map $F$ is transitive, it is not topologically exact and it is almost topologically exact. Additionally, we prove that the union of all preimages of the point $(1,2)$ is a dense subset of $\Delta$.

A coupled map lattice model of tree dispersion
Miaohua Jiang and Qiang Zhang
2008, 9(1): 83-101. doi: 10.3934/dcdsb.2008.9.83
We study the coupled map lattice model of tree dispersion. Under quite general conditions on the nonlinearity of the local growth function and the dispersion (coupling) function, we show that when the maximal dispersal distance is finite and the spatial redistribution pattern remains unchanged in time, the moving front will always converge in the strongest sense to an asymptotic state: a traveling wave with finite length of the wavefront. We also show that when the climate becomes more favorable to growth or germination, the front at any nonzero density level will have a positive acceleration. An estimation of the magnitude of the acceleration is given.

Modeling group dynamics of phototaxis: From particle systems to PDEs
Doron Levy and Tiago Requeijo
2008, 9(1): 103-128. doi: 10.3934/dcdsb.2008.9.103
This work presents a hierarchy of mathematical models for describing the motion of phototactic bacteria, i.e., bacteria that move towards light. Based on experimental observations, we conjecture that the motion of the colony towards light depends on certain group dynamics. This group dynamics is assumed to be encoded as an individual property of each bacterium, which we refer to as ’excitation’. The excitation of each individual bacterium changes based on the excitation of the neighboring bacteria. Under these assumptions, we derive a stochastic model for describing the evolution in time of the location of bacteria, the excitation of individual bacteria, and a surface memory effect. A discretization of this model results in an interacting stochastic many-particle system.
The third, and last, model is a system of partial differential equations that is obtained as the continuum limit of the stochastic particle system. The main theoretical results establish the validity of the new system of PDEs as the limit dynamics of the multi-particle system.

Noncoercive damping in dynamic hemivariational inequality with application to problem of piezoelectricity
Zhenhai Liu and Stanislaw Migórski
2008, 9(1): 129-143. doi: 10.3934/dcdsb.2008.9.129
In this paper we consider an evolution problem which models the frictional skin effects in piezoelectricity. The model consists of the system of the hemivariational inequality of hyperbolic type for the displacement and the time dependent elliptic equation for the electric potential. In the hemivariational inequality the viscosity term is noncoercive and the friction forces are derived from a nonconvex superpotential through the generalized Clarke subdifferential. The existence of weak solutions is proved by embedding the problem into a class of second order evolution inclusions and by applying a parabolic regularization method.

Phase-locking and Arnold coding in prototypical network topologies
Stefan Martignoli and Ruedi Stoop
2008, 9(1): 145-162. doi: 10.3934/dcdsb.2008.9.145

The patch recovery for finite element approximation of elasticity problems under quadrilateral meshes
Zhong-Ci Shi, Xuejun Xu and Zhimin Zhang
2008, 9(1): 163-182. doi: 10.3934/dcdsb.2008.9.163
In this paper, some patch recovery methods are proposed and analyzed for finite element approximation of elasticity problems using quadrilateral meshes. Under a mild mesh condition, superconvergence results are established for the recovered stress tensors. Consequently, a posteriori error estimators based on the recovered stress tensors are asymptotically exact.

Basic spike-train properties of a digital spiking neuron
Hiroyuki Torikai
2008, 9(1): 183-198. doi: 10.3934/dcdsb.2008.9.183
A digital spiking neuron is used to generate spike-trains of variable spike-intervals. Multiple co-existing periodic spike-trains are observed, depending on initial states. By focusing on a simple parameter case, we clarify the number of co-existing periodic spike-trains and determine their periods theoretically. Using a spike-interval modulation, the spike-train is coded by a digital sequence. We clarify that the set of co-existing periodic spike-trains is in a one-to-one relation to a set of binary numbers. We finally discuss to what extent these theoretical results may provide the mathematical basis for technological applications.
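Purely as an illustration of the map studied in the Guirao–Lampart abstract above (this sketch is mine, not the authors'; it assumes Python with NumPy and proves nothing about their result), one can iterate F(x, y) = (x(4 − x − y), xy) from an interior point of the triangle Δ and count how many cells of a coarse grid the orbit visits:

```python
import numpy as np

# Iterate the Lotka-Volterra map F(x, y) = (x(4 - x - y), xy) on the triangle
# Delta with vertices (0,0), (0,4), (4,0) and count how many cells of a coarse
# grid a single orbit visits.  This is a rough picture only; it says nothing
# rigorous about the transitivity theorem of the paper.

def F(x, y):
    # max(..., 0.0) guards against the orbit leaving Delta through tiny rounding errors
    return x * max(4.0 - x - y, 0.0), x * y

x, y = 0.9, 1.3                       # arbitrary interior starting point
n_iter, n_bins = 50_000, 40
visited = np.zeros((n_bins, n_bins), dtype=bool)

for _ in range(n_iter):
    x, y = F(x, y)
    i = min(int(x / 4.0 * n_bins), n_bins - 1)
    j = min(int(y / 4.0 * n_bins), n_bins - 1)
    visited[i, j] = True

cells_in_triangle = sum(1 for i in range(n_bins) for j in range(n_bins) if i + j <= n_bins)
print(f"grid cells visited by the orbit: {visited.sum()} of roughly {cells_in_triangle} in Delta")
```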
Wednesday, September 21, 2011
Imagining the Ninth Dimension
"The string landscape can be visualized schematically as a mountainous terrain in which different valleys represent different forms for the extra dimensions, and altitude represents the cosmological constant's value."
I've been saying that the eighth dimension is as far as you need to go for any expressions of matter, while the ninth can only contain information/meme patterns, preferences for one kind of reality over another. How could I arrive at such an ambitious statement? With my Imagining the Tenth Dimension project, I begin by saying that a point indicates a position in a system. In Imagining the Sixth Dimension, I mentioned that thinking about the set of all possible states for our unique universe would be thinking about our universe's phase space. In fact, that's the definition of phase space: a space in which all possible states of a system are represented. I believe there's a way to apply this thinking to every single dimension - in a sense, a dimension when considered as a "set of all possible states for that dimension" becomes a finite but unbounded hypersphere, and that hypersphere becomes a point in the next dimension up. Let's go back and see how that logic holds up.
If I'm on a boat in the middle of the ocean, I can see a horizon that appears to be the same in every direction. From this I can deduce that there is a slight curvature to the surface of the ocean, which is topologically speaking a 2D plane, and understand that I'm really on a 3D sphere. From the 2D topological perspective, I could head in a specific direction forever, giving me the impression that I was on an infinitely flat surface, but with the added curvature of the third dimension we can see how "apparently infinite" can be equated with "finite but unbounded". With the knowledge that it takes a certain amount of "time" for light to reach our eyes, we realize that what we're seeing around us is not space, but space-time, and that as counter-intuitive as this may seem at first it's actually impossible for us to see 3D space: we can imagine and use the logic of 3D shapes, but we can only see them from our moving position within 4D space-time. From our position within 4D space-time, we look out to the furthest reaches and see a cosmological horizon which is the same in all directions. From this we can deduce that there is a slight curvature to space-time, and that we're really a point moving on the surface of a 5D hypersphere. There are many other indications that our reality comes from the fifth dimension: back in 1921 Einstein accepted this idea as proposed by Kaluza. Holographic universe theories propose that we are an interference pattern projected from the fifth dimension, or from the "edge of the universe", but I disagree with those who say that this edge is far, far away. Think of it like this: the third dimension is at the "edge" of the second dimension no matter where our imaginary 2D flatlanders are located. In the same way, this "edge" they speak about in holographic universe theories as being at an additional right angle to our space-time reality is not far away, it's right "here" in the next dimension up, no matter where we are within our space-time reality. And Hugh Everett, even though he didn't propose extra dimensions with his Many Worlds Interpretation, did propose that the branching universes derived from quantum mechanics occur within a space which is orthogonal (at right angles) to space-time.
Some quantum physicists are fond of saying that extremely unlikely events such as one of us suddenly disappearing from here and reappearing on the moon are allowable within the quantum wave function, but they are so unlikely that they would take longer than the life of the universe to occur. Likewise, Everett talked about how there are branching tree-like structures which are causally connected, and he even allowed for the possibility that some of those branches might fuse back together further down the causal chain, but he was very clear that causality could never be violated - so the universe where dinosaurs never became extinct or JFK was never murdered or where I died in a car crash last year would exist within the universal wave function as described by the Schrödinger equation, but they are now inaccessible from the universe we are currently observing. Those other universes, in a manner of speaking, are beyond the horizon of our 5D probability space, which leads me to conclude that we are a 5D point moving on the surface of a 6D hypersphere. This sixth dimensional "phase space", as some have called it, includes all possible versions of our universe, from its beginning to its end. But within that phase space, we never wander off into one of the other universes with different physical laws, because those are in effect "beyond the horizon" of our universe's phase space, and from this we can deduce that the system representing our universe as a timeless whole is a point on a 7D hypersphere. From here we are beginning to move into discussions of information flow rather than physical realities, but we're not all the way there yet. In his book Just Six Numbers, Sir Martin Rees tells us that we only need to define six "deep forces" to describe our unique universe. Adjusting any one of those parameters by surprisingly small amounts would cause our universe to fall apart as the laws of physics break down. So if our unique universe is located at a position within the multiverse landscape, or constrained by a D7 brane as some string theorists have suggested, then are we moving, or are we stationary on the surface of this 7D hypersphere? There has been some evidence that the basic physical laws of our universe may have been slightly different at the earliest history of our universe, which would indicate that perhaps we have changed our 7D position slightly according to the logic we're pursuing here. But the idea that there is a certain natural selection occurring at the seventh dimension and beyond also makes sense - if we move too far away from our position, the incredibly delicate balance of forces that allow our universe to exist would collapse, so at nearby positions within this multiverse landscape there might not be universes that cohere into any meaningful structures, but further away another universe completely different from ours could be assembled with its own unique set of intricately connected physical laws and its own unique expression within the sixth dimension and below. We also talked last entry about how we can imagine a data set of universes within the seventh dimension which would then require the "beyond the horizon" additional degree of freedom of the eighth dimension for us to be able to simultaneously consider other data sets not included within the seventh dimensional one: but to be clear, those data sets could be interchangeable, so this is more of a question of reference frames than it is of some data not being part of the seventh dimension. 
In that sense, the seventh dimension harkens back to the "garden hose" analogy used by string theorists: it's useful to imagine that the seventh dimension looks like a straight line, but when we move closer we can see the dimension has the potential for additional twists and turns that are inside the "rolled up tube" that is, topologically speaking, the "plane" of the eighth dimension. (We looked at the following animation of vibrating Calabi-Yau Manifolds before in June 2011, in an entry called "Will the LHC Reveal Extra Dimensions?") With this project, I'm proposing that the eighth dimension would encompass every possible physical expression of every possible universe. This would even include the extremely unlikely universes that result from oscillating rather than static constants - the degree of freedom to allow such changes would be within the eighth dimension. So no matter what universe we are imagining, we can visualize it as a point on an 8D hypersphere, but in the case of our own universe I suspect that we are not partaking of that additional potential degree of freedom, so we are definitely not moving away from our 7D position within the eighth dimension. What's beyond the horizon of the 8D construct we've just envisioned? String theorists who talk about there being ten to the power of five hundred possible universes are really describing the different possible shapes the extra dimensions could take. In The Hidden Reality, Brian Greene uses the following image to picture the terrain of possible extra-dimensional shapes: he calls this terrain the Landscape Multiverse (as opposed to the Brane Multiverse, the Quilted Multiverse and so on), and describes how quantum tunnelings through this mountainous string landscape realize every possible form for the extra dimensions in one or another bubble universe. To tie this idea to my approach to visualizing the extra dimensions, the topological "plane" of this landscape is the eighth dimension, and the additional degree of freedom allowing this tunneling to occur would be in the ninth dimension. (this graphic © 2011 by Brian Greene) So here we are in the ninth dimension. Now we really are into the realm of organizing patterns, or "big picture memes" as I've called them in my book, where we are finally fully into the "information" side of the "information equals reality" equation. What caused our particular universe to be selected from out of the sea of potential patterns that roil and froth like quantum foam at the ninth dimension? Back in July 2008 we talked in this blog about Michael Shermer, who's the well-known publisher of Skeptic Magazine. Michael's goal has been to poke holes in the questionable claims of fringe science, the paranormal, and a wide range of other areas that he has targeted with his razor-sharp debunking skills. This is why I found it quite marvelous when I picked up an issue of Scientific American back then, and found that Mr. Shermer's regular column that month was entitled "Sacred Science: can emergence break the spell of reductionism and put spirituality back into nature?". Mr. Shermer's article is about a fellow who comes from my neighboring province of Alberta, Canada: Stuart Kauffman, founding director of the Institute for Biocomplexity and Informatics at the University of Calgary, who has written a book called "Reinventing the Sacred". 
To quote from Michael Shermer's article about the book: Kauffman reverses the reductionist's causal arrow with a comprehensive theory of emergence and self-organization that he says 'breaks no laws of physics' and yet cannot be explained by them. God 'is our chosen name for the ceaseless creativity in the natural universe, biosphere and human cultures,' Kauffman declares. By the time we are thinking about the ninth dimension as selection patterns that represent a generalized preference for one kind of universe over another, I believe we're in the same intellectual neighborhood as the "God 2.0" concept. And I think Michael Shermer, famed atheist and skeptic, got it right when he concluded his article saying that Stuart Kauffman's "God 2.0 is a deity worthy of worship". Why do I say this? Because by now we're talking about labels: "a rose by any other name would smell as sweet".  Whether you want to call these selection patterns that caused our universe to be selected from out of this sea of potential information patterns "God", or some other less emotionally-charged name, doesn't change the ninth dimensional reality that we're describing here. As I say in the last verse of my song The Unseen Eye: Now the universe of all universes If the truth be known Is an awful bore, viewed as a whole But just a tiny shard viewed from any angle Reveals complexity It reveals such beauty, reveals a soul So does it make a difference How we got to what we see If it’s really just coincidence It’s still a wondrous thing If you are one of those persons who recoil at the use of words like "God" and "soul", I apologize. This project is not an attempt to enforce a spiritual viewpoint onto the nature of reality, but it also tries to show that there are a great many possible connections between these different schools of thought. If you prefer physics over philosophy, so be it, that is one point of view. But likewise, if you prefer spirituality over science, I'm hoping that this project has given you some new food for thought for where the meeting ground between the two might reside. There are 26 songs I attached to this project (I chose that number as a bit of an inside joke for fans of the history of string theory), and the very last one is called "Thankful". Having a sense of wonder and gratitude for the immense processes which selected the universe we are in right here and right now is, to my way of thinking, a completely appropriate response. Are you enjoying the journey? Rob Bryanton Next: Wrapping It Up in the Tenth Dimension Imagining the Eighth Dimension Imagining the Seventh Dimension Imagining the Sixth Dimension Imagining the Fifth Dimension Imagining the Fourth Dimension Imagining the Third Dimension Imagining the Second Dimension ak40sexy said... awesome entry as always rob! i love how all encompassing and open minded your ideas are. definitely a useful guide for the emerging scientific and spiritual paradigm that is shaping the future of humanity. thanks for your thought and insight! Black Napalm said... Hello, Rob! Have you seen this article? What are your thoughts/comments? Rob Bryanton said... Thanks ak40sexy! And Black Napalm, there's certainly a lot of buzz about this new report that neutrinos arrived 60 billionths of a second faster than allowed by the speed of light limit. Have you seen this article? According to my approach to visualizing the dimensions, the only way something can travel faster than the speed of light is if it somehow burrows or "folds" through an extra dimension. 
Could these neutrinos have done just a little bit of that? It's still too early to say for sure whether we're talking about something that is real and confirmable by other experimenters. Definitely worth keeping an eye on! Anonymous said... Time travel happens every day. :) Once we all can see the 90º interface of higher dimensions presented to us as our creative reality expands, we have the option to use our mind intent to engage this angular momentum when we are emotionally motivated enough to alter the "time" line. As in Schrödinger's cat, the "time" spent during the period when the cat is in multiple dimensions offers the opportunity for optional dimensions to present themselves for validity in "your" "time" line. It is up to you to weight the "occurrence" with your integrity. The omniverse will respond by offering fertile creative "space" at the next Planck length. Anonymous said... Hey, what's the difference between the tenth and ninth dimension? Rob Bryanton said... Hi anonymous, what's the difference between the ninth and tenth dimension? M-Theory says there are ten spatial dimensions plus time. "Time" is a way of describing change from state to state within any dimension. So within this approach to visualizing the dimensions the ninth dimension can have "time" added to it, in other words it's a set of information patterns that we can navigate within using "time". The tenth dimension, though, has to be the timeless version of that information: absolutely every possible pattern conceived of as one timeless whole, an "ultimate ensemble" as Tegmark refers to it. As soon as you add "time" back into the equation, you are automatically spilled back into the other dimensions... as you navigate from state to state, whatever those states might be depending upon the dimension you're thinking about. Unknown said... Hello Mr. Bryanton. Thanks to your videos I could understand dimensions 4 through 8. But I really couldn't get the idea of the 9th. I understood how much freedom it gives but really can't tell the difference between 8 and 9. Unknown said... Something comes to our mind… we constantly think and we forget… where does information go? Sounds travel where? Why is destination predefined… theory of nothingness… some reality in the universe is made up of matter… I say some because what is beyond it? We know what we comprehend is true or rational… on what basis? On the basis of filling up the emptiness of what is retained by our mind… why do we sleep? To fill emptiness with emptiness… to relax… why are people not satisfied? They want their emptiness to be filled… but why do they want to fill emptiness… what are the answers they look for? Desires keep on coming and going, likewise thoughts… they keep on coming and going? Go where? There are dimensions in our world but we know nothing about them precisely… Why? The power of the unknown is so strong that we cannot come out of it? Much stronger than gravity… more than beliefs? And of course the insurance business? Why do we lose consciousness? Or why are people conscious? Why do people go into a coma? Or why isn't everybody in a coma? Information comes and goes out of a black box… radiating… why do we sometimes go on staring at something… why do we lose track of time at that time? Is it because in actuality nothing is there? Absurd, right? But think of the Matrix movie… was it real? No, it was a movie, but why did you think of that movie?
It was not there in your mind a second ago… I made you think of it… likewise we are surrounded with information for nothing… meaning we are constantly thinking of something to absolve nothing. Tenth Dimension Vlog playlist
Section 13.7: The Coulomb Potential for the Idealized Hydrogen Atom

Now consider the radial part of the Schrödinger equation in Eq. (13.21), written as

\[ \left[ -\frac{\hbar^2}{2\mu}\frac{1}{r^2}\frac{d}{dr}\left(r^2\frac{d}{dr}\right) + \frac{l(l+1)\hbar^2}{2\mu r^2} + V(r) \right] R(r) = E\,R(r). \qquad (13.26) \]

We want to rewrite these terms, especially the derivative, in a more standard form. The substitution R(r) = u(r)/r simplifies the derivative term in the above equation to yield

\[ \left[ -\frac{\hbar^2}{2\mu}\frac{d^2}{dr^2} + \frac{l(l+1)\hbar^2}{2\mu r^2} + V(r) \right] u(r) = E\,u(r). \]

When we find solutions to this equation, we must keep in mind that we are solving for u(r), not R(r), and that we must divide u(r) by r to obtain the true radial wave function, R(r). Before looking at a particular V(r), we look at the general equation and interpret terms. We see what we can interpret as an effective potential:

\[ V_{\text{eff}} = \frac{l(l+1)\hbar^2}{2\mu r^2} + V(r), \]

where l(l+1)ħ²/(2μr²) is the potential associated with the so-called centrifugal barrier. Now consider the following Coulomb potential, V = −e²/r, which describes the potential energy function for an electron in the proximity of a proton: the potential responsible for the basic structure of the hydrogen atom.⁴ When we insert this Coulomb potential in the radial differential equation, we have a differential equation that describes the electron:

\[ \left[ -\frac{\hbar^2}{2\mu_e}\frac{d^2}{dr^2} + \frac{l(l+1)\hbar^2}{2\mu_e r^2} - \frac{e^2}{r} \right] u(r) = E\,u(r), \qquad (13.27) \]

where μ_e is the electron's mass and e is the charge of the electron. In Animation 1 the effective potential for the Coulomb problem, V_eff = l(l+1)ħ²/(2μ_e r²) + V(r), is shown for l = 0, 1, 2. Notice that as l gets bigger, the centrifugal barrier increases as well. To get this equation into standard form, divide by −ħ²/2μ_e, which yields

\[ \left[ \frac{d^2}{dr^2} - \frac{l(l+1)}{r^2} + \frac{2\mu_e e^2}{\hbar^2 r} \right] u(r) = -\frac{2\mu_e E}{\hbar^2}\,u(r). \]

We now define κ² = −2μ_e E/ħ² (which is real since E < 0) and the dimensionless quantities ρ = κr and ρ₀ = 2μ_e e²/(κħ²). Making these substitutions yields

\[ \left[ \frac{d^2}{d\rho^2} - \frac{l(l+1)}{\rho^2} + \frac{\rho_0}{\rho} - 1 \right] u = 0. \]

We begin our analysis of the solutions of this differential equation by considering the two special limiting cases:

Case I: ρ → 0 (r → 0). In this case the centrifugal barrier dominates, leaving

\[ \left[ \frac{d^2}{d\rho^2} - \frac{l(l+1)}{\rho^2} \right] u = 0. \qquad (13.28) \]

We find that the general solution to this equation is u = Aρ^(l+1) + Bρ^(−l), and therefore the normalizable piece is just u ∝ ρ^(l+1) ∝ r^(l+1).

Case II: ρ → ∞ (r → ∞). In this case the centrifugal term, l(l+1)/ρ², and the potential energy term, ρ₀/ρ, vanish at large ρ. This leaves

\[ \left[ \frac{d^2}{d\rho^2} - 1 \right] u = 0, \]

which for E < 0 gives the normalizable solution u = exp(−ρ) = exp(−κr).

Now that we have an idea of what the bound states should look like asymptotically, we can find the entire solution. After much algebra we first find that

\[ E_n = -\frac{\mu_e e^4}{2 n^2 \hbar^2} = -\mathcal{R}\,\frac{1}{n^2}, \]

where \(\mathcal{R} = \mu_e e^4/(2\hbar^2)\) is the Rydberg and is 13.6 eV. This result describes the energy levels for the Coulomb problem, and hence, the basic energy-level structure for the hydrogen atom. We now simplify ρ. We use ρ = κr and the definition of κ to find

\[ \rho = \frac{\mu_e e^2}{n\hbar^2}\,r = \frac{r}{n a_0}, \]

where a₀ = ħ²/(μ_e e²) is the Bohr radius. We can make use of further substitutions, this time yielding the radial wave functions

\[ R_{nl}(r) = A_{nl}\, e^{-r/na_0}\, \frac{(r/na_0)^{l+1}}{r}\, v_n(r/na_0), \qquad (13.29) \]

where \(A_{nl} = \left[\left(\frac{2}{na_0}\right)^{3}\frac{(n-l-1)!}{2n\,[(n+l)!]^{3}}\right]^{1/2}\) is the normalization for the radial energy eigenfunction. In addition, \(v_n(r/na_0) = L^{2l+1}_{n-l-1}(2r/na_0)\) are the associated Laguerre polynomials.
The unnormalized radial wave functions, above without A_nl, are shown in Animation 2. In the animation, distances are given in terms of Bohr radii, a₀. You may enter values of n and l and see the radial energy eigenfunction that results. We find that the entire energy eigenfunction, properly normalized, is simply the product of the radial and angular solutions:

\[ \psi_{nlm} = R_{nl}(r)\, Y_{lm}(\theta,\varphi). \]

Note that given that there are n² states per n value, and that the energy just depends on n, the solutions have an n² energy degeneracy.

⁴ In Chapter 14 we discuss corrections to the Coulomb potential which are responsible for the remaining structure in the hydrogen spectral lines. We also generalize the Coulomb potential to include hydrogenic atoms, those with one electron and Z protons.
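As a quick numerical companion to Eq. (13.29) (my own sketch, not part of the Physlet text; it assumes Python with NumPy and SciPy), the code below evaluates the hydrogen radial functions and checks the normalization ∫₀^∞ R_nl² r² dr = 1. SciPy's genlaguerre uses the modern convention for the associated Laguerre polynomials, so the prefactor carries (n+l)! rather than the [(n+l)!]³ of the older convention quoted above; the resulting R_nl(r) are the same functions.

```python
import numpy as np
from math import factorial
from scipy.special import genlaguerre

# Evaluate hydrogen radial wave functions R_nl(r) (in units of the Bohr radius a0)
# and check the normalization integral of R_nl^2 r^2 dr numerically.
# NOTE: scipy's genlaguerre follows the modern convention for the associated
# Laguerre polynomials, so the prefactor uses (n+l)! instead of [(n+l)!]^3.
a0 = 1.0   # Bohr radius

def R_nl(n, l, r):
    rho = 2.0 * r / (n * a0)
    prefactor = np.sqrt((2.0 / (n * a0))**3
                        * factorial(n - l - 1) / (2.0 * n * factorial(n + l)))
    return prefactor * np.exp(-rho / 2.0) * rho**l * genlaguerre(n - l - 1, 2 * l + 1)(rho)

r = np.linspace(1e-6, 60.0 * a0, 20000)
for n, l in [(1, 0), (2, 0), (2, 1), (3, 2)]:
    norm = np.trapz(R_nl(n, l, r)**2 * r**2, r)   # should come out ~1
    print(f"n = {n}, l = {l}:  integral of R^2 r^2 dr = {norm:.6f}")
```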
Atomic units
From Wikipedia, the free encyclopedia
Atomic units (au or a.u.) form a system of natural units which is especially convenient for atomic physics calculations. There are two different kinds of atomic units, Hartree atomic units[1] and Rydberg atomic units, which differ in the choice of the unit of mass and charge. This article deals with Hartree atomic units. In atomic units, the numerical values of the following four fundamental physical constants are all unity by definition. Atomic units are often abbreviated "a.u." or "au", not to be confused with the same abbreviation used also for astronomical units, arbitrary units, and absorbance units in different contexts.

Usage and notation
Atomic units, like SI units, have a unit of mass, a unit of length, and so on. However, the use and notation is somewhat different from SI. Suppose a particle has a mass m equal to 3.4 times the mass of the electron. The value of m can be written in three ways:
• "m = 3.4 m_e". This is the clearest notation (but least common), where the atomic unit is included explicitly as a symbol.[2]
• "m = 3.4 a.u." ("a.u." means "expressed in atomic units"). This notation is ambiguous: here it means that the mass m is 3.4 times the atomic unit of mass. But if a length L were 3.4 times the atomic unit of length, the equation would look the same, "L = 3.4 a.u." The dimension needs to be inferred from context.[2]
• "m = 3.4". This notation is similar to the previous one and has the same dimensional ambiguity. It comes from formally setting the atomic units to 1, in this case m_e = 1, so 3.4 m_e = 3.4.[3][4]

Fundamental atomic units
These four fundamental constants form the basis of the atomic units (see above). Therefore, their numerical values in the atomic units are unity by definition. Values in SI units:[5]
• mass: electron rest mass, m_e = 9.10938291(40)×10⁻³¹ kg
• charge: elementary charge, e = 1.602176565(35)×10⁻¹⁹ C
• angular momentum: reduced Planck constant, ħ = h/(2π) = 1.054571726(47)×10⁻³⁴ J·s
• electric constant: Coulomb force constant, 1/(4πε₀) = 8.9875517873681×10⁹ kg·m³·s⁻²·C⁻²

Related physical constants
Dimensionless physical constants retain their values in any system of units. Of particular importance is the fine-structure constant α = e²/((4πε₀)ħc) ≈ 1/137. This immediately gives the value of the speed of light, expressed in atomic units. Some physical constants expressed in atomic units:
• speed of light: c = 1/α ≈ 137
• classical electron radius: r_e = (1/(4πε₀)) e²/(m_e c²) = α² ≈ 5.32×10⁻⁵
• proton mass: m_p/m_e ≈ 1836

Derived atomic units
Below are given a few derived units. Some of them have proper names and symbols assigned, as indicated in the table. k_B is the Boltzmann constant.
Derived atomic units (with values in SI and in more common units):
• length: Bohr radius, a₀ = 4πε₀ħ²/(m_e e²) = ħ/(m_e c α) = 5.2917720859(36)×10⁻¹¹ m = 0.052918 nm = 0.52918 Å
• energy: Hartree energy, E_h = m_e e⁴/(4πε₀ħ)² = α² m_e c² = 4.35974417(75)×10⁻¹⁸ J = 27.211 eV = 627.509 kcal·mol⁻¹
• time: ħ/E_h = 2.418884326505(16)×10⁻¹⁷ s
• velocity: a₀E_h/ħ = αc = 2.1876912633(73)×10⁶ m·s⁻¹
• force: E_h/a₀ = 8.2387225(14)×10⁻⁸ N = 82.387 nN = 51.421 eV·Å⁻¹
• temperature: E_h/k_B = 3.1577464(55)×10⁵ K
• pressure: E_h/a₀³ = 2.9421912(19)×10¹³ Pa
• electric field: E_h/(ea₀) = 5.14220652(11)×10¹¹ V·m⁻¹ = 5.14220652(11) GV·cm⁻¹ = 51.4220652(11) V·Å⁻¹
• electric dipole moment: ea₀ = 8.47835326(19)×10⁻³⁰ C·m = 2.541746 D

SI and Gaussian-CGS variants, and magnetism-related units
There are two common variants of atomic units, one where they are used in conjunction with SI units for electromagnetism, and one where they are used with Gaussian-CGS units.[6] Although the units written above are the same either way (including the unit for electric field), the units related to magnetism are not. In the SI system, the atomic unit for magnetic field is 1 a.u. = ħ/(e a₀²) = 2.35×10⁵ T = 2.35×10⁹ G, and in the Gaussian-CGS unit system, the atomic unit for magnetic field is 1 a.u. = e/a₀² = 1.72×10³ T = 1.72×10⁷ G. (These differ by a factor of α.) Other magnetism-related quantities are also different in the two systems. An important example is the Bohr magneton: in SI-based atomic units,[7] μ_B = eħ/(2m_e) = 1/2 a.u., while in Gaussian-based atomic units,[8] μ_B = eħ/(2m_e c) = α/2 ≈ 3.6×10⁻³ a.u.

Bohr model in atomic units
Atomic units are chosen to reflect the properties of electrons in atoms. This is particularly clear from the classical Bohr model of the hydrogen atom in its ground state. The ground state electron orbiting the hydrogen nucleus has (in the classical Bohr model):
• Orbital velocity = 1
• Orbital radius = 1
• Angular momentum = 1
• Orbital period = 2π
• Ionization energy = 1/2
• Electric field (due to nucleus) = 1
• Electrical attractive force (due to nucleus) = 1

Non-relativistic quantum mechanics in atomic units
The Schrödinger equation for an electron in SI units is
−(ħ²/2m_e)∇²ψ(r, t) + V(r)ψ(r, t) = iħ ∂ψ(r, t)/∂t.
The same equation in au is
−(1/2)∇²ψ(r, t) + V(r)ψ(r, t) = i ∂ψ(r, t)/∂t.
For the special case of the electron around a hydrogen atom, the Hamiltonian in SI units is
Ĥ = −(ħ²/2m_e)∇² − (1/(4πε₀)) e²/r,
while atomic units transform the preceding equation into
Ĥ = −(1/2)∇² − 1/r.

Comparison with Planck units
Both Planck units and au are derived from certain fundamental properties of the physical world, and are free of anthropocentric considerations.
It should be kept in mind that au were designed for atomic-scale calculations in the present-day universe, while Planck units are more suitable for quantum gravity and early-universe cosmology. Both au and Planck units normalize the reduced Planck constant. Beyond this, Planck units normalize to 1 the two fundamental constants of general relativity and cosmology: the gravitational constant G and the speed of light in a vacuum, c. Atomic units, by contrast, normalize to 1 the mass and charge of the electron, and, as a result, the speed of light in atomic units is a large value, 1/α ≈ 137. The orbital velocity of an electron around a small atom is of the order of 1 in atomic units, so the discrepancy between the velocity units in the two systems reflects the fact that electrons orbit small atoms much slower than the speed of light (around 2 orders of magnitude slower). There are much larger discrepancies in some other units. For example, the unit of mass in atomic units is the mass of an electron, while the unit of mass in Planck units is the Planck mass, a mass so large that if a single particle had that much mass it might collapse into a black hole. Indeed, the Planck unit of mass is 22 orders of magnitude larger than the au unit of mass. Similarly, there are many orders of magnitude separating the Planck units of energy and length from the corresponding atomic units.

References
1. Hartree, D. R. (1928). "The Wave Mechanics of an Atom with a Non-Coulomb Central Field. Part I. Theory and Methods". Mathematical Proceedings of the Cambridge Philosophical Society (Cambridge University Press) 24 (1): 89–110. doi:10.1017/S0305004100011919.
2. Pilar, Frank L. (2001). Elementary Quantum Chemistry. Dover Publications. p. 155. ISBN 978-0-486-41464-5.
3. Bishop, David M. (1993). Group Theory and Chemistry. Dover Publications. p. 217. ISBN 978-0-486-67355-4.
4. Drake, Gordon W. F. (2006). Springer Handbook of Atomic, Molecular, and Optical Physics (2nd ed.). Springer. p. 5. ISBN 978-0-387-20802-2.
5. Template:Cite article
6. "A note on Units". Physics 7550 — Atomic and Molecular Spectra. University of Colorado lecture notes.
7. Chis, Vasile. "Atomic Units; Molecular Hamiltonian; Born–Oppenheimer Approximation". Molecular Structure and Properties Calculations. Babes-Bolyai University lecture notes.
8. Budker, Dmitry; Kimball, Derek F.; DeMille, David P. (2004). Atomic Physics: An Exploration through Problems and Solutions. Oxford University Press. p. 380. ISBN 978-0-19-850950-9.
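A minimal sketch, assuming Python and using the rounded constant values quoted in the tables above, of how the derived atomic units follow from the four fundamental ones (illustrative only; a real calculation should use a proper constants library):

```python
# Derive Hartree atomic units from the four fundamental constants quoted above
# (rounded values copied from the tables; illustrative only).
m_e  = 9.10938291e-31        # kg,   electron rest mass      = 1 a.u. of mass
e    = 1.602176565e-19       # C,    elementary charge       = 1 a.u. of charge
hbar = 1.054571726e-34       # J*s,  reduced Planck constant = 1 a.u. of action
k_e  = 8.9875517873681e9     # kg*m^3*s^-2*C^-2, Coulomb constant 1/(4*pi*eps0)
c    = 299792458.0           # m/s,  speed of light (only needed for alpha)

a_0   = hbar**2 / (k_e * m_e * e**2)       # Bohr radius: a.u. of length
E_h   = k_e**2 * m_e * e**4 / hbar**2      # Hartree energy: a.u. of energy
t_au  = hbar / E_h                         # a.u. of time
v_au  = a_0 * E_h / hbar                   # a.u. of velocity (= alpha * c)
alpha = v_au / c                           # fine-structure constant

print(f"Bohr radius    a0   = {a_0:.6e} m")
print(f"Hartree energy E_h  = {E_h:.6e} J  =  {E_h / e:.3f} eV")
print(f"atomic time         = {t_au:.6e} s")
print(f"speed of light      = {1.0 / alpha:.2f} a.u.   (i.e. 1/alpha)")
```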
Isospectral Hermitian counterpart of complex non-Hermitian Hamiltonian \(p^{2}-gx^{4}+a/x^{2}\)
Asiri Nanayakkara, Thilagarajah Mathanaranjan
In this paper we show that the non-Hermitian Hamiltonians \(H=p^{2}-gx^{4}+a/x^2\) and the conventional Hermitian Hamiltonians \(h=p^2+4gx^{4}+bx\) (\(a,b\in \mathbb{R}\)) are isospectral if \(a=(b^2-4g\hbar^2)/16g\) and \(a\geq -\hbar^2/4\). This new class includes the equivalent non-Hermitian – Hermitian Hamiltonian pair, \(p^{2}-gx^{4}\) and \(p^{2}+4gx^{4}-2\hbar \sqrt{g}x\), found by Jones and Mateo six years ago as a special case. When \(a=\left(b^{2}-4g\hbar ^{2}\right) /16g\) and \(a<-\hbar^2/4\), although \(h\) and \(H\) are still isospectral, \(b\) is complex and \(h\) is no longer the Hermitian counterpart of \(H\).
Mathematical Physics (math-ph)

PT spectroscopy of the Rabi problem
Yogesh N. Joglekar, Rahul Marathe, P. Durganandini, Rajeev K. Pathak
We investigate the effects of a time-periodic, non-hermitian, PT-symmetric perturbation on a system with two (or few) levels, and obtain its phase diagram as a function of the perturbation strength and frequency. We demonstrate that when the perturbation frequency is close to one of the system resonances, even a vanishingly small perturbation leads to PT symmetry breaking. We also find a restored PT-symmetric phase at high frequencies, and at moderate perturbation strengths, we find multiple frequency windows where PT-symmetry is broken and restored. Our results imply that the PT-symmetric Rabi problem shows surprisingly rich phenomena absent in its hermitian or static counterparts.

Selective enhancement of topologically induced interface states
C. Poli, M. Bellec, U. Kuhl, F. Mortessagne, H. Schomerus
An attractive mechanism to induce robust spatially confined states utilizes interfaces between regions with topologically distinct gapped band structures. For electromagnetic waves, this mechanism can be realized in two dimensions by breaking symmetries in analogy to the quantum Hall effect or by employing analogies to the quantum spin Hall effect, while in one dimension it can be obtained by geometric lattice modulation. Induced by the presence of the interface, a topologically protected, exponentially confined state appears in the middle of the band gap. The intrinsic robustness of such states raises the question whether their properties can be controlled and modified independently of the other states in the system. Here, we draw on concepts from passive non-hermitian parity-time (PT)-symmetry to demonstrate the selective control and enhancement of a topologically induced state in a one-dimensional microwave set-up. In particular, we show that the state can be isolated from losses that affect all other modes in the system, which enhances its visibility in the temporal evolution of a pulse. The intrinsic robustness of the state to structural disorder persists in the presence of the losses. The combination of concepts from topology and non-hermitian symmetry is a promising addition to the set of design tools for optical structures with novel functionality.

Supercritical blowup in coupled parity-time-symmetric nonlinear Schrödinger equations
João-Paulo Dias, Mário Figueira, Vladimir V. Konotop, Dmitry A. Zezyulin
Analysis of PDEs (math.AP); Optics (physics.optics)

\(PT\) Symmetry, Conformal Symmetry, and the Metrication of Electromagnetism
Philip D. Mannheim

Multi-stability and condensation of exciton-polaritons below threshold

Unidirectionally Invisible Potentials as Local Building Blocks of all Scattering Potentials
Ali Mostafazadeh
We give a complete solution of the problem of constructing a scattering potential v(x) that possesses scattering properties of one’s choice at an arbitrary prescribed wavenumber. Our solution involves expressing v(x) as the sum of at most six unidirectionally invisible finite-range potentials for which we give explicit formulas. Our results can be employed for designing optical potentials. We discuss its application in modeling threshold lasers, coherent perfect absorbers, and bidirectionally and unidirectionally reflectionless absorbers, amplifiers, and phase shifters.
Quantum Physics (quant-ph); Optics (physics.optics)

Explicit energy expansion for general odd degree polynomial potentials
Asiri Nanayakkara, Thilagarajah Mathanaranjan
In this paper we derive an almost explicit analytic formula for asymptotic eigenenergy expansion of arbitrary odd degree polynomial potentials of the form \(V(x)=(ix)^{2N+1}+\beta _{1}x^{2N}+\beta _{2}x^{2N-1}+\cdots +\beta _{2N}x\), where the \(\beta _{k}\)'s are real or complex for \(1\leq k\leq 2N\). The formula can be used to find semiclassical analytic expressions for eigenenergies up to any order very efficiently. Each term of the expansion is given explicitly as a multinomial of the parameters \(\beta _{1},\beta _{2},\ldots\) and \(\beta _{2N}\) of the potential. Unlike in the even degree polynomial case, the highest order term in the potential is pure imaginary and hence the system is non-Hermitian. Therefore all the integrations have been carried out along a contour enclosing two complex turning points which lies within a wedge in the complex plane. With the help of some examples we demonstrate the accuracy of the method for both real and complex eigenspectra.
Mathematical Physics (math-ph)

Hofstadter’s Cocoon
Katherine Jones-Smith, Connor Wallace
Hofstadter showed that the energy levels of electrons on a lattice plotted as a function of magnetic field form a beautiful structure now referred to as “Hofstadter’s butterfly”. We study a non-Hermitian continuation of Hofstadter’s model; as the non-Hermiticity parameter \(g\) increases past a sequence of critical values the eigenvalues successively go complex in a sequence of “double-pitchfork bifurcations” wherein pairs of real eigenvalues degenerate and then become complex conjugate pairs. The associated wavefunctions undergo a spontaneous symmetry breaking transition that we elucidate. Beyond the transition a plot of the real parts of the eigenvalues against magnetic field resembles the Hofstadter butterfly; a plot of the imaginary parts plotted against magnetic fields forms an intricate structure that we call the Hofstadter cocoon. The symmetries of the cocoon are described. Hatano and Nelson have studied a non-Hermitian continuation of the Anderson model of localization that has close parallels to the model studied here. The relationship of our work to that of Hatano and Nelson and to PT transitions studied in PT quantum mechanics is discussed.

\(\mathcal{PT}\)-symmetric Hamiltonian Model and Exactly Solvable Potentials
Özlem Yeşiltaş
Searching for non-Hermitian (parity-time) \(\mathcal{PT}\)-symmetric Hamiltonians with real spectra has been acquiring much interest for fourteen years.
In this article, we have introduced a \(\mathcal{PT}\)-symmetric non-Hermitian Hamiltonian model which is given as \(\hat{\mathcal{H}}=\omega (\hat{b}^\dagger\hat{b}+\frac{1}{2})+\alpha (\hat{b}^{2}-(\hat{b}^\dagger)^{2})\), where \(\omega\) and \(\alpha\) are real constants and \(\hat{b}\) and \(\hat{b}^\dagger\) are first order differential operators. Moreover, pseudo-Hermiticity, which is a generalization of \(\mathcal{PT}\) symmetry, has been attracting a growing interest \cite{mos}. Because the Hamiltonian \(\mathcal{H}\) is pseudo-Hermitian, we have obtained the Hermitian equivalent of \(\mathcal{H}\), which is in Sturm–Liouville form and leads to exactly solvable potential models: the effective screened potential and the hyperbolic Rosen–Morse II potential. \(\mathcal{H}\) is called pseudo-Hermitian if there exists a Hermitian and invertible operator \(\eta\) satisfying \(\mathcal{H^\dagger}=\eta \mathcal{H} \eta^{-1}\). For the Hermitian Hamiltonian \(h\), one can write \(h=\rho \mathcal{H} \rho^{-1}\) where \(\rho=\sqrt{\eta}\) is unitary. Using this \(\rho\) we have obtained a physical Hamiltonian \(h\) for each case. Then, the Schrödinger equation is solved exactly using the Shape Invariance method of Supersymmetric Quantum Mechanics. The mapping function \(\rho\) is obtained for each potential case.
Quantum Physics (quant-ph)
Wave function
From Wikipedia, the free encyclopedia
In quantum mechanics, the wave function, usually represented by Ψ or ψ, describes the probability of finding an electron somewhere in its matter wave. To be more precise, the square of the absolute value of the wave function gives the probability of finding the electron in a given region, since the wave function itself is in general a complex number. The wave function concept was first introduced in the legendary Schrödinger equation.
Mathematical interpretation
The wave function is found by solving the Schrödinger equation:
i\hbar\frac{\partial}{\partial t} \Psi(\mathbf{x},\,t)=\hat H \Psi(\mathbf{x},\,t)
where i is the imaginary unit, Ψ(x, t) is the wave function, ħ is the reduced Planck constant, t is time, x is position in space, and Ĥ is a mathematical object known as the Hamiltonian operator. The reader will note that the symbol \frac{\partial}{\partial t} denotes that a partial derivative of the wave function is being taken.
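A small sketch of how the wave function is used in practice (my own example, not part of the article; it assumes Python with NumPy, and the infinite-square-well ground state is chosen only because its wave function is simple and well known): normalize ψ so the total probability is 1, then integrate |ψ|² over a region to get the probability of finding the particle there.

```python
import numpy as np

# Ground state of an infinite square well of width L: psi(x) ~ sin(pi x / L).
# Normalize so the total probability is 1, then integrate |psi|^2 over [0, L/4].
L = 1.0
x = np.linspace(0.0, L, 10001)
psi = np.sin(np.pi * x / L).astype(complex)       # unnormalized wave function

psi /= np.sqrt(np.trapz(np.abs(psi)**2, x))       # normalization
density = np.abs(psi)**2                          # probability density |psi|^2

in_region = (x >= 0.0) & (x <= L / 4.0)
prob = np.trapz(density[in_region], x[in_region])
print(f"total probability   = {np.trapz(density, x):.6f}")
print(f"P(0 <= x <= L/4)    = {prob:.4f}   (exact value: 1/4 - 1/(2*pi) ~ 0.0908)")
```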
Periodic systems of small molecules
From Wikipedia, the free encyclopedia
Periodic systems of molecules are charts of molecules similar to the periodic table of the elements. Construction of such charts was initiated in the early 20th century and is still ongoing. It is commonly believed that the periodic law, represented by the periodic chart, is echoed in the behavior of molecules, at least small molecules. For instance, if one replaces any one of the atoms in a triatomic molecule with a rare gas atom, there will be a drastic change in the molecule’s properties. Several goals could be accomplished by constructing an explicit representation of this periodic law as manifested in molecules: (1) a classification scheme for the vast number of molecules that exist, starting with small ones having just a few atoms, for use as a teaching aid and tool for archiving data, (2) forecasting data for molecular properties based on the classification scheme, and (3) a sort of unity with the periodic chart and the periodic system of fundamental particles.[1]
Physical periodic systems of molecules
Periodic systems (or charts or tables) of molecules are the subjects of two reviews.[2][3] The systems of diatomic molecules include those of (1) H. D. W. Clark,[4][5] and (2) F.-A. Kong,[6][7] which somewhat resemble the atomic chart. The system of R. Hefferlin et al.[8][9] was developed from (3) a three-dimensional to (4) a four-dimensional system, the Kronecker product of the element chart with itself:
\[ \begin{pmatrix}\mathrm{Li} & \mathrm{Be}\\ \mathrm{Na} & \mathrm{Mg}\end{pmatrix} \otimes \begin{pmatrix}\mathrm{Li} & \mathrm{Be}\\ \mathrm{Na} & \mathrm{Mg}\end{pmatrix} = \begin{pmatrix}\mathrm{Li_2} & \mathrm{LiBe} & \mathrm{BeLi} & \mathrm{Be_2}\\ \mathrm{LiNa} & \mathrm{LiMg} & \mathrm{BeNa} & \mathrm{BeMg}\\ \mathrm{NaLi} & \mathrm{NaBe} & \mathrm{MgLi} & \mathrm{MgBe}\\ \mathrm{Na_2} & \mathrm{NaMg} & \mathrm{MgNa} & \mathrm{Mg_2}\end{pmatrix} \]
The Kronecker product of a hypothetical four-element periodic chart. The sixteen molecules, some of which are redundant, suggest a hypercube, which in turn suggests that the molecules exist in a four-dimensional space; the coordinates are the period numbers and group numbers of the two constituent atoms.[10]
A totally different kind of periodic system is (5) that of G. V. Zhuvikin,[11][12] which is based on group dynamics. In all but the first of these cases, other researchers provided invaluable contributions and some of them are co-authors. The architectures of these systems have been adjusted by Kong[7] and Hefferlin[13] to include ionized species, and expanded by Kong,[7] Hefferlin,[9] and Zhuvikin and Hefferlin[12] to the space of triatomic molecules. These architectures are mathematically related to the chart of the elements. They were first called “physical” periodic systems.[2]
Chemical periodic systems of molecules
Other investigators have focused on building structures that address specific kinds of molecules such as alkanes (Morozov);[14] benzenoids (Dias);[15][16] functional groups containing fluorine, oxygen, nitrogen and sulfur (Haas);[17][18] or a combination of core charge, number of shells, redox potentials, and acid-base tendencies (Gorski).[19][20] These structures are not restricted to molecules with a given number of atoms and they bear little resemblance to the element chart; they are called “chemical” systems. Chemical systems do not start with the element chart, but instead start with, for example, formula enumerations (Dias), the hydrogen-displacement principle (Haas), reduced potential curves (Jenz),[21] a set of molecular descriptors (Gorski), and similar strategies. E. V.
E. V. Babaev[22] has erected a hyperperiodic system which in principle includes all of the systems described above except those of Dias, Gorski, and Jenz.

Bases of the element chart and periodic systems of molecules

The periodic chart of the elements, like a small stool, is supported by three legs: (a) the Bohr–Sommerfeld solar-system atomic model (with electron spin and the Madelung principle), which provides the magic-number elements that end each row of the table and gives the number of elements in each row, (b) solutions to the Schrödinger equation, which provide the same information, and (c) data provided by experiment, by the solar system model, and by solutions to the Schrödinger equation. The Bohr–Sommerfeld model should not be ignored: it gave explanations for the wealth of spectroscopic data that were already in existence before the advent of wave mechanics.

Each of the molecular systems listed above, and those not cited, is also supported by three legs: (a) physical and chemical data arranged in graphical or tabular patterns (which, for physical periodic systems at least, echo the appearance of the element chart), (b) group dynamic, valence-bond, molecular-orbital, and other fundamental theories, and (c) summing of atomic period and group numbers (Kong), the Kronecker product and exploitation of higher dimensions (Hefferlin), formula enumerations (Dias), the hydrogen-displacement principle (Haas), reduced potential curves (Jenz), and similar strategies.

A chronological list of the contributions to this field[3] contains almost thirty entries dated 1862, 1907, 1929, 1935, and 1936; then, after a pause, a higher level of activity beginning with the 100th anniversary of Mendeleev's publication of his element chart, 1969. Many publications on periodic systems of molecules include some predictions of molecular properties, but starting at the turn of the century there have been serious attempts to use periodic systems for the prediction of progressively more precise data for various numbers of molecules. Among these attempts are those of Kong[7] and Hefferlin.[23][24]

A collapsed-coordinate system for triatomic molecules

The collapsed-coordinate system has three independent variables instead of the six demanded by the Kronecker-product system. The reduction of independent variables makes use of three properties of gas-phase, ground-state, triatomic molecules. (1) In general, whatever the total number of constituent atomic valence electrons, data for isoelectronic molecules tend to be more similar than for adjacent molecules that have more or fewer valence electrons; for triatomic molecules, the electron count is the sum of the atomic group numbers (the sum of the column numbers 1 to 8 in the p-block of the periodic chart of the elements, C1+C2+C3). (2) Linear/bent triatomic molecules appear to be slightly more stable, other parameters being equal, if carbon is the central atom. (3) Most physical properties of diatomic molecules (especially spectroscopic constants) are closely monotonic with respect to the product of the two atomic period (or row) numbers, R1 and R2; for triatomic molecules, the monotonicity is close with respect to R1R2+R2R3 (which reduces to R1R2 for diatomic molecules). Therefore, the coordinates x, y, and z of the collapsed-coordinate system are C1+C2+C3, C2, and R1R2+R2R3.
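A minimal sketch of the coordinate assignment just described, in Python. The atom ordering (atom 2 taken as the central atom) and the sample molecule are illustrative assumptions; group numbers use the 1-to-8 main-group counting mentioned above.

```python
# Collapsed coordinates for a gas-phase triatomic molecule.
# Each atom is given as (group number C, period number R); atom 2 is assumed central.
def collapsed_coordinates(atom1, atom2, atom3):
    (c1, r1), (c2, r2), (c3, r3) = atom1, atom2, atom3
    x = c1 + c2 + c3          # total valence-electron count (sum of group numbers)
    y = c2                    # group number of the central atom
    z = r1 * r2 + r2 * r3     # period-number product term
    return x, y, z

# Example (assumed for illustration): CO2 as O(6, 2) - C(4, 2) - O(6, 2)
print(collapsed_coordinates((6, 2), (4, 2), (6, 2)))   # -> (16, 4, 8)
```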
Multiple-regression predictions of four property values for molecules with tabulated data agree very well with the tabulated data (the error measures of the predictions include the tabulated data in all but a few cases).[25] 1. ^ Chung, D.-Y. (2000). "The Periodic Table of Elementary Particles". arXiv:0003023. 2. ^ a b Hefferlin, R. and Burdick, G.W. 1994. Fizicheskie i khimicheskie periodicheskie sistemy Molekul, Zhurnal Obshchei Xhimii, vol. 64, pp. 1870–1885. English translation: "Periodic Systems of Molecules: Physical and Chemical". Russ. J. Gen. Chem. 64: 1659–1674.  3. ^ a b Hefferlin, R. 2006. The Periodic Systems of Molecules pp. 221 ff, in Baird, D., Scerri, E., and McIntyre, L. (Eds.) “The Philosophy of Chemistry, Synthesis of a New Discipline,” Springer, Dordrecht ISBN 1-4020-3256-0. 4. ^ Clark, C. H. D. (1935). "The periodic Groups of Non-Hydride Di-Atoms". Trans. Faraday Soc 31: 1017–1036. doi:10.1039/tf9353101017.  5. ^ Clark, C. H. D (1940). "Systematics of Band-Spectral Constants. Part V. Interrelations of Dissociation Energy and Equilibrium Internuclear Distance of Di-Atoms in Ground States". Trans. Faraday Soc. 36: 370–376.  6. ^ Kong, F (1982). "The Periodicity of Diatomic Molecules". J. Mol. Struct 90: 17–28. Bibcode:1982JMoSt..90...17K. doi:10.1016/0022-2860(82)90199-5.  7. ^ a b c d Kong, F. and Wu, W. 2010. Periodicity of Diatomic and Triatomic Molecules, Conference Proceedings of the 2010 Workshop on Mathematical Chemistry of the Americas. 8. ^ Hefferlin, R., Campbell, D. Gimbel, H. Kuhlman, and T. Cayton (1979). "The periodic table of diatomic molecules—I an algorithm for retrieval and prediction of spectrophysical properties". Quant. Spectrosc. Radiat. Transfer 21 (4): 315–336. Bibcode:1979JQSRT..21..315H. doi:10.1016/0022-4073(79)90063-3.  9. ^ a b Hefferlin, R (2008). "Kronecker-Product Periodic Systems of Small Gas-Phase Molecules and the Search for Order in Atomic Ensembles of Any Phase". Comb. Chem. High Through. Screen 11: 690–706.  10. ^ Gary W. Burdick and Ray Hefferlin, "Chapter 7. Data Location in a Four-Dimensional Periodic System of Diatomic Molecules", in Mihai V Putz, Ed., Chemical Information and Computational Challenges in the 21st Century, NOVA, 2011, ISBN 978-1-61209-712-1 11. ^ Zhuvikin, G.V. and R. Hefferlin (1983). Periodicheskaya Sistema Dvukhatomnykh Molekul: Teoretiko-gruppovoi Podkhod, Vestnik Leningradskovo Universiteta (16). pp. 10–16.  12. ^ a b Carlson, C.M., Cavanaugh, R.J, Hefferlin, R.A, and of Zhuvikin, G.V. (1996). "Periodic Systems of Molecular States from the Boson Group Dynamics of SO(3)xSU(2)s". Chem. Inf. Comp. Sci 36: 396–398. doi:10.1021/ci9500748.  13. ^ Hefferlin, R. et al. (1984). "Periodic Systems of N-atom Molecules". J. Quant. Spectrosc. Radiat. Transfer 32 (4): 257–268. Bibcode:1984JQSRT..32..257H. doi:10.1016/0022-4073(84)90098-0.  14. ^ Morozov, N. 1907. Stroeniya Veshchestva, I. D. Sytina Publication, Moscow. 15. ^ Dias, J.R. (1982). "A periodic Table of Polycyclic Aromatic Hydrocarbons. Isomer Enumeration of Fused Polycyclic Aromatic Hydrocarbons". Chem. Inf. Comput. Sci. 22: 15–22. doi:10.1021/ci00033a004.  16. ^ Dias, J. R. (1994). "Benzenoids to Fullerines and the Circumscribing and Leapfrog Algorithms". New J. Chem. 18: 667–673.  17. ^ Haas, A. (1982). "A new classification principle: the periodic system of functional groups". Chemicker-Zeitung 106: 239–248.  18. ^ Haas, A. (1988). "Das Elementverscheibungsprinzip und siene Bedeutung fur die Chemie der p-Block Elemente". Kontakte (Darmstadt) 3: 3–11.  19. 
^ Gorski, A (1971). "Morphological Classification of Simple Species. Part I. Fundamental Components of Chemical Structure". Roczniki Chemii 45: 1981–1989.  20. ^ Gorski, A (1973). "Morphological Classification of Simple Species. Part V. Evaluation of Structural Parameters of Species". Roczniki Chemii 47: 211–216.  21. ^ Jenz, F (1996). "The Reduced Potential Curve (RPC) Method and its Applications". Int. Rev. Phys. Chem. 15 (2): 467–523. Bibcode:1996IRPC...15..467J. doi:10.1080/01442359609353191.  22. ^ Babaev, E.V. and R. Hefferlin 1996. The Concepts of Periodicity and Hyperperiodicity: from Atoms to Molecules, in Rouvray, D.H. and Kirby, E.C., "Concepts in Chemistry," Research Studies Press Limited, Taunton, Somerset, England. 23. ^ Hefferlin, R. (2010). "Vibration Frequencies using Least squares and Neural Networks for 50 new s and p Electron Diatomics". Quant. Spectr. Radiat. Transf. 111: 71–77. Bibcode:2010JQSRT.111...71H. doi:10.1016/j.jqsrt.2009.08.004.  24. ^ Hefferlin, R. (2010). Internuclear Separations using Least squares and Neural Networks for 46 new s and p Electron Diatomics.  25. ^ Carlson, C., Gilkeson, J., Linderman, K., LeBlanc, S. Hefferlin, R., and Davis, B (1997). "Estimation of Properties of Triatomic Molecules from Tabulated Data Using Least-Squares Fitting". Croatica Chemica Acta 70: 479–508.
July 17, 2011

Breakthrough in the creation of massive numbers of entangled qubits

Olivier Pfister, a professor of physics in the University of Virginia's College of Arts & Sciences, has just published findings in the journal Physical Review Letters demonstrating a breakthrough in the creation of massive numbers of entangled qubits, more precisely a multilevel variant thereof called Qmodes. Pfister and researchers in his lab used sophisticated lasers to engineer 15 groups of four entangled Qmodes each, for a total of 60 measurable Qmodes, the most ever created. They believe they may have created as many as 150 groups, or 600 Qmodes, but could measure only 60 with the techniques they used.

Each Qmode is a sharply defined color of the electromagnetic field. In lieu of a coin toss measurement, the Qmode measurement outcomes are the number of quantum particles of light (photons) present in the field. Hundreds to thousands of Qmodes would be needed to create a quantum computer, depending on the task.

Physical Review Letters - Parallel Generation of Quadripartite Cluster Entanglement in the Optical Frequency Comb

Scalability and coherence are two essential requirements for the experimental implementation of quantum information and quantum computing. Here, we report a breakthrough toward scalability: the simultaneous generation of a record 15 quadripartite entangled cluster states over 60 consecutive cavity modes (Q modes), in the optical frequency comb of a single optical parametric oscillator. The amount of observed entanglement was constant over the 60 Q modes, thereby proving the intrinsic scalability of this system. The number of observable Q modes was restricted by technical limitations, and we conservatively estimate the actual number of similar clusters to be at least 3 times larger. This result paves the way to the realization of large entangled states for scalable quantum information and quantum computing.

Arxiv - Parallel Generation of Quadripartite Cluster Entanglement in the Optical Frequency Comb (11 pages - including supplemental information)

Conclusion - We demonstrated that the optical frequency comb of a single optical parametric oscillator lives up to its promise as an extremely scalable system for quantum information. We simultaneously generated a record number of quadripartite cluster states, in a record number of Qmodes, all equally entangled. The quantum comb was read by two-tone homodyne detection. Even though the size of the entangled states themselves is not a record, compared to the 14-ion GHZ state, we demonstrated stringent state preparation requirements for cluster states, a universal quantum computing resource. A practical quantum computer will require an increase in both the number of entangled modes and amount of squeezing. However, the projective measurements required for one-way quantum computing can already be performed on the clusters that we generated. Variants of our setup will allow the generation of multiple cube graphs, and a scalable quantum wire and square-grid lattice.

"With this result, we hope to move from this multitude of small-size quantum processors to a single, massively entangled quantum processor, a prerequisite for any quantum computer," Pfister said.

Pfister's group used an exotic laser called an optical parametric oscillator, which emitted entangled quantum electromagnetic fields (the Qmodes) over a rainbow of equally spaced colors called an "optical frequency comb."
With their experiments, Pfister's group completed a major step to confirm an earlier theoretical proof by Pfister and his collaborators that the quantum version of the optical frequency comb could be used to create a quantum computer.

"Some mathematical problems, such as factoring integers and solving the Schrödinger equation to model quantum physical systems, can be extremely hard to solve," Pfister said. "In some cases the difficulty is exponential, meaning that computation time doubles for every finite increase of the size of the integer, or of the system."

Randomness plays a greater role in quantum evolution than in classical evolution, Pfister said. Randomness is not an obstacle to deterministic predictions and control of quantum systems, but it does limit the way information can be encoded and read from qubits.

Physical Review Letters - One-Way Quantum Computing in the Optical Frequency Comb (2008)

One-way quantum computing allows any quantum algorithm to be implemented easily using just measurements. The difficult part is creating the universal resource, a cluster state, on which the measurements are made. We propose a scalable method that uses a single, multimode optical parametric oscillator (OPO). The method is very efficient and generates a continuous-variable cluster state, universal for quantum computation, with quantum information encoded in the quadratures of the optical frequency comb of the OPO.

"As quantum information became better understood, these limits were circumvented by the use of entanglement, deterministic quantum correlations between systems that behave randomly, individually," he said. "As far as we know, entanglement is actually the 'engine' of the exponential speed up in quantum computing."
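A tiny arithmetic illustration of the exponential scaling mentioned above, not taken from the paper: for ordinary two-level qubits, the number of basis states (and hence the number of classical amplitudes needed to describe a general entangled state) doubles with every added qubit. The Qmodes in this experiment are continuous-variable modes rather than two-level systems, so this only conveys the qualitative point.

```python
# Hilbert-space dimension for n two-level qubits grows as 2**n.
for n_qubits in (10, 20, 30, 40, 50):
    print(f"{n_qubits:2d} qubits -> {2**n_qubits:>16,d} basis states")
```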
[Figure: As conceived by Daniel Bernoulli in Hydrodynamica (1738), gases consist of numerous particles in rapid, random motion; he assumed that the pressure of a gas is produced by the direct impact of the particles on the walls of the container.]

physics, science that deals with the structure of matter and the interactions between the fundamental constituents of the observable universe. In the broadest sense, physics (from the Greek physikos) is concerned with all aspects of nature on both the macroscopic and submicroscopic levels. Its scope of study encompasses not only the behaviour of objects under the action of given forces but also the nature and origin of gravitational, electromagnetic, and nuclear force fields. Its ultimate objective is the formulation of a few comprehensive principles that bring together and explain all such disparate phenomena.

Physics is the basic physical science. Until rather recent times physics and natural philosophy were used interchangeably for the science whose aim is the discovery and formulation of the fundamental laws of nature. As the modern sciences developed and became increasingly specialized, physics came to denote that part of physical science not included in astronomy, chemistry, geology, and engineering. Physics plays an important role in all the natural sciences, however, and all such fields have branches in which physical laws and measurements receive special emphasis, bearing such names as astrophysics, geophysics, biophysics, and even psychophysics. Physics can, at base, be defined as the science of matter, motion, and energy. Its laws are typically expressed with economy and precision in the language of mathematics.

Both experiment, the observation of phenomena under conditions that are controlled as precisely as possible, and theory, the formulation of a unified conceptual framework, play essential and complementary roles in the advancement of physics. Physical experiments result in measurements, which are compared with the outcome predicted by theory. A theory that reliably predicts the results of experiments to which it is applicable is said to embody a law of physics. However, a law is always subject to modification, replacement, or restriction to a more limited domain, if a later experiment makes it necessary.

The ultimate aim of physics is to find a unified set of laws governing matter, motion, and energy at small (microscopic) subatomic distances, at the human (macroscopic) scale of everyday life, and out to the largest distances (e.g., those on the extragalactic scale). This ambitious goal has been realized to a notable extent. Although a completely unified theory of physical phenomena has not yet been achieved (and possibly never will be), a remarkably small set of fundamental physical laws appears able to account for all known phenomena. The body of physics developed up to about the turn of the 20th century, known as classical physics, can largely account for the motions of macroscopic objects that move slowly with respect to the speed of light and for such phenomena as heat, sound, electricity, magnetism, and light. The modern developments of relativity and quantum mechanics modify these laws insofar as they apply to higher speeds, very massive objects, and to the tiny elementary constituents of matter, such as electrons, protons, and neutrons.
The scope of physics

The traditionally organized branches or fields of classical and modern physics are delineated below.

Mechanics

[Figure: Illustration of Hooke's law of elasticity of materials, showing the stretching of a spring in proportion to the applied force, from Robert Hooke's Lectures de Potentia Restitutiva (1678).]

Mechanics is generally taken to mean the study of the motion of objects (or their lack of motion) under the action of given forces. Classical mechanics is sometimes considered a branch of applied mathematics. It consists of kinematics, the description of motion, and dynamics, the study of the action of forces in producing either motion or static equilibrium (the latter constituting the science of statics). The 20th-century subjects of quantum mechanics, crucial to treating the structure of matter, subatomic particles, superfluidity, superconductivity, neutron stars, and other major phenomena, and relativistic mechanics, important when speeds approach that of light, are forms of mechanics that will be discussed later in this section.

In classical mechanics the laws are initially formulated for point particles in which the dimensions, shapes, and other intrinsic properties of bodies are ignored. Thus in the first approximation even objects as large as the Earth and the Sun are treated as pointlike—e.g., in calculating planetary orbital motion. In rigid-body dynamics, the extension of bodies and their mass distributions are considered as well, but they are imagined to be incapable of deformation. The mechanics of deformable solids is elasticity; hydrostatics and hydrodynamics treat, respectively, fluids at rest and in motion.

The three laws of motion set forth by Isaac Newton form the foundation of classical mechanics, together with the recognition that forces are directed quantities (vectors) and combine accordingly. The first law, also called the law of inertia, states that, unless acted upon by an external force, an object at rest remains at rest, or if in motion, it continues to move in a straight line with constant speed. Uniform motion therefore does not require a cause. Accordingly, mechanics concentrates not on motion as such but on the change in the state of motion of an object that results from the net force acting upon it. Newton's second law equates the net force on an object to the rate of change of its momentum, the latter being the product of the mass of a body and its velocity. Newton's third law, that of action and reaction, states that when two particles interact, the forces each exerts on the other are equal in magnitude and opposite in direction. Taken together, these mechanical laws in principle permit the determination of the future motions of a set of particles, providing their state of motion is known at some instant, as well as the forces that act between them and upon them from the outside. From this deterministic character of the laws of classical mechanics, profound (and probably incorrect) philosophical conclusions have been drawn in the past and even applied to human history.
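A minimal sketch of the second law in use, not from the article: integrating F = dp/dt step by step for a projectile under constant gravity. The mass, launch speed, angle, and step size are illustrative values, and air resistance is ignored.

```python
import math

# Euler integration of Newton's second law for a projectile (assumed values).
m, g, dt = 1.0, 9.81, 0.001                  # mass (kg), gravity (m/s^2), time step (s)
vx = 20.0 * math.cos(math.radians(45))       # initial velocity components for a 20 m/s, 45-degree launch
vy = 20.0 * math.sin(math.radians(45))
x = y = t = 0.0
while y >= 0.0:
    fx, fy = 0.0, -m * g                     # net force on the particle
    vx += (fx / m) * dt                      # acceleration a = F/m updates the velocity...
    vy += (fy / m) * dt
    x += vx * dt                             # ...and the velocity updates the position
    y += vy * dt
    t += dt
print(f"range ~ {x:.1f} m after {t:.2f} s")  # analytic answer: v^2*sin(90 deg)/g ~ 40.8 m
```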
Lying at the most basic level of physics, the laws of mechanics are characterized by certain symmetry properties, as exemplified in the aforementioned symmetry between action and reaction forces. Other symmetries, such as the invariance (i.e., unchanging form) of the laws under reflections and rotations carried out in space, reversal of time, or transformation to a different part of space or to a different epoch of time, are present both in classical mechanics and in relativistic mechanics, and with certain restrictions, also in quantum mechanics. The symmetry properties of the theory can be shown to have as mathematical consequences basic principles known as conservation laws, which assert the constancy in time of the values of certain physical quantities under prescribed conditions. The conserved quantities are the most important ones in physics; included among them are mass and energy (in relativity theory, mass and energy are equivalent and are conserved together), momentum, angular momentum, and electric charge.

The study of gravitation

[Figure: LISA, a Beyond Einstein Great Observatory jointly funded by NASA and ESA: three identical spacecraft trailing the Earth in its orbit by about 50 million km (30 million miles), maneuvered into an equilateral triangle with sides of approximately 5 million km (3 million miles); by measuring laser signals between the spacecraft (essentially a giant Michelson interferometer in space), scientists hope to detect and accurately measure gravity waves.]

This field of inquiry has in the past been placed within classical mechanics for historical reasons, because both fields were brought to a high state of perfection by Newton and also because of its universal character. Newton's gravitational law states that every material particle in the universe attracts every other one with a force that acts along the line joining them and whose strength is directly proportional to the product of their masses and inversely proportional to the square of their separation. Newton's detailed accounting for the orbits of the planets and the Moon, as well as for such subtle gravitational effects as the tides and the precession of the equinoxes (a slow cyclical change in direction of the Earth's axis of rotation) through this fundamental force was the first triumph of classical mechanics. No further principles are required to understand the principal aspects of rocketry and space flight (although, of course, a formidable technology is needed to carry them out).
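A minimal numerical sketch of Newton's gravitational law as stated above, F = G·m1·m2/r², applied to the Earth and the Moon; the constants are standard textbook values, not figures quoted in the article.

```python
# Newton's inverse-square law of gravitation for the Earth-Moon pair.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
m_earth = 5.972e24     # kg
m_moon = 7.348e22      # kg
r = 3.844e8            # mean Earth-Moon distance, m

F = G * m_earth * m_moon / r**2
print(f"Earth-Moon attraction ~ {F:.2e} N")   # on the order of 2e20 N
```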
[Figure: The four-dimensional space-time continuum is distorted in the vicinity of any mass, with the amount of distortion depending on the mass and the distance from the mass; relativity thus accounts for Newton's inverse-square law of gravity through geometry and thereby does away with the need for any mysterious "action at a distance."]

The modern theory of gravitation was formulated by Albert Einstein and is called the general theory of relativity. From the long-known equality of the quantity "mass" in Newton's second law of motion and that in his gravitational law, Einstein was struck by the fact that acceleration can locally annul a gravitational force (as occurs in the so-called weightlessness of astronauts in an Earth-orbiting spacecraft) and was led thereby to the concept of curved space-time. Completed in 1915, the theory was valued for many years mainly for its mathematical beauty and for correctly predicting a small number of phenomena, such as the gravitational bending of light around a massive object. Only in recent years, however, has it become a vital subject for both theoretical and experimental research. (Relativistic mechanics refers to Einstein's special theory of relativity, which is not a theory of gravitation.)

The study of heat, thermodynamics, and statistical mechanics

Heat is a form of internal energy associated with the random motion of the molecular constituents of matter or with radiation. Temperature is an average of a part of the internal energy present in a body (it does not include the energy of molecular binding or of molecular rotation). The lowest possible energy state of a substance is defined as the absolute zero (−273.15 °C, or −459.67 °F) of temperature. An isolated body eventually reaches uniform temperature, a state known as thermal equilibrium, as do two or more bodies placed in contact. The formal study of states of matter at (or near) thermal equilibrium is called thermodynamics; it is capable of analyzing a large variety of thermal systems without considering their detailed microstructures.

First law

The first law of thermodynamics is the energy conservation principle of mechanics (i.e., for all changes in an isolated system, the energy remains constant) generalized to include heat.

Second law

The second law of thermodynamics asserts that heat will not flow from a place of lower temperature to one where it is higher without the intervention of an external device (e.g., a refrigerator). The concept of entropy involves the measurement of the state of disorder of the particles making up a system. For example, if tossing a coin many times results in a random-appearing sequence of heads and tails, the result has a higher entropy than if heads and tails tend to appear in clusters. Another formulation of the second law is that the entropy of an isolated system never decreases with time.

Third law

The third law of thermodynamics states that the entropy at the absolute zero of temperature is zero, corresponding to the most ordered possible state.

Statistical mechanics

[Figure: (Left) Random motion of a Brownian particle; (right) the random discrepancy between the molecular pressures on different surfaces of the particle that causes the motion.]

The science of statistical mechanics derives bulk properties of systems from the mechanical properties of their molecular constituents, assuming molecular chaos and applying the laws of probability. Regarding each possible configuration of the particles as equally likely, the chaotic state (the state of maximum entropy) is so enormously more likely than ordered states that an isolated system will evolve to it, as stated in the second law of thermodynamics. Such reasoning, placed in mathematically precise form, is typical of statistical mechanics, which is capable of deriving the laws of thermodynamics but goes beyond them in describing fluctuations (i.e., temporary departures) from the thermodynamic laws that describe only average behaviour. An example of a fluctuation phenomenon is the random motion of small particles suspended in a fluid, known as Brownian motion. Quantum statistical mechanics plays a major role in many other modern fields of science, as, for example, in plasma physics (the study of fully ionized gases), in solid-state physics, and in the study of stellar structure.
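A minimal sketch of the counting argument behind that statistical reasoning, using the coin-toss picture from the second-law paragraph above (the number of tosses is an illustrative choice): disordered macrostates (about half heads) correspond to vastly more microstates than highly ordered ones, so they are overwhelmingly more likely.

```python
import math

# Count the microstates (distinct toss sequences) for several macrostates of 100 coin tosses.
N = 100
for heads in (100, 90, 75, 50):
    microstates = math.comb(N, heads)          # number of sequences with exactly this many heads
    print(f"{heads:3d} heads out of {N}: {microstates:.3e} microstates")
```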
From a microscopic point of view the laws of thermodynamics imply that, whereas the total quantity of energy of any isolated system is constant, what might be called the quality of this energy is degraded as the system moves inexorably, through the operation of the laws of chance, to states of increasing disorder until it finally reaches the state of maximum disorder (maximum entropy), in which all parts of the system are at the same temperature, and none of the state's energy may be usefully employed. When applied to the universe as a whole, considered as an isolated system, this ultimate chaotic condition has been called the "heat death."

The study of electricity and magnetism

Although conceived of as distinct phenomena until the 19th century, electricity and magnetism are now known to be components of the unified field of electromagnetism. Particles with electric charge interact by an electric force, while charged particles in motion produce and respond to magnetic forces as well. Many subatomic particles, including the electrically charged electron and proton and the electrically neutral neutron, behave like elementary magnets. On the other hand, in spite of systematic searches undertaken, no magnetic monopoles, which would be the magnetic analogues of electric charges, have ever been found.

The field concept plays a central role in the classical formulation of electromagnetism, as well as in many other areas of classical and contemporary physics. Einstein's gravitational field, for example, replaces Newton's concept of gravitational action at a distance. The field describing the electric force between a pair of charged particles works in the following manner: each particle creates an electric field in the space surrounding it, and so also at the position occupied by the other particle; each particle responds to the force exerted upon it by the electric field at its own position.

[Figure: Radio waves, infrared rays, visible light, ultraviolet rays, X-rays, and gamma rays are all types of electromagnetic radiation; radio waves have the longest wavelength, and gamma rays have the shortest.]

Classical electromagnetism is summarized by the laws of action of electric and magnetic fields upon electric charges and upon magnets and by four remarkable equations formulated in the latter part of the 19th century by the Scottish physicist James Clerk Maxwell. The latter equations describe the manner in which electric charges and currents produce electric and magnetic fields, as well as the manner in which changing magnetic fields produce electric fields, and vice versa. From these relations Maxwell inferred the existence of electromagnetic waves—associated electric and magnetic fields in space, detached from the charges that created them, traveling at the speed of light, and endowed with such "mechanical" properties as energy, momentum, and angular momentum. The light to which the human eye is sensitive is but one small segment of an electromagnetic spectrum that extends from long-wavelength radio waves to short-wavelength gamma rays and includes X-rays, microwaves, and infrared (or heat) radiation.

[Figure: Spectrum of white light from a diffraction grating; with a prism, the red end of the spectrum is more compressed than the violet end.]

Because light consists of electromagnetic waves, the propagation of light can be regarded as merely a branch of electromagnetism.
However, it is usually dealt with as a separate subject called optics: the part that deals with the tracing of light rays is known as geometrical optics, while the part that treats the distinctive wave phenomena of light is called physical optics. More recently, there has developed a new and vital branch, quantum optics, which is concerned with the theory and application of the laser, a device that produces an intense coherent beam of unidirectional radiation useful for many applications.

The formation of images by lenses, microscopes, telescopes, and other optical devices is described by ray optics, which assumes that the passage of light can be represented by straight lines, that is, rays. The subtler effects attributable to the wave property of visible light, however, require the explanations of physical optics. One basic wave effect is interference, whereby two waves present in a region of space combine at certain points to yield an enhanced resultant effect (e.g., the crests of the component waves adding together); at the other extreme, the two waves can annul each other, the crests of one wave filling in the troughs of the other. Another wave effect is diffraction, which causes light to spread into regions of the geometric shadow and causes the image produced by any optical device to be fuzzy to a degree dependent on the wavelength of the light. Optical instruments such as the interferometer and the diffraction grating can be used for measuring the wavelength of light precisely (about 500 nanometres) and for measuring distances to a small fraction of that length.

Atomic and chemical physics

[Figure: The Millikan oil-drop experiments (1909–10): by comparing the applied electric force with changes in the motion of the oil drops, Millikan determined the electric charge on each drop and found that all of the drops had charges that were simple multiples of a single number, the fundamental charge of the electron.]

One of the great achievements of the 20th century was the establishment of the validity of the atomic hypothesis, first proposed in ancient times, that matter is made up of relatively few kinds of small, identical parts—namely, atoms. However, unlike the indivisible atom of Democritus and other ancients, the atom, as it is conceived today, can be separated into constituent electrons and nucleus. Atoms combine to form molecules, whose structure is studied by chemistry and physical chemistry; they also form other types of compounds, such as crystals, studied in the field of condensed-matter physics. Such disciplines study the most important attributes of matter (not excluding biologic matter) that are encountered in normal experience—namely, those that depend almost entirely on the outer parts of the electronic structure of atoms. Only the mass of the atomic nucleus and its charge, which is equal to the total charge of the electrons in the neutral atom, affect the chemical and physical properties of matter. Although there are some analogies between the solar system and the atom due to the fact that the strengths of gravitational and electrostatic forces both fall off as the inverse square of the distance, the classical forms of electromagnetism and mechanics fail when applied to tiny, rapidly moving atomic constituents.
Atomic structure is comprehensible only on the basis of quantum mechanics, and its finer details require as well the use of quantum electrodynamics (QED). Atomic properties are inferred mostly by the use of indirect experiments. Of greatest importance has been spectroscopy, which is concerned with the measurement and interpretation of the electromagnetic radiations either emitted or absorbed by materials. These radiations have a distinctive character, which quantum mechanics relates quantitatively to the structures that produce and absorb them. It is truly remarkable that these structures are in principle, and often in practice, amenable to precise calculation in terms of a few basic physical constants: the mass and charge of the electron, the speed of light, and Planck's constant (approximately 6.62606957 × 10−34 joule∙second), the fundamental constant of the quantum theory named for the German physicist Max Planck.

Condensed-matter physics

[Figure: The first transistor, invented by American physicists John Bardeen, Walter H. Brattain, and William B. Shockley.]

This field, which treats the thermal, elastic, electrical, magnetic, and optical properties of solid and liquid substances, grew at an explosive rate in the second half of the 20th century and scored numerous important scientific and technical achievements, including the transistor. Among solid materials, the greatest theoretical advances have been in the study of crystalline materials whose simple repetitive geometric arrays of atoms are multiple-particle systems that allow treatment by quantum mechanics. Because the atoms in a solid are coordinated with each other over large distances, the theory must go beyond that appropriate for atoms and molecules. Thus conductors, such as metals, contain some so-called free electrons, or valence electrons, which are responsible for the electrical and most of the thermal conductivity of the material and which belong collectively to the whole solid rather than to individual atoms. Semiconductors and insulators, either crystalline or amorphous, are other materials studied in this field of physics.

Other aspects of condensed matter involve the properties of the ordinary liquid state, of liquid crystals, and, at temperatures near absolute zero, of the so-called quantum liquids. The latter exhibit a property known as superfluidity (completely frictionless flow), which is an example of macroscopic quantum phenomena. Such phenomena are also exemplified by superconductivity (completely resistance-less flow of electricity), a low-temperature property of certain metallic and ceramic materials. Besides their significance to technology, macroscopic liquid and solid quantum states are important in astrophysical theories of stellar structure in, for example, neutron stars.

Nuclear physics

[Figure: Particle tracks from the collision of an accelerated niobium nucleus with another niobium nucleus; the single line on the left is the track of the incoming projectile nucleus, and the other tracks are fragments from the collision.]

This branch of physics deals with the structure of the atomic nucleus and the radiation from unstable nuclei. The nucleus is about 10,000 times smaller than the atom, and its constituent particles, protons and neutrons, attract one another so strongly by the nuclear forces that nuclear energies are approximately 1,000,000 times larger than typical atomic energies.
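A minimal numerical sketch, not from the article, comparing a typical atomic-transition photon (visible light near 500 nm) with a typical nuclear gamma ray (taken here as 1 MeV for illustration); it gives a rough feel for the large gap between atomic and nuclear energy scales noted above.

```python
# Photon energy E = h*c/lambda for a visible photon, compared with a ~1 MeV nuclear gamma ray.
h = 6.62607015e-34      # Planck's constant, J*s
c = 2.99792458e8        # speed of light, m/s
eV = 1.602176634e-19    # joules per electron volt

E_visible_eV = h * c / 500e-9 / eV            # ~2.5 eV for a 500 nm photon
E_gamma_eV = 1.0e6                            # assumed 1 MeV gamma ray
print(f"500 nm photon : {E_visible_eV:.2f} eV")
print(f"1 MeV gamma   : {E_gamma_eV:.1e} eV  (~{E_gamma_eV / E_visible_eV:.0e} times larger)")
```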
Quantum theory is needed for understanding nuclear structure. Like excited atoms, unstable radioactive nuclei (either naturally occurring or artificially produced) can emit electromagnetic radiation. The energetic nuclear photons are called gamma rays. Radioactive nuclei also emit other particles: negative and positive electrons (beta rays), accompanied by neutrinos, and helium nuclei (alpha rays).

A principal research tool of nuclear physics involves the use of beams of particles (e.g., protons or electrons) directed as projectiles against nuclear targets. Recoiling particles and any resultant nuclear fragments are detected, and their directions and energies are analyzed to reveal details of nuclear structure and to learn more about the strong force. A much weaker nuclear force, the so-called weak interaction, is responsible for the emission of beta rays. Nuclear collision experiments use beams of higher-energy particles, including those of unstable particles called mesons produced by primary nuclear collisions in accelerators dubbed meson factories. Exchange of mesons between protons and neutrons is directly responsible for the strong force. (For the mechanism underlying mesons, see below Fundamental forces and fields.) In radioactivity and in collisions leading to nuclear breakup, the chemical identity of the nuclear target is altered whenever there is a change in the nuclear charge. In fission and fusion nuclear reactions in which unstable nuclei are, respectively, split into smaller nuclei or amalgamated into larger ones, the energy release far exceeds that of any chemical reaction.

Particle physics

[Figure: Very simplified illustrations of protons, neutrons, pions, and other hadrons, showing that they are made of quarks and antiquarks bound together by gluons.]

One of the most significant branches of contemporary physics is the study of the fundamental subatomic constituents of matter, the elementary particles. This field, also called high-energy physics, emerged in the 1930s out of the developing experimental areas of nuclear and cosmic-ray physics. Initially investigators studied cosmic rays, the very-high-energy extraterrestrial radiations that fall upon the Earth and interact in the atmosphere (see below The methodology of physics). However, after World War II, scientists gradually began using high-energy particle accelerators to provide subatomic particles for study. Quantum field theory, a generalization of QED to other types of force fields, is essential for the analysis of high-energy physics. Subatomic particles cannot be visualized as tiny analogues of ordinary material objects such as billiard balls, for they have properties that appear contradictory from the classical viewpoint. That is to say, while they possess charge, spin, mass, magnetism, and other complex characteristics, they are nonetheless regarded as pointlike.

During the latter half of the 20th century, a coherent picture evolved of the underlying strata of matter involving two types of subatomic particles: fermions (baryons and leptons), which have odd half-integral angular momentum (spin 1/2, 3/2) and make up ordinary matter; and bosons (gluons, mesons, and photons), which have integral spins and mediate the fundamental forces of physics. Leptons (e.g., electrons, muons, taus), gluons, and photons are believed to be truly fundamental particles.
Baryons (e.g., neutrons, protons) and mesons (e.g., pions, kaons), collectively known as hadrons, are believed to be formed from indivisible elements known as quarks, which have never been isolated. Quarks come in six types, or "flavours," and have matching antiparticles, known as antiquarks. Quarks have charges that are either positive two-thirds or negative one-third of the electron's charge, while antiquarks have the opposite charges. Like quarks, each lepton has an antiparticle with properties that mirror those of its partner (the antiparticle of the negatively charged electron is the positive electron, or positron; that of the neutrino is the antineutrino). In addition to their electric and magnetic properties, quarks participate in both the strong force (which binds them together) and the weak force (which underlies certain forms of radioactivity), while leptons take part in only the weak force.

Baryons, such as neutrons and protons, are formed by combining three quarks—thus baryons have a charge of −1, 0, +1, or +2. Mesons, which are the particles that mediate the strong force inside the atomic nucleus, are composed of one quark and one antiquark; all known mesons have a charge of −1, 0, or +1. Most of the possible quark combinations, or hadrons, have very short lifetimes, and many of them have never been seen, though additional ones have been observed with each new generation of more powerful particle accelerators.

The quantum fields through which quarks and leptons interact with each other and with themselves consist of particle-like objects called quanta (from which quantum mechanics derives its name). The first known quanta were those of the electromagnetic field; they are also called photons because light consists of them. A modern unified theory of weak and electromagnetic interactions, known as the electroweak theory, proposes that the weak force involves the exchange of particles about 100 times as massive as protons. These massive quanta have been observed—namely, two charged particles, W+ and W−, and a neutral one, Z0.

In the theory of the strong force known as quantum chromodynamics (QCD), eight quanta, called gluons, bind quarks to form baryons and also bind quarks to antiquarks to form mesons, the force itself being dubbed the "colour force." (This unusual use of the term colour is a somewhat forced analogue of ordinary colour mixing.) Quarks are said to come in three colours—red, blue, and green. (The opposites of these imaginary colours, minus-red, minus-blue, and minus-green, are ascribed to antiquarks.) Only certain colour combinations, namely colour-neutral, or "white" (i.e., equal mixtures of the above colours cancel out one another, resulting in no net colour), are conjectured to exist in nature in an observable form. The gluons and quarks themselves, being coloured, are permanently confined (deeply bound within the particles of which they are a part), while the colour-neutral composites such as protons can be directly observed. One consequence of colour confinement is that the observable particles are either electrically neutral or have charges that are integral multiples of the charge of the electron. A number of specific predictions of QCD have been experimentally tested and found correct.
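A minimal sketch of the charge bookkeeping described above, not a statement of the article's method: quark charges are +2/3 or −1/3 (antiquarks the opposite), yet every allowed combination, three quarks for a baryon or a quark plus an antiquark for a meson, adds up to an integral charge.

```python
from fractions import Fraction
from itertools import product

quark_charges = (Fraction(2, 3), Fraction(-1, 3))
antiquark_charges = tuple(-q for q in quark_charges)

# All possible total charges for three-quark (baryon) and quark-antiquark (meson) combinations.
baryon_charges = {sum(c) for c in product(quark_charges, repeat=3)}
meson_charges = {q + aq for q in quark_charges for aq in antiquark_charges}

print("possible baryon charges:", sorted(int(c) for c in baryon_charges))  # [-1, 0, 1, 2]
print("possible meson charges :", sorted(int(c) for c in meson_charges))   # [-1, 0, 1]
```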
Quantum mechanics

Although the various branches of physics differ in their experimental methods and theoretical approaches, certain general principles apply to all of them. The forefront of contemporary advances in physics lies in the submicroscopic regime, whether it be in atomic, nuclear, condensed-matter, plasma, or particle physics, or in quantum optics, or even in the study of stellar structure. All are based upon quantum theory (i.e., quantum mechanics and quantum field theory) and relativity, which together form the theoretical foundations of modern physics. Many physical quantities whose classical counterparts vary continuously over a range of possible values are in quantum theory constrained to have discontinuous, or discrete, values. Furthermore, the intrinsically deterministic character of values in classical physics is replaced in quantum theory by intrinsic uncertainty.

According to quantum theory, electromagnetic radiation does not always consist of continuous waves; instead it must be viewed under some circumstances as a collection of particle-like photons, the energy and momentum of each being directly proportional to its frequency (or inversely proportional to its wavelength, the photons still possessing some wavelike characteristics). Conversely, electrons and other objects that appear as particles in classical physics are endowed by quantum theory with wavelike properties as well, such a particle's quantum wavelength being inversely proportional to its momentum. In both instances, the proportionality constant is the characteristic quantum of action (action being defined as energy × time)—that is to say, Planck's constant divided by 2π, or ℏ.

[Figure: The Bohr theory sees an electron as a point mass occupying certain energy levels; wave mechanics sees an electron as a wave washing back and forth in the atom in certain patterns only, with the wave patterns and energy levels corresponding exactly.]

In principle, all of atomic and molecular physics, including the structure of atoms and their dynamics, the periodic table of elements and their chemical behaviour, as well as the spectroscopic, electrical, and other physical properties of atoms, molecules, and condensed matter, can be accounted for by quantum mechanics. Roughly speaking, the electrons in the atom must fit around the nucleus as some sort of standing wave (as given by the Schrödinger equation) analogous to the waves on a plucked violin or guitar string. As the fit determines the wavelength of the quantum wave, it necessarily determines its energy state. Consequently, atomic systems are restricted to certain discrete, or quantized, energies. When an atom undergoes a discontinuous transition, or quantum jump, its energy changes abruptly by a sharply defined amount, and a photon of that energy is emitted when the energy of the atom decreases, or is absorbed in the opposite case.

[Figure: The uncertainty principle, expressed by the German physicist Werner Heisenberg in 1927, states that the position and the velocity of an object cannot both be precisely measured at the same instant.]

Although atomic energies can be sharply defined, the positions of the electrons within the atom cannot be, quantum mechanics giving only the probability for the electrons to have certain locations. This is a consequence of the feature that distinguishes quantum theory from all other approaches to physics, the uncertainty principle of the German physicist Werner Heisenberg. This principle holds that measuring a particle's position with increasing precision necessarily increases the uncertainty as to the particle's momentum, and conversely.
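A minimal numerical sketch of that trade-off, assuming the standard form Δx·Δp ≥ ħ/2 and illustrative confinement lengths: confining an electron to an atom-sized region forces a huge spread in its velocity, while the same bookkeeping for an everyday object is utterly negligible.

```python
# Minimum velocity spread implied by dx * dp >= hbar/2 for two illustrative cases.
hbar = 1.054571817e-34      # reduced Planck constant, J*s
m_electron = 9.109e-31      # kg
m_ball = 0.15               # kg (an ordinary ball)

for label, m, dx in (("electron confined to 1e-10 m", m_electron, 1e-10),
                     ("ball confined to 1e-3 m     ", m_ball,     1e-3)):
    dp = hbar / (2 * dx)                       # minimum momentum uncertainty
    print(f"{label}: dv >= {dp / m:.3e} m/s")
```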
The ultimate degree of uncertainty is controlled by the magnitude of Planck's constant, which is so small as to have no apparent effects except in the world of microstructures. In the latter case, however, because both a particle's position and its velocity or momentum must be known precisely at some instant in order to predict its future history, quantum theory precludes such certain prediction and thus escapes determinism.

[Figure: When a beam of X-rays is aimed at a target material, some of the beam is deflected, and the scattered X-rays have a greater wavelength than the original beam; Arthur Holly Compton concluded that this could only be explained if the X-rays were made up of discrete bundles or particles, now called photons, that lost some of their energy in collisions with electrons in the target and then scattered at lower energy.]

The complementary wave and particle aspects, or wave–particle duality, of electromagnetic radiation and of material particles furnish another illustration of the uncertainty principle. When an electron exhibits wavelike behaviour, as in the phenomenon of electron diffraction, this excludes its exhibiting particle-like behaviour in the same observation. Similarly, when electromagnetic radiation in the form of photons interacts with matter, as in the Compton effect in which X-ray photons collide with electrons, the result resembles a particle-like collision and the wave nature of electromagnetic radiation is precluded. The principle of complementarity, asserted by the Danish physicist Niels Bohr, who pioneered the theory of atomic structure, states that the physical world presents itself in the form of various complementary pictures, no one of which is by itself complete, all of these pictures being essential for our total understanding. Thus both wave and particle pictures are needed for understanding either the electron or the photon.

Although it deals with probabilities and uncertainties, the quantum theory has been spectacularly successful in explaining otherwise inaccessible atomic phenomena and in thus far meeting every experimental test. Its predictions, especially those of QED, are the most precise and the best checked of any in physics; some of them have been tested and found accurate to better than one part per billion.

Relativistic mechanics

In classical physics, space is conceived as having the absolute character of an empty stage in which events in nature unfold as time flows onward independently; events occurring simultaneously for one observer are presumed to be simultaneous for any other; mass is taken as impossible to create or destroy; and a particle given sufficient energy acquires a velocity that can increase without limit. The special theory of relativity, developed principally by Albert Einstein in 1905 and now so adequately confirmed by experiment as to have the status of physical law, shows that all these, as well as other apparently obvious assumptions, are false.

Specific and unusual relativistic effects flow directly from Einstein's two basic postulates, which are formulated in terms of so-called inertial reference frames. These are reference systems that move in such a way that in them Isaac Newton's first law, the law of inertia, is valid. The set of inertial frames consists of all those that move with constant velocity with respect to each other (accelerating frames therefore being excluded).
Einstein's postulates are: (1) All observers, whatever their state of motion relative to a light source, measure the same speed for light; and (2) The laws of physics are the same in all inertial frames.

[Figure: As an object approaches the speed of light, an observer sees the object become shorter and its time interval become longer, relative to the length and time interval when the object is at rest.]

The first postulate, the constancy of the speed of light, is an experimental fact from which follow the distinctive relativistic phenomena of space contraction (or Lorentz–FitzGerald contraction), time dilation, and the relativity of simultaneity: as measured by an observer assumed to be at rest, an object in motion is contracted along the direction of its motion, and moving clocks run slow; two spatially separated events that are simultaneous for a stationary observer occur sequentially for a moving observer. As a consequence, space intervals in three-dimensional space are related to time intervals, thus forming so-called four-dimensional space-time.

The second postulate is called the principle of relativity. It is equally valid in classical mechanics (but not in classical electrodynamics until Einstein reinterpreted it). This postulate implies, for example, that table tennis played on a train moving with constant velocity is just like table tennis played with the train at rest, the states of rest and motion being physically indistinguishable. In relativity theory, mechanical quantities such as momentum and energy have forms that are different from their classical counterparts but give the same values for speeds that are small compared to the speed of light, the maximum permissible speed in nature (about 300,000 kilometres per second, or 186,000 miles per second). According to relativity, mass and energy are equivalent and interchangeable quantities, the equivalence being expressed by Einstein's famous mass-energy equation E = mc², where m is an object's mass and c is the speed of light.

The general theory of relativity is Einstein's theory of gravitation, which uses the principle of the equivalence of gravitation and locally accelerating frames of reference. Einstein's theory has special mathematical beauty; it generalizes the "flat" space-time concept of special relativity to one of curvature. It forms the background of all modern cosmological theories. In contrast to some vulgarized popular notions of it, which confuse it with moral and other forms of relativism, Einstein's theory does not argue that "all is relative." On the contrary, it is largely a theory based upon those physical attributes that do not change, or, in the language of the theory, that are invariant.
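A minimal numerical sketch of two of the relations quoted above, with illustrative speeds and masses not drawn from the article: the time-dilation factor γ = 1/√(1 − v²/c²) is indistinguishable from 1 at everyday speeds and grows rapidly near c, and E = mc² assigns an enormous rest energy even to a gram of matter.

```python
import math

c = 2.99792458e8                        # speed of light, m/s
for v in (30.0, 3.0e7, 0.99 * c):       # a car, 10% of c, 99% of c (assumed examples)
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    print(f"v = {v:10.3e} m/s  ->  gamma = {gamma:.9f}")

print(f"rest energy of 1 gram, E = m*c^2: {1e-3 * c**2:.2e} J")   # ~9e13 J
```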
Conservation laws and symmetry

Since the early period of modern physics, there have been conservation laws, which state that certain physical quantities, such as the total electric charge of an isolated system of bodies, do not change in the course of time. In the 20th century it has been proved mathematically that such laws follow from the symmetry properties of nature, as expressed in the laws of physics. The conservation of mass-energy of an isolated system, for example, follows from the assumption that the laws of physics may depend upon time intervals but not upon the specific time at which the laws are applied. The symmetries and the conservation laws that follow from them are regarded by modern physicists as being even more fundamental than the laws themselves, since they are able to limit the possible forms of laws that may be proposed in the future.

Conservation laws are valid in classical, relativistic, and quantum theory for mass-energy, momentum, angular momentum, and electric charge. (In nonrelativistic physics, mass and energy are separately conserved.) Momentum, a directed quantity equal to the mass of a body multiplied by its velocity or to the total mass of two or more bodies multiplied by the velocity of their centre of mass, is conserved when, and only when, no external force acts. Similarly angular momentum, which is related to spinning motions, is conserved in a system upon which no net turning force, called torque, acts. External forces and torques break the symmetry conditions from which the respective conservation laws follow.

In quantum theory, and especially in the theory of elementary particles, there are additional symmetries and conservation laws, some exact and others only approximately valid, which play no significant role in classical physics. Among these are the conservation of so-called quantum numbers related to left-right reflection symmetry of space (called parity) and to the reversal symmetry of motion (called time reversal). These quantum numbers are conserved in all processes other than the weak force. Other symmetry properties not obviously related to space and time (and referred to as internal symmetries) characterize the different families of elementary particles and, by extension, their composites. Quarks, for example, have a property called baryon number, as do protons, neutrons, nuclei, and unstable quark composites. All of these except the quarks are known as baryons. A failure of baryon-number conservation would exhibit itself, for instance, by a proton decaying into lighter non-baryonic particles. Indeed, intensive search for such proton decay has been conducted, but so far it has been fruitless. Similar symmetries and conservation laws hold for an analogously defined lepton number, and they also appear, as does the law of baryon conservation, to hold absolutely.

Fundamental forces and fields

[Figure: Sequence of events in the fission of a uranium nucleus by a neutron.]

The four basic forces of nature, in order of increasing strength, are thought to be: (1) the gravitational force between particles with mass; (2) the weak force by which, for example, quarks can change their type, so that a neutron decays into a proton, an electron, and an antineutrino; (3) the electromagnetic force between particles with charge or magnetism or both; and (4) the colour force, or strong force, between quarks. The strong force that binds protons and neutrons into nuclei and is responsible for fission, fusion, and other nuclear reactions is in principle derived from the colour force. Nuclear physics is thus related to QCD as chemistry is to atomic physics.

According to quantum field theory, each of the four fundamental interactions is mediated by the exchange of quanta, called vector gauge bosons, which share certain common characteristics. All have an intrinsic spin of one unit, measured in terms of Planck's constant ℏ. (Leptons and quarks each have one-half unit of spin.) Gauge theory studies the group of transformations, or Lie group, that leaves the basic physics of a quantum field invariant.
Lie groups, which are named for the 19th-century Norwegian mathematician Sophus Lie, possess a special type of symmetry and continuity that made them first useful in the study of differential equations on smooth manifolds (an abstract mathematical space for modeling physical processes). This symmetry was first seen in the equations for electromagnetic potentials, quantities from which electromagnetic fields can be derived. It is possessed in pure form by the eight massless gluons of QCD, but in the electroweak theory—the unified theory of electromagnetic and weak force interactions—gauge symmetry is partially broken, so that only the photon remains massless, with the other gauge bosons (W+, W−, and Z) acquiring large masses. Theoretical physicists continue to seek a further unification of QCD with the electroweak theory and, more ambitiously still, to unify them with a quantum version of gravity in which the force would be transmitted by massless quanta of two units of spin called gravitons.

The methodology of physics

Physics has evolved and continues to evolve without any single strategy. It is essentially an experimental science, and refined measurements can reveal unexpected behaviour. On the other hand, mathematical extrapolation of existing theories into new theoretical areas, critical reexamination of apparently obvious but untested assumptions, argument by symmetry or analogy, aesthetic judgment, pure accident, and hunch—each of these plays a role (as in all of science). Thus, for example, the quantum hypothesis proposed by the German physicist Max Planck was based on observed departures of the character of blackbody radiation (radiation emitted by a heated body that absorbs all radiant energy incident upon it) from that predicted by classical electromagnetism. The English physicist P.A.M. Dirac predicted the existence of the positron in making a relativistic extension of the quantum theory of the electron. The elusive neutrino, without mass or charge, was hypothesized by the Austrian-born physicist Wolfgang Pauli as an alternative to abandoning the conservation laws in the beta-decay process. Maxwell conjectured that if changing magnetic fields create electric fields (which was known to be so), then changing electric fields might create magnetic fields, leading him to the electromagnetic theory of light. Albert Einstein’s special theory of relativity was based on a critical reexamination of the meaning of simultaneity, while his general theory of relativity rests on the equivalence of inertial and gravitational mass.

Although the tactics may vary from problem to problem, the physicist invariably tries to make unsolved problems more tractable by constructing a series of idealized models, with each successive model being a more realistic representation of the actual physical situation. Thus, in the theory of gases, the molecules are at first imagined to be particles that are as structureless as billiard balls with vanishingly small dimensions. This ideal picture is then improved on step by step. The correspondence principle, a useful guiding principle for extending theoretical interpretations, was formulated by the Danish physicist Niels Bohr in the context of the quantum theory. It asserts that when a valid theory is generalized to a broader arena, the new theory’s predictions must agree with the old one in the overlapping region in which both are applicable.
For example, the more comprehensive theory of physical optics must yield the same result as the more restrictive theory of ray optics whenever wave effects proportional to the wavelength of light are negligible on account of the smallness of that wavelength. Similarly, quantum mechanics must yield the same results as classical mechanics in circumstances when Planck’s constant can be considered as negligibly small. Likewise, for speeds small compared to the speed of light (as for baseballs in play), relativistic mechanics must coincide with Newtonian classical mechanics.

Some ways in which experimental and theoretical physicists attack their problems are illustrated by the following examples. The modern experimental study of elementary particles began with the detection of new types of unstable particles produced in the atmosphere by primary cosmic radiation, the latter consisting mainly of high-energy protons arriving from space. The new particles were detected in Geiger counters and identified by the tracks they left in instruments called cloud chambers and in photographic plates. After World War II, particle physics, then known as high-energy nuclear physics, became a major field of science. Today’s high-energy particle accelerators can be several kilometres in length, cost hundreds (or even thousands) of millions of dollars, and accelerate particles to enormous energies (trillions of electron volts). Experimental teams, such as those that discovered the W+, W−, and Z quanta of the weak force at the European Laboratory for Particle Physics (CERN) in Geneva, which is funded by its 20 European member states, can have 100 or more physicists from many countries, along with a larger number of technical workers serving as support personnel. A variety of visual and electronic techniques are used to interpret and sort the huge amounts of data produced by their efforts, and particle-physics laboratories are major users of the most advanced technology, be it superconductive magnets or supercomputers.

Theoretical physicists use mathematics both as a logical tool for the development of theory and for calculating predictions of the theory to be compared with experiment. Newton, for one, invented integral calculus to solve the following problem, which was essential to his formulation of the law of universal gravitation: Assuming that the attractive force between any pair of point particles is inversely proportional to the square of the distance separating them, how does a spherical distribution of particles, such as the Earth, attract another nearby object? Integral calculus, a procedure for summing many small contributions, yields the simple solution that the Earth itself acts as a point particle with all its mass concentrated at the centre. In modern physics, Dirac predicted the existence of the then-unknown positive electron (or positron) by finding an equation for the electron that would combine quantum mechanics and the special theory of relativity.

Relations between physics and other disciplines and society

Influence of physics on related disciplines

Because physics elucidates the simplest fundamental questions in nature on which there can be a consensus, it is hardly surprising that it has had a profound impact on other fields of science, on philosophy, on the worldview of the developed world, and, of course, on technology.
Indeed, whenever a branch of physics has reached such a degree of maturity that its basic elements are comprehended in general principles, it has moved from basic to applied physics and thence to technology. Thus almost all current activity in classical physics consists of applied physics, and its contents form the core of many branches of engineering. Discoveries in modern physics are converted with increasing rapidity into technical innovations and analytical tools for associated disciplines. There are, for example, such nascent fields as nuclear and biomedical engineering, quantum chemistry and quantum optics, and radio, X-ray, and gamma-ray astronomy, as well as such analytic tools as radioisotopes, spectroscopy, and lasers, which all stem directly from basic physics.

Apart from its specific applications, physics—especially Newtonian mechanics—has become the prototype of the scientific method, its experimental and analytic methods sometimes being imitated (and sometimes inappropriately so) in fields far from the related physical sciences. Some of the organizational aspects of physics, based partly on the successes of the radar and atomic-bomb projects of World War II, also have been imitated in large-scale scientific projects, as, for example, in astronomy and space research.

The great influence of physics on the branches of philosophy concerned with the conceptual basis of human perceptions and understanding of nature, such as epistemology, is evidenced by the earlier designation of physics itself as natural philosophy. Present-day philosophy of science deals largely, though not exclusively, with the foundations of physics. Determinism, the philosophical doctrine that the universe is a vast machine operating with strict causality whose future is determined in all detail by its present state, is rooted in Newtonian mechanics, which obeys that principle. Moreover, the schools of materialism, naturalism, and empiricism have in large degree considered physics to be a model for philosophical inquiry. An extreme position is taken by the logical positivists, whose radical distrust of the reality of anything not directly observable leads them to demand that all significant statements must be formulated in the language of physics. The uncertainty principle of quantum theory has prompted a reexamination of the question of determinism, and its other philosophical implications remain in doubt. Particularly problematic is the matter of the meaning of measurement, for which recent theories and experiments confirm some apparently noncausal predictions of standard quantum theory. It is fair to say that though physicists agree that quantum theory works, they still differ as to what it means.

Influence of related disciplines on physics

[Figure: Interior of the U.S. Department of Energy’s National Ignition Facility (NIF), located at Lawrence Livermore National Laboratory, Livermore, California. The NIF target chamber uses a high-energy laser to heat fusion fuel to temperatures sufficient for thermonuclear ignition. The facility is used for basic science, fusion energy research, and nuclear weapons testing.]

The relationship of physics to its bordering disciplines is a reciprocal one. Just as technology feeds on fundamental science for new practical innovations, so physics appropriates the techniques and instrumentation of modern technology for advancing itself. Thus experimental physicists utilize increasingly refined and precise electronic devices.
Moreover, they work closely with engineers in designing basic scientific equipment, such as high-energy particle accelerators. Mathematics has always been the primary tool of the theoretical physicist, and even abstruse fields of mathematics such as group theory and differential geometry have become invaluable to the theoretician classifying subatomic particles or investigating the symmetry characteristics of atoms and molecules. Much of contemporary research in physics depends on the high-speed computer. It allows the theoretician to perform computations that are too lengthy or complicated to be done with paper and pencil. Also, it allows experimentalists to incorporate the computer into their apparatus, so that the results of measurements can be provided nearly instantaneously on-line as summarized data while an experiment is in progress.

The physicist in society

[Figure: Tracks emerging from a proton-antiproton collision at the centre of the UA1 detector at CERN include those of an energetic electron (straight down) and a positron (upper right). These two particles have come from the decay of a Z0; when their energies are added together, the total is equal to the Z0’s mass.]

Because of the remoteness of much of contemporary physics from ordinary experience and its reliance on advanced mathematics, physicists have sometimes seemed to the public to be initiates in a latter-day secular priesthood who speak an arcane language and can communicate their findings to laymen only with great difficulty. Yet, the physicist has come to play an increasingly significant role in society, particularly since World War II. Governments have supplied substantial funds for research at academic institutions and at government laboratories through such agencies as the National Science Foundation and the Department of Energy in the United States, which has also established a number of national laboratories, including the Fermi National Accelerator Laboratory in Batavia, Ill., with one of the world’s largest particle accelerators. CERN is composed of 14 European countries and operates a large accelerator at the Swiss–French border. Physics research is supported in Germany by the Max Planck Society for the Advancement of Science and in Japan by the Japan Society for the Promotion of Science. In Trieste, Italy, there is the International Center for Theoretical Physics, which has strong ties to developing countries. These are only a few examples of the widespread international interest in fundamental physics. Basic research in physics is obviously dependent on public support and funding, and with this development has come, albeit slowly, a growing recognition within the physics community of the social responsibility of scientists for the consequences of their work and for the more general problems of science and society.
In quantum mechanics, when we talk about the wave nature of particles, are we referring in fact to the wave function? Does the wave function describe the probability of finding a particle (e.g. photons) at some location? So do the "waves" describe probabilities just the way in classical physics the electromagnetic waves describe the perturbations of the electric and magnetic fields?

4 Answers

No, because the wavefunctions are not waves in space. They are waves in enormous high-dimensional spaces of possibilities. If you have two particles, the wavefunction is waving in 6 dimensions (the two positions of the two particles make a six dimensional space of possibilities), if you have three particles, the wavefunction is in 9 dimensions. So it is always wrong to think of it as a wave in space, like a field. There is a field which obeys the Schrodinger equation, but this classical field is a classical wave, like E and B, which describes many coherent bosons in the same quantum state all moving together, like a superfluid or a Bose-Einstein condensate.

I wasn't thinking of it as a wave in space. I was making an analogy. So the wave-function of a photon describes the probability of finding a particle at a certain location? If not, what does it describe? – Buzai Andras Jul 2 '12 at 22:28

It describes the probability of finding several particles in several locations. It's a wave over possible universes, not a wave over one particle's position (unless the system is one particle). – Ron Maimon Jul 3 '12 at 2:23

The key point to understand about wave/particle duality is that when we describe some system (e.g. an electron) as a wave what we mean is that it interacts like a wave. Similarly when we describe it as a particle we mean it interacts like a particle. The electron itself is neither a wave nor a particle: it's, well, an electron. The other point is that we can describe our system using various mathematical approaches. When you say wavefunction I'd guess you're thinking about the solutions to the Schrödinger equation. The Schrödinger equation is basically a wave equation so it works very well when describing wave-like interactions. It can be used to describe particle-like interactions, but this gets messy because you have to model your particle as the superposition of infinitely many waves. You're quite correct that the wavefunction describes the probability of finding the particle, but the wavefunction is not simply a wave like a sine wave.

Usual quantum mechanics is roughly based on the following principles:

1. For any given ("small") physical system $S$ there is an associated set $H_S$ of physical states.
2. At any instant of time $t$ the system $S$ exists in some state $a_t\in H_S$. Time evolution of this state is governed by a first order (in time) differential equation called the Schrodinger equation.
3. The state $a_t$ carries all information about the system that one can hope to get. This information is probabilistic and depends upon what 'observable' you want to measure. In particular, if you want to measure position you will see a particle; if you want to measure momentum you will see a wave.

However, words could sometimes be misleading, so you should consult some good text, e.g. Cohen-Tannoudji, Volume 1. Edit: "Wave" as this term is used in QM does not mean an ordinary physical wave but a mathematical "probability wave".
So when we talk about the wave nature of a particle, what we are referring to is the position-space probability wave (or wave function) associated with it. So you are right :-)

Does the wave nature of a particle refer to the wave function? No, and for a very simple reason. The wave nature of a quantum particle refers to the empirical evidence - observations - that quantum particles, like the electron and the photon, exhibit classical wave-like properties such as interference and diffraction. The wave function of a quantum particle is part of a mathematical model that, in the non-relativistic limit, accurately predicts both the wave-like and particle-like observations. The wave function's magnitude squared is interpreted as a probability density.
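To make that last statement concrete, here is a minimal numerical sketch (one particle in one dimension, with an arbitrarily chosen Gaussian wavefunction; none of the numbers come from the answers above): normalizing ψ and integrating |ψ|² over an interval gives the probability of finding the particle in that interval.

```python
import numpy as np

# One-dimensional grid and an (arbitrary) Gaussian wavepacket psi(x)
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
sigma, x0 = 1.0, 2.0
psi = np.exp(-(x - x0)**2 / (4 * sigma**2))   # unnormalized

# Normalize so the total probability integrates to 1
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Born rule: |psi|^2 is a probability density, so the probability of
# finding the particle between a and b is the integral of |psi|^2 over [a, b]
a, b = 1.0, 3.0
mask = (x >= a) & (x <= b)
prob = np.sum(np.abs(psi[mask])**2) * dx
print(f"P({a} <= x <= {b}) = {prob:.3f}")   # ~0.68 for a one-sigma window
```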
The Many Interpretations of Quantum Mechanics What is the ultimate nature of reality?  Are quantum effects constantly carving us into innumerable copies, each copy inhabiting a different version of the universe? Or do all those other worlds pop out of existence as mere might-have-beens? Do our particles surf on quantum waves? Or are we ultimately made of the quantum waves alone? Or do the waves merely represent how much information we could possess about the state of the world? And if the waves are just a kind of information, information about what? Or is the information all that there is—and all that we are?  Those are the kind of questions in play when a physicist tackles the dry-sounding issue of, “what is the correct interpretation of quantum mechanics?” About 80 years after the original flowering of quantum theory, physicists still don’t agree on an answer.  And although quantum mechanics is primarily the physics of the very small—of atoms, electrons, photons and other such particles—the world is made up of those particles. If their individual reality is radically different from what we imagine then surely so too is the reality of the pebbles, people and planets that they make up.  As recounted by our December article, The Many Worlds of Hugh Everett by journalist Peter Byrne, 50 years ago the iconoclastic physics student Hugh Everett introduced the idea that quantum physics is incessantly splitting the universe into alternate branches. Byrne’s article talks about Everett’s life (did you know his son is the lead singer of the rock band Eels?) as well as about his theory and the “Copenhagen Interpretation” he aimed to supplant. But many other interpretations of quantum mechanics exist, and today Copenhagenists have more subtle variants to choose from than the one that Everett once called “a philosophic monstrosity.” Here is an all-too-short run-down on some of them.  The basic scenario an interpretation must address is when a quantum system is prepared in a combination of states known as a superposition. For example, a particle can be at both location A and B, or in the infamous thought experiment, Schrödinger’s quantum cat can be alive and dead at the same time. The problem is that when we observe or measure a superposition, we get but one result: our detector reports either “A” or “B,” not both; the cat would appear either very alive   or very dead.  Copenhagen Interpretation  This interpretation (or variants of it) has long been the party line for quantum physicists. The Schrödinger equation describes how a wave function evolves smoothly and continuously over time, up until the point when our big, clunky measuring apparatus intervenes. The wave function enables us to predict, say, there’s a 60% probability we’ll detect the particle at location A. After we detect it at A or B, we have to represent the particle with a new wave function that conforms with the measurement result.  Many Worlds Interpretation  Everett’s theory. Also known as the relative state formulation.  The superposition of the particle spreads to the apparatus, and to us looking at the apparatus, and ultimately to the entire universe. The components of the resulting superposition are like parallel universes: in one we see outcome A, in another we see outcome B. All the branches coexist simultaneously, but because they are completely non-interacting the “A” copy of us is completely unaware of the “B” copy and vice versa. 
Mathematically, this universal superposition is what the Schrödinger equation predicts if you describe the whole universe with a wave function.  What bothers people about this interpretation is its conclusion that we are perpetually dividing into multiple copies, which may have ghastly implications as well as being bizarre.  Bohmian Interpretation  Also known as the De Broglie–Bohm interpretation or the pilot wave interpretation.  This theory postulates that every particle not only has a wave function but also exists as an actual particle riding along at some precise but unknown location on the wave and being guided by it. How the wave guides the particle is described by a new equation that is introduced to accompany the standard Schrödinger equation. The randomness of quantum measurements comes about because we cannot know exactly where a particle started out. The theory was proposed by David Bohm in 1952 (a few years before Everett’s theory), extending a theory of Louis De Broglie’s from 1927.  Changing the Rules  Some theorists seek to find a mechanism that causes the “collapse” of the wave function from a superposition of possibilities to a single outcome. For example, Roger Penrose has proposed that gravitational effects may play this role. Other models, such as the Ghirardi-Rimini-Weber theory, introduce specific modifications to the Schrödinger equation. By differing from standard quantum theory, such models in principle might be falsifiable by experiment (or conversely, standard theory could be falsified in their favor).  Decoherence Theory  This is not an interpretation, but it is an important element of the modern understanding of quantum mechanics. It expands upon the kind of mathematical analysis that led Everett to his interpretation, because it analyzes the effect that stray quantum interactions with the surrounding environment have on a system in a superposition. The chief conclusion is that the almost unstoppable loss of information through these channels “decoheres” a quantum superposition, making it more like an ordinary classical state. It explains very well why we see the classical world that we do, and clarifies the requirements to keep quantum effects manifest in the lab.  Copenhagenists can point to decoherence as an explanation of what makes large classical systems different from small quantum systems (in general, large systems decohere much more readily and rapidly than tiny ones). Everettians can point to it as a more complete explanation of how the parallel branches form and become independent. But best of all, decoherence can be studied experimentally, and a very active area of quantum research is confirming it and exploring it in ever greater detail.  Consistent Histories  This scheme analyzes sequences of states of a system (which may include the whole universe), to find what questions can be consistently answered about the system, such as “was the particle at A or B at time T?” The measurement problem, however, is not resolved: the question of which histories actually happen remains a matter of probabilities just as with the standard Copenhagenist approach.  Is it Real?  In some respects the decision between a Copenhagenist and an Everettian viewpoint boils down to a basic question: Is the wave function real or is it just information? 
If it is “real”—in some sense the universe really consists of quantum waves propagating around—then one tends to be driven to an Everettian viewpoint; the “collapses” that wave functions must undergo to produce the one reality that we see are too problematic. But if the wave function is just information, for example, a representation of what an experimenter knows about a system, then that “collapse” is completely natural. Imagine the standard classical scenario of flipping a coin. Before you look at it, your knowledge of its state is “50% chance of heads, 50% chance of tails.” When you look, your knowledge instantaneously changes to, say, “100% heads, 0% tails.”  “Shut Up and Calculate!”  Some physicists talk of the “shut up and calculate interpretation”: ignore the philosophical puzzle of how the classical and the quantum coexist and use the Schrödinger equation (and all the subsequent mathematical developments of quantum theory) to compute quantities of practical interest. These include energy levels of atoms; predictions for particle collider experiments; the properties of semiconductors, superconductors and other materials; and so on. It is all that most physicists ever need.  Transactional Interpretation  This interpretation has waves traveling forward and backward in time, setting up standing waves, for example between an emitter of a particle and its subsequent detector. It was proposed by John G. Cramer (physicist and science fiction author) in 1986 and claimed by him to provide insight into puzzles such as wave function collapse and the Schrödinger’s cat experiment. These insights have led Cramer to pursue an experiment to try to demonstrate the sending of signals backward in time (which most quantum physicists will tell you is impossible if standard quantum mechanics is correct).
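As one concrete instance of the “shut up and calculate” attitude described above (strictly an illustrative sketch, independent of any interpretation), the hydrogen energy levels E_n = -13.6 eV / n² and a transition wavelength can be computed in a few lines; the constants used are standard textbook values.

```python
# Hydrogen energy levels E_n = -13.6 eV / n^2 and the wavelength of a transition
RYDBERG_EV = 13.605693          # Rydberg energy in electron volts
H_EV_S = 4.135667696e-15        # Planck constant in eV*s
C_M_S = 2.99792458e8            # speed of light in m/s

def energy(n):
    return -RYDBERG_EV / n**2

for n in range(1, 5):
    print(f"E_{n} = {energy(n):.3f} eV")

# Lyman-alpha line: n = 2 -> n = 1
delta_e = energy(2) - energy(1)              # ~10.2 eV carried off by the photon
wavelength_nm = H_EV_S * C_M_S / delta_e * 1e9
print(f"Lyman-alpha wavelength ~ {wavelength_nm:.1f} nm")   # ~121.5 nm
```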
From Citizendium, the Citizens' Compendium

Chemistry is the science of materials. Chemists consider that all of the materials in the world are matter, primarily made up of atoms. The combination of at least two atoms connected by a chemical bond leads to molecules. Ions are derived from atoms or molecules by loss or gain of one or more electrons, leading to charged particles. Salts are composed of cations (positively charged ions) and anions (negatively charged ions), so that the substance or material is neutrally charged (without net charge). Chemists use their view of matter at the atomic to molecular level to explain how different materials interact, and how they change under varying conditions. Chemists can induce practical changes of substances, and make new compounds including drugs, explosives, cosmetics and foods.

Chemical synthesis of materials involves bringing together substances in bulk under conditions in which they can interact to give different substances. These interactions are called chemical reactions, and always involve rearrangement of electrons around the reacting atoms of each molecule. The formation of bonds between atoms or molecules means sharing electrons between the composite atoms, and this can transform one substance into another, such as the synthesis of water (H2O) from two gases: hydrogen (H2) and oxygen (O2). Some chemical interactions require energy: the substances must be mixed and heated for a chemical reaction to occur. These energy-requiring reactions are called endothermic reactions. Other chemical reactions release heat; these energy-releasing reactions are called exothermic reactions. On a molecular level, reactions can also be initiated by the addition or removal of electrons using electromagnetic radiation (light).

[Image: Laboratory, Institute of Biochemistry, University of Cologne]

Chemistry is about the electrical or electrostatic interactions of matter. These interactions might be between two substances, or between matter (electrons) and energy, especially in conjunction with the First Law of Thermodynamics. Traditionally, chemistry involves interactions between the electrons of substances in chemical reactions, where one or more substances are changed into other substances. Sometimes these reactions are facilitated and enhanced in efficiency by a catalyst, which might be another chemical substance (such as sulfuric acid catalyzing the electrolysis of water), or a non-material phenomenon (such as electromagnetic radiation in photochemical reactions). Traditional chemistry also deals with the analysis of chemicals both in and apart from a reaction, as in spectroscopy.

Ordinary matter consists of atoms and the subatomic components that make up atoms: protons, electrons and neutrons. Atoms can be combined to produce more complex forms of matter such as ions, molecules or crystals. The structure of the world we experience, and the properties of the matter we interact with, are determined by the properties of chemical substances and their interactions. Steel is harder than iron due to the incorporation of carbon, resulting in a more rigid crystalline lattice. Wood burns or undergoes rapid oxidation because it can react spontaneously with oxygen in a chemical reaction above a certain temperature.
Substances tend to be classified in terms of their energy or phase as well as their chemical compositions. The three phases of matter are solid, liquid, and gas. Solids at room temperature have low kinetic energy and are fixed structures which can resist gravity and other weak forces attempting to rearrange them, due to their tight bonds. Liquids have weaker bonds, with no fixed structure, and they flow with gravity. Gases have no bonds and act as free particles. Water (H2O) is a liquid at room temperature because its molecules are bound by intermolecular forces called hydrogen bonds. Hydrogen sulfide (H2S), on the other hand, is a gas at room temperature and pressure, as its molecules are bound only by weaker dipole-dipole interactions. Due to the higher electronegativity of oxygen compared to sulphur, the individual atoms in water tend to carry larger partial charges than those in hydrogen sulphide, leading to stronger hydrogen bonding in H2O than in H2S. Hydrogen bonds in water have enough energy to keep the water molecules from separating from each other but not enough to stop them from 'sliding around', making it a liquid at temperatures between 0°C and 100°C at sea level. Lowering the temperature or energy allows tighter, more organized bonds to form, creating a solid and releasing energy. In this, water (H2O) behaves anomalously, as its greatest density as a liquid occurs at 4 °C, not at its freezing point. Adding energy equal to the heat of fusion will melt the ice, although the temperature will not change until all the ice is melted. Increasing the temperature of the water will cause boiling (see heat of vaporization) when there is enough energy to break the weak polar bonds at 100 °C, allowing the H2O molecules to disperse enough to be a gas. Note that in each case, energy is required to break the hydrogen bonds, as well as to further separate the molecules from each other.

History of chemistry

For more information, see: History of chemistry.

The roots of chemistry can be traced to the phenomenon of burning. Fire was a mystical force that transformed one substance into another and was of primary interest to mankind. It was fire that led to the discovery of iron and glass. After gold was discovered, and became used as a precious metal, many people were interested to find a method that could convert other substances into gold. This led to the protoscience called Alchemy. Alchemists discovered many chemical processes that led to the development of modern chemistry. Chemistry as we know it today was invented by Antoine Lavoisier with his law of conservation of mass in 1783. The discovery of the chemical elements has a long history, culminating in the creation of the periodic table of the chemical elements by Dmitri Mendeleev. The Nobel Prize in Chemistry, created in 1901, gives an excellent overview of chemical discovery in the past 100 years.

The chemical industry represents an important economic activity. The global top 50 chemical producers in 2004 had sales of 587 billion dollars with a profit margin of 8.1% and research and development spending of 2.1% of total chemical sales.[1]

Subdisciplines of chemistry

• Organic chemistry is the study of the preparation and characterization of organic carbon compounds. The number of known organic compounds ranges in the millions, owing to carbon's distinctive ability to form long chains and rings while simultaneously bonding with hydrogen or peripheral functional groups.
Organic chemistry is of great commercial importance, providing the underpinnings for the lucrative pharmaceutical and synthetic polymer industries.

• Nuclear chemistry is the study of the chemistry of radioactive materials, the chemical effects of radiation and all chemistry associated with nuclear equipment and processes. Modern transmutation is a large component of nuclear chemistry, and the table of nuclides is an important result and tool for this field.

Other areas of specialty within chemistry include astrochemistry, atmospheric chemistry, chemo-informatics, electrochemistry, environmental chemistry, flow chemistry or rheology, geochemistry, green chemistry, medicinal chemistry, organometallic chemistry, petrochemistry, pharmacology, photochemistry, phytochemistry, solid-state chemistry, sonochemistry, supramolecular chemistry, surface chemistry, and thermochemistry. Other technical fields with strong ties to chemistry include chemical engineering, materials science, medicine, metallurgy, molecular biology, molecular genetics, nanoscience or nanotechnology, pharmacology and many sub-disciplines within the field of physics. Social sciences with ties to chemistry include the study of science, technology, and society, and the history of science.

Fundamental concepts

For more information, see: IUPAC nomenclature.

Nomenclature refers to the system for naming chemical compounds. There are well-defined systems in place for naming chemical species. IUPAC nomenclature describes the complete structural information of small (monomeric) molecules. Organic compounds are named according to the organic nomenclature system. Inorganic compounds are named according to the inorganic nomenclature system. "Common names," or chemical names not based on the IUPAC nomenclature system, are frequently used for many substances. These names may be preferable as the IUPAC name may be very long (over 50 characters). Materials Safety Data Sheets require the listing of all common names as well as IUPAC names for a given substance.

For more information, see: Atom.

[Drawings: a common concept of an atom's structure (Lawrence Berkeley National Laboratory); the modern concept of the quantum atom]

For more information, see: Chemical elements and Periodic Table of Elements.
For more information, see: Ion.
For more information, see: Chemical compound.
For more information, see: Molecule.

A molecule is a combination of two or more atoms in a definite arrangement held together by chemical bonds. It is the smallest indivisible portion of a pure compound or element that retains a set of unique chemical properties.

For more information, see: Chemical substance.

Electron atomic and molecular orbitals

For more information, see: Chemical bond.

States of matter

For more information, see: Phase (matter).

Chemical reactions

For more information, see: Chemical reaction.

Quantum chemistry

For more information, see: Quantum chemistry.

Quantum chemistry mathematically describes the fundamental behavior of matter at the molecular scale. It is, in principle, possible to describe all chemical systems using this theory. In practice, only the most simple chemical systems may qualitatively be investigated in purely quantum mechanical terms, and approximations must be made for most practical purposes (e.g., Hartree-Fock, post Hartree-Fock or density functional theory; see computational chemistry for more details).
Hence a detailed understanding of quantum mechanics is not primarily necessary for most chemistry, as the important implications of the theory (principally the orbital approximation) can be understood and applied in simpler terms. In quantum mechanics (with several applications in computational chemistry and quantum chemistry), the Hamiltonian, the operator corresponding to the total energy of a particle, can be expressed as the sum of two operators, one corresponding to kinetic energy and the other to potential energy. A particle with no electric charge and no spin, on which no forces act, is described by the free-particle Schrödinger wave equation. Solutions of the Schrödinger equation for the hydrogen atom give the form of the wave function for atomic orbitals, and the relative energies of, say, the 1s, 2s, 2p and 3s orbitals. The orbital approximation can be extended to understand the other atoms, e.g. helium, lithium and carbon.

Chemical laws

For more information, see: Chemical law.

The most fundamental concept in chemistry is the law of conservation of mass, which states that there is no detectable change in the quantity of matter during an ordinary chemical reaction. Modern physics shows that it is actually energy that is conserved, and that energy and mass are equivalent or related; a concept which becomes important in nuclear chemistry. Conservation of energy leads to the important concepts of equilibrium, thermodynamics, and kinetics.

Main article: Etymology of alchemy

The word chemistry comes from the earlier study of alchemy, from the old French alkemie and the Arabic al-kimia: "the art of transformation." An alchemist was called a 'chemist' in popular speech, and later the suffix "-ry" was added to this to describe the art of the chemist as "chemistry". Alchemy in turn is thought to possibly derive either from the Greek word chemeia (χημεία) meaning "cast together", "pour together", "weld", "alloy", etc. (from khumatos, "that which is poured out, an ingot"), or from the Coptic name for Egypt kēme, or alternately, from Persian kimia meaning "gold".
Advanced Quantum Mechanics Fall, 2013 Building on Professor Susskind’s previous Continuing Studies courses on quantum mechanics, this course will explore the various types of quantum systems that occur in nature, from harmonic oscillators to atoms and molecules, photons, and quantum fields.  Students will learn what it means for an electron to be a fermion and how that leads to the Pauli exclusion principle. They will also learn what it means for a photon to be a boson and how that allows us to build radios and lasers. The strange phenomenon of quantum tunneling will lead to an understanding of how nuclei emit alpha particles and how the same effect predicts that cosmological space can “boil.”  Finally, the course will delve into the world of quantum field theory and the relation between waves and particles. Lectures in this Course 1. 1 Review of quantum mechanics and introduction to symmetry The course begins with a brief review of quantum mechanics and the material presented in the core Theoretical Minimum course on the subject. The concepts covered include vector... [more] 2. 2 Symmetry groups and degeneracy Professor Susskind presents an example of rotational symmetry and derives the angular momentum operator as the generator of this symmetry. He then presents the concept of degenerate states, and shows that any two symmetries that do not commute imply... [more] 3. 3 Atomic orbits and harmonic oscillators Professor Susskind uses the quantum mechanics of angular momentum derived in the last lecture to develop the Hamiltonian for the central force coulomb potential which describes an atom.  The solution of the Schrödinger equation for this system leads... [more] 4. 4 Professor Susskind builds on the discussion of quantum harmonic oscillators from the last lecture to derive the higher order energy states and wave functions.  He then moves on to discuss spin states of particles, and introduces the Pauli matrices,... [more] 5. 5 Fermions: a tale of two minus signs Professor Susskind presents the quantum mechanics of multi-particle systems, and demonstrates that fermions and bosons are distinguished by the two possible solutions to the wave function equation when two particles are swapped.  When two particles... [more] 6. 6 Quantum field theory Professor Susskind introduces quantum field theory.  Excepting gravity, quantum field theory is our most complete description of the universe.  Each quantum field corresponds to a specific particle type, and is represented by a state vector... [more] 7. 7 Quantum field theory 2 Professor Susskind continues with the presentation of quantum field theory.  He reviews the derivation of the creation and annihilation operators, and then develops the formulas for the energy of a multi-particle system.  This derivation... [more] 8. 8 Second quantization Professor Susskind answers a question about neutrino mixing and relates the oscillating quantum states of a neutrino to a precessing electron spin in a magnetic field.  He then discusses a recent article about whether an electron is a sphere.  After... [more] 9. 9 Quantum field Hamiltonian Professor Susskind presents the Hamiltonian for a quantum field, and demonstrates how these Hamiltonians describe particle interactions such as decay and scattering. He then introduces the field theory for fermions by deriving the Dirac equation.... [more] 10. 10 Fermions and the Dirac equation Professor Susskind closes the course with the presentation of the quantum field theory for spin-1/2 fermions.  
This theory is based on the Dirac equation, which, when Dirac developed it in 1928, was the first theory to account fully for special... [more]
17 Equations that would change the world - Pythagoras's Theorem?

Written By Real Kevin Jay on Friday, July 27, 2012 | 8:34 AM

Few academic subjects evoke as polarized reactions as ‘Mathematics’ does. It is a dreary chore for most, an inescapable rite of passage that they have to endure during their growing years. Many others relish it with aplomb, finding the subject as gripping as a suspense novel or as intriguing as life itself. But it takes someone like Professor Ian Stewart to bridge this divide by unraveling the mysteries of a complex subject and presenting them in simplified English before millions of readers who are more excited by the recipe of a pie than the value of Pi.

If his attire (in the picture above) reminds you of Steven Spielberg’s iconic character Indiana Jones, it’s probably no strange coincidence. If the fictional Dr Jones is a University professor and, as the industry magazine Archaeology described him, a 'great diplomat for archaeology', so is Professor Stewart for Mathematics. With more than 80 books to his credit, Professor Stewart has popularized the subject like few others. His books on Mathematics have consistently made it to the ‘bestsellers’ lists; his creations even include three comic books on the subject. The Emeritus Professor of Mathematics at the University of Warwick is currently in the news for his latest book ‘17 Equations that Changed the World’. "Equations are the lifeblood of mathematics, science, and technology. Without them, our world would not exist in its present form," Stewart says.

According to Prof. Stewart, the following 17 equations have changed the world: Pythagoras's Theorem, Logarithms, Calculus, Newton's Law of Gravity, The Square Root of Minus One, Euler's formula for Polyhedra, Normal Distribution, Wave Equation, Fourier Transform, Navier-Stokes Equation, Maxwell's Equations, Second Law of Thermodynamics, Relativity, Schrödinger equation, Information Theory, Chaos Theory and the Black-Scholes Equation.

So what inspired him to write the book and are there any equations that could pull us out of the current global downturn? Professor Stewart spoke to Yahoo! India Finance Editor, Neeraj Gangal, in an exclusive interview:

What was the trigger that pushed you to write this one dedicated to ‘equations’?

A Dutch publisher, who has translated some of my books, was talking to my English publisher at a book festival, and asked him whether he knew of a popular book on mathematical equations. Not the nuts and bolts of the mathematics, but where they came from historically, what they did for humanity, what they’re used for now, and what they mean. He replied that there are a few nice books about equations, but not one like that. The more we thought about the idea, the more potential we saw in it. It was slightly dangerous to try to tackle equations head on, because many people find them intimidating. On the other hand, that’s what science popularisation should be about: making intimidating things comprehensible and friendly. So we delayed a couple of other books that we were planning to produce, to make time for this one. Judging by the response, it was the right choice!

How many of the 17 equations influence each one of us the most in our routine life - on a daily basis?

We ‘use’ about ten of them almost every day; not consciously, but they are behind the scenes, built into our technology, influencing our lives.
Engineers have to know about them, and the rest of us benefit without realising they’re present. Radio, TV, and wireless communications rely on Maxwell’s equations and the wave equation. The food we eat comes from crops that have been bred using the equations of statistics. Aircraft overhead, and our cars, involve the equations of aerodynamics. The Internet requires equations from information theory, computer chips use equations from quantum mechanics, digital cameras use the Fourier transform. I could continue for some time — design of buildings for earthquake protection, satellite navigation, communications satellites, design of bridges... Why are most youngsters intimidated by the very mention of Mathematics?  Over the years, mathematics has acquired a negative image. It’s not ‘cool’. It’s also demanding and unforgiving --- if the answer’s wrong, then it’s wrong, and no amount of clever argument can change that. In the USA a recent study shows that the mathematical abilities of mathematics teachers have declined. Basically, if you’re good at maths then you have a huge range of jobs available, of which teaching is just one. To teach maths well, you have to be confident about your own understanding of it. But many teachers aren’t. But even the best ones are heavily constrained by the need to teach to a specific syllabus, one that focuses too strongly on technique. I find that if young people are made aware of the way maths affects our lives, of its creative aspects, of how you can tackle new problems, and generally just enjoy the subject, then their attitudes become much more positive. Which equation, do you think, has had the greatest impact on human civilisation? Overall, the equation(s) behind calculus, worked out by Newton and Leibniz, with some predecessors like Fermat. In Newton’s hands, calculus became the key that opened up what he called the System of the World. How the Universe works. Ironically, he didn’t use calculus in his epic Principia Mathematica, but it influenced his thoughts. The mathematical physicists of Europe turned calculus into the basis of the whole of science --- heat, light, sound, waves, gravity. Then electricity and magnetism got in on the act as well. Many of my 17 equations were made possible by calculus; for instance, Maxwell’s equations, which among other things gave us radio and TV. In the current global financial gloom, which is the one equation that you would suggest bankers/ financial experts look at? I’d like them to stop looking for mathematical tricks that promise huge profits but don’t adequately reflect the realities of the market or the risks involved. I’d like them to consider stability and control of the financial sector, not just runaway surges that lead to meltdown when they go wrong. The work of Robert May and the Bank of England’s Andrew Haldane, on stability equations motivated by ecosystems, would be a good starting-point. Outside the world of equations altogether, bankers need to deal with an ingrained culture of dishonesty and greed (we’ve seen half a dozen examples in recent years). How difficult was it writing this book, considering you are as popular among readers from non-Mathematical backgrounds? I was confident that I could make the material accessible, even to non-mathematicians, because of the historical and cultural aspects. The equations were the characters in a drama, so to speak; the main action was the drama itself. The hardest part, in some ways, was to choose the equations. 
Not because there were too few, but because there were too many. My first attempt listed about 40. I wanted to do a thorough job on each equation, so the most I could sensibly handle was 20, preferably fewer. So I started throwing out anything that hadn’t made a really major impact on human history, combining related equations into one chapter, and so on. Then I took a deep breath and removed three or four for which the story wasn’t as strong. After that it was surprisingly easy to write. I learned a lot about history, and areas of human activity that I’ve not worked in myself, and that meant that a lot of the material was fresh, even to me. That generally helps with the writing.

About Professor Ian Stewart

Ian Stewart was educated at Cambridge (MA) and Warwick (PhD). He has honorary doctorates from Westminster, Louvain, Kingston, and the Open University. He is an Emeritus Professor and Digital Media Fellow in the Mathematics Department at Warwick University, with special responsibility for public awareness of mathematics and science. He has held visiting positions in Germany, New Zealand, Hong Kong, and the USA. His present field of research is the effects of symmetry on dynamics, with applications to pattern formation and chaos theory in areas including animal locomotion, fluid dynamics, mathematical biology, chemical reactions, electronic circuits, computer vision, quality control of wire, and intelligent control of spring coiling machines.
Physics Summer School Milky Way Galaxy over hills“The size and age of the Cosmos are beyond ordinary human understanding. Lost somewhere between immensity and eternity is our tiny planetary home. In the last few millennia we have made the most astonishing and unexpected discoveries about the Cosmos and our place within it. They remind us that human beings have evolved to wonder, that understanding is a joy, and that knowledge is a prerequisite to survival.”            Carl Sagan Physics attempts to explain the natural world, from the smallest quantum particles to the largest spiral galaxies, from the inner workings of your laptop computer to the dazzling explosion of a dying star. Physicists look at the big questions: Why are we here? How did the universe begin? How will it end? What is everything made from? Why is the past different from the future? Are we alone in the Universe? It is by asking these questions, and seeking answers, that human beings have begun to understand and appreciate the underlying fabric of reality. The Physics Summer School is an opportunity for students to explore some of the most exciting and challenging ideas in contemporary physics. Each course consists of a structured 5-day programme covering a selection of interesting and diverse topics, ranging from black holes and time travel to the search for exo-planets and life outside our solar system. This Summer School is particularly appropriate for students who may be considering further study of physics, mathematics or engineering at undergraduate level, or who are interested in related disciplines such as chemistry or computing. Physics Summer School – Part 1 focuses on Classical Mechanics and Astrophysics and is open to students aged 15-18. As part of this programme students will learn how to estimate the number of particles in the universe; design spaceships to Mars; search for exo-planets in distant solar systems; calculate the size of the observable universe; measure the temperature of the sun’s surface using a desk lamp; determine the composition of stars by analysing their light, and predict the fate of the universe using the Second Law of thermodynamics. A full schedule for this course can be seen here. Physics Summer School – Part 2 focuses on Quantum Mechanics and Relativity and requires that students have completed a minimum of one year of A-Level Mathematics (or equivalent) at the time of the course. As part of this challenging programme, students will learn how to use the Schrödinger equation to predict the energy levels of the Hydrogen atom; determine the properties of the quantum vacuum using Heisenberg’s uncertainty principle; measure the curvature of space and time using Einstein’s theory of relativity; calculate the expansion rate of the universe; explore the mysteries of dark matter; determine the Schwarzschild radius of black holes and explore the mysteries of ten-dimensional superstring theory. A full schedule for this course can be found here. Classes are small, typically containing twelve to fourteen students, all of whom will share a passion for physics and a curiosity to build on their existing knowledge and embrace new ideas. Classes will consist of a combination of lectures, group discussions, team games and problem sets, creating a comfortable environment for students to share ideas amongst their peers and to progress from their existing knowledge toward more challenging material. 
An expert Tutor will lead each session in a seminar format, but learning will be largely student-led wherever practical. ‘The Debate Chamber Physics course was both challenging and stimulating and one which provided a firm grounding on some cutting edge, modern physics. We were taught about both the very large and very small, from special and general relativity to quantum mechanics and the philosophical interpretations of the measurement problem. I would thoroughly recommend this course to anyone contemplating studying the subject at university or to anyone who just wants to learn some physics!’ Practical Details Part 1 of the course is open to all students aged 15-18, and Part 2 is open to students who have completed a minimum of AS Mathematics (or equivalent) at the time of the course. Part 1 of the course will take place 16th – 20th July (and repeated 13th – 17th August) 2018. Part 2 of the course will take place 23rd – 27th July (and repeated 20th – 24th August) 2018. The cost of the Physics Summer School is £495 per student for five days, or book both Parts of the course (if eligible) for £850. Please note that accommodation is not included, and must be arranged independently if required. To book a place or places at the Physics Summer School, or if you have any further questions, simply call on 0845 519 4827, email, or book online. Booking Form
• Open Access Nuclear Magnetic Resonance of Hydrogen Molecules Trapped inside C70 Fullerene Cages We present a solid-state NMR study of H2 molecules confined inside the cavity of C70 fullerene cages over a wide range of temperatures (300 K to 4 K). The proton NMR spectra are consistent with a model in which the dipole–dipole coupling between the ortho-H2 protons is averaged over the rotational/translational states of the confined quantum rotor, with an additional chemical shift anisotropy δHCSA=10.1 ppm induced by the carbon cage. The magnitude of the chemical shift anisotropy is consistent with DFT estimates of the chemical shielding tensor field within the cage. The experimental NMR data indicate that the ground state of endohedral ortho-H2 in C70 is doubly degenerate and polarized transverse to the principal axis of the cage. The NMR spectra indicate significant magnetic alignment of the C70 long axes along the magnetic field, at temperatures below ∼10 K. 1. Introduction Fullerenes consist of symmetrical carbon-only cages surrounding a nanoscale cavity.1 Synthetic routes have been developed for inserting small molecules such as H2 and H2O into the cavity, which may then be resealed.24 This “molecular surgery” procedure has been successfully completed on C70, which consists of 70 carbon atoms arranged in an ellipsoidal shape resembling a rugby ball, with point group symmetry D5h. In the resultant product, most of the fullerenes encapsulate one hydrogen molecule, obeying the formula H2@C70 (Figure 1), although there is a small percentage, about 3 %, of doubly occupied cages.5 Figure 1. The endohedral fullerene H2@C70 shown in a) equatorial and b) polar views. The carbon cage and the endohedral H2 are shown by a ball-and-stick representation. The endohedral hydrogen molecules behave as molecular quantum rotors and exhibit quantization of all motional degrees of freedom (vibrations, rotations and translations).6, 7 The homogeneity of the trapping sites and the relative isolation of the guest molecules make these systems excellent targets for solid-state spectroscopic studies815 and quantum mechanical calculations.1618 Comparison with high-resolution spectroscopic data allows refinement of the parameters for the non-bonded interaction of the hydrogen molecule with the carbon surface.13, 19 H2 also displays spin isomerism. According to the Pauli principle the molecular wave function is antisymmetric with parity −1 with respect to the exchange of the two identical protons. In the electronic ground state of H2 the parity of the molecular wave function is (−1)(I+J) where I is the nuclear spin and J is the angular momentum quantum number for rotations around the center of mass. The nuclear spin singlet (I=0, para-H2) is combined with even-J functions, while the nuclear spin triplet (I=1, ortho-H2) is combined with odd-J functions. The small moment of inertia leads to a large separation between the rotational energy levels of free H2, while the spin mixing terms are relatively small. As a result, the interconversion between spin isomers is slow in the absence of an external spin catalyst. Spin-isomer conversion in dihydrogen endofullerenes has been induced by molecular oxygen,20, 21 covalently linked magnetic switches22 and by photo-excitation of electronic triplet states.23 In this paper we report 1H NMR lineshapes and spin-lattice relaxation times T1 for a powder sample of H2@C70. 
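For orientation, the rotational energy scale behind the ortho/para classification just described can be sketched with the textbook rigid-rotor formula; the rotational constant quoted below is an approximate literature value for free H2, not a number taken from this work:

$$
E_J = B\,J(J+1), \qquad B \approx 59\ \mathrm{cm}^{-1}\ \text{for free H}_2,
$$

so that the gap between the para ground state and the ortho ground state is

$$
E_{J=1} - E_{J=0} = 2B \approx 118\ \mathrm{cm}^{-1} \approx 170\ \mathrm{K},
$$

with para-H2 (I = 0) restricted to even J = 0, 2, 4, ... and ortho-H2 (I = 1) to odd J = 1, 3, 5, ...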
The 1H NMR spectra display temperature-dependent lineshapes, with no evidence of orthopara conversion for the endohedral H2 molecules. The lineshapes are consistent with a model in which the dipole–dipole coupling between the nuclei is averaged over the accessible translational–rotational wavefunctions and in which there is also a significant chemical shift anisotropy (CSA) interaction. The proton CSA is found to be almost temperature-independent and is attributed to the effect of the carbon cage. This conclusion is supported by DFT calculations of the chemical shift tensor field inside the cage. The confinement potential of H2 inside C70 reflects the D5h point-group symmetry of the cage. The non-spherical symmetry leads to a splitting of the three-fold degenerate ortho-H2 rotational ground state into two levels with degeneracies 1 and 2. Studies of the five-dimensional quantum mechanics of H2 in a modelled cage potential have predicted that the ground state is non-degenerate, while the upper sublevel is doubly degenerate.19 However, as discussed below, the experimental NMR data indicate that the ground state of ortho-H2 in C70 is two-fold degenerate, with a non-degenerate upper sublevel. This energy ordering is consistent with recent infrared spectroscopic data.24 An unexpected effect is detected in the 1H NMR spectra of H2@C70 at temperatures below ∼10 K. Changes in the lineshape indicate significant alignment of the C70 long axes along the magnetic field, indicating that either the C70 molecules rotate on their crystal lattice points to align along the magnetic field, or possibly that entire domains or crystallites reorient with the field. These results show that the endohedral hydrogen molecules may act as low-temperature “NMR indicators” which report on the behaviour of the enclosing carbon cages. Materials and Methods H2@C70 was synthesized via “molecular surgery” according to the method of Komatsu and Murata.3, 5 A sufficiently large orifice was opened in each C70 cage by a series of controlled reactions. Molecular hydrogen was forced into the open cages and remained trapped when ordinary conditions were restored. The holes were sealed by another series of chemical reactions without escape of the hydrogen. High-performance liquid chromatography was used to remove the residual empty fullerenes leaving a sample with ∼100 % of the fullerenes filled. About 25 mg of H2@C70 was dissolved in 3 mL of CS2. H2@C70 was precipitated by adding the solution to 50 mL of pentane while stirring, and then centrifuging. The precipitated H2@C70 was separated and heated at 60 °C in vacuum for 3 days, at 80 °C for 3 more days and at 180 °C for further 3 days. All results were obtained on 3 mg of H2@C70 in a low-proton-content Pyrex tube, evacuated for approximately 1 hour at 80 °C and flame-sealed. NMR Experiments The proton spectrum of the static sample, shown in Figure 2, was obtained at room temperature in a magnetic field of 14.1 T using a Bruker AVANCE-II+ spectrometer in Southampton and a home-built NMR probe with a solenoid coil. A single 90° pulse with duration 4.5 μs was used to excite the proton free induction decay. The NMR signal was detected after a ring-down delay of 5 μs. The spectrum is an average of 4 transients with an inter-pulse delay of 5 s. Figure 2. 1H NMR spectrum of a solid sample of H2@C70 at room temperature in a magnetic field of 14.1 T, acquired under static conditions (i.e. without sample rotation). 
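The pulse-acquire scheme just described (a single 90° pulse, detection of the free induction decay, Fourier transformation) can be mimicked with a toy calculation. The peak positions below are taken from the text, while the Larmor frequency, linewidths, relative amplitudes and decay model are assumed for illustration only.

```python
import numpy as np

# Toy pulse-acquire simulation: two exponentially decaying signals at the chemical
# shifts reported in the text (-24.7 ppm endohedral H2, +1.2 ppm impurity), Fourier
# transformed to give a frequency-domain spectrum.  The Larmor frequency, linewidths,
# relative amplitudes and dwell time are all assumed values.

f0 = 600.0e6                    # approximate 1H Larmor frequency at 14.1 T, in Hz
shifts_ppm = [-24.7, 1.2]       # peak positions taken from the text
widths_hz = [200.0, 800.0]      # assumed linewidths
amps = [1.0, 2.0]               # assumed relative amplitudes

dwell = 5e-6                    # assumed dwell time (s); spectral width = 1/dwell
n_pts = 8192
t = np.arange(n_pts) * dwell

fid = np.zeros(n_pts, dtype=complex)
for ppm, lw, a in zip(shifts_ppm, widths_hz, amps):
    offset_hz = ppm * 1e-6 * f0                         # shift relative to the carrier
    fid += a * np.exp(2j * np.pi * offset_hz * t) * np.exp(-np.pi * lw * t)

spectrum = np.fft.fftshift(np.fft.fft(fid))
freq_hz = np.fft.fftshift(np.fft.fftfreq(n_pts, d=dwell))
ppm_axis = 1e6 * freq_hz / f0

print(f"tallest peak at {ppm_axis[np.argmax(np.abs(spectrum))]:.1f} ppm")   # ~ -24.7 ppm
```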
The spectrum was obtained by the Fourier transformation of NMR signals induced by 90° radio-frequency pulse. The chemical shift scale was calibrated by setting the peak of adamantane in the solid phase at 1.8 ppm. The chemical shift of the H2@C70 peak is −24.7 ppm while the signal of the protonated impurity has a maximum at 1.2 ppm. All other NMR data were obtained using a high-field FT-NMR spectrometer with a magnetic field of 8.5 T at the National Institute of Chemical Physics and Biophysics in Tallinn (Estonia). The instrumental apparatus is partly described in refs. 8, 9. The experiments were performed on a static sample using a home-built cryogenic NMR probe with a radiofrequency solenoid coil perpendicular to the static magnetic field. The rf fields gave rise to a 1H nutation frequency of ∼250 kHz. The temperature was monitored by using a calibrated LakeShore Cernox sensor placed close to the sample and controlled with an accuracy of ±0.1 K down to 4.3 K. The pulse sequence for acquisition of the variable-temperature NMR data is shown in Figure 3. This consisted of a saturation comb, a variable recovery delay τd, and a solid echo sequence of two 90° pulses, followed by acquisition of the free-induction decay (FID). The saturation comb ensured a reproducible initial condition for each NMR pulse sequence and consisted of 120 90° pulses separated by delays of 500 μs. All 90° pulses had a duration of between 1.2 and 0.9 μs, which ensured approximately uniform excitation over the spectral bandwidth. Figure 3. Pulse sequence used for variable-temperature 1H NMR on H2@C70. A comb of 90° pulses is used to saturate the magnetization, followed by a variable delay τd for recovery of the longitudinal magnetization. A solid echo sequence composed of two 90° pulses, with a relative phase shift of 90°, refocusses the inhomogeneous dephasing caused by the intramolecular dipole–dipole interaction for isolated spin-1/2 pairs. Signal acquisition is started at the top of the echo. The solid echo block consisted of two 90° pulses with a relative phase shift of 90°, separated by a delay of 100 μs. Signal acquisition was initiated 100 μs after the second pulse. In the case of isolated homonuclear 2-spin-1/2 systems, in which the linewidth is dominated by the orientation-dependent intramolecular dipole–dipole interaction, the strong inhomogeneous decay of the NMR signal between the two 90° pulses is accurately reversed after the second pulse. Observation of the signal at the peak of the spin echo therefore allowed observation of the rapidly decaying initial part of the free-induction decay, while partially suppressing the signals from protonated impurities, which do not refocus accurately, since they do not originate with isolated spin pairs. This procedure was used before in the study of endohedral hydrogen-fullerene complexes.8, 9 The integrated area of the NMR signals was monitored as a function of the recovery delay τd, and fitted to an exponential function, in order to determine the spin-lattice relaxation time constant T1. Typically, 32 experiments were recorded for each T1 recovery curve. After making rough estimates of T1 through preliminary studies, fully relaxed NMR spectra were obtained by using recovery delays τd which are long compared to T1. Computational Methods The absolute chemical shielding tensor field inside the C70 cage (Figure 4) was calculated using the following procedure in Gaussian 09.25 The energy minimum geometry was obtained with the DFT M06/cc-pVDZ method. 
GIAO DFT M06/cc-pVDZ calculations of chemical shielding tensors were then performed repeatedly for 2500 randomly selected positions of a ghost atom inside and outside the cage, corresponding to 50 000 spatial data points after taking into account the D5h symmetry of the cage. The shielding tensors were then interpolated using cubic splines onto a flat 3D grid (200 points in each dimension) and the resulting tensor field cubes were used for the average shielding tensor estimates and plotting. Figure 4. Longitudinal and equatorial plane cross-sections of absolute chemical shielding tensor field (in ppm, computed using GIAO DFT M06/cc-pVDZ method in Gaussian09) inside an empty C70 cage at the Born–Oppenheimer energy minimum geometry. Upper panels: isotropic chemical shielding σiso=(σXX+σYY+σZZ)/3; lower panels: chemical shielding anisotropy σCSA=σZZσiso, where Z indicates the long axis of the cage. The magnetic susceptibility tensor of a C70 molecule was calculated for an empty cage at four different levels of theory (GIAO M06/cc-pVDZ, GIAO M06/cc-pVTZ, CSGT M06/cc-pVDZ, CSGT M06/cc-pVTZ) in Gaussian 09, with similar results for the eigenvalues of the susceptibility tensor (to within 10 %) in all cases. The choice of the M06 exchange-correlation functional26 in all calculations is dictated by its superior performance with dispersion interactions that are expected to be significant in an extended and strained aromatic system such as the C70 fullerene cage. Another reason is relatively accurate excited state energies, which determine the accuracy of magnetic shielding calculations because they appear in perturbation theory denominators.27 After the shielding tensor field was obtained, the average anisotropy of the chemical shielding induced by the cage was obtained as follows: the spatial volume accessible to the hydrogen molecule inside the fullerene cage was estimated by taking the difference between the geometrical volume of the cage and the union of the carbon atom spheres with an assigned van der Waals radius of 1.70 Å. The average of the chemical shielding tensor field, obtained from DFT calculations as described above, was calculated over the resulting “accessible” volume and its anisotropy computed as a difference between the component along the axis of the cage and the isotropic component. 2. Results 2.1. Room-Temperature NMR The room-temperature 1H spectrum of the static sample, obtained by a single 90° pulse, is shown in Figure 2. This displays two resolved peaks at −24.7 ppm and +1.2 ppm, with an amplitude ratio of 1:2. The narrow −24.7 ppm peak is assigned to the endohedral protons of H2@C70, since the unusual chemical shift is similar to the −23.97 ppm shift for H2@C70 in solution.5 The broader signal at +1.2 ppm is attributed to protonated impurities, probably from occluded solvent molecules. It was verified that this peak is not due to background signals from the probe or the Pyrex tube. We could not resolve any signals from doubly-occupied C70 cages, which are observed in solution at −23.80 ppm.5 It is unusual to obtain resolved proton spectra in the solid state, without the assistance of magic-angle spinning or homonuclear dipolar decoupling. The strong negative chemical shift of the endohedral H2 molecules, and the low proton density, make this sample a special case. 2.2. Variable-Temperature NMR Figure 5 shows the 1H NMR spectra acquired at a range of temperatures, using the saturation-recovery solid-echo pulse sequence shown in Figure 3. Figure 5. 
1H spectra of H2@C70 at 8.5 T. For each temperature the experimental spectrum is shown in black with the fitted line shape in gray. The asterisks denote signals from the protonated impurity. Each experimental spectrum is normalized to its own maximum intensity. All spectra are averages of 16 transients with an inter-scan delay between 10 and 20 times longer than the measured T1 for the endohedral hydrogen. The 297 K spectrum (top left in Figure 5) is similar to the single-pulse spectrum in Figure 2, but with a greatly reduced impurity peak (indicated by the asterisk). This is because the 90°–90° echo sequence strongly suppresses the proton impurity signals, as discussed above. The endohedral peak gets broader and starts showing a structure below 225 K. At 160 K the line shape develops into an asymmetric two-horn pattern with two shoulders. A symmetric two horn pattern is typical of powder spectra for randomly oriented isolated homonuclear spin pairs as originally observed by Pake.28 The spectral asymmetry is attributed to an axially symmetric chemical shift anisotropy interaction, collinear with the magnetic proton-proton dipolar interaction between the two protons. As discussed below, the spectra are consistent with a homonuclear dipole–dipole interaction that increases in magnitude with temperature, while the chemical shift anisotropy contribution is almost temperature-independent. As result the spectra become more symmetrical at low temperature when the dipole–dipole term dominates. The impurity peak (indicated by an asterisk) decreases in amplitude (relative to the endohedral peak) over the temperature range between 230 and 40 K, only to reappear at lower temperatures. There are two reasons for this: 1) the relaxation time T1 of the endohedral protons is unusually short over this temperature range. This leads to an enhanced intensity of the endohedral proton peak relative to the impurity peak, since the proton magnetization of the impurity fails to recover fully after the saturation comb, while the rapidly relaxing endohedral proton magnetization recovers almost completely. 2) the endohedral proton peak becomes much broader at low temperatures, due to the strong intramolecular dipole–dipole coupling, while the width of the impurity peak does not have a strong temperature dependence. The endohedral H2 lineshape at 15.0 K resembles closely the classical Pake form, with strong inner horns (due to dipolar interaction tensors with principal axes perpendicular to the magnetic field), and weaker shoulders (due to dipolar interaction tensors with principal axes parallel to the magnetic field). However, below 15.0 K, an unusual intensity perturbation is clearly seen. The “parallel” shoulders gain strongly in intensity as the temperature is decreased, at the expense of the “perpendicular” horns. At the lowest temperature of 4.4 K, the shoulders are so strongly enhanced that they form a pair of outer horns of greater amplitude than the inner horns of the normal Pake pattern. As discussed below, we attribute this effect to magnetic orientation of the C70 cages at low temperature. 2.3. Spin-Lattice Relaxation The 1H nuclear spin-lattice relaxation was measured at each temperature by following the recovery of the NMR signal as a function of the delay τd using the sequence shown in Figure 3. At temperatures below ∼100 K, the spin-lattice relaxation is clearly anisotropic. 
The magnetization recovery of H2 molecules encapsulated in C70 cages with long axes perpendicular to the field (the “horns”) is significantly faster than that of H2 molecules in C70 cages with long axes parallel to the field (the “shoulders”). In both cases there is a good fit to a single-exponential recovery curve. The T1 values for H2 encapsulated in cages with long axes parallel and perpendicular to the magnetic field are plotted separately against temperature in Figure 6. The perpendicular and parallel relaxation time constants converge at a temperature of ∼70 K. Figure 6. Temperature dependence of the 1H spin-lattice relaxation time T1 for H2@C70 at a magnetic field of 8.5 T, on a log–log scale. Triangles: relaxation time constant of the endohedral H2 (spectral integral) at temperatures T>100 K. Squares: relaxation time constant of the “perpendicular horns” at temperatures T<100 K. Squares: relaxation time constant of the “parallel shoulders” at temperatures T<100 K. Dashed lines: best fits of the relaxation time constants to the equation T1(T)=ATa+BTb. Grey dashed line: best fit to the perpendicular horn data, with A=16.1 s, B=0.02 ms, a=−1.9, b=1.4. Black dashed line: best fit to the parallel shoulder data, with A=61.0 s, B=0.02 ms, a=−2.1, b=1.5. At temperatures above ∼100 K, the spectral features are less distinct, and there is also overlap with the protonated impurity peak. The recovery of the complete spectral integral was analyzed in this regime, and was found to be biexponential. The fast-relaxing component was attributed to the endohedral hydrogen, and the slower component to the protonated impurity. The endohedral relaxation time constants are shown in Figure 6. As shown in Figure 6, the T1 values for the endohedral H2 are roughly proportional to T−2 in the low-temperature regime, and proportional to T+1.5 in the high-temperature regime. 3. Discussion 3.1. Spatial Quantization The endohedral H2 molecule is a quantum rotor confined to the interior of the C70 cage. The general features of the quantum dynamics of confined H2 in a symmetric potential have been discussed.29, 30 The position and orientation of the rotor is defined by the vector R, which describes the position of the H2 centre of mass relative to the centre of the cage, and the internuclear vector r. The Born–Oppenheimer quantum dynamics of the confined H2 molecule has six degrees of freedom: three to describe the location of the centre of mass, two for the orientation of the confined molecule, and one for the internuclear distance. The spatial quantum state of the confined H2 molecule may be described by a set of six quantum numbers. For example, an analysis of the infrared spectrum of H2@C6012 used the following quantum numbers: v, N, L, J, Λ, MΛ, where v is a vibrational quantum number, N is a translational quantum number, L is a quantum number for the quantized orbital motion of the molecule inside the cavity, J is a quantum number for the molecular rotation, and Λ, MΛ are quantum numbers for the coupled rotational and orbital angular momentum. Since C70 has lower symmetry than C60, not all of these quantum numbers are good quantum numbers for the spatial eigenfunctions confined inside C70. Nevertheless, it remains true that the spatial Schrödinger equation for the confined molecule has six degrees of freedom. In the discussion below, a spatial stationary state is denoted Ψn(R,r), where the spatial quantum numbers are denoted by the collective symbol n. 
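Before continuing, a brief aside on the relaxation fits quoted in the caption of Figure 6: the fitted form T1(T) = A·T^a + B·T^b can be evaluated directly with a few lines of code. The parameters below are the quoted "perpendicular horn" values, and the comparison with the minimum reported later in the Discussion is only a rough consistency check.

```python
import numpy as np

# Evaluate the fitted relaxation curve T1(T) = A*T**a + B*T**b using the
# "perpendicular horn" parameters quoted in the Figure 6 caption
# (A = 16.1 s, a = -1.9, B = 0.02 ms, b = 1.4) and locate its minimum.

A, a = 16.1, -1.9          # seconds, dimensionless exponent
B, b = 0.02e-3, 1.4        # 0.02 ms expressed in seconds

T = np.linspace(5.0, 300.0, 10000)        # temperature grid in kelvin
T1 = A * T**a + B * T**b                  # relaxation time in seconds

i_min = np.argmin(T1)
print(f"T1 minimum ~ {T1[i_min]*1e3:.1f} ms at T ~ {T[i_min]:.0f} K")
# Gives a minimum of roughly 13 ms near 68 K, consistent with the value of
# ~14 +/- 3 ms at ~70 K quoted in the Discussion section.
```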
In addition, there are two quantum numbers I and MI∈{−I,−I+1…+I} for the nuclear spin angular momentum. Proton NMR spectra are generated by ortho-H2 molecules in the state I=1, which have odd values of the rotational angular momentum J. The Schrödinger equation for H2 inside C70 has been solved numerically using a confining potential derived from a spectroscopically optimized carbon–hydrogen interaction.19 The symmetry and the degeneracy of the spatial energy levels conform to the irreducible representations of the symmetry point group D5h of C70.31 At room temperature and below, only the J=1 states of ortho-H2 are significantly populated. The ortho-H2 J=1 ground state, which is triply degenerate in the case of H2@C60, is split in the case of H2@C70 into a non-degenerate A2′′ symmetric level, corresponding to a rotating hydrogen molecule longitudinally polarized with respect to the C70 long axis, and a doubly degenerate E1′ symmetric level, corresponding to transverse rotational polarization with respect to the C70 long axis. The spatial wavefunction of H2 in the A2′′ state has the form of a pz atomic orbital, oriented along the long axis of the cage (see Figure 7). The spatial wavefunctions of the degenerate E1′ states may be represented either as transverse px and py orbitals, or as complex superpositions of those orbitals, giving rise to torus-like complex wavefunctions (see Figure 7). Figure 7. The two lowest energy levels of ortho-H2@C70 ordered after the experimental findings, see section 4.3.2. The labels E1′ and A2′′ indicate the irreducible representations of the group D5h to which the levels belong. The wavefunctions of the confined hydrogen are represented by the torus-shaped and pz-shaped orbitals inside the fullerene, respectively. {X,Y,Z} denotes the cage axis system in which the Z axis is directed along the principal axis of C70. Numerical analysis of the spatial Schrödinger equation for H2@C70 predicts that the non-degenerate A2′′ level is ∼0.9 meV lower in energy than the doubly-degenerate E1′ level.19 However, as discussed below, the NMR results support the opposite ordering, with a doubly-degenerate ground level, and a non-degenerate upper level. The lowest energy levels of H2 in C70, and their associated wavefunctions, are shown with the experimentally supported energy ordering in Figure 7. 3.2. Spin Hamiltonian The NMR spectrum is obtained from an effective spin Hamiltonian averaged over all the populated energy levels since lattice modes induce fast transitions between the molecular states with respect to the timescale of the spin interactions. The D5h symmetry of the cage imposes uniaxial symmetry on the average chemical shift tensor and the dipole–dipole 1H–1H coupling tensor. The unique principal axes of both of these are tensors are parallel to the C70 long axis, which is assumed to subtend an angle β with respect to the applied magnetic field. In a powder sample, β is randomly distributed over the ensemble of C70 molecules. The effective spin Hamiltonian for endohedral H2, in the high-field limit, is dependent on the orientational angle β of the C70 long axis with respect to the magnetic field, and on temperature. If the spin-rotation interaction is omitted (see discussion below), the spin Hamiltonian may be written as Equation (1): equation image(1) where the spherical tensor spin operators are given in terms of the nuclear spin angular momentum operators by Equations (2) and (3):(3) equation image(2) equation image(3) and P2(cosβ)=1/2 (3 cos2β−1). 
The isotropic chemical shift of the endohedral protons is denoted by δiso. The chemical shift anisotropy and proton–proton dipole–dipole coupling interactions are both described by traceless symmetric second-rank tensors. The ZZ components of these coupling tensors are denoted by equation image and equation image, where the Z-axis denotes the long axis of the C70 cage. The angular bracket equation image denotes the average over all thermally populated spatial quantum states, for example [Eq. (4)]: equation image(4) where pn(T) is the Boltzmann population of the spatial state with quantum numbers n at temperature T, and equation image(n) is the dipole–dipole interaction tensor component for an individual spatial quantum state [Eq. (5)]: equation image(5) Here θ(r) is the angle between the internuclear vector r and the long axis of the fullerene cage, and bHH(r)=−(μ0/4 π)γH2ħr−3 is the proton–proton dipole–dipole coupling constant for a proton–proton distance r. At high temperatures, a large number of spatial states are populated, while at sufficiently low temperatures, only the ground spatial state is populated. The low-temperature NMR spectrum is therefore a sensitive probe of the spatial ground state. The temperature-dependence of the NMR spectra may be used to test models of the excited spatial states, and their energies. Equation (1) omits the spin–rotation interaction, which contributes strongly to the proton magnetic resonance spectra of H2 in molecular beams32, 33 and to proton relaxation in the gas and solution phases.34 The effects of spin–rotation interactions on the spectra of the confined H2 in the solid state have been discussed in ref. 35 where it is shown that the spin–rotation interaction would affect only the spectra in levels with E symmetry, resulting in lineshapes with a characteristic concavity at the centre of the spectra. The observation of spectral spin–rotation effects would require that transitions among the two degenerate states are slow on the spectral NMR timescale. No experimental NMR spectra have yet displayed unambiguous evidence of spin–rotation interactions at the time of writing. It must be assumed that rapid transitions between the spatial quantum states, induced by lattice fluctuations, suppress the spectral effects of spin–rotation interactions, even at cryogenic temperatures. 3.3. Lineshapes The two single-quantum transition frequencies of ortho-H2 depend on the angle β between the C70 axis and the applied magnetic field B0 as in Equation (6): equation image(6) In the absence of line broadening, the NMR line shape in the frequency domain s(ω)=s+(ω)+s(ω) is given by Equation (7)36:(7) equation image(7) where equation image are the solutions of ω=ω±(equation image,T). For isotropically distributed C70 cages, the probability density of β is given by Equation (8): equation image(8) An isotropic orientational distribution is found to be sufficient to treat the proton spectra of H2@C70 at temperatures above ∼15 K. According to Equation (6), the chemical shift anisotropy is added to the dipolar constant for one single-quantum transition, and subtracted for the other. This gives rise to an asymmetric Pake-like doublet in powder samples (Figure 8). The horn–horn separation is equation image, independent of the shift anisotropy equation image. The frequency coordinate halfway between the two horns is equation image. 
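As an illustration of how such an asymmetric Pake-like pattern arises, the following sketch powder-averages two transition frequencies over an isotropic distribution of cage orientations, with the two transitions offset from the centre by (δCSA ± 1.5⟨d⟩)·P2(cos β), δCSA expressed in frequency units. Since the paper's own equations are not reproduced in the text above, the numerical prefactor and sign conventions here are assumptions, and only the qualitative shape should be compared with Figure 8.

```python
import numpy as np

# Sketch of an asymmetric "Pake-like" powder pattern: a homonuclear spin pair with
# an axial dipole-dipole coupling plus a collinear axial CSA, powder-averaged over
# an isotropic distribution of cage orientations.  The form
# nu_pm(beta) = (csa_khz +/- 1.5 * d_khz) * P2(cos beta) is an assumed convention,
# not taken from the paper.

d_khz = 60.0        # spatial-average dipolar coupling from the text (~15 K value)
csa_khz = 3.7       # 10.1 ppm expressed at the 1H frequency of 8.5 T (~362 MHz)

def p2(x):
    return 0.5 * (3.0 * x**2 - 1.0)

rng = np.random.default_rng(0)
cos_beta = rng.uniform(-1.0, 1.0, 200_000)   # isotropic orientations: cos(beta) uniform

nu_plus = (csa_khz + 1.5 * d_khz) * p2(cos_beta)
nu_minus = (csa_khz - 1.5 * d_khz) * p2(cos_beta)

lineshape, edges = np.histogram(np.concatenate([nu_plus, nu_minus]),
                                bins=400, range=(-120.0, 120.0))
# "lineshape" shows strong inner horns (from cages with beta ~ 90 degrees) and
# weaker outer shoulders (beta ~ 0), rendered asymmetric by the CSA term.
```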
The fitted values of the spatial-average isotropic chemical shift equation image, the spatial-average chemical shift anisotropy equation image, the spatial-average dipole–dipole coupling equation image, and their confidence limits, are plotted against temperature in Figure 9. The fitted lineshapes are shown below the experimental spectra in Figure 5. Figure 8. a) H2@C70 molecule with long axis at an angle β with the external magnetic field B0. b) 1H spin energy levels and the single-quantum transitions giving rise to the NMR spectrum. c) The powder NMR spectrum (asymmetric Pake doublet) and its two components, assuming uniaxial dipole–dipole coupling and chemical shift anisotropy tensors sharing the same principal axis system. Figure 9. Temperature dependence of the spin interaction parameters a) equation image, b) equation image, c) equation image and d) the magnetic orientation parameter A from the fitting of experimental spectra in a field of 8.5 T. The gray line in (d) shows the function A(T)=8.7 K/T. All temperature axes are shown with a log scale. The vertical scale in (d) is also a log scale. The spatial-average isotropic shift equation image and chemical shift anisotropy equation image are both independent of temperature over the full range from 4 K to 297 K, within the confidence limits of the analysis. The spatial-average isotropic chemical shift and the shift anisotropy are given by equation image=−24.7±3 ppm and equation image=10.1±4 ppm. The spatial-average dipole–dipole coupling equation image, on the other hand, has a strong temperature-dependence above ∼15 K, falling steeply from ∼60 kHz at 15 K to <1 kHz at 297 K. 3.3.1. Chemical Shifts The values and the temperature independence of the parameters equation image and equation image suggests that these chemical shift interactions are dominated by a source external to the H2 molecule. The experimental isotropic chemical shift of H2 in the gas phase is δiso=7.40 ppm.37 For molecular hydrogen in the gas phase the chemical shift anisotropy can be estimated directly from the knowledge of the spin–rotation interaction coupling:38 equation image=1.6 ppm using the spin–rotation coupling from molecular beam studies.39 If the CSA were due to the H2 molecule itself, its magnitude would reduce in step with the dipolar coupling when the temperature is increased (Section 4.3.2). We postulate, based on the measurements reported herein and previous literature studies of probes other than H2 in fullerene systems,4042 that the electrons of the C70 cage generate the dominant contribution to both isotropic and anisotropic chemical shifts for the endohedral protons. Figure 4 shows results of DFT chemical shielding tensor calculations within and just outside the C70 cage. Figures 4 a,b show the position dependence of the isotropic chemical shielding σiso(R) while Figures 4 c,d show the relevant component of the chemical shift anisotropy tensor equation image(R)=σZZ(R)−σiso(R), where the Z-axis is parallel to the long axis of the cage. In order to compare with experiment, these shielding values were converted into chemical shift. After points outside the cage or within van der Waals contact of a carbon atom were excluded, the isotropic chemical shift (relative to the gas-phase molecular hydrogen) was found to be δiso=−25.8 ppm, while the chemical shift anisotropy is equation image=8.7 ppm. These values represent the average of the chemical shielding over the van der Waals volume of the cage. 
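The averaging step just described can be sketched schematically as follows. The shielding-field function and cage coordinates (`shielding_tensor_at`, `carbon_positions`) are hypothetical stand-ins for the interpolated DFT tensor field and the optimized geometry, so the snippet only illustrates the logic of excluding the carbon van der Waals spheres and averaging σiso and σZZ − σiso over the remaining volume.

```python
import numpy as np

# Schematic version of the averaging procedure described in the text: sample points
# around the cage centre, discard those lying within the 1.70 A van der Waals radius
# of any carbon atom, and average the isotropic shielding and its anisotropy over the
# remaining "accessible" volume.  `shielding_tensor_at` and `carbon_positions` are
# hypothetical stand-ins for the interpolated DFT tensor field and the cage geometry;
# a full treatment would also discard points lying outside the cage surface.

VDW_C = 1.70  # carbon van der Waals radius in angstroms, as in the text

def average_cage_shielding(shielding_tensor_at, carbon_positions, box=4.5, n=50_000, seed=1):
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-box, box, size=(n, 3))              # candidate points (angstroms)

    # Keep only points outside every carbon van der Waals sphere.
    d = np.linalg.norm(pts[:, None, :] - carbon_positions[None, :, :], axis=-1)
    accessible = pts[np.all(d > VDW_C, axis=1)]

    iso_vals, csa_vals = [], []
    for p in accessible:
        sigma = shielding_tensor_at(p)                     # 3x3 shielding tensor in ppm
        iso = np.trace(sigma) / 3.0
        iso_vals.append(iso)
        csa_vals.append(sigma[2, 2] - iso)                 # Z taken along the cage long axis
    return np.mean(iso_vals), np.mean(csa_vals)
```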
The agreement of the calculated chemical shift anisotropy equation image=8.7 ppm with the experimental value equation image=10.1±4 ppm is gratifying. The agreement of the calculated value of the isotropic chemical shift δiso(R)=−25.8±0.5 ppm with the experimental value equation image=−24.7±3 ppm is also encouraging. We also report excellent agreement with the result of −28.8 ppm reported for a helium-3 probe in ref. 40, but note that the agreement with our experimental value may be partly fortuitous, since the DFT calculation gives the additional chemical shift provided by the cage to whatever is inside, for an isolated C70 molecule in vacuum, while the experimental values are relative to tetramethylsilane in solution, and are measured in a bulk solid. The most relevant aspect of the isotropic shift calculations is that the shielding inside the cage is predicted to be quite uniform, which agrees well with the observed temperature-independence of the isotropic chemical shift. 3.3.2. Dipolar Couplings The experimental values of the spatially-averaged dipole–dipole coupling equation image are strongly temperature-dependent, decreasing sharply at high temperature as more spatial quantum states are accessed. This corresponds to the quantum equivalent of motional narrowing. However, the dipole–dipole coupling parameter equation image becomes temperature-independent below ∼10 K. This indicates that 1) the lowest energy level of ortho-H2 is separated from the next excited state by an energy corresponding to ∼10 K, and 2) the dipole–dipole coupling parameter in the spatial ground state is given by equation image|=60±2 kHz, where n0 is the set of quantum numbers for the spatial ground state. This value of the dipole–dipole coupling may be used to assign the spatial ground state. As shown by Tomaselli,35 the theoretical dipole–dipole couplings in the longitudinal A2′′ and transverse E1′ states are given by Equations (9) and (10):(10) equation image(9) equation image(10) where equation image is the proton–proton dipolar coupling constant averaged over the ground-state vibrational wave function.35 If we assume that the ground state is E1′, then the experimental value of equation image=60±2 kHz leads to the following estimate of the vibrationally averaged 1H–1H distance: equation image=73.8±0.8 pm. This agrees reasonably well with the 1H–1H distance of rHH=74.6 pm in free H2.39 The assignment of a A2′′ ground state would only agree with the experimental data if an unfeasibly long internuclear distance of 92.9±1.0 pm were assumed for the hydrogen molecule. Our conclusion that the ground state of ortho-H2@C70 has symmetry E1′ is supported by recent infrared studies of the same system.24 However, numerical modelling of the five-dimensional quantum mechanics for endohedral hydrogen in C70, using an empirical Lennard-Jones potential, concluded that the ground state has A2′′ symmetry.19 A more refined description of the hydrogen–carbon interaction may be needed to match the experimental evidence. 3.3.3. Magnetic Alignment of C70 The experimental results in Figure 5 indicate that the orientational distribution of the fullerene cages ceases to be isotropic at temperatures below ∼15 K. In order to interpret the low-temperature lineshapes, we postulate the following temperature-dependent probability distribution for the angle β between the long axes of the C70 cages and the applied magnetic field [Eq. (11)]: equation image(11) where N(T) is a normalisation constant. 
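Returning briefly to the dipolar-coupling analysis of Section 3.3.2, the distances quoted there can be reproduced by a short back-calculation, assuming motional-averaging factors of 1/5 (E1′) and 2/5 (A2′′) of the rigid-pair coupling b_HH = (μ0/4π)γ²ħ/r³. These factors are not stated explicitly in the text (the corresponding equations are not reproduced), but they are consistent with the 73.8 pm and 92.9 pm values quoted above.

```python
import numpy as np

# Back-calculate the vibrationally averaged H-H distance from the measured
# ground-state dipolar coupling |<d>| = 60 kHz, assuming averaging factors of
# 1/5 (E1' state) and 2/5 (A2'' state) of the rigid-pair coupling
# b_HH(r) = (mu0/4pi) * gamma_H^2 * hbar / r^3.  The 1/5 vs 2/5 factors are an
# assumption, chosen for consistency with the distances quoted in the text.

mu0_over_4pi = 1.0e-7          # T m / A
gamma_H = 2.6752218744e8       # rad s^-1 T^-1
hbar = 1.054571817e-34         # J s
d_obs = 60.0e3                 # observed coupling in Hz

def r_from_coupling(scale):
    """Distance (in pm) such that scale * b_HH(r) / (2*pi) equals d_obs."""
    b_needed = d_obs / scale                                   # required b_HH/2pi in Hz
    r3 = mu0_over_4pi * gamma_H**2 * hbar / (2 * np.pi * b_needed)
    return r3 ** (1.0 / 3.0) * 1e12

print(f"E1'  ground state (factor 1/5): r_HH ~ {r_from_coupling(1/5):.1f} pm")
print(f"A2'' ground state (factor 2/5): r_HH ~ {r_from_coupling(2/5):.1f} pm")
# Gives ~73.7 pm and ~92.9 pm respectively, matching the values in the text;
# only the E1' assignment is compatible with the 74.6 pm bond length of free H2.
```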
The sign of the parameter A determines the sense of the magnetic alignment: positive values favour orientation of the C70 long axes parallel to the field while negative values favour orientations perpendicular to the field. The isotropic distribution p(β)=piso(β) is recovered in the limit of A=0. At this point, we allow an arbitrary temperature-dependence for the parameter A. Equation (11) is informed by the physical insight that each C70 molecule has a magnetic susceptibility tensor, with second-rank rotational properties. We fitted the lineshapes below 15 K by varying the interaction parameters equation image, equation image, and equation image as well as the parameter A. The fitted lineshapes are shown below the experimental spectra in Figure 5. The fitted parameters, and their confidence limits, are shown in Figure 9. The magnetic orientation parameter A was found to have a linear dependence on inverse temperature, suggesting a thermal process associated with a magnetic energy ΔE [Eq. (12)]: equation image(12) where kB is the Boltzmann constant. The estimated value of the magnetic reorientation energy is ΔE/kB=8.7±0.3 K. Hence, at temperatures lower than ∼10 K, the long axes of the C70 cages align significantly along the magnetic field, causing a strong enhancement of the Pake pattern “shoulders”, at the expense of the inner “horns”. The alignment of the C70 cages with the magnetic field could involve the anisotropy Δχ of the magnetic susceptibility tensor. For isolated single molecules this would lead to a magnetic interaction energy given by ΔEχB02/2 μ0 where Δχ is the susceptibility anisotropy.43 However, the computed value of Δχ=6.3×10−33 m−3 corresponds to an orientational energy of ΔE/kB=0.01 K in a magnetic field of 8.5 T. We conclude that the magnetic anisotropy of individual C70 molecules is about two orders of magnitude too small to explain the observed effect. The observed magnetic orientation must therefore be a cooperative effect involving many neighbouring C70 molecules. This could involve the magnetic reorientation of microscopic C70 domains, or possibly the reorientation of entire crystallites. Alignment of molecules with respect to an applied magnetic field is well-known in the solution NMR of biomolecules, where it is used to assist molecular structure determination.44 Cooperative molecular alignment in a magnetic field is also well-known for liquid crystals.45 The phenomenon observed here seems to be of the same kind, but occurs at an extraordinarily low temperature. We are not aware of an analogous physical phenomenon in this temperature regime. We have also performed preliminary experiments at a higher magnetic field of 14.1 T. Contrary to expectations, the magnetic ordering effects were found to be weaker than those shown in Figure 5. This requires further investigation, but it is possible that the degree of magnetic alignment in H2@C70 depends on the sample preparation (type and amount of occluded solvents, homogeneity, crystallinity) and possibly on the thermal history as well. More investigations are in progress. 3.4. 
Spin–Lattice Relaxation Nuclear spin relaxation in molecular hydrogen is determined by the modulation of the spin–rotation and of the dipolar Hamiltonians induced by the interaction of the molecular angular momentum J with the lattice.34, 46 Spin relaxation is effective when the temperature-dependent correlation time τc for the fluctuating interactions matches the Larmor frequency of the spins equation image.47 T1 is long at low temperatures when equation image≪1 or at high temperature when equation image≫1. The experimental observation of a single minimum equation image=14±3 ms at T=70 K (see Figure 6) supports a model with a single mechanism over the full temperature range, and with a monotonic dependence of the correlation time τc on temperature. Fedders derived explicit formulae for the theoretical 1H spin relaxation rates for ortho-H2 trapped into a solid at sites with cubic, axial and low symmetry.46 The derived expressions for equation image do not depend on the correlation time τc and are independent of the model used to describe the interaction with the lattice. For the magnetic field B0=8.5 T used in our experiments, the Fedders theory predicts equation image=5 ms in the case of a uniaxial H2 environment.46 This is almost three times smaller than the observed value of equation image. This observation is in contrast to the study of ortho-H2 trapped in rare gas and para-H2 matrices, where a good match with the Fedders theory was observed.48 We do not understand the reasons for the observed deviations from the Fedders theory at the present time. The strongly anisotropic environment of H2 in C70, which gives rise to resolved spectral features, could play a role. It is also possible that cross-relaxation with the slowly-relaxing impurity protons artificially lengthens the H2 relaxation time. A phenomenon of this type has been observed in studies of relaxation for H2 encapsulated in an open-cage fullerene containing protonated exohedral groups.8 High purity powders are in preparation in our laboratory in order to address these issues. 4. Conclusions The physical picture of H2@C70 that emerges from these investigations is summarised, in a highly simplified form, in Figure 10. At high temperature T>340 K (top left), the C70 cages rotate isotropically and the solid phase has cubic symmetry.1, 49, 50 In addition, the endohedral H2 molecules explore a wide range of accessible quantum states (Figure 10 shows only the A′′2 and E1 states, populated in the degeneracy ratio of 1:2, for simplicity). This gives rise to a greatly reduced dipole–dipole interaction, corresponding to classical motional averaging of the dipole–dipole coupling through isotropic molecular tumbling. Figure 10. Pictorial representation of H2@C70 in the solid phase, according to the proton NMR data. Two spatial quantum states for the endohedral H2 molecules are shown: A′′2, which is shown as a pz-orbital shape, and the doubly degenerate E1 state, which is shown as a torus. At 340 K the C70 molecules are free to orient and the instantaneous orientational distribution is isotropic. Below the orientational ordering phase transition at 270 K the cages lose their rotational freedom and domains of coaxial molecules are formed. The correlated domains are represented in the Figure by 2×2 blocks. At 15 K the H2 molecules are all found in the E1 ground state, and at 4 K the C70 domains orient along the magnetic field. 
On lowering the temperature, isotropic reorientation is replaced by a fast tumbling or a precessional motion of the cages along a preferred crystal axis.50 Other studies suggest that the plastic cubic phase with almost isotropic rotation of the cages axis persists even below room temperature.5153 In any case it has been recognized that below 270 K (top right), correlations develop between the orientations of neighbouring C70 cages. The Figure shows the C70 orientations organised in four 2×2 blocks. At a temperature of 15 K (lower left), all H2 molecules are in the E1 ground state. This generates a Pake pattern with a dipole–dipole coupling constant of ∼60 kHz. Although neighbouring C70 molecules have correlated orientations, there is no net orientation with respect to the magnetic field. At a temperature of 4 K, the C70 molecules partially align so that their long axes are parallel to the magnetic field. Figure 10 should not be taken too literally. For example it is likely that the C70 cages align cooperatively in small domains, or perhaps that entire crystallites align with the magnetic field. Furthermore, it is yet not known whether the magnetic alignment behaviour is also exhibited by empty C70 cages, or whether the endohedral H2 molecules somehow influence the behaviour of the cages and their mutual interaction. Although there is a possibility that a phase transition occur at ∼15 K, the observation of alignment effects over a considerable temperature range suggests that this is not the case. In addition, even if there were a phase transition, there would still need to be a mechanism leading to macroscopic alignment of the cages, in which the magnetic susceptibility anisotropy is still implicated. The low-temperature magnetic alignment of C70 cages requires further investigation, in order to elucidate whether the thermal and physical history of the sample, or the presence of impurities, play a role, or whether hysteresis is observed with respect to temperature. Higher-purity samples are currently in preparation in our laboratory for the purpose of such studies. The low-temperature NMR data show that endohedral H2 molecules may act as “spies”, allowing the proton NMR spectra to report on the low-temperature behaviour of the fullerene cages—in a similar way to muon spectroscopy.5053 This might be of use in other contexts as well, such as the study of fulleride superconductivity.54, 55 This research was supported by EPSRC-UK through grant EP/I029451/1 and grant EP/H003789/1, Royal Society University fellowship Scheme (M.C.) and by grant SF0690034s09 from the Estonian Ministry of Education and Research. The authors at Columbia thank the National Science Foundation for financial support through grant NSF-CHE-11-11398. The authors are grateful to Ole G. Johannessen and Enno Joon for helpful assistance during the experimental sessions. We also thank Prof. Jim Emsley for discussions.
(This is a simple question, with likely a rather involved answer.) What are the primary obstacles to solving the many-body problem in quantum mechanics? Specifically, if we have a Hamiltonian for a number of interdependent particles, why is solving for the time-independent wavefunction so hard? Is the problem essentially just mathematical, or are there physical issues too? The many-body problem of Newtonian mechanics (for example gravitational bodies) seems to be very difficult, with no solution for $n > 3$. Is the quantum mechanical case easier or more difficult, or both in some respects? In relation to this, what sort of approximations/approaches are typically used to solve a system composed of many bodies in arbitrary states? (We do of course have perturbation theory, which is sometimes useful, though not in the case of high coupling/interaction. Density functional theory, for example, applies well to solids, but what about arbitrary systems?) Finally, is it theoretically and/or practically impossible to simulate high-order phenomena such as chemical reactions and biological functions precisely using Schrödinger's quantum mechanics, or even QFT (quantum field theory)? (Note: this question is largely intended for seeding, though I'm curious about answers beyond what I already know too!) Why do you restrict it to quantum problems? – Cedric H. Nov 4 '10 at 23:19 You could say restrict, but in many ways it's generalising! In any case, the problem is rather different for quantum mechanics, and certainly more interesting I find. – Noldorin Nov 4 '10 at 23:30 First let me start by saying that the $N$-body problem in classical mechanics is not computationally difficult to approximate a solution to. It is simply that in general there is no closed-form analytic solution, which is why we must rely on numerics. For quantum mechanics, however, the problem is much harder. This is because in quantum mechanics, the state space required to represent the system must be able to represent all possible superpositions of particles. While the number of orthogonal states is exponential in the size of the system, each has an associated phase and amplitude, which even with the most coarse-grained discretization leads to a double exponential in the number of possible states required to represent it. Thus in quantum systems you need $O(2^{2^n})$ variables to reasonably approximate any possible state of the system, versus only $O(2^n)$ required to represent an analogous classical system. Since we can represent $2^m$ states with $m$ bits, to represent the classical state space we need only $O(n)$ bits, versus $O(2^n)$ bits required to directly represent the quantum system. This is why it is believed to be impossible to simulate a quantum computer in polynomial time, but Newtonian physics can be simulated in polynomial time. Calculating ground states is even harder than simulating the systems. Indeed, in general finding the ground state of a classical Hamiltonian is NP-complete, while finding the ground state of a quantum Hamiltonian is QMA-complete. An informative answer. Interestingly, I've heard that (universal) quantum computers should be able to simulate other quantum computers in polynomial time, just that classical computers can't. – Noldorin Nov 5 '10 at 16:14 Yes, that is true. Calculating ground states seems to be beyond their reach though.
– Joe Fitzsimons Nov 5 '10 at 16:39 Ah I see. I suppose calculating ground states and performing complete quantum simulations of systems typically cover different ranges of application, however, so it's not all bad news. Anyway, cheers for the detail, you seem to be very knowledgeable on the subject; the answer is yours. – Noldorin Nov 9 '10 at 3:22 @Noldorin: Thanks. I only know this stuff because I have spent quite a while working in this exact field. By the way, ground states are to some extent less relevant because the systems for which it is computationally hard to calculate the ground state (at least on a QC) don't cool efficiently either. – Joe Fitzsimons Nov 9 '10 at 3:50 "the N-body problem in classical mechanics is not computationally difficult to approximate a solution to." This is somewhat misleading. The systems are typically chaotic, so they're fundamentally impossible to predict, for the same reason that long-term weather forecasts are fundamentally impossible. "Calculating ground states is even harder than simulating the systems." This may be true in some abstract sense, but it is not true in a practical sense. For example, it's relatively straightforward to get a good approximation to the ground state of a nucleus using the Hartree-Fock method. – Ben Crowell Aug 16 '13 at 15:06 The answer is fairly simple -- the classical N-body problem has its solution in $6N$ 1D functions of time, while the quantum N-body problem has its solution in one complex function that is $3N$-dimensional (not counting spin and similar stuff). Then it is no wonder that one can find analytical solutions only for trivial problems, or must make $N$ huge and escape into statistical mechanics. And yes, this is only a problem of mathematical complexity here. From a modelling point of view, exact solving also seems hopeless, with a memory complexity of $\mathcal{O}(K^{3N})$ alone. For the rest of the answer I will restrict myself to quantum chemistry/materials science, since this is the most exploited region -- this means we are now talking about atoms. First of all, atoms have small and very heavy nuclei, which can thus be treated as almost stationary sources of electrostatic potential; this reduces the problem to electrons only (the Born-Oppenheimer approximation). Now, there are two main routes to follow: Hartree-Fock or Density Functional Theory. In HF, one roughly represents the many-body wavefunction as a combination of some standard basis functions -- then one can optimize their contributions to get minimal energy, while using an extended Hamiltonian to adjust for the effects of such an approximation. In DFT, one, encouraged by the Hohenberg-Kohn theorems, reduces the many-body wavefunction to the electron probability density field (3-dimensional), and accordingly recasts the Schrödinger equation terms as density functionals (and it is there that approximations are applied). Next, this can either be solved directly as a 3D field or in the Kohn-Sham way, which is pretty much Hartree-Fock for DFT (one represents the density with basis functions). People sometimes do something analytical here, but those are mostly theories made to support computational approaches. And finally, your last question: those approximate methods (but still ab initio -- there are no experimental parameters there) do predict things like chemical reactions, various spectra and other measurable quantities; accuracy is problematic though.
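To make the $\mathcal{O}(K^{3N})$ memory estimate above concrete, here is a short back-of-the-envelope sketch; the grid resolution and particle counts are arbitrary illustrative choices, not tied to any particular system.

```python
# Memory needed to store a many-body wavefunction on a naive real-space grid:
# K grid points per spatial dimension, N particles => K**(3*N) complex numbers.
# K and N below are arbitrary illustrative values.

BYTES_PER_AMPLITUDE = 16          # one double-precision complex number

def wavefunction_bytes(K, N):
    return K ** (3 * N) * BYTES_PER_AMPLITUDE

for N in (1, 2, 3, 5):
    size = wavefunction_bytes(10, N)          # a very coarse 10-point-per-axis grid
    print(f"N = {N}: {size:.3e} bytes (~{size / 1e12:.3g} TB)")

# Even with only 10 points per axis, five particles already require ~1.6e4 TB,
# which is why direct grid solutions of the many-body Schrodinger equation are
# hopeless and approximations such as Hartree-Fock or DFT are used instead.
```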
Biology is mostly out of reach because of the time scales involved; at best there are hybrid methods able to mix, for instance, the classical simulation of protein motion with a quantum simulation of the binding site when it is squeezed enough that something quantum, like an enzymatic reaction, can take place. Looks like a pretty good answer, I'll read it properly tomorrow. In any case, it's important to make clearer that although the "properties of the solution" are "fairly simple", the solutions themselves are certainly not! – Noldorin Nov 5 '10 at 1:37 Note that H-F, DFT are the main approximation techniques to the quantum many-body problem, though neither are "well-controlled approximations" in the sense that they are used as the first term in a convergent expansion to the actual solution. And I'm not sure what level of computational complexity they reduce the problem to, though that's an important question. – j.c. Nov 5 '10 at 14:42 @j.c. Those are approximated theories rather than approximated ways of solving equations. Reduction of complexity is obvious -- a 3N-dimensional function to a vector of parameters in the case of HF, or to a 3-dimensional field in the case of DFT. – mbq Nov 5 '10 at 17:23 In addition to what mbq said, it might be interesting to know that things get really funny in relativistic quantum mechanics, that is, using the Klein-Gordon and the Dirac equation (but without the "second" quantization of Quantum Field Theory). There, there's one wave function per particle sort, so no matter how many particles of one kind you consider, the only thing that changes is the field itself. You only get more degrees of freedom by actually adding another kind of particle. Of course, since fermions require spinors, you may end up with other computational issues then... The problem here, of course, is that the field modes are continuous variables. – Joe Fitzsimons Nov 5 '10 at 6:47 By which I mean the problem with simulating the system, not a problem with your answer. – Joe Fitzsimons Nov 5 '10 at 7:04 Yeah, I was curious as to whether QFT actually makes things easier in some respect. It's a tricky scenario. – Noldorin Nov 5 '10 at 16:15 It definitely can only make things harder, as you can encode a discrete system in the CV but not necessarily the other way around. – Joe Fitzsimons Nov 5 '10 at 16:38 Noldorin: QFT would probably make things even more complicated; I was only wondering whether the unquantized relativistic QM equations would yield an advantage over the non-QFT Schrödinger equation, but as @Joe mentions, this may not be the case... – Tobias Kienzler Nov 8 '10 at 8:00 On a more abstract level, the problem is linearity versus non-linearity. It's straightforward to solve a number of linear equations, and they always yield an analytic answer. However, non-linear equations produce chaotic behaviour, which cannot be generalised in most cases. As an example, the 3-body Newtonian problem involves C(3,2) = 3 non-linear equations; the nonlinearity comes from the 1/r² relationship. And 3 non-linear relations are the minimum requirement for a chaotic system. Similarly, quantum mechanics involves a large number of non-linear equations - given a set of 3 electrons, each will repel the others via a non-linear relation, and with even more complexity than the Newtonian problem, where all things are known and determinable.
So, the simple answer is that the problem is mathematics that can't be solved for the general case, which results from the physics, and that the quantum case is indeed worse than the classical one. The many-body equation is immensely difficult to study, both classically and quantum-mechanically. The late John Pople, of Northwestern University, won a Nobel Prize in 1998 for his numerical models of wave functions of atoms, developing a theoretical basis for their chemical properties. Here is a link: Thanks for the info. I may just have to read some of Pople's papers some day, out of curiosity. :) – Noldorin Nov 7 '10 at 1:14
Response to Richard Well, thanks for the reply and it’s nice to know you too and to have cyber met you. You seem to be on the ball with your questions as an “armchair scientist” with the brightness of your bulb. Thoughts could travel faster using entanglement, as I wish words would express those thoughts just as well. I’m working on my mastery of words to convey what I am thinking, but it is difficult, I have to admit. My preferred method of exchanging thoughts would be telepathy and it’s ironic and inspiring you brought that up. I am interested in math and geology, I will check out your link, as I would like to get some of my thoughts published also, fearing its just a wet dream though. I hope not… Elementary logic does explain the Universe, if I could only write it all down. I have pages upon pages of stuff that explains how we could cure cancer or how to tap unlimited energy from the infinitely dense space, where space is proven to be infinitely dense in the physics textbooks, Gravitation pg 426. “…present day quantum field theory “gets rid by a renormalization process” of an energy density in the vacuum that would formerly be infinite if not removed by this renormalization.” I write people trying to tell them, but no one wants to listen as there are so many crack pots out there the real people wanting to help get lost. I’m not crazy, but am starting to think I am. Uttering stuff like, I think I can cure cancer and not even cut you open, as McCoy in star trek said, “put away your butcher’s knives”. Yeah, many youtube videos do push the limits, but science isn’t. They have stuck to the Big Bang and the expanding universe still ignoring infinity which they physically measure and calculate. I’m sorry, but science shouldn’t use limits and re-normalization just to make their world a finite one; proven in their own textbook. The Universe is not finite and answers of infinity should be embraced rather than ignored such as the Schrödinger equation and the Dirac equations, any calculation which gives infinity as a solution. Ignoring infinity isn’t a true scientific method. I feel as if my concepts could benefit society. I’ve been doing this alone and bouncing ideas/ concepts off someone else with even some science background could only help the process. I would like to find someone to help, as many concepts use existing and proven technology. The technology of the future is frequencies, as we have found to discover, MRI, quantum computers, cell phones or even a drone. After all that rambling, let me respond to your most recent questions. Not to answer your question with a question, however the answer is a complex one. Defining forever concerning a photon, how far can light travel before hitting the edge of the Universe? We see it 13.8 billion light years away and we will probably see it even further if we overexpose for 6 months, or longer still. Can one side of our visible Universe see the other side of our visible Universe 27.6 billion light years? We won’t know that answer for a long time, if we are still around because we can’t seem to stop repeating history. Does a photon last after coming out of the Universe, as energy comes out of an atom? If a wave of energy would exit our Universe that is. Anything atomic seems to last forever only changing its form. What forever meant or defined as; a photon has to go through dust, radiation, gases, and all kinds of obstacles for us to see it. 
It therefore loses energy resulting in the entire spectrum shifting to lower energy, this is the reason for the shift. Not that it’s zooming away from us faster than the speed of light. The speed of light in this Universe will always remain at 300k/s with some slight variations as chaos can’t be predicted. Forever has many meanings so I will let you decide if I really answered what forever is. The General Equation of Relativity still exists and I don’t think people are getting away from it or using quantum physics/mechanics for fun. It’s just that other forces are at work scientist refuse to examine and admit, the same with infinity. I mean we are stuck on lightning being made by dust in the clouds, where we know water kills any static electricity. There is much more of a charge in the ionosphere, as we see from the borealis here and on other planets. Try the comb experiment under a faucet, and then get the comb wet. Sprites and blue jets exist for a reason above the clouds and has a lot to do with the plasma, charged particles the sun makes. Electricity is found everywhere in the Universe, in our bodies, in our technology, even on Saturn. Why doesn’t science admit the many implications of what we are all made of, the electromagnetic spectrum? Why don’t we have a science of the electromagnetic spectrum, a formal classification for the study of frequencies? In my opinion it’s the most important part of our cosmos. One sentence I do like to ask scientists without getting any response yet is, “If the Universe made all the atoms and all we find in the Universe is atoms and atomic particles, shouldn’t the Universe be atom like then?” How do we know the Universe has an outer edge, perhaps defined as chaos or probability? Space is black, correct, so light waves outside of the Universe are too big for us to see, as we can’t see atoms using light, the light waves are too big. This is one basis for my theory I’ve coined, Scale Theory, which I’ve started a book for. Even if it isn’t all true, I do think it will hold peoples interest in wonderment. I hope you are a little bit intrigued at least. Because this is just scratching the surface of everything I’ve written down and can’t get a single person wanting to know more, just for kicks or science fiction if anything. Yes, I probably come across as a crack pot, but how else do you tell someone you have life changing ideas hoping they don’t think you’re crazy? If I had a lab with all the right equipment and an assistant with in-depth knowledge of the capabilities of the equipment, I know I could get something to work proving my point. Yes, lots of studies still need to be conducted, but one must start somewhere or die trying. You can reach me at Better yet, you can leave a reply below. I’m not sure it works, as no one has replied. I hope it works… In response to Richard: “I will put a ‘think’ on this and get back to you soon. Jack. I have been watching a lot of YouTubes these days with folks going against the General Equation of Relativity or some such, and I think some people just want to have a bit of fun with Quantum physics and such. When you ask me if I think photons live forever…define forever. Meanwhile, a friend of mine work on S.T.E.M. high school rocketry stuff, aerospace and astronautics and such, so if you like math, we just published the first of its kind math-lab textbook if you care to look it up and see what you think of it. 
I, myself, am merely an armchair scientist, though I do environmental science and write about geology, geomorphology and the like. But you, sire, appear to me to be one of the brighter bulbs in the pack, so you won't get that kind of brightness from me, but I will enjoy picking your brain, so to say. Meanwhile, I still watch all the cosmic YouTube stuff just to try and keep up with the bigger-IQ types, even though some propose speeds faster than c. Then again, math doesn't lie, but why restrict ourselves to the speed of light? After all, if you were zipping around the Kuiper belt and you and I were in touch by telepathy, thoughts go much faster and farther than anything, don't you think? HA! Here's the URL to our book, Jack. Nice to know you."
Conical Quantum Dot

Application ID: 723

Quantum dots are nano- or microscale devices created by confining free electrons in a 3D semiconducting matrix. These tiny islands or droplets of confined "free electrons" (electrons with no potential energy) present many interesting electronic properties. They are of potential importance for applications in quantum computing, biological labeling, and lasers, to name only a few. Quantum dots can have many geometries, including cylindrical, conical, or pyramidal. This model studies the electronic states of a conical InAs quantum dot grown on a GaAs substrate. To compute the electronic states taken on by the quantum dot/wetting layer assembly embedded in the surrounding GaAs matrix, the 1-band Schrödinger equation is solved. The four lowest electronic energy levels, with the corresponding eigenwave functions for those four states, are computed in this model using the Coefficient form in COMSOL Multiphysics. The model is based on the paper: R. Melnik and M. Willatzen, "Band structure of conical quantum dots with wetting layers," Nanotechnology, vol. 15, 2004, pp. 1–8.
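The application itself is built and solved in COMSOL, but the type of calculation involved, a bound-state eigenvalue problem for the single-band Schrödinger equation, can be illustrated with a minimal one-dimensional finite-difference sketch. The well depth, effective mass, and dimensions below are illustrative placeholders, not the InAs/GaAs parameters or the conical geometry of the actual model.

```python
# Minimal 1D sketch of the kind of bound-state problem a quantum-dot model poses:
# solve H psi = E psi for a particle in a finite potential well by finite differences.
# All parameters below are illustrative placeholders, not the InAs/GaAs values
# or the conical geometry of the actual COMSOL application.
import numpy as np

hbar = 1.0546e-34            # J*s
m_eff = 0.023 * 9.109e-31    # illustrative effective mass (kg)
q_e = 1.602e-19              # J per eV

L = 60e-9                    # width of the simulation box (m)
N = 1200                     # grid points
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# A 20 nm wide, 0.5 eV deep well standing in for the dot (illustrative numbers)
V = np.where(np.abs(x) < 10e-9, 0.0, 0.5 * q_e)

# Finite-difference Hamiltonian with Dirichlet boundaries
t = hbar**2 / (2.0 * m_eff * dx**2)
H = np.diag(V + 2.0 * t) - np.diag(np.full(N - 1, t), 1) - np.diag(np.full(N - 1, t), -1)

energies, states = np.linalg.eigh(H)
print("four lowest levels (eV):", energies[:4] / q_e)
```

The full 3D, axisymmetric problem solved in the application follows the same pattern, only with a more elaborate discretization and the material parameters of the dot and matrix.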
Philosophy Lexicon of Arguments

I 74
Cause/Causality/Empiricism/VsCauses - Russell: the law of gravity is given in equations - there are no "causes" and "effects" here. - Equations/Cartwright: are today's generalizations. - They are the heart of science.
I 75
Explanations by equations are often redundant - i.e., there are alternative equations! - Cause: cannot be redundant. - Equation: causes nothing, but includes phenomena in a frame.
I 79
Alternative equations: offer different laws (they compete) - e.g., multiple versions of the Schrödinger equation - CartwrightVsRussell: I prefer causes rather than laws.

Car I: N. Cartwright, How the Laws of Physics Lie, Oxford/New York 1983
IN-DEPTH | Logic takes on the physics paradoxes: Review essay on The Spacetime Model: A Theory of Everything by Jacky Jerôme  The history of science is the record of the achievements of individuals who…met with indifference or even open hostility on the part of their contemporaries…A new idea is precisely an idea that did not occur to those who designed the organizational frame, that defies their plans, and may thwart their intentions. —Ludwig von Mises, The Ultimate Foundation of Economic Science[1] A few days after watching CERN’s Higgs boson press conference, it occurred to me that if the hypothesized Higgs field is supposed to be responsible for mass, and gravity is directly related to mass, it should be fairly obvious that mass, gravity, and the Higgs field might all turn out to be aspects of the same deeper phenomenon, rather than separate, interacting layers. A search soon revealed some theorizing out there to this effect. Given the list of seemingly impossible paradoxes that have been generated in the name of quantum physics over the past century, and which have spun off an entire quantum mysticism genre, I became curious as to whether there might be alternative models that attempt to bridge the usual list of physics paradoxes in a way that made more sense. In a free 222-page PDF (Version 6.00, 2 July 2012 [Originally 2005]) replete with illustrations, Jacky Jerôme of France claims to have elaborated a single model capable of suggesting rational accounts of most of the headline physics enigmas. He characterizes it as a substantial build off of the basics of Einstein’s four-dimensional spacetime, that does not resort to any fantastic additional dimensions, yet is still consistent with experimental evidence and the accepted descriptive mathematics of both quantum mechanics and general relativity. That is a big claim, yet he still tries to avoid overhyping it: “Despite the fact that this theory is logical, coherent, and makes sense, the reader must be careful, bearing in mind that the Spacetime Model has not yet been validated by experimentation.” That said, he offers reasoned degrees of confidence as he applies his underlying concepts to particular issues, and at several points suggests further experiments to test claims. His work appears to be both compatible with the laws of logic and a provocative contender for the holy grail of physics, a “Theory of Everything,” that is, a physics model that accounts for the behavior of both the very large and the very small using the same principles.[2] Students of economics in the tradition of Ludwig von Mises might quickly recognize the potentially large gains to be had if formerly separate “micro” and “macro” specialties can really be integrated into a unified model.[3] They will also recognize the possibility that in certain situations, thinkers outside of the current establishment can be offering superior ideas that are built on fundamentally different perspectives than the conventionally accepted ones. 
The dual themes of logic and physics here might also capture the attention of fans of the epic rationalist-fantasy storyworld of Ayn Rand’s Atlas Shrugged, in which philosophy and physics are portrayed as the dual-pinnacle disciplines of the “rational mind.” Finally, serious students of contemplative traditions curious about the popular claims of quantum mysticism will have a fresh opportunity to consider whether and how the contrasting Spacetime Model may or may not relate to various traditional contemplative claims about the fundamental nature of reality. Two ways to look at banging a drum Jerôme’s writing caught my attention early when he made a critical distinction between the mathematical description of physical phenomena and their causal-rational explanation: We could think that the basic laws of physics are extremely complex since the mathematics of general relativity and quantum mechanics are. Such is not the case…It is thus advisable to distinguish the basic phenomena, generally very simple, from the laws governing them, generally using mathematics, which may be extremely complex.[4] He gives the example of a child knowing how to produce noise by banging a drum. We can readily understand in causal-rational terms that noise results from impacts on the drum, whereas describing the surface waves in physical-mathematical terms requires complex know-how and calculations including Bessel functions. Thus, causal-rational explanation and mathematical description are revealed as two different modes or aspects of knowledge about the same phenomena. The claim I have sometimes heard that “one can only understand quantum physics through mathematics” always struck me as a little suspicious. It speaks of a mystery that is inherently unapproachable to the non-math-genius. Yet the above distinction enables an alternative interpretation. What if this claim only signals that while the speaker understands this rarefied mathematics, he also simply lacks a rationally acceptable causal explanation of what it describes? After all, even if the subject is the same, these are two different approaches to knowledge of that subject. Each employs different languages, skills, and methods. If these approaches form a team, isn’t it possible that one of those partners (causality) could go astray even as the other (math) remained on track? Bridging these two approaches, Jerôme tackles paradoxes such as the wave-particle duality, the nature of photons, the constancy of the speed of light amid the relative motion of matter, the behavior of black holes, the location of the mysteriously missing antimatter in the universe, how such high energy is produced by nuclear reactions, and how fantastic numbers of electrons and positrons everywhere could have the same volume and charge (just either positive or negative) to unimaginably high degrees of precision. By the end, he even offers a fascinating alternative to the “Big Bang” theory of the start of the universe. He claims the Spacetime Model makes much more sense of the relevant issues and observations, while accounting for a long list of otherwise “mysterious” phenomena in the process. Any attempt at an account of the origin of the universe must ultimately be speculative to some degree, but here we must also note that any knowledge claim in the natural sciences can never be validated 100%, as those in the more abstract disciplines such as logic, praxeology, and geometry can be. 
Natural science hypotheses must compete with rivals on the relative question of which available contender better accounts for the observations. Yet this is not a matter of “empirical” experimentation alone. Logic (internal consistency, etc.) must also play a role in evaluating competing hypotheses. Jerôme notes that: Wrong reasoning can lead to wrong results. For example, we know three different theories of mass and gravity, which are mathematically verified: the Higgs boson, Superstrings, and the Spacetime Model. At least two of these three theories are wrong, despite the fact that they are all three mathematically verified. Here is a typical example of the way Jerôme attempts to make sense out of the numerous established mathematical principles that have been left to appear mysterious in causal-rational terms: “E = mc2. This formula is fully verified using mathematics and experimentation, but no one is able to explain it using logic and good sense. However, the solution is quite simple within the Spacetime Model.” Positivism still roosting at home? Such an advance of mathematical description over causal-rational explanation in fundamental physics should not be surprising in view of the relevant history of controversies regarding the respective roles of reason and empirical observation. Radical empiricism and logical positivism viewed axiomatic logical principles as unscientific, metaphysical anachronisms, not “really real” because they could not be empirically “observed” (meaning measured). As Ludwig von Mises noted: …the category of regularity is rejected by the champions of logical positivism. They pretend that modern physics has led to results incompatible with the doctrine of a universally prevailing regularity...In the microscopic sphere, they say…The categories of regularity and causality must be abandoned and replaced by the laws of probability.[5] It was just this mindset that accompanied the emergence of enigmas allegedly implied in a series of experiments and models in fundamental physics. The slit experiments, Schrödinger's cat, Heisenberg’s uncertainty principle, and so on, were trotted out as evidence that logic and causality had met their match, that the universe is at bottom governed by chance and uncertainty and that some entities (not really being entities as old-school philosophers might have understood them) can exist in one place and another at the same time. Maybe quarks are telepathic! Proponents of such claims did not seem to notice the possibility that it was their previous rejection of logic that enabled an environment in which stop-gap speculations could gain sober recognition. Instead of these enigmas being viewed as no more than bemusing placeholders awaiting more coherent replacements, they were instead embraced and cited as evidence against old-fashioned reason and its “metaphysical,” a priori conceits. However, such thinking not only missed its own circularity, it also missed that an experimental result and the quality of a hypothesis forwarded to explain it are entirely different matters. The quality of a hypothesis depends in part on applying the very axiomatic logic that had been abandoned. Paradoxes that appeal to the minds of those who have rejected the strictures of logic show no mystical insight, but only the failure to apply to their thinking the inescapable, ancient rules for forming and validity-checking explanations of anything whatsoever. 
In this light, Jerôme's comment is telling: "As a physicist, it is necessary to leave this philosophical aspect to the philosophers and try to solve this enigma in a scientific way, with a logical and rational explanation." This could be from the pages of Atlas Shrugged, since his let's-get-practical use of the word "philosophers" in this sentence seems to imply that these are by definition anti-rationalist philosophers. Yet rationalist philosophers, part of whose message is precisely to uphold the requirements of logic and consistency for any valid knowledge claim, demand exactly the kind of "logical and rational explanation" that Jerôme sets as his goal. A breath of relatively reasonable quantum air Against this backdrop, I found refreshing Jerôme's unabashed resort to "deduction," "possibility," and "logical consistency." The results are consistently fascinating and provocative. He appears to make fairly short work of one physics paradox after another within a unified framework. In a key early move, he specifies a more consistent definition for volume as "closed volume." In doing so, he notes conventional inconsistencies in volume definitions across scales, highlighting the importance of what is and is not "counted" as closed volume. In his model, it is closed volume alone, and not any of the other varieties of volume he details, that creates the central phenomenon of spacetime displacement. Particles and nuclei form closed volumes, but the distributed charges of the outer electrons of atoms are so diffuse that they do not. And whereas waves do not form closed volumes and therefore have no mass, particles do form them and therefore do have mass. One might also take the converse perspective and define closed volume as "that which displaces spacetime." "Particles," in this model, result from "pieces of wave" that form closed volumes in spacetime. As these move and reopen, they can subsequently turn back into waves. Only closed volumes cause displacement in the elastic four-dimensional spacetime fabric that Einstein described, which produces what we have come to see from two different sets of observations as "gravity" and "mass" ("mass effect"). Even the hypothesized Higgs field entails an additional dimension. The Spacetime Model claims to be able to dispense with this while still accounting for the observations associated with the entire Standard Model of particle physics, Higgs boson included. As Jerôme puts it: The 4D expression of the mass effect means that the universe can be described with only 4D expressions, as Einstein thought his whole life. We don't need extra dimensions such as 5D, 6D, 7D...nD (string theory), or extra fields such as the Higgs field. In reality, the proposed theory is close to the Higgs boson theory. The major difference is that the famous Higgs field is nothing but spacetime....mass and gravitation are nothing but the consequence of the pressure of spacetime on closed volumes. His conclusion that "Everything is made out of spacetime" can certainly still leave us with a sense of the mysterious, but somehow manages to clean up the mystery compared to the more typical litany of enigmas. As Mises often emphasized, any given state of theory in a field must run up against some "ultimate given," that is, it can never be expected to explain every possible thing: Scientific research sooner or later, but inevitably, encounters something ultimately given that it cannot trace back to something else of which it would appear as the regular or necessary derivative.
Scientific progress consists in pushing further back this ultimately given. But there will always remain something that—for the human mind thirsting after full knowledge—is, at the given stage of the history of science, the provisional stopping point. It was only the rejection of all philosophical and epistemological thinking by some brilliant but one-sided physicists of the last decades that interpreted as a refutation of determinism the fact that they were at a loss to trace back certain phenomena—that for them were an ultimately given—to some other phenomena (UFES, p. 48). Jerôme's ultimate given is quite ultimate indeed: an elastic 4D spacetime with a substructure of Spacetime Cells (sCells). Everything else is built from that. It may be easiest to start by conceiving of an sCell as a "neutral electron." However, Jerôme's real point is the converse: that an "electron" is a "negatively charged sCell." Its positively charged partner in existence is called a "positron," which explains the positive charges of protons in this model. Positrons and electrons always do have the same mass (closed volume) of 510.998918 keV (electron masses confirmed with "precision of <0.0000086%") and protons and electrons the same charge (with the opposite pole) of 1.602176565(35) × 10⁻¹⁹ coulombs. Jerôme writes, "The relative difference between the absolute values is less than 10⁻²¹! So, the question is, 'How can we explain the incredible equality of these electric charges?'" He hypothesizes a joint origin of both characteristics in the splitting and reproduction of identical sCells that constitutes the ongoing creation of spacetime (more on this below), which would account for this uncanny precision of commonalities. Starting with a fabric of sCells, when the neutral charge of one transfers to another, the result is one below-neutral cell and another nearby and equally above-neutral cell. These two always appear as a precisely opposite pair because the above-average charge of one and the below-average charge of the other are nothing more than two symmetrical results of a single transfer. They always have the same mass because their shared sCell substructure already predefines this in the same way in both cases. In this view, electrons and positrons are visible to us because of their charges, whereas sCells in their background average neutral state are undetectable (cannot be "observed" directly), precisely because of their neutrality, and are therefore hidden in plain sight. Positrons and electrons are just two types of lit-up sCell. Electromagnetic waves, massless because they do not form closed volumes, propagate through this sCell fabric at a consistent speed in vacuum, but never any faster (light travelling through transparent matter has been measured at slower speeds and quite slow speeds have been measured under extraordinary experimental conditions within matter cooled to near absolute zero). Jerôme attributes this to a maximum cell-to-cell transfer rate that is a natural limiting characteristic of the medium of sCells themselves. That we have come to call this maximum transfer speed of 299,792,458 m/s "the speed of light" reflects the way in which we observed it and can measure it. Jerôme identifies neutral, positive, and negative states of sCells as the basic building blocks of all other particles. He proceeds to suggest how these components alone can account for the formation, disappearance, properties, masses, and charges of up and down quarks, protons, neutrons, hydrogen atoms, and onward.
Neutral sCells can contribute to mass effects themselves, but only when they become enclosed within a subatomic particle or nucleus and thereby come to “count” as part of a closed volume. This pair model simultaneously accounts for the location of antimatter in the universe. Rather than being hidden many light years away, it is hidden right under our noses, concealed quite near its partner in existence within other particles. Jerôme also claims to dispose of the hypothesized Strong force as a separate force; those effects result from the enveloping rubber-band-like effect of “distributed charge fields.” In fact, according to this model, there are only two fundamental forces from which the other apparently separate forces derive: Hooke’s Force (constraint and pressure), which applies to all particles, and Coulomb’s Force (attraction and repulsion), which applies only to charged particles (Figure 5-1). He argues that the concept of a photon as a particle makes no sense. He explains why a photon must be a “quantified wave” and never a particle, and how a quantified wave travelling through an sCell substructure is both consistent with experimental evidence and in principle logically comprehensible. As for black holes, he writes: “Inside a closed volume, as inside a black hole, nothing happens. The light doesn’t exist and therefore can’t escape…” He also claims to have solved the wave-particle duality. His method of doing so is largely logical and deductive, working from a simple set of widely accepted observations. And in another illustration of differentiating mathematical description and causal-rational explanation, whereas “Schrödinger’s probability concept must be replaced by a more realistic concept called the Distributed Charge Model,” the Schrödinger equation can still be used just as before! For the finale, he offers a simple, elegant, and unified account of the beginning and ongoing growth (“expansion”) of the universe through sCell expansion and division reminiscent of the way that living cells divide and reproduce in vast quantities with nearly unimaginable precision and a few extremely rare minor variations. This approach simultaneously supplies accounts of a long list of observations for which the Big Bang offers only question marks. A single internally consistent model is thus able to suggest accounts of the major observations at both the micro and macro levels of physics, including most of the usual list of enigmas. The real nature of spin and some other points remain relatively elusive, he admits, but ventures some tentative parameters and possibilities in each case. Simplifications are used to get the basics across to general readers, while the math-heavy sections and recalculations of fundamentals using closed volume definitions are set off as supplemental information, which can be skimmed or skipped by the non-specialist. Most of the book should be within reach of those with a reasonable general science education (though more would make things easier) and might be read in a motivated afternoon or two. The prose is brief and clear and the illustrations helpful in bringing home the arguments. The English is “off” just enough to reveal that it is not the author’s first language, but the meaning remains clear and easy to follow. Although the book is clear, a quick copyediting by a native speaker would still lift the quality level. Any bones left for quantum mysticism? 
If this model does pass the tests of internal logical consistency, it is still left to face tests of experimentation. In contrast, some of the competing paradox-ridden and n-dimensional theories it targets do not appear to pass the tests of logic, Ockham’s Razor included. Some may be rejected on logical grounds alone. Others might be rejected if there exists a competing theory that both explains the observations and better passes the tests of logic. Ideological opponents of “metaphysical” a priori logic would have been loath to reject a hypothesis based on logic alone. Yet not doing so has probably contributed to allowing dead-end speculations to run, permeating scientific culture, and poisoning tendencies in pop philosophy for a century. The Spacetime Model could put a damper on many of the popular claims of the “new physics supports mysticism” genre, particularly claims that logic, predictability, and consistent causality are mere illusions, or that subject-object differentiation is not to be relied upon. That said, there are still some extraordinary and mind-bending claims to be found in the Spacetime Model itself that might easily be viewable as resonant with certain claims found in some traditional contemplative traditions. In the Spacetime Model, it is not only that “all is spacetime”, but more specifically that particles (matter), waves (energy), and space (medium) all consist of the same stuff, which is, in this view, “elastic four-dimensional spacetime substructure.” From there, consider some traditional formulations such as the Tibetan “non-duality of form and formlessness” and the typically pithy Zen “not one; not two.” Matter, energy, and space are presented as being both different from each other (not one) and also consisting only of the same spacetime stuff as one another (not two).[6] However strange images from our attempts to understand the deep structures of physics may appear, and even though atoms are quite clearly “99.999% vacuum with 0.001% waves or matter-energy,” as Jerôme puts it, none of this has any bearing on the reality in which we as persons do and must live and act. Matter, however strange its ultimate substructure, still behaves according to the laws of causality, and so does its substructure. Probability is ultimately a measurement of our own degree of ignorance about the precise operations of physical causality.[7] Moreover, what is visible at one level of magnification (atomic level: mostly empty) does not necessarily also apply to the view at another level of magnification (the scale at which we live and act, where stuff does bounce off walls). As Hans-Hermann Hoppe has pointed out,[8] Paul Lorenzen, in Normative Logic and Ethics,[9] argues that all of our knowledge of natural sciences, even physics itself, presupposes certain a priori true assumptions and norms that are not derivable from “empirical” experimentation, a set of knowledge types he labels protophysics, which are “definitions and the ideal norms that make measurements possible” (p. 60). Nothing we discover by measurement can validly contradict the presuppositions of measuring or we will have taken the rug from under the basis of our own claims, rendering them nothing more than sounds, chirps or barks! And the winner is…? So where is the grand reaction to Jerôme’s rather comprehensive challenge to conventional physics models and hypotheses? I have not been able to find much of one online, either by specialists or anyone else. Is it because our Mr. 
Jerôme is just dead wrong and hopelessly naïve in his imaginings? Is it because there are so many competing “theories of everything” out there, a dime a dozen? Or might there be something special about this one? What if this Spacetime Model really is a simpler, more elegant explanation of all the observations than the mixed and matched crop of better-known theories it challenges, and is compatible with experimental results and QM/GR mathematics, as claimed? What if it does explain much of what is in need of explaining in a better way – not perfect, just better – than the competition? A conventional mindset would have to quickly reject such possibilities: Let’s get real. He has no official position in the physics community. His speculations and diagrams are self-published on his own website! Certainly it must just be an amateur effort compared to the real experts in the establishment with their mysterious, peer-reviewed ways! Maybe. But in light of our earlier discussions of the philosophical background radiation and our distinction between mathematical description and causal-rational explanation, such a conclusion may now look less reasonable than it might have. There certainly are mathematical geniuses at work and checking on each other in a language very few people can speak well enough to even listen in. That is all to the good as far as it goes (gains from specialization), but is it also a good excuse for not making sense in causal-rational terms? Maybe these are two separate matters that deserve more robust differentiation. So I retain doubts about just writing this all off based on institutional factors such as academic pedigree and position. Yet speaking of institutional factors, we do know that establishments in many fields tend to want to remain…established. We also know that one of the ways guilds and priesthoods have always tried to preserve advantages and privileges is through the construction and preservation of a public image that highlights the great mystery and impenetrability of their subject, which is obviously accessible only to the anointed! The very first line of the copyright notice page of Jerôme’s book reminds us that: “Scientific peer journals do not accept papers from independent researchers whatever their content.” Whatever their content? Including author bio as one factor among others in accepting papers would surely make sense, but it is hard to imagine something less “scientific” and more pre-modern and guild-like than excluding intellectual work based on the author’s institutional status alone. Fortunately, in this day and age, Mr. Jerôme’s carefully developed, clearly presented set of arguments are just a click away at no cost but time and mental effort for anyone to review, consider, and attempt to refute or improve upon (or maybe print out and tape to the doors of CERN?). However this comes out, though, we ought to keep up the hard work of applying the laws of logic even when it is not easy, and not start mumbling in resigned despair: “It doesn’t really matter. Who is Jacky Jerôme anyway?” Postscript: What about Beckmann? After initially writing a draft of this review of Jerôme's book, an early reader led me to Questioning Einstein: Is Relativity Necessary by Tom Bethel, which is largely a presentation and update for general readers of the ideas of Petr Beckmann, as presented in the more technical Einstein Plus Two. 
This is certainly also worthy of a careful reading and also touches many issues of the relationship between empirical knowledge, the role of logic, and problems with “official” knowledge institutions that I address in the review of Jerôme’s book. However, the Beckmann/Bethel line of thinking operates only at the "macro" relativity level. In quick summary, it argues that contrary to conventional wisdom, Einstein’s special theory of relativity is on weaker, not stronger, empirical grounds, than general relativity, whereas general relativity is stronger empirically, but was made unnecessarily complex in order not to contradict the earlier special relativity claims. The observed evidence for general relativity, claim these authors, can be explained using classical physics, whereas special relativity is essentially “unfalsifiable” (its assumptions inevitably "don’t apply" to any case of evidence that actually threatens to contradict it). I do not discuss the Beckmann/Bethel line here in detail so as to focus on Jerôme’s theories, but my general impression is that the Jerôme and Beckmann/Bethel perspectives do not appear necessarily contradictory. Meanwhile, Jerôme’s model appears to make even stronger claims, which go beyond the behavior of gravity and mass to explaining what both gravity and mass are in causal-rational terms that are built up right from the micro level. One Beckmann/Bethel addition to that might presumably be to modify Jerôme’s language for describing the macro level to further remove specifically Einsteinian terminology, even “four-dimensional spacetime,” which Jerôme is still fond of maintaining in his book (and which I will also keep in my review below for simplicity). I found no evidence that either of these parties is aware of the work of the other, and yet I do not see any obvious reason why both alternative theories could not be bounced off of one another and probably cross-improved for the trouble. The Beckmann/Bethel line of thinking is also summarized elsewhere. [1] Indianapolis: Liberty Fund (2006) 117. [2] While context does or should limit the meaning of “everything” here, the “Theory of Everything” formulation still ought to be qualified to head off reductionist interpretations. As the American philosopher Ken Wilber has often pointed out, any physics “theory of everything” cannot cover “everything,” as it excludes phenomena of consciousness viewed from the interior, that is, as Mises might phrase it, from the subjective perspective of an acting person. We cannot deny that such a perspective exists without self-contradiction and it is not reducible to material description. Subjective phenomena of consciousness are emergent from, but not reducible to, physical phenomena. Thus, “everything” should at least be used with this reservation to avoid what Wilber calls “flatland,” as described, for example, in Integral Psychology. Boston: Shambala (2000), pp. 70–71. [3] Ludwig von Mises, Human Action: A Treatise on Economics. The Scholar’s Edition. Auburn, Alabama: Mises Institute (1998 [1949]). Murray N. Rothbard, Man, Economy, and State, with Power and Market. The Scholar’s Edition. Auburn, Alabama: Mises Institute (2004 [1962, 1970]). [4] While the original text is quite clear and easy to read, the author is not a native speaker of English, and in citing quotations, I have made occasional typographical alterations to language and punctuation only to head off unnecessary distraction for readers of the present article. [5] The Ultimate Foundation of Economic Science (pp. 
19–20). [6] The Spacetime Model also suggests an uncanny depth to the basic elements of Ken Wilber’s integral four-quadrant model of all phenomena, one element of another “theory of everything,” but one not limited to the field of physics. Various accounts may be found in: The Marriage of Sense and Soul. New York: Random House (1998), esp. Chap. 5; Integral Psychology. esp. Chap. 14; and Integral Spirituality Boston: Integral Books (2006), esp. Introduction and Chaps. 1, 7, and 8. The second stage of the start of spacetime within the Spacetime Model is an expansion of a single sCell until it splits into two identical sCells (and then four, eight, 16, etc.). Here, 14.1 billion years ago, we already have the singular/plural distinction that forms the vertical axis of Wilber’s model. Then, at the very first sign of matter from the rare appearance of density variation in a few sCells, we find a positron and electron pair and with each of those, we already have closed volumes defining an interior and an exterior. That polarity forms the horizontal axis of Wilber’s model. The Spacetime Model thus offers possible root foundations for the construction of the integral four-quadrant model from among the very first things to ever happen in the history of spacetime. [7] As Mark R. Crovelli recently summarized this view: “If every event and phenomenon which occurs in the world has an antecedent cause of some sort, then we are forced to say that probability is a measure of human ignorance or uncertainty about the causal factors at work in the world…Man’s uncertainty in such a world could only stem from his inability to comprehend or account for all of the relevant causal factors at work in any given situation” (p. 166). in “All Probabilistic Methods Assume A Subjective Definition For Probability,” Libertarian Papers. 4 (1): 163–174. [8] “On praxeology and the praxeological foundation of epistemology,” The Economics and Ethics of Private Property, 2nd Edition. Auburn: Mises Institute (2006), pp. 265–294. [9] Mannheim: Bibliographisches Institut (1969).
The Essence of Quantum Mechanics

In the 20th century, science unveiled an incredible world that even the most daring science-fiction writers could never have thought of. This world was so surprising that it completely puzzled the most brilliant minds of the century, from Albert Einstein to Richard Feynman, from Niels Bohr to Paul Dirac, from David Hilbert to Werner Heisenberg, from Erwin Schrödinger to Wolfgang Pauli. Apparent paradoxes kept being found, but they all turned out to be caused by mistaken common sense. Meanwhile, the central question of the measurement problem, around which quantum mechanics is constructed, still has no commonly accepted explanation…

It sounds horribly complicated!

The basic ideas of quantum mechanics are definitely troubling. But they are not that complicated. The major difficulty is to get rid of common sense, because common sense makes a lot of mistaken assumptions. This is why learning quantum mechanics is an extremely humbling and fruitful journey to undertake. Also, quantum mechanics has troubling philosophical implications. That's why I invite you to embark on the adventure of the unveiling of quantum mechanics. To get you excited, let's start with an extract from The Fabric of the Cosmos: Quantum Leap, by NOVA and hosted by Brian Greene. This article provides some unusual popularized explanations of quantum physics. I believe it can provide a new perspective on this complicated theory. This is why I strongly suggest you read it, even if you have already watched a lot of popularizations of quantum mechanics.

Wave-Particle Duality

For a long time, physicists debated the true nature of light. While Newton imagined it made of small particles, this rather intuitive concept was suddenly shattered by the double slit experiment realized by Thomas Young in the early 1800s. In this article we will focus almost entirely on this experiment and its numerous variants, since, according to Nobel prize winner Richard Feynman, [it] has in it the heart of quantum mechanics. In reality, it contains the only mystery.

So what's the double slit experiment?

A beam of light was sent through two tiny, closely spaced holes before being captured on a screen behind them. The screen then displayed a troubling interference pattern, that is, a succession of dark and bright areas. The following video by Veritasium displays the experiment with such simple tools that you can do it yourself. The experiment is usually rather done with lasers, but Derek Müller's box makes it possible to take it to the street in a pretty cool way.

So, I guess that this refuted the idea that light was made of particles…

Indeed. To explain this phenomenon, the wave theory of light was introduced, culminating in James Clerk Maxwell's theory of electromagnetism. In this setting, light was seen as an electromagnetic wave.

Wait… What's a wave?

OK. First, let me talk about fields. A field is something that fills up space. More precisely, at any point in space, the field takes some value. The concept of a field is illustrated in the following extract of a video by Minute Physics. Now, if, at some point of the field, values are high, then there is a perturbation of the field at this point. This perturbation can then propagate, like waves on water. A wave is a propagating perturbation of a field.

OK. So, in the 1800s, people thought that light was a propagating perturbation of the electromagnetic field?

Yes! This made it possible to explain many other phenomena.
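To see how the wave picture produces Young's bright and dark fringes, here is a small numerical sketch of my own (it is not taken from any of the videos above, and the wavelength and geometry are just convenient round numbers): the complex amplitudes coming from the two slits are simply added, and the intensity on the screen is the squared magnitude of the sum.

```python
# Toy two-slit interference: add the complex wave amplitudes from the two slits,
# then take the squared magnitude to get the intensity on the screen.
# Wavelength and geometry are convenient round numbers, not a real setup.
import numpy as np

wavelength = 600e-9                      # m, roughly red light
k = 2 * np.pi / wavelength
d = 50e-6                                # slit separation (m)
D = 1.0                                  # slit-to-screen distance (m)
y = np.linspace(-0.02, 0.02, 1001)       # positions along the screen (m)

# Path length from each slit to each point on the screen
r1 = np.sqrt(D**2 + (y - d / 2)**2)
r2 = np.sqrt(D**2 + (y + d / 2)**2)

amplitude = np.exp(1j * k * r1) + np.exp(1j * k * r2)   # superposition of the two waves
intensity = np.abs(amplitude)**2

# Bright fringes appear where the two paths differ by a whole number of wavelengths
print("brightest:", intensity.max(), " darkest:", intensity.min())
```

Where the two path lengths differ by half a wavelength the amplitudes cancel, which is exactly the dark fringes of the pattern.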
But then came the discovery of the photoelectric effect.

Photoelectric Effect

What's that?

In certain settings, shining light on a metal can induce an electrical current. What was troubling is that the current was induced if and only if the wavelength of the light was short enough. If you used red light, there was no current. But if you used blue light, there was. This appeared to be a discontinuous phenomenon, which was incompatible with the continuity of Maxwell's equations.

So what would account for this discontinuity?

In one of his four earthshaking papers of 1905, Albert Einstein explained the discontinuity by the fact that light could actually be thought of as a collection of elementary particles called quanta of light, or photons! He justified this surprising assumption by the fact that the energy distribution of light was very similar to the energy distribution of the particles of a gas. Another piece of evidence supporting his idea was the discovery of spectral lines of emission and absorption. I already have a lot to say, so I won't dwell too much on this here. But if you can, please write more about these phenomena.

But I thought that Young proved that light wasn't made of particles!

I know! Yet, this didn't prevent Einstein from receiving a Nobel prize for his idea of photons.

I'm totally lost! Is light made of particles or not?

We'll get to this! But for now, let's keep confusing ourselves with another weird experiment. As opposed to light, all matter was thought to be made of atoms, and these atoms of protons, neutrons and electrons. In particular, electrons were considered elementary, indivisible particles. Indeed, scientists managed to isolate them one at a time… But they then tried Young's double slit experiment with beams of electrons instead of beams of light, and found interference patterns! This is displayed in the following figure from the Northern Arizona University website:

Electron Interference

This means that, just like photons, electrons appear to have both particle and wave properties! This is what scientists called the wave-particle duality, and it is often interpreted as particles sometimes behaving like waves. In fact, quantum field theory even suggests that forces also are both waves and particles, as explained in Thibault's article.

This doesn't sound very scientific…

I totally agree! So let's better understand what's going on!

Wave Function

The solution provided by many scientists in the 1920s and 1930s was to think of matter and light as a collection of elementary entities. But these elementary entities, which we call photons, electrons or anything else, are not actually particles. Rather, each entity is a wave. Physicists have since become used to calling these elementary entities particles, but you need to keep in mind that they are not to be thought of as point objects but rather as waves. Let me rephrase the scientists' idea of that time, because it's very important: All things are made of elementary waves. This phrase corresponds to quantum mechanics, not to more developed theories like quantum electrodynamics (QED), where waves can be combined to form new elementary waves, making each not really elementary… Let's stick with quantum mechanics here!

Waves instead of point particles? I have so much more trouble visualizing it…

I know! The following figure displays a one-dimensional wave function, which maps locations to the amplitude of the wave. Keep in mind that the following wave function represents one particle only.
Wave Function

Note that this figure fails to represent the fact that the wave function actually has complex values. But this won't be much of a problem for this article.

Does this assumption of elementary waves account for interference patterns?

Yes, because elementary entities are waves! These waves can add up constructively or destructively, creating interference patterns.

What about the fact that electrons could be isolated?

Yes too, because beams of electrons are actually a collection of elementary waves known as electrons.

What about the photoelectric effect?

This accounts for the photoelectric effect as well. The current is induced if electrons can capture an elementary wave which is energetic enough to make them leave the orbit of their atoms. The captured elementary wave is a photon with enough energy.

Wow! The wave function does account for all the experiments you mentioned!

It surely does! But there are more troubling experiments I haven't mentioned yet… Because electrons can be isolated, it's possible to do the double slit experiment by shooting them one at a time. We can then visualize where they land on the screen. Let's recapitulate all the double slit experiments we have discussed so far with the following video by Dr Quantum. I skipped the conclusion given by the video because I find it misleading, as it shows the electron as a particle dividing in two and recombining. It's an interesting interpretation, but it's absolutely not what quantum physics says.

OK, so what happened when electrons were shot one at a time?

Each electron lands at one particular location on the screen. But the locations where they arrive are not all the same! This is very weird, since all electrons are similar. More precisely, they are described by identical wave functions. And yet, they all end up at different locations!

How can this be explained?

Physicist Max Born proposed that, when measured, the waves collapse in an inherently random way. This means that they become very concentrated at a relatively precise location. This is usually interpreted as them becoming particles. But to really understand it, you should keep in mind that a collapsed wave is a very localized wave, rather than a point particle. A localized wave is almost zero everywhere except around the location of the wave:

Localized Wave

Once collapsed, the wave then resumes its propagation. But more than the way waves look, what needs to be stressed is the fact that the collapse of the wave functions occurs in an inherently random way. This means that two identical experiments yield different results!

Does this mean that anything can happen? Does this even make sense? If anything can happen, why is the world still making sense, at least at our scale?

These were questions that even Albert Einstein couldn't handle, as he famously rejected quantum theory by stating that God doesn't play dice with the world. Yet, experiment after experiment, over almost 100 years, this quantum effect has been confirmed again and again.

So it's really true? Anything can occur in the quantum world? Things are totally unpredictable?

What I haven't told you yet is that, behind this seemingly quantum unpredictability, there actually is an amazingly well-defined world of probabilities. Even though the results of experiments could not be predicted, the probabilities of the outcomes could be computed with incredible precision. Let's reconsider the double slit experiment with one electron at a time.
As more and more electrons were sent, an interference pattern appeared, as if they had all been sent simultaneously. What physicists realized is that this interference pattern was revealing the probability distribution of the outcomes of the single-electron experiment. Yet, this interference pattern also corresponded to the wave functions!

So the wave functions are related to probability distributions?

Precisely! From the elementary waves that form all things, we deduce the probability distributions of where they collapse! More precisely, the square of the norm of the value of the wave function at a point is the probability (strictly speaking, the probability density) of finding the collapsed wave there once we measure it. This is explained in the following extract from The Fabric of the Cosmos: Quantum Leap.

But if all that matters is the probability distribution, why care about the waves?

You need to consider the waves to understand constructive and destructive interference. Also, waves contain information about the energy and momentum of particles. More importantly, as explained in this more advanced article, the dynamics of the wave is precisely described by the Schrödinger equation. Finally, and most importantly, according to quantum mechanics, the true nature of particles is to be waves. What most popularizing videos call particles should rather be called localized waves. If you keep this in mind, I think you'll understand quantum mechanics much better. To be complete, I should add that the evolution of a particle is defined not only by its wave, but also by its spin. This is of great importance for advanced ideas like the Pauli exclusion principle, which explains the fundamental octet rule in chemistry.

The Measurement Problem

It's time for us to get to the most troubling part of quantum physics. Let's reconsider once again the double slit experiment. As physicists tried to understand how the electrons were moving, they decided to add detectors to observe which slit each electron was moving through. The result was profoundly shocking, as explained in this extract from The Quantum Universe.

What happened?

When the slits were observed, there was no longer any interference pattern! Everything suddenly occurred as if the electrons had been moving in straight lines all along. This was absolutely shocking: the outcome was modified by the mere fact that we were observing the electrons along the way!

Wow! This is extremely troubling! But what was observed at the slits?

Detectors showed that electrons were moving through one of the slits, and one only! Recall that when we weren't observing the slits, the wave was spread out and went through both slits.

But wait… Does this mean that the wave function collapsed at the slits, when measured?

Yes, exactly! In fact, if we consider that particles are always waves that simply collapse randomly for any measurement, then we can understand what is happening. As we measure the particle at the slits, it collapses and becomes very localized. Because the size of the slit is larger than the spread of the collapsed wave function, it's a good approximation to say that the particle only goes through one slit. Now, because the wave function collapsed, it's no longer the same. Thus, it will not behave as if it hadn't collapsed. This means that the result will no longer be the same as if we hadn't observed the particles at the slits. That's why observation of the slits affects the outcome of the experiment.

Wow! This is troubling!

I know! But wait! There's more.
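Before going on, here is a toy sketch of my own (arbitrary units, not a model of a real apparatus) of the two situations we have just discussed: when the slits are not watched, the two slit amplitudes add before being squared, and electrons sampled one at a time from that distribution build up fringes; when the slits are watched, each electron effectively comes from a single collapsed amplitude, so the squared amplitudes add instead and the fringes disappear.

```python
# Toy sketch: electron impact positions sampled from |psi1 + psi2|^2 (slits not
# watched) versus from |psi1|^2 + |psi2|^2 (slit observed, so the amplitudes no
# longer add coherently). Arbitrary units; an illustration of the idea only.
import numpy as np

rng = np.random.default_rng(0)
y = np.linspace(-10, 10, 2001)

envelope = np.exp(-y**2 / (2 * 4.0**2))          # overall spread of the beam
q = 2.0                                          # sets the fringe spacing (toy value)
psi1 = envelope * np.exp(+1j * q * y)            # amplitude "through slit 1"
psi2 = envelope * np.exp(-1j * q * y)            # amplitude "through slit 2"

p_unwatched = np.abs(psi1 + psi2)**2             # amplitudes add, then square: fringes
p_watched = np.abs(psi1)**2 + np.abs(psi2)**2    # probabilities add: no fringes

for name, p in [("slits not watched", p_unwatched), ("slits watched", p_watched)]:
    p = p / p.sum()
    hits = rng.choice(y, size=20000, p=p)        # electrons arrive one at a time
    counts, _ = np.histogram(hits, bins=30, range=(-10, 10))
    print(f"{name}: {counts}")
```

The first histogram oscillates between full and nearly empty bins, the second is smooth: same electrons, different outcome, simply because of what was measured along the way.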
New technologies have made it possible to delay the observation of the slits, as explained below. And yet, even when the observation of the slits was delayed, things occurred as if the observation hadn't been delayed: there was no interference pattern!

I don't get it! I thought that waves collapsed when measured, not before they were measured!

I know. This is extremely troubling! This leads us to the most fundamental misunderstood question of quantum physics, known as the measurement problem. Shortly put, it consists in asking: What's a measurement? Out of context, it would sound like a technical detail. Yet, as we have been discussing, measurements are part of the laws of physics, as they affect the outcomes of experiments. And yet, we don't even know what they really are.

What do you mean? Of course we know what a measurement is, don't we?

In 1935, Nobel prize winner Erwin Schrödinger introduced one of science's most famous thought experiments to illustrate how much we don't understand measurement (although I've heard that his goal was rather to ridicule quantum physics).

Oh yeah! Wasn't it something with a cat?

Yes! It consists in putting a cat in a box with a radioactive atom and poison. If the atom decays, then the poison is released and the cat dies. The box is left for a minute. This gives the atom a fifty-fifty chance of decaying. Now, if we don't equip the box with any measurement device, then it is in a quantum state. This implies that the cat is both dead and alive, as displayed in the following video from The Open University.

Isn't it equivalent to saying that the cat is dead or alive?

No! Reconsider the double slit experiment. When we observe the slits, the electron is indeed in the left or the right slit. But if we don't observe the slits, it is both in the left and the right slits, because the wave function spreads over the two slits. The two cases are very different, as we can see in the results of the experiments. Similarly, here, the cat isn't dead or alive. It is both dead and alive. And until measurement, that is, opening the box and checking, it is in this superposition of states.

Wow! That's troubling! But wait… Doesn't the cat count as an observer? Doesn't its heartbeat count as a measurement?

Well, that's a troubling question you're asking here… This is precisely the measurement problem! And, shortly put, there is no accepted explanation for it.

But there are ideas to explain it, aren't there?

We'll get to this eventually. But first, let me show you the measurement problem at its worst: entanglement. What I'm about to present here is my own creation. It seems to me to represent entanglement perfectly, but I might be mistaken, since I'm not an expert in quantum physics. If you are, please correct or confirm what I'm about to say. Let's reconsider Schrödinger's cat. Assume we could now separate the radioactive atom and the cat, after they have been put together for a minute. Let's keep the radioactive atom with us on Earth, while we send the cat far, far away, to some distant galaxy. Now, if we still haven't made any measurement of the cat or the atom, both are in a quantum superposition state. The cat is both dead and alive, while the atom both has and hasn't decayed. But the fates of the cat and of the atom are linked. When we measure the state of the atom, quantum mechanics says that it will instantaneously affect the state of the cat.

Are you saying that if we observe that the atom has decayed, then the cat instantly dies?

Yes! And if the atom hasn't decayed, then the cat instantly lives!
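Before continuing, here is a tiny sketch of my own of the statistics such a pair produces, written for a joint state (intact, alive) + (decayed, dead): each individual outcome looks like a fair coin flip, yet the two outcomes always agree. Note that a classical sampling like this only mimics the correlations in one fixed measurement basis; the genuinely quantum part, which Bell tests probe, shows up when the measurement bases are rotated, and that is beyond this little illustration.

```python
# Toy sketch of the atom/cat pair: a joint state (|intact, alive> + |decayed, dead>)/sqrt(2),
# sampled with the Born rule. Individually each outcome is 50/50, yet the two always agree.
# A classical sampling like this only mimics the statistics in one fixed measurement basis;
# the non-classical correlations probed by Bell tests (rotated bases) are not shown here.
import numpy as np

rng = np.random.default_rng(1)

# Basis order: (intact, alive), (intact, dead), (decayed, alive), (decayed, dead)
state = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
probs = np.abs(state)**2                 # Born rule applied to the joint state

labels = [("intact", "alive"), ("intact", "dead"),
          ("decayed", "alive"), ("decayed", "dead")]
for _ in range(8):
    atom, cat = labels[rng.choice(4, p=probs)]
    print(f"atom: {atom:8s} cat: {cat}")
```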
It's as if the information about our measurement instantaneously affected the state of the cat.

Wow! That's weird. But didn't Einstein prove that nothing could occur instantaneously?

He did! In his theory of relativity, Einstein proved that nothing travels faster than light! Worse than that, he proved that simultaneity doesn't even exist, as it depends on the observer. Therefore, entanglement seems to totally contradict his theory! This was unacceptable for Albert Einstein, who famously referred to this phenomenon as spooky action at a distance.

OK… But can't we imagine that the atom and the cat fall into definite states as soon as they are separated, in which case measuring the atom simply enables us to deduce the state of the cat, given the state of the atom?

What you're saying corresponds to saying that the cat is dead or alive rather than dead and alive. That's what Einstein suggested. After decades of endless debates, Irish physicist John Bell proposed an experiment to settle the arguments. This experiment was then improved by Alain Aspect, who separated the cat and the atom far enough apart that any coordination between them would have had to involve faster-than-light communication. This is explained in The Fabric of the Cosmos: Quantum Leap.

So what was the result of the experiments?

The experiments showed that Einstein was wrong. The cat and the atom really were in a quantum superposition state, and there was an actual spooky action at a distance.

So instantaneous communication is possible?

Weirdly enough, it appears so. The applications are amazing! Perfectly secure cryptography can be achieved, as explained in Scott's article on quantum cryptography. Even crazier, as shown later in The Fabric of the Cosmos: Quantum Leap, teleportation devices are currently being constructed based on entanglement. So far they've teleported photons, but at some point in the future, they might teleport people! The idea is based on the possibility of influencing the quantum state of the atom to increase the chances of eventually measuring it as not decayed. This instantaneously affects the state of the cat and increases its chances of being alive, once we have measured the state of the atom.

OK, that's it, I'm totally lost by entanglement!

In a talk at Google, Ron Garret provides a better understanding of entanglement. As he says, what entanglement really is is nothing less than a measurement. Entanglement isn't hard to understand, provided that we really understand what measurement is. All the difficulty of understanding quantum physics therefore boils down to the big question I mentioned earlier…

What's a measurement?

Yes! That's the big question of quantum mechanics! Let me recall what we have found out about measurement in this article. Measurement implies a collapse. But it's not only about the collapse of one wave function. Rather, it's a collapse of a collection of wave functions throughout time (as shown by the double slit experiment with delayed observation of the slits) and space (as shown by entanglement). This means that, because of this property of measurement in quantum mechanics, our universe is inherently non-local, and everything seems connected.

But isn't there any scientist who at least has an idea of an answer to the measurement problem?

It's very hard to grasp the concept of measurement. Any explanation of it must involve a ground-breaking twist of our vision of the world. In fact, one of the rare ideas which might make sense is a terribly shocking one.
Einstein once said: I like to think that the moon is there even if I am not looking at it. But Einstein's view has been seriously challenged by the measurement problem of quantum mechanics. Instead, many physicists claim that there is no "out there" out there, and that reality only exists when we look at it. I'll leave you with the explanation of Dr Amit Goswami, in this extract from The Holographic Universe: I want to add that I can't really endorse any of these interpretations of quantum mechanics, because I simply don't understand any of them. Once again, the measurement problem remains poorly understood, and there is no predominant opinion in the scientific community, as displayed in this video from Sixty Symbols. To face the measurement problem, a new concept of reality is probably needed, and Hawking's idea of model-dependent realism may well help us. But it is far from being an accepted concept to this day. Now, things may not be as weird as what I've just presented here. According to the theory of decoherence, the apparent collapse of the wave function may be just a side effect of the Schrödinger equation applied to a great number of particles. This is what I've explained in this extract of my talk More Hiking in Modern Math World:

Let's Conclude

By assuming that all things are made of elementary waves, and by using the equations of quantum mechanics to study the dynamics of these waves, physicists have been incredibly successful in explaining a very large range of weird experimental observations. This makes quantum mechanics the most thoroughly tested, and never refuted, theory science has ever produced. And its applications to technology are overwhelming! In particular, the invention of the transistor, which then led to the explosion of new electronic and telecommunication technologies, was a direct result of the understanding of quantum mechanics. What a crazy world our world seems to be! You're probably deeply confused. I know I am. Let me reassure you with a few more quotes from Nobel prize winners: • Anyone not shocked by quantum mechanics has not yet understood it. (Niels Bohr) • If that turns out to be true, I'll quit physics. (Max von Laue, on the wave properties of electrons) • Had I known that we were not going to get rid of this damned quantum jumping, I never would have involved myself in this business! (Erwin Schrödinger) • Not only is the Universe stranger than we think, it is stranger than we can think. (Werner Heisenberg) • I think I can safely say that nobody understands quantum mechanics. (Richard Feynman) • God not only plays dice, He also sometimes throws the dice where they cannot be seen. (Stephen Hawking, not a Nobel prize winner, on black holes) Can you recapitulate the essence of quantum mechanics in one or two sentences? Sure. The essence is that everything is made of elementary waves. Their dynamics either follows Schrödinger's deterministic equation or undergoes a probabilistic collapse. The latter is what the measurement problem is about: although the probabilistic nature of its outcome is very well described, the cause of its occurrence is still poorly understood, partly because of its non-locality through space and time, and partly because of the lack of a definition of the concept of measurement. If you want to go further in the understanding of quantum mechanics, read my article on the dynamics of wave functions! But if we understood measurement, we'd understand the Universe, right? Hum… No. Quantum mechanics seems unable to accommodate gravity.
Although Paul Dirac managed to make quantum mechanics compatible with Einstein's theory of special relativity, making it work with Einstein's general relativity is still widely considered an open problem. Physicists are still on a quest for a unified theory of everything. The best candidate so far is string theory, but it is still questioned by plenty of scientists.

More on Science4All:
Cryptography and Quantum Physics, by Scott McKinney
Dynamics of the Wave Function: Heisenberg, Schrödinger, Collapse, by Lê Nguyên Hoang
Spacetime of Special Relativity, by Lê Nguyên Hoang
Spacetime of General Relativity, by Lê Nguyên Hoang
Sunday, May 24, 2009 One of those projects that took 30+ hours... The song "Sax Rohmer #1" is a good song. Its music video is just flat-out impressive. So, I decided to try to copy it into a closet. The video, which (unless you're Ari and hate YouTube) you should watch first is here. The next four pictures should be stitched together, but the text of the verses is "Bells ring in the tower, wolves howl in the hills, chalk marks show up on a few high windowsills. And a rabbit gives up somewhere, and a dozen hawks descend: every moment leads toward its own sad end. // Ships loosed from their moorings capsize and then are gone, sailors with no captains watch a while and then move on. And an agent crests the shadows, and I head in her direction. All roads lead toward the same blocked intersection." So, you can try to trace them through the three photos. And since the entire chorus never gets written out, I did my own version on the leftover wall. I know it's heart-shaped, but believe me, that was totally unintentional. Whew. Comments? Friday, May 22, 2009 "Ow, my foot" or Ithaca summary with a side of Chacos pan and some complaining for garnish. Because it's what I'm noticing most at the moment, I think I've broken my left foot. Not in half, not off, just have some weird fracture in there somewhere in the vicinity of my fourth tarsal/metatarsal area. It's a result of slacklining, but because feet are feet and you can't do much for them, I figured just walking on it and taking ibuprofen now and again was about the only thing I could do for it, But then I wore my sandals yesterday, and it got a lot worse. Let's talk about the sandals. I've heard nothing but good things from everyone I know who has and wears Chacos. Chaco is a great company, located pretty close to me, hometown-wise (so, partially local), and generously offer a 50% discount to Peace Corps Invitees. I took them up on it, ordered two pairs of shoes. A pair of super-fantastic hiking boots, and a pair of ZX/2 sandals. I love the boots--they rock my socks. Actually, I rock them with socks, but that's (not funny, Katherine--shhh) not the point. But the sandals continue to confuse me. I wore them some, and it was fine. But then I started wearing them regularly, to hike all over Ithaca, for example, and started getting pretty bad blisters all over my feet, no matter how I adjusted the straps. Keep in mind, I really want them to fit, because I like Chaco, and I know that their shoes work really well for a lot of people. But they don't fit my feet, haven't started to wear to my feet, and have given me four thumb-sized blisters. Having gone slacklining with Jen (who visited me for three days--hooray!) and Sophia, and broken (or somehow messed up) the toe/its meta/tarsal, the Chacos exacerbate the swelling kinda painfully (I discovered yesterday), and I'm really about to disown them. I may have to disown them anyway, though, because they're really a pain to slide on and off, and apparently I'll be doing that a lot in Mauritania. I'm kind of tempted to just get a few pairs of the Teva flipflops I like so much and wear 'em out pair by pair. After all, if I can play basketball in them, I can probably deal with the sand and so forth. But, the soft soles make me nervous. Anyway, yes, I'm frustrated at my inability to fall in love with (falling, I can do) with my ZX/2 and unimpressed at my foot, which is making it hard to walk and do things. Let's see. 
Ithaca--I arrived here 4/22 or so and have gotten to spend a beautiful three weeks with all my amazing friends here. Kevin and I created lots of projects and mischief, Helen and I had adventures of all sorts, I discovered food that Erica doesn't like, and got to know the new Whitbees better. I also got to connect with Nick, Paul, Adam, Simon, Paul, Mark, John, Chad, Liana, Maddy, Sophia, Jen, Becca, and lots of other great people (i.e. if I forgot you, I'm sorry; I still love you!). Originally, I was going to camp in Danby State Forest, but then I chickened out, and have stayed in Whitby, doing chores and home improvement projects to pay for my room and board. Helen, and now Erica have left, so of course I'm heartbroken about that (second-time around goodbyes were even harder than the first, but hard goodbyes are the flip side of meaningful, important friendships), but Jen came to visit, so that's a good thing. She kept me from being too sad, because we laughed a whole lot, and yes, I broke the rule and made jokes after quiet hours. And of course I miss Kevin a whole bunch, but it's a little bit hard to complain to him about his going to Kentucky on a really cool service trip for a week. I mean, it's easy to complain, but it's hard to not feel stupid for doing so. As he pointed out, it's funny if I'm complaining that one of us is going to go prohibitively far away and be outside easy means of contact for a longer-than-easy period of time to do public service-type work. Hi, Kettle, it's Pot, and, uh...I was wondering if you had a cup of sugar I can borrow? So, yeah. I miss Kevin. But, all in all, Ithaca has been wonderful, I'm working on one last house project before I go, and enjoying my time here. I booked my tickets to staging yesterday, and am leaving at 8:11am on June 15th for Philadelphia. I'm also doing my best to ignore the fact that Mauritania's political situation is looking less than stable. That said, can we all just acknowledge right now that I'm a powerfully unfathomed weapon. Want an African country to have political collapse/unrest? Tell me I'm going there with some sort of institution in some sort of official capacity. You can even time it--it will begin to happen between a month and 20 hours before I'm scheduled to leave. Oh, and I called Peace Corps to ask what was up and guess what they said? "Flexibility and patience are key..." I'll be back in Grand Junction the evening of May 28th. Happy Birthday, Toby! [[Don't comment about: Chacos, "being patient/flexible" with respect to Peace Corps, "don't worry" about the political situation in RIM.]] Monday, May 18, 2009 Denver, Rhode Island, Massachusetts, Connecticut, New York I've been gone for a month, but I haven't written anything; originally that was because I was going to be sleeping way out in the forest and did not want any weirdos to come find me. This blog, I have discovered, is extremely googleable. Since I wimped out on the camping-for-a-month plan, it's just been good ol'-fashioned laziness, though. I flew to Providence and visited Renée, which was a whole lot of fun. We ate wonderful food, played with the kittehs, went on all kinds of adventures (starring graveyards, duckpin bowling, Scrabble, lots of talking, and playing outside). I forgot things, I think. In fact, I'm sure I forgot things, but I know I had a really good time, and that Renée was a really great host. So, shout-out to Renée for letting me visit. 
Then my cousin picked me up and we went to a Red Sox game with his friend Liz and her sister Kitty, who are both incredibly sweet people. They took me to the fancy section, and we ate really tasty food and watched the game from waaaaaay high up above home plate. Notable things about the game were a home run, and (more amusingly) a pickle between second and third. I kept getting confused about which team was batting, and thus was really happy sometimes when the Orioles got a hit. But I figured it out, eventually (there's that Cornell degree working for me). We went back to Portsmouth, Rhode Island, and I got to see Emily (another cousin), play outside, make bread, and try to ruin some teak that Tom was working on for his boat. I mostly failed at ruining the teak, and succeeded at making the bread. Also featured was a really intense conversation during which I failed to really explain well why although you can't do infinite experiments to demonstrate quantum stuff, it still exists. Also, I couldn't really do justice to why something can work at the particle level but not be applicable on the macroscopic level (why we don't tunnel through our chairs occasionally). I think the first one really led to the second, because theoretically if you do something an infinite number of times, your wavefunction should in fact line up with the chair's eventually. The point against which I was arguing was that if you can't do the requisite number of experiments, then you can't make that assertion. Where I really got frustrated, though, was at the subsequent assertion that "just because something is supported by math doesn't make it true." In fact, it does make it true, but not because math is infallible--it is true by nature of the fact that the math has been described by many other experiments to be both predictive and accurate. Thus, if the math says something, it probably doesn't have a vested interest somewhere else, and is thus believable. This is all contingent on having "good" equations, but since Schrödinger's seems to work pretty well so far, I fail[ed] to see how you can discount it simply because it is counterintuitive. The counterintuitive doesn't go down so well sometimes, I guess. Jen had a good point, though (she wasn't there, but she is sitting next to me as I type), when she notes that it isn't always obvious that physical (in the scientific sense) intuition does not apply on subatomic levels. I guess this is true for math, too. Just because you can balance your checkbook wrong does not mean that the Schrödinger equation will sometimes be flat out (and untracably) erroneous. Anyway, I felt (and still feel) pretty strongly about this, as you may have noticed. Then I visited my cousins' shipyard in Bridgeport, CT, which was really cool, and visited my other cousin in Mamaroneck, NY. My great aunt was there, so I got to see her, and re-meet some of my younger cousins, before Paul gave me a ride into The City, where I visited Andrew for an evening! Hooray Andrew! The next day I went for various adventures in The City. High points were turtling (falling over backwards due to a verrry heavy backpack and verrrry tired abs) in a flowerbed in Madison Square Garden, and visiting Evan. We went to the Tenement Museum, and learned a lot about the origin of the term 'sweatshop' and living conditions in the garment district in the 1800s. I got to see Naomi, too, before adventuring off to where I was staying for the evening--with another Paul. 
Seeing Paul was also a lot of fun--I don't remember most of what we talked about, but usual suspects are philosophy, morality, theory of relationships, and absurd jokes. Oh, and oregano (or was it basil?) tea. The next day, I successfully caught the Shortline up to Ithaca and Whitby, where I've been ever since. So, a shout-out to Tom, Liz, Kitty, Paul, Andrew (and Andres, whose bed I slept in), Evan, and Paul for your hospitality and general awesomenesses. And if any of you find my shirt, would you please tell me? It's my favorite one... Ithaca will be a new entry sometime soon.
söndag 29 maj 2011 Mathematical Secret of Flight 1 Computed Lift and Drag of a 3d NACA0012 wing for different angles of attack by Unicorn (blue) compared with different experiments. My talk on June 15 at Svenska Mekanikdagar 2011, is now available for preview as describing joint work with Johan Hoffman and Johan Jansson. Based on accurate solution of the incompressible Navier-Stokes equations we identify the true mechanism for the generation of large lift L at small drag D of a wing with lift to drag quotient L/D of size 10 - 50, which is not described in the literature. We combine the Navier-Stokes equations with a slip boundary condition on the wing motivated by the experimental fact that the skin friction is small for a slightly viscous fluid such as air or water, and we exhibit the role the slip condition in two crucial aspects: • prevention of separation at the crest of the wing generating large lift • 3d slip-separation at the trailing edge not destroying large lift and causing small drag. Text books claim following Prandtl, named the father of modern fluid mechanics, that both lift and drag result from a boundary layer arising from a no-slip condition. We obtain lift and drag in full accordance with experiments by solving the Navier-Stokes equations with a slip condition, which does not generate any boundary layer, and we thus present strong evidence that lift and drag do not originate from any boundary layer. In short, we show that solutions to the Navier-Stokes equations with slip are computable and correctly capture the physics of (subsonic) flight. See also • To solve the Navier-Stokes equations for, say, the flow over an airplane requires a finely spaced computational grid to resolve the smallest eddies. • Consider a transport airplane with a 50-meter-long fuselage and wings with a chord length (the distance from the leading to the trailing edge) of about five meters. If the craft is cruising at 250 meters per second at an altitude of 10,000 meters, about 10 quadrillion (10^16) grid points are required to simulate the turbulence near the surface with reasonable detail. Kim and Moin express the necessity dictated by Prandtl to resolve thin boundary layers to correctly compute lift and drag of a wing or an entire airplane, which would require 50 years of Moore's law to increase the computing power with a factor 10^10 to reach the dictated 10^16 points. We show that this is possible already today using 10^6 points by using slip without boundary layers to resolve. Monstrosity of Quantum Mechanics 6: Collapse of Wave Function Is quantum mechanics a physics beauty contest with all possibilities collapsing into one actuality upon observation? Who would you choose? Since the multi-dimensional wave function of quantum mechanics is supposed to represent a probability distribution over all possibilities, the high dimensionality has to be drastically reduced to become an actuality of some interest. This is supposed to happen in an interaction with an observer, referred to as collapse of the wave function, where the observer somehow picks one of all the potentialities and makes it into an actuality, as when Miss America somehow is chosen among many candidates by some educated physics observers. Is then quantum mechanics a beauty contest? Well, ask your favorite physicist about the nature of the collapse of the wave function. Is it real? What is collapsing? Physical reality or our knowledge about reality. Or is it quantum mechanics itself which collapses upon critical observation? 
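One way to picture the operational content of "collapse", leaving aside what it means physically, is as sampling a single actuality from the probability distribution that the wave function defines. The following is a toy sketch of mine, not the post's; the three candidate amplitudes are made up.

import numpy as np

# Toy sketch: the wave function assigns amplitudes to the candidates;
# the Born rule turns them into probabilities; "observation" picks one winner.
amplitudes = np.array([0.6, 0.8j, 0.0])        # made-up state over three candidates
probabilities = np.abs(amplitudes) ** 2        # -> [0.36, 0.64, 0.0]
rng = np.random.default_rng(0)
actuality = rng.choice(len(amplitudes), p=probabilities)
print(probabilities, actuality)

Whether this picking of a winner is a physical process, a change of our knowledge, or merely a feature of the model is exactly the question the post leaves open.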
lördag 28 maj 2011 Monstrosity of Quantum Mechanics 5: Passive Observation Impossible

Is passive observation really impossible in the world of quantum mechanics? David Albert, who together with Barry Loewer invented a version of the Many-Worlds Interpretation referred to as Many-Minds (different from the one I suggest), tells us that the physical process of making an observation in the quantum world necessarily interferes with what is being observed. In other words, the ideal of fully passive observation of classical mechanics cannot be upheld in quantum mechanics. The observer will always interfere, more or less, with what is being observed. Albert tells us that this is the big difference between classical and quantum mechanics. But is this true? Is fully passive observation impossible in quantum mechanics? Maybe, or maybe not, depending on what is meant by an observation. A human being can make observations in different forms: 1. Inspection of an analog physical apparatus capable of measuring some phenomenon. 2. Inspection of a digital simulation of the phenomenon. Here 2. represents a digital simulation based on solving the Schrödinger equation describing the phenomenon, e.g. the ground state of an atom, and observing its energy, while 1. would be to directly observe the emission spectrum. The nice thing about 2. is that it is a completely passive observation, in the sense that the computational process is independent of the observer making the final observation of the energy as a number coming out of the computation. So maybe passive observation is possible in quantum mechanics. Maybe quantum mechanics is not so different from classical mechanics. Not so mysterious?

fredag 27 maj 2011 Monstrosity of Quantum Mechanics 4: Quantum Computers

The belief of the modern physicist that the linear multi-dimensional Schrödinger equation describes the quantum world of atoms and molecules has led to the idea of the quantum computer: • a device for computation that makes direct use of quantum mechanical phenomena, such as superposition and entanglement, to perform operations on data. • Experiments have been carried out in which quantum computational operations were executed on a very small number of qubits (quantum bits). I have noticed in previous posts that the linear multi-dimensional Schrödinger equation is a monster which cannot be solved, not even on any thinkable supercomputer built with any known microprocessor technology. The dimensionality is simply overwhelming. We have noticed that the impossibility of solving the multi-dimensional Schrödinger equation results from the fact that the equation describes all possibilities rather than specific actualities, which is overwhelming for microprocessors limited to performing computations on specific data. The Schrödinger equation is thus a computational monster, and to handle such a beast a monster computer is needed: a computer which computes all possibilities rather than specific actualities, which computes on all data rather than on specific data. In other words, a quantum computer is needed. Are there any quantum computers? No, only with a few quantum bits. Is it possible to construct a quantum computer? Nobody knows. Few seem to believe one can. Does the multi-dimensional Schrödinger equation give a realistic description of the atomic world? Nobody knows, because solutions cannot be computed and compared to experimental observation.
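To put a number on "the dimensionality is simply overwhelming", here is a back-of-envelope sketch (mine, not the post's; the grid resolution M and the byte count per value are arbitrary assumptions): tabulating the full wave function on a grid with M points per coordinate takes M^(3N) complex numbers.

# Back-of-envelope sketch: memory needed to tabulate Psi(x1,...,xN) on a grid.
M = 100                       # assumed grid points per coordinate
bytes_per_value = 16          # one complex double-precision number

for N in (1, 2, 3, 10, 100):
    values = M ** (3 * N)     # the wave function lives in 3N dimensions
    print(N, values * bytes_per_value, "bytes")

# Already N = 10 gives roughly 10^61 bytes; N = 100 gives roughly 10^601 bytes,
# which is the sense in which Walter Kohn (quoted further down) says the wave
# function "does not exist" for N larger than 100.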
Can you solve a monster equation on a monster computer, that is, a device which simulates a real analog monster by being a real digital monster? What if the multi-dimensional Schrödinger equation is just an invented fictional monster, which will disappear as soon as you stop talking about it? Compare with today's post on The Reference Frame singing the praise of the Copenhagen Interpretation of the multidimensional Schrödinger equation, as if it had a meaning. Read it yourself and ask if you understand anything.

tisdag 24 maj 2011 Monstrosity of Quantum Mechanics 3: Many-Worlds

The monstrosity of quantum mechanics is expressed in full bloom in Everett's Many-Worlds interpretation, reflecting the fact that solutions of the linear multi-dimensional Schrödinger equation can be freely superimposed. The Schrödinger cat in its closed box can thus be in a superposition of alive and dead, and only upon opening the box for observation does the cat have to collapse into either alive or dead, as if there were two possible parallel universes prior to the collapse into one actual universe. The solution of the linear multi-dimensional Schrödinger equation is thus interpreted as a universal wave function supposedly representing all possible universes, out of which a specific actual universe is singled out in one way or another. How should one react to this breath-taking ocean of possibilities? In this case there seem to be two possibilities: 1. Accept the linear multi-dimensional Schrödinger equation as given by God. 2. Replace the linear multi-dimensional Schrödinger equation, as the basic model of quantum mechanics, with something more reasonable. I would vote for 2, and I explore one possibility in Many-Minds Quantum Mechanics. After all, it was Schrödinger and not God who wrote down the equation. It was Schrödinger who understood that his equation had serious flaws and should be replaced by a version describing actualities instead of possibilities. What do you say? 1 or 2? One actuality or all possibilities? Would you prefer all possible lives to one actual life? Compare with the title of the biography: A Life of Erwin Schrödinger. Nobody would be able to write a biography with the title All Possible Lives of Erwin Schrödinger, and even if somebody could, nobody would be interested in reading it.

måndag 23 maj 2011 Monstrosity of Quantum Mechanics 2

The simplicity (linearity) of the Schrödinger equation is seductive and has misled many minds. Quantum mechanics as a description of the microscopic world of atoms and molecules is based on Schrödinger's wave equation, which as a mathematical object is (see above) • scalar • linear • multidimensional, in 3N coordinates for N electrons/kernels (nuclei), with solutions called wave functions, commonly denoted Psi: • Psi(x1, x2, ..., xN, t), with xj representing the three position coordinates of particle j, j = 1,...,N, and t denoting time. The wave function Psi thus depends on 3N independent real variables plus time. The simplicity of the Schrödinger wave equation (scalar and linear) as a description of a complex reality is thus balanced by an extreme richness of the wave function, which depends on 3N + 1 independent variables.
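For reference, here is the equation the bullets above describe, written out in standard textbook notation (the post itself does not display it; the Coulomb interaction is the one mentioned later in the post, and the charges q_j and masses m_j are generic labels):

i\hbar\,\frac{\partial\Psi}{\partial t}(x_1,\dots,x_N,t)
  \;=\; -\sum_{j=1}^{N}\frac{\hbar^2}{2m_j}\,\Delta_{x_j}\Psi
        \;+\; \sum_{j<k}\frac{q_j q_k}{4\pi\varepsilon_0\,|x_j-x_k|}\,\Psi .

It is scalar (one complex-valued unknown Psi), linear in Psi, and each x_j ranges over ordinary 3-dimensional space, so Psi is a function of 3N space coordinates plus time.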
The richness of the wave function thus makes it impossible to give it a physical meaning as representing a configuration or distribution of electrons and kernels, which threatened to kill quantum mechanics at birth, but it was rescued by Max Born declaring that • |Psi(x1,...,xN,t)|^2 represents the probability of the configuration given by the coordinates (x1,...,xN,t), and by Niels Bohr declaring that the wave function, viewed as a probability distribution, could upon observation collapse into a definite physical state, as when opening the box containing the Schrödinger cat. Born and Bohr thus developed the Copenhagen Interpretation (of quantum mechanics), which is today the officially accepted truth, although contested by alternatives such as hidden-variable and many-worlds interpretations, without any winner. Schrödinger himself left quantum mechanics as soon as the Copenhagen Interpretation captured the minds of most physicists. The richness of the wave function is in fact a monstrosity already for small systems with N = 100, say, not to speak of real systems of 10^23 particles in a mole of gas, as pointed out by Walter Kohn, Nobel Prize in Chemistry in 1998: • The wave function does not exist for N larger than 100. • Why? Because it cannot be computed, because of the many dimensions. Kohn got the Nobel Prize for computing electron densities, instead of probabilities, as solutions of a non-linear version of the Schrödinger equation in 3 space dimensions, referred to as density functional theory. If now the wave function as solution to the Schrödinger equation does not exist, there must be something fishy about the Schrödinger equation. What? We saw that the equation is scalar and linear and thus has a simple structure, which is not problematic in itself, but if it necessitates a monstrous richness in dimensions, it seems that one should question the very formulation of the Schrödinger equation as a scalar linear multidimensional equation. Where did Schrödinger get his equation from? Did he derive it from basic principles? Not really. It is more of an ad hoc invention, expressing particle interaction by electrostatic Coulomb potentials combined with a new, mysterious form of kinetic energy. How can we know that the equation is a good model of physics if it cannot be solved? How can we check that its solutions give correct predictions if they cannot be computed and thus determined? Nevertheless it is a mantra of modern physics that the Schrödinger equation is a good model, but it is a mantra without physical meaning, about an equation which cannot be solved. It is like claiming that a certain truth is hidden in a riddle which cannot be solved. Thus, new versions of the Schrödinger equation are needed. I explore one such line of thought in Many-Minds Quantum Mechanics, in the spirit of the Hartree method, as a non-linear coupled system of one-electron/kernel Schrödinger equations. The simplicity of linearity (and superposition) of the multi-dimensional Schrödinger equation is here replaced by a non-linear complexity, but the system solutions only depend on three space dimensions, which makes a direct physical interpretation possible, without probabilities and wave function collapse. This is a realist approach, as compared to the non-realist Copenhagen Interpretation. Compare with Lars-Göran Johansson: Interpreting Quantum Mechanics: A Realist's View in Schrödinger's Vein, suggesting a form of realist wave-particle duality, with continuous waves for propagation and discontinuous particles for exchange of energy.
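For readers who want to see what "a non-linear coupled system of one-electron Schrödinger equations in the spirit of the Hartree method" looks like, here is the textbook Hartree form, written schematically (this is the standard Hartree ansatz, not Johnson's specific Many-Minds model; exchange effects and any external potential are omitted for brevity). The full wave function is approximated by a product of one-particle functions,

\Psi(x_1,\dots,x_N,t) \;\approx\; \prod_{j=1}^{N}\psi_j(x_j,t),

and each factor solves a three-dimensional equation driven by the mean Coulomb field of the others,

i\hbar\,\partial_t\psi_j(x,t)
  = -\frac{\hbar^2}{2m_j}\,\Delta\psi_j(x,t)
    + \Big(\sum_{k\neq j}\int\frac{q_j q_k\,|\psi_k(y,t)|^2}{4\pi\varepsilon_0\,|x-y|}\,dy\Big)\,\psi_j(x,t).

Each equation lives in ordinary three-dimensional space, but the system is non-linear, since every psi_j feels the densities |psi_k|^2 of all the others.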
söndag 22 maj 2011 Charles Mackay: Madness of Crowds and CO2 Alarmism In Extraordinary Popular Delusions and the Madness of Crowds published in 1841, Charles Mackay debunks witch-hunts, alchemy and economic bubbles. Today Mackay would have been writing about the crowd madness of CO2 alarmism, with the witches being the polluters of CO2, the alchemists the CO2 alarmists and the bubble the green economy. Mackay said many clever things obviously anticipating CO2 climate alarmism, while giving hope to skeptics of CO2 alarmism: • Aid the dawning, tongue and pen: Aid it, hopes of honest men! • He who has mingled in the fray of duty that the brave endure, must have made foes. If you have none, small is the work that you have done. • Truth... and if mine eyes Can bear its blaze, and trace its symmetries, Measure its distance, and its advent wait, I am no prophet - I but calculate. fredag 20 maj 2011 Monstrosity of Quantum Mechanics Schrödinger trying to slay the many-headed monster of the wave function (assisted by Einstein) however without success. Basically, classical physics is Newtonian mechanics and modern physics is quantum mechanics. Quantum mechanics is supposed to be described by Schrödinger's equation, worshipped by modern physicists. The equation was formulated by Erwin Schrödinger in 1925 seeking an equation with wave-like solutions called wave functions describing the dynamics of atoms and molecules resulting from an interplay of positive kernels and negative electrons under attractive and repulsive electric Coulomb forces. Nothing strange in principle, but what Schrödinger had created showed to be nothing but a Monster. Monster? Why? Well, the wave function for the simplest case of the Hydrogen atom with one electron depends on 3 space coordinates and and time, but the wave function for an atom with N electrons depends on 3N space coordinates (and time), which makes it into a Many-Headed Monster beyond direct physical interpretation: • Instead of describing an actuality in 3 space dimensions, the wave function describes all possibilities • Instead of describing a specific actual sequence of 1000 coin flips, the wave function describes the 2^1000, much more than 10^100 = googol, possible sequences of coin flips. • Instead of describing the life of one specific actual human being, it describes the lives of all possible human beings. As soon as Schrödinger understood that he had created a scientific monster, he tried to kill it but failed and then he withdrew from physics, while the Monster captured the minds of all the modern physicists (except Einstein's) who quickly formed a whole army under the leadership of Niels Bohr and his Copenhagen Interpretation of the wave function as a probability distribution of all possibilites. To get from possibility to actuality, the idea of collapse of the wave-function was invented, a Monstrous Idea to handle a Monster. Before collapse the Schrödinger Cat in the box would be in a state of superposition of alive and dead with all possibilities still present, and only upon opening of the box and inspection, would the Cat collapse into an actuality as alive or dead. This Monstrous Idea has led modern physics into an endless desert of Multiverses and Many-Worlds of all possibilities. 
A recent contribution to this monstrosity is the The Multiverse Interpretation of Quantum Mechanics by Raphael Bousso and Leonard Susskind: • We argue that the many-worlds of quantum mechanics and the many worlds of the multiverse are the same thing, and that the multiverse is necessary to give exact operational meaning to probabilistic predictions from quantum mechanics. Decoherence - the modern version of wave-function collapse - is subjective in that it depends on the choice of a set of unmonitored degrees of freedom, the "environment". Read and try to understand where physics is today... For a new approach without monsters, see Many-Minds Quantum Mechanics based on a different non-linear version of the Schrödinger equation as a coupled system of one-particle three-dimensional equations. The thesis of Hugh Everett III behind the many-worlds interpretation exhibits the difficulties or rather monstrosities of the usual scalar linear multidimensional version of Schrödinger's equation. We will return to Everett's thesis in search of a connection between many-minds and many-worlds physics. Since we all have different conceptions of the world, maybe we in fact live in a many-worlds universe, one for each mind. Of course, the following questions then comes up: What is a mind and how many are there? Another monstrosity perturbing the minds of many modern physicists is the Greenhouse Gas Effect, but there are some physicists fighting this monster, as e g William Happer: The Truth about Greenhouse Gases referring to Charles Mackay's Extraordinary Popular Delusions and the Madness of Crowds first published in 1841. The development of modern physics into monstrosity is described in Dr Faustus of Modern Physics. Free Will and Finite Precision Computation 5 • The idea tackles one of history's great philosophical debates. torsdag 19 maj 2011 Free Will and Finite Precision Computation 4 Daniel Dennet advocates a compatibilism of determinism and free will, expressed as a capacity of human beings developed by evolution to avoid (unpleasant) things by voluntary action: Seeing a brick being thrown at us by some unfriendly agent, we typically choose to duck. Dennet argues that we do that by free will, since it would also be possible to choose to not duck and take the hit to get a case to bring to court. Dennet argues that either the future is fully determined by the past (Laplace demon) or the future is fully undetermined in the quantum sense that the next position of an electron is not determined but subject to throwing a dice. In either case we cannot really influence what is going to happen, and thus we cannot exercise any free will: whatever happens, happens. Yet Dennet claims that we have a free will in the sense that we can decide to avoid certain things, but not all: We will not have time to duck if the brick is replaced by a bullet. I get the impression that Dennet's resolution of the apparent contradiction between a free will and full determinism/indeterminism, is a scholastic resolution in the sense that something essential is being missed and nothing really new is brought in to solve the eternal free will problem. Would finite precision computation be helpful? The idea here is that little things may be left to be decided by the dice while major things are predetermind by a finite precision Laplacian demon. More precisely, we know that • there are major things that we cannot do even if we would like to (limited free will): e g fly like a bird. 
• there are major things we can do which we have decided to do (according to a predetermined master plan): e g go to college. • there are little things which we can decide by free will, which we could leave to the dice if we cannot decide: e g meat or fish for dinner. This opens to a finite precision resolution of the free will problem: • big things determined by a finite precision Laplace demon/master plan of ours • little things decided by a dice. In extreme cases a small thing could become big and would then be described as a strike of luck or accident: to win on the lottery or get hit by a falling brick. A Dark Side of CO2 Alarmism and The Royal Swedish Academy and makes the following Call: • Greatly increase access to reproductive health services... • reduce birth rates. Basic Science: Climate Sensitivity Less Than 0.3 C onsdag 18 maj 2011 What is a Princess Allowed to Say? About CO2 and Great Transformation? Crown Princess Victoria of the Kingdom of Sweden stated in her presentation at the 3rd Nobel Laureate Symposium on Global Sustainability organized by the Royal Swedish Academy of Sciences: • Burdens must be shared by everyone (including masses of poor people). • Wind turbines, solar connectors, panels and geothermal energy, why is it that countries are using soo little of renewable energy sources despite having the knowledge and technique? • We can and must change our life styles and the manner in which we use energy. • What are we waiting for? The work has to start here and now. • The world succeeded to come together and decide upon removal of freons. • To succeed we need to reconnect humanity with the biosphere. • This is no small task. I see no better persons though than Nobel Laurates to carry this critical message to the world: • The need for a Great Transformation. • Our generation has the knowledge and ability to create a sustainable world for future generations. The presentation poses the following questions • Does the Princess make political statements? • Is the Princess allowed to make political statements? • Is the Princess allowed to advocate specific techniques for generating energy? • Is the Royal Swedish Academy influenced by Royals? Any answers? See also my Newsmill article about biased jury. The Princess speaks the same words as Hans Joachim Schellnhuber, main organizer and ideolog of the symposium, according to New York Times known for his "aggressive stance on climate policy": 1. Earth’s population could be devastated by buildup of greenhouse gases. Does the Princess understand what she is saying? PS1 The verdict of the Jury of Nobel Laurates of the Symposium is expressed in the Stockholm-Memorandum: • Greatly increase access to reproductive health services... reduce birth rates. • Introduce strict resource efficiency standards • Launch a major research initiative on the earth system. • Scale up our education efforts to increase scientific literacy. This is nothing but A New Brave World......but is there place for a Princess in this New Brave World? A resource efficient renewable Princess? tisdag 17 maj 2011 Free Will and Finite Precision Computation 3 This is a continuation of Free Will 2: So can we make it rain tomorrow by leaving the car window open? Can the flap of a butterfly in Brazil set up a tornado in Texas? How can we tell? Well we have already answered this question: Take away the butterfly and observe tornados anyway, close the window and observe rain anyway. Or let the butterfly flap and observe no tornados, open the window and observe no rain. 
Evidently we are talking about a big effect (tornado, rain) from a small cause (butterfly, car window), which is only possible if the system under consideration is unstable. Why? Because the definition of an unstable system is that a small cause can have a big effect. If the effect of any small causes is small, then the system is stable. Most of the systems we can observe are (more or less) stable, because unstable systems tend to break down or explode into non-existence. Is the weather unstable? Well, we say that the weather is unstable when changes are unpredictable, and we know that this is often the case. How unstable can then the weather be? Can it be so unstable that the flap of butterfly can cause a tornado? Probably not. We expect that sufficiently small perturbations cannot change the major features of the weather and cause a tornado. This means that it is irrelevant whether a butterfly flaps or not, or if we leave the car windows open or not, as concerns tornados and rain. If we accept that small causes do not change major features, that is, that we are dealing with a (more or less) stable system as a typical system which we may be confronted with, then we could say that we could leave certain little things to be determined by chance, by throwing a dice: • It would not change anything essential. • It would save us time for essentials by avoiding getting drowned into pedantry. • In fact, it would be necessary to not get bogged down by details. • In other words, we would have to act with finite precision in order to not get stuck on the spot at a specific point in time. • Time is advancing and so we have to advance as well and thus we have to take decisions with finite precision only, because we have no time to do everything with infinite precision. We are now approaching the question of free will. Can we do anything we could of think of doing? No, our abilities are limited, but within these limits we would say that we have some form of free will. We could decide what to study at the university, with whom to engage, how to dress, what to eat, what to say, but all these decisions could fit into some form of master plan for our life, which we probably should search for if we don't have any. Our free will would not be entirely free but subordinate to a master plan, which we may have chosen by free will or inherited from our parents, spouse or friends or society. So maybe our free will as concerns big things is not that free, as if the main pattern of our life largely is predetermined. We could still argue that we have a free will to decide little things, what movie to see, what to have for dinner et cet, but we could also say that we will only spend limited time on these issues to find the "optimal solution". We could even use a dice to decide if we cannot easily make up our mind or come to some agreement with somebody. But you would not like Luke Rhinehart decide big things, such as getting divorced or not, by throwing the dice, because that would quickly ruin your life. In short, you would act with finite precision and feel that you have some form of free will in particular for little things, possibly exercised using dice, while you may feel that the main path of your life (or at least other people's lives) is more or less predetermined. This corresponds to something between full determinism (no dice) and full indeterminism (all dice) as a for of finite precision determinism (dice only for little things). 
In other words, a free will which is not completely free, but not completely unfree either: • a finite precision free will. PS Suppose Tom wants to show Harry that he has a free will. Consider the following conversation: Tom: Look, I can decide to lift my left arm or my right arm according to my own free will. Harry: How you decide to lift the left or the right? Do have some predetermined preference? Tom: Of course not, then it would not be free will. Harry: OK, but if you are completely neutral, how are going to decide? Tom: Let me think...should I lift the right arm...or should I lift the left...what could be a good reason to lift the right arm...instead of the left...well, I cannot really decide...I need more time... but even so I don't how to choose while staying fully neutral... Harry: Can I offer some help? What about flipping a coin? Tom: Flipping a coin? Yes, that must be the only possibility which is completely neutral, without any perdetermined predjudice for right and left. That's what I will do to not get held up by this silly test... What Does a Nobel Laurate Understand about CO2? 1 • fossil fuel raising CO2 above the limits of the Holocene • exit door from the Holocene had been opened • Great Acceleration has not been an environmentally benign phenomenon • eroding the Earth’s resilience, ocean acidification. • carbon-based model unsustainable • low-carbon society is a Great Transformation • global energy system decarbonised • greenhouse gas emissions absolute minimum • low-carbon societies • quantum leap for civilisation • universal consensus • Global Enlightenment • new social contract • science subservient role • sustainability is a question of imagination. We ask the questions: and will report on answers..stay tuned... • This requires resources to renewable energy of unprecedented size. måndag 16 maj 2011 Free Will and Finite Precision Computation 2 Continuation of Free Will 1: David Foster Wallace studies the logical argument of the fatalism of Richard Taylor going back to Aristotle stating that we cannot do anything other than what we actually do, in other words that the future is predetermined and free will is only an illusion. The logic is that a statement of the form "Tomorrow it will rain" is either true or false: If it is true then it will have to rain, and if it is not true then it cannot rain, in both cases showing that what will happen tomorrow is predetermined. Of course, this is a simple logical trick based on the idea that a statement must be either true or not true. But nothing says that this must apply to the statement "Tomorrow it will rain". It is a statement without definite truth value when (today) it is uttered; only in retrospect knowing the outcome is it possible to assign it a truth value and then the predetermination disappears. But there are other arguments showing that the future is determined by the present. The best one is that of Laplace's demon: Laplace's demon would thus be able to tell today if it is going to rain tomorrow by simply computing the solution to the equations of motion describing the evolution of particles (atoms, molecules) making up the climate system, and would thus be able to make a necessarily true prediction of a coming event, thus showing that it is predetermined. But is there a Laplace demon? Human computing power is capable of solving the equations of motion for particle systems of millions or billions of particles (10^6 - 10^9), but not the octillions (10^30) of real systems. 
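To get a feel for the gap between 10^9 and 10^30 particles, here is a rough order-of-magnitude sketch (my own numbers; the cost per particle update and the machine speed are loose assumptions, not figures from the post):

# Rough sketch: time for one single time step of a 10^30-particle simulation.
particles = 1e30
flops_per_particle_step = 100.0     # assumed cost of one force-and-position update
machine_flops = 1e18                # roughly an exascale supercomputer
seconds = particles * flops_per_particle_step / machine_flops
print(seconds / (3600 * 24 * 365.25), "years for a single time step")

Even granting the demon the fastest existing hardware, a single step of the full calculation would take millions of years, which is why the question that follows is whether the weather is predetermined in principle rather than in practice.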
We are all too familiar with the fact that human intellect cannot tell for sure whether it is going to rain tomorrow. But is the weather still predetermined? Is there anything we can do by free will to make it rain or not? Can we make it rain tomorrow by leaving our car windows open? The investigation continues...

Self-publishing on Google Books?

I have published the following new books of mine on Google Books in full view with free PDF download: The idea is to compare direct publishing on Google Books with self-publishing on e.g. Amazon CreateSpace, or with conventional publishing through an established publisher as an ebook or printed book. It appears that Amazon CreateSpace requires conversion of the pdf to a different ebook format, which is not automatic and is tricky if math formulas are involved. Maybe somebody has some good advice to give.

söndag 15 maj 2011 Free Will and Finite Precision Computation 1

In recent books I have shown that the concept of finite precision computation, in reality in analog form and in simulations of reality in digital form, can be used to give rational deterministic (mathematical) explanations of the following phenomena: • direction of time • 2nd law of thermodynamics • blackbody radiation, which have evaded explanation using both classical deterministic exact mathematics and classical statistical physics. Finite precision computation opens classical exact determinism to some imprecision or indeterminism, without going all the way to the full indeterminism of statistical physics, and thus avoids the impossibility of both extreme determinism and extreme indeterminism. In finite precision computation, little things may be decided by throwing a dice, corresponding to chopping a decimal expansion to a finite number of digits, while big things may still be fully deterministic. The concept can be described by one of the following options for using dice throws to decide what to do: • Full Determinism: Calculate everything exactly. Never throw a dice. • Full Indeterminism: Calculate nothing. Always throw a dice. • Finite Precision: Calculate the big. Throw a dice to decide the small. Full Indeterminism is represented by the cult novel The Dice Man by George Cockcroft, about the psychiatrist Luke Rhinehart, who lets the dice decide everything, with catastrophic results when he uses it to decide big things like getting divorced or not. Full Determinism is represented by the fatalism of Richard Taylor, examined by the cult author David Foster Wallace, who took his own life on Sept 12, 2008, maybe after asking the dice to decide whether or not to pull the trigger. Wallace wrote a college thesis on Taylor's fatalism with the title Fate, Time, and Language: An Essay on Free Will, republished in 2010 by Columbia University Press. Can Finite Precision Computation be used to shed some light on the eternal philosophical problem of Free Will? I will address this question in a sequence of posts, while reading a bit of Wallace. I will start with the following question: • Is it helpful to let a dice decide little things?

onsdag 11 maj 2011 Has the Professor Abolished Himself?

Under the new Higher Education Act it is the head of department who "leads the activities", i.e. decides what is to be done and said, while the professor/teacher "attends to" education and research, i.e. does the job, on orders from the management. I take this up in a piece on Newsmill, based on my experiences of censorship and muzzling at KTH, documented under KTH-gate. Naturally the article was rejected by SvD, and DN was of course out of the question.
Muzzle on! Ultimately this is about academic freedom of thought: who is to decide what counts as current scientific truth, the professor/scientist or the administrator/politician? My professor colleagues around the country are remarkably indifferent to the question, as if it did not concern them: • Can it really be that the professor has abolished himself without anyone noticing, let alone saying, anything? • Without the professors' union SULF raising any objection? • Perhaps the new university has no need for professors whose task is to think independently? • Has the professor let himself be muzzled without protest? The debate on Newsmill may provide answers. PS1 As for muzzles, censorship and the silencing of critical voices, this is of course effective as long as it works 100%, but it requires that all channels be closed and that surveillance be total. That is hard to achieve in today's new information world: the climate debate has now been taken over by the free blogosphere, and politically correct thinking has lost its hegemony over scientific truth. Something for KTH, DN and SvD to consider, perhaps. There are also blinkers that can be put on. PS2 Amusingly enough, KTH is arranging a Symposium on Academic Leadership on May 13 in honour of Ingrid Melinder (who is not a professor), at which, naturally, the administrators who imposed the muzzle with Melinder's kind assistance will speak: Peter Gudmundson and Folke Snickars. Perhaps an occasion to bring up KTH-gate? Probably not: only administrators get to speak about academic leadership. Professors are to keep quiet in the new university (except Mathias Uhlen with his 800 million SEK per year), while the Scout Association gets to develop its leadership philosophy for the university.

måndag 9 maj 2011 SULF on Censorship and KTH-gate

After having been subjected to censorship, including a direct personal attack by KTH backed by President Peter Gudmundson, which I have documented in a series of posts under KTH-gate, I turned to my union SULF to see whether I could get any support. I presented my case at a meeting with the union lawyer Carl Falck, who then took part in a meeting with the President, supported by his adjutant Anders Lundgren, at which it became very clear that the President had not the slightest qualm about seriously damaging my professional work through his actions. Anders Lundgren was careful to point out that KTH never (never) comments on statements in the press attributed to the President, even if they are grossly incorrect and hit hard at the person targeted by the incorrect, disparaging statements. Never! KTH has principles, and KTH follows its principles, even when it is tough. Peter Gudmundson is a former hockey player and is used to hard pucks. Carl Falck informed me shortly afterwards, after having met Anders Lundgren without my presence: • We have discussed the question on several occasions within the union, but have now come to the conclusion that there is at present no scope for us to pursue this matter further. The answer I can give you, and it is our joint answer, is that we will not act further in this case. Union chair Anna Götlind confirmed with: • As SULF's union lawyer Carl Falck has previously informed you, SULF will not assist you further in your case. • SULF is always in favour of debate about academic freedom, but as a union we cannot debate individual cases. Well, what is one to say about this? One can hardly say that SULF has given me much support.
From my point of view, SULF has if anything made my situation worse by appearing to align itself completely with KTH's management. Carl Falck says that there is at present no scope, while Anna Götlind uses the argument that SULF does not take up individual cases, as if there would after all be scope if only my case were not so terribly individual. I have dutifully paid my dues to SULF for 40 years (at least SEK 100,000) without ever bothering them with a single little case. When I finally turn to SULF in an exposed situation, I get the cold shoulder. Of course I feel rather stupid for having fallen for such a thing. But I am probably not alone among the old faithful who believed that the union was there for its members. Among the young, paying union dues is not exactly popular. SULF's statutes say: • SULF's task is to safeguard and monitor the members' trade-union, social and economic interests and to represent the members in such matters. Should I interpret this as meaning that my professional interests in my role as professor lie outside SULF's area of responsibility? Does SULF not take up individual cases, but only cases that concern all members? Must all members have been subjected to censorship before SULF will take up the question? Let us hear what Anna Götlind answers: • I am not answering. I have already answered that SULF will not assist you in this case. Eventually I suppose I will have to summarize my experiences of SULF in an opinion piece in the union's journal Universitetsläraren, unless that too is censored away... but there is always Newsmill... a new post is coming soon...

Definition as Physical Fact

In science and philosophy the distinction between synthetic and analytic statements is fundamental, according to Kant's Critique of Pure Reason. An analytic statement is about language, and its truth can be evaluated by checking the meaning of the words forming the statement. A definition is analytic, as a specification of the meaning of a new word in terms of previously defined words, e.g. bachelor as unmarried man. A synthetic statement is about some reality and can in principle be checked by observing that reality. The statement "1 meter is equal to 100 centimeters" is analytic, while the statement "this stick is 1 meter long" is synthetic. To subject an analytic statement to experimental observation would be ridiculous: to check by experiment whether there are 100 centimeters in 1 meter would not give a Nobel Prize, just laughs. So if an experiment is set up to test a statement, that is a sign that the statement is viewed as synthetic. In modern physics the distinction between a definition (analytic statement) and a synthetic statement is sometimes blurred into statements which are viewed as both analytic (true by definition) and synthetic (about some reality), or rather sometimes analytic and sometimes synthetic, sometimes definition and sometimes fact. Such a statement makes it possible to say something about reality which cannot be denied, and it is directly recognized as such. When you hear a physicist claiming that something cannot be denied, the statement is such a double analytic-synthetic statement. Here are some key examples: 1. The speed of light in vacuum is constant. 2. Heavy mass is equal to inertial mass. The constancy of the speed of light is a definition, since according to the 1983 standard the unit of length, the meter, is defined as a certain fraction of a light-second (the distance traveled by light in one second).
The speed of light is thus by definition equal to 1 lightsecond per second, no more no less. On the other hand, a physicist is convinced that the speed of light is constant as a physical fact. A physicist would say that because the speed of light is constant in reality, it can be used to define the length standard. So we have a definition which is a physical fact at the same time: double analytic-synthetic. Einstein was a master of this form of double-play: The basic assumption of special relativity is that the speed of light is constant, and Einstein uses this statement sometimes as analytic and sometimes as synthetic. Very clever and very confusing. But according to Kant it is not reasonable. In general relativity Einstein uses the equality of heavy and inertial mass both as definition and physical fact. In this case experimental verification of equality could give a Nobel Prize. In climate science the following statement is the very basis of climate alarmism: • No-feedback climate sensitivity is equal to 1 C, with climate sensitivity the global warming from doubled atmospheric CO2. This is presented as an undeniable fact and as such is an example of a double analytic-synthetic statement. The 1 C comes from a direct application of Stefan-Boltzmann's radiation law Q = sigma T^4, in its differentiated form dQ = 4 (Q/T) dT ~ 4 dT, with Q ~ 240 W/m2, T ~ 288 K and dQ = 4 W/m2 as "radiative forcing" from doubled CO2. Thus dT = 1 C as climate sensitivity. This statement is analytic because the simple algebraic law Q = sigma T^4 cannot tell anything about the reaction of the complex Earth-atmosphere system upon a small perturbation. So climate sensitivity = 1 C is a definition, but it is used as a statement of factual global warming of 1 C. It is a double analytic-synthetic statement, and it is recognized as an undeniable fact about reality. It is so undeniable that even skeptics like Lindzen, Monckton and Spencer are convinced that it is a true fact and not just a definition. We just learned that a double analytic-synthetic statement can be extremely powerful, the very basis of climate alarmism, yet it is easy to discover as soon as one is aware of the double-play. I hope the reader is stimulated to find other examples of double analytic-synthetic statements used in the debate today. They are not difficult to find once the light is on. For example, what about the statement: • Educated people are superior to not so well educated people! Definition or fact, or both? Sunday, May 8, 2011 The Final Solution by The Royal Swedish Academy • Together with Stockholm Environment Institute, Stockholm Resilience Centre, Beijer Institute for Ecological Economics and Potsdam Institute for Climate Impact Research, the Royal Swedish Academy of Sciences will bring together some of the world's most renowned thinkers and experts on global sustainability, 16-19 May 2011 in Stockholm. Only for invited guests. • Normatively, the carbon-based economic model is also an unsustainable situation. • The transformation towards a low-carbon society is therefore as much an ethical imperative as the abolition of slavery and the condemnation of child labour. • This structural transition is the start of a "Great Transformation" into a sustainable society, which must inevitably proceed within the planetary guard rails of sustainability. • By the middle of the century, the global energy systems must largely be decarbonised.
• Production, consumption patterns and lifestyles in all of the three key transformation fields must be changed in such a way that global greenhouse gas emissions are reduced to an absolute minimum over the coming decades, and low-carbon societies can develop. • The extent of the transformation ahead of us can barely be overestimated. • In terms of profound impact, it is comparable to the two fundamental transformations in the world's history: • the Neolithic Revolution, i.e. the invention and spreading of farming and animal husbandry, and the Industrial Revolution, meaning the transition from agricultural to industrialised society. • This would be something of a quantum leap for civilisation. • It should in principle also be possible to reach a universal consensus regarding human civilisation's ability to survive within the natural boundaries imposed by planet Earth. • This necessarily presupposes an extensive "Global Enlightenment". • So nothing less than a new social contract must be agreed to. • Science will play a decisive, although subservient, role here. • Ultimately, sustainability is a question of imagination. In other words, a Final Solution to the Carbon Question will be presented by the Royal Swedish Academy of Sciences. The basic idea is to comb Europe through from West to East and from North to South for carbon and transport it to Eastern Poland, where it will be gassed (in special camps, see picture above). This will secure a carbon-free sustainable Europe, which will serve as a model for the rest of the world including its 3 billion people who still do not have access to essential modern energy services. The Symposium will conclude with a memorandum signed by key Nobel Laureates, crowned by a dinner hosted by King Carl XVI Gustaf. Among the invited 50 of the world's most renowned thinkers, we find: • Martin Rees, President Royal Society, • Mikhail Gorbachev, Nobel Peace Prize 1990 • Andreas Carlgren, Swedish Minister of Environment • Murray Gell-Mann, Nobel Prize in Physics 1969 for his contributions and discoveries concerning the classification of elementary particles and their interactions. • David Gross, Nobel Prize in Physics 2004 for the discovery of asymptotic freedom in the theory of the strong interaction. • Johan Rockström, Stockholm Resilience Center • Anders Wijkman, Stockholm Environment Institute. Note that the key Nobel Laureates, when signing the memorandum, accept that science will play a subservient role. It is natural to compare with the Manifesto of the Ninety-Three and the suppression of quantum mechanics and relativity in the Soviet Union as "idealistic" and "bourgeois", and in Nazi-Germany as "Jewish physics". Friday, May 6, 2011 Presentation at Stockholm Initiative: The IPCC Trick The Hustler (1886-1905) by Ernst Josephson (student at the Royal Academy of Fine Arts in 1867) Here is a summary of a short presentation at the annual meeting of the Stockholm Initiative at the Royal Academy of Fine Arts, May 7. 1. IPCC Climate Sensitivity = 3 C • The CO2 climate alarmism of IPCC is based on an estimate of climate sensitivity (global warming by doubled CO2) of 3 C obtained by positive feedback from a no-feedback sensitivity of 1 C. • No-feedback sensitivity is obtained by definition from Stefan-Boltzmann dQ ~ 4 dT with dQ = 4 W/m2 assumed "radiative forcing" from doubled CO2. • Note: A definition says nothing about reality. The 4 W/m2 of "radiative forcing" is a theoretical assumption rather than observed reality. Insolation is assumed constant. 2.
The Question • What is the global warming effect of a 1 % change of atmospheric radiative properties? • 4 W/m2 is about 1 % of gross insolation of 360 W/m2 • 3 C = 1 % of gross temperature 288 K • Reasonable?? Unreasonable?? 3. Observation + Simple Models: Climate Sensitivity = 0.3 C Combining basic mathematical models and direct observation of • temperatures, lapse rate, insolation and thermodynamics, one obtains a climate sensitivity which is 10 times smaller than IPCC: • 1 % change of atmospheric radiative properties • 0.3 C is about 1% of "atmospheric effect" of 33 C (= 288 - 255 K) • wellposed (stable): 1% forcing gives 1% = 0.3 C 4. IPCC Trick: Backradiation • Real radiative exchange between surface and atmosphere: 30 - 60 W/m2 • 1 % change of atmospheric properties: 0.3 - 0.6 W/m2 net radiative forcing • IPCC backradiation exchange: 300 - 400 W/m2 • 1 % change of atmospheric properties: 4 W/m2 gross radiative forcing • view 3 C as 1% of gross temperature 288 K, not 1% of "atmospheric effect". 5. Backradiation Fiction In Computational Blackbody Radiation I give a new mathematical derivation of Planck's radiation law showing that backradiation is fiction. This is mathematical evidence that the 3 C of IPCC is based on fiction: 10 times too big. 6. Wellposedness: Butterfly in Brazil vs Tornado in Texas IPCC claims that a small cause (1% or 0.1% change of atmospheric properties) can have a big effect (global warming of 3 C = 10% of atmospheric effect 33 C). 7. The Lorenz Model Can a butterfly in Brazil set off a tornado in Texas? • Can be disproved by removing the butterfly and observing tornados. • Can never be proved, because a very precise model is required (both butterfly and tornado). Requires an unstable system: small cause - big effect. 8. Is global climate unstable? Observations say No rather than Yes. Atmosphere as air conditioner: Radiative forcing changes the intensity of thermodynamics with little temperature change. Compare with boiling water: heat forcing gives more vigorous boiling at steady temperature. 9. KTH-gate KTH censored my mathematical analysis of climate models. Unique in (Swedish) modern academic history (after 1632). At present my professors' union SULF hesitates to take up my case, as if my union and KTH were acting in tandem to silence my voice. How is this possible? Well, in the new university system in Sweden of 2011, it is the administrative hierarchy of rector, dean and prefect which determines the scientific truth, and not the professor (as during 1632 - 2010). The censorship of my work is therefore fully logical and apparently accepted even by the professors' union, and also by Swedish professors. Only one has questioned the censorship, Ingemar Nordin. Thursday, May 5, 2011 The IPCC Trick 6 The variation of the Earth surface temperature over day and night (shown in the above picture), with its diurnal temperature range DTR and phase lag L, can be used to determine the heat capacity C and damping A in the following simple model of the Earth-atmosphere system: • C dT/dt + A T = F sin(t), where T(t) is the Earth surface temperature deviation from its mean as a function of suitably scaled time t, and F is the mean radiative forcing of the Earth-atmosphere system. Fitting this model to the data DTR = 20 C, F = 240 W/m2 and L = 3 h, we obtain • A ~ C ~ 0.7 x 240/10 ~ 16. An additional forcing of 4 W/m2, by IPCC attributed to doubled CO2, could thus be estimated to require an increase of the mean temperature of 4/16 = 0.25 C, to restore heat balance.
In other words, the observed diurnal temperature variation suggests a climate sensitivity of 0.25 C. We have now seen two different arguments based on simple models and observation, indicating that climate sensitivity is smaller than 0.3 C. We compare with the climate sensitivity of IPCC of 3 C underlying CO2 climate alarmism, obtained by a clever use of the IPCC Trick, which is 10 times larger.
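To make the arithmetic behind the two estimates easy to reproduce, here is a small Python sketch. It is purely illustrative and simply re-runs the numbers quoted in the posts above; the factor 0.7 ~ 1/sqrt(2) comes from the assumed phase lag of 3 h out of a 24 h period.

import numpy as np

# Numbers quoted in the posts above (taken from the text, not independent data)
Q = 240.0                   # W/m2, mean flux used in both estimates
T = 288.0                   # K, mean surface temperature
dQ = 4.0                    # W/m2, "radiative forcing" attributed to doubled CO2
DTR, lag_hours = 20.0, 3.0  # diurnal temperature range (C) and phase lag (h)

# 1. No-feedback Stefan-Boltzmann estimate: Q = sigma*T^4  =>  dQ = 4*(Q/T)*dT
dT_sb = dQ * T / (4.0 * Q)
print(f"Stefan-Boltzmann no-feedback estimate: {dT_sb:.2f} C")        # ~ 1.2 C

# 2. Diurnal-cycle estimate from C dT/dt + A T = F sin(t):
#    a phase lag of 3 h gives phi = 2*pi*(3/24) = pi/4, hence C*omega = A,
#    and the amplitude is DTR/2 = F / sqrt(A^2 + (C*omega)^2) = F / (A*sqrt(2))
phi = 2.0 * np.pi * lag_hours / 24.0
A = Q * np.cos(phi) / (DTR / 2.0)                                     # ~ 0.7*240/10 ~ 17
print(f"Damping A ~ {A:.0f} W/m2/C -> sensitivity ~ {dQ / A:.2f} C")  # ~ 0.24 C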
Sturm–Liouville problem, also called eigenvalue problem, in mathematics: the determination of the set of values of a constant in the solution of a given second-order differential equation that make the solution satisfy not only the differential equation but also a set of specified auxiliary conditions, usually called boundary values (see boundary value). Such equations are common in both classical physics (e.g., thermal conduction) and quantum mechanics (e.g., the Schrödinger equation), where they describe processes in which some external value (boundary value) is held constant while the system of interest transmits some form of energy. The principles of solving such problems were established by the French mathematicians Charles-François Sturm and Joseph Liouville in the mid-1830s, who independently worked on the problem of heat conduction through a metal bar, in the process developing techniques for solving a large class of such equations; in the 20th century those principles were applied in the development of quantum mechanics, as in the solution of the Schrödinger equation and its boundary values. A simple example of such a problem is finding a solution y(x) to the equation y″ + c^2 y = 0 such that the function equals zero when x is equal to 0 or to some number a. The function y = sin cx satisfies the equation, but it meets the auxiliary conditions only if c = ±nπ/a, in which n = 0, 1, 2, . . . . These problems are also called eigenvalue problems and involve, more generally, finding a solution of the equation [p(x)y′]′ + [q(x) − λr(x)]y = f(x) that satisfies the auxiliary conditions a1 y(a) + a2 y′(a) = 0 and a3 y(b) + a4 y′(b) = 0, in which a1, a2, a3, and a4 are constants. Here y is some physical quantity (or the quantum mechanical wave function) and λ is a parameter, or eigenvalue, that constrains the equation so that y satisfies the boundary values at the endpoints of the interval over which the variable x ranges. To determine when this equation has a solution, the related homogeneous equation, i.e. the equation with the function f(x) equal to zero, is first considered. If the functions p, q, and r satisfy suitable conditions, then, as in the simpler example above, the homogeneous equation will have a family of solutions, called eigenfunctions, corresponding to certain values of λ, called eigenvalues. For the more complicated nonhomogeneous case, in which the right side of the equation is a function f(x) rather than zero, the eigenvalues of the corresponding homogeneous equation can be compared with the value of λ in the original equation. If these values are different, the problem will have a unique solution. On the other hand, if λ equals one of these eigenvalues, the problem will have either no solution or a whole family of solutions, depending on the properties of the function f(x).
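As a quick numerical illustration of the simple example above (not part of the original article; the interval length and grid size below are arbitrary choices), one can discretize −y″ on [0, a] with y(0) = y(a) = 0 by finite differences and check that the computed eigenvalues approach (nπ/a)^2:

import numpy as np

a, N = 1.0, 200                      # interval length and number of grid intervals (assumed)
h = a / N

# Standard second-difference approximation of -d2/dx2 on the interior points,
# with the boundary conditions y(0) = y(a) = 0 built in.
main = 2.0 * np.ones(N - 1) / h**2
off = -1.0 * np.ones(N - 2) / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

numeric = np.linalg.eigvalsh(A)[:5]                      # lowest eigenvalues c^2
exact = np.array([(n * np.pi / a) ** 2 for n in range(1, 6)])
print(numeric)   # ~ 9.87, 39.5, 88.8, 157.9, 246.6 (slightly below the exact values)
print(exact)     # pi^2, 4 pi^2, 9 pi^2, 16 pi^2, 25 pi^2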
Stationary state From Wikipedia, the free encyclopedia For the concept used in classical economics, see Steady-state economy § The stationary state in classical economics. A stationary state is a pure quantum state with all observables independent of time. It is an eigenvector of the Hamiltonian.[1] This corresponds to a state with a single definite energy (instead of a quantum superposition of different energies). It is also called energy eigenvector, energy eigenstate, energy eigenfunction, or energy eigenket. It is very similar to the concept of atomic orbital and molecular orbital in chemistry, with some slight differences explained below. [Figure: A harmonic oscillator in classical mechanics (A–B) and quantum mechanics (C–H). In (A–B), a ball, attached to a spring, oscillates back and forth. (C–H) are six solutions to the Schrödinger equation for this situation. The horizontal axis is position, the vertical axis is the real part (blue) or imaginary part (red) of the wavefunction. (C,D,E,F), but not (G,H), are stationary states, or standing waves. The standing-wave oscillation frequency, times Planck's constant, is the energy of the state.] A stationary state is called stationary because the system remains in the same state as time elapses, in every observable way. For a single-particle Hamiltonian, this means that the particle has a constant probability distribution for its position, its velocity, its spin, etc.[2] (This is true assuming the particle's environment is also static, i.e. the Hamiltonian is unchanging in time.) The wavefunction itself is not stationary: it continually changes its overall complex phase factor, so as to form a standing wave. The oscillation frequency of the standing wave, times Planck's constant, is the energy of the state according to the Planck–Einstein relation. Stationary states are quantum states that are solutions to the time-independent Schrödinger equation H |Ψ⟩ = E |Ψ⟩, where • |Ψ⟩ is a quantum state, which is a stationary state if it satisfies this equation; • H is the Hamiltonian operator; • E is a real number, and corresponds to the energy eigenvalue of the state |Ψ⟩. This is an eigenvalue equation: H is a linear operator on a vector space, |Ψ⟩ is an eigenvector of H, and E is its eigenvalue. If a stationary state is plugged into the time-dependent Schrödinger equation, the result is iħ (∂/∂t)|Ψ(t)⟩ = E |Ψ(t)⟩.[3] Assuming that H is time-independent (unchanging in time), this equation holds for any time t. Therefore, this is a differential equation describing how |Ψ(t)⟩ varies in time. Its solution is |Ψ(t)⟩ = e^(−iEt/ħ) |Ψ(0)⟩. Therefore, a stationary state is a standing wave that oscillates with an overall complex phase factor, and its oscillation angular frequency is equal to its energy divided by ħ. Stationary state properties [Figure: Three wavefunction solutions to the time-dependent Schrödinger equation for a harmonic oscillator. Left: The real part (blue) and imaginary part (red) of the wavefunction. Right: The probability of finding the particle at a certain position. The top two rows are two stationary states, and the bottom is a superposition of them, which is not a stationary state. The right column illustrates why stationary states are called "stationary".] As shown above, a stationary state is not mathematically constant: its overall phase factor e^(−iEt/ħ) keeps changing. However, all observable properties of the state are in fact constant. For example, if Ψ(x, t) represents a simple one-dimensional single-particle wavefunction, the probability that the particle is at location x is |Ψ(x, t)|^2 = |Ψ(x, 0)|^2, which is independent of the time t.
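The time-independence of observable properties is easy to check numerically. The following short Python sketch is illustrative only: it assumes harmonic-oscillator units ħ = m = ω = 1 and uses the first two oscillator eigenfunctions, comparing the position density of a single energy eigenstate with that of a superposition at two different times.

import numpy as np

x = np.linspace(-5.0, 5.0, 1001)
psi0 = np.pi**-0.25 * np.exp(-x**2 / 2)                      # ground state, E0 = 0.5
psi1 = np.sqrt(2.0) * np.pi**-0.25 * x * np.exp(-x**2 / 2)   # first excited state, E1 = 1.5
E0, E1 = 0.5, 1.5

def densities(t):
    stationary = psi0 * np.exp(-1j * E0 * t)                              # one energy eigenstate
    superposition = (psi0 * np.exp(-1j * E0 * t)
                     + psi1 * np.exp(-1j * E1 * t)) / np.sqrt(2.0)        # two energies mixed
    return np.abs(stationary)**2, np.abs(superposition)**2

s0, p0 = densities(0.0)
s1, p1 = densities(2.0)
print(np.max(np.abs(s0 - s1)))   # ~ 0 : the stationary density does not move
print(np.max(np.abs(p0 - p1)))   # > 0 : the superposition density changes in time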
The Heisenberg picture is an alternative mathematical formulation of quantum mechanics where stationary states are truly mathematically constant in time. As mentioned above, these equations assume that the Hamiltonian is time-independent. This means simply that stationary states are only stationary when the rest of the system is fixed and stationary as well. For example, a 1s electron in a hydrogen atom is in a stationary state, but if the hydrogen atom reacts with another atom, then the electron will of course be disturbed. Spontaneous decay Spontaneous decay complicates the question of stationary states. For example, according to simple (nonrelativistic) quantum mechanics, the hydrogen atom has many stationary states: 1s, 2s, 2p, and so on, are all stationary states. But in reality, only the ground state 1s is truly "stationary": an electron in a higher energy level will spontaneously emit one or more photons to decay into the ground state.[4] This seems to contradict the idea that stationary states should have unchanging properties. The explanation is that the Hamiltonian used in nonrelativistic quantum mechanics is only an approximation to the Hamiltonian from quantum field theory. The higher-energy electron states (2s, 2p, 3s, etc.) are stationary states according to the approximate Hamiltonian, but not stationary according to the true Hamiltonian, because of vacuum fluctuations. On the other hand, the 1s state is truly a stationary state, according to both the approximate and the true Hamiltonian. Comparison to "orbital" in chemistry An orbital is a stationary state (or approximation thereof) of a one-electron atom or molecule; more specifically, an atomic orbital for an electron in an atom, or a molecular orbital for an electron in a molecule.[5] For a molecule that contains only a single electron (e.g. atomic hydrogen or H2+), an orbital is exactly the same as a total stationary state of the molecule. However, for a many-electron molecule, an orbital is completely different from a total stationary state, which is a many-particle state requiring a more complicated description (such as a Slater determinant). In particular, in a many-electron molecule, an orbital is not the total stationary state of the molecule, but rather the stationary state of a single electron within the molecule. This concept of an orbital is only meaningful under the approximation that, if we ignore the electron-electron repulsion terms in the Hamiltonian as a simplifying assumption, the total eigenvector of a many-electron molecule can be decomposed into separate contributions from individual electron stationary states (orbitals), each of which is obtained under the one-electron approximation. (Luckily, chemists and physicists can often, but not always, use this "single-electron approximation.") In this sense, in a many-electron system, an orbital can be considered as the stationary state of an individual electron in the system. In chemistry, calculation of molecular orbitals typically also assumes the Born–Oppenheimer approximation. References 1. ^ Quantum Mechanics Demystified, D. McMahon, McGraw-Hill (USA), 2006, ISBN 0-07-145546-9 2. ^ Cohen-Tannoudji, Claude, Bernard Diu, and Franck Laloë. Quantum Mechanics: Volume One. Hermann, 1977. p. 32.
Ask the Alchemist #26 What factors affect refining time for chocolate, and what are some different ways to tell if it is done? Are there ways to speed it up or extend it if needed? All good, common questions that have pretty straightforward, albeit somewhat less than perfectly helpful, answers; perhaps not quite the answers most people want. Forewarned….. I guess the first thing that needs to be done is to define that we (or at least I) are talking about refining in a Melanger containing 1-9 lbs of chocolate. I will say at the outset that the choice of the Melanger has little effect on refining time. Maybe a little, but probably the least, to the point that they can all be considered equal for this discussion. After that, these all affect refining time: • Amount refined • The recipe • Moisture • Your tastes The first is pretty straightforward. The more chocolate, the longer the time. I can generally have a small 1 lb batch of 80% chocolate refined in 12 hours, and it's very close in 8 hours….but sometimes it's 14-16 hours. Moving on to the second item, if I have two recipes that differ only in the amount of cocoa butter, the one with more cocoa butter will tend to refine faster. Why – it's really related to the viscosity (or how thick) of the chocolate. The less viscous, the more force can be applied to the refining process. That relates to item 3 – the moisture. The more moisture, the thicker and more viscous the batch will be, and the addition of lecithin to bind some of that moisture can reduce your refining time. How much? It could be as little as a difference of 1-2 hours or it could be 10-12-20 hours….depending on how much you are refining….and what your recipe is (see how helpful this isn't?) Back to the recipe: if you have more things to refine in your batch, it WILL take more time. 70% dark will take longer than 80% dark. 50% milk will take longer than 50% 'dark' (it's not very dark at that point) because milk powder takes longer than sugar. 20% milk with 30% sugar vs 15% milk with 35% sugar…hell if I know. Too close to call without trying it. Seeing a pattern yet? As for when – well, that is easy…..wait for it….when it seems right to you. Are YOU happy with it? Does it seem gritty still? Let it keep going. Seriously. Sure, you could work out some fancy ass, expensive way to get a particle size distribution plot (no, it's not just one number), but in the long run, it's how it feels in your mouth. Can you speed it up? You can pre-grind your cocoa nibs (only if you are adding them direct), and/or sugar, but I've only found this to affect the total time by 1-2 hours – not usually worth the effort in my opinion. Can you extend it? Now this is an oddly good question. Why, you may ask, would you want to extend it? Because Melangers do TWO things. They refine and they conch. Two DIFFERENT things happen initially, at the same time, when using a Melanger. Refining is particle size reduction. Conching is much more chemical in nature (oxidation among other things) and occurs by the stirring of melted (refined) chocolate. Basically this means you refine for the first 0-24 hours until it cannot get any smoother. But conching is happening from somewhere around the 2 hour mark until you stop – maybe 10-12-20 hours after it is smooth. It could well be you want the refining time to match that conching time a little closer (because you like the flavor – this is NOT something I've played with, but I have heard about it). How would you do that? Loosen the tension on the Melanger (only possible with the Spectra's and Premier wet grinder).
What does that all mean? It means what I've always said. Refining will take anywhere from 8-48 hours, with the average falling somewhere around the 18 hour mark, depending on your recipe, how much you are refining and how smooth you want it. Why can't I tell you any better? Well, because the interrelated, multi-variable function is just too damn complex, and we don't really even know what that equation looks like. What might it look like? Couldn't I just give it? OK, here: iħ ∂Ψ/∂t = −(ħ²/2m)∇²Ψ + V(r)Ψ. Happy? Have fun. Solve away. OK, so that is NOT the equation for refining chocolate. That is the Time-dependent Schrödinger equation for a single non-relativistic particle – i.e. the position of the electron in the SIMPLEST system we have – that of the hydrogen atom. But it would look similar and it just gets more complex from there. It is WAY easier and more productive (and FUN) to just know it's about 18 hours for an average batch of average chocolate and that you should taste it until you are happy.
Lewis Theory Most chemistry is taught in terms of Lewis theory Most chemistry is learned in terms of Lewis theory Most chemistry is understood in terms of Lewis theory Most chemists think in terms of Lewis theory most of the time So, what is Lewis theory? Electrons dance to a subtle & beautiful quantum mechanical tune, and the resulting patterns are complicated & exquisite. As chemists we try to understand the dance. Our physicist friends attempt to understand the music. What Is Lewis Theory? Lewis theory is the study of the patterns that atoms display when they bond and react with each other. The Lewis approach to understanding chemical structure and bonding is to look at many chemical systems, to study the patterns, count the electrons in the various patterns and to devise simple rules associated with stable/unstable atomic, molecular and ionic electronic configurations. Lewis theory makes no attempt to explain how or why these empirically derived numbers of electrons – these magic numbers – arise, although it is striking that the magic numbers are generally (but not exclusively) non-negative integers of even parity: 0, 2, 4, 6, 8... For example: • Atoms and atomic ions show particular stability when they have a full outer or valence shell of electrons and are isoelectronic with He, Ne, Ar, Kr & Xe: Magic numbers 2, 10, 18, 36, 54. • Atoms have a shell electronic structure: Magic numbers 2, 8, 8, 18, 18. • Sodium metal reacts to give the sodium ion, Na+, a species that has a full octet of electrons in its valence shell. Magic number 8. • A covalent bond consists of a shared pair of electrons: Magic number 2. • Atoms have valency, the number of chemical bonds formed by an element, which is the number of electrons in the valence shell divided by 2: Magic numbers 0 to 8. • Ammonia, H3N:, has a lone pair of electrons in its valence shell: Magic number 2. • Ethene, H2C=CH2, has a double covalent bond: Magic numbers (2 + 2)/2 = 2. • Nitrogen, N2, N≡N, has a triple covalent bond: Magic numbers (2 + 2 + 2)/2 = 3. • The methyl radical, H3C•, has a single unpaired electron in its valence shell: Magic number 1. • Lewis bases (proton abstractors & nucleophiles) react via an electron pair: Magic number 2. • Electrophiles, Lewis acids, accept a pair of electrons in order to fill their octet: Magic numbers 2 + 6 = 8. • Oxidation involves loss of electrons, reduction involves gain of electrons. Every redox reaction involves concurrent oxidation and reduction: Magic number 0 (overall). • Curly arrows represent the movement of an electron pair: Magic number 2. • Ammonia, NH3, and phosphine, PH3, are isoelectronic in that they have the same Lewis structure. Both have three covalent bonds and a lone pair of electrons: Magic numbers 2 & 8. • Aromaticity in benzene is associated with the species having 4n+2 π-electrons. Magic number 6. Naphthalene is also aromatic: Magic number 10. • Etc. Lewis theory is numerology. Lewis theory is electron accountancy: look for the patterns and count the electrons. Lewis theory is also highly eclectic in that it greedily begs/borrows/steals/assimilates numbers from deeper, predictive theories and incorporates them into itself, as we shall see. Physics    •••    Stamp Collecting    •••    Lewis Theory Ernest Rutherford famously said: "Physics is the only real science. The rest are just stamp collecting."
Imagine an alien culture trying to understand planet Earth using only a large collection of postage stamps. The aliens would see all sorts of patterns and would be able to deduce the existence of: countries, national currencies, pricing strategies, differential exchange rates, inflation, the existence of heads of state, what stamps are used for, etc., and – importantly – they would be able to make predictions about missing stamps. But the aliens would be able to infer little about the biology of life on our planet by only studying stamps, although there would be hints in the data: various creatures & plants, males & females, etc. So it is with atoms, ions, molecules, molecular ions, materials, etc. As chemists we see many patterns in chemical structure and reactivity, and we try to draw conclusions and make predictions using these patterns: This is Lewis theory. But this Lewis approach is not complete and it only gives hints about the underlying quantum mechanics, a world observed through spectroscopy and mathematics. Consider the pattern shown in Diagram-1: Now expand the view slightly and look at Diagram-2: You may feel that the right hand side "does not fit the pattern" of Diagram-1 and so is an anomaly. So, is it an anomaly? Zoom out a bit and look at the pattern in Diagram-3, and the anomaly disappears: But then look at Diagram-4. The purple patch on the upper right hand side does not seem to fit the pattern and so it may represent an anomaly: But zooming right out to Diagram-5 we see that everything is part of a larger regular pattern: Digital Flowers at DryIcons When viewing the larger scale the overall pattern emerges and everything becomes clear. Of course, the Digital Flowers pattern is trivial, whereas the interactions of electrons and positive nuclei are astonishingly subtle. This situation is exactly like learning about chemical structure and reactivity using Lewis theory. First we learn about the 'Lewis octet', and we come to believe that the pattern of chemistry can be explained in terms of the very useful Lewis octet model. Then we encounter phosphorus pentachloride, PCl5, and discover that it has 10 electrons in its valence shell. Is PCl5 an anomaly? No! The fact is that the pattern generated through the Lewis octet model is just too simple. As we zoom out and look at more chemical structure and reactivity examples we see that the pattern is more complicated than indicated by the Lewis octet magic number 8. Our problem is that although the patterns of electrons in chemical systems are in principle predictable, new patterns always come as a surprise when they are first discovered: • The periodicity of the chemical elements • The 4n + 2 rule of aromaticity • The observation that sulfur exists in S8 rings • The discovery of neodymium magnets in the 1990s • The serendipitous discovery of how to make the fullerene C60 in large amounts While these observations can be explained after the fact, they were not predicted beforehand. We do not have the mathematical tools to predict the nature of the quantum patterns with absolute precision. The chemist's approach to understanding structure and reactivity is to count the electrons and take note of the patterns. This is Lewis theory. Some Chemistry Patterns The following diagrams show some chemistry patterns. Do not think about these as chemistry systems, yet, but just as patterns.
Shell Structure of Atoms: Atomic Orbitals: Filling of Atomic Orbitals (Pauli Exclusion Principle, Aufbau Principle, Hund's Rule & Madelung's Rule): The Janet or Left-Step Periodic Table: The conventional representation of the periodic table can be regarded as a mapping applied to the Janet formulation. The pattern is clearer in the Janet periodic table. VSEPR Geometries: Congeneric Series, Planars & Volumes: Homologous Series of Linear Alkanes: Aromatic Hydrocarbon π-systems: As chemists we attempt to 'explain' many of these patterns in terms of electron accountancy and magic numbers. Caught In The Act: Theoretical Theft & Magic Number Creation The crucial time for our understanding of chemical structure & bonding occurred in the busy chemistry laboratories at UC Berkeley under the leadership of G. N. Lewis in the early years of the 20th century. Lewis and colleagues were actively debating the new ideas about atomic structure, particularly the Rutherford & Bohr atoms, and postulated how they might give rise to models of chemical structure, bonding & reactivity. Indeed, the Lewis model uses ideas directly from the Bohr atom. The Rutherford atom shows electrons whizzing about the nucleus, but to the trained eye, there is no structure to the whizzing. Introduced by Niels Bohr in 1913, the Bohr model is a quantum physics modification of the Rutherford model and is sometimes referred to as the Rutherford–Bohr model. (Bohr was Rutherford's student at the time.) The model's key success lay in explaining (correlating with) the Rydberg formula for the spectral emission lines of atomic hydrogen. [Greatly simplifying both the history & the science:] In 1916 atomic theory forked or bifurcated into physics and chemistry streams: • The physics fork was initiated and developed by Bohr, Pauli, Sommerfeld and others. Research involved studying atomic spectroscopy and this led to the discovery of the four quantum numbers – principal, azimuthal, magnetic & spin – and their selection rules. More advanced models of chemical structure, bonding & reactivity are based upon the Schrödinger equation, in which the electron is treated as a resonant standing wave. This has developed into molecular orbital theory and the discipline of computational chemistry. Note: quantum numbers and their selection rules are not 'magic' numbers. The quantum numbers represent deep symmetries that are entirely self consistent across all quantum mechanics. • The chemistry fork started when Lewis published his first ideas about the patterns he saw in chemical bonding and reactivity in 1916, and later in a more advanced form in 1923. Lewis realised that electrons could be counted and that there were patterns associated with structure, bonding and reactivity behaviour. These early ideas have been extensively developed and are now taught to chemistry students the world over. This is Lewis theory. Theories, Models, Ideas... A word should be said about the philosophical nature of theory, as it is possible to take two extreme positions: realist or anti-realist. • The realist believes that theories are a true, real and actual representation of reality in that theories describe what the physical world is actually like. • To the anti-realist, theories are simply a representation or description of reality. • An instrumentalist is an anti-realist who simply uses the toolbox of conceptual and mathematical techniques to describe and understand the physical world. Even though theories are all we have, they should be used but not believed. Do not inhale.
This is crucial, because when tested to the extreme all chemical theories are found wanting, a situation that always confuses the realist. The cynical anti-realist knows model breakdown is inevitable because theories are not real. Now, chemistry teachers and textbook authors [yes, we are all guilty] are prone to present arguments about the nature of chemical structure and bonding without adding any anti-realist provisos... which confuses students. (But hey, we were confused when we were learning this stuff.) For example, most of the structure and bonding ideas in textbooks, and this web book is no exception, are expressed in terms of Lewis/VSEPR ideas. Lewis theory is a fantastic model and it works nearly all the time. In fact, Lewis theory is so good that most chemists can get away with assuming that it is true, that it is real. However, Karl Popper introduced the notion that for a theory to be deemed scientific it must be possible to devise experiments that test the theory to destruction: can the theory be falsified? (Note: religions all fail this test.) Popper Falsification Black Crow Theory Theory:   All crows are black If just one white crow is found, for whatever reason, then the black crow theory cannot be a true and full description of the world. It cannot be real. Black crow theory (BCT) may remain a useful model that works most of the time, but it cannot be true in a philosophical sense. There is one very common molecule known to everybody that is not explained using Lewis logic: Diatomic oxygen, O2 Oxygen, O2, "should" – according to Lewis logic – have the structure O=O and be like F2 and N2                 F–F      O=O        N≡N Indeed, this is the representation commonly used in beginning and high school level textbooks. But oxygen, O2, presents to the experimentalist as a blue, paramagnetic, diradical species, •O-O•, able to exist in singlet and triplet forms. The physical and chemical properties of O2 are NOT explained by Lewis logic. Diatomic oxygen is a white crow. The empirical (experiential) observations associated with O2 can be explained in terms of molecular orbital theory, see elsewhere in this webbook. This does not totally invalidate Lewis theory, but it does warn us that the Lewis model is fallible and so the model cannot be "a true and real representation of the physical world", in the realist sense. Question: Is O2 an anomaly? No, O2 is not an anomaly. The diradical structure is inevitable with respect to the patterns of the molecular orbitals, as discussed here. The quantum mechanical patterns of molecular structure are more subtle than the magic numbers of Lewis theory. Question: Is molecular orbital theory a chemical theory of everything? Is MO theory real? No, molecular orbital theory is not real. While MO theory gives a more accurate and capable description of diatomic oxygen O2 than Lewis theory, MO theory cannot explain why chiral (optically active) molecules like glucose rotate plane polarised light. This phenomenon requires explanation in terms of quantum electrodynamics, QED. Modern Lewis Theory At the top of this page it was stated that "Lewis theory is highly eclectic in that it greedily begs/borrows/steals/assimilates numbers from deeper, predictive theories and incorporates them into itself". Indeed, modern Lewis theory is the set of assimilated magic numbers & electron accountancy rules used by most chemists to explain chemical structure and reactivity.
• Electrons in Shells & The Lewis Octet Rule: The idea that electrons are point like negative charges that exist in atomic shells is the quintessential Lewis approach. The full shell numbers: 2, 8, 8, 18, 18 are determined by experiment. There is no reason within Lewis theory as to why the numbers should be as they are, other than the pattern itself, the dance: The first magic number of Lewis theory is 8, the number associated with the Lewis octet. The octet rule is taught to beginning chemistry students the world over. It is a wonderful and useful rule because so much main group and organic chemistry – s- and p-block chemistry – exhibits patterns of structure, bonding & reactivity that can be explained in terms of the Lewis octet rule of 8 electrons... and 2 electrons... Students must soon realise that 8 is not the only magic number because helium, He, and the lithium cation, Li+, have two electrons in their full shell, magic number 2. So, there are two Lewis magic numbers, 2 and 8. This is the first hint that matters are a little more involved than first indicated. Question: Is the magic number 2 an anomaly? No, it tells us that the Lewis octet rule, although useful, is not subtle enough to describe the entire pattern. The patterns we see are an echo of the underlying quantum mechanical patterns. • Covalent Bonding: A covalent bond is a form of chemical bonding characterised by the sharing of pairs of electrons between atoms. The Lewis octet rule – with its magic numbers of 2 & 8 – can be used to explain much main group and organic chemistry. In the diagram of methane, CH4, above, the carbon atom has 8 electrons in its valence shell and each hydrogen has 2 electrons in its valence shell. However, at university entrance level students will have come across two phosphorus chlorides, phosphorus trichloride, PCl3, and phosphorus pentachloride, PCl5. There is no issue with PCl3 as it has a full octet, magic number 8, and is isoelectronic with ammonia, NH3. Phosphorus pentachloride, PCl5, has 10 electrons in its valence shell, and this represents a NEW Lewis magic number, 10. The structure and geometry/shape of phosphorus pentachloride, PCl5, is usually covered with reference to valence shell electron pair repulsion (VSEPR), as discussed on the next page of this webbook. • Ionic Bonding: First introduced by Walther Kossel, the ionic bond can be understood within the Lewis model. The reaction between lithium and fluorine gives the ionic salt lithium fluoride, LiF, where the Li+ ion is isoelectronic with He and the F− ion is isoelectronic with Ne; both ions have filled valence shells, magic numbers 2 & 8: • Isoelectronicity: From Wikipedia: "Two or more molecular entities (atoms, molecules, ions) are described as being isoelectronic with each other if they have the same number of valence electrons and the same structure (number and connectivity of atoms), regardless of the nature of the elements involved." • The cations K+, Ca2+, and Sc3+, and the anions Cl−, S2−, and P3− are all isoelectronic with the Ar atom. • The diatomics CO, N2 & NO+ are isoelectronic because each has 2 nuclei and 10 valence electrons (4 + 6, 5 + 5, and 5 + 6 − 1, respectively). Isoelectronic structures represent islands of stability in "chemistry structure space", where chemistry structure space represents the set of all conceivable structures, possible and impossible. Lewis theory does not explain how or why the various sets of isoelectronic structures are stable, but it takes note of the patterns of stability.
• Valence Shell Electron Pair Repulsion: From Wikipedia: "Valence shell electron pair repulsion (VSEPR) is a model used to predict the shape of individual molecules based upon the extent of electron-pair electrostatic repulsion." The VSEPR model states that the electron pairs in a species' valence shell will repel each other so as to give the most spherically symmetric geometry. For example: Phosphorus pentachloride, PCl5, has 10 ÷ 2 = 5 electron pairs in its valence shell. These repel to give an AX5 structure with a trigonal bipyramidal geometry: There is a beautiful pattern to the various VSEPR structures and geometries; read more on the next page of this webbook, here: The VSEPR technique is pure Lewis theory. Like Lewis theory it employs point-like electrons, but in pairs. VSEPR predicts that the electron pairs will repel each other, and that non-bonded lone-pairs will repel slightly more than bonded pairs. The net effect is to maximise the distance between electron pairs and so generate the most spherically symmetric geometry about the atomic centre. VSEPR introduces to Lewis theory the idea that molecular systems, atoms-with-ligands, pair the electrons in the atom's valence shell and maximise the spherical symmetry about the atomic centre. • Molecular Models: Lewis theory and the VSEPR technique are so successful that it is possible to build physical models of molecular structures. It is astonishing how well these physical 'balls & sticks' achieve their objective of modelling molecular and network covalent materials. • Lewis Acids & Lewis Bases: A central theme of the Chemogenesis webbook is the idea that Lewis acids and Lewis bases such as borane, BH3, and ammonia, NH3, react together • Borane, BH3, has only six electrons in its valence shell, but it wants eight "to fill its octet" and it is an electron pair acceptor. • Ammonia, NH3, has a full octet, but two of the electrons are present as a reactive lone-pair. • Borane reacts with ammonia to form a Lewis acid/base complex in which both the boron and the nitrogen atoms now have full octets. No explanation is given within Lewis theory as to why the magic number 8 should be so important. • Aromatic π-Systems and The 4n + 2 Rule: Some unsaturated organic ring systems, such as benzene, C6H6, are unexpectedly stable and are said to be aromatic. A quantum mechanical basis for aromaticity, the Hückel method, was first worked out by physical chemist Erich Hückel in 1931. In 1951 von Doering succinctly reduced the Hückel analysis to the "4n + 2 rule". • Aromaticity is associated with a cyclic array of adjacent p-orbitals containing 4n+2 π-electrons, where n is zero or any positive integer. • Aromaticity is associated with cationic, anionic and heterocyclic π-systems, as well as neutral hydrocarbon structures like benzene and naphthalene. • Aromaticity can be identified by a ring current and associated down-field chemical shift in the proton NMR spectrum. (Aromatic compounds have a chemical shift of 7-8 in the proton spectrum). von Doering's 4n+2 rule – as it should be called – gives the set of magic numbers 2, 6, 10, 14, 18... (a toy electron-counting sketch appears at the end of this page).
The 4n + 2 rule is applicable in many situations, but not all: Pyrene contains 16 conjugated electrons (8 π-bonds, n = 3.5), and coronene contains 24 conjugated electrons (12 π-bonds, n = 5.5): Pyrene and coronene are both aromatic by NMR ring current AND by the Hückel method, but they both fail von Doering's 4n + 2 rule. This tells us that although it is a useful method, the 4n + 2 rule does not have the subtlety of quantum mechanics. The 4n + 2 rule is pure Lewis theory. • Resonance Structures & Curly Arrow Pushing: Lewis theory is used to explain most types of reaction mechanisms, including Lewis acid/base, redox reactions, radical, diradical and photochemical reactions. Whenever a curly arrow is used in a reaction mechanism, Lewis theory is being invoked. Do not be fooled by the sparse structural representations employed by organic chemists; curly arrows and interconverting resonance structures are pure Lewis theory: • Reaction Mechanisms: Lewis theory is very accommodating and is able to 'add-on' those bits of chemical structure and reactivity that it is not very good at explaining itself. Consider the mechanism of electrophilic aromatic substitution, SEAr: The diagram above is pure Lewis theory: • The toluene is assumed to have a sigma-skeleton that can be described with VSEPR. • The benzene π-system is added to the sigma-skeleton. • The curly arrows are showing the movement of pairs of electrons, pure Lewis. • The Wheland intermediate is deemed to be non-aromatic because it does not possess the magic number of six π-electrons. Lewis Theory and Quantum Mechanics Quantum mechanics and Lewis theory are both concerned with patterns. However, quantum mechanics actively causes the patterns whereas Lewis theory is passive and only reports on patterns that are observed through experiment. We observe patterns of structure & reactivity behaviour through experiment. Lewis theory looks down on the empirical evidence, identifies patterns in behaviour and classifies the patterns in terms of electron accountancy & magic numbers. Lewis theory gives no explanation for the patterns. In large part, chemistry is about the behaviour of electrons, and electrons are quantum mechanical entities. Quantum mechanics causes chemistry to be the way it is. The quantum mechanical patterns can be: • Observed using spectroscopy. • Echoes of the underlying quantum mechanics can be seen in the chemical structure & reactivity behaviour patterns. • The patterns can be calculated, although the mathematics is not trivial. Another way of thinking about these things: Atoms and their electrons behave according to the rules of quantum mechanics. This quantum world projects onto our physical world, which we observe as being constructed from matter. As chemical scientists we observe matter and look for patterns in structure and reaction behaviour. Even though quantum mechanics is all about patterns, when we observe matter we see 'daughter patterns' that show only echoes of the 'parent' quantum mechanical patterns. To see the quantum mechanics directly we need to study spectroscopy: Falsification Of The Lewis-VSEPR Approach At one level Lewis theory is utter tosh (complete rubbish). Electrons are not point charges. The covalent bond does not have a shared pair of electrons as explained by Lewis theory. Ammonia, H3N:, does not have a 'lone pair of electrons'. VSEPR is not a theory, but just a neat trick... a very, very, very useful neat trick!
Yet, Lewis theory and the associated VSEPR method work so well that it is actually rather difficult to falsify the approach. It is hard to think of counter examples where the model breaks down. As discussed above, oxygen, O2, is a paramagnetic diradical. But O2 is only a diatomic molecule and does not have an ABC bond angle, and so it is not appropriate to use VSEPR analysis. In hydrogen peroxide, H-O-O-H, the two oxygen atoms behave as typical Lewis/VSEPR atomic centres. Carbon monoxide, another diatomic, 'looks all wrong' using the Lewis approach unless it is constructed as −C≡O+. Carbenes, such as methylene, CH2, do not fit into the Lewis scheme very well. Carbenes are diradicals. The bond angle in hydrogen sulfide, H2S, is 92.2°, which is much less than the bond angle in water, H2O, which is 104.5°. This is not explained by VSEPR. There are many examples like this: the nitrogen in an amide is planar and not trigonal pyramidal; SF4 consists of an equilibrium of two interconverting see-saw forms; ammonia and phosphine, NH3 and PH3, have rather different bond angles; etc. But these examples are explainable perturbations rather than counter examples that disprove the approach. The Lewis structure of nitrogen dioxide, NO2, is not easy to construct in an unambiguous way, as discussed by Dan Berger. Nitrogen dioxide is a radical species with an unshared electron. In a related way, it is difficult to draw out the Lewis structure of a nitro function in nitrobenzene in an unambiguous manner. Copper(II) ions, Cu2+, such as [Cu(H2O)6]2+, exhibit Jahn-Teller distortion, a subtle quantum mechanical effect. Lewis theory, in the form of the Drude approach, is not good at modelling metals. Crucially, the Lewis model does NOT predict the aromatic stabilisation of benzene. However, the Lewis approach happily assimilates von Doering's – useful but not perfect – 4n + 2 rule. Interestingly, once aromaticity is incorporated into the Lewis methodology, VSEPR can be used to predict benzene's 120° bond angles. There are Jahn-Teller distortions in organic chemistry, for example cyclobutadiene, but they are rare and end up giving the same result as predicted by VSEPR! The cyclobutadienyl dication is aromatic, both by experiment and by the 4n+2 rule. It has a cyclic array of 4 p-orbitals containing 2 π-electrons, so n = 0. Cyclobutadiene, according to the Hückel method, "should" be a perfectly square diradical, but this is a high energy state. As shown by Jahn and Teller, the actual molecule will not have an electronically degenerate ground state but will instead end up with a distorted geometry in which the degeneracy is removed and one molecular orbital (the lower energy one) becomes the unique HOMO. For cyclobutadiene that means distorting to a rectangle with two short bonds, where the π-bonds are found, and two long bonds. The net result is to give the structure as predicted by Lewis/VSEPR. Many thanks to members of the ChemEd list for examples, discussions & clarifications. Not Lewis Theory By way of counter example, consider Spectroscopy: The diagram below is a completely random spectrum showing a clear pattern, pulled from the internet using Google image search. The only aim of this image is to show a spectrum that is clearly a regular pattern. (The regularity of quantum patterns is not always quite so obvious due to overlapping signals): A Rydberg series of profiles.
Fig 3, Gabriel, A.H., Connerade, J.P., Thiery, S., et al., Application of Fano profiles to asymmetric resonances in helioseismology, Astron. Astrophys., 2001, Vol. 380, pp. 745-749, ISSN 0004-6361. There is no point in using any type of Lewis theory to help explain the atomic spectra. These patterns are explained with exquisite precision using quantum mechanics.
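As a footnote to the electron-accountancy theme of this page, here is a toy sketch in Python (not part of the original webbook; the molecule names and counts are just the examples quoted above) of two of the counting rules discussed: the VSEPR pair-count lookup and the von Doering / Hückel 4n+2 check. As the pyrene example shows, the 4n+2 count is a rule of thumb, not the full Hückel treatment.

# Toy electron accountancy, illustrative only
VSEPR_GEOMETRY = {            # electron pairs around the central atom -> shape
    2: "linear",
    3: "trigonal planar",
    4: "tetrahedral",
    5: "trigonal bipyramidal",
    6: "octahedral",
}

def satisfies_4n_plus_2(pi_electrons):
    """True if the pi-electron count equals 4n + 2 for some n = 0, 1, 2, ..."""
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

print("PCl5 ->", VSEPR_GEOMETRY[10 // 2])     # 10 valence electrons -> 5 pairs -> trigonal bipyramidal
for name, count in [("benzene", 6), ("naphthalene", 10),
                    ("cyclobutadiene", 4), ("pyrene", 16)]:
    print(f"{name}: {count} pi electrons, 4n+2 satisfied: {satisfies_4n_plus_2(count)}")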
Institute for Advanced Simulation (IAS) Simulation of quantum systems Equilibration of nano-scale systems at finite temperature For both classical and quantum systems it is well known that if the interaction between a system and a much larger reservoir, having a large number of degrees of freedom and a dense distribution of energy levels, is weak, the system is described by a canonical ensemble when the composite system is described by the microcanonical ensemble with a given total energy. In the case of quantum systems, it has recently been shown that the microcanonical mixed state for the composite system is not a required starting point for the system to be described by a canonical ensemble, but that the composite system being initially in a randomly picked pure state is sufficient. Using approximation-free simulation methods we have studied the equilibration of systems of 4 spin-1/2 particles coupled to a reservoir of 18 to 31 spin-1/2 particles, both the system and the reservoir being described by general quantum spin-1/2 Hamiltonians. The initial state of the composite system was taken to be a product state of a pure state of the system and a pure state of the reservoir, representing the reservoir at a given temperature in the canonical ensemble. We solved the time-dependent Schrödinger equation, governing the time-evolution of the closed composite quantum system, numerically and then analyzed the behavior of the reduced density matrix of the system, obtained by tracing out the degrees of freedom of the reservoir. As a function of time we have calculated the variance of the set of eigenvalues of the reduced density matrix, the entropy, the degree of decoherence of the system and the difference between the reduced density matrix and the canonical distribution. Our simulation results show that, independent of the strength of the interaction between the system and the reservoir and the initial temperature of the reservoir, the system evolves to a stationary state whose properties strongly depend on the initial temperature of the reservoir. This equilibration is remarkable given the relatively small size of the reservoir, since usually in equilibration studies the hypothesis of having a large reservoir is essential. We show that for sufficiently large initial temperatures of the reservoir, the stationary state of the system is represented by a canonical ensemble density matrix at some finite effective temperature. For decreasing temperatures, the reduced density matrix of the system deviates from the canonical density matrix. The deviation increases for decreasing values of the interaction strength between the system and the reservoir. Dynamics and manipulation of quantum spin systems Molecular magnets are generally considered as potential candidates for realizing scalable quantum information processing. Crucial for quantum information applications is that the qubits (i.e. spin-1/2 particles) in these magnets exhibit coherence over a sufficiently long period of time. For instance, the V15 molecular magnet is an assembly of 15 spin-1/2 electrons that has been shown to display Rabi oscillations, indicating that it may be a suitable system for obtaining long coherence times. Experiments indicate that the conventional Bloch equations are insufficient to fully describe the quantum dynamical response of these systems to applied fields, which is essential for quantum information applications.
Dynamics and manipulation of quantum spin systems

Molecular magnets are generally considered to be potential candidates for realizing scalable quantum information processing. Crucial for quantum information applications is that the qubits (i.e. spin-1/2 particles) in these magnets exhibit coherence over a sufficiently long period of time. For instance, the V15 molecular magnet is an assembly of 15 spin-1/2 electrons that has been shown to display Rabi oscillations, indicating that it may be a suitable system for obtaining long coherence times. Experiments indicate that the conventional Bloch equations are insufficient to fully describe the quantum dynamical response of these systems to applied fields, which is essential for quantum information applications. For instance, empirically one finds that the observed coherence time depends on the applied microwave power, among other things. This observation has been associated with ad-hoc stochastic noise in the applied microwave field, but the origin of this noise remains elusive.

We study this problem starting from a realistic model of the magnetic properties of a collection of molecular magnets, including local anisotropic fields, dipolar interactions, etc. By solving the time-dependent Schrödinger equation of the interacting spin system directly and by adopting the same procedure as used in pulsed electron-spin-resonance experiments, we can follow the time evolution of the spins explicitly and extract the information that is necessary to disentangle the different processes that give rise to the observed phenomena (a minimal illustrative sketch of such a direct integration follows the publication list below). Note that in the simulation model it is essential to account for the (long-range) dipole-dipole interactions that are always present in real magnetic materials.

Read more:
- Scaling of diffusion constants in the spin-1/2 XX ladder, Physical Review B 90(9), 094417 (2014) [10.1103/PhysRevB.90.094417]
- Macroscopically deterministic Markovian thermalization in finite quantum spin systems, Physical Review E 89(1), 012131 (2014) [10.1103/PhysRevE.89.012131]
- Quantum decoherence scaling with bath size: Importance of dynamics, connectivity, and randomness, Physical Review A 87(2), 022117 (2013) [10.1103/PhysRevA.87.022117]
- Data analysis of Einstein-Podolsky-Rosen-Bohm laboratory experiments, Proc. of SPIE, SPIE Optical Engineering + Applications, San Diego, California, 26-29 Aug 2013, 88321N-1 - 88321N-11 (2013) [10.1117/12.2021860], special issue "The Nature of Light: What are Photons? V"
- Equilibration and thermalization of classical systems, New Journal of Physics 15(3), 033009 (2013) [10.1088/1367-2630/15/3/033009]
- An Efficient Algorithm for Simulating the Real-Time Quantum Dynamics of a Single Spin-1/2 Coupled to Specific Spin-1/2 Baths, Journal of Physics: Conference Series 402, 012019 (2012) [10.1088/1742-6596/402/1/012019]
- Dynamics of a Single Spin-1/2 Coupled to x- and y-Spin Baths: Algorithm and Results, Physics Procedia 34, 90-99 (2012) [10.1016/j.phpro.2012.05.015]
- Quantum simulations and experiments on Rabi oscillations of spin qubits: Intrinsic vs extrinsic damping, Physical Review B 85, 014408 (2012) [10.1103/PhysRevB.85.014408]
- Approach to Equilibrium in Nano-scale Systems at Finite Temperature, Journal of the Physical Society of Japan 79, 124005 (2010) [10.1143/JPSJ.79.124005]
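As an editorial illustration of what "solving the time-dependent Schrödinger equation of the spin system directly" involves in its simplest possible form, the sketch below integrates a single driven spin-1/2 and recovers Rabi oscillations. The drive strength, splitting, and integrator are assumptions made only for the example; they are not parameters of the V15 model discussed above.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Illustrative parameters (assumed, not taken from any molecular-magnet model):
# static splitting omega0, drive amplitude omega1, drive on resonance.
omega0, omega1 = 1.0, 0.05
dt, n_steps = 0.01, 20000

psi = np.array([1.0, 0.0], dtype=complex)   # start in the "up" state
probs = []

for n in range(n_steps):
    t = n * dt
    # Hamiltonian of a linearly driven spin-1/2 (hbar = 1), evaluated mid-step
    H = 0.5 * omega0 * sz + omega1 * np.cos(omega0 * (t + dt / 2)) * sx
    # Second-order expansion of exp(-i H dt) acting on psi
    psi = psi - 1j * dt * (H @ psi) - 0.5 * dt**2 * (H @ (H @ psi))
    psi /= np.linalg.norm(psi)              # renormalize against truncation drift
    probs.append(abs(psi[1]) ** 2)          # population of the "down" state

# The down-state population oscillates with Rabi period ~ 2*pi/omega1
# (up to small counter-rotating corrections) and reaches nearly 1.
print("max down-state population:", max(probs))
```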
Wavefunction collapse: is that really an axiom 1. Oct 11, 2007 #1 Can the wavefunction collapse not be derived or is it really an axiom? How can the answer to this question (yes or no) be proven? If it is an axiom, is it the best formulation, is it not a dangerous wording? Let's enjoy this endless discussion !!! 2. jcsd 3. Oct 11, 2007 #2 User Avatar Science Advisor There are several different interpretations of QM. In some of them, there is no need for a collapse postulate. 4. Oct 11, 2007 #3 User Avatar Staff Emeritus Science Advisor Gold Member I think the thing we can sensibly say is that wavefunction collapse cannot follow from the unitary time evolution, which is easy to establish. 5. Oct 11, 2007 #4 But it is if you include the interaction with the measuring device into the QM model. 6. Oct 11, 2007 #5 User Avatar Staff Emeritus Science Advisor Gold Member Only if you choose some model other than unitary evolution for describing the measurement process. 7. Oct 12, 2007 #6 Do you mean that the "measurement axiom" is contradictory to my/the postulate that the (unitary) equation of evolution governs all interactions? (including measurement systems) Last edited: Oct 12, 2007 8. Oct 12, 2007 #7 User Avatar Staff Emeritus Science Advisor Gold Member yes, of course! That's the whole issue (or better, half the issue) in the "measurement problem". It is (to me at least) one of the reasons to consider seriously MWI. There's no unitary evolution (no matter how complicated) that can result in a collapsed wavefunction. This can be shown in 5 lines of algebra. 9. Oct 12, 2007 #8 The algebra is simple and true. The problem is that the wavefunction collapse doesn't really exist. Just like microreversibility doesn't contradict the second law: microreversibility doesn't necessarily imply the existence of a chaos demon. 10. Oct 12, 2007 #9 The algebra is simple and true. (even simple inspection of collapse algebra is enough for that, especially on the density matrix) The problem is that the wavefunction collapse doesn't really exist. In addition, I am quite sure that the collapse axiom can be derived from the Schrödinger equation. But the understanding is missing, to my knowledge. 11. Oct 12, 2007 #10 User Avatar Science Advisor There is actually a proof that it cannot. The proof is based on the fact that the Schrodinger equation involves only local interactions, while the collapse, including the cases with two or more entangled particles, requires nonlocal interactions. There is, however, something that contains some elements of a collapse but can be obtained from the Schrodinger equation. This is the environment-induced decoherence. And it is closely related to the second law emerging from time-symmetric laws of a large number of degrees of freedom. See e.g. http://xxx.lanl.gov/abs/quant-ph/0312059 (Rev. Mod. Phys. 76, 1267-1305 (2004)) Last edited: Oct 12, 2007 12. Oct 12, 2007 #11 could you please show it then? i don't know about you guys, but i am so sick of qualitative arguments involving wavefunction collapse, etc. to address the OP, it is my understanding that if you postulate the Born interpretation then wavefunction collapse follows from that; in other words, physicists didn't just sit around and postulate "wave collapse" as some popular books/shows would like one to believe.
more concretely, [tex]\langle \Omega \rangle_\alpha = \sum_i \omega_i | \langle \omega_i | \alpha \rangle|^2[/tex] where the Born interpretation is that the quantity [tex]| \langle \omega_i | \alpha \rangle|^2[/tex] is to be interpreted as the probability of measuring the value [tex]\omega_i[/tex] [tex]= \sum_i \omega_i \langle \omega_i | \alpha \rangle ^* \langle \omega_i | \alpha \rangle[/tex] [tex]= \sum_i \omega_i \langle \alpha | \omega_i \rangle \langle \omega_i | \alpha \rangle[/tex] [tex]= \sum_i \langle \alpha | \omega_i \rangle \omega_i \langle \omega_i | \alpha \rangle[/tex] [tex]= \sum_i \langle \alpha | \omega_i \rangle \langle \omega_i | \Omega | \omega_i \rangle \langle \omega_i | \alpha \rangle[/tex] [tex]= \langle \alpha |\left(\sum_i | \omega_i \rangle \langle \omega_i |\right) \Omega \left(\sum_j | \omega_j \rangle \langle \omega_j | \right)| \alpha \rangle[/tex] [tex]=\langle \alpha | \Omega | \alpha \rangle[/tex] so for some state [tex]\phi = \sum_i c_i \psi_i[/tex] that is NOT an eigenket of the operator (but can always be formed from a linear combination of eigenkets), we have: [tex]\langle \phi | \Omega | \phi \rangle = \langle \phi | \Omega | \sum_j c_j \psi_j \rangle[/tex] [tex]=\langle \phi | \sum_j c_j \omega_j \psi_j \rangle[/tex] [tex]=\sum_j \langle \sum_i c_i \psi_i | c_j \omega_j \psi_j \rangle[/tex] [tex]=\sum_i \sum_j c_i^* c_j \omega_j \langle \psi_i | \psi_j \rangle[/tex] and by orthogonality of states, we have: [tex]=\sum_i |c_i|^2 \omega_i[/tex] which shows that the average value of our experiments will be a weighted average of the eigenvalues, i.e. the "wavefunction collapse" is such that any individual measurement yields a particular eigenvalue. in the classical limit, the spectrum of eigenvalues is nearly continuous and so the effect is unnoticeable. so what's the big deal?? can someone explain to me why this is, for some people, such a big damn mystery?? 13. Oct 12, 2007 #12 There is no need for any algebra to show that the wavefunction collapse is not described by a unitary evolution. This follows simply from the definition of the wavefunction. By definition, the wavefunction is a probability amplitude. This means that the measurement described by the wavefunction is a random, probabilistic, unpredictable process, which cannot be described by a deterministic "unitary evolution". That's the whole point of quantum mechanics. In my opinion, looking for a unitary description of the collapse is equivalent to looking for "hidden variables". 14. Oct 12, 2007 #13 I agree, meopemuk. The collapse is not a unitary transformation, and it is not even a transformation at all. After the collapse, there is no wave function anymore, but a statistical mixture. That's the axiom. My view is that after the interaction of a small system with a measuring device, the state of the small system loses its meaning, and only the combined wavefunction has a meaning. The problem that remains is how the axiom emerges from the "complex" evolution. This is a challenge, and I am confident that it will or can be explained trivially. I am also sure that solving this problem is not really useful for the progress of QM; that is, it is the kind of problem that time and generations solve. 15. Oct 12, 2007 #14 User Avatar Staff Emeritus Science Advisor Gold Member Ok, here goes the "proof". Axiom 1: every state of a system is a ray in Hilbert space. Now, consider the system "measurement device + SUT" (SUT = system under test, say, an electron spin).
This is quantum-mechanically described by a ray in hilbert space. As we have degrees of freedom belonging to the SUT and other degrees of freedom belonging to the measurement device, the hilbert space of the overall system is the tensor product of the hilbert spaces of the individual systems H = H_m x H_sut Now, consider that before the measurement, the SUT is in a certain state, say |a> + |b> and the measurement system is in a classically-looking state |M0>. As we have now individually assigned states for each of the subsystems, the overall state is given by the tensor product of both substates: |psi0> = |M0> x ( |a> + |b> ) Now we do a measurement. That comes down to having an interaction hamiltonian between both subsystems, and from that interaction hamiltonian follows a unitary evolution operator over a certain time, say time T. We write this operator as U(0,T), it evolves the entire system from time 0 to time T. Now, let us first consider that our SUT was in state |a> and our measurement system was in (classically looking) state |M0>, which is its state before a measurement was done. "doing a measurement" would result in our measurement device get into a classically looking state |Ma> for sure, assuming that |a> was an eigenvector of the measurement. As such, our interaction between our system and our measurement apparatus, described by U(0,T) is given by: U(0,T) { |M0> x |a> } = |Ma> x |a> Indeed, the state of the measurement device is now for sure the classically-looking state Ma, and (property of a measurement on a system in an eigenstate) the SUT didn't change. We can tell now the same story if the system was in state b: U(0,T) { |M0> x |b> } = |Mb> x |b> where Mb is the classically looking state of the measurement apparatus with the pointer on "b". Now from linearity of U follows that: U(0,T) { |M0> x (|a> + |b>) } = |Ma> x |a> + |Mb> x |b> We didn't find "sometimes |Ma> x |a> and sometimes |Mb> x |b>" ; the unitary evolution gave us an entangled superposition. Now, you can say "yes, but |u> + |v> means: sometimes |v> and sometimes |u>" but that's not true of course. Consider the other measurement apparatus N which does the following: U(0,T) { |N0> x (|a> + |b>) } = |Nu> x (|a> + |b> ) U(0,T) { |N0> x (|a> - |b>) } = |Nd> x (|a> - |b>) From this, we can deduce that U(0,T) {|N0> x |a> } = 1/2 (|Nu> x { |a> + |b> } + |Nd> x { |a> - |b> }) Clearly if |u> + |v> means "sometimes u and sometimes v" then we could never have that |a> + |b> (which is then "sometimes a and sometimes b") always gives rise to Nu and never gives rise to Nd, because |a> gives rise to sometimes Nu and sometimes Nd. 16. Oct 12, 2007 #15 User Avatar Staff Emeritus Science Advisor Gold Member The problem is that the transition for a density matrix to go from "superposition" to "statistical mixture" is the following transformation: take the densitymatrix of the "superposition", and write it in the matrix form in the *correct basis*. Now put all non-diagonal elements to 0. You now have the statistical mixture. But again, that is a point-wise state change (if you take the density matrix to define the state) which is not described by the normal evolution equation of the density matrix. In other words, the transformation "superposition" -> "mixture" for the density matrix is again a state change which is not described by a physical interaction (which is normally described by the usual evolution equation of the density matrix). 
In other words, that's nothing else but another way of writing down a non-unitary evolution, which is not the result of a known physical interaction. 17. Oct 12, 2007 #16 My guess is that 99% of all experiments involve a single measurement of the system's state. (The counterexample is the bubble chamber, where we repeatedly measure the particle's position and obtain a continuous track) In these cases we do not care what the state of the system and its wavefunction are after the measurement (collapse). It is important to realize that one needs to consider the abrupt change of the wavefunction after measurement only in (not very common) experiments with repeated measurements performed on the same system. 18. Oct 12, 2007 #17 I would rather say: it is more conveniently described by a non-unitary transformation. Decoherence can already easily wipe off non-diagonal elements. I also remember my master's thesis 25+ years ago. I worked on the Stark effect in beam-foil spectroscopy: the time-dependence with the quantum beats and the atomic decay. (by the way, the off-diagonal elements of the H-atoms exiting the foil were crucial in the simulation) The Hamiltonian also had to simulate the decays of the atomic levels. Looks like a non-unitary transformation too, doesn't it? I also didn't want to embarrass myself with full QED stuff. Guess how I modelled that: - adding a non-hermitian term to the Hamiltonian (related to the decay rates) - and calculating the resulting non-unitary evolution operator (the density matrix was therefore decaying, which is quite natural) This is nothing strange or surprising, and it shows clearly how non-unitary evolution can simply occur as a limiting case of a unitary transformation. With such a simple approach the Stark effect is very well calculated, for the energy levels, for the perturbed lifetimes and for the time-dependence and polarisations of light emission. In somewhat pedantic words (I am not a mathematician): the limit of a sequence of unitary transformations may not be a unitary transformation; that is how it looks to me. And this can just mean that in some situations the unitary evolution may just be an academic vision: coarse graining makes it practical and non-unitary. Why should I go for more science-fiction? Last edited: Oct 12, 2007 19. Oct 12, 2007 #18 User Avatar Staff Emeritus Science Advisor Gold Member More precisely, decoherence can wipe off the non-diagonal elements of a relative state -- you have to use a partial trace to discard all of the information about the environment before you can get the matrix to look diagonal. That claim (that a limit of unitary transformations need not be unitary) is incorrect. If [itex]T_i \to T[/itex] then [itex]T_i^* \to T^*[/itex]. Finally, because multiplication is continuous, [tex]1 = \lim_i 1 = \lim_i T_i^* T_i = (\lim_i T_i^*) (\lim_i T_i) = T^* T.[/tex] 20. Oct 12, 2007 #19 User Avatar Staff Emeritus Science Advisor Gold Member Nobody is asking you to go for science-fiction. They are asking you to go to MWI. :tongue: And, incidentally, your argument appears quite analogous to rejecting the kinetic theory of gases simply because temperature and pressure are good enough for your favorite applications. Last edited: Oct 12, 2007 21. Oct 12, 2007 #20 User Avatar Staff Emeritus Science Advisor Gold Member Don't get me wrong. Collapse is a very practical and good working "approximation", of course. The point is that in order for collapse to occur, you have to leave quantum mechanics. You have, as Bohr wanted it, to decide somehow about a transition to a classical world.
When you do classical mechanics, you can describe your measurement apparatus classically. That means, for instance, in a Lagrangian formulation, that you can give a generalized degree of freedom Qm to the pointer of your measurement apparatus if you want to. Say that you have an (oldfashioned) voltmeter, connected to a system (an electric network). You can solve simply for the behaviour of the network, calculate the voltage difference between two points, and "make your transition" to a measurement, that is, say that you stop there with the system physics, and the measurement apparatus, DEFINED to be a volt meter, will measure that difference. But you can also include the voltmeter into your system, add the degree of freedom Qm (and all the other internal degrees of freedom of the apparatus) to your "system", and redo all the classical dynamics. You will now simply find that the end state of that variable Qm will simply indicate the result (the position of the pointer). So it is up to you to decide whether or not the internal dynamics of the apparatus and of Qm was worth the effort (taking into account the non-idealities of your measurement apparatus), but that doesn't change much the result. But in quantum mechanics, if you REMAIN in quantum mechanics, and you do so, you find TWO TOTALLY DIFFERENT outcomes. Indeed, the state description of your pointer is now given by "pointer states", the classically-looking states |Qm>. You might think, inspired by the classical example, that if you do the quantum-mechanical calculation completely, that if you include the apparatus, you would find sometimes a |Qm = 5V> and sometimes a |Qm=2V> and that the randomness is somehow given by tiny interactions with the environment or whatever. That this is the result of the apparent randomness of quantum mechanics, but that at the end of the day, you will find your apparatus in one or other pointer state, corresponding to what you observed. In that case, one would have the quantum-mechanical equivalent of the above classical procedure, and it would be a matter of convenience whether or not we include the apparatus in the physical description. We would have that: |a> |Q0> would always evolve in |a> |Qa> and that (|a> + |b>) |Q0> would evolve half of the time in |a> |Qa> and half of the time in |b> |Qb>. This would then be the "correspondence rule" and we would fully understand it. The dynamics of the interaction with the apparatus would somehow be responsible for the apparent random behaviour of quantum mechanics, and at the end of the day, we would see that the apparatus always ends up in one of its "pointer states". It would then be an approximation to go directly from (|a> + |b>) to |a> or to |b>, a very good one, exactly as in the case of the classical volt meter. BUT THIS IS IMPOSSIBLE in quantum mechanics if the evolution is unitary, no matter how complicated the interaction will be. That's the little proof I provided. No matter how complicated the dynamics, if the time evolution is unitary, the above evolution is not possible. You DO NOT end up in a single pointer state. The end state, namely |a> |Qa> + |b> |Qb> is not a good approximation of "sometimes |a> |Qa> and sometimes |b> |Qb>". We obtain, when we carefully work out the dynamics of the interaction of the measurement apparatus with the system, a totally different state, which is NOT a pointer state. THIS is the difficulty in principle. 
There is no description, in terms of quantum mechanics, which corresponds to the classical end state and which follows out of the dynamics. It is not even a close approximation. You have to LEAVE quantum mechanics in order to be able to say that the "apparatus is now in pointer state Qb". Quantum-mechanically, you can't claim that, because it does NOT follow from the calculation that the apparatus is in state |Qb>.
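A minimal numerical sketch of the linearity argument made in this thread (an editorial illustration, not part of the original posts): a unitary is defined so that it maps |M0>|a> to |Ma>|a> and |M0>|b> to |Mb>|b>, and is then applied to |M0>(|a>+|b>)/sqrt(2). The particular way the unitary is completed on the remaining basis vectors is an assumption made only to obtain a concrete matrix; the conclusion does not depend on it.

```python
import numpy as np

# Pointer basis: |M0>, |Ma>, |Mb>;  SUT basis: |a>, |b>.
# Product-basis index = d_S * (pointer index) + (SUT index).
d_M, d_S = 3, 2
dim = d_M * d_S
idx = lambda m, s: d_S * m + s

# A unitary (here a simple permutation, which suffices for the argument)
# implementing the "measurement interaction":
#   |M0>|a> -> |Ma>|a>,   |M0>|b> -> |Mb>|b>
# completed arbitrarily (sketch assumption) on the remaining basis vectors.
U = np.zeros((dim, dim))
mapping = {idx(0, 0): idx(1, 0),   # |M0,a> -> |Ma,a>
           idx(0, 1): idx(2, 1),   # |M0,b> -> |Mb,b>
           idx(1, 0): idx(0, 0),   # completion: swap back
           idx(2, 1): idx(0, 1),
           idx(1, 1): idx(1, 1),
           idx(2, 0): idx(2, 0)}
for src, dst in mapping.items():
    U[dst, src] = 1.0
assert np.allclose(U @ U.T, np.eye(dim))   # U is indeed unitary

# Initial state |M0> x (|a> + |b>)/sqrt(2)
psi = np.zeros(dim)
psi[idx(0, 0)] = psi[idx(0, 1)] = 1 / np.sqrt(2)

out = U @ psi   # linearity does the rest: (|Ma,a> + |Mb,b>)/sqrt(2)

# Reduced density matrix of the pointer: trace out the SUT
rho = np.outer(out, out).reshape(d_M, d_S, d_M, d_S)
rho_M = np.trace(rho, axis1=1, axis2=3)

print(np.round(rho_M, 3))                   # diag(0, 0.5, 0.5)
print("purity:", np.trace(rho_M @ rho_M))   # 0.5 < 1
```

The output shows exactly the situation discussed above: the pointer's reduced density matrix is a diagonal mixture over |Ma> and |Mb> (decoherence has removed the off-diagonal terms), but the unitary dynamics never produces a single definite pointer state.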
Workshop "Foundational questions of quantum information"
Dates: April 4-5, 2012
Jointly organized by LARSIM and QuPa
Venue: Amphi Opale, 46 rue Barrault, Paris 13e

April 4
9:30-9:45 Coffee and Opening
9:45-10:45 Robert Raussendorf (University of British Columbia)
10:45-11:00 Coffee
11:00-12:00 Oscar Dahlsten (University of Oxford)
14:15-15:15 Matthew Pusey (Imperial College London)
15:15-16:15 Michel Bitbol (CREA, CNRS-Ecole Polytechnique)
16:15-16:45 Coffee
16:45-17:45 Virginie Lerays (LRI, Université Paris Sud)

April 5
9:30-9:45 Coffee
9:45-10:45 Damian Markham (LTCI, CNRS-Télécom ParisTech)
10:45-11:00 Coffee
11:00-12:00 Kavan Modi (University of Oxford and Centre for Quantum Technologies, National University of Singapore)
14:15-15:15 Giacomo Mauro d'Ariano (University of Pavia)
15:15-16:15 Caslav Brukner (University of Vienna)
16:15-16:45 Coffee
16:45-17:45 Alexei Grinbaum (LARSIM, CEA-Saclay)

Robert Raussendorf "Symmetry constraints on temporal order in measurement-based quantum computation" We discuss the interdependence of resource state, measurement setting and temporal order in measurement-based quantum computation. The possible temporal orders of measurement events are constrained by the principle that the randomness inherent in quantum measurement should not affect the outcome of the computation. We provide a classification of all temporal relations among measurement events compatible with a given initial quantum state and measurement setting, in terms of a matroid. Conversely, we show that the classical processing relations necessary for turning the local measurement outcomes into computational output determine the resource state and measurement setting up to local equivalence. Further, we find a symmetry transformation related to local complementation that leaves the temporal relations invariant.

Oscar Dahlsten "Tsirelson's bound from a Generalised Data Processing Inequality" The strength of quantum correlations is bounded from above by Tsirelson's bound. We establish a connection between this bound and the fact that correlations between two systems cannot increase under local operations, a property known as the data processing inequality. More specifically, we consider arbitrary convex probabilistic theories. These can be equipped with an entropy measure that naturally generalizes the von Neumann entropy, as shown recently by Short and Wehner. We prove that if the data processing inequality holds with respect to this generalized entropy measure then the underlying theory necessarily respects Tsirelson's bound. We moreover generalise this statement to any entropy measure satisfying certain minimal requirements. Based on arXiv:1108.4549.

Matthew Pusey "Comparing two explanations for qubits" I will discuss two long-standing realist models for qubits - one due to Bell and the other to Kochen and Specker. I will argue that the latter provides a much more compelling explanation of various quantum information phenomena, mainly thanks to the feature that multiple quantum states can apply to the same real state. Finally I will show that, on the other hand, it is precisely this feature that prevents the latter model from explaining a very particular phenomenon. Based on arXiv:1111.3328.
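An editorial aside on the Tsirelson bound mentioned in Dahlsten's abstract above: the bound states that the quantum value of the CHSH expression cannot exceed 2*sqrt(2), whereas local classical theories are limited to 2. The sketch below simply checks the quantum value attained by the standard optimal measurement settings; it is not part of the workshop material, and the choice of settings is the textbook one assumed here for illustration.

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Standard optimal CHSH settings: A0 = Z, A1 = X on Alice's side,
# B0 = (Z + X)/sqrt(2), B1 = (Z - X)/sqrt(2) on Bob's side.
A0, A1 = Z, X
B0, B1 = (Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)

# CHSH operator S = A0 B0 + A0 B1 + A1 B0 - A1 B1 (tensor products)
S = np.kron(A0, B0) + np.kron(A0, B1) + np.kron(A1, B0) - np.kron(A1, B1)

# Its largest eigenvalue is the CHSH value reachable with these settings,
# which saturates Tsirelson's bound of 2*sqrt(2) ~ 2.828.
print("max quantum CHSH value:", np.linalg.eigvalsh(S).max())
```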
Michel Bitbol "Kant and quantum mechanics: a middle way between the ontic and epistemic approaches" Instead of either formulating new metaphysical images of the so-called "quantum reality" or rejecting any metaphysical attempt in an empiricist spirit, the case of quantum mechanics might require a redefinition of metaphysics. The sought redefinition will be performed in the spirit of Kant, according to whom metaphysics is the discipline of the boundaries of human knowledge. This can be called a "reflective" conception of metaphysics. Along with this perspective, theoretical structures are neither ontic nor purely epistemic. They do not express exclusively the structure of reality out there, or the form of our own knowledge, but their active interface. Our understanding of the structure of quantum mechanics then works in two steps : (1) The most basic structures of quantum mechanics are neither imposed onto us (by some pre-structured reality) nor arbitrary (just meant to "save the phenomena"), but made necessary by the general characteristics of our demand of knowledge. (2) Yet, there can also be additional features of theoretical structures corresponding to special characteristics of our demand of knowledge, adapted to certain directions of research or to cultural prejudice. The "surplus structure" of some of the most popular interpretations of quantum mechanics will be understood this way. Finally, it will be shown that some of the major "paradoxes" of quantum mechanics, such as the measurement problem, can easily be dissolved by way of this reflective attitude. Virginie Lerays "Detector efficiency and communication complexity" In the standard setting of communication complexity, two players each have an input and they wish to compute some function of the joint inputs. This has been the object of much study in computer science and a wide variety of lower bound methods have been introduced to address the problem of showing lower bounds on communication. Physicists have considered a closely related scenario where two players share a predefined entangled state. Each is given a measurement as input, which they perform on their share of the system. The outcomes of the measurements follow a distribution which is predicted by quantum mechanics. The goal is to rule out the possibility that there is a classical explanation for the distribution, through loopholes such as communication or detector inefficiency. In an experimental setting, Bell inequalities  are used to distinguish truly quantum from classical behavior. Bell test and communication complexity are both measures of how far a distribution is from the set of local distributions (those requiring no communication), and one would expect that if a bell test shows a large violation for a distribution, it should require a lot of communication and vice versa. We present a new lower bound technique for communication complexity based on the notion of detector inefficiency  for the setting of simulating distributions, and show that it coincides with the best lower bound in communication complexity known until now.  We show that it amounts to constructing an explicit Bell inequality. Joint work with Sophie Laplante and Jérémie Roland. Damian Markham "On non-linear extensions of quantum mechanics" We present some observations on the restrictions imposed on non-linear extensions of quantum mechanics with respect to non-signaling. 
We see that non-signaling can be understood as imposing the destruction of correlations, a property noticed for closed time-like curves by Bennett et al, arising from the 'non-linearity trap'. We discuss in what sense such theories can still allow for 'local' cloning and state discrimination. Joint work with Julien Degorre. Kavan Modi "Entanglement distribution with quantum communication" Two distant labs cannot increase the entanglement between them via classical communication. However, they can do so via quantum communication. Surprisingly, the communicated system need not be entangled with either (or both) of the labs, but it must be quantum correlated (as determined by quantum discord). We show that it is quantum discord that bounds the increase in the entanglement via quantum communication. The bound also leads to subadditivity of entropy and gives an interpretation for negative conditional entropy. Giacomo Mauro d'Ariano "Physics from Informational Principles" Recently quantum theory has been derived from six principles that are of purely informational nature. The "(epistemo)logical" nature of these principles makes them rock solid. We now want to take a pause of reflection about the general foundations of Physics, and re-examine how solid principles such as Galilean relativity and the Einsteinian equivalence principle really are. Are they truly compelling? Why are they under dispute, and why are violations considered? Following the route of the informational paradigm, I will suggest three new candidate principles, all of informational nature: 1) the Church-Turing-Deutsch principle, namely that the theory must allow simulating any physical process by a universal finite computer (this implies that the information involved in any process is locally bounded); 2) topological locality of interaction; 3) topological homogeneity of interactions. These principles, along with the six ones for Quantum Theory, suggest a new foundation of Quantum Field Theory as Quantum Cellular Automata theory. I will show how this framework can actually provide an extension of Quantum Field Theory to include localized states and observables, whereas Galileo's and Einstein's covariance and other symmetries are only approximate and recovered only in the field limit, while their violation makes the extended theory falsifiable in principle. The new informational principles open totally unexpected routes and re-definitions of mechanical notions (such as inertial mass, the Planck constant, the Hamiltonian, the Dirac equation as free flow of information), Minkowskian space-time as emergent, and an unexpected role for the Majorana field in the solution of the so-called Feynman problem of simulating anti-commuting fields by the automaton. Caslav Brukner "Tests distinguishing between quantum and more general probabilistic theories" Historical experience teaches us that every theory that was accepted at a certain time was later inevitably replaced by a deeper and more fundamental theory. There is no reason why quantum theory should be an exception in this respect. At present, quantum theory has been tested against very specific alternative theories, such as hidden variables, non-linear Schrödinger equations or the collapse models. The common feature of all of them is that they keep one or the other basic principle of the classical world intact. Yet, it is very unlikely that a post-quantum theory will be based on pre-quantum concepts. In contrast, it is likely that it will break not only principles of classical but also of quantum physics.
This gives us a motivation for the following research program: 1) to reconstruct quantum mechanics from a set of axioms; 2) to weaken the axioms and look for broader structures; 3) to test quantum theory against them. Following this approach I will present two tests that can distinguish between quantum theory and more general probabilistic theories. Alexei Grinbaum "Quantum observers and Kolmogorov complexity" Different observers do not have to agree on how they identify a quantum system. We explore a condition based on algorithmic complexity that allows a system to be described as an objective "element of reality". We also suggest an experimental test of the hypothesis that any system, even much smaller than a human being, can be a quantum mechanical observer.
Anticipation – A Spooky Computation
Conference on Computing Anticipatory Systems (CASYS 99), Liege, Belgium, August 8-11, 1999
Mihai Nadin
Program in Computational Design, University of Wuppertal
Computer Science, Center for the Study of Language and Information, 201 Cordura Hall, Stanford University
Robert Rosen, in memoriam

As the subject of anticipation claims its legitimate place in current scientific and technological inquiry, researchers from various disciplines (e.g., computation, artificial intelligence, biology, logic, art theory) make headway in a territory of unusual aspects of knowledge and epistemology. Under the heading anticipation, we encounter subjects such as preventive caching, robotics, advanced research in biology (defining the living) and medicine (especially genetically transmitted disease), along with fascinating studies in art (music, in particular). These make up a broad variety of fundamental and applied research focused on a controversial concept. Inspired by none other than Einstein–he referred to spooky actions at a distance, i.e., what became known as quantum non-locality–the title of the paper is meant to submit my hypothesis that anticipatory processes are related to quantum non-locality. The second goal of this paper is to offer a cognitive framework–based on my early work on mind processes (1988)–within which the variety of anticipatory horizons invoked today finds a grounding that is both scientifically relevant and epistemologically coherent. The third goal of this paper is to identify the broad conceptual categories under which we can identify progress made so far and possible directions to follow. The fourth and final goal is to submit a co-relation view of anticipation and to integrate the inclusive recursion in a logic of relations that handles co-relations. Keywords: auto-suggestive memory, co-relation, non-locality, quantum semiotics, self-constitution, interactive computation 1 Introduction Anticipation could become the new frontier in science. Trends, scientific fashions, and priority funding programs succeed one another rapidly in a society that experiences a dynamics of change reflected in ever shorter cycles of discovery, production, and consumption. Frontiers mark stark discontinuities that ascertain fundamentally new knowledge horizons. Einstein stated, "No problem can be solved from the same consciousness that created it. We must learn to see the world anew." It is in this respect that I find it extremely important to begin by putting the entire effort into a broad perspective. 2 The Philosophic Foundation of Anticipation is Not Trivial Philosophical considerations cannot be avoided (provided that they are not pursued as a means in themselves). Robert Rosen (1985) quoted David Hawkins, "Philosophy may be ignored but not escaped." Rosen, whose work deserves to be integrated in current scientific dialog more than has been the case until his untimely death, understood this thought very well. Anticipation bears a heavy burden of interpretations. As initial attempts (Rosen, 1985; Nadin, 1988; Dubois, 1992) to recover the concept and to give it a scientific foundation prove, the task is difficult. We face here the dominant deterministic view inspired by a model of the universe in which a net distinction between cause and effect can be made. We also face a reductionist understanding of the world, which claims that physics is paradigmatic for everything else.
Moreover, we are captive to an understanding of time and space that corresponds to the mathematical descriptions of the physical world: Time is uniquely defined along the arrow from past to future; space is homogeneous. Finally, we are given to the hope that science leads to laws on whose basis we may make accurate predictions. Once we accept these laws, anticipation can at best be accepted as one of these predictions, but not as a scientific endeavor on its own terms. A clear image of the difficulties in establishing this foundation results from revisiting Rosen's work on anticipatory systems, above all his fundamental work, Life Itself (1991). Indeed, his rigorous argumentation, based on solid mathematical work and on a grounding in biology second to none among his peers, makes sense only against the background of the philosophic considerations set forth in his writings. It might not matter to a programmer whether Aristotle's causa finalis (final cause) can be ascertained or justified, or deemed as passé and unacceptable. A programmer's philosophy does not directly affect lines of code; neither do disputes among those partial to a certain world view. What is affected is the general perspective, i.e., the understanding of a program's meaning. If the program displays characteristics of anticipation, the philosophic grounding might affect the realization that within a given condition–such as embodied in a machine–the simulation of anticipatory features should not be construed as anticipation per se. The philosophic foundation is also a prerequisite for defining how far the field can be extended without ending up in a different cognitive realm. Regarding this aspect, it is better to let those trying to expand the inquiry of anticipation–let me mention again Dubois (since 1996) and the notions of incursion and hyperincursion, Holmberg (since 1997) and space aspects–express themselves on the matter. Van de Vijver (1997), among a few others (cf. CASYS 98 and the contributions listed in the Program for CASYS 99), has already attempted to shed light on what seems philosophically pertinent to the subject. She is right in stating that the global/local relation more adequately pertains to anticipation than does the pair particular/universal. The practical implications of this observation have not yet been defined. From my own perspective–based on pragmatics, which means grounding in the practical experience through which humans become what they are–anticipation corresponds to a characteristic of living beings as they attain the condition at which they constitute their own nature. At this level, predictive models of themselves become possible, and progressively necessary. The thematization of anticipation, which as far as we know is a human being's expression of self-awareness and connectedness, is only one aspect of this stage in the unfolding of our species. According to the premise of this perspective, pragmatics–expressed in what we do and how and why we do what we do–is where our understanding of anticipation originates. This is also where it returns, in the form of optimizing our actions, including those of defining what these actions should be, what sequence they follow, and how we evaluate them. All these are projections against a future towards which each of us is moving, all tainted by some form of finality (telos), or at least by its less disputed relative called intentionality. The generic why of our existence is embedded in this intentionality.
The source of this finality is the others, those we interact with either in cooperating or in competing, or in a sense of belonging, which over time allowed for the constitution of the identity called humanness. Gordon Pask (1980), the almost legendary cybernetician, called such an entity a cognitive system. 2.1 Self-Entailment and Anticipation In a dialog on entailment–a fundamental concept in Rosen's explanation of anticipation–a line originating with François Jacob was dropped: "Theories come and go, the frog stays." (Incidentally, Jacob is the author of The Logic of Life, Princeton University Press, 1993.) This brings us back to a question formulated above: Does it matter to a programmer (the reader may substitute his/her profession for the word programmer) that anticipation is based on the self-entailment characteristic of the living? Or that evolution is the source of entailment? If we compare the various types of computation acknowledged since people started building computers and writing software programs, we find that during the syntactically driven initial phases, such considerations actually could not affect the pragmatics of programming. Only relatively recently has a rudimentary semantic dimension been added to computation. In the final analysis, it does not matter which microelectronics, computer architecture, programming languages, operating systems, networks, or communication protocols are used. For all practical purposes, what matters is that between the world and the computation pertinent to some aspects of this world, the relations are still extremely limited. If a programmer is not just in the business of writing lines of code for a specific application that might improve through a syntactically supported emulation of anticipatory characteristics–think about macros that save typing time by "guessing" which word or expression a user started to type in and "filling in" the letters or words–then it matters that there is something like self-entailment. It matters, too, that the notion of self-entailment supports more adequate explanations of biological processes than any other concept of the physical sciences. On a semantic level, the awareness of self-entailment (through self-associative memory) leads to better solutions in speech and handwriting recognition. However, once the pragmatic level is reached–we are still far from this–understanding the philosophic implications of the nature and condition of anticipation becomes crucial. The reason is that it is not at all clear that characteristics of the living–self-repair, metabolism, and anticipation–can be effectively embodied in machines. This is why the notion of frontier science was mentioned in the Introduction. The frontier is that of conceiving and implementing life-like systems. Whether Rosen's (M, R)-model, defined by metabolism and repair, or others, such as those advanced in neural networks, evolutionary computation, and ALife, will qualify as necessary and sufficient for making anticipation possible outside the realm of the living remains to be seen. I (Nadin, 1988, 1991) argue for computers with a variable configuration based on anticipatory procedures. This model is inspired by the dynamics of the constitution and interaction of minds, but does not suggest an imitation of such processes. The issue is not, however, reducible to the means (digital computation, algorithmic, non-algorithmic, or heterogeneous processing, signal processing, quantum computation, etc.), but to the encompassing goal.
2.2 Specializations To nobody’s surprise, anticipation, in some form or another, is part of the research program of logic, cognitive science, computer science, robotics, networking, molecular biology, genetics, medicine, art and design, nanotechnology, the mathematics of dynamic systems, and what has become known as ALife, i.e., the field of inquiry into artificial life. Anticipation involves semiotic notions, as it involves a deep understanding of complexity, or, better yet, of an improved understanding of complexity that integrates quantitative and qualitative aspects. It is not at all clear that full-fledged anticipation, in the form of machine-supported anticipatory functioning, is a goal within the reach of the species through whose cognitive characteristics it came into being and who became aware of it. Machines, or computations, for those who focus on the various data processing machines, able to anticipate earthquakes, hurricanes, aesthetic satisfaction, disease, financial market performance, lottery drawings, military actions, scientific breakthroughs, social unrest, irrational human behavior, etc., could well claim total control of our universe of existence. Indeed, to correctly anticipate is to be in control. This rather simplistic image of machines or computations able to anticipate cannot be disregarded or relegated to science fiction. Cloning is here to stay; so are many techniques embodying the once disreputed causa finalis. A philosophic foundation of anticipation has to entertain the many questions and aspects that pertain to the basic assertion according to which anticipation reflects part of our cognitive make-up, moreover, constitutes its foundation. Even if Kuhn’s model of scientific paradigm change had not been abused to the extent of its trivialization, I would avoid the suggestion that anticipation is a new paradigm. Rather, as a frontier in science, it transcends its many specializations as it establishes the requirement for a different way of thinking, a fundamentally different epistemological foundation. 3 Pro-Action vs. Re-Action Now that the epistemological requirement of a different way of thinking has been brought up, I would like to revisit work done during the years when the very subject of anticipation seemed not to exist (except in the title of Rosen’s book). My claim in 1988 (on the occasion of a lecture presented at Ohio State University) was that anticipation lies at the foundation of the entire cognitive activity of the human being. Moreover, through anticipation, we humans gain insight into what keeps our world together as a coherent whole whose future states stand in correlation to the present state as minds grasp it. Minds exist only in relation to other minds; they are instantiations of co-relations. This is also the main thesis of this paper. For over 300 years–since Descartes’ major elaborations (1637, 1644) and Newton’s Principia (1687)–science has advanced in understanding what for all practical purposes came to be known as the reactive modality. Causality is experienced in the reactive model of the universe, to the detriment of any pro-active manifestations of phenomena not reducible to the cause-and-effect chain or describable in the vocabulary of determinism. 
It is important to understand that what is at issue here is not some silly semantic game, but rather a pragmatic horizon: Are human actions (through which individuals and groups identify themselves, i.e., self-constitute, Nadin 1997) in reaction to something assumed as given, or are human actions in anticipation of something that can be described as a goal, ideal, or value? But even in this formulation (in which the vocabulary is as far as it can be from the vitalistic notions to which Descartes, Newton, and many others reacted), the suspicion of teleological dynamics–is there a given goal or direction, a final vector?–is not erased. Despite progress made in the last 30 years in understanding dynamic systems, it is still difficult to accept the connection between goal and self-organization, between ideal, or value, and emergent properties. 3.1 Minds Are Anticipations The mind is in anticipation of events, that is, ahead of them–this was my main thesis over ten years ago. Advanced research (Libet 1985, 1989) on the so-called “readiness potential” supported this statement. In recent years, work on the “wet brain” as well as work supported by MR-based visualization technologies have fully confirmed this understanding. Having entered the difficult dialog on the nature of cognitive processes from a perspective that no longer accepted the exclusive premise of representation –another heritage from Descartes–I had to examine how processes of self-constitution eventually result in shared knowledge without the assumption of a homunculus. What seemed inexplicable from a perspective of classical or relativist physics–a vast amount of actions that seemed instantaneous, in the absence of a better explanation for their connectedness–was coming into focus as constitutive of the human mind. Anticipatory cognitive and motoric scripts, from which in a given context one or another is instantiated, were advanced at that time as a possible description for how, from among many pro-active possible courses of action, one would be realized. Today I would call those possible scripts models and insist that a coherent description of the functioning of the mind is based on the assumption that there are many such models. Additionally, I would add that learning, in its many realizations, is to be understood as an important form of stimulating the generation of models, and of stimulating a competitive relation among them. [Von Foerster (1999) entertains a motto on his e-mail address that is an encapsulation of what I just described: "Act always as to increase the number of choices."] In a subtle way, defense mechanisms–from blinking to reflexes of all types–belong to this family. Anticipatory nausea and vomiting (whether on a ship or related to chemotherapy) is another example. The phantom limb phenomenon (sensation in the area of an amputated limb) is mirrored by pain or discomfort before something could have actually caused them. There is a descriptive instance in Lewis Carroll’s Through the Looking Glass. Before accidentally pricking her finger, the White Queen cries: “I haven’t pricked it yet, but I soon shall.” She lives life in reverse, which is what anticipation ultimately affords–provided that the interpretation process is triggered and made part of the self-constitutive pragmatics. 3.1.1 Anticipation is Distributed As recently as this year, results in the study of the anticipation of moving stimuli by the retina (Berry, et al 1999) made it clear that anticipation is distributed. 
The research proved that anticipation of moving stimuli begins in the retina. It is no longer that we expect the visual cortex to do some heavy extrapolation of trajectory (this was the predominant model until recently) but that we know that retinal processing is pro-active. Even if pro-activity is not equally distributed along all sensory channels–some are slower in anticipating than others, not the least because sound travels at a slower speed than light does, for example–it defines a characteristic of human perception and sheds new light on motoric activity. 3.1.2 Knowledge as Construction But there is also Kelly’s (1995) constructivist position, which must be acknowledged by researchers in the psychological foundation of anticipation. The adequacy of our constructs is, in his view, their predictive utility. Coherence is gained as we improve our capacity to anticipate events. Knowledge is constructed; validated anticipations enhance cognitive confidence and make further constructs possible. In Kelly’s terms, human anticipation originates in the psychological realm (the mind) and reflects the intention to make possible a correspondence between a future experience and certain of our anticipations (Kelly, 1955; Mancuso & Adams-Weber, 1982). Since states of mind somehow represent states of the world, adequacy of anticipations remains a matter of the test of experience. The basic function of all our representations, as the “fundamental postulate” ascertains, is anticipation (a temporal projection). Alternative courses of action in respect to their anticipated consequences represent the pragmatic dimension of this view. Observed phenomena and their descriptions are not independent of the assumptions we make. This applies to the perceptual control theory, as it applies to Kelly’s perspective and to any other theory. Moreover, assumptions facilitate or hinder new observations. For those who adopted the view according to which a future state cannot affect a present state, anticipation makes no sense, regardless of whether one points to the subject in various religious schemes, in biology, or in the quantum realm. The situation is not unlike that of Euclidean geometry vs. non-Euclidean geometries. To see the world anew is not an easy task! Anticipation of moving stimuli, to get back to the discovery mentioned above, is recorded in the form of spike trains of many ganglion cells in the retina. It follows from known mechanisms of retinal processing; in particular, the contrast-gain control mechanism suggests that there will be limits to what kinds of stimuli can be anticipated. Researchers report that variations of speed, for instance, are important; variations of direction are not. Furthermore, since space-based anticipation and time-based anticipation have a different metric, it remains to be seen whether a dominance of one mode over the other is established. As we know, in many cases the meeting between a visual map (projection of the retina to the tectum) and an auditory map takes place in a process called binding. How the two maps are eventually aligned is far from being a matter of semantics (or terminology, if you wish). Synchronization mechanisms, of a nature we cannot yet define, play an important role here. Obviously, this is not control of imagination, even if those pushing such terms feel more forceful in the de facto rejection of anticipation. Arguing from a formal system to existence is quite different from the reverse argumentation (from existence to formalism). 
Arguing from computation can take place only within the confines of this particular experience: the more constrained a mechanism, the more programmable it is (as Rosen pointed out, 1991, p. 238). Albeit, reaction is indeed programmable, even if at times it is not a trivial task. Pro-active characteristics make for quite a different task. The most impressive success stories so far are in the area of modeling and simulation. To give only one example: Chances are that your laptop (or any other device you use) will one day fall. The future state–stress, strain, depending upon the height, angle, weight, material, etc.–and the current state are in a relation that most frequently does not interest the user of such a portable device. It used to be that physical models were built and subjected to tests (this applies, for instance, to cars as well as to photo cameras). We can model, and thus to a certain point anticipate, the effects of various possible crashes through simulations based on finite-element analysis. That anticipation itself, in its full meaning, is different in nature from such simulations passes without too much comment. The kind of model we need in order to generate anticipations is a question to which we shall return. 3.2 A Rapidly Expanding Area of Inquiry An exhaustive analysis of the database of the contributions to fundamental and applied research of anticipation reveals that this covers a wide area of inquiry. In many cases, those involved are not even aware of the anticipatory theme. They see the trees, but not yet the forest. More telling is the fact that the major current directions of scientific research allow for, or even require, an anticipatory angle. The simulation mentioned above does not anticipate the fall of the laptop; rather, it visualizes–conveniently for the benefit of designers, engineers, production managers, etc.–what could happen if this possibility were realized. From this possibilistic viewpoint, we infer to necessary characteristics of the product, corresponding to its use (how much force can be exercised on the keyboard, screen, mouse, etc.?) or to its accidental fall. That is, we design in anticipation of such possibilities. Or we should! I would like to mention other examples, without the claim of even being close to a complete list. 3.2.1 An Example from Genetics But more than Rosen, whose work belongs rather to the meta-level, it was genetics that recovered the terminology of heredity. Having done so, it established a framework of implicit anticipations grounded in the genetic program. Of exceptional importance are the resulting medical alternatives to the “fix-it” syndrome of healthcare practiced as a “car repair” (including the new obsession with spare parts and artificial surrogates). Genetic medicine, as slow in coming as it is, is fundamentally geared towards the active recognition of anticipatory traits, instead of pursuing the reactive model based on physical determinism. Although there is not yet a remedy to Huntington’s disease, myotonic dystrophy, schizophrenia, Alzheimer’s disease, or Parkinson’s disease, medical researchers are making progress in the direction of better understanding how the future (the eventual state of diagnosed disease) co-relates to a present state (the unfolding of the individual in time). In the language of medicine, anticipation describes the tendency of such hereditary diseases to become symptomatic at a younger age, and sometimes to become more severe with each new generation. 
We now have two parallel paths of anticipation: one is that of the disorder itself, i.e., the observed object; the other, that of observation. The elaborations within second-order cybernetics (von Foerster, 1976) on the relation between these paths (the classical subject-object problem) make any further comment superfluous. The convergence of the two paths, in what became known as eigen behavior (or eigen value), is of interest to those actively seeking to transcend the identification of genetic defects through the genetic design of a cure. After all, a cure can be conceived as a repair mechanism, related to the process of anticipation. 3.2.2 Art, Simulacrum, Fabrication That art (healing was also seen as a special type of art not so long ago), in all its manifestations, including the arts of writing (poetry, fiction, drama), theatrical performance, and design–driven by purpose (telos) and in anticipation of what it makes possible–incorporates anticipatory features might be accepted as a metaphor. But once one becomes familiar with what it means to draw, paint, compose, design, write, sing, or perform (with or without devices), anticipation can be seen as the act through which the future (of the work) defines the current condition of the individual in the process of his or her self-constitution as an artist. What is interesting in both medicine and art is that the imitation can result only in a category of artifacts to be called simulacrum. In other words, the mimesis approach (for example, biomimesis as an attempt to produce organisms, i.e., replicate life from the inanimate; aesthetic mimesis, replicating art by starting with a mechanism such as the one embodied in a computer program) remains a simulacrum. Between simulacra and what was intended (organisms, and, respectively, art) there remains the distance between the authentic and the imitation, human art and machine art. They are, nevertheless, justified in more than one aspect: They can be used for many applications, and they deserve to be valued as products of high competence and extreme performance. But no one could or should ignore that the pragmatics of fabrication, characteristic of machines, and the pragmatics of human self-constitution within a dynamic involving anticipation are fundamentally different. 3.2.3 Learning (Human and Machined-Based) Learning–to mention yet another example–is by its nature an anticipatory activity: The future associates with learning expectations and a sui generis reward mechanism. These are very often disassociated from the context in which learning takes place. That this is fundamentally different from generating predictive models and stimulating competition among them might not be totally clear to the proponents of the so-called computational learning theory (COLT), or to a number of researchers of learning–all from reputable fields of scientific inquiry but captive to the action-reaction model dominant in education. It is probably only fair to remark in this vein that teaching and learning experiences within the machine-based model of current education are not different from those mimicked in some computational form. Computer-based training, a very limited experience focused on a well defined body of information, can provide a cost-efficient alternative to a variety of training programs. What it cannot do is to stimulate and trigger anticipatory characteristics because, by design, it is not supposed to override the action-reaction cycle. 
3.2.4 Reward Alternatively, one can see promise in the formalism of neural networks. For instance, anticipation of reward or punishment was observed in functional neuroanatomy research (cf. Knutson, 1998). Activation of circuitry (to use the current descriptive language of brain activity) running from the medial dorsal thalamus through the anterior cingulate and mesial prefrontal cortex was co-related not to motor response but to personality variations. Accordingly, it is quite tempting to look at such mechanisms and to try to introduce reward anticipation in neural networks procedures as a method of increasing the performance of artificially mimicked decision-making. Homan (1997) reports on neural networks that “can anticipate rewards before they occur, and use these expectations to make decisions.” The focus of this type of research is to emulate biological processes, in particular the dopamine-based rewarding mechanism that lies behind a variety of goal-oriented mechanisms. Dynamic programming supports a similar objective. It focuses on states; their dynamic reassessment is propagated through the neural network in ways considered similar to those mapped in the successful enlisting of brain capabilities. Training, as a form of conditioning based on anticipation, is probably complementary to what one would call instinct-based (or natural) action. 3.2.5 Motion Planning Animation and robot motion planning, as distant from each other as they appear to some of us, share the goal of providing path planning, that is, to find a collision-free path between an initial position (the robot’s arm or the arm of an animated character) and a goal position. It is clear that the future state influences the current state and that those planning the motion actually coordinate the relation between the two states. In predictive programs, anticipation is pursued as an evaluation procedure among many possibilities, as in economics or in the social sciences. The focus changes from movement (and planning) to dynamics and probability. A large number of applications, such as pro-active error detection in networks, hard-disk arm movement in anticipation of future requests, traffic control, strategic games (including military confrontation), and risk management prompted interest in the many varieties under which anticipatory characteristics can be identified. 3.3 Aspects of Anticipation At this point, where understanding the difference between anticipation as a natural entailment process and embodying anticipatory features in machine-like artifacts meet, it is quite useful to mention that expectation, prediction, and planning–to which others add forecasting and guessing–are not fully equivalent to anticipation, but aspects of it. Let us also make note of the fact that we are not pursuing distinctions on the semantic level, but on the pragmatic–the only level at which it makes sense to approach the subject. 3.3.1 Expectation, Prediction, Forecast The practical experience through which humans constitute themselves in expectation of something–rain (when atmospheric conditions are conducive), meeting someone, closing a transaction, etc.–has to be understood as a process of unfolding possibilities, not as an active search within a field of potential events. Expectation involves waiting; it is a rather passive state, too, experienced in connection with something at least probable. Predictions are practical experiences of inferences (weak or strong, arbitrary or motivated, clear-cut or fuzzy, explicit or implicit, etc.) 
along the physical timeline from past to the future. Checking the barometer and noticing pain in an arthritic knee are very different experiences; so are the outcomes: imperative prediction or tentative, ambiguous foretelling. To predict is to connect what is of the nature of a datum (information received as cues, indices, causal identifiers, and the like) experienced once or more frequently, and the unfolding of a similar experience, assumed to lead to a related result. It should be noted here that the deterministic perspective implies that causality affords us predictive power. Based on the deterministic model, many predictive endeavors of impressive performance are succesfully carried out (in the form of astronomical tables, geomagnetic data, and calculations on which the entire space program relies). Under certain circumstances (such as devising economic policies, participating in financial markets, or mining data for political purposes), predictions can form a pragmatic context that embodies the prediction. In other words, a self-referential loop is put in place. Not fundamentally different are forecasts, although the etymology points to a different pragmatics, i.e., one that involves randomness. What pragmatically distinguishes these from predictions is the focus on specific future events (weather forecasting is the best known pragmatic example, that is, the self-constitution of the forecaster through an analytic activity of data acquisition, processing, and interpretation, whose output takes very precise forms corresponding to the intended communication process). These events are subject to a dynamics for which the immediate deterministic descriptions no longer suffice. Whether economic, meteorological, geophysical (regarding earthquakes, in particular), such forecasts are subject to an interplay of initial conditions, internal and external dynamics, linearity, and nonlinearity (to name only a few factors) that is still beyond our capacity to grasp, moreover to express in some efficient computational form. Although forecasts involve a predictive dimension, the two differ in scope and in the specific method. A computer program for predicting weather could process historic data (weather patterns over a long period of time). Its purpose is global prediction (for a season, a year, a decade, etc.). A forecasting algorithm, if at all possible, would be rather local and specific: Tomorrow at 11:30 am. Dynamic systems theory tells us how much more difficult forecasting is in comparison with prediction. Our expectations, predictions, and forecasts co-constitute our pragmatics. That is, they participate in making the world of our actions. There is formative power in each of them. Although expecting, predicting, and forecasting good weather will not bring the sun out, they can lead to better chances for a political candidate in an election. Indeed, we need to distinguish between categories of events to which these forms of anticipation apply. Some are beyond our current efforts to shape events and will probably remain so; others belong to the realm of human interaction. Recursion would easily describe the self-referential nature of some particular anticipations: expected outcome = f(expectation). That such cases basically belong to the category of indeterminate problems is more suspected than acknowledged. Mutually reinforcing expectations, predictions, and forecasts are the result of more than one hypothesis and their comparative (not necessarily explicit) evaluation. 
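A minimal sketch may make this last observation concrete. Everything in it, the three hypotheses, the feedback strength, and the scoring rule, is an assumption introduced only for illustration; it is not a model taken from the literature cited here. Several predictive hypotheses are maintained in parallel, their comparative evaluation adjusts their weights, and the aggregate expectation itself feeds back into the outcome, closing the loop expected outcome = f(expectation).

```python
import random

# Illustrative sketch: competing predictive hypotheses whose aggregate
# expectation feeds back into the outcome (expected outcome = f(expectation)).
# All hypotheses, weights, and the feedback strength are invented for this sketch.

hypotheses = {
    "persistence": lambda history, expectation: history[-1],
    "trend":       lambda history, expectation: history[-1] + (history[-1] - history[-2]),
    "reflexive":   lambda history, expectation: 0.5 * history[-1] + 0.5 * expectation,
}
weights = {name: 1.0 for name in hypotheses}

history = [0.50, 0.52]        # e.g., a candidate's poll numbers over time
expectation = history[-1]      # last published aggregate expectation

for step in range(10):
    predictions = {n: h(history, expectation) for n, h in hypotheses.items()}
    total = sum(weights.values())
    expectation = sum(weights[n] * predictions[n] for n in hypotheses) / total

    # The outcome is partly driven by the expectation itself (self-reference)
    # and partly by noise standing in for everything else.
    outcome = 0.7 * history[-1] + 0.3 * expectation + random.gauss(0, 0.01)
    history.append(outcome)

    # Comparative (here: explicit) evaluation rewards the better hypotheses.
    for n in hypotheses:
        error = abs(predictions[n] - outcome)
        weights[n] *= 1.0 / (1.0 + error)

print({n: round(w, 3) for n, w in weights.items()})
```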
This model can be relatively efficiently implemented in genetic computations. 3.3.2 Plans, Design, Management Plans are the expression of well or less well defined goals associated with means necessary and sufficient to achieve them. They are conceived in a practical experience taking place under the expectation of reaching an acceptable, optimal, or high ratio between effort and result. Planning is an active pursuit within which expectations are encoded, predictions are made, and forecasts of all kind (e.g., price of raw materials and energy sources, weather conditions, individual and collective patterns of behavior, etc.) are considered. Design and architecture as pragmatic endeavors with clearly defined goals (i.e., to conceive of everything that qualifies as shelter and supports life and work in a “sheltered” society: housing, workplace, various institutions, leisure, etc.) are particular practical experiences that involve planning, but extend well beyond it, at least in the anticipatory aesthetic dimension. Every design is the expression of a possible future state–a new chip, a communication protocol, clothing, books, transportation means, medicine, political systems or events, erotic stimuli, meals–that affects the current state–of individuals, groups, society, etc.–through constitution of perceived and acknowledged needs, expectations, and desires. The dynamics of change embodied in design anticipations is normally higher than that of all other known human practical experiences. Policy, management, and prevention (to name a few additional aspects or dimensions of anticipation) involve giving advance thought, looking forward, directing towards something that as a goal influences our actions in reaching it. All these characteristics are part of the dictionary definitions of anticipation. The various words (such as those just referred to) involved in the scientific discourse on anticipation, i.e., its various meanings, pertain to its many aspects; but they are not equivalent. 3.4 Resilience It is probably useful to interrupt this account of the many ways through which anticipation penetrates the scientific agenda and to invoke a distinction that, in the beginning, defies our acquired understanding of anticipation, at least along the distinctions made above. In a deceptively light presentation, Postrel (1997) suggests a counterdistinction: resilience vs. anticipation. If the subject were only what distinguishes Silicon Valley from the Boston area, both known as regions of technical innovation and fast economic growth, the two elements invoked–predictable weather patterns, and earthquakes, anything but predictable–we would not have to bother. However, her article presents the political theory of a proficient political scholar, Wildawski (1988), focused on meeting the challenge of risk through anticipation, understood as planning that aspires to perfect foresight, or through resilience, a dynamic response based on providing adjustments. The definitions are quite telling: “Anticipation is a mode of control by a central mind; efforts are made to predict and prevent potential dangers before damage is done. . . . Resilience is the capacity to cope with unanticipated dangers after they have become manifest, learning to bounce back.” Not surprising is the inference that “anticipation seeks to preserve stability: the less fluctuation, the better. Resilience accommodates variability. . . 
.” We seem to have here a reverse view of all that has been presented so far: Anticipation means to see the world as predictable. But it also qualifies anticipation as being quite inappropriate within dynamic systems, that is, exactly where anticipation makes a difference! Rapid changes, especially unexpected turns of events, seem to be the congenital weakness of anticipation in this model. (Those critical of evolution theory refer to punctuated equilibrium, i.e., fast change for which evolution theory has yet to produce a convincing account.) Hubristic central planning and over-caution can undermine anticipation. This view of anticipation would also imply that it cannot be properly pursued within open systems or within transitory processes–again, where we could most benefit from it. Resilience depends on spontaneity, serendipity, on the unforeseeable. Wildavsky expressed this in rather sweeping statements: “. . . not only markets rely on spontaneity; science and democracy do as well. . . .” Computations of risk are, of course, also part of the subject of anticipation.

3.5 Synchronization

Yet another element of this methodological overview (far from being complete) is synchronization. It can serve here as a terminological cue, or, to recall Rosen (1991), co-temporality or simultaneity would do. In the canonical description of anticipation–the current state of the system is defined by a future state–one aspect of time, sequentiality or precedence (one instant precedes the other), takes over. Yet in the universe of simultaneous events, we encounter anticipation not only as it refers to aspects of space, but as it takes the form of synchronization mechanisms. Whether in genetic mechanisms, in musical perception (where temporality is definitory), or in the perception of the world (I have already mentioned above the way in which the visual and the auditory “maps” are brought in sync, the so-called binding problem, i.e., the integration of sensory information arriving on different channels), to name just a few, the coordination mechanism is the final guarantor of the system’s coherent functioning. As a synchronization mechanism, anticipation means to “know” (the quotation marks are used to identify a way of speaking) when relatively unrelated, or even related, events have to be integrated in order to make sense. It is therefore helpful to consider this particular kind of anticipation as the result of the work of a “conductor” (or switch, for those technically inclined) prompting the various sound streams originating from independent sources, each operating within its own confines, to merge in a synchronized concert. Cognitively, this means ensuring that what is synchronous in the world is ultimately perceived as such, although information arrives asynchronously in the brain. Synchronization, as opposed to precedence, is not tolerant of error. Precedence is less restrictive: The cold temperatures that might affect the viability (survival) of a deciduous tree, and the cycle of day and night affected by the cycle of seasons, allow for a range. This is why leaves fall over a relatively long time, depending upon tree kinds and configurations (lone trees, groves, forests, etc.).
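The “conductor” image admits a simple computational illustration. The sketch below is not a model of neural binding; the channel names, latencies, and tolerance window are assumptions made only for the example. Two channels deliver events asynchronously; a coordinator groups them by their estimated time of origin, so that what was simultaneous in the world is reported as simultaneous.

```python
from dataclasses import dataclass

# Illustrative "conductor": events from independent channels arrive with
# different latencies; the coordinator re-aligns them by estimated time of
# origin so that what was simultaneous in the world is grouped as such.
# Channel names, latencies, and the tolerance window are assumptions of this sketch.

@dataclass
class Event:
    channel: str
    arrival_time: float   # when the event reaches the observer

KNOWN_LATENCY = {"visual": 0.05, "auditory": 0.15}   # assumed, per channel

def origin(event):
    # Estimated time of origin: compensate for the channel's typical latency.
    return event.arrival_time - KNOWN_LATENCY[event.channel]

def conduct(events, tolerance=0.02):
    """Group events whose estimated origins lie within `tolerance` of each other."""
    groups, current = [], []
    for event in sorted(events, key=origin):
        if current and origin(event) - origin(current[0]) > tolerance:
            groups.append(current)
            current = []
        current.append(event)
    if current:
        groups.append(current)
    return groups

# Three world events at t = 1, 2, 3 reach the two channels at different times.
arrivals = [Event("visual", t + 0.05) for t in (1.0, 2.0, 3.0)] + \
           [Event("auditory", t + 0.15) for t in (1.0, 2.0, 3.0)]

for group in conduct(arrivals):
    print([(e.channel, round(origin(e), 2)) for e in group])
```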
So we learn that not only is there a variety of soft-defined forms of anticipation (weather prediction, even after data collection, processing, and interpretation have made spectacular advances, is as soft as soft gets), but also that there are high precision mechanisms that deserve to be accounted for if we expect to understand, and moreover make use of, anticipatory technologies. 3.6 Some Working Hypotheses 3.6.1 Rosen’s Model Rosen distinguishes the difference between the dynamics of the coupled given object system S and the model M; that is, the difference between real time in S and the modeling time of M (faster than that of S) is indicative of anticipation. True, time in this particular description ceases to be an objective dimension of the world, since we can produce quite a variety of related and unrelated time sequences. He also remarks that the requirement of M to be a perfect model is almost never fulfilled. Therefore, the behavior of such a coupled system can only be qualified as quasi-anticipatory (in which E represents effectors through which action is triggered by M within S); cf. Fig. 1. Fig. 1 Rosen’s model As aspects of this functioning, Rosen names, rather ambiguously, planning, management, and policies. Essential here are the parametrization of M and S and the choice of the model. The standard definition, quoted again and again, is that an anticipatory system “contains a predictive model of itself and/or of its environment, which allows it to change state at an instant in accord with the model’s predictions pertaining to a later instant” (Rosen 1985, p. 339). The definition is not only contradictory–as Dubois (1997) noticed–but also circular–anticipation as a result of a weaker form of anticipation (prediction) exercised through a model. Much more interesting are Rosen’s examples: “If I am walking in the woods and I see a bear appear on the path ahead of me, I will immediately tend to vacate the premises”; the “wired-in” winterizing behavior of deciduous trees; the biosynthetic pathway with a forward activation. Each sheds light on the distinction between processes that seem vaguely correlated: background information (what could happen if the encounter with the bear took place, based on what has already happened to others); the cycle of day and night and the related pattern of lower temperatures as days get shorter with the onset of autumn; the pathway for the forward activation and the viability of the cell itself. What is not at all clear is how less than obvious weak correlations end up as powerful anticipation links: heading away from the bear (”I change my present course of action, in accordance with my model’s prediction,” 1985, p. 7) usually eliminates the danger; loss of leaves saves the tree from freezing; forward activation, as an adaptive process, increases the viability of the cell. We have a “temporal spanning,” as Rosen calls it. In his example of senescence (”an almost ubiquitous property of organisms,” “a generalized maladaptation without any localizable failure in specific subsystems,” 1985, p. 402), it becomes even more clear that the time factor is of essence in the biological realm. 3.6.2 Inclusive Recursion (the Dubois Path) Dubois (1997, p. 4) is correct in pointing out that this approach is reminiscent of classical control theory. He submits a formal language of inclusive (or implicit) recursion, more precisely, of self-referential systems, in which the value of a variable at a later time (t+1) explicitly contains a predictive model of itself (p. 
6): x(t+1) = f[x(t), x(t+1), p] (1a)

In this expression, x is the state variable of the system, t stands for time (present, t–1 is the past, t+1 is the future), and p is a control parameter. Dubois starts from recursion within dynamical discrete systems, where the future state of a system depends exclusively on its present and past

x(t+1) = f[..., x(t–1), x(t), p] (1b)

He further defines incursion, i.e., an inclusive or implicit recursion, as

x(t+1) = f[..., x(t–2), x(t–1), x(t), x(t+1), ..., p] (2)

and exemplifies its simplest case as a self-referential system (cf. 1a and 1b). The embedded nature of such a system (it contains a model of itself) explains some of its characteristics, in particular the fact that it is purpose (i.e., finality, or telos) driven. Having provided a mathematical description, Dubois further reasons from the formalism submitted to the mechanism of anticipation: The dynamics of the system is represented by

ΔS/Δt = [S(t+Δt) – S(t)]/Δt = F[S(t), M(t+Δt)] (3)

That of the predictive model is:

ΔM/Δt = [M(t+Δt) – M(t)]/Δt = G[M(t)] (4)

In order to avoid the contradiction in Rosen’s model, Dubois suggests that

ΔM/Δt = [M(t+Δt) – M(t)]/Δt = F[S(t), M(t+Δt)] (5)

Obviously, what he ascertains is that there is no difference between the system S and the anticipatory model, the result being

ΔS/Δt = [S(t+Δt) – S(t)]/Δt = F[S(t), S(t+Δt)] (6)

which is, according to his definition, an incursive system. That Rosen and Dubois take very different positions is clear. In Rosen’s view, since the “heart of recursion is the conversion of the present to the future” (1991, p. 78), and anticipation is an arrow pointing in the opposite direction, recursions could not capture the nature of anticipatory processes. Dubois, in producing a different type of recursion, in which the future affects the dynamics, partially contradicts Rosen’s view. Incursion (inclusive or implicit recursion) and hyperincursion (an incursion with multiple solutions) describe a particular kind of predictive behavior, according to Dubois. Building upon the McCulloch and Pitts (1943) formal neuron and taking up von Neumann’s suggestion that a hybrid digital-analog neuron configuration could explain brain dynamics, Dubois (1990, 1992) submitted a fractal model of neural systems and furthered a non-linear threshold logic (with Resconi, 1993). The incursive map

x(t) = 1 – abs(1 – 2x(t+1)) (7)

where “abs” means “the absolute value” and in which the iterated x(t) is a function of its iterate at a future time t+1, can subsequently be transformed into a hyperincursive map:

1 – 2x(t+1) = ±(1 – x(t)) (8)

so that

x(t+1) = [1 ± (x(t) – 1)]/2 (9)

It is clear that once an initial condition x(0) is defined, successive iterated values x(t+1), for t = 0, 1, 2, …, T, produce two iterations corresponding to the ± sign. In order to avoid the increase of the number of iterated values, i.e., in order to define a single trajectory, a control function u is introduced. The resulting hyperincursive process is expressed through

x(t+1) = [1 + (1 – 2u(t+1))(x(t) – 1)]/2 = x(t)/2 + u(t+1) – x(t) · u(t+1) (10)

It turns out that this equation describes the von Neumann hybrid version through x(t) as a floating-point variable and the control function u(t) as a digital variable, accepting 0 and 1 as values, so that the sign + or – results from

Sg = 2u(t) – 1, for t = 1, 2, …, T (11)

It is tempting to see this hybrid neuron as a building block of a functional entity endowed with anticipatory properties.
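A few lines of code can make equations (7) through (11) concrete. The sketch below is only a numerical illustration of the maps as written above; the initial value and the control sequence are arbitrary choices made for the example and are not part of Dubois’ formalism.

```python
# Numerical illustration of the hyperincursive map, eq. (10):
#   x(t+1) = x(t)/2 + u(t+1) - x(t)*u(t+1)
# with u a digital control (0 or 1) selecting one of the two branches of eq. (9).
# Initial value and control sequence are arbitrary choices for this sketch.

def step(x, u):
    return x / 2 + u - x * u

x = 0.3                      # x(0), arbitrary
controls = [1, 0, 1, 1, 0]   # u(1), u(2), ... arbitrary digital sequence

trajectory = [x]
for u in controls:
    x = step(x, u)
    trajectory.append(x)

print([round(v, 4) for v in trajectory])

# Consistency check against the incursive map of eq. (7): each x(t) should
# equal 1 - abs(1 - 2*x(t+1)) along the generated trajectory.
for t in range(len(trajectory) - 1):
    assert abs(trajectory[t] - (1 - abs(1 - 2 * trajectory[t + 1]))) < 1e-12
```

With u = 1 the update reduces to x(t+1) = 1 – x(t)/2, with u = 0 to x(t+1) = x(t)/2, which are exactly the two branches allowed by the ± sign in eq. (9); the digital control thus picks a single trajectory, as stated above.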
Let me add here that Dubois has continued his work in the direction of producing formal descriptions for neural net applications, memory research, and brain modeling (1998). His work is convincing, but, again, it takes a different direction from the work pursued by Rosen, if we correctly understand Rosen’s warning (1991) concerning the non-fractionability of the (M, R)-system, i.e., its intrinsic relational character. Nevertheless, Dubois’ results will be seen by many as another suggestion that the hybrid analog/digital computation better reflects the complexity of the living and thus might support effective information processing for applications in which the living is not reduced to the physical. 3.6.3 Space-Based Computation Cellular automata, as discrete space-time models, constitute yet another way of modeling anticipation as a space-based computation. More details can be found in the work of Holmberg (1997), who introduces the concept of spatial automata and correctly positions this approach, as well as some basic considerations on the nature of anticipation in technological applications, within systems theory. Not surprisingly, the community of researchers of anticipation is generating further working hypotheses (Julià 1998; Sommer, 1998, addressing intentionality and learnability, respectively). It is very difficult to keep a record of all of these contributions, and even more difficult to comment on works in their incipient phase. Applications of fundamental theoretical anticipatory models are also being submitted in increasing numbers. Dubois himself suggested quite a number of applications, including robotics and neural machines. My focus is on variable configuration computers (regardless of the nature of computation). Obviously, those and similar attempts (many in the program of the CASYS conferences) are quite different from training in various sports, sports performance (think about anticipation in fencing!), political action, the functioning of the judicial system, the dissemination of writing rules for achieving suspense, the automatic generation of jokes (Barker, 1996), the building of economic models, and so on. 3.6.4 Dynamic Competing Models Without attempting to submit a full-fledged alternative to either Rosen’s or Dubois’ anticipation descriptions, I will only mention once more that my own work speaks in favor of a changing set of models and of a procedure for maintaining competition among them. Fig. 2 Changing models and competition among models Since a diagram is a formalism of sorts, not unlike a mathematical or logical expression, I also reason from it to the dynamics of the system. The diagram ascertains that anticipation implies awareness, and thus processes of interpretation—hence semiotic processes. Mathematical or logical descriptions do not explicitly address awareness, but rather build upon it as a given. Some scientists subsequently commit the error of assuming that because awareness is not explicitly encoded in the formulae, it plays no role whatsoever in the system described. As we shall see in the discussion of the non-local nature of anticipation, quantum experiments suggest that in the absence of the observer, our descriptions of the universe make no sense. 3.6.5 Variability and Computation To make things even more challenging, there are instances in which anticipation, resulting from the dynamics of natural evolution, is subject to variability, i.e., change. In every game situation, anticipations are at work in a competitive environment. 
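One possible, deliberately simplified reading of the “changing set of models with competition among them” idea can be sketched in code. The linear form of the candidate models, the scoring, and the replacement schedule below are all assumptions made only for this illustration; they do not reproduce the diagram of Fig. 2.

```python
import random

# Deliberately simplified reading of "a changing set of models with competition
# among them": keep several candidate predictors, score them against each new
# observation, and periodically retire the weakest in favor of a fresh variant.
# Model form, scoring, and replacement schedule are assumptions of this sketch.

def make_model():
    return {"a": random.uniform(0, 1), "b": random.uniform(-1, 1), "score": 0.0}

def predict(model, x):
    return model["a"] * x + model["b"]

def observe(x):
    # Stand-in for the process the models try to anticipate (also assumed).
    return 0.8 * x + 0.1 + random.gauss(0, 0.05)

random.seed(1)
models = [make_model() for _ in range(4)]
x = 1.0

for rounds in range(55):
    y = observe(x)
    for m in models:
        m["score"] -= abs(predict(m, x) - y)   # competition: penalize error
    if rounds % 10 == 9:                       # change the set of models
        models.sort(key=lambda m: m["score"], reverse=True)
        models[-1] = make_model()              # the weakest model is replaced
        for m in models:
            m["score"] = 0.0                   # restart the competition
    x = y

best = max(models, key=lambda m: m["score"])
print("surviving best model:", round(best["a"], 2), round(best["b"], 2))
```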
Chess players, not unlike “black-box” traders on the financial or stock markets, as well as professional gamblers, could provide a huge amount of testimony regarding “anticipation as a moving target.” In my model of an anticipation mechanism based on a changing number of models and on stimulating competition among them, games can serve as a source of information in the validation process. The mathematics of game theory, not unlike the mathematics of ALife formal descriptions applied to trading mechanisms or to flocking behavior, is in many respects pertinent to questions of anticipation. What is not explicitly provided through the ever expanding list of application examples is the broad perspective. Indeed, when a musician performing a well-known musical score seeks an expression that deviates from the expected sound (without being unfaithful to the composer), we have anticipation at work: not necessarily as a result of an understanding of its many implications, but rather as a spontaneously developed means of expression. Many similar anticipation-based characteristics are recognizable in the practical human experience of self-constitution in competitive situations, in survival instances (some action performed ahead of the destructive instant), and in the interpretation of various types of symptoms. After all, the immune system is one of the most impressive examples of the (M,R) models that Rosen describes. It exists in anticipation of an infinity of possible factors that affect the organism during its unfolding from inception to death. The metabolism component and the repair component, although different, are themselves co-related. From the perspective opened by the subject of anticipation, it is implausible that a cure for a deficient immune system will be found in any place other than its repair function. In contradistinction, as we shall see, when one searches for information on the World-Wide Web, there is anticipation involved in the mechanism of pre-fetching information that eventually gives the user the feeling of interactivity, even though what technology makes possible is a simulacrum. The question to be asked, but not necessarily answered in this paper, is: To what extent does becoming aware of anticipation, or living in a particular anticipation (of a concert, of a joke, or of an inherited disease), affect our practical experiences of self-constitution, regardless of whether we build a technology inspired by it or only use the technology, or to what extent are such experiences part of the technology? Friedrich Dürrenmatt, the Swiss writer, once remarked (1962, in a play entitled The Physicists), “A machine only becomes useful when it has grown independent of the knowledge that led to its discovery.” This statement will follow us as we get closer to the association between anticipation and computation. It suggests that if we are able to endow machines with anticipatory characteristics (prediction, expectancy, planning, etc.), chances are that our relation to such machines will eventually become more natural. This might change our relation to anticipation altogether, either by further honing natural anticipation capabilities or by effecting their extinction. The broader picture that results from the examination of what actually defines the field of inquiry identifiable as anticipation–in living systems and in machines–is at best contradictory. To be candid, it is also disconcerting, especially in view of the many so-called anticipation-based claims.
But this should not be a discouraging factor. Rather, it should make the need for foundational work even more obvious. One or two books, many disparate articles in various journals, plus the Proceedings of the Computing Anticipatory Systems (CASYS) conferences do not yet constitute a sufficient grounding. It is with this understanding in mind that I have undertaken this preliminary overview (which will eventually become my second book on the subject of anticipation). Since the time my book (1991) was published, and even more after its posting on the World-Wide Web, I have faced colleagues who were rather confused. They wanted to know what, in my opinion, anticipation is; but they were not willing to commit themselves to the subject. It impressed them; but it also made them feel uneasy because the solid foundation of determinism, upon which their reputations were built, and from which they operate, seemed to be put in question. In addition, funding agencies have trouble locating anticipation in their cubbyholes, and even more in providing peer reviews from people willing to jump over their shadow and entertain the idea that their own views, deeply rooted in the paradigm of physics and machines, deserve to be challenged. My research at Stanford University–which constituted the basis for this report–provided a stimulating academic environment, but not many possible research partners. Students in my classes turned out to be far more receptive to the idea of anticipation than my colleagues. The summary given in this section stands as a testimony to progress, but no more than that, unless it is integrated in the articulation of research hypotheses and models for future development. 4 Minds, Knowledge, Computation–a Borgesian Horizon The anticipatory nature of the mind–and by this I mean the processes of mind constitution as well as mind interaction–together with the understanding of anticipation as a distributed characteristic of the human being, represents an epistemological and cognitive premise. Let us put these ascertainments in the broader perspective of knowledge–the ultimate goal of our inquiry (knowledge at work included, of course). Niels Bohr (1934), well ahead of the illustrious founders of second-order cybernetics or of today’s constructivist model of science, risked a rather scandalous sentence: “It is wrong to think that the task of physics is to find out how nature is.” He went on to claim that “Physics concerns what we can say about nature.” In this vein, we can say that Rosen and others have proven that anticipation is a characteristic of natural processes. We can also take this description and try to make it the blueprint of various applications (some of which were reported above). 4.1 Computation and Prolepsis Computation is the dominant aspect of the Weltanschauung today. It is not only a representation, but also the mechanism for processing representations (for which reason I call the computer a semiotic engine). The attempt to reduce everything there is to computation is not new. Science might be rigorous, but it is also inherently opportunistic. That is, those constituting themselves as scientists, (i.e., defining themselves in pragmatic endeavors labeled as science) are human beings living in the reality of a generic conflict between goals and means. 
Having said this, well aware that Feyerabend (1975) et al articulated this thought even more obliquely, I have to add that anticipation as computation is, from an epistemological perspective, probably more appropriate to our understanding of the concept than what various pre-computation disciplines had to say or to speculate about anticipation. Between Epicurus’ (cf. 1933) term prolepsis–rule, or standard of judgment (the second criterion for truth)–and the variety of analytical interpretations leading to the current infatuation with anticipation, there is a succession of epistemological viewpoints. It is not that background knowledge–”the idea of an object previously acquired through sensations” to which Epicurus referred as a necessary condition for understanding–changed its condition from a criterion of truth to a computational entity. After all, computer systems used in speech recognition or in vision involve a proleptic component. (The machine is trained to recognize something identified as such.) Rather, the pragmatic framework changed, and accordingly we constitute ourselves as researchers of the world in which we live by means of computation rather than by means used in Epicurus’ physics and corresponding theory of knowledge (the canon, as it is known). What I want to say is that computation and the subsequent attempt to see anticipation as computation are but another description of the world and, particularly in the latter case, of our attempts to form an effective body of knowledge about it. In his discussion of prolepsis, in Critique of Pure Reason, Kant (1781) saw it within his description of the world, that is, in the form of “something that can be known a priori.” In Kant’s view, only the “property of possessing a degree” is subject to anticipation. Indeed, in computation we can attach certain weights to various data before the data are actually input. These weights will affect the result and, in many cases, the art; that is, the appropriateness of specifying weights influences predictions and forecasts. But no one would infer à rebours that Kant saw the world as a computation, or that knowledge was the result of a computational process. 4.2 Evolutionary Computation The substratum of basic principles on which a theory of anticipation relies (Epicurus, Kant, Rosen, etc.) affects the theory itself, and thus its possible technological implementations. It has not actually been convincingly demonstrated that we can compute anticipation. What has been accomplished, again and again, is the embodiment of anticipatory characteristics, such as prediction, expectation, management, planning, etc., in computer programs. What has also been carried out is the implementation of control mechanisms, and, bringing us closer to our subject, the modeling of selection mechanisms in the now well known genetic computing models inspired by the guiding Darwinian concept. Evolutionary computation might well end up displaying anticipatory characteristics if we take the time and the knowledge needed to apply ourselves to the task. It will not be a spontaneous birth, rather a designed and carefully executed computation. Entailment might prove the critical element, as Rosen’s work seems to indicate. 4.2.1 Co-Relation vs. Computation Once a modeling relation is established between a natural system and a formal one, we can start inferring from the formal system to the natural. Let me mention that here we are in the territory of views that often contradict each other. 
(For instance, Daniel Dubois and I are still in dialog over some of the examples to follow.) Neural networks or models of ALife, such as the simulation of collections of concurrently interacting agents, qualify as candidates for such an exercise. However, almost no effort has been made to elucidate the functioning of the causal arrow from the future to the present. In winter, temperatures will fall below the freezing point; leaves fall from deciduous trees in anticipation, but the trigger comes from a different process, i.e., the diminishing length of daylight, which stands in no direct causal relation to the freezing just mentioned. This is a co-relation of processes, not a computation, or at least not a Turing machine-based computation. The migration of birds is another example; yet others are the immune system, the sleep mechanism, the blinking mechanism, and the behavior of Pfiesteria (the single-cell microorganisms that produce deadly toxins in anticipation of the fish they will eventually kill). But if we want to stick to computation, which is a description different from the one pursued until now, we land in a domain of parallel processes, not very sophisticated, probably even less sophisticated than the level of a UNIX operating system, but of a much higher order of magnitude. We are in what was described as a big numbers-based reality. If we could control the process “shorter days,” we could eventually graph the inter-relation among the various components at work leading to the shedding of leaves during autumn, or to the sophisticated patterns of behavior of birds preparing for migration.

4.3 Large Numbers and Simple Processes

In respect to brain activity, things are definitely more complicated, but they also fall in the realm of incredibly large numbers applying to rather simple entities and processes. The ongoing CAM-Brain Project (Hugo de Garis, 1994) is supposed to result in an artificial brain of one billion neurons (compare this to the 100 to 120 billion neurons of a wet brain) implemented on Field Programmable Gate Arrays. These digital circuits can be reconfigured as the tasks at hand might require. The notion of reconfiguration speaks to our understanding of anticipation. Still, it remains to be seen whether the artificial brain will actually drive a robot or only simulate the robot’s functioning, as it also remains to be seen whether evolutionary patterns will support vision, hearing, their binding, coordinated movements, and, farther down the line, decision-making. The mind in anticipation of events (as I defined mind) is a lead. If we could parametrize the cognitive process and control the various channels, we could in principle learn more about how neuroactivity precedes moving one’s hand by 800 milliseconds, and what the consequences of this forecast are for human anticipation abilities. These are all possible experiments, after each of which we will end up not only with more data (the blessing and curse of our age!), but also necessarily with the desire to gain a better understanding of what these data mean. If Rosen’s hypothesis–that anticipation is what distinguishes the biological realm (life) from the physical world–holds, it remains to be seen whether we can do more than compute particular aspects of it–prediction, expectation, planning, etc.–outside the living.
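Returning for a moment to the co-relation example above (shorter days as a cue for the coming freeze), a toy version can show its structure. The sketch below does not model any real phenology; day lengths, temperatures, and the shedding threshold are invented. It only exhibits two processes driven by a common yearly cycle, of which the earlier one (shortening days) serves as a reliable cue for the later one (freezing temperatures) without causing it.

```python
import math

# Toy co-relation: day length and temperature are both driven by the yearly
# cycle; day length is the earlier, reliable cue for the later freeze.
# All numbers are invented for illustration; nothing here models real phenology.

def day_length(day):          # hours of daylight over the yearly cycle
    return 12 + 4 * math.cos(2 * math.pi * (day - 172) / 365)

def temperature(day):         # degrees C, same cycle, lagging by about a month
    return 10 + 15 * math.cos(2 * math.pi * (day - 202) / 365)

SHED_THRESHOLD_HOURS = 10.5   # assumed cue: shed leaves when days get this short

shed_day = next(d for d in range(172, 365) if day_length(d) < SHED_THRESHOLD_HOURS)
first_frost = next(d for d in range(172, 365) if temperature(d) < 0)

print("leaves shed around day", shed_day)
print("first frost around day", first_frost)
assert shed_day < first_frost   # the cue precedes the event it "anticipates"
```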
Pseudo-anticipation is already part of our practical experience: satellite launches, virtual surgery, pre-fetching data in order to optimize networks are but three examples of effective pseudo-anticipation. If we could create life, we could study how anticipation emerges as one of its irreducible, or only as one of its specific, properties. Short of this, ALife is involved in the simulation of lifelike processes. Rosen, in defining complexity as not simulatable, comes close to Feynman’s (1982) hope that one can best study physics by actually conducting the calculations of the world of physics on the physical entities to be studied. One can call this epistemological horizon Borgesian, knowing that an ideal Borgesian map was none other than the territory mapped. At this point, we need to arrive at a deeper understanding of what we want to do. Regardless of the metaphor, the epistemological foundation does not change. The knowing subject is already shaped by the implicit anticipatory dimension of mind interaction; in other words, the answer to the question meant to increase our knowledge is anticipated. Computation is as adequate a metaphor as we can have today, provided that we do not expect the metaphor to automatically generate the answers to our many questions. Regardless, the question concerning anticipation in the living and in the non-living is far from being settled, even after we might agree on a computational model or expand to something else, such as co-relation, which could either transcend computation or expand it beyond Turing’s universal machine. 5 Revisiting Non-Locality I took it upon myself to approach these matters well aware that I am advancing in mined territory. Comparisons notwithstanding, such was the situation faced by the proponents of quantum theory. To nobody’s surprise, Einstein took quantum mechanics, as developed by Heisenberg, Schrödinger, Dirac, et al, under scrutiny, and, well before the theory was even really established, raised objections to it, as well as to Bohr’s interpretation. From these objections (the complete list is known as the EPR Paper, 1935, for Einstein, Podolski, and Rosen), one in particular seems connected to the subject of anticipation. Einstein had a major problem with the property of non-locality–the correlations among separated parts of a quantum system across space and time. He defined such correlations as “spooky actions at distance” (”spukhafte Fernwirkungen”), remarking that they have to take place at speeds faster than that of light in order to make various parts of the quantum system match. In simple terms, this spooky action at distance refers to the links that can develop between two or more photons, electrons, or atoms, even if they are remotely placed in the world. One example often mentioned is the decay of a pion (a subatomic particle). The resulting electron and positron move in opposite directions. Regardless how far apart they are, they remain connected. We notice the connection only when we measure some of their properties (well aware of the influence measurement has), their spin, for example. Since the initial pion had no spin, the electron and the positron will have opposite sense spins, so that the net spin is conserved at zero. So, at distance, if the spin of the electron is clockwise, the spin of the positron is counter-clockwise. It would be out of place to enter here into the details of the discussion and the ensuing developments. 
Let me mention only that in support of the EPR document, Bohm (1951) tried, through his notion of a local hidden variable, to find a way for the correlations to be established at a speed lower than that of light. He wanted to save causality within quantum predications. Bohm’s attempt recalls what the community of researchers is trying to accomplish in approaching aspects of anticipation (such as prediction, expectation, forecast, etc.) with the idea that they cover the entire subject. Bell (1964, 1966) produced a theorem demonstrating that certain experimental tests could distinguish the predictions of quantum mechanics from those of any local hidden variable theory. (Incidentally, physicist Henry P. Stapp, 1991 characterized Bell’s theorem as “the greatest discovery of all science.”) Again, this recalls by analogy Rosen’s position, according to which anticipation is what (among other things) distinguishes the living from the rest of the world. It states that we can clearly discern a particular aspect of anticipation provided in some formal description or in some computer implementation from one that is natural. I mention these two episodes from a history still unfolding in order to explain that what we say in respect to nature–as Bohr defined the goal of physics–will be ultimately subjected to the test of our practical experiences. Einstein has been proven wrong in respect to his understanding of non-locality through many experiments that baffle our common sense, but his theory of relativity still stands. Spooky actions at distance are a very intuitive description of how someone educated in the spirit of physical determinism and thinking within this spirit understands how the future impacts the present, or how anticipation computes backwards from the future to the present. He, like many others, preached the need for learning “to see the world anew,” but was unable to position himself in a different consciousness than the one embodied in his theory. As I worked on this text (more precisely, after reworking a draft dated July 22, 1999), Daniel Dubois graciously drew my attention to a number of his research accomplishments pertinent to the connection between anticipation and non-locality. Indeed, over the last seven years, he has applied his mathematical formalism to quite a number of computational aspects of anticipation. Consequently, he was able to establish, by means of incursion and hyperincursion, that the computation pertinent to the membrane neural potential (used as a model of a brain) “gives rise to non-locality effects” (Dubois, 1999). His argument is in line with von Neumann’s analogy between the computer and the brain. But we are not yet beyond a first analogy (or reference). Non-locality is, in the last analysis, distance independent. Furthermore, non-locality is not a limited characteristic of the universe, but a global rule. In the words of Gribbin (1998), non-locality “cuts into the idea of the separateness of things.” If the “no-signaling” criterion (energy or information travel no faster than the speed of light) protects the “chain of cause and effect,” (effects can never happen before their causes), non-locality ensures the coherence of the universe. Reconciliation between non-locality and causality might therefore be suggestive for our understanding of anticipation. In such a case, the co-relation among elements involved in anticipation can be seen as a computation, but one different in nature from a digital computer, i.e., in a Turing machine. 
It follows from here that anticipation understood as co-relation–a notion we will soon focus on–must be a computation different in type than that embodied in a Turing machine. 5.1 Quantum Semiotics, Link Theory, Co-Relation Let me preface this section ascertaining that anticipation is a particular form of non-locality, which is quite different from saying that there is non-locality in anticipation. (This is what actually distinguishes my thesis from the results of Dubois.) More precisely, its object is co-relations (over space and time) resulting from entanglements characteristic of the living, and eventually extending beyond the living, as in the quantum universe. These co-relations correspond to the integrated character of the world, moreover, of the universe. Our descriptions ascertain this character and are ultimately an active constituent of this universe. We introduce in this statement a semiotic notion of special significance to the quantum realm: Sign systems not only represent, but also constitute our universe. As with qubits (information units in the quantum universe), we can refer to qusigns as particular semiotic entities through which our descriptions and interpretations of quantum phenomena are made possible. 5.1.1 The Semiotic Engine As a semiotic engine (Nadin, 1998), a digital computer processes a variety of possible descriptions of ourselves and of the universe of our existence. These descriptions can be indexical (marks left by the entity described), iconic (based on resemblance), or symbolic (established through convention). Anticipatory computation is based on the notion that every sign is in anticipation of its interpretation. Signs are not constituted at the object level, but in an open-ended infinite sign process (semiosis). In sign processes, the arrow of time can run in both directions: from the past through the present to the future, or the other way around, from the future to the present. Signs carry the future (intentions, desires, needs, ideals, etc., all of a nature different from what is given, i.e., all in the range of a final cause) into the present and thus allow us to derive a coherent image of the universe. Actually, not unlike the solution given in the Schrödinger equation, a semiosis is constituted in both directions: from the past into the future, and from the future into the present, and forward into the past. The interpretant (i.e., infinite process of sign interpretation) is probably what the standard Copenhagen Interpretation of quantum mechanics considered in defining the so-called “intelligent observer.” The two directions of semiosis are in co-relation. In the first case, we constitute understandings based on previous semiotic processes. In the second, we actually make up the world as we constitute ourselves as part of it. This means that the notion of sign has to reflect the two arrows. In other words, the Peircean sign definition (i.e., arrow from object to representamen to interpretant) has to be “reworded”: Fig. 3 Qusign definition The language of the diagram allows for such a “rewording” much better than so-called natural language: The interpretant as a sign refers to something else anticipated in and through the sign. (Peirce’s original definition of sign is, “something which stands to somebody in some respect or capacity,” 2.228.) 
Qusigns are thus the unity between the analytical and the synthetic dimension of the sign; their “spin” (to borrow from the description of qubits) can well describe the particular pragmatics through which their meaning is constituted. 5.1.2 Knowing in Advance The 1930 Copenhagen Interpretation of quantum mechanics (developed primarily by Bohr and Heisenberg) should make us aware of the fact that observation (as in the examples advanced by Rosen, et al), measurement (as in the evaluation of learning performance of neural networks), and descriptions (such as those telling us how a certain software with anticipatory features works) are more pertinent to our understanding of what we observe, measure, or describe than to understanding the phenomena from which they derive. To measure is to describe the dynamics of what we measure. The coherence we gain is that of our own knowledge, where dynamics resides as a description. Albeit, the anticipation chain takes the path of something that smacks of backward causality, which the established scientific community excluded for a long time and still has difficulty in understanding. Quantum particle “tunneling”–a phenomenon related to quantum uncertainty and to wave-particle duality–might explain our own existence on the planet, but we still don’t know what it means (as Feynman repeatedly stated it, verbally and in writing, 1965). Quite a number of experiments (cf. Raymond Chiao, University of California-Berkeley; Paul Kwiat, University of Innsbruck; Aephraim Steinberg, US National Institute of Standards and Technology, Maryland, among others) ended up confirming that “the way in which a photon starting out on its journey behaves” in different experimental set-ups suggests that anticipation is at work in the quantum realm. They behave (cf. Gribbin, 1999) as if they “knew in advance what kind of experiment they were about to go through.” In view of these experiments, Rosen would have a hard time trying to argue that anticipation is a property exclusive of the living. Moreover, we find in such examples the justification for quantum semiotics: “The behavior of the photons at the beam-splitter is changed by how we are looking at them, even when we have not yet made up our minds about how we are going to look at them. The computer-controlled pseudo-random layout of the device used in the experiment is anticipated by the photon,” (Gribbin and Chimsky, 1996). In other words, it is an interpretant process. I should mention here that within the relatively young field of mathematical research called link theory, a framework that generalizes the notion of causality is established in a way that removes its unidirectionality (cf. Etter, 1999). The relational aspect of this theory makes it a very good candidate for a closer look at anticipation, in particular, at what I call co-relations. 5.1.3 Coupling Strength In various fields of human inquiry, the clear-cut distinction between past, present, and future is simply breaking down. No matter how deep and broad grudges against a reductionist physical model (such as Newton’s) are, Newtonian dynamics is reversible in time, and so is quantum mechanics. The goal of producing a “unified” description of the universe can be justified in more than one way, but regardless of the perspective, coupling strength is what interests us, that is, what “holds” the “universe” together. This applies to the coherence of the human mind, as it applies to monocellular organisms or to the cosmos at large. 
It might be that anticipation, in a manner yet unknown to us, plays a role in the coupling of the many parts of the universe and of everything else that appears as coherent to us. Galileian and Newtonian mechanics advanced answers, which were subsequently reformulated and expressed in a more comprehensive way in the theory of relativity (special and general), and afterwards in quantum theories (quantum mechanics, quantum field theory, quantum gravity). In the mechanical universe, to anticipate could mean to pre-compute the trajectory of the moving entity seen as constitutive of broad physical reality. But the causal chain is so tight that the fundamental equation allows only for the existence of recursions (from the present to the future), which we can represent by stacks and compute relatively easily. The past is closed; the future, however, is open, since we can define ad infinitum the coordinates of the changing position of a moving entity. No guesswork: Everything is determined, at least up to a certain level of complexity. Relativity does not do away with the openness of the future, but makes it more difficult to grasp. Within black holes, inherent in the relativistic description but not reducible to them, time is cyclic. In Einstein’s curved space-time, a circular “time-line” (Etter’s pun) is no more surprising than a “circle around a cylinder in ordinary space.” This, however, leads to a cognitive problem: how to accommodate a cycle with openness. Anticipation related to this description of time is quite different from that which might be associated with a physical-mechanical description. 5.2 Possible and Probable Quantum theories, as we have suggested, pose even more difficult questions in regard to non-locality, and thus to entanglement. In this new cognitive territory, things get even more difficult to comprehend. Determinism, which means that something is (1) or is not (0) caused by something else, gives way to a probabilistic and/or possibilistic distribution: Something is caused probably (i.e., to a certain degree expressed in terms of probability, that is, statistic distribution) by something else. Or it is caused possibly (in Zadeh’s sense, 1977), which is a determination different from probability (although not totally unrelated), by something else. Probabilistic influences can be represented through a transition matrix. Given the relation between two entities A and B and their respective states, we can define a Markov chain, i.e., a transition matrix whose ijth entry is the probability of i given j. Such a chain tells us how influences are strung together (chained) and can serve as a predictive mechanism, thus covering some subset of what we call anticipation. Recently, weather satellite observations of the density of green vegetation in Africa (an indication of rainfall) were connected through such processes to the danger of an outbreak of Rift Valley Fever, in which Linthicum (1999) devised a metrics based on climate indicators for a forecasting procedure. The “black boxes” chained in such processes have a single input and a single output representing the complete state variable of the system as it changes over time. Climate and health (the risk of malaria, Hanta virus, cholera) are related in more than one way (Epstein, 1999). These examples are less probabilistic than possibilistic. If we pursue possibilities, that is, infer from a determined set of what is possible, a different form of prediction can eventually be achieved. 
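The transition-matrix idea can be written down in a few lines. The two states, the matrix entries, and the reading of vegetation density as a risk indicator are assumptions made only for this sketch; they do not reproduce Linthicum’s metrics.

```python
# Minimal transition-matrix sketch: the ij-th entry is the probability of
# state i given state j. States and probabilities are invented for illustration
# and do not reproduce any published metrics.

states = ["low_risk", "high_risk"]

# P[i][j] = probability of next state i given current state j
P = [[0.9, 0.4],
     [0.1, 0.6]]

def step(distribution):
    """One chain step: propagate a probability distribution over the states."""
    return [sum(P[i][j] * distribution[j] for j in range(len(states)))
            for i in range(len(states))]

dist = [1.0, 0.0]            # start: certainly in "low_risk"
for month in range(6):
    dist = step(dist)
    print(month + 1, [round(p, 3) for p in dist])
```

Chains of this kind remain probabilistic; the possibilistic reading just mentioned, which starts from what is possible rather than from observed frequencies, is a different matter.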
Abductive inferences belong to this category and are characteristic of functional diagnosis procedures. Here we have an example of semiotics at work, i.e., abductions on symptoms, not really far from what Epicurus meant by prolepsis.

5.2.1 Linked Incursions

For the aspects of anticipation that belong to a non-deterministic realm, we can further try to link descriptions of the form

y = f(x) or z = g(w) (12a, b)

Indeed, if we substitute y for w, our descriptions become

y = f(x) and z = g(y), that is, z = g(f(x)) (13a, b, c)

The result is a functional relation of the composed functions. Without going into the details of Etter’s theory, let me suggest that it can serve as an efficient method for encoding a variety of relations (not only in the case of the identity of two variables). If in the functional description we substitute not the variables (w with y, as shown in the example given above) but the relation between them, we reach a different level of relational encoding that can better support modeling. I even suggest that recursions, incursions, and hyperincursions can be defined for co-related events. For example:

x(ti+1) = f[x(ti), x(ti+1), p] (14)
y(tj+1) = g[y(tj), y(tj+1), r] (15)

in which time in the two systems is obviously not the same (ti ≠ tj). A co-relation of time can be established, as can a co-relation among the states x(ti) and y(tj) of the two systems, through the intermediary of a third system acting as the “conductor,” or coordinator, z(ti, tj, tk), i.e., dependent upon both the time in each system and its own time metrics. To elaborate on the mathematics of linked incursions goes beyond the intentions of this paper. Let us not forget that we are pursuing an analysis of the particular ways in which anticipation takes place in the successive unified descriptions of the universe produced so far.

5.2.2 Alternative Computations

In the quantum perspective of a double identity–particle and wave–trajectory is the superposition of every possible location that a moving entity could conceivably occupy. This is where recursivity, in the classic sense, breaks down. I suspect that Dubois was motivated to look beyond recursivity for improved mathematical tools, to what he calls incursion and hyperincursion, for this particular reason. But I also suspect that linked incursions and hyperincursions will eventually afford more results in dealing with various aspects of anticipation and non-locality. In respect to the explicit statement, prompted by quantum mechanics non-locality, that anticipation could be a form of computation different from that described by a Turing machine, it is only in the nature of the argument to say that full-fledged anticipation, not just some anticipatory characteristics (prediction, planning, forecasting, etc.), is probably inherent in quantum computation. Rosen recognized early on (1972) that quantum descriptions were a promising path, although among his publications (even more manuscripts belong to his legacy, cf. 1999) there are no further leads in this direction. Efforts to transcend digital computing through quantum computation are significant in many ways. From the perspective of anticipation, I think Feynman’s concept comes closer to what we are after: understanding quantum dynamics not by using a digital computer (as in the tradition of reductionist thinking), but by making use of the elements involved in quantum interactions. As the situation is loosely described: Nature does this calculation all the time!
The same thing can be said about protein folding, a typical anticipatory process–a small increase in energy (warming up) drives the folding process back, only in order to have it repeated as the energy decreases. This process might also well qualify as an anticipatory computation, with a particular scope, not reducible to digital computation. (As a matter of fact, protein folding exceeds the complexity of digital computation.) It is an efficient procedure, this much we know; but about how it takes place we know as little as about anticipation itself. 5.2.3 Anticipation as Co-Relation (Or: Co-relation as Anticipation?) Having advanced the notion of anticipation as a co-relation, I would like to point to instances of co-relation that are characteristic of experiences of practical human self-constitution in fields other than the much researched control theory of mechanisms, economic modeling, medicine, networking, and genetic computing. There is, as Peat (undated) once remarked, a strong concern with “a non-local representation of space” in art and literature. The integration of many viewpoints (perspectives) of the same event illustrates the thought. Reconstruction (in the perception of art and literature) means the realization of a future state (describable as understanding or as coordination of the aesthetic intent with the aesthetic interpretation) in the current state of the dynamic system represented by the work of art or of writing, and by its many interpreters (open-ended process). In Descartes’ and Newton’s traditions, space and time are local: a taming of artistic expression took place. Peat claims that the “tableau,” i.e., the painting, becomes a snapshot in which “motion and change is frozen in a single instant of time. This is a form of objectivity which the concert, the novel, and the diarist express.” With the advent of relativity and quantum physics, many perspectives are overlaid. As Peat puts it, “In our century, painting has returned to the non-local order.” This holds true for writing (think about Joyce), as well as it does for the dynamic arts (performance, film, video, multimedia). Complementary elements, entangled throughout the unifying body of the work or of its re-presentation, are brought into coherence by co-relations within non-locality-based interactions. Peat goes on to show that communication “cries our for a non-local” description: source and receiver cannot be treated as separable entities. (They are linked, as he poetically describes the process, “by a weak beam of coherent light.”) Meaning—which “cannot be associated exclusively with either participant” (n.b., in communication)—could be “said to be ‘non-local’.” 6 The Relational Path to Co-Relations That computation, in one of its very many current forms or in a combination of such forms (such as hybrid algorithmic-nonalgorithmic computations), can embody and serve as a test for hypotheses about anticipation should not surprise. Neither should the use of computation imply the understanding that anticipation is ultimately a computation, that it is the only form, or the appropriate form, through which we can implement anticipation-based notions. It is an exciting but dangerous path: If everything is described as a computation—no matter how different computation forms can be—then nothing is a computation, because we lose any distinguishing reference. Epistemologically, this is a dead end. 
Furthermore, it has not yet been established whether information processing is a prerequisite of anticipation or only one means among many for describing it. While we could, in principle, embody anticipatory features in computer programs, we might miss a broad variety of anticipation characteristics. For instance, progress was made in describing the behavior of flocks (cf. The Swarm Simluation System at the Santa Fe Institute). But bird migration goes far beyond the modeled behavioral interrelationships. Trigger information differentials, group interaction, learning, orientation, etc. are far more sophisticated than what has been modeled so far. The immune system is yet another example of a complexity level that by far exceeds everything we can imagine within the computational model. Be all this as it may, our current challenge is to express co-relations, which appear as predefined or emerging relations in a dynamic system, by means of information processing in some computational form, or by means of describing natural entanglements. If we could reach these goals, we would effect a change in quality–from a functional to a relational model. Here are some suggestions for this approach. 6.1 Function and Relation Relations between two or among several entities can be quite complicated. A solid relational foundation requires the understanding of what distinguishes relation from function. For all practical purposes, functions (also called mappings) can be linear or non-linear. (Of course, further distinctions are also important: They can be many or single-valued, real or complex-valued, etc.) Relations, however, cover a broader spectrum. A relation of dependence (or independence) can be immediate or intermediated. It can involve hierarchical aspects (as to what affects the relation more within a polyvalent connection), as well as order or randomness. Relations, not unlike functions, can be one-to-one, one-to-many, many-to-one, many-to-many. We can define a negation of a relation, a double negation, inverse relation, etc. A full logic of relations has not been developed, as far as I know. Rudimentary aspects are, however, part of what after Peirce (1870, 1883) and Schröder (The Circle of Operation of Logical Calculus, 1877) became known as a logic of relations. Russell and Whitehead (Principia Mathematica, 1910) made further clarifications. Let us assume a simple case: xRy, in which x stands in relation to y (son of, higher than, warmer than, premise of, etc.). If we consider various aspects of the world and describe them as relationally connected, we can wind up with statements such as xR1y, zR2w, etc. In this form, it is not clear that Ri exhausts all the relations between the related entities; neither is it clear to what extent we can establish further relations between two relations Ri and Rj and thus eventually infer from their interrelationship new relations among entities that did not have an apparent relation in the first place. In a wide sense, a relation is an n-ary (n=1, 2, 3….) “connection”; a binary relation is a particular case and means that the relation xRy is true or false for a pair x,y in the Cartesian product XxY. As opposed to functions, for which we have relatively good mathematical descriptions, relations are more difficult to encode, but richer in their encodings. Their classification (e.g., inverse relation, reflexive, symmetric, transitive, equivalent, etc.) 
is important insofar it leads to higher orders (e.g., a reflexive and transitive relation is called a pre-ordering, while an ordering is a reflexive, transitive, and antisymmetric relation). 6.1.1 N-ary Relations If we revisit some of the examples of anticipation produced so far in the literature–Rosen’s deciduous trees, Peat’s communication as a non-local unifying process, Linthicum’s and Epstein’s metrics of weather data and disease patterns, the cognitive implications of the many competing models from which one is eventually instantiated in an action, or the hyperincursion mechanism developed by Dubois (to name but a few)–it becomes obvious that we have chains of n-ary relations: xRin y (in which Rin is a specific Ri n-ary relation); that is, in a given situation, several relations are possible, and from all those possible, some are more probable than others. To anticipate means to establish which co-relations, i.e., which relations among relations are possible, and from those, which are most probable. Anticipation is a process. It takes place within a system and we interpret it as being part of the dynamics of the system. Observed from outside the system–deciduous trees lose their leaves, birds migrate, tennis players anticipate the served ball–anticipation appears as goal-driven (teleologic). In particular, coherence is preserved through anticipation; or a different coherence among the variables of a situation is introduced (such as playing chess, or predicting market behavior). Pragmatically, this results in choices driven by possibilities, which appear as embodied in future states. The tennis ball is served and has to be returned in a well defined area–and this is an important constraint, an almost necessary condition for the game ever to take place! At a speed of over 100 miles per hour, the served ball is not returned through a reaction-based hit, but as a result of an anticipated course of action, one from among many continuously generated well ahead of the serve or as it progresses. If the serving area is increased by only 10%, chances for anticipation are reduced in a proportion that changes the game from one of resemblance and order to a chaotic, incoherent action that makes no competitive sense. The competition among the various models (all possibilities, but along a probability distribution corresponding to the particular style of the serving player) allows for a successful return, itself subject to various models and competition among them. The whole game can be seen as an unfolding chain of co-relations, i.e., a computation controlled by a range of acceptable parameters. The immune system works in a fundamentally similar fashion. Co-relations corresponding to a wide variety of acceptable parameters are pursued on a continuous basis. Acclimatization, i.e., the way humans adapt to changes in seasons, is but a preservation of the coherence of our individual and collective existence under the influence of anticipated changes in temperature, humidity, day-night cycle, and a number of other parameters, some of which we are not even aware. 6.1.2 Instantiated Co-Relations But having given the example of an unfolding sequence does not place us in the domain of non-locality. For this we need to distinguish between the diachronic and synchronic axes. A strictly deterministic explanation will always place the anticipated in the sequence of cause-and-effect/action-reaction. The tennis ball is served, days are getting shorter, a virus causes an infection–all seen as causes. 
In the anticipatory view, the ball is actually not yet served as the sequence of models, from among which one will become the return, started being generated. The anticipation leading to the fall of leaves is the result of a co-relation involving more than one parameter. What appears as a reaction of the immune system is actually also a co-relation involving the metabolism and self-repair function. On the one hand, we have an unfolding over time; on the other, a synchronic relation that appears as an infinitely fast process. In reality we have a co-relation, an intertwining of many relations among a huge number of variables of which we are only marginally, if at all, aware. Assuming that we have a good description of the n-ary relations R1n, R2n,… Rin, moreover that we can even “relate” relations of a different order (n=3 vs. n=4, for instance), and express this relation in a co-relation, it becomes clear that co-relations are descriptive of higher order relations. For example, two binary relations are identical when their converses are identical. In any sequences of the form xRiy, zRjw, uRkv, etc. we are trying to identify what the relation is among the various relations Ri, Rj, Rk, etc., represented by Ri Ra Rj, Rj Rb Rk, etc. The co-relations, Ra , Rb , Rg (e.g., son of and daughter of correspond to progeny, but among the co-relations, we will find similarity or distinction, among other things) can apply to the subsets of all Ri (i=1,… n) sharing a certain distinctive characteristic (such as similarity). We can further define referents (Ref) and relata (Rel), as well as a relation between referents or relata denoted as Sg (Sagitta, i.e., arrow). By no accident, the arrow can graphically suggest a dynamics from the present to the future (prediction), or the other way around, from the future to the present (anticipation). After Peirce, Tarski (1941) produced an axiomatized theory of relation that, not unlike Boolean logic, could serve as a basis for effective computations of relations and co-relations. It is quite possible that the computation of co-relation could be built around the formalism of quantum computing. In this case, we would operate on the value of the entanglement, not on the state of a particle. It is a task that invites further work. Last but not least, we invite the thought of considering relations among incursions and hyperincursions as a means of testing their descriptive power even more deeply. 6.2 Making Use of the Co-Relation Model Having advanced this model of anticipation as a form of computation, based on the dynamic generalization of models and on competition among them, and encoded in a formalism that captures co-relations (thus the spirit of non-locality), I would like to present some examples speaking in favor of an understanding of anticipation that occasionally comes close to what I have proposed above. These are not direct applications of the theory I have advanced so far, rather they are suggestive of its possible directions, if not of its meaning. 6.2.1 Anticipatory Document Caching Incidentally, anticipatory document caching with the purpose of reducing latency on Web transactions is introduced in a language reminiscent of Einstein’s observation, “Everyone talks about the speed of light but nobody ever does anything about it.” The reason for the provocative introduction is obvious: interactive HTML (i.e., text transmission through the Web) requires at least T-1 connection speeds (i.e., 1.5M bps). 
Once images are used, the requirement increases to T-3 lines (45M bps). Cross-country interactive screen images push the limit to 155M bps. Places such as the major cities on the West Coast of the USA (San Francisco, Los Angeles) are at least 85 milliseconds away from cities on the East Coast (Boston, New York). Interactivity under the limitations of the speed of light–assuming that we can send data at such speed and on the shortest path–is an illusion. In view of this practical observation, those involved in the design of networks, of communication protocols, of client-server access and the like are faced with the task of reducing the time between access request and delivery. Among the methods used are the utilization of inter-request bandwidth (transfer of unrequested files when no other use is made), proactive requests (preloading a client or intermediate cache with anticipated requests), optimization of topology (checking where files will be best used, combining identical requests and responses over shared links). What Touch et al (1992, 1996, 1998) accomplished is an effective procedure for providing co-relations. Evidently, they realize that such correlations cannot rely on a second channel through which requests would travel faster than the information itself. Accordingly, they initiate processes in fact independent of the communication between the client and the remote server. Such processes facilitate an anticipatory behavior based on predictive cues corresponding to the searched information. They also define where in a network of such optimization servers should be placed. I insist upon this mechanism of implementation not only because of its significance for the networked community, but primarily in view of the understanding that anticipatory computation is one of producing meaningful co-relations. The entanglement between the search process and pre-fetching data is stricto sensu a pseudo-anticipation. But so are all other implementations known to date. These are all models of possible actions, and it is quite practical to think of generating even more models as the user gets involved in a certain transaction. 6.2.2 Software Design The same idea was implemented by high-end 3D modeling software (e.g., UNIGRAPHICS), under the guidance of a better understanding of what designers can and would do at a certain juncture in visualizing their projects. The use of computation resources within such programs makes for the necessity to anticipate what is possible and to almost preclude functions and utilities that make no sense at a certain point. This is realized through a STRIM function. Instead of allowing the program to react to any and all possible courses of action, some functions are disabled. Henceforth, the functions essential to the task can take advantage of all available resources. (This is what STRIM makes possible.) It is by all practical means a pro-active concept based on realizing the co-relations within the various components of the program. 6.2.3 Agents Coordination Another aspect of co-relation is coordination. It can be ascertained that cooperative activities can take place only if a minimum of anticipation–in one or several of the forms discussed so far–is provided. This applies to every form of cooperation we can think of: commerce, work on an assembly line (where anticipation is built in through planning and control mechanisms), the pragmatics of erecting a building, the performing arts, sports. Coordination is a particular embodiment of anticipation. 
It can be expressed, for instance, in requirements of synchronization defined to ensure that from a set of possibilities the optimum is actually pursued. Thus, in a given situation, from a broad choice of what is possible, what is optimal is accomplished. The goal is to maximize the probability of successful cooperation. This is achieved by implementing anticipatory characteristics. I would like to mention here as an example the Robo Cup world champion, designed and implemented by Manuela Veleso, Peter Stone, and Michael Bowling (of Carnegie Mellon University). This is an autonomous agent collaboration with the purpose of achieving precise goals (in this case, winning a soccer game between robotic teams) in a competitive environment. Stated succinctly in the words of the authors, “Anticipation was one of the major differences between our team and the other teams,” (1998). Let us focus on this aspect and briefly describe the solution. What was accomplished in this implementation is a model of an unfolding soccer game. But instead of the limited action-reaction description, the authors endowed the “players” (i.e., agents) with the ability to maximize their contributions through anticipatory movements corresponding to increasing the team’s chance to execute successful passes leading to scoring. It is a relational approach: Agents are placed in co-relation (”taking into account the position of the other robots–both teammates and adversaries”) and in respect to the current and possible future positions of the ball. It is evidently a multi-objective description, that is, a dynamic set of models, with what the authors call “repulsion and attraction points.” The anticipation algorithm (SPAR, Strategic Positioning with Attraction and Repulsion) contains weighted single-objective decisions. Correctly assuming that transitions among states (i.e., choices among the various models) for each of the cooperating agents takes time (computing cost, in a broader sense), the authors implement the anticipatory feature in the form of selection procedures. The goal is to increase (ideally, to find the maximum) the probability of future collaboration as the game unfolds. The agents are given a degree of flexibility that results in adjustment supposed to enhance the probability of individual actions useful to the team. Additionally, an algorithm was designed in order to allow the “players” (team agents) to position themselves in anticipation of possible collaboration needs among teammates. Individual action and team collaboration are coordinated in anticipation (i.e., predictive form) of the actions of the opponents. At times, though, the anticipatory focus degrades to reactive moves. Less successful in the competition, but inspired by Rosen’s definition, the team of the University of Caen (France) defined the following program: “Anticipation allows the consideration of global phenomena that cannot be treated through a local reactive approach. The anticipation of the actions of the adversary or of its teammates, the anticipation of the change of the other teamplayers’ roles, the anticipation of the ball’s movements, and the anticipation of conflicts among teammates are some of the forms of anticipation that our system tries to account for,” (Stinckwich, Girault, 1999). 6.2.4 Auto-Associative Memories Along the same line of thought, it is worth mentioning that in the area of cognitive sciences, neural architectures involving auto-associative memories are used in attempts to implement anticipatory characteristics. 
Such memories reproduce input patterns as output. In other words, they mimic the fact that we remember what we memorize, which in essence we can describe through recursive or, better yet, incursive functions. The association of patterns of memorized information with themselves is powerful because, in remembering, we provide ourselves part of what we are looking for; that is, we anticipate. The context is supportive of anticipation because it supports the human experience of constituting co-relations. We can apply this to computer memory. Instead of memory-gobbling procedures, which hike the cost of computation and affect its effectiveness, auto-associative memory suggests that we can better handle fewer units, even if these are of a bigger size. Jeff Hawkins (1999), who sees “intelligence as an ability … to make successful predictions about its input,” i.e., as an internal measure of sensory prediction, not as a measure of behavior (still an AI obsession) applied his pattern classifier to handprinted-character recognition. The Palm Pilot�™ might sooner than we think profit from the anticipatory thought that went into its successful handwriting recognition program that Hawkins authored. 6.3 Interactivity Such and similar examples are computational expressions of the many aspects of anticipation. Their interactive nature draws our attention towards the very telling distinction between algorithmic and interaction computation. In algorithmic computation, we basically start with a description (called algorithm) of what it takes to accomplish a certain task. The computer–a Turing machine–executes a single thread operation (the van Neumann paradigm of computation) on data appropriately formatted according to syntactic constraints. As such, the process of computation is disconnected from the outside world. Accordingly, there is no room for anticipation, which always results from interaction. In the interactive model, the outside world drives the process: Agents react to other agents; robots operate in a dynamic environment and need to be endowed with anticipatory traits. Searches over networks, not unlike airline ticket purchasing and other interactive tasks, are driven by those who randomly or systematically pursue a goal (find something or let something surprise you). As Peter Wegner (1996), one of the proponents of interactive computation expresses it, “Algorithms are ’sales contracts’ that deliver an output in exchange for an input. A marriage contract specifies behavior for all contingencies of interaction (’in sickness and health’) over the lifetime of the object (’till death do us part’).” The important suggestion here is that we can conceive of object-based computation in which object operations (two or more) share a hidden state. Fig. 4 Interactive computation: the shared state None of the operations (or processes) are algorithmic, since they do not control the shared state, but participate in an interaction through the shared state. They are also subject to external interaction. What is of exceptional importance here is that the response of each operation to messages from outside depends on the shared state accessed through non-local variables of operations. The non-locality made possible here corresponds to the nature of anticipation. Interactive systems are inherently incomplete, thus decidable in Gödel’s sense (i.e., not subject to Gödelian strictures in respect to their consistency). 
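A minimal sketch of the shared-state idea may help here. It assumes nothing beyond what the paragraph above describes: two operations whose responses to outside messages depend on a state that neither of them controls alone. The class names and values are, of course, hypothetical.

```python
# Sketch of interactive computation with a shared (non-local) state:
# neither operation controls the state on its own, and each response
# depends on what the other operation has done to it in the meantime.

class SharedState:
    def __init__(self):
        self.value = 0

class Operation:
    def __init__(self, name, shared):
        self.name = name
        self.shared = shared          # non-local variable, visible to both

    def respond(self, message):
        # The answer depends on the external message AND on the shared state.
        self.shared.value += message
        return f"{self.name} sees shared={self.shared.value}"

state = SharedState()
a = Operation("A", state)
b = Operation("B", state)

# Interleaved external messages: the history of the interaction, not a
# precomputable input, determines each individual output.
print(a.respond(1))   # A sees shared=1
print(b.respond(2))   # B sees shared=3
print(a.respond(1))   # A sees shared=4  (a different answer to the same message)
```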
Interactivity requires that the computation remain connected to the practical experiences of human self-constitution, i.e., that we overcome the limitations of syntactically limited processing, or even of semantic referencing, and reach the pragmatic level. Processes in this kind of computation are multi-threaded, open-ended, and subject to predictive or not predictive interactions. The Turing machine could not describe them; and implementation in anticipatory computing machines per se is probably still far away. This brings up, somehow by association, the question of whether the category of artifacts called programs are anticipatory by design or by their condition. The question is pertinent not only to computers, since in the language of modern genetics, programming (as the encoding of DNA, for example) plays an important role. It is, however, obvious that silicon hardware (as one possible embodiment of computers) and DNA are quite different, not only in view of their make-up, but more in view of their condition. If birds are “programmed” for their migratory behavior, then these “programs” are based on entailment schemes of extreme complexity. The same applies even more to the immune system. 6.3.1 Virtual Reality A special category of interactive computation is represented by virtual reality implementations, all intrinsically pseudo-anticipatory environments of multi-sensorial condition. In the virtual domain, a given set of co-relations can be established or pursued. Entanglement is part of the broader design. Various processes are triggered in a confined space-and-time, i.e., in a subset of the world. Non-locality is a generic metaphor in the virtual realm made possible by the integration of the human subject. Sure, as we advance towards molecular, biological, and genetic computation–where the distinction between real and virtual is less than clear-cut–we reach new levels of pragmatic integration. Evolutionary computation will probably be driven by the inherent anticipatory characteristic of the living. As designs of computation processes at the chromosome level are advanced, a foundation is laid for computation that involves and facilitates self-awareness. Interaction at this level goes deeper than interaction embodied in the examples mentioned above; that is, at this level, mind-interaction-like mechanisms are possible, and thus true anticipation (not just the pseudo type) emerges as a structural property. We are used to the representation of anticipatory processes through models that have a higher speed than the systems modeled: A rocket launch is anticipated in the simulation that “runs” ahead of the real time of the launch. The program anticipates, i.e., searches for all kinds of correlations–the proper functioning of a very complex system consisting of various elements tightly integrated in the whole. We have here, not unlike the case of data pre-fetching, or of integration through search in a space of possibilities, or of auto-associative memory, a mechanism for ensuring that co-relations are maintained above and beyond the deterministic one-directional temporal chain. The more interesting bi-directional chain is not even imaginable in such applications. The spookiness of anticipatory computation is not only reducible to the speed of interactions that worried Einstein. It also involves a bi-directional time arrow. 
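The "model that runs ahead of the real time of the system" can also be sketched in a few lines. The code below is a toy illustration, not a description of any actual launch simulator: a crude model of the monitored process is stepped several ticks into the future at every real tick, and a warning is raised before the threshold is actually crossed.

```python
# Toy sketch of anticipation by a faster-than-real-time model.
# All dynamics and thresholds are invented; only the mechanism matters.

def model_step(state):
    """Crude model of the monitored process (here: a slowly rising quantity)."""
    return state * 1.15 + 1.0

def anticipate(state, horizon, threshold):
    """Run the model `horizon` steps ahead of real time and report the first
    future step (if any) at which the threshold would be crossed."""
    s = state
    for k in range(1, horizon + 1):
        s = model_step(s)
        if s > threshold:
            return k
    return None

real_state = 20.0
for tick in range(12):
    warn = anticipate(real_state, horizon=5, threshold=80.0)
    if warn is not None:
        print(f"tick {tick}: threshold predicted to be crossed in {warn} step(s)")
    real_state = model_step(real_state)   # the real process advances one step
```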
The account given in this paper, which simultaneously occasioned the advancement of my own model, identifies the many perspectives of the possible frontier in science represented by the subject of anticipation.

7. Conclusion

In order to ascertain anticipatory computation as an effective method, working models that display anticipatory characteristics need to be realized. The examples given herein can be seen as the specs for such possible models. Work in alternative computing models is illustrative of what can be done and of the return expected. Co-relations, difficult to deal with once we part from the world of first-order objects, are another promising avenue, as are possibilistic-based computations. Finally, if quantum effects prove to take place also in a world of large scale, anticipation, as entanglement (i.e., co-relation), might turn out to be the binding substratum of our universe of existence.

References
Barker, M. (1996) developed a class based on How to Write Horror Fiction, by William F. Nolan.
Bartlett, F.C. (1951). Essays in Psychology. Dedicated to David Katz, Uppsala: Almqvist & Wiksells, pp. 1-17.
Bell, John S. (1964). Physics, 1, pp. 195-200.
Bell, John S. (1966). Reviews of Modern Physics, 38, pp. 447-452.
Berry, M.J., I.H. Brivanlon, T.A. Jordan, M. Meister (1999). Nature 318, pp. 334-338.
Bohm, David (1951). Quantum Theory, London: Routledge.
Bohr, Niels (1987). Atomic Theory and Description of Nature: Four Essays with an Introductory Survey, AMS Press, June 1934. (See also The Philosophical Writings of Niels Bohr, Vol. 1, Oxbow Press.)
Descartes, René (1637). Discours de la méthode pour bien conduire sa raison et chercher la vérité dans les sciences, Leiden.
Descartes, René (1644). Principia philosophiae.
Dubois, Daniel (1992). Le labyrinthe de l'intelligence: de l'intelligence naturelle à l'intelligence fractale, InterEditions/Paris, Academia/Louvain-la-Neuve.
Dubois, Daniel M. (1992). "The Hyperincursive Fractal Machines as a Quantum Holographic Brain," CCAI 9:4, pp. 335-372.
Dubois, Daniel, G. Resconi (1992). Hyperincursivity: a new mathematical theory, Presses Universitaires de Liège.
Dubois, Daniel M. (1996). "Hyperincursive Stack Memory in Chaotic Automata," Actes du Symposium ECHO: Modèles de la boucle évolutive (A.C. Ehresmann, G.L. Farre, J.-P. Vanbreemersch, Eds.), Université de Picardie Jules Verne, pp. 77-82.
Dubois, Daniel M. (1999). "Hyperincursive McCulloch and Pitts Neurons for Designing a Computing Flip-Flop Memory," Computing Anticipatory Systems: CASYS '98, Second International Conference, AIP Conference Proceedings 465, pp. 3-21.
Dürrenmatt, Friedrich (1992). The Physicists, Grove Press. (Originally published as Die Physiker, 1962. A paperback English edition was published by Oxford University Press, 1965.)
Einstein, Podolsky, and Rosen (1935). The Physical Review 47, pp. 777-780.
Epicurus (1933). cf. Tullius Cicero, De Natura Deorum (Trans. Harry Rackham), Loeb Classical Library.
Epstein, Paul R., K. Linthicum, et al (1999). "Climate and Health," Science, July 16, 1999, pp. 347-348.
Etter, Thomas (1999). Psi, Influence, and Link Theory (manuscript dated June 11, 1999).
Feyerabend, Paul (1973). Against Method, London: New Left Books.
Feynman, Richard P. (1965). The Character of Physical Law, BBC Publications.
Feynman, Richard P. (1982). "Simulating physics with computers," International Journal of Theoretical Physics, 21:6/7, pp. 467-488.
Foerster, Heinz von (1976). "Objects, tokens for (eigen)-behaviors," Cybernetics Forum, 5:3-4, pp. 91-96.
Foerster, Heinz von (1999). Der Anfang von Himmel und Erde hat keinen Namen, Vienna: Döcker Verlag, 2nd ed.
Garis, Hugo de (1994). An Artificial Brain: ATR's CAM-Brain Project, New Generation Computing 12(2), pp. 215-221.
Gribbin, John (1998). New Scientist, August 1998.
Gribbin, John (1999). Gribbin/ Quantum
Gribbin, John, Mark Chimsky (1996). Schrödinger's Kittens and the Search for Reality: Solving the Quantum Mysteries, New York: Little, Brown & Co.
Hawkins, Jeff (1999). "That's Not How My Brain Works," interview in Technology Review, July/August, pp. 76-79.
Holmberg, Stig (1998). "Anticipatory Computing with a Spatio-Temporal Fuzzy Model," Computing Anticipatory Systems: CASYS '97 First International Conference, AIP Conference Proceedings 437 (D.M. Dubois, Ed.), The American Institute of Physics, pp. 419-432.
Homan, Christopher (1997). Beauty is a Rare Thing.
Julià, Pere (1998). Intentionality, Self-reference, and Anticipation, Computing Anticipatory Systems: CASYS '97 First International Conference, AIP Conference Proceedings 437 (D.M. Dubois, Ed.), The American Institute of Physics, pp. 209-243.
Kant, Immanuel (1781). Kritik der reinen Vernunft, 1. Auflage. (cf. Critique of Pure Reason, Translated by Norman Kemp-Smith, New York: Macmillan Press, 1781.)
Kelly, G.A. (1955). The Psychology of Personal Constructs, New York: Norton.
Knutson, Brian (1998). Functional Neuroanatomy of Approach and Active Avoidance Behavior.
Libet, Benjamin (1989). "Neural Destiny: Does the Brain Have a Mind of Its Own?" The Sciences, March/April 1989, pp. 32-35.
Libet, Benjamin (1985). "Unconscious Cerebral Initiative and the Role of Conscious Will in Voluntary Action," The Behavioral and Brain Sciences, vol. 8, no. 4, December 1985, pp. 529-539.
Linthicum, Kenneth et al (1999). "Climate and Satellite Indicators to Forecast Rift Valley Fever Epidemics in Kenya," Science, July 16, 1999, pp. 367-368.
Mancuso, J.C., J. Adams-Weber (1982). Anticipation as a constructive process, in C. Mancuso & J. Adams-Weber (Eds.), The Construing Person, New York: Praeger, pp. 8-32.
Nadin, Mihai (1988). Minds as Configurations: Intelligence is Process, Graduate Lecture Series, Ohio State University.
Nadin, Mihai (1991). Mind-Anticipation and Chaos. Stuttgart: Belser Presse. (The text can be read in its entirety on the Web.)
Nadin, Mihai (1997). The Civilization of Illiteracy. Dresden: Dresden University Press.
Nadin, Mihai (1998). "Computers," entry in The Encyclopedia of Semiotics (Paul Bouissac, Ed.), New York: Oxford University Press, pp. 136-138.
Newton, Sir Isaac (1687). Philosophiae naturalis principia mathematica.
Peat, David (undated). Non-locality in nature and cognition.
Peirce, Charles S. (1870). "Description of a Notation for the Logic of Relatives, Resulting from an Amplification of the Conceptions of Boole's Calculus of Logic," Memoirs of the American Academy of Sciences, 9.
Peirce, Charles S. (1883). "The Logic of Relatives," Studies in Logic by Members of the Johns Hopkins University.
Peirce, Charles S. (1931-1935). The Collected Papers of Charles Sanders Peirce, Vols. I-VI (C. Hartshorne and P. Weiss, Eds.), Harvard University Press. The convention for quoting from this work is to cite volume and paragraph, separated by a decimal point: 2.226.
Postrel, Virginia (1997). "Reason on Line," Forbes ASAP, August 25, 1997.
Powers, William T. (1973). Behavior: The Control of Perception, Amsterdam: de Gruyter.
Powers, William T. (1989). Living Control Systems, I and II (Christopher Langton, Ed.), New Canaan: Benchmark Publications.
Rosen, Robert (1972). Quantum Genetics, Foundations of Mathematical Biology, Vol. I, Subcellular Systems. New York/London: Academic Press.
Rosen, Robert (1985). Anticipatory Systems, Pergamon Press.
Rosen, Robert (1991). Life Itself: A Comprehensive Inquiry into the Nature, Origin, and Fabrication of Life, New York: Columbia University Press.
Rosen, Robert (1999). Essays on Life Itself, New York: Columbia University Press.
Sommers, Hans (1998). "The Consequences of Learnability for A Priori Knowledge in a World," Computing Anticipatory Systems: CASYS '97 First International Conference, AIP Conference Proceedings 437 (D.M. Dubois, Ed.), The American Institute of Physics, pp. 457-468.
Stapp, Henry P. (1991). Quantum Implications: Essays in Honor of David Bohm (B.J. Hiley & F.D. Peat, Eds.), Routledge.
Stinckwich, Serge and François Girault (1999). Modélisation d'un Robot Footballeur, Memoire de DEA, Caen.
Swarm Simulation System, Santa Fe Institute.
Tarski, Alfred (1941). "On the Calculus of Relations," Journal of Symbolic Logic, 6, pp. 73-89.
Touch, Joseph D. et al (1992). A Model for Latency in Communication.
Touch, Joseph D. (1998). Large Scale Active Middleware.
Touch, Joseph D., John Heidemann, Katia Obraczka (1996). Analysis of HTTP Performance.
Veloso, Manuela, Peter Stone, Michael Bowling (1998). Anticipation: A Key for Collaboration in a Team of Agents, paper presented at the 3rd International Conference on Autonomous Agents, October 1998.
Vijver, Gertrudis van de (1997). "Anticipatory Systems: A Short Philosophical Note," Computing Anticipatory Systems: CASYS '97 First International Conference, AIP Conference Proceedings 437 (D.M. Dubois, Ed.), The American Institute of Physics, pp. 31-47.
Wegner, Peter (1996). The Paradigm Shift from Algorithms to Interaction, draft of October 14, 1996.
Wildavsky, Aaron B. (1988). Searching for Safety.
Zadeh, Lotfi (1977). Fuzzy Sets as a Basis for a Theory of Possibility, ERL MEMO M77/12.
A Classical Interpretation of the Wave Mechanics of Quantum Theory
Thayer Watkins, San José State University

Historical Background

In the early 1920s Werner Heisenberg, working in Copenhagen under the guidance of the venerable Niels Bohr, together with Max Born and Pascual Jordan of Göttingen University, was developing the New Quantum Theory of physics. Heisenberg, Born and Jordan were in their early 20s, the wunderkinder of physics. By 1925 Heisenberg had developed Matrix Mechanics, a marvelous intellectual achievement based upon infinite square matrices. Then in 1926 the Austrian physicist Erwin Schrödinger, in six journal articles, established Wave Mechanics, based upon partial differential equations. The wunderkinder of quantum theory were not impressed by Schrödinger, an old man in his late thirties without any previous work in quantum theory, and Heisenberg made some disparaging remarks about Wave Mechanics. But Schrödinger produced an article establishing that Wave Mechanics and Matrix Mechanics were equivalent. Wave Mechanics was easier to use and became the dominant approach to quantum theory.

Schrödinger's field had been optics, and he had been prompted to start work in quantum theory by the work of Louis de Broglie, which asserted that particles have a wave aspect just as radiation phenomena have a particle aspect. Schrödinger's equations involved an unspecified variable which was called the wave function. He thought that it would have an interpretation similar to such variables involved in optics. However, Niels Bohr and the wunderkinder had a different interpretation. Max Born at Göttingen University wrote to Bohr suggesting that the squared magnitude of the wave function in Schrödinger's equation was a probability density function. Bohr replied that he and the other physicists with him in Copenhagen had never considered any other interpretation of the wave function. This interpretation of the wave function became part of what was known as the Copenhagen Interpretation. Erwin Schrödinger did not agree with this interpretation.

Bohr had a predilection to emphasize the puzzling aspects of quantum theory. But Bohr also articulated the Correspondence Principle. He said that the validity of classical physics was well established, so for a quantum theoretic analysis to be valid its limit, when scaled up to the macro level, had to be compatible with the classical analysis. It is very important to note that the observable world at the macro level involves averaging over time and space. Physical systems are not observed at instants because no energy can be transferred at an instant. Likewise, no observations can be made at a point in space. Therefore, for a quantum analysis to be compatible with the classical analysis at the macro level, it must not only be scaled up but also averaged over time or space.

The classical harmonic oscillator is deterministic, but there is still a legitimate probability density function for it: the time-spent probability density function. If the solution to the Schrödinger equation for a physical system gives a probability density function, then the limit as the energy increases without bound is also a probability density function. The spatially averaged limit also has to be a probability density function. For compatibility according to the Correspondence Principle, that spatially averaged limit of the quantum system has to be the time-spent probability density function.
That indicates that the quantum probability density function is in the nature of a time-spent probability density function. This means that the quantum probability density can be translated into the motion of a quantum system. This involves sequences of relatively slow movement and then relatively fast movement. The positions of relatively slow movement correspond to what the Copenhagen Interpretation designates as allowable states, and the places of relatively fast movement are what the Copenhagen Interpretation designates as quantum jumps or leaps. When the periodic motion of a quantum system is being executed untold billions of times per second, it may seem as though the particle exists simultaneously at multiple locations, but that is not the physical reality. It is only the dynamic appearance. A rapidly rotating fan seems to have its blades smeared over a blurred disk.
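A quick numerical check of this correspondence argument is easy to set up. The sketch below (in dimensionless oscillator units, ħ = m = ω = 1) compares the classical time-spent density 1/(π√(A² − x²)) with a locally averaged |ψ_n(x)|² for a moderately high quantum number; the value of n and the averaging window are arbitrary choices made only for illustration.

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

def classical_density(x, A):
    """Time-spent probability density of a classical oscillator of amplitude A."""
    return 1.0 / (np.pi * np.sqrt(A**2 - x**2))

def quantum_density(x, n):
    """|psi_n(x)|^2 for the harmonic oscillator (hbar = m = omega = 1)."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    norm = 1.0 / np.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    psi = norm * np.exp(-x**2 / 2.0) * hermval(x, coeffs)
    return psi**2

n = 30
A = np.sqrt(2 * n + 1)                 # classical turning point for E_n = n + 1/2
x = np.linspace(-0.9 * A, 0.9 * A, 4000)

q = quantum_density(x, n)
c = classical_density(x, A)

# Spatial averaging washes out the rapid oscillations of |psi_n|^2; in the
# interior of the well the averaged quantum density approaches the classical
# time-spent density (and does so better as n grows).
window = 200
q_avg = np.convolve(q, np.ones(window) / window, mode="same")
inner = slice(window, -window)
print("max deviation in the interior:",
      float(np.max(np.abs(q_avg[inner] - c[inner]))))
```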
From Citizendium, the Citizens' Compendium Jump to: navigation, search This article is developing and not approved. Main Article Related Articles  [?] Bibliography  [?] External Links  [?] Citable Version  [?] In physics, the polarizability of an electric charge-distribution ρ describes the ease by which ρ can be polarized under the influence of an external electric field E. To explain the concept of polarization of a charge distribution, it is noted that an electric field E is a vector, which by definition "pushes" a positive charge in the direction of the vector and "pulls" a negative electric charge in opposite direction (against the direction of E). Because of this "push-pull" effect the field will distort the charge-distribution ρ, with a build-up of positive charge on that side of ρ to which E is pointing and a build-up of negative charge on the other side of ρ. One calls this distortion the polarization of the charge-distribution. Since it is implicitly assumed that ρ is stable, there are internal forces that keep the charges together. These internal forces resist the polarization and determine the magnitude of the polarizability. The concept of polarizability is very important in atomic and molecular physics. In atoms and molecules the electronic charge-distribution is stable, as follows from quantum mechanical laws, and an external electric field polarizes the electronic charge cloud. The amount of shifting of charge can be quantitatively expressed in terms of an induced dipole moment. Am electric dipole of a continuous charge-distribution is defined as If there is no external field we call the dipole permanent, written as pperm. A permanent dipole moment may or may not be equal to zero. For highly symmetric charge-distributions (for instance those with an inversion center), the permanent moment is zero. Under influence of an electric field the charge-distribution will distort and the dipole will change, where pind is the induced dipole, i.e., the change in dipole due to the polarization of the charge-distribution. Assuming a linear dependence in the field, we define the polarizability by the following expression This relation can be generalized to higher powers in E (in the general case one uses a Taylor series), the polarizabilities arising as factors of E2, and E3 are called hyperpolarizabilities and hyper-hyperpolarizabilities, respectively. The relation above is valid when the vector p is parallel to the vector E, i.e., α is a single real number, a scalar. It can happen that the two vectors (cause and effect) are non-parallel, in that case the defining relation takes the form By writing these two vectors in component form we implicitly assumed the presence of a Cartesian coordinate system. The polarizability α is expressed with respect to the very same coordinate system by a matrix, We know that choice of another Cartesian basis (coordinate system) changes the column vectors pind and E, while the physics of the situation is unchanged, neither the electric field, nor the induced dipole changes, only their representation by column vectors changes. Similarly, upon choice of another basis the polarizibility α is represented by another 3×3 matrix. This means that α is a second rank (because there are two indices) Cartesian tensor, the polarizability tensor of the charge-distribution. From the defining equation follows that p has the dimension charge times distance, which in SI units is C m (coulomb times meter). In Gaussian units this is statC cm (statcoulomb times centimeter). 
An electric field has dimension voltage divided by distance, so that in SI units E has dimension V/m and in Gaussian units statV/cm. Hence the dimension of α is  SI: C m2 V−1 Gaussian: statC cm2 statV−1 = cm3, where we used that in Gaussian units the dimension of V is equal to statC/cm (because of Coulomb's law). In Gaussian units the polarizability has dimension volume, and accordingly polarizability is often considered as a measure for the size of the charge-distribution (usually an atom or a molecule). The conversion between the two units is: here c is the speed of light (≈ 3×108 m/s), 4πε0 = 107/c2 (see electric constant) and the suffix on the symbol α indicates the unit in which the polarizability is expressed. Sometimes one defines the polarizability in SI units by the equation This definition has the advantage that α'SI has dimension volume (m3). Clearly where the power of ten is due to converting from m to cm. Sometimes one also encounters the definition which gives a polarizability α" with dimension volume and a factor 4π larger than α′. The energy of a dipole in an infinitesimal field is given by where the dot indicates a dot product between the vectors. Integration to finite E gives The second term becomes for a non-isotropic polarizibility in three different, but fully equivalent, notations, Quantum mechanical expression Classically, electric charge distributions, such as atoms and molecules, were known to exist, but the classical Maxwell theory could not explain their stability. The empirically known polarizability was likewise unexplainable. This changed after the advent of quantum mechanics. By means of the quantum mechanical technique of perturbation theory one can derive an expression for the induction energy Uind. One introduces a perturbation operator for a system of N particles: where qk is the charge of the kth particle and rk its position vector (expressed with respect to some Cartesian coordinate system). Clearly, the dipole operator is defined by In perturbation theory one assumes that the unperturbed (without external field) Schrödinger equations are solved That is, we assume that all states and corresponding energies are known. Further it is assumed that the states constitute an orthonormal basis for the vector space they belong to. The second-order perturbed energy is Comparing the second-order energy U(2) with the induction energy Uind gives a quantum mechanical expression for the polarizability tensor: Frequency-dependent polarizability When a charge-distribution is hit by a monochromatic electromagnetic wave with electric component   Ecosωt   the polarizibility becomes a function of the angular frequency where ν the frequency, k the modulus of the wave vector and c the speed of light. The interaction of the wave with the charge distribution is described by the quantum mechanical operator: where the dipole operator μ is defined above. Time-dependent perturbation theory leads to the following expression, The quantity |α(ω)|2 is proportional to the cross section for elastic light scattering (Rayleigh scattering), and with a small modification it also gives the cross section for inelastic light scattering (Raman scattering). The index of refraction n of a charge-distribution is related by the Lorentz-Lorenz relation to its frequency-dependent polarizability α(ω) and hence it follows that n is a function of ω. This leads to the phenomenon of dispersion of light (occurrence of rainbows). 
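The equations referred to in this article did not survive the transfer to plain text. For orientation, the standard sum-over-states (second-order perturbation theory) expressions that the surrounding text appears to describe are reproduced below. These are the usual textbook forms (real unperturbed states assumed in the static formula), not a reconstruction of the article's exact notation; the field is the bold vector E, the unperturbed energies are E_m.

```latex
% Induction energy, perturbation operator, and dipole operator:
U_{\mathrm{ind}} = -\tfrac{1}{2}\,\mathbf{E}\cdot\boldsymbol{\alpha}\cdot\mathbf{E},
\qquad
V = -\mathbf{E}\cdot\boldsymbol{\mu},
\qquad
\boldsymbol{\mu} = \sum_{k=1}^{N} q_k\,\mathbf{r}_k .

% Second-order energy and the resulting static polarizability tensor:
U^{(2)} = -\sum_{m\neq 0}
\frac{\langle 0|V|m\rangle\,\langle m|V|0\rangle}{E_m-E_0}
\quad\Longrightarrow\quad
\alpha_{ab} = 2\sum_{m\neq 0}
\frac{\langle 0|\mu_a|m\rangle\,\langle m|\mu_b|0\rangle}{E_m-E_0},
\qquad a,b\in\{x,y,z\}.

% Frequency-dependent polarizability from time-dependent perturbation theory:
\alpha_{ab}(\omega) = \frac{1}{\hbar}\sum_{m\neq 0}
\left[
\frac{\langle 0|\mu_a|m\rangle\,\langle m|\mu_b|0\rangle}{\omega_{m0}-\omega}
+\frac{\langle 0|\mu_b|m\rangle\,\langle m|\mu_a|0\rangle}{\omega_{m0}+\omega}
\right],
\qquad \hbar\,\omega_{m0}=E_m-E_0 .
```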
The function α(iω) of imaginary frequency gives rise to one of the components of intermolecular forces, namely dispersion (London) forces.
Advances in Graphene-Based Science and Application
Yasuhiro HATSUGAI, University of Tsukuba

1. Introduction
Graphene is a two-dimensional array of carbon atoms on a honeycomb lattice. Its experimental realization [1,2] opened a breakthrough new world in physics and materials science and put carbon on the road to Stockholm once again, after its zero-dimensional (0D) and 1D analogues, C60 and polyacetylene. Carbon-based materials come in a large variety of further forms, such as quasi-1D carbon nanotubes and 3D diamond and graphite, and they are clearly key ingredients for the coming development of nano-science and technology. Physically, most of them belong to the class of insulators/semiconductors, which are characterized by a finite excitation gap. Within this family graphene is special: it is a zero-gap semiconductor. Since the energy gap vanishes, the standard descriptions are no longer applicable. The law governing the behavior of electrons in graphene is then not the usual Schrödinger equation but the relativistic equation of Dirac for vanishing mass.

2. More than a new material
It is true that graphene can be a useful and groundbreaking new material for nano-technology and supplies a basic platform for various industrial applications. At the same time, graphene is physically fundamental, since it is a perfect 2D crystal and the electrons living in it are relativistic quantum particles. One of the surprises of the original papers is that a theoretically famous "theorem" prohibits the isolation of a perfect 2D crystal, although such a crystal has in fact been realized. The other is that the realization of the zero-gap semiconductor implies that many fancy predictions made for massless Dirac fermions by high-energy particle physicists should now be confirmable in the laboratory. Graphene is a stage for the condensed-matter realization of quantum theory with relativity and gauge symmetries.

3. Conclusions
The significance of graphene's experimental realization is at least twofold: the huge possibilities it offers as a groundbreaking new material, and its importance for fundamental physics. In the talk I will stress the latter, which I hope will provide useful key ideas for graphene-based technology over the long term of several decades. Although the massless Dirac fermions living in graphene are anomalous, they are at the same time quite universal, in that they also appear in many different physical systems, such as d-wave superconductors and topological insulators; the latter are another hot topic in recent condensed matter physics, with relevance to possible spintronics applications. I will also put the focus on this universality without going into mathematical details.
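The zero-gap, massless-Dirac behaviour described in the abstract can already be seen in the simplest nearest-neighbour tight-binding model of the honeycomb lattice. The sketch below is a generic textbook calculation, not material from the talk itself; units are chosen so that the hopping amplitude and the carbon-carbon distance are both 1.

```python
import numpy as np

# Nearest-neighbour tight-binding pi-bands of graphene:
# E(k) = +/- t * |1 + exp(i k.a1) + exp(i k.a2)|, which vanishes at the
# K points and grows linearly around them (the massless Dirac cones).

t = 1.0
a1 = np.array([1.5,  np.sqrt(3) / 2])   # honeycomb Bravais vectors (a = 1)
a2 = np.array([1.5, -np.sqrt(3) / 2])

def band(k):
    f = 1 + np.exp(1j * k @ a1) + np.exp(1j * k @ a2)
    return t * abs(f)          # upper band; the lower band is its mirror image

# One of the two inequivalent Dirac points of the hexagonal Brillouin zone.
K = np.array([2 * np.pi / 3, 2 * np.pi / (3 * np.sqrt(3))])
print("gap at K:", band(K))    # ~0: the zero-gap semiconductor

# Linear (relativistic-like) dispersion near K: E ~ v_F * |q|
for q in (0.05, 0.10, 0.20):
    print(q, band(K + np.array([q, 0.0])) / q)   # ratio ~ constant Fermi velocity
```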
My stackexchange post [] was somewhat unsatisfactory (also because I may not have stated clearly enough what my interest was). So here it goes!

Let $M$ be a compact Riemannian manifold and $\Delta$ be the Laplace-Beltrami operator. It is well-known that the solution operator to the heat equation $e^{t \Delta}$ is smoothing for $t>0$ and has a smooth integral kernel $k_t(x, y) \in C^\infty(M \times M)$. Furthermore, $k_t$ has an asymptotic expansion $$ k_t(x, y) \sim \underbrace{(4 \pi t)^{-n/2} \exp \left( -\frac{1}{4t} \mathrm{dist}(x, y)^2 \right)}_{:= e_t(x, y)} \sum_{j=0}^\infty t^j \Phi_j(x, y) $$ meaning that $$ \left| k_t(x, y) - e_t(x, y) \sum_{j=0}^N t^j \Phi_j(x, y) \right| \leq C t^{N+1}$$ uniformly in $x$ and $y$ in a neighborhood of the diagonal. Now by formally substituting $t \rightarrow it$, one gets the formal asymptotic series $$ e_{it}(x, y) \sum_{j=0}^\infty (it)^j \Phi_j(x, y),$$ which has the property that it formally (i.e. termwise, as an asymptotic series in $t$) solves the Schrödinger equation $ \left(i \frac{\partial}{\partial t} + \Delta\right)k_t = 0.$ Now my question is the following: Does this asymptotic series have any relation to the solution operator $e^{it\Delta}$ of the Schrödinger equation, or to its distribution kernel?

2 Answers

In this 2006 paper you can find the long-time asymptotics of the Schrödinger kernel on Riemannian manifolds. My understanding is that the analytic continuation to imaginary time gives the correct answer provided there are no zero-energy resonant states.

No, in general, the expansion in small times of the heat kernel $k_t(x,y)$ does not tell much about the Schrödinger semigroup. You may think about the circle case, for which the kernel of $e^{t \Delta}$ has an expansion at any order $ k_t(x,y)=\frac{1}{\sqrt{4\pi t}} e^{-\frac{(y-x)^2}{4t}} (1 +O(t^m))$ but the Schrödinger semigroup $e^{it \Delta}$ has a rather complicated behavior. The Schrödinger kernel in that case is actually a distribution. If $t$ is a rational multiple of $2\pi$, it is a finite linear combination of delta functions, which is interestingly connected to Gauss sums.
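A quick numerical illustration of the circle example in the second answer: on the unit circle the exact heat kernel, computed from its Fourier series, agrees with the leading Gaussian term to machine precision already at moderate $t$, consistent with the claim that the expansion holds at any order with all higher coefficients trivial. The cutoff `nmax` and the chosen distance are arbitrary.

```python
import numpy as np

def heat_kernel_circle(d, t, nmax=200):
    """Exact heat kernel on S^1 (circumference 2*pi) at geodesic distance d."""
    n = np.arange(-nmax, nmax + 1)
    return np.sum(np.exp(-n**2 * t) * np.cos(n * d)) / (2 * np.pi)

def gaussian_term(d, t):
    """Leading term (4*pi*t)^(-1/2) * exp(-d^2 / (4*t)) of the expansion."""
    return np.exp(-d**2 / (4 * t)) / np.sqrt(4 * np.pi * t)

d = 0.3
for t in (0.1, 0.05, 0.02, 0.01):
    exact = heat_kernel_circle(d, t)
    lead = gaussian_term(d, t)
    print(f"t={t:5.2f}  exact={exact:.12f}  gaussian={lead:.12f}  diff={exact - lead:.2e}")
```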
Orbital hybridisation From Wikipedia, the free encyclopedia   (Redirected from Sp² bond) Jump to: navigation, search This article is about hybridisation in valence bond theory. For s-p mixing, see molecular orbital diagram. In chemistry, hybridisation (or hybridization) is the concept of mixing atomic orbitals into new hybrid orbitals (with different energies, shapes, etc., than the component atomic orbitals) suitable for the pairing of electrons to form chemical bonds in valence bond theory. Hybrid orbitals are very useful in the explanation of molecular geometry and atomic bonding properties. Although sometimes taught together with the valence shell electron-pair repulsion (VSEPR) theory, valence bond and hybridisation are in fact not related to the VSEPR model.[1] Chemist Linus Pauling first developed the hybridisation theory in 1931 in order to explain the structure of simple molecules such as methane (CH4) using atomic orbitals.[2] Pauling pointed out that a carbon atom forms four bonds by using one s and three p orbitals, so that "it might be inferred" that a carbon atom would form three bonds at right angles (using p orbitals) and a fourth weaker bond using the s orbital in some arbitrary direction. In reality however, methane has four bonds of equivalent strength separated by the tetrahedral bond angle of 109.5°. Pauling explained this by supposing that in the presence of four hydrogen atoms, the s and p orbitals form four equivalent combinations or hybrid orbitals, each denoted by sp3 to indicate its composition, which are directed along the four C-H bonds.[3] This concept was developed for such simple chemical systems, but the approach was later applied more widely, and today it is considered an effective heuristic for rationalising the structures of organic compounds. It gives a simple orbital picture equivalent to Lewis structures. Hybridisation theory finds its use mainly in organic chemistry. Atomic orbitals[edit] Main article: Atomic orbital Orbitals are a model representation of the behaviour of electrons within molecules. In the case of simple hybridisation, this approximation is based on atomic orbitals, similar to those obtained for the hydrogen atom, the only neutral atom for which the Schrödinger equation can be solved exactly. In heavier atoms, such as carbon, nitrogen, and oxygen, the atomic orbitals used are the 2s and 2p orbitals, similar to excited state orbitals for hydrogen. Hybrid orbitals are assumed to be mixtures of atomic orbitals, superimposed on each other in various proportions. For example, in methane, the C hybrid orbital which forms each carbonhydrogen bond consists of 25% s character and 75% p character and is thus described as sp3 (read as s-p-three) hybridised. Quantum mechanics describes this hybrid as an sp3 wavefunction of the form N(s + 3pσ), where N is a normalization constant (here 1/2) and pσ is a p orbital directed along the C-H axis to form a sigma bond. The ratio of coefficients (denoted λ in general) is 3 in this example. Since the electron density associated with an orbital is proportional to the square of the wavefunction, the ratio of p-character to s-character is λ2 = 3. The p character or the weight of the p component is N2λ2 = 3/4. The amount of p character or s character, which is decided mainly by orbital hybridisation, can be used to reliably predict molecular properties such as acidity or basicity. 
Two possible representations

Molecules with multiple bonds or multiple lone pairs can have orbitals represented in terms of sigma and pi symmetry or as equivalent orbitals. The sigma and pi representation of Erich Hückel is the more common one compared to the equivalent-orbital representation of Linus Pauling. The two have mathematically equivalent total many-electron wave functions, and are related by a unitary transformation of the set of occupied molecular orbitals.

Types of hybridisation

[Figure: four sp3 orbitals.]

Hybridisation describes the bonding of atoms from an atom's point of view. For a tetrahedrally coordinated carbon (e.g., methane, CH4), the carbon should have 4 orbitals with the correct symmetry to bond to the 4 hydrogen atoms. Carbon's ground-state configuration is 1s2 2s2 2p2, or more explicitly:

C: 1s(↑↓) 2s(↑↓) 2px(↑) 2py(↑) 2pz( )

The carbon atom can utilize its two singly occupied p-type orbitals to form two covalent bonds with two hydrogen atoms, yielding singlet methylene, CH2, the simplest carbene. The carbon atom can also bond to four hydrogen atoms by an excitation of an electron from the doubly occupied 2s orbital to the empty 2p orbital, producing four singly occupied orbitals:

C*: 1s(↑↓) 2s(↑) 2px(↑) 2py(↑) 2pz(↑)

The energy released by the formation of two additional bonds more than compensates for the excitation energy required, energetically favouring the formation of four C-H bonds. Quantum mechanically, the lowest energy is obtained if the four bonds are equivalent, which requires that they be formed from equivalent orbitals on the carbon. A set of four equivalent orbitals can be obtained that are linear combinations of the valence-shell (core orbitals are almost never involved in bonding) s and p wave functions,[5] which are the four sp3 hybrids:

C*: 1s(↑↓) sp3(↑) sp3(↑) sp3(↑) sp3(↑)

In CH4, four sp3 hybrid orbitals are overlapped by hydrogen 1s orbitals, yielding four σ (sigma) bonds (that is, four single covalent bonds) of equal length and strength.

[Figure: schematic of hybrid orbitals overlapping hydrogen 1s orbitals, giving methane's tetrahedral shape.]

[Figures: three sp2 orbitals; ethene structure.]

Other carbon-based compounds and other molecules may be explained in a similar way. For example, ethene (C2H4) has a double bond between the carbons. For this molecule, carbon sp2 hybridises, because one π (pi) bond is required for the double bond between the carbons and only three σ bonds are formed per carbon atom. In sp2 hybridisation the 2s orbital is mixed with only two of the three available 2p orbitals:

C*: 1s(↑↓) sp2(↑) sp2(↑) sp2(↑) 2p(↑)

forming a total of three sp2 orbitals with one remaining p orbital. In ethylene (ethene) the two carbon atoms form a σ bond by overlapping two sp2 orbitals, and each carbon atom forms two covalent bonds with hydrogen by s-sp2 overlap, all with 120° angles. The π bond between the carbon atoms perpendicular to the molecular plane is formed by 2p-2p overlap. The hydrogen-carbon bonds are all of equal strength and length, in agreement with experimental data.

[Figure: two sp orbitals.]

The chemical bonding in compounds such as alkynes with triple bonds is explained by sp hybridisation. In this model, the 2s orbital is mixed with only one of the three p orbitals:

C*: 1s(↑↓) sp(↑) sp(↑) 2p(↑) 2p(↑)

resulting in two sp orbitals and two remaining p orbitals. The chemical bonding in acetylene (ethyne, C2H2) consists of sp-sp overlap between the two carbon atoms forming a σ bond, and two additional π bonds formed by p-p overlap. Each carbon also bonds to hydrogen in a σ s-sp overlap at 180° angles.
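The linear combinations described above are easy to verify numerically. The following sketch (an illustration added here, not part of the article) builds the four sp3 hybrids as explicit combinations of the s, px, py, pz basis functions, checks that they are orthonormal, and recovers the tetrahedral angle of 109.47° from the directions of their p components.

```python
# Illustrative check (not from the article): the four sp3 hybrids written as
# coefficient vectors over the basis (s, px, py, pz).  Each hybrid is
# 1/2 (s +/- px +/- py +/- pz), i.e. 25% s character and 75% p character.
import itertools, math

HYBRIDS = [
    [0.5,  0.5,  0.5,  0.5],   # s + px + py + pz
    [0.5,  0.5, -0.5, -0.5],   # s + px - py - pz
    [0.5, -0.5,  0.5, -0.5],   # s - px + py - pz
    [0.5, -0.5, -0.5,  0.5],   # s - px - py + pz
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Orthonormality of the hybrids (assumes an orthonormal atomic basis).
for i, j in itertools.combinations_with_replacement(range(4), 2):
    expected = 1.0 if i == j else 0.0
    assert abs(dot(HYBRIDS[i], HYBRIDS[j]) - expected) < 1e-12

# The bond directions are the p-components; the angle between any two of them
# is arccos(-1/3) = 109.47 degrees, the tetrahedral angle quoted in the text.
d1, d2 = HYBRIDS[0][1:], HYBRIDS[1][1:]
cos_theta = dot(d1, d2) / math.sqrt(dot(d1, d1) * dot(d2, d2))
print("s character of each hybrid:", HYBRIDS[0][0] ** 2)              # 0.25
print("angle between hybrids (deg):", math.degrees(math.acos(cos_theta)))  # 109.47...
```

The same construction with two or three p orbitals gives the sp and sp2 hybrids, whose directions are separated by 180° and 120° respectively, which is the link to molecular shape developed in the next section.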
Hybridisation and molecule shape

Hybridisation helps to explain molecule shape, since the angles between bonds are (approximately) equal to the angles between hybrid orbitals, as explained above for the tetrahedral geometry of methane. As another example, the three sp2 hybrid orbitals are at angles of 120° to each other, so this hybridisation favours trigonal planar molecular geometry with bond angles of 120°. Other examples are given in the table below.

Classification: Main group / Transition metal[6][7]
• AX2: Linear (180°), sp hybridisation, e.g. CO2 / Bent (90°), sd hybridisation, e.g. VO2+
• AX4: (transition metal) Tetrahedral (109.5°), sd3 hybridisation, e.g. MnO4-
• AX5: -
• AX6: -

Hybridisation of hypervalent molecules

Main article: Hypervalent molecule

Expanded valence shell

Hybridisation is often presented for main group AX5 and above, as well as for many transition metal complexes, using the hybridisation scheme first proposed by Pauling.

Classification: Main group / Transition metal
• AX2: - / Linear (180°), sp hybridisation, e.g. Ag(NH3)2+
• AX3: -
• AX4: -
• AX5: (transition metal) Trigonal bipyramidal or Square pyramidal[9]
• AX6: (transition metal) Octahedral (90°), d2sp3 hybridisation, e.g. Mo(CO)6
• AX7: (transition metal) Pentagonal bipyramidal, Capped octahedral or Capped trigonal prismatic[10][7]

In this notation, d orbitals of main group atoms are listed after the s and p orbitals since they have the same principal quantum number (n), while d orbitals of transition metals are listed first since the s and p orbitals have a higher n. Thus for AX6 molecules, sp3d2 hybridisation in the S atom involves 3s, 3p and 3d orbitals, while d2sp3 for Mo involves 4d, 5s and 5p orbitals.

Contrary evidence

In 1990, Magnusson published a seminal work definitively excluding the role of d-orbital hybridization in bonding in hypervalent compounds of second-row elements, ending a point of contention and confusion. Part of the confusion originates from the fact that d-functions are essential in the basis sets used to describe these compounds (or else unreasonably high energies and distorted geometries result). Also, the contribution of the d-function to the molecular wavefunction is large. These facts were incorrectly interpreted to mean that d-orbitals must be involved in bonding.[11] Similarly, p orbitals in transition metal centers were thought to participate in bonding with ligands, hence the 18-electron description; however, recent molecular orbital calculations have found that such p orbital participation in bonding is not very significant.[12][13]

As shown by computational chemistry, hypervalent molecules can be stable only given strongly polar (and weakened) bonds with electronegative ligands such as fluorine or oxygen to reduce the valence electron occupancy of the central atom to a maximum of 8[14] (or 12 for transition metals).[6] This requires an explanation that invokes sigma resonance in addition to hybridisation, which implies that each resonance structure has its own hybridisation scheme. As a guideline, all resonance structures have to obey the octet (8) rule for main group centers and the duodectet (12) rule for transition metal centers.
Classification: Main group / Transition metal (structure images omitted)
• AX2: - / Linear (180°)
• AX3: - / Trigonal planar (120°)
• AX4: - / Square planar (90°)
• AX5: Trigonal bipyramidal (90°, 120°) / Trigonal bipyramidal or Square pyramidal[9]
• AX6: Octahedral (90°) / Octahedral (90°)
• AX7: Pentagonal bipyramidal (90°, 72°) / Pentagonal bipyramidal, Capped octahedral or Capped trigonal prismatic[10][7]

Isovalent hybridisation

Although ideal hybrid orbitals can be useful, in reality most bonds require orbitals of intermediate character. This requires an extension to include flexible weightings of atomic orbitals of each type (s, p, d) and allows for a quantitative depiction of bond formation when the molecular geometry deviates from ideal bond angles. The amount of p-character is not restricted to integer values; i.e., hybridisations like sp2.5 are also readily described. The hybridisation of bond orbitals is determined by Bent's rule: "Atomic s character concentrates in orbitals directed toward electropositive substituents."

Molecules with lone pairs

For molecules with lone pairs, the σ lone pairs and bonding pairs hybridise to varying extents[15] to give the molecule's bond angles. For example, the two bond-forming hybrid orbitals of oxygen in water can be described as sp4, which gives the interorbital angle of 104.5° (for two equivalent hybrids of the form N(s + λp), orthogonality requires 1 + λ²cos θ = 0, so λ² = 4 corresponds to cos θ = -1/4, i.e. θ ≈ 104.5°). This means that they have 20% s character and 80% p character, and it does not imply that a hybrid orbital is formed from one s and four p orbitals on oxygen, since the 2p subshell of oxygen only contains three p orbitals. Consistent with the treatment of multiple bonds above, the lone pairs are also distinguished according to σ and π symmetry. For example, in water one of the two lone pairs is in a pure p-type orbital, with its electron density perpendicular to the H-O-H framework,[15][16] while the other lone pair is in an s-rich orbital that is in the same plane as the H-O-H bonding.[15][16]

• Trigonal pyramidal (AX3E1): e.g., NH3
• Bent (AX2E1-2): planar symmetry with one π bond or lone pair; e.g., SO2, H2O
• Monocoordinate (AX1E1-3): linear symmetry with two π bonds or lone pairs; e.g., CO, SO, HF

Hypervalent molecules

For hypervalent molecules with lone pairs, the bonding scheme can be split into two components: the "resonant bonding" component and the "regular bonding" component. The "regular bonding" component has the same description as above, while the "resonant bonding" component consists of resonating bonds utilizing p orbitals. The table below shows how each shape is related to the two components and their respective descriptions (structure images omitted); the regular bonding component is either bent, monocoordinate, or absent:

• Resonant component along a linear axis: with a bent regular component, Seesaw (AX4E1) (90°, 180°, >90°); with a monocoordinate regular component, T-shaped (AX3E2) (90°, 180°); with no regular component, Linear (AX2E3) (180°)
• Resonant component in a square planar equator: with a monocoordinate regular component, Square pyramidal (AX5E1) (90°, 90°); with no regular component, Square planar (AX4E2) (90°)
• Resonant component in a pentagonal planar equator: with a monocoordinate regular component, Pentagonal pyramidal (AX6E1) (90°, 72°); with no regular component, Pentagonal planar (AX5E2) (72°)

Hybridisation defects

Hybridisation of s and p orbitals to form effective spx hybrids requires that they have comparable radial extent.
While 2p orbitals are on average less than 10% larger than 2s, in part attributable to the lack of a radial node in 2p orbitals, 3p orbitals which have one radial node, exceed the 3s orbitals by 20-33%.[17] The difference in extent of s and p orbitals increases further down a group. The hybridisation of atoms in chemical bonds can be analyzed by considering localized molecular orbitals, for example using natural localized molecular orbitals in a natural bond orbital (NBO) scheme. In methane, CH4, the calculated p/s ratio is approximately 3 consistent with "ideal" sp3 hybridisation, whereas for silane, SiH4, the p/s ratio is closer to 2. A similar trend is seen for the other 2p elements. Substitution of fluorine for hydrogen further decreases the p/s ratio.[18]The 2p elements exhibit near ideal hybridisation with orthogonal hybrid orbitals. For heavier p block elements this assumption of orthogonality cannot be justified. These deviations from the ideal hybridisation were termed hybridisation defects by Kutzelnigg.[19] Photoelectron spectra[edit] There is a popular misconception that the concept of hybrid orbitals incorrectly predicts the ultraviolet photoelectron spectra of many molecules. While this is true if Koopmans' theorem is applied to localized hybrids, quantum mechanics requires that the (in this case ionized) wavefunction obey the symmetry of the molecule which implies resonance in valence bond theory. For example, in methane, the ionized states (CH4+) can be constructed out of four resonance structures attributing the ejected electron to each of the four sp3 orbitals. A linear combination of these four structures, conserving the number of structures, leads to a triply degenerate T2 state and a A1 state.[20] The difference in energy between each ionized state and the ground state would be an ionization energy, which yields two values in agreement with experiment. Hybridisation theory vs. molecular orbital theory[edit] Hybridisation theory is an integral part of organic chemistry and in general discussed together with molecular orbital theory. For drawing reaction mechanisms sometimes a classical bonding picture is needed with two atoms sharing two electrons.[21] Predicting bond angles in methane with MO theory is not straightforward. Hybridisation theory explains bonding in alkenes[22] and methane.[23] Bonding orbitals formed from hybrid atomic orbitals may be considered as localized molecular orbitals, which can be formed from the delocalized orbitals of molecular orbital theory by an appropriate mathematical transformation. For molecules with a closed electron shell in the ground state, this transformation of the orbitals leaves the total many-electron wave function unchanged. The hybrid orbital description of the ground state is therefore equivalent to the delocalized orbital description for ground state total energy and electron density, as well as the molecular geometry that corresponds to the minimum total energy value. See also[edit] 1. ^ Gillespie, R.J. (2004), "Teaching molecular geometry with the VSEPR model", Journal of Chemical Education 81 (3): 298–304, Bibcode:2004JChEd..81..298G, doi:10.1021/ed081p298  2. ^ Pauling, L. (1931), "The nature of the chemical bond. Application of results obtained from the quantum mechanics and from a theory of paramagnetic susceptibility to the structure of molecules", Journal of the American Chemical Society 53 (4): 1367–1400, doi:10.1021/ja01355a027  3. ^ L. 
Pauling The Nature of the Chemical Bond (3rd ed., Oxford University Press 1960) p.111–120. 4. ^ "Acids and Bases". Orgo Made Simple. Retrieved 23 June 2015.  5. ^ McMurray, J. (1995). Chemistry Annotated Instructors Edition (4th ed.). Prentice Hall. p. 272. ISBN 978-0-131-40221-8 6. ^ a b Weinhold, Frank; Landis, Clark R. (2005). Valency and bonding: A Natural Bond Orbital Donor-Acceptor Perspective. Cambridge: Cambridge University Press. pp. 381–383, 367. ISBN 978-0-521-83128-4.  7. ^ a b c Kaupp, Martin (2001). ""Non-VSEPR" Structures and Bonding in d(0) Systems". Angew Chem Int Ed Engl. 40 (1): 3534–3565. doi:10.1002/1521-3773(20011001)40:19<3534::AID-ANIE3534>3.0.CO;2-#.  8. ^ King, R. Bruce (2000). "Atomic orbitals, symmetry, and coordination polyhedra". Coordination Chemistry Reviews 197: 141–168. doi:10.1016/s0010-8545(99)00226-x.  9. ^ a b Angelo R. Rossi, Roald. Hoffmann (1975). "Transition metal pentacoordination". Inorganic Chemistry 14 (2): 365–374. doi:10.1021/ic50144a032.  10. ^ a b Roald. Hoffmann , Barbara F. Beier , Earl L. Muetterties , Angelo R. Rossi (1977). "Seven-coordination. A molecular orbital exploration of structure, stereochemistry, and reaction dynamics". Inorganic Chemistry 16 (3): 511–522. doi:10.1021/ic50169a002.  11. ^ E. Magnusson. Hypercoordinate molecules of second-row elements: d functions or d orbitals? J. Am. Chem. Soc. 1990, 112, 7940–7951. doi:10.1021/ja00178a014 12. ^ C. R. Landis, F. Weinhold (2007). "Valence and extra-valence orbitals in main group and transition metal bonding". Journal of Computational Chemistry 28 (1): 198–203. doi:10.1002/jcc.20492.  13. ^ Gernot Frenking, Nikolaus Fröhlich (2000). "The Nature of the Bonding in Transition-Metal Compounds". Chemical Reviews 100 (2): 717–774. doi:10.1021/cr980401l.  14. ^ David L. Cooper , Terry P. Cunningham , Joseph Gerratt , Peter B. Karadakov , Mario Raimondi (1994). "Chemical Bonding to Hypercoordinate Second-Row Atoms: d Orbital Participation versus Democracy". Journal of the American Chemical Society 116 (10): 4414–4426. doi:10.1021/ja00089a033.  15. ^ a b c Allen D. Clauss, Stephen F. Nelsen, Mohamed Ayoub, John W. Moore, Clark R. Landis and Frank Weinhold (2014). "Rabbit-ears hybrids, VSEPR sterics, and other orbital anachronisms". Chemistry Education Research and Practice 15: 417–434. doi:10.1039/C4RP00057A.  16. ^ a b Laing, Michael (1987). "No rabbit ears on water. The structure of the water molecule: What should we tell the students?". J. Chem. Educ. 64: 124–128. doi:10.1021/ed064p124.  17. ^ Kaupp, Martin (2007). "The role of radial nodes of atomic orbitals for chemical bonding and the periodic table". Journal of Computational Chemistry 28 (1): 320–325. doi:10.1002/jcc.20522. ISSN 0192-8651.  18. ^ Kaupp, Martin (2014) [1st. Pub. 2014]. "Chapter 1: Chemical bonding of main group elements". In Frenking, Gernod & Shaik, Sason. The Chemical Bond. Wiley-VCH. ISBN 978-1-234-56789-7.  19. ^ Kutzelnigg, W. (August 1988). "Orthogonal and non-orthogonal hybrids". Journal of Molecular Structure: THEOCHEM 169: 403–419. doi:10.1016/0166-1280(88)80273-2.   – via ScienceDirect (Subscription may be required or content may be available in libraries.) 20. ^ Sason S. Shaik; Phillipe C. Hiberty (2008). A Chemist's Guide to Valence Bond Theory. New Jersey: Wiley-Interscience. ISBN 978-0-470-03735-5.  21. ^ Clayden, Jonathan; Greeves, Nick; Warren, Stuart; Wothers, Peter (2001). Organic Chemistry (1st ed.). Oxford University Press. p. 105. ISBN 978-0-19-850346-0.  22. 
^ Fox, Marye Anne; Whitesell, James K. (2003). Organic Chemistry (3rd ed.). ISBN 978-0-7637-3586-9. 23. ^ Bruice, Paula Yurkanis (2001). Organic Chemistry (3rd ed.). ISBN 978-0-130-17858-9. External links
NEW ENERGY TECHNOLOGIES #6

Contents
1. Large-Scale Sakharov Condition, David Noever and Christopher Bremner
2. Matter as a Resonance Longitudinal Wave Process, Alexander V. Frolov
3. Physical Principles of the Time Machine, Alexander V. Frolov
4. Time Machine Project, Alexander V. Frolov
5. Kozyrev-Dirac Radiation, Ivan M. Shakhparonov
6. The Electrical Vortex Non-Solenoidal Fields, S. Alemanov
7. Physical Mechanism of Nuclear Reactions at Low Energies, V. Oleinik, Yu. Arepjev
8. The Evolution of Lifter Technology, T. Ventura
9. Reality and Consciousness in Education and Activity, A. Smirnov
10. Old New Energy, Y. Andreev, A. Smirnov
11. On the Influence of Time on Matter, A. Belyaeva
12. Life Without Diseases and Ageing: Preventive Electrical Bio-Heater Features, A. Belyaeva
13. Technical Report on Belyaeva's High-Efficiency Ceramic Heater, Sh. Mavlyandekov
14. Fundamental Properties of Aether, A. Mishin
15. Effect of Magnetic Blow Wave Field on Wine Systems, I. Shakhparonov and others
16. Nikola Tesla and Instantaneous Electric Communication, V. Korobeynikov
17. The Unified Gravitation Theory, I. Kuldoshin
18. New Sources of Energy from the Point of View of Unitary Quantum Theory, L. G. Sapogin, Yu. A. Ryabov, V. V. Graboshnikov
19. Antigravitation Force and Antigravitation of Matter: Methods of Its Creation, A. K. Gaponov
20. The Capacitor Which Has the Energy of an Atomic Bomb (Review of A. Gaponov's Research)

REFERENCES
1. Richard P. Feynman, Robert B. Leighton, Matthew Sands. The Feynman Lectures on Physics, Addison-Wesley, 1964, Vol. 2, Ch. 1, Paragraph 6 "Electromagnetism in Science and Technology" (the very end of the paragraph).
2. J. Maxwell, Selected Works on the Electromagnetic Field Theory, Gostekhizdat, Moscow (1954).
3. G. V. Nikolaev, Non-contradictory Electrodynamics. Theories, Experiments, and Paradoxes, Publishing House of the Tomsk State University, Tomsk (1997).
4. A. S. Kompaniets, in: Theoretical Physics, State Technical and Theoretical Press, Moscow (1957), pp. 126-128.
5. R. T. Sigalov, T. I. Shapovalova, Kh. Kh. Karimov, and N. I. Samsonov, New Research of Forces of the Magnetic Field, FAN Press of the Uzbekskaia SSR, Tashkent (1975).
6. Ya. I. Frenkel, Electrodynamics. Vol. 1, United Scientific and Technical Presses, Leningrad/Moscow (1934).
7. G. V. Nikolaev and B. V. Okulov, Inertial Properties of Electrons, deposited at VINITI, No. 4399-77, Moscow (1978).
8. Observations of the Aharonov-Bohm Effect, Nature, No. 7, 106 (1983).

...consequence of many commonly accepted concepts and dogmas of the modern "scientific perspective of natural phenomena". This crisis situation in modern physics is a direct consequence of many conservative scientific viewpoints, unfortunately supported and protected by modern official academic science. The evolution of our consciousness has been influenced by many undoubtedly well known experts and has been evolving for a long time in the environment of a specific scientific vacuum, and it requires immediate revival. Even the methods used for dissemination of new knowledge should be improved, if one actually wishes to accelerate the progress of Humankind. The perspective for practical applications of new, previously unknown scientific phenomena and effects looks very attractive, and they may be achieved by the cooperative efforts of the human intellect. New breakthrough technologies of the 21st Century will require serious changes of many commonly accepted
concepts and dogmas in fundamental physics. This process of progressive development cannot be 9. G. V. Nikolaev, Scientific Vacuum. Crisis in Basic Physics. Is There Any Way Out?! Publishing House Kursiv, Tomsk stopped. (1999). Theoretical Background Sakharov Condition Zel’dovich [1] first suggested that gravitational interactions could lead to a small disturbance in the (non zero) quantum fluctuations of the vacuum and thus David Noever and Christopher Bremner give rise to a finite value of Einstein’s cosmological NASA Marshall Space Flight Center, constant in agreement with astrophysical data. Using Space Sciences Laboratory dimensional analysis and the suggestion by Zel’dovich, Mail Code: ES76, Huntsville AL 35812 Sakharov [2] derived a value for Newton’s gravitational constant, G , in only one free parameter, frequency, ω : Editor’s note: This article was presented by the autors for publication in New Energy Technologies. For the first G ~ c5 h ∫ ω dω ~ 1 ∫ ω dω time it was published in 1999 by the American Institute where c is the speed of light and h is the Planck of Aeronautics and Astronautics, Inc. All copyrights belong to the authors. constant. The free parameter in frequency when integrated over all values from zero to high frequencies Abstract must contain the usual integration cutoff value (Planck frequency on observable electromagnetic phenomenon). Recent far reaching theoretical results have used the quantum vacuum noise as a fundamental Puthoff [3] and others [4 5] have extended Sakharov’s electromagnetic radiation field to derive a frequency condition in a relativistically consistent model to determine constants of proportionality. His model (ω ) dependent version of Newton’s gravitational derives an acceleration term in first order expansion (in coupling term, G (ω ) . This paper reconciles the cut-off flat space time), then equates inertial and gravitational mass (by the equivalence principle) to make contact frequency with the observed cosmological constant, and then briefly puts forward a realizable laboratory test with the gravitational constant, G , directly as: case in the 10 - 100 MHz frequency range. One analogy is drawn between the classical vacuum energy G = (πc 5 / hω c2 ) ~ 1 / ∫ ωdω experiments with attraction between two closely spaced plates (Casimir cavity) and the arbitrarily dense material boundaries possible in Bose condensates, such which is the Sakharov condition [2,3]. This paper revisits as irradiation at MHz frequencies of superfluid helium the meaning of the cutoff frequency, ω c ,for radiation or superconductors. interactions, of which the quantum vacuum [6-10] and Page 204 Planck frequency are only the leading terms, and for fluctuations are included. (N.B. To account for equal which linear combinations of forces can introduce other gravitational mass effects in neutrons and protons, the plausible frequencies. One purpose of this ZPF oscillations must involve subatomic charges, or reexamination is whether the resulting gravitational ‘parton’ effects. The assumption derives from high coupling constant, G , can be reconciled with the frequency interactions of ZPF wherein these subatomic anticipated energy density of the universe [11] without particles are asymptotically free to oscillate as resorting to extreme space time curvature and thus yield independent or free particles as quantum noise). enough critical density to contain the expansion of the universe. 
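The cutoff frequency that reproduces the laboratory value of G can be checked with a few lines of arithmetic. The sketch below is added purely as an illustration (it is not part of the paper); it reads the h in the printed formula as the reduced Planck constant, as is conventional (using h instead only shifts the result by a factor of about √(2π) and does not change the conclusion).

```python
# Quick numerical check (added for illustration; not part of the paper):
# inverting the quoted Sakharov condition G = pi*c^5/(hbar*omega_c^2) for the
# measured Newton constant places the cutoff at essentially the Planck frequency.
import math

c    = 2.998e8       # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J*s
G    = 6.674e-11     # Newton's gravitational constant, m^3 kg^-1 s^-2

omega_c      = math.sqrt(math.pi * c**5 / (hbar * G))   # cutoff implied by measured G
omega_planck = math.sqrt(c**5 / (hbar * G))             # Planck angular frequency

print(f"omega_c      = {omega_c:.3e} rad/s")       # ~3.3e43
print(f"omega_Planck = {omega_planck:.3e} rad/s")  # ~1.9e43
print(f"ratio        = {omega_c / omega_planck:.3f} (= sqrt(pi))")
```

Taken at face value, the condition therefore puts the cutoff at the Planck scale, within a factor of √π; the rest of the paper asks whether cosmological energy-density constraints instead favour a far lower effective cutoff, in the 10-100 MHz range discussed below.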
Finally we particularize the case to the high- A further far reaching consequence [3] is mass itself density fluctuations possible in Bose condensates [12], becomes interpretable as a dependent quantity derived a potential experimental test case for how the effects from a damped (with decay constant Ã) oscillation of vacuum noise might manifest observably. driven by random ZPF: One far-reaching consequence of the vacuum energy m = Γc 3 / G = 2hΓ / π 2 c 3 ∫ ωdω model is the attractive force of gravity becomes reducible to the radiative interaction between with the only two free parameters, the damping factor oscillating charges, e.g. the zero point field (ZPF) Γ, and again the frequency, ω . The internal kinetic applied to subatomic charges. Mass and inertia arise energy of the system contributes to the effective mass. from the fundamentally electromagnetic ZPF This leads to an overall average spectral density, written in terms of mass as: This random background gives the usual quantum mechanical energy spectrum from particle field effects: ∆ρ ′(ω ) = m 2 c 5ω / 2hω c4 r 4 ρ (ω )dω ~ ω 3 dω for the electromagnetic field distribution near (1/r4) to the mass, m, which in detail is half electric and half a very important dimensional relationship, since the third power in frequency avoids anomalous Doppler shifts from velocity boosts, or stated alternatively is the One additionally attractive feature is the correct spectra for a Lorentzian (non accelerated) correspondence between this derivation and the view invariant radiation field [13]. of gravity as a dynamical scaleinvariance breaking More specifically, the energy spectrum [3] can be model (e.g. symmetry breaking near the Planck mass written as: energy [14]). A final result includes the force calculation between two ZPF radiation oscillators of the correct ρ (ω )dω = [ω 2 / π 2 c 3 ][hω / 2]dω = form yielding Newton’s average force law = hω 3 / 2π 2 c 3 dω ~ ω 3 dω < F >= −Gm 2 / r 2 which is an expression in the first parenthesis of the Thus, for a Newtonian force to first order in a flat space density of the normal modes and in the second time, Sakharov [2] could be credited for proposing parenthesis of the average energy per mode. When this gravity as not a fundamentally separate force and energy density is integrated over all frequencies, the Puthoff [3] and co workers [4-5] applied the vacuum ω3 divergence produces well known infinities in the electromagnetic field to equate gravity to a long-range integration limit of high frequencies, thus an assumed radiation force (e.g. van der Waals like force). Higher cutoff frequency (appropriate to experimental order oscillator y gravity modes vary as observation limits at the Planck frequency), is usually introduced: (sin[ω o / ω c ])2 . To first order, a weak G coupling constant, ω ρ = (ñ 5 / hG ) 1/ 2 ( ) G = πc 5 / hω c2 , appears for high frequency cutoff at the Planck scale. A corollar y in analogy to For mass, m , moving in an accelerated reference frame electromagnetic shielding by ordinary matter can be g = -a=Gm/r2 , the resulting energy spectrum includes rationalized as the problem of frequency mismatch at a gravitational spectral shift [3], high Planck frequencies, e.g. ZPF cannot be fundamentally shielded. In other words, frequency mismatch precludes gravity shielding by matter. 
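One elementary step between the spectral density quoted above and the total vacuum energy density used on the next page is the integral up to the cutoff; for reference (a short bridging step added here, not spelled out in the scanned text):

$$ \rho(E) \;=\; \int_0^{\omega_c} \frac{h\,\omega^3}{2\pi^2 c^3}\, d\omega \;=\; \frac{h\,\omega_c^4}{8\pi^2 c^3}, $$

which is the cutoff-dependent energy density that the argument then relates to the critical density required to contain the expansion of the universe.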
∆ρ ' (ω )dω = hω / 2π 2 c 5 [Gm / r 2 ]2 dω ~ 1 / r 4 dω The purpose here is to revisit the only free parameter, a kind of short range (1/r4) gravitational energy shift, the frequency cutoff, more in the spirit of a mass but electromagnetic in origin when zero point resonant frequency. The motivation for this approach Page 205 can be summarized as: 1) the generality of other complementary radiation effects without relying on ZPF alone (e.g. other isotropic, homogeneous radiation ρ (E ) = ∫ ρ (E )dE = hω c4 / 8π 2 c 3 sources); 2) the weak coupling constant, G , yields a vastly smaller than observed size of the universe (e.g. which must have a mass equivalent, contribute to the too small cosmological constant) when the Planck universe’s curvature, and thus have a fundamental frequency is used as a cutoff value; and (3) the particle relation to the critical density to contain the expansion of the universe [14 15]. The mass - equivalent ZPF to mass, m = Γc3/G, can be viewed as a renormalized or reach the universe’s critical density [15], ‘dressed’ mass with a resonant interaction potential that is frequency dependent in its coupling constant, G , and ρ ~ 10 −29 g cm -3 would necessarily limit the cutoff with ‘bare’ mass that is large, ( ) mο ~ mρ / m , where frequency for gravity to the value, ω c < 7 ⋅10 7 s −1 , or between 10 -100 MHz. mρ = (hc / G ) is 1/ 2 the experimentally unobservable, the Planck mass. A higher frequency greatly overshoots the cosmological constant, Λ , and induces extreme curvature in the In particular, why this large ‘bare’ mass does not universe. This problem has been cited frequently and generate a large gravitational field is not a unique stated most bluntly, as either ZPF or the cosmological anomaly in the Sakharov derivation, since similarly large constant requires revision. The relevance here arises vacuum point energies are common to field theories. from similarly large positive coupling terms in quantum gravity [15], which also generate a local gravitational The important point is that the derivation G (ω ) is Instability for typical upper limits on the cosmological general however to any isotropic radiation field with constant, Λ/8πG<1012 cm-4. [ ] the Lorentz invariant energy spectra ρ (ω ) ~ ω 3 , thus the candidates for the cutoff frequency of the particular Rather than to dwell on the inconsistencies that plague radiation source can be interpreted as a Planck scale attempts to reconcile quantum gravity, we particularize the problem to a case where the restriction to Planck only if the rest mass, mο , is not composed of many scale becomes less clew, namely the high density terms, rather than just the ZPF leading term. Since the fluctuations and universal scaling introduced in a Bose ZPF is akin to a van der Waals force [3 5], polarizability condensate. A Bose condensate, such as superfluid (in charge and mass) must be considered, but without helium or superconductors [15 19], becomes of potential also excluding any number of linear combinations that interest, mainly because of its arbitrarily dense might have alternative cutoff frequencies, ω ñ , or boundaries and the classic Casimir experiment [20 22] which allows such dense material boundaries (two damping terms, Ã, ‘ala particle physics interpretations closely spaced conducting plates), if available, to for resonant masses during renormalization. In other modulate the background quantum fluctuation of ZPF. 
words, once a gravitational energy spectrum, ρ (ω ) is In other words, the matter-ZPF interaction becomes postulated that is Lorentzian invariant, many measurable by the observed attraction between two fundamental sizes (or corresponding frequency values) material boundaries. What dense boundaries might are smeared (or dressed) by any number of characteristic generate in Bose condensates remains a subject of great frequencies between zero and the high frequency interest. electromagnetic (Planck) cutoff ω ρ . Quite simply, is the expression, ω ñ = ω ρ , a requirement for all radiation The significant case to investigate is whether Casimir- sources? like interactions [20 22] will not only couple to ZPF radiation at a scale comparable to the quantum noise Many types of particle oscillations may satisfy the (or other radiation field), but also alter the value imposed general requirements of a Sakharov condition, each by the Sakharov condition for G. It remains an open having a characteristic mass (and energy) as in question whether this potential coupling interaction calculating the mass of any fundamental particle at its shares, as in ordinary critical phenomenon, the density resonant frequency (including underlying partial correlation function, Φ, that is both independent of the charges or dense bosons). This brings the calculation coupling strength (or universal in renormalization) and to a consideration of the high density fluctuations consistent with the observed average energy density characteristic of a Bose condensate [15 19]. While the of the visible universe. high density variation may intrinsically be of interest, the exploration has more to do with reconciling the ZPF Thus the purpose here has bow to restate the Sakharov interpretation of the Sakharov condition with the condition in the gravitational coupling constant, G, observed cosmological constant [14]. based on its only free parameter, a frequency cutoff, ω c . Any potential relevance arises from similarly large A “top down” view of calculating the cutoff frequency values for the positive coupling term in quantum gravity, imposes the self consistency test for the cosmological which generate conditions for a local gravitational constant, Λ , from the outset. To calculate, the total frequency integrated energy density of the universe instability for typical upper limits on the constant, must be included: Λ/8πG<1012 cm-4. Page 206 To restate the Sakharov condition, matter in the vacuum contact with proposals to modulate the Casimir provides boundaries for reduced ‘Casimir like’ modes capacitative plates for continuous extraction of energy available for otherwise isotropic radiation from quantum [27]. This result requires fur ther investigation fluctuations (broad spectral noise). That this view experimentally, particularly to compare with previous reproduces Einstein gravity has been examined, reports for anomalies in AC- tuned electrical capacitors including the full relativistic derivation [4-5]. The details [28]. of the appropriate mass, however, remain buried in the kinetic energy of general internal particle (‘parton’) motion [3]. Any appeal to a specific par ton 1. Ze1’dovich Ya. B. JETP Letters, 6, 345, 1967. representation is limited only by essentially free 2. Sakharov A. Vacuum quantum fluctuations in Curved Space particles with high frequency interactions, including and the Theory of Gravitation, Sov. Phys. Reports, 12,1968,1040 underlying partial charges or dense bosons. 
The basis of considering arbitrarily high-density fluctuations in 3. Puthoff H. E. (1989) Gravity as a zero-point fluctuation force, Physical Review A, 39(5): 2333 2342, March 1, 1989. Bose condensate in analogy to the ZPF-Casimir experiment remains both an empirical and theoretical 4. Haisch B., Rueda A., Puthoff H.E., (1994) Inertia as a Zero Point Field Lorentz Force, Physical Review A, 49:678 694. case to examine. There exist laboratory scale cases [15- 5. Haisch B., Rueda A., Puthoff H.E.,”Inertia as a Zero Point Field 19] where resonant radiation in the required 10-100 MHz Force” Physical Review A 49, N 2, 678 (1994). range appear to produce anomalous effect for such Bose condensates as superconductors, but further work to 6. Ambjorn J. and Wolfram S. (1983) Properties of the Vacuum, 1. Mechanical and Thermodynamic, and Properties of the Vacuum, confirm these results would be needed. In other 2. Electrodynamics, Annals of Physics contexts, these effects have been discussed as the 7. Ambjorn J. and Wolfram S. (1983) Properties of the Vacuum. 1. Schiff-Barnhill effect for superconductors interacting Mechanical and Thermodynamic, Annals of Physics, 147:1 32. with a gravitational field [23], but for the static rest moss rather than an effective mass in a conduction band. 8. Fulcher et al., “The Decay of the Vacuum,” Sci. Am., vol. 241, p. 150, Dec. 1979 Experimental Propositions 9. Puthoff, H.E. “Source of Vacuum Electromagnetic Zero Point Energy” Physical Review A 4 0, 4857 Nov 1 (1989); Errata and Comments, Physical Review A 4 1, March 1(1990); Physical J. Weber [24,25] proposed the use of a superconducting Review A4 4, 3382, 3385 (1991) Bose condensate for gravity wave detection, principally 10. Senitzky I.P “Radiation Reaction and Vacuum Field Effects in because of its potentially higher signal to noise ratio in Heisenberg Picture Quantum Electrodynamics”, Phys. Rev. carrying electrical signals upon length dilations in a Lett. 31(15), 955 (1973). As pointed out by Puthoff [3] the relativistic framework for gravity waves travelling near relativistic results for the Sakharov condition have so far been the speed of light. W. Weber and Hickman [26] derived encouraging, while the consequences for nuclear interactions an experimentally testable relation based on torquing in all coordinate frames have not been fully explored. of a charged capacitor parallel to a gravity field, with 11. da Costa L. N., Freudling W., Wegner G., Giovanelli R., Haynes M. P and Salzer J. J. (1996) The Mass Distribution in the Nearby Universe, Astrophysical Journal Letters, 468: LS L8 and τ = 2 E g / π [α / (1 − α ) ] 1/ 2 Plate L 1 12. Modanese G. (1996) Theoretical analysis of a reported weak gravitational shielding effect, Europhy. Lett., 35(6):413 418. where the capacitor will rotate relative to the gravity 13. Shupe M.A. The Lorentz invariant vacuum media, Am. J. Phys. vector, for α = 2GM / rc2 , is Schwarzschild radial 53, 122 (1985). A cautionary note is that lower frequency cutoffs coordinate [dR = dr(1-α)1/2] , Eg is dependent on the can violate Lorentzian invariance, thus allowing a moving capacitor charge and geometry of the plates, detector to reveal absolute motion by recording Doppler sifted frequencies. Standard methods might treat such effects like Eg = [Q2d/2εWL(1-α)1/2], for a plate separation, and the cancellation of terms that remove anomalous ZPF infinities radial dimensions,W and L, charge Q, and ε the from field theories, but these topics remain to be explored. permittivity of free space. 
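For a sense of scale of the dimensionless parameter α = 2GM/rc² defined above, the sketch below (an added illustration, not a calculation from the paper) evaluates it at the Earth's surface; its smallness is part of why the maximum torque quoted next is so tiny.

```python
# Order-of-magnitude check (not from the paper): the relativistic parameter
# alpha = 2GM/(r c^2) entering the Weber-Hickman torque estimate, evaluated
# for a capacitor at the Earth's surface.
G, c = 6.674e-11, 2.998e8           # SI units
M_earth, r_earth = 5.972e24, 6.371e6

alpha = 2 * G * M_earth / (r_earth * c**2)
print(f"alpha at Earth's surface ~ {alpha:.2e}")   # ~1.4e-9
```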
For plate separations of 2 mm 14. Zee, A. Phys. Rev, Lett, 42,417 (1979); Phys. Rev. D. 23, 858, on Earth, the maximum torque is approximately (1981). τ = 10 −12 Nm, when charged to 2/3 dielectric 15. Torr D. G. and Li. N. (1993) Gravitoelctric Electric Coupling breakdown. While not entirely promising for detection vVia Superconductivity, Foundations of Physics Letters, 6(4): 371 383. of such low torques, the large separation (2 mm) distance between capacitative plates naturally prompts 16. Unnikrishan C. S. (1996) Does a superconductor shield gravity? Physics C, 266:133 137. generalization to the classic Casimir force [21] experiments only recently confirmed experimentally 17. Podkletnov E. and Nieminen R (1992) A Possibility of Gravitational Force Shielding by Bulk YBa 2 Cu 3 O 7-x [20]. In particular, we rewrite the torque values to Superconductor, Physics pp.203: 441 444. include the frequency terms derived with the Sakharov condition 18. Li N, and Torr D. G. (1992) Gravitational effects on the magnetic attenuation of superconductors, Physical Review B, 46(9): 5489 [G = (πc 5 / hω c2 )] : 5494. A simple consequence of the Sakharov condition G = (πc 5 / hω c2 ) ~ 1 / ∫ ωdω , can be written for the α = 2 Mπc / hω r 3 2 gravitomagnefic per meability as: The appeal of this formulation is that a frequency ( ) µ g = 4πG / c 2 = 4π 2c 3 / h ∫ ° ωcωdω ~ 1 / ∫ ωdω which suggest that the same frequency resonance implied by the ZPF dependent torque is derived, which further makes derivation will share similar consequences for vector gravity Page 207 effects. See also, DeWitt, B. S. Superconductors and 24. Weber J. (1960), Detection and Generation of Gravitational Gravitational Drag, Phys. Rev. Lett. 16, 102(1966). Waves, Physical Review, 117(1)306 313. 19. Li N., Noever D., Robertson T., Koczor R., and Brantley, W. (1997) 25. Weber J. (1966) Gravitational Shielding and Absorption, The Static Test for a Gravitational Force Coupled to Type II YBCO Physical Review (The American Physical Society), 146(4): 935 Superconductors, Physics p., 55, 287. 937. 20. Lamoreaux S. K. (1997) Demonstration of the Casimir Force in 26. Weber W. and Hickman H. (1997) A possible interaction between the 0.6 to 6 mm Range, Phys. Rev. Letters, 78:5 8. gravity and the electric field, Spec. Science Tech. 20, 133 136 21. Milonni P et al., “Radiation pressure from the vacuum: Physical 27. Forward R.L. “Extracting electrical energy from the vacuum by interpretation of the Casimir force”, Phys. Rev. A, Vol. 38, No. cohesion of charged foliated conductors” Phys. Rev. B, Vol. 30, 3, 1621 August 1988. No. 4,1700 August 1994 22. . Milonni P W. (1994) The Quantum Vacuum, Academic Press, 28. Woodward, J. F. (1992) A Stationary Apparent Weight Shift San Diego, CA. From a Transient Machian Mass Fluctuation, Foundations of Physics Letters, 5:425 442. 23. Schiff L.I. and Barnhill M.V. Bull. Am. Phys. Soc. 11, 96, (1966) and refn. 18. essential development of the generally accepted notions The Problem of Electron and about space and time. At present all the necessary Physical Properties of Time: prerequisites are available, both theoretical and technical, for the practical mastering of the own fields To the Electron Technologies of the 21st of particles and of the physical properties of time. 1. Introduction. The Problem of Electron and Future V.P Oleinik Outlook Department of General and Theoretical Physics, Electrodynamics, what is this? What is its value for National Technical University of Ukraine man? 
Electrodynamics is the theory of electromagnetic “Kiev Polytechnic Institute”, interaction, one of four interactions existing in nature. Prospect Pobedy 37, Kiev, 03056, Ukraine **Institute of Its role in the life of society is seen from the fact that Semiconductor Physics, National Academy of Sciences, Prospect Nauky 45, Kiev, 03028, Ukraine; the most part of natural phenomena, which we e-mail: encounter at every step, is of electromagnetic origin: it is due to the interaction of electromagnetic field with “…it is necessary to periodically subject to the electrically charged particles entering into atoms and deepest revision the principles, which were molecules. It is fair to say that electromagnetism plays recognized as final and were no longer discussed”. Louis de Broglie a crucial role in the life of mankind as it determines the ways of technical advance of society [1]. The key problem of quantum electrodynamics is the The results of an approach based on the synthesis of problem of electron, which can be formulated as follows: standard quantum electrodynamics and of the ideas of to construct from the first principles a non-contradictory self-organization in physical systems are briefly model of electron, which takes into account outlined. The quantum model of electron as an open experimental facts, i.e. to find the dynamical equation self-organizing system is constructed, with the physical capable of describing the unique physical properties of mechanism of self-organization consisting in the back electron, its internal structure, its behaviour when it influence of the own field created by electron on the interacts with electromagnetic field. same electron. The own field is considered as a physical property of electron, intrinsically inherent in electrically Electron was discovered a little more than 100 years charged matter, which is included in the definition of ago, in 1897. With discovering the electron the revolution the particle from the very beginning. The own field of in physics began, which has resulted in unprecedented electron endows the particle with wave properties and technical advance of society. The summit of represents a bearer of superluminal signals, which can development was reached in the middle of the 1950s be used for the creation of qualitatively new and then the long period of evolutional development communication systems. Because of the inseparable link followed, when new physical principles were used to between space and time, the force in relativistic describe various physical processes and phenomena. mechanics is the cause of change not only of the velocity The violent development of physics became slower in of particle, but also of the course of time along the the 1970s and was replaced by stagnation in the particle’s trajectory. For this reason the flow of time in subsequent years. The stagnation in electrodynamics some area of space depends on the character of physical continuing already over a period of several decades is processes, occurring in it, and, therefore, time can be gradually giving place now to a new ascent. The new controlled by slowing down or accelerating its course scientific revolution is starting, which is associated with with the help of material processes. The conclusions of electron again, much as it happened hundred years ago. 
the paper are not in conflict with the special theory of The reason is that electron is the most unique particle relativity (STR); they are a direct consequence of storing in itself the deepest mysteries of nature and the relativistic equations of motion and represent an degree, to which they are disclosed, determines the Page 208 wants to stay anonymous, until his patent application is done and university verification tests will be done). The claims are: 1200 Watts coil out with about 1076.4 Watts in into the driving motor at 3450 RPM. 8 amps 117volts at no load 9.2 amps 117 volts at full load. The output of about 1200 Watts is already a total overunity operation! As they just increase the input power by about 140 Watts only between idle and load state and they get 1200-Watts output it seems indeed a case, where Lenz law is violated! This generator also has NO motor effect! If you supply current to the coil, the permanent magnet in the center will not rotate; cause the flux just stays inside the toroid core! There you can magnetic path, while the second input coil and the see, that the back drag does not influence the second output coil extend around portions of the mechanical rotation of the magnet!" Stefan used very second magnetic path." Yes, it is the same bi-directional good criterion to prove high efficiency of the design: principle we discussed above: two parts of the magnetic There is no back-torque effect! It is most important flux and each coil produce effect to reduce flux due to aspect of Gramm’s generator. You can contact directly this superposition. Stefan Hartmann: Keplerstr. 11 B, 10589 Berlin, Germany. Tel: +49 30 345 00 497, FAX: +49 30 345 00 498 email: (Please, note: Dr. Harman referred to my old web site www.time- which is closed now). So, basic principles of MEG and Φ-machines are the same. It was patented more than 100 years ago. Primary magnetic flux is topologically separated in two (or more) fluxes, which are mutually compensated in the ring core. Advantages of MEG are absence of moving parts since special input coils produce changes of primary flux. Also level of saturation in ferromagnetic material obviously should be corresponding to intensity of primary Fig.4.2 magnetic field, which is created by the permanent Diagram of prototype by Bearden. magnet, Fig.4.1. Besides MEG the same principle can be (and already In conclusion I’d like to confirm our sincere interest was!) realized in many other systems. So, there is no to develop joint work with all new energy research any news in the USA patent #6,362,718 granted for "The teams if they are not trying to obscure the issue of Motionless Magnetic Generator". What did they claim? the technology by means of complex theoretical You can find it in the patent: "The first input coil and the constructions and common words about zero point first output coil extend around portions of the first energy. Matter as a Resonance recently accepted physical notion. In the article [1] David Noever and Christopher Bremner used it to derive Longitudinal Wave Process a frequency – dependent version of Newton’s gravitational coupling term G. On the other hand we Alexander V. Frolov can consider the quantum vacuum noise as aether fluctuations. Dr. Alexander Mishin [2] described Abstracts experiments on registration of these processes by means of special equipment. 
Both approaches (ZPF and There is experimental data on gravitation anomalies for aether fluctuation) allow to conclude that mass and cases of resonance irradiation of the Bose condensates inertia arise from these oscillations. However if we are (superfluid helium or superconductor) at 10-100 MHz considering the oscillation as some aether process then frequencies. It is developed by the author in frames of we can assume and describe some physical mechanism his aether theory that can be used for practical of this process. applications in aerospace and new energetics. One of consequence of the vacuum energy model, which ZPF or aether fluctuations is described in [1] is that “the attractive force of gravity becomes reducible to the radiative interaction between The fundamental electromagnetic radiation field (Zero oscillating charges…” Let‘s clarify which kind of Point Field) ZPF or the quantum vacuum noise is a radiation can be created by oscillating electric charges. Page 90 There are many different sources to find the answer on I think some specialization is necessary here to explain this question and one of them is the article by Prof. Kirill experimental gravity anomalies with Bose condensates P.Butusov [3] on symmetrization of Maxwell’s equations experiments (superfluid helium or superconductors): and practical methods of generation of longitudinal special process in matter can be used as the gravity waves in vacuum. So, ZPF model has a direct relation screen and this approach does not involve the with the aether model since indirectly it leads to the frequency-matching problem. question of longitudinal waves in vacuum. Physically they are waves of density of energy and in the aether We have concluded above that any matter element is a model the waves are areas of more dense and more resonance process and its energy is derived from ZPF. rarefied aether. Let’s note that there are standing waves It is useful to note that these are longitudinal wave besides moving waves. oscillations of energy density in aether. In this case, the gravity shield problem can be solved in frames of To consider the interaction of some mass particles and the aether vortex conception of matter. the fundamental field the notion of subatomic charges “partons” was introduced [1]. So, the mass itself The longitudinal wave is a moving (or standing) areas “becomes interpretable as a dependent quantity derived of rarified and thickened aether. Let’s consider the from a damped oscillation driven by random ZPF” [1]. moving wave, which is responsible for gravitation The authors wrote about “internal kinetic energy” of attraction effect. How can we stop, re-direct or reflect the mass particle and it can be considered as a function longitudinal wave in aether by means of aether vortexes of ZPF oscillation frequency. In the aether theory of mass (matter elements)? We can produce interaction with there is a similar notion of “aether vortex”, which this wave only by means of other longitudinal waves. represents some cyclical process of some frequency and it is possible to calculate its kinetic energy. This aether In macro-level this idea can be realized as longitudinal vortex model of matter elements allows to assume real wave generator. Electromagnetic processes, which can methods to change parameters of vortex and to get be used as sources of directed longitudinal waves, are changes in parameters of existence of the matter. On known and some of them are described in [3]. 
In other the other hand we can discuss the possibility to change way the gravity shield can be produced as longitudinal some physical parameters of aether in areas of the vortex waves generated by natural aether vortexes (i.e. by to get the same result. This possibility follows from the matter elements) if the matter exist in a special exited well-known N.Kozyrev’s experiments, which were state, for example for cases of resonance irradiation of named “investigation of active properties of time”. superfluid helium or superconductor at 10-100 MHz N.Kozyrev used chronal (temporal) approach in his frequencies. theory. We have to change his notion “the density of time” to “the density of aether” to get a direct link Matter element as resonance process between his experiments and the aether theory of mass. In [1] the authors wrote that it is possible to calculate N.Kozyrev and others have [4,5,6] experimentally “the mass of any fundamental particle at its resonant demonstrated that irreversible processes in matter frequency.” There is the question: what is the general produced changes of aether density in the area of the basis of whole spectrum of stable elements masses? experiment. Detectors of different type can register this change. It is obviously that any matter element (i.e. the In 1996 the author published the article “The concept aether vortex) in this area of changed aether density of mass process” [7]. At first in this work physical sense should get more inner (kinetic) energy or slow the inner and notion of 3-dimensional curvature was introduced. motion. From the chronal point of view these are changes By analogy with known mathematical notion of linear of inner time of this matter element. curvature ρ1 = (where r is radius) and uniform Gravity shield r One more interesting point that is discussed in the 2 surface curvature ρ2 = it was proposed to calculate article by Noever and Bremner [1] is a problem of gravity r shield. The authors show that resonance interaction curvature of a 3-dimensional space as with ZPF produces “the particle mass” and it can be viewed as “a renormalized or “dressed” mass with a 3 ρ3 = (1) resonant interaction potential. Similar resonance r approach is used in the conception of de Broglie’s matter The radius r in this case means that in a 3-space there waves. Also the authors [1] mentioned the existence of is some periodical process. In other words, 3- an experimentally unobservable mass. In this case ZPF dimensional matter is a resonance process. cannot be fundamentally shielded by matter since “frequency mismatch precludes gravity shielding by Further, de Broglie used formulations E=hf and E=pc matter” [1]. The only way to get screening of ZPF (where p is momentum, h is Planck constant, f is fluctuations seems to be very complex: it is necessary frequency and c is velocity of light) to derive the to provide frequency matching for whole wavelength following: band of the oscillations. hf=pc (2) Page 91 that allows us to get the well-known formulation Calculations for planet Earth in [7] were based on the known period of orbital rotation T=31557600 sec that corresponds to frequency of electromagnetic oscillations λ= (3) f = 1 / T = 3.168861 ⋅ 10 −8 (1 / s ) (8) There is another logical branch of this idea that leads and wave-length to the understanding of the mass properties of matter as a resonance process. Instead of E=pc in [7] it was λ = c / f = 9.46... ⋅ 1016 (m) (9) proposed to use E=mc2. 
In strength of the wave-particle duality we can write the equation The curvature (if this wave-length is considered as radius of the resonator) is following wave number mc 2 = hf (4) ρ = 1057.00 ⋅ 10 −20 (1 / m) (10) and from this equation the mass can be presented as resonance electromagnetic oscillations Also we can use other known data about the planet. Daily rotation period of our planet is known T=86400 h sec and we can calculate its wavelength m= f (5) c2 λ = 3469,82(m) and corresponding curvature (wave- Let’s note that f=1/T, where T is some period of number). Sure, it is also a whole number with a good oscillation. So, we can write the following accuracy: h 1 ρ = 2882 ⋅ 10 −7 (1 / m) (11) m= 2 (6) c T The laws of physics in macro cosmos and micro cosmos h are similar. From these calculations it was assumed that where 2 is new constant between mass and period whole formation of mass spectrum of stable chemical elements of matter is determined by similar physical of time. mechanism. There is an important conclusion: any mass is a process Creation of mass and there is some period of time, which corresponds to this mass. In other words, there is no physical sense of In shor t we can summarize that technology of time separate from some process of existence of mass. longitudinal waves in aether is a real basis for creation Product mass and period is a constant value, which was of matter with mass and inertia properties. N. Tesla named as a chronal constant used this method to produce different objects: from ball lightning up to electrons. Velimir Abramovic says in his h article [8]: “The principle of resonance and harmonic mT = = const (7) oscillation of aether seems to be so clear that all c2 problems of modern physics, especially a problem of energy conversion, will be solved with its development. The chronal constant is a parameter of some real By means of his vacuum tube Tesla got protons, space and it is equal to 0.73725 ⋅ 10 [ Js 2 / m 2 ] electrons and neutrons directly from aether and reproduced them at any distance. Instead of giving a possibility to the bundle of protons to move through Also in this work [7] there was a demonstration of space to some place, he created conditions for several examples of newly discovered physical law: momentary appearance of arbitrary quantity of particles spatial curvature of some natural objects (proton, planet, in the given place.” DNA molecule) is a whole number. There is some analogy with the nuclear physics notion of wave Any objects can be classified as aether vortex and number. From this fact we can assume that main natural parameters of this vortex determine its mass, electric matter elements exist in main resonance states. For charge and other properties of matter. example, if Bohr radius is 0.52917 Angstrom, then we can find the wave-length l =πd and the linear curvature The “parton” as element of matter in [1] is a useful tool is ρ = 1/l = 3.0075·109 (m) and 3-dimensional curvature for description of physical properties of aether. of this object is ρ = 3/l = 1.0025·109 (m) that is unit of mater, corresponding to simplest atom, i.e. unit matter Longitudinal waves in Woodward’s experiment engine. Let’s note that it is near the unit and some distortion of 0.0025 means non-ideal resonance state of In [1] the authors state that resonant radiation in the the system. required 10-100 MHz range appears to produce Page 92 anomalous effects for such Bose condensates as References superconductors. 
Creation of mass

In short, we can summarize that the technology of longitudinal waves in aether is a real basis for the creation of matter with mass and inertia properties. N. Tesla used this method to produce different objects, from ball lightning up to electrons. Velimir Abramovic says in his article [8]: "The principle of resonance and harmonic oscillation of aether seems to be so clear that all problems of modern physics, especially a problem of energy conversion, will be solved with its development. By means of his vacuum tube Tesla got protons, electrons and neutrons directly from aether and reproduced them at any distance. Instead of giving a possibility to the bundle of protons to move through space to some place, he created conditions for momentary appearance of arbitrary quantity of particles in the given place."

Any object can be classified as an aether vortex, and the parameters of this vortex determine its mass, electric charge and other properties of matter. The "parton" as an element of matter in [1] is a useful tool for the description of the physical properties of aether.

Longitudinal waves in Woodward's experiment

In [1] the authors state that resonant radiation in the required 10-100 MHz range appears to produce anomalous effects for such Bose condensates as superconductors. In my opinion it is a particular case of the technology of longitudinal waves in aether discussed above, due to the possibility of transformation of transverse electromagnetic waves into longitudinal waves in superconductors. This transformation in plasma is a well-known physical mechanism.

More facts to prove this idea: according to Woodward [9] there is a special requirement, i.e. the frequency of mechanical vibrations should be twice the frequency of electrical oscillations in the capacitor which demonstrates the weight anomalies. But on the other hand it is a common rule for the creation of longitudinal waves in plasma! Also it is a necessary condition for the generation of parametrical oscillations! So, we can assume that the basis of the effects in [1] and [9] is the generation of a longitudinal wave in aether.

Any element of matter can be considered as a resonance process of aether oscillations, which are longitudinal waves. There is an analogy between the description of these longitudinal waves and the well-known matter waves of de Broglie. Experimenting on the generation of longitudinal waves, and especially experiments on standing waves to get a gradient of aether pressure, allows us to develop gravity control technology.

References

1. D. Noever and C. Bremner, Large Scale Sakharov Condition, NASA Marshall Space Flight Center, 35th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit, 1999, Los Angeles, CA, USA.
2. Alexander M. Mishin, The physical system of artificial biofield (experimental research of ether), New Energy Technologies, #1, 2001, p.45.
3. Kirill P. Butusov, Symmetrization of Maxwell-Lorenz equations, New Energy Technologies, #2(5), 2002, p.14.
4. Kozyrev N.A., Selected Works, 1991, Leningrad, publ. by Leningrad State University.
5. L. Shikhobalov, N. Kozyrev's ideas today, New Energy Technologies, #2(5), 2002, p.20.
6. Alexander V. Frolov, Kozyrev on possibility of decrease of mass and weight of the body, New Energy Technologies, #2(5), 2002, p.35.
7. Frolov A.V., The Concept of Mass Process, Proceedings of the congress "New Ideas in Natural Sciences", 1996, St.Petersburg, published by PiK Co., p.123-134.
8. Velimir Abramovic, Tesla, New Energy Technologies, #1(4), 2002, p.17.
9. Woodward J.F., A Stationary Apparent Weight Shift From a Transient Machian Mass Fluctuation, Foundations of Physics Letters, 5, p.425-442, 1992.

Gerlovin's Theory of Activation

Alexander V. Frolov

This is a review of the famous book by Ilia L. Gerlovin "Basis of unified theory of all interactions in matter", published in 1990, St.Petersburg, Russia. We hope this article lets you discover some new aspects of the structure of the physical vacuum in order to develop more new experimental methods. Comments made by Alexander V. Frolov, Editor.

In [1] the author wrote about different methods to activate water solutions: mechanical, thermal, acoustic, magnetic and electrical. One of the known methods is activation by means of the electrohydraulic method. These principles are based on his two important conclusions from the TFF:

a) "Space around us is not empty; the physical vacuum consists of material physical objects, i.e. elementary particles of vacuum (EPV). These particles are responsible for the main activation processes;

b) Force interactions between atoms in a molecule and between molecules in crystals have not a spherical symmetry in the crystals of solid bodies, but an axial symmetry, and the interactions are changing in time with a very high frequency of about 10¹⁸ Hz. This feature of force interactions also makes its own contribution to the activation of mediums." [1, p. 314]
There is also some information about activation of other So, it was assumed that the phenomenon of activation mediums, mainly liquids, but also some gases and solid of mediums can be defined as anisotropy of force bodies. interactions, which leads to “meta-stable state, which can be called structurally activated state of the given There are no theoretical explanations of these facts to structure”. explain all aspects of these phenomena. Furthermore, complexity of interpretation of these phenomena in Here is some difference in principle between chemical frames of common physical notions induced some term “activation”, which characterizes a transformation scientists to announce these phenomena as non- of molecule or atom in some active state with an existing and “illegal”. increased energy, which is sufficient to provide a chemical reaction. It is energy activation. Gerlovin Ilia L. Gerlovin formulated the physical principles of described new notion, a structural activation: “This theory of activation of mediums on the basis of new phenomenon can be classified as some change of physical theory, the Theory of Fundamental Field (TFF). structure of activation object. With this, energy of molecule can have no changes, and active properties Page 93 Coupled with aetherodynamics time conception, which In September 2002, Faraday Labs Ltd Company plans was suggested by Alexander V. Frolov, the works on to complete testing of the first experimental system, the control of space-time parameters gain the possibility and to start the patenting and research of applied for development and commercial application. As a aspects, first of all in medicine. theoretical basis there are those N. Kozyrev works where his conception of “time density” are replaced by that of “aether density” according to Frolov. Physical Principles of the Time subjects of experimenting with non-reversible changes in matter, for example, in crystallization or melting Machine processes. Also it is possible to use special electromagnetic processes, for example, Chernobrov’s "converging waves" or other longitudinal waves as Alexander V. Frolov methods of aether compression or rarefaction. If we assume that process of existence of elements of matter Experimental success of research team headed by Dr. physically can be explained as aether vortex processes Vadim A. Chernobrov, Moscow was reported in [1]. The then its rate is a parameter of aether income/outcome time course can be controlled as rate of any process in balance (aether inflow in element of matter and aether local space-time (inner space of the Time Machine). It outflow from the element of matter). It was also can be decelerated or accelerated by means of special described in Time Rate Control (TRC) theory [3]. To "converging electromagnetic waves". Ordinary waves control this balance it is necessary to develop technology move from the source whereas special "converging of longitudinal waves generation, its focusing and waves" move to some central point, i.e. into the focus of resonance effects. The previous research and the system. In Chernobrov’s design of the Time Machine experimenting on the topic has been made by N. Tesla. this process is organized by means of several spherical envelops, which consist of several electromagnets. Let’s assume that we have some technology to change Electronic control unit controls the processes in this parameters of time course. How should we organize this design. Dr. 
Chernobrov reported about 3% change of the local space-time (what is spatial topology of the design)? time course in 4th version of the system, which was There is a very interesting experiment to get the answer: tested with a human inside. The goal of Dr. Chernobrov’s rotation of a heavy cone (for example, lead cone) entrains work is to research the medical aspects and surrounding aether, so a vortex appears, which is a experimental investigation of the principles. Several toroidal formation of aether (rings). The rings can exist important conclusions were obtained from the project: in space for a long period. The further question is: Why the time course can be controlled and character of the does the beam of light (laser beam) directed to the cone changes is different for acceleration and deceleration by tangent create a luminous ring? We can assume that due to natural properties of photons (light propagates along the geodesic line in space) some autonomous Other known publication and research projects on the closed toroidal space should be created in such same topic seem to be very far from any commercial experiment. The next thought is: since space and any and practically useful application. Obviously the topic matter exist in time then we can speak about some is very new and fantastic for most of scientific autonomous time. The general conclusion is to be the community and at first we have to clarify the physical following: autonomous 4-dimensional space-time can be principles of the time control project, which is started created as toroidal aether vortex. by Faraday Labs Ltd. Here is point to note some aspects of research project In this project we believe that notion of time is one of by Prof. Robert Mallett, Connecticut University, USA. In possible description of real physical properties of our fact, sometime next year, he hopes to produce the first Universe. So, it is not mathematical abstraction but some piece of technology that eventually will allow him to aspect of physical reality and we can discover some build a time machine. By Mallett it will be a device that physical properties of time. Russian astrophysicist N. employs lasers "to twist space". Why is he going to close A. Kozyrev [2] developed a theory of active properties of the beam of light? His theoretical background is time and according to his point of view there are two knowledge about black holes, i.e. understanding of the properties: time course and time density. Prof. Kozyrev connection between gravity and curvature of space-time. demonstrated experimentally that time density in area In Einstein’s theory both matter and energy can bend of some process (changes of matter) is dependent on space and time. So Prof. Mallett assumes that curvature entropy parameters of the processes. In [3] it was of space-time can be changed not only by mass (like a demonstrated that Kozyrev’s experiments could be black hole) but it can be affected by energy of photons. interpreted in aether theory and it has led to simple This has led Prof. Mallett to consider the possibility of physical conclusions and clear experimental using a circulating beam of light to twist space and to perspectives: time course and its density can be create closed loops in time. It is predicted that a spinning explained and controlled as parameters of aether. neutral particle, when placed in the ring, is dragged Directions of aether flow and density of aether are around by the resulting gravitational field [4]. 
From the Page 57 first view it is the same approach we have considered is stress or deformation (it is some static field) or above (experiment with aether toroidal rings). But oscillations of aether. proposals by Prof. Mallett differ in principle from the aether conception. Let’s introduce the notion of chronal (temporal) charge to consider some technical aspects. In electrodynamics The main aspect of this technology is a creation of we assume an electric charge as element of matter with autonomous (self-closed) toroidal space-time. positive or negative electric properties and we have to Autonomous geodesic world line of this space-time is compare it with some reference (zero charge or test self-closed. Any photon should be circulating in this charge). Let’s note that in any case we have to consider system due to its properties: photon is always moving "charge of some particle" but not an "abstract charge". along the straight line of the space. So, we can postulate that any element of matter has zero chronal charge if it is moving from Past in Future with More deep understanding of this technology follows from standard (usual for measurements of surface of our the explanation of photon as oscillation of aether. Any planet) time course. If the time course (i.e. existence of photon can be considered as result of relative motion of some element of matter) is decelerated then it can be the matter (observer) in absolute space (immovable measured as decrease of standard oscillation frequency aether). Usually a photon is considered as moving object of the matter. Time course acceleration means some in space. But we can assume that observer is in the increase of standard oscillation frequency of the matter. motion and the photon is oscillations of the absolute Let’s determine that in the first case it is negative chronal space (immovable aether). Which approach is more real charge and in the second case it is some positive chronal one? Sure, it is more easy to consider a photon as moving charge. Atomic clock is one of possible methods to object but let’s remember fact of our real motion in the measure zero chronal charge or to find some relative Universe and fact of the Universe expansion. positive or negative difference. So, ideas by Prof. Mallett are very far from the aether It is predicted here that motion of chronal charge nature of the time phenomenon. He follows the black should produce a chronal field. Some provisional data holes theory and general understanding of space-time was received by Frolov from simple experiments on the distortion due to mass or energy presence. Also he rotation of a heat source. Accelerated motion of chronal knows that a light beam should be closed in a ring. charge (changes of density of chronal current) should However Prof. Mallett is very far from physical basis of produce aethero-induction effect that is an analogy (or the effects. The key of time rate control is technology more general case) of Faraday’s induction effect. This of artificial aether flow, creation of aether vortex effect can be detected as secondar y (induced) systems (AVS), management on density and direction deceleration of time course in nearest area of accelerated of aether flow. There are several technical methods to time matter. Another case is a secondary (induced) produce it. Any light beam should be curved in self- acceleration of time course in the nearest area of closed "light ring" if it is placed in a toroidal aether vortex decelerated time matter. 
and we can say that this system has own space-time. Technical realization of aethero-induction method seems What does "some changes of time course” mean? We to be very close to idea, which is described in classical can measure it as some changes of standard rate of epic "Back to the Future". At first, it is necessary to create oscillation process, for example, some stable wavelength or to collect some chronal charge in a "flux condenser" of laser beam or quartz oscillations. There is a well- and then to accelerate it in space up to some velocity. known experiment with two atomic clocks (one of the According to the aether conception, this creation of the clocks is placed on the roof of some building and another chronal charge is a real technical process. one is placed on surface of planet). Due to vertical component of gravity the time course should be different It is assumed that estimated chronal effects are and it can be measured. How can we organize difference demonstrated as some threshold field, i.e. space-time in these measurements if both atomic clocks are placed has some stable discrete energy levels and changes of in the same altitude? its curvature should have discrete threshold mode. All new aspects disclosed in this paper are the subject of a It is necessary to consider gravity nature in frames of patent process. Faraday Labs Ltd organizes the aether conception. Two atomic clocks demonstrate experimental program on the topic. Practical application difference in measurements due to difference in aether of this technology is new energy systems and propulsion flow density. Hence, by means of aetherodynamics methods. methods it is possible to control the rate of oscillation processes in the atomic clocks and in any matter (i.e. References time course itself). 1. Chernobrov V.A. Experiments with a man in the Time Machine, New Energy Technologies, #3 November-December 2001, p.6. The aetherodynamics methods have a clear analogy with 2. Kozyrev N.A. Selected works, Publ. by Leningrad State University, electrodynamics: motion of charge produce field and 1991. there is the induction law. Really, classical 3. Frolov A.V. Practical application of the Time Rate Control theory, electrodynamics can be considered as particular case New Energy Technologies, #3 November-December 2001, p. 15. 4. Mallett, R.L., Weak gravitational field of the electromagnetic of the aetherodynamics. So, physical sense of any field radiation in a ring laser, Phys. Lett. A, 2000. Page 58 investigated completely yet. It was found also that harmful effect on biological systems is not related to the process of movement in Time itself but is a result of the difference of the Time rate value in various parts of a body (a biological system). Inside of the laboratory setup it was also discovered that Time could be changed with some inertia. Areas of space having different Time rates have vague borders. With sufficient difference in Time rate the human can see an area with a different Time rate as some white mist. Higher the difference – the mist is denser, that can be used as an alarm signal for biological systems. It is possible to consider Time-travel as possible and (after experiments with mice) there are reasons to suppose it will be safe for travelers if they follow certain rules. 
It is especially necessary to emphasize: the trips through Time (due to new discovered properties of Time) introduced into science by John Willer in the 50’s, are can’t affect the Past and they can’t change our past travels in 5th and 6th dimensions, i.e. the “classical” Time history. All the so-called paradoxes for the traveler in travels, which were described by H. Wells. Time (for example when “he meets himself in the Past” or “he kills his grandfather in his childhood” have clear Editor’s: As the reader could note, the author does not solutions in 3-dimensional Time. disclosure the secrets of the TM design. From the photo you can see the electromagnets, which form the regular It is possible to consider as a proven fact that Time has stereometrical construction as well as the cables from more than one dimension, i.e. O. Bartini’s theoretical the TM to the control unit. Dr. Chernobrov mentioned calculations are confirmed by these experiments: Time the converging electromagnetic waves only. So, to has 3 dimensions. Hence our Earth world can be understand how it works, it is necessary to get a clear considered as a 6-dimensional object: length, width, notion of the converging electromagnetic waves. Let’s height, age or date of Time, variant of a History or imagine the ripple effect created by a stone in the water. erosion of Time, density or rate of Time. The concept of The waves move from a central point to periphery. The “the Arrow of Time” as fourth dimension (moment of converging waves are just an opposite process: the Time) is a particular case of the concept of sixth waves move from periphery to the central point. Is it dimension (rate of Time) that leads to the physical possible in Nature? Yes, sure. Dr. Chernobrov wrote: concepts of gravitation and energy and they are “Let’s throw a hoop on the water and inside of the hoop simultaneously connected. Concepts of the “ Einstein- we’ll see converging waves.” The Time Machine Rosen bridges” known since 1916 or “worm-holes” technology by Dr. Chernobrov is based on the similar Time Machine Project Alexander V. Frolov Scientific Expert of the Russian Physical Society, General Director, Faraday Lab Ltd Tel/fax: 7-812-380-6564 Tel: 7-921-993-2501 May 29, 2002 Faraday Labs Ltd and Dr. Vadim Chernobrov have signed the agreement on scientific-research work on investigation of active properties of time. Alexander V. Frolov, General Director Faraday Labs Ltd and Ph. In the course of the previous experimental works, Dr. Vadim A. Chernobrov have just signed the Contract carried out by Dr. Chernobrov’s research team during the period from 1984-2002, four versions of Time interconnection of electromagnetic processes and Machine had been made and tested. At these devices physical proper ties of space-time. Special (the biggest system is about 1 meter in diameter) the electromagnets, operating in pulse mode, are placed at effects of deceleration and acceleration of time course the spherical frame. They create the so-called were created and measured. The principles of control “converging wave”, which by Alexander Frolov is a of time course velocity were based on the longitudinal wave in nature. Page 56 theoretical basis there are those N. Kozyrev works where his conception of “time density” are replaced by that of “aether density” according to Frolov. in matter, for example, in crystallization or melting Machine processes. 
So Prof. Mallett assumes that curvature entropy parameters of the processes. In [3] it was demonstrated that Kozyrev’s experiments could be interpreted in aether theory and it has led to simple This has led Prof. Mallett to consider the possibility of physical conclusions and clear experimental using a circulating beam of light to twist space and to perspectives: time course and its density can be create closed loops in time. It is predicted that a spinning explained and controlled as parameters of aether. neutral particle, when placed in the ring, is dragged Directions of aether flow and density of aether are around by the resulting gravitational field [4]. From the Page 57 Kozyrev-Dirak Radiation other natural crystal structures. In the definite sense nature demonstrates the way to rejuvenate Its influence on animals compound structures. As it is well known, vital functions of biological systems on the Earth depend on Dr. Ivan M. Shakhparonov the structure and composition of water. Therefore, we have a right to expect considerable changes in the vital International Academy of Energy-Informational functions of biological organisms under the influence Sciences of KDFR. In the experiment with animals, that were made in the Experimental Devices Center of Oncology Researches (COR) at the Russian Academy of Medical Sciences (RAMS), on the applying In experiments with animals there were applied the of Kozyrev-Dirak’s Focused Radiation (KDFR), it has been devices, which concentrated KDR (KDCR) and had 50 found that KDFR decreases the quantity of glucose in Wtt aggregate electrical power. The description is the blood, reduces its tenacity, promotes the presented in [2]. strengthening of immunity and the rise of the quantity of marrow cells. KDFR indication was obtained by calorimetric method [1], along the way of movement of the main bunch (with Introduction 10 cm across diameter) and at angle of 45° from the geometrical axis of a device. This time researchers in Russia and abroad experiment on ball lightnings by means of nonoriented circuits, Researches of Bleeding Duration which are similar to the electric analogues of Mobius band, also by means of Klein bottle and their Let us consider KDFR influence on the blood combinations. Non-oriented fields are investigated very composition of animals. At the experiment 24-28 gram intensively now. Accordingly, organisms of the weighting, pelletized fed male mice were used. In the researchers, who observe the interactions of such fields process of the experiment it was discovered that 3 and with a matter, are also changed, thus they should take 4 hour processing of mice with KDFR at the distance it into account on making such experiments. The aim of 2.5 m and at the presence of animals in the sphere of the article is to show in which way the fields of maximum radiating power, caused some changes of nonoriented circuits influence on animal and human fibrillation system. The bleeding duration was organism. Besides the article has for its object the determined according to Duke method. Two groups of prevention of negative consequences, which can appear animals were used at the experiments: a group with 4- for experimenters through the research process. hour duration of KDRF processing and a group with 6- hour duration. Time of bleeding was considered in Experiments with animals that were carried out in 1992- dynamics at 1, 2, 7, 14, 21, 28 and 35 day (Fig. 1). 
The 1993 in Russian Academy of Medical Sciences (RAMS) bleeding duration of the intact animals was determined had not been published in proper time because there by the value 128±11 sec. After the applying of KDFR were no quantitative methods of radiation detection. there was noticed some increase of bleeding duration Later, in 1996 they were developed [1] and KDFR to 261±15 sec and 223±21 sec on the first day after the parameters were measured in that geometry, which stopping of the influence. In the subsequent periods were applied in RAMS. In 1998 powerful and super- bleeding duration gradually decreases up to the level powerful KDRF sources were obtained. These sources of physiological norm. The whole normalization of the were applied (and are applied now) in the researches index is observed at the animals, which were processed at the controlled radioactive decay [2]. Kozyrev and by KDFR during 4 and 6 hours, on 28-35 day up to Nasonov [3] and later also Lavrentyev with the 115±12 and 133±18 sec correspondingly. In the process collaborators [4,5] have proved experimentally that the of observations at the animals, the correlation between Sun and some stars generate the radiation, which has time of fibrillation and periods of KDRF processing of early unknown properties. We suggest that the the animals has not been revealed (Fig. 1). radiation, discovered by Kozyrev [3], and the radiation, which is researched by us and by other experimenters with nonoriented circuits, are of the same phenomenon. At first, it should be noted that on interaction between Kozyrev-Dirak radiation (KDR) and a matter made it colder. As it was demonstrated above [1], cooling effect can be explained by matter re-magnetization under the influence of KDFR beam (adiabatic demagnetization). According to the still unpublished data, KDFR bunch destroys matter lattice by the way of it’s moving. However, after a couple of week matter reconstructs it to the almost tabulated points, without defects, blockness and other damages, which are peculiar to Page 281 In the course of the experiment the strongly marked (24 hours before the slaughter), forage was taken away chronometric hypocoagulation was discovered due to from the animals. The determination of biochemical the extension of the parameter “K” or, probably, because indexes was provided by means of biochemical analyzer of the change of aggregation properties of platelets HITACHI. As a result of the experiment it was (Table 1). determined that at the first day after influence of KDRF there was a tendency of decrease of the glucose content For the determination of biochemical indexes serum was (Table 2). Other indexes varied in the limit of obtained from 5-8 ml of rats’ venous blood. Beforehand physiological norm. Table 1 Parameters of thromboflexogramm after KDFR, 4 hours Animal # Parameters of thromboflexogramm Fibrio gene Fibrinal activity R (sec) K (sec) Ma (mm) 1 72 ∞ 10 - - 2 102 ∞ 18 - - 1 90 ∞ 10 275 75 2 180 150 52 315 90 1 180 ∞ 5 - 100 Table 2 KDFR influence on the glucose content in blood of the rats Time (days) KDFR 4 hours Test KDFR 6 hours Test after the experiment (mmole/l) (mmole/l) ( mmole/l) (mmole/l) 1 3.14 6.12 6.39 7.27 10 7.59 9.35 8.90 6.69 30 6.05 6.69 Research of haemopoiesis system Thus, any dependence of the biological effect on the exposition has not been revealed. 
For instance, at the Several criteria were considered: the dependence of 7th day after one hour of the exposition the number of biological effect on the distance, on the power flux karyocytes was equal to 28.45±1.87×106 at the same density, on the duration of processing. Besides, KDRF time after 3-hour processing it came to 27.65±0.74×106 . influence on mice survival was considered. Alongside with the change of the distance to the biological object from 1.5 to 2.5 m the tendency towards At the experiment 24-28 gram weighting, pelletized fed the increase of the number of marrow cells has kept male mice were used. The marrow was examined in within the same limits 28.27±1.32×10 6 and the dynamic at 1, 3 and 7 day after KDFR influence. Six 29.57±0.88×106 . animals were taken on each point. Af ter the decapitation of the mouse their thighbones were taken Dependence of the biological effect on the power out and af ter that the absolute number of flux density myelokaryocytes was calculated by the standard method in Goryaev chamber. The comparative investigation of KDFR influence on the biological object in the coverage of KDCR (along its Dependence of the biological effect on the distance geometric axis) and outside the coverage has demonstrated that alongside with the increase of the In all experiments the maximum flux density along the radiation intensity there was a tendency towards the geometrical axis of KDCR device was a constant. There decrease of stimulative influence of KDFR on were used four temporal modes of the influence (1, 2, 3, haemopoiesis. 4 hours) and three points of long distance between KDSR and the biological object (0.5; 1.5; and 2.5 m). At 0.5 m Dependence of the biological effect on the duration distance there were no differences in the number of of processing marrow cells in comparison with the control cells. With the increase of distance between KDCR and the object On processing the animals at distances up to 2.5 m from from 0.5 to 1.5 m some tendency to the increase of the KDCR and on increase of exposition to 3-4 hours it is number of marrow cells up to the 7th day was observed. possible to obtain reliably significant difference in the Four-hour KDRF processing caused the increase of the number of marrow cells from the physiological norm to number of karyocytes up to 29.99±1.25×106 (P<0,001). the 7th day. Page 282 KDRF influence on the survival of mice came to 2 hours; and for the third one it came to 4 hours. Each group consisted of 6 animals. The test group The experiments, determining the survival reaction consisted of six mice with repeatedly inoculated of animals, were made by means of gamma radiation. sarcoma-37 and which had not been processed with 30-day survival is the criterion of determination. KDFR. As a result, the average lifetime of tested animals Conditions for the experiments are the following: in the was equal to 9 days. The average lifetime of animals of coverage of KDCR and aside the coverage, (the distance KDFR groups came to 48 days (for 1-hour KDFR between the KDCR and the object is 2.5 m in the influenced group); to 12 days (for 2-hour group); and to coverage of KDCR and 0.5m outside the coverage). Time 31 days (for 4-hour group). Thus, the average lifetime of of influence is 4 hours. Animals of both sex were used. the experimental group came to 29 days. Besides, in Two groups of animals were used. 
The test group was the group, which has been processing with KDFR during put to the gamma radiation in the diapason of doses, 1 hour, the half of mice had survived (three of six which caused marrowy syndrome, i.e. from 7.5 to 8.5 mice). Gr. The second group of animals after the irradiation in the same diapason of doses was repeatedly processed At the second stage of the experiment the repeated (5 with KDFR. Time of the influence is 4 hours at 7.5 Gr times during 2 hours) KDFR influence on the mice was gamma radiation on 15 mice in one bath and 8.5 Gr on applied. These mice have been inoculated with 15 mice in another bath. Total gamma radiation of the sarcoma-37 at seven days before the beginning of the animals was made by means of the source 137Cs with influence. As a result, the average lifetime of the animals the dosage rate of 5.2 Gr/min. Gamma radiation in this was equal to 27 days, and for the mice, which were dose diapason causes death of the animals during the processed with fivefold KDFR influence, the average development of the marrowy syndrome, i.e. from the 6th lifetime was equal to 76 days. The obtained results are to the 20th day along with the aplasia of haematogenic the evidence of inhibition of swelling development for tissue. Combination of gamma radiation and KDFR the animals, which were processed with KDFR sometimes leads to the slight increase of the number of influence. This leads to the increase of lifetime of such survived animals. If the animals are irradiated by animals in comparison with the test. Thereby, at a great gamma rays at first and then by KDFR, the death control extent the results of the previous experiments on the at 7.5 Gr radiation is equal to 5.5% from the total number strengthening of immune status after KDFR influence of the animals and at the following KDFR processing were confirmed. 16% of the irradiated animals die. However, 67% of the animals in the tested group have died after KDFR Results and discussion processing and after the coming next gamma radiation with 8.5 Gr total dose. And in the group, which was Let us make a conclusion. At the KDFR influence on processed with KDFR, only 46% of the animals died. animals’ organism the following effects were observed: decrease of blood viscosity; strongly pronounced Immunity strengthening hypocoagulation; decrease of contents of glucose. Increase in the number of karyocytes and the extended For the investigation of KDFR influence the following lifetime of the animals, infected with Ehrlich cancer and tests were chosen: activity of natural killers and T-killers, sarcoma-37 were also observed. which had been obtained by the immunization in vitro in the unidirectional mixed culture of lymphocytes and As for human being, the researches in this area have also in the reaction of blast transformation on the not been carried out yet and they are still confined to specific mitogen [6, 7]. All tests were made on the 7th the single observations. It is possible to give an example day after a single KDFR influence. Unfortunately, data from the author’s practice. In 1975 nonoriented circuit have been obtained with the applying of radioisotope of 3kWtt power was examined. Field strength was preparations. Though the experiments of this kind were measured. The author of the article had been working successful and though they have demonstrated the in the field for about 8 hours. And after five hours after increase of some immune reactions’ level, there is a the experiment I had felt bad. 
That time it was nothing certain doubt in the relevancy of radioisotopes known about the influence of the new radiation on application [2]. Thus the series of experiments was human organism. The arrived ambulance has quickly made. These experiments were aimed at the diagnosed that I was close to hypoglycemic coma. On investigation of KDFR influence on the development of several hours after the intravenous glucose injection, the swelling process. The aim of the experiment is the my state has become normal. Now we know that before investigation of KDFR influence on the development of the experiments with powerful KDFR bunches it is Ehrlich cancer and sarcoma-37, which were repeatedly necessary to eat sugar. Thus we believe that the data, inoculated to mice. At the first stage of the experiment which were obtained after the experiments with there was a single KDFR influence on the mice animals, can be applied to a human being. We can repeatedly inoculated with sarcoma-37, on the 2nd day suggest that the manifestation of the symptoms of the after the repeated inoculation of the swelling cells to KDFR influence on human organism depends on the the animals. The repeated inoculation was made power of the applied source, on the total mass of the intramuscularly in a right thigh, in a dose 106 of cells organism and on the time of it in the coverage of the per a mouse. Time of KDFR influence for the first group irradiation. From aforesaid it is clear that the experiment of animals was equal to 1 hour; for the second one it with powerful KDFR sources is far from being harmless Page 283 and it is better to make it distantly after exclusion of 3. Kozyrev N.A. Selected works, Leningrad State University, man presence near experimental stands and devices. 1991, part 3 At the same time it is quite obvious that on applying of 4. Lavrentyev M.M. Eganova I.A. Luzet M.L., Fominykh S.F./ small capacity and fixed time of irradiation it is possible / Reports by AS USSR. – 1990, Vol. 314, #2, p. 352-355 to develop methods for curing of human diseases, which 5. Lavrentyev M.M., Gusev V.A., Eganova I.A., Luzet M.L., are considered now as incurable (for instance of Fominykh S.F.// Reports by AS USSR. – 1990, Vol. 315, #2, p. 368-370 diabetes, some diseases of haematogenic system, of cancer and possibly of AIDS. 6. Methodological materials on experimental pharmacological and clinical trials of immune modulating References effect of pharmacological remedies. - Ministry of Health USSR, M., 1984 1. Proceeding of the International Scientific Conference “New Ideas in Natural Sciences” Problems of Modern Physics, 7. Talmadge J.E. and Chiragos M.A. Comparison of p. 176-187 immunomodulatory and immunotherapeutic properties of biologic response modifiers. Springer Seminar 2. Journal of new energy, Vol. 3, #4,1999, I.M. Shakhparonov Immunopathol., 1985, 8, 429-443 “Interaction between Kozyrev – Dirak radiation and radionuclides”, p. 85-89 Effect of Magnetic Blow Wave The graphite, which is initially diamagnetic, transforms to paramagnetic one with general radiation doze of about 7·1019 neutrons/cm2. Other types of radiations Field on Wine Systems could not affect this way (Svoistva 1975). So one unit of MBW can be considered as 1·105 of neutron masses. This fact may be regarded as an indirect evidence for I.M.Shakhparonov (Corresponding author), S.A.Grin, assuming that MBW and magnetic monopole are the S.R.Tsimbalaev, L.N.Kreindel, V.N.Kocheshkova, same things. 
In the absence of excited radioactivity a A.I.Podlesny, S.Yu.Gelfand slow MBW [v/c < 1·104] occurs, which does not ionize atoms (Devons, 1963). Therefore, their interaction with AGD Firm, Peschanyi pereulok, House No.20P korpus No.1, Lfl. 33 the matter can be observed only indirectly. No data exist 125252, Moscow, A- 252, Russian Federation on the interaction of MBW with organic substances. The Russian Institute of Canning Industry, Shkolnaya Street. 78. experiments and results reported in the present 142703 Vidnoe 3. Moscow Region, Russian Federation communication may be a starting point for development of technology and to formulate the methods for vintage wine and best quality spirit production. Materials and Methods Authors communicate the data on influence of Magnetic Blow Wave (MBW) field on several wineproducts. It was Assuming that MBW and magnetic monopole are the found, that MBW did not lead to significant changes in same things, a number of conditions were selected for the major components of the wineproduct (sugar, all experiments. The MBW source and the samples were organic acids, minerals). At the same time the taste and placed in the same axis and the axis was oriented aroma of treated wine become more pleasant; content according to magnetic meridian direction. Such of heavy alcohols and wine stone in the treated samples magnetic orientation is appropriate, as the energy of was less than in non treated ones. A mechanism of magnetic monopole theoretically increases in a transformations was also discussed. magnetic field (Devons, 1963). All of samples were placed at 250 cm distance from MBW source, in Keywords: Magnetic Blow Wave (MBW), Wineproduct, hermetically closed glasses. It should be noticed that GLC of aroma compounds and ethanol, HPLC of sugars, MBW could penetrate through many other barriers, for Atomic Absorption Spectrometry (AAS) of minerals, example into cast iron reservoir with wall thickness of Heavy alcohols and aldehydes, Wine stone, Turbidity 5 cm (Amaldi, 1970). tendency, Organoleptic evaluation The quality investigations were made by using of Magnetic Blow Wave (MBW) was obtained for the first standard equipment. HPLC, equipped with time during the investigations on ball lighting refractometric detector was used for sugars estimation. generation under the laborator y conditions Separation of organic acids in forms of their ethyl esters (Shakhparonov 1994). MBW as a physical object is and acid esters was carried out chromatographically interesting because of some facts, which suggest that using a column packed with polyethylenglycol MBW is a magnetic monopole. The MBW can also succinate and the following temperature option: initial interact with the matter and transforms it in a definite temperature is 120ºC, final temperature is 220ºC, way. Typical example is an elementary carbon in the temperature growth rate: 8º/min. GLC was also form of graphite, which is transformed by such magnetic employed for determination of ethanol. Minerals content treatment into ferromagnetic substance (ibid). Page 284 conclusion: in order to obtain complete information technologies, which, we believe, will change our life in about any system, it should be destroyed. However, the 21st century more than all the scientific and technical destruction of tissues of the man in order to get revolutions of the 20th century. information about their state is a too high price to pay for the information about his health. Reference 1. Cartan E. Compates Rendus. Akad.Sci., Paris, 1922, v.174. 
However, the above Van Hoven’s criterion can be 2. V.Melnikov, P .Pronin. Problem of gravitation constant stability satisfied with the minimum influence, when the cells and additional interactions. Itogi Nauki I Tekniki, ser. Astronomy, are not destroyed and the atoms of these cells, being v.41, Gravitation and Astronomy, Moscow, VINTI, 1991. 3. G.I.Shipov Theory of the Physical Vacuum. Nauka, Moscow 1997. primary sources of torsion spectrums to be registered, 4. I.Ternov, V.Bordovitsin. On the modern interpretation of classical are bring into the non-equilibrium state by means of spin theory of Ya.Frenkel. UFN, 1980, v.132, No.2. outer disturbing influence. 5. V.Bagrov, V.Bordovitsin. Classical spin theory. Izvestiya VUZ, Phys.Series, 1980, No.2. 6. F.I.Belinfante. On the spin augular momentum of mesons. Physica In order to choose the frequency of the disturbing VI, 1939, v-6, No.9. torsion influence correctly, it is necessary to take into 7. M.Markov. The very early universe. Proc of the Nuttfield Workshop, account the role of water in physical and biochemical Eds. G.V.Gibbson, S.W.Hawking, S.T.Siklov, Cambridge, 1988. organization of tissues of the human organism. 8. Hideo Uchida.A method apparatus for detecting a fluid. Patent England, No 511662, May 24, 1978. 9. Anatoly Akimov. Heuristic discussion of the problem of finding At the same time, it is necessary to take into account long-range interactions. EGS-concepts. CISE VENT, preprint N7A, the resonance torsion frequencies of various human Moscow, 1992. organs. Finally, it turns out that the signal of torsion 10. IITAP RANS, TORTECH USA, Horizonts of the Science and Technology XXI age Proc, Editor A.E.Akimov, Folium, Moscow, 2000, disturbance should be rather sophisticated considering vol.1 (in Russian). both these factors. The TORDI system is a ready-to-use 11. N.Kozyrev, Astronomical observations by means of the physical production device. Nevertheless, it is important to properties of time. In “Flarestarse” International Symposium in understand that the model is not the limit of scientific Byurakan, 1976,Armenian Academy of Sciences Publ., Yerevan, 1977 (in Russian) and technical potential incorporated in it and that 12. Bouwmecster D. Nature, v.390, 11 dec, 1997. enhanced variants of the system will appear with the 13. G.I.Shipov. Theoretical and Experimental Research of Inertial course of time. Mass of Four-Dimensional Gyroscope. ITTAP RANS, preprint N10, Moscow, 2001, (in Russia). 14. The way of correction of metal alloy microstructure. Patent Summing up, I would like to draw your attention once Russian, RU 2107105, 1998. more to the fact that work on torsion technologies is 15. A.Dolgov, Yu.Zel’dovich, M.Sazshin. Cosmology of the Early not limited by the directions that were discussed here. Universe. MGU Publ., Moscow, 1988. Actually, as it was pointed out in the beginning, on- 16. I.A.Wheeler. Einstein’s vision. Springer Verlag, 1968. 17. Convegno Internazionale:Quale Fisica per 2000, Proc. Bologna, going development includes all branches of economy, 1991. see: The Manual of Energy Devices and Systems. industry, agriculture and medicine, as well as all Complied D.A.Kelly, D.A.K. WLFUB, Burband, California, 1986, Publ. problems of everyday life. Technologies that we N1269/F-289. mentioned are the forerunner of the fact that the 18. Daytlov V.L. Polarization Model of the Heterogeneous Physic Vacuum. Institute Mathematical, Sibirians Academic Science, 1998 mankind is on the threshold of the age of torsion (in Russian). 
The Electrical Vortex Non-Solenoidal Fields

Sergey B. Alemanov
Phone: 7 (095) 323-6848

A mistake was found in electrodynamics: it is detected that all the postulates of electrodynamics correspond to the experimental facts, but vortex electric fields have unclosed inductive lines.

When a magnet is moving, the current of magnetic induction is moving together with it. From the known velocity of motion v and the value of the magnetic induction B, it is possible to calculate the intensity E of the appearing vortex field according to the electrodynamics formula of transformation of fields, E = vB. If we change E = vB to the induction D = ε0E in the formula of transformation of fields, we get D = ε0Bv, where D is the electric induction, B is the magnetic induction, v is the velocity of motion, and ε0 is the electric constant.

Herewith the appearing electric induction is always transverse to the direction of motion. It is possible to formulate the rule of origin of electric induction under the condition of rectilinear motion: if the right-hand palm is disposed so that the four fingers show the motion direction of the magnetic flow (the field) connected with the moving magnet, and the vector B falls into the palm, then the thumb moved aside will indicate the direction of the vector D. The given rule is like the rule for the Lorenz force, but on the contrary (the difference is in the frame). In the first case the charge moves, but the magnet rests. Here the magnet moves, but the charge, which points the direction of the lines of force of electric induction, is immovable. So, there it is the left-hand rule, but here, on the contrary, it is the right-hand rule. Thereby, if the charge moves but the magnet is immovable, then the left-hand rule is used for determination of the force. But if the magnet moves and the charge rests, then the right-hand rule is used for determination of the force.

The origin of the electric force is connected with the fact that the vortex electric field D = ε0Bv appears around the moving magnet (the magnetic field does not act on immovable charges).

In common literature on electrodynamics there is no difference between an electric vortex field and a solenoidal field, but these are different notions. The sign of a solenoidal field is the closed lines of electric induction (the flow of the vector D through a closed surface is zero), but for the vortex field the sign is the following: the work of the forces can be different from zero under the condition of motion along a closed line. That is to say, the vortex fields can agitate rotational currents. From the electrodynamics textbook: "The work of the forces of the vortex electric field can be different from zero, when the electric charge is moving along a closed line."

For instance, when the magnet moves, the vortex electric field appears, and this field can be solenoidal or not, depending on the magnet's orientation. Let's take such an example: the magnet moves evenly, rectilinearly, and its poles are oriented transversely to the direction of motion. According to the rule of origin of electric induction (D = ε0Bv, the right-hand rule), the appearing vortex electric flow is not solenoidal, since the lines of electric induction are not closed. It begins in one conditional area of disturbance (+), accompanies the moving magnet, and finishes in another area of disturbance (-). For presentation it is enough to consider only the two areas (+) and (-) represented in Fig.1. These dissimilar areas of disturbance appear because the flow of magnetic induction inside the magnet has the inverse direction to that outside the magnet.

In Fig.1 the moving magnet is conditionally represented (motion is toward the reader, the magnet is moving away). N and S are the poles of the magnet. The direction of the lines of electric induction appearing when the magnet is moving is specified by the arrows. Part of the lines begins in the positive area (+) and finishes in the negative area (-); the areas are placed on the ends of the magnet. The flow of electric induction through a closed surface is not zero; that is to say, these areas of disturbance are moving electric charges.

[Fig.1: moving magnet with poles N and S and disturbance areas (+) and (-). Fig.2: moving coil with current.]

From the electrodynamics textbook again: "The flow of the vector D through any closed surface is equal to the algebraic sum of external charges covered by this surface." In electrodynamics these postulates have the same role as Newton's laws in classical mechanics.

Thereby, according to the postulate, it is necessary to consider the appearing dissimilar areas of disturbance (+) and (-) to be electric charges, or it is necessary to change the postulate.

It is interesting that a part of the lines of electric induction, which are placed in front of and behind the magnet, start and finish at infinity, since the distribution of magnetic induction around the magnet has no determined borders.

For clarity, it is possible to make the following calculation. For instance, a coil (loop or turn) with current, as a magnet, moves evenly and rectilinearly, but its magnetic poles are oriented transversely to the motion direction. Under such motion the lines of electric induction are not closed, and the dissimilar areas of disturbance of the electric field appear in space on the edges of this coil.

In Fig.2 the moving coil with current is conditionally represented. It moves from the left to the right side of the page. The arrows on the coil indicate the direction of the current. The appearing dissimilar areas of disturbance of the electric field are marked by the signs (+) and (-). Knowing that in the medium of the coil B = µ0I/2r, and according to D = ε0Bv, it is possible to find the electric induction appearing in the center, between the two dissimilar areas: D = ε0µ0Iv/2r, where I is the current in the coil, r is the radius of the coil, v is the velocity of motion, ε0 is the electric constant, and µ0 is the magnetic constant.

The electromagnetic disturbances in transverse electromagnetic waves have a similar field construction; there also dissimilar areas of disturbance of the electric field exist, that is to say, the lines of electric induction are not closed. Only the currents of electric displacement and magnetic induction are closed. That moving disturbance of electric and magnetic fields presents itself as a transverse electromagnetic disturbance. Also, it is necessary to notice that under such motion of the magnet the appearing vortex electric field is not closed, but the current of electric displacement connected with it is closed (currents are always closed).
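For a sense of scale of the induced induction discussed above, the following sketch is an illustrative addition, not from the article; it evaluates D = ε0Bv and the moving-coil estimate D = ε0µ0Iv/2r. The field, current, radius and velocity values are arbitrary examples chosen here, and the function names are mine.

```python
import math

# Illustrative estimates of the induced electric induction discussed above.
# B, I, r and v below are arbitrary example values, not taken from the article.
eps0 = 8.8541878128e-12    # electric constant, F/m
mu0  = 4 * math.pi * 1e-7  # magnetic constant, H/m

def d_moving_magnet(B, v):
    """Electric induction D = eps0*B*v around a magnet of field B moving at speed v."""
    return eps0 * B * v

def d_moving_loop(I, r, v):
    """D = eps0*mu0*I*v/(2r), using B = mu0*I/(2r) at the centre of a current loop
    that moves transversely at speed v."""
    return eps0 * mu0 * I * v / (2 * r)

print(d_moving_magnet(B=0.1, v=10.0))        # ~8.9e-12 C/m^2
print(d_moving_loop(I=1.0, r=0.05, v=10.0))  # ~1.1e-15 C/m^2
```

The numbers show how small the predicted induction is for laboratory-scale fields and velocities, which is why the sign of the effect, rather than its magnitude, is what the article emphasizes.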
Let's consider another example: the magnet moves rectilinearly, but its poles are oriented longitudinally to the direction of motion. According to the rule of origin of electric induction (D = ε0Bv, the right-hand rule), the appearing rotational electric flow is solenoidal, since in this case the inductive lines become closed lines. In the given example, for clarity, it is possible to present the intensity of the electric field through the Lorenz force, if we take the frame in which the magnet rests and the test charge moves.

Usually in books on electrodynamics just such a moving magnet is considered, and the wrong conclusion is thereby drawn that the vortex electric field is always solenoidal; herewith it is forgotten that the poles of the magnet can be oriented not only along the direction of motion, but across it as well.

From the electrodynamics textbook: "The vortex electric field differs from the electrostatic field in that it is not related with any electric charges and its lines of intensity are closed lines." Such force lines, the textbook postulates, "are always closed, like force lines of magnetic field". But before stating this fundamental postulate, confirming that the force lines of the vortex electric field are always closed, it was necessary to consider all variants of change of the magnetic field, including the variant of the transverse motion of the magnet. That is to say, the consideration of physical processes could not be unilateral. Faraday considered the longitudinal motion of the magnet and discovered electromagnetic induction, but the transverse motion of the magnet, which has principal importance for understanding field processes in electrodynamics, was not considered. Thereby, the longitudinal motion of the magnet brings into existence a vortex electric field with closed force lines, but the transverse motion of the magnet brings into existence a vortex electric field where the lines of force are not closed. In this case it leads to induced electric charges. It is necessary to notice that this is the first mistake detected in the electrodynamics postulates for the whole time of existence of electrodynamics.

From theory and from experiments it follows that under transverse motion of the magnet the lines of disturbance of the vortex electric field can be unclosed and, accordingly, the flow of induction through the closed surface is not zero. Then there is a direct discrepancy with facts in modern electrodynamics. It is strange, but for the whole history of research in magnetism the transverse motion of the magnet was not considered. It leads to a revision of the electrodynamics postulates, which play the same role in electrodynamics as Newton's laws play in classical mechanics. The postulates, giving an invalid picture of field processes, accordingly do not allow making some correct calculations. The fallaciousness of these postulates was one of the reasons why electrodynamics could not consider and calculate the discrete electromagnetic waves (photons), where the magnetic field is also a transverse field (the field construction and calculation of photons are represented on the page http:// ).

From the electrodynamics textbooks: "…Gauss' theorem is valid not only for electrostatics, but also for electrodynamics, which uses electromagnetic fields variable in time. We are not sure if this hypothesis is valid or not… Only the experiment can give the answer to this question. The whole collection of experimental facts speaks in favor of this hypothesis." But, unfortunately, the experiment with the transverse motion of a magnet was not considered seriously in this textbook.

That is to say, not only particles have charges, but areas of disturbance of the field (without particles) are charges also, where the flow of electric induction through the closed surface is not zero. Thereby, the vortex electric fields can be
But before stating this fundamental postulate, which asserts that the force lines of the vortex electric field are always closed, it was necessary to consider all variants of change of the magnetic field, including the variant of transverse motion of the magnet. That is to say, the consideration of physical processes could not be one-sided. Faraday considered the longitudinal motion of the magnet and discovered electromagnetic induction, but the transverse motion of the magnet, which has principal importance for the understanding of field processes in electrodynamics, was not considered. Thereby, the longitudinal motion of a magnet gives rise to a vortex electric field with closed force lines, but the transverse motion of a magnet gives rise to a vortex electric field whose lines of force are not closed. In this case it leads to induced electric charges. It is necessary to notice that this is the first mistake detected in the postulates of electrodynamics for all the time of its existence.

From the electrodynamics textbooks: "…Gauss' theorem is valid not only for electrostatics, but also for electrodynamics, which uses electromagnetic fields variable in time. We are not sure whether this hypothesis is valid or not… Only the experiment can give the answer to this question. The whole collection of experimental facts speaks in favor of this hypothesis." But, unfortunately, the experiment with transverse motion of the magnet was not considered seriously in this textbook.

(Editor's note: The well-known Searl experiments and the Godin & Roshchin experiments are based on such transverse motion of magnets (rollers). In Alemanov's article it was demonstrated that in this case the experiment should lead to induced electric charges. Really, this was detected in the experiments. Hence this missed aspect of electrodynamics is very important for the development of the new energy technologies.)

Gravito-Inert Mass

J.A. Asanbaeva
720000, Kyrgyzstan, Bishkek, Kadyrov's Scientific Center
+996 (312) 47-25-40, +996 (312) 65-02-83

The nature of mass is one of the important problems of modern physics. It is accepted to consider that the mass of an elementary particle is determined by the fields which are connected with it (electromagnetic, nuclear and others). However, no quantitative theory of mass has been created. There is no theory to explain why the masses of elementary particles form a discrete spectrum of values and to allow this spectrum to be determined.

Mass (m) is a physical value, one of the characteristics of matter, which defines its inert and gravitational properties. Accordingly, we distinguish inert mass (mi) and gravitational mass (mg).

Inert mass (mi) characterizes the dynamical properties of a body, its property to accelerate under the action of a force F, and according to Newton's second law it is considered to be a constant coefficient of proportionality for the given body between F and the acceleration a:

Fi = mi a      (1)

Gravitational mass (mg) is a source of the gravity field. Every body creates its gravity field, which is …
The Principle of Self-Organization can be formulated as follows: any material object represents an open self-organizing system whose internal structures are formed with the participation of the whole universe. Apparently, the Principle of Self-Organization, incorporated in nature as one of the integral properties of matter, is nothing more nor less than a spirit (or absolute idea, or creator) which operates the world and creates all its variety.

Physical Mechanism of Nuclear Reactions at Low Energies

V.P. Oleinik* and Yu.D. Arepjev

Tell me what the electron is, and I shall explain to you everything else.
W. Thomson

The physical mechanism of nuclear reactions at low energies caused by the spatial extension of the electron is considered. Nuclear reactions of this type represent intra-electronic processes, more precisely, processes occurring inside the area of basic localization of the electron. The distinctive characteristics of these processes are defined by the interaction of the own field produced by the electrically charged matter of the electron with free nuclei. A heavy nucleus appearing inside the area of basic localization of the electron is inevitably deformed because of the interaction of protons with the adjoining layers of the electronic cloud, which may cause nuclear fission. If two or more light nuclei occur "inside" the electron, an attractive force will appear between the nuclei that may result in the fusion of nuclei. The intra-electronic mechanism of nuclear reactions is of a universal character. For its realization it is necessary to have merely a sufficiently intensive stream of free electrons, i.e. a heavy electric current, and a sufficiently great number of free nuclei. This mechanism may operate only at small energies of translational motion of the centers of mass of the nuclei and electron. Because of the existence of this simple mechanism of nuclear reactions at low energies, a nuclear reactor turns out to be an atomic delayed-action bomb which may blow up by virtue of casual reasons, as has taken place, apparently, in Chernobyl. The use of cold nuclear reactions for the production of energy will provide mankind with cheap, practically inexhaustible, and non-polluting energy sources.

Nuclear reactions at low energies, occurring in physical and biological systems and, in particular, the cold fusion (CF) of nuclei, attract ever increasing attention (see the review articles [1,2]). This is explained by the fact that research on CF (in what follows, by cold fusion we shall understand any nuclear reactions at low energies) opens up the way to the solution of the problem which was set more than 50 years ago in the field of controlled thermonuclear reactions (CTR) and which has not been solved: the problem of providing mankind with cheap fuel. An important point is that CF allows one to create not only cheap but also non-polluting energy sources, since nuclear reactions at low energies are not accompanied by radiations dangerous to health (γ-radiation, streams of fast neutrons and other particles). Note that the energetic problem facing mankind is presently of special interest in connection with the fact that, according to expert evaluations, the oil-and-gas resources in the world will suffice only for some decades. For this reason the study of CF is among the most important problems of physics.

It is necessary to note that, relying on the standard theory of nuclear reactions describing nuclear processes in vacuum, experts in the field of nuclear physics engaged in CTR reject the very possibility of the existence of nuclear fusion at low energies. Two basic objections are raised against CF:

1. at low energies the penetrability of the Coulomb barrier around nuclei is so small that the probability of nuclear fusion is practically equal to zero;

2. the distinction between the atomic and nuclear energy scales is so great that the energy which might be evolved as a result of nuclear fusion could not be transferred directly to the atomic lattice; therefore this energy should be emitted in the form of streams of γ-quanta, fast neutrons and other particles. However, such streams of sufficient intensity have not been registered.

The answer to the first objection against the existence of CF is that at the heart of CF are nuclear processes occurring in an environment, and the basic role is played here, apparently, by collective effects caused by the interaction of the nuclei with particles of the environment in which the nuclear reaction takes place. The laws governing the behavior of interacting nuclei in vacuum are inapplicable to the description of the CF of nuclei [3]. Nuclear reactions occurring at low energies submit to completely different laws, which can be established only provided that the collective effects mentioned above are taken into account.

Study of the optical spectrum of the plasma arising at the discharge, and the mass-spectrometric analysis of the sediments which remained after the discharge, shows that in the plasma a significant number of chemical elements appear which were not present in the initial material of the explosive foil and the electrodes, and also that the isotope structure of the foil material changes appreciably. A change of the experimental conditions, for example of the energy contribution to the foil, its mass and dimensions, results only in a redistribution of the intensity of the plasma spectral lines, i.e. in a change of the statistical weight of the chemical elements in the plasma, but the composition of chemical elements remains unchanged and essentially depends on the material of the foil. As is seen from these results, the nuclear reactions which take place at electric discharge are not accompanied by the occurrence of a neutron stream and γ-radiation and proceed at low energies of the atomic nuclei.

The research mentioned above, as well as many other studies carried out by different researchers in different laboratories, allows one to draw the conclusion that the existence of nuclear reactions at low energies is reliably established.

The development of research on CF is hampered by the absence of a theory of the phenomenon. As noted by Schwinger [3,4], the situation in CF is closely parallel to that in high-temperature superconductivity.
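The first objection quoted above (the smallness of the Coulomb-barrier penetrability at low energy) can be made quantitative with the standard Gamow estimate, which is textbook nuclear physics rather than anything taken from this paper: for two bare nuclei the tunnelling probability falls roughly as exp(-2πη), with 2πη ≈ 31.29·Z1·Z2·√(µ/E), where µ is the reduced mass in amu and E the centre-of-mass energy in keV. The sketch below evaluates it for a d-d pair at a few assumed energies.

```python
# Rough Gamow-factor estimate of Coulomb-barrier penetrability for d + d.
# Standard approximation: P ~ exp(-2*pi*eta), 2*pi*eta = 31.29 * Z1*Z2 * sqrt(mu_amu / E_keV).
import math

def gamow_penetrability(Z1, Z2, mu_amu, E_keV):
    two_pi_eta = 31.29 * Z1 * Z2 * math.sqrt(mu_amu / E_keV)
    return math.exp(-two_pi_eta)

mu_dd = 1.0  # reduced mass of the d-d pair, amu
for E_keV in (0.000025, 0.1, 1.0, 10.0):  # 0.025 eV (room temperature), 0.1, 1 and 10 keV
    p = gamow_penetrability(1, 1, mu_dd, E_keV)
    print(f"E = {E_keV:g} keV  ->  P ~ {p:.3e}")
```

At thermal energies the factor underflows to zero, which is exactly the content of the objection; the question the authors raise is whether collective, in-medium effects can change this vacuum estimate.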
For this reason the standard reality of the last, as a result of careful experimental theory of nuclear reactions in vacuum can by no means research, is completely established, though theory of refute the existence of CF. the phenomenon is absent till now. As to the impossibility of transferring the energy In [5], to account for the transformation of chemical between levels of various scales, we can give an elements, the hypothesis is put forward that at the example of the phenomenon of sonoluminescence electric explosion of foil in the plasma channel magnetic (luminescence of a liquid when a sound wave causing monopoles are formed which may overcome the cavitation passes through it) [4], in which the energy Coulomb barrier even at insignificant kinetic energy due transfer from an acoustic wave to electromagnetic field to the great magnitude of their magnetic charge. The occurs with appreciable probability in spite of the fact monopole, appearing not far from a nucleus, causes its that the distinction between energies of acoustic polarization: those nucleons of the nucleus, which are phonons and quanta of light reaches 11 orders. situated more close to the monopole, experience stronger influence of the last, than the nucleons situated As early as 10 years ago J. Schwinger, the Nobel winner on the opposite side of the nucleus. As a result, a and the known expert in the field of the theory of deformation of the nucleus arises (the nucleus is elementary particles and quantum electrodynamics, lengthened), which may result in nuclear fission. asserted that it is impossible to deny the reality of CF phenomenon [3,4]. Since then the CF phenomenon for Obvious drawback of this mechanism of nuclear nuclei was repeated hundreds times in laboratories all reactions is that magnetic monopoles have yet to be over the world, tens of patents on the ways of energy found out in nature. generation on the basis of CF were registered and enormous number of experimental works were Numerous attempts to construct a consistent theory of published, which not only confirmed the existence of CF (see reviews [1,2]) have not been crowned with effect, but also contained its detailed analysis. success. As it was noted above, for the CF to be described, the account of the collective effects may be The most convincing evidence for the existence of impor tant caused by interaction of nuclei with nuclear reactions at low energies seems to give the environment, in which nuclear reaction takes place. mass-spectrometric research of reaction products [5] as But does it suffice to take into account these effects in well as research on biological systems [6]. Detailed order that the theory of the phenomenon is constructed? study of electric explosion of foil made of especially pure The analysis of the experiments on transformation of materials in water, described in [5], suggests that at chemical elements at low energies and on the CF of electric discharges transformation of chemical elements nuclei suggests that the discussed phenomenon does Page 216 not fall within the domains of exotic ones: it seems to of the casual reasons. Hence, though nuclear stations occur in nature constantly, at every step, in both may provide mankind with energy, however atomic physical and biological systems. Therefore, it is natural engineering is a ver y dangerous way of energy to expect that nuclear reactions at low energies should production. The only acceptable way of solving the have a simple physical explanation. 
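The "11 orders" comparison between acoustic and optical quanta invoked above for sonoluminescence is easy to check with back-of-envelope arithmetic. The frequencies below are merely typical assumed values (a roughly 25 kHz driving sound field and a roughly 3 eV visible photon), not numbers taken from the text.

```python
# Compare the quantum of a ~25 kHz acoustic wave with a ~3 eV optical photon.
import math

h = 6.626e-34    # Planck constant, J*s
eV = 1.602e-19   # J per eV

E_phonon = h * 25e3   # acoustic quantum at an assumed 25 kHz, J
E_photon = 3.0 * eV   # assumed visible photon of ~3 eV, J

ratio = E_photon / E_phonon
print(f"E_phonon ~ {E_phonon:.2e} J, E_photon ~ {E_photon:.2e} J")
print(f"ratio ~ {ratio:.2e}  (~{math.log10(ratio):.0f} orders of magnitude)")
```

With these assumptions the gap comes out near 10 to 11 orders of magnitude, consistent with the figure quoted in the text.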
energetic problem consists in the use of nuclear reactions at low energies. However such explanation, which is not beyond the scope of existing representations, is yet to be found. Quantum model of electron as an open self- Does not it mean that we are facing here the situation organizing system similar to that which has arisen in physics at the end of the 19th century and which has been figuratively The basis for the standard formulation of quantum described in the words: on the light sky of physics there electrodynamics (QED) is the hypothesis that electron are only two small dark clouds – the radiation of is a structureless point particle which does not absolutely black body and the Michelson experiments? experience self-action. This assumption results in Let us remind that in order for these clouds to be serious difficulties – the divergences of mass and charge removed, it has taken the revision of physical notions of electron and the impossibility to explain stability of about electromagnetic field as well as about space and the particle (see, for example, [10-12]). The difficulties mentioned above are very serious. As is noted in [8], there is a simple physical mechanism According to Dirac, the difficulties of QED “in view of of nuclear transformations at low energies which their fundamental character can be eliminated only existence follows from the quantum theory of electron by radical change of the foundations of the theory, as an open self-organizing system [9]. If two or the probably, radical to the same extent as transition from greater number of light nuclei appear inside free the Bohr orbits theor y to modern quantum electron, more precisely, inside the area of basic mechanics” ([13], p. 403). “Correct conclusion”, Dirac localization of the particle, because of interaction of emphasizes, “is that the basic equations are incorrect. nuclei with electrically charged matter of electronic They should be changed in such a way that divergences cloud, a force of attraction appears between the nuclei do not appear at all”. which may result in fusion of nucleus. This means that cold nuclear reaction represents an intra-electronic The main reason of occurrence of difficulties is the process which character is defined by physical assumption that electron is a point-like particle. properties of the own field produced by electrically Therefore, abandonment of this hypothesis is inevitable. charged matter of electron. The purpose of this paper As an analysis of the problem shows, the key to is more detailed consideration of the mechanism above constructing a consistent quantum theor y of stemming from the spatial extension of electron. electromagnetism lies in taking account of the Coulomb self-action of electron, i.e. the back action of the own In section 2 physical ideas are formulated and basic field created by charged particle in environmental space results are schematically presented of quantum theory upon the same particle. In the special case that the of electron as an open self-organizing system. The particle is at rest in an inertial reference frame, own theory outlined is necessary to elucidate the origin of field of the particle turns into static Coulomb field. the mechanism resulting in the occurrence of nuclear reactions of fusion and fission at low energies. 
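The divergence difficulty mentioned here has a familiar classical illustration, which is standard textbook material and not part of the authors' model: the Coulomb self-energy of a charge confined within radius r grows as 1/r, so a strictly point-like electron would carry infinite self-energy, and the radius at which the self-energy alone equals the rest energy m_e c^2 is the classical electron radius, about 2.8 fm. A quick numerical check:

```python
# Classical Coulomb self-energy U ~ e^2 / (4*pi*eps0*r) and the radius r_e
# at which U equals the electron rest energy m_e c^2.
import math

e = 1.602e-19      # C
eps0 = 8.854e-12   # F/m
me = 9.109e-31     # kg
c = 2.998e8        # m/s

k = 1 / (4 * math.pi * eps0)
r_e = k * e**2 / (me * c**2)
print(f"r_e ~ {r_e:.2e} m")                 # about 2.8e-15 m

for r in (1e-10, 1e-15, 1e-18):             # self-energy for a few assumed radii
    U_eV = k * e**2 / r / 1.602e-19
    print(f"r = {r:.0e} m  ->  U ~ {U_eV:.2e} eV")
```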
The E.Schrödinger who suggested the historically first essence of the developed approach consists in that the physical interpretation of quantum mechanics put one own field created by electron is treated as a of the boldest ideas concerning the problem of electron congenital, integral physical property of electron, forward. According to Schrödinger’s hypothesis, the intrinsically inherent in the particle by the very quantity e Ψ (r ) ( e and Ψ (r ) are charge and wave nature of things and for this reason the own field and self-action are included in the definition of the particle function of electron, respectively) is the density of at the initial stage of formulating the theory. As is seen spatial distribution of electron’s charge and, from the received results, electron represents a quantum consequently, the linear sizes of electron are the same (elementary excitation) of the field of electrically as those of atom [14,15]. However, they did not succeed charged matter. It is a solition, which physical and in substantiating the interpretation and, for this reason, geometrical properties are described by the non-linear it was rejected by the majority of physicists [16]. and non-local dynamical equation similar to the known Dirac equation. An important step to the correct understanding of the physical nature of electron was made by A. Barut and In section 3 the application of quantum model of self- by his collaborators [16-18] who formulated and organizing electron to nuclear reactions at low energies developed quantum theor y of electromagnetic is considered. It is noted that because of the presence processes on the basis of self-energy picture (the Self- of simple physical mechanism of nuclear reactions at Field QED). Using expression for the total own energy low energies, which is of a universal character, nuclear of electron, they managed to calculate the Lamb shift reactors represent, in effect, nuclear delayed-action and other radiative corrections and to show that bombs, which from time to time may blow up by virtue radiative phenomena may be described in terms of the Page 217 action function, without using the second quantization insignificantly alter the physical properties of non- method. As is pointed out by Barut [17], “the correct interacting particles. However, such an approach to quantum equation of motion for radiating electron is not interaction between physical fields is obviously of an the Dirac or the Schrödinger equation for bare electron, idealized character because particles constantly but an equation containing an additional non-linear self- interact “with vacuum as with some kind of physical energy term”. medium in which the particles move” [27]. Interaction of particles with vacuum fluctuations is not small and New lines of approach to the problem of electron are it cannot be removed. offered in [9, 19-24]. The formulation of electrodynamics is considered which represents a synthesis of standard It is well also to bear in mind that the necessary quantum electrodynamics and ideas of the theory of self- intermediary at studying micro-objects are the means organization [25]. The physical mechanism of self- of observations (the devices) with the classical field organization of electron consists in self-action. Taking corresponding to them which should be taken into into account the self-action means that electron is account in consistent quantum theory [28]. Inclusion in treated as a feedback system. 
theoretical scheme of arbitrarily weak classical external field results in occurrence of non-zero width Γ of energy Let us outline schematically the results of the levels of “dressed” particles. The basic impossibility to formulation of quantum electrodynamics in which isolate a real particle from vacuum fluctuations of the electron is an open self-organizing system. field and from the classical sources connected to the means of observation is indicative, thus, of necessity Editor’s note: The authors develop mathematics by using to take into account the non-zero width of energy levels Lagrangian functions, 7 equations. You can contact the of real particles [26]. authors for more information about. The use of the harmonic oscillator model, when Thus, the negative result is received: we have tried to describing the interaction of electromagnetic radiation take into account self-action of electron in a natural way with substance, seems to be the main source of serious by supplementing the Lagrangian function with the self- difficulties of the standard formulation of quantum energy term, but we came to an equation that has no theory, as such an approach means apparent neglect of reasonable physical solutions at all. This result seems those physical processes which, proceeding constantly, to mean that the standard theoretical scheme reaches are responsible for inseparable coupling of real physical here the limits of its applicability and so, remaining in system to surrounding medium. Introducing artificial its framework, it is impossible to solve the problem of notion about switching on and switching out of electron and elucidate the physical nature of interaction of oscillator with radiation field, we are able electromagnetic interaction. to calculate within the framework of existing theory the width of energy levels of oscillator, but we cannot assert Essentially new point, which is introduced in [9] into with certainty that such an approach results in correct quantum mechanics consists in the replacement of the description of interaction. model of isolated system described by harmonic oscillator with the model of open system. Let us From the reasoning given above it is seen that they are advance the arguments indicating the inevitability of the models with energy levels of non-zero width that using the model of open system as a basis of the should form the basis for the description of interaction description of interaction between microparticles [26]. of radiation with substance. It is necessary to formulate such a quantum theory, which would take into account Note, first of all, that quantum particle theory based on the energy levels of non-zero width Γ. The case in point the use of the models of isolated system is, strictly is that one should introduce an infinitesimal damping speaking, physically meaningless. Really, any Γ into the initial set of equations describing interaction observation conducted on a system represents a process of charged particles with electromagnetic field. Such of interaction of the system with the means of an approach means the violation in infinitesimal of observation. But in case of microparticles (quantum homogeneity of physical system relative to translations particles) this interaction is not weak and consequently in time. Necessity of violating the homogeneity of time it is inadmissible to neglect it, i.e. 
microparticles should follows from that fact that in the usual approach (with be necessarily considered as essentially non-isolated Γ = 0) the states of the system of interacting fields have systems. degeneracy of infinitely large multiplicity in relation to time translations. According to the fundamental A starting point of the standard formulation of quantum Bogoliubov’s concept of quasi-averages [29], when mechanics is the physical idea that interaction between describing the behavior of degenerate systems, one physical fields can be reduced to collision of the should include into Hamiltonian an infinitesimal term particles corresponding to these fields, the particles removing degeneracy. In the theory presented here before and after collision being considered as free ones. degeneracy of states of quantized fields relative According to these representations, quantum translations in time is removed by introducing the mechanics is based on the notions of “bare”, non- infinitesimal damping Γ into Lagrangian. Thereby the interacting particles, with the interaction between them degeneracy under study is removed already in the being considered as an additional factor which can only initial, zero-order approximation, which is of Page 218 fundamental importance for the approach based on ~ other, Ψ , to the surrounding medium, in which the perturbation theory. particle moves. Formulation of the physical idea that quantum friction Editor’s note: You can contact the authors directly for arises at the very elementary level - at the level of one more information (8-16 equations). particle is given in monograph [26]. Impossibility to isolate real particle from the surrounding world is that Equation (16) coincides in its appearance with the usual property which should be taken into account already in Dirac equation for charged particle in an external field the one-particle theory (for each kind of particles), even described by 4-potential . However, in reality, it differs before switching on the interaction with other particles. essentially from Dirac’s equation. The distinction Model of the particle as an open system ( Γ ≠ 0 ) is consists in that equation (16) is non-linear and non- attractive owing to the fact that from the very beginning local, with the non-locality being of both spatial and the degeneracy of states relative to time translations is time character. Potential (A || ) and vor tex ( A ⊥ ) absent in it, the degeneracy, which is removed in standard approach by taking into account the components of the 4-potential, entering equation (16), interaction of particle with vacuum field fluctuations differ from each other by their physical nature: the and classical fields. The basis for the developed former describes the Coulomb field and is expressed formulation is the fundamental concept of quasi- quadratically in terms of the wave function components averages supplemented with the requirement that the of electron, and the latter describes transverse equations of motion of the particle with Γ ≠ 0 follow electromagnetic waves and is expressed in terms of vortex electromagnetic field. As a detailed analysis from the action principle. It should be emphasized that shows, solutions to the basic dynamical equation the non-zero damping Γ is introduced into describe the clots of self-acting electrically charged electrodynamics with the aim to establish the structure matter, localized in space, i.e. the particle is a soliton. 
of the Lagrangian function, which takes into account the property of openness of physical system. After The internal energy spectrum of electron is discrete with establishing the structure, the limiting transition an indefinitely large number of levels, and to each value Γ → 0 is fulfilled. of internal energy Ek (k is the set of quantum numbers) there correspond certain linear dimensions and In our opinion, the development of quantum theory geometrical form of the region of localization of will be inevitably connected with the use of models electron’s charge. Dimensions and the number of of open system; as such models reflect more extreme of wave function increase with increasing the completely the physical essence of interrelations in value of energy Ek. The distribution of electric charge the real world. It is necessary, thus, to define more of atomic electron in the ground state consists of the exactly the concept of openness of physical system, range of basic localization with the linear dimensions which, on the one hand, would describe real system of the order of Bohr radius a0 (a 0 ~ 10 -10m) and of the accurately enough and, on the other, would be simple tail stretching up to infinity. It is essential that because enough to describe the particular physical processes. of non-linearity of the dynamical equation of electron, wave function does not obey the superposition As open system has the richer physical contents in principle. By virtue of this, electron acquires the comparison with isolated system, some essentially new properties of absolutely rigid body: the perturbation mathematical ideas are needed for its description. First acting on electron at an instant of time in the range of of all, it is necessary to increase the number of basic localization becomes known at the next instant independent dynamical variables describing the t + 0 at any distance from the particle. particle as open system. In papers [9,19-24], as a basis for the description of self-acting electron, the simplest In Fig. 1 the results of calculation are represented model of open system is used which can be described schematically, carried out on the basis of equation (13), by the Morse-Feshbach-Bateman Lagrangian function of the distribution of electric charge in atomic and free [30,31] and which was successfully used for the electrons in the ground (a) and first excited (b) states. description of dispersive medium (the review of articles, in which applications of the model of open system to According to [9,19], the atom represents a system of electrodynamics of dispersive medium are considered, nuclear and electronic solitons interacting with each is given in monograph [26]). In this model the number other, the internal energy spectrum of the hydrogen of dynamical variables is doubled as compared with the atom, due to electromagnetic interaction, being of a isolated system, namely, to each dynamical variable of zoned character. The occurrence of zoned structure of “bare” particle, Ψ , there correspond two dynamical energy spectrum of hydrogen atom is explained as ~ follows. Free nucleus, because of existence of Coulomb variables, which are denoted by Ψ and Ψ . These self-action, has a discrete internal energy spectrum. As quantities are considered as components of the wave the interaction of nucleus with electron is small in function describing the quantum state of self-acting comparison with the energy of Coulomb self-action of particle. 
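The picture of "a region of basic localization of the order of the Bohr radius plus a tail stretching to infinity" can be compared with ordinary linear quantum mechanics, where the hydrogen ground-state charge density is e|ψ100|^2 with ψ100 proportional to exp(-r/a0). The sketch below uses the plain textbook hydrogen atom, not the authors' nonlinear soliton equation, to compute a0 and the fraction of the electron's charge contained within a few Bohr radii.

```python
# Bohr radius and the enclosed-charge fraction of the hydrogen 1s state,
# P(<R) = 1 - exp(-2R/a0) * (1 + 2R/a0 + 2*(R/a0)**2).
import math

hbar = 1.055e-34; me = 9.109e-31; e = 1.602e-19; eps0 = 8.854e-12
a0 = 4 * math.pi * eps0 * hbar**2 / (me * e**2)
print(f"a0 ~ {a0:.3e} m")        # about 5.3e-11 m

def charge_fraction(R_in_a0):
    x = 2 * R_in_a0
    return 1 - math.exp(-x) * (1 + x + x**2 / 2)

for R in (1, 2, 5):
    print(f"within {R} a0: {charge_fraction(R):.3f}")
```

About a third of the charge lies inside one Bohr radius and more than 99 percent inside five, which is the sense in which "localization of the order of a0 plus an infinite tail" also describes the ordinary hydrogen wave function.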
One of them, say, Ψ , corresponds in a sense the nucleus, it can be taken into account by perturbation to the particle alone (to the “bare” particle) and the theory. From here it follows at once that each energy Page 219 Fig. 1. Density of electric charge (ρ) of electron in the ground state (a) and in the first excited state (b): the continuous lines correspond to electron in the hydrogen atom, and the dotted ones to free electron, r is the distance from the center of mass of electron measured in Bohr radii. level of free nucleus is split in a zone. There are accordingly increases due to tunnel transition. Under indefinitely many zones (Balmer’s replicas) and in each certain conditions this process may result in fusion of of them there are indefinitely many energy levels. The nuclei. Obviously, the process in question can occur only lowest zone coincides with the usual Balmer spectrum. at small energies of translational motion of the centers of mass of electron and nuclei: nuclei should be “inside” Physical mechanism of nuclear reactions at low electron long enough for them to have time to come energies nearer to each other as a result of electron-nuclear interaction. This mechanism of nuclear fusion is of a The quantum theory presented above schematically of universal character. In order for it to be realized, it is electron as an open self-organizing system is indicative necessary to have only a stream of free electrons of the existence of the following mechanism of nuclear intensive enough, i.e. heavy electric current, and as long reactions at low energies [8]. as sufficiently great number of free nuclei. If there occur in the region of basic localization of free If heavy nuclei appear “inside” free electron, owing to electron, which linear sizes in the ground state of the their interaction with the electronic cloud there occurs particle are several times as large as those for hydrogen polarization of nuclei. Because the own field of electron atom (see Fig. 1), two or the greater number of nuclei, interacts with protons more strongly than with neutrons, each of them attracts on itself the adjoining areas of nuclei are deformed (become extended), and this electronic cloud, resulting in compression of the process may result in the decomposition of nuclei to electronic cloud as a whole. As a result, there appears fragments (in nuclear fission). automatically an attraction of the nuclei, which proved to be “inside” electron, on each other (see Fig. 2). As is noted in [7], the official version of the reasons for Chernobyl accident contains serious contradictions, a Calculation shows that the Coulomb barrier around number of facts concerning the accident has no nuclei is deformed, its height decreases and the convincing explanations, and this circumstance forces probability of penetration through the barrier to search for the true reasons for the happening, since Fig. 2. The schematic image of interaction of nuclei with electronic cloud: (a) 1 is the region of basic localization of electron, 2 and 3 are nuclei, F1 and F2 are the attractive forces between nuclei, which appear at the expense of electronic cloud compression induced by Coulomb forces; (b) ρ is the charge density, 1 is electronic soliton, 2 and 3 are nuclear solitons, Xn (n=1, 2,3) are coordinates of the centers of mass of particles. Page 220 “not having understood the mechanism of the one magnetic monopoles. 
The scenario of development of tragedy, we sooner or later shall become witnesses of events during the accident, described in [7], seems to the other”. The authors hypothesize that the reason of be quite plausible if only to understand by initiators of the accident was penetration into the nuclear reactor nuclear fission not hypothetical monopoles but free of magnetic monopoles, which have caused the decay electrons, which powerful pulse might arise as a result of nuclei 238U, and this has resulted in production of of electric discharge in the region of turbo-generators. delayed neutrons, growth of power output of the reactor and explosion. As an argument in favor of the The existence of simple physical mechanism of nuclear assumption, the fact is presented that nucleus 238U are reactions at low energies, indicated in this paper, disintegrated under the action of “strange” radiation implies that nuclear reactors are, in effect, nuclear appearing at explosion of foil. delayed-action bombs, which will blow up from time to time. Explosion of nuclear reactor may take place In the opinion of the authors of [5,7], “strange” radiation because of casual short circuit at an electric subcircuit, is created by those magnetic monopoles, which form owing to which there appears an intensive stream of bound states with nuclei of atoms. These compound free electrons. This stream, having got for any reasons particles give the abnormally wide tracks similar to in nuclear reactor, may initiate explosion of the reactor. those of a creeping caterpillar, and also the tracks of It follows from here that though nuclear stations may complicated shape reminiscent of spirals and gratings. provide mankind with cheep energy, atomic energetics Character of tracks changes when imposing magnetic represents a very dangerous way of producing energy field, which, as the authors believe, is an argument in (as well as the energetics using controlled favor of the assumption above. There are also some thermonuclear fusion). The only acceptable way of special tracks very similar to scratches and ink spots. resolving the energetic problem consists in the use of “Strange” radiation is of spherical form, it resembles a nuclear reactions at low energies. ball lightning, and its duration is more than ten times as great as that of the current pulse arising at electric According to the results obtained, nuclear reactions at discharge. With the course of time the luminous sphere low temperatures occur “inside” electron under the (the ball-like plasma formation) is dividing into many action of own field of particle. Hence, to elucidate small “balls”. physical mechanism of CF, it is necessary to study in detail intra-electronic processes and physical properties It is our opinion that “strange” radiation is caused by of own fields of particles. Note that the own field, by its free electrons in excited state arising in the area of physical properties, essentially differs from the field of electric discharge. According to [9, 19], linear sizes of electromagnetic waves: this is the field of standing the region of basic localization of such electrons can waves of matter, it is of purely classical character and make many tens of sizes of atom. The heavy nucleus, may not be reduced to the set of photons. 
The own field for example, the nucleus 238U, appearing inside the of charged particle plays in nature a special role, electronic cloud, is inevitably deformed because of consisting in that it transforms environmental space into interaction of protons with adjoining layers in the the physical environment (physical vacuum) with the distribution of electric charge of electron, and this properties of absolutely rigid body [32]. deformation can cause nuclear fission. If two or the greater number of light nuclei appears “inside” electron, As it was repeatedly noted in the literature [1,2], then attractive forces arise between nuclei, which may experiments on CF are badly reproduced, and this fact result in fusion reaction. When electric discharge is gives rise to doubt the ver y existence of the strong enough, the areas of basic localization of some phenomenon. Bad reproducibility of results seems to electrons can overlap, and if a nucleus lands in the area be explained by the fact that CF depends upon great of overlap, because of Coulomb attraction of nucleus number of parameters: upon electric current density, on the adjoining layers of electronic clouds, a bound concentration of free nucleus, concentration of state may be formed, of two electrons and the nucleus, impurities and dislocations in samples, sizes of samples characterized by the relative stability and significant etc. In order to obtain reproducibility of results, it is spatial extension. necessary that all these parameters, describing the environment in which nuclear reactions occur, be the Obviously, if the concentration of free electrons is great same in various experiments, but to achieve this as a enough, there may be formed some relatively stable difficult task. bunch of plasma consisting of great number of free electrons and nuclei, which in virtue of chaotic In conclusion we shall dwell upon the problem of linear movement of nuclei and because of the absence of dimensions of electron, which is of special interest in preferred directions should have approximately connection with the mechanism of nuclear reactions spherical form. Let us note that atomic electrons, indicated here. The inference that the dimensions of belonging to additional energy zones of atom (Balmer’s electron in the ground state of atom are of the order of replicas associated with nuclear self-action, see Section Bohr radius, i.e. of the order of atomic dimensions, 2) can contribute to “strange” radiation. following from dimension considerations [9,19] and confirmed by quantum model of electron, seems As is seen from above, to account for the reasons for completely unexpected. At first sight, it is in conflict Chernobyl accident, there is no need to involve with both the theory of quarks and experimental data Page 221 on scattering of electrons. According to quark models, Isotopes (Mn55=Fe57) in Growing Biological Cultures. The Sixth the radius of electron corresponding to its quark International Conference on Cold Fusion, Progress in New Hydrogen Energy (Ed. M. Okamoto) Oct. 13-18, 1996, Hokkaido, Japan, Vol. 2, structure makes up the quantity of the order of 10-22 m p. 687; Infinite Energy, 2, #10, p.63 (1996). [33]. It is necessary to emphasize, however, that the 7. Urutskoev L.I., Geras’ko V.N. On the Possible Mechanism of above-mentioned magnitude of linear dimensions of Chernobyl Accident. electron refers to the internal structure induced by Coulomb field. The last is long-distance and . 8. 
Oleinik V.P To Electronic Technologies of the 21st Century: on the consequently the linear dimensions of internal Threshold of Revolution in Communication Systems. Collection of structures produced by it (i.e. spatial inhomogeneities Reports, Millenium 2002, International Conference “To Innovations in the 21st Century”, Odessa, April 13, 2002, p.268-273. in the distribution of electric charge in various quantum 9. Oleinik V.P The Problem of Electron and Superluminal Signals. states) should considerably exceed the dimensions of (Contemporary Fundamental Physics) (Nova Science Publishers, quark structures connected with electron. There seems Inc., Huntington, New York, 2001). to exist a hierarchy of internal structures of particle 10. Berestetsky V.B., Lifshits E.M., Pitaevsky L.P. Relativistic produced by Coulomb forces, nuclear forces, inter-quark Quantum Theory, part 1. (Moscow, Nauka, 1968). interactions etc. characterized by the smaller and 11. Medvedev B.V. Foundations of Theoretical Physics. (Moscow, smaller linear sizes. Nauka, 1977). 12. Dirac P .A.M. Relativistic Wave Equation of Electron. Progress in As to the experiments on scattering of high energy Phys. Sciences, 129, #4, p.681–691 (1979). electrons, according to which the internal structure of 13. Dirac P .A.M. The Principles of Quantum Mechanics. (Moscow, electron is not manifested up to distances of the order Nauka, 1979). 14. Schrödinger E. Quantisierung als Eigenwertproblem. Vierte of 10-16 ÷ 10-17 m, two arguments, at least, can be Mitteilung. Ann. der Physik, Bd. 81, S.109–139 (1926). adduced in favor of that there is no contradiction here 15. Schrodinger E. Selected Works on Quantum Mechanics, Edited with the experiment. Firstly, in experiments on by Polak L.S. (Moscow, Nauka, 1976). scattering, investigators were trying to register the 16. Barut A.O. Schrodinger’s Interpretation of as a Continuous Charge details of internal structure of electron within intervals Distribution. Ann. der Physik, Bd. 45, S.31-36 (1988). 17. Barut A.O., van Huele J.F. Quantum Electrodynamics Based on much smaller than Bohr radius, which is why it is not Self-Energy: Lamb Shift and Spontaneous Emission without Field surprising that results of experiments proved to be Quantization. Phys.Rev., A32, #6, p.3187–3195 (1985). negative: at high energies electrons behave like point . 18. Barut A.O., Dowling J.P Quantum Electrodynamics Based on Self- particles, their internal structure has no time to be Energy: Spontaneous Emission in Cavities. Phys.Rev., A36, #2, manifested. Secondly, the results of experiments were p.649–654 (1987). 19. Arepjev Yu.D., Buts A.Yu., Oleinik V.P To the Problem of Internal analyzed from the point of view of standard Structure of Electrically Charged Particles. Spectra of Internal Energy representations about electron, which refer to a point and Charge Distribution for the Free Electron and Hydrogen Atom. particle, but are obviously inapplicable to real, self- Preprint of the Inst. of Semiconductors of Ukraine, N8-91 (Kiev, 1991) acting electron. According to the predictions of quantum 36 p.(in Russ.). theory of electron as an open self-organizing system, . 20. Oleinik V.P Quantum Electrodynamics Describing the Internal Structure of Electron. Quantum Electronics. #44, p.51-59 (1993) (in real electron is a special object - soliton, i.e. such a cloud Russ.). of electrically charged substance which, when . 21. Oleinik V.P To the Theory of the Internal Structure of Electron. 
interacting with other particles, tends to keep its sizes Second Quantization and Energy Relations. Quantum Electronics. and geometrical form. #45, p.57-79 (1993) (in Russ.). 22. Oleinik V.P Quantum Theory of Self-Organizing Electrically Charged Particles. Soliton Model of Electron. Proceedings of the At present there is as yet no scattering theory of this NATO-ASI “Electron Theory and Quantum Electrodynamics. 100 kind of particles and for this reason it is impossible to Years Later.” (Plenum Press, N.-Y., London, Washington, D.C., Boston, predict with certainty how can the internal structure of 1997), p.261-278. 23. Oleinik V.P Nonlinear Quantum Dynamical Equation for the Self- electron be manifested in experiments on scattering. Acting Electron. J. Nonlinear Math. Phys. 4, #1-2, p.180-189 (1997). 24. Oleinik V.P Quantum Equation for the Self-Organizing Electron. References Photon and Poincare Group (Nova Science Publishers, New York, Inc., 1999), p.188-200. 25. Nicolis G., Prigogine I. Self-Organization in Non-Equilibrium 1. Storms E. A Critical Review of the “Cold Fusion” Effect. J. Sci. Systems (Wiley-Interscience, 1977). Explor., 10, #2, p.185 (1996). See also: 26.Oleinik V.P., Belousov I.V. The Problems of Quantum Electrodynamics of Vacuum, Dispersive Media and Intense Fields. 2. Storms E. Cold Fusion Revisited. Infinite Energy, 4, #21, p.16(1998). (Kishinev, Shtiintsa, 1983). 3. Schwinger J. Cold Fusion: A Hypothesis. Z. Nat. Forsch. A 45, 756 27. Bogoliubov N.N., Shirkov D.V. Introduction to the Theory of (1990); Cold Fusion: A Brief History of Mine. Infinite Energy, 1, #1, Quantized Fields. (Moscow, Nauka, 1976). p.10 (1995). 28. Bohr N. Selected Scientific Works, V.2. (Moscow, Nauka, 1971). 4. Schwinger J. Nuclear Energy in an Atomic Lattice I. Z. Phys. D15, 29. Bogoliubov N.N. Quasi-Averages in the Problems of Statistical 221 (1990); Prog. Theor. Phys. 85, 711 (1991); Energy Transfer in Cold Mechanics. In the Book: Statistical Physics and Quantum Field Fusion and Sonoluminescence. Theory. (Moscow, Nauka, 1973). 30. Morse P .M., Feshbach H. Methods of Theoretical Physics, V.1. 5. Urutskoev L. I., Liksonov V. I., Tsinoev V. G. Experimental (Moscow, Foreign Literature, 1958). Detection of “Strange” Radiation and Transformation of Chemical 31. Dakker H. Classical and Quantum Mechanics of the Damped Elements. Applied Physics, M., 2000, p.83-100. Urutskoev L.I., Harmonic Oscillator. Phys. Reports, 80, # 1, p.1-112 (1981). Liksonov V.I., Tsinoev V.G. Observation of Transformation of Chemical 32. Oleinik V.P Superluminal Signals, the Physical Properties of Time, Elements During Electric Discharge and the Principle of Self-Organization. Physics of Consciences and Life, Cosmology and Astrophysics, #1, p.68-76 (2001). 6. Vysotskii, V. I., Kornilova A. A., and Samoylenko I. I. Experimental 33. Dehmelt H. Experiments with an Isolated Subatomic Particle at Discovery of Phenomenon of Low-Energy Nuclear Transformation of Rest. Progress in Phys. Sciences, 160, #12, p.129–139 (1990). Page 222 The Evolution of Lifter Bill and I eventually found different paths, and in some ways drifted apart. Bill moved into Geomagnetic Technology levitation research and started intense investigation on the patents of How Wachspress and the magnetic dipole levitator. I went to more traditional technologies – Tim Ventura eventually becoming a UNIX system administrator for AT&T Wireless. 
I hadn’t heard from Bill Butler in about 6 months when INTRODUCTION he sent me a short email containing the words “hey, check this out” – and a link to Jean-Louis Naudin’s Readers of the electric-spacecraft journal might know “Lifter Experiments” home-page. I visited the site, a little about the Lifter technology popularized recently watched all of the video clips, and then watched them be Jean-Louis Naudin, but they probably don’t know again. This was the technology that I had been waiting the whole story. In the short amount of time that has for! transpired since the publication of that article, this LIFTER TECHNOLOGY technology has both literally and figuratively taken off – going from a “proof-of-concept” prototype by Naudin I can say without a doubt that the lifter technology is to an international group of researchers investigating completely revolutionary, but you might not realize how how to give the lifter higher-performance and greater profoundly revolutionary it is until you’ve stopped to efficiency. With the first commercial products now on think about it for a bit. What is it about the lifter that the horizon, if you haven’t taken the time to read up on makes it so unique, especially when so many inventions lifter technology, this is the perfect time to do so. . . claim to produce more and better electromagnetic To give you a complete up-to-date overview of where thrust? The answer is simple – the lifter works this technology is, where it is going, and what I think it repeatedly. is capable of, let me start with the basics – an overview of how I became involved with Electrogravity research Jean-Louis Naudin started a figurative bonfire when he and what eventually led me to become involved with decided to replicate a “proof-of-concept” experiment lifter technology. by a small Huntsville, AL aerospace contracting firm. The lifter initially came into being in the mind of Jeff MY BACKGROUND Cameron – the chief scientist of Transdimensional Technologies – in the 1970’s from experiments I started college at 16 years old, back in 1992 – at the conducted with high-power military and research-grade same time, I purchased a kit containing “hoverboard lasers. A device in the lasers called a “pre-ionizer” was plans” from Hovertech, Inc. The moment that I received used to apply a high-voltage to the lasing-medium to that $20 white-manilla envelope in October 1992 was facilitate better performance. Repeated operation of the the moment that I became involved with what has now pre-ionizer had a common side-effect of horribly twisting been nearly 10 years of electrogravity research. the wire and foil combination out of shape, which required a decent amount of work to repair. I worked with Bill Butler – the president and chief- scientist of Hovertech – on a variety of different Jeff Cameron realized that the torsional effect on the antigravity, Electrogravity, and levitation ideas from pre-ionizer was a side-effect of some unknown force approximately 1992 through 1996. While putting in my acting on the pre-ionizer apparatus, and he began a college time, I was also taking distinct advantage of long-term investigation into what was causing the the enormous college library at Western Washington apparatus to deform. His eventual results indicated that University to read up on everything that might possibly a force in the foil collector in the pre-ionizer was causing relate to Electrogravity. 
I read books on standard a net-thrust in the entire pre-ionizer apparatus that was electronics and physics theory alongside with books making it twist and move on its mounts within the laser by the masters of this science, such as TT Brown and – the lifter came to him later as a three-dimensional Nikola Tesla. device to demonstrate this force. Bill and I played with several different ideas – many of Naudin’s genius became readily apparent not through them only peripherally related to Electrogravity. For a giant breakthrough in technology, but rather in a more instance, I published a manuscript initially in 1996 subtle fashion – he replicated the lifter experiments of describing Tesla’s theory on how to reliably produce Transdimensional Technologies and published videos, Ball-Lightning using a standard Tesla coil – the articles, and complete construction plans on his website information courtesy of WWU’s excellent library. Bill to allow others to do the same. In a manner similar to also assisted me with obtaining video footage of a Searl- the open-source software movement, Naudin had taken effect conference that he attended in Denver in the early an incredible scientific find that might have otherwise 90’s – this footage was an excellent overview of Searl’s been overlooked and done and incredibly charitable and design and construction concepts for what he believes intelligent thing – he gave it away for others to play is the next major technological step in aviation and with. By following Naudin’s instructions, inventors all space travel. over the globe began to slowly replicate the Page 354 Transdimensional Technologies experiments and directional force from the larger element towards the thereby validate the proof of concept that Jeff Cameron smaller element. Jeff Cameron seems to have a practical had created to show that his “mystery force” was real axiom that goes along with this scientific philosophy, after all. Naudin of course took advantage of these which is that there must be both a leakage current and replications of the experiment by showcasing them on a capacitance between the wire and the foil in order for his own website – which in turn lends additional the lifter to function. credibility to his research. Conventional physics says that two capacitor elements As far as technology goes, the lifter demonstrates that of different sizes will not generate a net-directional force, science and engineering have more than their share of so what gives? This is actually the thinking that humorous irony. For the years that I researched convinced me to abandon my research into Biefeld- Electrogravity and antigravity claims, all of the devices Brown effect technology in 1996 – physics says it doesn’t that I had seen required something “magic” to make work. What the books say will happen is that since the them work. For instance, Bob Lazar’s UFO-claims could wire can only maintain a lower-capacitance than the have been reverse-engineered except that they require foil, the overall capacitance between the two elements ‘element 115’ to make them work – an element will be reduced to be equivalent to that on the smallest chemically related to Bismuth that is theorized to element (or plate) in the capacitor. This, of course, potentially have electrogravitic properties. I will come assumes a 2-element series-wired capacitor, such as back to the possible electro-gravitational properties of the lifter. 
Bismuth in a bit, as it turns out that this element may in fact provide some use for future lifter technology. I can give you the conventional physics answer to this small riddle by simply saying that the lifter uses a The Searl-effect disc is an even better example of the manifestation of ion-wind. This would state that the “magic” usually involved with building a working electrons crossing the air-gap cause a breeze that Electrogravity device. Searl’s ideas seem valid enough, causes thrust – since the breeze would be traveling but although he supposedly demonstrated several down from the wire to the foil, the thrust would be up, working prototypes in the 1950’s, he is currently as demonstrated in testing. In the ion-wind explanation, pursuing millions of dollars in research funding in to the electrons are emitted from small-diameter of the replicate those experiments in a modern-day setting. positively charged wire in such great abundance that they move a significant airflow down to the foil where The irony involving lifter technology is that while they are absorbed and transported electrically back to inventors all over the world have been searching for the HV power-supply’s electrical ground. the perfect electro-gravitational device for decades, the possible working proof of concept for many of these Conventional physics would seem to have the theories has been sitting in front of us the whole time – theoretical answer to why the lifter causes lift, but in the lifter costs less than $10 in parts to build, and none the experimental setting, which is what we now have of them are magic – in fact, for my experiments, all of an abundance of thanks to Jean-Louis Naudin, the them were at stores within 2 blocks of my house — balsa conventional physics explanation doesn’t suffice. wood from the craft store, aluminum foil from the Experimentally, there are several deviations from the supermarket, 30-gauge magnet wire from the local ion-wind explanation that seem to invalidate it. For Radio Shack, and an old computer monitor for the high- instance, if you completely contain the lifter in a plastic- voltage power-supply. enclosure, it will still generate lift – this would not be the case if a breeze was responsible for lifting the LIFTER PHYSICS device. How could it be, if the breeze is limited to the inside of an enclosure which itself is levitating? Whether or not Jeff Cameron knew it at the time he constructed his lifter prototype, what he was actually A more compelling proof that Biefeld-Brown is building was a 3 dimensional representation of a something other than ion-wind comes from Purdue drawing on a patent application by TT Brown in the University, where the lifter experiment was replicated 1950’s. In the patent application, the drawing shows a inside a vacuum-enclosure with positive results. While positively charged wire suspended over a grounded foil ion-propulsion can work in space, it usually assumes body which was meant to demonstrate the most basic that there is argon, krypton, or other noble gas to be Biefeld-Brown effect generator. While Brown’s drawing used as the propellant – the vacuum enclosure showed is a little different than Jeff’s design, the resemblance that with no gas available for transport the lifter showed is uncanny enough to indicate that both of these men a moderate improvement in performance. had the same basic force in mind. 
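The ion-wind (electrohydrodynamic) explanation sketched above has a standard first-order figure of merit: if ions of mobility µ drift across a gap d while carrying corona current I, the momentum given to the air per second is roughly F ≈ I·d/µ. The numbers below are hypothetical but lifter-like (1 mA of corona current, a 30 mm wire-to-foil gap, and the usual small-ion mobility in air of about 2×10^-4 m²/(V·s)); the point is only that the conventional mechanism already predicts thrust of the observed few-grams order.

```python
# One-dimensional electrohydrodynamic (ion-drift) thrust estimate: F ~ I * d / mu.
I = 1e-3     # corona current, A (assumed)
d = 0.03     # wire-to-foil gap, m (assumed)
mu = 2e-4    # small-ion mobility in air, m^2/(V*s) (typical literature value)
g = 9.81     # m/s^2

F = I * d / mu
print(f"F ~ {F:.3f} N  (~{1000 * F / g:.0f} gram-force)")
```

Whether this accounts for all of the reported observations (sealed enclosures, vacuum tests) is exactly the dispute described in the surrounding text; the estimate only shows what the ion-wind picture predicts in air.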
The vacuum enclosure tests are definitely compelling TT Brown’s patent indicates that this Biefeld-Brown evidence that something else is going on other than effect generator works due to a gradient electrostatic- ion-wind – at least compelling enough for NASA to file field between the wire and the foil – in essence, these patent number 6,317,310 – “Apparatus and Method for two elements compose a low-efficiency, high-voltage Generating Thrust using a Two Dimensional, air-gap capacitor in which the difference in geometries Asymmetrical Capacitor Module”. The NASA patent between the two capacitive elements generates a net- description – which can be accessed from Naudin’s lifter Page 355 website – is as vague is it is compelling in that NASA increased when a higher potential 12-volt charge was is basically requesting a patent on any technology that used to heat the emitter wire in conjunction with the generates force using two geometrically dissimilar standard high-voltage charge coming off it. capacitive plates. Disregarding the fact that this patent was issued nearly 50 years after TT Brown’s patent Transdimensional Technologies – the developers of the using nearly identical descriptions and pictures, and initial lifter design – are taking the approach to also disregarding the fact that NASA also doesn’t optimizing lifter performance to another level. They are understand why the lifter generates thrust, it seems currently not-so-secretly working on a 2nd generation apparent the this phenomena is gaining credibility in lifter, which will consist of a 1-piece layered material to engineering circles while physicists seemingly continue replace the current wire and foil design. to deny that anything is going on. The layered material approach to the lifter is an idea that Jeff Cameron may or may not have had after some lengthy discussions with Travis Taylor – the man responsible for testing some anomalous materials Every good movie always has a sequel, and in known as “Art’s Parts”. technology, if at first a major government agency ‘liberates’ your idea, it may seem that a sequel is in Art’s Parts were some pieces of material sent by an order. In the case of the lifter, it would appear that the unknown person to the Art Bell radio talk-show with a NASA patent would cover this technology to at least note stating that the they were pieces of UFO wreckage some degree – at least until someone overturns this taken from the often-cited “Roswell crash” in 1947. patent under the prior-art rule – which means that the Whether or not the pieces of material actually came from next generation has to be considerably more advanced that crash is unknown, but Art Bell did the honorable to escape having the research and development be thing by sending them to an acquaintance in US Army forfeit to the government. research named Travis Taylor for a professional scientific The pursuit of more advanced versions of the lifter technology is currently underway by several Taylor, who apparently tested the materials after-hours independent inventors, as well as Transdimensional in a world-class research lab to avoid potential Technologies themselves. Most of the private research classification by his superiors, used an electron- by inventors has delved into improving the current lifter microscope to determine that the layered materials were design to produce a greater force output and utilize less actually pieces of metal – containing several hundred power to do so. 
Because the lifter is so simplistic in microscopically thin layers of magnesium and bismuth. design, many of these enhancements have been of a Taylor also tested the layered-metal with a high-voltage very basic nature. apparatus, which seemed to indicate that when a voltage was applied to the material, the layered metal Jean-Louis Naudin was the first independent inventor would move – and in some cases levitate. to do serious work with improving the technology behind the lifter – and even so, the majority of his work Taylor reported his findings to Art Bell and sent video has utilized similar materials in more complex clips of his high-voltage experiments, which eventually arrangements. Naudin has demonstrated dramatically made it back to a permanent home on the Art Bell radio increased lifting forces by building a “lifter inside a show website. In addition, Taylor conveyed his belief lifter” for demonstration purposes. Naudin has also done that the only manner in which the pieces of metal could a great deal of work in taking breaking up the concept properly be produced was through an advanced form of the single triangular lifter into a parallel series of of electron-deposition technology, due (apparently) to lifting cells – which means that these cells, working in an absence of oxygen-molecules between the different parallel, can contributed to greater stability and higher layers of metals. Additionally, the layers of metal were force output than any single lifting element. too thin to have been mechanically produced. Saviour – an independent inventor working with Jean- Jeff Cameron indicated that Transdimensional Louis Naudin – has done some of the most interesting Technologies maintained some contact at one point in improvements on lifter design since those by Naudin time with Travis Taylor, apparently as professional himself. Saviour’s concerns have not focused around the colleagues in the defense community in Huntsville, AL. “bigger is better” philosophy that many inventors have I am not an expert on this relationship, other than to stuck by – he has done several experiments to determine say that to the best of my knowledge these two the radiation output, remote-controlled applications individuals knew and contacted each other, and that development, and materials analysis and improvement this is how Jeff Cameron might have come up with the on the lifter that others have not had the time or 2nd generation lifter idea. expertise to conduct. ADVANCED LIFTER TECHNOLOGY A recent experiment by Saviour demonstrates just how this gentleman’s foresight is helping other As an inventor, I couldn’t care less whether or not the experimenters – Saviour substituted nichrome heating idea for the technology came from a crashed UFO. To wire for the common lightweight wire used for the be perfectly honest, I’m not what you would call a emitter, and demonstrated that the lifting force greatly “believer” anyways, although I have often wondered Page 356 about it. My point is not to attempt to lend any credibility Transdimensional Technologies recent research is to “Art’s Parts”, but rather to tie in the properties of the utilizing the layered materials approach to eliminate the anomalous material’s high-voltage movement with the air-gap and substitute for it high-k dielectric materials underlying theory of lifter operation. that may allow higher overall performance. 
Although they have not yet released details about the exact Even mentioning a UFO in a respected publication or composition or thickness of the materials that they are article is the kiss of death in today’s world – and I working with, they claim to currently have a 10% wouldn’t do it if it wasn’t an intricate part of the story. reduction in weight using a low-voltage current across The other interesting thought is that the layered material the thickness of their newest device. is once again partially composed of Bismuth – which is thought to possibly have some of the same electro- FUTURE LIFTER TECHNOLOGY gravitational properties as Bob Lazar’s Area 51 “element 115”. Is there a similarity, or merely a coincidence Thanks to the tremendous amount of research being between a claim that hasn’t gained credibility and a done on lif ter technology by Transdimensional technology currently under development? Technologies and a loosely affiliated group of inventors around the world, the future of lifter technology seems The lifter in its own right is essentially a layered very bright at this point. material. One of those layers is the emitter wire, which is highly charged with about 30kV worth of electrons, Transdimensional hopes to release some breakthrough another layer is the air-gap, which is approximately 3 research to allow replication of their newest 2 nd cm in height, and the final layer is an electrically- generation experiments in the very near future, and grounded “skirt” of aluminum foil that surrounds the along with that stands the massive body of research lifter. It is also reasonable to expect that there are only and advancements being done by inventors and two possible forces at work in the lifter – one of which researchers such as Jean-Louis Naudin, Saviour, the being a possible ion-wind effect moving down from the Lifters-group, and myself. emitter to the foil, and the other being a possible Biefeld- Brown effect, moving up through the foil to the emitter. My personal goals are to attempt to assist Transdimensional Technologies in popularizing this There are a few shortcomings in the lifter as a design technology to increase awareness of it and help “spread that might be overcome if we could transition the the word” about what it is and how it can potentially layered material from one containing an air-gap to one help the world. that does not. For instance, the lifter is currently a rather delicate object, in that having a wire under tension as Imagine if instead of getting in your car and driving the emitter makes construction difficult for future through the usual maze of thoroughfares and side automated assembly. Additionally, because the air-gap streets you were able to simply type in your destination requires struts to support the emitter wire, a trade off and have a flying vehicle take you there automatically. involving the weight versus the strength of the struts The lifter technology offers to potential to transform the is additionally involved in any current implementation current transportation market by offering point-to-point of lifter technology. aerial transport without the need for roads or freeways. Some of the other changes that would be helpful to Additionally, unlike the magnetic-levitation (“Maglev”) implement when transitioning lifter technology from one technologies that are currently being promoted as the type of air-gap to another are changes in the materials future of transportation, the lifter does not require a used to increase the dielectric capacity. 
High-K dielectric materials may be used to increase the displacement of electrons in the material and so enhance charge transport. And since increasing the dielectric potential of the layered materials also increases the breakdown resistance, thinner materials can be used.

Designing a lifter without an air gap would also accommodate lower voltage requirements between the foil and the emitter. The voltage would not have to create the large e-field gradient needed to drive a leakage current across such a large void. Therefore the overall voltage across the device could be greatly reduced without much cost in thrust. A lower operating voltage in turn means that a lower-output power-supply can be used for a given amount of current, which increases the overall efficiency.

Unlike the Maglev systems just mentioned, the lifter needs no specially constructed and exorbitantly expensive track to operate – this greatly reduces the per-unit cost of the technology and opens the door for wider adoption by the general public for transportation solutions. Other individuals are currently working to see whether lifter technology may offer cost-effective transport into space, which would reduce costs greatly and allow a one-piece, reusable method of moving things into orbit.

LIFTER RESOURCES

All of the research involved with lifter technology is available to the public on the internet. The resources listed below are some of the better and more common sources of detailed lifter information.

American Antigravity – The author's website, which includes video clips, complete instructions, and other related lifter information.

Jean-Louis Naudin's "Lifter Experiments Website" – A very in-depth website containing video clips, complete instructions, and other topics relating to lifter technology.

World-Wide Lifter Replications – An overview with photos and video from many of the independent inventors who have replicated the lifter experiments.

Transdimensional Technologies, Inc – The home page for Transdimensional Technologies, the developers of the lifter design.

Blaze Labs (Saviour's Research Website) – An excellent site on research into lifter enhancements, radiation testing, sealed devices, power supplies, and related topics.

Lifter Builders Group – An email group for the exchange of research findings for those interested in building lifters or staying current on the state of the technology.

NASA Patent #6,317,310 – The NASA patent regarding obtaining thrust from an asymmetrical two-dimensional capacitor, granted Nov 13, 2001.

Research on the Capacitance Converter of Environmental Heat to Electric Power

N.E. Zaev
143970, Moscow region, village Saltykovka, Granitchnaya Str., 8
529-9664

Nickolay E. Zaev works on the creation of prototype energy converters which do not require any fuel. The direct conversion of environmental heat to electric power is possible in the processes of "charge-discharge" in non-linear condensers or by means of "magnetization-demagnetization" of ferrites. Such converters of energy create cold and electric power without any fuel.

The theory of the converter, the results of early experiments on the generation of microwatt power, and the methods and features of the research are given in this article. The methods of generating a few watts of power are described in detail. The possibilities and difficulties of creating powerful capacitance converters are also discussed.

I. Grounds of research

1.1. From the position of orthodox physics there is no subject of research. It seems evident that the energy of charging (C) a condenser Cx, Ac, is always equal to or greater than the energy of discharging (D), Ad, i.e. always Ac ≥ Ad. Only a more advanced analysis shows that this is not always true. Exactly: in a Cx where ∂C/∂V < 0 the inequality Ad > Ac is possible, and in a Cx where ∂C/∂V > 0 the work Ac > Ad. Therefore we should discuss the nonlinear capacitors (NC). At the end of 1969 I noticed a systematic inequality Ad > Ac during the measurement of Ac and Ad of many capacitors with different dielectrics. Theoretical grounds and results of measurements of this phenomenon are given in the publications of 1984 [1], [2, page 73]. On the industrial standard NC (varicond) ceramic condensers VK2-ZSH, 4÷6.8·10⁻⁹ µF, with an optimal voltage of about 95 V, it was stated that Ad ~ 1.21·Ac at a power of about 98·10⁻⁶ Wt, the "generated" extra power being equal to 21·10⁻⁶ Wt.

1.2. In [1] and [2] the strict theoretical proofs of the realization of Ad > Ac (there are four of them) are given. For 1 m³ of dielectric:

Ad − Ac = −(1/2) · a · ε0 · Ec

(Ec is the intensity of the field, V/m; ε0 is the dielectric constant of vacuum; a is the coefficient of nonlinearity of the capacitor). Below we state one more proof, more closely connected with the parameters of the circuit.

It is well known that when a linear capacitance is charged from a source of constant voltage V0 = const through a resistor R = const, it receives an energy Ac = C·V0²/2, exactly equal to the output energy in the charging time tc. The output energy radiated from the load R is the Joule heat Θ = R · ∫ i² dt, taken over the charging time from 0 to tc [3, page 546]. If an NC (nonlinear condenser) is charged, there is no proof of such an equation. The NC are the variconds or other capacitors which have ∂C/∂V > 0 in the interval V = 0÷Vk. For the variconds, Vk is the voltage which corresponds to the maximum Cv; if V > Vk, then ∂C/∂V < 0. For some other capacitors Vk is the breakdown voltage. For further consideration let us assume that in the operating area of the given sample of varicond a function…

Reality and Consciousness in Education and Activity

A.P. Smirnov
Vice president of the International Club of Scientists
190031, Saint Petersburg, Kazanskaya str., 36
Tel: +7 (812) 312-0508
E-mail:

The relation of thought to existence is the main question of philosophy as the science of the general laws of Nature; it was formulated long ago but has still not been interpreted and solved within the frames of generally accepted logical standards. The ways of solving it lead to futile discussions of materialists and idealists, to senseless disputes of determinists with eclectics and apologists of "chance". This discussion lost its sense without a determination of the terms under discussion.

A notion of action is introduced as the product of the acting force FA and the speed of action VA. We offer a law of interaction, which determines the interaction between the action of the cause and the effect appearing during this action as a reaction, i.e. the product of the force of reaction FR and the speed of reaction VR. Thus, this interaction between the cause and the effect is determined by the transfer of action from one object to another in equal quantity, but with the appearance of a new quality, which is determined by the specificity of the interacting objects according to the fundamental law of interaction:

FA·VA = −FR·VR

Unfortunately, an incorrect interpretation of the manifestation of interaction as an opposite counteraction became strong in our minds. This manifestation is perceived as a compensation of the cause by the action of the effect.
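Returning for a moment to Zaev's charge–discharge comparison above: the quantities Ac and Ad can be evaluated numerically once a capacitance–voltage curve C(V) is specified. The sketch below uses an invented, varicond-like C(V) purely for illustration (its shape and values are assumptions, not data from the article), and it simply computes the integrals being compared; it does not by itself settle the Ad > Ac claim.

```python
# Numerical sketch of the charge/discharge energies compared in Zaev's argument.
# The capacitance-voltage curve C(V) below is an invented, varicond-like model
# used only for illustration; it is not data from the article.

import numpy as np

def C(v, c0=6.8e-9, vk=95.0):
    """Assumed nonlinear capacitance: grows up to Vk, falls off above it."""
    return c0 * (1.0 + 0.5 * v / vk) if v <= vk else c0 * (1.5 - 0.5 * (v - vk) / vk)

V0 = 95.0                                   # charging voltage, V (the article's ~95 V)
v  = np.linspace(0.0, V0, 20001)
q  = np.array([C(x) * x for x in v])        # charge on the capacitor, Q(V) = C(V)*V

# Energy delivered into the capacitor while charging: integral of V dQ
W_stored   = float(np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(q)))
W_source   = V0 * q[-1]                     # energy drawn from a constant-V0 source through R
W_resistor = W_source - W_stored            # energy dissipated in the series resistor
W_linear   = 0.5 * C(0.0) * V0**2           # textbook C*V0^2/2 with the zero-bias capacitance

print(f"energy stored in capacitor  : {W_stored * 1e6:.3f} uJ")
print(f"energy dissipated in R      : {W_resistor * 1e6:.3f} uJ")
print(f"linear-case C0*V0^2/2       : {W_linear * 1e6:.3f} uJ")
```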
Moreover, terms under discussion and condemned debaters to the incorrect way of writing of the mathematical form have subjective “gustatory ” senses, which were of Newton’s third law manifestation established in changing while aging and depended on the extent of textbooks and scientific literature due to the incorrect received and conceived knowledge. Such is the situation translation as FA=-FC. This very tragic situation for the in this link of World studying, which does not allow science suppressed the development of logic in creating a logic chain of reasoning in the understanding description of processes. Chance and statistic approach of cognizable things. to the description of phenomena has taken place in our perception. This approach is based on the model of non- A paradoxicality of all things that happen is connected interacting elements, in which there is no order with incorrect translations and interpretation of stipulated by the interrelation of elements. The science wisdom of ancient philosophers and scornful attitude has developed this model and its properties, and this both to the knowledge of distant past and classical fact predetermined the evolution of notions about real heritage, which highlighted the elements of natural- World. science approach to Weltanschauung. This ideology penetrated in mathematics, which for sake According to Plato, an ideal thing is a visual thing, of physics began to study properties of objects, but not which can be felt by our organs of sense. Therefore, the operations with them. Moreover, a possibility to reflect understanding of objective reality is mediated by the specific character of real physical processes in the crowd of our feelings in such a way that perception of interconnection of cause-effect relations by reality by means of these feelings gives us a notion of mathematical operations is not realized. It is essential, the World. Hence, our notions about reality are the that fundamental law of interaction establishes subject of research in science, but not the World itself, manifestation and description of elementary act of cause i.e. the World outside of our consciousness. So, what and effect interrelation, the law of manifestation of a should be studied in our notions about the World? Let Fact. It means that order in the World is conceived us refer to the wisdom of ancient scientists again: “ The through manifestation of concrete facts. The action of World is given in motion and its laws are the laws of law of interaction lies in the basis of these facts. motion”. Then, we should speak about laws, order, i.e. about relation and interrelation in the phenomena of So, there is a conclusion: the World is perceived motion. This is the distinctness in notions and actions through the discrete manifestation of motion forms (determinism) to predetermine further development of evolution. Hence, the discrete mathematics of finite reality cognition logic, i.e. what has an influence on us discrete aggregate can be applied to describe the World, and determines specific character of our perception. but not the continual mathematics, which lies in the Further we can speak about formation of ideas about basis of traditional orthodox physics. All these reality, which require some premises, principles to circumstances lead to numerous problems and organize these ideas. These principles are given in difficulties in description of our notions of reality, to the classical heritage, in “Dialogues” by G. 
Galilee [1] and plenty of used principles, which are in contradiction to “Mathematical principles of natural philosophy” by I. each other, as R. Feinmann noticed once [3]. Newton [2]. A notion of force as a measure for momentum was introduced, which manifests in action And what we can get from determinism, which is based and disappears from the body after the action is over, on fundamental law of interaction, law of cause and and the body keeps its new state due to the inborn effect interrelation? The change of force value in a “inertia force”. But the force itself cannot do anything reaction takes place, i.e. the change of value of the without its application with a certain speed. Then we potential gradient, i.e. the change of energy Page 315 concentration. This circumstance is visually appeals to the model and principles of the World of non- demonstrated by the operation of Archimedean lever interacting elements using the range of regularities, as well as in all phenomena of the real World. This is which also reflect some features of the real World, but Archimedean lever, where the loss of speed takes place, they do not include fundamental law of interaction and but there is a gain in force. And the load raised on a Principle of Order, which are necessary and sufficient lower height than the way, which was made by the to describe reality. Descriptions existing in traditional applied force, will give a huge power during its free physics are phenomenological ones and concern only fall. This power is higher in so many times, in how many those aspects of the phenomena under investigation, times the time of the load fall is less than the time of which do not include possible qualitative changes action spent on its raising! And this is the fact, which during development of processes, because the main determines specific character of creation in the real property of real processes of interactions (creation of World. We should attribute both quantitative and new energy property) was excluded. qualitative characteristics to energy. This is the side of energy manifestation, which is reflected in Plank’s The current situation in physics had a strong influence formula: energy is proportional to frequency. on formation and development of other sciences, other fields of knowledge, since the logic of reflection of cause- Manifestation of fundamental law of interaction also lies effect links was initially excluded. These are the links in the basis of general universal regularity of evolution to determine existence, i.e. existence of constant of real many-particle systems with the change in creation of the World. All these circumstances give external conditions. This process develops in multistage grounds to fundamentally revise educational programs, way, and on the each stage the logarithm of the ratio first of all, in physics, philosophy, mathematics, between the event happened and the event to happen chemistry and biology. A change to the offered logic of always is equal to the work of external forces. In other cognition, which is based on the Principle of Order and words, the relation of the event happened to the fundamental law of interaction, will fundamentally resource is in exponential dependence on the initial change our notions about the World as well as will open conditions and extent of external influence. Exponential big opportunities for new technique and technology. 
A character of development of processes is the evidence Man has got huge opportunities in cognition and that Nature develops according to the law, which existence, but due to his immorality and features of conserves itself during evolution. This regularity, which incorrect aims in the logic of cognition he cannot use manifests everywhere, can be naturally called the these gifts of Nature. We present wider and deeper view Principle of Order. on the World and a Man in it, which allow analyzing, watching and operating with those fields of reality, Fundamental law of interactions and Principle of Order which manifest in finer World, World of higher-frequency appeared to be enough to describe and understand energies and other structures of fields. Logic of cognition phenomena in the observed World. And it is natural to had not touched these structures yet. expect that this principle of Nature manifest in finer World also. This World includes lower and higher References frequencies, which are not available for us yet to watch this wide-range frequency-wave emanating Universe. 1. Galilee Galileo. Selected works in 2 volumes. M.: Nauka, 1964. 2. Newton Isaac. Mathematical principles of natural philosophy. News of Nickolaevskaya Marine Academy, issue IV, V, Petrograd, From all aforesaid we should make a conclusion that 1915-1916. Volumes I, II and III, 620 p. the logic, which exists in the traditional physical tool, 3. Feinmann Richard. Character of physical laws. M.: Nauka, 1967, 160 p. Old New Energy that it does not change chemical properties of substance and is compensated in natural conditions. Physical mechanism of energy-release lies in the fact that an Y. I. Andreev, A.P Smirnov electron in plasma layerwise takes sufficiently smaller elementary particles (electrino) from positively charged St.Petersburg, Russia atoms or fragments of substance (ions). Electrino give Internet: their kinetic energy to plasma, heat it up and move beyond the bounds of reaction zone in the form of Two kinds of energy, accumulated energy [1] and free thermal and optical radiation. There is no substance, energy [2], are considered as an inexhaustible source which could not take part in such process of energy- of natural energy created by Nature itself. It is release, i.e. phase transfer of higher form (PTHF). The ecologically clean and possible to be renewed in natural most appropriate, available and low-cost substances conditions. are air and water, which play the role of nuclear fuel in PTHF. It is turned out that usual combustion is also a The energy accumulated in substance is released as a process of PTHF, in which oxygen is a nuclear fuel and result of partial decay of substance in elementary organic fuel is a donor of electrons. In the process of particles. At that, the acquired defect of mass is so small combustion oxygen atoms get the defect of mass equal Page 316 qualitative characteristics to energy. This is the side of fields of knowledge, since the logic of reflection of cause- watching and operating with those fields of reality, expect that this principle of Nature manifest in finer World also. This World includes lower and higher References frequencies, which are not available for us yet to watch 2. Newton Isaac. Mathematical principles of natural philosophy. 160 p. Old New Energy that it does not change chemical properties of substance and is compensated in natural conditions. 
Physical mechanism of energy-release lies in the fact that an elementary particles (electrino) from positively charged St.Petersburg, Russia atoms or fragments of substance (ions). Electrino give beyond the bounds of reaction zone in the form of PTHF. It is turned out that usual combustion is also a Page 316 to 10-6 %, which constitutes the so small value that it Free energy diffused in the surrounding space could be cannot change chemical properties of oxygen and does transformed into mechanical, electrical or another kind not call killing radioactive emanation. of energy by means of vibration-resonance, electromagnetic and other energy systems. There is a possibility to use energy properties both of Classification of these systems as well as physical oxygen and nitrogen of free air in the process of PTHF. mechanism of energy transformation is given in [2]. The To do this it is necessary to destroy nitrogen molecule known Searl’s engines can serve as an example of at least in atoms or smaller fragments by some initiating energy systems working with free energy. influence. It is achieved by electrical discharge, magnetic flow, explosion and other means. These means The developed physical mechanisms of energy-release consume much less energy than produced in PTHF. In processes will allow to create industrial, stably particular, such processes were achieved in combustion operating, ecologically clean energy systems, which do engines. Such nitrogen mode of operation and not consume organic and nuclear kinds of fuel, harmful combustion is accompanied by oxidation to H2O, but for humankind. not to CO 2, which is more effective in energy and ecological aspects. Accordingly, the power of engine References increases and organic fuel is saved. Exhausts from this process mainly contain water vapor [3]. 1. ., Andreev Ye.I., Smirnov A.P Davydenko R.A., Klucherev O.A. Natural enegetics. – SPB, Nestor, 2000, 126 p. 2. Andreev Ye.I., Andreev S.Ye., Glazyrin Ye.S. Natural PTHF processes with excessive power release (more energetics –2. – SPb, Nevskaya Zhemchuzhina, 2002, 104 than consumed power) were also obtained in heat- p. generators operating with water. 3. Patent 2179649, Russia, 2000 / Andreev Ye.I., Smirnov A.P., Davydenko R.A. On General Nature of Forces which are: electromagnetic, gravitational and others. There was a theoretical attempt to connect the force initiation with energy gradient [33]. Experimental proof of force initiation due to energy gradient was obtained in the works [7, 38]. Below we made an attempt to show the general regularity of force initiation, which is connected with non-uniform distribution of energy in space. With this process, physical nature of any kind of energy and specific mechanism of force initiation does not play any role. These are only particular cases of general nature of force initiation. General nature of forces Dr. Evgueni D. Sorokodoum We are surrounded by space, which is full of energy. Here we mean the energy of any nature: mechanical, Entrepreneur and General Director, Vortex Oscillation Technology Ltd, Volochaevskaya Street, 40-B, Flat 38, thermal, electromagnetic and others. Energy is related 109033, Moscow, Russia with material world and its value is connected with the Telephone: 7-095-362-8084 volume. Any particle (volume) of continuum has energy: In techniques and in our life we got used to certain A = A( x, y , z , t ) (1) physical notions concerning force. 
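The claim that force initiation comes from a non-uniform distribution of energy in space has a simple computational counterpart: for a potential-energy profile W(x), the associated force is F = −dW/dx. The sketch below only illustrates that relation with an assumed Gaussian energy profile; the numbers are not from the article.

```python
# Minimal illustration of "force from a non-uniform energy distribution":
# for a potential-energy profile W(x), the associated force is F = -dW/dx.
# The Gaussian profile below is an assumed example, not data from the article.

import numpy as np

x = np.linspace(0.0, 0.10, 1001)                  # position, m
W = 1.0e-3 * np.exp(-((x - 0.05) / 0.01) ** 2)    # assumed energy profile, joules

F = -np.gradient(W, x)                            # force, newtons (finite-difference gradient)

i = np.argmax(np.abs(F))
print(f"peak force magnitude {abs(F[i])*1000:.2f} mN at x = {x[i]*100:.1f} cm")
```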
We usually use these notions in creation of automobiles, airplanes, rockets where x, y, z are Eighler’s coordinates of the center of and other techniques, but we don’t think about the particle, t is time. origin of forces in general. Usually appearance of force in continuum is connected with presence of momentum Transmission of energy from one point of space to gradient. another one can take place by various methods, both in connection with energy transmission by material A number of works, which describe various versions particle itself (which is a “carrier” of energy in this case) about origin of a force appeared [1, 2, 5, 8, 17, 21, 22, 23, and without such transmission (for example, with wave 25, 30, 35, 36, 38, 39]. Different mechanisms of motion). For the volume degenerated in ideal point the appearance of force are considered in these articles. energy will be zero. That’s why it is more comfortable Usually they consider origin of a force in one of the fields, to operate with the energy density concluded in the Page 317 increasing constantly. We are the first who analytically toroidal circles (protons). Then the protons form the got the law of gravity of the masses from the known adjoined vortexes around themselves (electron shells) equation of thermal conductivity. Appeared that on the and from the proton- hydrogen gas the stars are forming, relatively small distances (in the bounds of the Sun which are moving to the periphery by the same System) the law of gravity by Newton remains valid, branches. There they dissolve in ether at the periphery but on the larger distances the sudden decrease goes since the protons will loose their energy and stability on (Gauss integral), which naturally solves the famous due to the viscosity. Ether which have got the freedom Zelinger’s paradox of gravity. will return to the nucleus of the galaxy and this process is going on in our galaxy for hundreds milliard years As a conclusion we should note that in the bounds of a and it will keep going until the new center of vortex stable galaxy of a spiral kind there is the circulation of formation will begin to concentrate ether. Then the new ether. Ether moves from the periphery of the galaxy to galaxy will appear and our galaxy will disappear. But it its center (nucleus) by two spiral branches. This will not happen soon and we have enough time to becomes apparent as a weak magnetic field (8-10 micro understand that we should return to the concept of ether Gauss). In the nucleus of the galaxy there is the impact in modern science. of two strings as well as there is formation of the spiral Experimental Demonstration not have any physical sense (Samat Kadyrov. Monograph “Theory of unified field”). of Cosmic Influence on the Earth Life in N.A. Kozyrev’s Researches Author’s note: relations’ interconnection is an interaction of structurally similar objects. It is a (“On the Influence of Time on Matter”) nuclear resonant gain-frequency process: in a stationar y electric field, which is modeled by systematic organization, there is a development of similar to structural one, in-focus rays of powerful regular coherent radiations. These coherent radiations are determined by properties of chemical components of interrelated substances. According to N.A. Kozyrev, it is ought to expect not identical density of relations’ interconnection in space. Some processes decrease density; others on the contrary increase density of relations’ interconnection. 
Action of the increased density is weakened according to the law of reversed squared distances; it is shielded by a solid Alexandra L. Belyaeva matter, at thickness about 5cm, and is reflected by a mirror, according to the familiar optics law. The action Bishkek, 720075, Russia 8th Location, 46, apt.80 of the decreased density on a detector is shielded, but Tel.: 7-996-31-41-25-79 does not reflected by a mirror. Properties of a matter E-mail: can be changed under the influence of relations’ interconnection. In this sense there is a big advantage in changes of electric current conductivity of resistor, Editor’s note: this article represents a part of the big which is brought into Witson bridge and is located near scientific conception “World models in the new scientific some process. For instance, in order to increase density progress”. On applying of this conception a great it is useful to realize the process of evaporation of a number of practical technical devices have been created volatile liquid; and for density decrease the process of (as an example of such device we offer the description of cooling of a warmed-up agent can be realized. Due to universal electrical bio-heater, which was created by the these processes, change of conductor resistance is group of researchers from Bishkek, Kyrgyz Science actually realized with opposite signs. Increase of density Technical Center “Energy” during the work on ceramic of the conductor with positive temperature coefficient electroconvector). leads to decrease of its resistance. At negative temperature coefficient there is an effect of the opposite We have to note that the position of our editorial sign, in the direction of changes, caused by temperature board concerning “time” and Kozyrev’s work is not changes. Such correspondence to fall in temperature in a good correlation with the authors’ one. should be observed at changes of other properties of a matter, because disorder in a matter structure is Nicolay Alexandrovich Kozyrev scientifically and reduced along with fall in temperature. The researches experimentally discovered the action of relations’ have shown the following results at the resistor, which interconnection, which was falsely named as time. Time was situated near processes of acetone evaporation on cannot cause action because it is absolute and does cotton wool and of solution of sugar in water. The Page 42 relative resistance change of resistor was observed at demonstrates the dependence of matter state from the the 6th or 5th digit after comma (or even at the 4th digit if changes of the general background of the relations’ resistors had especially high temperature coefficient). interconnection. The drift of the devices (that show daily changes) usually stops about at midnight and then There is now a possibility to study the Universal World changes its direction. As for the seasonal course, there not only by means of the investigated spectrum of is a density decrease of the relations’ interconnection electromagnetic oscillations, but also through physical in spring and summer; and there is an increase of it in properties of relations’ interconnection. autumn and winter. It is connected with the absorption of the relations’ interconnection by the vital functions At many researches the influence of relations’ of plants and with the return of it at their fading. There interconnection on resistor electroconductivity was are indications at the seasonal changes of chemical investigated. 
Acetone evaporation (at 10-15 cm distance processes. For instance, reaction of polymerization has from the resistor) was applied there as the process, more difficulties in its realization in springtime. V. which controls sensitivity of a system. However, the Zhvirilis observations of minimum and maximum light process of evaporation can influence on the resistor not admission by means of the crossed Nickolya prisms can only with density increase, but also due to temperature be explained by the crystalline reconstruction of these increase that occurs at evaporation. In order to take into prisms. consideration this cooling effect, (in the area of evaporating acetone) temperature was measured by By Kozyrev, as being invisible, vital source is Beckman mercurial thermometer with 0.01°C disseminated everywhere in Nature, thus possibility of multiplying factor. The first experiments (without its accumulation is the only necessary thing. Such a thermal protection) have shown the fall in temperature possibility is realized in vital organisms because all vital by several hundredth of degree. This fall was enough functions counteract to the usual course of systems’ to cause the changes of resistor electroconductivity. destruction. The ability of organisms to keep and However, the thermometer had been keeping on the accumulate this counteraction is the reason, which demonstration of practically the same fall in temperature determines the great role of biosphere for the Earth life. at thermal insulation of the resistor. The thermometer But even if we assume, that spreading of life in Space reacted on the radiation of relations’ interconnection at is one of its peculiar properties, biosphere will not have acetone evaporation. a decisive significance. The part of the thermometer with a placed in a Cosmic bodies (and first of all stars) can serve as the pasteboard tube mercury tank was laid round with reservoir, which gathers vital source. Enormous stocks cotton wool and put into a glass retort. The experimental of energy flow out of stars in a very weak degree through process was fulfilled near the retort, and the reading of the radiation of comparatively cold external layers. Inner mercury altitude in capillary was determined by the stars energy is preserved so well, that even at the lack scale of the thermometer through the closed window of supplement, matter of the Sun would become cold in the next room. The mercury altitude was decreased only at one third degree per year. For the Universe the at dissolution of sugar in water (with steady creative source carries the relations’ interconnection. temperature) and it was increased at the release of the Thus cosmic bodies are necessary for support of life. squeezed spring, which was placed near the thermometer. Author’s note: We apprehend relation’s interconnection as natural radioactive background. In fact, it is a nuclear The radiation of the relations’ interconnection was resonance gain-frequency interaction of inertial masses observed from many stars. It is caused by the inner that depends on living systems, especially on its rituals processes, which take place on these heavenly bodies. and that regulates its survival. Cosmic bodies regulate The Sun (with its turbulent processes) radiates the this process. Humanity is able to control nature only relations’ interconnection besides the searched obeying to natural laws. In-focus beams of powerful laser electromagnetic radiation. 
Actually, if sunlight is streams are formed in the electric field of living system recovered with a thin screen, the significant influence organisms. The creation of proton-antiproton pair in the on the resistor will be discovered. The influences of living cells, alongside with the process of the absolute the Sun to the Ear th through the relations’ release of energy serves as a creative vital force. The interconnection become doubtless. These influences of process of radiation, support, absorption of energy by the Sun should have a particular significance in vital the organization (assembly of particles) is realized functions of organisms, because it brings the beginning through the relation’s interconnection and regulates its for life suppor t. The totality of the researches total mass. Humanity is able to control nature only obeying to natural laws. Page 43 Life without Diseases ceramic structure with superimposed combination of atoms of lattice elements is created. Rhythmic work of and Old Aging cells, which form ceramic mixture, leads to resonance and creates a kind of blow wave (at micro level). This Preventive Electrical Heater blow wave physically destroys microorganisms that have no calcium framework. It is related only to those with Programmed Features microorganisms that are agents of infectious diseases, such as: staphylococcus, enterococcus, enterobacterium, Alexandra L. Belyaeva etc. Thus parameters of the evoked blow wave coincide with vibration frequency of the definite types of Bishkek, 720075, Russia 8th Location, 46, apt.80 bacterium and elementals. These blow waves cause the Tel.: 7-996-31-41-25-79 similar effect in room near of bio-heater, e.g. colonies of microorganisms are noticeably decreased there (even at the absence of bio-heater in the nearest room). Universal electrical bio-heater is intended for heating of Due to its self-organization, bio-heater works in the range rooms and preventive clearing of an air atmosphere from of living systems, it is approached to them. There is a disease-producing organisms at continuous exposition realization of active connection with living coaly forms (continuous work). The principle of its work of biological systems. Actually the work of bio-heater is fundamentally differs from those of the existing adjusted to them. Bio-heater properties can be analogues. Carbon crystals are in the basis of bio-heater, programmed at the process of its production. which makes it environmentally appropriate. Bio-heater is a patented product. Patent KR Bio-heater represents a range of ceramic cylinders, #464 MKI C 04 V 33/24 “Ceramic mixture, jointed with metal plates on top and underneath. These possessing heat-radiating proper ties”. plates play the role of load-carrying structure. It is used Application #20010075.1 at Patent KR #464 in production areas and living rooms for heating alongside with destruction of pathogen microorganisms. MKI C 04 V 33/24 “The way of creation of One bio-heater with 0,2 kWtt power is oriented for energy, renewable, programmed hard-phase heating of the area with volume 35-45m3 (in the future ceramic-carbon mass structure”. Application production of modernized models powered from solar at Patent KR #464 MKI C 04 V 33/24 cells is planned). “Technology of producing of electrical heaters As distinct from the usual oil heater, preventive electrical with anti-resonant air prophylactic effect”. 
bio-heater destructs agents of infectious diseases, whereas, according to the researches, oil heater Finale product (FP) purchase is not more expensive than stimulates their reproduction. those of existent models of electrical heaters. Cost value is noticeably brought down on organization of the scaled Absolute ecological cleanness is obtained by release of production. It is ought to take into consideration that the quarters from the effect of increased atmospheric from all existent types of heating, from the customer’s dampness with the temperature, appropriate to sanitary point of view, this one is the most energy-efficient. code. Any type of mold or fungus disappears in the Manufacturing of such bio-heaters can be organized on quarter and in the future these forms do not renew their the base of acting industrial production of ceramic existence (even after removal of bio-heater). fabrics. It will require some expenses. Moreover production service is rather cheap because there is no The absence of injurious radiations is attained by the need in maintenance staff. following: features of raw material, which is used during the process of electrical bio-heater production; radiation Electrical bio-heater can be applied everywhere, where is normal during bio-heater working. Pollution-free there is a need in: a) economical heating; b) decrease of temperature influence is attained by favorable infrared air moisture; c) disinfections of rooms. As for the life cycle of bio-heater it does not become Among the other properties of electrical bio-heater there obsolete morally and technically. It is produced from the are following: fire-safety; explosion proof; chemical materials, which are not liable to wear. inertness; enormous effectiveness from the point of view of electric energy demand. Structural simplicity The invention has a certificate of KR Gosstandart. From facilitates its durability; there is nothing in bio-heater to the end of 1998 the first unimproved modification of bio- be broken. heater (with power 0,6 KWtt) were put into serial production in Bishkek (with small test production runs). Technical aspects (applied Know How): In the process This time bio-heaters are readily used as medical of technologic production of ceramic cylinders, from equipment in hospitals and maternity hospitals in which bio-heater is consisted, diamond-like cellular Bishkek. Inventor: Alexandra L. Belyaeva. Page 44 Technical Report on 08.09.2000, #154. According to the normative data, temperature of inner air (tin) in the room must be equal to +20° C. In Bishkek planned specified temperature of The comparison of quantity of heat energy, external air (tex) for heating is minus 23°C. The average required for heat of rooms, and of heat quantity, temperature of heating period is tav= -0.9°C, specific which is produced by Belyaeva’s electroconvector. heat characteristic of the building is: q=0.4 Kcal/m3 h °C. Mavlyanbekov Sh.Yu. Medium quantity of heat energy, which is required for Deputy Director KSTC “Energy” heating, is determined by the formula: Qav heating = q ⋅ V ⋅ (tin − t ex )⋅1.12 ⋅ ⋅ [(t in − t av ) ÷ (tin − t av )] Kcal/h Editor’s note: this calculatious demonstrates the advantages of the device, which at 340 Wtt energy consumption produces about 700 Wtt of heat power. 
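The two headline figures in this report – the 492 Kcal/h heating requirement quoted above and the roughly 700 Wtt radiated output worked out below – both follow from textbook formulas and can be re-checked directly. The short sketch below re-runs that arithmetic with the report's own inputs (note that the correction ratio is (tin − tav)/(tin − tex), as the numerical substitution shows); the unit-conversion constant and the function names are mine, and small differences from the printed figures are rounding.

```python
# Sketch re-running the two calculations in this report: the room's required
# heat (per the cited building-heating method) and the convector's radiated
# output (grey-body / Stefan-Boltzmann). All inputs are the report's own figures;
# the helper names and the kcal/kWh constant are mine.

KCAL_PER_KWH = 860.0   # 1 kWh is roughly 860 kcal

def required_heat_kcal_h(q, volume, t_in, t_ex, t_av, correction=1.12):
    """Average heating demand: Q = q*V*(t_in - t_ex)*1.12*[(t_in - t_av)/(t_in - t_ex)]."""
    return q * volume * (t_in - t_ex) * correction * (t_in - t_av) / (t_in - t_ex)

def radiated_power_w(emissivity, temp_k, area_m2, c0=5.67):
    """Grey-body radiation: E = eps * C0 * T^4 * 1e-8 W/m^2, times the radiating area."""
    return emissivity * c0 * temp_k ** 4 * 1e-8 * area_m2

Q = required_heat_kcal_h(q=0.4, volume=52.5, t_in=20.0, t_ex=-23.0, t_av=-0.9)
P = radiated_power_w(emissivity=0.93, temp_k=343.0, area_m2=0.96)

print(f"required heat for the room : {Q:.0f} kcal/h (= {Q / KCAL_PER_KWH:.3f} kW)")
print(f"radiated convector output  : {P:.0f} W (= {P * KCAL_PER_KWH / 1000:.0f} kcal/h)")
```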
For the experimental room this gives:

Qav heating = 0.4 · 52.5 · (20 + 23) · 1.12 · [(20 + 0.9) ÷ (20 + 23)] = 492 Kcal/h

Thus, at the average annual temperature of the heating period, tav = −0.9°C, the quantity of heat energy required for this room comes to 492 Kcal/h.

The calculation of the heat output coming from the ceramic electroconvector to a room was based on the research statement on the EVNA-0.2/220 electroconvector's influence on the air microflora of industrial rooms over the period 23.10.01 – 06.11.01. The research was carried out in an arbitrary room in a four-storied large-panel building. The room was on the 3rd floor, with east-facing windows, and had 52.5 m³ of air-space, 3.5 m height and 15 m² of area. The heat calculation was made on the basis of the "Methods for calculation of the requirement in heat and electric energy of buildings", registered by the Department of Justice of the Kyrgyz Republic.

According to the research statement, the trials of the electroconvector with 200 Wtt power were carried out at the following external air temperatures: +10.2°C, +8.5°C, +10°C and +6.6°C. The calculation data and the results of their examination are brought together in Table 1; the parameters of the electroconvector with 340 Wtt power are demonstrated in the same table.

Table 1. Correspondence between heat entry and heat consumption in the experimental room, compared with the power consumption of the device (normative consumption 0.2 kWtt = 0.000172 Gcal/h; new device 0.34 kWtt = 0.000292 Gcal/h).

No. | External air temp., °C | Inner air temp., °C | Heat consumption, kWtt (Gcal/h) | % of normative | Economy vs 0.2 kWtt | Economy vs 0.34 kWtt
1 | −0.9 | +20 | 0.572 | 100 | – | –
2 | +10.2 | +20 | 0.267 | 100 | – | –
  | +10.2 | +16 | 0.158 (0.000136) | 59 | 0.2 − 0.158 = 0.042 | 0.34 − 0.158 = 0.182
3 | +8.5 | +20 | 0.314 | 100 | – | –
  | +8.5 | +17 | 0.233 (0.000200) | 74 | 0.2 − 0.233 = −0.033 | 0.34 − 0.233 = 0.107
4 | +10 | +20 | 0.273 | 100 | – | –
  | +10 | +23 | 0.355 (0.000305) | 130 | 0.2 − 0.355 = −0.155 | 0.34 − 0.355 = −0.015
5 | +6.6 | +20 | 0.366 | 100 | – | –
  | +6.6 | +19 | 0.338 (0.000291) | 92 | 0.2 − 0.338 = −0.138 | 0.34 − 0.338 = 0.002

The calculation data demonstrate a considerable economy of heat energy given the daily unevenness of the external air temperature.

The heat productivity of the new structure of electric convector with 340 Wtt power was calculated on the assumption that heating of the room is carried out by irradiation in the process of heat exchange:

E = ε · C0 · T⁴ · 10⁻⁸

where C0 = 5.67 Wtt/m²·K⁴ is the radiant emittance of a blackbody, ε = 0.93 is the emissivity factor of the surface of the earthenware duct tube, and T = 70°C = 343 K is the temperature of the surface of the earthenware duct tube. On substitution of the known values into the formula we get:

E = 0.93 · 5.67 · 343⁴ · 10⁻⁸ = 727 Wtt/m²

As the area of the irradiating surface is S = 0.96 m², the quantity of heat evolved by the convector comes to:

Ek = S · E = 0.96 · 727 = 698 Wtt (or 600 Kcal/h)

The quantity of heat required for heating the room is 492 Kcal/h (at an external air temperature of minus 0.9° and a room temperature of plus 20°).

Thus, the electric convector with 340 Wtt power is able to heat a room of 60 m³ entirely.

Editor's note: 340 Wtt input and 700 Wtt output!!!

Longitudinal Waves in Vacuum: Creation and Research

The derivative in time includes the so-called substantial derivative, which was shown in the equations for the moving coordinate system.
In particular, one of these equations was written by Maxwell himself to explain the phenomenon of electromagnetic induction Ph. Dr. Kirill P Butusov discovered by Faraday. This induction takes place in the conductor moving across the field lines of 190121, Saint Petersburg, Angliysky prospect, 5-18 electromagnetic field: Tel: (812) 113-8511 H H H E = V×B ; (I) The author presents a new elegant system, which is the symmetrized Maxwell’s equations. In practice it Other equations were obtained later by other scientists. gives a possibility to create the longitudinal waves in In the table I below Maxwell’s equations are given in a vacuum. This system is of great importance in split form. Their static and dynamic parts are given telecommunications and aerospace technigue. separately as well as the equations for moving and fixed coordinate systems. Such matrix concept of Maxwell’s There is a stable paradigm in electrodynamics that the equations allowed finding their incompleteness. Really, existence of the longitudinal waves in vacuum is the analysis of the matrix shows its high symmetry. impossible. This paradigm played its negative role However, full symmetry of the system of equations is preventing scientific minds from solving this problem. broken by the absence of the equation (X). It seems to However, Maxwell was not as categorical in his opinion be strange and calls a desire to remove this defect in on this question as his following were. such an elegant system of equations. Particularly he wrote: “Science of electromagnetism as A new equation is introduced in the Table 1 for the full well as optics is not able to confirm or deny the symmetry of the matrix: existence of longitudinal oscillations.” H 1 ∂j Maxwell’s dynamic equations are usually considered ∇⋅ñ = − 2 ⋅ ; (X) as partial derivatives in time. However, the total c ∂t Page 46 Fundamental Properties of The fourth is the principle of interaction between matter Aether and vortex-wave forms, which do not depend to the spectral part of the Universe, that is quasimatter. This Alexander M. Mishin is the principle of new interaction in nature. The value of energy interaction in each experiment diminishes in Author ’s note: In the ar ticle the principles time according to exponential law that is explained by determining major proper ties of aether are the forming of energy informational or adaptation formulated on the basis of an empirical material. barrier, which separates parallel worlds and reflects the properties of vortex tenacity of aether as superfluid Real aether [1-6], the primary and superfine essence of medium. At that, time of interaction is proportional to which is still a secret, has turned out to be absolutely the size of quasimatter and the barrier for the earthly non-standard superfluid three-dimensional material conditions is lowered at the indefinite period, on the medium, which simultaneously is at solid, liquid and assumption of only thrice-repeated observation of forces gas phases. The first master phase of aether is a (triad law). specifically solid absolute space or an energetical “bottom” of the Universe (“celestial stronghold”). At According to this principle, aether dynamic experiments that the solid phase is considered as mesomorphic in the ear th laboratory do not have classical vor tical-wave structure, which has particular repeatability that, from the one hand, gives occasion to holographic properties. 
Classical matter represents to doubts in the objectivity and scientific character of the be one of the stable and energetic space-time levels of non-traditional experiments and from another hand it the Universe. Aether vortexes exceed all conceivable is the most reliable test feature of macroscopic aether space scales, have quasi-material properties and create motions. Biosystems have special relations with this a great number of stereo-dynamic subspaces (parallel principle. The fif th is the principle of many-dimensional The first basic principle, to which aether entirely follows, autobalance of forces. All vortex and linear motions of is the principle of the least disturbance (the least action). macroscopic aether organize themselves in the way that Many well-known and unknown physics laws are the in the band of space-time spectrum of the local system subsequent of this principle. In particular, any motion (usually with the aid of fluid and gas aether) occurs to in macroscopic aether happens in such a way to be self-balanced, that is they have zero resulting minimize the interaction with the matter of our world, impulse and the moment of impulse due to the existence with zero moment of the disturbance momentum. In the of the proportionate antivortexes and antistreams of classical physics this principle has been reflected as another spectral structure at the same space volume. Le Shatelye principle, as variation principle, laws of The self-balanced vortex structures and streams are thermodynamics etc. practically closed for the outer watch from the direction of our material world, at least with respect to the The second principle is the principle of fractality, which methods of classical physics. The principle of confirms the similarity of forms and properties of autobalance of forces reflects aether properties as quantum aether vortex structures regardless of their unified synergetic system and has a significant applied space scale. This principle also determines the Universe meaning. as stereodynamically multivariate system in the form of hierarchy of vortical-wave structures of the unified Let call the principle of viability of aether dynamic aether (fractal matreshka). On the researching of the systems as the sixth principle. Only a stereodynamic macroscopic objects of the Universe it is possible to multivariate system is a viable one, that is a system, make a conclusion about microcosm structure if taking which during a definite period of time has the into account the changes of frequencies and velocities opportunity, called as life cycle, to realize interconcerted of action transmission. self-oscillating processes of vortex-wave character simultaneously at different phase states (subspaces, In the third place there is a principle of physical layers) of aether. The most important features of such a autonomy, which confirms that any solitary mass (for system are its space-time quasimaterial (vortex-wave) example a planet) creates aether system. The particular broadbandness and finite time of existence, which is principle of relativity, which reflects one of the fractal determined by the conditions of creation of the energy- properties of the Universe, can be applied to this system. informational barrier. 
Self-oscillation regime demands Such autonomous mass becomes similar to the the presence of an energy source, oscillatory circuit (a miniuniverse with its aether subspaces, which repeat pendulum) of any character, intensive process (of the basic phases of the Universe spectrum in more negative tenacity) and a channel of positive feedback narrow (which depends on the size of mass) frequency (negative entropy). band of space-time frequencies. Thus, in the local system of the Earth solid aether reproduces the In the sense, referred above, any material system is structure of gravitational field with energy “bottom” in viable and occurs to be a big system in the form of the mass center. As the result such spherical body coordinated community of multivariate subsystems. In occurs to be an energy drain and warms up from within. its turn each big system as a part of the hierarchy is a Page 166 constituent of bigger system, until everything is thermodynamics of many-dimensional aether, including embraced by the Biggest System, that is the Universe. the theory of non-traditional waves and new types of electromagnetism. At that, the supreme aim is the The seventh principle of the universal energy research of differences in aetherodynamics laws on the interchange is the physical realization of the law of unity Ear th (in a laboratory) and in outer space, the and struggle of oppositions. This principle determines unknowing of these differences has caused logical spontaneous creation of thermodynamic and insularity, false all-sufficiency of classical physics, which antigravitation potentials. Any local matter mass (a had refused as “not wanted” the aether conception and body), situated in the open space, creates an exchange fundamental Universal laws. process with the surrounding aether volume in the way that more fine-structure fluid aether is absorbed by the References body, and the less power-consuming gas aether is radiated. As the result the body as a heat engine gets 1. Mishin A.M. On the new properties of physical vacuum, energy due to the cooling of aether exteriors. At that, gravitational field and mass. DD USSR, 1988, p. 44 antigravitation forces acts between bodies and aether 2. Mishin A.M. Experimental results on the registration of exteriors, which have different temperature. aether wind // New ideas in Natural Sciences. Series: Problems of research of the Universe, issue 18. – SPb: RAS, This principle, which establishes the existence of 1995, p. 24-33 antipodes of the second law of thermodynamics and 3. Mishin A.M. The Aether Model as Result of New Empirical Newtonian attraction, is realized mainly in cosmic scales Conception. New Ideas in Natural Sciences. (On materials and explains in which way the energy is created in the of the International Conference). Part I – SPb: RAS, 1996, bowels of planets and stars and why the Universe is p.95-104 stable as regards to gravitation. Obviously, the most 4. Mishin A.M. The physical system of artificial biofield // unexpected for the modern Physics is the discovery of “New Energy Technologies” – SPb: Faraday Labs Ltd, 2001, non-traditional nuclear processes where conditional issue #1, p. 45-50 reactions of decay and fusion occur at the usage of quasimatter. 5. Mishin A.M. Antigravitation and new energy processes // “New Energy Technologies” – SPb: Faraday Labs Ltd, 2001, issue #2, p. 37-41 More deep research of new experimental results and of the stated above scientific principles lets to determine 6. 
Longitudinal thermomagnetic effect // New Energy the priority-driven strategic tendencies in Physics, to Technologies – SPb: Faraday Labs Ltd, 2002, issue #2(5), p. 38-41 open more entirely the laws of mechanics and Irving Langmuir and Atomic Nicholas Moller PO Box 201 34008 Eretria In this paper Dr. Nicholas Moller describes the history of development of Atomic Hydrogen technologies in details. Irving Langmuir. It is remarkable that this technology can be applied not only for welding processes but also as a clean free energy Electric Company. Patents and discoveries developed source. It is important to note that in this case the by Langmuir during his time with General Electric were hydrogen process does not involve a consumption of to a considerable extent instrumental in laying the hydrogen, which is not combusted in the process. Atomic foundations for what is today one of the largest hydrogen is not really a fuel but rather a medium, corporations in the world. gateway or a super-conductor of ZPE form the vacuum of space, converting ZPE radiation and ultra-high The question that gave birth to this article, is why his frequency electrical energy into infrared (heat) radiation. work and discoveries on Atomic Hydrogen were the only work that received hardly any attention at all and why This is the story of Irving Langmuir who was the first his revolutionary breakthrough was deprived of world to develop a theory on Atomic Hydrogen on the basis of attention for almost 100 years? This question becomes empirical research and experimentation. His work in this even more relevant when taking into consideration the field lasted from 1909 to 1927. During this period he high standing he enjoyed with his contemporaries was employed by the Research Laboratory of General (including being awarded the Nobel Prize in Chemistry) Page 167 man presence near experimental stands and devices. 1991, part 3 p. 368-370 diabetes, some diseases of haematogenic system, of cancer and possibly of AIDS. 6. Methodological materials on experimental pharmacological and clinical trials of immune modulating References effect of pharmacological remedies. - Ministry of Health USSR, M., 1984 1. Proceeding of the International Scientific Conference “New p. 176-187 immunomodulatory and immunotherapeutic properties of biologic response modifiers. Springer Seminar “Interaction between Kozyrev – Dirak radiation and radionuclides”, p. 85-89 to paramagnetic one with general radiation doze of about 7·1019 neutrons/cm2. Other types of radiations fact may be regarded as an indirect evidence for atoms (Devons, 1963). Therefore, their interaction with of technology and to formulate the methods for vintage wine and best quality spirit production. Materials and Methods Authors communicate the data on influence of Magnetic placed at 250 cm distance from MBW source, in tendency, Organoleptic evaluation The quality investigations were made by using of treatment into ferromagnetic substance (ibid). Page 284 was examined with Atomic Absorption Spectrometry reference (non treated) and sample 2 was treated with (AAS). Electronic spectra of samples were obtained with MBW. double beams UV Vis spectrophotometer equipped with permanent wavelength scanning. Redox potential was In both samples, the fructose and glucose levels were measured with EV-74 potentiometer. practically the same and amounted to 43.8 ± 3,32.22.5 g/l respectively. Sucrose and maltose The aroma alterations in the wine samples were were absent. 
investigated by the GLC method after preliminary concentration of the aromas by solid-phase adsorption. The concentration was carried out by barbotation of an inert gas (nitrogen) through the liquid and consecutive catching of the volatiles with a tube trap filled with Polysorb 1 sorbent (Lur'e 1972). The well-known analogue of Polysorb 1 is Porapak Q. The tube may be regarded as a short chromatographic column, and the volatiles go through it according to their retention times. The choice of sorbent was motivated by the fact that the retention times of water and ethanol on it are rather small (ibid). Thus, the concentration process can be ended at the moment when water and ethanol have passed through the column, while the other volatiles remain bonded. The aroma desorption was made with ethyl ester. The analysis of the concentrates obtained was carried out with a gas chromatograph equipped with a flame ionisation detector (FID), column 3 m x 3 mm, packed with Carbowax 20 M on Supelcoport. The temperature for the analysis was programmed from 100 to 190ºC with an increase of 1º/min; the isothermal conditions at the borders had durations of 2 and 40 min respectively. "Mild" separation conditions were also employed (initial oven temperature 80ºC with an isothermal duration of 5 min, temperature growth rate 1º/min, final temperature 150ºC and isothermal duration of 40 min).

Optical activity was tested with a Spectropol at the D line of Na (589 nm). The samples were evaluated organoleptically by a group of 12 workers from the Russian Institute of the Canning Industry. Turbidity tests were made according to the methods of Valuiko et al (1987). In some cases the qualitative tests were completed by MPL turbidimetric measurements. Before testing, the samples were filtered. Determinations of heavy alcohols and aldehydes contents were carried out in accordance with the National Standard (GOST 5363-67) as follows. Determination of the constituents of the "heavy spirits" (i-pentanol, i-butanol) was based on the reaction of the sample with salicylic aldehyde in the presence of H2SO4; a rose colour develops if the sample contains heavy alcohols.

Total sugar content was 76.0 g/l, though the level marked on the label was 80 g/l. It is thus apparent that the treatment of wine with MBW does not lead to noticeable changes of the sugar content. The results of the organic acids determinations are given in Table 1.

Table 1
Main organic acids content, g/l

Acid        Treated wine    Initial wine
Lactic      0.0265          0.00187
Oxalic      0.010           0.0088
Succinic    0.209           0.18
Malic       4.56            4.22
Tartaric    0.0805          0.0895
Citric      0.401           0.483

The standard deviation of the determination method was estimated as 7 %, which shows that the differences in organic acids content are not significant. It should be noted that a tendency towards a slight increase in the light acids (up to malic) was observed in the treated wine, in contrast to the behaviour of the heavier acids. The ethanol content of the samples was 181 and 184 g/l for the non-treated and treated samples respectively, though the label on the bottle indicated a concentration of 190 g/l; the standard deviation was 5 %. Thus, MBW treatment does not lead to significant changes in the alcohol content.

Atomic Absorption Spectrometry (AAS) data indicated that the samples were practically identical in terms of K, Na, Ca, Mg, Fe, Cu and Zn contents (data not shown). Similarly, the spectra of the treated and non-treated wines, diluted 150 times before photometering, were practically identical, thereby pointing out that the polyphenols are unchanged.
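As a quick, hedged illustration of the comparison being made here (the script is not part of the original study; the numbers are simply those quoted above), the relative changes can be set against the stated method deviations:

```python
# Illustrative arithmetic only; the values are those quoted in the text above.
def rel_diff(a, b):
    """Relative difference of a with respect to b, in percent."""
    return 100.0 * (a - b) / b

# Ethanol: treated (184 g/l) vs non-treated (181 g/l); method SD quoted as 5 %
print(f"Ethanol change: {rel_diff(184, 181):+.1f} % (method SD 5 %)")

# Total sugar: measured (76.0 g/l) vs label value (80 g/l)
print(f"Sugar, measured vs label: {rel_diff(76.0, 80.0):+.1f} %")
```

On these figures the ethanol change (about +1.7 %) stays well inside the quoted 5 % method deviation, which is the sense in which the treatment is said to leave the alcohol content unchanged.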
The density was measured with a Vis-photometer, and the quantitative determination was carried out using a standard graph made with mixture solutions of i-pentanol and i-butanol. The method for determination of aldehydes content is based on the reaction with fuchsine sulphite; the developed colour was measured with the Vis-photometer. A calibration plot constructed from standard solutions was used for quantification.

Results and Discussion

Investigations of wine quality changes after MBW treatment were performed using two samples of portwine ("Zemfira") type wine. Sample 1 was a reference (non-treated) sample and sample 2 was treated with MBW.

When wine is industrially treated with IR or microwave heating, ultrasonic, ultraviolet or γ radiation, different reactions occur; these include redox reactions, esterification, condensation, hydrolysis, Maillard reactions, etc. (Kishkovsky 1988). Most of the reactions are accompanied by a change of redox potential. An increase in redox potential points to an increase in the concentration of oxidants, i.e. oxygen, peroxides and other compounds which are electron acceptors; a decrease in redox potential is a result of oxidation processes (ibid). The redox potential was practically constant (E = 145 mV and 150 mV in samples 1 and 2 respectively). Evidently, oxidation processes such as those that occur during heat treatment were absent during the MBW treatment.

One of the important reactions to be considered is the Maillard reaction. Essentially it appears in the form of browning, a decrease in reducing sugars and amino acids, and the formation of new aromas. Our results evidence the absence of irrelevant aromas, of alterations of wine colour and of sugar content changes, thereby indicating an insignificant contribution of the Maillard reaction to the wine quality changes due to MBW treatment.

Technological treatment often leads to ester accumulation that improves wine aroma. It is well known that the most important in this context are the esters of C6-C14 fatty acids (Kishkovsky 1988). During heat treatment, storage and other physical influences, different kinds of acid esters accumulate. These have a weaker aroma than the esters of fatty acids, but their appearance proves the existence of esterification processes. A comparison of the chromatograms of samples 1 and 2 proves the occurrence of changes in the concentrations of individual substances (increase of the peak heights at retention times of 13.10 and 100.9 min; decrease of the peak at 54.85 min). The exit order of the different volatiles is given in Table 2.

Table 2
Exit order of different volatiles
(exit order from published data for Carbowax 20 M; retention times from our experiments, min)

Acetaldehyde        3.8
Ethyl acetate       4.71
Diacetyl            4.82
Methanol            4.91
Ethanol             6.01
n-Propanol          -
i-Butanol           13.58
Butyl acetate       13.59
i-Butyl acetate     -
Ethyl valerate      -
n-Butanol           17.73
Amyl acetate        20.70
i-Pentanol          23.56
i-Amyl butyrate     -
Acetone             -
n-Pentanol          27.92
i-Amyl valerate     -
Ethyl lactate       -
Ethyl caprylate     52.6
Acetic acid         -
Diethyl succinate   -
Ethyl laurate       -
Phenyl ethanol      -
Diethyl malate      -

A comparison of the retention times of the components with the peaks of standard substances of wine aroma indicates that butyl acetate and i-butanol are very close to peak 2.
Data on chromatographic separation under the mild conditions showed that the i-butanol and butyl acetate peaks exited simultaneously. Organoleptic evaluation recorded a nice smell in the treated wine, evidently due to the formation of butyl acetate. The data indicated the presence of ethyl malate, ethyl tartrate and ethyl citrate in the samples, in addition to two peaks corresponding to ethyl lactate and ethyl oxalate. The large experimental error does not allow any inference on changes of their heights after the magnetic treatment. Identification of peaks by the retention-time factor alone, especially in such a complex system, is not unquestionable; the best way is to use a chromato-mass spectrometer, which allows identification according to the individual mass spectrum.

Organoleptic evaluation can depend on the aliphatic alcohols content. Determination by GLC shows (Table 3) that their quantity in both samples is rather small with respect to the average values taken from the literature for this type of wine; thus such changes cannot be recognized by organoleptic evaluation. For both samples the pH was equal to 4.0.

Table 3
Aliphatic alcohols content, mg/l

Alcohol      Sample #1       Sample #2       Literature (Kishkovsky 1988)
Methanol     -               -               80-350
i-Propanol   -               -               0.3-3
n-Propanol   less than 20    -               5-50
i-Butanol    less than 20    less than 20    20-100
n-Butanol    less than 10    less than 10    2-10
i-Pentanol   less than 20    less than 20    100-250

The results of the optical activity measurements indicated that both samples are not optically active. Filtration, clarification and dilution could not change the optical activity. Perhaps there is a compensation of the different D- and L- forms of the compounds in the samples, so that the total activity was very close to zero, and magnetic influences could not change the equilibrium between the forms.

Organoleptic evaluation showed a more delicate taste and aroma of the treated sample with respect to the non-treated one: the MBW-treated sample was recorded as more complete, harmonic, noble and natural, whereas the untreated sample was recorded as excessively bitter and sour, in spite of the practically identical pH of the samples.

It is also interesting to test the tendency of the wine to form different kinds of turbidity after the MBW treatment. The data showed that both samples were not positive for protein turbidity. In terms of reversible colloid turbidity formation, after storage at 7.5ºC for 1 day the MBW-treated sample was homogeneous, in contrast to the formation of different phases with different refractometric numbers in the untreated sample. Both phases in the untreated sample were liquid, with densities very close to each other, but the borders of the phases were like broken lines, as when crystallization begins in a crystallization process. This alteration in the untreated sample may be due to micelle state changes or to structurization of the product.

The tendency test for polysaccharide turbidity, based on the reaction with phenol in the presence of H2SO4 and photometric determination of the derivative formed, indicated that the difference in polysaccharide concentrations between the two samples is very small, the levels being 119 and 106 mg/l for the untreated and treated samples respectively. These values are close to the range of polysaccharide stability (150-200 mg/l) and thus do not allow any conclusion on changes of the relative stability of the samples.

A tendency for polyphenols turbidity, due to precipitation of polyphenol associates upon addition of salt, did not show differences. Turbidity, as determined in the MPL apparatus, was 15 FEM as against a value of 0.2 FEM before testing in the untreated sample; these numbers were respectively 14 and 0.3 FEM for the treated sample. Thus both samples are very stable with respect to polyphenols turbidity, and the magnetic treatment does not lead to alteration of the polyphenols stability.

The data on colloid stability indicate that both samples showed rather high resistance against protein, polysaccharide and polyphenols turbidities. Besides, the treated sample showed higher stability with respect to reversible colloid turbidities.
It is interesting to investigate how heavy alcohols and aldehydes, which are often produced when a low-grade technology is used, are affected by magnetic treatment. For these studies a simple mixture consisting only of spirit and water was used: commercial vodka bottled in standard 0.5 l bottles and artificial solutions containing 40 % of food-derived rectified spirit.

The data showed that MBW treatment significantly influenced the heavy alcohols content: the reduction in heavy alcohols was more than two-fold. In addition, it reduced aldehydes by more than 3 times in vodka and by more than 30 % in rectified spirit. The data indicate that the efficiency of aldehyde removal is higher when the sample contains a higher level of aldehydes. Thus the MBW-treated vodka and rectified spirit will be better than the untreated ones. It should be stressed, however, that the untreated samples were also recorded as good by the sensory panel: the limits for aldehydes in high-quality vodka established by the National Standard (GOST 5363-67) are 6-15 mg/l, so the organoleptic evaluation of the samples does not allow one to find the difference in aldehyde levels.

Table 4
The main results of heavy alcohols and aldehydes determination (mg/l) in vodka and in a solution containing 40 % of rectified spirit

Substance    Non-treated vodka    Treated vodka    Non-treated spirit    Treated spirit
Alcohols     8.7                  2.55             3.38                  1.5
Aldehydes    1.5                  0.4              0.6                   0.4

Sediment formation and its character were also evaluated. The sediment in treated grape juice was dense and darker, of a formless, non-crystalline sort and gel-like form. The volume of the sediment occupied up to 30 % of the total volume. The sediment did not sink or float, nor did it stick to the walls of the glass. It was found that 100 ml of juice gave about 155 mg of dry sediment. Microscopic investigations showed the absence of any kind of bacteria or fungi in the sediments.

The effects of high-energy magnetic influences on sediment were also investigated. The experiments were carried out with "Portwine Erevanski, vol. 0.5 l, white, spirit content 19 vol %, sugar 10 %, prepared according to GOST (National Standard) 7208-84". Crystalline sediment appeared on the walls and especially on the bottom of the bottle after the MBW treatment. An amorphous precipitate was also present; it can be separated by decantation. The crystalline sediment, after washing with ethanol and drying to constant weight, weighed 69.2 mg and was of a bright brown colour. The tartrate content of the sediment, expressed as tartaric acid, was 59 % by mass. If it is considered as the tartar (wine stone) of potassium sodium tartrate, then the tartar content in the sediment works out to be 86 %; if it is considered as the tartar of dipotassium tartrate, the tartar content in the sediment will be 93 %.

Generalization of the data shows the positive effect of the magnetic treatment on the wine samples, leading to a harmonic taste of the treated wine and the absence of unpleasant tastes.
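A short, hedged check of the reduction factors implied by Table 4 (the mg/l values are copied from the table; the script itself is illustrative and not from the original paper):

```python
# Reduction factors implied by Table 4 (mg/l values copied from the table above).
table4 = {
    "Alcohols":  {"vodka": (8.7, 2.55), "40 % rectified spirit": (3.38, 1.5)},
    "Aldehydes": {"vodka": (1.5, 0.4),  "40 % rectified spirit": (0.6, 0.4)},
}

for substance, samples in table4.items():
    for sample, (untreated, treated) in samples.items():
        factor = untreated / treated                        # times lower after MBW treatment
        removed = 100.0 * (untreated - treated) / untreated  # percent removed
        print(f"{substance}, {sample}: {untreated} -> {treated} "
              f"({factor:.1f}x lower, {removed:.0f} % removed)")
```

This reproduces the statements above: heavy alcohols fall by more than a factor of two in both systems, while aldehydes fall by more than three times in vodka and by roughly a third in the rectified spirit.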
Most of the changes were found to be in the flavour and taste components, which were minor substances in the product; for example, ester concentrations change during the treatment. At the same time the content of the major components, such as sugars, organic acids (particularly the heavy organic acids) and especially ethanol, remains constant. This seems logical from the kinetic point of view, since simple processes like esterification are preferable with respect to multi-stage reactions and to reactions with high activation energies, which proceed only under hard conditions. It also seems logical that magnetic treatment may influence the electrical state of colloid species. Thus magnetic treatment can be considered as mild and selective in comparison with many other physical methods. Nevertheless, the changes lead to an acceptable energetic and nutritious value of the product. The data show that difficult problems, such as tartar removal, can be solved by MBW treatment.

References

1. Amaldi E. et al. In: Preprint CERN, Report 63-13. Search of Dirak Magnetic Pole, 1970.
2. Devons S. Search for Magnetic Monopole. Sci. Progr. (No. 204), 601 (1963).
3. GOST (National Standard) 5363-67. Vodka. Metody ispytaniy (Testing Methods).
4. Kishkovsky Z.N., Skurikhin I.M. Khimiya vina (Wine Chemistry). Moscow: VO "Agropromizdat", 1988, 253 p. (in Russian).
5. Lur'e A.A. Sorbenty i khromatograficheskie nositely (Handbook on Sorbents and Chromatographic Supports). Moscow: Khimiya, 1972, 320 p. (in Russian).
6. Shakhparonov I.M. In: Sharovaya molniya v laboratorii (Ball Lightning in the Laboratory). Collection of Articles. Moscow: Khimiya, 1994, 400 p.
7. Svoistva konstrukzionnykh materialov na osnove ugleroda (The Properties of Constructional Materials Based on Carbon). Handbook / Nagorny V.G., Kotosonov A.S., Ostrovsky V.S., Dymov B.K., Lutkov A.I., Anufriev Yu.P., Barabanov V.N., Belgorodsky V.D., Kuteinikov A.F., Virgelev Yu.S., Sokker G.A. Moscow: Metallurgy, 1975, p. 73-77 (in Russian).
8. Valuiko G.G., Zinchenko V.I., Mekhuzla N.A. Stabilizatsiya vina (Wine Stabilization). Moscow: "Agropromizdat", 1987, 160 p. (in Russian).

The Fundamentals of the New Principle of Motion

By The Group Studying Inertialess Natural Processes (GSINP)
123430, Moscow, Mitinskaya Str., 40-1-244
Email:
P. Sherbak

The concepts of active and passive interaction between the moving object and the space form the basis of the new principle of motion.

So as to be more understandable, let us consider what the old principle of motion is. For this we will use the concepts of a moving object and of the space in which the object is moving.

in the former state, interfering with accelerated movements of the object (in accordance with the 1st, the 2nd and the 3rd Newton's laws). It should be noticed that such a method of motion (for speeds much less than the speed of light) takes place both in animate and inanimate nature. In this case the level of the energy of motion and of the reaction of space (or of an environment) is not very high. Incidentally, the energy of the object can be of different types: electrical, chemical, biochemical, mechanical, etc. The common consequence of this type of motion is the existence of inertia. Classical physics cannot answer the question "what is inertia?". The same situation applies to the concept of mass, which is closely connected with inertia: classical physics says that mass is a measure of inertia. There is the new principle of motion of a material object:
Naturally, material objects and the the object is passive and space is active. In this case space can’t interact between each other directly, it’s more favorably for space in the energy aspect to because the space is the philosophical category. In this move the passive object and to spent some power then case we can understand physical essence of natural to keep the object in the present place in the former phenomena easily. In our view, the material objects state of immobility (in accordance with the 1st, the 2nd interact with some fundamental energy of space (FAM), and the 3rd Newton’s laws). And so we should introduce which fills all space with a different density. Thus the the 4th law of Newton’s mechanics. It says that there energy (FAM) is inalienably connected with the space. are the systems of coordinates in which the body is One of the first names of this energy is “ether” in the moving not rectilinearly with acceleration when this early scientific works. So, for the simplicity we will body is in the state of immobility. accept that the object and the space interact between each other. The basic and the main differences of the offered principle of motion from the existing methods at the Thus, all existing methods of motion which have been end of the XX century are the following: invented by mankind till the present time are based on activity of the material object that means the one 1) The absence of inertia of motion; expends some energy to produce the motion, and at 2) There are no limits for the speed of motion; the same time space is passive, it means that space does not need to spent any energy to move the object. 3) The absence of “fuel reserves” “on board” of the And so in common case space tries to keep the object moving material object. Page 288 Earth had the role of one of the charged balls. It was Nikola Tesla and possible by changing of charge on the tower to deform electric charge distribution on the whole Earth surface at Instantaneous Electric once. This deformation (electric currents) could be fixed at once in every point of the Earth surface. It is alluring Communication to use this effect for data transfer telecommunication, both on the Earth, and in space. Vladimir I. Korobeynikov After such introduction the question “How does the system of instantaneous electric communication for any distance look like and work?” is still opened. First of all, the readers need to know, that such instantaneous communication is possible in principle. The proving it theoretical calculations, are rather difficult for popular interpretation. Some part of readers can take it on trust, Nikola Tesla (1856-1943), an outstanding inventor, was and those who are most interested in can apply to works and still remains one of the most mysterious persons in of Oleinik V.P. (quantum physics) the professor from Kiev the history of electrophysics. Whereas the most scientists Polytechnic University. At the minimum there are two were moving together in direction of microparticles necessary works: Oleinik V.P. “Faster-than-light transfer investigations, as the basis of matter structure and of of a signal in electrodynamics. Instantaneous action-at- nature itself, he was going in opposite direction. He had a-distance in modern physics” (Nova Science Publishers. a keen interest in the investigation of electric charge of Inc. New York. 1999) and Oleinik V.P. “Latest the Earth as a whole. 
He was looking for the ways to development of quantum electrodynamics: self- influence on it, to control its state and methods of its organizing electron, faster-than-light signals, dynamical heterogeneity of time.” (Physical vacuum and nature. 4. 3-17. 2000). Therefore, exactly, the most of his searches, experiments, the purpose of constructions and buildings, created “PC” magazine has devoted a rather significant article according to his conceptions, cause perplexity and entitled “Computers and teleportation” to V.P. Oleinik misunderstanding of scientists even in nowadays. works, concerning instantaneous electric communication (“PC” #6, 2000). Note, that the author of the given article The most mysterious of his main experiments were made has also found the possibility of instantaneous electric in USA after 1904. After Nikola Tesla death in 1943, all communication, but by means of materialistic methods, his diaries and records over a period from 1904 year had absolutely different from Oleinik’s ones, what is most mysteriously disappeared. Probably they were stolen (it important – two different solutions point to the possibility was known, what to take). Lost records could “cast light” of this communication. “PC” #6, 2000 in the article on one of the most “strange” of his buildings in the form “Circles on fields” cited mathematical formulae of the of the enough tall tower, on the top of which a specially structure of electron electro-magnetic field as an created toroidal transformer was placed. This transformer illustration (it refers to the Earth too) that the author of could create there a huge electric potential up to the billion this article has got. The most attentive readers of that article could notice, Nikola Tesla switched on this tower-device, what caused that one vector Hz absolutely “ignores” Special Theory the fright and even panic in mind of people from nearby of Relativity, since its mathematical expression does not settlements. Of course! Because of very high electric include the velocity of light, whereas it presents in other potential there began air ionization, which spread very vectors as a product of electric and magnetic conductivity. high to the atmosphere accompanying by the effect of Magnetic line of this Hz vector goes to infinity and returns color play. Such luminous, color-playing sky caused even back from infinity. It surrounds the whole Universe. It is a horror of people, who knew nothing about the alluring to use exactly this (Hz) line for the instantaneous experiment made and its goals. They did not guess that communication for any distance. Tesla by means of the electric charge, created of the tower, was influencing on the electric charge of the Earth as a It is not so difficult to do it. In the Fig. 1 the easiest and whole (about 600000 Coulomb). There was a global scale most available for understanding line of the in Nikola Tesla’s investigations. instantaneous electric communication is shown. A rotating charged dielectric ball (an “electron”, isn’t it?) There is no point in detailed analysis of the fact that the is used as transmitter. The ball can be electrically charged potential of the tower top influenced on the Earth charge. up to the limit of charge flow-out into the ambient space. 
Interaction of charges-balls with the distortion of field Around the charged rotating ball there appears electro- lines, distortion-distribution of charge on their surfaces, magnetic field, entirely analogous to the electro-magnetic induced charge, is beautifully described even in school field of the Earth (and of the electron too). The central physics textbooks. In Nikola Tesla investigations the magnetic line Hz goes to the infinity and returns New Energy Technologis Issue #3 (6) May-June 2002 43 Charged dielectric Tower of ball, rotating with Nikola Tesla ω velocity (non-effective in this case) The long capacitor ω (”Chinese Wall”) Control signal Central line Hz S Magnetic lines (Hz) of the rotating ball Angle of rotation (of deviation) of flux line in the case if signal is in the long capacitor pyramid made of RECEIVER soft-magnetic ferrite (”Egyptian Pyramid”) Concentration of magnetic lines into the pyramid Coil at the base of the pyramid Line of the instantaneous electric communication on the basis of rotating charged ball and pyramid back from it to the opposite side of the ball. In the same If such distribution of the surface charge is broken, space way the central magnetic line of the Earth (Hz) goes from position of the line Hz also will change. On mounting one pole to the infinity through the whole Universe and the Tower of Nikola Tesla on the surface of rotating ball returns from it to the center of the opposite pole. and measuring the potential on this tower in time with an information it is possible to change the charge distribution If by the information to force the rotating ball (electron) on the ball, and, respectively, the space position of central to “wag by tail” (by Hz vector) which stretches through magnetic line (Hz) in the whole Universe at once. Big the whole Universe, then this “wagging” can be controlled disadvantage of the Tesla tower is that maximum instantly in every point of the Universe. While the rotating influence on charge is executed in the point under the ball has a steady distribution of the surface charge, the tower, and farther it began decreasing roughly line Hz does not change its dynamic position in the (exponentially), according to physics laws. 44 New Energy Technologis Issue #3 (6) May-June 2002 Hence it is advisable to influence on the whole surface instantaneous and usual radio transmission. The usual of charged ball, but not on some point of it. It is possible radio transmitter for the transmission of the information to influence at once on the very big part of surface by the uses the energy distortion of space by the information. long capacitor, placed on the perimeter (equator) of the This energy change in space happens with the velocity charged rotating ball. Because of optimality reasons, this of light and hence there is the loss of time for information capacitor length should not exceed a quarter of the ball passing. In the considered case there is no energy change perimeter (equator) length. Charging and discharging this in space, there is only a change of magnetic lines position long capacitor on the ball equator by the data signal, only (Hz). the position (angle 5) will be changed, not a value of the infinitely long magnetic line (Hz) in the Universe. It is a This is exactly the vivid and fundamental difference data transfer. between the usual electric communication and the instantaneous one. 
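As a point of reference only (this is textbook magnetostatics, not part of the article), the magnetic dipole moment of a uniformly charged spherical shell of total charge $Q$ and radius $R$ spinning at angular velocity $\omega$ is

$$ m = \tfrac{1}{3}\, Q R^{2} \omega . $$

Taking the article's own figure of $Q \approx 6\times10^{5}$ C for the Earth, with $R \approx 6.4\times10^{6}$ m and $\omega \approx 7.3\times10^{-5}$ s$^{-1}$, this would give $m \sim 6\times10^{14}$ A·m² for the "rotating charged ball" picture described above; the shell assumption and the charge figure are taken as given.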
In other words, in usual transmitter The natural question appears: “How to make on the Earth during the fixed time interval there is the change of the most powerful transmitter for the instantaneous signal energy (instantaneous value), whereas in electric communication?” The answer suggests itself: “It instantaneous transmitter there is no this change (only is necessary to use the Earth itself as a rotating charged information). This is exactly the fundamental ball.” It is not effective to use Nikola Tesla’s Tower to difference. deform the Earth electric charge. To place on the Earth very long (about thousands km) capacitor is quite easier. Evidently, to receive instantly the signal from the opposite However, it must be placed not on the equator exactly, part of our Galaxy, we need rather big pyramid, in order but moved a little bit because of the initial heterogeneity to concentrate a big amount of field lines into the of the Earth surface charge distribution, caused by the oscillatory circuit under the pyramid. The question can presence of continents and oceans. It will be necessary appear: why the pyramid, why not a cone? The point is to find the line of electric equator, where the amount of that lines of the Earth magnetic field (the very lines that charge north and south of it is similar. This line will not compass needle reacts on) in the any place of the pyramid be ideally straight and will be situated near the 30t h horizontal section have the same density of distribution parallel. and are directed strictly parallel to the pyramid base. The cone in its horizontal section cannot provide such As a matter of fact, this grand capacitor is already built, uniformity of distribution that is why it is not advisable but is half-broken. This capacitor is very well known – it to use it. From the space magnetic field lines pass through is a Great Chinese Wall. The ancient, powerful Chinese and concentrate in the pyramid strictly at right angle to Tzcin’ Shi Huandi empire adapted and used it (capacitor) the pyramid base. for the protection from nomads incursions. How unexpectedly and originally it is! In this case the electric This is the riddle of pyramids wonderwork. Any person iron would be the best tool for spiking. It is clear enough coming into a pyramid, at the same moment feels the that the charged ball (as well as the Earth) will “wag by change of mental and physical condition of organism; tail”, which stretches through the whole Universe and whish is very different from that it was before the entering does not change its energy, but only changes its position into a pyramid. Of course! Visitors come inside, into in space in time with information. Now we can go on to concentrated magnetic field lines of the powerful and the question, concerning the way to control the Earth functioning magnetic core of the receiving electric circuit, “wagging by tail” in the Universe, and thus to read what is absent outside the pyramid. information instantaneously in any point of the Universe. It is strange, but most of tourists are afraid of the ill effect, In the Fig. 1 it is shown the input device of the electric which can be produced by electrical systems on their communication receiver, made of the magnet sensitive health, but there they stand in a queue to feel this effect material (it can be soft-magnetic ferrite) in the form of in pyramids. 
Concentration and division of magnetic field pyramid, with the proportions of well-known Egypt lines are the easy and effective way to reject a noise, pyramids. Magnetic field lines of the far space pass created by the Earth magnetic field. through the pyramid from the top to the base and are concentrated by pyramid. If there is no signal (the It is clear, pyramids should be oriented very thoroughly, “wagging by tail” of the far planet-transmitter is absent), so that lines of the Earth magnetic field would be strictly then the magnetic flow, coming through the pyramid, does parallel to the base and to the opposite (East-West) sides not change, and induced voltage in the coil, placed in the of pyramid. To get such exactness of orientation in base of the pyramid, is absent (no information). If modern conditions is very problematically. “wagging by tail” begins, then the magnetic flow, coming through the pyramid, will change, and it will cause the The most convenient place to build a pyramid (pyramids) appearance of voltage on the coil in the base of the is on the electric equator, in the place of its intersection pyramid in time with the information. with the electric meridian. Such place is located in Egypt, near its capital Cairo. And again we meet a paradox: such Thus, the signal is received instantly. Here it is necessary pyramids are already built on the Earth, but they are half- to remind once again the difference between the New Energy Technologis Issue #3 (6) May-June 2002 45 broken. And Egypt was not less powerful than the ancient It must be noted that «PC» already published information Chinese empire. that the Chinese Wall and Egyptian Pyramids are radio engineering constructions, intended for the instantaneous The Egyptian dynasty of Pharaohs has “completed” and galactic communication (PC #114, 1997, etc). adapted pyramids to burial-vaults, where mummies of dead Pharaohs were buried. Perhaps, it is even more There appears an interest in the possibility to produce incredible than in China. The impression is given that very simple and manufacturable systems of instantaneous ancient powerful civilizations on the Earth had a electric communication right now. Radio-electronic competition between themselves, who will use radio- industry can produce them, but still does not guess about engineering constructions for instantaneous galactic it. communication in the most incredible way. Let’s give to a reader an opportunity to select a “winner”. Furthermore, such systems of instantaneous electric communication can be created at home, and even senior A A Central magnetic line (Hz) A A pupils are capable to use them. In the Fig. 2 there is shown must be winded along, through butt-ends of core, so that the construction of instantaneous electric communication the whole internal part of the coil would be maximal (in line in comparison with the usual one. It can be produced area extent) filled by ferrite. even at home conditions. Two permanent magnets, connected between each other by analogous poles, are The obtained coil can be completely “winded” (screened) used as transmitting circuits. by flat ferrite of big size. For more clearness of the experiment the central magnet line of transmitting part Permanent magnets can be replaced by electromagnets. must be directed strictly along the axis of the receiving In the magnet connection point there is a coil, which while coil. 
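The receiving side described above ultimately relies on a changing magnetic flux through the coil at the pyramid base. For orientation, the standard induction law (a textbook reference rather than the author's own derivation) is

$$ V = -N\,\frac{d\Phi}{dt}, $$

where $N$ is the number of turns and $\Phi$ is the magnetic flux gathered by the pyramid cross-section. A constant flux gives $V = 0$, which corresponds to the statement that no voltage (no information) appears until the "wagging" of the field line begins.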
the signal passing through it will change its position (angle 5) in the space of the central magnetic line (Hz), coming Now, if we give the alternating voltage (information) from out from the place of two magnets connection. Receiving transmitter to the transmitting coil, fixed on the permanent circuit is available to be made of the flat ferrite, but coil magnet, then the receiver, connected to the circuit of the 46 New Energy Technologis Issue #3 (6) May-June 2002 transmitting coil, placed on the flat ferrite, will detect an to the conditional unit. Only this single fact in principle alternating voltage (information). Maximum effect is changes the conception about “appearance” and achieved at the resonance (coincidence of transmitter and “disappearance” of elementary particles! Even considered receiver frequencies). instantaneous and usual communications on the same receivers and transmitters in a complex conception It is checked. It works. The dullest experts in radio- (complex physics) eliminate appeared electronic (after the reading above) can rejoin, without “misunderstandings” of all kinds. making an experiment, that it is an absolute nonsense that any communication is out of the question. Coils with Advantages of the instantaneous (Maxwellian) electric absolutely perpendicular axes, besides one of them is communication are especially evident during the screened, do not interact with each other. connection with long-distance spacecraft. At present in the interval between sending of control signal to the And here the most interesting thing starts. In the Fig.2 as station in region of Solar System peripheral planets and it was mentioned above, the usual communication line getting the reply it is possible to have a small break for and the instantaneous one were compared. The usual dinner (it is very convenient). transmitter cannot generate the vector Hz that is why systems of usual and instantaneous communication cannot In the case of the instantaneous electric communication see each other in strict sense. What does it mean? It use, duty operators will have “no dinner”. Moreover, the means, that in the same city it is possible to transmit on system of instantaneous electric communication can the one-carrier frequency (“what a nightmare!”) two realize two-way communication underwater and from absolutely different television channels without any noises underwater to overland. It is clear that input and output of one to another. circuits of such system must be covered by slushing composite for the protection from aggressive effect of Usually by frequency match of a transmitter to the the salt sea-water. Such systems of instantaneous working frequency of another one, the radio communication are very required to submarines. communication is broken, but here it does not happen. Here some additional explanation should be given. As Now, when readers know and understand the principle the vector Hz, which “ignores” the theory of Einstein is of operation of instantaneous (Maxwellian) electric received from Maxwell equations, it follows that the usual communication systems and their advantages over usual (Einsteinian) system and the instantaneous (Maxwellian) ones (Einsteinian), we can only wait, when radio- one work on mutually perpendicular electromagnetic field electronic industry will start to produce these very lines (vectors). required systems. In the Fig. 2 such difference is shown clearly. 
These are just “jokes” of complex numbers, when one value is absolutely perpendicular to another and nevertheless ELECTRIFYING TIMES together they form a single whole. In other words it means, that two greatest persons in science Einstein and Maxwell an online and published magazine about Electric, as a matter of fact are something like “Siamese twins”, Hybrid, Fuel Cell Vehicles, advanced batteries, ultra capacitors, fuel cells, microturbines, free energy completely grown together at the angle of 90 degrees, systems, events and exhibitions worldwide even by heads. On the one hand every one is on his own, but nevertheless they are the common (complex) 63600 Deschutes Mkt Rd, Bend Oregon, 97701 Hence there are a lot of misunderstandings on happened fax 541-388-2750 phenomena. How many scientists tried to find some mistakes of Einstein? They produced very convincing proofs concerning instantaneous interactions in nature. Subscription $13/3 issues These scientists did not suspect that time and still do not guess now that they already for a long time are “walking” in the complex physics, which still does not exist. Einstein and Maxwell (“Siamese twins”), each occupies his own part of the complex number (complex physics) and they Institute for Planetary Synthesis cannot be already taken off from there. P.O. Box 128, CH-1211 Geneva 20, The only third, free “vacancy” is left to throw on the both Switzerland of them at once the common “collar” and “reins”, i.e. to Tel. 41-022-733.88.76, Fax 41-022-733.66.49 fasten them (“twins”) together by module and argument E-mail: as any complex number. In this case no matter how the one part of complex number “ignore” the other one, only its argument will change, and module always will be equal New Energy Technologis Issue #3 (6) May-June 2002 47 The Unified Gravitation Theory (The unified super-principle, which controls the Universe) I. P Kuldoshin Orenburg, Neftyanikov str., h. 2, apt. 9, 460019, Russia (Editor’s comments by Alexander V. Frolov) A forum of the leading USA physicians took place in the White House in March 1998 in presence of President Clinton. There was only one question: “When will the nature of Gravitation be opened?” The well-known USA physician-astrologer S. Hoking declared that it possibly would occur in twenty years and it would be the Unified Theory of All. So, the scientific world by default called it the greatest discovery of the future. Some time later a new hypothesis pretending to this discovery has got its birth in Orenburg. Despite this fact this hypothesis would gain recognition and status of the Greatest Discovery of Mankind only by 2018 that was predicted by S. Hoking. To present day there have been written a lot of discover the nature of gravitation. The XX century was hypotheses on this problem but they haven’t been marked by a revolutionary development of scientific and recognized. Many scientists consider our Universe as technical progress, but there was an almost 100-year living and functioning according to the unified and rigid stagnation in cognition of the Universe elements. laws in Macro and Microworld, which provides automatic regulations of all its processes due to The theory of “Aether wind” supposed that all Cosmos circulation of radiant energy of the Universe life in is filled with aether particles flying with the speed of cosmic space. This energy is inexhaustible and light (these particles are “neutrino” according to environmentally clean, and Mankind may learn using it modern understanding). 
The role of gravitation, carrier in the nearest time for the welfare and for prevention of of light and retarding medium in Cosmos was contradirectional irreversible ecological catastrophe. attributed to this motion of particles. There is no alternative for humankind to escape and it But this theory allowed chaotic motion of particles, will not appear in the future. Only cosmic energy will which is impossible in mechanism of the Universe, save us. From the book “Secret Doctrine” by E.P . which is adjusted up to automatic mode. Besides, Blavatskaya we can get complete information about the motion of these particles is not possible without an fact that a highly developed civilization of Atlases on absolute buffer unit, which prevents their head-on the Earth had a “General Theory of All” yet 10-12 collision at the speed of 600000 km/sec (it is thousand years before our civilization. They had no thermonuclear explosion and death of matter, i.e. the automobiles, but instead they had flying objects . Universe). E.P Blavatskaya wrote that Cosmos is filled (aircrafts) “Vimana” of various types as well as ships with radiant energy of the Universe life, luminophore, and submarines, on which they also used Cosmic electromagnetic aether. Thereby she predicted a ready energy. solution to make correction in the uncompleted theory of “aether wind”. On the basis of above stated and due While reading an abstract in General Soviet to the un-assumed dawning up, the theory of “aether Encyclopedia, I got acquainted with the theory of wind” was completed. It was the ground to develop a “Aether wind”, which was abolished in the beginning hypothesis of radiant “aether wind”. Particles of this of the XX century, and then I understood that this theory wind (neutrino) are electromagnetic particles and move contains a deposit to discover the nature of gravitation. with the speed of light in all directions as contradirectional paired single-stream flows (like The nature of gravitation is the only one and there electrical current in twin-wire cable). Due to this, an are no alternatives in theoretical as well as in physical absolutely stable concentration of these beams in sense. When scientific world of entire planet abolished cosmic space is provided according to the principle the theory of “Aether wind”, it lost the possibility to “what has come in, the same has gone out”. Page 142 The hypothesis formulates new views on the problem 11. This hypothesis gives scientific and technical of structure of elements of the Universe material world. recommendation for creation of cosmic energy Some separate conclusions do not match the views of modern scientific thought on the problems of physical 12. It gives scientific and technical recommendation for principles of material world structure and functioning producing of levitation effects for any technical systems. of the Solar system. 13. It disclosures the possibility of cosmic flights with List of topics of the hypothesis the super-light speed. 14. It explains experiments on metering of horizontal 1. The hypothesis disclosures the operating gravitation (The first experiment was made on February environment of a super-mechanism, which controls the 27, 1999). Universe (it is a radiant “aether wind”). 15. It disclosures the particle (neutrino) of original 2. It disclosures the nature of retarding mechanism of matter of the Universe and gives its characteristic. 
flying objects in Cosmos (its name is Lorenz-Fitzgerald (Ancient thinkers called modern “neutrino” as “Aether”, compression). and it was not occasionally, because its diameter is in 1025 times smaller than atom’s diameter. 3. It proves the absence of Universal gravity and beams of light as we usually conceive it. (The beams of aether All matter of the Universe consists of the same wind collide and compress matter. An alternative to the indivisible particles “neutrino” presented by three notion about beams of light is a temperature wave groups: impulse on the beam of aether wind. It explains why - “energy” group, which is in the beams of “aether the speed of light doesn’t depend on the speed of the wind”; source of light. Light is a “passenger” on the beam of - building group, which forms the part of any micro “aether wind”). particle; - free group (neutral-reserve) as a building material 4. This hypothesis disclosures the mechanism of for new matter and operating environment of all stablization of rotary and orbital movement of the electromagnetic processes. Universe matter in macro- and micro world due to retarding medium in Cosmos. All neutrino of three groups rotate with the speed of 3×1043 rps (equatorial speed of neutrino is equal to the 5. It disclosures the mechanism of reverse rotation of speed of light). Venus due to the forces of autorotation. Fields are formed in every particle as a result of rotation: 6. It disclosures the mechanism of reverse orbital - strong field of a small volume doesn’t allow particles movement of planets and satellites of planets. (Such a to close up; planet had not been opened yet, but there are 6 satellites - weak field of a big volume is a general mechanism of in the Solar system, which move counter to the others, gravitation. and it is not an occasion, but a particular case of the effect of aether wind beams). As scientists write at the present time, the World is subdivided on a dense world (which we can see) and 7. It disclosures a real nature of Tungusska catastrophe. fine world (invisible). At that the density of such world (There were about 100 hypotheses, but neither of them is in 1015 times less than density of water. was recognized to be true). It is known in science that all matter of the Universe 8. It disclosures the nature of gravitation and gives both great and small rotates and is a gyroscope. an explanation that gravitation can be: Particles of matter get rotation with their birth, thus - usual (vertical); the fields are born in them simultaneously. Matter - horizontal; cannot exist without rotation, which generates fields. - circular It is important to note: not the entire matter takes All mechanism of interaction between three groups of part in gravitation, but 1/3, i.e. 33,3% of matter. particles is based on the mutual repulsion. This is the only mechanism, which always and automatically is 9. It disclosures the nature of Levitation and proves able to create the necessary stable interval between that 1 liter of water on the surface of the Earth can have the particles and only this mechanism provides the the weight from 0 up to 3 kg. function of gravitation. 10. It disclosures the role of gyroscope effect in life Many scientists of the late XX came close to the support of the Universe. 
The gyroscope effect allows discovery of the nature of gravitation, but they didn’t transformation of translation energy of radiant accept a thought to conceive the motion of aether “aether wind” to the rotational energy for practical particles as a pair-counter flow. And there are three needs of humankind. necessary conditions to realize gravitation: Page 143 1. The particles should have the fields of repulsion. Circular gravitation 2. The contradirectional flows should envelop the particle of matter from two sides. Only fast-rotating bodies can create circular 3. While one beam is passing a matter mass then the gravitation. force of fields should decrease and gravitation effect should appear. All bodies rotate by their orbits around the Sun in the open space of Solar system due to circular gravitation Mechanism of gravitation guided by rotating the Sun. Furthermore; circular gravitation always is direct (co-directional to the Sun Gravitation appears due to the intersection (Editor’s rotation) and reversed gravitation on the periphery of note: interference) of fields, produced by beams particles Solar system. A planet with reverse orbital movement and fields of the visual matter. As it was mentioned had not been discovered until now, but 6 satellites of above, the beams are paired and contradirectional. planets in the Solar system have reversed orbital Usually the beams in cosmos are mutually balanced and movement. they do not call gravitation effects. Here is the proof of the fact that circular gravitation But on the surface of the Earth the contradirectional appears only around the fast-rotating bodies and slow- beams are not similar in their power. The powerful rotating bodies, for example, the planets Venus and beams come from above, i.e. they only penetrate the Mercury cannot form circular gravitation, that’s why atmosphere, and the weakened beams come from they have no satellites. below, i.e. they penetrated all the Earth. Thus, gravitation appears. Our Sun is a prototype of mechanism to transform translation energy of aether wind beams into rotary Gravitation is a unique property of “aether wind” energy. beams to loose part of their power during penetrating of matter mass. Gravitation is the difference of forces (Editor’s note: According to Kozyrev, any star is a of contradirectional beams. (Editor’s note: Really other transformer of time (chronal type of energy) into authors reported this idea also. I cannot find who was heat energy. Really, the aether wind can be the first in discussion about gradient of aether as nature considered as the chronal type of energy in our of gravitation.) understanding and for our usual three-dimensional Horizontal gravitation measurement equipment. To my mind it is a clear As a particular case, there is horizontal gravitation on link to notion of 4-dimensinal objects, i.e. the time. the surface of the Earth. It appears on the boundary Time can be described by parameters of the between lowland (of the sea) and plateau. In this case aether wind, i.e. its velocity, direction and one beam goes above the surface of the Earth (water), density. So, we can say that quantitatively time and the counter beam penetrates mountain range and can be described by formulations for kinetic energy of the aether movement. From the other hand it is equivalent of heat energy, which can be measured by usual methods after transformation of the longitudinal waves of the aether in transverse electromagnetic waves). 
Therefore, any mechanical disk rotating very fast will create a circular gravitational field, which is able to rotate all bodies in the direction of the disk (for example, a rim mounted on its bearing co-axially with rotating gyroscope). I designed and tested a similar device in January 2000. A gyroscope (of 200 mm diameter and 3 weakens. The first measurements of horizontal mm thick) was over-speeded up to 18 thousand rpm. gravitation effect were made on February 27, 1999 on Rotation of gyroscope called slow (but with a good the route Orenburg – Samara at 49 km before Syrtinskiy momentum) rotation of the rim of 15 kg weight. The gyroscopes with the mass of 0,5 kg, 15 kg and 90 A leaden load (0,5 kg) on the float (a piece of foam kg were tested during summer of 2001. All them called plastic) moved on the water surface (not in the sea but rotation of the rims. in basin) towards the mountain. (Editor’s note: There are other experimental facts. Horizontal gravitation is much more weaker than usual Fast rotation of mass should produce rotation of gravitation, but it can reach the value that makes water some part of nearby aether. Self-closed aether forms to flow at some angle upwards. vortex and if photon is trapped by this vortex, then Page 144 experimenters can see “ring of light” near rotating On the basis of all above-mentioned it becomes mass. The rings or self-closed photos can exist in extremely clear that the main secret of Nature was the same place after the mass was stopped or discovered, and let’s representatives of conservative removed away.) science don’t pull the wool over people’s eyes to prove that “it is impossible”. It is possible! Physics is an Nowadays gyroscopes in military devices are over- experimental science in its main part, and there is no speeded up to hundreds of thousands of rpm. The more completed theory until now. rates the gyroscope has, the more energy the rim will produce if it is connected to some generator. But these As a result, I’d like to make some conclusions: The secret research works led to single-valued conclusions that of Gravitation nature was discovered not in connection gyroscopes themselves cannot produce big quantity of with new scientific investigation, but due to dawning additional cosmic energy not jointly with permanent up and understanding of the fact that gravitation since electromagnets. The Sun as well as planets has natural earliest times was produced by “Aether winds”, which electromagnetism and their circular gravitation fill all cosmic space. Instead of improvement of “Aether increases in many times due to the presence of wind” theory, academician science abolished it and electromagnetic fields. forgot it such as some scientists of nowadays don’t have an idea of it. While abolishing of “Aether wind” theory, scientific world spent 100 years in vain to find an (Editor’s note: I think it is obviously that a alternative to it. A real Cosmic scientific and technical rotating magnet can involve into the rotation progress was slowed down during this term. Without much more quantity of aether than any simple this progress all humankind will kill environment of the rotating mass. In some theories any magnetic Earth in 30-40 years! field is considered as circulation of aether particles.) Rush hours for humankind to turn to cosmic energy came, we have not even an hour to wait, and otherwise Electromagnetic fields are the unique boosters of we will loose a chance to survive. Today the scientific circular gravitation. 
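For scale, the sketch below estimates the rotational kinetic energy stored in a gyroscope of the size mentioned above (200 mm diameter, 3 mm thick, spun to 18,000 rpm), treating it as a uniform steel disc. The steel density is an assumed value, since the text does not give the disc's mass; the result is only an order-of-magnitude illustration.

```python
import math

# Rotational kinetic energy of the 200 mm x 3 mm gyroscope mentioned above,
# spun at 18,000 rpm.  The disc is treated as uniform steel; the density is
# an assumed value because the text does not state the disc's mass.

radius = 0.100          # m
thickness = 0.003       # m
density = 7850.0        # kg/m^3, typical steel (assumption)
rpm = 18000.0

mass = density * math.pi * radius**2 * thickness     # about 0.74 kg
inertia = 0.5 * mass * radius**2                     # moment of inertia of a uniform disc
omega = rpm * 2.0 * math.pi / 60.0                   # angular velocity, rad/s
energy = 0.5 * inertia * omega**2                    # stored rotational energy, J

print(f"disc mass          = {mass:.2f} kg")
print(f"angular velocity   = {omega:.0f} rad/s")
print(f"rotational energy  = {energy / 1000:.1f} kJ")
```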
So, the gyroscopes themselves and technical level is such that taking into consideration cannot produce necessary quantity of cosmic energy the buildup made by inventors – enthusiasts, who per unit mass of gyroscope without using of created more than 50 types of Cosmic energy electromagnetism. transformers, it is possible to begin repetition work in one year. Now there is the only barrier to do it, i.e. market In October 2001, I got a copy of 24 patents description. relations in energetics developed during last 100 years. There were patents on “perpetual motion machines”. But since such “perpetual motion machines” cannot Let’s look into near Future. The process of energy exist in reality, then we can explain them as resources (coal, oil, gas) formation in bowels of the Earth gyroscopical transformers of cosmic energy. Efficiency took hundreds million years. There was period of clean of these transformers varies from 150% up to 106 % and ecology in the World Ocean, on land and in the practically all of them work using gyroscope. But atmosphere. And all it catastrophically had been nowadays only the transformer (Bauman’s machine) diminishing during 2-nd half of the XX century. There works in Switzerland, in Maethernitha theological are about 40 years for our civilization to reach the community, Linden city. Some systems have been boundary of having no chance to support normal life working from 1980 and producing total power of 750 on the Earth. An irreversible process of struggle for kWtt, the gyroscopes of 2m diameters are provided with survival using underground environment and protection constant magnets. from mortal ecology will begin. Our close posterity will not forgive us this betrayal. Besides, there are ready transformers of cosmic energy in Russia. The Professor of Moscow State University, Is there any solution? Yes, there is. Academician of Russian Academy of Natural Science Leonid Leskov spoke about them in the first half of 2001. It is necessary to publish the descriptions of all He actually said that Mr. Chubais does not allow “perpetual motion machines” models as well as innovation of energy transformers, which are ready for unprofitable publication of short technical commercialization (see newspaper “Raduga”, Samara, July 2001). documentation in the Internet and magazines, which will give a chance to many companies, I assume that any kinds of such transformers work on research groups and individuals to re-produce the energy produced by “Aether wind” beams. Perhaps them. But at first we should choose the models, our earth ancestry (Atlases) used this energy to fly as which are the most reasonable in technology and well as extraterrestrials. I remember information about prime cost. Such a way of replication of the models flying platforms, which were designed in Germany in will give people confidence, interest and reliable 1943-1945. Nowadays there are publications that there information on existence of inexhaustible salutary are not less than 10 captured extraterrestrial’s cosmic energy. And the victory will be the reward spacecrafts on the Earth, and some samples were tested for courageous, enterprising and advanced people. in Russia and the USA. Page 145 dBrot we should find electrical field acting in physical vacuum. which generates electrical voltage Eb = l in the This field will give us the force of gravitational impulse. structure of vacuum. This voltage generates The experiment by V. Roshchin and S. 
Godin is simpler for physical modeling (Editor's note: the author assumes it is simpler than Podkletnov's effect). All input and output parameters are known to the authors, i.e. the force of the magnets, the frequency of the variable magnetic field in the local region of the space vacuum, and the change of gravity. Furthermore, the cylindrical formations of magnetic "loops" around the device are known, as well as their approximate arrangement at intervals that are multiples of half the rotor radius. The temperature decrease of 8 °C in the cylindrical atmospheric formations can be explained simply by the adiabatic decrease of air pressure due to the decrease of gravitation between the molecules of the air. The formulas for estimating the decrease of the gravitational and inertial forces are the same as for Podkletnov's experiment; the calculation is made for the frequencies of 30, 3, 0.3 and 0.03 Hz and an acceleration of 12 m/sec², which appears for a pendulum mass of 30 g with a force, horizontal to gravitation, of 0.03·12 = 0.36 N.

It can be supposed that a more careful solution of the problem is needed to find the effect on the pendulum from its reaction, which is known from experiment. We should apply, more correctly, the spectral method of solution of the differential equation for the pendulum with the impulse effect imposed. Further, having the recording of the temporal function of the magnetic field by Hall-effect devices and using the Maxwell formulas, we should find the electrical field acting in the physical vacuum, which generates the electrical voltage E_b = l·dB_rot/dt in the structure of the vacuum. This voltage generates the Gravitational Impulse itself, G = 4πEσS·(Δr_g)², where Δr_g = (E_b/b)·e_0.

The supposed "Gravitational Impulse" in the experiment by Podkletnov is modeled by a quarter of a cosine curve. The duration of this curve is determined by the decrease of the magnetic field "trapped" in the superconductor due to the partial heating of the semiconductor emitter after the plasma passed the discharge of 2 MV with a current strength of 10000 A. The formulas of the model are the following:

X″ = A·e^(−2π f_0 D_0 t)·cos(2π f_0 √(1 − D_0²)·t)   (1)

E_b = l·dB_rot/dt   (2)

G = 4πEσS·(Δr_g)², where Δr_g = (E_b/b)·e_0   (3)

References
1. Podkletnov E., Giovanni M. Impulse Gravity Generator Based on Charged Superconductor with Composite Crystal Structure.
2. Roshchin V., Godin S. Experimental research on physical effects in dynamic magnetic system // Letters to the Journal of Theoretical Physics, 2000, vol. 26, issue 24.
3. Rykov A.V. Principles of full-scale physics. // Institute of Earth Physics RAS, M.: 2001, 58 p.

New Sources of Energy from the Point of View of Unitary Quantum Theory

L.G. Sapogin, Department of Physics, Technical University (MADI), Leningradsky pr. 64, A-319, 125829, Moscow, Russia
Yu.A. Ryabov, Department of Mathematics, Technical University (MADI), Leningradsky pr. 64, A-319, 125829, Moscow, Russia
V.V. Graboshnikov, Representative in Moscow of Sceptre Electronics Ltd.

The theory has been developed for over 25 years and is directly related to the problem of new energy sources; this paper can be of interest for the Journal of New Energy, for it is the UQT (and not classical Newton mechanics or the modern standard quantum mechanics) that provides a theoretical basis for the development of new sources of energy and for the explanation of the operating principles of the existing and functioning over-unity devices. The fundamental provisions of the UQT and a number of results received on its basis were published in many scientific journals and reported at international conferences (see [1-6], etc.). Generally, the UQT, as expressed in the language of formulae and equations, represents a new mathematical model of the interaction and
movement of elementary particles in the form of a Millennium House Business Center complicated system of non-linear integral-differential 12, Troubnaya Street, Moscow 103045, Russian Federation. equations, an important property of this model principally defines the trajectories and velocities of the particle movement in space (unlike the standard quantum theory, which directly defines only the probabilities of the presence of the particles at a certain The Unitary Quantum Theory (UQT) is a new version of point in space). Another, and the most essential (for the the field quantum theory, which has been developed problem of new energy sources) property of the UQT is by the principal author (Prof. L.Sapogin) of this paper the absence of the energy conservation laws and the Page 253 impulse for single particles in it. That is why the UQT of the classic mechanics has strengthened still more makes theoretically possible processes of energy the sacred belief of the mankind in the Divine Infallibility generation as if from nothing, if they are regarded from of the Conservation Laws, and today it is nearly the classical mechanics point of view or the standard indecent to express any doubts about these laws. quantum theory (while the UQT is able to explain the phenomenon), as well as creation of a device with Let us first of all find out the origin of the conservation efficiency above 1. In other words, the UQT provides laws in ordinary mechanics. Practically any textbook for a theoretic possibility of making a perpetual mobile! will tell you that the Energy Conservation Law (ECL) follows from the homogeneity of time, the Impulse In the 1970’s, when the UQT started to be developed, Conservation Law from the homogeneity of space, and there was nearly no data of the observed phenomena, the Angular Momentum Conservation Law from the or any experimental results confirming this unusual isotropy of space. That is why many people have an theory. Today, such data are abundant. For example, impression that the conservation laws themselves such processes can be named as generation of excessive follow only from the quality of time and space, which is heat energy during cavitation of very small water today an undoubtedly relativistic notion. But, for bubbles; generation of excessive electric energy in an example, the angular momentum is not a relativistic anomalous gas discharge; excess generation of electric notion. So, such a narrow approach is not altogether energy when electric current passes through proton- correct, and it is necessary to turn to the second Newton conducting ceramics, etc. Besides, and still more law, or the equation of relativistic dynamics and the important, operating devices that have been created system insularity. However, the qualities of the time and much more energy than it was necessary for these space ensue exactly from the analysis of the Newton devices functioning: electric current generators mechanics, though they are often construed incorrectly. “Testatica” (Switzerland); thermal cell CETI Let us remind you the correct interpretation. (J.Patterson, USA); heat generators (Yu.S. Potapov, Moldavia, J.Griggs, USA); electric current generators Homogeneity of time suggests that if at any two (P.Correa, A.Correa, Canada); electric engines on moments of time two similar experiments are made in magnetic ceramics (Japan), and others. The said similar closed loop systems, the results thereof will not phenomena and operation principles of the above- differ. 
mentioned devices can be explained with the help of the UQT.

In this paper we will also touch upon such an important problem as cold nuclear fusion. The feasibility of this nuclear process, which is categorically denied by the standard quantum theory and by nuclear physics specialists, was predicted by the author of the UQT as far back as 1983. The phenomenon was discovered in 1989 (electrochemical experiments, M. Fleischmann, S. Pons). Many subsequently received experimental data confirmed the existence of nuclear reactions under very small energies and of nuclear transmutations in plants and biological objects, very slightly connected with generation of energy [7-8]. From the point of view of the UQT, which provides an explanation of the cold nuclear fusion mechanism, this process can be applied in practice (after the relevant devices are designed) for generation of energy, for production of isotopes, and for nuclear waste liquidation.

Inventors, as well as swindlers of all kinds, had long ago been trying to construct, or at least design, a perpetual mobile, i.e. an imaginary machine that produced work without any outside energy. Peter the Great even founded the Imperial Russian Academy of Sciences for such research, but the modern Russian Academy of Sciences does not like to recollect this circumstance. On the other hand, the French Immortals in 1755 decided not to consider any perpetual mobile projects at all, and, as we shall see, were quite right as regards the Newton mechanics. The brilliant success of classical mechanics has strengthened still more the sacred belief of mankind in the Divine Infallibility of the Conservation Laws, and today it is nearly indecent to express any doubts about these laws.

Homogeneity and isotropy of space mean that if a closed loop system is moved from one part of space to another, or is oriented differently, nothing will change.

Deriving the fundamental energy and impulse conservation laws from the Newton equation is very simple. Let us put down the main equation of dynamics as

F = dP/dt.

For a closed loop system F = 0 (no external forces operating), and the integral of the equation will be P = Const, the impulse conservation law.

Now let us take the main equation of dynamics as

F = ma = m·dv/dt

and multiply it scalarwise by v:

F·v = m·v·dv/dt = Σ_{i=1..3} m·v_i·dv_i/dt = Σ_{i=1..3} m·d/dt(v_i²/2) = d/dt(mv²/2),

where v is the module of the velocity vector v. For a closed loop system F = 0, and the integral of the equation will be

mv²/2 = Const,

one of the forms of the energy conservation law.

From the definition of the angular momentum of a particle,

L = [r × P].

Differentiating both parts by t, we get

dL/dt = [dr/dt × P] + [r × dP/dt].

Since the impulse vector is parallel to the velocity vector, the first bracket equals zero. On the basis of the resultant equation and the definition of a central force as one not creating any momentum, we get [r × F] = 0, or L = Const. In the case of a central force in an unclosed system, the angular momentum is preserved in value and direction.

The angular momentum conservation law for a closed loop system results in the same way as the impulse conservation law, from the equation of rotary motion dynamics:

M = dL/dt.

For a closed loop system the momentum of external forces is M = 0, and the integral of the equation will be the angular momentum conservation law, L = Const.

In relativistic dynamics, the energy and impulse conservation laws can easily be obtained separately from the relativistic relation between energy and impulse:

E² = P²c² + m²c⁴.

The term m²c⁴ is an invariant, i.e. the same in all reference systems; in other words, it is a certain constant. This equation can be represented in a slightly different form:

E² − P²c² = Const.

For the equation to be valid, it is required that E = Const and P = Const, and this is nothing other than the conservation laws for energy and impulse. Strictly, relativistic mechanics has a conservation law for the 4-impulse vector P^μ, but we will not dwell on these details, because small energies are what we are interested in.

In the classical theory, the energy conservation law states that the energy of a closed loop system remains unchanged, so, if the energy of such a system is designated at the moment t = 0 as E_0, and at the moment t as E_t, then E_0 = E_t.

Conservation laws in ordinary quantum mechanics

The standard quantum theory formulates the energy conservation law in the same way. In quantum mechanics we have the same movement integrals as in classic mechanics. A certain value L will be a movement integral if

dL̂/dt = ∂L̂/∂t + {Ĥ, L̂} = 0.   (1)

Since {Ĥ, L̂} is defined by the commutator of the operator L̂ and the Hamilton operator, any value L not depending explicitly on time will be a movement integral if its operator commutes with the Hamilton operator. When the value L does not explicitly depend on time, the first item in (1) turns to zero. There remains

dL̂/dt = {Ĥ, L̂} = 0,   (2)

and for the movement integrals not explicitly depending on time the quantum Poisson bracket equals zero. From (1) and (2) it follows that the average value of the movement integrals does not depend on time: d⟨L⟩/dt = 0.

All good papers on the quantum theory prove that the probability w(L_n, t) to find at any moment t any value of the movement integral, i.e. L_n, does not depend on time. Further, an expansion is constructed over the movement integrals that do not explicitly depend on time. Since the operators L̂ and Ĥ commute, they have common proper functions, which are the functions of the stationary states. Let us note that the latter follows from solutions of the equation without time, which was received from the full equation with the imposition of the requirement

Ψ(r, t) = Ψ_0(r)·exp(−iEt/ħ),

equivalent to a search for only periodic solutions. Further, quite naturally, there appeared an equation without time with actually imposed conservation laws, because now nothing depends on time. Expansion by such proper functions looks as follows:

L̂Ψ_n = L_n Ψ_n,  ĤΨ_n = E_n Ψ_n,

Ψ(x, t) = Σ_n c_n Ψ_n(x)·exp(−iE_n t/ħ) = Σ_n c_n(t)·Ψ_n(x),   (3)

c_n(t) = c_n·exp(−iE_n t/ħ) = c_n(0)·exp(−iE_n t/ħ).

Since (3) is an expansion into the proper functions of the operator L̂, the probability does not depend on time:

w(L_n, t) = |c_n(t)|² = |c_n(0)|² = Const.

Since energy is a movement integral, and the probability w(E, t) to find at the moment t an energy value equaling E does not depend on time, then

dw(E, t)/dt = 0.

Essentially, it can easily be proved that classic mechanics follows from quantum mechanics after summing up over a large number of particles, because for a sufficiently big mass the length of the de Broglie wave becomes much less than the body dimensions, and no quantum-wave qualities can be talked about.

Conservation laws in Unitary Quantum Theory

In the UQT [1-14] any quantum particle is not a point, but a source of field, as in ordinary quantum mechanics; however, it represents a bunched field (wave packet) of a certain unified field. The dispersion equation of such a nonlinear field turned out to be such that the wave packet (particle) during its movement periodically appears and disappears, and the envelope of this process coincides with the de Broglie wave.
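The textbook statement derived above — that for an expansion over stationary states each |c_n(t)|² is constant even though the wave function itself oscillates in time — is easy to check numerically. The sketch below is only an illustration of that point; the two energies and initial coefficients are arbitrary example values (ħ = 1).

```python
import numpy as np

# Two stationary states with arbitrary example energies (hbar = 1).
E = np.array([1.0, 2.5])
c0 = np.array([0.6, 0.8], dtype=complex)    # initial expansion coefficients

for t in [0.0, 1.0, 5.0, 20.0]:
    c_t = c0 * np.exp(-1j * E * t)          # c_n(t) = c_n(0) * exp(-i E_n t / hbar)
    w = np.abs(c_t) ** 2                    # w(E_n, t) = |c_n(t)|^2
    # The relative phase of the superposition changes with t, but the
    # probabilities of finding E_1 or E_2 do not depend on time.
    print(f"t = {t:5.1f}   w = {w}   sum = {w.sum():.3f}")
```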
Numerous particles during their periodic disappearance Let us note once again that it is the probability to find a (spreading in the Universe) are repeated appearance certain value that does not depend on time, but not the from vacuum fluctuations. A theory of quantum value itself, which for any separate event is accidental measurements has been built, and the probability and can assume a wide range of values. interpretation follows from the mathematical formalism of the quantum theory [10,11], and it is not postulated The quantum energy conservation law in the above form as in conventional quantum mechanics. Unfortunately, suggests a possibility of defining energy at a given the main UQT equation turned out very complicated, moment without subjecting it to uncontrolled change, for it is a system of 32 nonlinear integral-differential which raised no doubts in classic mechanics. But in the equations, which could require for their solution some quantum theory the energy, without changing its value, new mathematical methods. But from this the can only be measured to relativistically invariant Hamilton-Jacoby equation, and the Dirac equation system strictly follow. ∆E ≥ τ Papers [13,14] give a solution of the simplified scalar where r - is measurement duration. Formally, it does integral-differential UQT equation, which gave a not present any difficulties for the energy conservation localized solution for the form of a wave packet law, since energy is a movement integral, and we have representing a particle. It turned out that the integral much time to make long measurement. For example, let from a bilinear combination of such a solution for the us make measurements during time r, then leave the whole volume gives with the precision of 0.3% the value system to itself for time T, and then define energy again. of a non-dimensional elementary electric charge [13,14], The classic quantum energy conservation law states which was essentially its first theoretical calculation. that the result of the second measurement will coincide Then, this solution in the form of a periodically appearing and disappearing wave packet (which h square describes the density of a spatial charge) can with the result of the first measurement to ÄE ≈ . ô be replaced by an oscillating charged particle [15-18], But even in the ordinary quantum theory all this is not the movement whereof will be described by the consistent enough. For the real vacuum fluctuations can conventional Newton equations: interfere, that always influences the results of a single process, but their influence disappears after the d 2r  mt  dr  2 mr dr  passage to an ensemble of events. Here we have a m = −2QGRADU (r )cos 2    − + ϕ0  (4) dt 2  2h  dt  h dt  violation of the conservation law due to vacuum   fluctuations, though existence of movement integrals, unlike in the Unitary Quantum Theory (UQT). d 2r  mr dr  m = −2QGRADU (r )cos 2  − + ϕ0  (5) The generally accepted quantum theory carefully avoids dt 2  h dt  the question of conservation laws for individual events in the case of small energies. This question is either not where m, Q, r - mass, charge, and radius-vector of the discussed at all, or it is said that the quantum theory particle, U(r) – external potential, ϕ 0 - initial phase. does not describe individual events. Yes, it does describe individual events, but it can only predict a probability of this or that result. 
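To see what the autonomous equation (5) does in practice, the sketch below integrates its one-dimensional form — as printed above, m·x″ = −2Q·dU/dx·cos²(−(m/ħ)·x·x′ + φ₀) — for a harmonic potential U(x) = ½kx² and compares the particle's kinetic-plus-potential energy before and after a fixed time for several initial phases φ₀. All parameters are arbitrary example values (m = Q = k = ħ = 1); this is an illustration of the equation's structure, not a reproduction of the authors' published calculations. Depending on φ₀ the final energy can come out larger than, smaller than, or roughly equal to the initial one.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-dimensional form of the autonomous oscillating-charge equation (5),
#   m x'' = -2 Q dU/dx * cos^2( -(m/hbar) x x' + phi0 ),
# integrated for a harmonic potential U(x) = 0.5 k x^2.
# m = Q = k = hbar = 1 are arbitrary example values.

m = Q = k = hbar = 1.0

def rhs(t, y, phi0):
    x, v = y
    dUdx = k * x
    force = -2.0 * Q * dUdx * np.cos(-(m / hbar) * x * v + phi0) ** 2
    return [v, force / m]

def energy(x, v):
    return 0.5 * m * v**2 + 0.5 * k * x**2

for phi0 in np.linspace(0.0, np.pi, 7):
    sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 0.0], args=(phi0,),
                    max_step=0.01, rtol=1e-8, atol=1e-10)
    e0 = energy(1.0, 0.0)
    e1 = energy(sol.y[0, -1], sol.y[1, -1])
    print(f"phi0 = {phi0:5.2f}   E_initial = {e0:.3f}   E_final = {e1:.3f}")
```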
It is clear that in this case there Since E = −GRADU , and a magnetic field also exists, are no conservation laws for individual events (it is the Lorenz force should also be calculated for wrong to speak about it in case of an accidental result Q F = [v × H ] , but in the electromagnetic wave E and of an individual event), and they appear only after the c averaging by large ensembles of events. Essentially, it v can easily be proved that classic mechanics follows from H are equal, and for small energies value → 0 , and Page 256 force F can be ignored. Both these equations produce Newton simply used the word “bouts” in place of the qualitatively similar results for different problems, but word “probability”. the first non-autonomous equation evidently does not have any movement integrals at all, and any hope for It is absolutely clear that all descriptions of processes analytical solutions is very unreal. But for the second by the equation with an oscillating charge will be an autonomous equation such hope still exists. Let us note approximation, because it is evident that no movement that these equations describe more accurately the equations for a material point can describe even the experimental results of scattering on the coulomb simplest interference processes on a semi-transparent potential than the classic Rutherford formula! mirror, during which a material particle should be Application of these equations for the tunnel effect and divided in two parts which will later eliminate each scattering on short potential also produces correct other by destructive addition. It is surprising, but the results, but in this case passage through a high barrier numerical solution of the problem of scattering on a (tunnel effect) will be defined by the initial phase. Of short potential (the Ramsauer effect) for equations (4) greatest interest, however, is the harmonic oscillator and (5) gives the correct diffraction picture! But if we problem. want to describe an individual particle correctly in the conventional quantum mechanics, the picture becomes It is possible that a change in the properties of a material inexact and purely probabilistic. At every given moment point in the process of its movement is just another step of time a particle can exist in only one of the mutually in the material point movement theory. In conventional incoherent states, because one particle cannot move in mechanics this idea is not altogether new. There are different directions simultaneously (it cannot have many Meshchersky’s equations for bodies with a changing impulses at the same time). Nevertheless, there seems mass, and Tsiolkovsky’s equation for a rocket. But so to exist a whole class of processes, where description far, in the conventional quantum theory, the particle has with the help of equations (4) and (5) have certain sense. a permanent and stable in space and time set of It is well known that in all experiments the local energy properties, and in the UQT all the parameters of the and impulse conservation law in individual quantum particle are changed and oscillate during movement. processes are true only under high-energy values. 
But under small energy values it is not so, at least because It should be noted that Newton did not introduce the of the ratio of uncertainties and the probabilistic notion of a material point at all, and it would be character of all the quantum theory predictions, and the ridiculous to think that he was not able to have this idea of a global, not local ECL, is invisibly present in natural and rather trivial idea. Most probably, and it is the quantum mechanics, and is certainly far from new. not by chance, for today many troubles of the field quantum theory are rooted in the approach to the In the strict UQT and the quantum measurement theory, particle, as to the point, the most vivid example being a great role belongs to unavoidable vacuum fluctuations. a large bouquet of divergences. Nevertheless, this It is clear that these fluctuations are totally approach is very convenient and should only be used unpredictable and non-invariant in relation to space and correctly. Let us also remember, that in accordance with time translations. The same can be said otherwise: there the Newton corpuscular theory, beams of light were to are no habitual properties of time and space in this be regarded as a flow of certain particles. They are theory. Space-time is now not homogeneous and not emitted by a shining body in all directions and move in isotropic. For example, if the system is transferred to a an empty space or a homogeneous medium evenly and new point in space, or a certain experiment is repeated straight, i.e. in the same way as the ordinary material at another time, at the point where particle parameters particles do in the absence of any external or interaction are studied, and it interacts with the macro-device, a forces. Newton explained reflection and refraction of new value of vacuum fluctuations (different from the light beams on the surface of border between two previous one) can appear and produce a different result. homogeneous mediums by the effect of certain forces Of course, all this is only true for small energies and on this border, in the direction perpendicular to the individual events. surface. These forces changed the normal velocity component, but did not touch upon the tangential one, Still more destructive is the UQT for the notion of a which allowed to derive the reflection and refraction closed system. For individual events under small energy laws. However, the inability of such a theory to account values this notion is simply unacceptable for the for the light partial reflection and passage phenomena, following reason: vacuum fluctuation at the location of as well as the Newton rings (which he himself the particle (e.g. in a potential pit) can be sharply discovered), led him to bouts (or fits) theory, which is changed at any moment. It can be caused by different quite modern, although nearly forgotten. Newton factors – the nature of vacuum fluctuations itself, or the believed that for full explanation of all the processes it tunnel effect of another random particle. was necessary to suggest that some light particles could experience reflection bouts, and others – passage Sometimes it is stated that conservation laws follow bouts. Let us imagine light falling to a flat surface, which from the Nether theorem, though these results are is partially, reflects and partially passes. With quantum present in the works by D. Gilbert and F. Klein. 
For any description of this phenomenon, a particle connected physical system, the movement equations from which to the falling wave at the time of hitting the surface has can be received from the variation principle, each one- a certain probability of passing or being reflected, and parametric continuous transformation that leaves the Page 257 variation functional invariant, corresponds to one on the average and inapplicable to individual processes differential conservation law, and there exists a clearly with small energy, first occurred to Schroedinger, and conserved value. It is easy to see, however, that vacuum later to Bohr, Kramers, Slater, and Gamov. In 1923 Bohr, fluctuations imposed on the varied function (integral of Kramers and Slater made a desperate attempt to Lagrangian) do not in sum remain unchanged during develop the theory, where the energy and impulse parametric transformations (at least today it seems so), conservation laws in case of scattering would be true and this consideration does not work without only statistically, on an average for long periods of time, preliminary of ensemble. but would be inapplicable to elementary events. Lev Landau even called it “Bohr’s wonderful idea”. And now we are in for a little philosophy. The local Energy Conservation Law (ECL) in individual processes Later, however, the authors gave up this approach and, follows from the Newton equations for closed systems. besides, this idea at that time did not follow from the It would be naive to think that its local formulation will quantum theory equations, and the authors, to come be preserved forever, and would be a bad mistake to out of the predicament, simply declared that quantum transfer the ECL from the Newton mechanics to the mechanics did not describe individual events at all. Thus, quantum processes without any changes, because the the most vivid paradox of the quantum science was latter are more fundamental. removed by a simple ban on thinking about it! But the ingenious idea that conservation laws do not apply to References to the first principle of thermodynamics are, individual quantum processes and emerge only after strictly speaking, groundless, because this principle is the averaging by the ensemble of particles remains a postulate. For example, well-known Russian alive. This idea might have been a little premature, and, mathematician N. Luzin, in a letter to an inventor wrote possibly, should be a little different. that the first principle of thermodynamics is the result of unsuccessful attempts of the mankind at building a The Unitary Quantum Theory (UQT), on the contrary, perpetual mobile, and strictly follows from nothing. individual particles, and the difference in their behavior Today it may be said with a great degree of certainty is accounted for by the initial phase of the wave that no sophisticated machine in the framework of the function. In this case, local conservation laws do not Newton mechanics can be a perpetual mobile, and the exist for a single particle, and measuring the initial decree of the French Academy of 1755 not to consider phase or some other parameters for an individual any perpetual mobile projects is still valid. We will only particle is quite a different matter. It is not true that the add that now it is valid only for those projects that are UQT has given up probabilistic description. Probabilistic based exclusively on the Newton mechanics. interpretation remains, but the probability now is strongly dependent on the initial phase. 
Although the There is the tendency in modern physics to reduce ECL, equations with an oscillating charge can determinately especially in theory, to the rank of a secondary predict a particle’s behavior, the measurements can be derivation from the movement equations (movement made only with the help of a macro device, which will integrals). Some physicists restrict the ECL to the give only a probabilistic result. Impossibility to framework of the first principle of thermodynamics, determinate measurements does not change anything, others, like D. Blokhintsev [37], think it quite probable for the UQT provides for a possibility of influencing the that with the development of a new theory the form of probability value, which was earlier unavailable. The the ECL will undergo certain changes. F. Engels wrote existing Von Neumann theorem about hidden in his “Dialectics of Nature”: “…none of the physicists parameters does not effect our result, but the relevant actually regard the ECL as an eternal and absolute law discussion is too cumbersome, and we will leave it out. of nature, a law of spontaneous transformation of the movement forms of the matter and the quantitative In other words, all the requirements, wherefrom the constancy of this movement in all its transformations”. classical conservation laws follow, are now absent. We But many people do not share this opinion. M. Bronstein can hardly expect the conservation laws for individual in his book “Structure of Matter” wrote: “The ECL is particles to be preserved under small energies in such a one of the main laws of the Newton mechanics. situation. Today we are convinced that the classical Nevertheless, Newton did not ascribe to this law the energy, impulse and angular momentum conservation general character that this law actually possesses. The laws for individual quantum objects are not valid under reason for this erroneous (italicized by authors) opinion small energy values because of periodic appearances of Newton of the ECL is very interesting…”. It is now and disappearances of the par ticle. All direct clear that in view of the above, such an opinion was experimental tests of the conservation laws were made not at all erroneous. Let us remind you that Newton for large energy values, and for small energies of an predicted many things, even the UQT, in his “bouts individual particle only probabilistic results can be theory”. received, and, in this case, it would be indecent even to recall the conservation law. On the other hand, the authors of quantum mechanics realized that there was no conservation law for single Energy generation and perpetual mobile quantum processes under small energies at all. The idea that the construction of ECL, together with the second Let us make the following imaginary experiment. For law of thermodynamics, was a statistical law, true only simplification purposes we will use in our reasoning a Page 258 certain quantum ball-particle. When a classical ball (9), when four types of solutions are possible, three of approaches a wall (perpendicularly for simplification), which are most important for us: stationary, “maternity the speed of the reflected ball is always equal to the home”, and “crematorium”. In the two latter solutions initial speed (we ignore friction and regard the ball and traditional conservation laws do not work. These the walls as absolutely elastic). In the case of a quantum solutions are presented in Fig. 1. 
Such oscillator ball, the speed of the reflected ball will acquire in behavior explains many experimental facts. From the different experiments with absolutely equal initial physical point of view, it means that in stationary conditions a whole range of values: some balls will be solutions with fixed discrete energies (conventional reflected at a speed greater than the initial speed, others quantum mechanics) the speed of the particle reflected – at a speed equal or lower than the initial speed, and from the wall will be equal to the speed of the falling all this is described by quantum mechanics. particle. If the speed of the particle is decreased after each reflection, it will mean the “crematorium” solution, Let us ask the following question: what if a second wall and if it increases, the “maternity home”. Scenarios for is found, parallel to the first one, in order for the ball to situations will depend on the initial phase of the wave increase its speed after each reflection from the wall? function and the particle energy. In ordinary situations Then we will have increased ball energy without any the “crematorium” and “maternity home” solutions special efforts on our part. Such phenomena appear in always compensate each other, and we find the problem of particle oscillations in a potential pit (not conservation laws. necessarily parabolic) on the basis of equations (8) and Fig. 1. Dependence of the distance between the moving charge and the nucleus on time for autonomous and non-autonomous equations. The task of the future developers of new energy systems interested to keep their stability degree, that everyone of the 21st century will consist in creating such initial who started speaking about it was considered to be a conditions for a great number of particles making up a crazy man. body that only the “maternity home” solution would be realized, and the “crematorium” solution would, if Modern experimental physics has verified the possible, be suppressed. correctness of conservation laws either for very large energies in individual quantum events, or for big It follows from the above that if the unitary quantum macroobjects, when automatic averaging by ensemble theory ideas are applied correctly, there is no is made, but the area of very small energies for fundamental taboo for a perpetual mobile. Such a taboo, individual events today is a terra incognita. as it was shown, does not formally exist even in conventional quantum mechanics (no conservation laws In order to see how the conservation laws for reflection for individual processes with small energies), and, in (repulsion) of an individual particle from the Coulomb order to generate energy, they should be somehow heavy nucleus with different values of the initial phase accumulated (all random processes with excess energy are violated, we have solved numerically one- should be grouped together). But conventional quantum dimensional equations (8) and (9) under the different mechanics refuses to describe individual events and is initial conditions: unable to offer any ways for such grouping. The unitary quantum theory seems to offer such an opportunity. h = 1, m = 1, 2 Zze 2 = 1, x0 = 100, Vx 0 = −0.1 However, the great idea of free energy generation was In the Fig. 2 the distances between the moving particle distorted by effort of some research associations and repulsive nucleus are shown as a time function, for Page 259 different initial phases in cases of non-autonomous and functioning. 
And in this casethe main problem that autonomous equations. prevented transformation of dead suns , back to red- hot nebulas will disappear”. The question of whether the conservation law exists in global form (we have already proved its not being local) remains open, because nothing leads to it except the inertia of the human mind. This inertia was based on the Newton laws, which were replaced by quantum laws. This mental inertia leads to a situation, when in case excessive energy is generated during solution of movement equations, a question arises how it can happen, and where it comes from. Of course, if a particle (e.g. a photon) falls on a semitransparent mirror, the packet is divided into two halves, which, due to Fig.2 Three types of solutions for oscillator imposition of vacuum fluctuations, will be recorded by photomultipliers as full-fledged photons [1-5]. The result It is evident from the calculations that the speed of a is that energy is taken as if from vacuum: two photons reflected particle can be equal, lower or higher than appear in place of one. Another photon can be divided the speed of a falling particle. This situation seems to on the mirror into two halves, but they will not be be true for all potentials. recorded by the meters, and the energy will allegedly pass into vacuum. So, at one time we borrowed energy Calculations were also made for other potentials: from vacuum, and then gave the same amount of energy harmonic oscillator, Yukawa, Gauss, dipole, hyperbolic back to vacuum at another place. You can think like that, secant, and Wood-Saxon, and the quality results were and this process might take place. But if we consider nearly the same. If we sum up the impulse of all the the equation with an oscillating charge, the energy and particles falling with different phases and compare it impulse conservation laws are not valid there for with the summarized impulse of all the reflected solution of the movement problem, and vacuum particles, the summarized reflected impulse, for fluctuations have nothing to do with it at all. As for the example, for the Coulomb potential, will be several question of where the energy comes from, it is the result percent higher than the summarized impulse of the of our mental inertia, and is, essentially, an atavism falling particles. For other potentials such a small imposed by the Newton mechanics. But the latter deviation can even be in the opposite direction. On the appears as a result of an extreme passage from quantum whole, this problem is very complicated and requires mechanics, which is more fundamental. additional research, because all this is also dependent in quite a complex way on the initial conditions (initial It is interesting to note that there is a bomb in the logical speed, phase and distance). definition itself of the energy conservation law. If energy is something that cannot appear or disappear and Philosophically, any categorical taboos, like the always simply passes from one form to another, the only impossibility of creating a perpetual mobile, are value satisfying this condition is zero. We are far from absolutely unacceptable. If everyone is convinced of it assuming that energy does not exist. But the problem forever, the conservation laws and perpetual mobile of existence is solved differently in different taboos will remain unshakable as long as the human philosophical systems, and the mathematical approach civilization exists. 
Of course, the funeral of the Conservation Laws can be very prolonged. Anyway, we are not going to arrange it, and our article might be just a cleanup for the future tomb; the splendid funeral with all the necessary honors will be organized by future generations. On the other hand, these laws will never die out completely and will surely be applied, but such spheres of science and technology will appear, though small at first, where these laws are not valid.

The truth should be accepted irrespective of where it comes from. The words of F. Engels from the "Dialectics of Nature" will be quite appropriate here: "When the solar system ends its life circle and shares the fate of all finite things, when it falls victim to death, what will happen next? Thus, we come to a conclusion that the heat emitted into the universe should have an opportunity, in a way yet to be established by the natural sciences, to turn into another form of movement, where it can be accumulated again and start functioning."

The mathematical approach seems to be the most correct one: an object exists if it is free from contradictions. Energy has bad luck in this case, for under such an approach it should be zero. Some cosmologists (for example, British prof. Fred Hoyle) are very willing to have a process in accordance with which the Universe has certain places where energy appears and other certain places in which it is eliminated. Besides, any philosopher at least a little bit familiar with astronomy, looking at the bright night sky, will see the birth of matter and its expansion into a still greater space. But for this purpose the Global Energy Conservation Law is superfluous and only denies what is observed. The head reels…

Cold Nuclear Fusion and Nuclear Transmutation

Let us approach the epoch-making experiments made by Fleischmann and Pons in March 1989 [30] from the positions of the equation with an oscillating charge. One of the authors predicted in 1983 [9] the possibility of such nuclear reactions under very small energies. Without going into well-known details, we will sum it up very briefly: cold nuclear fusion exists, and there are no people or theories capable of giving a clear explanation. The chain of various mechanisms meant to explain this intriguing phenomenon is growing, but few really believe in them. The reason is as follows.

When a charged particle interacts with the nucleus, the potential energy is like in Fig. 3, where the right top part of the curve is conditioned by the mutual Coulomb repulsion between the nucleus and the charged particle. The repulsion potential will be

U(r) = Zze²/r,

where Z is the charge of the nucleus, z the charge of the particle, e the charge of the electron, and r the distance between the particle and the nucleus. When r = R (the critical distance), the potential energy curve goes sharply down, which is due to the emergence of intense nuclear gravitation, the potential whereof today appears more complex than could be imagined mathematically. If the charged particle overcomes the Coulomb barrier with a height of

B_c = Zze²/R ≈ 3·Zz/∛A MeV,

it will further get into the nuclear gravitation area and a nuclear reaction will take place.

Let us look at the nuclear interaction of a charged particle with kinetic energy T < B_c. From the point of view of classical mechanics, there will be no nuclear reaction in this case, because the particle will approach the nucleus and, at a certain distance r > R (below the top of the Coulomb barrier), it will turn back and be reflected from it. However, from the point of view of quantum mechanics, there exists a tunnel effect, and the probability of such a tunnel passage, or the transparency of the potential barrier D, is described by a well-known formula:

D ≈ exp( −(2/ħ)·∫_{r₁}^{r₂} √(2μ(U − T)) dr ),   (6)

where μ = Mm/(M + m) is the reduced mass. The bottom integration limit r₁ coincides with the nucleus radius R, and the top limit r₂ can be found from the condition T = Zze²/r₂. After integration we will get

D = exp(−2gγ),

where g = R·√(2mB_c)/ħ; γ = √(B_c/T)·arccos√(T/B_c) − √(1 − T/B_c), and the value λ_Bc = ħ/√(2mB_c) is the de Broglie wavelength corresponding to a kinetic energy of the particle equal to the barrier height, T = B_c. If T << B_c, expression (6) is easily transformed to

D = exp(−2πRB_c/(ħv)) = exp(−2πZze²/(ħv)),

where v is the velocity. Let us now see what the shocking cold nuclear fusion will look like on the basis of the above considerations.

The deuteron energy in an ordinary electrolytic Fleischmann–Pons cell will be about 0.025 eV, and the height of the Coulomb barrier for this case approaches B_c = Zze²/R ≈ 0.8 MeV. In classical mechanics it would be just naive to talk about overcoming a barrier tens of millions of times higher than the kinetic energy. Let us now see how the tunnel effect improves the situation. Let us assess the values of g and γ for the case of a collision between two deuterons with such energy: g = R·√(2mB_c)/ħ = 1.9; γ = √(B_c/T)·arccos√(T/B_c) − √(1 − T/B_c) ≈ 8883, and the probability of such a process will be exp(−2·1.9·8883) ≈ 10⁻⁷³²⁸, i.e. practically pure zero.
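As a rough cross-check of the low-energy limit D = exp(−2πZze²/(ħv)) quoted above, the sketch below evaluates the exponent for two deuterons at T = 0.025 eV using textbook constants. The numerical answer depends on the reduced mass and on the values chosen for R and B_c, so it does not reproduce the figures quoted in the text (g ≈ 1.9, γ ≈ 8883, D ≈ 10⁻⁷³²⁸) exactly; the qualitative conclusion — a transparency that is vanishingly small — is the same.

```python
import math

# Evaluate the low-energy Gamow limit  D = exp(-2*pi*Z*z*e^2 / (hbar*v))
# for two deuterons with relative kinetic energy T = 0.025 eV.
# Constants are textbook values; the velocity is taken from the reduced mass
# of the d-d pair.  Because D underflows ordinary floating point, only the
# exponent (log10 D) is printed.

ALPHA = 1.0 / 137.036      # fine-structure constant, e^2 / (hbar * c)
MU_C2 = 937.8e6            # reduced mass of the d-d pair in eV/c^2 (~ m_d / 2)
T = 0.025                  # relative kinetic energy, eV
Z = z = 1

v_over_c = math.sqrt(2.0 * T / MU_C2)                   # non-relativistic velocity
ln_D = -2.0 * math.pi * Z * z * ALPHA / v_over_c        # natural log of D
log10_D = ln_D / math.log(10.0)

print(f"v/c   = {v_over_c:.3e}")
print(f"ln D  = {ln_D:.1f}")
print(f"D ~ 10^({log10_D:.0f})   # vanishingly small, as stated in the text")
```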
The fusion cross-section will be defined by the product of the nuclear cross-section and the tunneling probability,

σ = σ_nucl·D,

and, in the case under review, it is also a very small value. If the clash (impact) parameter of the deuterons is not zero, the emergence of the centrifugal potential

ħ²·l(l + 1)/(2mr²)

will still further lower the probability of such an interaction.

The well-known interaction d + d goes along three channels:

D + D → T (1.01) + p (3.03)   (Channel 1)
D + D → He (0.82) + n (2.45)   (Channel 2)
D + D → He + γ (5.5)   (Channel 3)

All these reactions are exothermal. The third channel has a very small probability. It was experimentally discovered that they could occur under very small energies. In a D₂ molecule the equilibrium separation between the atoms is 0.74 Å, and in accordance with the conventional quantum theory these two deuterons could accidentally enter a nuclear fusion reaction. But the interaction rate is very small, λ_D₂ = 10⁻⁶⁴ s⁻¹. There is a known estimate that in the water of all seas and oceans there are 10⁴³ deuterons, and in 10¹⁴ years there will be only one fusion.

It is these very circumstances that make nuclear physics scientists think that there is no cold nuclear fusion as such. For example, such a serious and responsible edition as Encyclopedia Britannica 2001 found no place for the notion of cold nuclear fusion at all. Such an official position can be understandable only from the point of view that quantum mechanics is absolutely true and unshakable. Despite this, for the 12 years since the Fleischmann–Pons experimental discovery, nearly 30 international conferences have been devoted to this
subject, there are lots of books and magazines on this subject, and the number of articles on the problem is It follows from the aforesaid that the main problem nearing ten thousand. Today the situation is gradually impeding the occurrence of the d+d reaction lies in the developing in the positive direction, and the research existence of a very high Coulomb barrier. Our approach in the field of hot nuclear fusion, which has already allows for this problem to be solved, and there is such wasted over $90 billion for 45 years, is slowly coming an opportunity in the UQT. The UQT equation solutions to naught. show that the distance to which deuterons can approach each other is strongly dependent on the phase But today there exist well known experimental data on of the wave function (by the way, it is absolutely clear cold nuclear fusion. They are numerous and various. We intuitively).   will dwell only upon the most important and sufficiently reliable results. Thus, the classical view of electrolysis Let us consider the one-dimensional problem [15-18,31]. of a palladium cathode saturated with heavy hydrogen There is a stationary nucleus with charge Ze at the point in heavy water identifies an anomalous quantity of heat of origin, and another nucleus is approaching it along energy up to 3 kWt/cm3, or up to 200 Mj per small axis õ (charge ze, mass m) at a certain initial velocity. sample. Products of nuclear reaction have also been The non-autonomous and autonomous equations of such found: tritium (107 - 109t/s), neutrons with energy of a problem will look as follows: 2.5 MeV (10-100n/s), and helium. Absence of He3 among the reaction products shows that heat is not generated  m  dx  2 m dx  d 2x 2 Zze 2 by reaction d+p. Besides, emission of charged particles m = − 2 cos 2    t − x + ϕ 0  (8) (p, d, t, γ) is observed. Similar processes are observed dt 2 x  2h  dt  h dt    in case of a gas discharge on a palladium cathode, of phase passage in different crystals saturated with d 2x 2Zze 2  m dx  heavy hydrogen, irradiation of deuterium mixture with m = − 2 cos2  − x + ϕ0  a powerful sound or ultrasound flow, in cavitating dt 2 x  h dt  (9) microbubbles in heavy water, in a tube with palladium powder saturated with heavy hydrogen under a Since an analytical solution was not found for all the pressure of 10-15 atm., etc. In certain reactions (e.g. areas of initial phases , numerical methods were applied d + t → α + p ) neutrons of 14 MeV are absent, and with the following initial values: Z=z=1, e=1, m=1, such a strange situation occurs in other cases too. x0=-10, h =1 for different initial velocities and initial Activity of Li6 , Li7 in reactions with heavy hydrogen phase values. As had been expected, braking or and protons failed to be discovered, whereas reaction acceleration of the particle happens only when the charge is large. But at the last stage, under certain initial K 39 + p → Ca 40 π phases close to , a wonderful process occurs: velocity, was well recorded even in biological objects. But the 2 most intriguing fact of all these processes is the charge and repulsing force are very small. Due to phase shortage of nuclear reaction products for explanation ratios, the small charge is not changed for a long time, of the emerging heat effects. 
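A minimal numerical sketch of the one-dimensional autonomous equation (9) given above — as far as it can be reconstructed, m·x″ = −(2Zze²/x²)·cos²(−(m/ħ)·x·x′ + φ₀) — is given below. It integrates the approach of one charge toward a fixed repulsive charge for several initial phases φ₀ and records the minimum distance reached, in the spirit of the phase scan described in the text. The units and the starting point follow the text (Z = z = e = m = ħ = 1, x₀ = −10); the initial velocity of +0.5 toward the origin is an assumed example value, and the sketch is an illustration of the equation's structure, not a reproduction of the authors' Fig. 4 or Fig. 5.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-dimensional autonomous equation (9): a charge approaches a fixed
# repulsive charge at the origin from the left,
#   m x'' = -(2 Z z e^2 / x^2) * cos^2( -(m/hbar) x x' + phi0 ).
# Z = z = e = m = hbar = 1 and x0 = -10 follow the text; v0 = +0.5 is assumed.

Z = z = e = m = hbar = 1.0
x0, v0 = -10.0, 0.5

def rhs(t, y, phi0):
    x, v = y
    force = -(2.0 * Z * z * e**2 / x**2) * np.cos(-(m / hbar) * x * v + phi0) ** 2
    return [v, force / m]

def too_close(t, y, phi0):
    # Stop the integration before the 1/x^2 singularity at the origin.
    return abs(y[0]) - 1e-2
too_close.terminal = True

for phi0 in np.linspace(0.0, np.pi, 9):
    sol = solve_ivp(rhs, (0.0, 400.0), [x0, v0], args=(phi0,),
                    events=too_close, max_step=0.01, rtol=1e-8, atol=1e-10)
    x_min = np.min(np.abs(sol.y[0]))       # minimum approach distance reached
    print(f"phi0 = {phi0:5.2f}   minimum distance = {x_min:.4f}")
```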
Thus, in certain cases the which means that the particle (or rather what is left of number of nuclear reaction products (tritium, helium, it) is not influenced by any forces, and it is crawling at neutrons, quanta) should be millions of times greater in a permanent small speed for a very long time (“the snail order to account in some way for the quantity of the effect”) inside the field of another particle, and can come generated heat. Generation of such a big amount of very close to the center. Such movement with a very energy cannot be accounted for by either chemical or small charge and a small speed can last for several nuclear reactions, or by phase passages. The well- hours, and disconnection of the external field will not known interaction d+d goes along three channels: effect this movement. This process reminds of quiet and Page 262 invisible scout penetration into the enemy territory. This phenomenon occurs only in certain phase areas, and can be conveniently called a phase hole, which is illustrated in Fig. 4 resulting from the solution of equation (8). Fig.5 Minimum distance between charges depending on the initial velocity for different initial phase values. experimental situations it is reproduced by different experimental groups with a very high accuracy. This very intriguing problem has so far received no simple explanation. Let us dwell on a possible cause for such aphenomenon. With a small velocity in the phase hole, neutrons are affected by nuclear gravitation forces, and Fig.4. Distance to the turning point of the moving charge protons are affected by electrostatic repulsion forces. depending on the initial phase value for different initial velocities. Under the effect of this momentum, the deuteron will have enough time to turn in such a way that the neutron parts of the deuteron would faceeach other. After the Let us note in passing that now we can account for one neutron gravitation the nuclear forces will be saturated, of the nuclear physics anomalies, which has a tendency which will weaken the proton connection, and one of to be totally ignored. Under a nucleon energy of 1 MeV, the protons will leave the system. This reaction can be its velocity is 109 cm/s, nuclear radius is 10-12 cm, and conditionally presented in the following form: the passage time of the nucleus is 10-21s, but the time period in which the nucleon passes is usually d + d → p + (n + d ) → p + t anomalously long -  10 -14 and even more, and it is absolutely unclear what the nucleon is doing in the This reminds of the Oppenheimer-Phillips effect. nucleus so long. In our model it is easily explained by the “snail effect”. It is well known, however, that under big energies, the probabilities of the first and second reaction channels For the same equation, the minimum distance was are the same, and this phenomenon should somehow calculated between the charges dependent on velocity be accounted for. Increased probability of the neutron (Fig. 5) for different initial phase values. For comparison, channel with growing energy can be connected with Fig. 5 also shows the result of the classical calculation the appearance of secondary neutrons in the reaction based on the Coulomb law. It is obvious from Fig. 4 and T + D = He + n (14.1 MeV). 
In a deuterium-rich 5 that the minimum distance to which charges can environment, a big part of the resultant tritons will pass approach each other is nearly independent of kinetic to neutrons in the process of this reaction, which has a energy, but with reduction of speed the initial phase cross-section of 5 barns under an energy value of 70 area width is reduced as well. In other words, reduced KeV. According to assessments in [32], the number of energy brings also reduced probability of a nuclear such secondar y neutrons per one triton is   7.9 ⋅10 −12 ,1.7 ⋅10 −9 ,2.7 ⋅10 −6 for energy tritons 10, 20 The same results are true for the autonomous equation and 100 KeV respectively. Thus, the prevalence of (9). Under the conventional quantum theory, the ratio t > 106 can be expected only in those reactions, where of the reaction speeds in the tritium and neutron n t tritium is born with energies over 40 KeV. channels should be close to unity: = 1 . But in many n It should not be assumed, however, that the phase hole experiments on cold nuclear fusion this value is very phenomenon in its whole area leads to a nuclear t reaction. It can be assumed that reduction of the different from unity and equals = 109 . In different n Coulomb repulsion is followed by reduction of strong Page 263 interaction. But how? Today nobody knows the exact experimental data of the above-mentioned phenomena. equation of the strong interaction potential. Besides, The reaction of the official science is very interesting. the particle approaches the turning point Xmin is rather For example, well-known physicist Karl Sagan, after “thin”. Will it be able to take part in a full-fledged reading the book about such experimental data, advised nuclear reaction, or will it fly through, like it happens Kervran to read elementary textbooks in nuclear with the electron in the s – states of the atom? There physics! are very narrow phase areas, when soon after the particle stops the charge grows quickly and is sharply Some time later a research was made by Panos T. Pappas accelerated. The charge can even be maximum in the [58], who studied one of the well-observed nuclear nuclear force effect area. May be, it is this narrow phase reactions in biological cells: area that is responsible for cold nuclear fusion, and in case of strong interactions the phase hole mechanism Na11 + O16 = K 39 8 19 must be operating as well. Classical biology has long known about the existence It was discovered long ago that nuclear transmutations of equilibrium, when the ratio between the number of have a mass character (especially in plants and K and Na ions is maintained with greatest accuracy biological objects), but they have little to do with energy despite the shortage or even absence of K ions in food. generation. Examples of such reactions: Later, in work [59] this nuclear reaction was even called the life equation, and the existence of such nuclear Mn55 + p → Fe56 ; Al 27 + p → Si 28 ; reactions in biological objects was proved by M.Sue Benford with the help of direct physical methods. P 31 + p → S 32 ; K 39 + p → Ca 40 All thermonuclear fusion programs are based on blunt In reactions of this type, a very slow proton (with heating and compression of the reacting material. practically zero kinetic energy) penetrates the nucleus Despite the progress achieved, the head of the works in the above-mentioned way and remains there. No in England, Dr. 
Alan Gibson [34], established several intranuclear energy is generated, because both before years ago that the model reactor design would be and after the reaction the nucleus remains a stable created not earlier than in 50 years. Today, this point of object. In classical nuclear physics, the nucleus usually view is generally accepted. Even if the reactor is once became unstable after it was penetrated by a charged made (although the authors have grave doubts about nucleon with a large kinetic energy and always broke it), it will be very complicated, expensive, and harmful into parts, and the nuclear debris had an even greater for the environment. kinetic energy. Reactions of the above type were considered impossible under small energies and for this Classical approaches have so far not yielded any reasons were not studied by classical nuclear physics. positive results, despite multi-billion investments and It seems to be a completely new type of nuclear a great number of physicists, engineers, service transmutations, not recognized by modern nuclear personnel, and managers involved. It is only natural that science, but experimentally discovered rather long ago. this army of researchers is a potential impediment for Today there is a great deal of experimental material all alternative projects of new power engineering. It has confirming mass nuclear transmutation phenomena. been noted that “viability” of any idea is proportionate Moreover, there are many projects of neutralizing to the number of people involved and investments made. nuclear excess with the help of this technology. Journals For these reasons, the Fleischmann-Pons works were Infinite Energy, New Energy, Cold Fusion, Fusion Facts, given a hostile reception in the USA and other countries. etc. and Internet are full of such projects. All the controlled thermonuclear fusion programs are Of course, a change in the nuclear charge will result in accompanied with the adjective “controlled”, although restructuring of electronic atom shells, but the energy there is no control whatever. It is simply that the initial related to this process will be about several electron- quantity of the reacting substance is prudently made volts and is nothing in comparison with the energies of very small. For example, a ball of lithium deuteride nuclear reactions from several to hundreds million during laser reduction has a diameter of several mm. electron-volts. By the way, nuclear engineers are So far no one has been seriously considering the accustomed to such energy ranges in nuclear reactions. question of utilizing the energy of an explosion of such It was this circumstance that made them deny a priori a ball, which is approximately equal to the energy of an all nuclear processes in biology, because under such explosion of a box of antitank grenades. energy values of the debris dozens and hundreds thousand of complex biological molecules will be The straightforward approach to fusion used by the destroyed. modern science is very natural, because quantum mechanics has no methods of influencing this process. Quite a long time ago, Lois C. Kervran [33] wrote a book The future of really controlled nuclear fusion systems about nuclear transmutations in biology, and now, may be not on the way of primitive and blunt method of nearly 20 years later, its second edition was published! 
It gives, evidently for the first time, numerous experimental data on the above-mentioned phenomena.

The future of really controlled nuclear fusion systems may lie not in the primitive and blunt method of heating and compression of the material but, following the UQT, in the use of collisions of nuclei having small energy and a corresponding fine adjustment of the wave function phase. It is essentially possible in the case of imposition of an external controlling electromagnetic field on the reacting system, which contains quasi-fixed ordered deuterium atoms and free deuterons. The same properties can be demonstrated by a special atomic grid geometry. Diffraction scattering of a deuteron flow on such grids will lead to automatic deuteron selection by energies and phases.

It seems that in the Fleischmann-Pons electrochemical experiments such an ordered system existed in the Pd-D grid, and some phasing occurred, which accounts for the results of these experiments [30]. Today it appears to us that cold nuclear fusion processes will be effectively used for nuclear waste liquidation and production of isotopes.

Many researchers [35,36] discovered that the quantity of heat generated in the process of electrolysis of ordinary water on nickel electrodes (there is no hope for nuclear reactions in such systems) is the same as in the electrolytic cell with heavy water. It confirms other measurements, which showed that the quantity of nuclear reaction products is millions of times less than is required for such an amount of generated heat, and its origin remains a mystery.

Further we will give certain concrete data demonstrating the phase values of a deuteron with an oscillating charge under which the deuteron can approach the nucleus to a critical distance of 10⁻¹² cm or less, i.e. the data needed to estimate the width of the above-mentioned phase "hole" in the interval (0, π) of the phase change.

Assume that the stationary nucleus with the charge q is placed at the coordinate origin x = 0, and the deuteron with the same charge q is placed at the initial moment t = 0 at the point x₀ < 0 on the x-axis, with the deuteron velocity \dot{x}_0 = v_0 > 0. The units of mass, length and time are chosen in such a way that m = 1, ħ = 1, c = 1 (m is the deuteron mass, c the light velocity). The charge q equals 0.085137266. Our units are connected (to 4 significant figures) with the (kg, m, s) system as follows:

1 mass unit = 3.345×10⁻²⁷ kg,
1 length unit = 1.049×10⁻¹⁶ m,
1 time unit = 3.502×10⁻²⁵ s.

The electron velocity corresponding to its energy of 1 eV equals 5.931×10⁷ cm/s. The deuteron velocity corresponding to such an energy will be assumed to be 3680 times less, and in our units it will be 5.372×10⁻⁷ (if c = 3×10¹⁰ cm/s). Then the deuteron movement towards the nucleus is described by the equation

\ddot{x} = -\frac{2q}{x^2}\cos^2\!\left(\tfrac{1}{2}(t+t_*)\dot{x}^2 + x\dot{x} + \varphi_0\right),    (10)

where the parameter t_* is defined by the condition that the argument of the cosine equals φ₀ for t = 0, x = x₀, \dot{x} = \dot{x}_0 (thus t_* = −2x₀/\dot{x}_0), and this parameter may be considered as the initial moment of so-called local time.

Of interest for us are namely the solutions of eq. (10) under a very small deviation ε from the phase φ₀, so we put φ₀ = π/2 + ε and rewrite eq. (10) in the following form:

\ddot{x} = -\frac{a}{x^2}\sin^2\!\left(\tfrac{1}{2}(t+t_*)\dot{x}^2 + x\dot{x} + \varepsilon\right),    (11)

where a = 0.0144967. Let the initial x₀ be equal to −500000 of our length units (i.e. approximately 5×10⁻⁹ cm) and the initial deuteron velocity v₀ be equal to the velocity v₀₀ corresponding to the deuteron energy of 1 eV or less. But it turned out that the precision of the numerical integration of this equation under such initial conditions and under values ε = 10⁻¹⁵ and less is poor, and besides, the interval of integration must be very large. That is why this equation also had to be transformed, by passing to the "slow" time τ = εt, into an equation for the variable w = dx/dτ as a function of x:

\frac{dw}{dx} = -\frac{2a}{\varepsilon^2 x^2 w}\sin^2\!\left(\varepsilon\left(\tfrac{1}{2}(\tau+\tau_*)w^2 + xw \pm 1\right)\right),    (12)

where τ_* = −2x₀/w(x₀), and the sign is +1 if ε > 0 and −1 if ε < 0. The equation for τ as a function of x must also be added:

\frac{d\tau}{dx} = \frac{1}{w}.    (13)
The system of equations (12), (13) is, so to say, a "model" system describing fairly accurately the deuteron movement under all values of |ε| from 10⁻²⁴ to 10⁻⁶. Numerical integration of this system was fulfilled under different values of ε and under the following initial conditions:

w(x₀) = 2.103,  τ(x₀) = 0,  x₀ = −500000,  τ_* = 689573.18    (14)

It may be noted that the initial deuteron velocity v₀ equals 1.450172 (following the relation \dot{x}_0 = εw(x₀)) for the given initial w(x₀) and for |ε| = 10⁻⁷, i.e. such a velocity is approximately 3.7 times less than the velocity v₀₀ corresponding to the deuteron energy of 1 eV. If |ε| = 10⁻⁶, then the velocity v₀ is approximately 2.7 times greater than the velocity v₀₀.

It turned out that the numerical tables for the values of w, τ obtained under different values of ε < 0 in the interval (−10⁻²⁴, −10⁻⁶) do not differ essentially from each other. The following table is true to three-four significant figures for τ and \dot{x}/ε = w:

x           τ            w
−500000     0            1.450
−50000      1.426×10⁶    0.0493
−500        1.002×10⁷    0.000489
−200        1.067×10⁷    0.000440
−100        1.090×10⁷    0.000425
−80         1.100×10⁷    0.000423

Reducing the table values of x to centimeters, we obtain the following corresponding approximate values: 5×10⁻⁹, 5×10⁻¹⁰, 5×10⁻¹², 2×10⁻¹², 10⁻¹², 0.8×10⁻¹² cm.

The time interval ∆T, in which the deuteron reaches the critical distance 10⁻¹² cm from the center, is 67350/ε of our time units, or (1.090×10⁷/ε)·3.502×10⁻²⁵ seconds. If nuclear forces are not taken into account, then the deuteron may approach to a distance of less than 10⁻¹² cm.

We present here the table giving the initial deuteron velocities v₀ in shares of the velocity v₀₀ and the corresponding time intervals ∆T (in seconds) for different values of ε:

ε         v₀/v₀₀         ∆T (s)
−10⁻⁶     2.7            3.82×10⁻¹²
−10⁻⁷     0.27           3.82×10⁻¹¹
−10⁻²²    0.27×10⁻¹⁵     3.82×10⁴ (≈ 10.6 hours)
−10⁻²³    0.27×10⁻¹⁶     3.82×10⁵ (≈ 106 hours)

Let us note that the given data change essentially under positive values of ε (10⁻⁶, 10⁻⁷, etc.); there is some asymmetry in the behavior of the solutions under negative and positive values of ε. The calculations show a minimum distance x_min of more than 500 of our length units even for a relatively big initial w(x₀) = 10000. Thus, if we limit ourselves to the condition that the deuteron energy is not over (0.27)² eV at a distance of 5×10⁻⁹ cm from the central nucleus, and the whole process of deuteron movement towards the nucleus does not exceed approximately 10.5 hours, then the interval (π/2 − 10⁻⁷, π/2 − 10⁻²³) is approximately the sought phase "hole" in the whole interval (0, π) of the phase change φ₀ in eq. (10).

If many deuterons with an energy of not more than (0.27)² eV at the distance 5×10⁻⁹ cm from the nucleus are equally distributed over their phases φ₀, the ratio of the length of this "hole" to π, equaling approximately 0.3×10⁻⁷, is equal to the share (or the relevant percentage of 0.3×10⁻⁵) of deuterons overcoming the Coulomb barrier. The above figures express at least the order of the probability of the cold nuclear fusion occurrence, and this order is absolutely incompatible with the figures of classical quantum mechanics mentioned above. Let us note once again that a one-dimensional problem was solved, and in the case of an accurate analysis (a non-zero sighting distance will be taken into account) this probability will be lower. Let us also pay attention to the large time intervals ∆T obtained when ε is very small. This explains well the effect (observed by many researchers) of continuation of cold nuclear fusion reactions for many hours even after the disconnection of the voltage in the electrolytic cell. This effect was even named "life after death".
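The slow-time system (12)–(13) used for the results above can be handed to any standard ODE solver. The sketch below is not the authors' program; it merely feeds the system, as reconstructed above, together with the initial data of eq. (14) to SciPy for one negative value of ε, tabulating τ and w at a few values of x. Whether it reproduces the table above to all quoted digits depends on details (tolerances, the exact coefficients of eq. (12)) that the text does not fully specify.

```python
# Sketch only: integrate the slow-time system (12)-(13) for one value of eps.
import numpy as np
from scipy.integrate import solve_ivp

a     = 0.0144967
eps   = -1.0e-7
sign  = 1.0 if eps > 0 else -1.0
x0    = -500000.0
w0    = 2.103
tau0  = 0.0
tau_s = 689573.18            # tau_* from eq. (14)

def rhs(x, y):
    w, tau = y
    arg  = eps * (0.5 * (tau + tau_s) * w**2 + x * w + sign)
    dw   = -(2.0 * a / (eps**2 * x**2 * w)) * np.sin(arg)**2   # eq. (12)
    dtau = 1.0 / w                                              # eq. (13)
    return [dw, dtau]

x_report = [-50000.0, -500.0, -200.0, -100.0, -80.0]
sol = solve_ivp(rhs, (x0, x_report[-1]), [w0, tau0],
                t_eval=x_report, rtol=1e-10, atol=1e-14)
for x, w, tau in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"x = {x:9.1f}   tau = {tau:.4e}   w = {w:.4e}")
```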
As for the analysis of the deuteron movement with the help of the autonomous equation, the calculations lead to initial velocities v₀ exceeding the above-mentioned numbers, although the general picture of the motion is the same. But the autonomous equation is interesting because, in the region of those values of x, \dot{x} for which the product x\dot{x} is small in modulus, it is possible to replace sin(x\dot{x}) with x\dot{x} and to replace eq. (11) under ε = 0 with the simplified equation (describing the deuteron motion from an initial point x₀ > 0 to the center)

\ddot{x} = a\,\frac{(x\dot{x})^2}{x^2} = a\dot{x}^2

This equation has a very simple analytical solution. Omitting the very simple calculations, we present the final formulas. Let us take the following initial conditions: x(0) = x₀ > 0, \dot{x}(0) = −v₀ < 0. Then

\dot{x}(t) = -\frac{v_0}{1+av_0 t}, \qquad x(t) = x_0 - \frac{1}{a}\ln(1+av_0 t)

It follows from these formulas that the velocity of a particle moving in accordance with the initial equation never turns to zero, and at

t = t_* = \frac{\exp(ax_0)-1}{av_0}

x(t_*) = 0, i.e. the particle reaches the center of the nucleus, its velocity at this moment being

\dot{x}(t_*) = -\frac{v_0}{1+av_0 t_*} = -v_0\exp(-ax_0),

so that it passes through the nucleus and moves further.

For example, let a = 0.0144967, x₀ = 1000 (≈ 10⁻¹¹ cm), \dot{x}(0) = 5.37×10⁻¹⁰ (≈ 16 cm/s). Under such initial data the product x\dot{x} = −0.0000537, so it is quite possible to replace sin(x\dot{x}) with x\dot{x}. In this case

t_* ≈ 2.3×10⁷ (≈ 8×10⁻¹⁸ s),   \dot{x}(t_*) ≈ −29.9×10⁻¹⁷ (≈ 9×10⁻⁶ cm/s)

(the Casimir force discovered experimentally long ago), which the authors of this interesting work are going to exploit. It is easily seen that in this idea energy is generated from vacuum fluctuations. Our approach is altogether different. When the equation with an oscillating charge was solved for the quantum oscillator, 4 types of solutions were discovered. For us only two of them matter – "crematorium" and "maternity home". In one solution ("crematorium") the particle slowly falls to the bottom of the pit and finally turns into a "specter" (under the strict unitary quantum theory it disappears, spreads about the Universe and contributes to vacuum fluctuations everywhere). In the other solution ("maternity home") the particle can even be born of a very small fluctuation, or accumulate a sufficiently big energy. Let us underline once again that these two processes are not at all logically connected. In other words, there are systems where energy will disappear completely (electrolytic baths), or increase unlimitedly (it might be our Universe).
These figures fit well into the reasonable framework, It is the energy conservation law that presents the so the autonomous model can also be of use for the strongest impediment in all cosmological approaches. movement analysis in the problem under review. The However, universes with birth of matter have long phenomenon of particle passage through the Coulomb existed in scientific cosmology independently of us. potential accounts very well for the existence of There is known the theory of British astronomer Fred pendulum orbits in the Bohr-Sommerfeld model, when Hoyle based on the idea of continued creation of matter in states 1s,2s,3s etc. the electron passes through the from nothing. The question of whether such an approach nucleus. Such states in the strict theory and experiment is realized in nature and whether the energy emitted have no impulse, so in the Bohr-Sommerfeld model they by quasars is the result of work produced by a certain were discarded as absurd. Now they have a right to gigantic pit, is the most intriguing question of the future. existence. Further, the experimental data for angular distribution of non-elastic scattering by nuclear It is yet unclear whether the values of appearing and reactions (including reactions with heavy ions) reveal disappearing energy in these solutions are equal. But the big amplitude of the scattering forward. It is neither in the strict UQT nor in the equation with an impossible to explain such effect by the formation of oscillating charge vacuum (as a big set of random intermediate nuclei but it is may be explained from the oscillations) is needed for energy generation. Of course, viewpoint of our UQT. UQT admits of such an energy exchange with vacuum. For example, during split of a photon on a General Principles of Creating New Energy Sources semitransparent mirror, at one time both halves of the photon will not be registered and will give their energy In the ancient classical perpetual mobile idea it is to the vacuum and disappear for the observers for good, supposed that energy is just created and not taken from at another time there will appear two photons out of outside (impossibility of a perpetual mobile is the first one, and the lacking energy will be taken from vacuum. law of thermodynamics). There have appeared lately But the movement equations (4) and (5) themselves know many articles and even books dwelling on the idea of nothing about vacuum and can generate energy due to energy generation from vacuum. We are not in complete their nature (they are noninvariant relative to the agreement with many of these works, and we will dwell coordinate translations) and the conservation laws we only on some of them, which, in our view, can be of are so accustomed to do not exist for them. interest. One of the main ideologists of this completely new sphere in science are Daniel C. Cole and Harold E. Let us remind you once again that the latter follow from Puthoff, and their first serious work entitled «Extracting the Newton equations, and the Newton equations result energy and heat from the vacuum» was published in from averaging by a big number of events, while for Physical Review E, vol. 48, #2, (1993). In this work individual events of small energies no conservation laws authors use the Casimir forces [60] making them in quantum physics exist. produce useful work. 
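For orientation only (the formula is not quoted in the original): the standard textbook result for the attractive Casimir pressure between two ideal, parallel, perfectly conducting plates separated by a distance d is

\frac{F}{A} = \frac{\pi^{2}\hbar c}{240\,d^{4}}

For plates 1 µm apart this is about 1.3×10⁻³ N/m², which gives a sense of the scale of the vacuum forces that proposals of the Cole–Puthoff type set out to exploit.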
The appearance of such forces in vacuum is understandable intuitively: if in a stormy sea In other words, it can be said philosophically that a we put vertically into the water two big parallel plates, motion of a small wave packet, once started, will give on the outside part of these plates the waves will hit birth to other movements (energy) and, consequently, them at random, and between the plates there will be to matter. Since most various and breath-taking no waves. Then, the hitting of the waves outside the speculations are possible, up to the creation of a plates will produce a gravitation force between them universe, we will stop here. Page 267 Thus, the generated or disappearing energy in our sign. Form (15) is convenient for mathematicians, but approach can be manifest not only in the changes of for our purposes we will present equation (15) in a the particle velocity during movement in a certain different way. If both parts of equation (15) are potential, but in the appearance or disappearance of multiplied by electric charge q, on the right we will get the particles themselves as well. A change of the the integral of force qE by way dl, i.e. work for moving particle velocity in movement is most easily the charge along a closed loop contour. It is well known discoverable, and it is the velocity increase that can be that this value is zero. used for generation of heat or electric current. There can be energy systems, which exploit the fact itself of charge oscillation and the consequence of it. It is very ∫ qEdl = 0 (16) probable that these phenomena, contradicting the most l fundamental laws of modern science, have been long If this value were not zero, an energy source could be discovered and even applied. But these are the very created. For this purpose a charge should be moved in phenomena that are the easiest to be exploited at the electric field E from point a, located in the high voltage first stage of development of such new energy area of the field, to point b, located in low voltage area technologies. of the field, and then back, but along another route. The values of work from a → b and from b → a would be When an energy generation mechanism is used, different, and we could extract work from the field crematorium-type solutions should be suppressed. But without making any changes in the system. When the all the quantum processes are built on the basis of charge is constant, it is certainly true, so for a elementary acts, and each of them is impossible to be macroscopic constant charge this theorem is an analog controlled separately. But if the probabilities of such of the energy conservation law. The authors have not processes are controlled, they, being multiplied by the come across such an interpretation of the energy great number of par ticipants in the process, conservation law in other works. If the charge is automatically become macroscopic variables of microscopic, then in the UQT it changes, depends on quantum kinetics, and the process itself becomes time, coordinate and velocity, so work from a → b and possible. It can easily be achieved, if process participants with correlated initial phases are selected. from b → a will be different, in this case work can in principle be extracted from the field without any Let us remind you that the Newton and relativistic changes made in the system. 
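The argument about the circulation theorem is easy to state numerically. The toy script below (my own illustration, not from the paper) takes the electrostatic potential of a fixed point source and carries a test charge from a point a to a point b and back. If the test charge is constant, the net work over the closed path is zero to rounding error; if the charge is allowed to be small on the return leg — as the picture of an oscillating charge suggests — the two legs no longer cancel and a net amount of work per cycle appears.

```python
# Toy illustration of the circulation argument: work around a closed a->b->a
# path for a constant test charge versus a charge that differs on the two legs.
import numpy as np

def potential(r):
    return 1.0 / np.linalg.norm(r)        # fixed point source at the origin

a = np.array([1.0, 0.0])                  # high-potential point
b = np.array([5.0, 0.0])                  # low-potential point

def work(q_forward, q_back, n=20000):
    """Numerically accumulate the work done by the field along a->b, then b->a."""
    w = 0.0
    for q, start, end in [(q_forward, a, b), (q_back, b, a)]:
        pts = np.linspace(0.0, 1.0, n)[:, None] * (end - start) + start
        v = np.array([potential(p) for p in pts])
        w += np.sum(-q * np.diff(v))       # field does work -q*dV on each step
    return w

print("constant charge  :", work(1.0, 1.0))    # ~0: conservative field
print("reduced on return:", work(1.0, 0.05))   # nonzero net work per cycle
```

The second number is simply (q_forward − q_back)·(V(a) − V(b)); the whole question, of course, is whether an individual microscopic charge really can be transferred while it is effectively small, as the UQT asserts.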
classical mechanics follow from the strict UQT, while the Newton movement equations with the resultant Discussion of Experimental Results energy and impulse conservation laws follow from the oscillating charge equation with averaging by the Let us now get down to explaining some very unusual particle ensemble composing a classical body (material experimental results, which the authors have nothing point). But these conservation laws are nonexistent for to do with, and which they sometimes regard rather individual microparticles in our theory, and they appear skeptically. The point is that the sphere of new energy only in case of averaging by the ensemble of particles. sources is the headache of all the human civilization, Thus, if the energy-generating processes are and in this sphere, like nowhere else, the dividends can accumulated, and the processes where energy be exorbitantly high, and for this reason there are in disappears are suppressed, a classical perpetual mobile this sphere a lot of swindlers (even among the can be created. theoreticians) and simply erring people. The official science of the world does not so far believe in such But the UQT and the oscillating charge equation have research, but the most suspicious fact is the great other differences not only from the equations of classical multiplicity of such works. The authors are not inclined mechanics, but also from some equations of to regard all these people as swindlers or erring, electrostatics and electrodynamics. because the UQT can offer a beautiful and simple interpretation of certain phenomena. There is a fundamental theorem of circulation for the electric field. Let us dwell on it in more detail. Let us There are strange plants with the efficiency over 100%. have a vector field E, which can be an electrostatic or a They are even manufactured in small quantities and are gravitation field. rated among energy-saving devices already termed over unities. Japanese researchers take these problems very E = P(x, y, z )i + Q(x, y, z) j + R(x, y, z)k seriously, and the leading role in studying this problem belongs not to the USA, but to Japan, which even Line integral finances many US institutes in this framework. The total Japanese expenditures for this research exceed Γ = ∫ (P dx + Q dy + R dz ) = ∫ Edl (15) $200.000.000 a year. It can be forecast that with the l l Japanese mentality and the state policy of exporting is called circulation of vector field E by contour l. Of not natural resources, but superhigh technologies and course, circulation depends not only on E, but also on intellect, Japan will find itself among the leading the passage direction accepted in contour l; by changing countries early in the 21st century. We think that our the passage direction we will change the circulation readers will not be surprised to hear that Russia has Page 268 not allocated a single cent for this program, and all chemical processes in it had long been over. The origin research was made on pure enthusiasm. of such an amount of excessive energy is absolutely incomprehensible in the framework of conventional In the USA such works do not get official governmental science, for they cannot be accounted for either by support either (like, for example, the dying out hot nuclear or chemical reactions, or by phase passages. nuclear fusion problem), but a great number of private At first the authors of this experiment supposed nuclear firms and individual businessmen are conducting large- fusion reactions of the D+D type. 
At our request, A. scale research. The following US journals are devoted Samgin replaced heavy hydrogen (deuterium) during to the subject: Journal of New Energy, Infinite Energy, ceramics production with ordinary hydrogen. If the Cold Fusion, New Energy News, Fusion Facts, and NET- effect of such huge energy generation was connected Journal (Switzerland). with the nuclear D-D reactions, all the anomalous heat effects would have disappeared, but they persisted. Switzerland, Italy, Germany, and France are also among After such a large quantity of energy was generated, the countries where the new energy problems are the tablet disintegrated into powder. seriously researched from the cold nuclear fusion point of view. These effects can easily be explained by UQT from the harmonic oscillator theory point of view. When the tablet A very young sphere of power engineering has emerged is agglomerated, there remain in it some caverns of a and is quickly developing, which researches many new size of hundreds Angstrom units. When direct or energy sources. In future those new energy-saving alternating current flows through it, the protons and sources will first be used, which will considerably differ deuterons in their movement (there are few electrons from the existing ordinary energy transformers in that in such ceramics) get into these caverns, and a process they will generate additional energy that can be used can start which is described by the “maternity home” in the interests of the mankind. The development of solution. A particle accumulating energy, oscillates in civilization will then be limited not by long-expected such a pit, and finally the energy will be sufficient both reduction of natural fuel resources, but by heat pollution for heating and for destruction of the pit walls (tablet of the environment. turning into powder). The same processes seem to be taking place in a palladium electrolytic cell with heavy Let us enumerate just a few of the new energy water, and in a nickel electrolytic cell with ordinary directions: water, which accounts for anomalously large heat 1. The Patterson fuel cell (CETI). generation, not related to nuclear processes. 2. Supermagnet-superengines of Takahashi, Aspden and Adams It would be good to verif y experimentally the 3. Swiss plant Testatika. dependence of the tunnel effect on the initial phase. 4. Engines operating on water. But it seems us that it is more important for our 5. Hypersound Griggs pump, the Potapov and opponents, since both cold nuclear fusion (CNF) and Schaffer heat generators. discovery of nuclear transmutations (which, from the 6. Schoulder and Fox cluster systems. point of view of modern science, are even more absurd 7. N – machines of Farade, Bruce de Palma, Newman, than the existence of CNF) evidently cannot be Searl, Tewari, etc. accounted for in any other way. Besides, such a direct 8. PAGD reactor of Canadian researchers P Correa and experiment is of a fundamental value. There are today A. Correa. a lot of people and groups in the world, who pin great hope on exploiting the nuclear transmutation This list can be complemented with the surprising phenomenon for the purposes of processing and experimental results received by physicists A. Samgin recycling of nuclear wastes, and the question of and A. 
Baraboshkin (Russia, Institute of High- industrial generation of tritium for military purposes Temperature Electrochemistry under the Russian using CNF methods was under consideration in Los- Academy of Sciences, Ekaterinburg) [24,25] and Alamos. Internet magazines are full of such information. T.Mizuno [26] (Japan). They appear to have used, totally We are not giving Internet addresses here, because independently of each other, special proton-conducting everything is constantly changing in this live system. ceramics, which, when electric current runs through them, generate a thousand times more heat energy than Let us analyze some of the above-mentioned devices. the electric energy consumed. In some experiments by The first, the oldest and the most mysterious information T.Mizuno this value even exceeded 70000(!). T. Mizuno was information about internal combustion engines in a personal talk with one of the authors of this report operating on water. said that he feared very much the radiation sickness. But no α , β , γ radiation or nuclear debris was found, Let us give just one example. When we were students, one of our teachers, the late Professor G.V. Dudko (1959) and the nuclear processes are not responsible for such told us that in 1951 he had participated in the testing energy generation. Such proton-conducting (or, to be of an internal combustion engine [39,55-57]. The device more exact, deuteron-conducting) ceramics was made represented a hybrid of a diesel and an ordinary using the power metallurgy methods by agglomeration carburetor engine, where a gas of petrol was needed to under high temperatures. In other words, all the start it and then ignition was switched off, and an Page 269 ordinary fuel pump sprayed into the cylinder warmed experts note that the generated heat is of mysterious up and strongly compressed water with special origin and cannot be explained by chemical or nuclear additives (which the inventor himself put into the tank reactions, as well as by phase passages. The US ABC in small quantities, and which, as we now understand, TV showed on February 7 and 8 1996 in the «Nightline» represented the principal secret). The engine was and «Good Morning America» cycles of programs about installed on a boat. The researchers were riding for two the development by Patterson of a new energy source days in the Azov Sea, and only water vapor was the generating hundreds of times more energy than it engine exhaust. Professor Dudko himself drew the water consumes. The mysterious nature of the generated heat fuel overboard and poured it into the tank. They needed was again underlined. It is interesting [34] that Motorola much water, several buckets a day, but there was no tried to buy the CETI patent from its authors for shortage of it… The question of why, if everything was $20.000.000, but met with a refusal. We are sure that so great, these engines are still not in use, can occur Motorola had invested a certain amount of money into only to a person who has never lived in Russia. the study of this problem before making such a serious offer. 
All that happens within the Patterson element has From the point of view of the solutions of the harmonic nothing to do with nuclear reactions (although Patterson oscillator problem, the following theoretical possibility told one of the authors that he was of a different view), exists [40,44,47,55-57]: if water with the necessary and, in our opinion, can be accounted for by exactly the additives (which, evidently, represent the secret of many same processes as were described above for proton- invented engines operating on water) is compressed conducting ceramics. and sprayed into the cylinder, each drop of water, when it gets into the cylinder after being compressed, will The sonoluminescence phenomenon, when certain start dilating and will pass by inertia the equilibrium liquids start shining if weak ultrasound is run through position. As a result, caverns (empty volumes) can be them, also looks very mysterious. No satisfactory formed in it, with a size of several dozen of Angstrom explanation has so far been found for this experimentally units. If a free proton (or some other microparticle) gets proved phenomenon, discovered by Moscow University into such a cavern in the required phase (it is supposed Professor S.N. Rzhevkin in 1933. As Nobel Prize winner that the task of the additive is exactly this), the Professor Yulian Schvinger said, “it has no right to exist, “maternity home” solution will be realized and some of but it does exist” [38]. This phenomenon can also be the drops will explode… Later we heard and read many explained from the above-mentioned positions. times about various Russian inventors, who had successfully created and tested engines operating on There are also heat generators (Yu. Potapov [21-23], ordinary water with some mysterious additives. Moldavia, James J.Griggs [28], and Huffman [29], Schaffer - USA). In them many cavitating bubbles are Of course, the possibility of catalytic water formed during circulation of ordinary water, in which decomposition with small energy consumption before excessive energy is generated, with the output to input spraying into the cylinder is not at all excluded. There energy ratio approaching 1.7. In these experiments and are films and information in Internet about testing of plants no chemical or nuclear reactions can take place, cars operating on water, which is catalytically (with and thousands of Potapov’s heat generators have been small energy consumption) decomposed into oxygen manufactured for heating homes. In such devices (they and hydrogen. Such power engineering would be are very different in appearance) a great number of ecologically absolutely clean, and the only restriction cavitating bubbles are created in a flow of water. This would lie in heat pollution of the environment. is achieved either with the help of interrupting the water flow with a special rotor (J.Griggs, Huffman, Schaffer), An ideal solution for the motor transport could also lie or the water flow is twirled by a special helix and then in use of some new types of electric energy generators. enters the zone of sharp dilation, where cavitating The UQT even admits of the possibility, which was long bubbles are formed (Yu. Potapov). In general, it should observed in the experiments of Nicolas Tesla and in be said that cavitation remains a great puzzle for those made by Canadian physicists the Correas, who theoretical hydrodynamics and science. 
For example, even received a patent for a system generating energy forged multi-ton screw propellers of big nuclear from vacuum fluctuations (as they believe) [45]. The submarines under certain operation modes and readers could have got acquainted with our detailed geometry of the surrounding forms can be destroyed theory of these processes in [46]. But the ideal system by cavitation within only a few hours. It happens for the automobile would certainly be Testatika. because of huge energy generated in cavitating Any imagination will be amazed at the thermal cell CETI created by James Patterson, USA [27], in which takes Under certain values of phase and energy, a particle in place the electrolysis of specially made nickel balls in the pit, each time reflecting from the walls, will have a ordinary water. The US paper «Fortean Times» ¹ 85, 1995, greater velocity than that of a falling particle (this is wrote about it: “December 4, 1995 will go down into within the uncertainty relation), and after many history. On this day a group of independent experts from reflections will accumulate a fairly big energy which 5 US universities was testing a new source of energy will be generated in the form of heat or bremsstrahlung with a stable output heat power of 1.3 kWt. The when the pit is destroyed, and, finally, the energy of consumed electrical energy was 960 times smaller”. All the oscillations of such a particle accumulated in the Page 270 pit will always be transformed into heat in an ordinary Chinese physicist Swe-Kai Chen from Taiwan in his solid body or a liquid. This physical idea immediately experiments [48] stably observed the same phenomena. accounts for both sonoluminescence (although for It is quite easily explained: a particle with a velocity sonoluminescence in general this mechanism is less exceeding the most probable velocity in this distribution primitive), and energy generation in proton-conducting gets into caverns on electrodes and after some ceramics, nickel during electrolysis in ordinary water oscillations reduces its velocity, which becomes smaller (CETI element), and water bubbles of commercial heat than the most probable one, and then the particle leaves generators. The theory predicts that the samples should the cavern at a small speed, and the same process can be fissure due to increased pressure on the walls of the happen to another energetic particle. This leads to the potential pit with the growth of energy, which fact also cooling of the cell in the case of such mass processes. takes place, since both ceramic samples and nickel balls in the CETI element finally disintegrate. It is evidently The problem of ferromagnet magnetization (the Easing for these reasons that any metal containing much model) can also be reduced to the orientation of a hydrogen in its grid becomes fragile and is quickly magnetic doublet by the external magnetic field, and destroyed, which fact is well known to engineers. then it is essentially the harmonic oscillator equation with a slightly different return force ( F = 3 ) and all The small number of experiments does not so far allow r for making concrete conclusions as to what particles the conclusions made earlier remain in effect. That is generate energy in pits (microbubbles). Besides, for at why magnetization should also produce energy least an electron to disappear a pit of about 0.5 MeV is generation effects. This proved to be true. 
For the required, while in a solid body the pits are about several general public everything began on May 17, 1996, when eV deep, and what seems to happen is only loss of Frode Olsen from the research group “Free Energy” kinetic energy, and not disappearance of particles. The showed on the Norwegian TV (TV2) a surprising film fact that this process requires very deep potential pits, about a “dynamic sculpture” made by artist and which do not exist in a solid body, does not change the sculptor Reidar Finsrud from Skaarer, Norway. The essence of the matter. author of this “dynamic sculpture” had no idea about physics and had been making it for 12 years. Einstein’s Of course, under ordinary conditions, both competing idea of how discoveries are made conveniently comes solutions usually take place at once: “maternity home” to mind at this point: everyone knows that a certain and “crematorium”, which compensate for each other thing cannot be done, but there is a man who does not and the energy is preserved. For energy generation, the know it, and it is he who makes the discovery. “maternity home” solution should prevail. Both these processes take place simultaneously and compete with This “dynamic sculpture” accompanied by an each other, but, formally, they are not connected in space “explaining” poster «perpetual mobile» represents an and time. The complexity of the energy generation iron well-polished ball with a diameter of 2.7 inches problem lies in suppressing the “crematorium” solution weighing about 2 pounds. The ball is rolling along a by a careful selection of different parameters and circle on close guides resembling two parallel skids with promoting the “maternity home” solutions. So far we a diameter of 25 inches past the poles of three cannot say for sure what the optimum dimensions of permanent magnets, where it is magnetized. In the area such cavitating bubbles are, or which object oscillates of three permanent magnets three more mobile magnets in them, because for this purpose special experiments are installed on special mobile 5-inch long levers, and are needed, which so far have not been staged. these magnets, when the ball passes them, are slightly inclined (due to the ball gravitation) and, after the ball Of course, the inexorable Robber in the form of the passes them, are raised by the holding springs (sway Carnot principle stands in the way of transformation of like yokes). The ball makes a complete turn in 3 seconds. the heat generated in a heat generator or ceramics into All this magic (they say the ball had been rolling along electrical or mechanic energy. In accordance with this the close contour for more than a year) does not have principle, all mechanic or electrical energy can be any sources of energy and is installed for everyone to transformed into heat, but the reverse process is always see in a Norwegian picture gallery on a special stand connected with big losses. covered with a glass cover. The authors only saw a good TV film about this installation and were mostly surprised If there are experiments and plants in which energy at the fact that the ball had not stopped during generation contradicting the conventional conservation uninterrupted shooting (about 20 minutes). laws is discovered, there should also exist opposite ones, where energy disappears completely, i.e. the We are well acquainted with circus tricks, but it is “crematorium” solution prevails. It proved to be true. 
absolutely incomprehensible how such a trick could be There are such modes during electrolysis in electrolytic staged using some secret methods. It is clearly seen baths, under which the temperature of the solution in that the ball in its movement always partially transfers the bath is strongly reduced for unaccountable reasons, its energy to the three long swaying pendulums, but and this fact has no explanation at all. This phenomenon there is no way to use them for pushing the ball and long ago was noted by attentive industrial engineers, making up for friction, this being the only trick that and it is called the “bath-freezing” mode [49,50]. could, in our view, be applied here. All the rest is clearly visible and contains nothing suspicious. Page 271 Let us estimate the generated energy. At an initial speed The scientists of the older generation will remember of about 1m/s the ball stops after 30 seconds, if all the that a similar toy was shown in the 30’s to David Gilbert, magnets are removed. It means that the energy who said it was the most interesting thing he had ever consumed in 30 seconds is about 0.5 joules, or 1/60 Watt. seen. A question arises as to why it has not yet been The total energy generated in a month is 43,200 joules, realized. We do not know a physical-mathematical and this is huge energy, much greater than that of a answer to this question, and it is not our task to analyze good shell! the social reasons of this phenomenon. Japan has a different mentality, and there is a governmental program It is clear (if the word is relevant here at all) that when for generating energy from permanent magnets. the ball is approaching the permanent magnet and the Takahashi [51] even seems to have made an electric process of magnetization is going on, it is accelerated, engine with an efficiency of up to 318%! but when it mechanically gets past the equilibrium position and, moving away from the magnets, becomes Still more mysterious is the long-known problem of demagnetized, the gravitation (which now starts energy shortage in many biochemical reactions with slowing the ball down) will be slightly less than it was ferments (enzymes). For example, in the well-studied at the moment of the ball’s acceleration. This small reaction of disintegration of polysaccharides in the difference in forces provides for small positive work to presence of lysozyme the following happens: a overcome friction. Energy generation and similar things polysaccharide molecule gets into a special cavern in a during magnetization had been predicted by one of the big lysozyme molecule, and some time later its debris authors in magazines Infinite Energy vol.1, No.2, p.38, are thrown out of it (Fig. 6). The broken binding energy (1995); Proceedings of the ICCF5, p.361, April 9-13, of the polysaccharide is about 3 eV, while the energy of (1995), Monte-Carlo; Cold Fusion, No 11, p.10, (1995); the heat movement is only 0.024 eV. From the standard Chinese Journal of Nuclear Physics (vol. 19, ¹2, 1997). science point of view, it is absolutely unclear where The quantum-mechanic processes are very complicated, lysozyme takes the energy to break the polysaccharide. but some of them can be understood. No satisfactory mechanism for explanation of such reactions (and they are very numerous) was found, and All keen physicists were quick to understand it, and J. all this was “swept under the carpet”, as physicists Naudin in France made a similar, but much simpler say. 
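Returning for a moment to the rolling-ball estimate quoted a little earlier, the energy bookkeeping is easy to check. The short script below is my own arithmetic, using only the figures given in the text (a 2-pound ball at about 1 m/s coasting to rest in 30 s once the magnets are removed); it reproduces the quoted ~0.5 J, ~1/60 W and 43,200 J per month.

```python
# Quick check of the rolling-ball energy figures quoted above
# (2-pound ball, ~1 m/s, coasting to rest in 30 s with the magnets removed).
m      = 2 * 0.4536          # 2 pounds in kg
v      = 1.0                 # m/s
t_stop = 30.0                # s

e_kin   = 0.5 * m * v**2     # ~0.45 J, i.e. "about 0.5 joules"
p_loss  = e_kin / t_stop     # ~0.015 W, i.e. roughly 1/60 W
month   = 30 * 24 * 3600     # seconds in 30 days
e_month = (1.0 / 60.0) * month   # energy at 1/60 W sustained for a month

print(f"kinetic energy     : {e_kin:.2f} J")
print(f"friction power     : {p_loss:.4f} W (1/60 W = {1/60:.4f} W)")
print(f"energy over a month: {e_month:.0f} J")   # 43,200 J
```

The 43,200 J figure is thus just 1/60 W sustained for 30 days, i.e. the friction loss the device would have to make up continuously.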
The UQT provides for a completely new look at the experiment. A ball of a soft magnetic material is catalytic processes, which has an incomprehensible swaying along parallel U-formed skids in a system of source of energy reducing the molecule activation four magnets. Near the bottom of the U-form there is a energy. From our point of view, this process is a variant small smooth step. It may have been made to make the of the “maternity home” solution for oscillator. magnetization and demagnetization processes different in time, which is very important. If there are no magnets, nothing interesting happens and oscillations are quickly (in a few seconds) damped. If the magnets are present, oscillations go on up to 3 hours 27 minutes. It appears that in this case the author failed to find good material and parameters of the plant, so friction was not compensated completely. In all these experiments demagnetization of permanent magnets does not happen, because the experiment is repeated many times Fig.6. Break of polysaccharide molecule by lysozyme. with the same results. The most surprising thing is that in all the cases And now a few vague words about demagnetization generation of excessive energy cannot be accounted for processes. During magnetization of the ball, the by chemical reactions or phase passages. If nuclear magnetic moments of its atoms are oriented (like the reactions do sometimes happen (which should not be hands of a compass) along the field lines. When the according to modern science), they can account for only ball leaves the magnetic field area, the atom magnetic a hundredth or a thousandth share of the generated heat moments are disoriented under the influence of the heat energy. There is no doubt that all these are effects of motion, and it becomes demagnetized. In the unitary new physics, for in the framework of the old physics all quantum theory the share of the oriented magnetic this is simply unexplainable. moments in the external field can be bigger than in the conventional quantum mechanics (the “maternity But the existence of a plant that produces out of nothing home” solution), and the ball gravitation can be stronger about 10 kilowatt of direct current electric energy with due to it. Disorientation of these moments happens a voltage of 300 V seems nearly impossible. The story similarly in both theories. It seems to be for this reason was described by one of the authors in three different and due to the difference in magnetization and magazines, and we will just give a brief resume [52- demagnetization time that a difference in magnetic 54]. forces occurs when the ball approaches the magnet or moves off from it. In summer 1999, at the invitation of Swiss physicists (Director of the Institute of New Energy Sources in Egerkinhen Adolph Schneider), one of the authors Page 272 visited several research organizations. It is interesting (is transformed into heat), and all this can finally lead that there is such an institute in small Switzerland, and to overheating of the environment. They absolutely do there is none in big Russia. The purpose of the invitation not believe (and not without grounds) in the capability was very simple: to explain the operation of a plant of the mankind as a whole to negotiate reasonable use generating energy out of nothing, i.e. a perpetual mobile. 
In Switzerland such plants are called Testatik Machine M/L Converter, from the religious group «Methernitha» (address: Methernitha, CH-3517 Linden, Switzerland, phone: ++41 31 97 11 24).

Such machines exist today in the Swiss town of Linden near Bern. Part of the town belongs to the religious Christian Community, which is fenced and heavily guarded. There are about 250 members of the Community; many of them are physicists, graduates of the universities of Geneva, Lausanne and Bern. It is not only a research laboratory: they have their own TV center, a film studio, a small furniture plant, shops, garages, residential blocks, and support services. You will probably have guessed that this community does not consume any energy, and this is the most accurate fact in the whole story, for inquisitive journalists have found out that no money from them comes to the accounts of the local power station, which provides power for the whole town. In a cellar of one of the houses they have a power station that produces energy… out of nothing. The author of this inexhaustible source of free direct-current energy is the Swiss physicist Paul Baumann.

Let us briefly describe these fantastic plants. They come in four types (sizes), with capacities of 0.1, 0.3, 3 and 10 kW. Externally, a plant closely resembles the standard electrostatic machine with Leyden jars often used in physics demonstrations. There are two acrylic disks with 36 pasted narrow sectors of thin aluminum, which rotate in different directions. In the first samples ordinary gramophone records were used for the disks. The machine is started by pushing the disks in different directions with the fingers; the rotation speed is 50-70 turns per minute. After the start the disks rotate independently and can be easily stopped by hand. The direct-current voltage is about 300-350 V, and the current is up to 30 A. The mechanical energy used for rotation (only 100 mW, according to measurements made by the Austrian professor S. Marinov) is hundreds of thousands of times smaller than the generated electrical energy. The biggest plant, for 10 kW, has plastic disks with a diameter of over 2 m; the smallest one, 20 cm. The weight of the plants is small enough, the 3-kW machine weighing about 20 kg.

The charge separation process (which consumes energy!) practically does not slow down the disks. Connection of a load in the form of a 200-W bulb does not change the rotation speed either. No cooling or heating of the air or machine parts takes place during long operation; only a slight smell of ozone is felt. The system is noiseless, compact, environment-friendly, and can be installed anywhere.

The Community management thinks, and quite rightly, that the wide spread of such systems in the world would lead to a heat explosion, because all the energy generated by mankind finally finds itself in an energy dump. They fear the spread of this invention, and they think that the harm caused by it will be greater than from nuclear, bacteriological, or conventional weapons. Their main idea for mankind is to live in balance with the environment and to make full use of the energy of the wind, the sun, the water, etc. For this reason the Community is heavily guarded, and they are not going to donate their main discovery to mankind.

Professor Stephan Marinov visited the Community twice (in July 1988 and in February-March 1989). He was even given such a plant, with a capacity of 100 W (300 V, 0.3 A), which he studied in his laboratory. As far as we now know, even the inventor of this machine does not fully understand its operation principle, so he contacted Marinov out of the sheer curiosity of a scientist.

In 1989 Professor Marinov published the book "Thorny Path to Truth – Documents of Violation of Conservation Laws" with International Publishers East-West. The book contains a lot of photos, a measurement report, and a description of the plant. He also organized a research group called "Free Energy" within the Community (Methernitha Group Stephan Marinov Free Energy).

There are very interesting words in this book: "I can state without any doubts that this machine is a classical perpetual mobile in its pure form. After the initial push, it goes on rotating by itself for an indefinitely long time, constantly producing electrical energy in the amount of 100 Watt… It is still unclear, however, how it all can happen…". As far as we know, nobody has managed to build a similar plant elsewhere.

We have an approximate idea of how the plant operates. The idea is as simple and ingenious as that of the wheel, which is absent from the surrounding nature, so the inventor could not borrow the idea. We will just show that the existence of such a plant is in full conformity with the UQT. It is natural that the plant operates on the basis of the charge separation principle. Let us have two metal spherical surfaces with a hole, isolated from the earth and from each other. If, with the help of an insulated stick, we transfer the first electron from Ball A to the internal surface of Ball B through the hole, a difference of potentials will occur; and if we transfer the second and the subsequent electrons, Ball A will attract the transferred charge, while Ball B will repulse it, and energy will have to be spent during the transfer of charges (Fig. 7).

Fig. 7. Work for moving the charge depends on the method of movement and route.

Let us remind you that under the existing circulation theorem (16), the charge transfer work will consume the same amount of energy as will later be generated during the passage of the electric current resulting from charge separation. But in the UQT the circulation theorem (16) for an individual elementary charge is not valid. Thus, we can select the time and route along which the charge will be transferred in such a way that the charge value during the transfer will be close to zero and, consequently, the electrostatic force and the charge transfer and separation work will be close to zero too. For example, instead of selecting the route, you can wait for the charge to be reduced to zero and then transfer it quickly, and when the charge increases, immediately stop the transfer and fix the charge. Or you can duly select the route and velocity. There are many options.
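For contrast, the circulation theorem of conventional electrostatics (the "(16)" is an equation number from the original article, not reproduced here) can be stated in two lines. The electrostatic field satisfies

    $\oint_C \vec{E} \cdot d\vec{l} = 0$   for every closed path $C$,

so the work $W_{A\to B} = -q \int_A^B \vec{E} \cdot d\vec{l}$ needed to move a charge $q$ depends only on the endpoints, and the net work around any closed cycle of transfers is exactly zero, whatever the route or timing. A device that extracts net energy by clever routing therefore requires this theorem to fail for individual charges, which is precisely what the UQT claim above asserts.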
This was evidently realized by Paul Baumann, who is so far practically unknown to official science, and who can find consolation in the idea that the inventor of the wheel will never be known at all. The problem of the simple arrangement of all this is just a matter of technique.

You cannot help thinking that all this might be just tricks. The history of the perpetual mobile abounds in evidence of downright swindling and frauds, and not a single positive result before; who can guarantee that the information given above will not prove to be another swindle?

First of all, if all people always piously believe in the unquestionable stability of the energy conservation law, there will never be any progress in this sphere, and it is then unexplainable how man got down from the palm tree at all. Secondly, to justify the proposed rebellious position, the following idea comes to mind: if 30 years ago somebody had told the authors (who were then already professors) that at the beginning of the next millennium they would deal in such research, it would have seemed not only a silly joke, but an absolutely impossible thing as well. But, as Voltaire said, "He is silly who does not change".

In conclusion we wish to express with certainty that the time of theoretical recognition and of practical universal use of overunity devices will come soon and become the epoch of a new energetics. The people of our planet will regret that so much oil, coal and gas was burned, causing terrible ecological losses.

The authors thank astronaut V.A. Dzhanibekov and Professor A.P. Buslaev.

KOZYREV-DIRAC EMANATION. METHODS OF DETECTING

Dr. Ivan M. Shakhparonov
125252, Russia, Moscow, Pestchanny Pas. 20-1-33
phone/fax 8-095-198-2012

In this paper the authors show the possibility of creation of a new kind of emanation. A magnetic monopole beam can be made in space as a result of the focusing of some natural substance. Special devices based on Moebius band elements provide the given focusing. This emanation is able to magnetize graphite and organics, decrease radioactivity, and influence oncology diseases. The time reverse technology is realized in such devices.

Experimental data which allow making a conclusion about the existence of a previously unknown emanation are presented in this report. Here are descriptions of experiments and methods of measurement. The effects of interaction between the new type of emanation and matter have been obtained.
Till the present moment theoretical physics didn't pay attention to nonoriented configurations and spaces. The reason for this situation is the fact that, from the philosophic point of view, it is not possible to determine and locate the area of the nonoriented topological structures in our world. We (eight scientific teams) joined our forces, and we needed more than 30 years to solve this problem by an experimental approach.

The fundamental tenet of the causal mechanics developed by Kozyrev can be formulated as follows. There are two types of energy in the Universe. The positive, or «right», energy acts as a factor of entropy increase. The negative, or «left», energy tends to decrease the entropy, i.e. it acts as a factor which regulates the entropy increase. The «right» energy is transformed into the «left» one, and this fact may be interpreted as the course of time from the past to the future. When the energy is transformed from the «left» to the «right» form, time is reversed. Kozyrev supposed [1] that through revolving of a body together with a

6. Activation by nuclear magnetic resonance;
7. Activation by electronic paramagnetic resonance;
8. Activation by electrochemical force.

All these methods can be used as possible ways to highly efficient energy systems. Gerlovin wrote: "Usually the 1st, 6th and 7th methods of structural activation are realized in catalysis simultaneously. Besides, catalysis differs from macroscopic methods because it has the most minimal distances from the sources of activator fields to the activated molecules. And finally, an active participation of force fields created by nuclei of atoms, and a significantly more active participation of disturbed EPV, is possible in catalysis. That's why catalysis is the most effective method of structural activation. The detailed account of this method exceeds the limits of this article and we can only annotate it." [1, p.333]

The information stated above is only a small part of the questions under consideration in Gerlovin's theory of fundamental field (TFF). Other important questions should be considered with new experimental data.

References

1. Ilia L. Gerlovin, "Foundations of unified theory of all interactions in matter", 1990, St. Petersburg, Russia.
2. Alexander V. Frolov, "To the question of multipolarity", New Energy Technologies, #1 (4), 2002, St. Petersburg, Russia.
3. N. A. Kozyrev, Selected works, St. Petersburg, LGU, 1991.
4. Frolov A.V., "Practical application of time rate control (TRC) theory", New Energy Technologies, #3, 2001, p.15, St. Petersburg, Russia.

Antigravitation Force and Antigravitation of Matter. Methods of its Creation

Anatoly K. Gaponov
Sadovaya Str. 195, Novosibirsk, 630009, Russia

Part I

For a long time there has been an opinion in physics about antimatter as a possible source of antigravitation, but the research on this subject came to a dead end. The existing presentations and formulas forbade the conclusion about antigravitation, but our investigations brought us to the possibility of getting antigravitation of substance and to paradoxical conclusions concerning the following:

1. Two types of space exist: a) the Absolute space; b) the Relative space.
2.a The Gravitation Field is the relative space, which has accelerated motion directed to the center of a planet.
2.b The Antigravitation Field is the relative space, which has accelerated motion directed from the center of a planet.
3. Gravity force does not depend on the mass of a body! The mass can be presented in three versions: a) mk – mass as amount of atoms; b) Wme – electronic-atomic energy in mass; c) WmG – mechano-gravitational energy in mass.

On the basis of the stated notions we offer to revise the essence of force not only in Coulomb's formula but in Newton's formula too:

F = K · (q1 · q2) / R²,   F = K · (m1 · m2) / R²
In his works I. Newton affirmed the existence of two spaces. The Absolute space is an immovable, non-rotatable space, which represents a limited cube with our planet in the center. The Relative space is a movable space; it can move with acceleration in the absolute space.

Editor's note: In the aether conception this means two parts of aether: some part is involved in the motion with the mass, but another part of aether is immovable.

The main mistake in the search for aether consisted in the following: Michelson's experiments were aimed at searching for a relative velocity between bodies and space. However, it was the relative acceleration between bodies and space that was necessary to search for.

To quote the conclusions of I. Newton: "Body can keep the quiescent mode or mode of rectilinear uniform motion …" By this he postulates that the relative linear velocity between solids and space does not exist. But we know that for rotation it exists (the famous experiments with a revolving pail of water).

The gravitational field is the accelerated "falling" relative space, which has a spherical form. If relative space moves, the question appears: where does it move? There is only one answer: it moves in the absolute cubic space.

It is well known that mechanical energy can be brought into an electrostatic charge, where mechanical energy turns into the energy of the electric field:

F · R (mechanical energy) → (E² · V) / 2 (electrical energy)

Similarly it is also possible to insert mechanical energy into the mass of a body. As the result, the mechanical energy will turn into the energy of the gravitational field:

V · F · t (mechanical energy) → (g² · V) / 2 (gravitational energy)

(here V is the volume, in m³). Since the volume of the Earth is constant, the acceleration of the gravitational field will be increased. It should be logical to expect that, when removing the mechanical energy from mass, the inverse process will occur, that is to say, a reduction of the acceleration of the gravitational field will occur.
"In his time N. Tesla worked on a more general problem, which is the problem of matter and energy. And he found, as he believed, a new physical principle, on the ground of which he brought forth his gravitational theory, named dynamic gravitation. But he did not tell about it until almost the end of his life". [2] Really, dynamic gravitation is the energy of motion.

In Einstein's theory there is the notion of a unified and curved space in the gravitational field, but contradictions appear here, and concerning that N. Tesla writes: "Only by the presence of a force field is it possible to explain the observed motion of celestial bodies, but thus the hypothesis of curvature of space is not necessary. The whole scientific literature on this subject is futile and doomed to oblivion". [1]

The fact that gravitation is accelerated moving relative space can be proved by observation of an accelerating rocket, where the acceleration in the rocket is equivalent to the acceleration in a gravitational field. Accelerated movement of the rocket is relative, which allows speaking about either the acceleration of the rocket's motion in immovable space, or the accelerated motion of space past an immovable rocket!

Let's take the following notation: V – mechanical velocity, F – force, t – time. In this case the product W = V · F · t has the dimension of energy. Hereinafter, let's take I – strength of electric current, U – difference of potentials, t – time. Then W′ = I · U · t has the dimension of energy. Thereby W ~ W′, that is to say, the following products are accepted as equivalent:

1. V · F · t ~ I · U · t

2. In previous materials it was reported about an untraditional way of accumulating energy, under the condition in which, at constant current I, the product q = U · t depends on the amount of inserted energy in an unchangeable circuit L = const, in which the energy can be accumulated in an untraditional way not only in electric capacity, but also in inductance. Similarly the energy can be accumulated in an untraditional way in a moving body, under the condition V = const and mk = const (the product qr = F · t will depend on the inserted energy and have unlimited value). Exactly this charge will create the powerful gravitational fields.

3. Let's take: F is mechanical force, R is distance. Then the product F · R has the dimensionality of energy. For the uniform electric field the product Ea · E² · V / 2 also has the dimensionality of energy. In this case Ea is a constant, E is the intensity of the electric field, V is volume. Thereby F · R ~ E² · V. Similarly, V · F · t ~ g² · V.
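The dimensional bookkeeping behind these claimed equivalences can be checked directly (a sketch only; the constants, which the article leaves out, matter for anything beyond dimensions):

    $[v\,F\,t] = \frac{\mathrm{m}}{\mathrm{s}} \cdot \mathrm{N} \cdot \mathrm{s} = \mathrm{N\,m} = \mathrm{J}, \qquad [I\,U\,t] = \mathrm{A} \cdot \mathrm{V} \cdot \mathrm{s} = \mathrm{W\,s} = \mathrm{J}.$

Likewise, the energy stored in a uniform electric field of intensity $E$ filling a volume $V$ is $\varepsilon_0 E^2 V / 2$ in SI units (the article's constant Ea plays the role of $\varepsilon_0$), and the Newtonian field-energy density associated with a gravitational acceleration $g$ is $g^2 / (8\pi G)$, so $g^2 V$ acquires the dimension of energy only once the factor $1/(8\pi G)$ is restored. Dimensional agreement of two products is, of course, far weaker than physical equivalence.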
We have received correlations of resemblance for heterogeneous physical values, on the ground of which the following physical experiments can be proposed.

The anti-gravitational field is the relative space which has accelerated motion from the center of a body (for example, a rotating cylinder, an Earth satellite, etc.). But it is possible to create a model of anti-gravitation without rotations. On the basis of the analogy between mechanical and electric energy one comes to the conclusion that gravity between bodies does not depend on the mass of the body, but on the mechanical-gravitation energy contained in this mass, which can be contributed to it or extracted from it. Therefore, this is the internal gravitation energy.

Part II

The "Mass" can be considered as a measure of three different conditions of matter: mk, as a measure of the amount of atoms, representing a "framework" or "container" in which two types of independent energies are concentrated; Wme, as a measure of electric energy, which can be either accumulated or extracted, and which has a "compressed" form; and finally WmMG, the mass which can be a measure of mechanical energy, which can be either inserted into or extracted from the matter (it can be identified as the gravitational mass). This gravitational mass is what we put our attention on, because it affects gravitation and is able to create antigravitation.

An example of the accumulation of electric energy in mass is a big cylinder rotating with a linear velocity close to the velocity of light; in this cylinder the mass of the electric and magnetic fields of the atoms increases. There are other possible ways to contribute and to extract the said energy from mass.

On the grounds of the above-mentioned analogies it can be assumed that the accumulation of "compressed" energy is possible in mechanics as well as in electricity. Since velocity is relative, the mass can have zero velocity relative to an observer who moves with this mass, but the force field will remain unchangeable, since it depends on the already injected mechanical energy. Let us note that when the "compressed" electric energy is accumulated, the power field does not change; when the "compressed" mechanic energy is accumulated, the power field increases.

Now we have come to the amazing conclusion that the gravitational force does not depend on the mass of matter, but on the mechanic energy included in this mass. This energy is unstable: at contact with the ground it disappears, while at zero gravity it can be preserved for a long time.

Fig. 1. The first way to obtain the antigravitational force. 1. The magnets are not revolved. 2. The cylinders of a charged capacitor are revolving in different directions.

Fig. 2. The second way to obtain the antigravitational force. 1. The capacitor plates are charged and not revolving. 2. The current circuits are revolving in different directions.

Fig. 3. The third way to obtain the antigravitational force: a) electric; b) magnetic. 1. The disk and the ring are made from electrically conductive material. 2. When these disks rotate, currents are formed there, which emit the mechanic-antigravitational energy in the manner of heat.

Fig. 4. The fourth way to obtain the antigravitational force. Mechanical method. 1. This is an extraction of energy from matter. It was reported in detail at the 10th international symposium in Volgodonsk, Russia. 2. The difference with electric circuits is that it is possible not only to extract the mechanical energy, but also to insert additional energy into the system.

References

1. Magazine Inventor and Rationalizator, Russia, #9, 1979.
2. Magazine Inventor and Rationalizator, Russia, #9, 1979, p.28.

The Capacitor Which Has the Energy of an Atomic Bomb
(Review of Anatoly K. Gaponov's research by Eugenie and Marina Golomolzins)

Is it possible to place a pail of water into a one-liter jar? At first look the answer is obvious: certainly not! However, the inventor from Novosibirsk, Anatoly Gaponov, thinks differently. He does not "press" water, but electrical energy, placing the energy equivalent of an atomic bomb into an ordinary electrical capacitor.

Everybody using ordinary batteries knows their defect: they need frequent recharging. Gaponov's capacitor is slightly smaller than a matchbox. Just come home by electrical automobile, take the capacitor out of the engine, and put it into your pocket. For home needs you can just insert the capacitor into the plughole to power the light, the boiler, and the TV system. In general, each electronic device can have its own capacitor; then electrical wiring is not necessary. After one or two years you will just have to come to an electric service station and charge your magic capacitor like a gas balloon. Meantime, this research work began from hypnosis.
Anatoly Konstantinovich Gaponov (by birth from Kaluzhskaya region) is Tsiolkovsky's countryman. In his youth Gaponov was brought by fate to Sakhalin, where he showed hypnotic abilities. As an inquisitive person, Gaponov organized a research group, started experiments, and soon understood that the human brain has incredible possibilities.

A mental prick was made distantly to the hypnotized man, and he uttered a cry of pain. The ability to see people through, to define and to avoid faults of the organism, was revealed in a hypnotic trance. It was possible to inspire pleasant emotions, to force "watching" a film on a given subject, as if on a screen. An uneducated person became an erudite, as if connected to a certain global information database. Thus an idea to make an amazing experiment appeared.
In one of the experiments Gaponov hypnotized a person with four classes of education and asked if it was possible to transmit electric current without wires. The hypnotized person gave the answer that it was possible. For that it is required to convert the electric energy into X-ray radiation. And what afterwards? Afterwards it is required to focus those rays. By what? By a lens made from quartz glass, gold coated. It was a miracle! The person told about things that in his usual condition he had no idea of! The information was received from somewhere outside.

Further, quite an amazing thing occurred. Gaponov asked the hypnotized person if it was possible to intensify the abilities of the hypnotist's brain. He answered that he could. "He turned me round and stared at the back of my head, - recalls Anatoly. - And suddenly a smile began to tear my mouth. I could not do anything with myself. When my mouth was sprawled literally from ear to ear, the hypnotized person in some inhuman voice declared that the experience could not be continued, since a cerebral hemorrhage would occur. I was hardly able to give the order to stop the experiment".

Thereby, the experiments with hypnosis gave the beginning to the thirty-year period of inventions in the field of accumulation and transmission of energy. After the return to his native Kaluzhskaya region, Gaponov occupied himself with physics and the development of logical thinking, and became the town champion in chess. The necessary books fell into his hands by themselves: one time a certain acquaintance gave one to read; another time he found the last copy in a bookstore. Like the majority of self-taught inventors, Anatoly preferred practical experimentation. In quest of a laboratory for the realization of his own ideas, he moved to Novosibirsk. As a result, in 1980 Gaponov made an experimental system for the compression of energy.

From school physics we know the notion of the "electric arc": a small blue lightning between two electrodes. Gaponov has tamed this lightning in such a way that, having drawn apart by hand two wires which played the role of electrodes, he got an arc up to half a meter in length. Anatoly confirms that, in principle, it is possible to create an arc of any desired length under any amperage.

One of the experiments found one more enigmatic characteristic of the electric discharge. During electric photography of the arc, a person happened to be between the camera and the system. On printing the pictures, the researchers found with surprise that the electric arc was perfectly seen through the person. That is to say, it created an invisible field for which a material object was not a screening obstacle, and which was fixed on the film.

The further experiments with the electric arc allowed getting a new source of energy, as well as opening the possibility of setting light and sound on fire! Just imagine: you ring a bell, its sound waves spread at once in all directions, and then flash up with a bright blaze.

(Editor's note: these experimental facts are rare modern evidences of the possibility to create longitudinal electric waves. There is a clear analogy here with sound waves in air, since they are longitudinal waves also. Alexander V. Frolov)

When the problem of the energy source was solved, Gaponov turned to the problem of energy accumulation. According to Gaponov, he has proved experimentally the possibility of charging an ordinary capacitor with any amount of energy. This statement sounds paradoxical: how is it possible to place an unlimited amount of contents in a limited volume? However, this is not a simple way.

Gaponov believes that energy "placing" occurs not in space, but in time, by means of his system! In what way? Imagine that you fill a one-liter jar with water. But already after an instant the water-filled jar is in the past, and the present one is once again ready to be filled. And so ad infinitum. Water fills, as it were, a certain "time reservoir", and the jar is just the neck of this "time reservoir".

(Editor's note: This method is described in other articles also, but usually it is a pure mathematical discussion about Minkowski space-time and theoretical proposals. Gaponov's experiments are a realization of the fantastical idea to take power from the time flow, i.e. from Past or from Future, to get over-unity in Present space. Alexander V. Frolov)

"It is possible to demonstrate one more example, - Anatoly Gaponov adds. – Let's charge the capacitor with the expectation that it will supply a light bulb for one second. Thereby, on the Earth this light bulb will be on only for an instant. But if the same capacitor with the light bulb is placed in a rocket and dispersed around the Earth at a velocity close to the velocity of light, time on board the rocket will be so slowed that the light bulb on the rocket will be glowing infinitely long for an observer from the Earth. It means that in any case it is the same energy quantity, but in one case its action is sprawling for a second, and in another one it is sprawling for eternity! It is possible to say that in my system I have created the condition corresponding to this hypothetic rocket".
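For reference, the one piece of textbook physics invoked in this quotation is special-relativistic time dilation: a clock moving at speed $v$ runs slow, relative to Earth clocks, by the Lorentz factor

    $\gamma = \frac{1}{\sqrt{1 - v^2/c^2}},$

so at $v = 0.995c$, for instance, $\gamma \approx 10$ and one second of on-board bulb time stretches to about ten seconds of Earth time. The stretching comes from the clock rate alone; the capacitor's stored energy, as measured in its own frame, is not increased by the maneuver.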
As well as with light bulb is placed in rocket and dispersed in two previous cases, there is an experimental device. around the Earth at the velocity, closed to velocity of Anatoly Gaponov speaks that he has succeeded in light, time on the board of rocket will be so slowed getting the essence of experiments for transmitting that the light bulb on rocket will be glowing infinitely of energy, which were conducted by Tesla. long for an observer from the Earth. It means, that in any case it is the same energy quantity, but in one It is clear, that the main advantage of this method is an case it’s action is sprawling for a second, and in absence of wires and losses of electric energy. The another one it is sprawling for eternity! It is possible electricity could be transmitted directly into any point, to say, that in my system I have created the condition where receiving equipment placed, let say from Kaluga corresponding to this hypothetic rocket”. to Sahara. However, this is not so interesting for anybody, since for the present day Anatoly Gaponov’s The system for accumulation of electric energy could inventions don’t have demand. be charged by ordinary wall plug 220 VAC. Time period of charging is different and depends on the certain “ The first system was created twenty years ago”, - scheme of the system. By the way, sea electric slopes says Mr. Gaponov. – “Now I am fifty five, but things are the certain natural analogues of such capacitor. have not budged an inch”. He adds dreamily: “Eh, if Some elements of internal device of these sea creations only I had a laboratory and some money...”. reminds the “pump” elements for placing of electric energy into “temporal jar”. Gritskevitch’s Hydro-Magnetic To fool investigators of my secrets, I have an occasion provided misleading information. For example, the Dynamo drawing accompanying the Russian patent referenced below shows a cylinder across the toroid to fool readers. The real dynamo only has the toroid without the Oleg V. Gritskevitch cylinder. Even its name “hydro-magnetic dynamo” is somewhat deliberately misleading. RUSSIA, 690002, VLADIVOSTOK, Okeansky prospect, 99 - ap.112 I have some familiarity with the new energy field. Nearly phone/fax: (7-4232) 424-674 Email: all purported new energy devices are fairly small Russian Academy of Energy and Information, Russian Academy of Natural Sciences electrical generators. The dynamo may be the only new electrical generator which most nearly meets all the requirements of an ideal large-scaled electrical Editorial: The article presents construction and operation generator. My dynamo really is the single most valuable of Oleg V. Gritskevitch’s hydro-magnetic dynamo, which invention the world has ever known. is an example of very powerful new energy system. The prototype in Armenia has been produced over Alexander V. Frolov of St. Petersburg recommended me 1500 KWtts power during several years. to contact with Dr. Patrick Bailey, Institute for New Energy since Pat has lots of contacts who could possibly The author was born on 14 August 1936 and grew up help me with patenting my invention of a new source in Vladivostok, Russia. He is married and has a son of energy in USA. Boris. Gritskevitch is a physicist by education. He worked in the Far - East branch of the USSR Academy I conducted the work on the theory and creation of the of Sciences. Since 1985 he has been working electrostatic generator-converter «Hydro-magnetic independently as an inventor. He has more than 70 dynamo» about 20 years. 
(See the dynamo history below.) The first primitive equipment was created when I worked in the Academy of Sciences. During that time various changes were introduced in the generator and in the theory of its work. It is now possible to manufacture, install, and apply it in industry.

For the first time I made a public report on this work in 1991, at a symposium in the city of Volgodonsk. The report received positive replies and reviews from experts of the nuclear industry in the USSR. The same year I was accepted into the International Nuclear Society. In these years I offered the development of this technology to different state bodies and private enterprises. But there was only one answer: "It is a very interesting and perspective project, but there is no money for it".
Engines of Our Ingenuity

No. 1547: by John H. Lienhard

Today, let's reclaim mystery. The University of Houston's College of Engineering presents this series about the machines that make our civilization run, and the people whose ingenuity created them.

I was doing a program on another scientist when I discovered he was born the same year as Einstein, 1879. On a hunch I went to the dictionary to check some of the people around them. Niels Bohr was born just six years after Einstein, and Schrödinger four years before. Since these people had redirected human thinking in remarkable ways just after 1900, I looked further. Picasso was born two years after Einstein, James Joyce three years after, Schönberg four years before. The Wright Brothers were only a decade older.

Now it's perfectly obvious that the great minds at any date will've been born around the same time. Nothing interesting there. But I wondered about the world that'd made this particular set of people. What did it say to them when they were young? How did it send them off to wreak such radical change?

I think it was because we'd grown complacent. The problems of science were yielding, left and right, to new instruments, new math, and new physical theory. Only a few nagging problems lingered: the inexplicably constant velocity of light, and our failure to predict how the energy of light and heat varies with wavelength.

The art of the day was Salon Art: extraordinarily realistic images of larger-than-life romantic unreality. Like physics, art had nowhere to go. Nor did the rich, overblown music of the late Romantics. How were you supposed to take it further! Transportation had run to the end of its tether. At a hundred miles an hour, railroad trains could go no faster, and the Clipper Ship had topped out at fourteen knots.

It was a world filled with vast accomplishments. But it was also a world of technological ceilings, social ceilings, scientific ceilings. Just when our revolutionaries were young adults, beginning to make their marks, Henry Adams wrote that the great blind spot at the end of the nineteenth century was our denial of mystery. So geniuses born in the 1870s and '80s would have to find mystery once again. For mystery is the one thing we cannot do without.

Einstein gave us the mysterious neverland of relativity. Schrödinger reduced the contradictions of quantum physics to that single mysterious hypothesis we call the Schrödinger equation. Picasso offered a vision of reality just as disorienting as relativity or quantum uncertainty. Schönberg declared freedom from the tonal hierarchies that'd bound music for five hundred years. James Joyce made mincemeat of prose as we'd expected it to sound.

But revolutions don't survive in their original forms. Music and art eventually regained contact with the people they serve. So did physics. The point becomes crystal clear in two other Einstein contemporaries: Lenin was born nine years before, and Stalin the same year as, Einstein. They made a muck of social revolution. The reason lay in the depths of their profound denial of the centrality of mystery. Without it, their revolution had no way of getting back to the mysterious reaches of human need.

(Theme music)

By 1927 we had so left the solid realities of 1879 behind as to describe electrons bouncing off a crystal lattice in the same way we would describe water waves reaching a pier. (Tien, C.L., and Lienhard, J.H., Statistical Thermodynamics. New York: Hemisphere Publishing Corp., 1979, pg. 99.)
Quantum mechanics and scientific realism

by Quentin Ruyant

One of the main tasks of philosophy is to clarify conceptual problems and sketch the landscape of possible solutions to these problems. Of course, individual philosophers often tend to defend specific positions, but what emerges at the level of the community is, generally, a landscape of possibilities.

Take, for example, the question of scientific realism: what is the status of scientific theories? Should they be interpreted as literal descriptions of reality? Or are they rather predictive instruments, tools for interacting with reality? Or perhaps they are mere social constructions? The standard way of framing this problem that emerged from philosophical discussions over the years is to decompose it into three distinct questions:

• Metaphysical question: does nature, the object of scientific inquiry, exist independently of our conception and observations of it? Idealists and radical constructivists would deny that it does.

• Semantic question: what makes theories true? Are they literal descriptions of nature? Is there a direct correspondence between language (including formal languages or mathematical models) and the fundamental constitution of nature, or does the meaning of our theoretical statements reduce to their conditions of verification? Instrumentalists would typically opt for the latter view.

• Epistemic question: are we in a position to know that our theories are at least approximately true? Empiricists would say that, inasmuch as our theories pretend to say more than what is verifiable at the level of observable phenomena, we are not in a position to know that they are any more true than (perhaps unconceived) alternative theories with as much empirical confirmation.

Scientific realism is thus the position that reality exists independently of the mind, that our theories should be interpreted as literal (if approximate) descriptions of reality, and that we are in a position to know that they are (at least approximately) true. The conceptual landscape we are considering is also constituted of arguments pro and con each position. Nowadays, the semantic and metaphysical propositions of scientific realism are often accepted by philosophers (at least in the analytic tradition). Only the epistemic aspect is still under discussion.

One of the main arguments for scientific realism, once formulated by Putnam, is that realism is the only position that does not make a miracle of the predictive success of science. The point being that anti-realists have no convincing explanation for the impressive success of science (notably for making novel, unexpected predictions), while realists have a simple one: our theories work because they correctly describe reality. Conversely, one of the main arguments against scientific realism is the so-called pessimistic meta-induction: past, abandoned theories were false after all (there are no gravitational forces as Newton had thought, only deformations of space-time according to relativity), so it is reasonable to expect that contemporary theories will also eventually be replaced by different ones. It is therefore unreasonable to believe in their truth.

Some have sought a compromise between realism [1] and anti-realism, meeting the challenge of the pessimistic induction by restricting realism to the structural content of theories (the lawful relations between entities, rather than the entities themselves), which, they say, is retained in theory change.
That position is accordingly known as structural realism [2]. Others, for the same reasons, want to restrict realism to the concrete entities with which we causally interact. This is called entity realism [3].

Instrumentalism and quantum mechanics

As noted above, most arguments in this debate are epistemic in nature: they concern scientific knowledge in general. They don't get into too much detail about the actual content of scientific theories, except sometimes for the purpose of illustration. The argument I wish to defend here is that, on the contrary, the specific content of scientific theories should not be overlooked in these discussions, and that one theory in particular poses a serious threat to scientific realism, namely quantum mechanics. This theory, I will argue, has no straightforward, "literal" interpretation. If this is correct, then scientific realism loses its grip: why argue that scientific theories should be interpreted literally, if no such literal interpretation exists for our best physical theories (which purportedly address the most fundamental levels of reality)? Shouldn't we go back to a more modest conception of the meaning of our theories and accept a more humble view of the status of our representations? Perhaps we could find a way to still accommodate some of the desiderata of realism while giving up on a strict correspondence between models and reality after all.

First, let me say a word on the long-standing relationship between quantum mechanics and instrumentalism. Quantum mechanics was developed at a time when different forms of instrumentalism (a denial of the semantic proposition above, for example through a verificationist theory of meaning) were prevalent. It was also a time when philosophers and scientists entertained strong intellectual relations. Famous scientists and philosophers gathered in the Vienna Circle in the 1920's. The circle gave birth to logical empiricism [4], a philosophical movement which durably influenced the philosophy of science. Instrumentalism faded out in the course of the 20th century, after the demise of logical empiricism. Instrumentalist positions were attacked by strong arguments, both internal and external to the movement, but principally in the philosophy of language [5]. However, quantum mechanics remained and, so to speak, became orphan of a philosophical interpretation, as illustrated by the notorious "shut up and calculate" school of thought among some practicing scientists, which began after WWII.

Unfortunately for the realist, the weirdness of quantum mechanics is here to stay. Not that the theory won't be superseded by a better theory. It certainly will, as standard quantum field theory, which is the fusion between quantum mechanics and relativity, does not account for gravitation. However, there are strong indicators that its successor will share most of its puzzling aspects. Some of them are addressed by Bell's theorem [6], which is largely independent of the theory itself, but rests on a few, uncontroversial empirical principles. The weird consequences of the theorem, that no local-realist theory can account for observed phenomena, have been well confirmed by experiments, such as that of Aspect in 1982 [7]. Any future theory will have to accommodate this result.
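To make the content of this result concrete, here is a minimal numerical sketch of the CHSH form of Bell's theorem (my illustration, not the author's; it assumes the standard setup of a spin singlet state with measurements along coplanar angles). Local hidden-variable models obey |S| ≤ 2, while quantum mechanics reaches 2√2 for well-chosen angles:

    import numpy as np

    def correlation(a, b):
        # Quantum prediction for the singlet state: E(a, b) = -cos(a - b)
        return -np.cos(a - b)

    # Angle choices that maximize the quantum value of the CHSH quantity
    a1, a2 = 0.0, np.pi / 2
    b1, b2 = np.pi / 4, 3 * np.pi / 4

    S = (correlation(a1, b1) - correlation(a1, b2)
         + correlation(a2, b1) + correlation(a2, b2))

    print(abs(S))  # ~2.828 = 2*sqrt(2), above the local-realist bound of 2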
Personally, I tend to think that the theory could not have been developed without the strong instrumentalist stance of its founders, and that the empirical success of the theory calls for a compromise between the now prevalent realism of philosophers and the almost built-in instrumentalism of the theory. There are attempts to reconcile quantum mechanics with realism, but I think they face serious challenges and lead to unacceptable conclusions. Perhaps another path is preferable. The purpose of this essay, however, is not to find this alternative path, a far too ambitious goal. More modestly, I will simply lay out the difficulties in formulating realist interpretations of quantum mechanics.

The measurement problem

There are two main difficulties facing realist interpretations of quantum mechanics: first, the measurement problem, then the problem of reference. The measurement problem is one area where philosophers have done their job properly in clarifying the issue.

In standard quantum mechanics (I will not address quantum field theory, but the problem is essentially the same) systems are described by wave-functions. Different properties can be measured on a system: its position, its momentum, its spin. A wave-function is a mathematical structure which, loosely speaking, describes the correlations between all possible measurement outcomes for these properties, including, in the case of composite systems, the possible outcomes for all combinations of measurements on distinct parts of the system, however far apart. Note that the wave-function encodes all complex measurement possibilities, but not all measurements are compatible and can be performed simultaneously (think, by analogy, of a 3D object which encodes all possible 2D perspectives on that object, but only one perspective can be had at a time). Scientists call these complex possible ways of measuring a system "observables."

The wave-function evolves according to a linear equation, the Schrödinger equation. The coefficients associated with possible outcomes for an observable are complex numbers (a weight and a phase), which entails that (again, loosely speaking) possible outcomes may interfere with one another, at least when they are not measured (imagine that from a given 2D perspective, parts of the object overlap and interfere in a destructive or constructive way). In addition to this mathematical model, the Born rule tells us how to infer specific outcome probabilities from the model. This amounts to projecting the wave-function onto only one of the possible outcomes to get a probability, calculated from the corresponding coefficient.

The problem is this: if realism is true of standard quantum mechanics, then reality, as described by the theory, is the wave-function, which encodes all possible outcomes for all possible measurements on a system; but empirical reality, from which we test the theory, is constituted of determinate measurement outcomes for specific observables only. There is thus a gap between the model and empirical reality. The gap is filled by the Born rule, but the Born rule is not part of the physical model. It is not an object, nor a process occurring in space-time: it is only a mathematical rule. It is also relative to a way of measuring the system. How can we make sense of this?
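As a concrete gloss on the last two paragraphs (a toy example of mine, not the essay's), the Born rule for a single qubit shows both features at once: the same state vector yields different outcome probabilities depending on which observable (basis) is measured, and the relative phase, invisible in one basis, shows up as interference in another:

    import numpy as np

    def born_probabilities(state, basis):
        # Born rule: p(k) = |<k|psi>|^2 for each basis vector k
        return [round(abs(np.vdot(k, state)) ** 2, 3) for k in basis]

    psi = np.array([1, 1j]) / np.sqrt(2)  # one definite state vector

    z_basis = [np.array([1, 0]), np.array([0, 1])]
    x_basis = [np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)]
    y_basis = [np.array([1, 1j]) / np.sqrt(2), np.array([1, -1j]) / np.sqrt(2)]

    print(born_probabilities(psi, z_basis))  # [0.5, 0.5]
    print(born_probabilities(psi, x_basis))  # [0.5, 0.5]
    print(born_probabilities(psi, y_basis))  # [1.0, 0.0]

    # Flip only the phase: the z-statistics keep the same 50/50 spread,
    # but the y-outcome is reversed -- the phase carries real information.
    psi2 = np.array([1, -1j]) / np.sqrt(2)
    print(born_probabilities(psi2, z_basis))  # [0.5, 0.5]
    print(born_probabilities(psi2, y_basis))  # [0.0, 1.0]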
One could think that the problem is easily solved: just interpret the wave-function as an epistemic object, describing our ignorance of a real, underlying state. That's how probabilities were usually interpreted in classical physics after all: as reflecting our ignorance. Perhaps the wave-function can be seen as a superposition of possible states, but only one of them actually exists. However, the decomposition into possible states depends on the observable. How could the system know in advance how it will be observed? Remember, also, that "possible states" of an observable which are not measured can interfere with each other: they all potentially contribute to the final outcome for the observable which is eventually measured, at least statistically. How can they do that if they did not all exist? But if they all exist, why do we ever observe determinate outcomes, and not superpositions thereof? What happens during a measurement?

As I said, philosophers did a great job of clarifying the problem; here is one of its formulations (which I take from Maudlin [8]) in terms of a trilemma: three propositions which cannot all be accepted together:

1. The wave-function is a complete description of the state of a system.
2. The wave-function evolves according to a linear dynamic (the Schrödinger equation).
3. All measurements have determinate outcomes.

Following (1), a system can be viewed, for any observable, as a "superposition of states." Following (2), a superposition of states will necessarily evolve into another superposition of states: there is no physical "projection." Let us accept (1) and (2) and describe the state of a measuring apparatus coupled with a system as a composite wave-function. The measuring apparatus is also a physical system after all. At the end of an experiment, the system+apparatus will be in a superposition of states, contradicting (3): the experiment does not have a determinate outcome. The logical conclusion of the argument is that we have to abandon one of the three propositions. This is the measurement problem.
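The trilemma can be made vivid in a few lines of linear algebra (again my sketch, under the simplest possible assumptions: a qubit as the system and a second qubit as the pointer of the apparatus). Model the measurement interaction as a unitary, hence linear, CNOT-style coupling that copies the system's state into the pointer; linearity then forces a superposed input to evolve into an entangled superposition of pointer readings, never into a single determinate outcome:

    import numpy as np

    # CNOT: flips the pointer (2nd qubit) iff the system (1st qubit) is |1>
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)

    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)

    system = (ket0 + ket1) / np.sqrt(2)   # superposed system state
    pointer = ket0                        # apparatus in its "ready" state

    after = CNOT @ np.kron(system, pointer)
    print(after.round(3))
    # (|00> + |11>)/sqrt(2): a superposition of "pointer reads 0" and
    # "pointer reads 1" -- under (1) and (2) there is no determinate outcome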
The prospects of realist solutions

One benefit of this formulation is that it allows for a classification of possible solutions to the problem. I won't enter into too much technical detail here, but no solution is entirely satisfying.

Rejecting (1) or (2) involves completing the theory with additional structure. Bohmian mechanics [9] is the most conservative move. It rejects (1) by postulating punctual (and causally idle) particles in addition to the wave-function, just as in classical physics. It restores determinism but is obliged to postulate instantaneous interactions at a distance, and is perhaps the least apt to reconcile relativity and quantum mechanics (relativity has a notoriously complicated relationship with non-locality and simultaneity). Another possibility in this class of solutions is implemented by modal interpretations [9], initially proposed by van Fraassen, which complete the theory with a dynamical, privileged observable for which there is a determinate state at any time (and which should eventually coincide with the observable that is measured). Bohmian mechanics can actually be read as a modal interpretation where the privileged observable is static and is always the position. Modal interpretations also require a notion of absolute simultaneity, because the state of a non-local system is, by construction, determinate at a particular instant. For this reason, they are hard to reconcile with relativity theory.

Some theories reject (2) by postulating random physical projection processes, also called collapses. An early possibility, envisaged by Wigner and von Neumann, was a collapse induced by conscious observers [9], but this solution seems too anthropocentric and dualistic to be acceptable, and it has never been precisely articulated anyway. More concrete formulations are the GRW and CSL theories [9], which postulate spontaneous collapses (respectively, discrete and continuous). These theories make distinct predictions from standard QM, but they come with parameters (the rate and strength of collapses) which are fine-tuned to stay compatible with current empirical confirmations of the theory. This is a bit ad hoc, obviously. All these theories postulate additional mathematical structure for a non-empirical purpose: to save our realist presumptions. Arguably, this is a case of "domestication of science by metaphysics" [10] that we would have liked to avoid. A more concrete price to pay lies in the difficulties in reconciling these additional structures with relativity theory and in formulating consistent quantum field versions of these theories. Needless to say, these theories are rarely considered by physicists. In any case, insofar as they postulate additional structure, they cannot be considered straightforward, literal readings of quantum mechanics: they are distinct theories.

Rejecting (3) seems prima facie absurd: how could empirical outcomes, the ones which serve as the very tests of the theory, not be determinate? A solution, proposed by Everett, is to view them as relative to an observer [9]. The idea is that the wave-function of the universe evolves into relatively independent branches (accounted for by the theory of decoherence [11]), and that experimenters are only ever situated in one of the branches, from which empirical outcomes seem determinate. Each measurement outcome is instantiated in a separate branch. Following this proposal come the many-minds and many-worlds interpretations [9]. The move is tempting: we could have a realist theory without the cost of additional structure, if only we abandoned certain common-sense intuitions and accepted that trillions of alternative, inaccessible worlds are instantiated each millisecond. Alternatively, we could view the universe as the interrelated set of all physically possible worlds and their complete evolutions, and each of our instant selves located somewhere in this huge block-universe.

But the devil is in the details. The many-minds interpretation comes with a very strange ontology (infinitely many minds inhabiting every one of us at any instant, following diverging branches) plus a problematic commitment to dualism and epiphenomenalism. The many-worlds interpretation does not seem to make sense of probabilities: why talk of probabilities if all outcomes are equally real? We cannot invoke ignorance probabilities here: there is nothing relevant that we ignore. We know that every outcome will occur. Moreover, why the Born rule? Shouldn't every outcome have an equal probability? There are attempts to solve the issue by grounding probabilities on rationality constraints on epistemic agents (the proposal was made by physicist Deutsch and improved by philosopher Wallace [12]). Probabilities would be subjective and correspond to bets on future outcomes. The Born rule can be retrieved as the only rule which satisfies certain symmetry constraints on the assignment of probabilities to quantum states. However, it is not clear that these solutions succeed. Bets are based on past empirical results, but without a more robust conception of probabilities (something that could be linked to a statistical distribution in the multiverse) there is no reason to think that past empirical results are representative of the whole universe: if this theory is true, it seems that we are not rationally entitled to believe that quantum theory is true! [13] In any case, what are we really willing to bet for if all our future selves equally exist? Why should we care? Rationality constraints supposedly have a normative aspect (they are not psychological laws: they tell us what we should do), but here, what is the point, exactly? The many-worlds interpretation also requires decoherence, but perhaps the theory of decoherence itself depends on a more robust interpretation of probabilities [14]. It also presupposes a distinction between systems and their environment, but does the universe as a whole have an environment?
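The probability worry can also be put numerically (a toy example of mine). For the state √(1/3)|0⟩ + √(2/3)|1⟩, counting the two branches as equally real suggests 50/50, while the Born rule assigns 1/3 and 2/3, and it is the Born weights that match observed long-run frequencies:

    import numpy as np

    amplitudes = np.array([np.sqrt(1 / 3), np.sqrt(2 / 3)])

    born_weights = np.abs(amplitudes) ** 2                        # [1/3, 2/3]
    branch_count = np.full(len(amplitudes), 1 / len(amplitudes))  # [0.5, 0.5]
    print(born_weights.round(3), branch_count)

    # Simulated long-run frequencies agree with the Born weights,
    # not with naive branch counting
    rng = np.random.default_rng(0)
    samples = rng.choice(2, size=100_000, p=born_weights)
    print(np.bincount(samples) / len(samples))                    # ~[0.333, 0.667]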
The problem of reference

Enough about the measurement problem. There is another challenge which realist theories face: the problem of reference. Following the semantic proposition of scientific realism, there should be a correspondence between mathematical models and real entities. However, the wave-function is not the kind of structure that can easily be mapped to real entities, as commonly understood. The problem, then, is with connecting this picture to our everyday experience.

Take the electromagnetic field of classical physics: it assigns vectors to every position in space-time. The object is not too difficult to represent. Just imagine that a vector is some kind of property of the field at a specific location. But what kind of object is a wave-function? The wave-function, interpreted as a field, does not assign specific properties to space-time points: it lives in an abstract mathematical space of almost infinite dimensions (called the configuration space). Traditionally, these mathematical dimensions are construed as the degrees of freedom of different particles. Fine, but that supposes that particles exist in addition to the wave-function: what if, following many-worlds or GRW or CSL theories, we wish to view the wave-function as an autonomous object, as "all there is"? It is not easy, from this abstract representation of an object with infinitely many degrees of freedom, to recover the "manifest image of the world": the familiar, 3+1 dimensional space-time, filled with ordinary objects, in which we perform the empirical tests of our theories. Some authors are ready to bite the bullet (for example Albert) and hope that our familiar space-time is somehow emergent from the configuration space, but many think that there is a problem, and that a physical theory should be able to tell us what exists in space-time [15].
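The contrast with a classical field can be made quantitative with a back-of-the-envelope count (my numbers, purely illustrative). Discretize each coordinate axis with k points: a classical field on 3D space needs on the order of k³ values, but a wave-function for N particles is a map on configuration space and needs k^(3N) complex amplitudes, one per joint configuration of all the particles at once:

    k = 10  # grid points per axis (an arbitrary choice)

    classical_field_values = k ** 3
    print(classical_field_values)               # 1000, regardless of N

    for n_particles in [1, 2, 5, 10]:
        amplitudes = k ** (3 * n_particles)
        print(n_particles, f"{amplitudes:.1e}")  # 1e3, 1e6, 1e15, 1e30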
Furthermore, thinking of wave-functions as laws hardly makes sense: laws of nature do not vary in space and time. But thinking of wave-functions as dispositions is difficult too, because the wave-function is a non-local object and these dispositions cannot be assigned to local bearers directly (as Esfeld and Egg observe in a forthcoming paper [17]). They should be assigned to “configurations of stuff” instead. In the end, we could be left with a huge abstract structure which would represent the disposition of the universe to evolve. Not very enthusing. In sum, in any of these theories the wave-function cannot be dispensed with, because it does all the predictive job, so to speak; but its ontological status and the way it is connected to ordinary objects of our experience remain somewhat obscure.

Other interpretations

The framing of the discussion so far has been realist. The problem of reference directly stems from a realist commitment, that the structure of the theory should correspond to real entities, and implicit in the formulation of the measurement problem is that the wave-function describes a state (if not a complete state) which evolves with time. Let us now say a word about a few anti-realist interpretations. Contemporary physicists do not agree on the correct interpretation of quantum mechanics. Some of them are realists and explicitly defend the many-worlds interpretation (for example Carroll [18]). Perhaps some have a non-explicit collapse interpretation in mind, and others don’t have an interpretation at all (the above-mentioned “shut up and calculate” school), or stick to the vague Copenhagen interpretation [9] (roughly, realism with regard to classical objects and instrumentalism with regard to quantum states), or to the more elaborate consistent histories approach [9]. In any case, having a clearly articulated ontological interpretation is not necessary for all scientists. It seems that instrumentalism is perfectly fine, for all practical purposes. The question of realism is more a philosophical issue, although there have been a number of influential scientists, from Newton to Einstein, who held strong metaphysical views underlying their theoretical reflections. This is probably the case for many contemporary scientists too, but it’s a subject for another time. Of course, an immediate benefit of instrumentalism is that it trivially eschews the problems above. All that is required is that the theory works. However, there are more subtle ways of throwing light on the mysteries of quantum mechanics from an instrumentalist perspective. Viewing quantum mechanics as a theory of information, or as a generalized probability theory [9] (which roughly amounts to revising classical logic!), has gained in popularity in recent decades, notably in the field of quantum computing. The most sophisticated proposals include quantum Bayesianism, or QBism [9]. This is clearly an anti-realist move (Bayesianism is a subjective theory of probabilities). On these views, the wave-function is epistemic: it describes our knowledge of reality. All that these theories say is that our inferences about the physical world must follow counterintuitive logical rules, the rules of quantum logic. Another possibility is to adapt Everett’s relative-state formulation with an anti-realist twist (and without the many worlds).
This is the relational interpretation [9], proposed by physicist Rovelli, which holds that wave-functions do not describe objective states, but relations between physical observers (any physical system) and observed systems. There is no objective “view from nowhere.” Other similar attempts relativize the wave-function to frames of reference. There are also perspectival modal interpretations in this vein, which attempt to resolve the problem of compatibility between relativity theory and standard modal interpretations. These kinds of views are indeed relativistic in spirit: one could say that they push relativity theory one step further, through a relativization of all physical states (not only of space-time coordinates) to physical observers. Finally, the transactional interpretation [9] proposed by Cramer postulates that measurements are transactions between emitters and receivers. A transaction involves the combination of a retarded wave (going forward in time) and an advanced wave (a wave going back in time, traditionally discarded as “unphysical” by physicists). The interpretation proposes a narrative in pseudo-time, where retarded waves are offered by emitters to absorbers, which respond with advanced waves. An absorber is selected and a transaction occurs. The interpretation retrieves the Born rule in a nice, elegant way from the formalism. I did not classify it as a realist interpretation because transactions are not really physical processes, they do not occur in space-time, and they are sometimes said to be some sort of perceptive relations (for example by Kastner [19]). Perhaps this interpretation, with its emphasis on relational aspects (the transactions), is not too far from relational interpretations. I am somewhat sympathetic to these proposals, in particular when they retain a realist component and do not force us to go back to hard-core idealism. But they too face challenges, and work remains to be done to obtain fully consistent and metaphysically explicit theories. They also probably need to confront more general arguments that are part of the epistemological debate on scientific realism.

So what shall we conclude? It seems to me that debates on scientific realism in the epistemology of science should pay attention to the content of scientific theories. In the case of quantum mechanics, the problem is that there is no uncontentious literal interpretation of the theory. The closest is actually instrumentalist in flavor: it tells us to apply a mathematical rule to calculate outcome probabilities from a model, which is not very realist-like. Arguably, all other interpretations (including the many-worlds interpretation, pace many of its defenders) are conjectures layered on top of the theory. Furthermore, all encounter difficulties: either we complete the theory with an ad hoc structure which plays no predictive role and threatens the compatibility with relativity theory, or we face conceptual problems in the interpretation of probabilities (or we are forced into the adoption of a dubious many-minds ontology). And in any case, the ontological status of the wave-function remains rather obscure. No solution to date is entirely satisfying. As for the epistemological debate we started with, suffice it to say that there is more than one possible metaphysical interpretation or theory, all compatible with the same empirical data, and none of them more natural or straightforward than the others [20].
This amounts to undermining scientific realism about quantum mechanics: which interpretation or theory should we be realist about? Perhaps future developments will convince everyone that one realist interpretation or the other is the right one, but I don’t find the prospects very good at the moment. What about anti-realist interpretations, then? If there is no straightforward sense in which the bare content of quantum mechanics can be said to “correspond to” reality, shouldn’t we amend the correspondence theory of truth in consequence, and adopt a more pragmatic stance toward scientific theories? I am personally inclined to think so, but admittedly, matters are not simple. At least some of the desiderata of scientific realism are quite sensible. The challenge is to formulate a position that does not fall prey to standard objections against instrumentalism (in particular the “no miracle argument,” but the semantic arguments as well), and that recovers the “manifest image of the world,” the common-sense intuition that there are objective states at a macroscopic scale in a low-dimensional space-time. All this, if possible, without the vagueness of the Copenhagen interpretation. With these difficulties standing before us, it is no wonder that many authors prefer to accommodate one or the other realist interpretation. Let us remain optimistic though: quantum mechanics is weird, and we probably shouldn’t expect to explain its weirdness away, which means that there is a lot of really exciting philosophical work to do!

Quentin Ruyant is a PhD student in the philosophy of physics in Rennes, France. His thesis is on the potential implications of structural realism for the interpretation of quantum mechanics. He blogs at Philosophie des Sciences.

[1] Scientific Realism, SEP.
[2] Structural Realism, SEP.
[3] Entity realism, Wiki entry. See also Massimo’s recent post: On the Reality of Atoms and Subatomic Particles.
[4] On the Vienna Circle and on logical empiricism.
[5] Popper criticized verificationism as early as 1934 (see The Logic of Scientific Discovery). Quine’s “Two Dogmas of Empiricism” (1951) and Kuhn’s The Structure of Scientific Revolutions are among the most cited criticisms of logical empiricists’ positions. Kripke (in Naming and Necessity, 1980) and Putnam (“The Meaning of ‘Meaning,’” 1975) are often credited for their arguments against descriptivism, a semantic theory underlying logical empiricist positions.
[6] Bell’s Theorem, SEP.
[7] Aspect, A., Dalibard, J., and Roger, G. (1982), “Experimental test of Bell’s inequalities using time-varying analyzers,” Physical Review Letters, 49:1804–1807.
[8] Maudlin, T. (1995), “Three Measurement Problems.”
[9] Here are some resources for the many interpretations of quantum mechanics: Bohmian mechanics; modal interpretations; Wigner-von Neumann interpretation; collapse theories (GRW and CSL); Everett’s relative-state formulation; many worlds and many minds; Copenhagen interpretation; consistent histories approach; quantum Bayesianism; quantum logic and probabilities; relational interpretations; transactional interpretation.
[10] Ladyman, Ross and Spurrett vehemently argued against this attitude in Every Thing Must Go.
[11] The Role of Decoherence in Quantum Mechanics, SEP entry.
[12] Quantum Probability and Decision Theory, Revisited, arXiv.
[13] Against the Empirical Viability of the Deutsch Wallace Approach to Quantum Mechanics, PhilSci Archive.
[14] Many Worlds: Decoherent or Incoherent?, PhilSci Archive.
[15] See for example Wave function ontology, PhilPapers.
[16] On the common structure of Bohmian mechanics and the Ghirardi–Rimini–Weber theory, PhilPapers.
[17] Primitive ontology and quantum state in the GRW matter density theory, PhilSci Archive.
[18] Why the Many-Worlds Formulation of Quantum Mechanics Is Probably Correct, Preposterous Universe.
[19] The Transactional Interpretation of Quantum Mechanics, IEET.
[20] Underdetermination of Scientific Theory, SEP entry.

106 thoughts on “Quantum mechanics and scientific realism”

1. Hi jarnauga111,

“Scientific realism accepts that there is only one correct view.”

Scientific realism implies that there is only one view/description that is both true and all-encompassing, fully describing all aspects of reality. However, there can be any number of descriptions that capture some aspects of the truth.

Re: Davidson’s article: Nothing in Davidson’s article shows that there is no human-independent reality, nor that such a reality is “incapable of being conceptualized”; all it says is that human concepts designed to organize sense-data cannot be independent of the human sense-data that they are organizing. (And, again, I agree, that is part of the ‘web’ concept that I’m espousing.) Thus Davidson’s point is about human concepts about any underlying reality, not about any such reality.

Hi Quentin Ruyant,

“Second, scientific realism is definitely a philosophical position. It is not a necessary prerequisite for doing science and it’s not established by science itself.”

Again, I disagree. If you are going to calculate the Moon’s orbit 380 million years ago, you do have to adopt the idea that there really was a moon that was causing tides on Earth, and thus was in that sense “real”. That assumption might be wrong, and some other idea might be better, in which case the competing ideas then get tested against each other in the usual way, with the gold standard being predictive power. On the particular case of the wavefunction, one can proceed by treating the wavefunction as ontic, or instead by treating it as epistemic about an underlying reality, but the fact that both are currently tenable results from the fact that we don’t yet fully understand quantum mechanics.

“Here is a simple, anti-realist explanation for the success of science: the phenomena we predict are in part created by our interactions with reality.”

OK, but now develop this idea to the point where it can predict solar eclipses in 10 years’ time and lunar orbits 380 million years ago. If, to do that, you de facto adopt realism, then it is simply realism with a false label attached. If you do something else, then go ahead and produce the predictions and let’s test them against a realist model.

“We could have different theories which would work as well.”

Yes, in principle we could, but it’s then up to the anti-realist to actually produce such theories.

“some questions are definitely … conceptual and fall in the domain of philosophy”

Concepts are just as much a part of science as empirical data is! Indeed, one can never disentangle them (can I quote Davidson on that?). Some of the philosophical concepts of science presented here would reduce science to mere stamp collecting!

Hi astrodreamer,

“If a realist asserts that the moon exists … then does not the orbit of the moon equally exist?”

No, since the need for the moon to “exist” is because it needs to raise tides on Earth for my above explanatory scheme to work.
There is no requirement for the “orbit of the moon” to be ontic or to be an entity capable of causation.

2. It is puzzling that some appear to take the position that metaphysical constructs are of such power as to trump conclusions reached from the analysis of observation. The pre-scientific days of Socrates, Plato, Aristotle et al. are gone. Perhaps their beautiful ideas need protection from ugly facts.

“Analysis of observation” – the efficacy of this approach is limited by problems with the accuracy and precision of observation, and, perhaps more so, by the skills of the analyst. It is therefore standard procedure to repeat, to re-evaluate, and to obtain more information with the same or a different method. The key is that each investigation tries to mount an honest and independent effort. Such effort very often falls short because the discoverers fall in love with their original finding or theory, or seek approval from their peers. This happens a lot. Famous cases of investigatorial malfeasance further illustrate the point. Investigators may also be unaware of personal biases related to their subject.

Many recognize the important implications of Gödel’s Theorem: in a well-defined system of knowledge, not everything can be explained. When we start thinking about ‘ultimate reality’ or ‘absolute truth’, however, we are dealing with a system or field in which almost all of the elements are not well defined at all. One can even go so far as to say, or quibble, that nothing will be ‘well defined’ until ultimate reality comes into view. Absolute certainty will be inappropriate until such time. We must therefore always remain pragmatic, reasonable and open-minded.

It is therefore unjustified to say that this or that idea is dead, or that this or that idea is unassailably true, unless there is strong observational and experiential evidence. Even then one must be cautious. The systems of thought in which we operate every day (science, philosophy, politics, religion, economics, trade, beauty, love, virtue, happiness) are rather poorly defined and cause a lot of trouble because their practitioners are ignorant of their ignorance, willfully or otherwise. This is a very hard problem. To keep on testing and learning seems to be the only answer.

3. When you said, “Why should we assume this? (That there is some fundamental reality that is being pointed to by multiple different descriptions),” I think you have taken a wrong dialectical turn here. Boghossian is showing that Putnam’s positive argument for the claim that there is no such thing as fundamental reality is flawed. All Boghossian wanted to do was show that Putnam’s argument does not establish what Putnam wants it to. Consequently, Boghossian is actually the one who is saying to Putnam, “Why should we assume there is no fundamental reality?
You haven’t given us good reason to think there is no fundamental reality yet since your argument has failed.” If one person (Jake) argues that never-never land exists and presents a positive argument, and then another person (John) convincingly shows that Jake’s argument has failed, it would be a sneaky attempt to shift the burden of proof onto John if Jake were to say, “Sure you have refuted my argument, but why should I assume that you are right that there is no never-never land?” Boghossian, I take it, simply wants to exhaustively respond to all description-dependent arguments about facts because (I think) he takes it that for the most part the common-sense view is that there is some fundamental reality. So, absent any convincing positive arguments, Boghossian can rest in his castle that is free of the burden of proof.

I’m also not quite sure how your Putnam quotes respond to Boghossian’s initial argument that in order for you to be carving up (with different descriptions, logical schemes, etc.) things in any way, there must be something that you are carving up in the first place. I’ll leave the discussion at this point, but would still enjoy hearing your response.

4. Massimo: Not necessarily “just” a comments slugfest between Coel and Aravis, but in general, I’d welcome another piece, or two, discussing:
1. Issues in philosophical realism vs. anti-realism
2. Issues in scientific realism vs. anti-realism
3. Interaction and overlap between science and philosophy here.

Indeed, I’d welcome that before another essay on issues in quantum mechanics. Not that I dislike the QM discussion, but something like this might (or might not) serve as some “deck clearing.” That said, I’d then welcome an essay or two, after that, discussing philosophical issues behind the various schools of QM interpretation. I’ve already mentioned my take on part of what I think was behind Schrödinger’s mind when he crafted his cat thought experiment, as well as a general taste for mysticism that sometimes gets involved. (Note: One can be anti-mystical and anti-realist, too.)

Robin, not sure exactly what you’re referring to with EQM. “Everettian,” perhaps? But that’s the only way the “E” could be understood within that acronym. There’s also Essential Quantum Mechanics and Extended (State) Quantum Mechanics. That said, I’d say that the interpretation I favor, the Ensemble interpretation (which is also an “E”!), is the closest to a “realism”-based theory, although I acknowledged that it, or something like it, could be seen as having a quasi-instrumentalist stance on single QM events.

5. Daniel Tippens: In your analogy about Jake and John, the assumption is that John has been successful, which is why shifting the burden of proof would be wrong. The point I want to make is that I don’t really see how that’s the case in the first place, given that Boghossian’s analysis is insufficient to address Putnam’s points. The problem, it seems to me, is that Putnam is highlighting the conceptual difficulty of us stating what is true without reference to some perspective or other. In fact, he believes this so strongly that he says even God himself would have difficulties which arise from logical necessity [shades of the Omnipotence Paradox], viz:

“Imagine a Euclidean plane. Think of the points in the plane. Are these parts of the plane, as Leibniz thought? Or are they “mere limits,” as Kant said?
If you say, in this case, that these are “two ways of slicing the same dough” [i.e., Boghossian’s response], then you must admit that what is a part of space, in one version of the facts, is an abstract entity (say, a set of convergent spheres, although there is not, of course, a unique way of construing points as limits) in the other version. But then you will have conceded that which entities are “abstract entities” and which are “concrete objects,” at least, is version-relative. Metaphysical realists to this day continue to argue about whether points (space-time points, nowadays, rather than points in the plane or in three-dimensional space) are individuals or properties, particulars or mere limits, and so forth. My view is that God himself, if he consented to answer the question “Do points really exist or are they mere limits?” would say “I don’t know”; not because His omniscience is limited, but because there is a limit to how far questions make sense.”

Boghossian wants to assume that the Polish logician’s number of objects is the true one since it subsumes both the Carnapian 3-object position and the larger 7-object position. But this is precisely what is at issue and, furthermore, the assumption that both ways of speaking “work” is incorporated into the analogy he gives between individuals and couples, without arguing for it.

That you yourself believe in a transcendental order of reality and transcendental concepts capable of describing it are genuine articles of your faith, no doubt. That you draw this from Davidson, however, makes me wonder whether you have really understood him. One of the consequences of the sort of approach that Davidson takes is that this intertwined understanding of meaning makes nonsense of the sort of realism you are after. The human world is the only one we know or can even *conceive*. To imagine some other separate from this is not possible, since all evidence for it is equally evidence against it.

6. Bunge expresses a realist position, and something vague that more or less resembles modal interpretations, but he does not engage with the implications and difficulties I mention in the article (at least not in the passage you cite). Being sarcastic is not very difficult. Clarifying one’s position in detail is another story.

He is a little hard to pin down on specifics in recent papers and books. I was quoting from papers (1993-98) by other physicists (Perez-Bergliaffa, Romero, Vucetich) that he cited in support of his general stance, and who cite Bunge’s 1967 books. Perez-Bergliaffa et al. (1995) explicitly suggest that the “Bungean” axiomatization is compatible with the Consistent Histories approach. Re sarcasm, he does seem to like expressing himself forcefully, but he was, I think, 91 when he wrote this.
NLSEmagic: Nonlinear Schrödinger Equation Multidimensional Matlab-based GPU-accelerated Integrators using Compact high-order schemes

Please donate to support NLSEmagic.

NLSEmagic is a package of C and MATLAB script codes which simulate the nonlinear Schrödinger equation in one, two, and three dimensions. The code includes MEX integrators in C, as well as NVIDIA CUDA-enabled GPU-accelerated MEX files in C. The MATLAB script files call the compiled MEX codes, forming an easy-to-use, highly efficient program. The codes utilize a fourth-order (in time) Runge-Kutta scheme combined with a choice of standard second-order (in space) finite differencing or compact two-step fourth-order (in space) finite differencing.

The code was developed as part of my Ph.D. dissertation and includes two versions. One is a streamlined, easy-to-follow script code which is meant as an example of how to use the MEX codes, while the other is a full research code which can reproduce my research results.

NLSEmagic is freely distributed for use and modification. However, a nominal donation and acknowledgment of authorship are appreciated.

NLSEmagic is in the process of being updated to version 020. The 1D code is now available! Further updates to come. (07/16/14)
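To make the scheme concrete, here is a minimal, self-contained MATLAB sketch of the kind of 1D integration described above: classical fourth-order Runge-Kutta in time with second-order central differences in space on a periodic grid. It is an illustrative stand-in only, not code taken from the NLSEmagic package (which uses compiled MEX/CUDA integrators); the equation normalization i*U_t + U_xx + |U|^2*U = 0, the grid sizes, the time step, and the soliton initial condition are all assumptions chosen just for this example.

```matlab
% Minimal 1D NLSE demo (illustrative sketch only, not part of NLSEmagic):
%   i*U_t + U_xx + |U|^2*U = 0   =>   U_t = 1i*(U_xx + |U|^2*U)
% Time stepping: classical RK4. Space: 2nd-order central differences,
% periodic boundary conditions.
N  = 512;                          % number of grid points
L  = 40;                           % domain is [-L, L)
x  = -L + (0:N-1).'*(2*L/N);       % periodic spatial grid (column vector)
h  = x(2) - x(1);                  % grid spacing
dt = 0.1*h^2;                      % well inside RK4's stability limit (~0.7*h^2)
U  = sqrt(2)*sech(x);              % bright-soliton profile for this normalization

% 2nd-order periodic Laplacian and the right-hand side U_t = F(U)
lap = @(u) (circshift(u,-1) - 2*u + circshift(u,1))/h^2;
F   = @(u) 1i*(lap(u) + abs(u).^2 .* u);

T = 5;                             % final time
for n = 1:round(T/dt)              % classical 4th-order Runge-Kutta loop
    k1 = F(U);
    k2 = F(U + 0.5*dt*k1);
    k3 = F(U + 0.5*dt*k2);
    k4 = F(U + dt*k3);
    U  = U + (dt/6)*(k1 + 2*k2 + 2*k3 + k4);
end

plot(x, abs(U).^2);                % soliton density should be essentially unchanged
xlabel('x'); ylabel('|U|^2');
```

The compact two-step fourth-order spatial option mentioned above would replace the simple `lap` stencil with a higher-order approximation, and the production code moves this time-stepping loop into compiled MEX (and CUDA) routines for speed; the structure of the integration, however, is the same as in this sketch.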
Creation, Providence, and Miracle

Dr. William Lane Craig

In treating divine action in the world, we must distinguish between creation, providence, and miracle. Creation has typically been taken to involve God's originating the world (creatio originans) and His sustaining the world in being (creatio continuans). A careful analysis of these two notions serves to differentiate creation from conservation. Providence is God's control of the world, either through secondary causes (providentia ordinaria) or supernaturally (providentia extraordinaria). A doctrine of divine middle knowledge supplies the key to understanding God's providence over the world mediated through secondary causes. Miracles are extraordinary acts of providence which should not be conceived, properly speaking, as violations of the laws of nature, but as the production of events which are beyond the causal powers of the natural entities existing at the relevant time and place.

Source: In Philosophy of Religion, ed. Brian Davies (Washington, D.C.: Georgetown University Press, 1998), pp. 136-162.

Creatio Ex Nihilo

"In the beginning God created the heavens and the earth" (Gen. 1.1). With majestic simplicity the author of the opening chapter of Genesis thus differentiated his viewpoint, not only from that of the ancient creation myths of Israel’s neighbors, but also effectively from pantheism, panentheism, and polytheism. For the author of Genesis 1, no pre-existent material seems to be assumed, no warring gods or primordial dragons are present--only God, who is said to "create" (bara, a word used only with God as its subject and which does not presuppose a material substratum) "the heavens and the earth" (et hassamayim we et ha ares, a Hebrew expression for the totality of the world or, more simply, the universe). Moreover, this act of creation took place "in the beginning" (bereshith, used here as in Is. 46.10 to indicate an absolute beginning). The author thereby gives us to understand that the universe had a temporal origin and thus implies creatio ex nihilo in the temporal sense that God brought the universe into being without a material cause at some point in the finite past.{1} Later biblical authors so understood the Genesis account of creation.{2} The doctrine of creatio ex nihilo is also implied in various places in early extra-biblical Jewish literature.{3} And the Church Fathers, while heavily influenced by Greek thought, dug in their heels concerning the doctrine of creation, sturdily insisting, with few exceptions, on the temporal creation of the universe ex nihilo in opposition to the eternity of matter.{4} A tradition of robust argumentation against the past eternity of the world and in favor of creatio ex nihilo, issuing from the Alexandrian Christian theologian John Philoponus, continued for centuries in Islamic, Jewish, and Christian thought.{5} In 1215, the Catholic Church promulgated temporal creatio ex nihilo as official church doctrine at the Fourth Lateran Council, declaring God to be "Creator of all things, visible and invisible, . . . who, by His almighty power, from the beginning of time has created both orders in the same way out of nothing." This remarkable declaration not only affirms that God created everything extra se without any material cause, but even that time itself had a beginning.
The doctrine of creation is thus inherently bound up with temporal considerations and entails that God brought the universe into being at some point in the past without any antecedent or contemporaneous material cause. At the same time, the Christian Scriptures also suggest that God is engaged in a sort of on-going creation, sustaining the universe in being. Christ "reflects the glory of God and bears the very stamp of His nature, upholding the universe by his word of power" (Heb. 1.3). Although relatively infrequently attested in Scripture in comparison with the abundant references to God’s original act of creation, the idea of continuing creation came to constitute an important aspect of the doctrine of creation as well. For Thomas Aquinas, for example, this aspect becomes the core doctrine of creation, the question of whether the world’s reception of being from God had a temporal commencement or not having only secondary importance.{6} For Aquinas creation is the immediate bestowal of being and as such belongs only to God, the universal principle of being; therefore, even if creatures have existed from eternity, they are still created ex nihilo in this metaphysical sense. Thus, God is conceived in Christian theology to be the cause of the world both in His initial act of bringing the universe into being and in His on-going conservation of the world in being. These two actions have been traditionally classed as species of creatio ex nihilo, namely, creatio originans and creatio continuans. While this is a handy rubric, it unfortunately quickly becomes problematic if pressed to technical precision. As Philip Quinn points out{7}, if we say that a thing is created at a time t only if t is the first moment of the thing’s existence, then the doctrine of creatio continuans lands us in a bizarre form of occasionalism, according to which no persisting individuals exist. At each instant God creates a new individual, numerically distinct from its chronological predecessor, so that diachronic personal identity and agency are precluded. Rather than re-interpret creation in such a way as to not involve a time at which a thing first begins to exist, we ought to recognize that creatio continuans is but a façon de parler and that creation needs to be distinguished from conservation. As John Duns Scotus observed,

Properly speaking . . . it is only true to say that a creature is created at the first moment (of its existence) and only after that moment is it conserved, for only then does its being have this order to itself as something that was, as it were, there before. Because of these different conceptual relationships implied by the words ‘create’ and ‘conserve’ it follows that one does not apply to a thing when the other does.{8}

Intuitively, creation involves God’s bringing something into being. Thus, if God creates some entity e (whether an individual or an event) at a time t (whether an instant or finite interval), then e comes into being at t. We can explicate this notion as follows:

E1. e comes into being at t iff (i) e exists at t, (ii) t is the first time at which e exists, and (iii) e’s existing at t is a tensed fact.

E2. God creates e at t iff God brings it about that e comes into being at t.

God’s creating e involves e’s coming into being, which is an absolute beginning of existence, not a transition of e from non-being into being.
In creation there is no patient entity on which the agent acts to bring about its effect.{9} It follows that creation is not a type of change, since there is no enduring subject which persists from one state to another. It is precisely for this reason that conservation cannot be properly thought of as essentially the same as creation. For conservation does presuppose a subject which is made to continue from one state to another. In creation God does not act on a subject, but constitutes the subject by His action; in contrast, in conservation God acts on an existent subject to perpetuate its existence. This is the import of Scotus’s remark that only in conservation does a creature "have this order to itself as something that was, as it were, there before." The fundamental difference between creation and conservation, then, lies in the fact that in conservation, as opposed to creation, there is presupposed a subject on which God acts. Intuitively, conservation involves God’s preservation of that subject in being over time. Conservation ought therefore to be understood in terms of God’s preserving some entity e from one moment of its existence to another. A crucial insight into conservation is that unlike creation, it does involve transition and therefore cannot occur at an instant.{10} We may therefore provide the following explication of divine conservation:

E3. God conserves e iff God acts upon e to bring about e’s existing from t until some t* > t through every sub-interval of the interval [t, t*].

Creation and conservation thus cannot be adequately analyzed with respect to the divine act alone, but involve relations to the object of the act. The act itself (the causing of existence) may be the same in both cases, but in the one case it may be instantaneous and presupposes no prior object, whereas in the other case it occurs over an interval and does involve a prior object. The doctrine of creation also involves an important metaphysical feature which is rarely appreciated: it commits one to a tensed or, in McTaggart’s convenient terminology, an A-Theory of time.{11} For if one adopts a tenseless or B-Theory of time, then things do not literally come into existence. Things are then four-dimensional objects which tenselessly subsist and begin to exist only in the sense that their extension along their temporal dimension is finite in the earlier-than direction. The whole four-dimensional, space-time manifold is extrinsically (as opposed to intrinsically) timeless, existing co-eternally with God. The universe thus does not come into being on a B-Theory of time, regardless of whether it has a finite or an infinite past relative to any time. Hence, clause (iii) in E2 represents a necessary feature of creation. In the absence of clause (iii), God’s creation of the universe ex nihilo could be interpreted along tenseless lines to postulate merely the finitude of cosmic time in the earlier-than direction.
Since a robust doctrine of creatio ex nihilo thus commits one to an A-Theory of time, we are brought face to face with what has been called "one of the most neglected, but also one of the most important questions in the dialogue between theology and science," namely, the relation between the concept of eternity and that of the spatio-temporal structure of the universe.{12} Since the rise of modern theology with Schleiermacher, the doctrine of creatio originans has been allowed to atrophy, while the doctrine of creatio continuans has assumed supremacy.{13} Undoubtedly this was largely due to theologians’ fear of a conflict with science, which creatio continuans permitted them to avoid by operating only within the safe harbor of metaphysics, removed from the realities of the physical, space-time world.{14} But the discovery in this century of the expansion of the universe, first predicted in 1922 by Alexander Friedmann on the basis of the General Theory of Relativity, coupled with the Hawking-Penrose singularity theorems of 1968, which demonstrated the inevitability of a past, cosmic singularity as an initial boundary to space-time, forced the doctrine of creatio originans back into the spotlight.{15} As physicists Barrow and Tipler observe, "At this singularity, space and time came into existence; literally nothing existed before the singularity, so, if the Universe originated at such a singularity, we would truly have a creation ex nihilo."{16}

Of course, various and sometimes heroic attempts have been made to avert the initial cosmological singularity posited in the standard Big Bang model and to regain an infinite past. But none of these alternatives has commended itself as more plausible than the standard model. The old steady state model, the oscillating model, and vacuum fluctuation models are now generally recognized among cosmologists to have failed as plausible attempts to avoid the beginning of the universe.{17} Most cosmologists believe that a final theory of the origin of the universe must await the as yet undiscovered quantum theory of gravity. Such quantum gravity models may or may not involve an initial singularity, although attention has tended to focus on those that do not. But even those that eliminate the initial singularity, such as the Hartle-Hawking model, still involve a merely finite past and, on any physically realistic interpretation of such models, imply a beginning of the universe. This is due to the peculiar feature of such models’ employment of imaginary, rather than real, values for the time variable in the equations governing the universe during the first 10^-43 sec of its existence. Imaginary quantities in science are fictional, without physical significance.{18} Thus, use of such numbers is a mathematical "trick" or auxiliary device to arrive at physically significant quantities represented by real numbers. The Euclidean four-space from which classical space-time emerges in such models is thus a mathematical fiction, a way of modeling the early universe which should not be taken as a literal description.{19} Now it might be said that so-called "imaginary time" just is a spatial dimension and to that extent is physically intelligible and so is to be realistically construed. But now the metaphysician must surely protest the reductionistic view of time which such an account presupposes.
Time as it plays a role in physics is an operationally defined quantity varying from theory to theory: in the Special Theory of Relativity it is a quantity defined via clock synchronization by light signals, in classical cosmology it is a parameter assigned to spatial hyper-surfaces of homogeneity, in quantum cosmology it is a quantity internally constructed out of the curvature variables of three-geometries. But clearly these are but pale abstractions of time itself.{20} For a series of mental events alone, a succession of contents of consciousness, is sufficient to ground time itself. An unembodied consciousness which experienced a succession of mental states, say, by counting, would be temporal; that is to say, time would in such a case exist, and that wholly in the absence of any physical processes. I take this simple consideration to be a knock-down argument that time as it plays a role in physics is at best a measure of time, rather than constitutive or definitive of time. Hence, even if one were to accept at face value the claim of quantum cosmological models that physical time really is imaginary prior to the Planck time, that is to say, is a spatial dimension, that fact says absolutely nothing at all about time itself. When it is said that such a regime exists timelessly, all that means is that our physical measures of time (which in physics are taken to define time) break down under such conditions. That should hardly surprise. But time itself must characterize such a regime for the simple reason that it is not static. I am astonished that quantum theorists can assert that the quantum regime is on the one hand a state of incessant activity or change and yet is on the other not characterized by time. If this is not to be incoherent, such a statement can only mean that our concepts of physical time are inapplicable on such a scale, not that time itself disappears. But if time itself characterizes the quantum regime, as it must if change is occurring, then one can regress mentally in time back along the imaginary time dimension through concentric circles on the spherical hyper-surface as they converge toward a non-singular point which represents the beginning of the universe and before which time did not exist. Hartle-Hawking themselves recognize that point as the origin of the universe in their model, but how that point came into being (in metaphysical, that is, ontological, time) is a question not even addressed by their theory. Hence, even on a naive realist construal of such models, they at best show that that quantity which is defined as time in physics ceases at the Planck time and takes on the characteristics of what physics defines as a spatial dimension. But time itself does not begin at the Planck time, but extends all the way back to the very beginning of the universe. Such theories, if successful, thus enable us to model the origin of the universe without an initial cosmological singularity and, by positing a finite imaginary time on a closed surface prior to the Planck time rather than an infinite time on an open surface, actually support temporal creatio ex nihilo. But if the spatio-temporal structure of the universe exhibits an origination ex nihilo, then the difficulty concerns how to relate that structure to the divine eternity. For given the reality of tense and God’s causal relation to the world, it is very difficult to conceive how God could remain untouched by the world’s temporality. 
Imagine God existing changelessly alone without creation, with a changeless and eternal determination to create a temporal world. Since God is omnipotent, His will is done, and a temporal world begins to exist. (We may lay aside for now the question whether this beginning of a temporal creation would require some additional act of intentionality or exercise of power other than God’s timeless determination.) Now in such a case, either God existed temporally prior to creation or He did not. If He did exist alone temporally prior to creation, then God is not timeless, but temporal, and the question is settled. Suppose, then, that God did not exist temporally prior to creation. In that case He exists timelessly sans creation. But once time begins at the moment of creation, God either becomes temporal in virtue of His real, causal relation to time and the world or else He exists as timelessly with creation as He does sans creation. But this second alternative seems quite impossible. At the first moment of time, God stands in a new relation in which He did not stand before (since there was no before). We need not characterize this as a change in God; but there is a real, causal relation which is at that moment new to God and which He does not have in the state of existing sans creation. At the moment of creation, God comes into the relation of causing the universe or at the very least that of co-existing with the universe, relations in which He did not before stand. Hence, even if God remains intrinsically changeless in creating the world, He nonetheless undergoes an extrinsic, or relational, change, which, if He is not already temporal prior to the moment of creation, draws Him into time at that very moment in virtue of His real relation to the temporal, changing universe. So even if God is timeless sans creation, His free decision to create a temporal world constitutes also a free decision on His part to enter into time and to experience the reality of tense and temporal becoming. The classic Thomistic response to the above argument is, remarkably, to deny that God’s creative activity in the world implies that God is really related to the world. Aquinas tacitly agrees that if God were really related to the temporal world, then He would be temporal.{21} In the coming to be of creatures, certain relations accrue to God anew and thus, if these relations be real for God, He must be temporal in light of His undergoing extrinsic change, wholly apart from the question of whether God undergoes intrinsic change in creating the world. So Thomas denies that God has any real relation to the world. According to Aquinas, while the temporal world does have the real relation of being created by God, God does not have a real relation of creating the temporal world. Since God is immutable, the new relations predicated of Him at the moment of creation are just in our minds; in reality the temporal world itself is created with a relation inhering in it of dependence on God. Hence, God’s timelessness is not jeopardized by His creation of a temporal world. This unusual doctrine of creation becomes even stranger when we reflect on the fact that in creating the world God does not perform some act extrinsic to His nature; rather the creature (which undergoes no change but simply begins to exist) begins to be with a relation to God of being created by God. 
According to this doctrine, then, God in freely creating the universe does not really do anything different than He would have, had He refrained from creating; the only difference is to be found in the universe itself: instead of God existing alone sans the universe we have instead a universe springing into being at the first moment of time possessing the property being created by God, even though God, for His part, bears no real reciprocal relation to the universe made by Him. I think it hardly needs to be said that Thomas’s solution, despite its daring and ingenuity, is extraordinarily implausible. "Creating" clearly describes a relation which is founded on something’s intrinsic properties concerning its causal activity, and therefore creating the world ought to be regarded as a real property acquired by God at the moment of creation. It seems unintelligible, if not contradictory, to say that one can have real effects without real causes. Yet this is precisely what Aquinas affirms with respect to God and the world. Moreover, it is the implication of Aquinas’s position that God is perfectly similar across possible worlds, the same even in worlds in which He refrains from creation as in worlds in which He creates. For in none of these worlds does God have any relation to anything extra se. In all these worlds God never acts differently, He never cognizes differently, He never wills differently; He is just the simple, unrelated act of being. Even in worlds in which He does not create, His act of being, by which creation is produced, is no different in these otherwise empty worlds than in worlds chock-full of contingent beings of every order. Thomas’s doctrine thus makes it unintelligible why the universe exists rather than nothing. The reason obviously cannot lie in God, either in His nature or His activity (which are only conceptually distinct anyway), for these are perfectly similar in every possible world. Nor can the reason lie in the creatures themselves, in that they have a real relation to God of being freely willed by God. For their existing with that relation cannot be explanatorily prior to their existing with that relation. I conclude, therefore, that Thomas’ solution, based in the denial of God’s real relation to the world, cannot succeed in hermetically sealing off God in atemporality. The above might lead one to conclude that God existed temporally prior to His creation of the universe in a sort of metaphysical time. But while it makes sense to speak of such a metaphysical time prior to the inception of physical time at the Big Bang (think of God’s counting down to creation: . . ., 3, 2, 1, fiat lux!), the notion of an actual infinity of past events or intervals of time seems strikingly counter-intuitive. Not only would we be forced to swallow all the bizarre and ultimately contradictory consequences of an actual infinite, but we would also be saddled with the prospect of God’s having "traversed" the infinite past one moment at a time until He arrived at the moment of creation, which seems absurd. 
Moreover, on such an essentially Newtonian view of time, we would have to answer the difficult question which Leibniz lodged against Clarke: why did God delay for infinite time the creation of the world?{22} In view of these perplexities, it seems more plausible to adopt the Leibnizian alternative of some sort of relational view of time according to which time does not exist in the utter absence of events.{23} God existing alone sans creation would be changeless and, hence, timeless, and time would begin at the first event, which, for simplicity’s sake, we may take to be the Big Bang. God’s bringing the initial cosmological singularity into being is simultaneous (or coincident) with the singularity’s coming into being, and therefore God is temporal from the moment of creation onward. Though we might think of God as existing, say, one hour prior to creation, such a picture is, as Aquinas states, purely the product of our imagination and time prior to creation merely an imaginary time (in the phantasmagorical, not mathematical, sense!).{24} Why, then, did God create the world? It has been said that if God is essentially characterized by self-giving love, creation becomes necessary.{25} But the Christian doctrine of the Trinity suggests another possibility. Insofar as He exists sans creation, God is not, on the Christian conception, a lonely monad, but in the tri-unity of His own being, God enjoys the full and unchanging love relationships among the persons of the Trinity. Creation is thus unnecessary for God and is sheer gift, bestowed for the sake of creatures, that we might experience the joy and fulfillment of knowing God. He invites us, as it were, into the inner-Trinitarian love relationship as His adopted children. Thus, creation, as well as salvation, is sola gratia.  The biblical worldview involves a very strong conception of divine sovereignty over the world and human affairs, even as it presupposes human freedom and responsibility. While too numerous to list here, biblical passages affirming God’s sovereignty have been grouped by D. A. Carson under four main heads: (1) God is the Creator, Ruler, and Possessor of all things, (2) God is the ultimate personal cause of all that happens, (3) God elects His people, and (4) God is the unacknowledged source of good fortune or success.{26} No one taking these passages seriously can embrace currently fashionable libertarian revisionism, which denies God’s sovereignty over the contingent events of history. On the other hand, the conviction that human beings are free moral agents also permeates the Hebrew way of thinking, as is evident from passages listed by Carson under nine heads: (1) People face a multitude of divine exhortations and commands, (2) people are said to obey, believe, and choose God, (3) people sin and rebel against God, (4) people’s sins are judged by God, (5) people are tested by God, (6) people receive divine rewards, (7) the elect are responsible to respond to God’s initiative, (8) prayers are not mere showpieces scripted by God, and (9) God literally pleads with sinners to repent and be saved.{27} These passages rule out a traditional deterministic understanding of divine providence, which precludes human freedom. Reconciling these two streams of biblical teaching without compromising either has proven extraordinarily difficult. 
Nevertheless, a startling solution to this enigma emerges from the doctrine of divine middle knowledge crafted by the Counter-Reformation Jesuit theologian Luis Molina.{28} Molina proposes to furnish an analysis of divine knowledge in terms of three logical moments. Although whatever God knows, He knows eternally, so that there is no temporal succession in God’s knowledge, nonetheless there does exist a sort of logical succession in God’s knowledge in that His knowledge of certain propositions is conditionally or explanatorily prior to His knowledge of certain other propositions. In the first, unconditioned moment God knows all possibilia, not only all individual essences, but also all possible worlds. Molina calls such knowledge "natural knowledge" because the content of such knowledge is essential to God and in no way depends on the free decisions of His will. By means of His natural knowledge, then, God has knowledge of every contingent state of affairs which could possibly obtain and of what the exemplification of the individual essence of any free creature could freely choose to do in any such state of affairs that should be actual. In the second moment, God possesses knowledge of all true counterfactual propositions, including counterfactuals of creaturely freedom. Whereas by His natural knowledge God knew what any free creature could do in any set of circumstances, now in this second moment God knows what any free creature would do in any set of circumstances. This is not because the circumstances causally determine the creature’s choice, but simply because this is how the creature would freely choose. God thus knows that were He to actualize certain states of affairs, then certain other contingent states of affairs would obtain. Molina calls this counterfactual knowledge "middle knowledge" because it stands in between the first and third moment in divine knowledge. Middle knowledge is like natural knowledge in that such knowledge does not depend on any decision of the divine will; God does not determine which counterfactuals of creaturely freedom are true or false. Thus, if it is true that If some agent S were placed in circumstances C, then he would freely perform action a, then even God in His omnipotence cannot bring it about that S would freely refrain from a if he were placed in C. On the other hand, middle knowledge is unlike natural knowledge in that the content of His middle knowledge is not essential to God. True counterfactuals are contingently true; S could freely decide to refrain from a in C, so that different counterfactuals could be true and be known by God than those that are. Hence, although it is essential to God that He have middle knowledge, it is not essential to Him to have middle knowledge of those particular propositions which He does in fact know. Given God’s free decision to actualize a world, in the third and final moment God possesses knowledge of all remaining propositions that are in fact true in the actual world, including future contingent propositions. Such knowledge is denominated "free knowledge" by Molina because it is logically posterior to the decision of the divine will to actualize a world. The content of such knowledge is clearly not essential to God, since He could have decreed to actualize a different world. Had He done so, the content of His free knowledge would be different. The doctrine of middle knowledge is a doctrine of remarkable theological fecundity. 
Molina’s scheme would resolve in a single stroke most of the traditional difficulties concerning divine providence and human freedom. Molina defines providence as God’s ordering of things to their ends, either directly or mediately through secondary agents. By His middle knowledge God knows an infinity of orders which He could instantiate because He knows how the creatures in them would in fact freely respond given the various circumstances. He then decides by the free act of His will how He would respond in these various circumstances and simultaneously wills to bring about one of these orders. He directly causes certain circumstances to come into being and others indirectly by causally determined secondary causes. Free creatures, however, He allows to act as He knew they would when placed in such circumstances, and He concurs with their decisions in producing in being the effects they desire. Some of these effects God desired unconditionally and so wills positively that they occur, but others He does not unconditionally desire, but nevertheless permits due to His overriding desire to allow creaturely freedom and knowing that even these sinful acts will fit into the overall scheme of things, so that God’s ultimate ends in human history will be accomplished.{29} God has thus providentially arranged for everything that happens by either willing or permitting it, yet in such a way as to preserve freedom and contingency. Molinism thus effects a dramatic reconciliation between divine sovereignty and human freedom.

Before we embrace such a solution, however, we should ask what objections might be raised against a Molinist account. Surveying the literature, one discovers that the detractors of Molinism tend not so much to criticize the Molinist doctrine of providence as to attack the concept of middle knowledge upon which it is predicated. It is usually alleged that counterfactuals of freedom are not bivalent or are uniformly false or that God cannot know such counterfactual propositions. These objections have been repeatedly refuted by defenders of middle knowledge,{30} though opposition dies hard. But as Freddoso and Wierenga pointed out in an American Philosophical Association session devoted to a recent popularization of libertarian revisionism, until the opponents of middle knowledge answer the refutations of their objections--which they have yet to do--there is little new to be said in response to their criticisms.

Let us consider, then, objections, not to middle knowledge per se, but to a Molinist account of providence. Robert Adams has recently argued that divine middle knowledge of counterfactuals of creaturely freedom is actually incompatible with human freedom. Although inspired by an argument of William Hasker for the same conclusion, Adams’s argument avoids any appeal to Hasker’s dubious--and, I should say, clearly false--premiss that on the Molinist view counterfactuals of freedom are more fundamental features of the world than are categorical facts.{31} Adams summarizes his argument "very roughly" as follows:

"Suppose it is not only true that P would do A if placed in circumstances C; suppose that truth was settled, as Molinism implies, prior to God’s deciding what, if anything, to create, and it would therefore have been a truth even if P had never been in C--indeed even if P had never existed. Then it is hard to see how it can be up to P to determine freely whether P does A in C."{32}

Granted that this summary is admittedly very rough, still it is frustratingly ambiguous.
The argument seems to assume as a premiss that there is a true counterfactual of creaturely freedom Ø that If P were in C, P would do A, whose antecedent is true. Is the objection then supposed to be aimed at the imagined claim that P freely brings about the truth of Ø? Is Adams asserting that P cannot freely bring about the truth of Ø because if, posterior to God’s middle knowledge of Ø, P were not in C or did not exist at all, Ø would still be true, though P never does A in C, which is absurd? Is Adams saying that once the content of God’s middle knowledge is fixed, P is no longer free with respect to A in C? If this is the argument, then it is just the old bogey of fatalism raising its fallacious head in a new guise, as Jonathan Kvanvig points out effectively in his critique of Adams’s similar argument against the temporal pre-existence of "thisnesses."{33} Just as we have the power to act in such a way that were we to do so, future-tense propositions which were in fact true would not have been true, so things can happen differently than they will, in which case thisnesses and singular propositions which in fact exist(ed) would not have existed. Analogously, the Molinist could hold that it is within our power so to act that were we to do so, the truth of counterfactuals of creaturely freedom which is brought about by us would not have been brought about by us. But perhaps this is not what Adams intends. Maybe the argument is that if Ø is true logically prior to God’s decree, then God still has the choice whether to instantiate worlds in which the antecedent of Ø is true or not. If, then, God decrees to actualize a world in which P is not in C or does not exist at all, Ø still remains true, being part of what Thomas Flint calls the "world type" which confronts God prior to His decree.{34} But then how can P bring about the truth of Ø, if P does not even exist? The Molinist answer to that question, however, is straightforward: P does not in that case bring about the truth of Ø. The hypothetical Molinist against whom this objection is directed holds ex hypothesi "that in the case of a true counterfactual of freedom with a true antecedent it is the agent of the free action described in the consequent who brings it about that the conditional is true."{35} That claim is consistent--though I, like Adams, cannot imagine why any Molinist should want to maintain such a claim--with the further claim that in cases of true counterfactuals of creaturely freedom lacking true antecedents, their truth is not brought about by the agents described. In my opinion, it is better to say that in all cases of true counterfactuals of creaturely freedom, the truth of a counterfactual like Ø is grounded in the obtaining in the actual world (logically prior to God’s decree) of the counterfactual state of affairs that if P were in C, then he would do A, and that any further explanation of this fact implicitly denies libertarianism.{36} Just as a true, contingent, future-tense proposition of the form It will be the case that P does A at t cannot be explained in terms of the truth of a tenseless proposition of the form P does A at t, so it is futile to try to explain true counterfactuals of creaturely freedom of the form If P were in C, P would do A in terms of categorical, indicative propositions of a form like P will do A in C. Just as irreducibly tensed facts are needed in the former case, conditional subjunctive facts are needed in the latter. 
Be that as it may, however, Adams’s intuitive reasoning provides no grounds for rejecting either the view that the truth of counterfactuals of creaturely freedom with true antecedents is brought about by the agents described or the view that the truth of counterfactuals of creaturely freedom of any kind is not brought about by the agents described. Having summarized the intuitive basis of his argument, Adams develops the following more rigorous formulation:

1. According to Molinism, the truth of all true counterfactuals of freedom about us is explanatorily prior to God’s decision to create us.
2. God’s decision to create us is explanatorily prior to our existence.
3. Our existence is explanatorily prior to all of our choices and actions.
4. The relation of explanatory priority is transitive.
5. Therefore it follows from Molinism (by 1-4) that the truth of all true counterfactuals of freedom about us is explanatorily prior to all of our choices and actions.
10. It follows also from Molinism that if I freely do action A in circumstances C, then there is a true counterfactual of freedom F*, which says that if I were in C, then I would (freely) do A.
11. Therefore, it follows from Molinism that if I freely do A in C, the truth of F* is explanatorily prior to my choosing and acting as I do in C.
12. If I freely do A in C, no truth that is strictly inconsistent with my refraining from A in C is explanatorily prior to my choosing and acting as I do in C.
13. The truth of F* (which says that if I were in C, then I would do A) is strictly inconsistent with my refraining from A in C.
14. If Molinism is true, then if I freely do A in C, F* both is (by 11) and is not (by 12-13) explanatorily prior to my choosing and acting as I do in C.
15. Therefore, (by 14) if Molinism is true, then I do not freely do A in C.

In his critique of Adams’s earlier anti-Molinist argument, Alvin Plantinga charged that the argument is unsound because the dependency relation involved is not a transitive relation.{37} It seems to me that the present argument shares a similar failing. The notion of "explanatory priority" as it plays a role in the argument seems to me equivocal, and if a univocal sense can be given it, there is no reason to expect it to be transitive. Consider the explanatory priority in (2) and (3). Here a straightforward interpretation of this notion can be given in terms of the counterfactual dependence of consequent on condition:

2’. If God had not created us, we should not exist.
3’. If we were not to exist, we should not make any of our choices and actions.

Both (2’) and (3’) are metaphysically necessary truths. But this sense of explanatory priority is inapplicable to (1), for

1’. According to Molinism, if all true counterfactuals of freedom about us were not true, God would not have decided to create us

is false. Molinism makes no such assertion, since God might still have created us even if the actually true counterfactuals of creaturely freedom were false or even, per impossibile, if no such counterfactuals at all were true. The sense of explanatory priority in (1) must therefore be different than it is in (2) and (3). The root of the difficulty seems to be a conflation of reasons and causes on Adams’s part. The priority in (2) and (3) is a sort of causal or ontic priority, but the priority in (1) is not causal or ontic, since the truth of all counterfactuals of creaturely freedom is neither a necessary nor a sufficient condition of God’s decision to create us.
At best, the truth of such counterfactuals is prior to His decision in providing a partial reason for that decision. Adams’s mistake seems to be that he leaps from God’s decision in the hierarchy of reasons to God’s decision in the hierarchy of causes and by this equivocation tries to make counterfactuals of creaturely freedom explanatorily prior to our free choices. Perhaps Adams can enunciate a univocal sense of "explanatory priority" that is applicable to (1)-(3). But I suspect that any such notion would be so generic that we should have to deny its transitivity or so weak that it would not be inimical to human freedom.

This suspicion is borne out by Hasker’s very recent attempt to save Adams’s argument by enunciating a very broad conception of explanatory priority which is univocal in (1)-(3) and yet transitive: for contingent states of affairs p and q,

EP: p is explanatorily prior to q iff p must be included in a complete explanation of why q obtains.

Hasker asserts, "It should be apparent that explanatory priority as explicated by (EP) is transitive: if p is explanatorily prior to q, and q to r, then clearly p must be included in a complete explanation of why r obtains."{38} But this is not at all clear. As Hasker observes, such a relation must also be irreflexive: "a contingent state of affairs cannot constitute an explanation (in whole or in part) of itself."{39} But if the relation described by (EP) is transitive, then it seems that the condition of irreflexivity is violated. My wife and I not infrequently find ourselves in the situation that I want to do something if she wants to do it, and she wants to do it if I want to do it. Suppose, then, that John is going to the party because Mary is going, and Mary is going to the party because John is going. It follows that if the (EP) relation is transitive, John is going to the party because John is going to the party, which conclusion is obviously wrong. Not only is such a conclusion explanatorily vacuous, but it also implies, in conjunction with (12), that John does not freely go to the party--the very conclusion Hasker wants to avoid.

Adams’s reductio also fails because (12) is false. What is undeniably true is

12’. If I freely do A in C, no truth that is strictly inconsistent with my doing A in C is explanatorily prior to my choosing and acting as I do in C.

But why would we be tempted to think that no truth which is inconsistent with my not doing A in C is explanatorily prior to my freely doing A in C? Certainly

F**. If I were in C, then I would not do A

cannot be explanatorily prior to my freely doing A in C; but why would F** not be explanatorily prior to my freely not doing A in C? Adams’s intuition seems to be that if F* were explanatorily prior to my doing A in C, then I could not refrain from A; and being able to refrain from A is a necessary condition of my doing A freely.{40} But such an assumption seems doubly wrong. First, it represents once more the fallacious reasoning of fatalism. Though F* is (ex concessionis) in fact explanatorily prior to my freely doing A in C, it is within my power to refrain from doing A in C; only, were I to do so, F* would not then be explanatorily prior to my action nor a part of God’s middle knowledge. Until Adams can show that the content of God’s middle knowledge is a "hard fact," his argument based on (12) is undercut. Second, my being able to refrain from doing A in C is not a necessary condition of my freely doing A in C.
For perhaps I do A in C without any causal constraint, but it is also the case that God would not permit me to refrain from A in C. Perhaps it is true that

G. If I were to attempt to refrain from doing A in C, God would not permit me to refrain from doing A in C.

(G) is inconsistent with my refraining from doing A in C, and yet it may well be explanatorily prior to my freely doing A in C. Flint’s essay on infallibility, which appears in the same volume as Adams’s, provides a good illustration.{41} Suppose I am the Pope and A is promulgating ex cathedra only correct doctrine. God knew via His middle knowledge that if I were in C, I would freely do A. Therefore, His creative decree includes my being elected Pope. Given papal infallibility, (G) may also be true and part of God’s middle knowledge, and so is explanatorily prior to my freely doing A in C. But (G) is inconsistent with my refraining from A in C. If such a scenario is coherent--and Flint seems to have refuted all objections to it--then (12) is false. The sense of explanatory priority explicated in Hasker’s (EP) is so weak that even if the Molinist simply concedes the truth of (5) in this sense, (12) is all the more obviously false. For counterfactuals concerning our free actions may be explanatorily prior to those actions only in the sense that God’s reason for creating us may have been in part that He knew we should freely do such things. But it is wholly mysterious how this sense of explanatory priority is incompatible with our performing such actions freely. In a footnote, Hasker claims that Adams’s argument can be freed from reliance on (12), referring the reader to his own argument against middle knowledge.{42} But the duly attentive reader will find in that discussion nothing but a reiteration of Hasker’s previous argument on this score, with no refutation of the several objections lodged against it in the literature.{43}

Thus, it seems to me that both sides of Adams’s reductio argument are unsound. His attempt to show that counterfactuals of creaturely freedom are explanatorily prior to our actions fails due to equivocation. And even if they were in some peculiar sense explanatorily prior to our actions because they are true and known by God logically prior to categorical contingent propositions, that would not be incompatible with the freedom of our actions. In short, neither Adams nor Hasker has been able to explicate a sense of explanatory priority with respect to the truth of counterfactuals of creaturely freedom which is both transitive and inimical to human freedom. Given that the objections against a Molinist doctrine of providence thus fail, the theological power of such an account ought to prompt us to avail ourselves of it.

It hardly needs to be demonstrated that the biblical narrative of divine action in the world is a narrative replete with miraculous events. God is conceived to bring about events which natural things, left to their own resources, would not bring about. Hence, miracles are able to function as signs of divine activity.{44} "Why, this is a marvel!" exclaims the man born blind, when confronted with the Pharisees’ scepticism concerning Jesus’s rectification of his sight, "Never since the world began has it been heard that any one opened the eyes of a man born blind. If this man were not from God, he could do nothing" (Jn. 9.30-33).
In order to differentiate between the customary way in which God acts and His special, miraculous action, theologians have traditionally distinguished within divine providence God’s providentia ordinaria and His providentia extraordinaria, the latter being identified with miracles. But our exposition of divine providence based on God’s middle knowledge suggests a category of non-miraculous, special providence, which it will be helpful to distinguish. One has in mind here events which are the product of natural causes but whose context is such as to suggest a special divine intention with regard to their occurrence. For example, just as the Israelites approach the Jordan River, a rockslide upstream temporarily blocks the water’s flow, enabling them to cross into the Promised Land (Josh. 3.14-17); or again, as Paul and Silas lie bound in prison for preaching the gospel, an earthquake occurs, springing the prison doors and unfastening their fetters (Acts 16.25-26). By means of His middle knowledge, God can providentially order the world so that the natural causes of such events are, as it were, ready and waiting to produce such events at the propitious time, perhaps in answer to prayers which God knew would be offered. Of course, if such prayers were not to be offered or the contingent course of events were to go differently, then God would have known this and so not arranged the natural causes, including human free volitions, to produce the special providential event. Events wrought by special providence are no more outside the course and capacity of nature than are events produced by God’s ordinary providence, but the context of such events, such as their timing, their coincidental nature, and so forth, is such as to point to a special divine intention to bring them about.

If, then, we distinguish miracles from both God’s providentia ordinaria and extraordinaria, how should we characterize miracles? Since the dawning of modernity, miracles have been widely understood to be "violations of the laws of nature." In his Dictionary article on miracles, for example, Voltaire states that according to accepted usage, "A miracle is the violation of mathematical, divine, immutable, eternal laws" and is therefore a contradiction.{45} Voltaire is in fact quite right that such a definition is a contradiction, but this ought to have led him to conclude, not that miracles can thus be defined out of existence, but that the customary definition is defective. Indeed, an examination of the chief competing schools of thought concerning the notion of a natural law in fact reveals that on each theory the concept of a violation of a natural law is incoherent and that miracles need not be so defined.

Broadly speaking, there are three main views of natural law today: the regularity theory, the nomic necessity theory, and the causal dispositions theory.{46} According to the regularity theory, the "laws" of nature are not really laws at all, but just generalized descriptions of the way things happen in the world. They describe the regularities which we observe in nature. Now since on such a theory a natural law is just a generalized description of whatever occurs in nature, it follows that no event which occurs can violate such a law. Instead, it just becomes part of the description. The law cannot be violated, because it describes in a certain generalized form everything that does happen in nature.
According to the nomic necessity theory, natural laws are not merely descriptive, but tell us what can and cannot happen in the natural world. They allow us to make certain counterfactual judgments, such as "If the density of the universe were sufficiently high, it would have re-contracted long ago," which a purely descriptivist theory would not permit. Again, however, since natural laws are taken to be universal inductive generalizations, a violation of a natural law is no more possible on this theory than on the regularity theory. So long as natural laws are universal generalizations based on experience, they must take account of anything that happens and so would be revised should an event occur which the law does not encompass.

Of course, in practice proponents of such theories do not treat natural laws so rigidly. Rather, natural laws are assumed to have implicit in them certain ceteris paribus assumptions, such that a law states what is the case under the assumption that no other natural factors are interfering. When a scientific anomaly occurs, it is usually assumed that some unknown natural factors are interfering, so that the law is neither violated nor revised. But suppose the law fails to describe or predict accurately because some supernatural factors are interfering? Clearly the implicit assumption of such laws is that no supernatural factors, as well as no natural factors, are interfering. If the law proves inaccurate in a particular case because God is acting, the law is neither violated nor revised. If God brings about some event which a law of nature fails to predict or describe, such an event cannot be characterized as a violation of a law of nature, since the law is valid only on the assumption that no supernatural factors in addition to the natural factors come into play.

On such theories, then, miracles ought to be defined as naturally impossible events, that is to say, events which cannot be produced by the natural causes operative at a certain time and place. Whether an event is a miracle is thus relative to a time and place. Given the natural causes operative at a certain time and place, for example, rain may be naturally inevitable or necessary, but on another occasion, rain may be naturally impossible. Of course, some events, say, the resurrection, may be absolutely miraculous in that they are at every time and place beyond the productive capacity of natural causes.

According to the causal dispositions theory, things in the world have different natures or essences, which include their causal dispositions to affect other things in certain ways, and natural laws are metaphysically necessary truths about what causal dispositions are possessed by various natural kinds of things. For example, "Salt has a disposition to dissolve in water" would state a natural law. If, due to God’s action, some salt failed to dissolve in water, the natural law is not violated, because it is still true that salt has such a disposition. As a result of things’ causal dispositions, certain deterministic natural propensities exist in nature, and when such a propensity is not impeded (by God or some other free agent), then we can speak of a natural necessity. On this theory, an event which is naturally necessary must and does actually occur, since the natural propensity will automatically issue forth in the event if it is not impeded. By the same token, a naturally impossible event cannot and does not actually occur.
Hence, a miracle cannot be characterized on this theory as a naturally impossible event. Rather, a miracle is an event which results from causal interference with a natural propensity which is so strong that only a supernatural agent could impede it. The concept of miracle is essentially the same as under the previous two theories, but one just cannot call a miracle "naturally impossible" as those terms are defined in this theory; perhaps we could adopt instead the nomenclature "physically impossible" to characterize miracles under such a theory.

On none of these theories, then, should miracles be understood as violations of the laws of nature. Rather, they are naturally (or physically) impossible events, events which at certain times and places cannot be produced by the relevant natural causes. Now the question is, what could conceivably transform an event that is naturally impossible into a real historical event? Clearly, the answer is the personal God of theism. For if a transcendent, personal God exists, then He could cause events in the universe that could not be produced by causes within the universe. Given a God who created the universe, who conserves the world in being, and who is capable of acting freely, Christian theologians seem to be entirely justified in maintaining that miracles are possible. Indeed, if it is even (epistemically) possible that such a transcendent, personal God exists, then it is equally possible that He has acted miraculously in the universe. Only to the extent that one has good grounds for believing atheism to be true could one be rationally justified in denying the possibility of miracles. In this light arguments for the impossibility of miracles based upon defining them as violations of the laws of nature become fatuous.

The more interesting question is whether the identification of any event as a miracle is possible. On the one hand, it might be argued that a convincing demonstration that a purportedly miraculous event has occurred would only succeed in forcing us to revise natural law so as to accommodate the event in question. But as Swinburne has argued, a natural law is not abolished because of one exception; the counter-instance must occur repeatedly whenever the conditions for it are present.{47} If an event occurs which is, as Swinburne puts it, contrary to a law of nature and we have reasons to believe that this event would not occur again under similar circumstances, then the law in question will not be abandoned. One may regard an anomalous event as repeatable if another formulation of the natural law better accounts for the event in question, and if it is no more complex than the original law. If any doubt exists, the scientist may conduct experiments to determine which formulation of the law proves more successful in predicting future phenomena. In a similar way, one would have good reason to regard an event as a non-repeatable counter-instance to a law if the reformulated law were much more complicated than the original without yielding better new predictions, or if it predicted new phenomena unsuccessfully where the original formulation predicted successfully. If the original formulation remains successful in predicting all new phenomena as the data accumulate, while no reformulation does any better in predicting the phenomena and explaining the event in question, then the event should be regarded as a non-repeatable counter-instance to the law.
Hence, a miraculous event would not serve to upset the natural law. On the other hand, it might be urged that if a purportedly miraculous event were demonstrated to have occurred, we should conclude that the event occurred in accordance with unknown natural causes and laws. The question is, what serves to distinguish a genuine miracle from a mere scientific anomaly? Here the religio-historical context of the event becomes crucial. A miracle without a context is inherently ambiguous. But if a purported miracle occurs in a significant religio-historical context, then the chances of its being a genuine miracle are increased. For example, if the miracles occur at a momentous time (say, a man’s leprosy vanishing when Jesus speaks the words, "Be clean!") and do not recur regularly in history, and if the miracles are numerous and various, then the chances of their being the result of some unknown natural causes are reduced. In Jesus’s case, moreover, his miracles and resurrection ostensibly took place in the context of and as the climax to his own unparalleled life and teachings and produced so profound an effect on his followers that they called him Lord.

The central miracle of the New Testament, the resurrection of Jesus, was, if it occurred, doubtlessly a miracle. In the first place, the resurrection so exceeds what we know of natural causes that it can only be reasonably attributed to a supernatural cause. The more we learn about cell necrosis, the more evident it becomes that such an event is naturally impossible. If it were the effect of unknown natural causes, then its uniqueness in the history of mankind would be inexplicable. Secondly, the supernatural explanation is given immediately in the religio-historical context in which the event occurred. Jesus’s resurrection was not merely an anomalous event, occurring without context; it came as the climax to Jesus’s own life and teachings. As Wolfhart Pannenberg explains,

Jesus’ claim to authority, through which he put himself in God’s place, was . . . blasphemous for Jewish ears. Because of this Jesus was then also slandered before the Roman Governor as a rebel. If Jesus really has been raised, this claim has been visibly and unambiguously confirmed by the God of Israel, who was allegedly blasphemed by Jesus.{49}

We should therefore have good reasons to regard Jesus’s resurrection, if it occurred, as truly miraculous. Thus, while it may, indeed, be difficult to know in some cases whether a genuine miracle has occurred, that does not imply pessimism with respect to all cases.

But perhaps the very natural impossibility of a genuine miracle precludes our ever identifying an event as a miracle. As Hume notoriously argued, perhaps it is always more rational to believe that some mistake or deception is at play than to believe that a genuine miracle has occurred.{50} This conclusion is based on Hume’s principle that it is always more probable that the testimony to a miracle is false than that the miracle occurred. But Hume’s principle incorrectly assumes that miracles are highly improbable. With respect to the resurrection of Jesus, for example, the hypothesis "God raised Jesus from the dead" is not improbable, either relative to our background information or to the specific evidence. What is improbable relative to our background information is the hypothesis "Jesus rose naturally from the dead." Given what we know of cell necrosis, that hypothesis is fantastically, even unimaginably, improbable.
Conspiracy theories, apparent death theories, hallucination theories, twin brother theories--almost any hypothesis, however unlikely, seems more probable than the hypothesis that all the cells in Jesus’s corpse spontaneously came back to life again. But such naturalistic hypotheses are not more probable than the hypothesis that God raised Jesus from the dead. The evidence for the laws of nature relevant in this case makes it probable that a resurrection from the dead is naturally impossible, which renders improbable the hypothesis that Jesus rose naturally from the grave. But such evidence is simply irrelevant to the probability of the hypothesis that God raised Jesus from the dead. That hypothesis needs to be weighed in light of the specific evidence concerning such facts as the post-mortem appearances of Jesus, the vacancy of the tomb where Jesus’s corpse was laid, the origin of the original disciples’ firm belief that God had, in fact, raised Jesus, and so forth, in the religio-historical context in which the events took place, and assessed in terms of the customary criteria used in justifying historical hypotheses, such as explanatory power, explanatory scope, plausibility, and so forth. When this is done, there is no reason a priori to expect that it will be more probable that the testimony is false than that the hypothesis of miracle is true. Given the God of creation and providence described in classical theism, miracles are possible and, when occurring under certain conditions, plausibly identifiable.

Guide to Further Reading

Bilinskyji, Stephen S. "God, Nature, and the Concept of Miracle." Ph.D. dissertation, University of Notre Dame, 1982.
Craig, William Lane and Smith, Quentin. Theism, Atheism, and Big Bang Cosmology. Oxford: Clarendon Press, 1993.
Freddoso, Alfred J. "The Necessity of Nature." Midwest Studies in Philosophy 11 (1986): 215-242.
Hebblethwaite, Brian and Henderson, Edward, eds. Divine Action. Edinburgh: T. & T. Clark, 1990.
Molina, Luis de. On Divine Foreknowledge: Part IV of the "Concordia". Translated with an Introduction and Notes by Alfred J. Freddoso. Ithaca, N.Y.: Cornell University Press, 1988.
Morris, Thomas V., ed. Divine and Human Action. Ithaca, N.Y.: Cornell University Press, 1988. See especially articles by Quinn, Kvanvig and McCann, Flint, and Freddoso.
Quinn, Philip L. "Creation, Conservation, and the Big Bang." In Philosophical Problems of the Internal and External Worlds, pp. 589-612. Edited by John Earman, et al. Pittsburgh: University of Pittsburgh Press, 1993.
Swinburne, Richard. The Concept of Miracle. New York: Macmillan, 1970.
________, ed. Miracles. Philosophical Topics. New York: Macmillan Publishing Co., 1989.
Tomberlin, James E., ed. Philosophical Perspectives. Vol. 5: Philosophy of Religion. Atascadero, Calif.: Ridgeway Publishing, 1991. See especially articles by Flint, Kvanvig and McCann, and Freddoso.

{1}On Gen. 1.1 as an independent clause which is not a mere chapter title, see Claus Westermann, Genesis 1-11, trans. John Scullion (Minneapolis: Augsburg, 1984), p. 97; John Sailhamer, Genesis, Expositor’s Bible Commentary 2, ed. Frank Gaebelein (Grand Rapids, Mich.: Zondervan, 1990), p. 21.
{2}See, e.g., Prov. 8.27-9; cf. Ps. 104.5-9; also Is. 44.24; 45.18, 24; Ps. 33.9; 90.2; Jn. 1.1-3; Rom. 4.17; 11.36; I Cor. 8.6; Col. 1.16, 17; Heb. 1.2-3; 11.3; Rev. 4.11.
{3}E.g., II Maccabees 7.28; 1QS 3.15; Joseph and Aseneth 12.1-3; II Enoch 25.1ff; 26.1; Odes of Solomon 16.18-19; II Baruch 21.4.
For discussion, see Paul Copan, "Is Creatio ex nihilo a Post-biblical Invention? An Examination of Gerhard May’s Proposal," Trinity Journal 17 (1996): 77-93.
{4}Creatio ex nihilo is affirmed in the Shepherd of Hermas 1.6; 26.1 and the Apostolic Constitutions 8.12.6,8; and by Tatian Oratio ad graecos 5.3; cf. 4.1ff; 12.1; Theophilus Ad Autolycum 1.4; 2.4, 10, 13; and Irenaeus Adversus haereses 3.10.3. For discussion, see Gerhard May, Creatio ex nihilo: The Doctrine of "Creation out of Nothing" in Early Christian Thought, trans. A. S. Worrall (Edinburgh: T. & T. Clark, 1994); cf. Copan’s review article in note 3.
{5}See Richard Sorabji, Time, Creation and the Continuum (Ithaca, N.Y.: Cornell University Press, 1983), pp. 193-252; H. A. Wolfson, "Patristic Arguments against the Eternity of the World," Harvard Theological Review 59 (1966): 354-367; idem, The Philosophy of the Kalam (Cambridge, Mass.: Harvard University Press, 1976); H. A. Davidson, Proofs for Eternity, Creation and the Existence of God in Medieval Islamic and Jewish Philosophy (New York: Oxford University Press, 1987); Richard C. Dales, Medieval Discussions of the Eternity of the World, Studies in Intellectual History 18 (Leiden: E. J. Brill, 1990).
{6}Thomas Aquinas Summa theologiae 1a.2.3; idem Summa contra gentiles 2.16; 32-38; cf. idem Summa theologiae 1a.45.1; 1a.4b.2. Though Aquinas discusses divine conservation, he does not differentiate it from creation (idem Summa contra gentiles 3.65; Summa theologiae 1a.104.1).
{7}Philip L. Quinn, "Divine Conservation, Continuous Creation, and Human Action," in The Existence and Nature of God, ed. Alfred J. Freddoso (Notre Dame, Ind.: University of Notre Dame Press, 1983), pp. 55-79. See also idem, "Creation, Conservation, and the Big Bang," in Philosophical Problems of the Internal and External Worlds, ed. John Earman, Allen I. Janis, Gerald J. Massey, and Nicholas Rescher (Pittsburgh: University of Pittsburgh Press, 1993), pp. 589-612; idem, "Divine Conservation, Secondary Causes, and Occasionalism," in Divine and Human Action, ed. Thomas V. Morris (Ithaca, N.Y.: Cornell University Press, 1988), pp. 50-73.
{8}John Duns Scotus, God and Creatures, trans. E. Alluntis and A. Wolter (Princeton: Princeton University Press, 1975), p. 276.
{9}As noted by Alfred J. Freddoso, "Medieval Aristotelianism and the Case against Secondary Causation in Nature," in Divine and Human Action, p. 79. For the scholastics causation is a relation between substances (agents) who act upon other substances (patients) to bring about states of affairs (effects). Creatio ex nihilo is atypical because in that case no patient is acted upon.
{10}To analyze God’s conservation of e, along Quinn’s lines, as God’s re-creation of e anew at each instant of e’s existence is to run the risk of falling into the radical occasionalism of certain medieval Islamic theologians, who, out of their desire to make God not only the creator of the world, but also its ground of being, denied that the constituent atoms of things endure from one instant to another but are rather created in new states of being by God at every successive instant. There are actually two forms of occasionalism threatening Quinn: (1) the occasionalism implied by a literal creatio continuans according to which similar, but numerically distinct, individuals are created at each successive instant, and (2) the occasionalism which affirms diachronic individual identity, but denies the reality of transeunt secondary causation.
{11}On A- versus B-Theories of time: see Richard Gale, "The Static versus the Dynamic Temporal: Introduction," in The Philosophy of Time, ed. Richard M. Gale (New Jersey: Humanities Press, 1968), pp. 65-85. {12}Wolfhart Pannenberg, "Theological Questions to Scientists," in The Sciences and Theology in the Twentieth Century, ed. A. R. Peacocke, Oxford International Symposia (Stocksfield, England: Oriel Press, 1981), p. 12. {13}According to Schleiermacher, the original expression of the relation of the world to God, that of absolute dependence, was divided by the Church into two propositions: that the world was created and that the world is sustained. But there is no reason, he asserts, to retain this distinction, since it is linked to the Mosaic account of creation, which is the product of a mythological age. The questions of whether it is possible or necessary to conceive of God as existing apart from created things is a matter of indifference, since it has no bearing on the feeling of absolute dependence on God (F. D. E. Schleiermacher, The Christian Faith, 2d ed., ed. H. R. MacIntosh and J. S. Stewart [Edinburgh: T. & T. Clark, 1928], 36.1, 2; 41; pp. 142-143, 155). {14}Good examples of such timorousness include Langdon Gilkey, Maker of Heaven and Earth (Garden City, N.Y.: Doubleday, 1959), pp. 310-315; Ian Barbour, Issues in Science and Religion (New York: Harper & Row, 1966), p. 383-385; Arthur Peacocke, Creation and the World of Science (Oxford: Clarendon Press, 1979), pp. 78-79. {15}Pannenberg, "Questions," p. 12; Ted Peters, "On Creating the Cosmos," in Physics, Philosophy, and Theology: a Common Quest for Understanding, ed. R. Russell, W. Stoeger, and G. Coyne (Vatican City: Vatican Observatory, 1988), p. 291; Robert J. Russell, "Finite Creation without a Beginning: the Doctrine of Creation in Relation to Big Bang and Quantum Cosmologies," in Quantum Cosmology and the Laws of Nature, ed. R. J. Russell, N. Murphy, and C. J. Isham (Vatican City: Vatican Observatory, 1993), pp. 303-310. {17}See William Lane Craig and Quentin Smith, Theism, Atheism, and Big Bang Cosmology (Oxford: Clarendon Press, 1993) for discussion. {18}In the case of quantum mechanics, for example, "the state vector in the Schrödinger equation is not a physical magnitude, for it is an imaginary function and such functions do not represent real physical magnitudes" (C. Liu, "The Arrow of Time in Quantum Gravity," Philosophy of Science 60 [1993]: 622). Liu contends that in the mature theory of quantum gravity a fundamental arrow of time will obtain. {19}Hartle-Hawking’s use of imaginary numbers for the time variable allows one to redescribe a universe with an initial cosmological singularity in such a way that that point appears as a non-singular point on a curved hyper-surface. Such a re-description suppresses and also literally spatializes time, which makes evident the purely instrumental character of the model. Such a model could be of great utility to science, but it would not, as Hawking boldly asserts (Stephen Hawking, A Brief History of Time [New York: Bantam Books, 1988], pp. 140-141), eliminate the need for a Creator. {20}See the interesting lecture by C. Rovelli, "What Does Present Day’s [sic] Physics Tell Us about Time and Space?" Lecture presented at the 1993-94 Annual Series of Lectures of the Center for Philosophy of Science of the University of Pittsburgh, September 17, 1993, p. 
17, where he lists eight properties of time as characterized in natural language and compares the concepts of time found in thermodynamics, STR, GTR, and so forth; time as it is defined in quantum gravity has none of the properties usually associated with time.
{21}M.-T. Liske, "Kann Gott reale Beziehungen zu den Geschöpfen haben?" Theologie und Philosophie 68 (1993): 224.
{22}The difficulty may be formulated as follows:
1. If God delays creating at t until t’, He has good reason to do so.
2. If God existed from eternity past until creating at t’, He delayed creating at t.
3. God can have no good reason to do so.
4. Therefore, God did not delay creating at t until t’.
5. Therefore, God has not existed from eternity past until creating at t’.
{23}Such a view would not preclude the existence of time during hiatuses within the series of events, such as are envisioned by Sidney Shoemaker, "Time Without Change," The Journal of Philosophy 66 (1969): 363-381.
{24}Thomas Aquinas De potentia Dei 3. 1, 2.
{25}Keith Ward, Rational Theology and the Creativity of God (Oxford: Basil Blackwell, 1982), p. 86.
{26}D. A. Carson, Divine Sovereignty and Human Responsibility: Biblical Perspectives in Tension, New Foundations Theological Library (Atlanta: John Knox, 1981), pp. 24-35.
{27}Carson, Sovereignty and Responsibility, pp. 18-22. One should mention also the striking passages which speak of God’s repenting in reaction to a change in human behavior (e.g., Gen. 6.6; 1 Sam. 15.11, 35).
{28}See Luis Molina, On Divine Foreknowledge: Part IV of the "Concordia," trans. with Introduction and Notes by Alfred J. Freddoso (Ithaca, N.Y.: Cornell University Press, 1988); also William Lane Craig, The Problem of Divine Foreknowledge and Future Contingents from Aristotle to Suarez, Studies in Intellectual History 7 (Leiden: E. J. Brill, 1988), chaps. 7, 8.
{29}Molina explains, ". . . all good things, whether produced by causes acting from a necessity of nature or by free causes, depend upon divine predetermination . . . and providence in such a way that each is specifically intended by God through His predetermination and providence, whereas the evil acts of the created will are subject as well to divine predetermination and providence to the extent that the causes from which they emanate and the general concurrence on God’s part required to elicit them are granted through divine predetermination and providence--though not in order that these particular acts should emanate from them, but rather in order that other, far different, acts might come to be, and in order that the innate freedom of the things endowed with a will might be preserved for their maximum benefit; in addition evil acts are subject to that same divine predetermination and providence to the extent that they cannot exist in particular unless God by His providence permits them in particular in the service of some greater good. It clearly follows from the above that all things without exception are individually subject to God’s will and providence, which intend certain of them as particulars and permit the rest as particulars. Thus, the leaf hanging from the tree does not fall, nor does either of the two sparrows sold for a farthing fall to the ground, nor does anything else whatever happen without God’s providence and will either intending it as a particular or permitting it as a particular" (Molina, On Divine Foreknowledge 4.53.3.17).
On the way in which sins contribute to the eventual realization of God’s purposes, see the powerful statement in On Divine Foreknowledge 4. 53. 2. 15. {30}Alvin Plantinga, "Reply to Robert Adams," in Alvin Plantinga, ed. James E. Tomberlin and Peter Van Inwagen, Profiles 5 (Dordrecht: D. Reidel, 1985), pp. 371-82; Jonathan L. Kvanvig, The Possibility of an All-Knowing God (New York: St. Martin’s, 1986), pp. 121-148; Alfred J. Freddoso, "Introduction," in On Divine Foreknowledge, pp. 68-78; Edward J. Wierenga, The Nature of God: an Inquiry into Divine Attributes (Ithaca, N. Y.: Cornell University Press, 1989), pp. 150-160; William Lane Craig, Divine Foreknowledge and Human Freedom, Brill’s Studies in Intellectual History 19 (Leiden: E. J. Brill, 1990), pp. 247-269; Thomas Flint, "Hasker’s God, Time, and Knowledge," Philosophical Studies 60 (1990): 103-115; William Lane Craig, "Hasker on Divine Knowledge," Philosophical Studies 67 (1992): 89-110. {31}Hasker does attempt to re-defend his controversial premiss in William Hasker, "Middle Knowledge: a Refutation Revisited," Faith and Philosophy 12 (1995): 224-225; but his account fails to respond to any of the three objections advanced in Craig, "Hasker on Divine Knowledge," pp. 106-107, and in the end he himself concedes that ". . . the complexity of the argument . . . leaves a number of points at which doubts can arise and toward which critics can direct their fire" (Hasker, "Refutation Revisited," p. 226), so that he chooses to adopt Adams’s alternative formulation. {32}Robert Merrihew Adams, "An Anti-Molinist Argument," in Philosophical Perspectives, vol. 5: Philosophy of Religion, ed. James E. Tomberlin (Atascadero, Calif.: Ridgeway Publishing, 1991), p. 356. {33}Adams had argued, "My thisness, and singular propositions about me, cannot have pre-existed me because if they had, it would have been possible for them to have existed even if I had never existed, and that is not possible" (Robert Merrihew Adams, "Time and Thisness," Midwest Studies in Philosophy 11 (1986): 317). This argument is parallel to the interpretation under discussion, counterfactuals of creaturely freedom and divine middle knowledge taking the place of thisnesses and singular propositions. As Kvanvig discerns, this reasoning is susceptible to the same response as is the argument for fatalism (Jonathan L. Kvanvig, "Adams on Actualism and Presentism," Philosophy and Phenomenological Research 50 (1989): ***. {34}Thomas P. Flint, "The Problem of Divine Freedom," American Philosophical Quarterly 20 (1983): 255-264. {35}Adams, "Anti-Molinist Argument." p. 345. {36}See further my Divine Foreknowledge and Human Freedom , pp. 259-262. {37}Alvin Plantinga, "Reply to Robert Adams," p. 376. {38}William Hasker, "Explanatory Priority: Transitive and Unequivocal, A Reply to William Craig, " Philosophy and Phenomenological Research 57 (1997): 3. {40}He writes, ". . . (12) expresses a . . . distinctively incompatibilist intuition, that the explanatory antecedents of the totality of my choosing and doing, must leave the omission of the free action ‘open,’ at least in the sense of not being strictly inconsistent with the omission" (Adams, "Anti-Molinist Argument," p. 352). {41}Thomas P. Flint, "Middle Knowledge and the Doctrine of Infallibility," in Philosophy of Religion, pp. 385-390. {42}Hasker, "Explanatory Priority," p. 1. The article referenced is Hasker, "Refutation Revisited," pp. 223-236. 
{43}Hasker revises the first part of his argument in deference to Adams’s version, but the second part he leaves unchanged and undefended--indeed, in footnote 17 on p. 235 he actually commends Adams’s (12) as an alternative to his argument for those "who have qualms about some of the premises in my version of the argument."
{44}It is very often said by biblical scholars anxious not to be associated with a defunct evidential apologetic use of miracles that biblical miracles function as signs, not evidence. This, however, is a false dichotomy; it is precisely because of their evidential force that miracles serve effectively as signs (see William Lane Craig, review article of Miracles and the Critical Mind, by Colin Brown, Journal of the Evangelical Theological Society 27 [1985]: 473-483).
{45}François Marie Arouet de Voltaire, Dictionnaire philosophique (Paris: Garnier, 1967), s.v. "Miracles."
{46}For discussion see Stephen S. Bilinskyji, "God, Nature, and the Concept of Miracle" (Ph.D. dissertation, University of Notre Dame, 1982); Alfred J. Freddoso, "The Necessity of Nature," Midwest Studies in Philosophy 11 (1986): 215-242.
{47}R. G. Swinburne, "Miracles," Philosophical Quarterly 18 (1968): 321.
{48}Ibid., p. 323.
{49}Wolfhart Pannenberg, Jesus--God and Man, trans. L. L. Wilkins and D. A. Priebe (London: SCM, 1968), p. 67.
{50}David Hume, An Enquiry concerning Human Understanding, ed. L. A. Selby-Bigge, 3d ed. rev. P. H. Nidditch (Oxford: Clarendon Press, 1975), chap. 10.

Copyright (C) William Lane Craig. All Rights Reserved.
In physics, energy (from Greek ἐνέργεια, energeia, "activity, operation," from ἐνεργός, energos, "active, working") is a quantity that is often understood as the ability a physical system has to produce changes in another physical system. The changes are produced when energy is transferred from one system to another. A system can transfer energy in three ways: by physical or thermodynamical work, by heat transfer, or by mass transfer. This quantity can be assigned to any physical system. The assigned energy, according to classical physics, depends on the system's physical state relative to the frame of reference used to study it. In relativistic physics, on the other hand, when using inertial reference frames, invariant mass energy is independent of such frames: the invariant mass of a system is the same in all inertial reference frames, so its energetic equivalent (invariant mass energy) is the same in all of them as well.

All the forms of energy that a system has can belong to one of two great components: the internal energy and the external energy (not to be confused with the energy of the surroundings, which lies outside the system). All kinds of internal and external energy can additionally be classified as kinetic energy or potential energy. Kinetic energy involves the mass and the motion of a system. If the system is studied as a whole, it is called external kinetic energy. Thermal energy is internal kinetic energy, and it involves the motion of every constitutive particle of the system (molecules, atoms, electrons, etc.). Gravitational potential energy is an external potential energy, as is electrostatic potential energy. Elastic energy is an internal potential energy. The forms of energy are often named after a related force, as in the previous examples. Some forms of energy are associated with the particle-like behaviour of the system, but there are cases, such as sound energy, in which the overall effect is related to the wave-like behaviour of the system. In the specific case of sound, there is a transmission of oscillations in pressure through the system. The energy associated with the sound wave converts back and forth between the elastic potential energy of the extra compression (in the case of longitudinal waves) or of the lateral displacement strain (in the case of transverse waves) of the matter and the kinetic energy of the oscillations of the medium of which the system is made up.

The German physicist Hermann von Helmholtz established that all forms of energy are equivalent: energy in one form can disappear, but the same amount of energy will appear in another form. A restatement of this idea is that energy is subject to a conservation law over time. In all such energy transformation processes, the total energy remains the same. Energy may not be created nor destroyed.
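A small Python sketch (added here for illustration and not part of the original text; the mass, initial height, and sampling times are arbitrary choices, and g is taken as the standard 9.81 m/s^2) makes the point concrete: as a body falls freely, its gravitational potential energy turns into kinetic energy while the sum of the two stays fixed.

    # Illustrative sketch: potential energy converts to kinetic energy during free fall,
    # while the total mechanical energy stays constant (air resistance ignored).
    g = 9.81      # m/s^2, standard gravitational acceleration (assumed)
    m = 2.0       # kg, example mass
    h0 = 20.0     # m, example initial height

    for t in [0.0, 0.5, 1.0, 1.5, 2.0]:
        v = g * t                  # speed after falling for time t, starting from rest
        h = h0 - 0.5 * g * t**2    # height above the ground at time t
        ep = m * g * h             # gravitational potential energy
        ek = 0.5 * m * v**2        # kinetic energy
        print(f"t = {t:.1f} s   Ep = {ep:6.1f} J   Ek = {ek:6.1f} J   total = {ep + ek:6.1f} J")

Every row prints the same total (here 392.4 J): the energy changes form but not amount.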
This principle, the conservation of energy, was first postulated in the early 19th century, and it applies to any isolated system. Although the total energy of a system does not change with time, its value may depend on the frame of reference. Energy is a scalar physical quantity. In the International System of Units (SI), energy is measured in joules, but in some fields other units such as kilowatt-hours and kilocalories are also used.

The word energy derives from the Greek ἐνέργεια (energeia), which possibly appears for the first time in the work of Aristotle in the 4th century BC. The concept of energy emerged out of the idea of vis viva (living force), which Leibniz defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. To account for slowing due to friction, Leibniz theorized that thermal energy consisted of the random motion of the constituent parts of matter. It was argued for some years whether energy was a substance (the caloric) or merely a physical quantity, such as momentum. William Thomson (Lord Kelvin) amalgamated all of these laws into the laws of thermodynamics, which aided in the rapid development of explanations of chemical processes using the concept of energy by Rudolf Clausius, Josiah Willard Gibbs, and Walther Nernst. It also led to a mathematical formulation of the concept of entropy by Clausius and to the introduction of laws of radiant energy by Jožef Stefan.

During a 1961 lecture for undergraduate students at the California Institute of Technology, Richard Feynman, a celebrated physics teacher and Nobel Laureate, said this about the concept of energy: "There is a fact, or if you wish, a law, governing all natural phenomena that are known to date. There is no known exception to this law--it is exact so far as we know. The law is called the conservation of energy. It states that there is a certain quantity, which we call energy, that does not change in the manifold changes which nature undergoes." (The Feynman Lectures on Physics)

Since 1918 it has been known that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time.

Energy in various contexts

The concept of energy and its transformations is useful in explaining and predicting most natural phenomena. The concept of energy is widespread in all the sciences.

• In chemistry, the speed of a reaction (at a given temperature T) is related to the activation energy E by the Boltzmann population factor e^(-E/kT), that is, the probability that a molecule has energy greater than or equal to E at the given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction can be supplied in the form of thermal energy.

• In biology, energy is an attribute of all biological systems, from the biosphere to the smallest living organism. Within an organism it is responsible for the growth and development of a biological cell or an organelle of a biological organism.
• Energy is thus often said to be stored by cells in the structures of molecules of substances such as carbohydrates (including sugars), lipids, and proteins, which release energy when reacted with oxygen in respiration.

• In human terms, the human equivalent (H-e) (human energy conversion) indicates, for a given amount of energy expenditure, the relative quantity of energy needed for human metabolism, assuming an average human energy expenditure of 12,500 kJ per day and a basal metabolic rate of 80 watts.

• In geology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior, while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations brought about by solar energy on the atmosphere of the planet Earth. Earthquakes also release stored elastic potential energy in rocks, a store that has been produced ultimately from the same radioactive heat sources. Through all of these transformation chains, potential energy stored at the time of the Big Bang is later released by intermediate events, sometimes being stored in a number of ways over time between releases, as more active energy.

Conservation of energy

Most kinds of energy (with gravitational energy being a notable exception) are also subject to strict local conservation laws. In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa. Conservation of energy is the mathematical consequence of the translational symmetry of time (that is, the indistinguishability of time intervals taken at different times); see Noether's theorem. This law is a fundamental principle of physics.

In quantum mechanics energy is expressed using the Hamiltonian operator. On any time scale, the uncertainty in the energy is given by ΔE Δt ≥ ħ/2, which is similar in form to the Heisenberg uncertainty principle (but not really mathematically equivalent thereto, since H and t are not dynamically conjugate variables, neither in classical nor in quantum mechanics). In particle physics, this inequality permits a qualitative understanding of virtual particles, which carry momentum and whose exchange with and among real particles is responsible for the creation of all known fundamental forces (more accurately known as fundamental interactions).

Applications of the concept of energy

• The total energy of a system can be subdivided and classified in various ways. For example, it is sometimes convenient to distinguish potential energy (which is a function of coordinates only) from kinetic energy (which is a function of coordinate time derivatives only). In classical physics energy is considered a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but the time component of the energy-momentum 4-vector).
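A brief Python sketch (added here for illustration; the electron mass and the momentum and boost-speed values are arbitrary example numbers, not taken from the original text) shows what being "the time component of the energy-momentum 4-vector" means in practice: a Lorentz boost along x mixes E and p, yet the combination E^2 - (pc)^2, fixed by the invariant mass, comes out the same in both frames.

    # Illustrative sketch: energy and momentum change under a Lorentz boost,
    # but E^2 - (p*c)^2 = (m*c^2)^2 does not. All values are example numbers.
    import math

    c = 299_792_458.0          # m/s, speed of light
    m = 9.109e-31              # kg, electron mass used as an example
    p = 1.0e-22                # kg*m/s, example momentum along x
    E = math.sqrt((m * c**2)**2 + (p * c)**2)    # relativistic energy in the original frame

    v = 0.6 * c                                  # boost speed along x
    gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)
    E_boosted = gamma * (E - v * p)              # energy seen in the boosted frame
    p_boosted = gamma * (p - v * E / c**2)       # momentum seen in the boosted frame

    invariant_before = E**2 - (p * c)**2
    invariant_after = E_boosted**2 - (p_boosted * c)**2
    print(invariant_before, invariant_after)     # the two values agree, up to rounding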
In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of space-time (= boosts).

Energy transfer

For a closed system, E = W if there are no other energy-transfer processes involved. Here E is the amount of energy transferred, and W represents the work done on the system. More generally, E = W + Q, where Q represents the heat flow into the system. There are other ways in which an open system can gain or lose energy.

Energy is also transferred from potential energy (Ep) to kinetic energy (Ek) and then back to potential energy constantly. The equation can then be simplified further since Ep = mgh (mass times the acceleration due to gravity times the height) and Ek = (1/2)mv² (half the mass times the velocity squared). Then the total amount of energy can be found by adding Ep + Ek = Etotal. (A small numerical illustration is given below.)

Energy and the laws of motion

In classical mechanics, energy is a conceptually and mathematically useful property, as it is a conserved quantity. Several formulations of mechanics have been developed using energy as a core concept.

The Hamiltonian

The total energy of a system is sometimes called the Hamiltonian, after William Rowan Hamilton. The classical equations of motion can be written in terms of the Hamiltonian, even for highly complex or abstract systems. These classical equations have remarkably direct analogs in nonrelativistic quantum mechanics.

The Lagrangian

Another energy-related concept is called the Lagrangian, after Joseph Louis Lagrange. This is even more fundamental than the Hamiltonian, and can be used to derive the equations of motion. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (such as systems with friction).

Energy and thermodynamics

Internal energy; the laws of thermodynamics

According to the second law of thermodynamics, work can be totally converted into heat, but not vice versa. This is a mathematical consequence of statistical mechanics. The first law of thermodynamics simply asserts that energy is conserved, and that heat is included as a form of energy transfer. A commonly used corollary of the first law is that for a "system" subject only to pressure forces and heat transfer (e.g., a cylinder-full of gas), the differential change in energy of the system (with a gain in energy signified by a positive quantity) is given as dE = δQ − P dV, where P is the pressure and V the volume (the negative sign results since compression of the system requires work to be done on it, so the volume change dV is negative when work is done on the system). Although this equation is the standard textbook example of energy conservation in classical thermodynamics, it is highly specific, ignoring all chemical, electric, nuclear, and gravitational forces, and effects such as advection of any form of energy other than heat, and because it contains a term that depends on temperature. The most general statement of the first law (i.e., conservation of energy) is valid even in situations in which temperature is undefinable.
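As a quick numerical illustration of the Ep + Ek = Etotal bookkeeping mentioned above (the mass, height and g values below are arbitrary example choices, not taken from the text):

# Energy bookkeeping for a mass dropped from rest, neglecting air resistance
g = 9.81      # gravitational acceleration, m/s^2
m = 2.0       # mass, kg (arbitrary)
h0 = 10.0     # initial height, m (arbitrary)
for h in [10.0, 5.0, 0.0]:
    Ep = m * g * h                    # potential energy at height h
    Ek = m * g * (h0 - h)             # kinetic energy gained during the fall
    v = (2 * g * (h0 - h)) ** 0.5     # speed, from Ek = 0.5*m*v**2
    print(h, Ep, Ek, Ep + Ek)         # Ep + Ek stays equal to m*g*h0 = Etotal

At every height the two contributions trade off while their sum remains equal to m*g*h0.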
Energy is sometimes expressed as E = W + Q, which is unsatisfactory because there cannot exist any thermodynamic state functions W or Q that are meaningful on the right hand side of this equation, except perhaps in trivial cases.

Equipartition of energy

When new degrees of freedom become available to a system, the total energy spreads over all available degrees equally, without distinction between "new" and "old" degrees. This mathematical result is called the second law of thermodynamics.

Oscillators, phonons, and photons

In an ensemble (connected collection) of unsynchronized oscillators, the average energy is spread equally between kinetic and potential types. Because an electric oscillator (LC circuit) is analogous to a mechanical oscillator, its energy must be, on average, equally kinetic and potential. It is entirely arbitrary whether the magnetic energy is considered kinetic and the electric energy potential, or vice versa. That is, either the inductor is analogous to the mass while the capacitor is analogous to the spring, or vice versa.

2. On the other hand, in the key equation m²c⁴ = E² − p²c², the contribution mc² is called the rest energy, and all other contributions to the energy are called kinetic energy. For a particle that has mass, this implies that the kinetic energy is p²/2m at speeds much smaller than c, as can be proved by writing E = mc² √(1 + p²/m²c²) and expanding the square root to lowest order. By this line of reasoning, the energy of a photon is entirely kinetic, because the photon is massless and has no rest energy. The two analyses are entirely consistent. The electric and magnetic degrees of freedom in item 1 are transverse to the direction of motion, while the speed in item 2 is along the direction of motion.

Work and virtual work

Work, a form of energy, is force times distance: W = ∫_C F · ds. This says that the work (W) is equal to the line integral of the force F along a path C; for details see the mechanical work article.

Quantum mechanics

In quantum mechanics energy is defined in terms of the energy operator, as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. Its results can be considered as a definition of measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of the slowly changing (non-relativistic) wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level), which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for electromagnetic waves in a vacuum, the resulting energy states are related to the frequency by the Planck equation E = hν (where h is Planck's constant and ν the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons.
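A small worked example of the Planck relation E = hν just mentioned (the 500 nm wavelength is an arbitrary choice, used only for illustration):

# Photon energy from E = h*nu, for green light of wavelength 500 nm
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
lam = 500e-9         # wavelength, m (arbitrary example)
nu = c / lam         # frequency, Hz
E = h * nu           # photon energy, J
print(E, E / 1.602e-19)   # about 4.0e-19 J, i.e. roughly 2.5 eV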
When calculating kinetic energy (the work to accelerate a mass from zero speed to some finite speed) relativistically - using Lorentz transformations instead of Newtonian mechanics - Einstein discovered an unexpected by-product of these calculations to be an energy term which does not vanish at zero speed: E = mc², where m is the mass, c is the speed of light in vacuum, and E is the rest mass energy.

For example, consider electron-positron annihilation, in which the rest mass of individual particles is destroyed. This is a reversible process - the inverse process is called pair creation - in which the rest mass of particles is created from the energy of two (or more) annihilating photons. In this system the matter (electrons and positrons) is destroyed and changed to non-matter energy (the photons). However, the total system mass and energy do not change during this interaction.

In general relativity, the stress-energy tensor serves as the source term for the gravitational field, in rough analogy to the way mass serves as the source term in the non-relativistic Newtonian approximation. It is not uncommon to hear that energy is "equivalent" to mass. It would be more accurate to state that every energy has an inertia and gravity equivalent, and because mass is a form of energy, mass too has inertia and gravity associated with it.
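As a worked numerical example of the rest-energy relation and of the annihilation bookkeeping above (constants rounded; the snippet is illustrative, not part of the original text):

# Rest energy E = m*c^2 of an electron, and the photon energy released
# when an electron and a positron annihilate at rest
m_e = 9.109e-31      # electron mass, kg
c = 2.998e8          # speed of light, m/s
E_rest = m_e * c**2  # about 8.2e-14 J
E_pair = 2 * E_rest  # two rest masses converted into (at least) two photons
print(E_rest / 1.602e-13, E_pair / 1.602e-13)   # in MeV: ~0.511 and ~1.022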
Random physics Alberto Verga, research notebook \(\newcommand{\I}{\mathrm{i}} \newcommand{\E}{\mathrm{e}} \newcommand{\D}{\mathop{}\!\mathrm{d}} \newcommand{\Di}[1]{\mathop{}\!\mathrm{d}#1\,} \newcommand{\Dd}[1]{\frac{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d}#1}}\) »quantum chaos»kicked rotator»random matrices»quantum walk

Dynamical localization

Anderson (1958) discovered that disorder leads to localization of the quantum states. Contrary to naive physical "intuition", transport in a metal with a strong concentration of impurities does not proceed by tunneling from one impurity to another, but completely stops above a threshold concentration: if initially the particle probability is concentrated at the origin, the probability of finding the particle at the origin when time goes to infinity remains finite. The particle's wave function is therefore exponentially localized inside a region characterized by a localization length \(l\).

A simple model of localization consists in a tight-binding one dimensional Hamiltonian \(H\) with random energies \(\varepsilon_x\) at each lattice site \(x\) (we take the lattice step \(a=1\) as the unit of length), and a constant hopping energy between neighboring sites (taken to be the unit of energy, with \(\hbar=1\)): \begin{equation} \label{e:alH} H = \sum_x \left( \varepsilon_x |x\rangle \langle x| + |x+1\rangle \langle x| + |x\rangle \langle x-1| \right)\,, \end{equation} where \(\varepsilon_x \sim \mathcal{U}(-W/2,W/2)\) are uniformly distributed in a band of width \(W\), and \(|x\rangle\) is the position state. The first term of \(H\) contains the random energies, and the two last terms the hopping energies to the right or to the left of point \(x\). This model, proposed by Anderson, exhibits complete localization for any disorder strength (measured by \(W\)) in one and two dimensions, and in three dimensions for disorder strengths above a threshold.

The solutions of the stationary Schrödinger equation, \begin{equation} \label{e:alEV} H |E_n\rangle = E_n |E_n \rangle\,, \quad n = 0,1,\ldots \end{equation} are the energy eigenvectors \(\langle x|E_n\rangle = \psi_n(x)\), with \(E_n\) the corresponding eigenvalues. This eigenvalue problem can easily be solved numerically.

Compute the eigenvectors and eigenvalues of the Anderson Hamiltonian \((\ref{e:alH})\), and show in particular that the histogram of energy level spacings \(s=(E_{n+1} - E_n)/\overline{\Delta E}\) follows a Poisson law.

from numpy import eye, diag
from numpy.random import uniform
from numpy.linalg import eigh

def hamiltonian(L=1024, W=2.0):
    # hopping -1 on the first off-diagonals, uniform site disorder on the diagonal
    return -eye(L, k=1) - eye(L, k=-1) + diag(uniform(-W/2, W/2, L))

L = 1250
W = 2.0
H = hamiltonian(L=L, W=W)
# H is real symmetric: eigh returns the real eigenvalues in ascending order,
# with the corresponding eigenvectors as the columns U[:, n]
E, U = eigh(H)

(Figures: energy spectrum, a localized eigenmode, and the level-spacing histogram.)

The above figures show the spectrum of the random Hamiltonian, a typical eigenvector (in one dimension all the eigenvectors are localized), and a histogram showing that the energy spacing follows a Poisson distribution. This distribution is different from the random matrix distribution of the Gaussian orthogonal ensemble (valid for real symmetric matrices whose entries are Gaussian random numbers), which characterizes time reversal symmetric Hamiltonians. The main difference between the matrix \(H\) and the Gaussian matrix is that its diagonal terms are uncorrelated; the Gaussian matrix can be diagonalized, but its eigenvalues will have strong correlations.
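A possible continuation of the snippet above (reusing the sorted eigenvalues E from the previous block; the plotting choices are arbitrary) that builds the normalized spacing histogram requested in the exercise and compares it with the Poisson law \(P(s)=\E^{-s}\). A careful comparison would first unfold the spectrum, which is skipped in this rough check.

from numpy import diff, exp, linspace
import matplotlib.pyplot as plt

s = diff(E)              # spacings E_{n+1} - E_n (E is already sorted by eigh)
s = s / s.mean()         # normalize by the mean spacing
plt.hist(s, bins=50, density=True, label='Anderson chain')
x = linspace(0, 5, 200)
plt.plot(x, exp(-x), label='Poisson exp(-s)')
plt.xlabel('s')
plt.ylabel('P(s)')
plt.legend()
plt.show()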
The Schrödinger equation with energy \(E\) can also be written as a second order difference equation \begin{equation} \label{e:alS} \psi_{x+1} + \psi_{x-1} - (E - \varepsilon_x) \psi_x = 0\,, \end{equation} where \(\psi_x = \langle x | \psi \rangle\) is the particle wave function, the probability amplitude to find the particle at site \(x\). This form is well adapted to investigate the transmission \(T\) through a lattice of length \(L\), as a function of the energy \(E\), of an incident particle \(\E^{\I kx}\) (plane wave) propagating from left to right, where \(k\) is given by $$E=2 \cos(k)\,,$$ the dispersion relation of a free particle (\(\varepsilon_x=0\)). Equation \((\ref{e:alS})\) can be put in a more convenient form, using the matrix \(M_x\), \begin{equation} \label{e:alM} \begin{pmatrix} \psi_{x+1} \\ \psi_x \end{pmatrix} = M_x\,\begin{pmatrix} \psi_{x} \\ \psi_{x-1} \end{pmatrix}\,, \quad M_x = \begin{pmatrix} E-\varepsilon_x & -1 \\ 1 & 0 \end{pmatrix}\,; \end{equation} the matrix \(M_x\) is unimodular (\(\det M_x =1\)). Therefore, the initial value problem of the Schrödinger equation is solved by the product of random matrices: $$ \begin{pmatrix} \psi_{x+1} \\ \psi_x \end{pmatrix} = M_x M_{x-1}\ldots M_1 \begin{pmatrix} \psi_{1} \\ \psi_0 \end{pmatrix} $$ A theorem on products of random unimodular matrices, due to Furstenberg (1963), states that \begin{equation} \label{e:alF} \lim_{x\rightarrow \infty} \frac{1}{x} \ln \mathrm{Tr}\, (M_x\ldots M_1 ) = \lambda>0\,, \end{equation} where \(\lambda\) is the maximum Lyapunov exponent. As a consequence of the exponential growth of the matrix product, the asymptotic behavior of the wave amplitude must be of the form: $$ \lim_{x \rightarrow \infty} \psi_x \sim \E^{\pm x/\ell}\,, \quad \ell \sim 1/\lambda\,.$$ Indeed, for generic initial conditions \((\psi_0,\psi_1)\), the solutions of the Schrödinger equation will be a superposition of growing and decaying amplitudes, whose characteristic length scale is a function of the energy, \(\ell=\ell(E)\). Only special values of the energy, precisely the eigenvalues of \((\ref{e:alEV})\), result in normalizable solutions, decaying simultaneously in both directions. In summary, the spectrum of the Anderson Hamiltonian is a discrete spectrum of exponentially localized wave functions, whose characteristic localization length is given by the inverse of the Lyapunov exponent of the transfer matrix product. It is worth noting that equation \((\ref{e:alS})\) is analogous to the dynamical equation of a kicked Hamiltonian system, where position steps are replaced by time steps between kicks. This analogy makes it possible to investigate the localization of the quantum kicked rotator using the methods developed for the Anderson problem. Compute the localization length and investigate the fluctuations of the transmission coefficient \(T=T(E,L)\) of the one dimensional Anderson model. Solve \((\ref{e:alS})\) by imposing an outgoing plane wave at site \(L+1\), \(\psi_{L+1}=\E^{\I k}\) and \(\psi_L = 1\), and integrating the difference equation backwards in space.
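Before turning to the transmission coefficient, a sketch (with arbitrary parameter values) that estimates the Lyapunov exponent of \((\ref{e:alF})\) directly, by applying the random transfer matrices \(M_x\) of \((\ref{e:alM})\) to a generic vector and renormalizing at every step to avoid numerical overflow; the localization length is then \(\ell \approx 1/\lambda\).

from numpy import array, log, sqrt
from numpy.random import uniform

def lyapunov(E=0.5, W=2.0, L=100000):
    # accumulate the log of the growth factor of (psi_{x+1}, psi_x) under M_x
    psi = array([1.0, 0.0])
    acc = 0.0
    eps = uniform(-W/2, W/2, L)
    for x in range(L):
        psi = array([(E - eps[x]) * psi[0] - psi[1], psi[0]])
        norm = sqrt(psi[0]**2 + psi[1]**2)
        acc += log(norm)
        psi /= norm
    return acc / L

lam = lyapunov()
print(lam, 1.0 / lam)   # Lyapunov exponent and localization length, in lattice units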
See the Markos (2006) paper for a computation of the transmission coefficient using the transfer matrix \(T\): $$T_x = Q^{-1} M_x Q \,, \quad T^L = \begin{pmatrix} 1/t^* & -r^*/t^* \\ -r/t & 1/t \end{pmatrix} \,,\quad Q = \begin{pmatrix} 1 & 1 \\ \E^{-\I k} & \E^{\I k} \end{pmatrix}$$

(Figures: wave amplitude and transmission coefficient.)

# original perl code by Dominique Delande
# arXiv:1005.0915v2 (Les Houches, 2009)
# http://arxiv.org/abs/1005.0915v2
from numpy import arccos, zeros, arange, exp, log, sin
from numpy.random import random

def transmission(L = 1250, W = 0.6, E = 0.5, NA = 10000):
    k = arccos(0.5*E)
    logT = zeros(NA)
    # average over disorder
    for j in range(NA):
        # initial plane wave (outgoing)
        psi_n_plus_1 = exp(1j*k)
        psi_n = 1.0
        e_n = W*(random(L) - 0.5)
        # solve the Schroedinger equation backwards from the last point
        for n in arange(L-1, 0, -1):
            psi_n_minus_1 = (E - e_n[n])*psi_n - psi_n_plus_1
            (psi_n, psi_n_plus_1) = (psi_n_minus_1, psi_n)
        # inverse transmission coefficient itt = 1/|t|^2
        psi_n_minus_1 = E*psi_n - psi_n_plus_1
        itt = (0.5*abs(psi_n_minus_1 - exp(1j*k)*psi_n)/sin(k))**2
        logT[j] = -log(itt)    # log of the transmission coefficient T = 1/itt
    return logT

Analogy of the kicked rotator with the Anderson model

A relation can be established between the Anderson model of spatial localization and the dynamical localization in kicked systems. We start by using a Hermitian operator \(W\) to transform the perturbation part of the Floquet operator \(F=\E^{-\I V(x)} \E^{-\I H_0}\): \begin{equation} \label{e:alFW} \E^{-\I V(x)} = \frac{1 + \I W(x)}{1 - \I W(x)}\,, \quad W(x) = - \tan \frac{V(x)}{2}\,. \end{equation} Substitution in the eigenvector formula \(F |\phi\rangle = \E^{-\I \phi}|\phi\rangle\) gives $$(1+\I W) \E^{\I \phi - \I H_0} |\phi\rangle = (1-\I W) |\phi\rangle\,,$$ or, after a simple (formal) manipulation, $$\tan \frac{\phi - H_0}{2} |\phi\rangle + W |\phi\rangle = 0\,.$$ Now, using the \(p\)-representation (eigenvectors of the unperturbed system, \(H_0|n\rangle = (n^2/2M) |n\rangle\)), the first term of the previous expression is diagonal, and the second one is a convolution: \begin{equation} \label{e:alDiff} \varepsilon_n \varphi_n + \sum_{m \ne n} W_{n-m} \varphi_m = E \varphi_n\,, \end{equation} with \(E= -W_0\), $$W_n = -\frac{1}{2\pi} \int_{0}^{2\pi} \Di{x} \tan \left( \frac{k\cos x}{2} \right) \E^{-\I n x}\,,\quad \varepsilon_n = \tan \left[ \frac{\phi}{2} - \frac{n^2}{4M} \right] \,, $$ and \(\varphi_n = \langle n | \phi \rangle\). Equation \((\ref{e:alDiff})\) is precisely of the form of the Schrödinger equation on a one dimensional lattice. The locality of the interaction (between neighbors in the Anderson model) is ensured by the rapid decay of \(W_n\) with \(n\). Moreover, the "site energies" are pseudo-random numbers with a Cauchy distribution, \(\varepsilon_n \sim \mathcal{L}\): $$P(\varepsilon_n) = \frac{1}{\pi(1+\varepsilon_n^2)}\,. $$ Indeed, the sequence entering the tangent becomes effectively \(\phi - n^2/4M \mod \pi\), which is ergodic in the interval \([0,\pi]\) [Weyl (1916)]. We will now investigate the localization in a one dimensional lattice, assuming nearest neighbor hopping, with a Cauchy-like site disorder (known as the Lloyd model). Our objective is to derive an expression for the localization length.

(Figures: the sequence \(\varepsilon_n\) and its Lorentzian histogram.)

The figures (above) show the values of \(\varepsilon_n\) for \(\phi\) the golden ratio and \(M=4\), together with their histogram, which fits a Lorentzian function.
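A short check of this last statement (the sample size and the binning below are arbitrary choices): generate \(\varepsilon_n = \tan(\phi/2 - n^2/4M)\) with \(\phi\) the golden ratio and \(M=4\), and compare its histogram with the Cauchy density \(1/\pi(1+\varepsilon^2)\).

from numpy import arange, tan, pi, linspace, histogram

phi = (1 + 5**0.5) / 2              # golden ratio
M = 4
n = arange(1, 20001)
eps = tan(phi/2 - n**2/(4*M))       # pseudo-random site "energies"
# empirical density on [-10, 10] versus the Cauchy law
counts, edges = histogram(eps, bins=linspace(-10, 10, 81), density=True)
x = 0.5*(edges[1:] + edges[:-1])
cauchy = 1.0/(pi*(1 + x**2))
print(abs(counts - cauchy).max())   # small deviation if the Lorentzian fit is good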
Localization length and density of states

When restricted to nearest neighbors hopping, equation \((\ref{e:alDiff})\) is analogous to the Anderson model Schrödinger equation: \begin{equation} \label{e:alDiffn} \varphi_{n+1} + \varphi_{n-1} - (E - \varepsilon_n) \varphi_n = 0 \,, \quad P(\varepsilon_n) = \frac{W}{2\pi} \frac{1}{ \left(\frac{W}{2} \right)^2 + \varepsilon_n^2} \end{equation} where hopping is in momentum space instead of position space, and the site disorder is distributed according to the Cauchy law rather than the uniform law. The assumption of hopping to the nearest neighbors is not really satisfied when \(k\) is large: the integral \(W_n\) is oscillating and decays slowly; however, this approximation can give us relevant qualitative information about the localized states of the kicked rotator.

To compute the ensemble-averaged Lyapunov exponent, or equivalently the inverse of the localization length \(\overline{\lambda} = 1/\ell\), we can solve \((\ref{e:alDiffn})\) as an initial value problem, fixing \(\varphi_0\) and \(\varphi_1\), to find \(\varphi_L(E) \sim \E^{L/\ell(E)}\) (if \(E\) is not in the spectrum, according to the Furstenberg theorem). In one dimension \(\varphi_L(E)\) is a polynomial of degree \(L-1\) whose zeros correspond to the eigenvalues of the chain with fixed boundary \(\varphi_L=0\). This is special to one dimension, because we can order the eigenmodes by their number of nodes: the lowest energy state has no nodes, the first "excited" state has one node, etc. This is a property that the disorder does not change. Therefore, we can write, \begin{equation} \varphi_L(E) = C \prod_{k=1}^{L-1}(E_k - E)\,, \quad \ln \varphi_L(E) = \ln C + \sum_{k=1}^{L-1} \left[ \ln |E_k -E| + \I \pi \theta(E - E_k) \right]\,, \end{equation} where the indices \(k,\ldots\) denote energy levels (\(n,\ldots\) are for "position"); the second expression takes into account the sign change at each node, which introduces a jump of \(\I \pi\) in the logarithm (\(C\) is a normalization constant). In the limit of an infinite system, the sum over states can be replaced by an integral over the density of states; separating real and imaginary parts, we obtain \begin{equation} \label{e:alHJT} \frac{1}{\ell} = \lim_{L\rightarrow \infty} \frac{1}{L} \mathrm{Re}\, \overline{ \ln \varphi_L(E) } = \int_{-\infty}^{\infty} \Di{E'} \overline{\rho(E')} \ln | E - E'| + \ln 2\,, \end{equation} \begin{equation} F(E) = \lim_{L\rightarrow \infty} \frac{1}{\pi L} \mathrm{Im}\,\overline{ \ln \varphi_L(E)} = \int_{-\infty}^{E} \Di{E'} \overline{ \rho(E')} \,, \end{equation} where \(\overline{\rho(E)}\) is the averaged density of states, and \(F(E)\) is its cumulative distribution (we denote by \(\overline{\cdots}\) the disorder average). The constant was determined from the condition \(\ell^{-1}=0\) for the free system. Equation \((\ref{e:alHJT})\) relates the density of states in one dimension to the localization length; it is known as the Herbert-Jones-Thouless formula.

An equivalent formula for the localization length can be obtained from the behavior of the eigenstates. From \((\ref{e:alDiffn})\) we can deduce the form of an effective Hamiltonian, whose matrix elements are \begin{equation} H_{nm} = \varepsilon_n \delta_{nm} + \delta_{n,m+1} + \delta_{n,m-1}\,, \end{equation} a tridiagonal matrix with random elements on the diagonal. We denote by \(u_n(E_k)\) the eigenvector of \(H\) with eigenvalue \(E_k\).
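Before turning to the Green-function argument, a rough numerical check of the Herbert-Jones-Thouless relation \((\ref{e:alHJT})\) for the Anderson chain (parameters and sample sizes below are arbitrary): the sum \(\frac{1}{L}\sum_k \ln|E - E_k|\) is evaluated from the spectrum of finite chains, and can be compared, up to the additive constant discussed above, with \(1/\ell(E)\) obtained from the transfer-matrix Lyapunov exponent.

from numpy import eye, diag, log, mean
from numpy.random import uniform
from numpy.linalg import eigvalsh

def thouless_sum(E=1.0, W=2.0, L=2000, samples=20):
    # (1/L) sum_k ln|E - E_k|, averaged over disorder realizations
    acc = 0.0
    for _ in range(samples):
        H = -eye(L, k=1) - eye(L, k=-1) + diag(uniform(-W/2, W/2, L))
        Ek = eigvalsh(H)
        acc += mean(log(abs(E - Ek)))
    return acc / samples

print(thouless_sum())   # to be compared with 1/ell(E) from the Lyapunov exponent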
An explicit solution of the tridiagonal system is obtained from the Green function \(G = (E-H)^{-1}\), whose matrix elements can be written in terms of the cofactor matrix \(C\) and the determinant of the \(E-H\) matrix: \begin{equation} G=\frac{C^T(E-H)}{\det(E-H)}\,, \quad G_{nm} = \frac{ (-1)^{n+m} \det(E-H)_{mn} }{\det(E-H)} \end{equation} where \(\det(E-H)_{mn}\) is the minor of row \(m\) and column \(n\). The computation of the minor involving the initial site \(n=1\) and the last one \(n=L\) is trivial for a tridiagonal matrix: \begin{equation} \label{e:alG1L} G_{1L} = \prod_{k=1}^{L} (E-E_k)^{-1} \end{equation} From the general form of the Green function in terms of the Hamiltonian basis functions \(u_n(E_k)\), \begin{equation} G_{1L} = \sum_k \frac{u_1(E_k) u_L(E_k) }{E - E_k}\,, \end{equation} we find that the residue of the pole at \(E=E_k\) is \(u_1(E_k) u_L(E_k)\), and comparing with the residue of \((\ref{e:alG1L})\), we get \begin{equation} u_1(E_k) u_L(E_k) = \prod_{j\ne k}^{L} (E_k-E_j)^{-1}\,. \end{equation} For a Bloch-type state \(|u_1u_L|\sim 1/L\), while for a localized state \(|u_1u_L|\sim \E^{-L/\ell}\). Therefore, $$\frac{1}{L} \ln|u_1 u_L| \sim - \frac{1}{\ell}\,,$$ which is finite for a localized state, but vanishes for an extended state. Taking the logarithm and passing to the continuous limit, one therefore obtains a formula equivalent to \((\ref{e:alHJT})\).

Localization length of the kicked rotator

To evaluate the localization length for the Cauchy distribution, the appropriate probability law for the kicked rotator, we must calculate the disorder average of the density of states. Instead of working directly with \(\rho\), it is easier to study the Green function first, and deduce the system's properties from it. For instance, starting from the formula for the trace, $$\mathrm{Tr}\,G(E \pm \I o) = \sum_k \frac{1}{E - E_k \pm \I o} \rightarrow P \int \Di{E'} \frac{\rho(E')}{E-E'} \mp \I \pi \rho(E)$$ where \(P\) stands for the principal value and \(o\rightarrow 0\), it is easy to demonstrate that the derivative of the Lyapunov exponent is given by \begin{equation} \label{e:alGa} \frac{\partial }{\partial E} \overline{ \lambda(E)} = \mathrm{Re}\, \mathrm{Tr}\, \overline{ G(E + \I o )} \,, \quad \overline{ \rho(E)} = \frac{-1}{\pi} \mathrm{Im}\, \mathrm{Tr}\, \overline{G(E + \I o )} \,. \end{equation} Therefore, we need to average the Green function. One of the most interesting (and generally applicable) methods to compute disorder averages in many-body systems is the replica method devised by Edwards and Anderson (1975) [see the book Peliti, "Statistical Mechanics in a Nutshell" (2011)]. The idea of the replica method comes from the statistical mechanics of disordered systems, as a trick to compute the disorder average \(\overline{\cdots}\) of the free energy: $$ \overline{F} = \lim_{N\rightarrow\infty} - T\overline{\ln Z} = \lim_{R \rightarrow 0} \lim_{N \rightarrow \infty} -T \frac{ \overline{Z^R} -1 }{R} $$ (note the order of the limits). Instead of computing the average of a logarithm, which is almost never possible, one computes the moments \(\overline{Z^R}\) for integer \(R\), and then analytically continues to \(R \rightarrow 0\) in the thermodynamic limit.
In order to calculate the averaged Green function \((\ref{e:alGa})\), we write it as a Gaussian integral, \begin{equation} G_{nm} = \langle n|G(E+\I o)|m\rangle = \frac{2}{Z} \int_{-\infty}^{\infty} \mathrm{D}\varphi \,\varphi_n \varphi_m \exp \left[-\I \varphi^T ( E-H+\I o ) \varphi \right] \,, \end{equation} where \(\mathrm{D} \varphi = \prod_{i=1}^{L}\Di{\varphi_i}\), and \(\varphi = (\varphi_1,\ldots,\varphi_L)\) is the (real) vector of integration variables; the normalization factor is, $$Z = \frac{\pi^{L/2}}{ \det(E - H + \I o)^{1/2} }\,. $$ If we replicate now the integral \(R\) times and average over disorder, the normalization factor \(Z^R\) will disappear in the limit \(R\rightarrow 0\): \begin{equation} \label{e:alGrm} \overline{G_{nm}} = 2 \lim_{R\rightarrow0} \int \mathrm{D}\varepsilon P[\varepsilon] \prod_{r=1}^{R} \int_{-\infty}^{\infty} \mathrm{D}\varphi^r \,\varphi_n^r \varphi_m^r \exp \left[-\I \varphi^{rT} \left( E-H[\varepsilon]+\I o \right) \varphi^r \right] \,, \end{equation} where we explicitly noted the dependency of the Hamiltonian on the random vector \(\varepsilon\) of site disorder energies. The remarkable point of the Cauchy distributed disorder, is that the integral over the Lorentzian function gives an exponential, which naturally adds to an effective disorder averaged Hamiltonian. Indeed, because of the independence of random energies, the term in \(\varepsilon_n\) of \(H\) can be integrate separately, $$ \frac{W}{2\pi} \int_{-\infty}^{\infty} \Di{\varepsilon_n} \frac{\E^{\I \varphi_n\, \varepsilon_n\, \varphi_n}}{(W/2)^2 + \varepsilon_n^2}= \E^{-W \varphi_n^2/2} $$ which results formally in replacing \(\varepsilon_n \rightarrow W/2\) in \(H\), to obtain from the final expression of \(G\), the effective Hamiltonian: \begin{equation} \overline{G(E+ \I o)} = \frac{1}{E- \overline{H} +\I o}\,, \quad \overline{H_{nm}} = \frac{\I W}{2} \delta_{n,m} - \delta_{n,m+1} - \delta_{n,m-1} \end{equation} The noteworthy point is that this effective Hamiltonian is a non-hermitian operator, underlining the “dissipative” aspect of the disorder. We need now to compute, \begin{equation} \mathrm{Tr}\, G = \sum_m \frac{\det(E-\overline{H})_{mm}}{\det(E-\overline{H})}\,. \end{equation} the resulting expression of the integral \((\ref{e:alGrm})\). This equation is conveniently written using the well known matrix formula \(\mathrm{Tr}\,\ln M = \ln \det M\), for any matrix \(M\): $$\mathrm{Tr}\, G = \frac{\partial}{\partial E} \ln \det(E - \overline{H})\,,$$ which reduces the problem to the standard problem of computing the determinant of a tridiagonal matrix \(D_L = \det{E - \overline{H}}\). It is obtained by simple recursion: \begin{align*} D_0 &= 1 \\ D_1 &= E - \frac{\I W}{2} \\ D_2 &= (E - \frac{\I W}{2})^2 - 1 \\ D_n &= (E - \frac{\I W}{2} )D_{n-1} - D_{n-2}\,. \end{align*} The ansatz \(D_n\sim x^n\), gives the two solutions $$x_\pm = \frac{1}{2} \left[ E-\frac{\I W}{2} \pm \sqrt{(E-\I W/2)^2 - 4} \right]\,$$ which leads to $$D_L = \frac{x_+^{L+1} - x_-^{L+1} }{x_+ - x_-}\,.$$ In the \(L\rightarrow\infty\) limit, only the larger root survives. The final expression for the localization length is, \begin{equation} \frac{1}{\ell(E)} = \ln \frac{1}{2}\left| E-\I W/2 + \sqrt{(E-\I W/2)^2 - 4} \right|\,, \end{equation} or, in a more explicit form: \begin{equation} \cosh\left[ \frac{2}{\ell(E)} \right] = \frac{1}{4} \left( E^2 + \frac{W^2}{4} \right)+ \frac{1}{4} \left[(E+2)^2 + \frac{W^2}{4} \right]^{1/2} \left[(E-2)^2 + \frac{W^2}{4} \right]^{1/2} \,. 
\end{equation} We observe that when the hopping energy dominates (in the present units this corresponds to \(E\rightarrow0\) and \(W\rightarrow0\)), the localization length diverges, and that for any value of \(W>0\), \(\ell(E)\) is finite, in accordance with the phenomenology of the kicked rotator.
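A small numerical sketch of the closed form just derived (the values of \(W\) below are arbitrary): the localization length of the Lloyd model is evaluated from the root of larger modulus, illustrating the divergence of \(\ell\) as \(W \to 0\) at the band center and its finiteness for any \(W>0\).

from numpy import sqrt, log

def ell_lloyd(E, W):
    # 1/ell = ln (1/2)|E - iW/2 + sqrt((E - iW/2)^2 - 4)|, keeping the root
    # x_+ of modulus larger than one
    z = E - 0.5j*W
    s = sqrt(z*z - 4)
    x = max(abs(z + s), abs(z - s))
    return 1.0/log(0.5*x)

for W in [0.1, 0.5, 1.0, 2.0]:
    print(W, ell_lloyd(0.0, W))   # ell grows without bound as W -> 0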
Atomic theory

Atomic theory, ancient philosophical speculation that all things can be accounted for by innumerable combinations of hard, small, indivisible particles (called atoms) of various sizes but of the same basic material; or the modern scientific theory of matter according to which the chemical elements that combine to form the great variety of substances consist themselves of aggregations of similar subunits (atoms) possessing nuclear and electron substructure characteristic of each element. The ancient atomic theory was proposed in the 5th century BC by the Greek philosophers Leucippus and Democritus and was revived in the 1st century BC by the Roman philosopher and poet Lucretius. The modern atomic theory, which has undergone continuous refinement, began to flourish at the beginning of the 19th century with the work of the English chemist John Dalton. The experiments of the British physicist Ernest Rutherford in the early 20th century on the scattering of alpha particles from a thin gold foil established the Rutherford atomic model of an atom as consisting of a central, positively charged nucleus containing nearly all the mass and surrounded by a cloud of negatively charged planetlike electrons. With the advent of quantum mechanics and the Schrödinger equation in the 1920s, atomic theory became a precise mathematical science. Austrian physicist Erwin Schrödinger devised a partial differential equation for the quantum dynamics of atomic electrons, including the electrostatic repulsion of all the negatively charged electrons from each other and their attraction to the positively charged nucleus. The equation can be solved exactly for an atom containing only a single electron (hydrogen), and very close approximations can be found for atoms containing two or three electrons (helium and lithium). To the extent that the Schrödinger equation can be solved for more-complex cases, atomic theory is capable of predicting from first principles the properties of all atoms and their interactions. The recent availability of high-speed supercomputers to solve the Schrödinger equation has made possible accurate calculations of properties for atoms and molecules with ever larger numbers of electrons. Precise agreement with experiment is obtained if small corrections due to the effects of the theory of special relativity and quantum electrodynamics are also included.
Quantum Philosophy

New experiments - real and imagined - are probing ever more deeply into the surreal quantum realm.

(Figure: gravitational-lens thought experiment.) COSMIC THOUGHT EXPERIMENT calls for measuring individual photons from a quasar whose image has been split in two by a galaxy acting as a "gravitational lens." In a sense, the way the experiment is carried out now determines whether each photon - billions of years ago - acted like a particle, going one way or the other around the galaxy and ending up in one of the two detectors (a and b), or like a wave, going both ways around the galaxy and generating an interference pattern (c).

In ancient Greece, Plato tried to think and talk his way to the truth in extended dialogues with his disciples. Today physicists such as Leonard Mandel of the University of Rochester operate in a somewhat different fashion. He and his students, who are more likely to wear t-shirts and laser-proof goggles than robes and sandals, spend countless hours bent over a large metal table trying to align a laser with a complex network of mirrors, lenses, beam splitters and light detectors. Yet the questions they address in their equipment-jammed laboratory are no less profound than those contemplated by Plato in his grassy glade. What are the limits of human knowledge? Is the physical world shaped in some sense by our perception of it? Is there an element of randomness in the universe, or are all events predetermined?

Mandel, being inclined toward understatement, offers a more modest description of his mission. "We are trying to understand the implications of quantum mechanics," he says. "The subject is very old, but we are still learning."

Indeed, it has been nearly a century since Max Planck proposed that electromagnetic radiation comes in tidy bundles of energy called quanta. Building on this seemingly tenuous supposition, scientists erected what is by far the most successful theory in the history of science. In addition to yielding theories for all the fundamental forces of nature except gravity, quantum mechanics has accounted for such disparate phenomena as the shining of stars and the order of the periodic table. From it have sprung technologies ranging from nuclear reactors to lasers.

Still, quantum theory has deeply disturbing implications. For one, it shattered traditional notions of causality. The elegant equation devised by Erwin Schrödinger in 1926 to describe the unfolding of quantum events offered not certainties, as Newtonian mechanics did, but only an undulating wave of possibilities. Werner Heisenberg's uncertainty principle then showed that our knowledge of nature is fundamentally limited - as soon as we grasp one part, another part slips through our fingers.

The founders of quantum physics wrestled with these issues. Albert Einstein, who in 1905 showed how Planck's electromagnetic quanta, now called photons, could explain the photoelectric effect (in which light striking metal induces an electric current), insisted later that a more detailed, wholly deterministic theory must underlie the vagaries of quantum mechanics. Arguing that "God does not play dice," he designed imaginary "thought" experiments to demonstrate the theory's "unreasonableness." Defenders of the theory such as Niels Bohr, armed with thought experiments of their own, asserted that Einstein's objections reflected an obsolete view of reality. "It is not the job of scientists," Bohr chided his friend, "to prescribe to God how He should run the world."
Until recently, the prevailing attitude of most physicists has been utilitarian: if the theory can foretell the performance of a doped gallium arsenide semiconductor, why worry about its epistemological implications? In the past decade or so, however, a growing cadre of researchers has been probing the ghostly underpinnings of their craft. New technologies, some based on the very quantum phenomena that they test, have enabled investigators to carry out experiments Einstein and Bohr could only imagine. These achievements, in turn,have inspired theorists to dream up even more challenging - and sometimes bizarre - tests. The goal of the quantum truth-seekers is not to build faster computers or communications devices-although that could be an outcome of the research. And few expect to "disprove" a theory that has been confirmed in countless experiments. Instead their goal is to lay bare the curious reality of the quantum realm. "For me, the main purpose of doing experiments is to show people how strange quantum physics is," says Anton Zeilinger of the University of Innsbruck, who is both a theorist and experimentalist ."Most physicists are very naive; most still believe in real waves or particles." So far the experiments are confirming Einstein's worst fears. Photons, neutrons and even whole atoms act sometimes like waves, sometimes like particles, but they actually have no definite form until they are measured. Measurements, once made, can also be erased, altering the outcome of an experiment that has already occurred. A measurement of one quantum entity can instantaneously influence another far away.This odd behaviour can occur not only in the microscopic realm but even in objects large enough to be seen with the naked eye. These findings have spurred a revival of interest in "interpretations" of quantum mechanics, which attempt to place it in a sensible framework But the current interpretations seem anything but sensible. Some conjure up multitudes of universes. Others require belief in a logic that allows two contradictory statements to be true. "Einstein said that if quantum mechanics is right,then the world is crazy," says Daniel Greenberger, a theorist at the City College of New York. " Well, Einstein was right. The world is crazy." The root cause of this pathology is the schizophrenic personality of quantum phenomena, which act like waves one moment and particles the next. The mystery of wave-particle duality is an old one, at least in the case of light. No less an authority than Newton proposed that light consisted of "corpuscles," but a classic experiment by Thomas Young in the early 1800s convinced most scientists that light was essentially wavelike. Young aimed a beam of light through a plate containing two narrow slits, illuminating a screen on the other side. If the light consisted of particles, just two bright lines should have appeared on the screen. Instead a series of lines formed. The lines could be explained only by assuming that the light was propagating as waves, which were split into pairs of wavelets by the two-slit apparatus. The pattern on the screen was formed by the overlapping, or interference, of the wavelet pairs. The screen was bright where crests coincided and dark where crests met troughs,cancelling each other out. But more recent two-slit experiments suggest that Newton was also right. 
Modern photodetectors (which exploit the photoelectric effect explained by Einstein) can show individual photons plinking against the screen behind the slits in a particular spot at a particular time - just like particles. But as the photons continue striking the screen, the interference pattern gradually emerges, a sure sign that each individual photon went through both slits, like a wave. Moreover, if the researcher either leaves just one slit at a time open or moves the detectors close enough to the two slits to determine which path a photon took, the photons go through one slit or the other, and the interference pattern disappears. Photons, it seems, act like waves as long as they are permitted to act like waves, spread out through space with no definite position. But the moment someone asks where the photons are - by determining which slit they went through or making them hit a screen - they abruptly become particles.

Revealing the Split Personality of Light

(Figure: wave-particle duality.) Two-slit experiments reveal that photons, the quantum entities giving rise to light and other forms of electromagnetic radiation, act both like particles and like waves. A single photon will strike the screen in a particular place, like a particle (left). But as more photons strike the screen, they begin to create an interference pattern (center). Such a pattern could occur only if each photon had actually gone through both slits, like a wave (right).

Actually, wave-particle duality is even more baffling than this explanation suggests, as John A. Wheeler of Princeton University demonstrated with a thought experiment he devised in 1980. "Bohr used to say that if you aren't confused by quantum physics then you haven't really understood it," remarks Wheeler, who studied under Bohr in the 1930s and went on to become one of the most adventurous explorers of the quantum world. In the two-slit experiments, the physicist's choice of apparatus forces the photon to choose between going through both slits, like a wave, or just one slit, like a particle. But what would happen, Wheeler asked, if the researcher could somehow wait until after the light had passed the two slits before deciding how to observe it?

Five years after Wheeler outlined what he called the delayed-choice experiment, it was carried out independently by groups at the University of Maryland and the University of Munich. They aimed a laser beam not at a plate with two slits but at a beam splitter, a mirror coated with just enough silver to reflect half of the photons impinging on it and let the other half pass through. After diverging at the beam splitter, the two beams were guided back together by mirrors and fed into a detector. This initial setup provided no way for the investigators to test whether any individual photon had gone right or left at the beam splitter. Consequently, each photon went both ways, splitting into two wavelets that ended up interfering with each other at the detector. Then the workers installed a customized crystal called a Pockels cell in the middle of one route. When an electric current was applied to the Pockels cell, it diffracted photons to an auxiliary detector. Otherwise, photons passed through the cell unhindered. A random signal generator made it possible to turn the cell on or off after the photon had already passed the beam splitter but before it reached the detector, as Wheeler had specified.
When the Pockels-cell detector was switched on, the photon would behave like a particle and travel one route or the other, triggering either the auxiliary detector or the primary detector, but not both at once. If the Pockels-cell detector was off, an interference pattern would appear in the detector at the end of both paths, indicating that the photon had travelled both routes. To underscore the weirdness of this effect, Wheeler points out that astronomers could perform a delayed-choice experiment on light from quasars, extremely bright, mysterious objects found near the edges of the universe. In place of a beam splitter and mirrors, the experiment requires a gravitational lens, a galaxy or other massive object that splits the light from a quasar and refocuses it in the direction of a distant observer, creating two or more images of the quasar.

Psychic Photons

The astronomer's choice of how to observe photons from the quasar here in the present apparently determines whether each photon took both paths or just one path around the gravitational lens - billions of years ago. As they approached the galactic beam splitter, the photons must have had something like a premonition telling them how to behave in order to satisfy a choice to be made by unborn beings on a still nonexistent planet. The fallacy giving rise to such speculations, Wheeler explains, is the assumption that a photon had some physical form before the astronomer observed it. Either it was a wave or a particle; either it went both ways around the quasar or only one way. Actually, Wheeler says, quantum phenomena are neither waves nor particles but are intrinsically undefined until the moment they are measured. In a sense, the British philosopher Bishop Berkeley was right when he asserted two centuries ago that "to be is to be perceived."

Reflecting on quantum mechanics some 60 years ago, the British physicist Sir Arthur Eddington complained that the theory made as much sense as Lewis Carroll's poem "Jabberwocky," in which "slithy toves did gyre and gimble in the wabe." Unfortunately, the jargon of quantum mechanics is rather less lively. An unobserved quantum entity is said to exist in a "coherent superposition" of all the possible "states" permitted by its "wave function." But as soon as an observer makes a measurement capable of distinguishing between these states, the wave function "collapses," and the entity is forced into a single state. Yet even this deliberately abstract language contains some misleading implications. One is that measurement requires direct physical intervention. Physicists often explain the uncertainty principle in this way: in measuring the position of a quantum entity, one inevitably knocks it off its course, losing information about its direction and about its phase, the relative position of its crests and troughs. Most experiments do in fact involve intrusive measurements. For example, blocking one path or the other or moving detectors close to the slits obviously disturbs the photons' passage in the two-slit experiment, as does placing a detector along one route of the delayed-choice experiment. But an experiment done last year by Mandel's team at the University of Rochester shows that a photon can be forced to switch from wavelike to particlelike behaviour by something much more subtle than direct intervention. The experiment relies on a parametric down-converter, an unusual lens that splits a photon of a given energy into two photons whose energy is half as great.
Although the device was developed in the 1960s, the Rochester group pioneered its use in tests of quantum mechanics. In the experiment, a laser fires light at a beam splitter. Reflected photons are directed to one down-converter, and transmitted photons go to another down-converter. Each down-converter splits any photon impinging on it into two lower-frequency photons, one called the signal and the other called the idler. The two down-converters are arranged so that the two idler beams merge into a single beam. Mirrors steer the overlapping idlers to one detector and the two signal beams to a separate detector. This design does not permit an observer to tell which way any single photon went after encountering the beam splitter. Each photon therefore goes both right and left at the beam splitter, like a wave, and passes through both down-converters, producing two signal wavelets and two idler wavelets. The signal wavelets generate an interference pattern at their detector. The pattern is revealed by gradually lengthening the distance that signals from one down-converter must go to reach the detector. The rate of detection then rises and falls as the crests and troughs of the interference wavelets shift in relation to each other, going in and out of phase.

(Photograph.) LEONARD MANDEL (at left) and co-workers at the University of Rochester gather around a parametric down-converter, an unusual crystal that converts any photon striking it into two photons with half as much energy. Mandel's group pioneered the use of the device in tests of quantum mechanics.

Now comes the odd part. The signal photons and the idler photons, once emitted by the down-converters, never again cross paths; they proceed to their respective detectors independently of each other. Nevertheless, simply by blocking the path of one set of idler photons, the researchers destroy the interference pattern of the signal photons. What has changed? The answer is that the observer's potential knowledge has changed. He can now determine which route the signal photons took to their detector by comparing their arrival times with those of the remaining, unblocked idlers. The original photon can no longer go both ways at the beam splitter, like a wave, but must either bounce off or pass through, like a particle. The comparison of arrival times need not actually be performed to destroy the interference pattern. The mere "threat" of obtaining information about which way the photon travelled, Mandel explains, forces it to travel only one route. "The quantum state reflects not only what we know about the system but what is in principle knowable," Mandel says.

Can the threat of obtaining incriminating information, once made, be retracted? In other words, are measurements reversible? Many theorists, including Bohr, thought not, and the phrase "collapse of the wave function" reflects that belief. But since 1983 Marlan O. Scully, a theorist at the University of New Mexico, has argued that it should be possible to gain information about the state of a quantum phenomenon, thereby destroying its wavelike properties, and then restore those properties by "erasing" the information. Several groups working with optical interferometry, including Mandel's, claim to have demonstrated what Scully has dubbed a "quantum eraser." The group that has come closest, according to Scully, is one led by Raymond Y. Chiao of the University of California at Berkeley.
Earlier this year Chiao's group passed a beam of light through a down-conversion crystal, generating two identical photons. After being directed by mirrors along separate paths, the two photons crossed paths again at a half-silvered mirror and then entered two detectors. Because it was impossible to know which photon ended up in which detector, each photon seemed to go both ways. As in Mandel's experiment, the interference pattern was revealed by lengthening one arm of the interferometer; a device called a coincidence counter showed the simultaneous firings of the two photon detectors rising and falling as the two wavelets entering each detector went in and out of phase. Then the workers added a device to the interferometer that shifted the polarization of one set of photons by 90 degrees. If one thinks of a ray of light as an arrow, polarization is the orientation of the plane of the arrowhead. One of the peculiarities of polarization is that it is a strictly binary property; photons are always polarized either vertically or horizontally. The altered polarization served as a tag; by putting polarization detectors in front of the simple light detectors at the end of the routes, one could determine which route each photon had taken. The two paths were no longer indistinguishable, and so the interference pattern disappeared. Finally, Chiao's group inserted two devices that admitted only light polarized in one direction just in front of the detectors. The paths were indistinguishable again, and the interference pattern reappeared. Unlike Humpty-Dumpty, a collapsed wave function can be put back together again.

Spooky Action

Following up another proposal by Scully, Chiao has even suggested a way to delay the choice of whether or not to restore the interference pattern until after the photons have struck the detectors. The simple polarizing filters in front of the detectors are replaced with polarizing beam splitters, which direct photons with opposite polarization to different detectors. A computer then stores the data on the arrival times of all the photons in one file and the polarization of all the photons in another file. Viewed all at once, without regard to polarization, the arrival times show no interference pattern. But if one separates differently polarized photons and plots them independently, two distinct interference patterns emerge. Such possibilities provoke consternation in some quarters. Edwin T. Jaynes of Washington University, a prominent theorist whose work helped to inspire Scully to conceive the quantum eraser, has nonetheless dubbed it "medieval necromancy." Scully was so pleased by Jaynes's remark that he included it in a recent article on the quantum eraser.

Necromancy cannot hold a candle to nonlocality. Einstein, Boris Podolsky and Nathan Rosen first drew attention to this bizarre quantum property (which is now often called the EPR effect in their honour) in 1935 with a thought experiment designed to prove that quantum mechanics was hopelessly flawed. What would happen, Einstein and his colleagues asked, if a particle consisting of two protons decayed, sending the protons in opposite directions? According to quantum mechanics, as long as both protons remain unobserved, their properties remain indefinite, in a superposition of all possible states; that means each one travels in all possible directions. But because of their common origin, the properties of the protons are tightly correlated, or "entangled."
For example, through simple conservation of momentum, one knows that if one proton heads north, the other must have headed south. Consequently, measuring the momentum of one proton instantaneously determines the momentum of the other proton - even if it has travelled to the opposite end of the universe. Einstein said that this "spooky action at a distance" was incompatible with any "realistic" model of reality; all the properties of each proton must be fixed from the moment they first fly apart. Until the early 1960s, most physicists considered the issue entirely academic, since no one could imagine how to resolve it experimentally. Then, in 1964, John S. Bell of CERN, the European laboratory for particle physics, showed that quantum mechanics predicted stronger statistical correlations between entangled particles than the so-called local realistic theory that Einstein preferred. Bell's papers triggered a flurry of laboratory work, culminating in a classic (but not classical) experiment performed a decade ago by Alain Aspect of the University of Paris. Instead of the momentum of protons, Aspect analysed the polarization of pairs of photons emitted by a single source toward separate detectors. Measured independently, the polarization of each set of photons fluctuated in a seemingly random way. But when the two sets of measurements were compared, they displayed an agreement stronger than could be accounted for by any local realistic theory - just as Bell had predicted. Einstein's spooky action at a distance was real.

Until recently, no experiment had successfully shown that the EPR effect held true for momentum, as Einstein, Podolsky and Rosen had originally proposed. Two years ago John G. Rarity and Paul R. Tapster of the Royal Signals and Radar Establishment in England finally achieved that feat. The experiment began with a laser firing into a down-converter, which produced pairs of correlated photons. Each of these photons then passed through a separate two-slit apparatus and thence to a photon detector. Through conservation of momentum, one could determine the route of each photon if one knew the route of its partner. But the arrangement of mirrors and beam splitters made it impossible to determine the actual route of either photon.

How to Destroy - and Revive - a Light Wave

(Figures: experiment layouts 1 and 2.)

Information rather than direct intervention destroys wavelike behaviour in an experiment done at the University of Rochester. A laser fires photons past a half-silvered mirror, or beam splitter, to two down-converters, labelled 1 and 2. These convert each incident photon into two lower-energy photons, called signals and idlers. Because the signal detector cannot tell how the signals arrived, each signal takes both routes, like a wave, generating an interference pattern at the signal detector. But the pattern can be destroyed merely by blocking idlers from down-converter 1 (dotted line). The reason is that each signal's path can now be retraced; simultaneous detection of a signal and idler indicates that both came from a photon reflected by the beam splitter into down-converter 2. Erasing information about the path of a photon restores wavelike behaviour in an experiment done at the University of California at Berkeley. Pairs of identically polarized photons produced by a down-converter bounce off mirrors, converge again at a beam splitter and pass into two detectors.
A coincidence counter observes an interference pattern in the rate of simultaneous detections by the two detectors, indicating that each photon has gone both ways at the beam splitter, like a wave. Adding a polarization shifter to one path destroys the pattern by making it possible to distinguish the photons. But placing two polarizing filters in front of the detectors makes the photons identical again, erasing the polarization distinction and restoring the interference pattern. Next, the workers slightly lengthened one of the four routes, as Chiao did in his quantum eraser experiment. Although the rate at which photons struck each detector did not change, the rate of simultaneous firings recorded by a coincidence counter oscillated, forming a telltale interference pattern like the one observed by Chiao. Such a pattern could occur only if each photon, the one on the left and the one on the right, was passing through both slits to its respective detector, its momentum fundamentally undefined and yet still correlated with the momentum of its distant partner. Still more ambitious EPR experiments have been proposed but not yet carried out. Greenberger, Zeilinger and Michael Horne of Stonehill College have shown that three or more particles sprung from a single source will exhibit much stronger nonlocal correlations than those between just two particles. Bernard Yurke and David Stoler of AT&T Bell Laboratories have even suggested a way in which three particles emitted from separate locations can exhibit the EPR effect. Unfortunately, the EPR effect does not provide a loophole in the theory of relativity, which prohibits communications faster than light, since each isolated observer of a correlated particle sees only an apparently random fluctuation of properties. But the effect does allow one safely to transmit a random number that can then serve as the numerical "key" for an encryption system. In fact, such a device has been built by Charles H. Bennett of the IBM Thomas J. Watson Research Center. A die-hard realist might dismiss the experiments described above, since they all involve that quintessence of ineffability, light. But electrons, neutrons, protons and even whole atoms - the stuff our own bodies are made of - also display pathological behaviour. Researchers observed wavelike behaviour in electrons through indirect means as early as the 1920s, and they began carrying out two-slit experiments with electrons several decades ago. Superposed Philosophers A new round of electron experiments may be carried out soon if Yakir Aharonov of Tel-Aviv University has his way. Noting that superposition is generally inferred from observations of large numbers of particles, Aharonov contends that a single electron bound to a hydrogen atom could be detected smeared out in a relatively large cavity - say, 10 centimetres across - by very delicately scattering photons off it. Aharonov has not yet published his idea. "I am a very fast thinker but a very slow writer," he says, and some physicists he has discussed it with are sceptical. On the other hand, many were sceptical in 1958, when Aharonov and David Bohm of the University of London suggested a way in which a magnetic field could influence an electron that, strictly speaking, lay completely beyond the field's range. The so-called Aharonov-Bohm effect has now been confirmed in laboratories.
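The "stronger statistical correlations" that Bell identified can be made concrete with a little arithmetic. For a suitably entangled polarization state of the kind Aspect measured, quantum mechanics predicts a correlation E(a, b) = cos 2(a − b) between analysers set at angles a and b, while any local realistic theory keeps the CHSH combination of four such correlations at or below 2. The following is a minimal numerical check, not part of any of the experiments described in this article; the angles are the standard choice that maximizes the quantum value.

```python
# CHSH check: quantum correlations for polarization-entangled photon pairs can
# reach 2*sqrt(2), above the local-realistic bound of 2.
import math

def E(a, b):
    # Quantum prediction for the polarization correlation at analyser angles a, b
    # (assuming a maximally entangled pair such as |HH> + |VV>).
    return math.cos(2 * (a - b))

a1, a2 = 0.0, math.pi / 4              # analyser settings on one side
b1, b2 = math.pi / 8, 3 * math.pi / 8  # analyser settings on the other side

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(S)   # about 2.828; any local realistic model gives |S| <= 2
```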
How Distant Particles Keep in Touch Quantum entanglement layout Spooky correlations between separate photons were demonstrated in an experiment at the Royal Signals and Radar Establishment in England. In this simplified depiction, a down-converter sends pairs of photons in opposite directions. Each photon passes through a separate two-slit apparatus and is directed by mirrors to a detector. Because the detectors cannot distinguish which slit a photon passes through, each photon goes both ways, generating an interference pattern in the coincidence counter. Yet each photon's direction, or momentum, is also correlated with its partner's. A measurement showing a photon going through the upper left slit would instantaneously force its distant partner to go through the lower slit on the right. Since the mid-1970s various workers have done interference experiments with neutrons, which are almost 2,000 times heavier than electrons. Some 15 years ago, for example, Samuel A. Werner of the University of Missouri at Columbia and others found that the interference pattern formed by neutrons diffracted along two paths by a sculpted silicon crystal could be altered simply by changing the interferometer's orientation relative to the earth's gravitational field. It was the first demonstration that the Schrödinger equation holds true under the sway of gravity. Investigators have begun doing interferometry with whole atoms only in the past few years. Such experiments are extraordinarily difficult. Atoms cannot pass through lenses or crystals, as photons, electrons and even neutrons can. Moreover, since the wavelength of an object is inversely proportional to its mass and velocity, the particle must move slowly for its wavelength to be detectable. Yet workers such as David E. Pritchard of the Massachusetts Institute of Technology have created the equivalent of beam splitters, mirrors and lenses for atoms out of metal plates with precisely machined grooves and even standing waves of light, formed when a wave of light reflects back on itself in such a way that its crests and troughs match precisely. Pritchard says physicists may one day be able to pass biologically significant molecules such as proteins or nucleic acids through an interferometer. In principle, one could even observe wavelike behaviour in a whole organism, such as an amoeba. There are some obstacles, though: the amoeba would have to travel very slowly, so slowly, in fact, that it would take some three years to get through the interferometer, according to Pritchard. The experiment would also have to be conducted in an environment completely free of gravitational or other influences - that is, in outer space. Getting a slightly larger and more intelligent organism, for instance, a philosopher, to take two paths through a two-slit apparatus would be even trickier. "It would take longer than the age of the universe," Pritchard says. While physicists may never nudge a philosopher into a superposition of states, they are hard at work trying to induce wavelike behaviour in objects literally large enough to see. The research has rekindled interest in a famous thought experiment posed by Schrödinger in 1935. In a version altered by John Bell, the EPR theorist, to be more palatable to animal lovers, a cat is placed in a box containing a lump of radioactive matter, which has a 50 per cent chance of emitting a particle in a one-hour period.
When the particle decays, it triggers a Geiger counter, which in turn causes a flask of milk to pour into a bowl, feeding the cat. (In Schrödinger 's version, a hammer smashes a flask of poison gas, killing the cat.) Common sense dictates that a cat cannot have a stomach both empty and full. But quantum mechanics dictates that after one hour, if no one has looked in the box, the radioactive lump and so the cat exist in a superposition of indistinguishable states; the former is both decayed and undecayed, and the latter is both hungry and full. Various resolutions to the paradox have been suggested. Wojciech H. Zurek, a theorist at Los Alamos National Laboratory, contends that as a quantum phenomenon propagates, its interaction with the environment inevitably causes its superposed states to become distinguishable and thus to collapse into a single state. Mandel of the University of Rochester thinks this view is supported by his experiment, in which the mere potential for knowledge of a photon's path destroyed its interference pattern. After all, one can easily learn whether the cat has been fed-say, by making the box transparent-without actually disturbing it. But since the early 1980s Anthony J.Leggett, a theorist at the University of Illinois, has argued that one should be able to observe a superconducting quantum interference device, more commonly called a SQUID, in a superposition of states. A SQUID, which is typically the size of a pinhead and therefore huge in comparison with atoms or other quantum objects, consists of a loop of superconducting material, through which electrons flow without resistance, broken by a thin slice of insulating material called a Josephson junction. In a classical world the electrons would be completely blocked by the insulator, but the quantum indefiniteness of the electrons' positions allows hordes of them to "tunnel" blithely through the gap. Inspired by Leggett's calculations, Claudia D. Tesche of the IBM Watson center proposed an experiment that could show the superposition quite directly. Given certain conditions, Tesche notes, the current in a SQUID has an equal chance of flowing in either direction. According to quantum mechanics, then, it should flow both ways, creating an interference pattern analogous to the one formed in a two-slit experiment. Tesche's design calls for placing two extremely sensitive switches around the SQUID, each of which is tripped when the current is going in a different direction. Of course, once a switch is tripped, the wave function collapses, and the interference pattern is destroyed. Tesche hopes to infer the pattern from those microseconds during which the switches are not activated- making measurements, in effect, by not making them. Orthodoxy under Attack Other theorists note that Tesche's experiment is extremely difficult, since even minute disturbances from the environment can cause the SQUID's wave function to collapse. In fact, Tesche recently turned to other, more conventional pursuits, at least temporarily setting aside the experiment. " It wasn't working very well," she concedes. Yet less ambitious experiments by John Clarke of the University of California at Berkeley, Richard A. Webb of IBM and others have produced strong circumstantial evidence that a SQUID can in fact exist in a superposition of two states. The experiments involve a property known as flux, which is the area of the superconducting ring multiplied by the strength of the magnetic field perpendicular to the ring. 
In an ordinary superconducting ring the flux would be constant, but measurements with magnetometers show the flux of the SQUID spontaneously jumping from one value to another. Such jumps can occur only if the flux is in a superposition of states-hungry and full at the same time, as it were. All the recent experiments, completed and proposed, have hardly led to a consensus on what exactly quantum mechanics means. If only by default, the "orthodox" view of quantum mechanics is still the one set forth in the 1920s by Bohr. Called the Copenhagen interpretation, its basic assertion is that what we observe is all we can know; any speculation about what a photon, an atom or even a SQUID "really is" or what it is doing when we're not looking is just that-speculation. To be sure, the Copenhagen interpretation has come under attack from theorists in recent years, most notably from John Bell, author of the brilliant proof of the divergence between "realistic" and quantum predictions for EPR experiments. In a television interview just before his sudden death from a stroke two years ago, the Irish physicist expressed his dissatisfaction with the Copenhagen interpretation, noting that it "says we must accept meaninglessness." Does that make you afraid? the interviewer asked." No, just disgusted," Bell replied, smiling. John Wheeler JOHN A. WHEELER, seen here with the likenesses of two earlier explorers of the quantum realm, Einstein and Bohr, thinks the deepest lesson of quantum mechanics may be that reality is defined by the questions we put to it. Bell's exhortations helped to revive interest in a realistic theory originally proposed in the 1950s by Bohm.In Bohm's view, a quantum entity such as an electron does in fact exist in a particular place at a particular time, but its behaviour is governed by an unusual field, or pilot wave, whose properties are defined by the Schrödinger wave function.The hypothesis does allow one quantum quirk, nonlocality, but it eliminates another, the indefiniteness of position of a particle. Its predictions are identical to those of standard quantum mechanics. Bell also boosted the standing of a theory developed six years ago by Gian Carlo Ghirardi and Tullio Weber of the University of Trieste and Alberto Rimini of the University of Pavia and refined more recently by Philip Pearle of Hamilton College. By adding a nonlinear term to the Schrödinger equation, the theory causes superposed states of a system to converge into a single state as the system approaches macroscopic dimensions, thereby eliminating the Schrödinger 's cat paradox, among other embarrassments. Unlike Bohm's pilot-wave concept, the theory of Ghirardi's group offers predictions that diverge from those of orthodox quantum physics, albeit subtly. " If you shine a neutron through two slits, you get an interference pattern," Pearle says. " But if our theory is correct, the interference should disappear if you make the measurement far enough away." The theory also requires slight violations of the law of conservation of energy. Zeilinger of the University of Innsbruck was sufficiently interested in the theory to test the neutron prediction, which was not borne out. "This approach is one of those dead end roads that has to be walked by someone," he sighs. 
Yet another view currently enjoying some attention, although not as a result of Bell's efforts, is the many-worlds interpretation, which was invented in the 1950s by Hugh Everett III of Princeton. The theory sought to answer the question of why, when we observe a quantum phenomenon, we see only one outcome of the many allowed by its wave function. Everett proposed that whenever a measurement forces a particle to make a choice, for instance, between going left or right in a two-slit apparatus, the entire universe splits into two separate universes; the particle goes left in one universe and right in the other. Although the theory was long dismissed as more science fiction than science, it has been revived in a modified form by Murray Gell-Mann of the California Institute of Technology and James B. Hartle of the University of California at Santa Barbara. They call their version the many-histories interpretation and emphasize that the histories are "potentialities" rather than physical actualities. Gell-Mann has reportedly predicted that this view will dominate the field by the end of the century. An intriguing alternative, called the many-minds view, has been advanced by David Z. Albert, a physicist-turned-philosopher at Columbia University, and Barry Loewer, a philosopher from Rutgers University. Each observer, they explain, or "sentient physical system," is associated with an infinite set of minds, which experience different possible outcomes of any quantum measurement. The array of choices embedded in the Schrödinger equation corresponds to the myriad experiences undergone by these minds rather than to an infinitude of universes. The concept may sound far-fetched, but it is no more radical, Albert argues, than the many-histories theory or even the Copenhagen interpretation itself. The It from Bit Other philosophers call for a sea change in our very modes of thought. After Einstein introduced his theory of relativity, notes Jeffrey Bub, a philosopher at the University of Maryland, "we threw out the old Euclidean notion of space and time, and now we have a more generalised notion." Quantum theory may demand a similar revamping of our concepts of rationality and logic, Bub says. Boolean logic, which is based on either-or propositions, suffices for a world in which an atom goes either through one slit or the other, but not both slits. "Quantum mechanical logic is non-Boolean," he comments. "Once you understand that, it may make sense." Bub concedes, however, that none of the so-called quantum logic systems devised so far has proved very convincing. A different kind of paradigm shift is envisioned by Wheeler. The most profound lesson of quantum mechanics, he remarks, is that physical phenomena are somehow defined by the questions we ask of them. "This is in some sense a participatory universe," he says. The basis of reality may not be the quantum, which despite its elusiveness is still a physical phenomenon, but the bit, the answer to a yes-or-no question, which is the fundamental currency of computing and communications. Wheeler calls his idea "the it from bit." Following Wheeler's lead, various theorists are trying to recast quantum physics in terms of information theory, which was developed 44 years ago to maximise the amount of information transmitted over communications channels. Already these investigators have found that Heisenberg's uncertainty principle, wave-particle duality and nonlocality can be formulated more powerfully in the context of information theory, according to William K.
Wootters of Williams College, a former Wheeler student who is pursuing the it-from-bit concept. Meanwhile theorists at the surreal frontier of quantum theory are conjuring up thought experiments that could unveil the riddle in the enigma once and for all. David Deutsch of the University of Oxford thinks it should be possible, at least in principle, to build a "quantum computer," one that achieves superposition of states. Deutsch has shown that if different superposed states of the computer can work on separate parts of a problem at the same time, the computer may achieve a kind of quantum parallelism, solving certain problems more quickly than classical computers. Taking this idea further, Albert - with just one of his minds - has conceived of a quantum computer capable of making certain measurements of itself and its environment. Such a "quantum automaton" would be capable of knowing more about itself than any outside observer could ever know - and even more than is ordinarily permitted by the uncertainty principle. The automaton could also serve as a kind of eyewitness of the quantum world, resolving questions about whether wave functions truly collapse, for example. Albert says he has no idea how actually to engineer such a machine, but his calculations show the Schrödinger equation allows such a possibility. If that doesn't work, there is always Aharonov's time machine. The machine, which is based not only on quantum theory but also on general relativity, is a massive sphere that can rapidly expand or contract. Einstein's theory predicts that time will speed up for an occupant of the sphere as it expands and gravity becomes proportionately weaker, and time will slow down as the sphere contracts. If the machine and its occupant can be induced into a superposition of states corresponding to different sizes and so different rates of time, Aharonov says, they may "tunnel" into the future. The occupant can then disembark, ask physicists of the future to explain the mysteries of quantum mechanics and then bring the answers - assuming there are any - back to the present. Until then, like Plato's benighted cave dwellers, we can only stare at the shadows of quanta flickering on the walls of our cave and wonder what they mean. By John Horgan
Influence of a Moving Nodal Point on the "Causal Trajectories" in a Quantum Harmonic Oscillator Potential Chaotic motion in the vicinity of a moving quantum nodal point is studied in the framework of the de Broglie–Bohm trajectory method of quantum mechanics. Chaos emerges from repeated encounters between a quantum path and the moving nodal point; the outcome depends on the distance of closest approach, the oscillator frequencies, and the particles' initial positions [1]. Here, chaotic motion means the exponential divergence of initially neighboring trajectories. For one special choice of the constant phase-shift parameter, the orbit of the nodal point is a circle centered at the origin. In most cases, the orbits of the nodal point are elliptical for different constant phase shifts. In the causal interpretation of quantum theory, the dynamics is strongly influenced by the initial distribution of the particles and the "quantum force" transmitted by the quantum potential. In this description, chaos arises because of the dynamics of the singularity of the quantum potential. At the nodal point, the quantum potential becomes very negative or approaches negative infinity, which keeps the particles from entering or passing through the nodal region. This could be interpreted as the effect that empty space, where the squared wavefunction is approximately zero, influences the motion of quantum particles via the quantum potential. The nodal point itself acts as an attractor or repeller. The motions of the quantum particles can be periodic, ergodic, or chaotic, depending on the constants chosen. There are some curves starting at the nodal point that form outward spirals [1]. For some parameter choices there are no stable limit cycles for the paths of the quantum particles, as seen in the figure. In conclusion, moving nodal points or nodal lines are important for the appearance of chaos in the de Broglie–Bohm interpretation. This model could serve as another reference for the simplest chaotic causal trajectories [2]. The graphic shows the squared wavefunction or the quantum potential, the vector field of the velocities (gray arrows), the trajectories of the quantum particles (colored paths), and the local minima/nodal point (blue point). The orbit of the nodal point is displayed by a thick blue line. This Demonstration studies a normalized superposition of the ground state and a degenerate first excited state, with a constant relative phase shift, of an uncoupled isotropic harmonic oscillator in two dimensions. We assume commensurable frequencies, that is, a minimum common multiple period exists; the squared wavefunction, and therefore the nodal point, is periodic in time. A coefficient in the wavefunction governs the diameter of the orbit of the nodal point. The wavefunction obeys the two-dimensional Schrödinger equation for the uncoupled oscillator, i ∂ψ/∂t = −(1/2)(∂²ψ/∂x² + ∂²ψ/∂y²) + (1/2)(ω_x² x² + ω_y² y²)ψ in units with ħ = m = 1 (the Demonstration fixes specific values of the frequencies and the other constants). The normalized wavefunction is a superposition of oscillator eigenfunctions, which are products of Hermite polynomials and Gaussians, weighted by constant coefficients and constant phase shifts set in the Demonstration. The velocity field is calculated from the gradient of the phase S of the total wavefunction written in the eikonal form ψ = R e^{iS}: the velocity is v = ∇S and the quantum potential is Q = −(∇²R)/(2R), again with ħ = m = 1 (a minimal numerical sketch of this kind of trajectory integration is given after the references below). In the program, if PlotPoints, AccuracyGoal, PrecisionGoal, and MaxSteps are increased, the results will be more accurate. [1] C. Efthymiopoulos, C. Kalapotharakos, and G. Contopoulos, "Origin of Chaos near Critical Points of the Quantum Flow," Physical Review E 79, 2009 pp.
036203-1–036203-18. doi:10.1103/PhysRevE.79.036203, arXiv:0903.2655 [quant-ph]. [2] A. J. Makowski and M. Frackowiak, "The Simplest Non-trivial Model of Chaotic Causal Dynamics," Acta Physica Polonica B 32, 2001 pp. 2831–2842. arXiv:quant-ph/0111155v1.
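For readers who want to experiment with such "causal trajectories" outside the Wolfram system, here is a minimal Python sketch. It integrates the de Broglie–Bohm guidance equation dr/dt = Im(∇ψ/ψ) (units ħ = m = ω = 1) for an assumed superposition of the ground state and the two degenerate first excited states of the 2D isotropic oscillator; this particular superposition, the amplitude a, the starting points, and the step sizes are illustrative choices that produce a nodal point circling the origin, and are not taken from the Demonstration itself.

```python
# Sketch: de Broglie-Bohm trajectories near a moving nodal point (hbar = m = omega = 1).
# Assumed wavefunction: psi = psi_00*exp(-i t) + a*(psi_10 + i*psi_01)*exp(-2i t),
# whose single nodal point moves on a circle of radius 1/(a*sqrt(2)) about the origin.
import numpy as np

a = 0.6  # relative amplitude of the excited-state component (illustrative)

def psi(x, y, t):
    gauss = np.exp(-(x**2 + y**2) / 2.0)
    return gauss * (np.exp(-1j * t) + a * np.sqrt(2) * (x + 1j * y) * np.exp(-2j * t))

def velocity(x, y, t, eps=1e-6):
    """Guidance velocity v = Im(grad(psi)/psi), gradient by central differences."""
    p = psi(x, y, t)
    dpx = (psi(x + eps, y, t) - psi(x - eps, y, t)) / (2 * eps)
    dpy = (psi(x, y + eps, t) - psi(x, y - eps, t)) / (2 * eps)
    return np.array([np.imag(dpx / p), np.imag(dpy / p)])

def trajectory(x0, y0, t_max=50.0, dt=0.01):
    """Integrate dr/dt = v(r, t) with a fixed-step fourth-order Runge-Kutta scheme."""
    r, t, path = np.array([x0, y0]), 0.0, [(x0, y0)]
    while t < t_max:
        k1 = velocity(r[0], r[1], t)
        k2 = velocity(*(r + 0.5 * dt * k1), t + 0.5 * dt)
        k3 = velocity(*(r + 0.5 * dt * k2), t + 0.5 * dt)
        k4 = velocity(*(r + dt * k3), t + dt)
        r = r + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        path.append((r[0], r[1]))
    return np.array(path)

# Two initially neighbouring particles: their separation grows markedly whenever the
# paths pass close to the moving nodal point, the signature of chaos discussed above.
p1 = trajectory(0.5, 0.0)
p2 = trajectory(0.501, 0.0)
print("final separation:", np.linalg.norm(p1[-1] - p2[-1]))
```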
Q: What are complex numbers used for? Physicist: If you’ve ever had to do square roots you’ve probably come up against the problem of taking the square root of a negative number.  If you restrict your attention only to real numbers (0, 1, -17, \pi, √2, …, any number you can think of), then there’s no way to take the square root of a negative.  But this makes taking square roots, cube roots, and so on, a pain. You have to remember: 1) The square root, fourth root, sixth root, … of a positive number has two answers (positive and negative).  2) The square root, fourth root, sixth root, … of a negative number has no answers.  3) The cube root, fifth root, seventh root, … of any number has one answer (with the same sign as the original number).  These are random, frustrating, difficult-to-remember rules.  Mathematicians had to deal with these all the time before the 1700s. Then Euler happened.  He was looking at a similar problem: finding the roots of polynomials.  Similar, because the question “x=\sqrt{-1}, what is x?”, is exactly the same as “x^2+1=0, what is x?”.  He was annoyed that most polynomials have roots that don’t exist, so he said: “Sure…  But, what if the root did exist.  Like… in an imaginary sense…”. That’s paraphrasing, but it’s pretty accurate.  He just declared that there’s a new number called “i” with the property that “i^2 = -1″. So, to actually answer: Complex numbers make it easy to take roots, and using complex numbers, all polynomials with terms up to x^N have N roots. Using only real numbers, the cube root of 1 is 1, and only 1. Using complex numbers you can see that the other two roots exist, they just happen to be off of the real line. In this picture the arrow to the right is "1", and the real line is the horizontal line. The other two arrows are the other two roots. The roots of a number are always equally spaced like this. These two properties help complex numbers “complete” the real numbers.  That is to say: you don’t need to create “super complex numbers” to fix any problems with complex numbers.  One of the first “problems” that people ask about is: “Sure, \sqrt{-1} = \pm i, but what’s \sqrt{i}?”  Well, \sqrt{i} = \pm \left( \frac{1}{\sqrt{2}} + \frac{i}{\sqrt{2}} \right).  No problems! Also, if you’ve ever had to do anything with trigonometry you’ve probably come up against: \cos{(A+B)} = \cos{(A)} \cos{(B)} - \sin{(A)}\sin{(B)} Which looks horrible, is horrible, and is horrible to deal with.  Here’s the same thing (both equations) using complex numbers: e^{i(A+B)} = e^{iA}e^{iB}, which is just a direct application of Euler’s equation: e^{i \theta} = \cos{\theta} + i \sin{\theta}.  Almost any time that you have to do lots of summations or multiplications involving trig functions, it’s best to bust out some complex numbers. In the same vein, electrical engineers use “phasors” (phasor, not phaser) to talk about sinusoidal current (like what comes out of the wall).  Again, complex numbers = easy! If you’ve gotten stuck behind a nasty integral in calculus (and if you haven’t, ignore this), you’ll find that many of them are surprisingly hard using real numbers, but baby simple using complex numbers.  For example: \int_0^\infty \frac{\sin{x}}{x} \, dx, \int_{-\infty}^\infty \frac{1}{1+x^2} \, dx, \int_0^\pi \frac{1}{2 \cos{x} + 1} \, dx, \int_{-\infty}^\infty \frac{p(x)}{q(x)} \, dx (where p and q are polynomials). Some fields simply can’t be approached without complex numbers, particularly quantum mechanics.
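Before turning to the quantum-mechanical case, here is a quick numerical spot-check of the claims above, using Python's built-in complex numbers (the particular angles are just example values):

```python
# Spot-check: cube roots of 1, the square root of i, and the angle-addition identity.
import cmath

# The three cube roots of 1 sit at equal angles around the unit circle.
cube_roots = [cmath.exp(2j * cmath.pi * n / 3) for n in range(3)]
print([complex(round(z.real, 3), round(z.imag, 3)) for z in cube_roots])

# sqrt(i) = (1 + i)/sqrt(2); squaring it gives back i.
r = cmath.sqrt(1j)
print(r, r**2)

# cos(A+B) = cos(A)cos(B) - sin(A)sin(B) follows from e^{i(A+B)} = e^{iA} e^{iB}.
A, B = 0.7, 1.2  # arbitrary example angles
print(abs(cmath.exp(1j * (A + B)) - cmath.exp(1j * A) * cmath.exp(1j * B)) < 1e-12)
```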
In QM the probability of something happening is the square of the magnitude (absolute value) of the “probability amplitude”, which is complex-valued.  So, if the probability amplitude is \frac{i}{\sqrt{2}}, then the probability is \left| \frac{i}{\sqrt{2}} \right|^2 = \left( \frac{1}{\sqrt{2}} \right)^2 = \frac{1}{2}.  There’s really no way around this.  In fact, the Schrödinger equation, which is arguably the backbone of QM, has an “i” staring you right in the face.  Here’s the equation for a single particle i\hbar\frac{\partial}{\partial t} \Psi(\mathbf{r},\,t) = -\frac{\hbar^2}{2m}\nabla^2\Psi(\mathbf{r},\,t) + V(\mathbf{r})\Psi(\mathbf{r},\,t) where i is in the first term, and “\Psi” is the probability amplitude of the particle’s location.  That’s a bit much, but the point is; you can’t get rid of the complex numbers and do QM with just real numbers. The moral of the story is that “complex numbers” are misnamed.  While they are intimidating at first, they make things simple all over the place.  Notably: trigonometry, anything with waves (electricity, light, sound), finding roots, streamlining math (often, but not always), and in quantum mechanics. This entry was posted in -- By the Physicist, Equations, Math, Quantum Theory. Bookmark the permalink. 15 Responses to Q: What are complex numbers used for? 1. Scott says: Here’s what I want to know: why do we waste so much time learning how to rationalize denominators in Algebra 2, and then never require it again in all of math and science? 2. anon says: Scott: Rationalizing denominators is usually a waste of time. For example, 2/sqrt(7) doesn’t look any uglier than sqrt(7)*2/7. But sometimes rationalizing the denominator is helpful. For example (sqrt(2) + 1)/(sqrt(2) – 1). If you rationalize this denominator, you get 3+2sqrt(2), which is much nicer than the expression you started off with. 3. EloquentMath says: Exactly what I’ve been looking for, very detailed and interesting. Was looking to do an article myself on imaginary numbers, but there’s no way I could live up to this… 4. Pingback: Q: Can you fix the “1/0 problem” by defining 1/0 as a new number? | Ask a Mathematician / Ask a Physicist 5. Mike says: The theory of “black holes” was put together using complex numbers. It must therefore have the same kind of reality as the square root of negative unity. Agree/Disagree ? 6. The Physicist The Physicist says: I wanna say “disagree”, but mostly because I don’t know what a “kind of reality” is. You could just as easily say “the theory of airfoils (wings) and alternating electrical current was put together using complex numbers, so therefore they have the same kind of reality as the square root of unity”. If that statement is necessarily true, then I gotta give you the black hole thing! 7. Anthony Rose says: Thank you for this, very informative and helpful. I have always been bothered by the acceptance of the square root of minus one (i) as a necessity, because maths depends upon the truth that a negative times a negative is a positive. So when we allow the existence of i, even for a moment, we are also admitting to ourselves that there is some deep fault-line in our own mathematical understanding of the universe, are we not? Now from the above I gather that we only use complex numbers to save time. That there is a long way round, all the time, every time? 
Even so, either way, I do wonder whether complex numbers refer to an alternate dimension… like quantum fluctuations into and out of space, that just as you can have the square root of 1 on the positive of th x/y axes, if you flip the chart left to right, the square root of -1 makes sense…. a bit confused, I admit, but just waving at the idea. 8. The Physicist The Physicist says: Complex numbers are an “extension” of the real numbers by “i”. In very much the same way, negative numbers are also a string of extensions of the positive real numbers. One is no more abstract than the other. Though we do feel as though there’s something profound and mysterious about complex numbers, they’re just a bunch of symbols representing rules, quantities, and interactions, the same as everything else in mathematics. 9. Anthony Rose says: Sorry for being so dumb (as everybody but me seems to see ‘i’ as ok), but is ‘i’ not a direct contradiction of the more foundational principle that x^2 > 0? How it can it be a simple extension of what it undermines? 10. The Physicist The Physicist says: Imaginary numbers are weird to everyone. Keep in mind that math is really about making up new rules and seeing where those rules lead, than it is about delving into the nature of the universe. It’s a heck of a lot more useful when we create math that has rules that correspond to the laws of the universe, but it’s not necessary that they have anything in common. In this case a mathematician sat down one day and said “x2>0, so x2=-1 is can’t have a solution. But what if there were a new number such that that did solve it?”. So, it’s not that there’s some actual number that nobody noticed, it’s that “i” is a complete fabrication, that’s defined only in terms of the fact that i2=-1. 11. Anthony Rose says: Ok, but (1) if “i” turns out to be useful in our universe, so useful that it is more than a short-cut to a long way round, but in fact is sometimes the *only* way to get to a solution, then does that not hint that there really is such a number “i” in our universe, and we just don’t know how it relates, yet? And (2), aside from that, if we mix real numbers with “i” and sometimes “i” is the only way to solve a problem which contains unknown numbers squared, how do we know which unknowns in the problem are real numbers and which square like “i”? Or, to put it another way, how can if I solve a problem containing x^2 by relying on the fact that it is positive, if x may be a member of the “i” set? Or to put it a third way, if I *have to* temporarily use “i” during the solution of a problem, i.e. cross the line between the real world and the impossible world of “i” and come back again, and the solution proves to be correct practically, have I not admitted that at the very least, there is something I do not understand? Because however theoretical it is, surely maths proofs depend on lesser proofs which in turn depend upon axioms, and if we contradict an axiom or proof in order to arrive at a new proof or statement, we cut off the theoretical branch which we are sitting on, even if the answer is correct in practice. And it is no good getting out of it by saying that there are some numbers which are exempt from our normal mathematical axioms if we at the same time depend upon those exempt numbers to behave in our normal mathematical proofs. Unless we say “it works, but we can’t say why”. I guess I am bumbling about here. 
But to me it is like saying, imagine that there is such a number that when added to itself, is not 2x but 3x itself, i.e. 1z + 1z = 3z. Ok, that can be useful at times, but I suspect that no decent mathematician would accept my working some problem out which relied on that number existing, even if only temporarily during the sum. Or perhaps – a gleam of hope here – “i” is just short-hand for “I’m too bothered to work out the way in which such-and-such mathematical description of a real world situation would express itself without involving “i”, because using “i” is quick and works.” So there *may* be a long way round to exactly understand the real world situation, or it may involve some sort of quantum state of non-existence, but we just don’t care which? Ultimately, every state of every mathematical equation used for the real world, should reflect some real world state, surely? If I pause an equation reflecting some real world situation at a point where “i = {some expression}”, then {some expression} is not real – in fact, the entire equation represents a non-real state, and cannot reflect reality. The situation is impossible, and therefore we know that it is either flawed as an expression of reality, or, represents some non-real existence we do not understand. A bit like the electron, which can be at point A at one moment, then at point B the next, and is not anywhere in the real world between those two points. Either there must be another way to solve such an equation, or, such a non-real mathematical expression must have a non-real existence in the universe. If I want to take a proof from A to B, and I use a falsehood on the way, or logical step which disproves the existence of A, how can I rely on my proof, unless I understand how that inbetween state is valid for A? Sorry for the rambling…. 12. The Physicist The Physicist says: Keep in mind, even though we’re used to “3″, there’s no such thing as a 3 in the universe at large. It’s a symbol with some properties that are useful (there’s a big difference between “useful and intuitive” and “existing”). “i” is the exact same thing, a symbol with some properties. It’s no more or less real than any number, letter, or equation. If you were very careful about how you define your 1+1=3 idea, then mathematicians would have no problem with what you find using that equation. In fact, there are huge branches of mathematics that look a lot like that! For example, in “mod 5 arithmetic“, the equation “3+4=2″ makes perfect sense and is true. In this particular case, if you say “X is a real number, and X^2 = -1″ mathematicians would agree that there’s no solution. But, if you say “X can be a complex number, and X^2 = -1″ then mathematicians would agree that it can be solved. Math isn’t ever about pulling rules out of the universe, it’s about making them up and looking at the consequences. 13. Anthony Rose says: That helps a lot. But my sticking point is perhaps an incorrect assumption, that people mix the system of complex numbers with real numbers in the same equation, and apply the answers to real life? I’m okay with making up numbers and playing around to see what the system producesm but if you have an equation describing some real life behaviour involving for example x^2 and then you bring “i” into that equation as well…. wait! I think I see! 
If I take a photograph, and fold it, and unfold it a different way to get a new insight, the beginning and the end pictures may relate to a real situation, but the inbetween folded bit does not, of course, and need not, as long as I know it. Using “i” in a real number equation is just folding the equation into a non-real, but useful, state temporarily for mathematical calculation purposes? (I hope I’ve got it at last because I hate to waste your precious time) (If that is the only way to fold the equation to bring it to its new state then I still think that this only path of manipulation at least hints at a non-real dimensional quantum state which our real world can get into. But I do admit of course this is equivalent to the gardener’s opinion :D ) 14. The Physicist The Physicist says: Sounds like you’ve got it. 15. Pingback: Q: What makes natural logarithms natural? What’s so special about the number e? | Ask a Mathematician / Ask a Physicist Leave a Reply
Schrödinger equation From Wikipedia, the free encyclopedia For a more general introduction to the topic, see Introduction to quantum mechanics. Schrödinger equation as part of a monument in front of Warsaw University's Centre of New Technologies In quantum mechanics, the Schrödinger equation is a partial differential equation that describes how the quantum state of a quantum system changes with time. It was formulated in late 1925, and published in 1926, by the Austrian physicist Erwin Schrödinger.[1] In classical mechanics, Newton's second law (F = ma) is used to make a mathematical prediction as to what path a given system will take following a set of known initial conditions. In quantum mechanics, the analogue of Newton's law is Schrödinger's equation for a quantum system (usually atoms, molecules, and subatomic particles whether free, bound, or localised). It is not a simple algebraic equation, but in general a linear partial differential equation, describing the time-evolution of the system's wave function (also called a "state function").[2]:1–2 The concept of a wavefunction is a fundamental postulate of quantum mechanics. Using these postulates, Schrödinger's equation can be derived from the fact that the time-evolution operator must be unitary and must therefore be generated by the exponential of a self-adjoint operator, which is the quantum Hamiltonian. This derivation is explained below. In the Copenhagen interpretation of quantum mechanics, the wave function is the most complete description that can be given of a physical system. Solutions to Schrödinger's equation describe not only molecular, atomic, and subatomic systems, but also macroscopic systems, possibly even the whole universe.[3]:292ff Schrödinger's equation is central to all applications of quantum mechanics including quantum field theory, which combines special relativity with quantum mechanics. Theories of quantum gravity, such as string theory, also do not modify Schrödinger's equation. The Schrödinger equation is not the only way to make predictions in quantum mechanics—other formulations can be used, such as Werner Heisenberg's matrix mechanics, and Richard Feynman's path integral formulation. Time-dependent equation[edit] The form of the Schrödinger equation depends on the physical situation (see below for special cases). The most general form is the time-dependent Schrödinger equation, which gives a description of a system evolving with time:[4]:143 A wave function that satisfies the non-relativistic Schrödinger equation with V = 0. In other words, this corresponds to a particle traveling freely through empty space. The real part of the wave function is plotted here. Time-dependent Schrödinger equation (general): iħ ∂Ψ(r, t)/∂t = Ĥ Ψ(r, t), where i is the imaginary unit, ħ is the Planck constant divided by 2π, the symbol ∂/∂t indicates a partial derivative with respect to time t, Ψ (the Greek letter psi) is the wave function of the quantum system, r and t are the position vector and time respectively, and Ĥ is the Hamiltonian operator (which characterizes the total energy of any given wave function and takes different forms depending on the situation). Each of these three rows is a wave function which satisfies the time-dependent Schrödinger equation for a harmonic oscillator. Left: The real part (blue) and imaginary part (red) of the wave function. Right: The probability distribution of finding the particle with this wave function at a given position.
The top two rows are examples of stationary states, which correspond to standing waves. The bottom row is an example of a state which is not a stationary state. The right column illustrates why stationary states are called "stationary". The most famous example is the non-relativistic Schrödinger equation for a single particle moving in an electric field (but not a magnetic field; see the Pauli equation):[5] Time-dependent Schrödinger equation (single non-relativistic particle): iħ ∂Ψ(r, t)/∂t = [−ħ²/(2μ) ∇² + V(r, t)] Ψ(r, t), where μ is the particle's "reduced mass", V is its potential energy, ∇² is the Laplacian (a differential operator), and Ψ is the wave function (more precisely, in this context, it is called the "position-space wave function"). In plain language, it means "total energy equals kinetic energy plus potential energy", but the terms take unfamiliar forms for reasons explained below. Given the particular differential operators involved, this is a linear partial differential equation. It is also a diffusion equation, but unlike the heat equation, this one is also a wave equation given the imaginary unit present in the transient term. The term "Schrödinger equation" can refer to both the general equation (first box above), or the specific nonrelativistic version (second box above and variations thereof). The general equation is indeed quite general, used throughout quantum mechanics, for everything from the Dirac equation to quantum field theory, by plugging in various complicated expressions for the Hamiltonian. The specific nonrelativistic version is a simplified approximation to reality, which is quite accurate in many situations, but very inaccurate in others (see relativistic quantum mechanics and relativistic quantum field theory). To apply the Schrödinger equation, the Hamiltonian operator is set up for the system, accounting for the kinetic and potential energy of the particles constituting the system, then inserted into the Schrödinger equation. The resulting partial differential equation is solved for the wave function, which contains information about the system. Time-independent equation[edit] The time-dependent Schrödinger equation described above predicts that wave functions can form standing waves, called stationary states (also called "orbitals", as in atomic orbitals or molecular orbitals). These states are important in their own right, and if the stationary states are classified and understood, then it becomes easier to solve the time-dependent Schrödinger equation for any state. Stationary states can also be described by a simpler form of the Schrödinger equation, the time-independent Schrödinger equation. (This is only used when the Hamiltonian itself is not dependent on time explicitly. However, even in this case the total wave function still has a time dependency.) Time-independent Schrödinger equation (general): ĤΨ = EΨ. In words, the equation states: When the Hamiltonian operator acts on a certain wave function Ψ, and the result is proportional to the same wave function Ψ, then Ψ is a stationary state, and the proportionality constant, E, is the energy of the state Ψ. The time-independent Schrödinger equation is discussed further below. In linear algebra terminology, this equation is an eigenvalue equation. As before, the most famous manifestation is the non-relativistic Schrödinger equation for a single particle moving in an electric field (but not a magnetic field): Time-independent Schrödinger equation (single non-relativistic particle): [−ħ²/(2μ) ∇² + V(r)] Ψ(r) = E Ψ(r), with definitions as above.
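Because the time-independent equation is an eigenvalue problem, it can be attacked numerically by representing the Hamiltonian as a matrix and diagonalizing it. The sketch below is not part of this article; the grid size, box length, and harmonic potential are arbitrary illustrative choices, in units where ħ = m = ω = 1. It uses a central-difference approximation of the kinetic term and recovers the familiar harmonic-oscillator energies E_n ≈ n + 1/2.

```python
# Sketch: solve H psi = E psi numerically for a 1D harmonic oscillator
# (units hbar = m = omega = 1) by diagonalizing a finite-difference Hamiltonian.
import numpy as np

N, L = 1000, 20.0                      # grid points and box size (illustrative)
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

V = 0.5 * x**2                         # harmonic potential V(x) = x^2 / 2

# Kinetic term -1/2 d^2/dx^2 via central differences gives a tridiagonal matrix.
diag = 1.0 / dx**2 + V
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(H)             # eigenvalues and eigenvectors (columns)
print(E[:4])                           # approximately 0.5, 1.5, 2.5, 3.5
```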
In the modern understanding of quantum mechanics, Schrödinger's equation may be derived as follows.[6] If the wave-function at time t is given by ψ(t), then by the linearity of quantum mechanics the wave-function at time t′ must be given by ψ(t′) = U(t′, t) ψ(t), where U(t′, t) is a linear operator. Since time-evolution must preserve the norm of the wave-function, it follows that U(t′, t) must be a member of the unitary group of operators acting on wave-functions. We also know that when t′ = t, we must have U(t, t) = 1, the identity operator. Therefore, expanding the operator for t′ close to t, we can write U(t′, t) ≈ 1 − (i/ħ) H (t′ − t), where H is a Hermitian operator. This follows from the fact that the Lie algebra corresponding to the unitary group comprises Hermitian operators. Taking the limit as the time-difference becomes very small, we obtain Schrödinger's equation. So far, H is only an abstract Hermitian operator. However, using the correspondence principle it is possible to show that, in the classical limit, the expectation value of H is indeed the classical energy. The correspondence principle does not completely fix the form of the quantum Hamiltonian due to the uncertainty principle, and therefore the precise form of the quantum Hamiltonian must be fixed empirically. The Schrödinger equation and its solutions introduced a breakthrough in thinking about physics. Schrödinger's equation was the first of its type, and solutions led to consequences that were very unusual and unexpected for the time. Total, kinetic, and potential energy[edit] The overall form of the equation is not unusual or unexpected, as it uses the principle of the conservation of energy. The terms of the nonrelativistic Schrödinger equation can be interpreted as total energy of the system, equal to the system kinetic energy plus the system potential energy. In this respect, it is just the same as in classical physics. The Schrödinger equation predicts that if certain properties of a system are measured, the result may be quantized, meaning that only specific discrete values can occur. One example is energy quantization: the energy of an electron in an atom is always one of the quantized energy levels, a fact discovered via atomic spectroscopy. (Energy quantization is discussed below.) Another example is quantization of angular momentum. This was an assumption in the earlier Bohr model of the atom, but it is a prediction of the Schrödinger equation. Another result of the Schrödinger equation is that not every measurement gives a quantized result in quantum mechanics. For example, position, momentum, time, and (in some situations) energy can have any value across a continuous range.[7]:165–167 Measurement and uncertainty[edit] In classical mechanics, a particle has, at every moment, an exact position and an exact momentum. These values change deterministically as the particle moves according to Newton's laws. Under the Copenhagen interpretation of quantum mechanics, particles do not have exactly determined properties, and when they are measured, the result is randomly drawn from a probability distribution. The Schrödinger equation predicts what the probability distributions are, but fundamentally cannot predict the exact result of each measurement. The Heisenberg uncertainty principle is the statement of the inherent measurement uncertainty in quantum mechanics. It states that the more precisely a particle's position is known, the less precisely its momentum is known, and vice versa. The Schrödinger equation describes the (deterministic) evolution of the wave function of a particle.
However, even if the wave function is known exactly, the result of a specific measurement on the wave function is uncertain. Quantum tunneling[edit] Main article: Quantum tunneling Quantum tunneling through a barrier. A particle coming from the left does not have enough energy to climb the barrier. However, it can sometimes "tunnel" to the other side. In classical physics, when a ball is rolled slowly up a large hill, it will come to a stop and roll back, because it doesn't have enough energy to get over the top of the hill to the other side. However, the Schrödinger equation predicts that there is a small probability that the ball will get to the other side of the hill, even if it has too little energy to reach the top. This is called quantum tunneling. It is related to the distribution of energy: although the ball's assumed position seems to be on one side of the hill, there is a chance of finding it on the other side. Particles as waves[edit] A double slit experiment showing the accumulation of electrons on a screen as time passes. The nonrelativistic Schrödinger equation is a type of partial differential equation called a wave equation. Therefore, it is often said particles can exhibit behavior usually attributed to waves. In some modern interpretations this description is reversed – the quantum state, i.e. wave, is the only genuine physical reality, and under the appropriate conditions it can show features of particle-like behavior. However, Ballentine[8]:Chapter 4, p.99 shows that such an interpretation has problems. Ballentine points out that whilst it is arguable to associate a physical wave with a single particle, there is still only one Schrödinger wave equation for many particles. He points out: "If a physical wave field were associated with a particle, or if a particle were identified with a wave packet, then corresponding to N interacting particles there should be N interacting waves in ordinary three-dimensional space. But according to (4.6) that is not the case; instead there is one "wave" function in an abstract 3N-dimensional configuration space. The misinterpretation of psi as a physical wave in ordinary space is possible only because the most common applications of quantum mechanics are to one-particle states, for which configuration space and ordinary space are isomorphic." Two-slit diffraction is a famous example of the strange behaviors that waves regularly display, that are not intuitively associated with particles. The overlapping waves from the two slits cancel each other out in some locations, and reinforce each other in other locations, causing a complex pattern to emerge. Intuitively, one would not expect this pattern from firing a single particle at the slits, because the particle should pass through one slit or the other, not a complex overlap of both. However, since the Schrödinger equation is a wave equation, a single particle fired through a double-slit does show this same pattern (figure on right). Note: The experiment must be repeated many times for the complex pattern to emerge. Although this is counterintuitive, the prediction is correct; in particular, electron diffraction and neutron diffraction are well understood and widely used in science and engineering. Related to diffraction, particles also display superposition and interference. The superposition property allows the particle to be in a quantum superposition of two or more quantum states at the same time. 
However, it is noted that a "quantum state" in QM means the probability that a system will be, for example, at a position x, not that the system will actually be at position x. It does not imply that the particle itself may be in two classical states at once. Indeed, QM is generally unable to assign values for properties prior to measurement at all. Interpretation of the wave function[edit] The Schrödinger equation provides a way to calculate the wave function of a system and how it changes dynamically in time. However, the Schrödinger equation does not directly say what, exactly, the wave function is. Interpretations of quantum mechanics address questions such as what the relation is between the wave function, the underlying reality, and the results of experimental measurements. An important aspect is the relationship between the Schrödinger equation and wavefunction collapse. In the oldest Copenhagen interpretation, particles follow the Schrödinger equation except during wavefunction collapse, during which they behave entirely differently. The advent of quantum decoherence theory allowed alternative approaches (such as the Everett many-worlds interpretation and consistent histories), wherein the Schrödinger equation is always satisfied, and wavefunction collapse should be explained as a consequence of the Schrödinger equation. Historical background and development[edit] Erwin Schrödinger Following Max Planck's quantization of light (see black body radiation), Albert Einstein interpreted Planck's quanta to be photons, particles of light, and proposed that the energy of a photon is proportional to its frequency, one of the first signs of wave–particle duality. Since energy and momentum are related in the same way as frequency and wavenumber in special relativity, it followed that the momentum p of a photon is inversely proportional to its wavelength λ, or proportional to its wavenumber k: p = h/λ = ħk, where h is Planck's constant. Louis de Broglie hypothesized that this is true for all particles, even particles which have mass such as electrons. He showed that, assuming that the matter waves propagate along with their particle counterparts, electrons form standing waves, meaning that only certain discrete rotational frequencies about the nucleus of an atom are allowed.[10] These quantized orbits correspond to discrete energy levels, and de Broglie reproduced the Bohr model formula for the energy levels. The Bohr model was based on the assumed quantization of angular momentum L according to: L = n h/(2π) = nħ. According to de Broglie the electron is described by a wave, and a whole number of wavelengths must fit along the circumference of the electron's orbit: nλ = 2πr. This approach essentially confined the electron wave in one dimension, along a circular orbit of radius r. In 1921, prior to de Broglie, Arthur C. Lunn at the University of Chicago had used the same argument based on the completion of the relativistic energy–momentum 4-vector to derive what we now call the de Broglie relation.[11] Unlike de Broglie, Lunn went on to formulate the differential equation now known as the Schrödinger equation, and solve for its energy eigenvalues for the hydrogen atom. Unfortunately the paper was rejected by the Physical Review, as recounted by Kamen.[12] Following up on de Broglie's ideas, physicist Peter Debye made an offhand comment that if particles behaved as waves, they should satisfy some sort of wave equation. Inspired by Debye's remark, Schrödinger decided to find a proper 3-dimensional wave equation for the electron.
He was guided by William R. Hamilton's analogy between mechanics and optics, encoded in the observation that the zero-wavelength limit of optics resembles a mechanical system—the trajectories of light rays become sharp tracks that obey Fermat's principle, an analog of the principle of least action.[13] A modern version of his reasoning is reproduced below. The equation he found is:[14] However, by that time, Arnold Sommerfeld had refined the Bohr model with relativistic corrections.[15][16] Schrödinger used the relativistic energy momentum relation to find what is now known as the Klein–Gordon equation in a Coulomb potential (in natural units): He found the standing waves of this relativistic equation, but the relativistic corrections disagreed with Sommerfeld's formula. Discouraged, he put away his calculations and secluded himself in an isolated mountain cabin in December 1925.[17] While at the cabin, Schrödinger decided that his earlier non-relativistic calculations were novel enough to publish, and decided to leave off the problem of relativistic corrections for the future. Despite the difficulties in solving the differential equation for hydrogen (he had sought help from his friend the mathematician Hermann Weyl[18]:3) Schrödinger showed that his non-relativistic version of the wave equation produced the correct spectral energies of hydrogen in a paper published in 1926.[18]:1[19] In the equation, Schrödinger computed the hydrogen spectral series by treating a hydrogen atom's electron as a wave Ψ(x, t), moving in a potential well V, created by the proton. This computation accurately reproduced the energy levels of the Bohr model. In a paper, Schrödinger himself explained this equation as follows: This 1926 paper was enthusiastically endorsed by Einstein, who saw the matter-waves as an intuitive depiction of nature, as opposed to Heisenberg's matrix mechanics, which he considered overly formal.[21] The Schrödinger equation details the behavior of Ψ but says nothing of its nature. Schrödinger tried to interpret it as a charge density in his fourth paper, but he was unsuccessful.[22]:219 In 1926, just a few days after Schrödinger's fourth and final paper was published, Max Born successfully interpreted Ψ as the probability amplitude, whose absolute square is equal to probability density.[22]:220 Schrödinger, though, always opposed a statistical or probabilistic approach, with its associated discontinuities—much like Einstein, who believed that quantum mechanics was a statistical approximation to an underlying deterministic theory—and never reconciled with the Copenhagen interpretation.[23] Louis de Broglie in his later years proposed a real valued wave function connected to the complex wave function by a proportionality constant and developed the De Broglie–Bohm theory. The wave equation for particles[edit] The Schrödinger equation is a diffusion equation,[24] the solutions are functions which describe wave-like motions. Wave equations in physics can normally be derived from other physical laws – the wave equation for mechanical vibrations on strings and in matter can be derived from Newton's laws – where the wave function represents the displacement of matter, and electromagnetic waves from Maxwell's equations, where the wave functions are electric and magnetic fields. 
The basis for Schrödinger's equation, on the other hand, is the energy of the system and a separate postulate of quantum mechanics: the wave function is a description of the system.[25] The Schrödinger equation is therefore a new concept in itself; as Feynman put it: The foundation of the equation is structured to be a linear differential equation based on classical energy conservation, and consistent with the De Broglie relations. The solution is the wave function ψ, which contains all the information that can be known about the system. In the Copenhagen interpretation, the modulus of ψ is related to the probability the particles are in some spatial configuration at some instant of time. Solving the equation for ψ can be used to predict how the particles will behave under the influence of the specified potential and with each other. The Schrödinger equation was developed principally from the De Broglie hypothesis, a wave equation that would describe particles,[27] and can be constructed as shown informally in the following sections.[28] For a more rigorous description of Schrödinger's equation, see also Resnick et al.[29] Consistency with energy conservation[edit] The total energy E of a particle is the sum of kinetic energy T and potential energy V, this sum is also the frequent expression for the Hamiltonian H in classical mechanics: Explicitly, for a particle in one dimension with position x, mass m and momentum p, and potential energy V which generally varies with position and time t: For three dimensions, the position vector r and momentum vector p must be used: This formalism can be extended to any fixed number of particles: the total energy of the system is then the total kinetic energies of the particles, plus the total potential energy, again the Hamiltonian. However, there can be interactions between the particles (an N-body problem), so the potential energy V can change as the spatial configuration of particles changes, and possibly with time. The potential energy, in general, is not the sum of the separate potential energies for each particle, it is a function of all the spatial positions of the particles. Explicitly: The simplest wavefunction is a plane wave of the form: where the A is the amplitude, k the wavevector, and ω the angular frequency, of the plane wave. In general, physical situations are not purely described by plane waves, so for generality the superposition principle is required; any wave can be made by superposition of sinusoidal plane waves. So if the equation is linear, a linear combination of plane waves is also an allowed solution. Hence a necessary and separate requirement is that the Schrödinger equation is a linear differential equation. For discrete k the sum is a superposition of plane waves: for some real amplitude coefficients An, and for continuous k the sum becomes an integral, the Fourier transform of a momentum space wavefunction:[30] where d3k = dkxdkydkz is the differential volume element in k-space, and the integrals are taken over all k-space. The momentum wavefunction Φ(k) arises in the integrand since the position and momentum space wavefunctions are Fourier transforms of each other. 
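As a concrete illustration of the superposition just described, the short Python sketch below builds ψ(x) by summing plane waves weighted by a momentum-space amplitude Φ(k). The Gaussian choice of Φ(k), the grid sizes and the natural units (ħ = 1) are illustrative assumptions, not values taken from the text; the point is only that a spread of wavevectors produces a localized packet, with the widths of the position- and momentum-space densities multiplying to the familiar Gaussian minimum.

import numpy as np

# Minimal sketch: build psi(x) as a superposition of plane waves exp(i k x)
# weighted by a momentum-space amplitude Phi(k).  The Gaussian Phi(k) and the
# natural units (hbar = 1) are illustrative assumptions, not from the text.

x = np.linspace(-20.0, 20.0, 2001)
k = np.linspace(-8.0, 8.0, 401)
dx, dk = x[1] - x[0], k[1] - k[0]

k0, sigma = 2.0, 0.5                                  # centre and width of Phi(k)
phi = np.exp(-(k - k0)**2 / (2.0 * sigma**2))         # momentum-space amplitude

# Discretised Fourier integral: psi(x) = (2*pi)^(-1/2) * sum_k Phi(k) exp(i k x) dk
psi = (phi[:, None] * np.exp(1j * np.outer(k, x))).sum(axis=0) * dk / np.sqrt(2.0 * np.pi)

# Position- and momentum-space probability densities and their r.m.s. widths
px = np.abs(psi)**2;  px /= px.sum() * dx
pk = np.abs(phi)**2;  pk /= pk.sum() * dk
sx = np.sqrt((px * x**2).sum() * dx - ((px * x).sum() * dx)**2)
sk = np.sqrt((pk * k**2).sum() * dk - ((pk * k).sum() * dk)**2)
print(f"sigma_x = {sx:.3f},  sigma_k = {sk:.3f},  product = {sx*sk:.3f}  (Gaussian minimum 0.5)")

A fast Fourier transform would do the same job far more efficiently; the explicit sum is used here only because it mirrors the integral written above.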
Consistency with the De Broglie relations[edit] Diagrammatic summary of the quantities related to the wavefunction, as used in De broglie's hypothesis and development of the Schrödinger equation.[27] Einstein's light quanta hypothesis (1905) states that the energy E of a photon is proportional to the frequency ν (or angular frequency, ω = 2πν) of the corresponding quantum wavepacket of light: Likewise De Broglie's hypothesis (1924) states that any particle can be associated with a wave, and that the momentum p of the particle is inversely proportional to the wavelength λ of such a wave (or proportional to the wavenumber, k = /λ), in one dimension, by: while in three dimensions, wavelength λ is related to the magnitude of the wavevector k: The Planck–Einstein and de Broglie relations illuminate the deep connections between energy with time, and space with momentum, and express wave–particle duality. In practice, natural units comprising ħ = 1 are used, as the De Broglie equations reduce to identities: allowing momentum, wavenumber, energy and frequency to be used interchangeably, to prevent duplication of quantities, and reduce the number of dimensions of related quantities. For familiarity SI units are still used in this article. Schrödinger's insight,[citation needed] late in 1925, was to express the phase of a plane wave as a complex phase factor using these relations: and to realize that the first order partial derivatives were: with respect to space: with respect to time: Another postulate of quantum mechanics is that all observables are represented by linear Hermitian operators which act on the wavefunction, and the eigenvalues of the operator are the values the observable takes. The previous derivatives are consistent with the energy operator, corresponding to the time derivative, where E are the energy eigenvalues, and the momentum operator, corresponding to the spatial derivatives (the gradient ), where p is a vector of the momentum eigenvalues. In the above, the "hats" ( ˆ ) indicate these observables are operators, not simply ordinary numbers or vectors. The energy and momentum operators are differential operators, while the potential energy function V is just a multiplicative factor. Substituting the energy and momentum operators into the classical energy conservation equation obtains the operator: so in terms of derivatives with respect to time and space, acting this operator on the wavefunction Ψ immediately led Schrödinger to his equation:[citation needed] Wave–particle duality can be assessed from these equations as follows. The kinetic energy T is related to the square of momentum p. As the particle's momentum increases, the kinetic energy increases more rapidly, but since the wavenumber |k| increases the wavelength λ decreases. In terms of ordinary scalar and vector quantities (not operators): The kinetic energy is also proportional to the second spatial derivatives, so it is also proportional to the magnitude of the curvature of the wave, in terms of operators: As the curvature increases, the amplitude of the wave alternates between positive and negative more rapidly, and also shortens the wavelength. So the inverse relation between momentum and wavelength is consistent with the energy the particle has, and so the energy of the particle has a connection to a wave, all in the same mathematical formulation.[27] Wave and particle motion[edit] Increasing levels of wavepacket localization, meaning the particle has a more localized position. 
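The operator correspondences quoted in the previous section can be checked symbolically. The sketch below (using SymPy; the plane-wave form and symbol names follow the discussion above and are otherwise arbitrary) confirms that applying -iħ∂/∂x and iħ∂/∂t to A e^{i(kx - ωt)} returns ħk and ħω times the wave, i.e. the de Broglie and Planck-Einstein eigenvalues.

import sympy as sp

# Sketch: check that the momentum and energy operators quoted above reproduce
# the de Broglie / Planck-Einstein eigenvalues on a plane wave.  The symbols
# are illustrative; hbar, k and omega are treated as positive reals.

x, t, k, w, A, hbar = sp.symbols('x t k omega A hbar', positive=True)
Psi = A * sp.exp(sp.I * (k * x - w * t))

p_on_Psi = -sp.I * hbar * sp.diff(Psi, x)   # momentum operator acting on Psi
E_on_Psi =  sp.I * hbar * sp.diff(Psi, t)   # energy operator acting on Psi

print(sp.simplify(p_on_Psi / Psi))          # -> hbar*k      (p = hbar k)
print(sp.simplify(E_on_Psi / Psi))          # -> hbar*omega  (E = hbar omega)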
In the limit ħ → 0, the particle's position and momentum become known exactly. This is equivalent to the classical particle. Schrödinger required that a wave packet solution near position r with wavevector near k will move along the trajectory determined by classical mechanics for times short enough for the spread in k (and hence in velocity) not to substantially increase the spread in r. Since, for a given spread in k, the spread in velocity is proportional to Planck's constant ħ, it is sometimes said that in the limit as ħ approaches zero, the equations of classical mechanics are restored from quantum mechanics.[31] Great care is required in how that limit is taken, and in what cases. The limiting short-wavelength is equivalent to ħ tending to zero because this is limiting case of increasing the wave packet localization to the definite position of the particle (see images right). Using the Heisenberg uncertainty principle for position and momentum, the products of uncertainty in position and momentum become zero as ħ → 0: where σ denotes the (root mean square) measurement uncertainty in x and px (and similarly for the y and z directions) which implies the position and momentum can only be known to arbitrary precision in this limit. The Schrödinger equation in its general form is closely related to the Hamilton–Jacobi equation (HJE) where S is action and H is the Hamiltonian function (not operator). Here the generalized coordinates qi for i = 1, 2, 3 (used in the context of the HJE) can be set to the position in Cartesian coordinates as r = (q1, q2, q3) = (x, y, z).[31] where ρ is the probability density, into the Schrödinger equation and then taking the limit ħ → 0 in the resulting equation, yields the Hamilton–Jacobi equation. The implications are: • The motion of a particle, described by a (short-wavelength) wave packet solution to the Schrödinger equation, is also described by the Hamilton–Jacobi equation of motion. • The Schrödinger equation includes the wavefunction, so its wave packet solution implies the position of a (quantum) particle is fuzzily spread out in wave fronts. On the contrary, the Hamilton–Jacobi equation applies to a (classical) particle of definite position and momentum, instead the position and momentum at all times (the trajectory) are deterministic and can be simultaneously known. Non-relativistic quantum mechanics[edit] The quantum mechanics of particles without accounting for the effects of special relativity, for example particles propagating at speeds much less than light, is known as non-relativistic quantum mechanics. Following are several forms of Schrödinger's equation in this context for different situations: time independence and dependence, one and three spatial dimensions, and one and N particles. In actuality, the particles constituting the system do not have the numerical labels used in theory. The language of mathematics forces us to label the positions of particles one way or another, otherwise there would be confusion between symbols representing which variables are for which particle.[29] Time independent[edit] If the Hamiltonian is not an explicit function of time, the equation is separable into a product of spatial and temporal parts. In general, the wavefunction takes the form: where ψ(space coords) is a function of all the spatial coordinate(s) of the particle(s) constituting the system only, and τ(t) is a function of time only. 
Substituting for ψ into the Schrödinger equation for the relevant number of particles in the relevant number of dimensions, solving by separation of variables implies the general solution of the time-dependent equation has the form:[14] Since the time dependent phase factor is always the same, only the spatial part needs to be solved for in time independent problems. Additionally, the energy operator Ê = /t can always be replaced by the energy eigenvalue E, thus the time independent Schrödinger equation is an eigenvalue equation for the Hamiltonian operator:[4]:143ff This is true for any number of particles in any number of dimensions (in a time independent potential). This case describes the standing wave solutions of the time-dependent equation, which are the states with definite energy (instead of a probability distribution of different energies). In physics, these standing waves are called "stationary states" or "energy eigenstates"; in chemistry they are called "atomic orbitals" or "molecular orbitals". Superpositions of energy eigenstates change their properties according to the relative phases between the energy levels. The energy eigenvalues from this equation form a discrete spectrum of values, so mathematically energy must be quantized. More specifically, the energy eigenstates form a basis – any wavefunction may be written as a sum over the discrete energy states or an integral over continuous energy states, or more generally as an integral over a measure. This is the spectral theorem in mathematics, and in a finite state space it is just a statement of the completeness of the eigenvectors of a Hermitian matrix. One-dimensional examples[edit] For a particle in one dimension, the Hamiltonian is: and substituting this into the general Schrödinger equation gives: This is the only case the Schrödinger equation is an ordinary differential equation, rather than a partial differential equation. The general solutions are always of the form: For N particles in one dimension, the Hamiltonian is: where the position of particle n is xn. The corresponding Schrödinger equation is: so the general solutions have the form: For non-interacting distinguishable particles,[32] the potential of the system only influences each particle separately, so the total potential energy is the sum of potential energies for each particle: and the wavefunction can be written as a product of the wavefunctions for each particle: For non-interacting identical particles, the potential is still a sum, but wavefunction is a bit more complicated – it is a sum over the permutations of products of the separate wavefunctions to account for particle exchange. In general for interacting particles, the above decompositions are not possible. Free particle[edit] For no potential, V = 0, so the particle is free and the equation reads:[4]:151ff which has oscillatory solutions for E > 0 (the Cn are arbitrary constants): and exponential solutions for E < 0 The exponentially growing solutions have an infinite norm, and are not physical. They are not allowed in a finite volume with periodic or fixed boundary conditions. See also free particle and wavepacket for more discussion on the free particle. Constant potential[edit] Animation of a de Broglie wave incident on a barrier. For a constant potential, V = V0, the solution is oscillatory for E > V0 and exponential for E < V0, corresponding to energies that are allowed or disallowed in classical mechanics. 
Oscillatory solutions have a classically allowed energy and correspond to actual classical motions, while the exponential solutions have a disallowed energy and describe a small amount of quantum bleeding into the classically disallowed region, due to quantum tunneling. If the potential V0 grows to infinity, the motion is classically confined to a finite region. Viewed far enough away, every solution is reduced to an exponential; the condition that the exponential is decreasing restricts the energy levels to a discrete set, called the allowed energies.[30] Harmonic oscillator[edit] A harmonic oscillator in classical mechanics (A–B) and quantum mechanics (C–H). In (A–B), a ball, attached to a spring, oscillates back and forth. (C–H) are six solutions to the Schrödinger Equation for this situation. The horizontal axis is position, the vertical axis is the real part (blue) or imaginary part (red) of the wavefunction. Stationary states, or energy eigenstates, which are solutions to the time-independent Schrödinger equation, are shown in C, D, E, F, but not G or H. The Schrödinger equation for this situation is It is a notable quantum system to solve for; since the solutions are exact (but complicated – in terms of Hermite polynomials), and it can describe or at least approximate a wide variety of other systems, including vibrating atoms, molecules,[33] and atoms or ions in lattices,[34] and approximating other potentials near equilibrium points. It is also the basis of perturbation methods in quantum mechanics. There is a family of solutions – in the position basis they are where n = 0,1,2,..., and the functions Hn are the Hermite polynomials. Three-dimensional examples[edit] The extension from one dimension to three dimensions is straightforward, all position and momentum operators are replaced by their three-dimensional expressions and the partial derivative with respect to space is replaced by the gradient operator. The Hamiltonian for one particle in three dimensions is: generating the equation: with stationary state solutions of the form: where the position of the particle is r. Two useful coordinate systems for solving the Schrödinger equation are Cartesian coordinates so that r = (x, y, z) and spherical polar coordinates so that r = (r, θ, φ), although other orthogonal coordinates are useful for solving the equation for systems with certain geometric symmetries. For N particles in three dimensions, the Hamiltonian is: where the position of particle n is rn and the gradient operators are partial derivatives with respect to the particle's position coordinates. In Cartesian coordinates, for particle n, the position vector is rn = (xn, yn, zn) while the gradient and Laplacian operator are respectively: The Schrödinger equation is: with stationary state solutions: Again, for non-interacting distinguishable particles the potential is the sum of particle potentials and the wavefunction is a product of the particle wavefuntions For non-interacting identical particles, the potential is a sum but the wavefunction is a sum over permutations of products. The previous two equations do not apply to interacting particles. Following are examples where exact solutions are known. See the main articles for further details. 
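The oscillator spectrum quoted above can also be recovered without the Hermite-polynomial machinery by treating the time-independent equation as a matrix eigenvalue problem, which is how arbitrary potentials are usually handled numerically. The sketch below is a minimal finite-difference version in natural units (ħ = m = ω = 1); the grid extent and spacing are arbitrary numerical choices, and the lowest eigenvalues should come out close to E_n = n + 1/2.

import numpy as np

# Sketch: the time-independent Schrodinger equation as a matrix eigenvalue
# problem.  Discretise  -1/2 d^2/dx^2 + x^2/2  on a grid (natural units
# hbar = m = omega = 1) and diagonalise; the grid extent and spacing below
# are arbitrary numerical choices.

N, x_max = 1200, 12.0
x = np.linspace(-x_max, x_max, N)
dx = x[1] - x[0]
V = 0.5 * x**2                                   # harmonic oscillator potential

# Tridiagonal Hamiltonian from the three-point second derivative
H = (np.diag(1.0 / dx**2 + V)
     + np.diag(-0.5 / dx**2 * np.ones(N - 1), 1)
     + np.diag(-0.5 / dx**2 * np.ones(N - 1), -1))

E = np.linalg.eigvalsh(H)[:5]
print("numerical:", np.round(E, 4))
print("exact    :", [n + 0.5 for n in range(5)])   # E_n = n + 1/2 in these units

Swapping in a different V(x) array turns the same few lines into a solver for the box, step and barrier potentials discussed earlier.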
Hydrogen atom[edit] This form of the Schrödinger equation can be applied to the hydrogen atom:[25][27] where e is the electron charge, r is the position of the electron (r = | r | is the magnitude of the position), the potential term is due to the Coulomb interaction, wherein ε0 is the electric constant (permittivity of free space) and is the 2-body reduced mass of the hydrogen nucleus (just a proton) of mass mp and the electron of mass me. The negative sign arises in the potential term since the proton and electron are oppositely charged. The reduced mass in place of the electron mass is used since the electron and proton together orbit each other about a common centre of mass, and constitute a two-body problem to solve. The motion of the electron is of principle interest here, so the equivalent one-body problem is the motion of the electron using the reduced mass. The wavefunction for hydrogen is a function of the electron's coordinates, and in fact can be separated into functions of each coordinate.[35] Usually this is done in spherical polar coordinates: where R are radial functions and Ym (θ, φ) are spherical harmonics of degree and order m. This is the only atom for which the Schrödinger equation has been solved for exactly. Multi-electron atoms require approximative methods. The family of solutions are:[36] NB: generalized Laguerre polynomials are defined differently by different authors—see main article on them and the hydrogen atom. Two-electron atoms or ions[edit] The equation for any two-electron system, such as the neutral helium atom (He, Z = 2), the negative hydrogen ion (H, Z = 1), or the positive lithium ion (Li+, Z = 3) is:[28] where r1 is the position of one electron (r1 = | r1 | is its magnitude), r2 is the position of the other electron (r2 = |r2| is the magnitude), r12 = |r12| is the magnitude of the separation between them given by μ is again the two-body reduced mass of an electron with respect to the nucleus of mass M, so this time and Z is the atomic number for the element (not a quantum number). The cross-term of two laplacians is known as the mass polarization term, which arises due to the motion of atomic nuclei. The wavefunction is a function of the two electron's positions: There is no closed form solution for this equation. Time dependent[edit] This is the equation of motion for the quantum state. In the most general form, it is written:[4]:143ff and the solution, the wavefunction, is a function of all the particle coordinates of the system and time. Following are specific cases. For one particle in one dimension, the Hamiltonian generates the equation: For N particles in one dimension, the Hamiltonian is: where the position of particle n is xn, generating the equation: For one particle in three dimensions, the Hamiltonian is: generating the equation: For N particles in three dimensions, the Hamiltonian is: where the position of particle n is rn, generating the equation:[4]:141 This last equation is in a very high dimension, so the solutions are not easy to visualize. Solution methods[edit] General techniques: The Schrödinger equation has the following properties: some are useful, but there are shortcomings. Ultimately, these properties arise from the Hamiltonian used, and solutions to the equation. In the development above, the Schrödinger equation was made to be linear for generality, though this has other implications. 
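For the hydrogen problem above, the energy eigenvalues have the closed form E_n = -mu*e^4 / (2*(4*pi*eps0)^2 * hbar^2 * n^2). The sketch below simply evaluates that expression with the two-body reduced mass; the constants are standard SI values quoted to limited precision, and nothing here is taken from the cited references.

import numpy as np

# Sketch: hydrogen energy levels from the closed-form Bohr/Schrodinger result,
# evaluated with the electron-proton reduced mass.  Constants are standard SI
# values quoted to limited precision.

hbar = 1.054571817e-34      # J s
e    = 1.602176634e-19      # C
eps0 = 8.8541878128e-12     # F/m
m_e  = 9.1093837015e-31     # kg
m_p  = 1.67262192369e-27    # kg

mu = m_e * m_p / (m_e + m_p)                                   # reduced mass
E1 = -mu * e**4 / (2.0 * (4.0 * np.pi * eps0)**2 * hbar**2)    # ground state, joules

for n in range(1, 5):
    print(f"n = {n}:  E = {E1 / n**2 / e: .4f} eV")   # close to -13.6 / n^2 eV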
If two wave functions ψ1 and ψ2 are solutions, then so is any linear combination of the two: where a and b are any complex numbers (the sum can be extended for any number of wavefunctions). This property allows superpositions of quantum states to be solutions of the Schrödinger equation. Even more generally, it holds that a general solution to the Schrödinger equation can be found by taking a weighted sum over all single state solutions achievable. For example, consider a wave function Ψ(x, t) such that the wave function is a product of two functions: one time independent, and one time dependent. If states of definite energy found using the time independent Schrödinger equation are given by ψE(x) with amplitude An and time dependent phase factor is given by then a valid general solution is Additionally, the ability to scale solutions allows one to solve for a wave function without normalizing it first. If one has a set of normalized solutions ψn, then can be normalized by ensuring that This is much more convenient than having to verify that Real energy eigenstates[edit] For the time-independent equation, an additional feature of linearity follows: if two wave functions ψ1 and ψ2 are solutions to the time-independent equation with the same energy E, then so is any linear combination: Two different solutions with the same energy are called degenerate.[30] In an arbitrary potential, if a wave function ψ solves the time-independent equation, so does its complex conjugate, denoted ψ*. By taking linear combinations, the real and imaginary parts of ψ are each solutions. If there is no degeneracy they can only differ by a factor. In the time-dependent equation, complex conjugate waves move in opposite directions. If Ψ(x, t) is one solution, then so is Ψ(x, –t). The symmetry of complex conjugation is called time-reversal symmetry. Space and time derivatives[edit] Continuity of the wavefunction and its first spatial derivative (in the x direction, y and z coordinates not shown), at some time t. The Schrödinger equation is first order in time and second in space, which describes the time evolution of a quantum state (meaning it determines the future amplitude from the present). Explicitly for one particle in 3-dimensional Cartesian coordinates – the equation is The first time partial derivative implies the initial value (at t = 0) of the wavefunction is an arbitrary constant. Likewise – the second order derivatives with respect to space implies the wavefunction and its first order spatial derivatives are all arbitrary constants at a given set of points, where xb, yb, zb are a set of points describing boundary b (derivatives are evaluated at the boundaries). Typically there are one or two boundaries, such as the step potential and particle in a box respectively. As the first order derivatives are arbitrary, the wavefunction can be a continuously differentiable function of space, since at any boundary the gradient of the wavefunction can be matched. On the contrary, wave equations in physics are usually second order in time, notable are the family of classical wave equations and the quantum Klein–Gordon equation. Local conservation of probability[edit] The Schrödinger equation is consistent with probability conservation. 
Multiplying the Schrödinger equation on the right by the complex conjugate wavefunction, and multiplying the wavefunction to the left of the complex conjugate of the Schrödinger equation, and subtracting, gives the continuity equation for probability:[37] is the probability density (probability per unit volume, * denotes complex conjugate), and is the probability current (flow per unit area). Hence predictions from the Schrödinger equation do not violate probability conservation. Positive energy[edit] If the potential is bounded from below, meaning there is a minimum value of potential energy, the eigenfunctions of the Schrödinger equation have energy which is also bounded from below. This can be seen most easily by using the variational principle, as follows. (See also below). For any linear operator  bounded from below, the eigenvector with the smallest eigenvalue is the vector ψ that minimizes the quantity over all ψ which are normalized.[37] In this way, the smallest eigenvalue is expressed through the variational principle. For the Schrödinger Hamiltonian Ĥ bounded from below, the smallest eigenvalue is called the ground state energy. That energy is the minimum value of (using integration by parts). Due to the complex modulus of ψ2 (which is positive definite), the right hand side always greater than the lowest value of V(x). In particular, the ground state energy is positive when V(x) is everywhere positive. For potentials which are bounded below and are not infinite over a region, there is a ground state which minimizes the integral above. This lowest energy wavefunction is real and positive definite – meaning the wavefunction can increase and decrease, but is positive for all positions. It physically cannot be negative: if it were, smoothing out the bends at the sign change (to minimize the wavefunction) rapidly reduces the gradient contribution to the integral and hence the kinetic energy, while the potential energy changes linearly and less quickly. The kinetic and potential energy are both changing at different rates, so the total energy is not constant, which can't happen (conservation). The solutions are consistent with Schrödinger equation if this wavefunction is positive definite. The lack of sign changes also shows that the ground state is nondegenerate, since if there were two ground states with common energy E, not proportional to each other, there would be a linear combination of the two that would also be a ground state resulting in a zero solution. Analytic continuation to diffusion[edit] The above properties (positive definiteness of energy) allow the analytic continuation of the Schrödinger equation to be identified as a stochastic process. This can be interpreted as the Huygens–Fresnel principle applied to De Broglie waves; the spreading wavefronts are diffusive probability amplitudes.[37] For a free particle (not subject to a potential) in a random walk, substituting τ = it into the time-dependent Schrödinger equation gives:[38] which has the same form as the diffusion equation, with diffusion coefficient ħ/2m. In that case, the diffusivity yields the De Broglie relation in accordance with the Markov process.[39] On the space of square-integrable densities, the Schrödinger semigroup is a unitary evolution, and therefore surjective. But the flows satisfy the Schrödinger equation . 
However, since for most physically reasonable Hamiltonians (e.g., the Laplace operator, possibly modified by a potential) is unbounded in , this shows that the semigroup flows lack Sobolev regularity in general. Instead, solutions of the Schrödinger equation satisfy a Strichartz estimate. Relativistic quantum mechanics[edit] Relativistic quantum mechanics is obtained where quantum mechanics and special relativity simultaneously apply. In general, one wishes to build relativistic wave equations from the relativistic energy–momentum relation instead of classical energy equations. The Klein–Gordon equation and the Dirac equation are two such equations. The Klein–Gordon equation, was the first such equation to be obtained, even before the non-relativistic one, and applies to massive spinless particles. The Dirac equation arose from taking the "square root" of the Klein–Gordon equation by factorizing the entire relativistic wave operator into a product of two operators – one of these is the operator for the entire Dirac equation. The general form of the Schrödinger equation remains true in relativity, but the Hamiltonian is less obvious. For example, the Dirac Hamiltonian for a particle of mass m and electric charge q in an electromagnetic field (described by the electromagnetic potentials φ and A) is: in which the γ = (γ1, γ2, γ3) and γ0 are the Dirac gamma matrices related to the spin of the particle. The Dirac equation is true for all spin-12 particles, and the solutions to the equation are 4-component spinor fields with two components corresponding to the particle and the other two for the antiparticle. For the Klein–Gordon equation, the general form of the Schrödinger equation is inconvenient to use, and in practice the Hamiltonian is not expressed in an analogous way to the Dirac Hamiltonian. The equations for relativistic quantum fields can be obtained in other ways, such as starting from a Lagrangian density and using the Euler–Lagrange equations for fields, or use the representation theory of the Lorentz group in which certain representations can be used to fix the equation for a free particle of given spin (and mass). In general, the Hamiltonian to be substituted in the general Schrödinger equation is not just a function of the position and momentum operators (and possibly time), but also of spin matrices. Also, the solutions to a relativistic wave equation, for a massive particle of spin s, are complex-valued 2(2s + 1)-component spinor fields. Quantum field theory[edit] The general equation is also valid and used in quantum field theory, both in relativistic and non-relativistic situations. However, the solution ψ is no longer interpreted as a "wave", but should be interpreted as an operator acting on states existing in a Fock space.[citation needed] First Order Form[edit] The Schrödinger equation can also be derived from a first order form[40][41][42] similar to the manner in which the Klein-Gordon equation can be derived from the Dirac equation. In 1D the first order equation is given by This equation allows for the inclusion of spin in non-relativistic quantum mechanics. Squaring the above equation yields the Schrödinger equation in 1D. The matrices obey the following properties The 3 dimensional version of the equation is given by Here is a nilpotent matrix and are the Dirac gamma matrices (). The Schrödinger equation in 3D can be obtained by squaring the above equation. 
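The "square root" construction mentioned above works because the gamma matrices anticommute to the metric, {γ^μ, γ^ν} = 2η^{μν}. The sketch below verifies that relation numerically in the Dirac representation; both the representation and the (+,-,-,-) signature are conventional choices assumed here rather than anything fixed by the text.

import numpy as np

# Sketch: verify the defining anticommutation relation of the Dirac gamma
# matrices, {gamma^mu, gamma^nu} = 2 eta^{mu nu} I, in the Dirac
# representation with metric signature (+,-,-,-).  Both the representation
# and the signature are conventional, assumed choices.

I2 = np.eye(2)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

gamma0 = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), -I2]]).astype(complex)
gammas = [gamma0] + [np.block([[np.zeros((2, 2)), s], [-s, np.zeros((2, 2))]]) for s in sigma]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

ok = all(np.allclose(gammas[m] @ gammas[n] + gammas[n] @ gammas[m],
                     2.0 * eta[m, n] * np.eye(4))
         for m in range(4) for n in range(4))
print("anticommutation relations satisfied:", ok)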
In the non-relativistic limit and , the above equation can be derived from the Dirac equation.[41] See also[edit] 1. ^ Schrödinger, E. (1926). "An Undulatory Theory of the Mechanics of Atoms and Molecules" (PDF). Physical Review. 28 (6): 1049–1070. Bibcode:1926PhRv...28.1049S. doi:10.1103/PhysRev.28.1049. Archived from the original (PDF) on 17 December 2008.  3. ^ Laloe, Franck (2012), Do We Really Understand Quantum Mechanics, Cambridge University Press, ISBN 978-1-107-02501-1  4. ^ a b c d e Shankar, R. (1994). Principles of Quantum Mechanics (2nd ed.). Kluwer Academic/Plenum Publishers. ISBN 978-0-306-44790-7.  5. ^ 6. ^ Sakurai, J. J. (1995). Modern Quantum Mechanics. Reading, Massachusetts: Addison-Wesley. p. 68.  7. ^ Nouredine Zettili (17 February 2009). Quantum Mechanics: Concepts and Applications. John Wiley & Sons. ISBN 978-0-470-02678-6.  8. ^ Ballentine, Leslie (1998), Quantum Mechanics: A Modern Development, World Scientific Publishing Co., ISBN 9810241054  9. ^ David Deutsch, The Beginning of infinity, page 310 10. ^ de Broglie, L. (1925). "Recherches sur la théorie des quanta" [On the Theory of Quanta] (PDF). Annales de Physique. 10 (3): 22–128.  Translated version at the Wayback Machine (archived 9 May 2009). 11. ^ Weissman, M.B.; V. V. Iliev; I. Gutman (2008). "A pioneer remembered: biographical notes about Arthur Constant Lunn". Communications in Mathematical and in Computer Chemistry. 59 (3): 687–708.  12. ^ Kamen, Martin D. (1985). Radiant Science, Dark Politics. Berkeley and Los Angeles, CA: University of California Press. pp. 29–32. ISBN 0-520-04929-2.  13. ^ Schrodinger, E. (1984). Collected papers. Friedrich Vieweg und Sohn. ISBN 3-7001-0573-8.  See introduction to first 1926 paper. 14. ^ a b Encyclopaedia of Physics (2nd Edition), R.G. Lerner, G.L. Trigg, VHC publishers, 1991, (Verlagsgesellschaft) 3-527-26954-1, (VHC Inc.) ISBN 0-89573-752-3 15. ^ Sommerfeld, A. (1919). Atombau und Spektrallinien. Braunschweig: Friedrich Vieweg und Sohn. ISBN 3-87144-484-7.  16. ^ For an English source, see Haar, T. "The Old Quantum Theory".  17. ^ Rhodes, R. (1986). Making of the Atomic Bomb. Touchstone. ISBN 0-671-44133-7.  18. ^ a b Erwin Schrödinger (1982). Collected Papers on Wave Mechanics: Third Edition. American Mathematical Soc. ISBN 978-0-8218-3524-1.  19. ^ Schrödinger, E. (1926). "Quantisierung als Eigenwertproblem; von Erwin Schrödinger". Annalen der Physik. 384: 361–377. Bibcode:1926AnP...384..361S. doi:10.1002/andp.19263840404.  20. ^ Erwin Schrödinger, "The Present situation in Quantum Mechanics," p. 9 of 22. The English version was translated by John D. Trimmer. The translation first appeared first in Proceedings of the American Philosophical Society, 124, 323–38. It later appeared as Section I.11 of Part I of Quantum Theory and Measurement by J.A. Wheeler and W.H. Zurek, eds., Princeton University Press, New Jersey 1983. 21. ^ Einstein, A.; et. al. "Letters on Wave Mechanics: Schrodinger–Planck–Einstein–Lorentz".  22. ^ a b c Moore, W.J. (1992). Schrödinger: Life and Thought. Cambridge University Press. ISBN 0-521-43767-9.  23. ^ It is clear that even in his last year of life, as shown in a letter to Max Born, that Schrödinger never accepted the Copenhagen interpretation.[22]:220 24. ^ Takahisa Okino (2013). "Correlation between Diffusion Equation and Schrödinger Equation". Journal of Modern Physics (4): 612–615.  26. ^ The New Quantum Universe, T.Hey, P.Walters, Cambridge University Press, 2009, ISBN 978-0-521-56457-1 27. 
^ a b c d Quanta: A handbook of concepts, P.W. Atkins, Oxford University Press, 1974, ISBN 0-19-855493-1 28. ^ a b Physics of Atoms and Molecules, B.H. Bransden, C.J. Joachain, Longman, 1983, ISBN 0-582-44401-2 29. ^ a b Quantum Physics of Atoms, Molecules, Solids, Nuclei and Particles (2nd Edition), R. Resnick, R. Eisberg, John Wiley & Sons, 1985, ISBN 978-0-471-87373-0 30. ^ a b c Quantum Mechanics Demystified, D. McMahon, McGraw-Hill (USA), 2006, ISBN 0-07-145546-9 31. ^ a b Analytical Mechanics, L.N. Hand, J.D. Finch, Cambridge University Press, 2008, ISBN 978-0-521-57572-0 32. ^ N. Zettili. Quantum Mechanics: Concepts and Applications (2nd ed.). p. 458. ISBN 978-0-470-02679-3. 34. ^ Solid State Physics (2nd Edition), J.R. Hook, H.E. Hall, Manchester Physics Series, John Wiley & Sons, 2010, ISBN 978-0-471-92804-1 35. ^ Physics for Scientists and Engineers – with Modern Physics (6th Edition), P. A. Tipler, G. Mosca, Freeman, 2008, ISBN 0-7167-8964-7 36. ^ David Griffiths (2008). Introduction to elementary particles. Wiley-VCH. pp. 162–. ISBN 978-3-527-40601-2. Retrieved 27 June 2011. 37. ^ a b c Quantum Mechanics, E. Abers, Pearson Ed., Addison Wesley, Prentice Hall Inc, 2004, ISBN 978-0-13-146100-0 38. ^ 39. ^ Takahisa Okino (2015). "Mathematical Physics in Diffusion Problems". Journal of Modern Physics (6): 2109–2144. 40. ^ Ajaib, Muhammad Adeel (2015). "A Fundamental Form of the Schrödinger Equation". Foundations of Physics. 45 (12): 1586–1598. doi:10.1007/s10701-015-9944-z. 41. ^ a b Ajaib, Muhammad Adeel (2016). "Non-Relativistic Limit of the Dirac Equation". International Journal of Quantum Foundations. 42. ^ Lévy-Leblond, J.-M. (1967). "Nonrelativistic particles and wave equations". Comm. Math. Phys. 6 (4): 286–311.
I am an eighth grader (please remember this!!!) in need of some guidance in my school project on Quantum Mechanics, Theory, and Logic. I am attempting to create a graph of the Schrödinger Equation given the needed variables. To do this, I need to know what all of the variables mean and stand for. For starters, I get to the point of: $$\Psi \left( x,t \right)=\frac{-\hbar}{2m}\left( i\frac{p}{\hbar} \right)\left( Ae^{ikx-i\omega t} \right)$$ where $\hbar$ is the reduced Planck constant. My guess is that k is the kinetic energy of the particle, m is the mass, p is the potential energy, and the Greek w-like variable is the frequency. What are the other variables? Also, am I right so far?

Comments:
$k$ is the wavenumber: $2\pi/\lambda$. By 'lesser planck constant', do you mean 'reduced planck constant'? In that case the symbol is $\hbar$ (typed as \hbar). Also, that's not the Schrödinger equation, just a particular solution given some function $u(x)$ for potential, which seems to be constant here. – Manishearth Mar 13 '12 at 14:40
Yes, I meant reduced instead of lesser. And I have no experience in LaTeX; I just created this equation in the Grapher application that came with my Mac. I am sort of confused with the $u(x)$... – fr00ty_l00ps Mar 13 '12 at 14:44
I fixed it for you. Anyway, LaTeX (rather MathJax) is down at the moment. $U(x)$ is the potential energy function, also written as $V(x)$. Could you provide a link to where you got that equation from? It's not the Schrödinger equation, rather a specific solution of it. Kind of like how you get a specific solution for $y$ in $x+y=11$ when you substitute a value for $x$. The specific solution is not the whole equation... – Manishearth Mar 13 '12 at 15:22
Just out of interest, how much quantum mechanics do you know? It's better to stay away from the Schrödinger equation till you know enough calculus as well as general physics. If you want to graph some solutions of it, I would suggest showing electron orbital graphs or something. Also, how are you connecting QM to Theory and Logic? – Manishearth Mar 13 '12 at 15:25
CodeAdmiral: That is a real challenge, to have a school project on QM, Theory and Logic. Maybe you can explain a bit what you want to achieve; simply plotting the given equation will look basically like a wave: $f(x)=a\sin(x)$. As Manishearth already pointed out, that is not the Schrödinger Equation. – Alexander Mar 13 '12 at 20:17

1 Answer

This is just a placeholder answer so that this (answered) question does not go into our unanswered backlog. Please accept this answer.
The equation you've given is not the Schrödinger equation; rather, it is most probably a specific solution of it.
• $k=2\pi/\lambda$ is the (angular) wavenumber, where $\lambda$ is the wavelength
• $\omega$ is the (angular) frequency
• $p$ is probably momentum. In the Schrödinger equation, potential energy is usually represented with $U(x)$ or $V(x)$
• $m$ is the mass of the particle
• $A$ is the amplitude of the wave. This itself may be a function of $x$
• $i=\sqrt{-1}$
• $t$ is time
• $\Psi$ is the wavefunction
http://chat.stackexchange.com/transcript/2778 has a full transcript of the discussion which led to the resolution of the dilemma.
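Since the question was ultimately about graphing the expression, a minimal matplotlib sketch along the lines the commenters suggest is given below. The values of A, k, ω and t are made-up illustrative numbers; the real part is the cosine-like curve you would actually draw on paper.

import numpy as np
import matplotlib.pyplot as plt

# Sketch: one way to "graph" the plane-wave solution discussed above,
# Psi(x, t) = A * exp(i*(k*x - omega*t)), by plotting its real and imaginary
# parts at a fixed time.  A, k, omega and t are arbitrary illustrative numbers.

A, k, omega, t = 1.0, 2.0 * np.pi / 5.0, 1.0, 0.0   # wavelength lambda = 5
x = np.linspace(0, 20, 500)
Psi = A * np.exp(1j * (k * x - omega * t))

plt.plot(x, Psi.real, label="Re Psi")
plt.plot(x, Psi.imag, label="Im Psi", linestyle="--")
plt.xlabel("x")
plt.legend()
plt.show()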
Open Access Nano Express A study of photomodulated reflectance on staircase-like, n-doped GaAs/Al x Ga1−x As quantum well structures Omer Donmez1, Ferhat Nutku1*, Ayse Erol1, Cetin M Arikan1 and Yuksel Ergun2 Author Affiliations 1 Department of Physics, Faculty of Science, Istanbul University, Vezneciler, Istanbul, 34134, Turkey 2 Department of Physics, Faculty of Science, Anadolu University, Eskisehir, 26470, Turkey For all author emails, please log on. Nanoscale Research Letters 2012, 7:622  doi:10.1186/1556-276X-7-622 Received:18 July 2012 Accepted:23 August 2012 Published:12 November 2012 © 2012 Donmez et al; licensee Springer. In this study, photomodulated reflectance (PR) technique was employed on two different quantum well infrared photodetector (QWIP) structures, which consist of n-doped GaAs quantum wells (QWs) between undoped AlxGa1−xAs barriers with three different x compositions. Therefore, the barrier profile is in the form of a staircase-like barrier. The main difference between the two structures is the doping profile and the doping concentration of the QWs. PR spectra were taken at room temperature using a He-Ne laser as a modulation source and a broadband tungsten halogen lamp as a probe light. The PR spectra were analyzed using Aspnes’ third derivative functional form. Since the barriers are staircase-like, the structure has different ground state energies; therefore, several optical transitions take place in the spectrum which cannot be resolved in a conventional photoluminescence technique at room temperature. To analyze the experimental results, all energy levels in the conduction and in the valance band were calculated using transfer matrix technique, taking into account the effective mass and the parabolic band approximations. A comparison of the PR results with the calculated optical transition energies showed an excellent agreement. Several optical transition energies of the QWIP structures were resolved from PR measurements. It is concluded that PR spectroscopy is a very useful experimental tool to characterize complicated structures with a high accuracy at room temperature. Photomodulated reflectance; Quantum well infrared photodetectors (QWIP); Aspnes’ third derivative form; Excitonic levels.; 85.30.De; 85.60.-q; 71.55.Eq. Quantum well infrared photodetector (QWIP) structures have been developed since 1990s [1]. There are many different types of QWIP structures. QWIPs can be categorized by their electrical properties: photovoltaic or photoconductive, or by their layer thicknesses: multi-quantum wells (MQW) or superlattice structures. They can also be categorized by having optical responsivity at a single or multiple wavelengths. Multi-color QWIPs can be composed of double barriers [2], stepped quantum wells [3], and stepped barriers. The structures with stepped barriers are also called as staircase-like QWIPs in the literature [4]. In this work, photomodulated reflectance (PR) and photoluminescence (PL) experiments were carried out on two different staircase-like QWIP structures at room temperature. PR is a powerful characterization method to determine optical transitions in both bulk and low-dimensional multilayer semiconductor structures. Its absorption-like character and high sensitivity makes it possible to observe optical transitions between ground and excited states, even at room temperature. 
PR spectroscopy utilizes the modulation of the built-in electric field at the semiconductor surface or at the interfaces through photo-injection of electron–hole pairs generated by a chopped incident laser beam. This technique produces sharp spectral features related to the critical points of the band structure. This provides a more explicit comparison of experimental results with theoretical models. However, PL only gives information about ground state transitions in QWs at room temperature. PR spectra were analyzed using the third derivative functional form (TDFF) in order to fit the optical transition energies, and the results were compared to the theoretical values calculated using transfer matrix method. Transfer matrix technique is a common method for solving Schrödinger equation for MQW structures which consist of layers having different band gaps and effective masses. By virtue of this technique, energy levels, wave functions under zero or constant electric field can be calculated in complex structures [5-7]. In this work, we had employed this technique to calculate the energy levels in each QW at 300 K. In order to determine the band gap of GaAs at room temperature, Varshni equation [8] was used: E g T = E g 0 α T 2 T + β , (1) where Eg(0) is the band gap of GaAs at T = 0 K; α = 5.405 × 10−4 eV/K and β = 204 K are Varshni parameters at the Г point. For AlxGa1−xAs ternary alloys. Temperature dependence of the band gap for x < 0.4 can be estimated by: E g x , T = 1.519 + 1.155 x + 0.37 x 2 α T 2 T + β , (2) where α and β are Varshni parameters of AlxGa1−xAs. Adachi showed that compositional dependence of Varshni parameters becomes significant in AlxGa1−xAs ternary alloys for x > 0.4 [9]. However, since x < 0.4 for AlxGa1−xAs in our structures, we used the same values as GaAs [9,10]. The conduction and the valance band offsets were chosen as 60% and 40%, respectively. In the calculations of energy levels, the effective mass for each layer was considered separately. The effective masses of electrons in AlAs and GaAs were taken as 0.15 and 0.067, respectively. Using these values, the effective mass of electrons in AlxGa1−xAs layers was calculated by applying Vegard’s law: m Al x Ga 1 x As = m AlAs m GaAs x m GaAs + 1 x m AlAs . (3) Similarly, the effective masses of holes in the AlxGa1−xAs layers were also calculated using Equation 3, taking the density of states heavy hole effective masses as 0.81 and 0.55, and the averaged light hole effective masses were taken as 0.16 and 0.083 in AlAs and GaAs, respectively [9]. PR spectra were fitted using the linear combination of several Aspnes’ TDFFs [11], expressed as: Δ R R = Re j = 1 n A j e i φ j E E gj + i Γ j m j + f j E , (4) where n is the number of spectral features to be fitted; E is the photon energy; Aj, φj, Egj, and Гj are the amplitude, phase, band gap energy, and line broadening of the jth feature, respectively. mj represents the type of critical point depending on the dimensionality of the structure, and its value is 2.5 or 3 for 3-D (bulk) or 2-D cases, respectively. The background signal in the measurements was simulated and suppressed from Equation 4 by a linear f(E) function. PR and PL measurements were carried out on two different MQW structures at room temperature. A tunable monochromatic probe light was provided by a 100-W tungsten lamp, dispersed by a single grating monochromator, and the sample was pumped with a modulated 10-mW He-Ne laser at 632.8 nm that was mechanically chopped at 280 Hz. 
The reflected probe beam was measured by a Si photodiode. The AC and DC components of reflectance (R) and differential changes in R (ΔR) were acquired by a computer, simultaneously. The structures used in this study consist of n-doped GaAs QWs sandwiched between undoped AlxGa1−xAs barriers with three different x compositions, producing staircase-like barriers. The structures were designed as QWIP devices. Details of the structures are given in Figure 1. The main differences between the two structures are the doping profile, the doping concentration, and the barrier composition of the triangular well. All QWs in ANA-coded samples have a doping concentration of 2 × 1018 cm−3. On the other hand, in IQE samples, QWs with 5.5- and 5-nm well width have doping concentrations of 3 × 1018 and 1 × 1018 cm−3, respectively. QWs with central doping width of 1.2 nm in the ANA structure were replaced by 2.5 nm in IQE structure. ANA samples have triangular wells in the active period with barriers containing 30% Al concentration. Besides, IQE-coded samples have 27% Al concentration. One edge of the triangular quantum well is formed by a graded barrier; the other edge is formed by a fixed barrier as seen from the potential profile of the conduction band given in Figure 2. We have introduced the graded barrier in the structure in order to provide a quasi-electric field which facilitates the drift current of the photo-excited carriers to adjacent layers. In Figure 2, quantum wells which have different barrier heights are labeled with numbers. thumbnailFigure 1. Schematic layer structure of (a) ANA14 and (b) IQE14 sample. thumbnailFigure 2. Conduction band profile. (a) ANA14 and (b) IQE14 structures. One period of the active region is shown in the diagram. Results and discussion Experimental results on reflectivity (R), PL, and PR spectra of ANA14 and IQE14 structures are given in Figure 3. Calculated PR spectra are also included in the figure. The R spectra are given just for information and not to be included in the discussion. PL signal begins to rise from the fundamental band edge of bulk GaAs and peaks at about the combined excitonic transition region. Details of the excitonic transitions are smeared out. However, in the PR spectra, the fundamental band gaps of bulk GaAs cap layer and AlxGa1−xAs barrier regions, and a series of excitonic transitions are clearly resolvable. In order to analyze the obtained PR spectra, we divided the spectrum into three regions. The first region between 1.4 to 1.51 eV includes signals from the bulk GaAs and effective band gap due to e1-hh1 excitonic transitions of doped GaAs QWs having AlxGa1−xAs barriers with x = 0.21. The second region ranging from 1.51 to 1.6 eV includes the other excitonic transitions such as e1-hh1, e1-hh2, and e1-lh1, coming mainly from the active period of the structures. Finally, 1.6 to 1.8 eV region includes PR signals of AlxGa1−xAs layers. The experimental results exhibit Lorentzian-like peaks which are obtained from Equation 4 as the modulus of PR resonances according to the equation below: Δ ρ = A E E 0 2 + Γ 2 m / 2 . (5) thumbnailFigure 3. R, PL, and PR spectra. (a) ANA14 and (b) IQE14 sample. Red line is the experimental, while open circles are the calculated PR spectra points. Using this equation, bulk and excitonic transition parameters A and Γ of each signal were determined. These parameters were placed into Equation 4, and then the optical transition energies were calculated. 
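The formulas used above (Equations 1 to 5) are simple enough to evaluate directly, and a small script of this kind is one way to cross-check fitted transition energies. The sketch below reuses the parameter values quoted in the text for the Varshni band gap and the effective-mass interpolation, while the amplitudes, phases and broadenings in the example PR line shape are invented illustrative numbers, not the fitted values behind Table 1.

import numpy as np

# Sketch of the formulas quoted above.  Varshni parameters, effective masses
# and Al fractions follow the text; the TDFF amplitudes, phases and
# broadenings below are invented illustrative values, not fit results.

alpha, beta = 5.405e-4, 204.0                    # Varshni parameters (eV/K, K)

def Eg_AlGaAs(x, T):
    """Eq. (2): band gap of AlxGa1-xAs for x < 0.4; x = 0 recovers GaAs (Eq. 1)."""
    return 1.519 + 1.155 * x + 0.37 * x**2 - alpha * T**2 / (T + beta)

def m_eff(x, m_AlAs=0.15, m_GaAs=0.067):
    """Eq. (3): electron effective mass of AlxGa1-xAs, in units of m0."""
    return m_AlAs * m_GaAs / (x * m_GaAs + (1.0 - x) * m_AlAs)

def tdff(E, A, phi, Eg, Gamma, m=3.0):
    """One Aspnes TDFF term of Eq. (4); m = 3 for the 2-D case."""
    return np.real(A * np.exp(1j * phi) * (E - Eg + 1j * Gamma) ** (-m))

def modulus(E, A, Eg, Gamma, m=3.0):
    """Eq. (5): Lorentzian-like modulus used to estimate A and Gamma."""
    return A / ((E - Eg) ** 2 + Gamma ** 2) ** (m / 2.0)

T = 300.0
for x in (0.21, 0.27, 0.30):
    print(f"x = {x:.2f}:  Eg = {Eg_AlGaAs(x, T):.3f} eV,  m*/m0 = {m_eff(x):.4f}")

E = np.linspace(1.40, 1.62, 2201)                # photon energy grid (eV)
dR_R = tdff(E, 1e-9, 0.3, 1.515, 0.008) + tdff(E, 5e-10, 1.1, 1.540, 0.010)
print("example two-resonance PR spectrum, peak-to-peak:", dR_R.max() - dR_R.min())
print("modulus at the first resonance:", modulus(1.515, 1e-9, 1.515, 0.008))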
The calculated energy levels and corresponding PR peaks in the spectra are summarized in Table 1. All possible excitonic transitions in the calculated PR spectra are clearly distinguishable. Experimental and calculated PR spectra are in excellent agreement. Although most of the excitonic transitions are identified, some of the calculated transitions were not observed in the PR spectra (Table 1). Table 1. Electron and hole energy levels and PR peaks of the QWIP structure obtained at T = 300 K PL studies showed just a single broad peak for ANA14 structure at 1.525 eV and for IQE14 structure at 1.539 eV at room temperature. As seen from the calculated values of the excitonic transitions in different quantum wells, the energy differences between them are quite small; therefore, the observed PL peak cannot be attributed to just one transition. It can be concluded that observed PL peak represents additive information about some of the optical transitions. However, PR provides detailed information, resolving closely separated energy levels, even at room temperature. The importance of the photomodulated reflectance spectroscopy in complicated semiconductor QW structures and hence in QWIPs has been verified by the experimental and the theoretical results obtained from this work. QWs with barriers having minor differences in the alloy composition can clearly be distinguished by PR measurements at room temperature. Indeed, e1-hh1, e1-hh2, and e1-lh1 transitions were clearly observed and resolved. On the other hand, in PL measurements, only one single photoluminescence peak was observed. MQW: multi-quantum well; PL: photoluminescence; PR: photomodulated reflectance; QW: quantum well; QWIP: quantum well infrared photodetector; R: reflectance; TDFF: third derivative functional form. Competing interests The authors declare that they have no competing interests. Authors’ contributions OD carried out the experiments and fitted the PR spectra in collaboration with AE, MCA, and FN. FN calculated the energy levels. OD, FN, MCA, and AE contributed to the manuscript preparation. YE is the designer of the QWIP structure. All authors read and approved the final manuscript. We are grateful to Dr. Bulent Aslan and Dr. Ugur Serincan from Anadolu University for growing the ANA samples with MBE. This work was partially supported by The Scientific and Technological Research Council of Turkey (TUBITAK; project number 108T721), COST Action MP0805, Scientific Research Projects Coordination Unit of Istanbul University (project numbers 3587 and UDP 16607), and the Ministry of Development of Turkey (project number 2010K121050). 1. Levine BF: Quantum-well infrared photodetectors. J Appl Phys 1993, 74:R1-R81. Publisher Full Text OpenURL 2. Luna E, Guzman A, Sdnchez-Rojas J, Sanchez J, Munoz E: GaAs-based modulation-doped quantum-well infrared photodetectors for single- and two-color detection in 3–5 um. IEE J Selected Topics in Quantum Electronics 2002, 8:992-997. Publisher Full Text OpenURL 3. Mii YJ, Wang KL, Karunasiri RPG, Yuh PF: Observation of large oscillator strengths for both 1→2 and 1→3 intersubband transitions of step quantum wells. Appl Phys Lett 1990, 56:1046. Publisher Full Text OpenURL 4. Eker S, Hostut M, Ergun Y, Sokmen I: A new approach to quantum well infrared photodetectors: staircase-like quantum well and barriers. Infrared Physics and Technology 2006, 48:101-108. Publisher Full Text OpenURL 5. 
Jonsson B, Eng S: Solving the Schrodinger equation in arbitrary quantum-well potential profiles using the transfer matrix method. IEEE J Quantum Electron 1990, 26:2025-2035. Publisher Full Text OpenURL 6. Li W: Generalized free wave transfer matrix method for solving the Schrodinger equation with an arbitrary potential profile. IEEE J Quantum Electron 2010, 46:970-975. OpenURL 7. Lantz KR: Two color photodetector using an asymmetric quantum well structure. California: Naval Postgraduate School, Monterey; 2002. [PhD thesis] OpenURL 8. Varshni YP: Temperature dependence of the energy gap in semiconductors. 9. Adachi S: Properties of Semiconductor Alloys: Group-IV, III-V and II-VI Semiconductors. Wiltshire: Wiley; 2009:159-160. 10. Aspnes DE: GaAs lower conduction-band minima: ordering and properties. Phys Rev B 1976, 14:5331-5343. Publisher Full Text OpenURL 11. Aspnes DE: Third-derivative modulation spectroscopy with low-field electroreflectance. Surf Science 1973, 37:418-442. OpenURL
Intro to Quantum Mechanics      This page is intended to give an ordinary person a brief overview of the importance and wonder of quantum mechanics. Unfortunately, most people believe you need the mind of Einstein in order to understand QM so they give up on it entirely. (Interesting side note: Einstein didn't believe QM was a correct theory!) Even some chemists fall into that category-- to represent physical chemistry our departmental T-shirts have a picture of the below atom, which is almost a century out of date. <Sigh>      So please read on, and take a dip in an ocean of information that I find completely invigorating! Old atom {1 kB} If the above picture is your idea of an atom, with electrons looping around the nucleus, you are about 70 years out of date. It's time to open your eyes to the modern world of quantum mechanics! The picture below shows some plots of where you would most likely find an electron in a hydrogen atom (the nucleus is at the center of each plot). Hydrogen electron orbitals {18 kB} What is quantum mechanics? Simply put, quantum mechanics is the study of matter and radiation at an atomic level. Why was quantum mechanics developed? In the early 20th century some experiments produced results which could not be explained by classical physics (the science developed by Galileo Galilei, Isaac Newton, etc.). For instance, it was well known that electrons orbited the nucleus of an atom. However, if they did so in a manner which resembled the planets orbiting the sun, classical physics predicted that the electrons would spiral in and crash into the nucleus within a fraction of a second. Obviously that doesn't happen, or life as we know it would not exist. (Chemistry depends upon the interaction of the electrons in atoms, and life depends upon chemistry). That incorrect prediction, along with some other experiments that classical physics could not explain, showed scientists that something new was needed to explain science at the atomic level. If classical physics is wrong, why do we still use it? Classical physics is a flawed theory, but it is only dramatically flawed when dealing with the very small (atomic size, where quantum mechanics is used) or the very fast (near the speed of light, where relativity takes over). For everyday things, which are much larger than atoms and much slower than the speed of light, classical physics does an excellent job. Plus, it is much easier to use than either quantum mechanics or relativity (each of which require an extensive amount of math). What is the importance of quantum mechanics? The following are among the most important things which quantum mechanics can describe while classical physics cannot: Discreteness of energy If you look at the spectrum of light emitted by energetic atoms (such as the orange-yellow light from sodium vapor street lights, or the blue-white light from mercury vapor lamps) you will notice that it is composed of individual lines of different colors. These lines represent the discrete energy levels of the electrons in those excited atoms. When an electron in a high energy state jumps down to a lower one, the atom emits a photon of light which corresponds to the exact energy difference of those two levels (conservation of energy). The bigger the energy difference, the more energetic the photon will be, and the closer its color will be to the violet end of the spectrum. 
If electrons were not restricted to discrete energy levels, the spectrum from an excited atom would be a continuous spread of colors from red to violet with no individual lines. [Figure: emission spectra] The concept of discrete energy levels can be demonstrated with a 3-way light bulb. A 40/75/115 watt bulb can only shine light at those three wattages, and when you switch from one setting to the next, the power immediately jumps to the new setting instead of just gradually increasing. It is the fact that electrons can only exist at discrete energy levels which prevents them from spiraling into the nucleus, as classical physics predicts. And it is this quantization of energy, along with some other atomic properties that are quantized, which gives quantum mechanics its name. The wave-particle duality of light and matter In 1690 Christiaan Huygens theorized that light was composed of waves, while in 1704 Isaac Newton explained that light was made of tiny particles. Experiments supported each of their theories. However, neither a completely-particle theory nor a completely-wave theory could explain all of the phenomena associated with light! So scientists began to think of light as both a particle and a wave. In 1923 Louis de Broglie hypothesized that a material particle could also exhibit wavelike properties, and in 1927 it was shown (by Davisson and Germer) that electrons can indeed behave like waves. How can something be both a particle and a wave at the same time? For one thing, it is incorrect to think of light as a stream of particles moving up and down in a wavelike manner. Actually, light and matter exist as particles; what behaves like a wave is the probability of where that particle will be. The reason light sometimes appears to act as a wave is because we are noticing the accumulation of many of the light particles distributed over the probabilities of where each particle could be. For instance, suppose we had a dart-throwing machine that had a 5% chance of hitting the bulls-eye and a 95% chance of hitting the outer ring and no chance of hitting any other place on the dart board. Now, suppose we let the machine throw 100 darts, keeping all of them stuck in the board. We can see each individual dart (so we know they behave like a particle) but we can also see a pattern on the board of a large ring of darts surrounding a small cluster in the middle. This pattern is the accumulation of the individual darts over the probabilities of where each dart could have landed, and represents the 'wavelike' behavior of the darts. Get it? Quantum tunneling This is one of the most interesting phenomena to arise from quantum mechanics; without it computer chips would not exist, and a 'personal' computer would probably take up an entire room. As stated above, a wave determines the probability of where a particle will be. When that probability wave encounters an energy barrier most of the wave will be reflected back, but a small portion of it will 'leak' into the barrier. If the barrier is small enough, the wave that leaked through will continue on the other side of it. Even though the particle doesn't have enough energy to get over the barrier, there is still a small probability that it can 'tunnel' through it! Let's say you are throwing a rubber ball against a wall. You know you don't have enough energy to throw it through the wall, so you always expect it to bounce back. 
Quantum mechanics, however, says that there is a small probability that the ball could go right through the wall (without damaging the wall) and continue its flight on the other side! With something as large as a rubber ball, though, that probability is so small that you could throw the ball for billions of years and never see it go through the wall. But with something as tiny as an electron, tunneling is an everyday occurrence. On the flip side of tunneling, when a particle encounters a drop in energy there is a small probability that it will be reflected. In other words, if you were rolling a marble off a flat level table, there is a small chance that when the marble reached the edge it would bounce back instead of dropping to the floor! Again, for something as large as a marble you'll probably never see something like that happen, but for photons (the massless particles of light) it is a very real occurrence. The Heisenberg uncertainty principle People are familiar with measuring things in the macroscopic world around them. Someone pulls out a tape measure and determines the length of a table. A state trooper aims his radar gun at a car and knows what direction the car is traveling, as well as how fast. They get the information they want and don't worry whether the measurement itself has changed what they were measuring. After all, what would be the sense in determining that a table is 80 cm long if the very act of measuring it changed its length! At the atomic scale of quantum mechanics, however, measurement becomes a very delicate process. Let's say you want to find out where an electron is and where it is going (that trooper has a feeling that any electron he catches will be going faster than the local speed limit). How would you do it? Get a super high powered magnifier and look for it? The very act of looking depends upon light, which is made of photons, and these photons could have enough momentum that once they hit the electron they would change its course! It's like rolling the cue ball across a billiard table and trying to discover where it is going by bouncing the 8-ball off of it; by making the measurement with the 8-ball you have certainly altered the course of the cue ball. You may have discovered where the cue ball was, but now have no idea of where it is going (because you were measuring with the 8-ball instead of actually looking at the table). Werner Heisenberg was the first to realize that certain pairs of measurements have an intrinsic uncertainty associated with them. For instance, if you have a very good idea of where something is located, then, to a certain degree, you must have a poor idea of how fast it is moving or in what direction. We don't notice this in everyday life because any inherent uncertainty from Heisenberg's principle is well within the acceptable accuracy we desire. For example, you may see a parked car and think you know exactly where it is and exactly how fast it is moving. But would you really know those things exactly? If you were to measure the position of the car to an accuracy of a billionth of a billionth of a centimeter, you would be trying to measure the positions of the individual atoms which make up the car, and those atoms would be jiggling around just because the temperature of the car was above absolute zero! Heisenberg's uncertainty principle completely flies in the face of classical physics. 
After all, the very foundation of science is the ability to measure things accurately, and now quantum mechanics is saying that it's impossible to get those measurements exact! But the Heisenberg uncertainty principle is a fact of nature, and it would be impossible to build a measuring device which could get around it. Spin of a particle In 1922 Otto Stern and Walther Gerlach performed an experiment whose results could not be explained by classical physics. Their experiment indicated that atomic particles possess an intrinsic angular momentum, or spin, and that this spin is quantized (that is, it can only have certain discrete values). Spin is a completely quantum mechanical property of a particle and cannot be explained in any way by classical physics. It is important to realize that the spin of an atomic particle is not a measure of how it is spinning! In fact, it is impossible to tell whether something as small as an electron is spinning at all! The word 'spin' is just a convenient way of talking about the intrinsic angular momentum of a particle. Magnetic resonance imaging (MRI) uses the fact that under certain conditions the spin of hydrogen nuclei can be 'flipped' from one state to another. By measuring the location of these flips, a picture can be formed of where the hydrogen atoms (mainly as a part of water) are in a body. Since tumors tend to have a different water concentration from the surrounding tissue, they would stand out in such a picture. What is the Schrödinger equation? Every quantum particle is characterized by a wave function. In 1925 Erwin Schrödinger developed the differential equation which describes the evolution of those wave functions. By using Schrödinger's equation scientists can find the wave function which solves a particular problem in quantum mechanics. Unfortunately, it is usually impossible to find an exact solution to the equation, so certain assumptions are used in order to obtain an approximate answer for the particular problem. [Figure: the Schrödinger equation] What is a wave packet? As mentioned earlier, the Schrödinger equation for a particular problem cannot always be solved exactly. However, when there is no force acting upon a particle its potential energy is zero and the Schrödinger equation for the particle can be exactly solved. The solution to this 'free' particle is something known as a wave packet (which initially looks just like a Gaussian bell curve). Wave packets, therefore, can provide a useful way to find approximate solutions to problems which otherwise could not be easily solved. First, a wave packet is assumed to initially describe the particle under study. Then, when the particle encounters a force (so its potential energy is no longer zero), that force modifies the wave packet. The trick, of course, is to find accurate (and quick!) ways to 'propagate' the wave packet so that it still represents the particle at a later point in time. Finding such propagation techniques, and applying them to useful problems, is the topic of my research.
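To make the wave-packet discussion above a little more concrete, here is a minimal numerical sketch (in Python; it is not from the original page) of a free Gaussian wave packet. It propagates the packet with the free-particle Schrödinger equation in momentum space and prints the position spread and the uncertainty product as time goes on. The grid, packet width, and mass are arbitrary illustrative choices, with hbar = m = 1.

import numpy as np

hbar = 1.0
m = 1.0

# Position grid and an initial Gaussian wave packet with mean momentum k0
x = np.linspace(-50.0, 50.0, 2048)
dx = x[1] - x[0]
sigma, k0 = 2.0, 1.5
psi0 = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2) + 1j * k0 * x)

def moments(psi):
    """Return <x>, Delta x, and Delta p for a wave function sampled on the grid."""
    prob = np.abs(psi) ** 2
    prob /= prob.sum() * dx
    x_mean = np.sum(x * prob) * dx
    dx_rms = np.sqrt(np.sum((x - x_mean) ** 2 * prob) * dx)
    k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
    pk = np.abs(np.fft.fft(psi)) ** 2
    pk /= pk.sum()
    p_mean = hbar * np.sum(k * pk)
    dp_rms = np.sqrt(np.sum((hbar * k - p_mean) ** 2 * pk))
    return x_mean, dx_rms, dp_rms

def propagate(psi, t):
    """Free-particle propagation: each momentum component picks up exp(-i E t / hbar)."""
    k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
    phi = np.fft.fft(psi)
    phi *= np.exp(-1j * hbar * k**2 / (2 * m) * t)
    return np.fft.ifft(phi)

for t in (0.0, 5.0, 10.0):
    xm, dxr, dpr = moments(propagate(psi0, t))
    print(f"t={t:4.1f}  <x>={xm:6.2f}  dx={dxr:5.2f}  dx*dp={dxr * dpr:5.2f}  (hbar/2={hbar / 2})")

At t = 0 the product of the spreads sits at the Heisenberg minimum of hbar/2; as the packet drifts and spreads, the position spread grows while the momentum spread stays fixed, so the product only increases, which is the behaviour described qualitatively in the uncertainty-principle and wave-packet sections above.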
Written by Todd Stedl. Last modified on 25 July 1996; minor revisions on 25 March 2000.
Quantum invisibility cloak could hide objects from reality Science has been chasing the mythical invisibility cloak for years, and recent experiments have even shown the concept to be valid. But what if you want to go beyond simply hiding something from the visible light spectrum? A new paper from the National Tsing-Hua University in Taiwan explores the idea of quantum cloaking. The researchers, Jeng Yi Lee and Ray-Kuang Lee, believe they have devised a method to take cloaks to the logical extreme with a quantum invisibility cloak. If it works, the quantum cloak could make it as if an object didn't even exist. The initial concept of a quantum invisibility cloak was born of the more traditional kind of invisibility cloak. Ordinary invisibility works on the basis of steering light waves around an object such that an observer cannot see it. This is called transformation optics, and the math backing quantum invisibility is startlingly similar. Transformation optics starts with Maxwell's equations, which describe the behavior of electromagnetic radiation (like light) as it passes through space. Flipping this concept to the quantum realm requires a new starting point — the Schrödinger equation. Devised by Austrian physicist Erwin Schrödinger (of Schrödinger's Cat fame), this is an equation that predicts the probability of an object being in a certain place at a certain time. With light invisibility, scientists engineer materials that can distort a light field, but the quantum variety has to distort probability. According to the math, it should be possible to build a system where the probability of existing inside a region of space falls to zero. Of course, you could put something in that space and it would be cloaked from reality. This isn't a one-stop shop for all your reality-extinguishing needs, though. The paper points out a few weaknesses in the current hypothesis. Perhaps most importantly, the cloak can only shield an object from one aspect of the Schrödinger equation at a time. So the cloak could be used to protect a region of space from the quantum effects of nearby electrons, but other effects would still bleed through. The cloak described in the new paper is basically an extremely complicated mathematical exercise. Jeng Yi and Ray-Kuang have shown that there is a theoretical basis for the quantum invisibility cloak, but it doesn't exist yet. They do posit one possible method for constructing it with a hollow silicon nanoparticle capable of shielding the interior from outside quantum effects. The practical applications aren't as fantastical as traditional invisibility, but could end up being considerably more important. The researchers believe that a properly implemented quantum invisibility cloak could be used to make quantum information storage and computing more feasible. Quantum invisibility is not a reality yet, but the groundwork has been laid. It only took a few years for transformation optics to go from theory to application. Maybe this new kind of invisibility will be on a similar trajectory. Research paper: arXiv:1306.2120 – “Hide the interior region of core-shell nanoparticles with quantum invisible cloaks” Comments: • Steven First steps towards creating the Infinite Improbability Drive? • greybirdtoo Dang, you got there first! • http://dbakeca.com Dbakeca Italia nice post,,,very interesting • digritz Isn't that the same as being invisible to all other realities? Couldn't the bleed through simply be our own reality? 
• Peter Snell The article mentions only one aspect of the Schrödinger equation can be hidden at a time. If gravity is one of those aspects, then voila, anti-gravity!
I am using Mathematica to construct a matrix for the Hamiltonian of some system. I have built this matrix already, and I have found the eigenvalues and the eigenvectors. I am uncertain if what I did next is correct: I took the normalized eigenvectors, placed them in matrix form, and did matrix multiplication with the basis set of solutions. Let me try to be more precise since I am not sure I am using the right language when mentioning the basis solutions. In the problem we are using the set of solutions of the particle in a box model as our basis. I can increase the number of basis elements in the calculation of the matrix of the Hamiltonian (which amounts to computing $\langle\psi_n|H|\psi_k\rangle$ over a specified range of $n$ and $k$) in order for some of my smallest eigenvalues to begin to converge. Once I have this $H$ matrix built, and I see that my eigenvalues are converging to some degree, I take the eigenvectors of the $H$ matrix, format them to be in matrix form, and multiply them by the set of basis solutions. I hope that makes things clearer.
I'm not sure what you are asking; eigenfunctions are a type of eigenvector, in that they satisfy the eigenvalue equation. If your eigenvectors are functions, then you already have your eigenfunctions. – KDN Feb 21 '13 at 18:26
@KDN, the eigenvectors I find are just numbers; it is my understanding that the eigenfunctions should be a linear combination of the basis solutions with the eigenvectors I found as coefficients. – user17338 Feb 21 '13 at 18:46
I think I see the confusion. The eigenvalues are just numbers. The eigenfunctions are the eigenvectors of the operator. In the expression $A \Psi_n = \lambda_n \Psi_n$, the $\lambda_n$ (the numbers) are the eigenvalues, and the $\Psi_n$ are the eigenfunctions, which are also the eigenvectors of $A$. – KDN Feb 21 '13 at 19:35
Accepted answer: If $\mathbf{v}$ is an eigenvector of the matrix $\mathbf{H}$ (where the entry in the $i$th row and $j$th column of $\mathbf{H}$ is $\langle\psi_i|H|\psi_j\rangle$) with eigenvalue $\lambda$, i.e. $$\mathbf{H}\,\mathbf{v} = \lambda\,\mathbf{v},$$ then the function (which is the one you are looking for) $$\varphi = \sum_j \mathbf{v}_j\,\psi_j$$ is an 'eigenfunction' (solution of the Schrödinger equation) of the Hamiltonian corresponding to $\mathbf{H}$, because for all $i$: $$\langle\psi_i|H|\varphi\rangle = \sum_j \mathbf{v}_j\,\langle\psi_i|H|\psi_j\rangle = \sum_j \mathbf{H}_{ij}\,\mathbf{v}_j = (\mathbf{H}\,\mathbf{v})_i = \lambda\,\mathbf{v}_i.$$ Assuming your set of basis functions is orthonormal, i.e. $\langle\psi_i|\psi_j\rangle = \delta_{ij}$, one can rewrite this as $$\langle\psi_i|H|\varphi\rangle = \lambda \sum_j \delta_{ij}\,\mathbf{v}_j = \lambda \sum_j \mathbf{v}_j\,\langle\psi_i|\psi_j\rangle = \lambda\,\Big\langle\psi_i\Big|\sum_j \mathbf{v}_j\,\psi_j\Big\rangle = \lambda\,\langle\psi_i|\varphi\rangle,$$ and because this holds for all $i$, $$H|\varphi\rangle = \lambda\,|\varphi\rangle.$$ You say that you put the eigenvector $\mathbf{v}$ in matrix form and then multiply it with the vector of basis functions to obtain the function $\varphi$. In fact it should be more like a 'dot product', but if you put the numbers of the eigenvector onto the diagonal (and leave zeros off the diagonal), that should be equivalent.
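For readers who want to see the whole recipe from the question and answer in one place (build the matrix of $\langle\psi_n|H|\psi_k\rangle$ in the particle-in-a-box basis, diagonalize it, then form the eigenfunction as a linear combination of the basis functions with the eigenvector components as coefficients), here is a small NumPy sketch. It is an illustration rather than the asker's Mathematica code; the perturbing potential V(x) = x and the basis size N = 20 are arbitrary choices made for the example.

import numpy as np

Lbox, N, hbar, m = 1.0, 20, 1.0, 1.0
x = np.linspace(0.0, Lbox, 2001)
dx = x[1] - x[0]

def basis(n):
    """Particle-in-a-box basis function psi_n(x) = sqrt(2/L) sin(n pi x / L)."""
    return np.sqrt(2.0 / Lbox) * np.sin(n * np.pi * x / Lbox)

V = x.copy()  # an arbitrary example potential, V(x) = x

# H_nk = <psi_n|H|psi_k> = E_k(box) * delta_nk + <psi_n|V|psi_k>
E_box = np.array([(hbar * np.pi * k / Lbox) ** 2 / (2 * m) for k in range(1, N + 1)])
H = np.diag(E_box)
for n in range(1, N + 1):
    for k in range(1, N + 1):
        H[n - 1, k - 1] += np.sum(basis(n) * V * basis(k)) * dx

# Diagonalize; eigh returns ascending eigenvalues and orthonormal eigenvectors (as columns)
evals, evecs = np.linalg.eigh(H)

# The eigenfunction belonging to evals[i] is phi_i(x) = sum_k evecs[k, i] * psi_{k+1}(x)
phi0 = sum(evecs[k, 0] * basis(k + 1) for k in range(N))

print("lowest three eigenvalues:", evals[:3])
print("norm of phi0:", np.sum(np.abs(phi0) ** 2) * dx)

Increasing N and watching the lowest few entries of evals stop changing reproduces the convergence check described in the question; because the box states are orthonormal, the coefficient vector really is just the eigenvector, exactly as the accepted answer's derivation shows.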
Open Access. Moving gapless indirect excitons in monolayer graphene. Nanoscale Research Letters 2012, 7:599. DOI: 10.1186/1556-276X-7-599. Received: 16 July 2012. Accepted: 11 October 2012. Published: 30 October 2012.
Abstract: The existence of moving indirect excitons in monolayer graphene is theoretically evidenced in the envelope-function approximation. The excitons are formed from electrons and holes near the opposite conic points. The electron-hole binding is conditioned by the trigonal warping of the electron spectrum. It is stated that the exciton exists in some sectors of the exciton momentum space and has a strong trigonal warping of the spectrum.
Keywords: Monolayer graphene; Exciton; Energy spectrum; Optical absorption; Specific heat. PACS: 71.35.-y; 73.22.Lp; 73.22.Pr; 78.67.Wj; 65.80.Ck.
An exciton is a usual two-particle state of semiconductors. The electron-hole attraction decreases the excitation energy compared to independent particles, producing the bound states in the bandgap of a semiconductor. The absence of the gap makes this picture inapplicable to graphene, and the immobile exciton becomes impossible in a material with zero gap. However, at a finite total momentum, the gap opens, which makes the binding of the moving pair allowable. The purpose of the present paper is an envelope-approximation study of the possibility of the Wannier-Mott exciton formation near the conic point in a neutral graphene. In the present paper, we use the term 'exciton' in its direct meaning, unlike other papers where this term is referred to as many-body ('excitonic') effects [1, 2], exciton insulator with full spectrum reconstruction, or exciton-like singularities originating from saddle points (van Hove singularity) of the single-particle spectrum [3]. On the contrary, our goal is the pair bound states of electrons and holes. There is a widely accepted opinion that zero gap in graphene forbids the Mott exciton states (see, e.g., [4]). This statement, which is valid in the conic approximation, proves to be incorrect beyond this approximation. Our aim is to demonstrate that the excitons exist if one takes the deviations from the conic spectrum into consideration. We consider the envelope tight-binding Hamiltonian of monolayer graphene as follows:
$$H_{\mathrm{ex}} = \varepsilon(\mathbf{p}_e) + \varepsilon(\mathbf{p}_h) + V(\mathbf{r}_e - \mathbf{r}_h), \qquad (1)$$
where
$$\varepsilon(\mathbf{p}) = \gamma_0 \sqrt{1 + 4\cos\frac{a p_x}{2}\cos\frac{\sqrt{3}\, a p_y}{2} + 4\cos^2\frac{a p_x}{2}} \qquad (2)$$
is the single-electron energy, a = 0.246 nm is the lattice constant, ℏ = 1, and V(r) = −e²/(χr) is the potential energy of the electron-hole interaction. The electron spectrum has conic points νK, ν = ±1, K = (4π/3a, 0), near which ε(p) ≈ s|p − νK|, where s = √3 γ₀ a/2 is the electron velocity in the conic approximation. The electron and hole momenta p_{e,h} can be expressed via the pair momentum q = p_e + p_h and the relative momentum p = p_e − p_h. The momenta p_{e,h} can be situated near the same conic point (q ≪ 2K) or near the opposite conic points (q = 2K + k, k ≪ 2K). We assumed that graphene is embedded into an insulator with a relatively large dielectric constant χ, so that the effective dimensionless constant of interaction g = e²/(sχℏ) ≈ 2/χ ≪ 1 and the many-body complications are inessential. In the conic approximation, the classical electron and hole with the same direction of momentum have the same velocities s. The interaction changes their momenta, but not their velocities. The two-particle Hamiltonian contains no terms quadratic in the component of the relative momentum p along k. In a quantum language, such attraction does not result in binding. 
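Before turning to the corrections, a quick numeric sanity check of the dispersion (2) as reconstructed above can be useful: the energy should vanish at the conic point K = (4π/3a, 0), and the slope near K should approach s = √3 γ₀ a/2. The short Python sketch below does this; the hopping value γ₀ = 3 eV is a typical literature figure and is an assumption, not a value taken from this paper.

import numpy as np

gamma0 = 3.0      # eV, a typical hopping value (assumption, not from the paper)
a = 0.246         # nm, lattice constant as given in the text

def eps(px, py):
    """Single-electron energy, Eq. (2), with hbar = 1 and momenta in 1/nm."""
    c = np.cos(a * px / 2.0)
    return gamma0 * np.sqrt(1.0 + 4.0 * c * np.cos(np.sqrt(3.0) * a * py / 2.0) + 4.0 * c**2)

K = np.array([4.0 * np.pi / (3.0 * a), 0.0])
s = np.sqrt(3.0) * gamma0 * a / 2.0   # conic velocity quoted in the text (eV nm)

print("eps(K) =", eps(*K))            # should be ~0 at the conic point
for dk in (1e-2, 1e-3, 1e-4):         # slope along p_x should approach s
    print(f"dk={dk:g}  eps(K+dk)/dk = {eps(K[0] + dk, K[1]) / dk:.4f}  (s = {s:.4f})")

Carrying the same expansion one order further in the momentum produces the trigonal-warping term that the argument below relies on.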
Thus, the problem of binding demands accounting for the corrections to the conic spectrum. Two kinds of excitons are potentially allowed in graphene: a direct exciton with k ≪ 1/a (when the pair belongs to the same extremum) and an indirect exciton with q = 2K + k. Assuming p ≪ k (this results from the smallness of g), we arrive at the quadratic Hamiltonian
$$H_{\mathrm{ex}} = sk + \frac{p_1^2}{2m_1} + \frac{p_2^2}{2m_2} - \frac{e^2}{\chi r}, \qquad (3)$$
where the coordinate system with the basis vectors e₁ = k/k and e₂ ⊥ e₁ is chosen, and r = (x₁, x₂). In the conic approximation, we have m₂ = k/s and m₁ = ∞. Thus, this approximation is not sufficient to find m₁. Beyond the conic approximation (but near the conic point), we should expand the spectrum (2) with respect to k up to the square terms, which results in the trigonal spectrum warping. As a result, we have for the indirect exciton
$$\frac{1}{m_1} = \nu\,\frac{s a}{4\sqrt{3}}\,\cos 3\phi_k, \qquad (4)$$
where φ_k is the angle between k and K. The effective mass m₁ ≫ m₂ is directly determined by the trigonal spectrum warping, and the large value of m₁ follows from the warping smallness. The sign of m₁ is determined by ν cos 3φ_k. If ν cos 3φ_k > 0, electrons and holes tend to bind; otherwise they run away from each other. Thus, the binding of an indirect pair is permitted for ν cos 3φ_k > 0. Apart from the conic point, this condition transforms to the requirement that one of the following pairs of inequalities holds simultaneously:
$$\{1 + u + v_- < 0,\ 1 + u + v_+ < 0\},\quad \{1 + u + v_- < 0,\ 1 + v_- + v_+ < 0\},\quad \{1 + u + v_+ < 0,\ 1 + v_- + v_+ < 0\}, \qquad (5)$$
where u = cos(a k_x) and v_± = cos((k_x ± √3 k_y) a/2). To find the indirect exciton states analytically, we solved the Schrödinger equation with the Hamiltonian (3) using the large ratio of effective masses. This parameter can be utilized by the adiabatic approximation, similar to the problem of molecular levels. Coordinates 1 and 2 play the role of heavy 'ion' and 'electron' coordinates. At the first stage, the ion term in the Hamiltonian is omitted, and the Schrödinger equation is solved with respect to the electron wave function at a fixed ion position. The resulting electron terms are then used to solve the ion equation. This gives the approximate ground level of the exciton ε(k) = sk − ε_ex(k), where the binding energy of the exciton is ε_ex(k) = π⁻¹ s k g² log²(m₁/m₂) (the coefficient 1/π here is found by a variational method). A similar reasoning for the direct exciton gives the negative mass m₁ = −32/(k s a² (7 − cos 6φ_k)). As a result, the direct exciton kinetic energy of the electron-hole relative motion is not positively determined, which means the impossibility of binding of electrons with holes from the same cone point.
Results and discussion
Figure 1 shows the domain of indirect exciton existence in the momentum space. This domain covers a small part of the Brillouin zone. Figure 1: Relief of the single-electron spectrum. Domains where exciton states exist are bounded by a thick line. The quantity ε_ex(k) essentially depends on the momentum via the ratio of effective masses m₁/m₂. Within the accepted assumptions, ε_ex is less than the energy sk of the unbound pair. However, at a small-enough dielectric constant χ, the ratio of the two quantities is not too small. Although we have no right to consider the problem with a large g in the two-particle approach, it is obvious that an increase of the parameter g can only result in binding energy growth. Besides, we have studied the problem of the exciton numerically in the same approximation and by means of a variational approach. Figure 2 represents the dependence of the exciton binding energy on its momentum for χ = 10. 
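As a rough illustration of how such a map can be generated from the formulas above, the following Python sketch evaluates the reconstructed expressions for m₁, m₂, and ε_ex(k) along a few directions. Both the prefactor in Eq. (4) and the 1/π coefficient come from the reconstruction of garbled equations, and γ₀ = 3 eV is assumed, so the printed numbers are indicative only and are not the paper's computed values.

import numpy as np

gamma0, a, chi, nu = 3.0, 0.246, 10.0, +1     # gamma0 in eV (assumed), a in nm, chi as in Figure 2
s = np.sqrt(3.0) * gamma0 * a / 2.0           # conic velocity (eV nm, hbar = 1)
g = 2.0 / chi                                 # effective interaction constant, g ~ 2/chi

def binding_energy(k, phi_k):
    """eps_ex(k) = (s k / pi) g^2 log^2(m1/m2), defined only where nu*cos(3 phi_k) > 0."""
    warp = nu * np.cos(3.0 * phi_k)
    if warp <= 0.0:
        return None                           # electron and hole run apart: no exciton
    m2 = k / s
    m1 = 4.0 * np.sqrt(3.0) / (s * a * warp)  # from 1/m1 = nu (s a / (4 sqrt(3))) cos(3 phi_k)
    if m1 <= m2:
        return None                           # the adiabatic (m1 >> m2) treatment does not apply
    return s * k * g**2 * np.log(m1 / m2) ** 2 / np.pi

for phi in (0.0, np.pi / 6, np.pi / 3):       # angles measured from the K direction
    row = []
    for k in (0.5, 1.0, 2.0):                 # k in 1/nm
        e = binding_energy(k, phi)
        row.append(None if e is None else round(e, 4))
    print(f"phi_k = {np.degrees(phi):5.1f} deg:", row)

The binding energy vanishes wherever ν cos 3φ_k ≤ 0, which is why, near the conic point, the exciton only exists in alternating 60-degree sectors of the exciton momentum space (compare Figure 1).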
Figure 3 shows the radial sections of the two-dimensional plot. The characteristic exciton binding energies have the order of 0.2 eV. Figure 2: Relief map of the indirect exciton ground-state binding energy. The map shows ε_ex (in eV) as a function of the wave vector in units of the reciprocal lattice constant. The exciton exists in the colored sectors. Figure 3: Radial sections of Figure 2 at fixed angles in degrees (marked). Curves run up to the ends of the exciton spectrum. All results for embedded graphene are applicable to the free-suspended layer if the interaction constant g is replaced with a smaller quantity g̃, which is renormalized by many-body effects. In this case, the exciton binding energy becomes essentially larger and comparable to the kinetic energy sk. We discuss the possibility of observation of the indirect excitons in graphene. As we saw, their energies are distributed between zero and some tenths of an eV, which smears the exciton resonance. The large exciton momentum blocks both direct optical excitation and recombination. However, slow recombination and intervalley relaxation preserve the excitons (when generated someway) from recombination or decay. On the other hand, the absence of a low-energy threshold results in the contribution of excitons to the specific heat and the thermal conductivity even at low temperature. It is found that the exciton contribution to the specific heat at low temperatures at the Dirac point is proportional to (gT/s)² log²(aT/s). It is essentially lower than the electron specific heat (T/s)² and the acoustic phonon contribution (T/c)², where c is the phonon velocity. Nevertheless, the exciton contribution to the electron-hole plasma specific heat is essential for experiments with hot electrons. In conclusion, the exciton states in graphene are gapless and possess strong angular dependence. This behavior coheres with the angular selectivity of the electron-hole scattering rate [5]. In our opinion, it is reasonable to observe the excitons by means of high-resolution electron energy loss spectroscopy of free-suspended graphene in vacuum. Such energy- and angle-resolving measurements can reproduce the indirect exciton spectrum. This research has been supported in part by the grants of RFBR nos. 11-02-00730 and 11-02-12142.
Authors' affiliations: Institute of Semiconductor Physics, Siberian Branch, Russian Academy of Sciences.
1. Yang L, Deslippe J, Park CH, Cohen ML, Louie SG: Excitonic effects on the optical response of graphene and bilayer graphene. Phys Rev Lett 2009, 103:186802.
2. Yang L: Excitons in intrinsic and bilayer graphene. Phys Rev B 2011, 83:085405.
3. Chae DH, Utikal T, Weisenburger S, Giessen H, von Klitzing K, Lippitz M, Smet JH: Excitonic Fano resonance in free-standing graphene. Nano Lett 2011, 11:1379. DOI: 10.1021/nl200040q
4. Ratnikov PV, Silin AP: Size quantization in planar graphene-based heterostructures: pseudospin splitting, interface states, and excitons. Zh Eksp Teor Fiz 2012, 141:582. [JETP 2012, 114(3):512]
5. Golub LE, Tarasenko SA, Entin MV, Magarill LI: Valley separation in graphene by polarized light. Phys Rev B 2011, 84:195408.
© Mahmoodian and Entin; licensee Springer. 2012
The quantum world If you're interested in the fundamental laws of modern physics and how mathematics is used to state and apply these laws, this module is for you. It surveys the physical principles, mathematical techniques and interpretation of quantum theory. The Schrödinger equation, the uncertainty principle, the exclusion principle, fermions and bosons, measurement probabilities, entanglement, perturbation theory and transition rates are all discussed. Applications include atoms, molecules, nuclei, solids, scanning tunnelling microscopy and quantum cryptography. The module also presents recent evidence relating to some of the most surprising and non-classical predictions of quantum mechanics. What you will study Quantum mechanics is famous for challenging our intuitive view of the world. However, it does not simply frustrate classical mechanics: it replaces it by a clear and precise formalism and a set of principles that allow exact calculations to be made. This puts the subject in a unique position. Whilst it challenges our intuitions, it provides the concepts and quantitative predictions needed by applied physicists, chemists and technologists who wish to interpret and control phenomena on the nanoscale and below. This module will give you a detailed understanding of the physical principles and mathematical techniques of quantum mechanics. Building on this understanding, you'll learn about the interpretation of quantum mechanics in the light of recent experiments and discover how quantum mechanics is used to explain the behaviour of physical systems, from nuclei and atoms to molecules and solids. The study materials include three books, accompanied by DVD-ROMs containing computer-based activities and video materials. Book 1, Wave Mechanics, begins with a wide-ranging introduction to the quantum revolution. It then develops Schrödinger's equation, together with the concepts of wave functions, expectation values and uncertainties. Schrödinger's equation is solved for simple model systems such as particles in boxes and harmonic oscillators. You will also learn how the equation can be used in various applications including quantum dots and vibrating molecules. The concept of a wave packet is introduced and used to describe the classical limit of quantum mechanics. Finally, the quantum processes of tunnelling, barrier penetration and reflection are discussed, together with their application to nuclear fusion, alpha decay, and the scanning tunnelling microscope. The mathematical techniques used and developed in this book include complex numbers, separation of variables, integration, differential equations and eigenvalues. 
Book 2, Quantum Mechanics and its interpretation, gives a more general discussion of quantum mechanical principles. It shows how quantum states can be represented by vectors in a vector space, with observable quantities represented by operators acting on the vectors. This formalism is used to derive quantum mechanical conservation laws and to provide a proof of the uncertainty principle. The properties of orbital and spin angular momentum are introduced and the extraordinary properties of systems of identical particles, including Bose-Einstein condensation, are explored. The book then discusses some fascinating topics in the interpretation of quantum mechanics, supported by the results of recent experiments. The process of measurement in quantum mechanics cannot be described by Schrödinger’s equation and appears to involve chance in an unavoidable way. The book ends by discussing the concept of entanglement, and its applications to quantum encryption and quantum teleportation. The mathematical techniques used and developed in this book include vector spaces, Hermitian operators and matrix algebra. Book 3, The Quantum Mechanics of Matter shows how quantum mechanical methods are used to explain the behaviour of matter, from the scale of nuclei and atoms to molecules and solids. The hydrogen atom is discussed in detail, as well as hydrogen-like systems such as positronium. The useful technique of perturbation theory is developed to obtain approximate results in cases where exact calculations become difficult. The book goes on to discuss multi-electron atoms and the Periodic Table, molecular binding and the behaviour of electrons in the energy bands of metals, insulators and semiconductors. Finally, the book considers the interaction of matter with light. You will see how quantum mechanics can predict the lifetimes of atomic states and the brightness of spectral lines. You will learn In this module, you will learn the fundamental principles of quantum mechanics and the mathematical techniques needed to state and apply them. You will explore the interpretation of quantum mechanics and critically evaluate the extent to which quantum mechanics has been tested by experiment. You will also see how quantum mechanical methods are used to model phenomena in physical systems including atoms, molecules and solids. Professional recognition This module, when studied as part of an honours degree in the physical sciences or engineering, can help you gain membership of the Institute of Physics (IOP). For further information about the IOP, visit their website. This module may also help you to gain membership of the Institute of Mathematics and its Applications (IMA). For further information, see the IMA website. Teaching and assessment Support from your tutor You'll have a tutor who will help you with the study material and comment on your written work, and whom you can ask for advice and guidance. There will be a number of online tutorials that you can join and access via your computer. You're encouraged, but not obliged, to participate in these. You'll also be able to participate in discussions through online forums.  You will, however, be granted the option of submitting on paper if typesetting electronically or merging scanned images of your answers to produce an electronic TMA would take you an unacceptably long time. There will be a mixture of online interactive computer-marked assignments (iCMAs) and short tutor-marked assignments (TMAs), with a total workload equivalent of three full TMAs. 
Both the iCMAs and TMAs will focus strongly on learning through practice rather than on assessment. The feedback you receive on your answers will help you to improve your knowledge and understanding of the study material and to develop important skills associated with the module. The feedback on the iCMAs will be instantaneous and hints will be given so that you can refine any incorrect answers. Although your scores on all these assignments will not contribute directly to your module grade, they form an essential part of the learning process and you will be required to submit a proportion of them to complete the module. You will be given detailed information when you start the module. Future availability The quantum world (SM358) starts once a year – in October. This page describes the module that will start in October 2020. Course work includes: 4 Tutor-marked assignments (TMAs) 6 Interactive computer-marked assignments (iCMAs) No residential school Entry requirements This is an OU Level 3 module that builds on study skills and subject knowledge acquired from previous studies at OU Levels 1 and 2. It is intended for students who have recent experience of higher education in a related subject at this level. The module is designed to follow Mathematical methods (MST224) or Mathematical methods, models and modelling (MST210), and Physics: from classical to quantum (S217). You would find it very difficult to study SM358 without the necessary mathematical background. The parts of MST224 or MST210 relating to matrices, ordinary and partial differential equations are especially important. S217 is the ideal physics module to prepare you for studying SM358, particularly the parts relating to classical and quantum mechanics. Students are most successful if they have acquired their prerequisite knowledge through passing these OU level 2 physics and mathematics modules. It's essential that you establish whether or not your background and experience give you a sound basis on which to tackle SM358. We've produced a booklet Are You Ready For SM358? to help you decide whether you already have the recommended background knowledge and experience to start the module or whether you need some extra preparation. What's included You'll have access to a module website, which includes: • a week-by-week study planner • course-specific module materials • audio and video content • assignment details and submission section • online tutorial access. You'll also be provided with three printed module books, each covering one block of study, a printed glossary and a DVD pack containing interactive computer packages and video material. You will need Basic scientific calculator. Computing requirements • A desktop or laptop computer with an up-to-date version of Windows • The screen must have a resolution of at least 1024 pixels horizontally and 768 pixels vertically. If you have a disability The OU strives to make all aspects of study accessible to everyone and this Accessibility Statement outlines what studying SM358 involves. 
You should use this information to inform your study preparations and any discussions with us about how we can meet your needs.
Wave-Function Story Last time I started with wave-functions of quantum systems and the Schrödinger equation that describes them. The wave-like nature of quantum systems allows them to be merged (superposed) into combined quantum system so long as the coherence (the phase information) remains intact. The big mystery of quantum wave-functions involves their apparent “collapse” when an interaction with (a “measurement” by) another system seemingly destroys their coherence and, thus, any superposed states. When this happens, the quantum behavior of the system is lost. This time I’d like to explore what I think might be going on here. To quickly review, the problem is that the Schrödinger equation describes the linear evolution of a quantum system. The abrupt change from this smooth evolution to a localized measurement represents a discontinuity we haven’t truly explained. There are multiple connected issues. For one, the photon always manifests as a point, being absorbed by just one atom, but the interference pattern requires it act like a wave during flight. This is the wave-particle duality in a nutshell. For another, and this is spooky, nothing seems to predict where the photon actually lands — it appears genuinely random. This may be a property of nature, but it’s very hard for some to swallow. The biggest mystery involves what physically happens when the photon ends its flight. That sudden change doesn’t have a consensus story among us yet. The wave-like flight of the photon through both slits invokes another mystery involving gravity. A photon has energy, thus mass, and thus gravity. Tiny, but present. We can also use massive particles in the two-slit experiment; they’d have even more gravity. (Scientists have successfully interfered extremely large molecules.) The point is, when the wave goes through both slits, as it must, what happens to its mass, its gravity? Do “particles” even manifest gravity as wave-functions? § § Here’s a story that tries to make physical sense of things by taking various physically sensible pieces from quantum mechanics. The story tries, as much as possible, to stick with mainstream physics ideas. One of those ideas is quantum decoherence, which I wrote about last time. Another is Quantum Field Theory (QFT), which sees quantum particles as long-lived vibrations or wave-packets in a quantum field. For example, all Up quarks are disturbances in the Up field. Quantum fields permeate all of space because particles can be anywhere (and because physics is fundamentally isotropic). Where there are no particles, the field value is zero. These fields are part of the mathematical description, the gauge theory, that QFT is built on, but we don’t know what they are physically. Since particles seem real, these fields must , in some fashion, also be real. My speculation is that these fields have non-local properties (which, in fact, account for the non-local behaviors of the quantum world). I’ll come back to that. (BTW: It is Heisenberg Uncertainty of values in these fields that allow virtual particle pairs to appear and quickly disappear. A point in the field suddenly has a value, has energy, which manifests as a particle wave-packet that necessarily vanishes.) Let’s consider in detail the photon’s flight from start to finish. Few have any problem with the start: An excited electron in an atom drops to a lower energy level releasing a photon in the process. This is a common occurrence. (A complicated one from a Schrödinger equation point of view, though. 
The electron has a wave-function, which entangles with the atom's wave-function, which entangles with the surrounding atoms and so on up to a very complicated wave-function for the laser.) Once the laser emits the photon, we're able to view the photon as a mostly isolated quantum system with coherent phase. As such, it can interfere with itself. The Schrödinger equation describes the photon as a wave phenomenon, and there is a strong correlation with the behavior of mechanical waves. Other than questions about the physicality of the Schrödinger equation and the wave-function, there isn't much controversy so far. The controversy involves the photon being absorbed: the infamous measurement. Now my story gets a bit imaginative. I think the Schrödinger equation describes something real having to do with the quantum field. According to QFT, a particle is the smallest quantum that manifests in the field — that is, that can be measured, or that can change the state of some other system (two ways of saying the same thing). But what if a lesser amount of energy could travel along the field? Think about the "wave" that spreads out from the laser towards the detector. It has volume, internal space. Suppose the energy of the particle spreads out exactly like a wave — exactly as the Schrödinger equation describes. But since that energy is distributed over a volume, it's not enough to manifest anywhere as a particle. It's just a wave of tiny energy spreading out at light speed. (Very much as we'd imagine a big beam of light streaming out.) This spread-out energy is sub-quantum at every particular location. It isn't enough to change the state of any system it encounters. But it represents the sum of possible paths the particle could take. (It resembles Feynman's summing of all possible paths. Another mainstream idea.) If you want a visual image, imagine a 3D grid, finely meshed, of taut wires representing the quantum field. Imagine flicking a spot (the starting point) with a fingernail. Vibrations spread out from that point, ringing through the mesh. This part is a bit similar to what's called pilot wave theory. As this "pilot wave" (for lack of a better word) spreads out and interacts with other systems, it ultimately selects one, and the spread-out energy "collapses" or "drains" into the selected interaction. This is what we perceive as wave-function collapse. A key point is that the energy of this pilot wave isn't sufficient to cause an interaction with any of the other systems until one is selected, and then the entire quantum of energy, previously spread throughout the field, is applied to the selected interaction. This does require non-local behavior as the wave submits to the interaction. At first, it seems asymmetrical in spreading at light speed (or sub-light speed for massive particles) but collapsing instantly (or nearly so). Perhaps entrance and exit to the field are both instantaneous, regardless of whether the quantum is a point at insertion or a volume at exit. The "particle" is either there or not there, period. It starts at a point source, spreads out, and then "drains" into the interaction. In other words: What looks like collapse to us really is a collapse of something. There is a physical reality to it. To address the mysteries, there is no gravity question while the particle is a wave because its energy is distributed sub-quantum. It's incapable of affecting the state of any other system. (It follows spacetime geodesics so paths are aware of the existing gravitational field.) 
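One way to make the "spread-out wave that still lands at a single point" picture concrete is the standard two-slit sum: add the amplitudes for the two possible paths, square the result, and read the outcome as the probability that the whole quantum "drains" into a given spot on the screen. The short Python sketch below does just that; the wavelength, slit spacing, and screen distance are arbitrary textbook-style values, not anything specific to the story above.

import numpy as np

wavelength = 500e-9        # 500 nm light (arbitrary example value)
d = 100e-6                 # slit separation, 0.1 mm
Lscreen = 1.0              # distance from slits to the screen, in metres
k = 2 * np.pi / wavelength

y = np.linspace(-10e-3, 10e-3, 9)                 # positions on the screen
r1 = np.sqrt(Lscreen**2 + (y - d / 2) ** 2)       # path length from slit 1
r2 = np.sqrt(Lscreen**2 + (y + d / 2) ** 2)       # path length from slit 2

amp = np.exp(1j * k * r1) + np.exp(1j * k * r2)   # superpose the two path amplitudes
prob = np.abs(amp) ** 2                           # probability ~ |psi1 + psi2|^2
prob /= prob.max()

for yi, p in zip(y, prob):
    print(f"y = {yi * 1e3:6.2f} mm   relative probability = {p:.3f}")

The bright and dark rows alternate every few millimetres, which is the interference pattern a single photon must "know about" even though it is finally absorbed at only one position.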
The fact that the quantum of energy has to find a system it can interact with is why we have point interactions. It's always a "particle" interacting with another "particle" using the total of their energies per quantum physics. The apparent randomness is either genuine, and reality really is random (a possibility I'm fine with), or something in the interaction of the spreading wave selects a destination system. If we ever figure out why one uranium atom decays rather than its neighbor, that'll solve this one, too. My guess, assuming it isn't random, is that the combined wave-function of the photon and all the other systems might select a target system as most probable, if not outright determined (thus removing the apparent randomness). There are no hidden variables; just the sum of lots of interacting systems. § § You may note I've made no mention of many worlds. Everett, in his paper, provides: "Alternative 2: To limit the applicability of quantum mechanics by asserting that the quantum mechanical description fails when applied to observers, or to measuring apparatus, or more generally to systems approaching macroscopic size." This is exactly what I'm asserting. The photon is absorbed by the electron, its phase information is distributed — and effectively lost — among the many atoms in the detector. In particular, that phase information is not amplified to include multiple states of the detector, let alone the scientist observing. It certainly has no power to create multiple worlds. He goes on, in objection to this alternative, to say: "If we try to limit the applicability so as to exclude measuring apparatus, or in general systems of macroscopic size, we are faced with the difficulty of sharply defining the region of validity. For what n might a group of n particles be construed as forming a measuring device so that the quantum description fails?" I've long suspected the boundary is fuzzy and hugely dependent on conditions. In more pristine conditions, n might be quite large. In messier conditions, n might be much smaller. I think a qualitative understanding of n requires a deeper understanding of reality — at the least a reconciling of QFT and GR. For one thing, I wonder if n might depend on the gravity (mass) of the measuring system. It may be fundamentally stochastic or even random. 
The phase of the photon merges with the phase of the electron, so the electron wave-function has (almost certainly) a shifted phase from what either had initially. The electron is part of an atom, which is part of an assembly of atoms, and, as described in the previous post, any phase information from the photon is quickly distributed into the larger quantum system. Meanwhile the electron is interacting with the overall system such that its phase (and any imprint from the photon) is quickly smeared. If the photon hit the wall, nothing else happens. The disturbance is quickly lost in the much larger system of the wall. If the photon hit the detector, and the detector is capable of recording individual photons, it’s state must obviously change. The electron absorbing the photon must be amplified to something sufficiently macro to affect a recording or display device. To the extent the detector can be said to be in a superposed state, it would necessarily be between detecting and not detecting. This requires a coherent phase for the entire detector, and I’m not sure that’s possible. Any coherence among the atoms of the detector should be instantly dissipated. • SelfAwarePatterns An interesting interpretation Wyrd! Unfortunately, my knowledge of QM isn’t really sufficient for me to judge its merits, but I do appreciate that you recognize and acknowledge the cost, in your case, non-locality. One question that does occur to me. Your example is done using a photon, where the photon’s existence unequivocally comes to an end. If we run through the scenario with an electron, or some other fermion, does it change things? Just curious. • Wyrd Smythe Thanks! Yeah, it’s always gotta be something with quantum. One has to pick one’s weirdness. (As I mentioned at the end of the previous post, experiments with Bell’s Inequality seem to make it clear we’re stuck with non-locality. Or, at least, most interpretations are forced to provisionally accept it. If I have to swallow a weirdness pill, that one seems kind of already on the plate.) I’d want to do a little research into exactly what happens when we throw electrons at something. (For that matter, what about uncharged heavy molecules? What happens to them??) I can speculate based on some things we do know… Firing electrons at something raises its charge. It can raise it to the point of having enough negative charge to repel further electrons (unless they’re moving very fast). The phosphor coating in CRTs is electrically grounded to drain that charge. CRTs wouldn’t work otherwise. So the electron can’t vanish. (Off the top of my head, I’m not sure what events absorb electrons. Some weak interactions, maybe? Certainly meeting a positron would do it.) As you may know, while current moves at high speed, the actual electrons move very slowly (like walking pace slowly). They just of buzz around the material. I assume what happens is that the incoming electron merges with that cloud of electrons, raising the overall charge by one electron. In the right kind of system, that can be detected, and I’d want to look into the details of exactly how that electron is amplified to macro levels. What I do think, in terms of wave-function, is that the incoming electron’s wave-function merges (superposes) with the particles of the detector and, in turn, its wave-function becomes a superposition of everything it interacts with. Its quantum state essentially becomes the quantum state of the detector. 
What I don’t see happening is the detector’s state becoming a superposition of detecting and not detecting. I don’t see how the detector can be in any kind of coherent state due to all its atoms and being connected to the environment. The decoherence for a system that size in a hot messy environment is below the Planck time. I’ve come to realize that hidden assumption in Everett’s work is the idea that the wave-function of a single particle can have a significant effect on the wave-function of a much larger system. Part of the argument rests on us not knowing how to define “much larger system” but (as I touched on in the post) I’m beginning to think answering that is where the key to all this lies. (I’m really wondering if this sort of thing is where Baggott is headed with that ‘no big deal’ stuff. Reading his book was something of a shared mind experience for me.) • SelfAwarePatterns Thanks for speculating! I haven’t read Everett directly, but on the particle not having much of an effect, I can see that being true for a non-measuring device, like a brick. But as you noted for the film, isn’t a measuring device constructed to amplify the effects of that one particle? It seems like that would give the particle far more causal power than it would have on a non-measuring device. Or am I missing your point? • Wyrd Smythe Not at all, and it’s something I’m still chewing on (and will no doubt explore thru writing about). My sense at this point is there’s a difference between causal power and wave-function. I need to look into the details of single particle detection to understand exactly what’s happening there. My understanding currently is that such systems have a stored energy level that’s analogous to a set mousetrap. The detection of a small force releases that stored energy. The film molecules (I’m guessing) are in a non-minimal energy configuration that a photon unlocks and releases. It might be analogous to a super-chilled solution where a tiny disturbance seeds a phase change that expands throughout the solution. (I love watching it. In the winter I leave capped but opened bottles of water in my garage, after shaking them to get as much air out of the solution as possible. Often, when it’s sub-zero for a day or two, I can go out and flick one and watch the phase change of freezing spread through the super-chilled water. Very cool. Literally. 😀 ) Such systems, whether through phase change or other energy release mechanism, consist of large numbers of molecules, all of which are connected to the environment (which includes things like all the radio waves passing by — tons and tons of photons streaming through the detector every second; their energy levels don’t permit direct interaction, which is why they pass through, but their wave-functions certainly interact). The only way we ever get superposition is when the self-interacting single system, or the two interacting systems, both have coherent phase; that’s the only way it’s possible. When a system is large enough to decohere, that’s lost. I forget where I read this (Tegmark?) but, IIRC, the decoherence time for a lone hydrogen atom in deep intergalactic space, about as isolated as possible in the universe, is on the order of microseconds. With larger or more environmentally linked systems, the time drops rapidly. AIUI, it quickly drops below Planck time. • SelfAwarePatterns Not sure what to make of the Planck time part. 
It does make sense that an atom in an intergalactic void is still being buffeted by the CMB, so it would still quickly decohere. Although it does make me wonder what might happen once the CMB disappears in the distant future. But consider this. Under MWI, with the measuring device just sitting there, its wave function is branching zillions and zillions of times per second. (“Zillions” being a very technical term for ginormous number. (“Ginormous” also being a very technical term.)) So it’s not just the particle that is going to throw it into superposition. All the particle does is lightly perturb it. But the device is designed to magnify that perturbation into something a human can perceive. So the particle ends up having large scale effects on the state of the device. So among the zillions and zillions of branches the device’s wave function is constantly splitting into, a portion of them will now contain the detection state. The portion should be in proportional to the probability of the device making the detection. (I think. I’m winging this.) That’s the thing about the MWI. We have to remember that it’s not just the measurement throwing out all these branches, but everything else as well. That said, I’m guessing this makes it even more ludicrous in your eyes. 🙂 • Wyrd Smythe Once the CMB fades out, isolated atoms in intergalactic space would be very isolated, indeed! The more I read, the more I question what we mean by the “wave-function of the detector” — it’s not something we’re able to calculate, and I’ve begun to wonder if we’re even thinking about it in a sensible manner. The detector would certainly have a wave-function, but I’m not sure quite what to make of it. For one, there is the de Broglie wavelength, which gives the wavelength for a given mass. The formula is simply h/mv, but with h as the numerator, the mass and velocity must be equally tiny for the wavelength to amount to anything at all. Another puzzle is that the wave-function of the detector is a summation of every particle that comprises it, so the detector’s ψ encodes a mind-boggling number of contributing states. Superposition involves the notion of observables, and I’m not quite sure how to apply that notion to a macro object like a detector. On its own, it’s always observed to be a detector sitting in the same spot you saw it last with the same energy level and basic atomic configuration. The deep puzzle, of course, is how to view its detecting something, and that’s something I want to research a little before I go too far out on a limb. “So the particle ends up having large scale effects on the state of the device.” Yep, right with ya. “The portion should be in proportional to the probability of the device making the detection.” You mean the portion of entire detectors that have detected something, yes? You’re talking about the MWI view of it? (Assuming so, yes, that’s what it posits.) We’re at the point where I kind of want a physicist, because (per above) I’m not sure what to say about the detector being in superposition with itself. The de Broglie wavelength might be sub-Planck, and I’m not sure what effect that has on things. I’m not sure what effect all the particle and atom sub-systems have in swamping or damping out superposition. Certainly the energy and atomic configuration has to change when, say, an electron lands after passing through two slits. It’s absolutely tempting to view the detection system as being in superposition between absorbing the electron and not… I’m not sure it’s the right picture. 
It’s kinda like an ocean liner hitting a life raft and being affected the same way as the raft. Maybe all that happens is the detector gets a little smear across its bow where the raft hit. Enough to say it happened. But, yes, what you are suggesting is exactly what Everett proposed. I showed you a bit of this before, but maybe it’s even more relevant here. In the simplest form he presents: Which translates to: (post observation) The wave-function of system S combined with observer O results in the product of the states of Φ and Ψ (the i subscript denotes superpositions; each i is an observable state). The brackets enclose the “memory” of O, the dots indicating the unchanged states leading to the observation, and the a representing the change in memory due to the observation. (This is all in Chapter IV Observation.) But I keep thinking about ocean liners. 😉 • SelfAwarePatterns I’m not really grasping the significance of the wavelengths. We do know that those decoherences have to happen, don’t we? (Or wave function collapse if classic Copenhagen?) I was thinking of all the versions of the detector that could possibly detect something, which I suspect is what you meant, but just making sure. Definitely this is my take on MWI. On matching Everett, that’s good to know. My reading of the popular accounts hasn’t been in vain. If I’m understanding the equation correctly, the two wavefunctions (the system and the observer) combine, which makes sense. I’m actually slightly stoked that I understand the equation at all! • Wyrd Smythe The de Broglie wavelength affects the wave behavior of a system in suggesting that above some threshold, there really is no wave behavior to speak of. I don’t know that it’s directly connected to decoherence, which is just the loss of phase of a quantum system (“loss” in the can’t measure it anymore sense). As an example, that hydrogen atom in intergalactic space. The more CMB photons it absorbs, each with their own phase, the less we can say about the atom’s original phase. It gets combined with more and more photons. (Anywhere on Earth, it gets blasted with radio and IR photons.) OTOH, the atom still has a wave-function we can measure, observables we can detect. We’ve just lost the earlier observables. There are measurements we could have made that we can no longer make. (Doesn’t really matter with the atom, but if it was a computing qbit we’d configured, that configuration is gone.) FWIW, I visualize coherent (i.e. measurable) phase as a pure musical note. The more notes we add to it, the less it even sounds like a note, let alone can be identified as to what that note was (imagine someone played every note on the piano at once). If we add just one or two notes, though, we get new notes (harmonics) which is a lot like combining a limited number of quantum states. The thing is, if we see decoherence as behind the “collapsing” of the wave-function, then all macro objects must be in permanently collapsed condition given the messy environment. (Quantum computers have to go to great lengths to maintain coherence long enough to make a computation.) There’s just no way for the detector to be in a coherent state such that interacting with a coherent system should affect it. (As these posts suggest, maybe we need to rethink the whole “collapse” thing. At least in the case of the photon, it goes away so why should it be surprising its w-f goes away? The quantum information it had is absorbed and dissipated into the larger system, so no information is lost. 
If something analogous happens with more massive particles, we may have overthought the collapse mystery.) • SelfAwarePatterns Lots to think about, but I’m depleted for now. Thanks Wyrd! • Wyrd Smythe Now that’s a great exit line! Until next time.
Chapter 30

Science is the study of the natural world using the five senses. Because people use their senses every day, people have always done some sort of science. However, good science requires a systematic approach. While ancient Greek science did rely upon some empirical evidence, it was heavily dominated by deductive reasoning. Science as we know it began in the 17th century. The father of the scientific method is Sir Francis Bacon (1561–1626), who clearly defined the scientific method in his Novum Organum (1620). Bacon also introduced inductive reasoning, which is the foundation of the scientific method. The first step in the scientific method is to define clearly a problem or question about how some aspect of the natural world operates. Some preliminary investigation of the problem can lead one to form a hypothesis. A hypothesis is an educated guess about an underlying principle that will explain the phenomenon in question. A good hypothesis can be tested. That is, a hypothesis ought to make predictions about certain observable phenomena, and we can devise an experiment or observation to test those predictions. If we conduct the experiment or observation and find that the predictions match the results, then we say that we have confirmed our hypothesis, and we have some confidence that our hypothesis is correct. On the other hand, if our predictions are not borne out, then we say that our hypothesis is disproved, and we can either alter our hypothesis or develop a new one and repeat the process of testing. After repeated testing with positive results, we say that the hypothesis is confirmed, and we have confidence that our hypothesis is correct. Notice that we did not “prove” the hypothesis, but merely confirmed it. This is a big difference between deductive and inductive reasoning. If we have a true premise, then properly applied deductive reasoning will lead to a true conclusion. However, properly applied inductive reasoning does not necessarily lead to a true conclusion. How can this be? Our hypothesis may be one of several different hypotheses that produce the same experimental or observational results. It is very easy to assume that our hypothesis, when confirmed, is the end of the matter. However, our hypothesis may make other predictions that future, different tests may not confirm. If this happens, then we must further modify or abandon our hypothesis to explain the new data. The history of science is filled with examples of this process, and we ought to expect that this will continue. This puts the scientist in a peculiar position. While we can definitely disprove a number of propositions, we can never be entirely sure that what we believe to be true is indeed true. Thus, science is a very changing thing. History shows that scientific “truth” changes over time. This uncertainty is the reason why continued testing of our ideas is so important in science. Once we test a hypothesis many times, we gain enough confidence that it is correct, and we eventually begin to call our hypothesis a theory. So a theory is a grown-up, well-developed hypothesis. At one time, scientists conferred the title of law to well-established theories.
This use of the word “law” probably stemmed from the idea that God had imposed some order (law) onto the universe, and our description of how the world operates is a statement of this fact. However, with a less Christian understanding of the world, scientists have departed from using the word law. Scientists continue to refer to older ideas, such as Newton’s law of gravity or laws of motion as law, but no one has termed any new ideas in science as law for a very long time. Isaac Newton Isaac Newton (1643–1727) In 1687, Sir Isaac Newton (1643–1727) published his Principia, which detailed work that he had done about two decades earlier. In the Principia, Newton presented his law of gravity and laws of motion, which are the foundation of the branch of physics known as mechanics. Because he required a mathematical framework to present his ideas, Newton invented calculus. His great breakthrough was to hypothesize that the force that held us to the earth was the same force that kept the moon orbiting around the earth each month. From knowledge of the moon’s distance from the earth and orbital period, Newton used his laws of motion to conclude that the moon is accelerated toward the earth 1/3600 of the measured acceleration of gravity at the surface of the earth. The fact that we on the earth’s surface are 60 times closer to the earth’s center than the moon allowed Newton to devise his inverse square law for gravity (602 = 3,600). This unity of gravity on the earth and the force between the earth and moon was a good hypothesis, but could Newton test it? Yes. When Newton applied his laws of gravity and motion to the then-known planets orbiting the sun (Mercury, Venus, Earth, Mars, Jupiter, and Saturn), he was able to predict several things: 1. The planets orbit the sun in elliptical orbits with the sun at one focus of the ellipses. 2. The line between the sun and a planet sweeps out equal areas in equal intervals of time. 3. The square of a planet’s orbital period is proportional to the third power of the planet’s mean distance from the sun. Johannes Kepler Johannes Kepler (1571–1630) These three statements are known as Kepler’s three laws of planetary motion, because the German mathematician Johannes Kepler (1571–1630) had found them in a slightly different form several decades before Newton. Kepler empirically found his three laws by studying data on planetary motions taken by the Danish astronomer Tycho Brahe (1546–1601) over a period of 20 years in the latter part of the 16th century. Kepler arrived at his result by laborious trial and error for over two decades, but he had no explanation of why the planets behaved the way that they did. Newton easily showed (or predicted) that the planets must follow Kepler’s law as a consequence of his law of gravity. Many other predictions of Newton’s new physics followed. Besides Earth, Jupiter and Saturn had satellites that obeyed Newton’s formulation of Kepler’s three laws. Newton’s good friend who privately funded the publication of the Principia, Sir Edmond Halley (1656–1742), applied Newton’s work to the observed motions of comets. He found that comets also followed the laws, but that their orbits were much more elliptical and inclined than the orbits of planets. In his study, Halley noticed that one comet that he observed had an orbit identical to one seen about 75 years before and that both comets had a 75-year orbital period. Of course, when the comet returned once again, Halley was long dead, but this comet bears his name. 
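As a quick check of the 1/3600 comparison described above, the moon’s centripetal acceleration can be computed from its distance and orbital period and compared with g/60². Below is a minimal sketch using modern textbook values; the specific numbers are assumptions added here for illustration, not figures taken from the text.

```python
import math

g = 9.81                 # surface gravity, m/s^2
r_moon = 3.844e8         # mean Earth-moon distance, m
T_moon = 27.32 * 86400   # sidereal month, s

# Centripetal acceleration of the moon: a = 4 * pi^2 * r / T^2
a_moon = 4 * math.pi**2 * r_moon / T_moon**2
print(f"a_moon   = {a_moon:.3e} m/s^2")    # ~2.73e-3
print(f"g / 60^2 = {g / 60**2:.3e} m/s^2")  # ~2.73e-3 -- the inverse-square law checks out
```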
In 1704, Newton first published his other seminal work in physics, Opticks. In this book, he presented his corpuscular (particle) theory of light. Together, his Principia and Opticks laid the foundation of physics as we know it. Over the next two centuries, scientists applied Newtonian physics to all sorts of situations, and in each case the predictions of the theory were borne out by experiment and observation. For instance, William Herschel stumbled upon the planet Uranus in 1781, and its orbit followed Kepler’s three laws as well. However, by 1840, astronomers found that there were slight discrepancies between the predicted and observed motion of Uranus. Two mathematicians independently hypothesized that there was an additional planet beyond Uranus whose gravity was tugging on Uranus. This led to the discovery of Neptune in 1846. These successes gave scientists tremendous confidence in Newtonian physics, and thus Newtonian physics is one of the most well-established theories in history. However, by the end of the 19th century, experimental results began to conflict with Newtonian physics.

Quantum Mechanics

Near the end of the 19th century, physicists turned their attention to how hot objects radiate, with one practical application being the improvement of efficiency of the filament of the recently invented light bulb. Noting that at low temperatures good absorbers and emitters of radiation appear black, they dubbed a perfect absorber and emitter of radiation a black body. Physicists experimentally determined that a black body of a certain temperature emitted the greatest amount of energy at a certain frequency and that the amount of energy that it radiated diminished toward zero at higher and lower frequencies. Attempts to explain this behavior with classical, or Newtonian, physics worked very well at most frequencies but failed miserably at higher frequencies. In fact, at very high frequencies, classical physics required that the energy emitted increase toward infinity. Max Planck (1858–1947) In 1901, the German physicist Max Planck (1858–1947) proposed a solution. He suggested that the energy radiated from a black body was not carried away in continuous waves, as classical physics required, but instead by tiny particles (later called photons). The energy of each photon was proportional to its frequency. This was a radical departure from classical physics, but this new theory did exactly explain the spectra of black bodies. In 1905, the German-born physicist Albert Einstein (1879–1955) used Planck’s theory to explain the photoelectric effect. What is the photoelectric effect? A few years earlier, physicists had discovered that when light shone on a metal to which an electric potential was applied, electrons were emitted. Attempts to explain the details of this phenomenon with classical physics had failed, but Einstein’s application of Planck’s theory explained it very well. Other problems with classical physics had mounted. Physicists found that excited gas in a discharge tube emitted energy at certain discrete wavelengths or frequencies. The exact wavelengths of emission depended upon the composition of the gas, with hydrogen gas having the simplest spectrum. Several physicists investigated the problem, with the Swedish scientist Johannes Rydberg (1854–1919) offering the most general description of the hydrogen spectrum in 1888. However, Rydberg did not offer a physical explanation.
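Before moving on to the hydrogen atom, Planck’s relation E = hf and Einstein’s photoelectric argument can be put into numbers. The sketch below is illustrative only; the 2.3 eV work function is an assumed value, roughly that of an alkali metal, not a figure from the text.

```python
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electron volt

def photon_energy_eV(wavelength_m):
    """Planck relation: E = h*f = h*c/lambda, in electron volts."""
    return h * c / wavelength_m / eV

def photoelectron_energy_eV(wavelength_m, work_function_eV):
    """Einstein's photoelectric relation: kinetic energy = photon energy - work function."""
    return max(photon_energy_eV(wavelength_m) - work_function_eV, 0.0)

print(photon_energy_eV(550e-9))              # green light: ~2.25 eV per photon
print(photoelectron_energy_eV(550e-9, 2.3))  # 0.0 -> no photoelectrons, however bright the light
print(photoelectron_energy_eV(250e-9, 2.3))  # ultraviolet: ~2.7 eV electrons
```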
Indeed, there was no classical physics explanation for the spectral behavior of hydrogen gas until 1913, when the Danish physicist Niels Bohr (1885–1962) published his model of the hydrogen atom that did explain hydrogen’s spectrum. In the Bohr model, the electron orbits the proton only at certain discrete distances from the proton, whereas in classical physics the electron can orbit at any distance from the proton. In classical physics the electron must continually emit radiation as it orbits, but in Bohr’s model the electron emits energy only when it leaps from one possible orbit to another. Bohr’s explanation of the hydrogen atom worked so well that scientists assumed that it must work for other atoms as well. The hydrogen atom is very simple, because it consists of only two particles, a proton and an electron. Other atoms have increasing numbers of particles (more electrons orbiting the nucleus, which contains more protons as well as neutrons) which makes their solutions much more difficult, but the Bohr model worked for them as well. The Bohr model is essentially the model that most of us learned in school. While Bohr’s model was obviously successful, it seemed to pull some new principles out of the air, and those principles contradicted principles of classical physics. Physicists began to search for a set of underlying unifying principles to explain the model and other aspects of the emerging new physics. We will omit the details, but by the mid-1920s, those new principles were in place. The basis of this new physics is that in very small systems, as within atoms, energy can exist in only certain small, discrete amounts with gaps between adjacent values. This is radically different from classical physics, where energy can assume any value. We say that energy is quantized because it can have only certain discrete values, or quanta. The mathematical theory that explains the energies of small systems is called quantum mechanics. Quantum mechanics is a very successful theory, yet a few people do not accept it. Why? There are several reasons. One reason for rejection is that the postulates of quantum mechanics just do not feel right. They violate our everyday understanding of how the physical world works. However, the problem is that very small particles, such as electrons, do not behave the same way that everyday objects do. We invented quantum mechanics to explain small things such as electrons because our everyday understanding of the world fails to explain them. The peculiarities of quantum mechanics disappear as we apply quantum mechanics to larger systems. As we increase the size and scope of small systems, we find that the oddities of quantum mechanics tend to smear out and assume properties more like our common-sense perceptions. That is, the peculiarities of quantum mechanics disappear in larger, macroscopic systems. Another problem that people have with quantum mechanics is certain interpretations applied to quantum mechanics. For instance, one of the important postulates of quantum mechanics is the Schrödinger wave equation. When we apply the Schrödinger equation to a particle such as an electron, we get a mathematical wave as a description of the particle. What does this wave mean? Early on, physicists realized that the wave represented a probability distribution. Where the wave had a large value, the probability was large of finding the particle in that location, but where the wave had low value, there was little probability of finding the particle there. This is strange. 
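To attach numbers to the Bohr picture described above: the model reproduces the Rydberg formula for hydrogen’s emission lines. Here is a minimal sketch using the standard Rydberg constant; the transitions chosen are simply the visible Balmer series, an illustrative selection.

```python
R = 1.097e7  # Rydberg constant, 1/m

def hydrogen_line_nm(n_low, n_high):
    """Rydberg formula: 1/lambda = R * (1/n_low^2 - 1/n_high^2)."""
    inv_lam = R * (1.0 / n_low**2 - 1.0 / n_high**2)
    return 1e9 / inv_lam

# Balmer series: the electron leaps down to the n = 2 orbit.
for n_high in (3, 4, 5, 6):
    print(f"{n_high} -> 2 : {hydrogen_line_nm(2, n_high):.1f} nm")
# ~656, 486, 434 and 410 nm: hydrogen's familiar red, blue-green and violet lines
```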
Newtonian physics had led to determinism—the absolute knowledge of where a particle was at a particular time from the forces and other information involved. Yet, the probability function does accurately predict the behavior of small particles such as electrons. Even Albert Einstein, whose early work led to much of quantum mechanics, never liked this probability. He once famously remarked, “God does not play dice with the universe.” Erwin Schrödinger (1887–1961), who had formulated his famous Schrödinger equation stated in 1926, “If we are going to stick to this ****** quantum-jumping, then I regret that I ever had anything to do with quantum theory.” For instance, let us consider a double slit experiment. If we send a wave toward an obstruction with two slits in it, the wave will pass through both slits and produce a distinctive interference pattern behind the slits. This is because the wave passes through both slits. If we send a large number of electrons toward a similar apparatus, the electrons will also produce an interference pattern behind the slits, suggesting that the electrons (or their wave functions) went through both slits. However, if we send one electron at a time toward the slits and look for the emergence of each electron behind the slits, we will find that each electron will emerge through one slit or the other, but not both. How can this be? Indeed, this is perplexing. The most common resolution is the Copenhagen interpretation, named for the city where it was developed. This interpretation posits that an individual electron does not go through either slit, but instead exists in some sort of meta-stable state between the two states until we observe (detect) the electrons. At the point of observation, the electron’s wave equation collapses, allowing the electron to assume one state or the other. Now, this is weird, but most alternate explanations are even weirder, so you might understand why some people may have a problem with quantum mechanics. Classical physics introduced determinism, quantum mechanics introduced indeterminism. Is there a way out of this dilemma? Yes. Why do we need an interpretation to quantum mechanics? No one demanded any such interpretation of Newtonian physics. No one asked, “What does it mean?” There is no meaning, other than the fact that Newtonian physics does a good job of describing what we see in the macroscopic world. The same ought to be true for quantum mechanics. It does a good job of describing the microscopic world. Whereas classical physics introduced determinism, quantum mechanics introduced indeterminism. This indeterminism is fundamental in the sense that uncertainty in outcome will still exist even if we have all knowledge of the relevant input parameters. Newtonian determinism fit well with the concept of God’s sovereignty, but the fundamental uncertainty of quantum mechanics appears to rob God of that attribute. However, this assumes that quantum mechanics is a complete theory, that is, that quantum mechanics is an ultimate theory. There are limits to the applications of quantum mechanics, such as the fact that there is no theory of quantum gravity. If the history of science is any teacher, we can expect that quantum mechanics will one day be replaced by some other theory. This other theory probably will include quantum mechanics as a special case of the better theory. That theory may clear up the uncertainty question. 
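The double-slit discussion above can be made concrete with a toy calculation. The sketch below is an idealized model, an assumption for illustration rather than a full diffraction treatment: each slit contributes a complex amplitude, and the screen intensity follows the Born rule, so adding amplitudes (both slits genuinely open) gives fringes while adding probabilities (one slit at a time) does not.

```python
import numpy as np

wavelength = 1.0                  # arbitrary units
d, L = 5.0, 1000.0                # slit separation and distance to the screen
x = np.linspace(-200, 200, 9)     # a few positions on the screen

k = 2 * np.pi / wavelength
psi1 = np.exp(1j * k * np.sqrt(L**2 + (x - d / 2)**2))   # amplitude via slit 1
psi2 = np.exp(1j * k * np.sqrt(L**2 + (x + d / 2)**2))   # amplitude via slit 2

fringes = np.abs(psi1 + psi2)**2                 # amplitudes add, then square: interference
no_fringes = np.abs(psi1)**2 + np.abs(psi2)**2   # probabilities add: no interference
print(np.round(fringes, 2))
print(np.round(no_fringes, 2))
```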
As an aside, we perhaps ought to mention that the determinism derived from Newtonian physics also produces a conclusion unpalatable to many Christians. If determinism is true, then all future events are predetermined from the initial conditions of the universe. Just as the Copenhagen interpretation of quantum mechanics led to even God not being able to know the outcome of an experiment, many people applying determinism concluded that God was unable to alter the outcome of an experiment. That is, God was bound by the physics that rules the universe. This quickly led to deism. Most, if not all, people today who reject quantum mechanics refuse to accept this extreme interpretation of Newtonian physics. They ought to recognize that just as determinism is a perversion of Newtonian physics, the Copenhagen interpretation is a perversion of quantum mechanics. The important point is that just as classical mechanics does a good job in describing the macroscopic world, quantum mechanics does a good job in describing the microscopic world. We ought not expect any more of a theory. Consequently, most physicists who believe the biblical account of creation have no problem with quantum mechanics. There are two theories of relativity, the special and general theories. We will briefly describe the special theory of relativity first. Even before Newton, Galileo (1564–1642) had conducted experiments with moving bodies. He realized that if we move toward or away from a moving object, the relative speed that we measure for that object depends upon that object’s motion and our motion. This Galilean relativity is a part of Newtonian mechanics. The same behavior is true for the speed of waves. For instance, if we ride in a boat moving through water with waves, the speed of the waves that we measure will depend upon our motion and on the motion of the waves. In 1881, Albert A. Michelson (1852–1931) conducted a famous experiment that he refined and repeated in 1887 with Edward W. Morley (1838–1923). In this experiment, they measured the speed of light parallel and perpendicular to our annual motion around the sun. Much to their surprise, they found that the speed of light was the same regardless of the direction they measured it. This null result baffled physicists, for if taken at face value, it suggested that the earth did not orbit the sun, while there is other evidence that the earth does indeed orbit the sun. In 1905, Albert Einstein took the invariance of the speed of light as a postulate and worked out its consequences. He made three predictions concerning an object as its speed approaches the speed of light: 1. The length of the object as it passes will appear to shorten toward zero. 2. The object’s mass will increase without bound. 3. The passage of time as measured by the object will approach zero. These behaviors are strange and do not conform to what we might expect from everyday experience, but keep in mind that in everyday experience we do not encounter objects moving at any speed close to that of light. Eventually, these predictions were confirmed in experiments. For instance, particle accelerators accelerate small particles to very high speeds. We can measure the masses of the particles as we accelerate them, and their masses increase in the manner predicted by the theory. In other experiments, very fast-moving, short-lived particles exist longer than they do when moving very slowly. The rate of time dilation is consistent with the predictions of the theory. 
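All three predictions listed above are governed by the same Lorentz factor, γ = 1/√(1 − v²/c²). Below is a minimal sketch using 99.5% of the speed of light as an illustrative, assumed speed in the spirit of the fast-particle experiments just mentioned.

```python
import math

def gamma(v_over_c):
    """Lorentz factor for a speed expressed as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v_over_c**2)

g = gamma(0.995)
print(f"gamma ≈ {g:.1f}")                            # ~10
print(f"measured length   : L -> L / {g:.1f}")       # length contraction
print(f"measured lifetime : tau -> {g:.1f} * tau")   # time dilation
print(f"relativistic mass : m -> {g:.1f} * m")       # mass increase
```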
Length contraction is a little more difficult to directly test, but we have tested it as well. Relativity Confirmed Relativity Confirmed In 1919 a total eclipse of the sun allowed scientists to confirm Einstein’s general theory of relativity. As a result of the sun’s gravitation, stars appeared to be displaced from their true positions, just as Einstein’s theory predicted. Einstein’s theory of special relativity applies to particles moving at a constant rate but does not address their acceleration. Einstein addressed that problem with his general theory in 1916, but he also treated the acceleration due to gravity. In general relativity, space and time are physical things that have a structure in some ways similar to a fabric. Einstein treated time as a fourth dimension in addition to the normal three dimensions of space. We sometimes call this four-dimensional entity space-time or simply space. The presence of a large amount of matter or energy (Einstein previously had shown their equivalence) alters space. Mathematically, the alteration of space is like a curvature, so we say that matter or energy bends space. The curvature of space telegraphs the presence of matter and energy to other matter and energy in space, and this more deeply answered a question about gravity. Newton had hypothesized that gravity operated through empty space, but his theory could not explain at all how the information about an object’s mass and distance was transmitted through space. In general relativity, an object must move through a straight line in space-time, but the curvature of space-time induced by nearby mass causes that straight-line motion to appear to us as acceleration. Einstein’s new theory made several predictions. The first opportunity to test the theory happened during a total solar eclipse in 1919. During the eclipse, astronomers were able to photograph stars around the edge of the sun. The light from those stars had to pass very close to the sun to get to the earth. As the stars’ light passed near the sun, the sun attracted the light via the curvature of space-time. This caused the stars to appear farther from the sun than they would have otherwise. Newtonian gravity also predicts a deflection of starlight toward the sun, but the deflection is less than with general relativity. The observed amount of deflection was consistent with the predictions of general relativity. Astronomers have repeated the experiment many times since 1919 with ever-improving accuracy. For many years, radio astronomers have measured with great precision the locations of distant-point radio sources as the sun passed by, and those results beautifully agree with the predictions. Another early confirmation was the explanation of a small anomaly in the orbit of the planet Mercury that Newtonian gravity could not explain. Many other experiments of various types have repeatedly confirmed general relativity. Some experiments today even allow us to test for slight variations of Einstein’s theory. We can apply general relativity to the universe as a whole. Indeed, when we do this, we discover that it predicts that the universe is either expanding or contracting; it is a matter of observation to determine which the universe actually is doing. In 1928, Edwin Hubble (1889–1953) showed that the universe is expanding. Most people today think that the expansion began with the big bang, the supposed sudden appearance of the universe 13.7 billion years ago. However, there are many other possibilities. 
For instance, the creation physicist Russell Humphreys proposed his white hole cosmology, assuming that general relativity is the correct theory of gravity (see his book Starlight and Time1). It is interesting to note that universal expansion is consistent with certain Old Testament passages (e.g., Psalm 104:2) that mention the stretching of the heavens. Seeing that there is so much evidence to support Einstein’s theory of general relativity, why do some creationists oppose the theory? There are at least three reasons. One reason is that, as with quantum mechanics, modern relativity theory appears to violate certain common-sense views of the way that the world works. For instance, in everyday experience, we don’t see mass change and time appear to slow. Indeed, general relativity forces us to abandon the concept of simultaneity of time. Simultaneity means that time progresses at the same rate for all observers, regardless of where they are. As we previously stated, in special relativity, time slows with greater speed. However, with general relativity, the rate at which time passes depends not only upon speed but also on one’s location in a gravitational field. The deeper one is in a gravitational field, the slower that time passes. For example, a clock at sea level will record the passage of time more slowly than a clock at mile-high Denver. Admittedly, this is weird. However, the discrepancy between the clocks at these two locations is so miniscule as to not appear on most clocks, save the most accurate atomic clocks. This sort of thing has been measured several times, and the discrepancies between the clocks involved always are the same as those predicted by theory. Thus, while our perception is that time flows uniformly everywhere, the reality is that the passage of time does depend upon one’s location, but the differences are so small in the situations encountered on the earth that we cannot perceive them. That is, the predictions of general relativity on earth are consistent with our ability to perceive time. However, there are conditions beyond the earth that the loss of simultaneity would be very obvious if we could experience them. A second reason why some creationists oppose modern relativity theory is the misappropriation of modern relativity theory to support moral relativism. Unfortunately, modern relativity theory arose at precisely the time that moral relativism became popular. Moral relativists proclaim that “all things are equal,” and they were very eager to snatch some of the triumph of relativity theory to support their cause. There are at least two problems with this misappropriation. First, it does not follow that a principle that works in the natural world automatically operates in the world of morality. The physical world is material, but the world of morality is immaterial. Second, the moral relativists either did not understand relativity or they intentionally misused it. Despite the common misconception, modern relativity theory does not tell us that everything is relative. There are absolutes in modern theory of relativity. The speed of light is a constant. While the passage of time may vary, general relativity provides an absolute way in which to compare the passage of time in two reference frames. The modern theory of relativity in no way supports moral relativism. The third reason why some creationists reject modern relativity theory is that they think that general relativity inevitably leads to the big-bang model. 
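Two of the numbers behind the tests discussed in this and the preceding paragraphs are easy to reproduce. The sketch below evaluates the general-relativistic deflection of starlight grazing the sun, α = 4GM/(c²R) (twice the Newtonian value), and the weak-field clock-rate difference, roughly gh/c², for Denver’s elevation; the inputs are standard constants plus an assumed elevation of about 1,600 m.

```python
import math

G, c = 6.674e-11, 2.998e8
M_sun, R_sun = 1.989e30, 6.957e8

# Deflection of starlight at the solar limb (the 1919 eclipse test).
alpha_gr = 4 * G * M_sun / (c**2 * R_sun)      # radians
to_arcsec = 180 / math.pi * 3600
print(f"GR deflection        : {alpha_gr * to_arcsec:.2f} arcsec")       # ~1.75
print(f"Newtonian deflection : {alpha_gr / 2 * to_arcsec:.2f} arcsec")   # ~0.87

# Clock at mile-high Denver versus sea level: fractional rate difference ~ g*h/c^2.
g_earth, h = 9.81, 1600.0
fraction = g_earth * h / c**2
print(f"rate difference      : {fraction:.1e}")                              # ~1.7e-13
print(f"per year             : {fraction * 365.25 * 86400 * 1e6:.1f} microseconds")
```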
However, the big-bang model is just one possible origin scenario for the universe; there are many other possibilities. We have already mentioned Russ Humphreys’s white hole cosmology, and there are other possible recent creation models based upon general relativity. True—if general relativity is not correct, then the big-bang model would be in trouble. However, if general relativity is correct, then the shortcut attempt to undermine the big-bang model will doom us from ever finding the correct cosmology. String Theory With the establishment of quantum mechanics in the 1920s, the development of the science of particle physics soon followed. At first, only a few particles were known: the electron, proton, and neutron. These particles all had mass and were thought at the time to be the fundamental building blocks of matter. Quantum mechanics introduced the concept that material particles could be described by waves, and conversely that waves could be described by particles. That led to the concept of particles that had no mass, such as photons, the particles that make up light. Eventually, physicists saw the need for other particles, such as neutrinos and antiparticles. Evidence for these odd particles soon followed. Experimental results suggested the existence of other particles, such as the meson, muon, and tau particles, as well as their antiparticles. Many of these new particles were very short-lived, but they were particles nevertheless. Physicists began to see patterns in the growing zoo of particles. They could group particles according to certain properties. For instance, elementary particles possess angular momentum, a property normally associated with spinning objects, so physicists say that elementary particles have “spin.” Imagining elementary particles as small spinning spheres is useful, but modern theories view this as a bit naive. Spin comes in a quantum amount. Some particles have whole integer values of quantum spin. That is, they have integer multiples (0, ±1, ±2, etc.) of the basic unit of spin. Physicists call these particles Bosons. Other particles have half integer (±1/2, ±3/2, etc.) amounts of spin, and are known as fermions. Bosons and fermions have very different properties. Physicists also noticed that elementary particles tended to have certain mathematical relationships between one another. Physicists eventually began to use group theory, a concept from abstract algebra, to classify and study elementary particles. By the 1960s, physicists began to suspect that many elementary particles, such as protons and neutrons, were not so elementary after all, but consisted of even more elementary particles. Physicists called these more elementary particles quarks, after an enigmatic word in a James Joyce poem. According to the theory, there are six types of quarks. Many particles, such as protons and neutrons, consist of the combination of three quarks. The different combinations of quarks lead to different particles. Some of those combinations of quarks ought to produce particles that no one had yet seen, so these combinations amounted to predictions of new particles. Particles physicists were able to create these particles in experiments in particle accelerators, so the successful search for those predicted particles was confirmation of the underlying theory. Therefore, quark theory now is well established. In recent years, particle physicists have in similar fashion developed string theory. 
Physicists have noticed that certain patterns among elementary particles can be explained easily if particles behave as tiny vibrating strings. These strings would require the existence of at least six additional dimensions of space. We already know that the universe has three normal spatial dimensions as well as the dimension of time, so these six extra dimensions bring the total number of dimensions to ten. The reason why we do not normally see the other six dimensions is that they are tightly curled up and hidden within the tiny particles themselves. At extremely high energies, the extra dimensions ought to manifest themselves. Therefore, particle physicists can predict what kind of behavior strings ought to exhibit when they accelerate particles to extremely high energies. The problem is that current particle accelerators are not nearly powerful enough to produce these effects. As theoretical physicists refine their theories and we build new, powerful particle accelerators, physicists expect that one day we can test whether string theory is true, but for now there is no experimental evidence for string theory. The Size of Strings The Size of Strings Looking at progressively smaller parts of a water molecule, we can glimpse the complexity God designed in all things. We realize the illustration used deuterium, a rare isotope of hydrogen, to help convey the point. Currently, most physicists think that string theory is a very promising idea. Assuming that string theory is true, there still remains the question as to which particular version of string theory is the correct one. You see, string theory is not a single theory but instead is a broad outline of a number of possible theories. Once we confirm string theory, we can constrain which version properly describes our world. If true, string theory could lead to new technologies. Furthermore, a proper view of elementary particles is important in many cosmological models, such as the big bang. This is because in the big-bang model, the early universe was hot enough to reveal the effects of string theory. Modern physics is a product of the 20th century and relies upon twin pillars: quantum mechanics and general relativity. Both theories have tremendous experimental support. Christians ought not to view these theories with such great suspicion. True, some people have perverted or hijacked these theories to support some nonbiblical principles, but some wicked people have even perverted Scripture to support nonbiblical things. We ought to recognize that modern physics is a very robust, powerful theory that explains much. At the same time, the theory is very incomplete in some respects. In time, we ought to expect that some new theories will come along that will better explain the world than these theories do. However, we know that God’s Word does not change. String theory has emerged in the 21st century as the next great idea in physics. Time will tell if string theory will live up to our expectations. What ought to be the reaction of Christians to this? We must be vigilant to investigate the amount of nonbiblical influences that may have crept into modern thinking, particularly in the interpretation of string theory (as with modern physics). However, we must be careful not to throw out the baby with the bath water. That is, can we reject the anti-Christian thinking that many have brought to the discussion? The answer is certainly yes. As with the question of origins, we must strive to interpret these things on our terms, guided by the Bible. 
Do the new theories adequately describe the world? Can we see the hand of the Creator in our new physics? Can we find meaning in our studies that brings glory to God? If we can answer yes to each of these questions, then these new theories ought not to be a problem for the Christian.
1. D. Russell Humphreys, Starlight and Time (Green Forest, AR: Master Books, 1994).
Open access peer-reviewed chapter

Mechanical Models of Microtubules

By Slobodan Zdravković

Submitted: May 7th 2017. Reviewed: September 21st 2017. Published: May 2nd 2018. DOI: 10.5772/intechopen.71181

Microtubules are the major part of the cytoskeleton. They are involved in nuclear and cell division and serve as a network for motor proteins. The first model that describes nonlinear dynamics of microtubules was introduced in 1993. Three nonlinear models are described in this chapter. They are the longitudinal U-model, representing an improved version of the first model, the radial φ-model and a new general model. Also, two mathematical procedures are explained. These are the continuum and semi-discrete approximations. The continuum approximation yields either kink-type or bell-type solitons, while the semi-discrete one predicts localized modulated waves moving along microtubules. Some possible improvements and suggestions for future research are discussed.

• microtubules • partial and ordinary differential equations • kink solitons • breathers

1. Introduction

A cell is defined as eukaryotic if it has a membrane-bound nucleus. Such cells are generally larger and much more sophisticated than prokaryotic ones. Microtubules (MTs) are the basic components of the cytoskeleton existing in eukaryotes [1]. They are long structures that spread between a nucleus and a cell membrane. MTs play an essential role in the shaping and the maintenance of cells and are involved in cell division. Also, they represent a network for motor proteins. These proteins move with a velocity of 0.12 μm/s [2], carrying a certain cargo such as a mitochondrion. All eukaryotic cells produce two kinds of tubulin proteins. These are α and β tubulins, or monomers, and they spontaneously arrange head to tail, forming a biologically functional subunit that we call a heterodimer, or a dimer for short. When intracellular conditions favor assembly, the dimers assemble into long structures called protofilaments (PFs). Microtubules are usually formed of 13 PFs, as shown in Figure 1. Figure 1. A tubulin dimer, a protofilament and a microtubule [3]. Hence, MTs are long cylindrical polymers whose lengths vary from a few hundred nanometers up to meters in long nerve axons [4]. Each dimer is an electric dipole whose mass and length are m = 1.8×10⁻²² kg and l = 8 nm, respectively [5]. The component of its electric dipole moment in the direction of the PF is p = 337 Debye = 1.13×10⁻²⁷ C·m [6]. Consequently, the MT as a whole appears to be a giant dipole with its negatively charged end coinciding with the biologically positive (more active) end, and vice versa. This is the reason why an intrinsic electric field exists within the MT. MTs in non-neuronal cells are unstable structures. They exist in phases of elongation (polymerization) or rapid shortening (depolymerization), and this size fluctuation has been called dynamic instability [7, 8]. Notice that the shrinkage rate is bigger than the growth rate (see Ref. [9] and references therein). MTs grow steadily at the positive end, corresponding to the β-subunit, and then shrink rapidly by loss of tubulin dimers at the negative end, corresponding to the α-monomer. Many anticancer drugs, for example, taxol (paclitaxel), prevent growth and shrinkage of MTs and thus prevent cell proliferation [10]. MTs existing in neuronal cells are stable and, consequently, neurons, once formed, do not divide [4].
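As an aside, the numbers just quoted give a feel for the scales involved. The following minimal sketch assumes an illustrative 1 μm microtubule; that length is an arbitrary choice, not a value from the text.

```python
l_dimer = 8e-9       # dimer length, m
n_pf = 13            # protofilaments per microtubule
p_dimer = 1.13e-27   # dipole moment component along the PF, C*m

mt_length = 1e-6     # an illustrative 1-micrometre microtubule
dimers_per_pf = mt_length / l_dimer
total_dimers = n_pf * dimers_per_pf
print(f"dimers per protofilament : {dimers_per_pf:.0f}")   # ~125
print(f"dimers per microtubule   : {total_dimers:.0f}")    # ~1625
print(f"summed dipole moment     : {total_dimers * p_dimer:.2e} C*m")
```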
This stability is crucial as there are evidences that neuronal MTs are responsible for processing, storage and transduction of biological information in a brain [4, 11]. It was mentioned that MTs represent the traffic road for motor proteins. Some more information can be found in Ref. [9] and in an exhaustive review paper [12]. It suffices now to state that the cellular motors with dimensions of less than 100 nm convert chemical energy into useful work. These small machines have the fundamental role of dissipation in biological systems, which has been confirmed by both the theoretical and the experimental investigations [13]. The molecular motors dissipate continuously and operate as irreversible systems [13]. It is clear that any molecular motor, to start moving, should obtain a certain signal. One of the promising dynamical mechanisms for intracellular signaling is solitary waves, which is explained in this chapter. 2. Mechanical models MTs, as well as all biological systems, are nonlinear in nature. Strong covalent chemical bonds are usually modeled by linear “springs”, while weak chemical interactions, existing in all biological systems, are modeled by nonlinear “springs”. This means that expressions for energy of biological systems require nonlinear terms, which brings about nonlinear partial differential equations (PDEs) explaining nonlinear dynamics of these systems. This is the topic of the present chapter. We will see that, in case of MTs, the solutions of these nonlinear PDEs are solitary waves. The word soliton was introduced in 1965 to designate solitary waves describing the propagation of excitations in continuous media with nonlinearity and dispersion [14]. The first qualitative description of solitary waves dates back to 1834 when hydrodynamic engineer John Scott Russell observed them on a surface in a shallow channel [15]. The wave was so stable that the engineer followed it about 1 or 2 miles. From then, there has been tremendous interest for various kinds of solitons in many branches of physics [15, 16, 17, 18, 19]. In this chapter, the terms soliton and solitary waves are treated as synonyms, which is commonly accepted in literature. Solitons are localized waves possessing some interesting properties. The most important is their stability in a sense that they conserve their shape and energy after mutual interaction. In other words, they can pass through one another without annihilation. This was experimentally observed in neurons [20]. To model complex MT dynamics, we should introduce some simplifications. To the best of the author’s knowledge, all the models introduced so far have only one degree of freedom per dimer. Hence, for the models explained in this chapter, elementary subunits of PFs are dimers and they perform either longitudinal or angular oscillations and the appropriate models can be called as longitudinal or angular (radial), respectively. The longitudinal contacts along PFs are much stronger than those between adjacent PFs [21, 22], which allows us to construct a simplified Hamiltonian of MT, which is, practically, Hamiltonian for a single PF only. However, the influence of the neighboring PFs is taken into consideration through the electric field. Namely, each dimer exists in the electric field coming from the dimers belonging to all PFs. Also, the nearest neighbor approximation is assumed. 3. U-model The first model that describes nonlinear dynamics of MTs is a longitudinal one. It was introduced in 1993 by Satarić et al. [23]. 
According to the model, the dimers perform angular oscillations but a coordinate u, describing the dimer’s displacement, is a projection of the top of the dimer on the direction of PF. Therefore, the displacements are radial but the used coordinate is longitudinal. There is a real longitudinal model assuming longitudinal displacements of the dimers that we call as Z-model [24]. Both U- and Z-models bring about equal crucial differential equations and the latter one will not be studied here. Somewhat improved and more general version of the first nonlinear model is what we call as U-model [25] and this will be explained in the following paragraphs. Both models are based on the fact mentioned above that the dimers are electric dipoles and that the whole MT can be regarded as ferroelectric [23, 26], which means that the interaction between a single dimer and its surrounding can be modeled by W-potential [23, 27]. This yields to the following Hamiltonian for MT [23, 25, 28] where dot means the first derivative with respect to time while the integer ndetermines the position of the considered dimer in PF. The first term obviously represents a kinetic energy of the dimer of mass m. The second one is interaction between the neighboring dimers belonging to the same PF in the nearest neighboring approximation and kis an intradimer stiffness parameter. The next two terms represent the W-potential energy mentioned earlier, where the parameters Aand Bshould be determined or, at least, estimated and are assumed to be positive. We should point out that the double-well potential is rather common in physics [27, 29, 30]. The very last term is coming from the fact that the dimer is the electric dipole existing in the field of all other dimers, where Q > 0 represents the excess charge within the dipole and E > 0 is internal electric field. The last three terms together can be regarded as unsymmetrical W-potential. Our final goal is the function unt, describing nonlinear dynamics of MT. This function is a solution of so-called dynamical equation of motion, which can be obtained from Eq. (1). To derive it, we introduce generalized coordinates qnand pndefined as qn=unand pn=mdun/dt. Using well-known Hamilton’s equations of motion dpn/dt=dH/dqnand dqn/dt=dH/dpn, we obtain the following discrete differential equation that should be solved The last term is a viscosity force with γbeing a viscosity coefficient [23]. Therefore, nonlinear dynamics of MTs has been described by Eq. (2). Obviously, nonlinearity is coming from the fourth degree term in the W-potential. It was explained earlier that we used some approximations to derive Eq. (2). However, we need one more to solve it. We now explain two mathematical methods for solving this equation. Practically, these two approaches are two approximations. They are continuum and semi-discrete approximations. We will see that the different mathematical procedures yield to different solutions. Therefore, the function untdepends not only on the physical system but also on the used mathematical method. Let us explain the continuum approximation first. A question if MTs are discrete or continuum systems was studied in Ref. [31], where it was shown that the continuum approximation is valid. The continuum approximation means a transition untuxt, which allows a series expansion of the terms un±1, that is, un±1u±uxl+122ux2l2, where lis the dimer’s length explained earlier. In fact, PF can be seen as one-dimensional crystal with lbeing a period of the lattice. 
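Before the continuum limit is taken, the discrete dynamics can be explored directly. The sketch below assumes that Eq. (2) has the form implied by Hamiltonian (1) plus the viscosity force, m ü_n = k(u_{n+1} + u_{n-1} − 2u_n) + A u_n − B u_n³ + QE − γ u̇_n; the parameter values are placeholders chosen for illustration, not the estimates used in the cited papers.

```python
import numpy as np

# Semi-implicit (symplectic) Euler integration of the assumed discrete U-model dynamics.
m, k, A, B, QE, gamma = 1.0, 1.0, 1.0, 1.0, 0.05, 0.02   # placeholder parameters
N, dt, steps = 200, 0.01, 20000

u = -np.sqrt(A / B) * np.ones(N)   # start in one minimum of the W-potential...
u[N // 2:] = np.sqrt(A / B)        # ...and the other: a kink-like initial profile
v = np.zeros(N)

for _ in range(steps):
    coupling = k * (np.roll(u, -1) + np.roll(u, 1) - 2 * u)   # periodic boundaries
    force = coupling + A * u - B * u**3 + QE - gamma * v
    v += dt * force / m
    u += dt * v

print(np.round(u[N // 2 - 4: N // 2 + 4], 3))   # the kink front after relaxation
```

With that discrete picture in mind, we return to the series expansion of the terms u_{n±1} quoted above.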
This straightforwardly brings about the following continuum dynamical equation of motion This is PDE that cannot be easily solved. Hopefully, this equation can be transformed into an ordinary differential equation (ODE). It is well known that, for a given wave equation, a traveling wave uξis a solution which depends upon xand tonly through a unified variable ξas ξ=κxωt, where κand ωare constants. If we substitute the variables xand tby ξwe straightforwardly transform Eq. (3) into the following ODE where udu/dξand Eq. (4) becomes the appropriate one in Ref. [23] for αu=1. Therefore, the U-model is more general than its predecessor introduced in Ref. [23]. It is crucial that the parameter αucan be determined together with the function Ψfor known or estimated σand ρu. This is because αuhas very important physical meaning. The first term in Eq. (3) is the inertial term and it is coming from the kinetic energy in Hamiltonian (1), while the second one is the elastic one. Therefore, positive αumeans that the inertial term is bigger than the elastic one and vice versa. Eq. (4) has already been solved using different mathematical procedures like standard procedure [23, 27, 29, 30] and method of factorization [31, 32]. There exists a group of procedures where the function Ψis represented as a serious expansion over other known function like Ψ=k=0NAkΦk. The function Φis usually known and we plug Ψinto Eq. (4) and determine the coefficients Ak. A common example for Φis a solution of Riccati equation, which is either tangent or tangent hyperbolic. As only the latter function may have physical meaning, we call the method as tangent hyperbolic function method (THFM) [25, 33, 34, 35] and extended or modified extended THFM [36]. The function Φcan also be one of Jacobian elliptic functions [37] and, even, unknown [38]. It is very likely that the most general procedure is the simplest equation method (SEM) [39, 40, 41] and its simplified version called as modified simplest equation method (MSEM) [42]. According to SEM, the series expansion is [39, 40, 41]. where A0, Akand Bkare coefficients that should be determined and Φrepresents the first derivative. In general, the function Φ=Φξis known and represents a solution of a certain ODE of lower order than the equation that should be solved. A commonly used example is the Riccati equation [40] To determine the positive integer Nin Eq. (6), we should plug Ψ=c/ξp, c=const, into Eq. (4) and concentrate our attention on the leading terms [42]. One can easily show that N=1for Eq. (4) as the leading terms are proportional to ξp+2and ξ3p. The well-known general solution of Eq. (7) is [39, 40] In what follows, we assume ξ0=0. Our next step is determination of the parameters A0, A1, B1, a, band αu. According to Eqs. (6) for N=1and (7), we obtain the expressions for Ψ, Ψand Ψ3as required by Eq. (4), which yields to the following expression: Obviously, this is satisfied if all the coefficients are simultaneously equal to zero. This brings about a system of seven equations, which can be obtained using Mathematica or similar software [39]. One of them can be written as indicating two possible relationships between the parameters A1and B1. Hence, there are a few cases to be studied. They are as follows [39]: (1) B1=0, a=0; (2) B1=0, a0; (3) A1=B1; (4) 2αu=A1B12, A1B10; (5) A1=0, a0and (6) A1=0, a=0. It is obvious that the first case represents nothing but a simpler method called extended tanh-function method. 
The system mentioned earlier brings about [39] The final result is [39] where A0iis the following three real solutions of the first of Eqs. (11) Of course, these three solutions exist for σ<σ0. The case σ>σ0was discussed in Ref. [25]. All the three solutions are shown in Figure 2 for σ0.9σ0and ρu=1. Of course, these solutions reproduce previously known results [25]. Figure 2 shows that the solutions of Eq. (4) are kink and antikink solitons. More detailed analysis of their physical meaning is given in Ref. [25]. Figure 2. The functions Ψ ξ for ρ u = 1 and σ = 0.34 . It was shown [42] that the second case is equal to the first one indicating that the value of ais irrelevant if B1=0. In other words, we could have assumed a simpler version of the Riccati equation neglecting the term 2aΦ. The third case is more interesting. It turns out that, instead of the three lines in Figure 2, that is, the three solutions, we obtain infinitely many lines corresponding to each of them [39]. However, they represent three groups of parallel lines, which means that all these solutions are only shifted functions and, consequently, have equal physical meaning. Therefore, this case does not bring about any physically new result. Case 4 is suggested by Eq. (10). The system of seven equations, mentioned earlier, gives the first and the last term in Eq. (11) as well as The final expression for Ψis This case yields to a new solution, which was not obtained using less general mathematical methods. However, it may be interesting from mathematical point of view only as Ψdiverges for ξ=0. Case 5 is a simplified version of SEM, explained in Ref. [42]. The mentioned system brings about ρu=0as well as where a notation A0has been introduced to distinguish this parameter from A0, used in the previous cases. It is interesting to compare the polynomials for A0and A0, existing in Eqs. (11) and (17). We can see that. which means that the values for A0iare given by Eqs. (13), (14) and (18). We can easily show that the final solution for Ψis [39] Obviously, this function cannot diverge for any value of but only for 1<K<1. Also, Kshould be real and these two requirements eliminate Ψ2and Ψ3[39], which means that Ψand A0in Eq. (19) are Ψ1and A01. The function Ψ1ξis shown in Figure 3 for a=0.1and for two values of the parameter σ. We notice very interesting result that is a bell-type soliton! This certainly demonstrates the advantage of SEM method over the less general ones. Figure 3. A bell-type soliton for a = 0.1 and σ = 0.34 (a) and σ = 0.1 (b). It is important to study the physical meanings of the parameters aand σ. Eq. (19) indicates that solitonic width is inversely proportional to aand that adoes not affect maximum of the wave. Figure 3 shows that the amplitude of Ψ1is a decreasing function with respect to σ. Finally, the last case gives the solution which is obviously divergent for bξ=, k=0,±1,±2,. Therefore, all the cases are explained and we can see that the continuum approximation yields to both kink solitons and bell-type solitons. The latter may exist only if viscosity is neglected. It was stated earlier that the coordinate uwas the projection of the top of the dimer on the direction of MT. A patient reader may ask how ucan be negative when this is the projection. This question is answered in Ref. [28]. It was mentioned earlier that there are two approximations that can be used to solve Eq. (2). Now we get back to Eq. (2) and study semi-discrete one [15, 28, 43]. 
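Before turning to the semi-discrete treatment, note that the two families of continuum solutions just summarized, the kink/antikink profiles of Figure 2 and the bell-type soliton of Figure 3, have the familiar tanh and sech-type shapes. The sketch below tabulates only these generic functional forms, not the exact amplitudes of Eqs. (12) and (19).

```python
import numpy as np

xi = np.linspace(-10, 10, 11)
kink = np.tanh(xi / 2)               # kink: connects the two minima of the W-potential
antikink = -np.tanh(xi / 2)          # antikink: the mirror image
bell = 1.0 / np.cosh(xi / 2)**2      # bell-type (sech^2) profile, localized around xi = 0

for row in zip(xi, kink, antikink, bell):
    print("xi={:6.1f}  kink={:7.3f}  antikink={:7.3f}  bell={:6.3f}".format(*row))
```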
A mathematical basis for the method is the multiple-scale method, or derivative-expansion method [16, 44]. We assume small oscillations, which straightforwardly transforms Eq. (2) into Eq. (22). According to the semi-discrete approximation, we look for a wave solution in the form of a modulated wave [28, 45], where ω is the optical frequency of the linear approximation, q = 2π/λ > 0 is the wave number, cc represents the complex conjugate terms and the function F_0 is real. Of course, l is the dimer's length, as mentioned earlier. The function F is continuous and represents an envelope, while exp(iθ_n), which includes the discreteness, is a carrier component. Notice that the parameter ε appears in the function F but not in exp(iθ_n). This is because the frequency of the carrier wave is much higher than the frequency of the envelope, and we need two time scales, t and εt, for those two functions. The same holds for the coordinate scales. To simplify the problem, a continuum limit nl → z should be introduced, as well as the new variables Z = εz and T = εt. This allows a series expansion of F, in which the indices Z and ZZ denote the first and the second derivatives with respect to Z. Hence, the function Φ_n(t) takes a form in which cc stands for the complex conjugate and F ≡ F(Z, T). All this allows us to obtain the expressions appearing in Eq. (22), such as Φ_{n+1} + Φ_{n−1} − 2Φ_n, the time derivatives of Φ_n and Φ_n³, and Eq. (22) takes a new form [28]. This crucial expression represents a starting point for a series of important results. They can be obtained by equating the coefficients of the various harmonics. For example, equating the coefficients of e^(iθ) and neglecting all the terms with ε² and ε³, one obtains the expressions for the dispersion relation ω = ω(q) and the group velocity V_g = dω/dq. Also, the coefficients of e^(i·0) = 1 and e^(i2θ), respectively, give the relations [28] that yield Eqs. (28) and (29), and the new coordinates S and τ, defined as S = Z − V_g T and τ = εT, allow us to simplify Eq. (27). An explanation of why the parameter ε appears in the time scaling but is absent from the space scaling is given in Refs. [45, 46]. If we consider the terms for e^(iθ) again, we obtain the well-known nonlinear Schrödinger equation (NLSE) for the function F, Eq. (31), where P is the dispersion coefficient and Q is the coefficient of nonlinearity. Even though Eq. (31) is a PDE, an analytical solution exists. This well-known solution, which exists for PQ > 0 [15, 47, 48], involves the parameters u_e and u_c, representing the envelope and carrier component velocities, while the amplitude A_e and the soliton width L_e have forms that follow from P, Q and these velocities. It is very difficult to deal with the parameters u_e and u_c, as u_e > 2u_c is a completely imprecise statement. However, u_c/u_e < 0.5 seems to be more practical. Hence, the new parameters U_e and η have been introduced as U_e = εu_e, η = u_c/u_e and 0 ≤ η < 0.5 [45]. Finally, we can easily obtain the expression for the longitudinal displacement of the dimer at the position n. One more parameter can be eliminated using the idea of the coherent mode [49]. This mode means that the envelope and the carrier wave velocities are equal. It follows from Eq. (35) that V_e = Ω/Θ, which yields the function U_e(η). This means that the wave u_n(t) is a one-phase function, preserving its shape in time. To plot the function u_n(t) or, equivalently, U_n(t), we should know or estimate the values of a couple of the parameters. Of course, if a 2D plot is chosen, U_n(t) can be presented either as a function of t at a certain position n or as a function of n for a chosen t. A very detailed analysis of the parameter selection was done in Ref. [28]. One example, for q = 2π/(Nl), is shown in Figure 4.
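A rough numerical illustration of such a modulated wave, a sech-type envelope multiplying a carrier cosine, is sketched below. All numbers are assumptions chosen only to make the picture readable; they are not the parameter values used for Figure 4.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative values only (assumed, not taken from the chapter)
l = 8e-9                       # tubulin dimer length of about 8 nm
n = np.arange(0, 500)          # dimer index along the protofilament
z = n * l

A_e = 0.1                      # assumed envelope amplitude (arbitrary units)
L_e = 1.0e-7                   # assumed envelope width, about 100 nm
q = 2 * np.pi / (15 * l)       # carrier wave number for N = 15

# Envelope times carrier: a localized modulated wave
u = A_e / np.cosh((z - z.mean()) / L_e) * np.cos(q * z)

plt.plot(n, u)
plt.xlabel('n')
plt.ylabel('u_n (arbitrary units)')
plt.show()
```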
Obviously, the wave in Figure 4 is a localized modulated wave, usually called a breather. We can see that its width is about 200 nm, which means that it covers about 25 dimers. Figure 4. The function U_n for t = 50 ns, A = 2.9 × 10^−3 N/m, B = 1.7 × 10^14 N/m^3, k_u = 150A, N = 15 and η = 0.43. As a conclusion, we can state that the two mathematical procedures bring about three different results, that is, three different solitons. These are kinks, bell-type solitons and breathers. They may be signals for the motor proteins to start moving, as explained in the Introduction. Obviously, viscosity has been neglected; this will be explained in the following section, within the φ-model. 4. φ-model A weak point of the U-model is the last term in Eq. (1). A scalar product p·E = QdE cos φ_n would be a better choice for the potential energy, where d is the distance between the centres of the positive and negative charges within the dipole. This potential takes the angle as a coordinate instead of the projection u, and the Hamiltonian for the radial model, which we call the φ-model, is given by Eq. (38) [50, 51], where I is the moment of inertia of the dimer at the position n. Notice that the W-potential does not exist in Eq. (38), even though terms including φ_n² and φ_n⁴ appear as a result of a series expansion of the cosine function. Instead of the viscosity force introduced in the previous section, we introduce the viscosity torque M_v = Γφ̇, where Γ is the viscosity coefficient [51, 52, 53]. Following the procedure explained earlier, we obtain the corresponding ODE. As above, its solutions are kink solitons [50]. It is very interesting to compare the expressions for α_u and α_φ, given by Eqs. (5) and (40). They can be written in terms of v = ω/κ, the soliton velocity, and c_u and c_φ, the corresponding sound velocities. According to Eq. (11), we can see that the U-model predicts c_u > v, as A is positive. On the other hand, α_φ = 2ρ_φ²/9 > 0 [50] means that, according to the φ-model, the kink belongs to the class of supersonic solitons. We will return to this issue in the next section. Now, we switch to the semi-discrete approximation within the φ-model to solve the dynamical equation of motion, Eq. (43) [51], where φ = εΦ has been used. Of course, Eq. (43) is the analogue of Eq. (22). Following the procedure explained in the previous section, we straightforwardly obtain F_0 = 0 and F_2(ξ) = 0, as well as the corresponding dispersion relation, in which ω_0 appears as the lowest frequency of the oscillations [59]. Also, we easily obtain the NLSE (31), with different expressions for P and Q. The final solution φ_n(t) is the same as U_n(t) except that P and Q are different. Therefore, both the U- and the φ-models predict breather waves moving through the MT. Finally, viscosity should be introduced in the semi-discrete approximation [51]. Due to the viscosity torque M_v = ΓΦ̇_n, the final result φ_n(t) includes the expected exponential term e^(−βt), where β = Γ/(2I) [51]. 5. General model of MTs It was mentioned earlier that the weak point of the U-model is the last term in Eq. (1). Also, it is better to use the radial coordinate φ than the longitudinal one, as we assume angular oscillations of the dimers. The scalar product p·E = QdE cos φ_n, used in the φ-model, solves these problems, but the W-potential is then missing. In fact, a series expansion of cos φ_n gives φ_n² and φ_n⁴ terms, but with signs opposite to those in the U-model. These two terms are, practically, a potential that looks like W in a mirror, having only one minimum surrounded by two maxima; due to its shape, it can be called the M-potential [54]. This potential brings about α_φ > 0, which is a disputable result.
Therefore, we want to solve the mentioned problem regarding the U-model but to keep the W-potential, the coordinate φ and, probably, Iω² < kl²κ², that is, α < 0. This suggests a Hamiltonian in which A > 0, B > 0 and φ_n has the same meaning as in the φ-model. Let us call this model the general one (GM). The procedure mentioned earlier brings about Eq. (47), where, of course, φ ≡ φ(ξ). If we consider Eqs. (3) and (47), we can see that the last two terms in Eq. (47) may be the first derivatives of either the W- or the M-potential, depending on the signs of the terms in the brackets. However, these brackets may have different signs or can be zero. Therefore, the possible cases are: Case 1: (A − pE)(B − pE/6) > 0; Case 2: (A − pE)(B − pE/6) < 0; Case 3: A = pE, B ≠ pE/6; Case 4: A ≠ pE, B = pE/6. All of them are studied in Ref. [54] and will be explained here briefly. Case 1 straightforwardly yields the parameters of the solution, and the final solution is Eq. (50) [54]. Eq. (50) holds for both positive and negative ρ_1. Therefore, φ(ξ) represents a kink soliton if ρ_1 > 0 and an antikink one for negative ρ_1, as shown in Figure 5. Figure 5. Kink soliton for ρ_1 = −1 (a) and antikink soliton for ρ_1 = 1 (b), for K = 1. One of the advantages of the GM over the φ-model is the value of the amplitude. Namely, the amplitude of the kink soliton, according to the φ-model, is 6, coming from Eq. (40). This is an unrealistically big value. Instead of 6, the appropriate factor in the GM is K, given by Eq. (49). If viscosity is neglected, the GM brings about simplified expressions. Case 2 straightforwardly yields its own parameters and final results, but it is obvious that these results do not have physical meaning, as φ_2 is complex, while φ_{20} may diverge. Case 3 gives a_0 = a = α_3 = ρ_3 = 0, which certainly means that Eq. (55) does not have any solution with physical sense. The remaining Case 4 linearizes Eq. (47) and will not be studied here. Therefore, the GM yields kink solitons, as the previous two models do. However, this is a radial model, and the problems with both the last term in Eq. (1) and the huge amplitude in the case of the φ-model have been solved. We should study one more issue. It was mentioned earlier that the U-model predicts a subsonic kink soliton, while the φ-model predicts a supersonic wave. How about the GM? It was shown that Case 1 yields the solutions having physical sense and that α_1 > 0. According to Eq. (49), we easily reach the final conclusion: 1. If A > pE and B > pE/6, then ρ_1 < 0 and the function φ(x, t) is a subsonic soliton, a kink for positive K in Eq. (50) and an antikink otherwise. 2. If A < pE and B < pE/6, then ρ_1 > 0 and the function φ(x, t) is a supersonic soliton, an antikink for positive K in Eq. (50) and a kink otherwise. All this certainly suggests the advantages of the GM with respect to the previous two models. 6. Conclusion and future research In this chapter, three models describing the nonlinear dynamics of MTs are shown. The first one, the U-model, is an improved version of the first nonlinear model, and it predicts subsonic kink solitons moving along the MT. The second one, the radial φ-model, predicts supersonic kinks. Finally, the GM is explained. This is a radial model which allows both possibilities regarding the kink's speed. If we assume that the kink soliton is a subsonic wave, then we know the minimum value of the parameter A, that is, A > pE, as explained earlier. Two mathematical procedures are explained, the continuum and the semi-discrete approximations. It is very interesting that the final result depends not only on the physical system but on the mathematical methods as well.
These solutions are the kink soliton and the breather. The question is which one, if any, really moves along MTs. This is not known at the moment and cannot be known without experimental results. It was demonstrated that the GM is better than the previous two models. However, this does not mean that it cannot be improved further. For example, there has been an attempt to improve the model by introducing the Morse potential instead of the harmonic one [55]. The harmonic potential energy assumes that attractive and repulsive forces are equal. The Morse potential is not symmetric and is good for both strong and weak interactions. In this chapter, the dimers are considered as elementary units. However, their structure is more complicated and they include tubulin tails (TTs). Consequently, the nonlinear dynamics of TTs should also be studied, and some results already exist [56, 57]. The W-potential has two minima, which means that it assumes the existence of two angles between the dimer and the direction of the PF around which the dimer oscillates. One of the future tasks should be measuring these angles. First of all, such an experiment would check whether the W-potential is correct or not. If it is, then knowledge of their values would improve the theory a lot. One of the future research goals should be a two-component model. This may mean that we should construct the model assuming two degrees of freedom. However, one of these degrees can be an internal one, which means that oscillations of monomers within the dimer should be taken into consideration. Notice that the two-component model may be the one studying electro-acoustic wave excitations [58]. Finally, we should bear in mind the cytological and medical applications of the research explained in this chapter [59, 60, 61]. This work was supported by funds from the Serbian Ministry of Education, Science and Technological Development (grant No. III45010). Cite this chapter: Slobodan Zdravković (May 2nd 2018). Mechanical Models of Microtubules. In: Complexity in Biological and Physical Systems – Bifurcations, Solitons and Fractals, Ricardo López-Ruiz (Ed.), IntechOpen. DOI: 10.5772/intechopen.71181.
Tomoi Koide is a Professor at Institute of Physics, Federal University of Rio de Janeiro, Brazil. He has completed his PhD from Tohoku University, Japan and Post-doctoral studies from Frankfurt University, Federal University of Rio de Janeiro and so on. He has published more than 60 papers in reputed journals. Variational principle plays a fundamental role in elucidating the structure of classical mechanics, clarifying the origin of dynamics and the relation between symmetries and conservation laws. In classical mechanics, the optimized function is characterized by Lagrangian, defined as T-V with T and V being a kinetic and a potential terms, respectively. We can still argue a variational principle even in quantum mechanics, but the Lagrangian does not have the form of T-V any more. Therefore, at first glance, any clear or direct correspondence between classical and quantum mechanics does not seem to exist from the variational point of view, but it does exist. For this, we need to extend the usual variational method to the case of stochastic variables. This is called stochastic variational method (SVM). The Schrödinger equation can be then obtained by the stochastic optimization of the action which leads to, meanwhile, the Newton equation in the application of the classical variation. From this point of view, quantization can be regarded as a process of stochastic optimization and the invariance of the action leads to the conservation laws in quantum mechanics. In this manner, classical and quantum behaviors are described in a unified way under SVM. Although SVM was originally proposed as the reformulation of Nelson's stochastic quantization, its applicability is not restricted to quantization. In fact, dissipative dynamics such as the Navier-Stokes-Fourier (viscous fluid) equation can be obtained by applying SVM to the Lagrangian which leads to the Euler (ideal fluid) equation in the classical variational method. This method is useful even to obtain coarse-grained dynamics. For example, the Gross-Pitaevskii equation is regarded as an optimized dynamics in SVM. Therefore it is possible to consider that the study of SVM enables us to generalize the framework of analytic mechanics. Speaker Presentations Speaker PDFs
From UW-Math Wiki Revision as of 17:52, 3 February 2014 by Spagnolie (talk | contribs) (ACMS Abstracts: Spring 2014) Jump to: navigation, search ACMS Abstracts: Spring 2014 Adrianna Gillman (Dartmouth) Fast direct solvers for linear partial differential equations Yaniv Plan (Michigan) Low-dimensionality in mathematical signal processing Lorenzo Pareschi (University of Ferrara) Kinetic description and simulation of optimal control problems in self-organized systems Emerging phenomena driven by interactions of a large number of self-organized agents are present in various real life applications. Different to the classical approach where individuals are assumed to freely interact with each other, here we are particularly interested in such problems in a constrained setting. This can be used to understand the influence of external factors to the system dynamics, for example to enforce emergence of non spontaneous desired asymptotic states. Classical examples are given by persuading voters to vote for a specific candidate, by influencing buyers towards a given good or asset or by confining/driving a group of animals in a specific area. In this talk we review different kind of controls for the resulting process and present several kinetic models and stochastic simulation methods including those controls. Harvey Segur (Colorado) The nonlinear Schrödinger equation, dissipation and ocean swell This is joint work with Diane Henderson, at Penn State.
Ady Stern from the Weizmann Institute of Science will introduce the quantum Hall effect. Ady thanks Dr. Dan Arav and Gil Novik from the School of Media Studies of the College of Management - Academic Studies for their help in preparing the videos. The Hall effect We now move on to the quantum Hall effect, the mother of all topological effects in condensed matter physics. But let's start from the classical Hall effect, the famous phenomenon by which a current flows perpendicular to an applied voltage, or vice versa a voltage develops perpendicular to a flowing current. How does one get a Hall effect? The key is to break time-reversal symmetry. A flowing current breaks time-reversal symmetry, while an electric field doesn't. Hence, any system with a Hall effect must somehow break time-reversal symmetry. But wait a minute, you might catch me and ask, what about a normal electric current flowing parallel to an electric field? This is what happens in a metal on a regular basis, and a metal does not break time-reversal symmetry. The key difference there is that such a longitudinal current breaks time-reversal through energy dissipation, which turns into heat that breaks time-reversal by the second law of thermodynamics. A Hall current is special in that it is dissipationless. We can drive a Hall current without wasting any energy because the current flows perpendicular to the voltage gradient. Thus to get a Hall effect we must somehow break time-reversal symmetry. We will examine the simplest way to achieve this, an external magnetic field. How to measure the Hall effect Let's consider a two dimensional gas of electrons immersed in a strong, perpendicular magnetic field. In particular, we take the following geometry, which is called a Hall bar and is routinely used in experiments: The electron gas is contacted by six electrodes, numbered in the figure. We can use this Hall bar geometry set-up to measure the transport characteristics of the gas, as follows. The transport characteristics are tabulated using the 4 components $\sigma_{xx},\sigma_{yy},\sigma_{xy}$ and $\sigma_{yx}$ of the so-called conductivity tensor. Once we know the conductivity tensor, we can use it to calculate how the current density $\mathbf{j} = (j_x,j_y)$ flows in response to the electric field $\mathbf{E} = (E_x,E_y)$ in the metal, through the equation $$j_\alpha=\sum_\beta \sigma_{\alpha\beta}E_{\beta}.$$ By inverting this set of relations between current densities and electric field, we obtain the resistivities $\rho_{xx}, \rho_{xy}, \dots$, which are more often reported in experimental data. Also, in two-dimensional systems there is no real difference between conductance and conductivity (or resistance and resistivity) - they have the same physical units. So the terms are somehow interchangeable. The way to use the Hall bar device is to drive a current $I$ along the $x$ direction, so that there is a current density $j_x=(I/W)$ where $W$ is the width of the sample. There is no current density in the perpendicular direction. We can measure the electric field using the Hall bar geometry from the voltage drops between the probes with voltages $V_{1,2,3,4}$. We can then measure the $x$-component of the electric field from the longitudinal voltage drop $V_L\sim (V_1-V_2)$ or $(V_3-V_4)$ according to the averaged equation $$E_x \equiv \frac{V_1+V_3-V_2-V_4}{2L}.$$ Similarly, we can measure the $y$-component of the electric field from the Hall voltage $V_H=(V_1-V_3)$ or $(V_2-V_4)$. 
Specifically we can calculate the electric field as: $$E_y \equiv \frac{V_1+V_2-V_4-V_3}{2W}.$$ The Hall bar can only measure the conductance completely for isotropic or rotationally invariant systems. If we rotate the system by 90 degrees we can transform $x\rightarrow y$ and $y\rightarrow -x$. So we expect $\sigma_{xx}=\sigma_{yy}=\sigma_L$, the longitudinal conductance. If we apply this same rotation transformation we conclude that $\sigma_{xy}=-\sigma_{yx}=\sigma_H$, the Hall conductance. So with rotational invariance the 4 component conductance tensor has only 2 independent components i.e. the longitudinal and Hall conductance. We can calculate these using the two electric fields $E_{x,y}$ that we measure using the Hall bar. To do this, we solve the set of equations $j_y=\sigma_L E_y - \sigma_H E_x=0$ and $j_x=\sigma_L E_x+\sigma_H E_y$ to obtain $\sigma_{L,H}$. We obtain the Hall conductance $$\sigma_H=\frac{j_x E_y}{E_x^2+E_y^2}.$$ The classical Hall effect is a linear effect Let's now try to obtain an alternative expression for the Hall conductance $\sigma_H$ of our Hall bar. In general we expect the electric and magnetic fields present in our Hall bar to apply a force to the electrons, and increase their velocity. Instead of solving the problem directly, let us make the ansatz that the electrons enter a state, which is obtained from the usual electron ground state by doing a Galilean transformation to a reference frame moving with velocity $\bf{v}$ with respect to the original reference frame. Since the average velocity of the electrons is $\bf v$ in the original reference frame, the average force on the electrons is $${\bf F}= e\,(\mathbf{E}+\mathbf{v}\times \mathbf{B}).$$ If we want to be a steady state then $\bf F=0$, which means that ${\bf v}= (\mathbf{E}\times \mathbf{B})/B^2$. Since the electrons move with an average velocity $\bf v$, and if $n$ denotes the electron density, we can easily guess that the current density is ${\bf j}=n e {\bf v}=(n e/ B) \,(\mathbf{E}\times \mathbf{z})$. Comparing with the previous subsection, we can thus conclude that simply based on Galilean invariance, an electron gas in a magnetic field must have a Hall conductance that is given by $$\sigma_H=n e B^{-1}.$$ This relation, which says that $\sigma_H\propto n$, is extremely general in the sense that it does not depend on how the electrons interact with each other or anything else. It is referred to as the Streda relation. If we define the so-called "filling factor" as $\nu=n h/ e B$ the Hall conductance can be written as a multiple of the quantum of conductance as $\sigma_H=\nu \frac{e^2}{h}$. As you already heard from Ady Stern in the intro video, people have measured the Hall conductance of this exact system to incredible precision. At relatively high density, the Hall conductance of this system behaves itself accordingly and scales linearly with gate voltage, which is tuned to control the density. At low filling factors, one would expect many non-idealities like disorder and interaction to break the Galilean invariance based argument and lead to a Hall conductance $\sigma_H$ that varies from sample to sample and depends on disorder. What is the longitudinal conductance for the ideal electron gas in a magnetic field? Infinity since there are no impurities in the system. Finite and inversely proportional to the magnetic field like the Hall conductance. Finite and proportional to density but independent of magnetic field. Zero since current is perpendicular to electric field. 
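To make the bookkeeping above concrete, here is a small sketch, under the sign conventions just introduced (j_y = 0, σ_xy = −σ_yx = σ_H), that extracts the longitudinal and Hall conductances from the measured Hall-bar quantities and then inverts the conductivity tensor to obtain the resistivities that experiments usually report. The function names and example numbers are made up for illustration.

```python
import numpy as np

def hall_bar_conductances(I, V1, V2, V3, V4, L, W):
    """Longitudinal and Hall conductances from a Hall-bar measurement,
    using the averaged field definitions given in the text."""
    jx = I / W                                 # current density; j_y = 0
    Ex = (V1 + V3 - V2 - V4) / (2 * L)         # longitudinal field
    Ey = (V1 + V2 - V4 - V3) / (2 * W)         # Hall field
    denom = Ex**2 + Ey**2
    return jx * Ex / denom, jx * Ey / denom    # sigma_L, sigma_H

def resistivities(sigma_L, sigma_H):
    """Invert the isotropic 2D conductivity tensor to get rho_xx and rho_xy."""
    sigma = np.array([[sigma_L, sigma_H],
                      [-sigma_H, sigma_L]])
    rho = np.linalg.inv(sigma)
    return rho[0, 0], rho[0, 1]

# On a nu = 2 quantum Hall plateau, sigma_L = 0 and sigma_H = 2 e^2/h:
print(resistivities(0.0, 2 * (1.602e-19)**2 / 6.626e-34))
# rho_xx = 0 and |rho_xy| = h/(2 e^2), about 12.9 kOhm
```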
The quantum Hall effect: experimental data Instead, a completely unexpected result was measured for the first time by Klaus von Klitzing. Typical experimental data looks like this (taken from M.E. Suddards, A. Baumgartner, M. Henini and C. J. Mellor, New J. Phys. 14 083015): As the average density is varied, the Hall conductance $\sigma_H$ appears to form plateaus at integer filling fractions $\nu=1,2,3,\dots$. These plateaus are incredibly sample independent and occur at the same value in many other materials. At the same time, the longitudinal conductivity appears to vanish except at the transition points between the plateaus. This is the integer "Quantum Hall effect". This setup is easy to try to reproduce numerically, but there's one complication: Numerical systems are so good that the longitudinal conductivity always stays low even at the transition. But other than that small problem everything works just the same. Quantized Hall conductance from pumping: Laughlin argument Why is the quantized Hall conductance $\sigma_H$ so robust and independent of system details? Clearly there must be a topological argument at play. Soon after the experimental discovery, Laughlin came up with an elegant argument that mapped the Hall conductance problem to a topological pumping problem and in the process explained the robustness. Let us go through this argument. The Corbino geometry To start with, we imagine doing the Hall measurement in a system cut out as an annulus, which is referred to as the Corbino disk: We will also try to do the experiment in reverse i.e. apply an electric field along the circumference of the disk and measure the current $I$ in the radial direction, as shown in the figure. The radial current is easy to measure - we just measure the amount of charge $\Delta Q$ transferred between the inner and outer edges of the Corbino geometry and obtain the radial current $I=\Delta Q/\Delta T$, where $\Delta T$ is the time over which this is done. But how do we apply an electric field in the tangential direction? The easiest way to do this is to apply a time-dependent magnetic field in the centre of the disc and use the Faraday effect. We can calculate the electric field from the changing magnetic field using Faraday's law as $\oint d{\bf{r}\cdot\bf{E}}=\partial_t \Phi$, where $\Phi$ is the magnetic flux resulting from the field in the center of the disk. Assuming that the electric field depends only on the radius $R$ we find that the resulting tangential electric field is given by $$E(R,t)=\frac{1}{2\pi R}\,\partial_t \Phi.$$ Given $I$, we can also calculate the other component of the measurement of the Hall conductance $\sigma_H$ i.e. the radial current density $j=I/(2\pi R)$ at the same radius $R$ as we calculated the electric field. Now that we know both the circumferential electric field and also the radial current density, the Hall conductance can be measured easily in this geometry as $$\sigma_H=\frac{j}{E(r,t)}=\frac{I}{\partial_t \Phi}.$$ You might worry that we were a bit simplistic and ignored the longitudinal conductance in this geometry. We could measure the longitudinal conductivity by applying a voltage difference between the inner and outer edges and measuring the resulting radial current $I$. For the remainder of this discussion, we assume that the longitudinal conductivity vanishes as is observed experimentally. Laughlin pump We are now ready to present the pumping argument to explain why the low temperature Hall effect is quantized. 
To do this, we change the magnetic field in the center of the Corbino disc so that the flux changes by $\Delta \Phi=\Phi_0=h/e$, i.e. a flux quantum over the time $\Delta T$. (Note that this flux quantum is only half of the superconducting flux quantum that we were using last week. That's because now the current is being carried by electrons and not Cooper pairs. It is customary to use the same symbol $\Phi_0$ for both, since they often appear in different contexts). Assuming that we have a system with Hall conductance $\sigma_H$, we obtain the charge transferred as $$\Delta Q=I \Delta T=\sigma_H\, \Delta T\, \partial_t\Phi =\sigma_H\,\Delta\Phi=\sigma_H\, \frac{h}{e}.$$ Writing $\sigma_H=\nu e^2/h$, we obtain $\Delta Q=\nu e$. Since the longitudinal conductance $\sigma_L=0$, we expect the system to be gapped in the bulk of the disc and we expect the entire charge transfer $\Delta Q$ to occur between the edges. Since the flux $\Phi$ in the center is a flux quantum $\Phi_0$, the wave functions of the electrons all return to being the same as at $\Phi=0$. Therefore only an integer number of charges $\Delta Q=n e$ can be pumped between the edges. This is Laughlin's argument for why the Hall conductance must be quantized as $$\sigma_{xy}=n e^2/h.$$ What you notice at this point is that we basically have a pump similar to the last unit. Here an integer number of charges is pumped from one edge to the other as the flux $\Phi$ is increased by $\Phi_0$. As one sees below, one can simulate electrons in a Corbino geometry and check that indeed an integer number of charges is pumped between the edges as the flux $\Phi$ is changed by $\Phi_0$. Experimentally the quantum Hall conductance jumps - what does this mean about the robustness of the Laughlin pumping argument? The Laughlin argument breaks down because it assumes specific values of the magnetic field. The Laughlin argument assumes there is no longitudinal conductivity. The Hall conductance is not a topological invariant since it changes. The flux in the corbino geometry was changed by a value that was not a multiple of the flux quantum. Landau levels: a microscopic model for the quantum hall effect The general argument so far is great in that it applies to virtually any complicated electron system with interactions and in a real material, but we would probably feel better if we could calculate the Hall conductance directly for some simple system. So let us try to do this for the simplest case of electrons in a magnetic field. For starters, let us forget about the Corbino disk and just ask what do quantum mechanical electrons do in a magnetic field. Landau levels on the back of an envelope We know what classical electrons do in a perpendicular magnetic field: They go around in cyclotron orbits, because of the Lorentz force. The cyclotron radius in a magnetic field of strength $B$ for an electron with velocity $v$ is $r_c = mv/eB$. An electron performing a cyclotron orbit at velocity $v$ has angular momentum $L=mvr_c=eB r^2_c$. In quantum mechanics, however, only orbits with a quantized angular momentum $L=n\hbar$ will be allowed. From the equality $r^2_c = n\hbar/eB$ one obtains that only some discrete values are allowed for the radius, $r_n = \sqrt{n} l_B$, where $l_B = \sqrt{\hbar/eB}$ is called the magnetic length. All cyclotron orbits, independent of the radius, circle at the same frequency $\omega_c=eB/m$. The energy of the electron in this quantized orbit is equal to $L\omega_c = n\hbar\omega_c$. 
So the energy spectrum really looks like that of a harmonic oscillator. All the energy levels are also shifted up from zero energy by the zero-point motion of the harmonic oscillator, $\hbar\omega_c/2$. We finally obtain that the allowed energy levels are $$E_n = \hbar \omega_c \,\left(n+\tfrac{1}{2}\right)\,.$$ These quantized energy levels of electrons in a magnetic field are called Landau levels. You can put many electrons in the same Landau level: one for every flux quantum of the magnetic flux passing through the system. Therefore Landau levels have a huge degeneracy, proportional to the area of the sample. Landau levels from the Hamiltonian Now that we know the answer in advance, we can solve the Schrödinger equation for electrons in a magnetic field without stress. It will still be important for understanding the quantum Hall effect in a bit more detail. The Hamiltonian is $$H=\frac{(\textbf{p}-e \textbf{A})^2}{2m}.$$ The vector potential $\bf{A}$ depends on position, which makes this Hamiltonian complicated to solve in general. For a uniform magnetic field, we can make our life easier by choosing the Landau gauge $$\textbf{A}(x,y)=\hat{\textbf{x}}B y ,$$ in which the vector potential does not depend on $x$. In this gauge, the entire Hamiltonian is translationally invariant along the $x$ direction, and therefore commutes with the corresponding momentum $p_x$. This allows us to choose $p_x=\hbar k$ as a good quantum number, and our two dimensional Hamiltonian reduces to a one dimensional one: $$H(k)=\frac{p_y^2+(\hbar k-e B y)^2}{2m}.$$ Apart from a shift of the $y$ coordinate by $y_0(k)=\hbar k/eB$, this is exactly the Hamiltonian of a simple harmonic oscillator! Its eigenvalues are the Landau levels, which are independent of $k$. The corresponding wave functions are those of the harmonic oscillator along the $y$ direction, and plane waves with momentum $k$ along the $x$ direction. In the $y$ direction, they are localized in space within a length $\sim l_B$. This gives us another way to understand the quantized Hall conductance for ideal two dimensional electron gases. Now the electron energies are quantized in Landau levels, and if $n$ Landau levels are filled at a given chemical potential, the filling factor is $\nu=n$. The Streda formula then predicts the Hall conductance as $\sigma_H=\nu e^2/h=n e^2/h$. The longitudinal conductivity must vanish since the gapped system does not allow dissipation of energy in the bulk. Flux pumping of electrons in a Hall cylinder We can now see explicitly how the Laughlin pumping argument works, starting from the microscopic description of electrons in terms of Landau levels. Starting from the formulas we derived, it is a little difficult to do so in the Corbino geometry, which has an angular symmetry rather than a translational symmetry. It is very easy if we consider the Laughlin pump for electrons on a cylinder. In fact, the cylinder and the Corbino disk are completely equivalent - you can imagine deforming one into the other. The advantage of the cylinder is that we get to keep our $(x, y)$ coordinates. The Hall cylinder that we considered for Laughlin's argument is in fact equivalent to a ribbon in the $(x, y)$ plane, with periodic boundary conditions $x\equiv x+L$ in the $x$ direction ($L$ is the circumference of the cylinder). The periodic boundary conditions along the $x$ direction discretize the allowed values of $k$ as $k=2\pi n/L$. For the Laughlin pumping argument, we need to introduce a flux through the cylinder.
Using Stokes' theorem, we know that the line integral of the vector potential around the cylinder must be equal to the flux passing through it, $\oint \textbf{dr}\cdot\textbf{A(r)}=\Phi$. So we can introduce a flux through the cylinder by choosing our vector potential $\bf{A}$ as $$\textbf{A}(x,y)=(B y +\Phi/L)\,\hat{\textbf{x}}\,,$$ very similar to the previous calculation. The resulting Hamiltonian for the states labeled by $n$ is $$H=\frac{p_y^2+\left(\frac{2\pi\hbar n}{L}-e B y-\frac{e\Phi}{L}\right)^2}{2m}\,.$$ Comparing the above equation to the quantum harmonic oscillator, we see that the harmonic oscillator levels must be centered at $$y_0(n) = \left(n-\frac{\Phi}{\Phi_0}\right)\frac{h}{e B L}\,.$$ We see from this that the Landau level wave-functions are centered around a discrete set of rings at $y_0(n)$ along the cylinder axis, labelled by the integer $n$. As $\Phi$ is increased, the centers $y_0$ move, so that after one flux quantum $\Delta\Phi=\Phi_0=h/e$ all the electrons have moved down by one step along $y$, i.e. $n \rightarrow n-1$. If $n$ Landau levels are filled, then a total charge of $\Delta Q=n e$ will be transferred between the edges, in exact accordance with the Laughlin argument. We can now look again at the Laughlin pump, monitoring the Landau levels at the same time. You can see that the total pumped charge jumps in integer steps each time a Landau level passes through the Fermi level.
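A minimal numerical sketch of this pumping picture is given below: it evaluates the Landau-level spacing and the guiding-centre positions $y_0(n)$, and checks that threading one flux quantum shifts every centre by exactly one step, so that $\nu$ filled Landau levels pump a charge $\nu e$. The field, circumference and level values are assumptions chosen only for illustration.

```python
import numpy as np

hbar = 1.054571817e-34   # J s
h = 6.62607015e-34       # J s
e = 1.602176634e-19      # C
m = 9.1093837015e-31     # free-electron mass; use the effective mass in a real sample

B = 5.0                  # tesla, assumed
L = 1.0e-6               # cylinder circumference in metres, assumed

# Landau level energies E_n = hbar*omega_c*(n + 1/2): equally spaced
omega_c = e * B / m
print(hbar * omega_c * (np.arange(4) + 0.5) / e)   # in eV

def y0(n, Phi):
    """Guiding-centre position of the Landau-gauge state n for threaded flux Phi."""
    return (n - Phi / (h / e)) * h / (e * B * L)

# Threading one flux quantum maps state n onto the old position of state n-1:
print(np.isclose(y0(3, h / e), y0(2, 0.0)))        # True

# With nu filled Landau levels, the charge pumped per flux quantum is nu*e:
for nu in (1, 2, 3):
    sigma_H = nu * e**2 / h
    print(nu, sigma_H * (h / e) / e)               # = nu
```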
We are tackling various problems in information communication, machine learning, etc. using the notions and techniques of statistical mechanics. However, unlike mechanics and electromagnetics, there are few occasions where one encounters the notions of statistical mechanics before formally studying it. This may make it difficult to imagine the research conducted by our group. Here, we would like to introduce our elemental idea in a non-technical manner. Bridging the "micro" and "macro" As a simple example, let us consider ideal gases. In high school physics courses, we learn that ideal gases obey the “equation of state” \(pV =nRT\),  where \(p\), \(V\), \(n\), \(R\), and \(T\) denote pressure, volume of container, mole number, gas constant, and absolute temperature, respectively. Today, we almost doubtlessly accept that gases are composed of many gas molecules. We also learn that each gas molecule obeys the “equation of motion” \(\vec{F}=m\vec{a}\) (or Schrödinger equation in the case of quantum theory). These indicate that there are two different equations governing the same system depending on the scale we focus on. Then, how can these equations be compatible? Statistical mechanics addresses such questions. More is different We mentioned that gases are composed of gas molecules. However, such hierarchic structures are not limited to gases, and similar structures can be found in almost everything. This can be roughly expressed as Elemental particles→Atoms→Molecules→Cells→Biological tissues→Biological bodies→Communities→.... At every level, systems are composed of the elements just below in the hierarchy. Then, can we understand everything if the ultimate governing rule of the lowest hierarchy, namely, the elemental particles, is discovered? This is a divisive subject, but perhaps, we cannot. This is because statistical mechanics implies that phenomena that is unpredictable by the theory of one hierarchy can be understood when we focus on the hierarchy just above it. Such limitations of the predictability of theories between different hierarchies are sometimes called “More is different.” Slogan of our group is "More is different in information science as well" Scientific approaches that try to understand phenomena by reducing them to other simpler or more fundamental phenomena are termed “reductionism.” It is based on the belief that one should be able to understand all phenomena in higher hierarchies if the theories in lower hierarchies are understood. From this viewpoint, “more is different” leads to a negative conclusion. However, this may not always be the case as it implies that qualitatively different governing rules can be compatible between different hierarchies. For instance, in information science, we encounter many combinatorial problems, which are computationally difficult to solve. However, sometimes changing the description hierarchy by considering problem ensembles, taking the large system limit, etc. can provide us with new insights and/or solution methods that we could not find by directly solving the original problems. The main interest of our group is to deepen and advance such approaches in information science. Copyright© Kabashima Laboratory , 2021 All Rights Reserved Powered by STINGER.
Klein Oskar  Biografia estratta da  (1894-1977) Oskar Klein was the youngest son of Sweden's first rabbi, Gottlieb Klein, who was originally from the Southern Carpathian. Gottlieb Klein received his doctorate from Heidelberg and moved to Sweden in 1883. He evidently instilled an interest in learning in his young son, as Oskar became quite fond of biology at an early age. This interest changed to chemistry around the age of 15 and soon after, in 1910, Svante Arrhenius, at what seems to be the behest of Gottlieb, invited Oskar to work in his laboratory at the Nobel Institute. Here he took up an interest in solubility and he published his first paper in 1912 on the solubility of zinc hydroxide in alkalis. This was the very same year that he finished his secondary education. He waited, however, until 1914 to take the University exam. Arrhenius wanted to send Klein to work with Jean-Baptiste Perrin in his laboratory at the University of Paris but the plan was foiled by the outbreak of World War I. Klein found himself caught up in the tempest and saw military service in 1915 and 1916. After his service concluded, but with the war still raging, he returned to work with Arrhenius. Their work now centred around studying dielectric constants of alcohols in various solvents. During this particular stay in Stockholm, he met Hendrik A Kramers, who, at the time (1917), was a student of Niels Bohr in Copenhagen. Kramers and Klein met several times during the next few years both in Stockholm and in Copenhagen, which was to be Klein's next destination. In 1917 Klein received a fellowship to study abroad and, subsequently, arrived in Copenhagen in 1918. Over the course of the next two years he would travel between Stockholm and Copenhagen performing work for both Bohr and Arrhenius, spending the summer of 1919 with Kramers in Copenhagen, and finally returning to Stockholm in 1920. But that was not to be the end of his Copenhagen experience. In fact, it was merely the beginning. Bohr traveled to Stockholm in 1920 to visit Klein and convinced him to return to Copenhagen once more to work at Bohr's Institute. Klein agreed and began what would prove to be quite a fruitful relationship that eventually would lead him to his first teaching position. Around this time, Bohr was working with Svein Rosseland on the statistical equilibrium of a mixture of atomic and free electrons. At the time, it was believed that electrons colliding with atoms always lost energy. However, Klein, in conjunction with Rosseland, introduced "collisions of the second kind" where the electrons actually gained energy! Klein continued his work on the other side of the 'molecular aisle' by turning his attention to ions. In fact, this led him to his thesis research in which he studied the forces between ions in strong electrolytes using Gibbs' statistical mechanics. The result was a generalized formulation of Brownian motion. He defended his doctorate in 1921 at Stockholm Högskola and was opposed by Erik Ivar Fredholm the mathematical physicist best known for his work on integral equations and spectral theory. After his successful defence, Klein returned to Copenhagen, later assisting Bohr on a trip to Göttingen. Around this time Klein turned to publishing semi-popular writings on physics. His first work in this new arena was a philosophical paper that was a refutation of an objection to relativity theory by Swedish philosophers. Not surprisingly, it was around this time that he began to look for a job. 
In 1923, Oskar Klein married Gerda Agnete Koch and moved to Ann Arbor, Michigan to take up a post at the University of Michigan, a post he won with no small thanks to his venerable friend Niels Bohr. His first work in Ann Arbor dealt with the anomalous Zeeman effect which was a problem that arose out of the fact that no one at the time understood the behavior of atoms in a magnetic field. The classical Zeeman effect was explained, in a nutshell, as the splitting of spectral lines by the magnetic field. The problem was that the classical theory only effectively described atoms with a total electron spin of zero. The difference can be seen in the Hamiltonians of the two. For the time (1923), this was a fairly large problem to tackle, but Klein did not stop there. He went on to work on the interaction of diatomic molecules with precessing electrons, studying the angular momentum within the molecule itself. The following year, in 1924, he taught a course on electromagnetism and lectured on an electric particle in a combined gravitational and electromagnetic field. This was the beginning of his landmark work on a unified field theory. Klein chose to solve the problem by essentially extending his work to a fifth dimension, though his early unification ideas centred around quantum physics as the catalyst. After a time Klein argued less and less that quantum physics could lead to a unified picture, in fact he later abandoned the idea entirely. However, he did see the possibility of unification in five dimensions, which seems to have been present in his initial attempt. At this time, Klein apparently was unaware of the work of Theodor Kaluza. Kaluza, in 1919, sent a paper to Albert Einstein proposing a unification of gravity with Maxwell's theory of light. Einstein initially was uninterested in the paper, but later realized the highly original ideas contained within it and encouraged Kaluza to publish his ideas. In fact the paper was communicated by Einstein himself on 8 December 1921. In 1925, Klein returned to Copenhagen and contracted hepatitis. He was ill for half a year, though he was visited by Heisenberg in July of 1925 and Schrödinger in January of 1926. This was around the time he was finally able to return to work. It was at this time that he finally became aware of Kaluza's work. Wolfgang Pauli communicated this work to him and Klein. Klein's adaptation of Kaluza's work had a major difference from the original in that the extra or fifth dimension was curled up into a ball that was on the order of the Planck length, 10-33 cm. It is important to note, however, that the extra dimension, though curled up, was still Euclidean in nature. Basically, the fifth coordinate was not observable but was a physical quantity that was conjugate to the electrical charge. As Kragh explains, Klein attempted to explain the atomicity of electricity as a quantum law. He also attempted to account for the electron and the proton. Klein assumed the fifth dimension to be periodic: the dimension was on the order of the Planck length. Klein's results were published in Nature in the autumn of 1926 and generated interest from such eminent theorists as Vladimir Fock, Leon Rosenfeld, Louis de Broglie, and Dirk Struik. Unfortunately, despite a lot of initial interest in unification, most physicists eventually went on to more promising and experimentally testable research leaving Kaluza-Klein theory to be explored by another generation of physicists nearly half a century later. 
In Klein's own words:- Dirac may well say that my main trouble came from trying to solve too many problems at a time. It was also in 1926 that Klein was appointed as docent at Lund University and became, for the next five years, Bohr's closest collaborator both on correspondence and complimentarity, and apparently contributed to the development of the uncertainty principle, as Heisenberg recalled:- After several weeks of discussion, which were not devoid of stress, we soon concluded, not least thanks to Oskar Klein's participation, that we really meant the same, and that the uncertainty relations were just a special case of the more general complementarity principle. In fact, 1926 was a banner year for Klein. I n addition to finally recovering from the hepatitis and becoming docent at Lund, it was in this same year that he made his next great theoretical breakthrough. In a paper in which he determined the atomic transition probabilities (prior to Dirac), he introduced the initial form of what would become known as the Klein-Gordon equation. It is interesting to note that this equation appeared exactly as it has been written in David Bohm's 1951 book Quantum Theory but was not called the Klein-Gordon equation. However, Bethe and Jackiw's Intermediate Quantum Mechanics, originally written in 1964, does refer to the same equation as the Klein-Gordon equation. Klein and Walter Gordon were thus eventually honoured with having the equation named after them, though it seems to have taken over a quarter of a century to receive the honour. Oddly enough, Schrödinger himself privately developed a relativistic wave equation from his original wave equation, which, in reality, was not that difficult to do, and did so prior to Klein and Gordon, though he never published his results. The trouble came when the equation did not result in the correct fine structure of the hydrogen atom and when Pauli introduced the concept of spin a year later (1927). The equation turned out to be incompatible with spin and, as a result, is only useful for calculations involving spinless particles. But, nonetheless, it was an important point in quantum theory and, along with his unification theory, was to ensure a lasting legacy for Klein and cemented 1926 as a pivotal year in his life. In the years following 1926, Klein turned to teaching and continued his research, though possibly at a reduced pace. Brink [5] quotes a friend and mentor to Klein as having said:- You will now fulfill the words: go and teach the people. Your great pedagogical talents always were one of your strongest qualities. I am not of the opinion that finding new laws of nature and indicating new directions is one of your great strengths, although you always have developed a certain ambition in this direction. In 1927, Klein was appointed Lektor in Copenhagen but nonetheless continued his research working with Pascual Jordan on the second quantization in quantum mechanics. In his work with Jordan, he demonstrated the close connection between quantum fields and quantum statistics. It was known that second quantization guarantees that photons obey Bose-Einstein statistics, but Klein showed that second quantization is not confined to free particles only. He and Jordan showed that one can quantize the non-relativistic Schrödinger equation and, in honour of this work, he was the recipient of yet another named mathematical tool, the Jordan-Klein matrices. 
In subsequent years he collaborated with the Japanese physicist Yoshio Nishina who was in Copenhagen on an extended research visit and worked on the problem of Compton scattering of a Dirac electron. Despite the so-called Klein paradox, that being that the positron was not completely understood by physicists, he was able to convince physicists of the soundness of Dirac's relativistic wave equation. His continued work included the quantum mechanics of the second law of thermodynamics and Klein's lemma. In 1930, he was offered Fredholm's position at Stockholm Högskala and he finally returned to his native city to take up a post that he held until his retirement in 1962. During the 1930s, Klein helped many refugee physicists who were expelled from Germany and other nations largely due to their Jewish heritage. Of the many he helped, one included Walter Gordon who would later join Klein in being the beneficiaries of the named equation we have just discussed. In 1943, Klein also aided in Bohr's escape from Copenhagen. During the 1930s Klein also found time to attend conferences, not the least of which included the 1938 Warsaw Conference where he spoke on (almost) non-Abelian gauge theories. This conference included some of the leading theorists of the day including Sir Arthur Eddington, Eugene Wigner, and others. It was at this conference that Klein suggested that a spin-1 particle mediated beta decay and played a role in weak interactions in a similar manner to the photon in electromagnetism. Klein's hypothesis was yet another crack at a unified field theory, this time in attempt to unify the strong, weak, and electromagnetic forces. The work was not noticed until nearly twenty years later when it was resurrected by Julian Schwinger in 1957. In the 1940s Klein worked on a wide variety of subjects including superconductivity (with Jens Lindhard in 1945), biochemistry, universal p-decay, general relativity, and stellar evolution. Sometime after 1947 he, and independently Giovanni Puppi, realized that both the electron and the -meson were "weak" particles. In the 1950s and 1960s Klein remained active, addressing the 11th Solvay Conference in 1958, developing a new model for cosmology in conjunction with Hannes Alfven in 1963, and tackling Einstein's General Relativity in a paper published in Astrophisica Norvegica in 1964. During his later years, he also became very interested in philosophy and especially in analogies between science and religion. In addition, he took to writing a few popular books, most of which are out of print. Oskar Klein died in Stockholm, one of the finest theoretical physicists of the twentieth century.
In quantum tunneling, the probability of finding an electron inside the potential barrier is nonzero. So we can actually find an electron which had an energy $E$ in a place where, classically, it would need an energy bigger than $E$. [figure: a potential barrier whose height exceeds $E$] So if we find an electron in this potential barrier, what will its energy be? • The energy of the electron is conserved, so it is $E$, and the fact that its $KE \lt 0$ is just a weird fact of quantum mechanics. (I don't like this answer) • Since we know the electron is there, we can treat it as a classical particle, and because of that its energy has risen to a value bigger than the potential of the barrier. The energy of the electron is then not conserved, unless the electron manages to pick up energy from nowhere. These are the answers I have in mind; I'm practically sure none of them is correct... So how can we interpret this fact from an energy point of view? The question may be silly, but I'm just beginning quantum mechanics and I'm having a difficult time trying to understand it. I hope someone can help me. (Sorry for my bad English) Short answer: Position and energy are not compatible observables, meaning you cannot determine them both at the same time, much like position and momentum are non-compatible observables. Long answer: If you know the energy of your particle, that means its wavefunction is an eigenfunction of the Hamiltonian (a solution to the time-independent Schrödinger equation). This wavefunction will be spread out over the system with non-zero components in the classically forbidden region, i.e. there is a finite probability to find the particle in this region. To actually find it there, you must perform a measurement. The measurement will collapse the wavefunction to one which is localized around the point where you happen to find it (let's assume we do find it in the classically forbidden region). The new wavefunction is no longer an eigenfunction of the Hamiltonian, and the particle therefore does not have a well-defined energy. To determine the energy of the particle you would have to perform an energy measurement. This measurement would collapse the wavefunction into an eigenstate of the Hamiltonian, which would again be spread out over the system, i.e. the particle's position would now be undetermined. Furthermore, the energy you would measure would likely be different from the original energy of the particle (before the position and energy measurements). As for energy conservation: when you introduce a measurement apparatus the system is no longer closed, and energy conservation does not apply unless you consider the total system, including the measurement apparatus. • $\begingroup$ When the measurement of the position is made, the particle's energy is not well-defined, but can we say that it is at least bigger than the potential of the barrier? (I think it is the case, since the particle just after the measurement acts like a classical particle.) If the energy of the electron could theoretically (without actually measuring it) be less than the potential, wouldn't that be absurd? $\endgroup$ – Jbar May 8 '14 at 8:03 • $\begingroup$ I disagree with your notion that, after the measurement, the particle "acts like a classical particle". A classical particle would have a well-defined energy as well as position. I suppose you are thinking that the more localized the wavefunction, the more classical the particle. This is not true.
You might say that after the measurement the position of the particle becomes a more classical property, but at the same time, energy has become a less classical property of the particle in that it has become less well defined. $\endgroup$ – jensa May 8 '14 at 8:45 • $\begingroup$ To answer your question - No, we can not say that the particle has an energy larger than the potential of the barrier. The particle will be a linear combination of energy eigenstates with some energies below the potential barrier. This must be the case since there must be an overlap with the original energy eigenstate (the wavefunction before the measurement). $\endgroup$ – jensa May 8 '14 at 8:48 • $\begingroup$ You may look at it like this - First you knew the energy and QM tells you there is a chance the particle is in the classically forbidden region, after the measurement you know the particle is inside the barrier and QM tells you there is a chance it has an energy below the classically allowed limit. It's equally "absurd". $\endgroup$ – jensa May 8 '14 at 8:58 One possible view on this is that while the average energy is given by $\int \psi^*\hat{H}\psi dV$, the actual energy value fluctuates in time around this value; the electron receives energy and gives it back again to fluctuating electromagnetic fields (background radiation), which are always present (in this view). This is motivated by stochastic electrodynamics, where background electromagnetic radiation has been used with some success to explain several microscopic phenomena (Casimir forces, thermal radiation, stability of the atom) as alternative to quantum theory. • $\begingroup$ (I'm not familiar with stochastic electrodynamics so I can't say that I have understand well what you are said). But I don't see why the energy of the particle fluctuates ... the particle has a well defined energy in my example (the wave function is a eigenfunction of the Hamiltonian). So the electron take some energy when it wants to enter the classical forbidden region and then gives it back ? So the energy of the electron is not conserved but the energy of the system {electron + background} (without the measurement apparatus) is conserved. Doesn't that contradicts the answer above ? $\endgroup$ – Jbar May 8 '14 at 8:17 • $\begingroup$ You are assuming the common view of quantum measurement theory, that energy of the particle has numerical value only when the $\psi$ function we use to describe it equals Hamiltonian eigenfunction, otherwise there is no definite value. Under this assumption, energy is constant or does not have meaning unless you measure it at some time and my answer is indeed incomprehensible. This view is the scheme of von Neumann postulates and is sometimes useful, mainly for spins. $\endgroup$ – Ján Lalinský May 8 '14 at 9:44 • $\begingroup$ But quantum measurement theory is by no means the only one way to describe and explain experiments or to make sense of Schroedinger's equation for positions. Try to think of experiment where energy (value of classical Hamiltonian) of microscopic particle system such as electron or atom was reliably measured and eigenvalue of the Hamiltonian was found - I do not know any. $\endgroup$ – Ján Lalinský May 8 '14 at 9:45 • $\begingroup$ On the other hand, even without hypothetical zero-point field of stochastic electrodynamics, there is thermal EM radiation everywhere due to charged particles forming neutral matter. This EM radiation acts on charged particles and makes them move randomly. 
If you assume physical quantities always have value (in contradiction to the quantum measurement scheme) it is most natural to expect every real system's energy fluctuates. $\endgroup$ – Ján Lalinský May 8 '14 at 9:47
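A quick numerical illustration of the two sides of this trade-off (a minimal sketch, assuming a toy 1D box with a square barrier; all parameters are arbitrary choices): an energy eigenstate with $E \lt V_0$ has nonzero probability inside the barrier, while a position-localized state centered inside the barrier is a superposition of energy eigenstates, with weight both above and below $V_0$.

import numpy as np

# Toy model: particle of mass m = 1 (hbar = 1) in a hard-wall box with a square barrier in the middle.
N, L = 1500, 40.0
x = np.linspace(0.0, L, N + 2)[1:-1]            # interior points; the wave function vanishes at the walls
dx = x[1] - x[0]
V0, a, b = 1.0, 18.0, 22.0                      # barrier height and extent (arbitrary)
V = np.where((x > a) & (x < b), V0, 0.0)

# Finite-difference Hamiltonian H = -(1/2) d^2/dx^2 + V and its eigenstates
H = (np.diag(1.0 / dx**2 + V)
     + np.diag(-0.5 * np.ones(N - 1) / dx**2, 1)
     + np.diag(-0.5 * np.ones(N - 1) / dx**2, -1))
E, psi = np.linalg.eigh(H)                      # columns of psi are orthonormal eigenvectors

inside = (x > a) & (x < b)
p_inside = np.sum(psi[inside, 0] ** 2)          # ground state: E[0] << V0, yet this is nonzero
print(f"E_0 = {E[0]:.4f} (V0 = {V0}), P(ground state inside barrier) = {p_inside:.2e}")

# Crude stand-in for a position measurement that finds the particle inside the barrier:
# collapse to a narrow Gaussian centered in the barrier, then expand it in the energy eigenbasis.
phi = np.exp(-(x - 20.0) ** 2 / (2 * 0.3 ** 2))
phi /= np.linalg.norm(phi)
c = psi.T @ phi                                  # expansion coefficients in the energy eigenbasis
E_mean = np.sum(c**2 * E)
E_std = np.sqrt(np.sum(c**2 * (E - E_mean) ** 2))
w_below = np.sum(c[E < V0] ** 2)                 # nonzero weight sits below the barrier height
print(f"collapsed state: <E> = {E_mean:.2f} +/- {E_std:.2f}, weight with E < V0 = {w_below:.2e}")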
Tuesday, August 19, 2014 Maldacena's bound on statistical significance JM: Geometry and Quantum Mechanics, Maldacena reminds us of the obvious and old observation that the spacetime inside the black hole interior (i.e. the lifetime and the Lebensraum of the poor infalling observers) is limited which inevitably seems to affect the accuracy and reliability of the experiments. Such limitations are often described in terms of the usual uncertainty relations. Inside the hole, you can't measure the energy more accurately than with the \[ \Delta E = \frac\hbar{2 \Delta t} \] error margin and similarly for the momentum, and so on. But Juan chose to phrase his speculative ideas about the universal bound in a more invariant and more novel way, using the notion of entropy. A person who is falling into a black hole and wants to make a measurement must be sufficiently different from the vacuum. But after she is torn apart, hung by her balls, and destroyed (note that I am politically correct and "extra" nice to the women so I have used "she"), the space she has once occupied is turned into the vacuum. The vacuum inside a black hole of a fixed mass is more generic so the "emptying" means that the total entropy goes up. Juan says that the relative entropy\[ S(\rho|\rho_{\rm vac}) = \Delta K - \Delta S \geq 0 \] Because we know that once she's destroyed at the singularity, the entropy jumps at least by her entropy, it is logical – and Juan is tempted – to interpret the life and measurements inside the black hole, and not just the fatal end, as a process in which she approaches the equilibrium. So it's not possible to perform a sophisticated, accurate, and/or reliable experiment without sending something in. And if we send something in, the entropy will increase. An explicit inequality that Maldacena conjectured is the following inequality for the statistical significance:\[ p \gt \exp(-S) \] That's a formula written in the convention where the \(p\)-value is close to zero. If you prefer to talk about "\(P=\)99% certainty", you would write the same thing as\[ P \lt 1-\exp(-S) \] The certainty is less certain than 100% minus the exponential of the negative entropy and I suppose that by \(S\), Juan only means the entropy of the object. It's still huge which means that the statement above is very weak. The entropy of a human being exceeds \(10^{26}\) (in the dimensional units nats or, almost equivalently, in the less natural but more well-known bits) so the deviation from 100% is just \(\exp(-10^{26})\) which is a really small number morally closer to the inverse googolplex than the inverse googol. There may be stronger inequalities like that. And I also suspect that many such inequalities could be applicable generally – outside the context of black hole interiors. Have you ever encountered such inequalities or proved them? Note that the \(p\)-value encoding the statistical significance is the probability of a false positive. If we're constrained to live in a finite-dimensional Hilbert space where all basis vectors get ultimately mixed up with each other or something else, it's probably impossible to be any certain than your microstate isn't a particular one. But there are just \(\exp(S)\) basis vectors in the relevant Hilbert space and one of them may be right even if the "null hypothesis" holds, whatever it is. I am essentially trying to say that \(\exp(-S)\) is the minimum probability of a false positive. 
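To put rough numbers on how weak the conjectured bound \(p \gt \exp(-S)\) is for \(S \sim 10^{26}\), here is a throwaway sketch of my own (working with \(\log_{10} p\) to avoid floating-point underflow; the entropy value is just the order of magnitude quoted above for a human being):

import math

S = 1e26                                   # entropy of the infalling object in nats (order of magnitude for a human)
log10_p_min = -S / math.log(10.0)          # log10 of the conjectured floor, p > exp(-S)
print(f"conjectured minimal p-value: 10^({log10_p_min:.2e})")

# The corresponding ceiling on certainty, P < 1 - exp(-S): the deviation from 100% is the same
# fantastically small number.  For comparison, 1/googol = 10^(-100) and 1/googolplex = 10^(-10^100),
# so exp(-10^26) ~ 10^(-4.3e25) sits between the two, far below any achievable significance.
print(f"deviation from certainty: about 10^({log10_p_min:.2e})")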
If someone thinks that she can formulate such comments more clearly or construct some evidence if not a conclusive proof (or proofs to the contrary), I will be very curious. If you allow me to return to the black hole interior issues: It seems to me that these "bounds on accuracy or significance" haven't played an important role in the recent firewall wars. But they're still likely to be a part of any complete picture of the black hole interior. For example, it's rather plausible that all the arguments (and instincts) directed against the state dependence violate these bounds. Juan tends to say that the rules of quantum mechanics may become approximate or inaccurate or emergent inside the black hole, and so on. He even says that "because the time is emergent inside, so is probably the whole quantum mechanics". Well, the answer may depend on which rule of quantum mechanics we exactly talk about. But quite generally, I don't believe that there can be any modification of quantum mechanics, even in the mysterious black hole interiors. In particular, the inequalities sketched by Maldacena himself might be derivable from orthodox quantum mechanics itself. And I would be repeating myself if I were arguing that ideas like ER-EPR and state dependence agree with all the postulates of quantum mechanics. Also, if we sacrifice the exact definition of time as a variable that state vectors or operators depend on – and we do so e.g. in the S-matrix description of string theory – it doesn't really mean that we deform quantum mechanics, does it? If we lose time, we no longer describe the evolution from one moment to another and we get rid of the explicit form of the Heisenberg or Schrödinger equations. But the "true core" of quantum mechanics – linearity and Hermiticity of operators, unitarity of transformation operators, and Born's rule – remain valid. What breaks down inside the black hole is the idea that exactly local degrees of freedom capture the nature of all the phenomena. But unlike locality, quantum mechanics doesn't break down. I should perhaps emphasize that even locality is only broken "spontaneously" – because the black hole geometry doesn't allow us to use the Minkowski spacetime as an approximation for the questions we want to be answered. 1. They're government workers. Of course 80% are going to require an operating system that was designed for mental defectives! Frankly, I'm surprised that the number is not even higher. I guess that's an indication of the partial success that the LiMux developers had in dumbing down the system to government-worker level -- a difficult task. I guess that Munich, in anticipation of the change, is transferring the budget for hiring a competent IT staff to purchasing third-party virus-protection software. "Penguins belong to the South Pole, not to European or American buildings." Except, apparently, Google datacenters. You do know that Google Web Server (which feeds this blog) runs on Linux, don't you? 2. Linux is fast and tight, Windows is pretty. I did a 3-month calculation of a growing crystal lattice. Knoppix (boot from CD) ran 30% faster than Windows, AMD ran 30% faster than Intel. Knoppix in AMD still ran three months - but the log-log plot of the output was longer, Past 32 A radius ran in blades. Theoretical slope is -2. The fun is in the intercept (smaller is better) and the bandwidth. Unix is not unfriendly, but it is selective about who its friends are. "the Linux solution is very expensive because it requires lots of custom programming." Bespoke vs. 
off the rack. 3. Nope, I am using Linux since over 20 years, and I am in trouble only whenever I have to use a computer with Windows installed :-) 4. This is silly. Germany is (unlike Greece and others) a very well functioning country with a healthy equilibrium between the commercial and government sector. So the people who work for the government are in principle the very same kind of people who work in the private sector, too. The government sector has a different way how it's funded - it's stealing money from the productive citizens via the so-called "taxes" - but that doesn't really affect the work that the employees are doing there. I think that the Google web server running this server should be moved to the South Pole, too. ;-) 5. I just cannot envision any modification of quantum mechanics whatsoever. I’ll bet that lubos is correct here. 6. "Time" is a whore concept. No reason to believe QM depends on its survival. 7. Interesting point that one cannot perform a measurement absent a source and a sink. If everything is at equilibrium, one can build a thermometer and read it, but not calibrate it to assign the output meaning. 8. Sadly, Windows taught people that (1) Computers should be pretty and should be so easy a 3 year old could use them and (2) Computers should crash all the time. People expect lousy performance and don't care, as long as Facebook and Twitter come up most of the time. I don't use Windows at all now. I use open source software. I fully admit that most people have not the training nor the ambition to do this. I pay nothing for my software and my computer works the way I want it to. I find Windows too confining. On the other hand, for those who want pretty, sparkly screens, and no thought required, Windows is the way to go. 9. OK but having used Linux for 20 years should be classified as a medical disorder. ;-) 10. It's only strange because the "technical people" have been penetrated by anti-market zealots who suppress everyone else. It's much stranger to be a fan of such a thing. Unix is a system from the 1960s that should be as obsolete today as the cars or music from the 1960s. But it's not obsolete especially because its modern clones have been promoted by a political movement. Unix, like Fortran and other things, should share the fate of Algol, Cobol, Commodore 64 OS, and many other things, and go to the dumping ground of the history where it has belonged for quite some time. 11. There is nothing wrong for a system to be usable by a 3-year-old. Coffee machines, toasters, and vacuum cleaners have the same property. Kids are ultimately the best honest benchmarks to judge whether software is constructed naturally. When kids may learn it, it really means that an adult is spending less energy with things that could also be made unnecessarily complicated, and it's a good thing. My Windows 7 laptop hasn't crashed for a year since I stopped downloading new and new graphics drivers etc. I had freezes due to Mathematica's insane swapping to the disk - when it should say "I give up" instead - but that's a different thing. 12. "So the people who work for the government are in principle the very same kind of people who work in the private sector, too." Ah ... so can you show me the private sector equivalent, in principle, of the Potsdam Institute for Climate Impact Research? ;-) The United States also is a very well-functioning country with a healthy equilibrium between the commercial and government sector. 
(In fact, I would argue that the US is less socialist than Germany.) Surely, during your time in the US you must have been forced to deal with the New Jersey or Massachusetts DMV? (Here I use the generic term -- in New Jersey it's called the MVC, while in Massachusetts it's the RMV.) If not, consider yourself very fortunate. There's a little bit of Greece in every government bureaucracy. (In the US, we have to tell them not to defecate in the hall -- http://www.newser.com/story/189036/epa-to-workers-stop-pooping-in-the-hall.html -- yeah.) These are the folks who prefer a platform that is better suited for gaming, entertainment, and viruses than getting quality work done. Hence, I agree with you, I think that Munich is leaning toward making the right decision. 13. Sure, I can. The commercial sector is literally drowning in similar šit, too. Try e.g. 14. Your taking of COBOL out to the dumping ground of history may be a bit premature. It's still actively being used in bluechip industries such as banking, insurance, and telecommunications. As far as new development goes it's rarely (if ever) used in GUI type applications but remains popular for high volume backend transaction processing in the bluechip industries. My guess is that your recent Bank of America transactions were touched by COBOL at some point, most likely in the mission critical application of updating your account. Not that I don't agree with your sentiment, it's just that it's incredibly difficult to get rid of. The business case for replacing existing backend systems with a more modern platform are usually weak. 15. Keyboards and mice should theoretically be obsolete too, but after playing with tablets for a couple of years, many people are moving back to laptops and even desktops for "real work". Linux having its origins in the 1960's is not an argument at all against it. 16. LOL, right, it surely feels like the two debit cards were attempted to be sent to me by a COBOL robot. ;-) I understand it's hard to get rid of things when lots of stuff has been written in an old framework. 17. Eelco HoogendoornAug 19, 2014, 10:57:00 PM 'What I am really stunned by is the unbelievably complicated culture of installing things on Linux.' Indeed. The only thing such accomplishes is making people feel clever because they haxxored their computer with 1337 compilars. In the real world of people trying to get stuff done, such nonsense is known as a lack of encapsulation, which is simply objectively bad software design. 18. Wow, what a highly emotional and non-factual piece. I come here for science news, but the credibility of the blog just plummeted. So three year old user friendliness is the main criterion for municipal desktop operating systems? Where did this criterion come from? If valid, there are several Linux distributions dedicated to three year olds. Dou Dou, for example. Come on Lubos you can de better. Where is the meat (facts)? 19. Have people who struggled with Linux run Windows computers for a long time before switching to a different operative system? Are there people who have always run Linux machines and never used Windows, but still feel unhappy about the Linux user experience. Just wondering because my mother started using computers when she was 60 yo, and she always found it pretty straightforward to use. Only time she tried to use Windows she found it pretty disgusting and user-unfriendly. 20. Lubos is a theorist. 
All theorists use Windows, while most all experimentalists use Linux (Scientific Linux is the official OS of Fermilab and CERN). I'll let someone else explain the reasons. 21. I think I get it already. Theorists tax the Operating System as lightly as a three-year-old, whereas experimentalists need the system for real work. 22. Dear Eelco, thanks for making these observations clear with some adult terminology! ;-) 23. I think it is true to some extent and there is nothing to be ashamed of. Of course that theorists often use computers in similar ways as writers (of literature), not really to compute, and they don't want to waste their time by forcing computers to do elementary things because computers are supposed to make things simpler, not harder. Experimenters do lots of complicated things with computers so they may sacrifice some friendliness without increasing the amount of wasted time by too high a percentage. For the Kaggle contest, I had to recreate an Ubuntu virtual machine because it seemed like the most plausible if not only way to install software that helps one produce competitive scores. By now, someone has ported it to Windows. I would probably prefer it but my experience with things like Visual Studio etc. is really non-existent, due to my Linux training, so the Linux path could have been easier for me due to the historical coincidences, too. 24. "it's been my point for years that the movement to spread Linux on desktop is an ideological movement" The reverse is true. Computing in the free world is subject to market forces. Linux has won hands down everywhere except for the Desktop where MS Office addicted persons obstruct innovation. Political and objective reasoning has placed Linux everywhere except the desktop. Grandmothers, children and some theorists have been well served on Desktop Linux for a decade or more. I invite you to drill down to the objective reasons why that is. We will probably never know the truth about Munich IT management decisions, but the wider market tells a clear and dramatic story in favour of open (but profit making) systems. If you find being called out for lack of meat obnoxious then I am sorry. This article happens to be the the first protein lacking I have seen by you, Thank you for the Reference Frame. 25. Desktop - and increasingly more often, mobile platforms - are the places where the actual work is being done and where the actual relevant features of operating systems are being tested. It's unambiguously clear that for the operating systems to do their work well, they should be profit-driven, company-protected systems. Whether the source is open or closed isn't too important. What's important is that a company has a financial interest to make it work. So Apple is doing the same thing for iOS and Google for Android that Microsoft is doing for Windows. The underlying mechanisms that make all these things usable are completely analogous and they require capitalism. 26. You call the sharing of IT ideas, architecture and open core modules "socialism". By the same token you are a rabid socialist for openly discussing your physics theories. By all means let Apple and Microsoft tinker with buttons and pixels to accommodate the increasingly dumbed down populations, but let the core architecture be defined by the Open Source world. This massively benefits the corporate world as well as the rest of humanity, which is why the corporate world all use Open solutions in one way or another. 27. 
Yes, I am an insane socialist donating intellectual assets of multi-million values to others for free. But that's less unethical than to be forcing others to use unusable products. 28. It may be several hundred thousand generations behind the most obsolete flying saucer dimensional transfer management system in the galaxy, but .NET is the greatest thing in the known universe for sure. Do the Linux bug dwellers have anything remotely like this? I don't know since I haven't looked but I seriously doubt it. Congratulations to the officials of Munich city who have belatedly achieved common sense. 29. Hmm, think you have been brainwashed by microsoft, Lubos---there are plenty of uses for Linux...even Google uses a lightly morphed version, as does Android, etc...here is a partial list of surprising adopters from Wikipedia: --lots of free compilers as well for developers and programmers. 30. I have never communicated with Microsoft or read any of its opinions - unfortunately, I would say - so I couldn't have been "brainwashed by Microsoft". I am not saying that people aren't using all kinds of other products, and so am I. Concerning mobile OSes, I have devices with iOS, Android, as well as Windows Phone, and Android is the most expensive one. I am just warning against the political movement that is trying to force different systems upon desktop users whose majority clearly and voluntarily prefers Microsoft Windows as the market conditions unambiguously show. 31. Unlike benchtop chemistry and biology, physics can be mostly taught online, with engineers later being hired to do experiments. I sure would like Lubos to join an online university to create video lectures, at both advanced and entry level physics. -=NikFromNYC=-, Ph.D. in chemistry (Columbia/Harvard) 32. Honest question: What's so great about it? Can you explain or give an example? Thanks. 33. I have to say that I fail to see the Linux world as some sort of sinister kabal that is forcing innocents to use unusable systems. Look at the Linux desktop market share, and you can at least say that they have failed. Windows is great for Microsoft-style word processing and spreadsheets. Perhaps it's even OK for TeX/LaTeX, if there's a decent and easy to install distribution for it (I know there is one for OSX, not sure about Windows). Linux seems popular for scientific computing, and where such users want a more polished and easy to use system for their work laptop/desktop, they choose OSX, which gives you Unix underneath and a polished user interface on top. That's why a progressive household would have all three operating systems on their computers. I know mine does. :) 34. OT: Which reminds me ... I'm feeling nostalgic. It's many decades since every other word in those horrible computer trade magazines seemed to be about the 'goto' statement and 'spaghetti code'. Now all is silent — as far as I know anyway. Oh, how I miss the tedium of it all! Anyone care to rekindle the exquisite ennui? Hey, how about a discussion on punched cards versus paper tape? :) Incidentally, as far as operating systems go, I mostly use Windows simply because, reluctantly, that was all that was made available to me at one point (more accurately it started with that awful DOS), but I got used to it and I can do all I need to do with it. But most of all I use it these days because I'm buggered if I'm going to spend any time looking up the kind of stuff that I lost interest in and forgot about years ago just to make a change for the sake of Greater F#cking Spartan. 
Also VBA behind Excel can be very handy for a quickie, a little like a fast shag behind the bicycle shed. Just the ticket sometimes. :) P.S. Many years ago, but again long past my interest date, I surprised myself by reading Bjarne Stroustrup's book on the genesis of C++ (I forget the title) and found it fascinating. I'm pretty sure I'm fully cured now though. :) 35. I just noticed that Microsoft is currently in the process of shifting its German operational center to - München, Schwabing. Now that they are becoming a big tax payer over there, it seems inconvenient for the municipal government to run on Linux. After all, Linux won't finance any pleasure ('amigo') trips for the local politicians, Microsoft perhaps does ... 36. Absent a source and a sink of time... everything happens? Or nothing happens? The event horizon is when happening stops? Can entropy be static? 37. "Suggestions the council has decided to back away from Linux are wrong, according to council spokesman Stefan Hauf." Some meat: 38. Dear FlanObrien, the committee to review the computing in the city was probably built by the executive power in the city which is why one should also respect the interpretation of the executive power, and not the council, why it was done. 39. Believing the world should run on the level of three-year-olds is really very disturbing. It may also explain why social has become more and juvenile over time. I figure if you need pretty pictures and shiny baubles, you're not really looking for a computer. More like an electronic playmate. It's interesting that your Vista computer worked so well. Mine crashed, despised the peripherals (all of which I replaced) and drove me to buy an Apple to escape the Microsoft curse. Maybe I just really use my computer more than most and expect it to function like I want it, not like a three-year-old wants it. I'm a grown-up now. I want a grown-up computer.
Lattice QCD approach to Nuclear Physics Sinya Aoki    Takumi Doi    Tetsuo Hatsuda    Yoichi Ikeda    Takashi Inoue    Noriyoshi Ishii    Keiko Murano    Hidekatsu Nemura    Kenji Sasaki (HAL QCD Collaboration) Graduate School of Pure and Applied SciencesGraduate School of Pure and Applied Sciences University of Tsukuba University of Tsukuba Tsukuba 305-8571 Tsukuba 305-8571 Japan Center for Computational Sciences Japan Center for Computational Sciences University of Tsukuba University of Tsukuba Tsukuba 305-8577 305-8577 Japan Theoretical Research Division Japan Theoretical Research Division Nishina Center Nishina Center RIKEN RIKEN Wako 351-0198 Wako 351-0198 Japan IPMU Japan IPMU The University of Tokyo The University of Tokyo Kashiwa 277-8583 Kashiwa 277-8583 Japan Department of Physics Japan Department of Physics Tokyo Institute of Technology Tokyo Institute of Technology Tokyo 152-8551 Tokyo 152-8551 Japan Nihon University Japan Nihon University College of Bioresource Sciences College of Bioresource Sciences Fujisawa 252-0880 Fujisawa 252-0880 Japan We review recent progress of the HAL QCD method which was recently proposed to investigate hadron interactions in lattice QCD. The strategy to extract the energy-independent non-local potential in lattice QCD is explained in detail. The method is applied to study nucleon-nucleon, nucleon-hyperon, hyperon-hyperon and meson-baryon interactions. Several extensions of the method are also discussed. 1 Introduction One of the ultimate goals in nuclear physics is to describe hadronic many-body problems on the basis of the hadronic S-matrices calculated from first principle QCD. In particular, the nuclear forces are the most fundamental quantities: Once they are obtained from QCD, one can solve finite nuclei, hypernuclei, nuclear matter and hyperon matter by employing various many-body techniques developed in nuclear physics. Phenomenological nucleon-nucleon () potentials, which are designed to reproduce a large number of proton-proton and neutron-proton scattering data as well as deuteron properties have been constructed in 90’s and are called high-precision potentials. Some of the examples are shown in Fig. 1, which reflect characteristic features of the interaction for different values of the relative distance as reviewed in \citenTaketani1967,Hoshizaki1968,Brown1976,Machleidt1989,Machleidt2001: The long range part of the force ( fm) is dominated by one-pion exchange originally introduced by Yukawa[10]. Since the pion is the Nambu-Goldstone boson associated with the spontaneous breaking of chiral symmetry, it couples to the nucleon’s spin-isospin density and leads to not only the central force but also the tensor force. The medium range part ( fm) of the force receives significant contributions from two-pion () exchange [11] and/or heavy meson (, , and ) exchanges. In particular, the spin-isospin independent attraction of about 50 – 100 MeV in this region plays an essential role to bind the atomic nuclei and nuclear matter. The short range part ( fm) of the force is best described by a phenomenological repulsive core introduced by Jastrow [12]. The nuclear saturation, the nuclear shell structure, the nuclear superfluidity and the structure of neutron stars are all related to the properties of the nuclear force [13, 14, 15]. Furthermore, the hyperon-nucleon () and hyperon-hyperon () forces, whose information is still quite limited experimentally, are crucial to understand the structure of hypernuclei and the core of the neutron stars. 
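For orientation (added here as a standard-textbook reminder, not part of the original text), the one-pion-exchange contribution that dominates the long-range part of the force has the familiar form (with the delta-function contact term omitted and \(x = m_\pi r\)):
\[
V_\pi(r) \;=\; \frac{f_{\pi NN}^2}{4\pi}\,\frac{m_\pi}{3}\,(\vec\tau_1\cdot\vec\tau_2)
\left[(\vec\sigma_1\cdot\vec\sigma_2)\,\frac{e^{-x}}{x}
\;+\; S_{12}\left(1+\frac{3}{x}+\frac{3}{x^2}\right)\frac{e^{-x}}{x}\right],
\qquad x = m_\pi r,
\]
where \(S_{12}=3(\vec\sigma_1\cdot\hat r)(\vec\sigma_2\cdot\hat r)-\vec\sigma_1\cdot\vec\sigma_2\) is the tensor operator and \(f_{\pi NN}^2/4\pi \simeq 0.075\); the \(S_{12}\) piece is the tensor force referred to above.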
The three-nucleon forces (and the three-baryon forces in general) are also important to understand the binding energies of finite nuclei and the equation of state of dense hadronic matter. Three examples of the modern Figure 1: Three examples of the modern potential in (spin-singlet and -wave) channel: Bonn[6], Reid93[7] and Argonne [8]. Taken from Ref. \citenIshii:2006ec. It has been a long-standing challenge in theoretical particle and nuclear physics to extract the hadron-hadron interactions from first principle. A framework suitable for such a purpose in lattice QCD was first proposed by Lüscher[16]: For two hadrons in a finite box with the size under periodic boundary conditions, an exact relation between the energy spectra in the box and the elastic scattering phase shift at these energies has been derived. If the range of the hadron interaction is sufficiently smaller than the size of the box , the behavior of the two-particle Nambu-Bethe-Salpeter (NBS) wave function in the interval is sufficient to relate the phase shift and the two-particle spectrum. This Lüscher’s finite volume method bypasses the difficulty to treat the real-time scattering process on the Euclidean lattice. Furthermore, it utilizes the finiteness of the lattice box effectively to extract the information of the on-shell scattering matrix and the phase shift. A closely related but a new approach to the hadron interactions from lattice QCD has been proposed recently by three of the present authors [9, 17, 18] and has been developed extensively by the HAL QCD Collaboration. (Therefore the approach is now called the HAL QCD method.) Its starting point is the same NBS wave function as discussed in Ref. \citenLuscher:1990ux. Instead of looking at the wave function outside the range of the interaction, the authors consider the internal region and define an integral kernel (or the non-local “potential” in short) from so that it obeys the Schrödinger type equation in a finite box. This potential can be shown to be energy-independent by construction. Since for strong interactions is localized in its spatial coordinates due to confinement of quarks and gluons, it receives only weak finite volume effect in a large box. Therefore, once is determined and is appropriately extrapolated to , one may simply use the Schrödinger type equation in infinite space to calculate the scattering phase shifts and bound state spectra to compare the results with experimental data. Since is a smooth function of the quark masses, it is relatively easy to handle on the lattice. This is in sharp contrast to the scattering length, which shows a singular behavior in the quark mass corresponding to the formation of the hadronic bound state. A further advantage of the HAL QCD method is that it can be generalized directly to the many-body forces and also to the case of inelastic scattering. Studying structure of and hypernuclei is one of the key challenges in modern nuclear physics. Also, the central core of the neutron stars will have hyperonic matter if the neutron beta-decays to hyperons become possible at high density. The hyperon-nucleon () and hyperon-hyperon () interactions are crucial to determine the level structures of hypernuclei as well as onset-density of hyperonic matter in neutron stars [19]. By generalizing the scattering in the flavor SU(2) space to the baryon-baryon () scatterings in the flavor SU(3) space, the HAL QCD method can give the and potentials as natural extension of the potentials. 
Such extension is also useful for identifying the origin of the short-range repulsive core of the potential and for studying possible six-quark state such as the -dibaryon. In this article, we review the basic ideas and recent progress of the HAL QCD method to hadron interactions. (As for the Lüscher’s finite volume method, see a recent review Ref. \citenBeane:2010em.) In Sec. 2, the basic strategy to define the potential in QCD is explained. In Sec. 3, we introduce lattice formulations of the time-independent HAL QCD method originally proposed in Refs.\citenIshii:2006ec,Aoki:2008hh,Aoki:2009ji as well as its time-dependent generalization. In Sec. 4, some recent results of lattice QCD calculations for the potential are given in both quenched and full QCD. Magnitude of the non-locality in is also discussed in the section. In Sec. 5, the method is applied to the hyperon-nucleon interactions such as and systems. In Sec. 6, interactions between octet baryons are investigated in the flavor SU(3) limit, where up, down and strange quark masses are all equal. In Sec. 7, a generalization of the HAL QCD method to the case of inelastic scattering is given. In Sec. 8, we show results of the three-nucleon potential, especially its short distant structure. In Sec. 9, an application to the kaon-nucleon scattering is considered. Sec. 10 is devoted to summary and concluding remarks. 2 Defining the potential in QCD 2.1 Nambu-Bethe-Salpeter (NBS) wave function A key quantity to define the baryon-baryon() “potential” in QCD is the equal-time Nambu-Bethe-Salpeter wave function, where is a QCD eigenstate for two baryons with equal mass , helicity and , total energy , the relative momentum , and the total momentum (we take in this paper). Generalization to the unequal mass can be formulated in a similar manner. In the case of two nucleons, the local interpolating operator is taken as where , are the color indices, and is the spinor index. The charge conjugation matrix is given by , and are proton and neutron operators while denote up and down quark operators. Here implicitly has two pairs of spinor-flavor indices from as well as two helicity indices and . The most important property of the above NBS wave function is as follows. If the total energy lies below the threshold of meson production (i.e. with the meson mass ), it satisfies the Helmholtz equation with at , Furthermore, the asymptotic behavior of the radial part of the NBS wave function for given orbital angular momentum and total spin reads [21, 18] Here is nothing but the phase shift obtained from the baryon-baryon S-matrix in QCD below the inelastic threshold. It should be remarked here that only the upper components of the spinor indices for the NBS wave function ( and ) are enough to reproduce all scattering phase shifts with and (See Appendix A of Ref.\citenAoki:2009ji for the precise expression of Eq.(4) and its relation to the S-matrix in QCD.) 2.2 Non-local potential from the NBS wave function From the NBS wave function, we define a non-local potential through the relation [9, 17, 18] where is expected to be short-ranged because of absence of massless particle exchanges between two baryons. As mentioned in the previous subsection, it is enough to consider the upper spinor indices of : Then 16 components of can be determined from components of for 4 different combinations of . Since the NBS wave function is multiplicatively renormalized, the potential is finite and does not depend on the particular renormalization scheme. 
Note that, while Lorentz covariance is lost by using the equal-time NBS wave function and Eq. (5) is written as a Schrödinger type equation, no non-relativistic approximation is employed here to define . The non-local potential has been shown to be energy-independent[17, 18]. To see this, let be the space spanned by the wave function at : where represents quantum numbers of the NBS wave function other than energy . Then the projection operator to is given by where is defined as the inverse of the Hermitian operator which satisfies in the restricted indices that . (We here assume that does not have zero eigenvalues in this restricted space.) Using these, the non-local potential is defined by Then, it is easy to observe that the above non-local potential satisfies Eq.(5) at : This non-local potential is energy independent by construction. It is also easy to see that we can make the potential local but energy-dependent. Similar trade-off between non-locality and energy-dependence has been also discussed long time ago in Ref.\citenKR56 in a different context. Note however that the non-local potential which satisfied Eq. (5) at is not unique. For example, one may add a term such as with arbitrary functions to the non-local potential without affecting Eq.(5) at . We remark that One may define a non-local potential different from Eq.(9) as which satisfies Eq. (5) for all . This potential, however, becomes long-ranged, due to the presence of inelastic contributions above . An extension of the HAL QCD method, which keeps the short-range nature of the potential while inelastic channels open, will be discussed in Sec. 7. The most general form of the Schrödinger type equation for the NBS wave function has energy-dependent and non-local potential as shown in Ref.\citenLuscher:1990ux. However, one can always remove its energy-dependence as demonstrated in the above derivation. 2.3 Velocity expansion of the non-local potential If one knows NBS wave functions for all , the non-local potential can be constructed according to Eq. (9). In lattice QCD simulations in a finite box, however, only a limited number of wave functions at low energies (ground state and possibly a few low-lying excited states) can be obtained. In such a situation, it is useful to expand the non-local potential in terms of the velocity (derivative) with local coefficient functions[23]; In the lowest few orders we have where , is the Pauli-matrix acting on the spin index of the -th baryon, is the total spin, is the angular momentum, and is the tensor operator. Each coefficient function is further decomposed into its flavor components. In the case of nucleons (i.e. ), we have where is the Pauli-matrix acting on the flavor index of the -th nucleon. The form of the velocity expansion (13) agrees with the form determined by symmetries[24]. At the leading order of the velocity expansion, the local potential is given by which is obtained from the NBS wave function at one value of . Since for the spin-singlet state, for example, one has 2.4 Remarks on the “scheme”-dependence of the potential We emphasize that the potential itself is not a physical observable, and is therefore not a unique quantity in quantum mechanics and in field theory. In fact, the baryon-baryon potential in QCD depends on the choice of the interpolating baryon operator to define the NBS wave function. 
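Returning for a moment to the leading-order formula of Sec. 2.3 before taking up the scheme dependence in more detail: the inversion can be illustrated with a toy sketch (my own, in arbitrary units with \(\hbar = 1\), \(2\mu = 1\), and a model Gaussian well standing in for a lattice-determined wave function; this is not the collaboration's analysis code). Solve the radial equation for the S-wave reduced wave function \(u(r) = r\,\psi(r)\), then reconstruct the potential from \(V(r) = E + u''(r)/u(r)\) and compare with the input.

import numpy as np

# Toy check of the leading-order inversion, in units with hbar = 1 and 2*mu = 1:
#   -u''(r) + V(r) u(r) = E u(r)   for the S-wave reduced wave function u(r) = r * psi(r),
# so the potential can be read back from the wave function as  V(r) = E + u''(r) / u(r).
N, R = 1500, 20.0
r = np.linspace(0.0, R, N + 2)[1:-1]             # interior grid points; u vanishes at r = 0 and r = R
dr = r[1] - r[0]
V_in = -5.0 * np.exp(-(r / 1.5) ** 2)            # model attractive Gaussian well (arbitrary stand-in)

# Finite-difference Hamiltonian; its ground state plays the role of the NBS wave function
H = (np.diag(2.0 / dr**2 + V_in)
     + np.diag(-np.ones(N - 1) / dr**2, 1)
     + np.diag(-np.ones(N - 1) / dr**2, -1))
E, U = np.linalg.eigh(H)
E0, u = E[0], U[:, 0]

# Invert: V(r) = E0 + u''/u on interior points.  The same finite-difference Laplacian is used
# both ways, so this round trip is exact up to round-off; with real lattice data it is approximate.
upp = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dr**2
keep = np.abs(u[1:-1]) > 1e-6 * np.abs(u).max()  # skip the far tail where u is essentially zero
V_out = E0 + upp[keep] / u[1:-1][keep]

print(f"E0 = {E0:.4f}")
print(f"max |V_reconstructed - V_input| = {np.max(np.abs(V_out - V_in[1:-1][keep])):.2e}")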
Among others, the local baryon operator used in HAL QCD method is a most convenient choice, since the reduction formula for composite particles can be derived in a simplest way for this choice [25, 26, 27]. Nevertheless, one may adopt other interpolating operators (such as higher dimensional operators and non-local operators): Particular choice of the baryon operator and associated potential may be considered as a ”scheme” to describe physical observables such as the scattering phase shift and the binding energies. The potential, although being “scheme”-dependent, is still useful to understand physical phenomena as we know well in quantum mechanics. The repulsive core of the nucleon-nucleon potential in the coordinate space, which is known to be the best way to summarize the scattering phase shift at high energies, is one of such examples.111Although in a different sense of the “scheme”, analogous situation in quantum field theory is the running coupling constant. It is scheme-dependent quantity but is quite useful to understand the high energy processes such as the deep inelastic scattering data. Among different schemes, good convergence of the velocity expansion is an important check of the choice of our present scheme. Such a check can be carried out by examining the dependence of the lower order potentials. For example, if we have for , we can determine the unknown local functions of the velocity expansion in different ways. The variation among different determinations gives an estimate of the size of the higher order terms. Furthermore one of these higher order terms can be determined from for . The convergence of the velocity expansion will be investigated explicitly in Sec. 4. The analysis in this section shows that the use of Schrödinger type equation with non-local potential is justified to describe the scattering in QCD. The key quantity is the NBS wave function, whose asymptotic behavior encodes phases of the S-matrix for the scattering. If the velocity expansion of the non-local potential is reasonably good at low energies, one can use the LO and NLO potentials to investigate various nuclear many-body problems. 3 Lattice formulation We now discuss procedures to extract the NBS wave function from lattice QCD simulations. For this purpose, we consider the correlation function on the lattice defined by where is a source operator which creates two-baryon states. Inserting a complete set and considering baryon number conservation, we have where and ellipses represent contributions from inelastic states such as , , etc. At large time separation , we obtain where is the lowest energy of states. Since the source dependent term is just a multiplicative constant to the NBS wave function , the potential defined from is manifestly source-independent. For this extraction of the wave function to work, the ground state saturation for in Eq. (20) must be satisfied by taking large . In practice, however, becomes very noisy at large . In Sec. 3.4, we will discuss more on this point. 3.1 Choice of source operators We choose the source operator to fix quantum numbers of . Since lattice QCD simulations are usually performed on a hyper-cubic lattice, the cubic transformation group instead of is considered as the symmetry of 3-dimensional space. Therefore the quantum number is classified in terms of the irreducible representation of , which is denoted by , , , and whose dimensions are and 3, respectively. 
Relation of irreducible representations between and is given in Table 1 for , where denotes the angular momentum for the irreducible representation of . For example, the source operator in the representation with positive parity generates states with at , while the operator in the representation with negative parity produces states with . For two octet-baryons, the total spin becomes , which corresponds to () and () of . The total representation for a two baryon system is thus determined by the product , where for the orbital ”angular momentum” while for the total spin. In Table 2, the product is decomposed into the direct sum of irreducible representations. 0 (S) 1 0 0 0 0 1 (P) 0 0 0 1 0 2 (D) 0 0 1 0 1 3 (F) 0 1 0 1 1 4 (G) 1 0 1 1 1 5 (H) 0 0 1 2 1 6 (I) 1 1 1 1 2 Table 1: The number of each representation of which appears in the angular momentum representation of . denotes the eigenvalue under parity transformation. Table 2: The decomposition of a product of two irreducible representations, , into irreducible representations in . Note that by definition. We often use the wall source at defined by where are upper component of the spinor indices while are flavor indices. Here is obtained by replacing the local quark field of by the wall source, with the Coulomb gauge fixing at . Note that this gauge-dependence of the source operator disappears for the potential. All states created by the wall source have zero total momentum. Among them the state with zero relative momentum has the largest magnitude. A reason for employing the wall source here is that the ground state saturation for the potential at long distance is better achieved for the wall source than for other sources. Let us consider the case of the two nucleons. The source operator has zero orbital angular momentum at , which corresponds to the representation with positive parity. Therefore, the total angular momentum can be fixed by using the spin recoupling matrix , e.g., and for as Here is the parity and is the total isospin of the system. Since the nucleon is a fermion, exchange of the nucleon operators in the source should give a minus sign. This fact fixes the total isospin given the total spin: or . (Note that are antisymmetric while are symmetric under the exchange.) Since and , the state with either for the spin-singlet or for the spin-triplet is created at by the corresponding source operator. The NBS wave function extracted at has the same quantum numbers as they are conserved under QCD interactions. In addition the total spin is conserved at for the two nucleon system with equal up and down quark masses: Under the exchange of the two particles, the constraint must be satisfied due to the fermionic nature of the nucleon. Also, the parity and the isospin are conserved in this system. Therefore is conserved. However, is not conserved in general. While the state with always has even at , the one with has both and components222This can be seen from Table 2 for (spin-triplet), which also tells us the existence of component in addition. The extra component is expected to be small since it appears as a consequence of the violation of on the hyper-cubic lattice. at , which corresponds to and in , respectively. Note that and are used to represent the total and orbital quantum numbers respectively for as well as for . The orbital angular momentum of the NBS wave function for can be fixed to a particular value by the projection operator as where is extracted from for large . 
The total spin projection operator is for spin-singlet and for spin-triplet, but this is redundant since the total spin , already fixed by the source, is conserved as mentioned before. The projection operator of the orbital angular momentum for an arbitrary function is defined in general by for , where denotes the character of the representation in , is its complex conjugate, is one of 24 elements in and is the dimension of . 3.2 Leading order potential: spin-singlet case We present the procedure to determine potentials at the leading order(LO): Since and for the spin-singlet case, the LO central potential for the spin-singlet case is extracted from the state as where in isospin space. The potential in the above is often referred to as the central potential for the state, where the notation represents the orbital angular momentum (see Table 1), the total spin and the total angular momentum of . It is noted, however, that in the leading order of the velocity expansion, the potential does not depend on the quantum number of the state . Moreover the state may contain components other than , though the component may dominate. Therefore it is more precise to refer to as the spin-singlet (isospin-triplet) central potential determined from the state with . A possible difference of spin-singlet central potentials between this determination and others such as the one determined from gives an estimate for contributions from higher order terms in the velocity expansion. 3.3 Leading order potential: spin-triplet case Both the tensor potential and central potential appear in the LO for the spin-triplet case. Let us consider the determination from state. The Schrödinger equation for this state becomes with , where the spin-triplet central potential is given by We separate the Schrödinger equation Eq. (29) into the and non- components by using projection operators and as Note that and commute with , and , whereas they do not commute with . Non- component receives contributions from , and , among which only and contribute to the D-wave. Since the contribution from component turns out to be negligible in the numerical simulation, the non- component is dominated by D-wave contributions. Using these projections, and can be extracted as In numerical simulations, in state is mainly employed. One may focus only on the component of the wave function and define so-called the effective central potential for the spin-triplet (isospin-singlet), often used in nuclear physics: The effect of , which leads to a transition from the component to the non- component of the wave-function, is implicitly included in this effective central potential: For small , the difference between and is as the second order perturbation tells us. 3.4 Time-dependent HAL QCD method One of the practical difficulties to extract the NBS wave function and the potential from the correlation function Eq.(18) is to achieve the ground state saturation in numerical simulations at large but finite with reasonably small statistical errors. While the stability of the potential against has been confirmed within statistical errors in numerical simulations[9, 18], the determination of for the ground state suffers from systematic errors due to contaminations of possible excited states. There exist three different methods to determine . The most well-known method is to determine from the dependence of the correlation function Eq.(18) summed over to pick up the zero momentum state. 
On the other hand, one may determine of by fitting the dependence of the NBS wave function with its expected asymptotic behavior at large or by reading off the constant shift of the Laplacian part of the potential from zero at large . Although the latter two methods usually give consistent results within statistical errors, the first method sometimes leads to a result different from those determined by the latter two at the value of employed in numerical simulations. Although, in principle, the increase of is needed in order to see an agreement among three methods, it is difficult in practice due to larger statistical errors at large . The problem above is common in various applications of lattice QCD. Fortunately, the original HAL QCD method can be improved to overcome this difficulty as follows. Let us consider the normalized correlation function defined from Eq.(18) as where and . By neglecting the inelastic contributions above the meson production threshold, represented by , for large enough 333This limitation for can be removed if the coupled channel potentials are introduced as in Sec. 7., non-relativistic approximation leads us to where the Schrödinger equation Eq. (5), the defining relation of the non-local potential , is used to replace by . By applying a time derivative on both side, we have the time-dependent Schödinger equation in imaginary time Now the velocity expansion of the non-local potential leads us to the formula of the leading order potential Once the ground state saturation is achieved in , Eq.(39) reduces to Eq.(17), for example for the spin-singlet case. Indeed, in this case, is safely replaced by the non-relativistic energy of the ground state under the non-relativistic approximation. The non-relativistic formula for above can be generalized to the case that masses of two particles are different by the replacement, . Note also that the potential extracted in this method automatically satisfies without constant shift. This property can be used to check whether this extraction works correctly or not. The non-relativistic approximation used to derive Eq.(36) can be removed by using the second order derivative in ; which leads to
pgtruspace's blog about things that interest me. Category Archives: space flight gravity is a myth The Earth sucks or does it? On Gravity Gravity causes exactly the same warpage as charge fields in atomic structures. Gravity behaves exactly the same as charge fields as to effects over distance. Charge fields are created by gravity. The potential for acceleration in an earth gravity field is 32ft per second for each second of acceleration, the charge field of earth is an linear accelerator. Investigations conducted in the 1800s established an average charge field of 300 volts per meter average in dry air. In a boring week I created a number of gravity batteries of oil, paper and foil. In all cases the batteries were positive on top and negative on the bottom. The voltage over distance was about .50 milivolts per 10 mils or 300volts over 1 meter. This was dielectric warpage as in a condenser, no current flow measured as this was a device to measure potential created by gravity. Do not confuse voltage potential with current flow! You have to gather the charge bodies as well as develop potential and create a controllable current flow as well as allow for recharge of the device. Dielectric warpage is the displacement of the nucleus from the center of the electron shell. The electron shell is the atomic surface and the nucleus the atomic mass or center of gravity. Whether gravitational fields or electrical charge fields the effect is the same, atomic warpage, ………..  for more of this post.On gravity gravity and aether Dielectric Warpage, This above representation of the atomic structure is difficult as the nucleon is so small within the electron shell. It is said that if the proton was the size of a basketball the electron shell would be the size of the United States. About 1 foot in 3,000 miles! Electro-static forces within a charged capacitor warp the diaelectric of the atoms within the space between the plates, just as gravity would, creating the potential for acceleration as gravity does. This creates the possibility of creation of artificial gravity between 2 charged plates. …pg Electronic Engine Proposed Helical Engine Proposed Computer simulation results of Helical Engine Proposed: SPACECRAFT PROPULSION AND POWER study results of computer simulations 16 page .pdf on a study of computer simulations for a proposed electronic space propulsion system. Nothing real here but an interesting study on the possibility of electronic propulsion none the less….pg Navy Patent to modify Mass / Inertia Patent for a craft using an Inertial Mass Reduction Device   patent number 10,144,532B2 It was brought to my attention that a researcher for the U.S. Navy had patented a device for the reduction of mass / inertia,  last December, on work he has been doing  for the last 12 years.  There are many citations of papers of his own as well as work done by others, After reading the patent claims it appears to me that this is a proposed device patent meant to cover the possibilities rather then an actual device. The author seems to use a great deal of techno-babble to dance around,  Not saying Aether,  to describe the EMF manipulation of the near to the craft, space, to reduce the effects of Mass / Inertia by creating a richly confused, 3 Dimensional field…pg My adventure in 3D printing monoprice iiip Monoprice 3D Printer IIIPv2 This Little 3D printer is about the minimum usable, as well as inexpensive, machine available.   $250 at this date. 
2 years ago I purchased this little machine, with it’s 200mm x 200mm ( 7.8″ x 7.8″) build plate, so my grandson could get acquainted with this new technology along with his computer abilities. Last fall with the nearby Camp Fire and attendant bad air quality, I was reminded of the need for my  water based air cleaners. I REALLY needed to get new ones built! even if just for me and other family members. So I set the printer up next to my computer and began the design. see: A new old project & The new old project continues. My sister prevailed on me to help launch this new business, it became important to share what I have learned, so she and her son could become familiar with the technology without starting from total The question of the cost to be a part of using 3D printing was posed and after some thought I replied; “ I am amazed at how little I have spent for the things that this little IIIPv2 printer, Optiplex780 computer and I have accomplished over the last 3 months. For less then a thousand dollars we have created things that would cost me tens of thousands and 2 years, 25 years ago. The 21st century is an Amazing place to this 20th century man…pg An excellent pdf on filament printing for a beginner starting out…pg Back in 1986 I took a temporary job fabricating equipment for the Silicon Valley electronics industry. Equipment such as tanks, containment trays, fume duct work and fume scrubbers created from shaped plastic sheet stock parts Hot air welded together with a hot air welder and plastic rod.  Plastic welding requires that the material be heated to the point that it will flow and stick, but not yet a fluid as one uses in metal welding. Plastic is made up of long chain hydrocarbons much like spaghetti that must be heated to the point that it is much like wet cooked pasta that will flow and stick but not break up or have it’s structure destroyed, a rather fine line of temperature that is particular to that type of plastic or alloy of plastics. 3D printing of plastic filaments is carried out under much the same conditions. In this case a miniature extruder under the control of a computer directed robot is laying down “welding rod” to create an object. The job of the operator is to evaluate the conditions, manage the heat energy and material applied, to create the conditions for a good weld buildup of the part. Due to the nature of the device being used the building part must remain solidly in place on the build bed during it’s creation. Much of the problems encountered in 3D printing center around keeping the building part in stuck in place. Then, at the end of the build the part should be easily removed so that the bed alignment is not disturbed, the bed surface is not damaged or warped, and the next part can be started. Printer Notes: New printer setup: First thing, check all fasteners for tightness, Modestly tight, not finger tight, not OMG tight, but well fastened. All attachments must remain stable. The software assumes a stable machine and the printer does not self correct for any displacement during operation. Set printer on a resilient bed such as ridged foam, hard rubber foam, etc. This will greatly reduce noise transfer and reduce resonant movements as the build bed and the mass of the printed object is rapidly moved under the Extruder head. Tie down the “Z” towers solid to frame, with angle braces or attachments to enclosure frame. The towers MUST be stable to the frame, there must be no chance of induced wobble during print bed movements. 
Any wobble can result in the nozzle encountering the solid work piece, causing a displacement that will ruin the work and may damage the printer. We will be using ABS so heat management is critical for best results. The operating printer bed must be in a very warm, draft free environment, so an enclosure of some kind is important for consistent results. Being able to see the results of the printer’s operations as they happen is valuable, so the operator can observe, make any adjustments needed is very useful, so a “window” should be considered. Periodic service before printing: To insure that the part remains “stuck” to the print bed during it’s creation is critical to success. Be sure to clean bed covering of all dust and oils. Isopropal Alcohol works best, Acetone will work but will damage PEI coverings. Or add a glass covering, glass will require additional time to heat up ( 15-20min.), any glass will work and is less likely to be damaged by use, thicker glass will be less likely to warp, but glass will need additional fasteners to prevent movement on the bed table. Glass adds to the mass of the bed that will be in motion as well as it requires changes in the bottom limit switch position due to it’s additional thickness. We are using a Borosilicate glass covering as that has the best”stick” while hot and best release when cooled. Do not touch the bed covering after cleaning. To improve “stick” of the part to the build bed, various materials are often used to act as temporary “glue”. We are using ABS dissolved in Acetone as our sticking agent for most parts and sometimes Painters tape for really difficult to stick, critical large parts. The Painters Tape is the last resort as it is difficult to remove after the completion of the print. Also used by others are Hair Spray, paper glue stick, as well as salt and sugar. All water based that will stick hot and pop lose on cooling. many people print small and light pieces with no glue and others use special build plate coverings that seem to work with no additional adhesion.  When printing with ABS it is critical that the piece remain warm and well adhered to the build bed during the build process. Any draft that might cool the bed, part or Extruder can ruin the work as well as the “stick”to the bed. Temperature MANAGEMENT is critical while working with ABS! It is critical to adjust the machinery to be properly aligned, the Extruder must move parallel to the bed to lay even layers onto the surface of the build. In our type of printer the part creation bed moves back and forth in the “Y” plain under the Extruder. The Extruder moves in the side to side motion of the “X” plain and is supported by it’s carriage that is moved in the “Z” direction up to lay down the layers. In our case the Extruder carriage  is controlled by two stepper motors that sometimes get out of synchronization, measure the distance of the carriage rods over the bed to be sure they are parallel to the bed. If not, they can easily be rotated into alignment while the printer is off. While energized they are electrically locked together. To “level” or trammel the bed to printer nozzle there are adjusting thumb wheels or nuts in each corner of the bed. These are adjusted corner to corner, at least 3 times around. As you adjust one corner the diagonally opposite one will change as the bed teeters on the other two corners. Typewriter paper works well as a thickness gauge. Nozzle should just barely “grip” the paper to the bed. 
This must be completed on a “hot” bed to be true for heated operation. It is very important that the first layer be properly stuck to the bed. We use “brim” as an attachment enhancement, as it adds no additional height to the parts, something that is critical for maintaining measurements when several parts are assembled together. Once the printer begins the “brim”, the first deposition from the Extruder can be examined for good attachment to the build plate as well as the proper “squish” width and thickness. It is at this time that I do final adjustment of the bed. There are also now add-ons that will test the bed surface distances and write into the Gcode the needed nozzle position to properly stick that first layer. After the print is completed and the print bed cools, I slide a sharp thin blade under it to help release it from the bed. The more gently you can remove the part the better, to prevent warpage or displacement of the bed adjustments. Sharp thin blade tool: I took a good quality, flexible putty knife and ground one side into a wood-chisel-like sharp edge. To assist lifting the part, or for scraping the build plate, I use it bevel side up to get under the brim and part to begin lifting them. Caution! Do not use this on a warm, soft build plate cover material such as PEI; it will cut right through it. Use only on glass or other hard materials. Software we are using: Acad 17 is being used to model the parts and export them as STLs. There is also Fusion 360, available to private users, as well as several free 3D modeling programs. These STLs are exported into the slicer, which reads the files and then computes the needed instructions that the printer carries out. Printers are dumb; they are a very simple computer that only handles specific movements and temperatures. Slicers convert the object into step-by-step directions for the printer to follow. Things like: maintain temperature, start extruding, move x-axis 10 mm and y-axis 2 mm, and so on. Most of those commands start with a G, hence the name Gcode. A slicer translates the model slices into the needed movements, speeds and temperatures that are set in configuration instructions which the printer understands. (A short illustrative sketch of what such Gcode looks like follows at the end of these software notes.) “Repetier-Host”, a freeware program, is being used to import the STL files and manage the “slicer” that creates the Gcode files and serves those instructions to the printer. Cura, a freeware program, is a slicer that is used to create the needed Gcode files that instruct the printer on how to execute the creation of the required piece. These Gcode files are the instructions that operate the “printer” to build the object one layer at a time, depositing material in the amounts and positions dictated by the information developed by the “slicer” program. The needed Gcode is specific to the printer being used, and the slicer instructions are set in configuration before the slicer is engaged. The Gcode files are served to the printer through WiFi, a USB connection, or via an SD card by Repetier-Host, the server program that the “slicer” is embedded in. An early problem was that prints were so porous that water would readily go through the walls, but the tops and bottoms were quite solid. Found that tops & bottoms were set by default at 3x and the walls at 1x. I reset the tops & bottoms to 2x as well as set the walls to 2x. This increased the time required and material used but resulted in a substantial improvement in part quality.
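To make the slicer-to-Gcode step described above concrete, here is a minimal sketch in Python. The commands shown (M140, M104, G28, G1) are standard RepRap/Marlin-style Gcode, but the temperatures, coordinates and feed rates are made-up placeholders, not output from Cura or Repetier-Host:

```python
# Illustrative only: hand-written Gcode-style commands of the kind a slicer
# generates. Real slicer output contains thousands of such lines; the values
# below are placeholders, not settings for any particular printer.
gcode = [
    "M140 S100",            # set bed temperature to 100 C (typical ballpark for ABS)
    "M104 S235",            # set hotend temperature to 235 C
    "G28",                  # home all axes
    "G1 Z0.25 F300",        # move the nozzle to first-layer height
    "G1 X10 Y2 E0.5 F1800", # move 10 mm in X and 2 mm in Y while extruding 0.5 mm of filament
]

with open("example.gcode", "w") as f:
    f.write("\n".join(gcode) + "\n")
```

The printer firmware simply executes such lines in order, which is why all of the judgement about temperatures, speeds and layer heights lives in the slicer configuration rather than in the printer itself.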
Increased the extrusion rate 10% to get a stronger, more solid part, and the result was… As I was attempting to print the third of 3 motor bases, again I hear this pop and the printer dislocates the “X” axis at about the 1.1 inch line. What the heck! This was 3 times in a row at about the same point, damaging the print. Now I will have to section off the top quarter of an inch of the model, print 3 of those, and repair all 3 motor bases. But why? I sleep on this problem and… of course! The extrusion is too high. We have been making the layers just a bit too thick, and after an inch of layers the build was too high; during a travel move from side to side the nozzle, no longer high enough above the part, impacted the build so hard that the “X” belt jumped the sprocket and the machine lost its registration of position, always at about the same spot. Reduced the extrusion rate, end of that problem. (A rough back-of-the-envelope check of how a small per-layer error accumulates appears after these notes.) notes from 3D Printing: Pg Sharrow: One of the things I learned during years of plastic welding is that every formulation of plastic and COLOR behaves differently. White is different from black, blue, or red. Every time you change supplier or color you must change your technique. Rob Smith: I would print a temp tower… Believe it or not, every spool has its own ideal temperature, even if it’s the same material in the same color from the same manufacturer. Cura has a plugin called “ChangeAtZ” that makes it easy to change the temperature at specific layers, and there are dozens of models on Thingiverse to choose from. A good temp tower will have features that highlight bridging quality, overhangs, stringing control, and of course, surface finish. I like this one: Rob Smith: No problem! Protip: the temp tower is a great diagnostic tool, but it’s not the only one. There are a bunch of prints that are popular for “benchmarking” your printer’s performance… Lately Benchy has been the most popular (and before that it was Marvin)… but if you’re really looking to fine-tune your printer, there’s a collection of carefully-chosen calibration prints to challenge your sanity. If you have the chutzpah to try printing all of them: a) remember each one is a unique purpose-built test, and it’s designed to be challenging, and b) post pictures! #3DBenchy – The jolly 3D printing torture-test by CreativeTools. For setup of the IIIPv2 in Cura. Aaron Greengrass: Printers are different enough from each other that there are no ‘standard’ settings. My hotend has had 4 different layer cooling fans. Each one requires a number of changes to print settings to get a good print. Of the 9 printers I’ve owned so far, only 2 have even been vaguely similar, and even they didn’t have identical settings. Test, adjust, and test again. Print slower. Watch a lot of YouTube videos about dialing in printer settings. Look at the print troubleshooting pages (the one Simplify3D has is pretty good). Expect this to require trial and error. Expect also that whatever settings you get working will require changes not only for different types of filament, but in many cases for different colors (i.e. translucent vs. solid colors print very differently). Todd Saltzman: The way you have to approach it is, when your print comes out bad you need to identify what exactly is bad about it — is it over-extruding, or being too stringy, etc.? Then you need to look at the solutions to your problem and just adjust small things at a time.
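As a rough illustration of the nozzle-crash story above (the numbers here are assumed for the sketch, not measured from the IIIPv2): if each layer comes out even a few percent taller than the slicer thinks it is, the error accumulates layer by layer until the nozzle is dragging through plastic during travel moves.

```python
# Back-of-the-envelope sketch: cumulative height error from slight over-extrusion.
# All values are assumptions for illustration; the real layer height and error differ.
nominal_layer = 0.25      # mm, what the slicer commands per layer
actual_layer = 0.26       # mm, what a slightly over-extruding hotend actually deposits

target_height = 28.0      # mm, roughly the 1.1 inch mark where the crashes occurred
layers = int(target_height / nominal_layer)

error = layers * (actual_layer - nominal_layer)
print(f"After {layers} layers the part is about {error:.1f} mm taller than the nozzle expects.")
# Roughly 1 mm of interference is plenty for the nozzle to plow into the part
# and skip the X belt during a fast travel move.
```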
advise and links I greatly appreciate…pg link to hot end assembly and PID tuning : The Great Coil Great coil in disk The great coil and plasma jet in place under the Disk frame In this configuration the black coils are the primary and the white above is the necklace coils. Each of the necklace coils is a single loop terminated at a condenser unit distributed around the circumference of the disk along with the driving primary. HV Coil with Pincushion plasma cone The Original Device bottom of old coil deck, coil is welded to the other side of the central portion of the device. The original device was constructed of welded polypropylene, the bulge rings were built in condensers that were terminations of a necklace of wired loops  bundled with the primary loop of the Great Tesla coil. During early testing the condensers failed and could not be repaired. All of this outer works was cut away to salvage the Great Coil. Tesla Coil plasma jet Flat spiral wound tesla coil with central pincushion/plasma jet The central part of this device is the flat wound spiral coil. This coil is created from about 1200ft of 14 gauge copper wire, 120 turns, at 5 turns to the inch, fixed with 5/32″ polypropylene rod welded to the 1/2 inch thick polypropylene deck. The insulation value between winding turns  was tested to 40,000 volts per turn or 4 million volts of stress for the entire coil. The coil center termination is a 8 inch disk with pins set in a 1/4 inch grid for 16 pins per inch square. The pincushion is inside a polypropylene forcing cone that terminates with a 2 inch plasma conduit. The necklace bundle is just visible inside the ring of the left picture, the central discoloration is the coil it’s self spiraling out from the pincushion/ plasma cone. During initial testing of the device, it was being directly powered from the 15,000 volt transformer when a lead came loose and began arching. The coil went into full Tesla mode of about 2 million volts at the pincushion. A violet plasma jet 6 feet high erupted from the cone! After 25-30 seconds the loose wire came free and the system died. Some of the built in condensers were damaged and could not be repaired. The Great Coil was removed from the damaged device for further use as it would be very difficult to replace …pg For a description of the Tesla Coil driver parts see:   Pictorial of parts. For a description of the Tesla Coil  driver see:   primary-driver for  the layout and parts test of driver:   condenser-sparkgap-test  These above links will open new tabs so you won’t get lost…pg Physics discussion on Aether Propulsion disk gradiant Physics discussion on Aether Propulsion lifted from ChiefIO blog 25 September 2018 at 5:08 am It is looking like pervasive “fields” are all that is real, and “particles” are just what you get when something pokes the field. (So a photon hitting the “electron field” causes the “electron particle” to come into being and changes the “2 slot” outcome to the particle form… ) Basically, if we don’t look then everything is a field. It is when we look that it becomes particles… Isn’t QM fun? 8-} /sarc; 25 September 2018 at 5:49 am yup! pretty much sums it up. K.I.S.S. ! 😎 …pg • Simon Derricutt says: 25 September 2018 at 9:46 am Maybe a lot of the problem with QM is that we can visualise waves OK, and we know about particles and how they collide, and neither the maths nor the visualisation work that well for something that is both at the same time. 
This isn’t helped by the Copenhagen interpretation, which tells us that it’s only when we look that the wave functions “collapse” into a single result from being indeterminate before we look. Given the age of the universe, and the lack of people to look and measure, it makes sense that things happen whether we look or not. That problem of only having a real result when someone measures it was got over with Bohmian Mechanics, where a “guiding wave” determines the position and direction of a real particle (which makes the wave and the particle there at the same time, and thus things can happen without anyone to measure them). However, this explanation wasn’t chosen, possibly because it effectively posits an Aether (the medium that the guiding wave exists in), and people were trying to go away from anything Aether-like. As I’ve said before, though, if you’re going to have waves of any sort, then as far as we can tell *something* will be waving, and a model with inertial minuscule particles with springs between them is bound to work for a lot of the properties. Basically, you can’t get away from some sort of Aether, even if you rename it as spacetime and say there can be waves in spacetime, unless of course you try to make a model where waves don’t exist and it’s only particles. Since a particle model isn’t going to match reality when it comes to diffraction unless you give the particles some wave properties, and again that implies something being waved, finding a non-paradoxical description has so far escaped us. At the heart of QM we thus have paradox, which tells us we haven’t yet got a good description of what is actually happening. The models we’ve got mostly work pretty well despite the paradox, though, so it’s the best we’ve got at the moment and mostly we ignore the paradoxes and choose the description of particle or wave depending on which one gives the right answer. Another problem with current theory is that it has inconvenient infinities turning up in the maths. Where these turn up, the technique of “renormalisation” is used, which basically means we ignore the infinities and take them out of the equations, and the rest of the equation then gives the right answer. It’s a fudge, and wasn’t liked at the time it was introduced (can’t remember who by), but sorts the problem. . Wikipedia has a nice explanation, too, that goes quite a bit deeper. One thing that bugs me about all this complex maths is that it’s logical that the particles/waves themselves don’t have the capacity to do all the partial derivatives and integrations to work out where they ought to be. They really should simply react to the forces they see at any point in time (here and now forces and what happens as a result) and though the resultant path may be a little complex such as a conic section, it still ought to be calculable using numerical simulation where we use timesteps and the configuration/forces at each point and thus step through positions of the constituent parts. Of course, all these theories assume momentum is absolutely conserved in an interaction, and that apart from borrowing/returning to the Heisenberg energy bank, that energy is conserved too. That may not be a valid assumption. It’s almost certainly the net result after an interaction (we normally see energy and momentum conserved), but may not be valid during the interaction. 
If inertia is quantised (as seems to be true from cosmological observations) then this will apply at the particle level too, and rather than being a continuous range, momentum can only be exchanged (or changed) in quanta. It’s possible that this may change the maths quite a lot. A small force won’t thus affect the velocity (below the necessary force to jump to the next momentum level), and the path of a particle won’t be a smooth curve but instead a series of straight lines as the momentum has step-changes. I figure that might make some difference to the calculations…. Feynman was required reading when I was learning physics. He was good at explaining things, and where he found things that didn’t make sense he changed them so they did make sense. Probably killed a few sacred cows on the way, and his personal life was unconventional too. I see nothing wrong in watching his lectures for entertainment. I haven’t the time this morning, so I’ll watch them somewhat later. More fun than a Marvel blockbuster with fights between groups of people with magic powers. 25 September 2018 at 1:52 pm @Simon; Excellent essay on the logic of the problem of waves that appear to be particles, particles that behave as waves. Quantaize the medium, Call it what you want. I prefer Aether and this results in Mass/Inertia being external to Mater, the thing that matters to me…pg 25 September 2018 at 2:30 pm If you “kick” the Aether hard enough (voltage) and fast enough (frequency) It will kick back hard. Just like the results in a Tesla coil operation. If you are operating a cone shape field by pulsing a signal over a cone or saucer shape within the high voltage/high frequency field you are operating a linear motor within the activated Aether. Electronic Propulsion!…pg More about pg Back in the winter of 63-64 I was sent to a symposium being put on at Cal Berkley for budding young scientists to join their next years student body. We were given a selection of departments to go through on a show and tell. I chose the Physics Lab. and the Soils Lab. The Physics Lab. had a lot of cool stuff, “Giant” new cyclotron and all the latest equipment being operated by Laurence Radiation Laboratories. I found their “toys” equipment fascinating. Their bs science, which they were very proud of,  mainly boring. Later at the presentation of the grad students papers, I pointed out that the conclusions drawn from one experiment could have several other causes. The grad student was not pleased that a 17 year old “hick from the sticks” would critique His science! The Soils Lab was quite a shock! Their science was in how to “Destroy Soil” to make it solid underlay to build on. I had spent 4 years learning how to create and husband soils for farming. Was something of an expert at it. These guys were teaching how to ruin it! What would you expect at Berzerkly….pg for a bit more see An Engineers’ Tale Further evidence of Aether ESO/L. Calçada It’s taken us 80 years to witness this. 1 DEC 2016 More of article Further evidence of the existence of Aether filling all of space. There Ain’t Nothing in Space! Space is jam packed full of something. Something that is effected by magnetic or Electro-Motive-Force. This EMF yields the phenomena of Mass/Inertia and Gravity…pg EMF Thruster really Works Artist concept of activity within thruster cavity that creates external thrust. pictures of test device in front of vacuum  chamber. 
A vacuum test campaign evaluating the impulsive thrust performance of a tapered radio-frequency test article excited in the transverse magnetic 212 (TM212) mode at 1937 MHz has been completed. The test campaign consisted of a forward thrust phase and reverse thrust phase at vacuum with power scans at 40, 60, and 80 W. The test campaign included a null thrust test effort to identify any mundane sources of impulsive thrust; however, none were identified. Thrust data from forward, reverse, and null suggested that the system was consistently performing. Read More: An EMF propulsion device that really works! Next will be a test in space. A renewed NASA will have a new toy. The paradigm of the “Fabric of Space” will need to be rewritten… again…pg Also see Impossible EM Thruster. From the paper — Discussion: Before providing some qualitative thoughts on the proposed physics potentially at work in the tapered RF test articles, it will be useful to provide a brief background on the supporting physics lines of thought. In short, the supporting physics model used to derive a force based on operating conditions in the test article can be categorized as a nonlocal hidden-variable theory, or pilot-wave theory for short. Pilot-wave theories are a family of realist interpretations of quantum mechanics that conjecture that the statistical nature of the formalism of quantum mechanics is due to an ignorance of an underlying more fundamental real dynamics, and that microscopic particles follow real trajectories over time just like larger classical bodies do. The first pilot-wave theory was proposed by de Broglie in 1923 [4], where he proposed that a particle interacted with an accompanying guiding wave field, or pilot wave, and this interaction was responsible for guiding the particle along its trajectory, orthogonal to the surfaces of constant phase. In 1926, Madelung [5] published a hydrodynamic model of quantum mechanics by recasting the linear Schrödinger equation into hydrodynamic form, where the Planck constant was analogous to a surface tension σ in shallow-water hydrodynamics and vacuum fluctuations were the reason for quantum mechanics. In 1952, Bohm [6,7] published a pilot-wave theory where the guiding wave was equivalent to the solution of the Schrödinger equation and a particle’s velocity was equivalent to the quantum velocity of probability. Soon after, the Bohmian mechanics line of thinking was extended by others to incorporate the effects of a stochastic subquantum realm, and de Broglie augmented his initial pilot-wave theory with this approach in 1964 [8], adopting the parlance “hidden thermodynamics.” A family of models categorized as vacuum-based pilot-wave theories or stochastic electrodynamics (SED) [9] further explored the concept that the electromagnetic vacuum fluctuations of the zero point field represent a natural source of stochasticity in the subquantum realm and provide classical explanations for the origin of the Planck constant, Casimir effect, ground state of hydrogen, and much more. It should be noted that the pilot-wave domain experienced an early setback when von Neumann [10] published an impossibility proof against the idea of any hidden-variable theory.
This and other subsequent impossibility proofs were discredited by Bell 30 years later, in 1966 [11], and Bell went on to say in the preface of his 1987 book [12] that the pilot wave eliminated the shifty boundary between wavy quantum states on the one hand and Bohr’s classical terms on the other: said simply, there was a real quantum dynamics underlying the probabilistic nature of quantum mechanics. Although the idea of a pilot wave or realist interpretation of quantum mechanics is not the dominant view of physics today (which favors the Copenhagen interpretation), it has seen a strong resurgence of interest over the last decade based on some experimental work pioneered by Couder and Fort [13]. Couder and Fort discovered that bouncing a millimeter-sized droplet on a vibrating shallow fluid bath at just the right resonance frequency created a scenario where the bouncing droplet created a wave pattern on the shallow bath that also seemed to guide the droplet along its way. To Couder and Fort, this seemed very similar to the pilot-wave concept just discussed and, in subsequent testing by Couder and others, this macroscopic classical system was able to exhibit characteristics thought to be restricted to the quantum realm. To date, this hydrodynamic pilot-wave analog system has been able to duplicate the double slit experiment findings, tunneling, quantized orbits, and numerous other quantum phenomena. Bush put together two thorough review papers chronicling the experimental work being done in this domain by numerous universities [14,15]. In addition to these quantum analogs, there may already be direct evidence supportive of the pilot-wave approach: specifically, Bohmian trajectories may have been observed by two separate experiments working with photons [16,17]. Reconsidering the double slit experiment with the pilot-wave view, the photon goes through one slit, and the pilot wave goes through both slits. The resultant trajectories that photons follow are continuous real trajectories that are affected by the pilot wave’s probabilistic interference pattern with itself as it undergoes constructive and destructive interference due to reflections from the slits. In the approach used in the quantum vacuum plasma thruster (also known as a Q thruster) supporting physics models, the zero point field (ZPF) plays the role of the guiding wave in a similar manner to the vacuum-based pilot-wave theories. To be specific, the vacuum fluctuations (virtual fermions and virtual photons) serve as the dynamic medium that guides a real particle on its way. Two recent papers authored by members of this investigation team explored the scientific ramifications of this ZPF-based background medium. The first paper [18] considered the quantum vacuum at the cosmological scale in which a thought experiment applied to the Einstein tensor yielded an equation that related the gravitational constant to the quantity of vacuum energy in the universe, implying that gravity might be viewed as an emergent phenomenon: a long wavelength consequence of the quantum vacuum. This viewpoint was scaled down to the atomic level to predict the density of the quantum vacuum in the presence of ordinary matter. This approach yielded a predicted value for the Bohr radius and electron mass with a direct dependency on dark energy. The corollary from this work pertinent to the q-thruster models is that the quantum vacuum is a dynamic medium and could potentially be modeled at the microscopic scale as an electron-positron plasma.
The quantum vacuum around the hydrogen nucleus was considered in much more detail in the second paper [19]. Here, the energy density of the quantum vacuum was shown to theoretically have a 1/r^4 dependency moving away from the hydrogen nucleus (or proton). This 1/r^4 dependency was correlated to the Casimir force, suggesting that the energy density in the quantum vacuum is dependent on geometric constraints and energy densities in electric/magnetic fields. This paper created a quasi-classical model of the hydrogen atom in the COMSOL Multiphysics software (COMSOL is not an acronym) that modeled the vacuum around the proton as an electron-positron plasma. These analysis results showed that the n = 1 to 7 energy levels of the hydrogen atom could be viewed as longitudinal resonant acoustic wave modes in the quantum vacuum. This suggests that the idea of treating the quantum vacuum as a dynamic medium capable of supporting oscillations might be valid. If a medium is capable of supporting acoustic oscillations, this means that the internal constituents were capable of interacting and exchanging momentum. A vacuum test campaign that used an updated integrated test article and optimized torsion pendulum layout was completed. The test campaign consisted of a forward thrust element that included performing testing at ambient pressure to establish and confirm good tuning, as well as subsequent power scans at 40, 60, and 80 W, with three thrust runs performed at each power setting for a total of nine runs at vacuum. The test campaign consisted of a reverse thrust element that mirrored the forward thrust element. The test campaign included a null thrust test effort of three tests performed at vacuum at 80 W to try and identify any mundane sources of impulsive thrust; none were identified. Thrust data from forward, reverse, and null suggested that the system was consistently performing at 1.2 ± 0.1 mN/kW, which was very close to the average impulsive performance measured in air. A number of error sources were considered and discussed. Although thermal shift was addressed to a degree with this test campaign, future testing efforts should seek to develop testing approaches that are immune to CG shifts from thermal expansion. As indicated in Sec. II.C.8, a modified Cavendish balance approach could be employed to definitively rule out thermal. Although this test campaign was not focused on optimizing performance and was more an exercise in existence proof, it is still useful to put the observed thrust-to-power figure of 1.2 mN/kW in context. The current state-of-the-art thrust to power for a Hall thruster is on the order of 60 mN/kW. This is an order of magnitude higher than the test article evaluated during the course of this vacuum campaign; however, for missions with very large delta-v requirements, having a propellant consumption rate of zero could offset the higher power requirements. The 1.2 mN/kW performance parameter is over two orders of magnitude higher than other forms of “zero-propellant” propulsion, such as light sails, laser propulsion, and photon rockets, which have thrust-to-power levels in the 3.33–6.67 μN/kW (or 0.0033–0.0067 mN/kW) range. G. G. Spanjers, Associate Editor. I guess they will need Aether for this thing to work. As they only used 300 volts as the bias field, they will need to study Tesla’s work, as MUCH higher voltages will be needed to really get traction on the stuff of space.
At least 100 times greater to get real traction. Tesla’s dream of an EMF propulsion system will be achieved and humans will have their Truespace drive. The second gift from GOD for this era…pg Are Climate changes caused by Solar activity variations ? Documentation of the solar activity variations and it’s influence on climate Dimitris Poulos Abstract of paper from: The four planets that influence the most the solar surface through tidal forcing seem to affect the Earth climate. A simple two cosine model with periods 251 years, of the seasonality of the Earth – Venus syzygies, and 265.4 years, of the combined syzygies of Jupiter and Mercury with Earth when Earth is in synod with Venus, fits well the Northern Hemisphere temperatures of the last 1000 years as reconstructed by Jones et al (1998). The physical mechanism proposed is that planetary gravitational forces drive solar activity that in turn drives temperature variations in earth. The sun is in a boundary balance state at one hand collapsing due to gravity and at the other hand expanding due to fusion, and as such it should be heavily influenced by minimal external forcings such as planetary gravity. Sound waves in the solar mass, created from the planetary movement, are responsible for the formation of solar corona and sun spots. The Earth-Venus 251 year resonance is resonant to a near surface solar layer’s thermal natural frequency that “explodes” to form solar wind. The calculated solar wind properties match the observed.  link to  pdf Solar energy output. Energy creation and output from the Sun is a factor of matter-energy density. Whether Fission or Fusion, changes in matter-energy density causes changes in Neutron decay or creation and therefor energy creation and output. Just like a boiling pot of water that has a pressure set 212f /100c temperature of output, so the sun has a pressure set temperature of output, or TSI ( Total Solar Irradiance)  .surface temperature of approximately 5,778 K (5,505 °C, 9,941 °F). This is very stable. Just like the pot of water, changes in energy are manifest as “steam” wind output changes and not temperature change. During changes in solar output there are changes in the spectral signature of the solar TSI  Moving toward the higher frequencies as the output increases. Lower frequencies penetrate deeper into the Earth’s atmosphere and oceans but higher frequencies carry more energy. The solar system is composed of a number of Sun orbiting bodies. Each causes some amount of gravitational pull on the solar body, as their position changes it changes the position of the center of gravity of the solar system Changes in the Solar System center of gravity causes the solar body to move to and fro In it’s attempt to center it’s self with the local gravitational field. While there is an argument that this makes no difference as the sun is in “free-fall”.  All of the sun has mass, mass that resists changes in motion. Local tidal movements cause changes in local matter-energy density causing changes in local Fission/Fusion and therefore energy creation and output being caused by the position changes of the orbiting planets…pg
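For readers who want to see what a “two-cosine” fit of the kind quoted in the abstract looks like, here is a minimal Python sketch. The periods (251 and 265.4 years) are the ones given above; the amplitudes and phases are arbitrary placeholders, since the excerpt does not give the fitted values from Poulos’s paper.

```python
import numpy as np

# Toy two-cosine reconstruction, illustrative only.
# Periods come from the abstract quoted above; amplitudes A1, A2 and phases
# p1, p2 are placeholders, NOT the fitted parameters from the paper.
years = np.arange(1000, 2000)
A1, p1 = 0.2, 0.0      # degrees C, radians (assumed)
A2, p2 = 0.15, 1.0     # degrees C, radians (assumed)

anomaly = (A1 * np.cos(2 * np.pi * (years - 1000) / 251.0 + p1)
           + A2 * np.cos(2 * np.pi * (years - 1000) / 265.4 + p2))

print(anomaly[:5])     # a smooth multi-century oscillation one could compare to a reconstruction
```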
After about a year of work on the density-functional toolkit (DFTK) we finally started using the code for some new science. Given the mathematics background of the CERMICS the first two DFTK-related articles were not so much about large-scale applications, but rather deal some fundamental questions of Kohn-Sham density-functional theory (DFT). The first submission provides a detailed analysis of self-consistent iterations and direct-minimisation approaches for solving equations such as DFT. My main focus in the past weeks, however, was rather the second project, an invited submission for the Faraday discussions New horizons in density functional theory, which are to take place in September this year. In this article we deal with numerical error in DFT calculations. More precisely we want to address the question: What is the remaining error in the result obtained by a particular DFT simulation, which has already been performed? This is naturally a rather broad statement, which cannot realistically be addressed in the scope of a single article. As we detail we only address the question of bounding the error of a particular quantity of interest, namely band energies near the Fermi level. Other quantities, such as the forces or the response to an external perturbation, might benefit from our ideas, but will need extensions on top. Also one needs to keep in mind that there are plenty of sources for error in a DFT calculation, including: 1. The model error due to replacing the (almost) exact model of the many-body Schrödinger equation by a reduced, but more feasible model like DFT. 2. The discretisation error due to employing only finitely many basis functions instead of solving analytically in an infinite-dimensional Hilbert space. 3. The algorithmic error resulting from using non-zero convergence tolerances in the eigensolvers as well as the SCF procedure. 4. The arithmetic error obtained by doing computation only in finite-precision floating-point arithmetic. Of course in practice people often have a good ballpark idea of the error of DFT as a method or the errors obtained from a particular basis cutoff. Rightfully one might ask, why one should go through the effort of deriving provable bounds to the error in a DFT result? While there are surely many takes on this question, I only want to highlight two aspects in this summary: • Educated guesses for convergence parameters taken from experience can fail. Typically they fail exactly when interesting things happen in a system and thus the usual heuristic breaks down. In other words converging a simulation to investigate what's going on becomes difficult when it's most needed. Detailed error analysis splitting up the error during an SCF iteration into contributions like the errors 1 to 4 (or even finer) can help to hint at the parameters worth tweaking or can provide insight into which error term behaves unusual in an SCF. • Thinking such aspects one step further, a good bound to the individual terms even allows to equilibrate sources of error during a running calculation. The outlook of this idea would be a fully black-box scheme for DFT calculations where typical convergence parameters are automatically while the calculation progresses in order to yield the cheapest path to a desired target accuracy. With respect to the errors 1 to 4 mentioned above one would of course like to be able to provide an upper bound to each of them. Unfortunately especially obtaining an estimate for the model error is a rather difficult task. 
In our work we have therefore concentrated on the errors 2 to 4 and moreover we only focused on Kohn-Sham models without self-consistency, i.e. where none of the terms in the Hamiltonian depend on the density / orbitals. It goes without saying that especially this last restriction needs to be lifted to make our results useful in practice. This angle we have left for future work. Already at the stage of the current reduced model, however, there are a few aspects to consider when finding an upper bound: • We wanted our bound to be fully-guaranteed, which means that we wanted to design a bound where we are able to prove that the exact answer must lie inside the bounds we give. This means when we provide error bars for band energies it is mathematically guaranteed to find the exact answer of the full-dimensional (complete-basis set limit) calculation at infinite accuracy and at zero convergence tolerances inside our bound. • To be useful our bound should be (cheaply) computable, because otherwise plainly checking the calculation at vastly increased precision might end up being the better option. Notice that our bound does require to use a finer discretisation for some terms, but this is only a one-shot a posteriori step and not an iterative one. • Ideally the bound should be sharp meaning that the upper bound to the error we report should not be too far off the true error. Even better would be an optimal bound, where we are as close to the true error as possible (given proposed structure in the error estimate). Such considerations are very important to not end up with completely correct but also useless statements like: "The error in the band energy is smaller than 10 million Hartree". Finding a balance between these three aspects is not always easy and in our work we often take the pragmatic route to obtain a simpler, albeit less sharp error. Still, our bound is fully computable and allowed us, for the first time, to report band structure diagrams of silicon annotated with fully-guaranteed error bars of combined discretisation, algorithm and arithmetic errors. Details of our methodologies are given in the paper. Its full abstract reads We address the problem of bounding rigorously the errors in the numerical solution of the Kohn-Sham equations due to (i) the finiteness of the basis set, (ii) the convergence thresholds in iterative procedures, (iii) the propagation of rounding errors in floating-point arithmetic. In this contribution, we compute fully-guaranteed bounds on the solution of the non-self-consistent equations in the pseudopotential approximation in a plane-wave basis set. We demonstrate our methodology by providing band structure diagrams of silicon annotated with error bars indicating the combined error. Michael F. Herbst, Antoine Levitt and Eric Cancès. A posteriori error estimation for the non-self-consistent Kohn-Sham equations. Faraday Discussions, 224, 227 (2020). DOI 10.1039/D0FD00048E [code]
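The bounds in the paper are built for plane-wave Kohn–Sham calculations and are considerably more involved, but the basic flavour of a fully-guaranteed a posteriori bound can be illustrated with a standard linear-algebra fact: for a Hermitian matrix, the residual norm of an approximate eigenpair bounds the distance to the nearest exact eigenvalue. Below is a small toy sketch of my own in Python; it is not code from the paper or from DFTK.

```python
import numpy as np

# Toy illustration of a guaranteed a posteriori bound: for Hermitian H and a
# normalised approximate eigenpair (lam, x), the nearest exact eigenvalue of H
# lies within ||H x - lam x|| of lam. The random matrix below merely stands in
# for a discretised Hamiltonian; it has nothing to do with the silicon results.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
H = (A + A.T) / 2

# A deliberately unconverged approximation: a few steps of shifted inverse iteration.
x = rng.standard_normal(50)
for _ in range(5):
    x = np.linalg.solve(H + 20.0 * np.eye(50), x)  # shift chosen below the spectrum
    x /= np.linalg.norm(x)
lam = x @ H @ x                                    # Rayleigh quotient

bound = np.linalg.norm(H @ x - lam * x)            # computable without knowing the exact answer
true_error = np.min(np.abs(np.linalg.eigvalsh(H) - lam))
print(f"guaranteed bound: {bound:.2e}   true error: {true_error:.2e}")
assert true_error <= bound + 1e-12
```

The appeal is exactly the one made above: the bound is computable from the approximate solution alone, yet the exact answer is mathematically guaranteed to lie within it.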
Quantum Barrier Scattering Canvas not supported! Please update your browser. Run  Speed:   Real/imag  Density/phase   Grid Reset  Wavepacket energy = 0.030 ± 0.005 Barrier energy = 0.030    Width = 20   Ramp = 0 This simulation shows a quantum mechanical wavepacket hitting a barrier. You can adjust the wavepacket’s nominal energy, the barrier energy, the barrier width, and the width of a “ramp” on either side of the barrier, to see how these affect the amount of the wavepacket that gets through (i.e., the tunneling probability). Drag the width slider all the way to the right to make a step instead of a barrier. The wavefunction is always zero at the edges of the image, so the quantum particle is effectively trapped in an infinitely deep potential well. Thus, when the wavepacket hits the edges, it will reflect off of them. You can plot either the real and imaginary parts of the wavefunction (shown in orange and blue, respectively), or the probability density and phase, with the phase represented by hues going from red (pure real and positive) to light green (pure imaginary and positive) to cyan (pure real and negative) to purple (pure imaginary and negative) and finally back to red. Play with the simulation for a while, then try to predict what will happen when you change the various settings. How does the wavepacket behave when there is no barrier at all? How can you tell, when the simulation is paused, whether the wavepacket is moving to the left or right? How does the wavelength within the packet vary as you change its energy? Under what conditions will most of the wavepacket make it through the barrier? In what ways does the wavepacket behave like a classical particle? Technical details: The simulation works by solving a discretized version of the time-dependent Schrödinger equation, as you can see by looking at the source code. Distances are measured in units of nominal screen pixels, and the grid spacing is 20 pixels. Other units are determined by setting h-bar and the particle mass to 1. This is a nonrelativistic particle, so its kinetic energy is p2/2m. From this formula and the de Broglie relation you can figure out how the energy of a wavefunction is related to its wavelength. A wavepacket is actually a mixture that includes a range of energies, so the uncertainty (standard deviation) in the energy is displayed next to the energy slider. Notice also that the phase velocity (of the individual waves within the wavepacket) differs from the group velocity (of the packet as a whole). By Daniel V. Schroeder, Physics Dept., Weber State University See PhET’s Quantum Tunneling and Wave Packets for a similar simulation with some more useful features. More physics software
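For readers who want to experiment outside the browser, here is a bare-bones Python sketch of the same idea: a Gaussian wavepacket propagated through a square barrier with a discretised time-dependent Schrödinger equation (ħ = m = 1). It uses a simple Crank–Nicolson step rather than whatever scheme the applet’s source actually uses, and the grid, barrier and wavepacket parameters below are illustrative, not the applet’s.

```python
import numpy as np

# 1-D wavepacket hitting a square barrier, hbar = m = 1. The Crank-Nicolson
# propagator is unitary, so the norm of psi is preserved over time.
N, L, dt = 400, 400.0, 0.5
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]

E = 0.03                                                # nominal wavepacket energy
V = np.where(np.abs(x - 0.55 * L) < 10, 0.03, 0.0)      # barrier of height 0.03, width 20

# H = -(1/2) d^2/dx^2 + V with finite differences; psi = 0 at the box edges.
H = (np.diag(1.0 / dx**2 + V)
     + np.diag(-0.5 / dx**2 * np.ones(N - 1), 1)
     + np.diag(-0.5 / dx**2 * np.ones(N - 1), -1))

# Gaussian packet moving to the right with momentum k0 = sqrt(2 E).
k0 = np.sqrt(2 * E)
psi = np.exp(-((x - 0.25 * L) ** 2) / (2 * 20.0**2)) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# Crank-Nicolson propagator: U = (1 + i dt H/2)^(-1) (1 - i dt H/2).
U = np.linalg.solve(np.eye(N) + 0.5j * dt * H,
                    np.eye(N) - 0.5j * dt * H)
for _ in range(1500):
    psi = U @ psi

transmitted = np.sum(np.abs(psi[x > 0.55 * L + 10]) ** 2) * dx
print(f"fraction of the packet past the barrier: {transmitted:.3f}")
```

Varying `E`, the barrier height and the barrier width in this sketch reproduces the qualitative behaviour the questions above ask about: partial transmission even when the packet energy is below the barrier, and near-total transmission well above it.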
Take the 2-minute tour × The theory as I know it Let $\mathcal{H}$ be a Hilbert space and $(A, D(A))$ a self-adjoint operator acting on it. The Spectral Theorem (cfr. Reed & Simon Methods of modern mathematical physics, vol.I, §VIII.6) asserts the existence of a unique projection-valued measure (PVM) $P$ s.t. $$A= \int_{-\infty}^\infty \lambda dP_{\lambda},$$ thus allowing us to define functions of $A$ and especially, for real $t$, $$e^{i t A}=\int_{-\infty}^{\infty}e^{i t \lambda} dP_{\lambda}.$$ Turns out (Reed & Simon, §VIII.7) that $(e^{itA})_{t \in \mathbb{R}}$ is a strongly continuous unitary group with generator $(A, D(A))$. In particular, since the operator $-\Delta$ is self-adjoint on $H^2(\mathbb{R}^d)$, we can define solution of the free Schrödinger equation with inital datum $u_0 \in H^2(\mathbb{R}^d)$ $$\text{(S)}\ \begin{cases} i u_t= \Delta u \\ u(0, x)=u_0(x) \end{cases}$$ the function $u(t,x)=e^{i t \Delta}u_0(x)$. Is the requirement $u_0 \in H^2(\mathbb{R}^d)$ really necessary? Would $u_0 \in H^1(\mathbb{R}^d)$ suffice? In fact I read in some course notes: "since $-\Delta$ is self-adjoint on $L^2(\mathbb{R}^d)$ with form domain $H^1(\mathbb{R}^d)$, we can define the Schrödinger group $e^{i t \Delta}$ [...] and so $u(t, x)=e^{i t \Delta}u_0(x)$ is solution of (S) for $u_0 \in H^1(\mathbb{R}^d)$." The author then stops explaining. How could he weaken the regularity request on $u_0$ so much? share|improve this question 1 Answer 1 up vote 2 down vote accepted Well, this is sloppy language. You can solve the equation only for $H^2$. However, you can define the exponential function and the unitary group as you do, and call $u(t,\cdot) = e^{it\Delta}u_0(\cdot)$ the generalized (mild) solution for all $u_0\in L^2$. Why not? Now if multiply the original equation by a nice function with compact support, then you can integrate by part once, and what you get you can write down also for $H^1$ functions. They remain invariant under the group, and are usually called the weak solutions. share|improve this answer Let me see if I got it. In an abstract framework, we are trying to solve $$\begin{cases} u_t=Au \\ u(0)=u_0 \end{cases},$$ where $(A, D(A))$ is a self-adjoint operator and $u_0 \in \mathcal{H}$. What the author did is switching to the quadratic form $q_A(u, v)=(Au, v)$ which is defined in a bigger domain than $D(A)$ and consider the equation $$\begin{cases}(u_t, v)=q_A(u, v), \quad v \in D(q) \\ u(0)=u_0 \end{cases}.$$ [continue...] –  Giuseppe Negro Apr 21 '11 at 15:19 [...] When $u_0 \in D(A)$, $e^{itA}u_0$ is a solution of the first problem, ok. When $u_0 \in D(q)$, $e^{itA}u_0$ must be a solution of the second problem. Correct? Can you explain me a little bit how to better justify this, if you don't mind? Thank you. –  Giuseppe Negro Apr 21 '11 at 15:23 Your Answer
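One concrete way to see why the mild solution makes sense for any $u_0 \in L^2$, and why $H^1$ (or $H^2$) regularity is propagated rather than required, is to write the group as a Fourier multiplier (a standard computation, added here as a reading aid rather than as part of the original thread): $$\widehat{e^{it\Delta}u_0}(\xi) = e^{-it|\xi|^2}\,\widehat{u_0}(\xi), \qquad \text{so} \qquad \|e^{it\Delta}u_0\|_{H^s} = \|u_0\|_{H^s} \ \text{ for every } s \ge 0.$$ Since $|e^{-it|\xi|^2}| = 1$, the multiplier is unitary on $L^2$ and leaves every Sobolev space $H^s$ invariant; the equation is only satisfied in the strong ($L^2$-valued) sense when $u_0 \in H^2$, which is exactly the distinction between mild/weak and strong solutions drawn in the answer above.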
Take the 2-minute tour × I don't really know much about Quantum mechanics, but would like to know one simple fact. The state function $\Psi(r, t)|$ whose magnitude gives the probability density of the position of the particle and the magnitude of its ($\Psi(r, t)$) fourier transform gives probability density of its momentum. Is there any rule that these state functions are smooth (possess infinite order derivatives everywhere) (derivatives of all orders exist)? share|improve this question I'm not sure your statement about the Fourier transform is quite correct. Foruier-transforming the wavefunction in terms of position will indeed give the momentum wavefunction, but whether this can be done on the probability distribution ($|\psi|^2$), I do not know. Hopefully someone more mathematically adept can enlighten me. –  Noldorin Nov 18 '10 at 15:35 @Noldorin: I meant it on the wave function itself, not on the magnitude/probability distribution. Thanks for the clarification in the question. –  Rajesh D Nov 18 '10 at 15:36 Ok, sure. That makes more sense now. :) (And in your question, I'm also presuming you define $S(r, t) = |\Psi(r, t)|^2|$.) –  Noldorin Nov 18 '10 at 15:37 Can you change title to something meaningful like "Is it guaranteed that wavefunction is well behaved everywhere?"? –  Pratik Deoghare Nov 18 '10 at 16:11 related: Is the world $C^\infty$? –  Tobias Kienzler Mar 1 '11 at 9:09 2 Answers 2 up vote 9 down vote accepted The only general requirement on the state function for a single, spinless, quantum particle (quanton) in a physically realistic state is that the state function be square integrable, i.e., the integral of its absolute value squared over all space be finite. Non-square integrable state functions are used for many purposes, but they are all idealizations that do not, individually, represent realistic states. If the state function is also to belong to the domain of definition of the Hamiltonian, then, in non-relativistic QM, the state function must be spatially differentiable to second order as well. State functions which are square integrable but not second order differentiable do not satisfy the Schroedinger equation. But their time evolution is still determined by continuity considerations since the second order differentiable state functions are everywhere dense in the state space, i.e., Hilbert space. share|improve this answer Is there any derivative operator in the QM ? –  Rajesh D Nov 18 '10 at 16:19 Momentum is represented by the derivative operator, up to a factor. –  Raskolnikov Nov 18 '10 at 17:20 derivative of what ? –  Rajesh D Nov 18 '10 at 21:19 This is a slightly convoluted answer. I'm actually not sure what point you're trying to get across, I'm afraid. –  Noldorin Nov 19 '10 at 19:40 Some of the conditions for wavefunctions $\Psi(x)$, for all elements $x$ of a subset of $\mathbb{R^{d}}$ (in the hyperphysics link, they use $x \in \mathbb{R}$). 1.- Must be a solution of the Schrodinger equation. 2.- Must be normalizable. 3.- Must be a continuous function of $x$. 4.- The slope of the function in $x$ must be continuous, that is, $\displaystyle \frac{\partial \Psi(x)}{\partial x}$ must be continuous. The property of being square-integrable is included in the condition 2. share|improve this answer @Robert Smith: "some of the conditions", do you there are more ? –  Rajesh D Nov 18 '10 at 16:25 The solution to the Dirac delta potential is not continuously differentiable, so it violates condition 4. 
–  Keenan Pepper Nov 18 '10 at 16:40 The assumption for the Delta potential is separate the space for $x<0$ and $x>0$. Therefore, the solution is one wavefunction for $x<0$ and other for $x>0$. Is that what you're saying? I don't see how that violates the condition 4. –  Robert Smith Nov 18 '10 at 16:55 @Robert: your point 3. says precisely that it has to be continuous at every point $x$. What you forgot to include (in this formulation) is that a particle in QM lives in Hilbert space $H = L^2(\mathbb{R}^d)$ so that indeed it needs to be defined (and continuous) for every $x \in \mathbb{R}^d$. The problem with Delta potential arises only because it's not quite physical to assume infinite jump in potential. You do this to make things simple, e.g. to disallow movement through walls. But in reality walls are made of atoms so the potential is smooth (just very fast growing). –  Marek Nov 18 '10 at 18:13 For sure Marek, but I don't see why the Schrödinger equation is not considered a mathematical idealization then? After all, it's only a non-relativistic approximation. And if we're gonna start like this, everything that has ever been conceived of in physics is an idealization. Your decision to consider one more physically relevant than the other is arbitrary if you don't specify the bounds within which the approximation is valid or not. So, without doubt, the Schrödinger equation can do more than models for Anderson localization which are not unphysical, only less broadly applicable. –  Raskolnikov Nov 19 '10 at 15:57 Your Answer
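A concrete example behind the exchange about condition 4 (the textbook result, added here for illustration rather than as part of the original thread): the attractive delta potential $V(x) = -\alpha\,\delta(x)$ has the normalised bound state $$\psi(x) = \sqrt{\kappa}\, e^{-\kappa |x|}, \qquad \kappa = \frac{m\alpha}{\hbar^2},$$ which is continuous and square integrable but has a kink at the origin. Integrating the Schrödinger equation across $x = 0$ gives the jump condition $$\psi'(0^+) - \psi'(0^-) = -\frac{2m\alpha}{\hbar^2}\,\psi(0),$$ so the slope is discontinuous exactly where the potential is singular. This is the sense in which the continuous-slope condition holds only where the potential is finite, as noted in the comments above.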
Complex number. In mathematics, the complex numbers are an extension of the real numbers obtained by adjoining an imaginary unit, denoted i, which satisfies $i^2 = -1$. Every complex number can be written in the form a + bi, where a and b are real numbers called the real part and the imaginary part of the complex number, respectively. Complex numbers are a field, and thus have addition, subtraction, multiplication, and division operations. These operations extend the corresponding operations on real numbers, although with a number of additional elegant and useful properties, e.g., negative real numbers can be obtained by squaring complex (imaginary) numbers. Complex numbers were first discovered by the Italian mathematician Girolamo Cardano, who called them "fictitious", during his attempts to find solutions to cubic equations. The solution of a general cubic equation may require intermediate calculations containing the square roots of negative numbers, even when the final solutions are real numbers, a situation known as casus irreducibilis. This ultimately led to the fundamental theorem of algebra, which shows that with complex numbers, it is always possible to find solutions to polynomial equations of degree one or higher. The rules for addition, subtraction, multiplication, and division of complex numbers were developed by the Italian mathematician Rafael Bombelli. A more abstract formalism for the complex numbers was further developed by the Irish mathematician William Rowan Hamilton, who extended this abstraction to the theory of quaternions. Complex numbers are used in many different fields including applications in engineering, electromagnetism, quantum physics, applied mathematics, and chaos theory. When the underlying field of numbers for a mathematical construct is the field of complex numbers, the name usually reflects that fact. Examples are complex analysis, complex matrix, complex polynomial and complex Lie algebra. The set of all complex numbers is usually denoted by C, or in blackboard bold by $\mathbb{C}$. Although other notations can be used, complex numbers are very often written in the form a + bi. For example, 3 + 2i is a complex number, with real part 3 and imaginary part 2. If z = a + ib, the real part a is denoted Re(z) or ℜ(z), and the imaginary part b is denoted Im(z) or ℑ(z). The real numbers, R, may be regarded as a subset of C by considering every real number a complex number with an imaginary part of zero; that is, the real number a is identified with the complex number a + 0i. Complex numbers with a real part of zero are called imaginary numbers; instead of writing 0 + bi, that imaginary number is usually denoted as just bi. If b equals 1, instead of using 0 + 1i or 1i, the number is denoted as i. In some disciplines (in particular, electrical engineering, where i is a symbol for current), the imaginary unit i is instead written as j, so complex numbers are sometimes written as a + bj. Two complex numbers are equal if and only if their real parts are equal and their imaginary parts are equal. In other words, if the two complex numbers are written as a + bi and c + di with a, b, c, and d real, then they are equal if and only if a = c and b = d.
Complex numbers are added, subtracted, multiplied, and divided by formally applying the associative, commutative and distributive laws of algebra, together with the equation $i^2 = -1$:
* Addition: $(a + bi) + (c + di) = (a + c) + (b + d)i$
* Subtraction: $(a + bi) - (c + di) = (a - c) + (b - d)i$
* Multiplication: $(a + bi)(c + di) = ac + bci + adi + bd\,i^2 = (ac - bd) + (bc + ad)i$
* Division: $\dfrac{a + bi}{c + di} = \left(\dfrac{ac + bd}{c^2 + d^2}\right) + \left(\dfrac{bc - ad}{c^2 + d^2}\right)i$, where c and d are not both zero.
It is also possible to represent complex numbers as ordered pairs of real numbers, so that the complex number a + ib corresponds to (a, b). In this representation, the algebraic operations have the following formulas:
(a, b) + (c, d) = (a + c, b + d)
(a, b)(c, d) = (ac − bd, bc + ad)
Since the complex number a + bi is uniquely specified by the ordered pair (a, b), the complex numbers are in one-to-one correspondence with points on a plane. This complex plane is described below.
The field of complex numbers
A field is an algebraic structure with addition, subtraction, multiplication, and division operations that satisfy certain algebraic laws. The complex numbers are a field, known as the complex number field, denoted by C. In particular, this means that the complex numbers possess:
• An additive identity ("zero"), 0 + 0i.
• A multiplicative identity ("one"), 1 + 0i.
• An additive inverse of every complex number. The additive inverse of a + bi is −a − bi.
• A multiplicative inverse (reciprocal) of every nonzero complex number. The multiplicative inverse of a + bi is $\dfrac{a}{a^2+b^2} + \left(\dfrac{-b}{a^2+b^2}\right)i$.
Other fields include the real numbers and the rational numbers. When each real number a is identified with the complex number a + 0i, the field of real numbers R becomes a subfield of C. The complex numbers C can also be characterized as the topological closure of the algebraic numbers or as the algebraic closure of R, both of which are described below.
The complex plane
A complex number z can be viewed as a point or a position vector in a two-dimensional Cartesian coordinate system called the complex plane or Argand diagram, named after Jean-Robert Argand. The point and hence the complex number z can be specified by Cartesian (rectangular) coordinates. The Cartesian coordinates of the complex number are the real part x = Re(z) and the imaginary part y = Im(z). The representation of a complex number by its Cartesian coordinates is called the Cartesian form or rectangular form or algebraic form of that complex number.
Absolute value, conjugation and distance
The absolute value (or modulus or magnitude) of a complex number $z = r e^{i\varphi}$ is defined as $|z| = r$. Algebraically, if $z = x + yi$, then $|z| = \sqrt{x^2+y^2}$. The absolute value has three important properties:
$|z| \geq 0$, where $|z| = 0$ if and only if $z = 0$
$|z + w| \leq |z| + |w|$ (triangle inequality)
$|z \cdot w| = |z| \cdot |w|$
for all complex numbers z and w. These imply that $|1| = 1$ and $|z/w| = |z|/|w|$. By defining the distance function $d(z, w) = |z - w|$, we turn the set of complex numbers into a metric space and we can therefore talk about limits and continuity. The complex conjugate of the complex number $z = x + yi$ is defined to be $x - yi$, written as $\bar{z}$ or $z^*$. As seen in the figure, $\bar{z}$ is the "reflection" of z about the real axis, and so both $z + \bar{z}$ and $z \cdot \bar{z}$ are real numbers.
Many identities relate complex numbers and their conjugates:
$\overline{z+w} = \bar{z} + \bar{w}$
$\overline{z \cdot w} = \bar{z} \cdot \bar{w}$
$\overline{(z/w)} = \bar{z}/\bar{w}$
$\bar{z} = z$ if and only if z is real
$\bar{z} = -z$ if and only if z is purely imaginary
$\operatorname{Re}(z) = \tfrac{1}{2}(z+\bar{z})$
$\operatorname{Im}(z) = \tfrac{1}{2i}(z-\bar{z})$
$|z|^2 = z \cdot \bar{z}$
$z^{-1} = \dfrac{\bar{z}}{|z|^{2}}$ if z is non-zero.
The latter formula is the method of choice to compute the inverse of a complex number if it is given in rectangular coordinates. That conjugation commutes with all the algebraic operations (and many functions; e.g. $\sin\bar{z} = \overline{\sin z}$) is rooted in the ambiguity in choice of i (−1 has two square roots). It is important to note, however, that the function $f(z) = \bar{z}$ is not complex-differentiable (see holomorphic function).
Geometric interpretation of the operations on complex numbers
The operations of addition, multiplication, and complex conjugation in the complex plane admit natural geometrical interpretations.
• The sum of two points A and B of the complex plane is the point X = A + B such that the triangles with vertices 0, A, B, and X, B, A, are congruent. Thus the addition of two complex numbers is the same as vector addition of two vectors.
• The product of two points A and B is the point X = AB such that the triangles with vertices 0, 1, A, and 0, B, X, are similar.
• The complex conjugate of a point A is the point X = A* such that the triangles with vertices 0, 1, A, and 0, 1, X, are mirror images of each other.
These geometric interpretations allow problems of geometry to be translated into algebra. And, conversely, geometric problems can be examined algebraically. For example, the problem of the geometric construction of the 17-gon is thus translated into the analysis of the algebraic equation $x^{17} = 1$.
Polar form
Alternatively to the Cartesian representation $z = x + iy$, the complex number z can be specified by polar coordinates. The polar coordinates are r = |z| ≥ 0, called the absolute value or modulus, and φ = arg(z), called the argument or the angle of z. The representation of a complex number by its polar coordinates is called the polar form of the complex number. For r = 0 any value of φ describes the same complex number z = 0. To get a unique representation, a conventional choice is to set φ = 0. For r > 0 the argument φ is unique modulo 2π; that is, if any two values of the complex argument differ by an exact integer multiple of 2π, they are considered equivalent. To get a unique representation, a conventional choice is to limit φ to the interval (−π, π], i.e. −π < φ ≤ π. This choice of φ is sometimes called the principal value of arg(z).
Conversion from the polar form to the Cartesian form:
$x = r \cos\varphi$
$y = r \sin\varphi$
Conversion from the Cartesian form to the polar form:
$r = |z| = \sqrt{x^2+y^2}$
$\varphi = \arg(z) = \operatorname{atan2}(y, x)$
The value of φ can change by any multiple of 2π and still give the same angle. The function atan2 gives the principal value in the range (−π, +π]. If a non-negative value of φ in the range [0, 2π) is desired, add 2π to any negative value. The arg function is sometimes considered as multivalued, taking as possible values atan2(y, x) + 2πk, where k is any integer.
Notation of the polar form
The notation of the polar form as $z = r(\cos\varphi + i\sin\varphi)$ is called trigonometric form. The notation cis φ is sometimes used as an abbreviation for cos φ + i sin φ.
Using Euler's formula it can also be written as

z = r e^{i\varphi},

which is called exponential form.

Multiplication, division, exponentiation, and root extraction in the polar form

Multiplication, division, exponentiation, and root extraction have simple formulas in polar form. Using sum and difference identities it follows that

r_1 e^{i\varphi_1} \cdot r_2 e^{i\varphi_2} = r_1 r_2 e^{i(\varphi_1 + \varphi_2)}

and that

\frac{r_1 e^{i\varphi_1}}{r_2 e^{i\varphi_2}} = \frac{r_1}{r_2} e^{i(\varphi_1 - \varphi_2)}.

Exponentiation with integer exponents: according to de Moivre's formula,

(\cos\varphi + i\sin\varphi)^n = \cos(n\varphi) + i\sin(n\varphi),

from which it follows that

(r(\cos\varphi + i\sin\varphi))^n = (r e^{i\varphi})^n = r^n e^{in\varphi} = r^n (\cos n\varphi + i \sin n\varphi).

Exponentiation with arbitrary complex exponents is discussed in the article on exponentiation.

Multiplication by a fixed complex number can be seen as a simultaneous rotation and stretching; in particular, multiplication by i corresponds to a counter-clockwise rotation by 90 degrees (π/2 radians). The geometric content of the equation i^2 = −1 is that a sequence of two 90 degree rotations results in a 180 degree (π radians) rotation. Even the fact (−1) · (−1) = +1 from arithmetic can be understood geometrically as the combination of two 180 degree turns.

If c is a complex number and n a positive integer, then any complex number z satisfying z^n = c is called an n-th root of c. If c is nonzero, there are exactly n distinct n-th roots of c, which can be found as follows. Write c = r e^{i\varphi} with real numbers r > 0 and φ; then the set of n-th roots of c is

\{ \sqrt[n]{r}\, e^{i\left(\frac{\varphi + 2k\pi}{n}\right)} \mid k \in \{0, 1, \ldots, n-1\} \},

where \sqrt[n]{r} represents the usual (positive) n-th root of the positive real number r. If c = 0, then the only n-th root of c is 0 itself, which as an n-th root of 0 is considered to have multiplicity n.

Some properties

Matrix representation of complex numbers

While usually not useful, alternative representations of the complex field can give some insight into its nature. One particularly elegant representation interprets each complex number as a 2×2 matrix with real entries which stretches and rotates the points of the plane. Every such matrix has the form

\begin{bmatrix} a & -b \\ b & a \end{bmatrix}

where a and b are real numbers. The sum and product of two such matrices is again of this form, and the product operation on matrices of this form is commutative. Every non-zero matrix of this form is invertible, and its inverse is again of this form. Therefore, the matrices of this form are a field, isomorphic to the field of complex numbers. Every such matrix can be written as

\begin{bmatrix} a & -b \\ b & a \end{bmatrix} = a \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + b \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix},

which suggests that we should identify the real number 1 with the identity matrix

\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},

and the imaginary unit i with

\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix},

a counter-clockwise rotation by 90 degrees. Note that the square of this latter matrix is indeed equal to the 2×2 matrix that represents −1.

The square of the absolute value of a complex number expressed as a matrix is equal to the determinant of that matrix:

|z|^2 = \begin{vmatrix} a & -b \\ b & a \end{vmatrix} = a^2 - (-b)(b) = a^2 + b^2.

If the matrix is viewed as a transformation of the plane, then the transformation rotates points through an angle equal to the argument of the complex number and scales by a factor equal to the complex number's absolute value.
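A quick numerical illustration of the matrix representation just described (a sketch that assumes nothing beyond the 2×2 form given above): the matrix identified with i squares to the matrix representing −1, products of such matrices commute and agree with complex multiplication, and the determinant reproduces |z|^2.

def mat(a, b):
    # 2x2 matrix [[a, -b], [b, a]] representing the complex number a + bi.
    return [[a, -b], [b, a]]

def mat_mul(m, n):
    # Ordinary 2x2 matrix product.
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def det(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

one = mat(1, 0)
i = mat(0, 1)

# i^2 = -1: the 90-degree rotation matrix squares to minus the identity.
assert mat_mul(i, i) == mat(-1, 0)

# Multiplication of these matrices is commutative and matches complex multiplication.
z, w = (3, 4), (1, -2)
zw = mat_mul(mat(*z), mat(*w))
assert zw == mat_mul(mat(*w), mat(*z))
assert zw == mat(11, -2)            # (3 + 4i)(1 - 2i) = 11 - 2i

# The determinant corresponds to |z|^2: det(mat(3, 4)) = 3^2 + 4^2 = 25.
assert det(mat(3, 4)) == 25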
The conjugate of the complex number z corresponds to the transformation which rotates through the same angle as z but in the opposite direction, and scales in the same manner as z; this can be represented by the transpose of the matrix corresponding to z. If the matrix elements are themselves complex numbers, the resulting algebra is that of the quaternions. In other words, this matrix representation is one way of expressing the Cayley-Dickson construction of algebras. It should also be noted that the two eigenvalues of the 2×2 matrix representing a complex number are the complex number itself and its conjugate.

Real vector space

C is a two-dimensional real vector space. Unlike the reals, the set of complex numbers cannot be totally ordered in any way that is compatible with its arithmetic operations: C cannot be turned into an ordered field. More generally, no field containing a square root of −1 can be ordered.

R-linear maps C → C have the general form

f(z) = az + b\bar{z}

with complex coefficients a and b. Only the first term is C-linear, and only the first term is holomorphic; the second term is real-differentiable, but does not satisfy the Cauchy-Riemann equations. The function f(z) = az corresponds to rotations combined with scaling, while the function f(z) = b\bar{z} corresponds to reflections combined with scaling.

Solutions of polynomial equations

A root of the polynomial p is a complex number z such that p(z) = 0. A surprising result in complex analysis is that all polynomials of degree n with real or complex coefficients have exactly n complex roots (counting multiple roots according to their multiplicity). This is known as the fundamental theorem of algebra, and it shows that the complex numbers are an algebraically closed field. Indeed, the complex numbers are the algebraic closure of the real numbers, as described below.

Construction and algebraic characterization

One construction of C is as a field extension of the field R of real numbers, in which a root of x^2 + 1 is added. To construct this extension, begin with the ring R[x] of polynomials in the variable x with real coefficients. Because the polynomial x^2 + 1 is irreducible over R, the quotient ring R[x]/(x^2 + 1) will be a field. This extension field contains two square roots of −1; one of them is selected and denoted i. The set {1, i} forms a basis for the extension field over the reals, which means that each element of the extension field can be written in the form a + b·i. Equivalently, elements of the extension field can be written as ordered pairs (a, b) of real numbers.

Although only roots of x^2 + 1 were explicitly added, the resulting complex field is actually algebraically closed – every polynomial with coefficients in C factors into linear polynomials with coefficients in C. Because each field has only one algebraic closure, up to field isomorphism, the complex numbers can be characterized as the algebraic closure of the real numbers.

The field extension does yield the well-known complex plane, but it only characterizes it algebraically. The field C is characterized up to field isomorphism by the following three properties: it has characteristic 0, its transcendence degree over the prime field is the cardinality of the continuum, and it is algebraically closed. One consequence of this characterization is that C contains many proper subfields which are isomorphic to C itself. As described below, topological considerations are needed to distinguish these subfields from the field C itself.
Characterization as a topological field

As just noted, the algebraic characterization of C fails to capture some of its most important topological properties. These properties are key for the study of complex analysis, where the complex numbers are studied as a topological field. The following properties characterize C as a topological field:

• C is a field.
• C contains a subset P of nonzero elements satisfying:
  • P is closed under addition, multiplication and taking inverses.
  • If x and y are distinct elements of P, then either x − y or y − x is in P.
  • If S is any nonempty subset of P, then S + P = x + P for some x in C.
• C has a nontrivial involutive automorphism x → x*, fixing P and such that xx* is in P for any nonzero x in C.

Given a field with these properties, one can define a topology by taking the sets

B(x, p) = { y | p − (y − x)(y − x)* ∈ P }

as a base, where x ranges over the field and p ranges over P.

To see that these properties characterize C as a topological field, one notes that P ∪ {0} ∪ −P is an ordered Dedekind-complete field and thus can be identified with the real numbers R by a unique field isomorphism. The last property is easily seen to imply that the Galois group over the real numbers is of order two, completing the characterization.

Pontryagin has shown that the only connected locally compact topological fields are R and C. This gives another characterization of C as a topological field, since C can be distinguished from R by noting that the nonzero complex numbers are connected, while the nonzero real numbers are not.

Complex analysis

The study of functions of a complex variable is known as complex analysis and has enormous practical use in applied mathematics as well as in other branches of mathematics. Often, the most natural proofs for statements in real analysis or even number theory employ techniques from complex analysis (see prime number theorem for an example). Unlike real functions, which are commonly represented as two-dimensional graphs, complex functions have four-dimensional graphs and may usefully be illustrated by color coding a three-dimensional graph to suggest four dimensions, or by animating the complex function's dynamic transformation of the complex plane.

The words "real" and "imaginary" were meaningful when complex numbers were used mainly as an aid in manipulating "real" numbers, with only the "real" part directly describing the world. Later applications, and especially the discovery of quantum mechanics, showed that nature has no preference for "real" numbers and its most real descriptions often require complex numbers, the "imaginary" part being just as physical as the "real" part.

Control theory

In control theory, systems are often transformed from the time domain to the frequency domain using the Laplace transform. The system's poles and zeros are then analyzed in the complex plane. The root locus, Nyquist plot, and Nichols plot techniques all make use of the complex plane.

In the root locus method, it is especially important whether the poles and zeros are in the left or right half planes, i.e. have real part greater than or less than zero. If a system has poles in the right half plane, it is unstable. If a system has zeros in the right half plane, it is a nonminimum phase system.

Signal analysis

Complex numbers are used in signal analysis and other fields as a convenient description for periodically varying signals.
For given real functions representing actual physical quantities, often in terms of sines and cosines, corresponding complex functions are considered of which the real parts are the original quantities. For a sine wave of a given frequency, the absolute value |z| of the corresponding z is the amplitude and the argument arg(z) is the phase.

If Fourier analysis is employed to write a given real-valued signal as a sum of periodic functions, these periodic functions are often written as complex-valued functions of the form

f(t) = z e^{i\omega t},

where ω represents the angular frequency and the complex number z encodes the phase and amplitude as explained above.

In electrical engineering, the Fourier transform is used to analyze varying voltages and currents. The treatment of resistors, capacitors, and inductors can then be unified by introducing imaginary, frequency-dependent resistances for the latter two and combining all three in a single complex number called the impedance. (Electrical engineers and some physicists use the letter j for the imaginary unit, since the letter i is typically reserved for electric current.) This approach is called phasor calculus. This use is also extended into digital signal processing and digital image processing, which utilize digital versions of Fourier analysis (and wavelet analysis) to transmit, compress, restore, and otherwise process digital audio signals, still images, and video signals.

Improper integrals

In applied fields, complex numbers are often used to compute certain real-valued improper integrals, by means of complex-valued functions. Several methods exist to do this; see methods of contour integration.

Quantum mechanics

The complex number field is relevant in the mathematical formulation of quantum mechanics, where complex Hilbert spaces provide the context for one such formulation that is convenient and perhaps most standard. The original foundation formulas of quantum mechanics – the Schrödinger equation and Heisenberg's matrix mechanics – make use of complex numbers.

In special and general relativity, some formulas for the metric on spacetime become simpler if one takes the time variable to be imaginary. (This is no longer standard in classical relativity, but is used in an essential way in quantum field theory.) Complex numbers are essential to spinors, which are a generalization of the tensors used in relativity.

Applied mathematics

In differential equations, it is common to first find all complex roots r of the characteristic equation of a linear differential equation and then attempt to solve the system in terms of base functions of the form f(t) = e^{rt}.

Fluid dynamics

In fluid dynamics, complex functions are used to describe potential flow in two dimensions. Certain fractals are plotted in the complex plane, e.g. the Mandelbrot set and the Julia set.

History

The earliest fleeting reference to square roots of negative numbers perhaps occurred in the work of the Greek mathematician and inventor Heron of Alexandria in the 1st century AD, when he considered the volume of an impossible frustum of a pyramid, though negative numbers were not conceived of in the Hellenistic world. Complex numbers became more prominent in the 16th century, when closed formulas for the roots of cubic and quartic polynomials were discovered by Italian mathematicians (see Niccolò Fontana Tartaglia, Gerolamo Cardano).
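As an illustration of the phasor calculus mentioned above, the small Python fragment below treats a series RLC circuit at a fixed angular frequency; the component values and the 50 Hz drive are made up for the example, not taken from the text. Each element contributes a complex impedance, and the modulus and argument of their sum give the amplitude ratio and phase shift between voltage and current. (Python happens to write the imaginary unit as j, the electrical-engineering convention noted above.)

import cmath
import math

# Illustrative component values: 100 ohm resistor, 50 mH inductor,
# 10 uF capacitor, driven at 50 Hz.
R, L, C = 100.0, 50e-3, 10e-6
omega = 2 * math.pi * 50.0

# Series impedances: Z_R = R, Z_L = j*omega*L, Z_C = 1/(j*omega*C).
Z = R + 1j * omega * L + 1 / (1j * omega * C)

amplitude_ratio = abs(Z)          # |Z| relates the voltage and current amplitudes
phase_shift = cmath.phase(Z)      # arg(Z) is the phase of voltage relative to current

print(f"|Z| = {amplitude_ratio:.1f} ohm, phase = {math.degrees(phase_shift):.1f} degrees")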
It was soon realized that these formulas, even if one was only interested in real solutions, sometimes required the manipulation of square roots of negative numbers. For example, Tartaglia's cubic formula gives the following solution to the equation x^3 − x = 0:

x = \frac{1}{\sqrt{3}}\left(\sqrt{-1}^{1/3} + \frac{1}{\sqrt{-1}^{1/3}}\right).

At first glance this looks like nonsense. However, formal calculations with complex numbers show that the equation z^3 = i has solutions −i, \tfrac{\sqrt{3}}{2} + \tfrac{1}{2}i and -\tfrac{\sqrt{3}}{2} + \tfrac{1}{2}i. Substituting these in turn for \sqrt{-1}^{1/3} in Tartaglia's cubic formula and simplifying, one gets 0, 1 and −1 as the solutions of x^3 − x = 0. This was doubly unsettling since not even negative numbers were considered to be on firm ground at the time.

The term "imaginary" for these quantities was coined by René Descartes in 1637 and was meant to be derogatory (see imaginary number for a discussion of the "reality" of complex numbers). A further source of confusion was that the equation

\sqrt{-1}^2 = \sqrt{-1}\sqrt{-1} = -1

seemed to be capriciously inconsistent with the algebraic identity

\sqrt{a}\sqrt{b} = \sqrt{ab},

which is valid for positive real numbers a and b, and which was also used in complex number calculations with one of a, b positive and the other negative. The incorrect use of this identity (and the related identity 1/\sqrt{a} = \sqrt{1/a}) in the case when both a and b are negative even bedeviled Euler. This difficulty eventually led to the convention of using the special symbol i in place of \sqrt{-1} to guard against this mistake.

The 18th century saw the labors of Abraham de Moivre and Leonhard Euler. To de Moivre is due (1730) the well-known formula which bears his name, de Moivre's formula:

(\cos\theta + i\sin\theta)^n = \cos n\theta + i\sin n\theta,

and to Euler (1748) Euler's formula of complex analysis:

\cos\theta + i\sin\theta = e^{i\theta}.

The existence of complex numbers was not completely accepted until the geometrical interpretation had been described by Caspar Wessel in 1799; it was rediscovered several years later and popularized by Carl Friedrich Gauss, and as a result the theory of complex numbers received a notable expansion. The idea of the graphic representation of complex numbers had appeared, however, as early as 1685, in Wallis's De Algebra tractatus.

Wessel's memoir appeared in the Proceedings of the Copenhagen Academy for 1799, and is exceedingly clear and complete, even in comparison with modern works. He also considers the sphere, and gives a quaternion theory from which he develops a complete spherical trigonometry. In 1804 the Abbé Buée independently came upon the same idea which Wallis had suggested, that ±\sqrt{-1} should represent a unit line, and its negative, perpendicular to the real axis. Buée's paper was not published until 1806, in which year Jean-Robert Argand also issued a pamphlet on the same subject. It is to Argand's essay that the scientific foundation for the graphic representation of complex numbers is now generally referred. Nevertheless, in 1831 Gauss found the theory quite unknown, and in 1832 published his chief memoir on the subject, thus bringing it prominently before the mathematical world. Mention should also be made of an excellent little treatise by Mourey (1828), in which the foundations for the theory of directional numbers are scientifically laid.
The general acceptance of the theory is not a little due to the labors of Augustin Louis Cauchy and Niels Henrik Abel, and especially the latter, who was the first to boldly use complex numbers with a success that is well known.

The common terms used in the theory are chiefly due to the founders. Argand called \cos\varphi + i\sin\varphi the direction factor, and r = \sqrt{a^2 + b^2} the modulus; Cauchy (1828) called \cos\varphi + i\sin\varphi the reduced form (l'expression réduite); Gauss used i for \sqrt{-1}, introduced the term complex number for a + bi, and called a^2 + b^2 the norm. The expression direction coefficient, often used for \cos\varphi + i\sin\varphi, is due to Hankel (1867), and absolute value, for modulus, is due to Weierstrass.

Following Cauchy and Gauss have come a number of contributors of high rank, of whom the following may be especially mentioned: Kummer (1844), Leopold Kronecker (1845), Scheffler (1845, 1851, 1880), Bellavitis (1835, 1852), Peacock (1845), and De Morgan (1849). Möbius must also be mentioned for his numerous memoirs on the geometric applications of complex numbers, and Dirichlet for the expansion of the theory to include primes, congruences, reciprocity, etc., as in the case of real numbers.

A complex ring or field is a set of complex numbers which is closed under addition, subtraction, and multiplication. Gauss studied complex numbers of the form a + bi, where a and b are integral, or rational (and i is one of the two roots of x^2 + 1 = 0). His student, Ferdinand Eisenstein, studied the type a + bω, where ω is a complex root of x^3 − 1 = 0. Other such classes (called cyclotomic fields) of complex numbers are derived from the roots of unity x^k − 1 = 0 for higher values of k. This generalization is largely due to Kummer, who also invented ideal numbers, which were expressed as geometrical entities by Felix Klein in 1893. The general theory of fields was created by Évariste Galois, who studied the fields generated by the roots of any polynomial equation in one variable.

The late writers (from 1884) on the general theory include Weierstrass, Schwarz, Richard Dedekind, Otto Hölder, Henri Poincaré, Eduard Study, and Alexander MacFarlane.
Climate Models Irreducibly Imprecise

A number of recent papers analyzing the nature of climate models have yielded a stunning result little known outside of mathematical circles—climate models like the ones relied on by the IPCC contain "irreducible imprecision." According to one researcher, all interesting solutions for atmospheric and oceanic simulation (AOS) models are chaotic, hence almost certainly structurally unstable. Furthermore, this instability is an intrinsic mathematical property of the models which cannot be eliminated. Analysis suggests that models should only be used to study processes and phenomena, not for precise comparisons with nature.

The ability to predict the future state of the Earth climate system, given its present state and the forcings acting upon it, is the holy grail of climate science. What is not fully appreciated by most is that, in the prediction of the evolution of that system, we are severely limited by the fact that we do not know with arbitrary accuracy the evolution equations and the initial conditions of the system. By necessity climate models work with a finite number of equations, from initial data determined with finite resolution from a finite set of observations. These limitations are further exacerbated by the addition of structural instability due to finite mesh discretization errors (the real world isn't divided into boxes 10s or 100s of kilometers on a side; the impact of changing mesh size has been well documented in a number of recent studies).

In a 2007 paper, James C. McWilliams, of the Department of Atmospheric & Oceanic Sciences at UCLA, used the term "irreducible imprecision" for the impact of errors in AOS models, measured as the change in the probability density functions (PDFs) of the modeled climate equilibrium compared with the true PDFs from nature. The main hypothesis advocated by McWilliams is that structural instability is the primary source of irreducible imprecision for climate change science. In other words, small changes in AOS model parameters or formulation result in significant differences in the long-time PDFs or the phase-space attractor, and these can affect climate change projections.

Virtually all physical systems have structural instability, according to a paper in PNAS by Andrew J. Majda, Rafail Abramov, and Boris Gershgorin:

Climate change science focuses on predicting the coarse-grained, planetary scale, longtime changes in the climate system due to either changes in external forcing or internal variability, such as the impact of increased carbon dioxide. For several decades, the predictions of climate change science have been carried out with some skill through comprehensive computational, atmospheric, and oceanic simulation (AOS) models that are designed to mimic the complex, physical, and spatio-temporal patterns in nature. Such AOS models, either through lack of resolution due to current computing power or through inadequate observation of nature, necessarily parameterize the impact of many features of the climate system such as clouds, mesoscale and submesoscale ocean eddies, sea ice cover, etc. There are intrinsic errors in the AOS models for the climate system and a central scientific issue is the effect of such model errors on predicting the coarse-grained, large-scale, longtime quantities of interest in climate change science.

What is at issue here is the fundamental behavior of turbulent, chaotic, dynamical systems.
To understand the true impact of these statements some background information is needed—such as just what a probability density function is. Such systems have been the subject of study for more than a century, beginning with early work on Brownian motion. In 1827, English botanist Robert Brown noticed that pollen grains suspended in water jiggled about under the lens of the microscope, describing seemingly random zigzag paths. Even pollen grains that had been stored for a century moved in the same way. The puzzle was why the pollen didn't eventually settle to the bottom of the jar. As explained by Desaulx in 1877, the phenomenon is a result of thermal molecular motion in the liquid environment. A suspended particle is constantly and randomly bombarded from all sides by molecules of the liquid. A number of the equations found in climate models come from studies of fluid flow. Where things become really dicey is when the flow becomes turbulent—chaotic in the mathematical sense. This touches on work by Edward Lorenz in the early 1960s, some of which was discussed in The Resilient Earth. Again, to understand the math presented in these papers some background in fluid flow and chaos theory is needed. There is a fairly accessible paper that presents useful background information by Matthew Carriuolo, “The Lorenz Attractor,Chaos, And Fluid Flow,” available on the web. It was his undergraduate-level thesis at Brown University, done in 2005. In a smoothly flowing fluid, a laminar flow, it is possible to trace the trajectory of a particle or molecule through the system. Unfortunately, in many, if not most, complex natural systems fluid does not flow smoothly. Instead it exhibits swirling, tumbling patterns of turbulence—chaotic flow. Under these conditions the trajectories followed by individual particles are unpredictable. Two particles that start next to each other may follow wildly different paths through the system under chaotic conditions. Instead of trying to predict particle trajectories exactly scientists turn to a statistical measure of where the particles are likely to be—this is the probability density function. An example of a PDF overlain by an individual particle's trajectory is shown in the figure below, taken from Carriuolo's thesis. The green region is a representation of the probability density function for the Rossler Attractor, the cyan dotted path is an actual Phase space trajectory. From Carriuolo, 2005. Brownian motion follows the Langevin equation (a), which can be solved directly using numerical methods such as Monte Carlo simulation. This approach, however, can be quite expensive computationally. The main method of solution is by use of the Fokker-Planck equation (b), which provides a deterministic equation satisfied by the time dependent probability density. Other techniques, such as path integration have also been used, drawing on the analogy between statistical physics and quantum mechanics. For physics fans, the Fokker-Planck equation can be transformed into the Schrödinger equation by rescaling a few variables. Unfortunately, being a partial differential equation the Fokker–Planck equation can be solved analytically only in special cases—generally numerical methods must be used. What is important in this application is that the Fokker–Planck equation can be used for computing the probability densities of stochastic differential equations. 
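To connect the Langevin and Fokker-Planck pictures described above, here is a small Monte Carlo sketch with illustrative parameter values (it is not taken from the papers under discussion): many independent particles follow a discretized free Brownian motion, and the empirical distribution of their positions is compared with the Gaussian density that the corresponding Fokker-Planck equation predicts, whose variance grows as 2Dt.

import numpy as np

# Illustrative sketch: free Brownian motion, dx = sqrt(2*D) dW, simulated
# for many independent particles with the Euler-Maruyama scheme.
rng = np.random.default_rng(0)
D, dt, n_steps, n_particles = 1.0, 1e-3, 2000, 100_000

x = np.zeros(n_particles)
for _ in range(n_steps):
    x += np.sqrt(2 * D * dt) * rng.standard_normal(n_particles)

t = n_steps * dt
var_theory = 2 * D * t   # Fokker-Planck solution here: a Gaussian with variance 2*D*t
print(f"sample variance = {x.var():.3f}, Fokker-Planck prediction = {var_theory:.3f}")

# Empirical PDF versus the theoretical Gaussian on a coarse grid.
edges = np.linspace(-6, 6, 61)
hist, _ = np.histogram(x, bins=edges, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
pdf = np.exp(-centers**2 / (2 * var_theory)) / np.sqrt(2 * np.pi * var_theory)
print(f"max |empirical - theoretical| = {np.abs(hist - pdf).max():.3f}")

For this simple diffusion the two agree closely; the point of the papers discussed here is that for turbulent, structurally unstable systems the model PDF and the true PDF need not agree, and the mismatch cannot be tuned away.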
The Fokker–Planck equation describes the time evolution of the PDF of the position of a particle or other parameter observation of interest. It is named after Adriaan Fokker and Max Planck, and was first used for the statistical description of (surprise!) Brownian motion of a particle in a fluid. In his PNAS paper, “Irreducible imprecision in atmospheric and oceanic simulations,” McWilliams identifies two types of endemic modeling error—sensitive dependence and structural instability. As a result of these errors, “there is a persistent degree of irreproducibility in results among plausibly formulated AOS models. I believe this is best understood as an intrinsic, irreducible level of imprecision in their ability to simulate nature.” Generic behaviors for chaotic dynamical systems with dependent variables ξ(t) and η(t): (Left) sensitive dependence, small changes in initial or boundary conditions imply limited predictability with (Lyapunov) exponential growth in phase differences, and (Right) structural instability, small changes in model formulation alter the long-time probability distribution function, PDF (i.e., the attractor). For climate models, McWilliams states, “their solutions are rarely demonstrated to be quantitatively accurate compared with nature.” What's more, “their partial inaccuracies occur even after deliberate tuning of discretionary parameters to force model accuracy in a few particular measures.” McWilliams attributes this to differences between the model's predicted long-term, steady state solution and the steady state conditions of the natural system. The way these differences are determined is by comparing PDFs of the model and the natural environment. The last item of math-speak that you need to know to understand the McWilliams and Majda et al. papers is “Lyapunov characteristic time.” When you have a system of partial differential equations that meet all the necessary mathematical restrictions discussed above the Lyapunov exponent or Lyapunov characteristic exponent can be computed (there are actually a number of these exponents, a whole spectrum with the number of exponents equal to the number of dimensions of the phase space). The largest exponent characterizes the rate of separation of infinitesimally close trajectories in phase space and can determine the predictability of the system in question. The inverse of the largest Lyapunov exponent is sometimes referred in literature as Lyapunov time. In simple terms, it can provide a time limit on the validity of a model's future predictions. Given that, here is what Majda et al. have to say about the current crop of GCM climate models: Contemporary climate models are typically characterized by a set of fast “weather” variables that describe small-scale interactions on a short time scale of a few hours, nonlinearly coupled with the large-scale slow “climate” variables.This setup causes the largest Lyapunov exponents and, consequently, the characteristic Lyapunov time to be extremely short and associated with the fast variables, whereas the response of the mean climate state is tied to the decorrelation times of the slow-climate variables. Therefore, it is likely that the typical time of climate response development will be much longer than the Lyapunov characteristic time, and the irreducible imprecision noted above may potentially have a remarkable impact. 
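The idea of a Lyapunov characteristic time can be made concrete with a few lines of code. The sketch below uses the Lorenz system with its textbook parameters, chosen purely for illustration rather than taken from the climate papers: two trajectories start a tiny distance apart, the exponential growth rate of their separation is fitted, and its inverse is the Lyapunov time that limits detailed prediction.

import numpy as np

def lorenz(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(v, dt):
    # Classical fourth-order Runge-Kutta step.
    k1 = lorenz(v)
    k2 = lorenz(v + 0.5 * dt * k1)
    k3 = lorenz(v + 0.5 * dt * k2)
    k4 = lorenz(v + dt * k3)
    return v + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

dt, n_steps = 0.01, 3000
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])     # tiny perturbation of the initial condition

log_sep = []
for _ in range(n_steps):
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    log_sep.append(np.log(np.linalg.norm(b - a)))

# Fit the early, exponential-growth part of the separation (before it saturates).
t = dt * np.arange(1, n_steps + 1)
window = slice(200, 1500)
lam = np.polyfit(t[window], np.array(log_sep)[window], 1)[0]
print(f"estimated largest Lyapunov exponent ~ {lam:.2f} per time unit")
print(f"Lyapunov time ~ {1.0 / lam:.2f} time units")

Beyond a few multiples of the Lyapunov time, the two trajectories are effectively unrelated, which is the precise sense in which sensitive dependence limits pointwise prediction even when the statistics (the attractor's PDF) remain well defined.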
If there is any doubt that such imprecision leads to a wide range of variability in model predictions, look at the figure below showing the output of a number of models. It shows their predictions of globally averaged surface air temperature change in response to emissions scenario A2 of the IPCC Special Report on Emission Scenarios. Note that atmospheric CO2 levels are double present concentrations by year 2100. As can be seen, a large disparity exists among various climate models in their prediction of change in global mean surface air temperature. The predicted temperature rise for 2100 ranges from a low of ~1°C to a high approaching 6°C. Although each climate model has been optimized to reproduce observational means, each model contains slightly different choices of model parameter values as well as different parametrizations of under-resolved physics.

This is why I have repeatedly stated that climate modeling is no substitute for real climate science. Sadly, the IPCC's climate scientists have known about their models' weaknesses from the start. Majda et al. wrote their paper to suggest alternate ways of modeling climate systems. Whether being able to solve more accurately for the long-term climate trend would prove sufficient is an open question. If the best you can do is say it is going to get warmer for a while and then, within say 10,000 years, Earth will start the slow descent into another glacial period, most people (politicians and the media in particular) will show little interest. In addition to Majda et al.'s fluctuation dissipation theorem approach, other types of model have recently been suggested. Regardless, given today's models, the predictions climate change alarmists base their case on cannot be trusted. To quote McWilliams: "Such simulations provide fuller depictions than those provided by deductive mathematical analysis and measurement (because of limitations in technique and instrumental-sampling capability, respectively), albeit with less certainty about their truth."

Scientists are currently arguing about temperature changes of tenths of degrees per decade or even per century. Given the state of GCM and available computer resources, valid predictions of climate changes of these magnitudes simply cannot be accurately calculated. This is not a matter of opinion, it is a statement of fact based on mathematical analysis of climate models by multiple scholars. To base the future of the world's economy and possibly the course of human civilization on climate model predictions is insanity. Be safe, enjoy the interglacial and stay skeptical.

models are "right" if they fit your theory

But I was reading the IPCC report and was floored to learn that they estimate radiance from CO2 to be 3.7 watts per meter squared. I had just read some reconstructions that referenced Crowley et al showing the millennial reconstruction for solar constant to be 1365 watts per square meter with a variance of.... 3.5 watts. Well if 3.7 watts is a hockey stick, then 3.5 watts should show up in the record like a sore thumb. Except it didn't. Then I realized that I had forgotten basic physics. How long after the voltage goes up does the capacitor reach full charge? You have to calculate a time constant based on resistance to change. Well I couldn't do that but I could reverse engineer it by matching the data to the theory. Which I did. 5 time constants (full charge so to speak) comes out to 150 years.
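The commenter's capacitor analogy is a first-order lag: after a step change in forcing, the response approaches its new equilibrium as 1 − e^(−t/τ), so about 63% of the change appears after one time constant and about 99% after five. A minimal sketch, assuming a 30-year time constant purely so that five time constants come out to the 150 years mentioned:

import math

tau = 30.0   # assumed time constant in years (so that 5*tau = 150 years)

for n in range(1, 6):
    t = n * tau
    fraction = 1.0 - math.exp(-t / tau)   # fraction of the equilibrium response reached
    print(f"after {n} time constant(s) ({t:.0f} yr): {100 * fraction:.1f}% of full response")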
Then I used the result to predict recent temps and got a rough max at 2000 with pretty much a flat line after that. Would be very interested in any feedback: You might be interested … Your post brought to mind this article by Jeffrey A. Glassman, "The Cause of the Earth's Climate Change is the Sun," in the following respect, from the article: "This model hypothesis that the natural responses of Earth to solar radiation produce a selecting mechanism. The model exploits evidence that the ocean dominates Earth's surface temperature, as it does the atmospheric CO2 concentration, through a set of delays in the accumulation and release of heat caused by three dimensional ocean currents. The ocean thus behaves like a tapped delay line, a well-known filtering device found in other fields, such as electronics and acoustics, to amplify or suppress source variations at certain intervals on the scale of decades to centuries. A search with running trend lines, which are first-order, finite-time filters, produced a family of representations of TSI as might be favored by Earth's natural responses. One of these, the 134-year running trend line, bore a strong resemblance to the complete record of instrumented surface temperature, the signal called S134." A nicely formatted pdf of this article is here: Best Regards, If the right knowledge base had been consulted this issue could've been resolved from the start. The first paragraph of this post is saying the same thing I've been saying since I first heard several years ago that climate researchers were using computer models to make predictions. My computer science professors from 20 years ago could've told people what "a number of recent papers" said. They told us about the unreliability of trying to simulate chaotic non-linear systems. Heck, even my professor for freshman economics could've told people this. Economists have used chaotic non-linear models for decades. What I was told years ago was everyone knew they were not good enough for prediction, just for study, though judging from what Alan Greenspan said in the aftermath of the economic crash of 2008, that may have changed. Not that the economic models had gotten any better, but the attitude towards them in the financial community probably did. What's the point in developing knowledge to try to help humanity when the implementors of systems remain blissfully ignorant? I was telling a climate modeler who believes in catastrophe about this this past summer. I was like Charlie Brown's teacher. All he heard was "Blah blah blah, blabbity-blah." I guess he had no incentive to listen, since I was telling him his job was not as significant (though not totally useless) as he thought it was. I imagine being on the other end of that kind of message, and taking it to heart, would probably be depressing. Best avoid thinking about it, move on, and keep getting one's paycheck.
Take the 2-minute tour × Lawrence Evans wrote in discussing the work of Lions fils that there is in truth no central core theory of nonlinear partial differential equations, nor can there be. The sources of partial differential equations are so many - physical, probabilistic, geometric etc. - that the subject is a confederation of diverse subareas, each studying different phenomena for different nonlinear partial differential equation by utterly different methods. To me the second part of Evans' quote does not necessarily imply the first. So my question is: why can't there be a core theory of nonlinear PDE? More specifically it is not clear to me is why there cannot be a mechanical procedure (I am reminded here by [very] loose analogy of the Risch algorithm) for producing estimates or good numerical schemes or algorithmically determining existence and uniqueness results for "most" PDE. (Perhaps the h-principle has something to say about a general theory of nonlinear PDE, but I don't understand it.) I realize this question is more vague than typically considered appropriate for MO, so I have made it CW in the hope that it will be speedily improved. Given the paucity of PDE questions on MO I would like to think that this can be forgiven in the meantime. share|improve this question Are there any Markov or Novikov type theorems for PDEs ? i.e. presumably you could encode algorithmically unsolvable problems into the language of PDEs. Meaning, knowledge of some aspect of the solution (bounded orbit, say) is equivalent to knowing the solution to an algorithmically unsolvable problem? If there were such theorems that would partially address your question. –  Ryan Budney Feb 14 '10 at 23:14 Perhaps the kind of negative result you are looking for is the theorem of Pour-el and Richards that the 3-dimensional wave equation has non-computable solutions with computable initial conditions. This is in their book Computability in Analysis and Physics (Springer-Verlag 1989). –  John Stillwell Feb 14 '10 at 23:43 @Ryan, John--Good point! (One or both of) You should put this in an answer...I seem to recall hearing something once along the lines of PDEs being Turing-universal. But perhaps there can be a general theory of PDE that correspond to a restricted model of computation? –  Steve Huntsman Feb 15 '10 at 1:10 When I teach basic differential equations, I stress analogies with algebraic equations. While this is probably more simple-minded than you were looking for, I point out (without attempting a thorough justification) that although there is a good theory of linear (algebraic equaions) a general theory to solve all algebraic equations, no matter how irregular, is hopelessly out of reach. And we have no right to expect better of differential equations. –  Mark Meckes Mar 23 '10 at 16:31 This also brings to mind the preface (books.google.com/…) from "Lectures on Partial Differential Equations" by Arnol'd. Unfortunately the google books version cuts out after the first page, and I can't find another English version online. You can find a Russian version by googling for "Лекции об уравнениях с частными производными". –  Josh Guffin Sep 29 '10 at 18:26 show 1 more comment 9 Answers I find Tim Gowers' "two cultures" distinction to be relevant here. PDE does not have a general theory, but it does have a general set of principles and methods (e.g. continuity arguments, energy arguments, variational principles, etc.). Sergiu Klainerman's "PDE as a unified subject" discusses this topic fairly exhaustively. 
Any given system of PDE tends to have a combination of ingredients interacting with each other, such as dispersion, dissipation, ellipticity, nonlinearity, transport, surface tension, incompressibility, etc. Each one of these phenomena has a very different character. Often the main goal in analysing such a PDE is to see which of the phenomena "dominates", as this tends to determine the qualitative behaviour (e.g. blowup versus regularity, stability versus instability, integrability versus chaos, etc.) But the sheer number of ways one could combine all these different phenomena together seems to preclude any simple theory to describe it all. This is in contrast with the more rigid structures one sees in the more algebraic sides of mathematics, where there is so much symmetry in place that the range of possible behaviour is much more limited. (The theory of completely integrable systems is perhaps the one place where something analogous occurs in PDE, but the completely integrable systems are a very, very small subset of the set of all possible PDE.) p.s. The remark Qiaochu was referring to was Remark 16 of this blog post. share|improve this answer I wonder: can one not model Turing machines using ODEs? –  Mariano Suárez-Alvarez Feb 15 '10 at 2:15 And even the completely integrable systems are full of surprises, such as the Camasso–Holm equation, where the solution concept needs some tweaking in order to make the Cauchy problem well posed. –  Harald Hanche-Olsen Feb 15 '10 at 2:20 @Mariano: yes, as covered in your subsequent question: mathoverflow.net/questions/15309 –  Steve Huntsman Feb 15 '10 at 4:26 Leave it to Terry Tao to give the most knowledgable and succinct response to a deep question.His grasp of the Big Picture and relevant publications in any field never ceases to amaze me. –  Andrew L Jun 4 '10 at 21:13 add comment Further to my comment above, on the theorem of Pour-el and Richards: it originally appeared in Advances in Math. 39 (1981) 215-239, entitled "The wave equation with computable initial data such that its unique solution is not computable." I think it is fair to say that they get the wave to simulate a universal Turing machine, albeit with very complicated encoding. However, this may all be irrelevant to explaining why "nonlinear PDE are hard" because the wave equation is linear! share|improve this answer Yes, I would say that there is a general theory of linear PDE, and Hörmander pretty well captures the basics. –  Steve Huntsman Feb 15 '10 at 3:00 Yes, there is a general theory of linear PDE developed largely by Hormander, but of what use is it? In some sense, the space of all possible linear PDE's can be viewed as a singular algebraic variety, where Hormander's theory applies only to generic (smooth) points and the most interesting and heavily studied PDE's all lie in a lower-dimensional subvariety and mostly in the singular set of the variety. –  Deane Yang Feb 15 '10 at 3:17 Also, even though you can't solve the halting problem for Turing machines, the existence, uniqueness, and computability (by definition!) of solutions to the Turing machine “equations of motion” are all utterly trivial. For PDEs, nothing could be farther from the truth. Similarly for ODEs: The local theory is easy, it's long term and global behaviour that is difficult. But for PDEs, even the local theory is fiendishly difficult. (Except for the Cauchy-Kowalevskaja theorem, which despite (or because of?) its generality also turns out to be of rather limited use.) 
–  Harald Hanche-Olsen Feb 15 '10 at 3:31 add comment I agree with Craig Evans, but maybe it's too strong to say "never" and "impossible". Still, to date there is nothing even close to a unified approach or theory for nonlinear PDE's. And to me this is not surprising. To elaborate on what Evans says, the most interesting PDE's are those that arise from some application in another area of mathematics, science, or even outside science. In almost every case, the best way to understand and solve the PDE arises from the application itself and how it dictates the specific structure of the PDE. So if a PDE arises from, say, probability, it is not surprising that probabilistic approximations are often very useful, but, say, water wave approximations often are not. On other hand, if a PDE arises from the study of water waves, it is not surprising that oscillatory approximations (like Fourier series and transforms) are often very useful but probabilistic ones are often not. Many PDE's in many applications arise from studying the extrema or stationary points of an energy functional and can therefore be studied using techniques arising from calculus of variations. But, not surprisingly, PDE's that are not associated with an energy functional are not easily studied this way. Unlike other areas of mathematics, PDE's, as well as the techniques for studying and solving them, are much more tightly linked to their applications. There have been efforts to study linear and nonlinear PDE's more abstractly, but the payoff so far has been rather limited. share|improve this answer add comment Some more random thoughts: The closest thing I've ever seen to a "general theory of nonlinear PDE's" is Gromov's book, Partial Differential Relations. He does many things in there that I don't understand, but one application that he applied his theory to is isometric embeddings of Riemannian manifolds into Euclidean space or other higher dimensional Riemannian manifolds (the problem made famous by Nash). Moreover, in a paper by Bryant, Griffiths, and me (but in a section written by the other two and not me), it is shown that in some sense, the linearized PDE corresponding to the isometric embedding of an $n$-dimensional Riemannian manifold into $n(n+1)/2$-dimensional Euclidean space looks like a generic $n$-by-$n$ system of first order linear PDE's. I'm not aware of any other place where a "generic" system of PDE's arises naturally. The results in this paper inspired some efforts by Jonathan Goodman and me (unpublished) as well as Nakamura and Maeda (TAMS 313 (1989) 1-51) to extend Hormander's theory of linear PDE's (at least those of real principal type) to nonlinear PDE's. (It should be noted that much more interesting work in this direction was done for the 2-dimensional case, starting with the Ph.D. thesis of C. S. Lin) But maybe you really meant "the general theory of nonlinear PDE's that are elliptic, hyperbolic, or parabolic" and not really the all encompassing "general theory of nonlinear PDE's"? There's far too much junk in the latter. share|improve this answer As far as my very limited understanding goes, the h-principle is not really a "general theory of nonlinear PDE's" but mostly applies to underdetermined systems, which happen to arise a lot in geometric applications, but not as much in physics. –  Otis Chodosh Jan 11 '12 at 3:54 Yes, Gromov's study of PDE's is pretty much limited to underdetermined systems and therefore is definitely not a study of general PDE's. 
But it applies to general underdetermined PDE's, and that's probably the broadest class of PDE's that anyone has been able to study using a unified approach. –  Deane Yang Jan 11 '12 at 8:39 Here is a 7-page review of Partial Differential Relations by Dusa McDuff: projecteuclid.org/DPubS/Repository/1.0/… –  Tom LaGatta Mar 20 '13 at 10:44 Tom, thanks. I've certainly seen that when it first came out. –  Deane Yang Mar 21 '13 at 2:41 add comment To elaborate on Steve Huntsman's comment, I remember reading the following on Terence Tao's blog: there exist PDE that can simulate Newtonian mechanics, and using such a PDE and the correct initial conditions it is possible, in principle, to simulate an arbitrary analog Turing machine. So a general-purpose algorithm to determine even the qualitative behavior of an arbitrary PDE cannot exist because such an algorithm could be used to solve the halting problem. share|improve this answer add comment I think there is something you can call a general theory of PDEs. It started already long time ago with Meray, Riquier, Janet, Elie Cartan. There is an important survey article by Donald Spencer: Overdetermined systems of linear partial differential equations , Bull. Amer. Math. Soc. 75 (1969), 179-239. see also the recent book by Seiler: Involution:The Formal Theory of Differential Equations and its Applications in Computer Algebra, springer, 2010. This book contains lots of references to this topic. It is a bit strange why this line of research is not very well known. share|improve this answer In response to "It is a bit strange why this line of research is not very well known": 1) Actually, this stuff has become much better known through the work and books by Bryant, Chern, Goldschmidt, Griffiths, Ivey, and Landsberg. 2) Most PDE's that arise from other areas of mathematics and sciences are either scalar or determined systems. For such PDE's, the formal theory tells you nothing more than what the Cauchy-Kovalevski theorem says. 3) The formal theory tells you nothing about the global behavior and regularity of solutions to PDE's. –  Deane Yang Jun 4 '10 at 18:47 @Deane. Your comment 2) is irrelevant for several reasons. a) Cauchy-Kovalevskaia theorem tells you nothing about the Cauchy problem for the heat equation, Navier-Stokes system or Schrödinger equation, because the order with respect to time ($=1$) is smaller than the total order ($=2$). b) Real problems are posed in domains with boundaries, and the boundary conditions can be non-homogeneous. You may need a very much elaborated theory to prove the solvability. Hyperbolic initial-boundary-value problems are notoriously difficult (see the book by S. Benzoni-Gavage and myself); C.-K. is useless. –  Denis Serre Nov 18 '10 at 7:51 Denis, your statements are consistent with and provide some details that underly mine. –  Deane Yang Jan 20 '11 at 4:27 add comment this is a comment to Deane Yang, but apparently it was too long so here is a separate answer. My background is in numerical solution of PDEs 1) while I know about this, it is not at all well-known by people who numerically solve PDEs. 2) this is not true. Most computations are systems of PDEs. I think most computations are done with systems where there are no actual theory, i.e. existence and uniqueness results. Think about Navier-Stokes. Many systems are NS coupled with for example convection diffusion type systems (small amounts of material in the flow etc). Then there are liquid crystals, Maxwell, elasticity, flow coupled with elasticity etc. 
Of course when the computers were slower one had to simplify to get a scalar equation and then hope that it gives something reasonable. Of course Cauchy-Kovalevskaia as such is irrelevant because one wants the solutions in Sobolev spaces. But the whole formal theory started as a generalization of CK. 3) this is not true. For example there are systems which are not elliptic initially but whose involutive forms are elliptic. This gives a priori regularity results and existence results. Also one could argue that the word "determined" (and over/underdetermined) can't be defined in general without formal theory. share|improve this answer Here are my reactions: 1) Is the formal theory useful for numerical solutions? Could you provide references for this? 2) There are certainly systems consisting of an evolution equation that is coupled with a constraint or gauge condition. Navier-Stokes is like this. The formal theory provides no new insights for these systems, either. 3) What example do you have in mind? I know this statement as an abstract theorem, but I have never seen it used anywhere. –  Deane Yang Jun 4 '10 at 20:26 add comment I will simply quote Heisenberg. (This an approximative quote from memory.) One can say almost everything about nothing, and almost nothing about everything. share|improve this answer But why is the study of PDE's "everything"? In comparison to, say, the study of polynomials? –  Deane Yang Jan 11 '12 at 8:37 Of course, the study of PDEs is not everything, but the point of Heisenberg's quote (and he was referring specifically to nonlinear pde-s) is within the pde Universe, the statement that something is nonlinear carries zero information. PS The study of polynomials can be viewed as a subclass of the study of PDE's (think characteristic polynomials of constant coefficients o.d.e.'s) More generally the theory of D-modules suggests that substantial chunk of algebraic geometry is closely connected to PDE's. –  Liviu Nicolaescu Jan 11 '12 at 11:28 add comment In my limited experience, the furthest you can carry a general theory of PDEs, assuming only smoothness of the PDEs for example, is to describe the characteristic variety and its integrability, or to determine whether the equations are formally integrable (in the sense of the Cartan-Kaehler theorem). Already you find that every real algebraic variety is the characteristic variety of some system of PDE. Even when the characteristic variety is very elementary (a sphere, for example), we know very little about the PDE (it is hyperbolic, but we don't have a complete theory of boundary value problems, initial value problems, long term existence, uniqueness). So I think that a general theory of PDE would have to be much more difficult than real algebraic geometry, which already has elementary problems that seem to be very difficult. share|improve this answer add comment Your Answer
Matter wave

In quantum mechanics, a branch of physics, a matter wave is the description of matter as a wave. The concept of matter waves was first introduced by Louis de Broglie. Matter waves are hard to visualize, because we are used to thinking of matter as solid. De Broglie revolutionized quantum mechanics by producing the equation for matter waves.

Wavelength of Matter

Based on the fact that light has a wave-particle duality, de Broglie showed that matter might exhibit wave-particle duality as well (simply meaning that matter behaves both like particles and like waves). Basing his formula on earlier formulas, he arrived at the equation below:

λ = h / (mv)

where λ is the wavelength of the object, h is Planck's constant, m is the mass of the object, and v is the velocity of the object. An alternate but equivalent version of this formula is

λ = h / p

where p is the momentum. (Momentum is equal to mass times velocity.) These equations merely say that matter exhibits a particle-like nature in some circumstances, and a wave-like character at other times. Erwin Schrödinger created a more advanced equation based on this formula and the Bohr model, known as the Schrödinger equation.
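As a numerical illustration of the formula above, the short calculation below uses the standard electron mass and Planck's constant; the speed (one percent of the speed of light) is just an example value. The resulting de Broglie wavelength is comparable to the size of an atom, which is why electrons show wave behaviour on atomic scales.

h = 6.626e-34       # Planck's constant, J*s
m_e = 9.109e-31     # electron mass, kg
c = 2.998e8         # speed of light, m/s

v = 0.01 * c                       # example speed: 1% of the speed of light
wavelength = h / (m_e * v)         # de Broglie relation: lambda = h / (m v)
print(f"lambda = {wavelength:.3e} m")   # roughly 2.4e-10 m, about an atomic diameter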
next up previous contents index Next: 8.2.3 Parallel Algorithm Up: Quantum Mechanical Reactive Previous: 8.2.1 Introduction 8.2.2 Methodology The detailed formulation of reactive scattering based on hyperspherical coordinates and local variational hyperspherical surface functions (LHSF) is discussed elsewhere [Kuppermann:86a], [Hipes:87a], [Cuccaro:89a]. We present a very brief review to facilitate the explanation of the parallel algorithms. For a triatomic system, we label the three atoms , and . Let () be any cyclic permutation of the indices (). We define the coordinates, the mass-scaled [Delves:59a;62a] internuclear vector from to , and the mass-scaled position vector of with respect to the center of mass of diatom. The symmetrized hyperspherical coordinates [Kuppermann:75a] are the hyper-radius , and a set of five angles , , , and , denoted collectively as . The first two of these are in the range 0 to and are, respectively, and the angle between and . The angles , are the polar angles of in a space-fixed frame and is the tumbling angle of the , half-plane around its edge . The Hamiltonian is the sum of a radial kinetic energy operator term in , and the surface Hamiltonian , which contains all differential operators in and the electronically adiabatic potential . The surface Hamiltonian depends on parametrically and is therefore the ``frozen'' hyperradius part of . The scattering wave function is labelled by the total angular momentum J, its projection M on the laboratory-fixed Z axis, the inversion parity with respect to the center of mass of the system, and the irreducible representation of the permutation group of the system ( for ) to which the electronuclear wave function, excluding the nuclear spin part, belongs [Lepetit:90a;90b]. It can be expanded in terms of the LHSF defined below, and calculated at the values of : The index i is introduced to permit consideration of a set of many linearly independent solutions of the Schrödinger equation corresponding to distinct initial conditions which are needed to obtain the appropriate scattering matrices. The LHSF and associated energies are, respectively, the eigenfunctions and eigenvalues of the surface Hamiltonian . They are obtained using a variational approach [Cuccaro:89a]. The variational basis set consists of products of Wigner rotation matrices , associated Legendre functions of and functions of which depend parametically on and are obtained from the numerical solution of one-dimensional eigenvalue-eigenfunction differential equations in , involving a potential related to . The variational method leads to an eigenvalue problem with coefficient and overlap matrices and and whose elements are five-dimensional integrals involving the variational basis functions. The coefficients defined by Equation 8.12 satisfy a coupled set of second-order differential equations involving an interaction matrix whose elements are defined by The configuration space is divided into a set of Q hyperspherical shells within each of which we choose a value used in expansion 8.12. When changing from the LHSF set at to the one at , neither nor its derivative with respect to should change. This imposes continuity conditions on the and their -derivatives at , involving the overlap matrix between the LHSF evaluated at and The five-dimensional integrals required to evaluate the elements of , , , and are performed analytically over , , and and by two-dimensional numerical quadratures over and . 
These quadratures account for 90% of the total time needed to calculate the LHSF and the associated matrices. The system of second-order ordinary differential equations in the expansion coefficients is integrated as an initial value problem from small to large values of the hyper-radius using Manolopoulos' logarithmic derivative propagator [Manolopoulos:86a]. Matrix inversions account for more than 90% of the time used by this propagator. All aspects of the physics can be extracted from the solutions at large hyper-radius by a projection at constant hyper-radius [Hipes:87a], [Hood:86a], [Kuppermann:86a].
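Since the two-dimensional numerical quadratures dominate the cost of building these matrices, here is a minimal, self-contained Python sketch of a tensor-product Gauss-Legendre quadrature over two angles, of the kind used for such surface-function matrix elements; the integrand shown is a hypothetical stand-in for a product of angular basis functions, not the actual LHSF integrand, and the quadrature orders are arbitrary.

```python
import numpy as np

def gauss_legendre_2d(f, a1, b1, a2, b2, n1, n2):
    """Tensor-product Gauss-Legendre quadrature of f(x, y) on [a1,b1] x [a2,b2]."""
    x, wx = np.polynomial.legendre.leggauss(n1)   # nodes/weights on [-1, 1]
    y, wy = np.polynomial.legendre.leggauss(n2)
    # Map nodes and weights from [-1, 1] to the requested intervals.
    x = 0.5 * (b1 - a1) * x + 0.5 * (b1 + a1)
    y = 0.5 * (b2 - a2) * y + 0.5 * (b2 + a2)
    wx = 0.5 * (b1 - a1) * wx
    wy = 0.5 * (b2 - a2) * wy
    X, Y = np.meshgrid(x, y, indexing="ij")
    return np.einsum("i,j,ij->", wx, wy, f(X, Y))   # sum_i sum_j wx_i * wy_j * f(x_i, y_j)

# Hypothetical integrand standing in for a product of two angular basis functions.
integrand = lambda theta, phi: np.sin(theta) * np.cos(2.0 * phi) ** 2

val = gauss_legendre_2d(integrand, 0.0, np.pi, 0.0, 2.0 * np.pi, 20, 20)
print(val)   # exact value is 2*pi: integral of sin(theta) is 2, integral of cos^2(2*phi) is pi
```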
NASA Gravity Probe Confirms Two Einstein Predictions
Posted by samzenpus from the I-hope-it-feels-so-good-to-be-right dept.
• by Anonymous Coward on Thursday May 05, 2011 @04:53AM (#36033044)
Please, can somebody restore the fortune database? Thanks. Uh, and First Post.
• Re: (Score:1, Offtopic) by hcpxvi (773888)
Uh, what he said. I'd mod him up if I had any mod points. Not that I have had any for months, despite excellent karma. The new Slashdot: too buggy to be fit for purpose.
• by rhook (943951) on Thursday May 05, 2011 @06:16AM (#36033308)
The new Slashdot: too buggy to be fit for purpose.
I have to agree with this, several bugs. The most annoying one is having the comments scroll to the top of the page when I click anything.
• by nanospook (521118)
I know this is off topic, but because I need glasses, I use the + and - keys in Opera to zoom the screen a bit. But now /. does something to ignore those keystrokes. I have to go to Options and toggle filter controls. It doesn't seem to matter if it's on or off, I have to just toggle it to another state. Then it works. A day or so later, I have to do it again.
• by Shippu (1888522)
I don't have this problem. It's probably Opera's fault though. For some months I've been wanting to try Firefox/IE9/Chromium because Opera has many unfixed bugs that go back even to version 9. For example, I can't select any text in this text box without doing a right click > select all first. I reported this to them 4 years ago.
• by amaupin (721551) on Thursday May 05, 2011 @09:48AM (#36034508) Homepage
Links are now unclickable, at least on the first 4 or 5 tries. Each time you click a link in someone's post, the page jumps and/or another post expands/collapses. The sheer level of ignorance and/or lack of interest in their own site on the part of the Slashdot owners is mind-boggling. (Click on links? I must be new here.) Seriously, Slashdot, fix your goddam site.
• by Ogive17 (691899)
I'm curious why /. looks like shit while using IE8 or Firefox but looks pretty good on my Droid X's native browser. I was browsing from my phone during a phone conference yesterday and couldn't believe how functional the page looked.
• by Joce640k (829181)
Um, maybe the developer uses a Droid X for development work. That would explain quite a lot actually...
• by Xacid (560407)
And here I thought it was just my fault for not using IE...
• by Hatta (162192)
Mark slashdot.org as untrusted. Switch to classic discussion mode in your preferences.
• by JWW (79176)
Couldn't agree more. EVERY TIME /. upgrades, the first thing I do is go back and turn classic discussion mode back on.
• dont click anything. CmdrTaco --sent from my iPhone
• by sjwt (161428)
no, but I can link to the related Saturday Morning Breakfast Cereal comic. This is why experimental scientists hate theoretical scientists [smbc-comics.com]
• by dotancohen (1015143) on Thursday May 05, 2011 @08:47AM (#36033992) Homepage
Please, can somebody restore the fortune database? Thanks. Uh, and First Post.
Restore it? It works fine for me, here: In fact, I've been seeing that for a few days!
Protip: Say that quote while walking the halls. You will immediately know who your fellow /.ers are by the snickers. If your boss laughs, then you're in trouble.
Well, I'd laugh at that quote -- specifically, the presumptions it implies.
• Honey?
(Score:4, Funny) by mangu (126918) on Thursday May 05, 2011 @05:09AM (#36033112)
Doh, this is Slashdot, we want a car analogy, please. And have the numerical results expressed in libraries of congress per football field. Thanks.
• OK, geodetic effect, check. Frame-dragging, check. Commence dev. project warp drives
• by roger_pasky (1429241) on Thursday May 05, 2011 @06:21AM (#36033332)
Agreed, make it so. Geordi, estimate development period from current stardate. Data, start doing some calculations. Wesley, contact Dr. Sheldon Cooper and piss him off.
• NASA and the USA (Score:5, Insightful) by mustPushCart (1871520) on Thursday May 05, 2011 @05:22AM (#36033158)
I am not an American, but I have seen both the blue pearl image and the pale blue dot image. I have read about how long these projects have run and the astounding quality of the instruments that must be on satellites like these, along with the massive foresight it must have taken at launch time to make them relevant decades later. You can criticize the USA all you want for their wars, and I have heard some harsh criticism of NASA too, but the most astounding images and discoveries have always come from here, because they are at the pinnacle of space exploration. The world would be a lot less interesting if it wasn't for them.
• by Anonymous Coward
Have you seen the comments in TFA by this David de Hilster guy? What a fruitloop. Check out his picture [newiki.org]. Want some love particles, baby?
• by a_hanso (1891616) on Thursday May 05, 2011 @05:53AM (#36033256) Journal
http://einstein.stanford.edu/Media/Simple_Expt_Anima-Flash.html [stanford.edu] has a simple animation explaining the Gravity Probe B experiment.
• That's great... but given quantum physics and that little bugger of a concept known as the observer effect (basically ALL experience is subjective to the observer - even scientific ones...) how do we know the results we are recording are actual vs what we believe we should be experiencing and therefore are willing to see? Sure I could be wrong in what I am saying, but let me know and I'll entertain it in my field of awareness as a possibility and perhaps I'll experience it differently... or maybe not. ;) Y
• by sandytaru (1158959) on Thursday May 05, 2011 @08:40AM (#36033938) Journal
The effects of gravity are at macro scales, not quantum scales. From what I understand, the observer effect doesn't really kick in until you start talking about stuff smaller than atoms. The universe is a bit more well-behaved at scale sizes larger than an atom, where chemistry and classical physics kick in. Our other end of non-understanding doesn't start until you get to the very macro, all the dark matter and dark energy floating around out there that no one really knows anything about.
• by gman003 (1693318)
Exactly. Quantum mechanics only starts to be noticeable below about ~50nm or so. In contrast, gravity is normally only noticeable with objects best measured in yottagrams (that's "quintillions of tons", for those of us a bit fuzzy on the extreme SI prefixes). Now, there's been a huge amount of speculation as to how the two combine, especially from theoretical physicists like Dr. Hawking. However, there have been absolutely no experiments in quantum gravity, for one simple reason: the only time you get that much
• In contrast, gravity is normally only noticeable with objects best measured in yottagrams
1.61lb is considerably less than a yottagram.
Cavendish Experiment [wikipedia.org]
• by gman003 (1693318)
Yes, and that experiment required some of the greatest precision technologically possible at that time. I'm talking objects big enough that the force of gravity they exert is clearly and immediately obvious, just as I was talking about quantum effects only being clearly and immediately obvious below 50nm. You can certainly detect both phenomena at lower masses or greater distances, but that is hardly relevant to the discussion of practical effects.
• The effects of gravity are at macro scales, not quantum scales.
The effects are on all scales. Just because nobody can currently describe how a single photon warps space as it travels does not mean it does not occur. We know it does.
• by blueg3 (192743) on Thursday May 05, 2011 @08:55AM (#36034052)
That's not part of quantum mechanics at all. That's a gross generalization made philosophical that arose out of an actual quantum mechanical principle. Measurement-related QM principles, like wavefunction collapse and Heisenberg, are only meaningful when what you're observing is the size and scale of a quantum state, which is very, very small. Gravitational effects are for the most part (and in this case) for large objects, where QM principles are unimportant.
• by qc_dk (734452)
And it could also be related to a gross misgeneralization of the theory of relativity, which basically states the exact opposite: that any careful observer in any frame of reference will agree on the value of the speed of light and the laws of physics. A better name would have been the theory of constancy.
• by honkycat (249849)
It depends on your perspective. It's "relativity" because most measurements you make *are* relative to your reference frame; only the speed of light (and various invariant quantities) are absolute. The relativity that SR and GR deal with is different in kind than the "peculiarities" of quantum mechanics. And, the previous post was correct: the observation-related uncertainties of QM are (mostly) only important when systems get to microscopic scales. Yes, the same microscopic laws apply to macroscopic phys
• by blueg3 (192743)
Only observers in inertial reference frames agree on the laws of physics, no?
• by Anonymous Coward on Thursday May 05, 2011 @08:58AM (#36034090)
You need to actually study quantum physics if you want to talk about these things like an adult. It's obvious to everyone that HAS studied quantum physics that you're spouting nonsense and claiming that Science supports you. Quit watching "What the bleep do we know?". It's full of people lying to you to sell you an idea (and one scientist who was duped and every single quote taken out of context).
• by xehonk (930376)
The observer effect is not something specific to self-aware observers. It can simply be interaction with other matter - which has then "observed" the item in question. Now with that out of the way, what you want to happen has no influence on what does happen. That's simply not what the observer effect is about.
• by tm2b (42473)
Sorry, you're making a comment on Quantum Mechanics. I am going to have to ask you to explain any version of a Schrödinger equation, or ask you to stop. That should really be a law.
• I usually bow out of stories like this, but must make one comment: Anybody who thinks time is important as a metric is seriously missing the point.
• ... but the Chinese are actively doing it - as seen here in 2007 [dailygalaxy.com].
Sometimes we need to just shut up and do it, else we'll have deja vu like solar energy [wikipedia.org] or nuclear power [world-nuclear.org]
• by cephus440 (828210)
I'm sorry, I posted this comment to the wrong article... sigh.
• by fotoguzzi (230256)
But your first post got Score:1 and your second got Score:2. I think the day is about here when the long running two-million monkey experiment that is slashdot.org will be shut down. Oh, and thank you, Dr. Einstein, for thinking about this stuff and putting it in a form that could be challenged experimentally.
• Finally I can put an end to all of those naysayers of gravitation theory!
• Look - it's just a THEORY - you admitted it yourself right in your post. Go find some facts and get back with me. I've got a Bible full of them right here at my desk, and there isn't a single mention of gravity. I can't believe you're still blathering on about this... ;-)
• Now if I recall correctly, they were also looking for the existence of gravitational waves... which they... didn't find... correct?
• by Greyfox (87712) on Thursday May 05, 2011 @09:21AM (#36034254) Homepage Journal
Relativity and black holes look like bugs in a not-very-well-thought-out physics simulation. This sort of thing makes me wonder if the universe isn't just some extra-dimensional college kid's thesis project on how to find the best way to turn hydrogen into plutonium.
• by StikyPad (445176)
In the beginning, Bob created the heavens and the earth. But his emulation of Newtonian physics was but partially implemented, and so he only got a B-.
• by qc_dk (734452)
Dear Mr. 94343, I would like to thank you for considering our illustrious institution. I regret to inform you, however, that you have not been accepted to our "Universe creation and its applications" Ph.D. programme. While your admission project did indeed show a lot of practical skill and hard effort, we believe your theoretical understanding is somewhat deficient. We asked for the best way to turn hydrogen into plutonium, not iron. We encourage you to take another year of theoretical physics, and reapplying for t
• When I read something like "confirms Einstein's theory" AGAIN I just get annoyed. In my opinion, the mission would only be a success if it found a flaw in Einstein's theories. Those theories are many decades old and I'm hungry for some totally new physics. I get so disappointed when I hear that the Pioneer mystery (or whichever one was curving unexpectedly) is solved using perfectly well known physics. Where are the new unknown rules that we can use to create new breakthrough technologies?
• by notpaul (181662)
• by arisvega (1414195)
From an extra-dimensional point of view, Hydrogen may as well already be Plutonium.
• However the Stanford satellite supposedly is ten times more accurate
• Why it took 52 years (Score:5, Interesting) by rotenberry (3487) on Thursday May 05, 2011 @10:39AM (#36035148)
From what I have heard, the reason it took 52 years to get this spacecraft into space was political, not technical. There is no doubt that the technology developed to measure these parameters is very impressive. The real question is whether or not it was worth the effort. When I was at JPL in the 1980s a person who had published numerous papers in both experimental and theoretical relativity explained why scientists within the space program were not supporting this project.
Since this conversation took place thirty years ago I must paraphrase: "No modern theory of gravity predicts anything else, and if the measurements showed anything but the predicted results it would be assumed to be an experimental error. Unlike the technology used to search for gravitational radiation (which is also used to study the atmospheres of planets), the hardware in this spacecraft cannot be used for any other scientific experiment." So for 52 years the money has been used for other science. For a much more worthy project read about the recently canceled LISA project. If you wish to read about the politics of how a science project is chosen by NASA, I can think of no better description than Steven W. Squyres' "Roving Mars", where he describes how the Mars Rovers were nearly canceled.
• by radtea (464814) on Thursday May 05, 2011 @11:48AM (#36035984)
No modern theory of gravity predicts anything else
Except Moffat's, of course. And while every experimental anomaly is first dismissed as error, the fact (you remember those things, facts?) is that scientists have an excellent record of poking away at anomalies until a robust, consistent explanation is found. Sometimes the explanation is mundane--the Pioneer Anomaly, for example. Sometimes it is profound--the anomalous precession of the orbit of Mercury comes to mind, which was measured quite precisely in the 1850s, if I recall correctly, some sixty years before the underlying cause was found. People who say things like this are simply ignorant of the history and timescales on which science actually operates. It is entirely implausible that a group of people who have collectively worked over hundreds of years to account for dozens of tiny numerical anomalies in extremely difficult precision measurements would suddenly throw up their hands and say, "OK, I guess we can ignore the data now!"
• by Anonymous Coward
Like everything else, science does not have access to infinite resources. However, posts such as yours remind us there is an infinite amount of testing to do. For example, we could pose the question of whether or not a ball and a feather fall at the same rate as each other on Pluto, if dropped simultaneously. In the case where our need for resources outpaces our access to them, we must prioritize what is important. One way of doing this is time and potential for payoff. Consider how many years the hypothetic
Very likely, but nobody would have been absolutely sure. Physicists would have looked at possible theories that were in accordance with the experimental results, and come up with other tests. The Michelson-Morley experiment was similar in effect. People thought it very odd that it didn't show ether drift, but the theories were firmly established, and so physicists kept worrying at it. More expe
• by Chris Burke (6130)
They cancelled LISA?! D= If it's because there's no room in the budget for LISA and a shuttle-derived heavy-lift vehicle, I'm personally going to go kick a bunch of congresscritters in the jewels.
• by equex (747231)
Sometimes I wonder if these great minds that pop up from time to time (Newton, Copernicus, Einstein, etc.) are really one of us. It's funny how they appear, completely revolutionize a field or offer a world-changing new perspective, and then disappear, just to have us mere mortals work for years and decades to understand, confirm and accept it. Applause again for Einstein; you are a bit creepy, to be completely honest.
• My understanding was that (satellite-based) GPS would give you a drastically inaccurate position reading without an algorithmic correction for frame-dragging. If so, it would seem that part of Einstein's predictions were validated quite a few years ago.
• by Strider- (39683) on Thursday May 05, 2011 @01:40PM (#36037490)
No, GPS does take General Relativity and Special Relativity into account, and confirms both nicely. Due to the motion of the spacecraft in orbit with respect to us on the ground, one would expect the GPS satellites to lose about 7 microseconds a day. However, because the satellites are further out of our gravity well, General Relativity predicts the satellites will gain about 45 microseconds a day. Basically, this means that if GR and SR were not taken into account, the GPS system would be useless after about 2 minutes. Source: http://www.astronomy.ohio-state.edu/~pogge/Ast162/Unit5/gps.html [ohio-state.edu]
However, the effect of frame dragging is many orders of magnitude smaller, to the point where it will not have a measurable effect on GPS. To even have a hope of measuring it, Gravity Probe B had gyroscopes made from a set of the most perfect spheres ever manufactured. If you were to scale these spheres up to the size of the Earth, the tallest mountain would be less than 1 meter tall.
• by Required Snark (1702878) on Thursday May 05, 2011 @02:40PM (#36038456)
According to this article http://news.sciencemag.org/sciencenow/2011/05/at-long-last-gravity-probe-b.html?ref=ra [sciencemag.org] the Gravity Probe B experiment results were not very useful. The goal was to get numerical results to 1% accuracy, and the actual measurements only achieved 19% accuracy. This was due to a design error.
Mechanically, the spheres were the roundest objects ever manufactured, Everitt explained. Were one blown up to the size of Earth, the biggest hill on it would be 3 meters tall. However, trapped charges in the niobium made the gyroscopes far less round electrically; an Earth-sized map of a sphere's voltage landscape would sport peaks as high as Mount Everest. Interactions between those imperfections and ones in the gyroscopes' housing created tiny tugs, and to reach the final precisions, researchers spent 5 years figuring out how to correct for them.
On top of that, other researchers made better measurements using other, much cheaper satellites. So they got scooped, and their final results were not what they had planned. Not a complete failure, but not a real success either.
• This is cool news! When I first got deep into physics, I often considered the idea of "a hot air balloon floating (not) around an earth without an atmosphere", and "would the balloon be dragged around the planet as it rotates (by gravity)?"; now I feel satisfied that I know the answer! Which leads to the next question: if you took our solar system and placed it at the most significant Lagrange point between two galaxies, would our understanding of physical constants change? ;) And also the intermediary
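As a rough cross-check of the figures quoted in the GPS comment above (about 7 microseconds per day lost to special relativity and about 45 gained from general relativity), here is a small, self-contained Python sketch; the orbital radius and constants are approximate textbook values, and the calculation uses the standard weak-field formulas rather than anything from the linked source.

```python
import math

# Approximate constants (SI units)
c = 2.998e8           # speed of light, m/s
GM = 3.986e14         # Earth's gravitational parameter, m^3/s^2
R_earth = 6.371e6     # Earth's radius, m
r_gps = 2.656e7       # GPS orbital radius (~20,200 km altitude), m
day = 86400.0         # seconds per day

v_gps = math.sqrt(GM / r_gps)    # circular orbital speed, ~3.9 km/s

# Special relativity: a moving clock runs slow by roughly v^2 / (2 c^2)
sr_loss = (v_gps**2 / (2 * c**2)) * day                   # ~7 microseconds lost per day

# General relativity: a clock higher in the potential runs fast by delta(Phi) / c^2
gr_gain = (GM * (1 / R_earth - 1 / r_gps) / c**2) * day   # ~45 microseconds gained per day

net = gr_gain - sr_loss                                   # ~38 microseconds per day net gain
print(f"SR loss : {sr_loss * 1e6:5.1f} microseconds/day")
print(f"GR gain : {gr_gain * 1e6:5.1f} microseconds/day")
print(f"Net     : {net * 1e6:5.1f} microseconds/day, "
      f"~{net * c / 1000:.0f} km/day of ranging error if uncorrected")
```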
Joe Lykken on "Some good/bad news about string theory"

Joe Lykken, "String Theory for Physicists," XXXIII SLAC Summer Institute, 2005, Lecture 1 [PDF]; Lecture 2 [PDF]; and Lecture 3 [PDF].

"Some good/bad news about string theory:

Good: String theory is a consistent theory of quantum gravity.
Bad: It's really a generator of an infinite number of mostly disconnected theories of quantum gravity, each around a different ground state. No background-independent, truly off-shell formulation of string theory is known (yet).

Good: String theory is unique, i.e. there is only one distinct consistent theory of "fundamental" strings.
Bad: It has an infinite number of continuously connected ground states plus a googol of discrete ones. There appears to be no vacuum selection principle, other than the stability of supersymmetric vacua, which gives the wrong answer.

Good: String theory gives you chiral gauge theories, with big gauge groups, for free, and complicated flavor structure at low energies is mapped into the geometry of extra dimensions.
Bad: It doesn't like to give the Standard Model as the low-energy theory. A "typical" string compactification is either much simpler (with more SUSY and bigger gauge groups) or much more complicated (lots of extra exotic matter, extra U(1) gauge groups, etc.).

Good: String theory predicts supersymmetry and extra dimensions of space.
Bad: It's happy to hide them both up at the Planck scale.

Good: No length or energy scales are put in by hand; all scales should be determined dynamically.
Bad: There appear to be too many (hundreds!) scalar fields (moduli) with too much SUSY to get determined dynamically; we may be forced to appeal to cosmic initial conditions (the Landscape).

Good: String theory gives a microphysical description of (at least some) black holes and resolves their singularities.
Bad: It doesn't seem to resolve the singularity of the Big Bang (good for inflation, though).

Good: Lots of powerful dualities, including weak ↔ strong coupling dualities and short ↔ long distance dualities.
Bad: Can't tell what the "fundamental" degrees of freedom are. String theory is not necessarily a theory of strings.

Good: Unification of all the forces is almost for free; it may need an (interesting) extra-dimensional assist.
Bad: In our most realistic string constructions so far, SU(3)C, SU(2)W, and U(1)Y have essentially nothing to do with each other: they are related to different features of complicated D-brane setups.

Good: AdS/CFT duality shows that 10-dimensional string theory in a certain background is equivalent to a 4-dimensional gauge theory!! Use this e.g. to show that RHIC QCD physics maps onto quantum gravity/black holes.
Bad: It adds more confusion: can't tell an extra dimension apart from technicolor.

Good: We are starting to use string theory to learn tricks for perturbative QCD, understanding the QCD string, etc.
Bad: The QCD community was already doing fine, thank you.

That's all folks!"

Dan Hooper on Light WIMPs

"The thermal abundance ("WIMP Miracle") argument works roughly equally well for WIMPs with masses between ~1 GeV and several TeV, but historically, physicists have focused on ~40 GeV to ~1 TeV WIMPs, and papers have been written, analyses have been carried out, and experiments have been designed (and funded) with this bias in mind.
But, I know of no compelling argument for why dark matter should not consist of ~1-20 GeV particles."

Dan Hooper (Fermilab/University of Chicago) makes the case for "Light WIMPs!" at the TeV Particle Astrophysics Workshop, August 2011. The body of evidence is quite suggestive: "DAMA/LIBRA, CoGeNT, and CRESST have each reported signals which are inconsistent with known backgrounds, and (roughly) consistent with the elastic scattering of ~5-10 GeV dark matter particles; the spectrum of gamma rays from the region surrounding the Galactic Center peaks at a few GeV, consistent with a ~7-10 GeV dark matter particle annihilating largely to leptons, with a cross section on the order of that predicted by relic abundance considerations." However, "the case is not yet incontrovertible."

String theory and mathematical fertility

"String theory dominates the research landscape of quantum gravity physics (despite any direct experimental evidence) due to its mathematical fertility. String theory has generated many surprising, useful, and well-confirmed mathematical 'predictions' made on the basis of general physical principles entering into string theory. The success of the mathematical predictions is then seen as evidence for the framework that generated them. Smolin argues that if mathematical fertility could be an indicator of truth, then we ought to take the success of knot theory as evidence for the idea that atoms are indeed knotted bits of ether. Hence, we have an apparent reductio ad absurdum of the idea that I am arguing for in this paper, that mathematical fertility might lead us to believe more strongly in a theory. But the fact that Kelvin's theory was eventually disconfirmed does not mean that it was a bad theory—after all, it was discussed and studied as a serious theory for some 20 years. It was precisely the fact that it was taken seriously as a physical theory that led to the development of knot theory. The physics of knots forms an integral part of modern physics, especially in condensed matter physics, quantum field theory, and quantum gravity."

Dean Rickles, "Mirror Symmetry and Other Miracles in Superstring Theory," Found. Phys. 2011.

"String theory has not yet been able to make contact with experiments that would give us strong reasons to accept it as the 'sure winner' in the race to construct a theory of quantum gravity. However, though experiment can often function as a decisive arbiter in situations where there are several competing theories, there are many more theoretical virtues that play a role in our evaluation of theories. Taking these extra-experimental factors into account, string theory is very virtuous indeed; it is arguably the most mathematically fertile theory of the past century or so. I would go further and say that no direct experiment is likely to ever come about (other than ones that could be explained by multiple approaches), so we can assume that non-experimental factors will have to be relied upon more strongly in our assessments of future research in fundamental physics."

On the nature of time in string theory

The journal Foundations of Physics commemorates "Forty Years of String Theory." Vijay Balasubramanian (University of Pennsylvania) steps back and asks what we do not understand about time. What is time? Within the broader quantum gravity community outside string theory there has also been considerable thinking about time.
Traditionally, in the study of quantum gravity the "problem of time" arises because the Schrödinger equation, when promoted to the diffeomorphism-invariant context of gravity, becomes the Wheeler-DeWitt equation, which simply says nothing about time evolution. This is sometimes interpreted as saying that in a quantum, diffeomorphism-invariant universe time is meaningless. Vijay Balasubramanian presents nine questions and several lines of attack in string theory in his paper "What we don't know about time," ArXiv, 14 Jul 2011. Let me summarize his ideas.

Why is there an arrow of time? A common idea is that the arrow of time is cosmologically defined by the macroscopic increase of entropy (the second law of thermodynamics). But this raises the associated question of why the universe starts in a low-entropy state. This approach also suggests that the notion of time is inherently connected to the coarse graining of an underlying quantum gravitational configuration space.

Why is there only one time? Geometrically, time is different from space because the geometry of spacetime is locally Minkowski (Lorentzian metric signature (1, 3)), not Euclidean (metric signature (0, 4)). From a geometrical point of view we could equally well imagine a signature (2, 2), with two times, which is more symmetric between space and time. In the context of string theory, with its many extra dimensions, one can ask why the extra dimensions we seem to have are spatial rather than temporal.

Is there a connection between the existence of time and the quantumness of the universe? The difference between time and space is somehow implicated in the difference between quantum mechanics, with its characteristic features of quantum interference and entanglement, and classical statistical physics, which lacks these features. This kind of difference appears in nonrelativistic quantum mechanics, in quantum field theory, and even in string theory.

Could the real, Lorentzian structure of conventional spacetime be simply a convenient way of summarizing analytic information about an underlying complexified geometry? Physical quantities seem to be described by analytic functions of space and time in both quantum field theory and string theory.

How can singularities localized in time be resolved in string theory or some other quantum theory of gravity? A prediction of General Relativity is that spacetime singularities exist, either timelike (i.e. localized in space), lightlike (i.e. localized on a null curve), or spacelike (i.e. localized in time). One of the goals of a quantum theory of gravity such as string theory is to resolve such singularities.

Why is the area of a horizon, a causal construct, related to entropy, a thermodynamic concept, and can this entropy be given a statistical explanation for general horizons? Semiclassical analyses of quantum mechanics in spacetimes containing horizons, like black holes and accelerating geometries such as de Sitter space, suggest that inertial observers perceive the horizon as having an entropy proportional to its area and a temperature proportional to the surface gravity at the horizon. Nor is there any explanation of why entropy becomes associated to a geometrical construct – the area of a horizon.

How precisely is physics beyond a black hole horizon encoded in a unitary description of spacetime? The "information loss paradox" for black holes is due to the non-unitary semiclassical evolution of quantum states in Hawking radiation.
The apparent loss of unitarity can be traced ultimately to the causal disconnection of the region behind the horizon. A solution is required, since there is simply no room in the full quantum theory for information loss in black holes.

Can time be emergent from the dynamics of a timeless theory? In the AdS/CFT correspondence, string theory in a (d+1)-dimensional, asymptotically Anti-de Sitter (AdS) spacetime is exactly equivalent to a d-dimensional quantum field theory defined on the timelike boundary of such a universe. Thus, the radial dimension of AdS spacetime (as well as any additional compact dimensions of the bulk string theory) must be regarded as somehow "emergent" from the dynamics of the d-dimensional field theory. The field theory contains a time, and the emergent gravitational theory inherits its time directly from the field theory.

Are time and space concepts that only become effective in "phases" where the primordial degrees of freedom self-organize with appropriate relations of conditional dependence and entanglement? The spacetime and its metric can generally be thought of as a coarse-grained description of some underlying degrees of freedom, which may, or may not, be organized with the proximity and continuity relations associated to smooth spacetime. The spacetime can be viewed as an emergent description of relations of conditional dependence of underlying fundamental variables.

If you have enjoyed the questions, please refer to the paper "What we don't know about time" for possible lines of research aimed at obtaining the answers in string theory.

Lisi and Weatherall in Scientific American: "A Geometric Theory of Everything"

"In 2007 physicist A. Garrett Lisi wrote the most talked-about theoretical physics paper of the year. He argues that the geometric framework of modern quantum physics can be extended to incorporate Einstein's theory, leading to a long-sought unification of physics" based on a geometrical object referred to as the exceptional Lie group E8. Lisi, the surfer physicist, has a Midas touch in mathematical physics. Everybody talks about his achievements, even if they are criticized by the mainstream. In the December 2010 issue of Scientific American appears an 8-page article entitled "A Geometric Theory of Everything," written with James Owen Weatherall. Let us extract some paragraphs from the article.

"The current best theory of nongravitational forces—the electromagnetic, weak and strong nuclear forces—was largely completed by the 1970s and has become familiar as the Standard Model of particle physics. Mathematically, the theory describes these forces and particles as the dynamics of elegant geometric objects called Lie groups and fiber bundles. Over the years physicists have proposed various Grand Unified Theories, or GUTs, in which a single geometric object would explain all these forces, but no one yet knows which, if any, of these theories is true. In Lisi's theory, all forces and matter are unified into a single geometric object. The main geometric idea underlying the Standard Model is that every point in our spacetime has shapes attached to it, called fibers, each corresponding to a different kind of particle. The entire geometric object is called a fiber bundle. The fibers are in internal spaces corresponding to particles' properties.
This idea was introduced by Hermann Weyl in 1918 for the unification of gravity and electromagnetism. The electric and magnetic fields existing everywhere in our space are the result of fibers with the simplest shape: the circle, called U(1) by physicists, the simplest example of a Lie group. The fiber bundle of electromagnetism consists of circles attached to every point of spacetime. An electromagnetic wave is the undulation of circles over spacetime. Photons and electrons have different fiber bundles over spacetime. The fibers of electrons wrap around the circular fibers of electromagnetism like threads around a screw. Because the twists must meet around the circle, these charges are integer multiples of some standard unit of electric charge.

Physicists apply these same principles to the weak and strong nuclear forces. Each of these forces has its own kind of charge and its own propagating particles. They are described by more complicated fibers, made up not just of a single circle but of sets of intersecting circles, interacting with themselves and with matter according to their twists. The weak force is associated with a three-dimensional Lie group fiber called SU(2). Its shape has three symmetry generators, corresponding to the three weak-force boson particles: W+, W− and W3. Matter particles, fermions, come in two varieties, related to how their spin aligns with their momentum: left-handed and right-handed. Only the left-handed fermions have weak charges, with the left-handed up quark and neutrino having weak charge +1/2 and the left-handed down quark and electron having weak charge –1/2. For antiparticles, this is reversed. Our universe is not left-right symmetric, one of many mysteries a unified theory seeks to explain.

The electroweak theory unifies the weak force with electromagnetism by combining the SU(2) fiber with a U(1) circle. This circle is not the same as the electromagnetic one; it represents a precursor to electromagnetism known as the hypercharge force, with particles twisting around it according to their hypercharge, labeled Y. The W3 circles combine with the hypercharge circles to form a two-dimensional torus. The fibers of particles known as Higgs bosons twist around the electroweak Lie group and determine a particular set of circles, breaking the symmetry. The Higgs does not twist around these circles, which then correspond to the massless photon of electromagnetism. Perpendicular to these circles are another set that should correspond to another particle, which the developers of electroweak theory called the Z boson. The fibers of the Higgs bosons twist around the circles of the Z boson, as well as the circles of the W+ and W−, making all three particles massive. Experimental physicists discovered the effects of the Z (weak neutral currents) in 1973, vindicating the theory and demonstrating how geometric principles have real-world consequences.

The strong nuclear force that binds quarks into atomic nuclei corresponds geometrically to an even larger Lie group, SU(3). The SU(3) fiber is an eight-dimensional internal space composed of eight sets of circles twisting around one another in an intricate pattern, producing interactions among eight kinds of photon-like particles called gluons, on account of how they "glue" nuclei together. This fiber shape can be broken into comprehensible pieces. Embedded within it is a torus formed by two sets of untwisted circles, corresponding to two generators, g3 and g8.
The remaining six gluon generators twist around this torus, and their resulting g3 and g8 charges form a hexagon in the weight diagram. The quark fibers twist around this SU(3) Lie group, their strong charges forming a triangle in the weight diagram. These quarks are whimsically labeled with three colors: red, green and blue. A collection of matter fibers forming a complete pattern, such as three quarks in a triangle, is called a representation of the Lie group. The colorful description of the strong interactions is known as the theory of quantum chromodynamics.

Together, quantum chromodynamics and the electroweak model make up the Standard Model of particle physics, with a Lie group formed by combining SU(3), SU(2) and U(1), as well as matter in several representations. The Standard Model is a great success, but it presents several puzzles: Why does nature use this combination of Lie groups? Why do these matter fibers exist? Why do the Higgs bosons exist? Why is the weak mixing angle what it is? How is gravity included? The quarks, electrons and neutrinos that constitute common matter are called the first generation of fermions; they have second- and third-generation doppelgängers with identical charges but much larger masses. Why is that? And what are cosmic dark matter and dark energy? A unified theory should be able to provide answers to these and other questions.

A Grand Unified Theory uses a large Lie group with a single fiber encompassing both the electroweak and strong forces. The first attempt at such a theory was proposed in 1973 by Howard Georgi and Sheldon Glashow. They found that the combined Lie group of the Standard Model fits snugly into the Lie group SU(5) as a subgroup. This SU(5) GUT made some distinctive predictions. First, fermions should have exactly the hypercharges that they do. Second, the weak mixing angle should be 38 degrees, in fair agreement with experiments. And finally, in addition to the 12 Standard Model bosons, there are 12 new force particles in SU(5), called X bosons. It was the X bosons that got the theory into trouble. These new particles would allow protons to decay into lighter particles. In impressive experiments, including the monitoring of 50,000 tons of water in a converted Japanese mine, the predicted proton decay was not seen. Thus, physicists have ruled out this theory.

A related Grand Unified Theory, developed around the same time, is based on the Lie group Spin(10). It produces the same hypercharges and weak mixing angle as SU(5) and also predicts the existence of a new force, very similar to the weak force. This new "weaker" force, mediated by relatives of the weak-force bosons called W′+, W′− and W′3, interacts with right-handed fermions, restoring left-right symmetry to the universe at short distances. Although this theory predicts an abundance of X bosons—a full 30 of them—it also indicates that proton decay would occur at a lower rate than for the SU(5) theory. So the theory remains viable.

The Spin(10) Lie group with its 45 bosons, along with its representations of 16 fermions and their 16 antifermions, are in fact all parts of a single Lie group, a special one known as the exceptional Lie group E6. The classification of all the Lie groups found the existence of five exceptional ones that stand out: G2, F4, E6, E7 and E8. The fact that the bosons and fermions of Spin(10) and the Standard Model tightly fit the structure of E6, with its 78 generators, is remarkable. It provokes a radical thought.
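As a quick editorial cross-check (not part of the Scientific American excerpt), the boson counting quoted above follows from the standard Lie group dimension formulas dim SU(n) = n² − 1 and dim Spin(n) = n(n − 1)/2; here is a minimal Python sketch of that arithmetic.

```python
# Editorial aside: verify the generator counting quoted in the excerpt.
def dim_su(n):
    return n * n - 1                 # dim SU(n) = n^2 - 1

def dim_spin(n):
    return n * (n - 1) // 2          # dim Spin(n) = dim SO(n) = n(n-1)/2

dim_sm = dim_su(3) + dim_su(2) + 1   # SU(3) x SU(2) x U(1): 8 + 3 + 1 = 12 bosons
dim_su5 = dim_su(5)                  # 24
dim_spin10 = dim_spin(10)            # 45

print("Standard Model bosons:", dim_sm)                                  # 12
print("New X bosons in SU(5):", dim_su5 - dim_sm)                        # 24 - 12 = 12
print("X bosons in Spin(10) beyond the SM and the three W':",
      dim_spin10 - dim_sm - 3)                                           # 45 - 12 - 3 = 30
print("Quoted exceptional-group dimensions: E6 =", 78, ", E8 =", 248)
```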
Up until now, physicists have thought of bosons and fermions as completely different. Bosons are parts of Lie group force fibers, and fermions are different kinds of fibers, twisting around the Lie groups. But what if bosons and fermions are parts of a single fiber? That is what the embedding of the Spin(10) GUT in E6 suggests. The structure of E6 includes both types of particles. In a radical unification of forces and matter, bosons and fermions can be combined as parts of a superconnection field. But E6 does not include the Higgs bosons or gravity.

A Lie group formulation of gravity uses the group Spin(1,3) for rotations in three space and one time directions. Now it is just a matter of putting the pieces together. With gravity described by Spin(1,3) and the favored Grand Unified Theory based on Spin(10), it is natural to combine them using a single Lie group, Spin(11,3), yielding a Gravitational Grand Unified Theory—as introduced last year by Roberto Percacci of the International School for Advanced Studies in Trieste and Fabrizio Nesti of the University of Ferrara in Italy. It brings us close to a full Theory of Everything. The Spin(11,3) Lie group allows for blocks of 64 fermions and, amazingly, predicts their spin, electroweak and strong charges perfectly. It also automatically includes a set of Higgs bosons and the gravitational frame; in fact, they are unified as "frame-Higgs" generators in Spin(11,3). The curvature of the Spin(11,3) fiber bundle correctly describes the dynamics of gravity, the other forces and the Higgs. It even includes a cosmological constant that explains cosmic dark energy. Everything falls into place.

Skeptics objected that the Spin(11,3) theory should be impossible. It appears to violate a theorem in particle physics, the Coleman-Mandula theorem, which forbids combining gravity with the other forces in a single Lie group. But the theorem has an important loophole: it applies only when spacetime exists. In the Spin(11,3) theory (and in E8 theory), gravity is unified with the other forces only before the full Lie group symmetry is broken, and when that is true, spacetime does not yet exist. Our universe begins when the symmetry breaks: the frame-Higgs field becomes nonzero, singling out a specific direction in the unifying Lie group. At this instant, gravity becomes an independent force, and spacetime comes into existence with a bang. Thus, the theorem is always satisfied. The dawn of time was the breaking of perfect symmetry.

Lisi's theory uses the most beautiful structure in all of mathematics, the largest simple exceptional Lie group, E8. Just as E6 contains the structure of the Spin(10) Grand Unified Theory, with its 16 fermions, the E8 Lie group contains the structure of the Spin(11,3) Gravitational Grand Unified Theory, with its 64 Standard Model fermions, including their spins. In this way, gravity and the other known forces, the Higgs, and one generation of Standard Model fermions are all parts of the unified superconnection field of an E8 fiber bundle. The E8 Lie group, with 248 generators, has a wonderfully intricate structure. In addition to gravity and the Standard Model particles, E8 includes W′, Z′ and X bosons, a rich set of Higgs bosons, novel particles called mirror fermions, and axions—a cosmic dark matter candidate. Even more intriguing is a symmetry of E8 called triality. Using triality, the 64 generators of one generation of Standard Model fermions can be related to two other blocks of 64 generators.
These three blocks might intermix to reproduce the three generations of known fermions. In this way, the physical universe could emerge naturally from a mathematical structure without peer. The theory tells us what Higgs bosons are, how gravity and the other forces emerge from symmetry breaking, why fermions exist with the spins and charges they have, and why all these particles interact as they do. Although Lisi's theory continues to be promising, much work remains to be done. We need to figure out how three generations of fermions unfold, how they mix and interact with the Higgs to get their masses, and exactly how E8 theory works within the context of quantum theory. If E8 theory is correct, it is likely the Large Hadron Collider will detect some of its predicted particles. If, on the other hand, the collider detects new particles that do not fit E8's pattern, that could be a fatal blow for the theory. In either case, any particles that experimentalists uncover will lead us toward some geometric structure at the heart of nature. And if the structure of the universe at the tiny scales of elementary particles does turn out to be described by E8, with its 248 sets of circles wrapping around one another in an exquisite pattern, twisting and dancing over spacetime in all possible ways, then we will have achieved a complete unification and have the satisfaction of knowing we live in an exceptionally beautiful universe.

Lisi's papers on ArXiv.

Gustafsson in PPC-CERN: "Fermi Gamma-ray Space Telescope Observations of the Galactic Center"

A forthcoming paper by the Fermi-LAT Collaboration will describe the method and results behind a new map of the Galactic Center region in gamma rays above 1 GeV, based on two years of observations with the Large Area Telescope (LAT); the accompanying gamma-ray spectrum is already widely known. The announcement appears in Michael Gustafsson (Padova University, on behalf of the Fermi Collaboration), "Fermi Gamma-ray Space Telescope: Gamma-ray Observations and their Dark Matter Interpretations," PPC 2011 @ CERN, June 14, 2011.

Serpico at PPC-CERN: "Theoretical aspects of dark matter indirect detection"

"Dark Matter (DM) was already discovered indirectly: via gravity. But gravity is "universal" and does not permit particle identification: a discovery via electromagnetic, strong or weak probes is needed. The LHC at CERN was designed to study the electroweak (EW) scale; however, there is no astrophysical or cosmological evidence whatsoever for the EW scale being the right one for explaining the DM problem. In fact, there is no evidence that the astrophysical DM is made of particles. The logic has always been the opposite: since the EW scale can be motivated by particle physics, it might offer "natural" candidates for the DM problem while being accessible to a multi-disciplinary strategy. In the "golden age" for direct searches and colliders, it's advisable to go back to the "standard practice": experiments must guide us to Beyond Standard Model (BSM) physics, following the good old pipeline: Particle Physics progress → Theory Framework → Prediction for indirect detection, allowing a priori searches."

Extracts from Pasquale D. Serpico, "Dark Matter Indirect Detection (theoretical aspects)," PPC 2011 @ CERN, 14 June 2011.
If Apple were a worker it would have paid the federal government $36 billion in taxes. Instead of paying taxes, Apple has taxes that are deferred for as long as it chooses. In total, I estimate from corporate disclosure documents, American multinational companies have $2 trillion of untaxed profits offshore because they did just what Apple has done. Had Congress required those companies to pay up last year it would have been the equivalent of all the income taxes paid by everyone in America from January until July 10. Imagine that, all the income taxes taken out of your pay or pension from January into the middle of summer just so Apple and other multinational companies can profit today and pay their taxes someday.
David Cay Johnston (via soupsoup)

The First Image Ever of a Hydrogen Atom's Orbital Structure

What you're looking at is the first direct observation of an atom's electron orbital: an atom's actual wave function! To capture the image, researchers utilized a new quantum microscope — an incredible new device that literally allows scientists to gaze into the quantum realm. An orbital structure is the space in an atom that's occupied by an electron. But when describing these super-microscopic properties of matter, scientists have had to rely on wave functions — a mathematical way of describing the fuzzy quantum states of particles, namely how they behave in both space and time. Typically, quantum physicists use formulas like the Schrödinger equation to describe these states, often coming up with complex numbers and fancy graphs.
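To give a concrete sense of what a hydrogen wave function looks like numerically, here is a minimal Python sketch (an editorial addition, unrelated to the quantum-microscope experiment described above) that evaluates the textbook ground-state orbital ψ_1s(r) = exp(−r/a0) / sqrt(π a0³) and its radial probability density.

```python
import numpy as np

a0 = 5.29177e-11   # Bohr radius, meters

def psi_1s(r):
    """Analytic hydrogen ground-state (1s) wave function: real and spherically symmetric."""
    return np.exp(-r / a0) / np.sqrt(np.pi * a0**3)

# Radial probability density P(r) = 4*pi*r^2 |psi|^2 : probability per unit radius
r = np.linspace(0.0, 5 * a0, 1000)
P = 4 * np.pi * r**2 * psi_1s(r)**2

print("Most probable radius:", r[np.argmax(P)] / a0, "Bohr radii")   # ~1.0, as expected for 1s
print("Probability within 5 Bohr radii:", np.trapz(P, r))             # ~1.0 (nearly all of it)
```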
Famously, Schrödinger's cat is found to be both dead and alive within a closed system - at the mercy of quantum mechanics. But why is the cat "both dead and alive"? For the Copenhagen interpretation, according to Heisenberg, "the wave-function represents a probability, but not an objective reality itself in space and time." The conceptual construct of "dead" or "alive" is a 100% non-probabilistic state (at least as conceived by an individual within his frame of reference). This 100% certainty can be seen as an 'objective reality' for the individual with that information. If I knew that someone (whom I was not observing) was driving a car and had a 50% chance of death, they would not be objectively "both alive and dead" to me; rather, given the probabilities, they would be "neither alive nor dead". Any positive truth statement cannot be backed up by (non-existent) observational evidence, so no positive truth statement, beyond some assumed estimate of the probabilities, is valid. Does it make more sense to say that when a quantum system is not observable (is closed), whether a wave function or a cat, non-probabilistic conceptual statements about what is inside the system will be incomplete?
• Have you looked at the recent question and answers here physics.stackexchange.com/q/266606 ? – anna v Jul 8 '16 at 15:55
• @IlyaGrushevskiy: are you happy the answers to the question Anna linked address your question? If so I will close this as a duplicate. – John Rennie Jul 8 '16 at 16:03
• It's not both alive and dead but it is rather alive or dead. – user36790 Jul 8 '16 at 16:37
• @MAFIA36790 the OP is referencing before the cat is viewed, I believe, so it would be both dead and alive. – heather Jul 8 '16 at 17:05
• @MAFIA36790, I see, thanks for clarifying. – heather Jul 8 '16 at 17:07

Well, first of all, Schrödinger's cat is just a thought experiment. The thought experiment's main point is that the radioactive substance is both decayed and not decayed when not observed. "Observed" here means an act that can be used to get information about the system, with or without a person or any other conscious thing involved. It could be, for example, an electron interacting with a photon. In the thought experiment, the box has to be assumed to shield anything from going into or out of it, and the radioactive substance has a half-life comparable to the duration of the experiment. It is meant to show that an object or system can be in a linear combination of states. The box as a system will contain a radioactive substance that will kill the cat should it decay, but no sound or photons or even neutrinos are able to enter or leave the box. If the half-life of the substance is 30 min but you leave it for an hour, the probability of the radioactive substance decaying is 75%, thus the cat is '75% dead and 25% alive'.
Now in your case, there isn't anything that will kill you probabilistically, unless you actually decide to gamble your life (tip: normally a bad idea). Photons and other particles from outside the car are also observing, and can be used to deduce whether you are dead or alive, as it would be easy for the particles, if they had their own mind, to know whether you are alive or not, and also to exit your car. The only way the person could be 50% dead is if the car were shielded from any observation by anything, which is practically not feasible.
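A small Python sketch (an editorial illustration of the arithmetic in the answer above) showing where the 75% figure comes from: the survival probability after time t for half-life t_half is (1/2)^(t/t_half).

```python
def decay_probability(t_minutes, half_life_minutes):
    """Probability that at least one decay has occurred after time t."""
    survival = 0.5 ** (t_minutes / half_life_minutes)
    return 1.0 - survival

# Half-life of 30 minutes, box left closed for 60 minutes:
p = decay_probability(60, 30)
print(f"P(decay) = {p:.2f}")   # 0.75, i.e. '75% dead and 25% alive' in the thought experiment
```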
The wave function of any system evolves completely deterministically if you know the initial conditions. But the outcome of an observation or measurement is random. The common term for what observation does is wave-function collapse.
Hopefully this clears up the questions you have in mind.
Some of the answers also touched on the many-worlds theory; that theory stems from the fact that measurement outcomes are random. It says that all the possible states of the system or object being observed are realized simultaneously, each in its own branch.
• That the radioactive substance is in a state of superposition is not true. Only the radioactive substance plus the surrounding vacuum are in such a state. The nucleus itself, if we factor out the unknown vacuum quantum field wave function, is in a well defined state within a very small amount of time after the alpha, beta or gamma have left. That the cat can be in a superposition is also not true because it is being constantly measured by the classical gravitational field that it creates. Schroedinger pretends that none of these effects exist. – CuriousOne Jul 8 '16 at 19:35
• @CuriousOne Yes, the point you made about the decay products not having to be peeked at is correct; I'm describing what an outside observer will calculate. But for gravity, the box I'm assuming is a hypothetical one that only exists in the quantum realm and shields everything, possibly including the graviton. But no one is certain about its existence, and if it does exist you cannot shield from it, as the gravitational field is just described by GR. As for quantum gravity, it depends on the amount of gravity there actually is; should there be less than h, ... – Ariana Jul 8 '16 at 19:51
• ...it will not be considered an observation, as you can't observe effects below h. – Ariana Jul 8 '16 at 19:51
• The point is that the physical vacuum is the outside observer. It doesn't matter if we peek or not, the nucleus has been decohered by nature through the mechanism of special relativity, especially when a gamma is involved, which cannot be caught by any local observer, i.e. the vacuum state is not knowable for the local observer! The same argument can be made for the cat and its gravitational field. Gravitational waves, no matter how weak, will always decohere a macroscopic body. – CuriousOne Jul 8 '16 at 20:00

Parallel definitions and parallel universes are one of your answers. You can imagine that you can't look inside the box, because looking would spoil the accuracy of the experiment. So if I let you open the box there are two possibilities: in one universe you have a living cat and you might be happy; the other possibility is that you'll open the box and run into an unhappy scene: the cat has died! So it depends on you; in quantum mechanics everything relates to the observer, and the observer can change all the results. For example: imagine you are a teacher and you want to know how noisy your class is. If you go and take a look, everything will change just because of you, and the children will become quiet. Everything is just waves of possibility until observed. A photon in a far star is just a possibility! So whenever you look at a star, keep in mind that you have made a special fate for every photon that you can see! Finally, you can have both conditions of the cat in parallel universes.
• You can look into the box all the time, and nature does look even if you don't.
Neutrinos can penetrate the box whether you like it or not, and so does gravity, and both will decohere the experiment in no time, again whether we like it or not. Schroedinger wasn't thinking about these issues when he formulated this poorly conceived Gedankenexperiment, but that's no excuse for us who know better not to think about them. – CuriousOne Jul 8 '16 at 19:32 • @CuriousOne: the time for low-interaction particles like that to decohere an electromagnetically bound system like that is actually pretty long, considering that we are able to do macroscopic Bell's theorem experiments: en.wikipedia.org/wiki/Bell_test_experiments – Jerry Schirmer Jul 8 '16 at 20:16 • @JerrySchirmer: Indeed, and one of the failures of Schroedinger's cat is that it doesn't mention these things but leaves the student hanging with the false impression that it's an absolute statement about quantum states. Decoherence doesn't do that, it gets it right from the start. – CuriousOne Jul 8 '16 at 20:18 • @CuriousOne: Fine, but you're kind of throwing a non sequitur into this conversation, since this example is unaffected by decoherence from gravitons and neutrinos by a choice of a time frame less than the decoherence time. – Jerry Schirmer Jul 8 '16 at 20:33 There have been many questions and answers about this topic here on StackExchange. One type of response is to say that it's not a serious thought experiment; one then notes that it's not really possible to put a macroscopic object like a cat in a coherent superposition. But one can then argue that it's in principle possible (the laws of physics do not forbid creating such a state), and then one has to address the issue, and doing so depends on which interpretation of quantum mechanics one favors. Here one has to note that in the Many Worlds Interpretation (MWI), the superposition does not vanish due to interactions with the environment; it just leads to a far more complicated superposition where the cat's state gets entangled with the rest of the world. So, the fact that you can't create a coherent superposition of the cat is totally irrelevant in that interpretation. I favor my personal variant of the MWI, which is formally identical to it, except that I don't agree with the interpretation of the time dependent Schrödinger equation as specifying a time evolution. I don't believe that time really exists; I favor the block time view. Accordingly, I think it's more appropriate to consider the Schrödinger equation not as specifying a time evolution of the state vector, but rather as just a change of basis of the Hilbert space. Or you could just work in the Heisenberg picture and consider the complete set of time evolving operators as specifying a one parameter family of observers. So, in this picture one should interpret the superposition as just the initial state. All the information about the possible futures is present there, and as long as you don't measure the state of the cat, you will still have access to the initial state. This is why both possible futures exist side by side; they already existed like that side by side right from the start, and to see this you just need to apply the time evolution operator to the initial state. After a measurement, you become entangled with the cat, but then that's a different "you" from the initial "you". All possible outcomes, all possible versions of "you", the cat etc. exist side by side in a timeless multiverse.
The laws of quantum mechanics allow you to predict the probabilities of you finding yourself in a certain situation. • The Schroedinger equation is ontologically false, but it gives the correct results for a limited number of cases for which its false ontology maps neatly onto the non-relativistic single quantum case of quantum field theory. That makes for good physics, if we stay within the realm of applicability, but it makes for extremely poor philosophy when the interpretation of quantum mechanics based on the phenomenology to which the Schroedinger equation is restricted is being taken too far. – CuriousOne Jul 8 '16 at 19:29 • @CuriousOne QFT has a natural wavefunctional formulation. You can limit QFT to apply only to asymptotically free particle states to calculate the outcome of scattering experiments, but the full theory is just ordinary QM applied to fields. What we measure in practice in experiments is just one small part of the theory. – Count Iblis Jul 8 '16 at 20:21 • The problem occurs for massless fields. One can't catch a photon, as they say, so any ontology that presumes a local observer when mixed with a massless field suffers from decoherence, because the total state of the observer's system plus the outgoing waves is unknowable. That, IMHO, is how nature creates a classical world to begin with: we don't have a choice, we are being measured, weighed and found too heavy all the time, just like the arrogant knight in "A Knight's Tale". :-) – CuriousOne Jul 8 '16 at 20:26 • @CuriousOne Yes, but the structure of the laws of QM, with its infinite dimensional Hilbert space, is so different from the laws of classical mechanics that it's unlikely that a description in terms of concepts from classical mechanics is anything more than an effective description. Just like you can describe the state of a fluid in terms of a local density field, a velocity field, etc.: it's good enough in practice, but it's not going to give you an exact description of the physical situation. – Count Iblis Jul 8 '16 at 20:40 • Schroedinger came up with this thing in 1935, I believe. Relativistic QM was in its infancy. He didn't know anything about decoherence, either, and, what's worse, he didn't think it through, I am afraid. One can say the same about Newton and his corpuscular theory of light. Was it completely wrong? No. Is it useful? Not really. Same thing here. It doesn't teach us anything useful and, I am afraid, it doesn't ask the student to actually think critically about what is important in a physical scenario. IMHO it should not be taught any longer (we aren't teaching corpuscles, either, after all). – CuriousOne Jul 8 '16 at 20:45 Schrödinger's cat is a thought experiment about Heisenberg's cut. The box where the cat and its death-via-decay mechanism live really represents Heisenberg's cut itself. The aim of the cat proposal was to show how weird it is to accept that QM can describe any physical system, not only elementary ones, by applying its concepts to a system including a living being. Saying that the cat is both dead and alive has absolutely no physical significance, because if we want to consider what is in the box as a quantum system, then it is in a superposition of much, much more than two states. Indeed, what would be the single state "dead"? Dead when? One second after we close the box? An hour after?
A week after, when the cat is dead for sure because there is no more oxygen in the box? What about "alive"? There are many ways to be alive, and living things are known to have more degrees of freedom than dead ones... In no way can the cat be described as a plain two-state superposition, so any question about whether it is "alive and dead", "75% dead" or anything of the like is meaningless. The only point here is that, if it is correct that Heisenberg's cut can be put as far as we want in a von Neumann chain, then the fact that there is no classical way to describe the state of the box contents translates into plain English as something like: "We have no idea what has become of the cat in the box, so much so that we cannot even think about it as being an actual plain cat anymore. That's creepy, where is my cat, help me understand what's happening". And of course, the whole point is that we still do not understand what is happening (Laloë, 2004); we still do not know what a quantum state really is. We make a habit of treating classical statistical mixtures as being "ignorance" probabilities — the "reality" of a situation is that exactly one outcome holds, and we are just quantifying the information we have about the outcomes. Schrödinger's cat is really just a classical problem involving a nondeterministic event, and we would like to treat the nondeterminism as ignorance. However, there is overwhelming evidence that the nondeterministic source simply doesn't work that way. So, Schrödinger's cat forces us to revise our interpretation of what probabilities mean (or to postulate that some unknown physics kicks in that forces the universe to align with our prior notions). Language is an additional obstacle here, since it is awkward to distinguish, e.g., between talking about the event "$\text{dead}$ and $\text{alive}$" and talking about both of the events "$\text{dead}$" and "$\text{alive}$".
MATSLISE is a graphical MATLAB software package for the interactive numerical study of regular Sturm-Liouville problems, one-dimensional Schrödinger equations, and radial Schrödinger equations with a distorted Coulomb potential. It allows the fast and accurate computation of the eigenvalues and the visualization of the corresponding eigenfunctions. This is realized by making use of the power of high-order piecewise constant perturbation methods, a technique described by Ixaru. For a well-outlined class of problems, the implemented algorithms are more efficient than the well-established SL-solvers SL02f, SLEDGE, SLEIGN, and SLEIGN2, which are included by Pryce in the SLDRIVER code that has been built on top of SLTSTPAK. References in zbMATH (referenced in 45 articles , 2 standard articles ) Showing results 1 to 20 of 45. Sorted by year (citations) 1 2 3 next 1. Mirzaei, Hanif: A family of isospectral fourth order Sturm-Liouville problems and equivalent beam equations (2018) 2. Zhao, Hou Yu; Fečkan, Michal: Periodic solutions for a class of differential equation with delays depending on state (2018) 3. Wang, Yu Ping; Shieh, Chung Tsun; Miao, Hong Yi: Inverse transmission eigenvalue problems with the twin-dense nodal subset (2017) 4. Ledoux, Veerle; Van Daele, Marnix: Matslise 2.0: a Matlab toolbox for Sturm-Liouville computations (2016) 5. Alıcı, H.; Taşeli, H.: The Laguerre pseudospectral method for the radial Schrödinger equation (2015) 6. Amodio, Pierluigi; Settanni, Giuseppina: Reprint of “Variable-step finite difference schemes for the solution of Sturm-Liouville problems” (2015) 7. Amodio, Pierluigi; Settanni, Giuseppina: Variable-step finite difference schemes for the solution of Sturm-Liouville problems (2015) 8. Kammanee, Athassawat: Derivative-free Broyden’s method for inverse partially known Sturm-Liouville potential functions (2015) 9. Ramos, Alberto Gil C. P.; Iserles, Arieh: Numerical solution of Sturm-Liouville problems via Fer streamers (2015) 10. Rundell, William; Sacks, Paul: Inverse eigenvalue problem for a simple star graph (2015) 11. Kravchenko, Vladislav V.; Torba, Sergii M.: Modified spectral parameter power series representations for solutions of Sturm-Liouville equations and their applications (2014) 12. Castillo-Pérez, Raúl; Kravchenko, Vladislav V.; Torba, Sergii M.: Spectral parameter power series for perturbed Bessel equations (2013) 13. Makarov, V. L.; Rossokhata, N. O.; Dragunov, D. V.: An exponentially convergent functional-discrete method for solving Sturm-Liouville problems with a potential including the Dirac (\delta)-function (2013) 14. Aceto, Lidia; Ghelardoni, Paolo; Magherini, Cecilia: Boundary value methods for the reconstruction of Sturm-Liouville potentials (2012) 15. Asghar, Saleem; Ahmad, Adeel: On the heat equation with variable properties (applying a WKB method involving turning points) (2012) 16. Böckmann, Christine; Kammanee, Athassawat: Broyden method for inverse non-symmetric Sturm-Liouville problems (2011) 17. Ledoux, Veerle; Van Daele, Marnix: On CP, LP and other piecewise perturbation methods for the numerical solution of the Schrödinger equation (2011) 18. Alıcı, H.; Taşeli, H.: Pseudospectral methods for solving an equation of hypergeometric type with a perturbation (2010) 19. Broeckhove, Jan; Kłosiewicz, Przemysław; Vanroose, Wim: Applying numerical continuation to the parameter dependence of solutions of the Schrödinger equation (2010) 20. Gadella, M.; Lara, L. 
P.: An algebraic method to solve the radial Schrödinger equation (2010)
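To make the type of problem concrete: MATSLISE itself is a MATLAB package built on high-order piecewise constant perturbation methods, and nothing below reproduces its algorithms or API. The following is only a crude finite-difference stand-in, in Python, for the kind of regular Sturm-Liouville eigenvalue problem it solves (far less accurate, purely illustrative):

import numpy as np
from scipy.linalg import eigh_tridiagonal

def sturm_liouville_eigenvalues(q, a, b, n=2000, k=5):
    """Lowest k eigenvalues of -y'' + q(x) y = lambda y with y(a) = y(b) = 0,
    using a plain second-order finite-difference discretisation."""
    h = (b - a) / (n + 1)
    x = a + h * np.arange(1, n + 1)            # interior grid points
    diag = 2.0 / h**2 + q(x)
    off = -np.ones(n - 1) / h**2
    return eigh_tridiagonal(diag, off, eigvals_only=True,
                            select='i', select_range=(0, k - 1))

# Test case with a known answer: q = 0 on (0, pi) has eigenvalues 1, 4, 9, 16, 25.
print(sturm_liouville_eigenvalues(lambda x: np.zeros_like(x), 0.0, np.pi))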
Quantum communication: making two from one In the future, quantum physics could become the guarantor of secure information technology. To achieve this, individual particles of light—photons—are used for secure transmission of data. Findings by physicists from ... Graphene tunnelling junctions: beyond the breaking point Molecular electronics is a burgeoning field of research that aims to integrate single molecules as active elements in electronic devices. Obtaining a complete picture of the charge transport properties in molecular junctions ... Quantum tunnelling in water opens the way to improved biosensing Researchers at the University of Sydney have applied quantum techniques to understanding the electrolysis of water, which is the application of an electric current to H2O to produce the constituent elements hydrogen and oxygen. page 1 from 7 Quantum tunnelling Wave-mechanical tunnelling (also called quantum-mechanical tunnelling, quantum tunnelling, and the tunnel effect) is an evanescent wave coupling effect that occurs in the context of quantum mechanics because the behaviour of particles is governed by Schrödinger's wave-equation. All wave equations exhibit evanescent wave coupling effects if the conditions are right. Wave coupling effects mathematically equivalent to those called "tunnelling" in quantum mechanics can occur with Maxwell's wave-equation (both with light and with microwaves), and with the common non-dispersive wave-equation often applied (for example) to waves on strings and to acoustics. For these effects to occur there must be a situation where a thin region of "medium type 2" is sandwiched between two regions of "medium type 1", and the properties of these media have to be such that the wave equation has "traveling-wave" solutions in medium type 1, but "real exponential solutions" (rising and falling) in medium type 2. In optics, medium type 1 might be glass, medium type 2 might be vacuum. In quantum mechanics, in connection with motion of a particle, medium type 1 is a region of space where the particle total energy is greater than its potential energy, medium type 2 is a region of space (known as the "barrier") where the particle total energy is less than its potential energy - for further explanation see the section on "Schrödinger equation - tunnelling basics" below. If conditions are right, amplitude from a traveling wave, incident on medium type 2 from medium type 1, can "leak through" medium type 2 and emerge as a traveling wave in the second region of medium type 1 on the far side. If the second region of medium type 1 is not present, then the traveling wave incident on medium type 2 is totally reflected, although it does penetrate into medium type 2 to some extent. Depending on the wave equation being used, the leaked amplitude is interpreted physically as traveling energy or as a traveling particle, and, numerically, the ratio of the square of the leaked amplitude to the square of the incident amplitude gives the proportion of incident energy transmitted out the far side, or (in the case of the Schrödinger equation) the probability that the particle "tunnels" through the barrier.
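For the simplest concrete case — a rectangular barrier — the transmitted fraction described above has a standard closed form. A small sketch in Python, using units with hbar = m = 1 (the particular energies and widths are arbitrary):

import numpy as np

def transmission(E, V0, a):
    """Transmission probability through a rectangular barrier of height V0 and
    width a for a particle with energy E < V0 (units with hbar = m = 1)."""
    kappa = np.sqrt(2.0 * (V0 - E))   # decay rate of the evanescent wave inside the barrier
    return 1.0 / (1.0 + V0**2 * np.sinh(kappa * a)**2 / (4.0 * E * (V0 - E)))

# The transmitted fraction falls off roughly exponentially with the barrier width:
for width in (0.5, 1.0, 2.0, 4.0):
    print(width, transmission(E=1.0, V0=2.0, a=width))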
Friday, March 29, 2019 Proving the Periodic Table The year 2019 is the International Year of the Periodic Table celebrating the 150th anniversary of Mendeleev's discovery. This prompts me to report on something that I learned in recent years when co-teaching "Mathematical Quantum Mechanics" with mathematicians, in particular with Heinz Siedentop: We know less about the mathematics of the periodic table than I thought. In high school chemistry you learned that the periodic table comes about because of the orbitals in atoms. There are filling rules (the aufbau principle and Hund's rules) that tell you the order in which you have to fill the shells and, within them, the orbitals (s, p, d, f, ...). Then, in your second semester in university, you learn to derive those using Schrödinger's equation: You diagonalise the Hamiltonian of the hydrogen atom and find the shells in terms of the main quantum number $n$ and the orbitals in terms of the angular momentum quantum number $L$, where $L=0$ corresponds to s, $L=1$ to p and so on. And you fill the orbitals thanks to the Pauli exclusion principle. So, this proves the story of the chemists. Except that it doesn't: This is only true for the hydrogen atom. But the Hamiltonian for an atom with nuclear charge $Z$ and $N$ electrons (so we allow for ions) is (in convenient units) $$ H = -\sum_{i=1}^N \Delta_i -\sum_{i=1}^N \frac{Z}{|x_i|} + \sum_{i\lt j}^N\frac{1}{|x_i-x_j|}.$$ The story of the previous paragraph would be true if the last term, the Coulomb interaction between the electrons, were not there. In that case, there would be no interaction between the electrons, and we could solve a hydrogen-type problem for each electron separately and then anti-symmetrise the wave functions in the end in a Slater determinant to take into account their fermionic nature. But of course, in the real world, the Coulomb interaction is there, and it contributes like $N^2$ to the energy, so it is of the same order (for almost neutral atoms) as the $ZN$ of the electron-nucleus potential. The approximation of dropping the electron-electron Coulomb interaction is well known in condensed matter systems, where the resulting theory is known as a "Fermi gas". There it gives you band structure (which is then used to explain how a transistor works). (Figure: band structure in an NPN transistor.) Also in that case, you pretend there is only one electron in the world that feels the periodic electric potential created by the nuclei and all the other electrons, which don't show up in the wave function anymore but only as a charge density. For atoms you could try to make a similar story by taking the inner electrons into account by saying that the most important effect of the ee-Coulomb interaction is to shield the potential of the nucleus, thereby making the effective $Z$ for the outer electrons smaller. This picture would of course be true if there were no correlations between the electrons and all the inner electrons were spherically symmetric in their distribution around the nucleus and much closer to the nucleus than the outer ones. But this sounds more like a daydream than a controlled approximation.
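As an aside, the naive filling-order story from the beginning of this post is easy to write down explicitly. A small Python sketch (the helper names are mine, and the rule famously has exceptions such as chromium and copper):

L_LABELS = "spdfgh"

def madelung_order(n_max=8):
    """Subshells (n, l), sorted by n + l with ties broken by smaller n."""
    subshells = [(n, l) for n in range(1, n_max + 1) for l in range(n)]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def configuration(Z):
    """Naive ground-state configuration for Z electrons (ignoring the known exceptions)."""
    parts, remaining = [], Z
    for n, l in madelung_order():
        if remaining <= 0:
            break
        occupancy = min(remaining, 2 * (2 * l + 1))   # Pauli: at most 2(2l+1) electrons per subshell
        parts.append(f"{n}{L_LABELS[l]}^{occupancy}")
        remaining -= occupancy
    return " ".join(parts)

print(configuration(26))   # iron: 1s^2 2s^2 2p^6 3s^2 3p^6 4s^2 3d^6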
In the condensed matter situation, the standing of the Fermi gas is much better, since there you can invoke renormalisation group arguments: the conductivities you are interested in are long wavelength compared to the lattice structure, so we are in the infrared limit, and the Coulomb interaction is indeed an irrelevant term in more than one euclidean dimension (and yes, in 1D, the Fermi gas is not the whole story, there is the Luttinger liquid as well). But for atoms, I don't see how you would invoke such RG arguments. So what can you do (with regards to actually proving the periodic table)? In our class, we teach how Lieb and Simon showed that in the $N=Z\to \infty$ limit (which in some sense can also be viewed as the semi-classical limit when you bring in $\hbar$ again) the ground state energy $E^Q$ of the Hamiltonian above is in fact approximated by the ground state energy $E^{TF}$ of the Thomas-Fermi model (the simplest of all density functional theories, where instead of the multi-particle wave function you only use the one-particle electronic density $\rho(x)$ and approximate the kinetic energy by a term like $\int \rho^{5/3}$, which is exact for the free Fermi gas in empty space): $$E^Q(Z) = E^{TF}(Z) + O(Z^2)$$ where by a simple scaling argument $E^{TF}(Z) \sim Z^{7/3}$. More recently, people have computed more terms in this asymptotic expansion, which goes in powers of $Z^{-1/3}$: the second term ($O(Z^{6/3})= O(Z^2)$) is known, and people have put a lot of effort into $O(Z^{5/3})$, but it should be clear that this technology is still very, very far from proving anything "periodic", which would be $O(Z^0)$. So don't hold your breath hoping to find the periodic table from this approach. On the other hand, the chemistry of the periodic table (where the column is supposed to predict chemical properties of the atom expressed in terms of the orbitals of the "valence electrons") works best for small atoms. So, another sensible limit appears to be to keep $N$ small and fixed and only send $Z\to\infty$. Of course this is not really describing atoms but rather highly charged ions. The advantage of this approach is that in the above Hamiltonian, you can absorb the $Z$ of the electron-nucleus interaction into a rescaling of $x$, which then lets $Z$ reappear in front of the electron-electron term as $1/Z$. Then, in this limit, one can try to treat the ugly unwanted ee-term perturbatively. Friesecke (from TUM) and collaborators have made impressive progress in this direction, and in this limit they could confirm that for $N < 10$ the chemists' picture is actually correct (with some small corrections). There are very nice slides of a seminar talk by Friesecke on these results. Of course, as a practitioner, this will not surprise you (after all, chemistry works), but it is nice to know that mathematicians can actually prove things in this direction. But there is still some way to go even 150 years after Mendeleev. Saturday, March 16, 2019 Nebelkerze CDU-Vorschlag zu "keine Uploadfilter" Sorry, this is one of the occasional posts about German politics (originally written in German). This is my posting to a German-speaking mailing list discussing the upcoming EU copyright directive (which must be stopped in its current form!!! March 23rd is the international protest day); the CDU party has now proposed how to implement it in German law, although so unspecifically that all the problematic details are left out. Here is the post.
Maybe I am too dense, but I do not see where exactly the progress over what is being discussed at the EU level is supposed to be — except that the CDU proposal is so unspecific that all its internal contradictions disappear in the fog. At the EU level, too, the proponents say that one should much rather acquire licences than filter; that in itself is not new. What is new — at least in this Handelsblatt article, I have not found it anywhere else — is the mention of hash sums ("digital fingerprints"); or is that supposed to be something more like a digital watermark? That would be a real novelty, but it would immediately strangle the whole scheme at birth, since only the original file would be protected (and that would be trivial to detect anyway), while any form of derived work would fall completely through the cracks and one could "free" a work by a trivial modification. Otherwise, we are back to the dubious filters based on AI technology that does not exist today. The other element is the blanket licence. I would then no longer have to negotiate contracts with every rights holder, only with a "VG Internet" collecting society. But then the big question is, again, who this is supposed to apply to. The intended targets are, of course, once more YouTube, Google and FB. But how do you write that down? That is, after all, the central stumbling block of the EU directive: everyone needs a blanket licence, unless they are non-commercial (who is, really?), or (younger than three years, with few users and little revenue), or they are Wikipedia, or they are GitHub? That would again be the "the internet is like television — a few big broadcasters and so on — just somehow different" view that people who look at the internet from a distance are so fond of propagating, because it practically flattens everything else. What about forums or photo hosters? Would they all have to acquire a blanket licence (which would have to be priced high enough to cover, across the board, all film and music rights of the entire world)? What prevents this from ending up as a "whoever runs a service on the internet must buy a paid internet licence before they can go online" law, which at any non-trivial licence fee would be the end of all grass-roots innovation? It would of course also be interesting to see how the revenues of the VG Internet would be distributed. One would have to be a rogue not to suspect that large parts of it would end up, for example, with the press publishers. That would then finally be the "take the money away from those who earn it on the internet and give it to those who no longer earn as much" law. The licence fee would then best be a percentage of revenue — in other words, an internet tax. And I will not even start on where this leads if every European country cooks up its own implementation soup this drastically. All in all, a rather clever coup by the CDU, one that may manage to take the wind out of the sails of the critics of Article 13 in public opinion by wrapping everything in a vague cloud of fog, while all the problematic rules presumably hide in the details. Wednesday, March 06, 2019 Challenge: How to talk to a flat earther? Further down the rabbit hole, over lunch I finished watching "Behind the Curve", a Netflix documentary on people believing the earth is a flat disk. According to them, the north pole is in the center, while Antarctica is an ice wall at the boundary.
Sun and moon are much closer and flying above this disk while the stars are on some huge dome like in a planetarium. NASA is a fake agency promoting the doctrine and airlines must be part of the conspiracy as they know that you cannot directly fly between continents on the southern hemisphere (really?). These people are happily using GPS for navigation but have a general mistrust in the science (and their teachers) of at least two centuries. Besides the obvious "I don't see curvature of the horizon" they are even conducting experiments to prove their point (fighting with laser beams not being as parallel over miles of distance as they had hoped for). So at least some of them might be open to empirical disprove. So here is my challenge: Which experiment would you conduct with them to convince them? Warning: Everything involving stuff disappearing at the horizon (ships sailing away, being able to see further from a tower) are complicated by non-trivial diffraction in the atmosphere which would very likely turn this observation inconclusive. The sun being at different declination (height) at different places might also be explained by being much closer and a Foucault pendulum might be too indirect to really convince them (plus it requires some non-elementary math to analyse). My personal solution is to point to the observation that the declination of Polaris (around which I hope they can agree the night sky rotates) is given my the geographical latitude: At the north pole it is right above you but is has to go down the more south you get. I cannot see how this could be reconciled with a dome projection. How would you approach this? The rules are that it must only involve observations available to everyone, no spaceflight, no extra high altitude planes. You are allowed to make use of the phone, cameras, you can travel (say by car or commercial flight but you cannot influence the flight route). It does not involve lots of money or higher math. Tuesday, February 12, 2019 Bohmian Rapsody Visits to a Bohmian village Over all of my physics life, I have been under the local influence of some Gaul villages that have ideas about physics that are not 100% aligned with the main stream views: When I was a student in Hamburg, I was good friends with people working on algebraic quantum field theory. Of course there were opinions that they were the only people seriously working on QFT as they were proving theorems while others dealt with perturbative series only that are known to diverge and are thus obviously worthless. Funnily enough they were literally sitting above the HERA tunnel where electron proton collisions took place that were very well described by exactly those divergent series. Still, I learned a lot from these people and would say there are few that have thought more deeply about structural properties of quantum physics. These days, I use more and more of these things in my own teaching (in particular in our Mathematical Quantum Mechanics and Mathematical Statistical Physics classes as well as when thinking about foundations, see below) and even some other physicists start using their language. Later, as a PhD student at the Albert Einstein Institute in Potsdam, there was an accumulation point of people from the Loop Quantum Gravity community with Thomas Thiemann and Renate Loll having long term positions and many others frequently visiting. 
As you probably know, a bit later, I decided (together with Giuseppe Policastro) to look into this more deeply resulting in a series of papers there were well received at least amongst our peers and about which I am still a bit proud. Now, I have been in Munich for over ten years. And here at the LMU math department there is a group calling themselves the Workgroup Mathematical Foundations of Physics. And let's be honest, I call them the Bohmians (and sometimes the Bohemians). And once more, most people believe that the Bohmian interpretation of quantum mechanics is just a fringe approach that is not worth wasting any time on. You will have already guessed it: I did so none the less. So here is a condensed report of what I learned and what I think should be the official opinion on this approach. This is an informal write up of a notes paper that I put on the arXiv today. Bohmians don't like about the usual (termed Copenhagen lacking a better word) approach to quantum mechanics that you are not allowed to talk about so many things and that the observer plays such a prominent role by determining via a measurement what aspect is real an what is not. They think this is far too subjective. So rather, they want quantum mechanics to be about particles that then are allowed to follow trajectories. "But we know this is impossible!" I hear you cry. So, let's see how this works. The key observation is that the Schrödinger equation for a Hamilton operator of the form kinetic term (possibly with magnetic field) plus potential term, has  a conserved current $$j = \bar\psi\nabla\psi - (\nabla\bar\psi)\psi.$$ So as your probability density is $\rho=\bar\psi\psi$, you can think of that being made up of particles moving with a velocity field $$v = j/\rho = 2\Im(\nabla \psi/\psi).$$ What this buys you is that if you have a bunch of particles that is initially distributed like the probability density and follows the flow of the velocity field it will also later be distributed like $|\psi |^2$. What is important is that they keep the Schrödinger equation in tact. So everything that you can do with the original Schrödinger equation (i.e. everything) can be done in the Bohmian approach as well.  If you set up your Hamiltonian to describe a double slit experiment, the Bohmian particles will flow nicely to the screen and arrange themselves in interference fringes (as the probability density does). So you will never come to a situation where any experimental outcome will differ  from what the Copenhagen prescription predicts. The price you have to pay, however, is that you end up with a very non-local theory: The velocity field lives in configuration space, so the velocity of every particle depends on the position of all other particles in the universe. I would say, this is already a show stopper (given what we know about quantum field theory whose raison d'être is locality) but let's ignore this aesthetic concern. What got me into this business was the attempt to understand how the set-ups like Bell's inequality and GHZ and the like work out that are supposed to show that quantum mechanics cannot be classical (technically that the state space cannot be described as local probability densities). The problem with those is that they are often phrased in terms of spin degrees of freedom which have Hamiltonians that are not directly of the form above. 
You can use a Stern-Gerlach-type apparatus to translate the spin degree of freedom to a positional but at the price of a Hamiltonian that is not explicitly know let alone for which you can analytically solve the Schrödinger equation. So you don't see much. But from Reinhard Werner and collaborators I learned how to set up qubit-like algebras from positional observables of free particles (at different times, so get something non-commuting which you need to make use of entanglement as a specific quantum resource). So here is my favourite example: You start with two particles each following a free time evolution but confined to an interval. You set those up in a particular entangled state (stationary as it is an eigenstate of the Hamiltonian) built from the two lowest levels of the particle in the box. And then you observe for each particle if it is in the left or the right half of the interval. From symmetry considerations (details in my paper) you can see that each particle is with the same probability on the left and the right. But they are anti-correlated when measured at the same time. But when measured at different times, the correlation oscillates like the cosine of the time difference. From the Bohmian perspective, for the static initial state, the velocity field vanishes everywhere, nothing moves. But in order to capture the time dependent correlations, as soon as one particle has been measured, the position of the second particle has to oscillate in the box (how the measurement works in detail is not specified in the Bohmian approach since it involves other degrees of freedom and remember, everything depends on everything but somehow it has to work since you want to produce the correlations that are predicted by the Copenhagen approach). The trajectory of the second particle depending on its initial position This is somehow the Bohmian version of the collapse of the wave function but they would never phrase it that way. And here is where it becomes problematic: If you could see the Bohmian particle moving you could decide if the other particle has been measured (it would oscillate) or not (it would stand still). No matter where the other particle is located. With this observation you could build a telephone that transmits information instantaneously, something that should not exist. So you have to conclude you must not be able to look at the second particle and see if it oscillates or not. Bohmians  tell you you cannot because all you are supposed to observer about the particles are their positions (and not their velocity). And if you try to measure the velocity by measuring the position at two instants in time you don't because the first observation disturbs the particle so much that it invalidates the original state. As it turns out, you are not allowed to observe anything else about the particles than that they are distributed like $|\psi |^2$ because if you could, you could build a similar telephone (at least statistically) as I explain the in the paper (this fact is known in the Bohm literature but I found it nowhere so clearly demonstrated as in this two particle system). My conclusion is that the Bohm approach adds something (the particle positions) to the wave function but then in the end tells you you are not allowed to observe this or have any knowledge of this beyond what is already encoded in the wave function. It's like making up an invisible friend. 
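For readers who want to see the guidance equation in action, here is a minimal one-dimensional, one-particle sketch: a particle in a box prepared in a superposition of the two lowest levels (echoing the example above, but with a single particle and a non-stationary state), with units hbar = m = 1 and the standard guidance velocity v = Im(psi'/psi). All numbers are made up:

import numpy as np

L = np.pi                                      # box on (0, L), units hbar = m = 1

def psi(x, t):
    """Equal superposition of the two lowest particle-in-a-box eigenstates."""
    e1, e2 = 0.5 * 1**2, 0.5 * 2**2            # E_n = n^2 / 2 for L = pi
    phi1 = np.sqrt(2.0 / L) * np.sin(1.0 * x)
    phi2 = np.sqrt(2.0 / L) * np.sin(2.0 * x)
    return (phi1 * np.exp(-1j * e1 * t) + phi2 * np.exp(-1j * e2 * t)) / np.sqrt(2.0)

def velocity(x, t, h=1e-6):
    """Guidance velocity v = Im(psi'/psi); the spatial derivative is taken numerically."""
    dpsi = (psi(x + h, t) - psi(x - h, t)) / (2.0 * h)
    return np.imag(dpsi / psi(x, t))

# Euler-integrate a few trajectories from different starting points.
dt, steps = 1e-3, 5000
for x0 in (0.8, 1.5, 2.2):
    x = x0
    for step in range(steps):
        x += velocity(x, step * dt) * dt
    print(f"x(0) = {x0:.2f}  ->  x(t = {steps * dt:.0f}) = {x:.3f}")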
PS: If you haven't seen "Bohemian Rhapsody" yet, you should, even if there are good reasons to criticise the dramatisation of real events. Thursday, January 17, 2019 Has your password been leaked? Today, there was news that a huge database containing 773 million email address / password pairs had become public. On Have I Been Pwned you can check if any of your email addresses is in this database (or any similar one). I bet it is (mine are). These lists are very probably the source for the spam emails that have been around for a number of months in which the spammer claims they broke into your account and tries to prove it by telling you your password. Hopefully, this is only a years-old LinkedIn password that you changed aeons ago. To make sure, you actually want to search not for your email but for your password. But of course, you don't want to tell anybody your password. To this end, I have written a small perl script that checks for your password without telling anybody, by doing a calculation locally on your computer. You can find it on GitHub. Friday, October 26, 2018 Interfere and it didn't happen (this post consisted mainly of slides and figures; the surviving captions are "Coleman on GHZS", "Fruchtiger and Renner", "Interference and it did not happen", and "Then, the result of step C becomes ...") Wednesday, October 17, 2018 Bavarian electoral system Sunday's election resulted in the following distribution of seats: After the whole procedure, there are 205 seats distributed as follows
• CSU 85 (41.5% of seats)
• SPD 22 (10.7% of seats)
• FW 27 (13.2% of seats)
• GREENS 38 (18.5% of seats)
• FDP 11 (5.4% of seats)
• AFD 22 (10.7% of seats)
You can find all the vote totals on this page.
• CSU 85 (40.8%)
• SPD 22 (10.6%)
• FW 26 (12.5%)
• GREENS 40 (19.2%)
• FDP 12 (5.8%)
• AFD 23 (11.1%)
221 seats
• CSU 91 (41.2%)
• SPD 24 (10.9%)
• FW 28 (12.6%)
• GREENS 42 (19.0%)
• FDP 12 (5.4%)
• AFD 24 (10.9%)
The perl script I used to do this analysis is here. Seats: 220
• CSU 91 41.4%
• SPD 24 10.9%
• FW 28 12.7%
• GREENS 41 18.6%
• FDP 12 5.4%
• AFD 24 10.9%
Seats: 217
• CSU 90 41.5%
• SPD 23 10.6%
• FW 28 12.9%
• GREENS 41 18.9%
• FDP 12 5.5%
• AFD 23 10.6%
Seats: 210
• CSU 87 41.4%
• SPD 22 10.5%
• FW 27 12.9%
• GREENS 40 19.0%
• FDP 11 5.2%
• AFD 23 11.0%
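For readers curious about the mechanics behind such seat tables, here is a sketch of a largest-remainder (Hare-Niemeyer) allocation, which, as far as I recall, is the method used for the Bavarian seat allocation — treat that attribution and the vote shares below (rough, illustrative numbers, not the official totals) as assumptions:

def hare_niemeyer(votes, seats):
    """Largest-remainder (Hare-Niemeyer) apportionment of `seats` among parties."""
    total = sum(votes.values())
    quotas = {party: seats * share / total for party, share in votes.items()}
    alloc = {party: int(q) for party, q in quotas.items()}         # integer parts first
    leftover = seats - sum(alloc.values())
    by_remainder = sorted(quotas, key=lambda p: quotas[p] - alloc[p], reverse=True)
    for party in by_remainder[:leftover]:
        alloc[party] += 1                                           # largest remainders win the remaining seats
    return alloc

# Rough, illustrative vote shares only:
shares = {"CSU": 37.2, "GREENS": 17.6, "FW": 11.6, "AFD": 10.2, "SPD": 9.7, "FDP": 5.1}
print(hare_niemeyer(shares, 205))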
Single qubit NMR based quantum computation In the previous post, we have sketched the basic ideas behind NMR based quantum computation. In this post, we will discuss single qubits and single qubit operations in more depth. The rotating frame of reference In NMR based quantum computing, quantum gates are realized by applying oscillating magnetic fields to our probe. As an oscillating field is time dependent, the Hamiltonian will be time dependent as well, making some calculations a bit more difficult. To avoid this, it is useful to pass to a different frame of reference, called the rotating frame of reference. To explain this, let us first study a more general setting. Assume that we are looking at a quantum system with Hamiltonian H and a state vector |\psi \rangle. Suppose further that we are given a unitary group, i.e. a time-dependent unitary operator T(t) = e^{-iAt} with a hermitian matrix A. We can then consider the transformed vector |\tilde{\psi}\rangle = T(t) |\psi \rangle. Using the product rule and the Schrödinger equation, we can easily calculate the time derivative of this vector and obtain i \hbar \frac{d}{dt} |\tilde{\psi}\rangle = \hbar A |\tilde{\psi} \rangle + T(t) H |\psi\rangle = \tilde{H} |\tilde{\psi}\rangle, where \tilde{H} = THT^* + \hbar A. In other words, the transformed vector again evolves over time according to an equation which is formally a Schrödinger equation if we replace the original Hamiltonian by the transformed Hamiltonian \tilde{H}. Let us now apply this to the system describing a single nuclear spin with spin 1/2 in a constant magnetic field B along the z-axis of the laboratory system. In the laboratory frame, the Hamiltonian is then given by H = \omega I_z with the Larmor frequency \omega and the spin operator I_z = \frac{\hbar}{2} \sigma_z. We now pass into a new frame of reference by applying the transformation T(t) = \exp ( \frac{i\omega_{ref}}{\hbar} t I_z ) with an arbitrarily chosen reference frequency \omega_{ref}. Geometrically, this is a rotation around the z-axis by the angle \omega_{ref}t. Using the formula above and the fact that T commutes with the original Hamiltonian, we find that the transformed Hamiltonian is \tilde{H} = (\omega - \omega_{ref}) I_z. This is of the same form as the original Hamiltonian, with a corrected Larmor frequency \Omega = \omega - \omega_{ref}. In particular, the Hamiltonian is trivial if the reference frequency is equal to the Larmor frequency. Intuitively, this is easy to understand. We know that the time evolution in the laboratory frame is described by a precession with the frequency \omega. When we choose \omega_{ref} = \omega, we place ourselves in a frame of reference rotating with the same frequency around the z-axis. In this rotating frame, the state vector will be constant, corresponding to the fact that the new Hamiltonian vanishes. If we choose a reference frequency different from the Larmor frequency, we will observe a precession with the frequency \Omega. Let us now repeat this for a different Hamiltonian – the Hamiltonian that governs the time evolution in the presence of an oscillating magnetic field. More precisely, we will look at the Hamiltonian of a rotating magnetic field (which is a good approximation for an oscillating magnetic field by an argument known as the rotating wave approximation, see my notes for more details on this).
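Before turning to the rotating-field case, here is a quick numerical sanity check of the statement above that on resonance the transformed state vector is constant. It uses plain NumPy/SciPy, nothing NMR-specific, and the frequency and initial state are arbitrary:

import numpy as np
from scipy.linalg import expm

hbar = 1.0
Iz = 0.5 * hbar * np.diag([1.0, -1.0]).astype(complex)

omega = omega_ref = 2.0 * np.pi * 5.0          # choose the reference frequency on resonance
H = omega * Iz
psi0 = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2.0)

for t in (0.0, 0.1, 0.2, 0.3):
    psi_lab = expm(-1j * H * t / hbar) @ psi0                  # lab-frame evolution
    T = expm(1j * omega_ref * t * Iz / hbar)                   # transformation to the rotating frame
    print(t, np.round(T @ psi_lab, 6))                         # stays constant when omega_ref == omega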
In the presence of such a field, the Hamiltonian in the laboratory frame is H = \omega I_z + \omega_{nut} [I_x \cos (\omega_{ref}t + \Phi_p) + I_y \sin (\omega_{ref}t + \Phi_p) ] To calculate the Hamiltonian in the rotating frame, we have – according to the above formula – to apply the conjugation with T to each of the terms appearing in this Hamiltonian and add a correction term. Now the transformation T is a rotation around the z-axis, and the result of applying a rotation around the z-axis to the operators Ix and Iy is well known and in fact easy to calculate using the commutation relations between the Pauli matrices. The correction term cancels the first term of the Hamiltonian as above. The transformed Hamiltonian is then given by \tilde{H} = \omega_{nut} [I_x \cos \Phi_p + I_y \sin \Phi_p ] In other words, the time dependence has disappeared and only the phase term remains. Again, this is not really surprising – if we look at a rotating magnetic field from a frame of reference that is rotating around the same axis with the same frequency, the result is a constant magnetic field. The density matrix of a single qubit We are now ready to formally describe a single qubit, given by all nuclei at a specific position in a system consisting of a large number of molecules. According to the formalism of statistical quantum mechanics, this ensemble is described by a 2×2 density matrix \rho. The time evolution of this density matrix is governed by the Liouville-von Neumann equation i\hbar \frac{d}{dt} \rho = [H,\rho] The density matrix is a hermitian matrix with trace 1. Therefore the matrix \rho - \frac{1}{2} is a traceless hermitian matrix. Any such matrix can be expressed as a linear combination of the Pauli matrices with real coefficients. Consequently, we can write \rho(t) = \frac{1}{2} + f(t) \cdot I where f is a three-vector with real coefficients and the dot product is a shorthand notation for f \cdot I = f_x I_x + f_y I_y + f_z I_z Similarly, the most general time-independent Hamiltonian can be written as H = \frac{1}{2} tr(H) + a \cdot I We can remove the trace by adding a constant, which does not change the physics and corresponds to a shift of the energy scale. Further, we can express the vector a as the product of a unit vector and a scalar. Thus the most general Hamiltonian we need to consider is H = \omega_{eff} n \cdot I with a real number \omega_{eff} (the reason for this notation will become apparent in a second) and a unit vector n. Let us now plug this into the Liouville equation. Applying the very useful general identity (which is easily proved by a direct calculation) [a\cdot I, b \cdot I] = i \hbar (a \times b) \cdot I for any two vectors a and b, we find that \dot{f} = - \omega_{eff} [f \times n] This equation is often called the Bloch equation. By splitting f into a component perpendicular to n and a component parallel to n, one can easily see that the solution is a rotation around the axis n with frequency \omega_{eff}. What is the physical interpretation of this result and the physical meaning of f? To see this, let us calculate the expectation value of the magnetic moment induced by the spin of our system. The x-component of the magnetic moment, for instance, corresponds to the observable \gamma I_x. 
Therefore, according to the density matrix formalism, the expectation value of the x-component of the magnetic moment \mu is \langle \mu_x \rangle = \gamma tr (\rho I_x). If we compute the matrix product \rho I_x and use the fact that the trace of a product of two different Pauli matrices is zero, we find that the only term that contributes to the trace is the term f_x, i.e. \langle \mu_x \rangle = \frac{\gamma \hbar^2}{4} f_x. Similar calculations work for the other components and we find that, up to a constant, the vector f is the net magnetic moment of the probe. A typical NMR experiment After all these preparations, we now have all tools at our disposal to model the course of events during a typical NMR experiment in terms of the density matrix. Let us first try to understand the initial state of the system. In a real world experiment, none of the qubits is fully isolated. In addition, the qubits interact, and they interact with the surroundings. We can model these interactions by treating the qubits as being in contact with a heat bath of constant temperature T. According to the rules of quantum statistical mechanics, the equilibrium state, i.e. the state into which the system settles down after some time, is given by the Boltzmann distribution, i.e. \rho(t=0) = \frac{1}{Z} \exp (- \frac{H}{kT}). In the absence of an additional rotating field, the Hamiltonian in the laboratory frame is given by H = \omega I_z, so that \rho(t=0) = \frac{1}{Z} \exp ( - \frac{\omega}{kT} I_z). Using the relations \omega = -\gamma B and I_z = \frac{\hbar}{2} \sigma_z, with the gyromagnetic ratio \gamma, we can write this as \rho(t=0) = \frac{1}{Z} \exp ( \frac{1}{2} \frac{\hbar \gamma B}{kT} \sigma_z). Let us now introduce the energy ratio \beta = \frac{\hbar \gamma B}{kT}. The energy in the numerator is the energy scale associated with the Larmor frequency. For a proton in a magnetic field of a few Tesla, for example, this will be of the order of 10^{-25} Joule. The energy in the denominator is the thermal energy. If the experiment is conducted at room temperature, say T = 300 K, then this energy is of the order of 10^{-21} Joule (see this notebook for some calculations). This yields a value of roughly 10^{-4} for \beta. If we calculate the exponential in the Boltzmann distribution by expanding into a power series, we can therefore neglect all terms except the constant term and the term linear in \beta. This gives the approximation \rho(t=0) = \frac{1}{Z} (1 + \frac{1}{2} \beta \sigma_z), called the high temperature approximation. We can determine the value of Z by calculating the trace and find that Z = 2, so that \rho(t=0) = \frac{1}{2} + \frac{1}{4} \beta \sigma_z = \begin{pmatrix} \frac{1}{2} + \frac{\beta}{4} & 0 \\ 0 & \frac{1}{2} - \frac{\beta}{4} \end{pmatrix} = \frac{1}{2} + \frac{1}{2} \frac{\beta}{\hbar} I_z. If we compare this to the general form of the density matrix discussed above, we find that the thermal state has a net magnetization in the direction of the z-axis (for positive \beta). This is what we expect – with our sign conventions, the energy is lowest if the spin axis is in the direction of the z-axis, so that slightly more nuclei will have their spins oriented in this direction, leading to a net magnetic moment. To calculate how this state changes over time, we again pass to the rotating frame of reference. As the initial density matrix clearly commutes with the rotation around the z-axis, the density matrix in the rotating frame is the same.
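As a quick back-of-the-envelope check of the orders of magnitude quoted above (using the proton as in the text; the gyromagnetic ratio is an approximate literature value):

from math import pi
from scipy.constants import hbar, k

gamma_proton = 2.675e8          # proton gyromagnetic ratio in rad s^-1 T^-1 (approximate)
B, temperature = 11.74, 300.0   # field in Tesla, temperature in Kelvin

larmor_hz = gamma_proton * B / (2.0 * pi)
beta = hbar * gamma_proton * B / (k * temperature)
print(f"proton Larmor frequency ~ {larmor_hz / 1e6:.0f} MHz")   # ~500 MHz
print(f"beta = hbar*gamma*B / (kT) ~ {beta:.1e}")               # ~8e-5, i.e. roughly 10^-4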
If we choose the reference frequency to be exactly the Larmor frequency, the Hamiltonian given by the static magnetic field along the z-axis vanishes, and the density matrix does not change over time. When, however, we apply an additional pulse, i.e. an additional rotating magnetic field, for some time \tau, this changes. We have already seen that in the rotating frame, this pulse adds an additional term \omega_{nut} (I_x \cos \Phi_p + I_y \sin \Phi_p) to the Hamiltonian. This has the form discussed above – a scalar times a dot product of a unit vector with the vector I = (Ix, Iy, Iz). Therefore, we find that the time evolution induced by this Hamiltonian is a rotation around the vector (\cos \Phi_p, \sin \Phi_p, 0) with the frequency \omega_{nut}. If, for instance, we choose \Phi_p = 0, the vector f – and thus the magnetic moment – will slowly rotate around the x-axis. If we turn off the pulse after a time \tau such that \omega_{nut} \tau = \frac{\pi}{2} the net magnetization will be parallel to the y-axis. After the pulse has been turned off, the density matrix in the rotating frame is again constant, so the magnetic moment stays along the y-axis. In the laboratory frame, however, this corresponds to a magnetic moment which rotates with the Larmor frequency around the z-axis. This magnetic moment will induce a voltage in a coil placed in the x-y-plane which can be measured. The result will be an oscillating current, with frequency equal to the Larmor frequency. Over time, the state will slowly return into the thermal equilibrium, resulting in a decay of the oscillation. This is a good point in time to visualize what is happening. Given the explicit formulas for the density matrix derived above, it is not difficult to numerically simulate the state changes and NMR signals during an experiment as the one that we have just described (if you want to take a look at the required code, you can find a Python notebook here) The diagram below shows the result of such a simulation. Here, we have simulated a carbon nucleus in a TCE (Trichloroethylene) molecule. This molecule – pictured below (source Wikipedia) – has two central carbon nuclei. A small percentage of all TCE molecules in a probe will have two 13C nuclei instead of the more common 12C nuclei, which have spin 1/2 and therefore should be visible in the NMR spectrum. At 11.74 Tesla, an isolated 13C carbon nucleus has a Larmor precession frequency of 125 MHz. However, when the nuclei are part of a larger molecule as in our case, each nucleus is shielded from an external magnetic field by the surrounding cloud of electrons. As the electron configuration for both nuclei is different, the observed Larmor frequencies differ by a small amount known as the chemical shift. At the start of the simulation, the system was put into a thermal state. Then, an RF pulse was applied to flip the magnetization in the direction of the x-axis, and then a sample was taken over 0.1 seconds, resulting in the following signal. The signal that we see looks at the first glance as expected. We see an oscillating signal with an amplitude that is slowly decaying. However, you might notice that the frequency of the oscillation is clearly not 125 MHz. Instead, the period is roughly 0.001 seconds, corresponding to a frequency of 1200 Hz. The reason for this is that in an NMR spectrometer, the circuit processing the received signal will typically apply a combination of a mixer and a low pass filter to effectively shift the frequency by an adjustable reference frequency. 
In our case, the reference frequency was adjusted to be 1200 Hz above the Larmor frequency of the carbon nucleus, so the signal will oscillate with a frequency of 1200 Hz. In practice, the reference frequency determines a window in the frequency space in which we can detect signals, and all frequencies outside this window will be suppressed by the low pass filter. Now let us take a look at a more complicated signal. We again place the system in the thermal equilibrium state first, but then apply RF pulses to flip the spin of both carbon nuclei into the x-axis (in a simulation, this is easy, in a real experiment, this requires some thought, as the Larmor frequencies of these two carbon nuclei differ only by a small chemical shift of 900 Hz). We then again take a sample and plot the signal. The result is shown below. This time, we see a superposition of two oscillations. The first oscillations is what we have already seen – an oscillation with 1200 Hz, which is the difference of the chosen reference frequency and the Larmor frequency of the first carbon. The second oscillation corresponds to a frequency of roughly 300 Hz. This is the signal caused by the Larmor precession of the second spin. As we again measure the difference between the real frequency and the reference frequency, we can conclude that this frequency differs from the Larmor frequency of the first spin by 900 Hz. In reality, the signal that we observe is the superposition of many different oscillations and is not easy to interpret – even with a few oscillations, it soon becomes impossible to extract the frequencies by a graphical analysis as we have done it so far. Instead, one usually digitizes the signal and applies a Fourier transform (or, more precisely, a discrete Fourier transform). The following diagram shows the result of applying such a DFT to the signal above. Here, we have shifted the x-axis by the difference between the Larmor frequency \omega_0 of the first nucleus and the reference frequency, so that the value zero corresponds to \omega_0. We clearly see two peaks. The first peak at zero, i.e. at Larmor frequency \omega_0, is the signal emitted by the first nucleus. The second peak is shifted by 900 Hz and is emitted by the second nucleus. In general, each nucleus will result in one peak (ignoring couplings that we will study in a later post) and the differences between the peaks belonging to nuclei of the same isotope are the chemical shifts. Let us quickly summarize what we have learned. An ensemble of spin systems is described by a density which in turn is given by the net magnetization vector f. The result of applying a pulse to this state is a rotation around an axis given by the phase of the pulse (in fact, the phase can be adjusted to rotate around any axis in the x-y-plane, and as any rotation can be written as a decomposition of such rotations, we can generate an arbitrary rotation). The net magnetization can be measured by placing a coil close to the probe and measuring the induced voltage. This does already look like we are able to produce a reasonable single qubit. The vector f appears to correspond – after some suitable normalization – to points on the Bloch sphere, and as we can realize rotations, we should be able to realize arbitrary single qubit quantum gates. But what about multiple qubits? 
Of course a molecule typically has more than one nucleus, and we could try to use additional nuclei to create additional qubits, but there is a problem – in order to realize multi-qubit gates, these qubits have to interact. In addition, we need to be able to prepare our NMR system in a useful initial state and, at the end of the computation, we need to measure the outcome. These main ingredients of NMR based quantum computing will be the subject of the next post.
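As a small appendix, the kind of signal and spectrum discussed above (two decaying oscillations at offsets of 1200 Hz and 300 Hz from the reference frequency, and their Fourier transform) is easy to reproduce. This is only a stand-in for the linked notebook; the decay time and amplitudes are made up:

import numpy as np

fs, t2 = 100_000.0, 0.05                       # sampling rate (Hz) and a made-up decay time (s)
t = np.arange(0.0, 0.1, 1.0 / fs)

# Two nuclei at offsets of 1200 Hz and 300 Hz from the reference frequency.
fid = (np.cos(2 * np.pi * 1200 * t) + np.cos(2 * np.pi * 300 * t)) * np.exp(-t / t2)

spectrum = np.abs(np.fft.rfft(fid))
freqs = np.fft.rfftfreq(len(fid), 1.0 / fs)

# The Fourier transform shows one peak per nucleus, separated by the chemical shift.
for f in (300, 1200):
    i = int(np.argmin(np.abs(freqs - f)))
    print(f"peak near {f} Hz: amplitude {spectrum[i]:.1f}")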
Download Edit this record How to cite View on PhilPapers A principle, according to which any scientific theory can be mathematized, is investigated. Social science, liberal arts, history, and philosophy are meant first of all. That kind of theory is presupposed to be a consistent text, which can be exhaustedly represented by a certain mathematical structure constructively. In thus used, the term “theory” includes all hypotheses as yet unconfirmed as already rejected. The investigation of the sketch of a possible proof of the principle demonstrates that it should be accepted rather a metamathematical axiom about the relation of mathematics and reality. The main statement is formulated as follows: Any scientific theory admits isomorphism to some mathematical structure in a way constructive. Its investigation needs philosophical means. Husserl’s phenomenology is what is used, and then the conception of “bracketing reality” is modelled to generalize Peano arithmetic in its relation to set theory in the foundation of mathematics. The obtained model is equivalent to the generalization of Peano arithmetic by means of replacing the axiom of induction with that of transfinite induction. The sketch of the proof is organized in five steps: a generalization of epoché; involving transfinite induction in the transition between Peano arithmetic and set theory; discussing the finiteness of Peano arithmetic; applying transfinite induction to Peano arithmetic; discussing an arithmetical model of reality. Accepting or rejecting the principle, two kinds of mathematics appear differing from each other by its relation to reality. Accepting the principle, mathematics has to include reality within itself in a kind of Pythagoreanism. These two kinds are called in paper correspondingly Hilbert mathematics and Gödel mathematics. The sketch of the proof of the principle demonstrates that the generalization of Peano arithmetic as above can be interpreted as a model of Hilbert mathematics into Gödel mathematics therefore showing that the former is not less consistent than the latter, and the principle is an independent axiom. The present paper follows a pathway grounded on Husserl’s phenomenology and “bracketing reality” to achieve the generalized arithmetic necessary for the principle to be founded in alternative ontology, in which there is no reality external to mathematics: reality is included within mathematics. That latter mathematics is able to self-found itself and can be called Hilbert mathematics in honour of Hilbert’s program for self-founding mathematics on the base of arithmetic. The principle of universal mathematizability is consistent to Hilbert mathematics, but not to Gödel mathematics. Consequently, its validity or rejection would resolve the problem which mathematics refers to our being; and vice versa: the choice between them for different reasons would confirm or refuse the principle as to the being. An information interpretation of Hilbert mathematics is involved. It is a kind of ontology of information. The Schrödinger equation in quantum mechanics is involved to illustrate that ontology. 
Thus the problem of which of the two mathematics is more relevant to our being is discussed again in a new way. A few directions for future work can be: a rigorous formal proof of the principle as an independent axiom; the further development of an information ontology consistent with both kinds of mathematics, but much more natural for Hilbert mathematics; the development of the information interpretation of quantum mechanics as a mathematical one for information ontology and thus Hilbert mathematics; and the description of consciousness in terms of information ontology.
Semiclassical eigenstates in a multidimensional well. (English) Zbl 0937.35511 Séminaire de théorie spectrale et géométrie. Année 1992-1993. Chambéry: Univ. de Savoie, Fac. des Sciences, Service de Math. Sémin. Théor. Spectrale Géom., Chambéry-Grenoble. 11, 147-155 (1993). Summary: The two-dimensional Schrödinger operator with an analytic potential, having a non-degenerate minimum (well) at the origin, is considered. Under the Diophantine condition on the frequencies, the full asymptotic series (as the Planck constant \(\hbar\) tends to zero) for eigenfunctions with given quantum numbers \((n_1, n_2)\), concentrated at the bottom of the well, is constructed, the Gaussian-like asymptotics being valid in a neighbourhood of the origin which is independent of \(\hbar\). For small quantum numbers the second approximation to the eigenvalues is written in terms of the derivatives of the potential. For the entire collection see [Zbl 0812.00008]. MSC: 35P10 Completeness of eigenfunctions and eigenfunction expansions in context of PDEs; 35J10 Schrödinger operator, Schrödinger equation.
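To indicate what such an expansion looks like (a schematic form standard for this setting, not taken verbatim from the paper): writing \(\omega_1,\omega_2\) for the frequencies of the motion linearized at the bottom of the well, the eigenvalues behave to leading order like those of the harmonic approximation,
\[ E_{n_1,n_2}(\hbar) \sim \hbar\bigl(\omega_1(n_1+\tfrac12)+\omega_2(n_2+\tfrac12)\bigr) + \hbar^2 E^{(2)}_{n_1,n_2} + \cdots, \]
with the second-order term \(E^{(2)}_{n_1,n_2}\) expressible through derivatives of the potential at the origin; the Diophantine condition on the frequencies has the schematic form \(|k_1\omega_1+k_2\omega_2| \ge C\,(|k_1|+|k_2|)^{-\tau}\) for all integer vectors \((k_1,k_2)\neq(0,0)\).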
Stanford Encyclopedia of Philosophy Modal Interpretations of Quantum Mechanics First published Tue Nov 12, 2002; substantive revision Thu Dec 6, 2007 The original ‘modal interpretation’ of quantum theory was born in the early 1970s, and at that time the phrase referred to a single interpretation, due to van Fraassen. The phrase now encompasses a class of interpretations, and is better taken to refer to a general approach to the interpretation of quantum theory. We shall describe the history of modal interpretations, how the phrase has come to be used in this way, and the general program of (at least some of) those who advocate this approach. 1. The Copenhagen Variant By the early 1970s, researchers in philosophy of physics had become painfully aware of the nonlocality inherent in standard quantum theory. It arises most dramatically in the context of the projection postulate, which asserts that upon measurement of a physical system, its state will ‘collapse’ (or be ‘projected’) to a state corresponding to the value found in the measurement. This postulate is difficult to accept in any case (what effects this discontinuous change in the physical state of a system? what exactly is a ‘measurement’ as opposed to an ordinary physical interaction?), but it is especially worrying when applied to entangled compound systems whose components are well-separated in space. The classic example is the Einstein-Podolsky-Rosen experiment, in which two particles which have interacted in the past are separated. Their quantum-mechanical state is ‘entangled’, which means, for our purposes, that there exist strict correlations between the two systems, in spite of the fact that the correlated quantities are not sharply defined in the individual systems. This correlation has the effect that the collapse resulting from a measurement on one of the systems simultaneously (and instantaneously) affects the other. A possible way clear of this problem was noticed by van Fraassen (1972, 1974, 1991), who proposed to eliminate the projection postulate from the theory. Of course, others had made this proposal before. Bohm's (1952) theory (itself preceded by de Broglie's proposals from the 1920s) eliminates the projection postulate, as do the various many-worlds (and relative-state) interpretations. Van Fraassen's elaboration of the proposal to do without the projection postulate was, however, different from these other approaches. It relied, in particular, on a distinction between what van Fraassen called the ‘value state’ of a system, and the ‘dynamical state’ of a system. The value state at any instant represents the system's physical properties at that instant, in the sense that it specifies the values of all physical quantities that are sharply defined for the system at the point in time in question. By contrast, the dynamical state determines the evolution of the system. It determines which properties the system might have at later times. In other words, the dynamical state is what we need to make predictions about future value states. The dynamical state is just the quantum state of the ordinary textbook approach (a vector or density matrix in Hilbert space). For an isolated system (which may consist of many component systems) this state is subject to the Schrödinger equation (in non-relativistic quantum mechanics). The important point added by the modal approach is that this dynamical state is stipulated never to collapse during its evolution. The value state is (typically) different from the dynamical state. 
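A standard two-particle example may help fix ideas before the more precise statement that follows (the illustration is ours, not van Fraassen's own): for a pair prepared in the entangled state a|a1>|b1> + b|a2>|b2>, the dynamical state of the first particle alone, obtained by partial tracing, is the mixture that assigns weights |a|² and |b|² to |a1> and |a2>. On the modal reading, the particle's value state at that moment may nevertheless be, say, |a1>, so that the corresponding property is actually possessed even though the (mixed) dynamical state is not an eigenstate of the associated observable.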
More precisely, the value state of a system (usually a component of a bigger system) differs from the dynamical state of this same system (found by restricting the dynamical state of the total system to the subsystem in question) exactly when this dynamical state is not pure. In these cases (i.e., when the dynamical state of a system is mixed), the dynamical state does not fix the value state, but it determines the set of possible value states. The general idea of van Fraassen's proposal, and of modal interpretations in general, is that physical systems at all times possess a number of well-defined physical properties, i.e. definite values of physical quantities, and that these properties are represented by the system's value state. Which physical quantities are sharply defined, and which values they take, may change in time. The dynamical state determines the set of possible value states and their possible time evolutions. Further, empirical adequacy requires that the dynamical state generate the correct quantum mechanical frequencies for those properties that are observable. It is part of this proposal that a system may have a sharp value of an observable even if the dynamical state is not an eigenstate of that same observable. The proposal thus violates the so-called ‘eigenstate-eigenvalue link’, which says that a system can only have a sharp value of an observable (namely, one of its eigenvalues) if its quantum state is the corresponding eigenstate. In the value state terminology, the eigenstate-eigenvalue link would say that a system has the value state corresponding to a given eigenvalue (of a given observable) if and only if its dynamical state is an eigenstate of the observable corresponding to that eigenvalue. Van Fraassen accepts the ‘if’ part, but denies the ‘only if’ part. What are the possible ‘value states’ for a given system at a given time? Van Fraassen stipulates the following restriction: propositions about a physical system cannot be jointly true unless they can be jointly certain according to the standard quantum rules (i.e., generally speaking, unless they are represented by commuting observables). It follows that the non-commutativity of observables imposes limits on the possibilities of joint existence of properties. This non-commutativity does therefore not so much restrict our knowledge about the properties of a system, but rather restricts the possibility of joint existence of properties themselves, independently of our knowledge. Non-commuting quantities, like position and momentum, cannot jointly be well-defined quantities of a physical system. This motivates van Fraassen to term his interpretation the ‘Copenhagen Variant’ of the modal interpretation. Other variants (for example, van Fraassen identifies an ‘Anti-Copenhagen’ variant, which he attributes to Arthur Fine) would impose less restrictive conditions on the form of the value states. Finally, value states are taken to be maximal with respect to the restriction just noted, in the sense that they are representable as pure states in Hilbert space (this requirement is relaxed in other versions of the modal interpretation, as we shall see). Now, which pure states are the possible value states at a given moment? Van Fraassen formulates a very permissive criterion, which other authors have found too permissive, the reason stemming from his ‘constructive empiricist’ philosophy of science. 
He is concerned with giving an interpretation of the theory that is only restricted by the requirement that the theory be empirically adequate, i.e. compatible with all observable phenomena (in the sense used by van Fraassen). In particular, the interpretation should guarantee that measurements possess results. Van Fraassen is not striving for an interpretation that tells us the exact truth about what goes on behind the scenes of observation, by exactly describing the properties of physical systems even if not measured. In other words, the main problem to be solved for van Fraassen is that if we apply the standard eigenstate-eigenvalue link to quantum mechanics without the projection postulate, the result is that measurements do not have results: in general, after a measurement interaction a measuring device plus object system will end up in a superposition of eigenstates of the measured observable. Given only this task, the account given by van Fraassen can be relatively modest. According to van Fraassen, the just-mentioned problem can be solved by introducing value states (and therefore well-defined properties of physical systems, among them pointer states of measuring devices) and by adding only that the possible value states are all pure states that lie in the support of the (generally mixed) dynamical state of the system. In other words, these are all states that appear in the various possible decompositions (not only the diagonal ones) of the density matrices. Of course, empirical adequacy requires that in cases of measurement the actual value state of the apparatus be one describing a definite measurement result. Observation tells us also that, in these cases, the dynamical state generates a probability measure over exactly the set of possible measurement results (which is a smaller set than the set of properties deemed possible by van Fraassen), thus enabling us to make predictions. In the end, van Fraassen therefore faces the task of giving a detailed account of measurements, and according to many this has not been satisfactorily done in his approach. Van Fraassen's account is ‘modal’ because it leads to a modal logic of quantum propositions. Indeed, the dynamical state in general only tells us what is possible. An important point is that one should not consider this modality to arise from an incompleteness of the description, which it is the aim of science to remove. The dynamical state provides us with possible value states for all physical systems (i.e. possible stories about the world) that are compatible with all the observable data; and this is all an interpretation has to do, according to van Fraassen. On the other hand, it is quite easy to see how van Fraassen's approach gave rise to a program that is concerned with providing a further-going ‘realistic’ interpretation of quantum theory, a program to which we now turn. 2. Kochen-Dieks-Healey Interpretations The basic outlines of that program are already apparent in van Fraassen's work (or in what may be considered its limitations). The main idea is to precisely define a set of possible properties, or value states, for a physical system that is adequate in measurement situations and then to assert that the dynamical (i.e., quantum-mechanical) state generates an ignorance-interpretable probability measure over this set. More precisely, one defines an ignorance-interpretable probability measure over value states, which themselves assign ‘possessed’ or ‘not possessed’ to each possible property. 
One uses the quantum-mechanical probability measure, which makes it plausible to expect empirical adequacy. In the late 1980s, various researchers — typically, as noted above, with a more realistic bent than van Fraassen — realized the possibilities offered by a modal approach. Here we shall consider three cases, albeit briefly and largely without reference to their background philosophical motivations: Kochen, Dieks, and Healey. Kochen's (1985) modal interpretation is based on the polar decomposition theorem (see Reed and Simon (1979, pp. 197-198) for a statement and proof), but is somewhat easier to understand in terms of the so-called ‘biorthogonal decomposition theorem’: Biorthogonal Decomposition Theorem: Given a vector, |v>, in a tensor-product Hilbert space, H1 ⊗ H2, there exist bases {|ei>} and {|fj>} for H1 and H2 respectively such that |v> can be written as a linear combination of terms of the form |ei> ⊗ |fi>. If the absolute values (moduli) of the coefficients in this linear combination are all unequal then the bases are unique. In other words, the state of a two-particle system picks out (in many cases, uniquely) a basis for each of the component systems. (See, for example, Schrödinger (1935) for a proof of this theorem.) Recall from the previous section that van Fraassen refrained from providing a restrictive specification of the possible value states for a given system. We now see in the biorthogonal decomposition theorem a way to provide a rule that is restrictive: define as the possible value states (for each component system of a composite system) the elements of the basis picked out by the theorem. The set of possible properties is thus directly fixed by the dynamical state; this set is much smaller than the set considered by van Fraassen. It is also manifest that the dynamical state generates a probability measure over the set of possible value states, namely the standard quantum mechanical measure. In this way, the interpretation focuses essentially on compound systems. In one sense, this feature is not really a departure from van Fraassen's view, because for van Fraassen, only systems that are in mixed dynamical states (indeed, ‘improperly’ mixed: i.e., states represented by a density operator that is derived by partial tracing from the pure state of a bigger system, as opposed to ‘proper’ mixtures that represent ignorance about the true state) have value states that differ from their dynamical states. This situation will typically occur for systems that are components of a compound system. This similarity between Kochen and van Fraassen should not mask a significant philosophical difference, however. Kochen's account is meant to be perspectival or relational, meaning that a system has a property only in relation to other systems (see also below). It is illuminating to see how in a typical measurement situation Kochen's prescription differs from van Fraassen's. Consider a typical measurement in which a ‘pointer’ becomes correlated with the value that some ‘measured’ system has for a given observable. Letting the |ei> represent the possible ‘indicator states’ of the pointer, and |fj> the eigenstates corresponding to the possible values that the system might have for the measured observable, the final state of the compound system will indeed take the form of a linear combination of terms of the form |ei> ⊗ |fi>, so that by Kochen's prescription, the pointer has exactly its indicator states as the only possible value states.
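Numerically, the biorthogonal decomposition of a pure two-component state is just the singular value decomposition of its coefficient matrix, which makes the prescription easy to exhibit. A minimal sketch (the state and dimensions are an illustration of ours, chosen to mimic an imperfectly correlated pointer–system state):

import numpy as np

# Coefficients c[i, j] of a pure state  sum_ij c[i, j] |i>|j>  of two qubits,
# written in the pointer basis (system 1) and measured-observable basis (system 2).
c = np.array([[0.80, 0.05],
              [0.05, 0.59]])
c = c / np.linalg.norm(c)               # normalize the state

# SVD: c = u @ diag(s) @ vh.  For this real matrix the columns of u and of vh.T
# are the biorthogonal bases {|e_i>} and {|f_i>}, and s holds the coefficients.
u, s, vh = np.linalg.svd(c)
print("biorthogonal coefficients:", np.round(s, 4))
print("probabilities |c_i|^2    :", np.round(s**2, 4))
print("basis for system 1 (columns):\n", np.round(u, 4))
print("basis for system 2 (columns):\n", np.round(vh.T, 4))

Since the coefficients here are unequal, the bases are unique, and the squared coefficients are exactly the probabilities that the reduced (dynamical) state assigns to the corresponding value states.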
By contrast, for van Fraassen it only followed that the pointer's indicator states are among the potential value states; as discussed above, van Fraassen is concerned only to establish this fact, and not the fact that they are the value states even when they are unobservable. For Kochen, the fact that the application of the interpretation is restricted to subsystems of a two-component compound system is not a problem. Indeed, he appears to adopt a metaphysics of properties in which systems do not have intrinsic properties: all properties are relational. Kochen calls the relation ‘witnessing’. Consider again the measurement described above. In this case, the pointer (at the end of the measurement) may be said to ‘indicate’ (or, as Kochen prefers, ‘witness’) the result, i.e., the value that the measured system has for the measured observable. Now, because Kochen intends his interpretation to apply in all circumstances (not only in measurements), we must abstract the idea of ‘indication’ or ‘witnessing’ away from the context of measurements, and whatever notion we end up with is supposed to apply to all cases of possession of properties. Kochen's interpretation is therefore ‘perspectival’: systems do not possess properties intrinsically, but relative to the ‘perspective’ of another system that ‘witnesses’ it to possess the property in question. Other authors, e.g. Dieks, at least originally preferred a metaphysics of intrinsically possessed properties. Their proposals are therefore faced with consistency questions about the relations between properties assigned according to different ways of splitting up a system into components. To see how this question arises, note that a three-component compound system may be divided into pairs of subsystems in several ways. Consider, for example, the compound system A&B&C. We could arrive at properties for A by applying the biorthogonal decomposition theorem to the two-component system A&(B&C). We could also apply the theorem to (for example) B&(C&A) or C&(A&B). Now, how are the properties of A and B related to those of A&B? Suppose, for example, that A has the property P and B has the property Q. Should one ascribe the property P&Q to A&B, or should A&B have some property that it gets from applying the biorthogonal decomposition to C&(A&B), or both? Although in his early proposals Dieks (1988, 1989a, 1989b) did not focus on these questions, his later work, together with Vermaas (Vermaas and Dieks, 1995), explicitly addressed them. (The fullest account is in Vermaas (1999). See also Bacciagaluppi (1996).) Dieks first notes that the density operator (reduced state) of a single component of a two-component system has as its spectral resolution exactly the projections onto the basis elements picked out by the biorthogonal decomposition theorem, in the case when the decomposition is unique. One may then reformulate and generalize the original proposal by positing in general that the possible value states for any system are represented by the projection operators in its density operator's spectral decomposition (whose existence and uniqueness is guaranteed by the spectral theorem). This new proposal matches the old one in cases where the old one applies, and generalizes by fixing the definite-valued quantities in terms of multi-dimensional projectors when the biorthogonal decomposition is degenerate.
In terms of value states the proposal says that definite properties need not always be represented by one-dimensional vector states — higher-dimensional subspaces of the Hilbert space can also occur. This idea also occurs in Healey's approach. With this general recipe for assigning properties in hand, we may now address the question about the relations between different subdivisions of the total system. We can make the issue even more complicated than stated above by noting that a given tensor-product Hilbert space can be factored in many ways. In essence, the factorization of a given Hilbert space, H, into two factors, H1 and H2, can be ‘rotated’ to produce additional factorizations into new factors H1′ and H2′. There is a continuous infinity of such possibilities. Are we to apply the proposal to each such factorization? How are the results related, if at all? A theorem due to Bacciagaluppi (1995) shows, in essence, that if one applies Dieks' proposal to the ‘subsystems’ obtained in every factorization and insists that the results be comparable (i.e., that the subsystems thus obtained do not have their properties ‘relative to a factorization’ but instead have them absolutely), then one will be led to a mathematical contradiction of the Kochen-Specker variety. In response, one could adopt the view that subsystems have their properties ‘relative to a factorization’; some advocates of modal interpretations have instead adopted the view that there is a ‘preferred factorization’ of the universal Hilbert space into subsystems. This assumption amounts to the adoption of the existence of fixed ‘atomic’ degrees of freedom of the universe. One is still faced, however, with the question of how properties of a composite system are related to those of its components. The answer to this question depends on whether Dieks' proposal for assigning properties is to be applied to the ‘atoms’ only, or to any subsystem whatever. For example, do we apply the proposal (from our schematic example above) to A&B&C as well as to A&B? Vermaas (1997) has shown that doing so has the consequence that one cannot define generally valid correlations between a composite system and its components. If one is willing to adopt perspectivalism — as Kochen was from the start, and Dieks has become in later work (Bene and Dieks, 2002, see also Berkovitz and Hemmo, 2005, and Hemmo and Berkovitz, 2005) — then one can perhaps justify the lack of such correlations. The choice is therefore between some form of perspectivalism and the atomic modal interpretation (see, for example, Bacciagaluppi and Dickson, 1999), according to which the basic proposal is applied only to the ‘atomic’ subsystems of the universe. The properties of all other (compound) systems are in this case inherited from their subsystems. There are connections here with discussions in metaphysics about the possibility of the existence of ‘non-supervenient’ properties. (Clifton (1995c) also offers an important theorem concerning this issue.) Richard Healey (1989) was also among the first to make use of the biorthogonal decomposition theorem, taking Kochen's ideas in a somewhat different direction. Healey's main concern was the apparent nonlocality of quantum theory.
Healey's intuition about the way a modal interpretation based on the biorthogonal decomposition theorem would be applied to, say, an EPR experiment is to implement the idea that an EPR pair possesses a 'holistic' property; this can then explain why the apparatus on one side of the experiment acquires a property that is correlated to the result on the other side. Irrespective of whether this picture is general enough for its intended purpose, it shows that Healey does not subscribe to an ‘atomic’ modal interpretation, since it is crucial for him that the EPR pair as a whole be assigned a (non-product) property. On the other hand, Healey's proposal begins with the atomic interpretation, making use of the biorthogonal decomposition theorem, but the set of possible properties is then expanded (and subsequently restricted) by a number of conditions. Healey's aim is apparently to walk a thin line amongst a variety of desiderata. The first is consistency. As shown by (for example) the theorems of Bacciagaluppi and Vermaas, mentioned above — not to mention the Kochen-Specker theorem itself — given certain conditions on the set of possibly-possessed properties, one cannot add properties to this set willy-nilly. A second is to maintain a plausible theory of the relationship between composite systems and their subsystems. A third is to maintain a plausible account of the relations among possessed properties at a given time. A fourth is to maintain a plausible account of the relations among possessed properties at different times. The structure of possibly-possessed properties that emerges from Healey's conditions is extremely complicated. Some progress has been made since Healey's book was published (see for example Reeder and Clifton, 1995), but in general, it remains difficult to see what the set of possibly-possessed properties is according to Healey's approach. 3. Motivating Modal Interpretations One might well ask: Why begin with the biorthogonal decomposition (or more generally, the spectral decomposition) in the first place? What is the physical motivation of interpretations that determine the set of possibly-possessed properties on the basis of this decomposition? A series of theorems proposes to answer (or to begin to answer) this question. The first of these theorems was due to Clifton (1995a), the title of the paper indicating the project: “Independently motivating the Kochen-Dieks Modal Interpretation of Quantum Mechanics”. A series of related results followed, including those by Clifton (1995b), Dickson (1995a, 1995b), Bub and Clifton (1996), Bub, Clifton and Goldstein (2000) and Dieks (1995, 2005, 2007). Here we shall discuss Clifton's original paper, and Bub and Clifton's theorem, the former to indicate the general thrust of these arguments, and the latter as a way to introduce Bub's own modal proposals. The theorem discussed here is not quite Clifton's, which is slightly stronger (because its assumptions are slightly weaker), but it will be sufficient to make the reader grasp the general idea of this group of theorems. These take the following general form, for some mathematically-stated (but hopefully physically motivated) conditions A, B, C, etc.: If one wants a set of possibly-possessed properties to obey conditions A, B, C, etc., then the set must take the form asserted by a spectral-decomposition-like version of the modal interpretation. In Clifton's and Bub's work that form is the following. 
(Dieks' theorem (2005, 2007) gives a justification for a set of properties that exactly coincides with the one given by the spectral decomposition.) Consider a system in the (improper) mixed state, W. Let {Pi} be the set of W's spectral projections, and let B<Pi> be the Boolean algebra generated by the Pi, which is in this case just the set of all sums of elements of {Pi}. Finally, let Q be the null space of W, that is, the subspace orthogonal to each Pi. Then the set of all possibly-valued projections, P, for our system is the set {P | P = Pj + Q′, where Pj is in B<Pi> and Q′ is contained in Q}. This set differs from that given by the spectral decomposition by the inclusion of individual projections from the null-space — the original Dieks-Vermaas spectral decomposition proposal only includes the projection on the null-space as one whole. So theorems of the sort proven by Clifton and others take the form: sets of the form like that given above are the only sets that fulfill conditions A, B, C, etc. In one such theorem, roughly the one proven originally by Clifton, the conditions are: 1. Closure: the set of all possibly-possessed properties is closed under conjunction, disjunction, and negation (suitably understood in quantum-logical terms). 2. Classicality: the quantum-mechanical probability measure (generated by the reduced state W) over the set of all possibly-possessed properties obeys all of the laws of classical probability, and — crucially — it is ‘ignorance-interpretable’. 3. Certainty: for any property R, if the reduced state W assigns probability 1 or probability 0 to R, then R is in the set of all possibly-possessed properties. 4. Ignorance: each member of the spectral resolution of W is in the set of all possibly-possessed properties. The justification of the final condition depends on an analysis of the measuring process, described in the way first proposed by von Neumann. One may note that Clifton's theorem actually relies on a considerably weaker condition. (But in conjunction with the other conditions, it implies the condition given here.) The theorem of Bub and Clifton (1996) (in the improved version of Bub, Clifton, and Goldstein (2000)) concerns a set of possibly-possessed properties that is characterized somewhat differently. Specifically, it is characterized in terms of (in the simplest case) a pure state, |v>, and a privileged observable, R. The pure state is the quantum-mechanical state of the system, while the observable is privileged in the sense that it is always ‘definitely-valued’; i.e., whatever else gets a definite value, the spectral projections of R certainly must. The conditions of the Bub-Clifton theorem are the following: 1. Closure: as above. 2. Truth and Probability: essentially the same condition as ‘classicality’, above. 3. R-preferred: the eigenspaces of R are among the set of possibly-possessed properties. 4. |v>, R-definability: the set of possibly-possessed properties is definable solely in terms of the pure state |v> and the observable R. 5. Maximality: the set of possibly-possessed properties is maximal with respect to the conditions above. The idea, then, is to determine a maximal set (in fact, a lattice) of possibly-possessed properties that admits an empirically adequate and ignorance-interpretable probability measure, makes R definite-valued, and is fixed by the state of the system, |v>, and R. Again, these conditions are supposed to be intuitively clear, if not compelling.
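Before discussing these conditions further, a minimal worked case of the Clifton-style set defined at the start of this section may be useful (our illustration, not an example from the literature cited here): take a single qubit whose improper mixed state is W = 0.7 P1 + 0.3 P2, with P1 and P2 the orthogonal ‘up’ and ‘down’ projections. The spectral projections are {P1, P2}, the Boolean algebra B<Pi> they generate is {0, P1, P2, I}, and the null space Q of W is trivial, so the set of possibly-possessed projections is just {0, P1, P2, I}. The measure generated by W assigns probabilities 0.7 and 0.3 to the two one-dimensional members, and it is classical and ignorance-interpretable, as the conditions require.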
The condition ‘R-preferred’ may look controversial, for it is unclear why there should be any ‘preferred’ observable in this sense, and how it might be picked out. One would not like the observable to be picked out by fiat, for example. If we were willing to choose an observable and stipulate in an ad hoc manner that it must have a value, then it is unclear why we would be concerned about the definition of definite-valued observables in the first place. However, it turns out that there are several well-known interpretations of quantum theory that become analysable as modal interpretations once the existence of a preferred observable is allowed. The earlier case, without a fixed preferred observable R, can be recovered by taking the system's density matrix for R — see Bub (1997), Dieks (2007). Bub and Clifton prove the remarkable result that the conditions above give rise to a unique lattice of possibly-possessed properties, defined as follows. Let {Pi} be the set of projections onto the vectors, |vi>, which are the projections of |v> onto the eigenspaces of R. Then the set is as defined above for Clifton's theorem concerning spectral-decomposition modal interpretations. Dieks's variation on the theorem (2005, 2007) differs by requiring not definability of the set of possibly-possessed properties, but rather definability of the individual elements of the set. This stronger requirement has the consequence that the lattice of possible properties becomes smaller: only the projector on the full null-space is part of it, not the individual one-dimensional projectors on the null-space. Bub suggests that a number of traditional interpretations of quantum theory can be characterized as modal interpretations if the existence of a preferred observable is allowed. Notable among them are the Dirac-von Neumann interpretation, (what Bub takes to be) Bohr's interpretation, and Bohm's theory. In the last case, Bub argues that Bohm's theory can be recovered as a modal interpretation in which the R is the position observable. In addition, Bub argues (especially in his 1997) that R could be picked out by the physical process of decoherence. We shall have to leave this suggestion as a tantalizing possibility. As already noted, proofs similar to the ones mentioned can be given if there is no preferred observable, and in that case the spectral-decomposition-like sets of possibly-possessed properties, definable from the dynamical state alone, are recovered. 4. Reality Sets in: The Problem of Imperfect Measurement Earlier we suggested that the spectral-decomposition (and the biorthogonal-decomposition) modal interpretations solve the measurement problem in a particularly direct way: at the end of a von Neumann measurement, the compound system (apparatus plus measured system) is in a state such that the possible properties picked out by these modal interpretations are exactly the pointer states of the apparatus. Hence these interpretations assign the right state to apparatuses. There is a problem with this claim, however, in spite of the fact that by itself it is true. Measurements in the real world do not satisfy the ideal von Neumann model that we described earlier. In particular, they do not effect a perfect correlation between the apparatus and the measured system — measuring apparatuses are imperfect. But then it is not so clear that the biorthogonal (or spectral) decomposition picks out the right properties for the apparatus. 
A related, more general problem surfaces if one attempts to invoke decoherence as a mechanism that is responsible for the emergence of classical observables as definite-valued: does decoherence always pick out appropriate observables as definite-valued? This problem was first raised by Albert and Loewer (1990, 1991), later developed by Elby (1993), and it sparked considerable discussion. Before we turn to the reply, we note that in fact the problem is unavoidable in the context of quantum theory. It is not due merely to the fact that measuring apparatuses are inaccurate. Rather, the quantum-mechanical formalism itself stands in the way of the formation of perfect correlations. Consider, for example, a standard Stern-Gerlach measurement of the spin of a particle. After the interaction between the particle and the magnets, the wavefunction for the particle emerging from the magnets does not have a perfect correlation between mutually orthogonal spin and spatial parts. As a consequence, on measurement the particle will necessarily have a non-zero probability of turning up in the ‘wrong’ region (see Dickson (1994) for a longer discussion of this point). Only in the limiting situation of infinite times will a perfect correlation develop. So the problem we are facing here is not a problem of engineering alone; it is intrinsic to quantum theory. (For this reason, we might expect to learn something by examining it, whether modal interpretations survive the problem or not.) The response of modal interpretations to this problem of intrinsic ‘inaccuracy’ in measurements comes in three stages. First, we may notice that the ‘error terms’ in the state of the compound (apparatus plus measured) system would typically be very small, so that the true final state would be extremely close to the ideal state (in the sense that their inner product would be very close to one). In that case, one might expect that the spectral decompositions (of the reduced states for the apparatus and measured system) would pick out states for the two systems that are extremely close to the ideal states. Specifically, the real possibly possessed properties of the apparatus would be very close (in Hilbert space) to the ideal possibly possessed properties. One interesting issue that arises here is whether close is good enough. Whatever one's answer, it is crucial to realize that modal interpretations are not here proposing a FAPP (‘for all practical purposes’) solution to the measurement problem. No, they assert that the real state of the apparatus is ‘close’ to the ideally expected state, and that there is no empirical problem with making this assertion. There are two other important problems relating to imperfect measurements. First, when the final state of the compound system of measuring device and object system is very nearly degenerate (when written in the basis given by the measured observable and the apparatus's ‘pointer’ observable — i.e., when the probabilities for the various results are nearly equal), the spectral decomposition does not, in general, choose a basis that is even close to the ideally expected result. This point was discussed in greatest detail by Bacciagaluppi and Hemmo (1996). This seems to pose a severe problem for the modal interpretation based on the spectral decomposition. 
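The instability behind this worry is easy to see numerically: when the outcome probabilities are nearly equal, even a tiny residual error term swings the eigenbasis of the reduced apparatus state far away from the pointer basis. A small sketch (the numbers are illustrative, not taken from any of the cited models):

import numpy as np

def pointer_overlap(p, eps):
    # Reduced apparatus state written in the pointer basis, with outcome
    # probabilities p and 1-p and a small residual off-diagonal term eps.
    rho = np.array([[p, eps],
                    [eps, 1.0 - p]])
    _, vecs = np.linalg.eigh(rho)
    # Best squared overlap of a spectral eigenvector with the pointer state (1, 0).
    return max(abs(vecs[0, k]) ** 2 for k in range(2))

for p in (0.9, 0.6, 0.500001):
    print("p =", p, " overlap with pointer state:", round(pointer_overlap(p, 1e-4), 4))

Away from degeneracy the spectral projections stay essentially on top of the pointer states; very close to degeneracy the overlap drops to about 1/2, so the spectral prescription no longer tracks the pointer basis.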
However, relying on the (near) ubiquity of decoherence in the macroscopic realm, Bacciagaluppi and Hemmo show that when the apparatus is considered as a finite-dimensional system (when the apparatus is modelled in a finite-dimensional Hilbert space), decoherence guarantees that the spectral decomposition of the (reduced) state of any macroscopic object will be very close to the ideally expected result. For example, pointers will be well-localized in position. In other words, in the case of finite dimensional Hilbert spaces the degeneracy problem can be dissolved by appealing to the fact that in real measurements, with macroscopic devices, there is a decohering environment (or even decohering processes within the devices themselves, see Castagnino and Lombardi, 2004); the latter is responsible for the emergence of the right definite-valued observables (Schlosshauer, 2004). So the modal approach based on the spectral decomposition seems safe here. However, the case of infinitely many distinct states for the apparatus is perhaps more realistic. Bacciagaluppi (2000) has analyzed this situation, using a continuous model of the apparatus' interaction with the environment. He concludes that now the spectral decomposition of the reduced state of the apparatus does not pick out states that are highly localized. This result applies more generally to other cases where a macroscopic system (not idealized as finite-dimensional) experiences decoherence due to interaction with its environment (see Donald (1998)). This problem is serious. Even standard non-relativistic quantum theory occurs in the arena of infinite-dimensional Hilbert spaces, not to mention quantum field theory. Several worries and suspicions result from the problem that in the case of an infinite number of degrees of freedom the expected observables are generally not picked out as definite-valued by the spectral decomposition. The first is that the modal interpretation, as stated thus far, was never in a position to deal with quantum mechanics in infinite-dimensional Hilbert spaces. The second (related to the first) is that the spectral decomposition is perhaps after all not the right tool to pick out the possibly possessed properties. Perhaps there is a more appropriate general scheme of which the spectral decomposition is a special case (Spekkens and Sipe, 2001a,b, Castagnino and Lombardi, 2006, Dieks, 2007). This suggestion might be combined with the earlier-mentioned idea that it may be more appropriate to think in terms of perspectival or relational properties than in terms of properties that are possessed in an absolute way. Indeed, Bene and Dieks (2002) have proposed a perspectival version of the modal interpretation that seems to be able to circumvent some of the problems connected with infinitely many degrees of freedom. Berkovitz and Hemmo (2006) have also put forward a perspectival (or relational) version of the modal interpretation. These proposals deserve further study. The most direct and general response, however, is to generalize the original modal scheme so that it naturally fits into a field-theoretic context, with infinitely many degrees of freedom. A plausible generalization of this type has been proposed by Clifton (2000); see below. 
Results by Earman and Ruetsche (2005) show however that this field-theoretical modal interpretation still faces the problem that the definite properties that are picked out are not always the expected ones — often only trivial observables are made definite — though it is unclear whether this actually constitutes a threat to the empirical adequacy of the interpretation. 5. The Algebraic Approach The Algebraic approach to modal interpretations aims for a formalism that is significantly more general than that developed thus far — one that can apply to quantum theory in infinite-dimensional Hilbert space, and to quantum field theory — and, further, abstracts away from a particular choice for the possibly possessed properties. The rudiments of an algebraic approach are already present in the work of those who, from the mid 1990s, aimed to provide a motivation for modal interpretations. We saw there that modal interpretations were described in more or less algebraic terms, namely, as a certain set closed under algebraic operations (the operations of meet, join, and orthocomplement on the lattice of projections on a Hilbert space, for example). Indeed, Bub defines his interpretation in these terms: his set of possible possessed properties is defined algebraically, as a lattice (definable from the state and a preferred observable R). While it was recognized by early workers (Bub, Clifton, Dickson, and others) that the set of possibly possessed properties can be characterized in interesting algebraic ways, the first serious algebraic work on modal interpretations was done by Bell and Clifton (1995), who defined the notion of a ‘quasiBoolean algebra’. These algebras are ‘almost’ distributive, in a well-defined sense. It is their ‘near’ distributivity that permits the definition of classical probability measures over them, which in many interpreters' eyes is the precondition for adopting an ignorance interpretation of probabilities. Following on this work, Zimba and Clifton (1998) changed tack a bit, and considered not algebras (or lattices) of projection operators, but algebras of observables. The advantages of this approach are many. First, there is a well-developed theory of operator algebras upon which one can draw. Second, it allows one, in principle, to deal with observables generally, including those that do not have (proper) eigenspaces. Third, it provides a possibly more compelling justification for the kinds of ‘closure’ condition that have been mentioned above. Zimba and Clifton focus largely on this last issue, considering a number of closure conditions on the set of definite-valued observables. For example, should the set be closed under taking real linear combinations? (In this case, one assumes that a real linear combination of observables that are definite-valued is itself definite-valued.) Arbitrary algebraic combinations? Arbitrary (‘self-adjoint’) functions? Zimba and Clifton prove a number of interesting results concerning the algebra of observables picked out by modal interpretations. (Their results are not all applicable to the infinite-dimensional case, however). Somewhat more precisely, one begins with a quasiBoolean algebra of projections — not necessarily one picked out by any of the prescriptions we have discussed, but just any quasiBoolean algebra — and then considers the observables that are definite-valued in virtue of this quasiBoolean algebra's constituting an algebra of possibly-possessed properties. 
Following Zimba and Clifton, let us call such an algebra of observables D. Zimba and Clifton then consider whether there exist valuations on D (i.e., assignments of values to all observables in D) that respect arbitrary (self-adjoint) functional relationships among the observables in D. That is, letting v[A] represent the value of A (for A in D), and letting f be any (self-adjoint) function, we require that f(v[A]) = v[f(A)]. The answer is ‘yes’. More importantly, they show that there are sufficiently many such valuations that the quantum-mechanical probabilities over D can be recovered from a classical probability measure over all such valuations. In other words, one can understand quantum-mechanical probabilities as ignorance about which values the observables in D actually have. A later installment of this line of reasoning is due to Halvorson and Clifton (1999). They extend results from Zimba and Clifton to the case of unbounded observables (though there remain open questions about this case). 6. Dynamics As we have seen, modal interpretations propose to provide, for every moment in time, a set of possibly-possessed properties (or definite-valued observables) and probabilities for possession of these properties (or for values of these observables). Some advocates of modal interpretations may be willing to leave the matter, more or less, at that. Others take it to be crucial for any modal interpretation that it also answer questions of the form: Given that a system possesses property P at time s, what is the probability that it will possess property P′ at time t (t > s)? In other words, they want a dynamics of possessed properties. (It is clear for instance that Healey's account requires some such dynamics.) There are arguments on both sides. Those who consider a dynamics of possessed properties to be superfluous wonder whether quantum mechanics could not get away with just single-time probabilities. Why can we not settle for an interpretation that supplements standard quantum mechanics only by providing in a systematic way a set (the set of possibly possessed properties) over which its single-time probabilities are defined? If we require of this set that it include all the everyday properties of macroscopic objects, including those relating to records and memories, then what more do we need? Arguably, van Fraassen has such a position, considering a dynamics of value states to be more than what an interpretation of quantum mechanics needs to provide. Those who argue for the necessity of dynamics reply that we need an assurance that the trajectories of possessed properties really are, at least for macroscopic objects, like we see them to be, i.e., like records and memories indicate. For example, we should require not only that the book at rest on the desk have a definite location, but also that, if undisturbed, its location relative to the desk does not change in time. Hence one cannot get away with simply specifying the definite properties at each time. We need also to be shown that this specification is at least compatible with a reasonable dynamics. Even better, we would like to see the dynamics explicitly. The issue comes down to what one considers to be ‘the phenomena that need saving’ by an interpretation. Those who believe that the phenomena in question include dynamical phenomena will search for a dynamics of possessed properties (or definite values). 
Others might doubt whether we really have empirical access to history: are instantaneous properties, including records and memories, not the only things we observe? As pointed out by Ruetsche (2003), it is important in this context whether the modal interpretation is viewed as resulting in a hidden-variables theory, in which value states are added as hidden variables to the original formalism in order to obtain a full description of the physical situation, or rather as only equipping the original formalism with a new semantics. In the first approach one would expect a full dynamics of value states, in the second this is not so clear. Of course, modal interpretations do admit — trivially — a dynamics, namely, one in which there is no correlation from one time to the next — however, this dynamics seems unreasonable. (In this case, the probability of a transition from the property P at s to P′ at t is just the single-time probability for P′ at t.) In such a case, the book on the table might not remain at rest relative to the table, even if undisturbed. Such a dynamics is unlikely to interest those who feel the need for a dynamics at all. Several researchers have contributed to the project of constructing a more interesting form of dynamics for modal interpretations. An important account is due to Bacciagaluppi and Dickson (1999). That work shows that most of the significant challenges facing the construction of a dynamics can be answered in principle, though there remain open questions. The first challenge is posed by the fact that the set of possibly possessed properties — let us call it ‘S’ — can change over time. In other words, the ‘state space’ (S) over which we wish to define transition probabilities is itself time-dependent. One therefore has to define a family of maps, each one being a 1-1 map from S at one time to (a different!) S at another time. With such a family of maps, one can effectively define conditional probabilities within a single state space, then translate them into ‘transition’ probabilities. For this technique to work, S must have the same cardinality at each time. However, in general (for example, in those interpretations that rely on the spectral decomposition), it does not (the number of different projections appearing in the spectral decomposition of the density matrix may vary with time). A way out of this is to augment S at each time so that its cardinality matches the highest cardinality that S ever achieves. Of course, one hopes to do so in a way that is not completely ad hoc. For example, in the context of the spectral decomposition version of the modal interpretation, Bacciagaluppi, Donald, and Vermaas (1995) show that the ‘trajectory’ (through Hilbert space) of the spectral components of the reduced state of a physical system will, under reasonable conditions, be continuous, or have only isolated discontinuities (so that the trajectory can be naturally extended to a continuous trajectory). This result suggests a natural family of maps as discussed above: map each spectral component at one time to its unique (continuous) evolute at later times. The second challenge to the construction of a dynamics arises from the fact that one wants to define transition probabilities over infinitesimal units of time, then derive the finite-time transition probabilities from them. Adapting results from the theory of stochastic processes, one can show that the procedure can, more or less, be carried out for modal interpretations of at least some varieties.
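As a generic illustration of that second step (an ordinary continuous-time Markov chain, not the specific construction of Bacciagaluppi and Dickson): given infinitesimal rates collected in a generator matrix, the finite-time transition probabilities are obtained by exponentiating it, and they propagate any single-time distribution consistently.

import numpy as np
from scipy.linalg import expm

# Generator Q for a three-state jump process: Q[i, j] (i != j) is the rate for
# a jump from state j to state i, and each column sums to zero.
Q = np.array([[-0.3,  0.1,  0.2],
              [ 0.2, -0.4,  0.1],
              [ 0.1,  0.3, -0.3]])
assert np.allclose(Q.sum(axis=0), 0.0)

p0 = np.array([0.5, 0.3, 0.2])   # single-time distribution at t = 0
T = expm(Q * 1.7)                # finite-time transition matrix for t = 1.7
p_t = T @ p0

print("columns of T sum to one:", np.allclose(T.sum(axis=0), 1.0))
print("p(t) =", np.round(p_t, 4), "  total:", round(float(p_t.sum()), 4))

In the modal case the rates must in addition be chosen, at each time, so that the propagated distribution reproduces the quantum-mechanical single-time probabilities; that is exactly the freedom exploited in the constructions discussed next.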
Finally, one must actually define infinitesimal transition probabilities that will give rise to the proper quantum-mechanical probabilities at a time. Following earlier work by Bell (1984) and Vink (1993) and others, Bacciagaluppi and Dickson define in fact an infinite class of such infinitesimal transition probabilities. Some of them might be considered more ‘quantum-mechanical’ than others, but all of them generate the correct single-time probabilities, which are, as we have seen, arguably all we can really test. However, Sudbery (2002) has contended that the form of the transition probabilities would be relevant to the precise form of spontaneous decay or the 'Dehmelt quantum jumps' (otherwise known as 'quantum telegraph' or 'intermittent fluorescence'); he independently develops the dynamics of Bacciagaluppi and Dickson and applies it in such a way that it leads to the correct predictions for these experiments (compare Shimony, 1990, about the idea that quantum dynamics may be tested in experiments). More recently, Gambetta et al. (2003, 2004) have developed a dynamical modal account in the form of a non-Markovian process with noise, also extending their approach to positive operator-valued measures (POVMs). 7. Open Problems and Projects There are a number of open projects and problems in the modal program. Above we saw that the original version based on the spectral decomposition may be empirically inadequate. Will a perspectival or relational extension (Bene and Dieks, 2002, Berkovitz and Hemmo, 2006) be able to solve these problems? If so, a more detailed analysis than given hitherto of the ontology of such a relational interpretation is needed. There may be a connection here with Rovelli's (1996) relational interpretation; see also Pearle (2005) for possible relations between the modal interpretation and other no-collapse interpretations. Other fundamental questions may be posed. For example, is it reasonable to attach direct physical meaning to the mathematical structure of quantum mechanics, in the way modal interpretations do? Some have argued that quantum theory should not be viewed primarily in terms of operators and quantum states; some even question the fundamentality of the Hilbert space formalism, which modal interpretations take quite seriously. For example, Daumer et al. (1996) contend that one should not naïvely take operators to represent physical quantities (it is controversial whether modal interpretations in fact do so, at least in the naïve sense that these authors dislike). On the other hand, Brown, Suárez, and Bacciagaluppi (1998) argue that there is more to quantum reality than what is described by operators and quantum states: they claim that gauges and coordinate systems are important to our description of physical reality as well, while modal interpretations have standardly not taken such things into consideration. The algebraic approach initially abstracts away from specific choices about the set of definite-valued observables, but in the end one feels compelled to return to this issue. Indeed, at the very least one would like to know that some choice or other can at least capture what we believe to be true about the world. We have noted a number of theorems of the form ‘the largest set of observables that can be made simultaneously definite (subject to some conditions) is S’. But is it plausible to assume that nature prefers such maximal sets and makes all statements corresponding to one of them simultaneously true? 
One may also ask more fundamental questions about the algebraic approach itself. For example, what is the motivation for the algebraic closure conditions? Do the functional operations correspond to well-defined empirical operations? The algebraic work is also a source of several open technical questions. Halvorson and Clifton (1999) mention several of them. A fundamental question that has only started to receive attention is the extension of the approach to algebraic quantum field theory (Clifton, 2000, Dieks, 2002, Kitajima, 2004, Earman and Ruetsche, 2005). Clifton has proposed a natural generalization of the non-relativistic modal scheme, but as Earman and Ruetsche show it is not yet clear whether it will be able to deal with measurement situations, and whether it can be empirically adequate. The field-theoretic approach may offer a promising answer to doubts that have been raised about the compatibility of the modal scheme with Lorentz-invariance, though. Dickson and Clifton (1998) have shown that a large class of modal interpretations of ordinary quantum mechanics cannot be made Lorentz-invariant in a straightforward way; these results were extended by Myrvold (2002). The problems revealed by these investigations seem related to the non-relativistic nature of the formalism of quantum mechanics, in particular to the fact that the concept of a state of an extended system at one instant is central. In a local field-theoretic context this becomes different, and this may avoid conflicts with relativity (Earman and Ruetsche, 2005). Berkovitz and Hemmo (2005) and Hemmo and Berkovitz (2005) propose a different way out: they argue that perspectivalism can come to the rescue here — see also Berkovitz and Hemmo (2006). In the realm of dynamics, Bacciagaluppi and Dickson (1999) raise a number of open questions. In addition to these, the issue of whether a dynamics is really needed at all is an important topic of discussion, also from a wider philosophical point of view. There is a relation here to issues in the philosophy of time, in particular to the question of whether it is possible to do without the concept of history and evolution in time at all — perhaps it is sufficient to consider only instantaneous states, including records and memories (Barbour, 1999)? These and similar problems, and their proposed solutions, have arisen in the context of detailed technical investigations. This illustrates one of the advantages of the modal approach: it makes use of a precise set of rules that determine the set of definite-valued observables, and this makes it possible to derive rigorous results. It may well be that several of these results, e.g., no-go theorems, can be applied to other interpretations as well (e.g., to the many-worlds interpretation — Dieks, 2007). Whatever the merit of the modal ideas in the end, one can at least say that they have given rise to a serious and fruitful series of investigations into the nature of quantum theory.
Is bonding specifically for and between electrons? Why can't two atoms share muons, which are different particles with the same charge and spin but a different mass? Why aren't there muon-electron bonds? Why is the octet (or eighteen-valence-electron) rule only for electrons, and not for all particles with charge and spin similar to the electron's?

• I wrote a "32 valence electron rule" for f-block elements, but then I remembered f orbitals don't participate in bonding. :) (Dec 16 '19 at 0:22)
• For lanthanides at least. (Dec 16 '19 at 0:22)
• Remember that the Pauli exclusion principle applies to identical fermions. A muon is not identical to an electron, so if you introduce a muon into an atom it will decay down to the lowest possible orbital. This orbital is 1s-like, of course, so in principle you could make a molecule out of protons and muons; however, muons decay very quickly and are captured by the nucleus on a similar time scale. – PJ R (Dec 16 '19 at 1:29)
• Also, wouldn't the fact that muons "orbit" much closer to the nucleus mean that it would be very hard to get two nuclei close enough together to have enough interaction of both nuclei with the muon for there to be a net benefit? I think the cost of bringing the nuclei together would likely be greater than the benefit of bonding. – Andrew (Dec 16 '19 at 13:23)

You are correct; electrons and muons are fermions with different quantum numbers (specifically, they differ in electron number and muon number), so the Pauli exclusion principle does not apply between them (though it of course applies among electrons and among muons separately). A somewhat similar case occurs with protons and neutrons (also fermions) in the nuclear shell model, which attempts to describe nuclei as containing proton and neutron shells, analogous to electron shells; the proton and neutron shells are filled independently.

Because (negative) muons are the second-generation Standard Model equivalent of the electron, whatever electrons do, muons can copy: since there are "electronic" orbitals, there are also "muonic" orbitals. However, there are two main differences.

First, for electronic orbitals it is a good approximation to assume the nucleus is stationary with respect to the electrons (the Born-Oppenheimer approximation), due to the great difference in their masses (the lightest nucleus, a proton, has approximately 1836 times the mass of an electron). Because the muon is approximately 207 times heavier than an electron, it has an appreciable mass relative to a proton (approximately one-ninth), and therefore the BO approximation is considerably worse. This is a scenario in between a regular atom and positronium, in which an electron "orbits" a positron "nucleus" of the same mass. See Phys. Rep. 1982, 86 (4), 169-216 for a quantitative analysis of BO approximation errors in a simple muonic molecule.

The second and more striking difference is that, again because muons are 207 times heavier, muonic orbitals are correspondingly around 207 times smaller, meaning a muonic orbital has a typical radius of around 0.5-1 pm, compared to 100-200 pm for an electronic orbital.
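The size factor follows directly from the textbook hydrogen-like (Bohr) result that the orbital radius is inversely proportional to the mass of the orbiting particle. As a minimal statement of the scaling (my own addition, ignoring reduced-mass corrections):

$$
a \;=\; \frac{4\pi\varepsilon_0\hbar^2}{m e^2} \;\propto\; \frac{1}{m}
\qquad\Longrightarrow\qquad
a_\mu \;\approx\; \frac{m_e}{m_\mu}\, a_e \;\approx\; \frac{a_e}{207}.
$$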
The energies are also 207 times larger in magnitude: the 1s electronic orbital in hydrogen has an energy of -13.6 eV, whereas for "muonic hydrogen" the 1s muonic orbital has an energy of -2815 eV. These facts can be determined simply by solving the Schrödinger equation, except inputting a mass 207 times greater for the negatively-charged particle.

As you can see, this leads to a severe mismatch between the realms of electronics and muonics. There is no kind of shared electron-muon bond; at best, there would be a separate electron bond and a muon bond. However, because the energetics involved in the muon bond are so much higher, the system in the ground state is essentially equivalent to just having the muon bond, plus a small correction due to a grossly warped electron bond. The muon bond is formed normally, and the electron bond has to deal with whatever geometry is forced by the muons, however crazy.

As an example, imagine a neutral atom of regular hydrogen and a neutral atom of muonic hydrogen interacting. The muonic hydrogen atom basically pierces into the depths of the electron cloud of the regular hydrogen atom (the electron can hardly repel the muon efficiently, since the muon is so tightly bound to its nucleus), until both nuclei get quite close. Then the muon latches onto the other proton as well. The system is stabilised when the two hydrogen nuclei are approximately 0.5 pm apart, and the lone muon forms half of a muonic sigma bond.

From the "point of view" of the muon, the system looks like a singly-ionised muonic dihydrogen molecule ($\ce{\mu-H_2^+}$), with slight corrections due to the electron buzzing around, most of the time far away. However, from the "point of view" of the electron, it sees a bizarrely elongated nucleus (stretched from the typical ~1 fm sphere to a ~500 fm spindle) with a total charge of +1e, since the muon almost perfectly screens out one full positive charge. This spindly nucleus is still quite small relative to the electron cloud, so the electron likely behaves much as it would in a normal isolated hydrogen atom, with some corrections due to the non-spherical distribution of charge at its "nucleus" (the two protons bound by a muon). The electron still provides a slight amount of bonding between the protons, but much less than normal, due to the odd geometry forced by the muon.

Muonic chemistry in its full glory would be a fascinating (and extremely dangerous!) copy of the electronic chemistry we know, but the two would operate almost completely independently. The muon is actually one of the most stable subatomic particles, with a lifetime of 2.2 µs. That sounds like almost nothing, but it is many orders of magnitude more than what is necessary to observe "chemistry". Unfortunately, it's just too difficult to produce for how ephemeral it is...
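To put rough numbers on the scalings above, here is a minimal Python sketch (my own illustration, not part of the original answer). It does not solve anything; it simply applies the hydrogen-like formulas with the two-body reduced mass, which is why it yields a factor of about 186 rather than the naive 207, and hence a 1s energy of roughly -2.5 keV rather than -2.8 keV:

```python
# Minimal sketch (illustration only): hydrogen-like scaling of orbital size
# and binding energy when the electron is replaced by a muon. Using the
# reduced mass lowers the naive factor of ~207 to ~186, because the muon
# is heavy enough to drag the proton noticeably.

M_E = 9.1093837e-31     # electron mass, kg
M_MU = 1.8835316e-28    # muon mass, kg (~206.8 electron masses)
M_P = 1.6726219e-27     # proton mass, kg

A0_H = 52.9e-12         # Bohr radius of ordinary hydrogen, m
E1_H = -13.6            # 1s energy of ordinary hydrogen, eV

def reduced_mass(m1: float, m2: float) -> float:
    """Two-body reduced mass m1*m2/(m1+m2)."""
    return m1 * m2 / (m1 + m2)

# Effective mass ratio between the muon-proton and electron-proton systems.
scale = reduced_mass(M_MU, M_P) / reduced_mass(M_E, M_P)   # ~185.9

a0_muonic = A0_H / scale    # orbital radius scales as 1/mass
e1_muonic = E1_H * scale    # binding energy scales as mass

print(f"effective mass ratio: {scale:.1f}")              # ~185.9
print(f"muonic Bohr radius:   {a0_muonic*1e12:.3f} pm")  # ~0.285 pm
print(f"muonic 1s energy:     {e1_muonic:.0f} eV")       # ~-2529 eV
```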
School of Mathematics

Dr. Ben Goddard
Reader

Research Interests

My research combines mathematical modelling, numerics and rigorous analysis to tackle a range of interdisciplinary problems. These problems are generally complex (with phenomena that emerge from a collection of interacting objects) and multiscale (with important features at multiple scales, e.g. in time or space). My current research focuses on three main areas:

1) Complex fluids and soft matter: Complex fluids, such as colloids, often consist of microscopic particles suspended in a bath of many more, much smaller and lighter particles. Typical examples are paints and inks, blood and milk. Soft matter is a more general term, normally applied to any material which is easily deformed at room temperature; examples include gels, LCDs, sand and clouds. Such systems are complex due to their long-range interactions and environmental effects. Their multiscale nature arises from interesting dynamics on scales ranging from the size of the particles (typically nanometres to micrometres) up to the macroscale (centimetres or metres). The large number of particles and complex interactions render the full dynamics computationally intractable. I'm interested in obtaining accurate and efficient reduced models to describe the dynamics of such systems.

2) Quantum molecular dynamics: When studying the motion of nuclei in molecules, it is often possible to neglect the motion of electrons as, typically, they equilibrate on a much shorter timescale than the nuclei. This is due to the large difference between the masses of nuclei and electrons (nuclei are typically several thousand times heavier). However, in many interesting chemical processes, such as the photodissociation of ozone and the detection of light in the retina, this approximation breaks down and we must study the full systems. The complexity and multiscale nature of the problem are due to rapidly-oscillating, exponentially small wavepackets that describe the positions of the nuclei. This means that standard numerical algorithms are prohibitively expensive. Most successful approaches use heuristic basis sets and unconstrained approximations, meaning that systematic improvement of these methods is very difficult. In contrast, I'm interested in mathematically-motivated, simple models which give very accurate results for very little computational cost.

3) Electronic structure theory: The aim of electronic structure theory is to understand the arrangement of electrons in atoms and molecules, which in turn determines the chemistry of such systems. The link between mathematics and chemistry is much less well-established than those between, for example, mathematics and physics or biology. The complexity of such problems lies in the so-called 'problem of exponential scaling': adding just a single electron to a system significantly increases the numerical cost of standard algorithms. Fully (numerically) solving the underlying Schrödinger equation for even a single carbon atom (with six electrons) is essentially impossible; a rough illustration of this scaling is sketched below. In addition, the problems are multiscale, as we are not interested in the full energies of the systems but in energy differences, which are typically 3-6 orders of magnitude smaller. I aim to use mathematics to connect simple, intuitive chemistry models, such as those found in standard textbooks, to accurate results.
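As a back-of-the-envelope illustration of that exponential scaling (my own sketch; the grid resolution and storage format are assumptions, not taken from this page): an N-electron wavefunction depends on 3N spatial coordinates, so storing it on even a coarse grid with n points per coordinate requires n^(3N) values.

```python
# Back-of-the-envelope sketch of the 'problem of exponential scaling'
# (illustration only; grid resolution and storage format are assumed).
# An N-electron wavefunction psi(r_1, ..., r_N) depends on 3N coordinates,
# so a naive grid with n points per coordinate stores n**(3N) values.

BYTES_PER_VALUE = 16  # one complex double-precision number

def naive_grid_storage_gb(n_electrons: int, points_per_dim: int = 10) -> float:
    """Memory (in GB) to store an n_electrons wavefunction on a uniform grid."""
    values = float(points_per_dim) ** (3 * n_electrons)
    return values * BYTES_PER_VALUE / 1e9

for n_e in (1, 2, 4, 6):  # 6 electrons: a single carbon atom
    print(f"{n_e} electron(s): ~{naive_grid_storage_gb(n_e):.3g} GB")

# Approximate output, with only 10 grid points per coordinate:
#   1 electron(s): ~1.6e-05 GB   (a few kilobytes)
#   2 electron(s): ~0.016 GB
#   4 electron(s): ~1.6e+04 GB   (about 16 terabytes)
#   6 electron(s): ~1.6e+10 GB   (about 16 exabytes)
```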
Biographical Statement

I was born and grew up in Birmingham, UK, before completing MMath and PhD degrees in Mathematics at the University of Warwick. Around the end of my PhD, I spent 18 months at the Zentrum Mathematik at the Technische Universität München. Following that, I undertook postdocs at Warwick (Mathematics) and Imperial College London (Chemical Engineering).

I have Erdős number 5: Me -- Gero Friesecke -- Georg Dolzmann -- Vladimir Šverák -- David Preiss -- Paul Erdős, and Bacon number 4: Me --(G103)--> Patrick Niknejad --(Harry Potter)--> Daniel Radcliffe --(The Tailor of Panama)--> David Hayman --(Where the Truth Lies)--> Kevin Bacon. This gives me an Erdős-Bacon number of 9.

Previous Employment

Imperial College London (Chemical Engineering), Research Associate with Serafim Kalliadasis, 2010-2013.
University of Warwick (Mathematics), Postdoctoral Researcher with Volker Betz, 2007-2010.