93f1cea8b5a14067
Rotational transition

A rotational transition is an abrupt change in angular momentum in quantum physics. Like all other properties of a quantum particle, angular momentum is quantized, meaning it can only equal certain discrete values, which correspond to different rotational energy states. When a particle loses angular momentum, it is said to have transitioned to a lower rotational energy state. Likewise, when a particle gains angular momentum, a positive rotational transition is said to have occurred.

Rotational transitions are important in physics due to the unique spectral lines that result. Because there is a net gain or loss of energy during a transition, electromagnetic radiation of a particular frequency must be absorbed or emitted. This forms spectral lines at that frequency, which can be detected with a spectrometer, as in rotational spectroscopy or Raman spectroscopy.

Diatomic molecules

Molecules have rotational energy owing to rotational motion of the nuclei about their center of mass. Due to quantization, these energies can take only certain discrete values. A rotational transition thus corresponds to a transition of the molecule from one rotational energy level to another through gain or loss of a photon. Analysis is simple in the case of diatomic molecules.

Nuclear wave function

Quantum-theoretical analysis of a molecule is simplified by use of the Born–Oppenheimer approximation. Typically, rotational energies of molecules are smaller than electronic transition energies by a factor of m/M ≈ 10⁻³–10⁻⁵, where m is the electronic mass and M is a typical nuclear mass.[1] By the uncertainty principle, the period of a motion is of the order of Planck's constant h divided by its energy. Hence nuclear rotational periods are much longer than the electronic periods, so electronic and nuclear motions can be treated separately.

In the simple case of a diatomic molecule, the radial part of the Schrödinger equation for a nuclear wave function Fs(R), in an electronic state s, is written as (neglecting spin interactions)

$$\left[-\frac{\hbar^2}{2\mu R^2}\frac{\partial}{\partial R}\left(R^2\frac{\partial}{\partial R}\right)+\frac{\langle \Phi_s|N^2|\Phi_s\rangle}{2\mu R^2}+E_s(R)-E\right]F_s(\mathbf{R})=0$$

where μ is the reduced mass of the two nuclei, R is the vector joining the two nuclei, Es(R) is the energy eigenvalue of the electronic wave function Φs representing electronic state s, and N is the orbital angular momentum operator for the relative motion of the two nuclei, given by

$$N^2=-\hbar^2\left[\frac{1}{\sin\Theta}\frac{\partial}{\partial\Theta}\left(\sin\Theta\,\frac{\partial}{\partial\Theta}\right)+\frac{1}{\sin^2\Theta}\frac{\partial^2}{\partial\Phi^2}\right]$$

The total wave function for the molecule is

$$\Psi_s=F_s(\mathbf{R})\,\Phi_s(\mathbf{R},\mathbf{r}_1,\mathbf{r}_2,\ldots,\mathbf{r}_N)$$

where the ri are position vectors from the center of mass of the molecule to the ith electron. As a consequence of the Born–Oppenheimer approximation, the electronic wave function Φs is considered to vary very slowly with R. Thus the Schrödinger equation for the electronic wave function is first solved to obtain Es(R) for different values of R. Es then plays the role of a potential well in the analysis of the nuclear wave functions Fs(R).
[Figure: vector addition triangle for the orbital angular momentum of a diatomic molecule, with components from the orbital angular momentum of the nuclei and the orbital angular momentum of the electrons, neglecting coupling between electron and nuclear orbital motion and spin-dependent coupling. Since the angular momentum N of the nuclei is perpendicular to the internuclear vector R, the components of the electronic angular momentum L and the total angular momentum J along R are equal.]

Rotational energy levels

The first term in the above nuclear wave equation corresponds to the kinetic energy of the nuclei due to their radial motion. The term ⟨Φs|N²|Φs⟩/2μR² represents the rotational kinetic energy of the two nuclei, about their center of mass, in a given electronic state Φs. Its possible values are the different rotational energy levels of the molecule.

The orbital angular momentum for the rotational motion of the nuclei can be written as

$$\mathbf{N}=\mathbf{J}-\mathbf{L}$$

where J is the total orbital angular momentum of the whole molecule and L is the orbital angular momentum of the electrons. If the internuclear vector R is taken along the z-axis, the component of N along the z-axis, Nz, becomes zero, since

$$\mathbf{N}=\mathbf{R}\times\mathbf{P}$$

Since the molecular wave function Ψs is a simultaneous eigenfunction of J² and Jz,

$$J^2\,\Psi_s=J(J+1)\hbar^2\,\Psi_s$$

where J is called the rotational quantum number and can be a positive integer or zero, and

$$J_z\,\Psi_s=M_J\hbar\,\Psi_s$$

where −J ≤ MJ ≤ J. Also, since the electronic wave function Φs is an eigenfunction of Lz,

$$L_z\,\Phi_s=\pm\Lambda\hbar\,\Phi_s$$

Hence the molecular wave function Ψs is also an eigenfunction of Lz with eigenvalue ±Λħ. Since Lz and Jz are equal, Ψs is an eigenfunction of Jz with the same eigenvalue ±Λħ. As |J| ≥ Jz, we have J ≥ Λ. So the possible values of the rotational quantum number are

$$J=\Lambda,\ \Lambda+1,\ \Lambda+2,\ \ldots$$

Thus the molecular wave function Ψs is a simultaneous eigenfunction of J², Jz and Lz. Since the molecule is in an eigenstate of Lz, the expectation value of the components of L perpendicular to the z-axis (the internuclear line) is zero. Hence

$$\langle\Psi_s|L_x|\Psi_s\rangle=\langle L_x\rangle=0,\qquad \langle\Psi_s|L_y|\Psi_s\rangle=\langle L_y\rangle=0$$

and

$$\langle\mathbf{J}\cdot\mathbf{L}\rangle=\langle J_zL_z\rangle=\langle L_z^2\rangle$$

Putting all these results together,

$$\langle\Phi_s|N^2|\Phi_s\rangle F_s(\mathbf{R})=\langle\Phi_s|J^2+L^2-2\,\mathbf{J}\cdot\mathbf{L}|\Phi_s\rangle F_s(\mathbf{R})=\hbar^2\left[J(J+1)-\Lambda^2\right]F_s(\mathbf{R})+\langle\Phi_s|L_x^2+L_y^2|\Phi_s\rangle F_s(\mathbf{R})$$

The Schrödinger equation for the nuclear wave function can now be rewritten as

$$-\frac{\hbar^2}{2\mu R^2}\left[\frac{\partial}{\partial R}\left(R^2\frac{\partial}{\partial R}\right)-J(J+1)\right]F_s(\mathbf{R})+\left[E'_s(R)-E\right]F_s(\mathbf{R})=0$$

where

$$E'_s(R)=E_s(R)-\frac{\Lambda^2\hbar^2}{2\mu R^2}+\frac{1}{2\mu R^2}\langle\Phi_s|L_x^2+L_y^2|\Phi_s\rangle$$

E′s now serves as an effective potential in the radial nuclear wave equation.

Sigma states

Molecular states in which the total orbital angular momentum of the electrons is zero are called sigma states. In sigma states Λ = 0, and thus E′s(R) = Es(R).
Since nuclear motion for a stable molecule is generally confined to a small interval around R0, where R0 is the internuclear distance at which the potential Es(R) is minimal, the rotational energies are given by

$$E_r=\frac{\hbar^2}{2\mu R_0^2}\,J(J+1)=\frac{\hbar^2}{2I_0}\,J(J+1)=B\,J(J+1)$$

Here I0 is the moment of inertia of the molecule corresponding to the equilibrium distance R0, and B is called the rotational constant for a given electronic state Φs. Since the reduced mass μ is much greater than the electronic mass, the last two terms in the expression for E′s(R) are small compared to Es. Hence, even for states other than sigma states, the rotational energy is approximately given by the above expression.

Rotational spectrum

When a rotational transition occurs, there is a change in the value of the rotational quantum number J. The selection rules for rotational transitions are: when Λ = 0, ΔJ = ±1; when Λ ≠ 0, ΔJ = 0, ±1, since the absorbed or emitted photon can make equal and opposite changes in the total nuclear angular momentum and the total electronic angular momentum without changing the value of J.

The pure rotational spectrum of a diatomic molecule consists of lines in the far infrared or microwave region. The frequency of these lines is given by

$$\hbar\omega=E_r(J+1)-E_r(J)=2B(J+1)$$

Thus the values of B, I0 and R0 of a substance can be determined from its observed rotational spectrum.

References

1. ^ Chapter 10, Physics of Atoms and Molecules, B.H. Bransden and C.J. Joachain, Pearson Education, 2nd edition.
• B.H. Bransden and C.J. Joachain. Physics of Atoms and Molecules. Pearson Education.
• L.D. Landau and E.M. Lifshitz. Quantum Mechanics (Non-relativistic Theory). Reed Elsevier.
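As a rough numerical illustration of the formulas above, the rigid-rotor expressions turn directly into microwave line frequencies. The sketch below uses approximate textbook values for ¹²C¹⁶O (reduced mass and equilibrium distance are approximations, not taken from this article); the computed J = 0 → 1 line lands near the observed 115 GHz.

```python
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
H = 6.62607015e-34       # Planck constant, J*s
AMU = 1.66053906660e-27  # atomic mass unit, kg

def rotational_constant(mu_kg, r0_m):
    """B = hbar^2 / (2 * mu * r0^2), in joules (rigid-rotor approximation)."""
    return HBAR**2 / (2.0 * mu_kg * r0_m**2)

def line_frequency_hz(b_joule, j):
    """Frequency of the J -> J+1 line: h*nu = E_r(J+1) - E_r(J) = 2B(J+1)."""
    return 2.0 * b_joule * (j + 1) / H

# Reduced mass mu = m_C * m_O / (m_C + m_O) and r0 ~ 1.128 angstrom for CO.
mu = (12.0 * 15.995 / (12.0 + 15.995)) * AMU
B = rotational_constant(mu, 1.128e-10)
for j in range(3):
    print(f"J = {j} -> {j + 1}: {line_frequency_hz(B, j) / 1e9:.1f} GHz")
```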
fba72747305b4e66
The Quantum Challenge by George Greenstein

Oct 28, 2008 #1
Well, I bought this book about two months ago, and finally got around to reading past the first chapter or so. All I can say is, WOW. It's bloody brilliant! It actually gives you some examples and applications, and then shows you the mathematics and mechanics behind it (i.e. the Schrödinger equation isn't just thrown at you; Greenstein et al. explain why the mechanics work and then give you the equation). It's very well laid out. It does require some prerequisite knowledge, or a serious dedication to self-study, regarding some of the concepts, but it's very well put together.

Oct 29, 2008 #2 - George Jones (Staff Emeritus, Science Advisor, Gold Member)

Oct 29, 2008 #3 - Doc Al (Staff: Mentor)
Mine too. It looks very useful.

Oct 29, 2008 #4 - (Staff: Mentor)
I've had it for a couple of years, and like it a lot. It's not a normal QM textbook that focuses on the mathematical formalism, how to solve the Schrödinger equation, etc. Instead, it focuses on all the "fun stuff" that people here like to argue about endlessly: photons, Bell's Theorem, Schrödinger's Cat, etc. I think the people who can best take advantage of it would have had at least a brief introduction to the basics and formalism of QM. I'd love to use it as a textbook for a course on those topics that has our intro modern physics course as prerequisite. When I get a hole in my schedule, and can round up some students for it...

Oct 29, 2008 #5
Nice, if/when you get around to that, let me know, I'll be more than happy to help with anything. And yeah, some of the stuff does require a somewhat basic knowledge of QM or someone else who can help guide you, but it's the best book I've seen so far in not taking a strictly mathematical approach to QM (which helps, because I hadn't previously taken Linear Algebra or Partial Differentials, and have taught myself them using MIT OpenCourseWare, so I obviously like anything in QM that takes a less mathematical approach).
f4f4e75aed6d3a17
When I read descriptions of the many-worlds interpretation of quantum mechanics, they say things like "every possible outcome of every event defines or exists in its own history or world", but is this really accurate? This seems to imply that the universe only splits at particular moments when "events" happen. It also seems to imply that the universe only splits into a finite number of "every possible outcome."

I imagine things differently. Rather than splitting into a finite number of universes at discrete times, I imagine that at every moment the universe splits into an uncountably infinite number of universes, perhaps as described by the Schrödinger equation. Which interpretation is right? (Or otherwise, what is the right interpretation?) If I'm right, how does one describe such a vast space mathematically? Is this a Hilbert space? If so, is it a particular subset of Hilbert space?

No one has any justifiable unique answers to such questions. The many-worlds interpretation isn't an actual theory of physics, an actual set of rules, ideas, or equations. It's just a vague and, when looked at with any precision, meaningless and vacuous philosophical paradigm. Obviously, proper quantum mechanics doesn't imply any splitting whatsoever. Any rule for when a splitting occurs is bound to be unnatural. The only "splitting" that proper QM allows is an approximate one, given by decoherence: the moment when the chances of parts of $\psi$ to "re-interfere" in the future are negligible. – Luboš Motl Jul 21 '12 at 7:11

@LubošMotl I don't really understand your statement that "Obviously, proper quantum mechanics doesn't imply any splitting whatsoever" in this context. They are not explaining splitting, but the state-vector reduction/collapse of the wavefunction. I agree that the many-worlds interpretation is physically flawed and has no mathematical basis as a theory. However, interpretations like the Many-Minds/multi-consciousness interpretation do. Moreover, that particular theory is complete, well defined and cannot be disproved from a physical standpoint. Of course, this does not make it correct! – Killercam Jul 21 '12 at 10:43

2 Answers

The Many Worlds interpretation is popularly misunderstood. The wave function itself contains a spectrum of universes, one corresponding to each eigenvalue for a given operator. The "splitting" of the "many worlds" is represented by the time evolution of the wave function described by the Schrödinger equation. As Lubos mentions above, these "universes" only become separate through decoherence.

Consider, for example, a wave function in the position basis given by a delta function at x=0. This represents one universe. Now time-evolve the wave function using the Schrödinger equation. The delta function has now spread out a bit. It is peaked at x=0, but has non-zero values at x=+1 and x=-1. This represents the existence of universes in which the position of the particle is at x=0, x=+1, and x=-1. In some sense there are "more" universes at x=0 than at x=±1, because the wave function is more highly peaked at x=0. This is where some of the difficulty in the Many Worlds interpretation comes in: what ontology to use to describe the "splitting", "how many universes" are at x=0 vs x=±1, and so on. The main point I want to make is that the "splitting" is just an interpretation of what is happening with the evolution of the wave function according to the Schrödinger equation. Nothing "more" is actually happening.
You model the "splitting" using the tried-and-true Schrödinger evolution of the wave function.

You imply that there is a spectrum (a countable infinity) of "possible universes". But is it actually a continuum (an uncountable infinity) of "possible universes"? Can the delta function have non-zero values at locations everywhere between 0 and ±1? Or maybe a better example (since I don't understand the delta function): in the double-slit experiment, can't a particular photon hit the detector plane at any point on the plane? (Thus uncountably infinite possible universes.) – John Berryman Jul 21 '12 at 13:04

@John Berryman The word 'spectrum' does not imply a countable infinity. It is a continuum representing an uncountably infinite number of universes in the example I gave. You can think of a delta function as a very narrow spike. The Schrödinger equation time-evolves a narrow spike into a wider and wider Gaussian shape. In the example, in order to keep things simple, I approximated this as {-1, 0, 1} (a very rough approximation, but it serves to illustrate the point). – user1247 Jul 21 '12 at 19:05

Many-worlders won't tell you this dirty little secret, but how often splitting happens, and how many worlds there are, depends upon the choice of coarse graining and the coarse-graining resolution. No, it's not possible to ramp up the coarse graining all the way to the finest levels, because a decoherence/coherence threshold would be crossed. And no, there is no canonical coarse graining either. The preferred basis depends upon the environment. Always. What is the preferred basis for a closed self-contained universe?

A more accurate answer than "The preferred basis depends upon the environment. Always." would be that the supporters of the MWI haven't yet described any other mechanism by which it could arise, just as they haven't yet shown how the Born rule would emerge even for a finite system. – Niel de Beaudrap Jul 21 '12 at 11:52
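The spreading described in the first answer can be made quantitative: under the free-particle Schrödinger equation, a Gaussian packet of initial width σ0 broadens as σ(t) = σ0·sqrt(1 + (ħt/2mσ0²)²), a standard result. A minimal sketch with illustrative numbers (not taken from the thread):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def packet_width(sigma0_m, mass_kg, t_s):
    """Width of a free Gaussian wave packet after time t, the standard
    consequence of evolving an initially narrow packet with the free
    Schrodinger equation."""
    return sigma0_m * math.sqrt(
        1.0 + (HBAR * t_s / (2.0 * mass_kg * sigma0_m**2))**2)

m_e = 9.109e-31  # electron mass, kg
for t in (0.0, 1e-16, 1e-15, 1e-14):
    print(f"t = {t:.0e} s: width = {packet_width(1e-10, m_e, t):.2e} m")
```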
e90cc763740f50aa
Friday, October 29, 2010

Maybe I'm Against Humans. (A response to Miguel Guhlin and too many other well-meaning writers.)

Being neither rich nor powerful, I'm unqualified to comment on 'empowering' vs. 'domesticating' education…wait, I did redesign the world's most complex (and powerful) sensor-processor-effector system…at age 23. What the heck, my one cent:

We are 'creativity'-ing ourselves down the path of the Roman Empire. We are a nation where it's not important to walk from your touchdown to thank your blockers and focus ahead; it's how creative a dance you do in the end zone. (Yes, those athletes are conditioned, but which part do the children see every week?) Now it's to be unimportant to master math and logic, as long as we "create stuff", no matter how distracting that might be. Like the Roman citizens who grew bored with engineering and democracy and military art, turning instead to circuses and outdoing each other in bad poetry, we 'create' 500 bland television programs per hour, 24/365. We build 14,000 'apps' on top of Twitter alone. 250,000 for the iPhone. Every minute we upload another 24 hours of video to YouTube.

What do we know of the world? Our place in it? How many readers here know the fundamental difference between Shia and Sunni? Can describe the Iraqi and Afghan borders? Know the difference between a battalion and a brigade? Can guess the percentage of a school's budget spent on personnel? Know—really know—why Washington was considered the "Father of our Country"? Understand why we don't use much of the oil lying under our feet?

We need people who are productive and dependable. Especially when they are young and still learning what it is to be an adult, let alone lead adults. We need people who can care for the elderly and do repetitive research on sickle-cell anemia. We need people who will plant the seed each spring and gather the harvest each fall to feed a malnourished world.

We are, by the way, not as poor or unpowerful as you might think. Barack Obama is slave to his staff, cabinet, guards, and politicos. We have evenings and weekends free, can learn whatever we like, volunteer if we like to build parks, sing, deliver meals, guide youth groups, gather in spiritual need, organize a festival, build a business, golf, run. There is, true, slavery in having a family at eighteen or twenty when you have no skill or education. And there's the rub, because you will not have time to read to your children, speak with them, sing to them. And they too will not learn, will head to slavery. Unless great teachers intervene.

Great teachers don't teach you to be dangerous. All those dangerous people—they're the ones keeping the sub-par teachers in place, distracting funds and resources from those in need, muddling the debate, spreading false economics, electing status-quo leaders. The useful idiots gathering at G-20 meetings to protest…well, to protest something, they have no idea what.

Great teachers don't teach you to be creative, because no one is creative standing alone. All build on the shoulders of giants. It's getting harder to learn everything the giants have given us. To be truly powerful you must master accounting and capital asset modeling and something of proteomics. Of foreign policy, but also of the difficulties of leading and sustaining a platoon in the field. Of statistics…and of their limits. Of all the little things it takes to build something in your community.
One ought to have time to have gathered intelligence like Saddam Hussein's offer of $25,000 to the family of every suicide bomber in Palestine. That 1 in 1200 teachers is delicensed, compared to more like 1 in 100 doctors or lawyers. That churches built most of our universities and hospitals. That the local auto-body shop is funding many of the local scholarships and public activities. Things all learned over time, while being productive and dependable. While learning mental discipline, logical thought, patient disinterested analytical rigor.

Quadratic formulae, Schrödinger equations, and enantiomeric transformations are hardly passionless, obsolete areas of study. They are the stuff of stars, of philosophy, of digital and analog empowerment.

Washington, by the way (with von Steuben), transformed a creative, individualistic, and un-dangerous army into a productive, dependable one which could throw off a Despotic King.
a24bd61d8752af22
Does this recent Zeilinger group delayed-choice entanglement experiment imply backward-in-time influences? From the abstract: "This can also be viewed as 'quantum steering into the past'."

No, there is no retrocausal causation in the delayed-choice entanglement swapping experiment (or any other experiment or process in the Universe, for that matter); see [link] for a detailed explanation. Correlations between Alice, Bob, and Victor's outcomes may obviously be verified only after all of them complete their measurements. But one may easily prove – by a simple application of the completeness relation, i.e., the independence of an inner product of the choice of basis – that if Alice and Bob measure their particles before Victor, their outcomes (the probabilistic distributions governing them) will be totally unaffected by Victor's later decisions.

For this reason, the cause behind correlations in any group – for example the group of four photons in this experiment – is always hiding in the fact that these components have been in contact in the past. Correlation isn't causation, at least not a randomly picked causation, and there's never any causation between spacelike-separated events, and there's never any causation that would go backwards in time, either. Quantum mechanics fully respects this statement. Also, Heisenberg's equations make it totally obvious that the degree of entanglement between two isolated subsystems doesn't change with time. In fact, this experiment is really designed so that the polarizations of the photons don't change with time at all after the photons are produced in the special four-photon initial state. This makes it obvious that the results of the measurements don't depend on the timing of the measurements, and it makes it silly to be surprised by this independence of the timing.

In all such bizarre nonlocal or retrocausal interpretations of quantum measurements, the error hides in people's attempts to create a "classical model" in which objects already possess objective properties before they are measured. But there's no valid classical model that would coincide with the Universe, a system that respects inequivalent, quantum rules. One must carefully wait for the moment of the measurements, but whenever we learn the results of the measurements, the most accurate predictions of probabilities of new, future experiments have to take all the previously measured outcomes into account; we must switch to the appropriate conditional probabilities, which some people materialistically interpret as "collapsing the wave function". So if Victor does the experiment after Alice and Bob and he wants the most accurate predictions, he must "collapse the wave function" for his photons according to the results obtained by Alice and Bob. If he doesn't know these results, he won't be able to collapse them and he will only predict things according to the initial distribution. Whether he knows something or not, quantum mechanics unambiguously predicts that there will be correlations between the outcomes obtained by Alice, Victor, and Bob, and no retrocausal influences are needed in quantum mechanics to produce these – experimentally verified – predictions.

Interesting and informative answer. The very last sentence in the paper is this: "Some registered phenomena do not have a meaning unless they are put in relationship with other registered phenomena."
I like that because it sums up the situation pretty nicely, and not that differently from the situation with space-separated effects, where you can't see the need for spooky interactions until you correlate the final events. – Terry Bollinger Mar 24 '12 at 8:26

Is "backwards in time" another name for reversing the order of events compared to "forward in time"? – Physiks lover Sep 11 '12 at 11:05

Yes, "backwards in time" means that the cause (an event) takes place after its consequence (another event). – Luboš Motl Apr 6 '13 at 6:41

"...and there's never any causation between spacelike-separated events" The integral form of the Schrödinger equation is over all space. If there is no action at a distance, then AFAICS it is wrong. My understanding is that this was the basis of Einstein's issue with QM, and he (and others) designed the EPR experiment to prove QM was incorrect, but when it was performed it only confirmed QM.

Why on earth should the fact that one can integrate over all space imply that there is "action at a distance"? Care to share this particular "integral form" of the Schrödinger equation that necessitates such a conclusion? It does not matter whether one is expressing physical laws as derivatives or integrals. What matters is whether the Lagrangian/Hamiltonian is Lorentz covariant, and the Standard Model is. One can cook up effective theories that appear to violate this (i.e., take a nonrelativistic limit), but it is just an illusion of low speed where we cannot see the time delay in propagation. – Doug Packard Sep 11 '12 at 9:37

A pretty good non-technical summary of the experiment and its possible interpretations is given by Chad Orzel here: Entangled In the Past: "Experimental delayed-choice entanglement swapping"
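The no-signaling point in the first answer, that Alice's outcome statistics cannot depend on which measurement is later performed on the other particle, can be checked directly for a single entangled pair. A minimal two-qubit sketch (an illustration of the principle, not a model of the four-photon experiment):

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2) shared by Alice and Bob.
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(phi, phi)

def alice_z_distribution(rho, bob_basis):
    """Alice's Z-outcome probabilities after Bob measures in bob_basis.

    Summing over Bob's (unknown-to-Alice) results recovers Alice's
    marginal distribution, whatever basis Bob chose."""
    probs = np.zeros(2)
    for b in bob_basis:
        p_bob = np.kron(np.eye(2), np.outer(b, b.conj()))  # project Bob only
        post = p_bob @ rho @ p_bob
        for a in range(2):
            e = np.zeros(2)
            e[a] = 1.0
            p_alice = np.kron(np.outer(e, e), np.eye(2))
            probs[a] += np.real(np.trace(p_alice @ post))
    return probs

z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x_basis = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]
print(alice_z_distribution(rho, z_basis))  # [0.5 0.5]
print(alice_z_distribution(rho, x_basis))  # [0.5 0.5] -- same, as no-signaling requires
```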
a77decdf71964120
NEWTON, Ask A Scientist!

Name: Chris
Status: student
Age: N/A
Location: N/A
Country: N/A
Date: N/A

Which is larger, i or 2i?

Interesting question! Actually, inequalities aren't applied to complex numbers (as they are to the real numbers). Rather, one would "compare" absolute values or norms. For a given complex number z, the absolute value of z = the square root of (the square of the x-component + the square of the y-component). So... abs(i) = 1 and abs(2i) = 2, which shows 2i has the greater magnitude. It cannot be overemphasized, however, that the complex numbers themselves do not obey inequality properties.

Bill Robinson

2i is larger than i, but this statement is only meaningful on the imaginary axis. We normally draw real numbers on a horizontal axis (positive = right), and imaginary numbers on a vertical axis (positive = upwards). On that vertical axis, 2i is "greater" than i. The imaginary numbers are not really translatable to real numbers, except for the fact that multiplying or dividing one imaginary number by another produces a real number. You can't place these numbers on an everyday ruler, but their being "imaginary" does not diminish their importance. They are used routinely in electronics and mechanical engineering, where they are useful in perfectly describing vibrating or oscillating devices. They also show up in the Schrödinger equation, a basis of quantum physics. This is another domain that appears to "make no sense", and yet the non-intuitive techniques accurately predict the way the world works to the limits of our ability to measure things.

Paul Bridges
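A quick check in any language with built-in complex arithmetic mirrors both answers: magnitudes compare fine, while the numbers themselves support no order relation. In Python, for instance:

```python
i = 1j
print(abs(i), abs(2 * i))  # 1.0 2.0 -> 2i has the larger magnitude

try:
    i < 2 * i              # complex numbers define no '<' ordering
except TypeError as err:
    print("not comparable:", err)
```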
15d95d52e8aa1282
Fujitsu Limited
Chuo University

Fujitsu Supercomputer Achieves World Record in Computational Quantum Chemistry
Solves optimization problem to reveal the behavior of 3 key molecules, contributes to research in science and technology

Tokyo, May 28, 2010 — Fujitsu Limited and Chuo University of Japan today announced that a team of researchers(1) from Chuo University, Kyoto University, Tokyo Institute of Technology and Japan's Institute of Physical and Chemical Research (known as Riken) employed the T2K Open Supercomputer - which was delivered by Fujitsu to Kyoto University's Academic Center for Computing and Media Studies - to successfully compute with high precision, as a world first, an optimization problem revealing the molecular behavior of methyl radical (CH3), ammonia (NH3) and oxygen (O2).

This accomplishment paves the way for computing the behavior of complicated molecules that cannot be seen by the human eye, enabling researchers to gain a greater understanding of the behavior of water molecules, the properties of proteins, photosynthesis, and the mechanisms of superconductivity. It would also contribute to the development of new medicines and new materials. Furthermore, a wide range of potential applications is expected to emerge from this research, not only in the fields of physics and chemistry, but also in areas of engineering and the natural and social sciences, such as control design and signal/image processing.

Potential of Supercomputers

Supercomputers are computers capable of quickly performing large-scale and advanced computations that are difficult to solve using average computers. Supercomputers have received a great deal of attention as a tool for solving important issues facing human society, such as environmental problems and challenges in the medical and manufacturing fields. One reason supercomputers have become so important is their role in computer simulations. Computer simulations, which use computers to compute and reproduce various phenomena, have been called the "third pillar of science" alongside theory and experimentation. Computer simulation is becoming an indispensable tool in all fields of research and development, from basic research to manufacturing.

The T2K Open Supercomputer (Figure 1), which was delivered by Fujitsu to Kyoto University's Academic Center for Computing and Media Studies, is a computer equipped for handling large-scale advanced scientific computation.

[Figure 1: T2K Open Supercomputer and specifications]

Many of the physical and chemical phenomena surrounding us today are governed by an equation called the Schrödinger equation (Figure 2). By solving the Schrödinger equation, one can determine the state and energy of atoms and molecules, thereby allowing for an understanding of various phenomena. For example, the Schrödinger equation enables scientists to determine how carbon dioxide (CO2) is transformed into oxygen (O2), what happens when two forms of matter are mixed, and how to formulate effective medicines. Through the computation of the Schrödinger equation, it is possible to explain the mechanisms of such chemical phenomena without the need for experimentation.
In reality, however, if the Schrödinger equation is applied precisely, it can become extremely complex and can turn into an enormous equation that holds little hope of being computable. Thus far, the equation has only been employed in cases where it can be relatively easily computed.

[Figure 2: Schrödinger equation]

Previous Challenges

In 2001, Maho Nakata of Kyoto University (presently a researcher at Riken) and Professor Hiroshi Nakatsuji (presently of the Quantum Chemistry Research Institute) proposed a computational method for solving the optimization problem of the direct variational calculation of reduced density matrices, instead of solving the massive Schrödinger equation. This computational method involved the use of an optimization technique called Semidefinite Programming (SDP)(2). However, the results were limited to small atoms and molecules, and faster computation of SDP became the key to performing computations for larger molecules with complicated behavior in a short amount of time.

The research team from Chuo University, led by Professor Katsuki Fujisawa, developed the SDPARA software package, based on an advanced optimization algorithm, as a high-speed SDP computational method. By running large-scale tests of SDPARA on the T2K Open Supercomputer, the team was able for the first time ever to precisely compute the behavior of methyl radical (CH3), ammonia (NH3) and oxygen (O2). During the actual computation, the matrix for the largest molecule employed in this study - ammonia (NH3) - reached a size of 19,640 × 19,640, and therefore had too many elements to be processed in a practical amount of time using average computer systems (Figure 3). By employing a supercomputer, the team was able to solve the matrix in the computing time shown in Figure 4. For this computation, the T2K Open Supercomputer employed 128 nodes, utilizing a total memory volume of 4 terabytes and 2,048 cores.

[Figure 3: Successfully calculated massive semidefinite programming (SDP)]
[Figure 4: Computation time for massive-scale semidefinite programming (SDP) in the field of quantum chemistry]

Potential Applications

The research team succeeded, as a world first, in precisely computing the optimization problem (using SDP) that reveals the behavior of the molecules methyl radical (CH3), ammonia (NH3) and oxygen (O2). Because the methodology can compute the behavior of complicated molecules without the need for experimentation, it has the potential to be applied in a variety of fields, such as the development of new drugs and new materials, as well as applications in physics, chemistry and engineering. In addition, this research has opened up the possibility of using supercomputers for computations in the field of superconductivity - a feat which no computer has thus far been able to accomplish. Furthermore, the research is expected to contribute to the development of innovations that are presently impossible in the area of energy storage (power storage), and in the medical and electronics fields.

Future Initiatives

The research team will strive to contribute to the advancement of science and technology through research that leverages high-speed supercomputers. The large-scale supercomputer computations for this research have been supported by the Collaborative Research Program for Large-Scale Computation of ACCMS and IIMC, Kyoto University.
In addition, partial software development has been made possible through the Chuo University Grant for Special Research.

Glossary and Notes

(1) Team of researchers: Katsuki Fujisawa, Associate Professor, Department of Industrial and Systems Engineering, Chuo University; Makoto Yamashita, Assistant Professor, Department of Mathematical and Computing Sciences, Tokyo Institute of Technology Graduate School of Information Science and Engineering; Maho Nakata, Advanced Center for Computing and Communication, RIKEN; Kinji Kimura, Associate Professor, Graduate School of Informatics, Kyoto University.

(2) Semidefinite Programming (SDP): A currently evolving field that originated from linear programming, a mathematical technique. Active research on SDP is underway globally.

About Chuo University

Chuo University was founded as Igirisu Horitsu Gakko (the English Law School) in 1885 and has six faculties and their graduate schools, as well as three professional graduate schools (Chuo Graduate School of International Accounting, Chuo Law School, Strategic Management Course). The Faculty of Science and Engineering was founded in 1949 and has nine departments (Mathematics, Physics, Civil Engineering, Precision Mechanics, Electrical Electronic and Communication Engineering, Applied Chemistry, Industrial and Systems Engineering, Information and Systems Engineering, Biological Science). Number of students of the Faculty of Science and Engineering: 4,154 (as of May 1, 2010).

Press Contacts

Fujitsu Limited
Public and Investor Relations Division

Customer Contacts

Fujitsu Laboratories Ltd.
Design Innovations Lab.
IT Systems Lab.
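In outline, a semidefinite program of the kind SDPARA solves minimizes a linear objective over positive-semidefinite matrices subject to linear constraints. A toy instance, sketched here with the cvxpy modeling library (illustrative only; it is unrelated to SDPARA or to the actual quantum-chemistry matrices, which reached 19,640 × 19,640):

```python
import cvxpy as cp
import numpy as np

# Toy SDP: minimize trace(C @ X) over symmetric PSD X with trace(X) = 1.
# For this instance the optimum equals the smallest eigenvalue of C.
n = 4
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
C = (A + A.T) / 2.0                      # symmetric cost matrix

X = cp.Variable((n, n), symmetric=True)  # the matrix variable
constraints = [X >> 0,                   # X must be positive semidefinite
               cp.trace(X) == 1]         # a linear equality constraint
problem = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
problem.solve()

print("SDP optimum:    ", problem.value)
print("lambda_min of C:", np.linalg.eigvalsh(C).min())
```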
8e8c6572edf9a1dc
Interference of two particles - how does it work?

Sep 11, 2007 #1
I've been at this forum for some time, but this is my first post. The forum is so huge that I can't cope with reading even a small part of it :) My question is connected with interference in quantum mechanics. We have two particles passing through two different slits. They interfere - but what exactly does that mean? What kind of interaction is it? For water waves it seems simple, because their particles oscillate horizontally and when waves superimpose, it's intuitive what happens to them. But that's classical physics...

Sep 11, 2007 #2 (Science Advisor, Gold Member)
It is very similar to interference of light (water waves are a bit of a bad example, since you then have a medium). First of all, it is important to understand that there is no such thing as a "particle" in the classical sense in QM; e.g., an electron has properties that are sometimes best understood as "particle-like" and sometimes - as in the case of interference - "wave-like". The wavelength of a particle (known as the de Broglie wavelength) is inversely proportional to its momentum, and hence at a given speed to its mass, which is why you rarely see this effect in the macroscopic world; things that we can see with our bare eyes are simply too heavy for any of the "wave-like" properties to be visible. Also, note that there is NO PARADOX; the particle-wave duality is perhaps strange but is "natural" in the context of QM; it is only when we try to understand it in terms of our everyday experience that it becomes confusing. Hence, once you understand that all particles are also waves, interference phenomena are nothing surprising. In some experiments interference has even been seen between very heavy (relatively speaking) objects such as C60 molecules.

Sep 11, 2007 #3
There is interference even if only one particle at a time passes through the slits.

Sep 11, 2007 #4
OK, thanks, so I must have understood something wrong... I thought that the de Broglie wavelength and wave-particle duality belong to classical physics and in QM we consider light only as an elementary particle... I've read too much about the double-slit experiment :) On the other hand, is it false that the de Broglie wavelength is replaced by the wave function (maybe in other interpretations of QM)?

Sep 11, 2007 #5 (Science Advisor, Gold Member)
The de Broglie wavelength is something "physical" in the sense that it is a measure of the wavelength of a particle of mass m. A wavefunction is a mathematical construct which does not necessarily have anything to do with waves (in fact, you do not even have to use wavefunctions in QM; density matrices can be used instead and describe exactly the same thing); you can write down a wavefunction for ANY system you can think of (including macroscopic systems). The Schrödinger equation belongs to a class of mathematical partial differential equations called "wave equations" (which also includes PDEs for water waves); I guess this is where the name wavefunction comes from. Wavefunctions can also be written in terms of amplitudes and phases, but generally speaking you can't talk of the "physical size" of a wavefunction (except in problems where the position of something is actually involved).

Sep 11, 2007 #6
Thank you :)

Sep 14, 2007 #7
I am motivated to comment on your assertion that two particles pass through two slits and they interfere. I assume you are thinking of electrons.
The peculiar thing about electrons is that two different electrons DON'T interfere with each other. If they did, a typical atom (other than hydrogen) would be a confused jumble of interfering charge waves, radiating chaotically as they interfere with each other. It is precisely because each electron interferes only with itself that an atom can be stable.

Sep 14, 2007 #8 (Staff Emeritus, Science Advisor, Education Advisor)
You may want to read this thread: The interference pattern that we are familiar with is single-particle interference, not two-particle interference. Two-particle interference almost never occurs and exhibits a different property (see the Mandel reference that I cited in that thread).

Sep 14, 2007 #9
To ZapperZ: I was talking about electrons and it seems you are talking about photons. These cases are pretty different. If you allow the Schrödinger functions of two different electrons to interfere, you get a real mess. But two light waves add up and cancel each other according to the normal classical laws. So I don't see why this doesn't work down to the level of photons.

Sep 14, 2007 #10
Two light waves are one thing; two photons are another.

Sep 21, 2007 #11
By the way, if you read my post today you will realize that it has even been shown for a macroscopic object - a big silicon drop (1 million times larger than C60). Here is the link: http://www.physorg.com/news78650511.html Thanks, Viva Diva
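The mass/wavelength scale argument in post #2 is easy to make concrete with λ = h/(mv). A quick sketch comparing an electron with an everyday object (illustrative numbers):

```python
H = 6.62607015e-34  # Planck constant, J*s

def de_broglie_wavelength(mass_kg, speed_m_s):
    """lambda = h / (m * v), the de Broglie wavelength discussed above."""
    return H / (mass_kg * speed_m_s)

# An electron at ~1% of light speed vs. a 0.145 kg baseball at 40 m/s.
print(de_broglie_wavelength(9.109e-31, 3.0e6))  # ~2.4e-10 m, atomic scale
print(de_broglie_wavelength(0.145, 40.0))       # ~1.1e-34 m, far too small to observe
```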
2e2a397cf00e9c36
Modal Interpretations of Quantum Mechanics First published Tue Nov 12, 2002; substantive revision Wed Dec 12, 2012 The original “modal interpretation” of non-relativistic quantum theory was born in the early 1970s, and at that time the phrase referred to a single interpretation. The phrase now encompasses a class of interpretations, and is better taken to refer to a general approach to the interpretation of quantum theory. We shall describe the history of modal interpretations, how the phrase has come to be used in this way, and the general program of (at least some of) those who advocate this approach. 1. The origin of the modal approach In traditional approaches to quantum measurement theory a central role is played by the projection postulate, which asserts that upon measurement of a physical system its state will be projected (“collapses”) onto a state corresponding to the value found in the measurement. However, this postulate leads to many difficulties: What causes this discontinuous change in the physical state of a system? What exactly is a “measurement” as opposed to an ordinary physical interaction? The postulate is especially worrying when applied to entangled compound systems whose components are well-separated in space. For example, in the Einstein-Podolsky-Rosen (EPR) experiment there are strict correlations between two systems that have interacted in the past, in spite of the fact that the correlated quantities are not sharply defined in the individual systems. The projection postulate in this case implies that the collapse resulting from a measurement on one of the systems instantaneously defines a sharp property in the distant other system. A possible way clear of these problems was noticed by van Fraassen (1972, 1974, 1991), who proposed to eliminate the projection postulate from the theory. Others had made this proposal before, as Bohm (1952) in his theory (itself preceded by de Broglie's proposals from the 1920s), Everett (1957) in his relative-state interpretation and De Witt (1970) with the many-worlds interpretation. Van Fraassen's proposal was, however, different from these other approaches. It relied, in particular, on a distinction between what he called the “dynamical state” and the “value state” of a system at any instant: • The dynamical state determines what may be the case: which physical properties the system may possess, and which properties the system may have at later times. • The value state represents what actually is the case, that is, all the system's physical properties that are sharply defined at the instant in question. The dynamical state is just the quantum state of the ordinary textbook approach (a vector or density matrix in Hilbert space). For an isolated system, it always evolves according to the Schrödinger equation (in non-relativistic quantum mechanics): so the dynamical state never collapses during its evolution. The value state is (typically) different from the dynamical state. The general idea of this original proposal, and of modal interpretations in general, is that physical systems at all times possess a number of well-defined physical properties, i.e., definite values of physical quantities; these properties can be represented by the system's value state. Which physical quantities are sharply defined, and which values they take, may change in time. Empirical adequacy of course requires that the dynamical state generate the correct Born frequencies of observable quantities. 
An essential feature of this approach is that a system may have a sharp value of an observable even if the dynamical state is not an eigenstate of that same observable. The proposal thus violates the so-called “eigenstate-eigenvalue link”, which says that a system can only have a sharp value of an observable (namely, one of its eigenvalues) if its quantum state is the corresponding eigenstate. In the value state terminology, the eigenstate-eigenvalue link would say that a system has the value state corresponding to a given eigenvalue of a given observable if and only if its dynamical state is an eigenstate of the observable corresponding to that eigenvalue. This original modal approach accepts the “if” part, but denies the “only if” part. What are the possible “value states” for a given system at a given time? Van Fraassen stipulates the following restriction: propositions about a physical system cannot be jointly true, unless they are represented by commuting observables. In other words, the non-commutativity of observables imposes limits not on our knowledge about the properties of a system, but rather on the possibility of joint existence of properties, independently of our knowledge. Non-commuting quantities, like position and momentum, cannot jointly be well-defined quantities of a physical system. Empirical adequacy requires that, in cases of measurement, the actual value state of the apparatus be one describing a definite measurement result. Therefore, in these cases the dynamical state must generate a probability measure over exactly the set of possible measurement results. However, this original modal approach is more liberal in its assignment of possible value states, and according to many this does not yield a satisfactory account of measurements (see Ruetsche 1996). Van Fraassen's proposal is “modal” because it leads to a modal logic of quantum propositions. Indeed, the dynamical state in general only tells us what is possible. An important point is that one should not consider this modality as arising from an incompleteness of the description, which it is the aim of science to remove. The dynamical state provides us with possible physical properties of the system, and this is all the theory has to do. It is easy to see how, along the same lines as van Fraassen's ideas, a program could come into being for providing a more elaborate “realist” interpretation of quantum theory, a program to which we now turn. 2. General features of modal interpretations In the 1980s several authors presented realist interpretations which, in retrospect, can be regarded as elaborations or variations on the just-mentioned modal themes (for an overview and references, see Dieks and Vermaas 1998). In spite of the differences among them, all the modal interpretations agree on the following points: • The interpretation is based on the standard formalism of quantum mechanics, with one exception: the projection postulate is left out. • The interpretation is realist, in the sense that it assumes that quantum systems possess definite properties at all instants of time. • Quantum mechanics is taken to be fundamental: it applies both to microscopic and macroscopic systems. • The dynamical state of the system (pure or mixed) tells us what the possible properties of the system and their corresponding probabilities are. This is achieved by a precise mathematical rule that specifies a probabilistic relationship between the dynamical state and possible value states. 
• A quantum measurement is an ordinary physical interaction. There is no collapse of the dynamical state: the dynamical state always evolves unitarily according to the Schrödinger equation.

The Kochen-Specker theorem (1967) is a barrier to any realist classical-like interpretation of quantum mechanics, since it proves the impossibility of ascribing precise values to all physical quantities (observables) of a quantum system simultaneously, while preserving the functional relations between commuting observables. Therefore, realist non-collapse interpretations are committed to selecting a privileged set of definite-valued observables out of all observables. Each modal interpretation thus supplies a “rule of definite-value ascription” or “actualization rule”, which picks out, from the set of all observables of a quantum system, the subset of definite-valued properties. The question is: what should this actualization rule look like? Since the mid-1990s a series of approaches faced this question (Clifton 1995a,b; Dickson 1995a,b; Dieks 1995). Each one of them proposed a group of conditions that the set of definite-valued properties should obey, and characterized this set in terms of the dynamical state |φ⟩ of the system. The common result was that the possible value states of the components of a two-part composite system are given by the states occurring in the Schmidt (bi-orthogonal) decomposition of the dynamical state, or, equivalently, by the projectors occurring in the spectral decomposition of the density matrices representing the partial systems (obtained by partial tracing); see Section 4 for more details.

The definite-valued properties have also been characterized somewhat differently (Bub and Clifton 1996; for an improved version, see Bub, Clifton and Goldstein 2000), that is, in terms of the quantum state |φ⟩ plus a “privileged observable” R, which is privileged in the sense that it represents a property that is always definite-valued (see also Dieks 2005, 2007). On this basis, Bub (1992, 1994, 1997) suggests that with hindsight a number of traditional interpretations of quantum theory can be characterized as modal interpretations. Notable among them are the Dirac-von Neumann interpretation, (what Bub takes to be) Bohr's interpretation, and Bohm's theory. Bohm's theory is a modal interpretation in which the privileged observable R is the position observable.

3. Atomic modal interpretation

The Hilbert space of the universe Huniv, like any Hilbert space, can be factorized in countless ways. If one supposes that each factorization defines a legitimate set of subsystems of the universe, the multiple factorizability implies that there exists a multiplicity of ways of defining the building blocks of nature. If the properties (value states) of all these quantum systems are defined by means of the partial trace with respect to the rest of the universe (see later for more details), it turns out that a contradiction of the Kochen-Specker type arises (Bacciagaluppi 1995). The Atomic Modal Interpretation (AMI, Bacciagaluppi and Dickson 1999) tries to overcome this obstacle by assuming that there is in nature a fixed set of mutually disjoint atomic quantum systems Sj that constitute the building blocks of all the other quantum systems. From the mathematical point of view, this means that the Hilbert space Huniv of the entire universe can only be meaningfully factorized in a single way, which defines a preferred factorization.
If each atomic quantum system Sj is represented by its corresponding Hilbert space Hj, then the Hilbert space Huniv of the universe must be written as Huniv = H1 ⊗ H2 ⊗ ... ⊗ Hj ⊗ ... The main appeal of this idea is that it is in consonance with the standard model of particle physics, where the fundamental building blocks of nature are the elementary particles, e.g., quarks, electrons, photons, etc., and their interactions. The property ascription to the atomic quantum systems in the AMI further follows the general idea of modal interpretations, that is, the ascription depends via a fixed rule on the dynamical state of the system. The main challenge for the AMI is to justify the assumption that there is a preferred partition of the universe and to provide some idea about what this factorization should look like.

The AMI also faces a conceptual problem. In this interpretation, a non-atomic quantum system Sσ, defined as a composite of atomic quantum systems, does not necessarily have properties that correspond to the outcomes of measurements. The reason is that the system Sσ might be in the quantum state ρσ with an eigenprojector ∏σ such that Tr(ρσσ) = 1. This implies that if one measured the property represented by ∏σ, one would obtain a positive outcome with probability 1. But it may be the case that the projector ∏σ is not a composite of atomic properties and, therefore, according to the AMI, it is not a property possessed by the composite quantum system Sσ.

Two answers to this conceptual difficulty have been proposed. The first allows the existence of dispositional properties in addition to ordinary properties (Clifton 1996). According to the second answer, the projector ∏σ of the composite system Sσ shows that Sσ has a collective dynamical effect on the measurement device, that is, an effect that cannot be explained by the action of the atomic components (Dieks 1998). In other words, the composite quantum system, when interacting with its environment, can behave as a collective entity, screening off the contribution of the atomic quantum systems. This means that sometimes a non-atomic quantum system Sσ may be taken as if it were an atomic quantum system within the framework of a coarse-grained description.

4. Biorthogonal-decomposition and spectral-decomposition modal interpretations

In the biorthogonal-decomposition interpretation (BDMI, sometimes known as the “Kochen-Dieks modal interpretation”, Kochen 1985; Dieks 1988, 1989a,b, 1994a,b), the definite-valued observables are picked out by the biorthogonal (Schmidt) decomposition of the pure quantum state of the system:

• Biorthogonal Decomposition Theorem: Given a vector |ψ⟩ in a tensor-product Hilbert space H1 ⊗ H2, there exist bases {|ai⟩} and {|pi⟩} for H1 and H2 respectively, such that |ψ⟩ can be written as a linear combination of terms of the form |ai⟩ ⊗ |pi⟩. If the absolute values (moduli) of the coefficients in this linear combination are all unequal, then the bases are unique (see, for example, Schrödinger 1935 for a proof).

In quantum mechanics the theorem means that, given a composite system consisting of two subsystems, its state picks out (in many cases, uniquely) a basis for each of the subsystems. According to the BDMI, those bases generate the definite-valued properties (the value states) of the corresponding subsystems. The BDMI is particularly appropriate to account for quantum measurement.
Let us consider an ideal measurement under the standard von Neumann model, according to which a quantum measurement is an interaction between a system S and a measuring apparatus M. Before the interaction, M is prepared in a ready-to-measure state |p0⟩, an eigenvector of the pointer observable P of M, and the state of S is a superposition of the eigenstates |ai⟩ of an observable A of S. The interaction introduces a correlation between the eigenstates |ai⟩ of A and the eigenstates |pi⟩ of P:

0⟩ = ∑i ci |ai⟩ ⊗ |p0⟩ → |ψ⟩ = ∑i ci |ai⟩ ⊗ |pi

In this case, according to the BDMI prescription, the preferred context of the measured system S is defined by the set {|ai⟩} and the preferred context of the measuring apparatus M is defined by the set {|pi⟩}. Therefore, the pointer position is a definite-valued property of the apparatus: it acquires one of its possible values (eigenvalues) pi. And analogously in the measured system: the measured observable is a definite-valued property of the measured system, and it acquires one of its possible values (eigenvalues) ai.

In spite of the fact that this modal interpretation is characterized by the central role played by the biorthogonal decomposition, two different versions can be distinguished. One of them adopts a metaphysics in which all properties are relational and, as a consequence, the fact that the application of the interpretation is restricted to subsystems of a two-component compound system is not a problem (Kochen 1985). This relation has been called “witnessing”: properties are not possessed by the system absolutely, but only when the system is “witnessed” by another system. Consider the measurement described above: the pointer “witnesses” the value acquired by the measured observable of the measured system. By contrast, according to the other version (Dieks 1988, 1989a,b) the properties ascribed to the system do not have a relational character. This proposal therefore faces consistency questions about the assignment of definite values to observables according to different ways of splitting up the total system into components.

Consider, for example, the three-component composite system αβχ. We could apply the biorthogonal decomposition theorem to the two-component system (i) α(βχ), or (ii) β(χα), or (iii) χ(αβ). Suppose that, as a result of this, in case (i) the system α has the definite-valued property P, in case (ii) the system β has the definite-valued property Q, and in case (iii) the system αβ has the definite-valued property R. How do the definite-valued properties of α and β relate to those of αβ? Are the definite-valued properties of system αβ P&Q, or R, or both? This problem was addressed by different authors during the 1990s (see Vermaas 1999; Bacciagaluppi 1996). This work led to the spectral-decomposition modal interpretation (SDMI, sometimes known as the “Vermaas-Dieks modal interpretation”, Vermaas and Dieks 1995), a generalization of the BDMI to mixed states.
The SDMI is based on the spectral decomposition of the reduced density operator: the definite-valued properties ∏i of a system and their corresponding probabilities Pri are given by the non-zero diagonal elements of the spectral decomposition of the system's state,

ρ = ∑i αii,     Pri = Tr(ρ∏i)

This new proposal matches the old one in the cases where the old one applies, and generalizes it by fixing the definite-valued properties in terms of multi-dimensional projectors when the biorthogonal decomposition is degenerate: definite-valued properties need not always be represented by one-dimensional projectors; higher-dimensional subspaces of the Hilbert space can also occur.

The SDMI also has a direct application to the measurement situation. Consider a quantum measurement as described above, where the reduced states of the measured system S and the measuring apparatus M are

ρrS = Tr(M)|ψ⟩⟨ψ| = ∑i |ci|² |ai⟩⟨ai| = ∑i |ci|²ia
ρrM = Tr(S)|ψ⟩⟨ψ| = ∑i |ci|² |pi⟩⟨pi| = ∑i |ci|²ip

According to the SDMI, the preferred context of S is defined by the projectors ∏ia and the preferred context of M is defined by the projectors ∏ip. Therefore, also in the SDMI, the observables A of S and P of M acquire actual definite values, whose probabilities are given by the diagonal elements of the diagonalized reduced states.

The SDMI faces the same difficulty as the non-relational version of the BDMI: the fact that a system can be decomposed in a variety of different ways. In particular, the factorization of a given Hilbert space H into two factors, H = H1 ⊗ H2, can be “rotated” to produce different factorizations H′ = H1′ ⊗ H2′. Are we to apply the SDMI to each such factorization? How are the results related, if at all? A theorem due to Bacciagaluppi (1995, see also Vermaas 1997) shows, in essence, that if one applies the SDMI to the “subsystems” obtained in every factorization and insists that the definite-valued properties so obtained are not relational, then one is led to a mathematical contradiction of the Kochen-Specker variety. In response, one could adopt the view that subsystems have their definite-valued properties “relative to a factorization”; we will come back to this issue below.

Healey (1989) was also among the first to make use of the biorthogonal decomposition theorem, developing these ideas in a somewhat different direction. His main concern was the apparent non-locality of quantum mechanics. Healey's intuition about the way a modal interpretation based on the biorthogonal decomposition theorem would apply to, say, an EPR experiment is to implement the idea that an EPR pair possesses a “holistic” property; this can then explain why the apparatus on one side of the experiment acquires a property that is correlated with the result on the other side. In Healey's proposal, the biorthogonal decomposition theorem is used, but the set of possible properties is subsequently modified in order to fulfill a variety of desiderata. The first is consistency: the aim is to avoid Kochen-Specker-type results. A second is to maintain a plausible theory of the relationship between composite systems and their subsystems. A third is to maintain a plausible account of the relations among definite-valued properties at a given time. A fourth is to maintain a plausible account of the relations among definite-valued properties at different times. The structure of definite-valued properties that emerges from these conditions is extremely complicated.
Some progress has been made since Healey's book was published (see for example Reeder and Clifton 1995) but, in general, it remains difficult to see what the set of definite-valued properties is according to his approach.

5. Non-ideal measurements

Above we suggested that the BDMI and the SDMI solve the measurement problem in a particularly direct way. This is right in the case of the ideal von Neumann measurement, as explained in the previous section, where the eigenstates |ai⟩ of an observable A of the measured system S are perfectly correlated with the eigenstates |pi⟩ of the pointer P of the measuring apparatus M. However, an ideal measurement is a situation that can never be achieved in practice: the interaction between S and M never introduces a completely perfect correlation. Two kinds of non-ideal measurements are usually distinguished in the literature:

• Imperfect measurement (first kind): ∑i ci |ai⟩ ⊗ |p0⟩ → ∑ij dij |ai⟩ ⊗ |pj⟩ (in general, dij ≠ 0 for i ≠ j)

• Disturbing measurement (second kind): ∑i ci |ai⟩ ⊗ |p0⟩ → ∑i ci |aid⟩ ⊗ |pi⟩ (in general, ⟨aid|ajd⟩ ≠ δij)

Note, however, that disturbing measurements can be rewritten as imperfect measurements (and vice versa). Imperfect measurements pose a challenge to the BDMI and the SDMI, since their rules for selecting the definite-valued properties do not pick out the right properties for the apparatus in the imperfect case (see Albert and Loewer 1990, 1991, 1993; also Ruetsche 1995). An example that clearly brings out the difficulties introduced by non-ideal measurements was formulated in the context of Stern-Gerlach experiments (Elby 1993). This argument uses the fact that the wavefunctions in the z-variable typically have infinite “tails” that introduce non-zero cross-terms; therefore, the “tail” of the wavefunction of the “down” beam may produce a detection in the upper detector, and vice versa (see Dickson 1994 for a detailed discussion). In fact, if the biorthogonal decomposition is applied to the non-perfectly correlated state ∑ij dij |ai⟩ ⊗ |pj⟩ = ∑i ci′ |ai′⟩ ⊗ |pi′⟩, according to the BDMI the result does not select the pointer P as a definite-valued property, but a different observable P′ with eigenstates |pi′⟩.

In such a case, in which the definite-valued properties selected by a modal interpretation differ from those expected, the question arises how different they are. In the case of an imperfect measurement, it may be assumed that the dij with i ≠ j are small; then the difference may also be small. But in the case of a disturbing measurement, the dij with i ≠ j need not be small and, as a consequence, the disagreement between the modal property assignment and the experimental result may be unacceptable (see a full discussion in Bacciagaluppi and Hemmo 1996). This fact has been considered a “silver bullet” for killing the modal interpretations (Harvey Brown, cited in Bacciagaluppi and Hemmo 1996).

There is another important problem related to non-ideal measurements. When the final state of the composite system (measured system plus measuring device) is very nearly degenerate when written in the basis given by the measured observable and the apparatus's pointer (that is, when the probabilities for the various results are nearly equal), the spectral decomposition does not, in general, select definite-valued properties close to those ideally expected.
In fact, the observables so selected may be incompatible (non-commuting) with the observables that we expect on the basis of observation (Bacciagaluppi and Hemmo 1994, 1996). In order to face the problems that non-ideal measurements pose for the BDMI and the SDMI, several authors have appealed to the phenomenon of decoherence; this will be discussed below.

6. Properties of composite systems

Let us take a composite system αβ, whose component subsystems α and β are represented by the Hilbert spaces Hα and Hβ, respectively, and consider a property represented by the projector ∏α defined on Hα. It is usual to assume that ∏α represents the same property as that represented by ∏α ⊗ Iβ defined on Hα ⊗ Hβ, where Iβ is the identity operator on Hβ. This assumption is based on the observational indistinguishability of the magnitudes represented by ∏α and ∏α ⊗ Iβ: if the ∏α-measurement has a certain outcome, then the ∏α ⊗ Iβ-measurement has exactly the same outcome. The question is then: if the rules of the BDMI and the SDMI applied to α assign a value to ∏α, do those rules applied to the composite system αβ assign the same value to ∏α ⊗ Iβ (a condition known as Property Composition), and vice versa (Property Decomposition)? The answer to this question is negative: the BDMI and the SDMI violate Property Composition and Property Decomposition (for a proof, see Vermaas 1998).

Of course, if one maintains that the projectors ∏α and ∏α ⊗ Iβ represent the same property, the violation of Property Composition and Property Decomposition is a serious problem for any interpretation. This is the position adopted by Arntzenius (1990), who judges this violation to be bizarre, since it assigns different truth values to propositions like ‘the left-hand side of a table is green’ and ‘the table has a green left-hand side’, which are normally not distinguished; a similar argument is put forward by Clifton (1996, see also Clifton 1995c). However, Vermaas (1998) argues that the observational indistinguishability of the magnitudes represented by ∏α and ∏α ⊗ Iβ does not force one to consider these two projectors as representing the same property: in fact, they are distinguishable from a theoretical viewpoint, since they are defined on different Hilbert spaces. Moreover, he argues that the examples developed by Arntzenius and Clifton sound bizarre precisely in the light of Property Composition and Property Decomposition. But in the quantum realm we must accept that the question of which properties are possessed by a system and the question of which are possessed by its subsystems are different questions: the properties of a composite system αβ do not reveal information about the properties of the subsystem α, and vice versa. Vermaas concludes that the tenet that ∏α and ∏α ⊗ Iβ do represent the same property can be viewed as an addition to quantum mechanics, one that can be denied, as, for instance, van Fraassen (1991) did.

7. Dynamics of properties

As we have seen, modal interpretations intend to provide, for every instant, a set of definite-valued properties and their probabilities. Some advocates of modal interpretations may be willing to leave the matter, more or less, at that. Others take it to be crucial for any modal interpretation that it also answer questions of the form: given that the property P of a system has the actual value α at time t0, what is the probability that its property P′ has the actual value β at time t1 > t0? In other words, they want a dynamics of actual properties. There are arguments on both sides.
Those who argue for the necessity of such a dynamics maintain that we have to ensure that the trajectories of actual properties really are, at least for macroscopic objects, as we see them to be, i.e., like the records contained in memories. For example, we should require not only that the book at rest on the desk possess a definite location, but also that, if undisturbed, its location relative to the desk not change in time. Accordingly, one cannot get away with simply specifying the definite properties at each instant of time. One also needs to show that this specification is at least compatible with a reasonable dynamics; better still, to specify this dynamics explicitly. Those who consider a dynamics of actual properties to be superfluous reply that such a dynamics is more than what an interpretation of quantum mechanics needs to provide: memory contents for each instant are enough to make empirical adequacy possible.

As pointed out by Ruetsche (2003), in this debate about the need for a dynamics of actual properties it is important whether the modal interpretation is viewed as leading to a hidden-variables theory, in which value states are added as hidden variables to the original formalism in order to obtain a full description of the physical situation, or rather as only equipping the original formalism with a new semantics. In the first approach one would expect a full dynamics of actual properties; in the second this is not so clear.

Of course, modal interpretations do admit a trivial dynamics, namely, one in which there is no correlation from one time to the next. In this case, the probability of a transition from the property P having the actual value α at t0 to the property P′ having the actual value β at t1 > t0 is just the single-time probability for P′ having β at t1. However, this dynamics is unlikely to interest those who feel the need for a dynamics at all.

Several researchers have contributed to the project of constructing a more interesting form of dynamics for modal interpretations (see Vermaas 1996, 1998). An important account is due to Bacciagaluppi and Dickson (1999, see also Bacciagaluppi 1998). That work shows the most significant challenges that the construction of a dynamics of actual properties must face. The first challenge is posed by the fact that the set of definite-valued properties (let us call it ‘S’) may change over time. One therefore has to define a family of maps, each one a one-to-one map from S0 at time t0 to the corresponding St at time t, for any time. With such a family of maps, one can effectively define conditional probabilities within a single state space, and then translate them into “transition” probabilities. For this technique to work, St must have the same cardinality at all times. However, in general this is not the case: for instance, in the SDMI, the number of different projectors appearing in the spectral decomposition of the density matrix may vary with time. A way out of this is to augment S at each time so that its cardinality matches the highest cardinality that S ever achieves. Of course, one hopes to do so in a way that is not completely ad hoc. For example, in the context of the SDMI, Bacciagaluppi, Donald and Vermaas (1995) show that the “trajectory” through Hilbert space of the spectral components of the reduced state of a physical system will, under reasonable conditions, be continuous, or have only isolated discontinuities, so that the trajectory can be naturally extended to a continuous one (see also Donald 1998).
This result suggests a natural family of maps of the kind discussed above: map each spectral component at one time to its unique continuously evolved component at later times. The second challenge to the construction of a dynamics arises from the fact that one wants to define transition probabilities over infinitesimal units of time, and then derive the finite-time transition probabilities from them. Bacciagaluppi and Dickson (1999) argue that, by adapting results from the theory of stochastic processes, one can show that this procedure can, more or less, be carried out for modal interpretations of at least some varieties. Finally, one must actually define infinitesimal transition probabilities that give rise to the proper quantum-mechanical probabilities at each time. Following earlier papers by Bell (1984), Vink (1993) and others, Bacciagaluppi and Dickson (1999) define an infinite class of such infinitesimal transition probabilities, all of which generate the correct single-time probabilities, which arguably are all we can really test. However, Sudbery (2002) has contended that the form of the transition probabilities would be relevant to the precise form of spontaneous decay or the “Dehmelt quantum jumps”; he independently developed the dynamics of Bacciagaluppi and Dickson and applied it in such a way that it leads to the correct predictions for these experiments. Gambetta and Wiseman (2003, 2004) developed a dynamical modal account in the form of a non-Markovian process with noise, also extending their approach to positive operator-valued measures (POVMs).

8. Perspectival modal interpretation

As we have seen, both the SDMI and the non-relational version of the BDMI have to face the problem of the multiple factorizability of a given Hilbert space: if the definite-valued properties are monadic (i.e., non-relational), both interpretations lead to a Kochen-Specker-type contradiction (Bacciagaluppi 1995). This points in the direction of an interpretation that makes properties relational, in this case relative to a factorization. Extending this idea, a perspectival modal interpretation (PMI, Bene and Dieks 2002) was developed, in which the properties of a physical system have a relational character and are defined with respect to another physical system that serves as a “reference system” (see Bene 1997). This interpretation is similar in spirit to the idea that systems have properties as “witnessed” by the rest of the universe (Kochen 1985). However, the PMI goes further by defining states of a system not only with respect to the universe, but also with respect to arbitrary larger systems.

The PMI is closely related to the SDMI, since similar rules are used to assign properties to quantum systems. In the PMI, the state of any system S requires the specification of a “reference system” R with respect to which the state is defined: this state of S with respect to R is denoted by ρRS. In the special case in which R coincides with S, the state ρSS is called “the state of S with respect to itself”. If the system S is contained in a system A, the state ρAS is defined as the density operator that can be derived from ρAA by taking the partial trace over the degrees of freedom in A that do not pertain to S:

ρAS = Tr(A\S) ρAA

With these definitions, the point of departure of the PMI is the quantum state of the whole universe with respect to itself, which is assumed to be a pure state ρUU = |ψ⟩⟨ψ| that evolves unitarily according to the Schrödinger equation.
For any system S contained in the universe, its state with respect to itself, ρSS, is postulated to be one of the projectors of the spectral resolution of

ρUS = Tr(U\S) ρUU = Tr(U\S) |ψ⟩⟨ψ|

In particular, if there is no degeneracy among the eigenvalues of ρUS, these projectors are one-dimensional and ρSS is the one-dimensional projector |ψS⟩⟨ψS|. Within this PMI conceptual framework it can be shown that a system may be localized from the perspective of one observer and, nevertheless, delocalized from a different perspective. But it also follows that observers who look at the same macroscopic object, at the same time and under identical circumstances, will see it (practically) at the same spot. The core idea of this interpretation is that all the different relational descriptions, given from different perspectives, are equally objective and all correspond to physical reality (which has a relational character itself). We cannot explain the relational states by appealing to a definition in terms of more basic non-relational states.

Further analysis shows that in this interpretation EPR-type situations can be understood in a basically local manner. Indeed, the change in the relational state of particle 2 with respect to the 2-particle system can be understood as a consequence of the change in the reference system brought about by the local measurement interaction between particle 1 and the measuring device. This local measurement is responsible for the creation of a new perspective, and from this new perspective there is a new relational state of particle 2 (see also Dieks 2009). The PMI agrees with Bohr's qualitative argument that any reasonable definition of physical reality in the quantum realm should include the experimental setup. However, the PMI is more general in the sense that the state of a system is defined with respect to any larger physical system, not necessarily an instrument. This removes the threat of subjectivism, since the relational states follow unambiguously from the quantum formalism and the physics of the situation.

It is interesting to consider the connections between the PMI and other relational proposals. For instance, Berkovitz and Hemmo (2006) explore the prospects of a relational modal interpretation in the relativistic case (we will come back to this point below). In turn, Rovelli and coworkers propose an explicit ‘relational quantum mechanics’ that emphasizes the possibility of different descriptions of a physical system depending on the perspective (Rovelli 1996; Rovelli and Smerlak 2007; Laudisa and Rovelli 2008; see also van Fraassen 2010). In spite of the points of contact between the PMI and Rovelli's relational interpretation, there are significant differences. In Rovelli's proposal, the concepts of measurement interaction and of definite outcomes of measurements are primary; moreover, the state has to be updated every time a measurement event occurs and, as a consequence, it changes discontinuously with every new event. By contrast, the PMI is a realist interpretation in which a measurement is nothing but a quantum interaction, and in which unitary evolution is the main dynamical principle, also when systems interact (see Dieks 2009).

9. Modal-Hamiltonian interpretation

As Bub (1997) points out, in most modal interpretations the preferred context of definite-valued observables depends on the state of the system.
An exception is Bohmian mechanics, in which the preferred context is a priori defined by the position observable; in this case, property composition and property decomposition hold. But this is not the only reasonable possibility for a modal interpretation with a fixed preferred observable. In fact, the modal-Hamiltonian interpretation (MHI; Lombardi and Castagnino 2008; Ardenghi, Castagnino, and Lombardi 2009; Lombardi, Castagnino, and Ardenghi 2010; Ardenghi and Lombardi 2011) endows the Hamiltonian of a system with a determining role, both in the definition of systems and subsystems and in the selection of the preferred context. The MHI is based on the following postulates:

• Systems postulate (SP): A quantum system S is represented by a pair (O, H) such that (i) O is a space of self-adjoint operators on a Hilbert space, representing the observables of the system, (ii) H ∈ O is the time-independent Hamiltonian of the system S, and (iii) if ρ0 ∈ O′ (where O′ is the dual space of O) is the initial state of S, it evolves according to the Schrödinger equation.

Although any quantum system can be decomposed into parts in many ways, according to the MHI a decomposition leads to parts which are also quantum systems only when the components' behaviors are dynamically independent of each other, that is, when there is no interaction among the subsystems:

• Composite systems postulate (CSP): A quantum system represented by S: (O, H), with initial state ρ0 ∈ O′, is composite when it can be partitioned into two quantum systems S1: (O1, H1) and S2: (O2, H2) such that (i) O = O1 ⊗ O2, and (ii) H = H1 ⊗ I2 + I1 ⊗ H2 (where I1 and I2 are the identity operators in the corresponding tensor-product spaces). In this case, we say that S1 and S2 are subsystems of the composite system S = S1 ∪ S2. If the system is not composite, it is elemental.

With respect to the preferred context, the basic idea of the MHI is that the Hamiltonian of the system defines actualization. Any observable that does not have the symmetries of the Hamiltonian cannot acquire a definite actual value, since this actualization would break the symmetry of the system in an arbitrary way.

• Actualization rule (AR): Given an elemental quantum system represented by S: (O, H), the actual-valued observables of S are H and all the observables commuting with H and having, at least, the same symmetries as H.

The selection of the preferred context exclusively on the basis of a preferred observable has been criticized by arguing that in the Hilbert space formalism all observables are on an equal footing. However, quantum mechanics is not just Hilbert space mathematics: it is a physical theory that includes a dynamical law in which the Hamiltonian is singled out to play a central role. The justification for selecting the Hamiltonian as the preferred observable ultimately lies in the success of the MHI and in its ability to solve interpretive difficulties. With respect to the first point, the scheme has been applied to several well-known physical situations (free particle with spin, harmonic oscillator, free hydrogen atom, Zeeman effect, fine structure, the Born-Oppenheimer approximation), leading to results consistent with empirical evidence (Lombardi and Castagnino 2008, Section 5).
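To make the composite systems postulate concrete, here is a hypothetical toy check (a sketch of ours, not taken from the MHI literature; NumPy, with arbitrary matrices): condition (ii) requires the Kronecker-sum form, and that form is exactly what makes each component's energy a constant of motion.

```python
import numpy as np

def kron_sum(H1, H2):
    # Hamiltonian of a non-interacting composite: H1 (x) I2 + I1 (x) H2
    return np.kron(H1, np.eye(len(H2))) + np.kron(np.eye(len(H1)), H2)

H1 = np.diag([0.0, 1.0])           # toy subsystem Hamiltonians
H2 = np.diag([0.0, 2.0, 5.0])
H_free = kron_sum(H1, H2)          # satisfies condition (ii) of the CSP

# A genuine interaction (sigma_x on S1 coupled to a non-trivial operator
# on S2) spoils the Kronecker-sum form:
V = 0.3 * np.kron(np.array([[0.0, 1.0], [1.0, 0.0]]), np.diag([1.0, -1.0, 0.0]))
H_int = H_free + V

# Dynamical independence: without interaction the energy of S1 commutes
# with the total Hamiltonian; with interaction it does not, so by the CSP
# the interacting whole counts as a single elemental system.
comm = lambda A, B: A @ B - B @ A
print(np.allclose(comm(H_free, np.kron(H1, np.eye(3))), 0))   # True
print(np.allclose(comm(H_int,  np.kron(H1, np.eye(3))), 0))   # False
```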
With respect to interpretation, the MHI confronts quantum contextuality by selecting a preferred context, and has proved able to supply an account of the measurement problem, both in its ideal and in its non-ideal versions; moreover, in the non-ideal case it gives a criterion to distinguish between reliable and non-reliable measurements (Lombardi and Castagnino 2008, Section 6). In the MHI, property composition and property decomposition hold because the actualization rule only applies to elemental systems: the definite-valued properties of composite systems are selected on the basis of those of the elemental components, following the usual quantum assumption according to which the observable A1 of a subsystem S1 and the observable A = A1 ⊗ I2 of the composite system S = S1 ∪ S2 represent the same property (Ardenghi and Lombardi 2011).

The preferred context of the MHI does not change with time: the definite-valued observables always commute with the Hamiltonian and, therefore, they are constants of motion of the system. This means that they are the same during the whole “life” of the quantum system as a closed system, from its initial “birth”, when it arises as a result of an interaction, up to its final “death”, when it disappears by interacting with another system. As a consequence, there is no need to account for a dynamics of the actual properties, as there is in the BDMI and the SDMI.

10. The interpretation of probability

One of the leading ideas of the modal interpretations is probabilism: quantum mechanics does not correspond in a one-to-one way to actual reality, but rather provides us with a list of possibilities and their probabilities. Therefore, the notions of possibility and probability are central in this interpretive framework. This raises two issues: the formal treatment of probabilities, and the interpretation of probability.

Since the set of events corresponding to all projection operators on a given Hilbert space does not have a Boolean structure, the Born probability (which is defined over these projectors) does not satisfy Kolmogorov's definition of probability (which applies to a Boolean algebra of events). For this reason, some authors define a generalized non-Kolmogorovian probability function over the ortho-algebra of quantum events (Hughes 1989; Cohen 1989). Modal interpretations do not follow this path: they conceive probabilities as represented by a Kolmogorovian measure on the Boolean algebra representing the definite-valued quantities, generated by mutually commuting projectors. The various modal interpretations differ from each other in their definitions of the preferred context on which the Kolmogorovian probability is defined. As we have seen, the definite-valued properties of a system are usually characterized in terms of the quantum state |φ⟩ and a privileged observable R (Bub and Clifton 1996; Bub, Clifton, and Goldstein 2000; Dieks 2005). Dieks (2007) derives a uniqueness result, namely that, given the splitting of a total Hilbert space into two factor spaces, representing the system and its environment respectively, the Boolean lattice of definite-valued observables is fixed by the state of the system alone. Furthermore, it follows that the Born measure is the only one that is definable from just the product structure of the Hilbert space, the state in the Hilbert space, and the definite-valued observables selected by the state.
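The Kolmogorovian character of the Born measure on a single context can be checked directly in a finite-dimensional toy model. The sketch below (our illustration; the random state and basis are arbitrary choices) verifies non-negativity, normalization, and additivity on the Boolean algebra generated by one complete set of orthogonal projectors:

```python
import numpy as np

rng = np.random.default_rng(7)
d = 4
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho).real          # a generic density operator

# One context: the complete set of rank-1 orthogonal projectors built
# from an orthonormal basis (here the eigenbasis of a random Hermitian B).
B = A + A.conj().T
basis = np.linalg.eigh(B)[1]
projs = [np.outer(basis[:, i], basis[:, i].conj()) for i in range(d)]

pr = np.array([np.trace(rho @ P).real for P in projs])
print(np.all(pr >= -1e-12))        # non-negativity
print(np.isclose(pr.sum(), 1.0))   # normalization over the whole context
# Additivity: a disjunction of exclusive events is an orthogonal sum.
print(np.isclose(np.trace(rho @ (projs[0] + projs[2])).real, pr[0] + pr[2]))
```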
The MHI defines a context as a complete set of orthogonal projectors {∏α}, such that ∑αα = I and ∏αα′ = δαα′α, where I is the identity operator on H ⊗ H. Since each context generates a Boolean structure, the state of the system defines a Kolmogorovian probability function on each individual context (Lombardi and Castagnino 2008). However, only the probabilities defined on the context determined by the eigenprojectors of the Hamiltonian of an elemental closed system correspond to the possible values, one of which becomes actual.

In modal interpretations the event space on which the (preferred) probability measure is defined is a space of possible events, among which only one becomes actual. The fact that the actual event is not singled out by these interpretations is what makes them fundamentally probabilistic. This aspect distinguishes modal interpretations from many-worlds interpretations, where the probability measure is defined on a space of events that are all actual. Nevertheless, this does not mean that all modal interpretations agree about the interpretation of probability. In the context of the BDMI, the SDMI and the PMI, it is usually claimed that, given the space of possible events, the state generates an ignorance-interpretable probability measure over this set: quantum probabilities quantify the ignorance of the observer about the actual values acquired by the system's observables (see, e.g., Dieks 1988; Clifton 1995a; Vermaas 1999; Bene and Dieks 2002). In contrast to actualism (i.e., the conception that reduces possibility to actuality; see Dieks 2010), some modal interpretations, in particular the MHI, adopt a possibilist conception, according to which possible events (possibilia) constitute a basic ontological category (see Menzel 2007). The probability measure is in this case seen as a representation of an ontological propensity of a possible quantum event to become actual (Lombardi and Castagnino 2008; see also Suárez 2004).

These views do not all exclude each other. If probabilities quantify ignorance about the actual values of the observables, this need not mean that this ignorance can be removed by the addition of further information. If quantum probabilities are ontological propensities, our ignorance about which possible event becomes actual is a necessary consequence of the indeterministic nature of the system, because there simply is no additional information about a more accurate state of the system.

11. The role of decoherence

According to the environment-induced approach to decoherence (Zurek 1981, 2003; see also Schlosshauer 2007), the measuring apparatus is an open system in continuous interaction with its environment; as a consequence of this interaction, the reduced state of the apparatus and the measured system becomes, almost instantaneously, indistinguishable from a state that would represent an ignorance mixture (“proper mixture”) over unknown values of the apparatus's pointer. The idea that decoherence might play a role in modal interpretations was proposed by several authors early on (Dieks 1989b; Healey 1995). But the phenomenon has acquired a central relevance in the modal context in relation to the discussion of non-ideal measurements. As we have seen, in the BDMI and the SDMI, the biorthogonal or the spectral decomposition does not pick out the right properties for the apparatus in non-ideal measurements.
Bacciagaluppi and Hemmo (1996) show that, when the apparatus is a finite-dimensional system in interaction with an environment with a huge number of degrees of freedom, decoherence guarantees that the spectral decomposition of the apparatus's reduced state will be very close to the ideally expected result and, as a consequence, the apparatus's pointer is (approximately) selected as an actual definite-valued observable. Alternatively, Bub (1997) proposes that it is not decoherence (with the “tracing out” of the environment and the diagonalization of the reduced state of the apparatus) that is relevant for the definite value of the pointer, but the triorthogonal or n-orthogonal decomposition theorem, since it singles out a unique pointer basis for the apparatus. In either case, the interaction with the environment seems to be a great help to the BDMI and the SDMI in handling non-ideal measurements with finite-dimensional apparatuses.

However, the case of infinitely many distinct states for the apparatus is perhaps more realistic. Bacciagaluppi (2000) has analyzed this situation, using a continuous model of the apparatus's interaction with the environment. He concludes that in this case the spectral decomposition of the reduced state of the apparatus does not pick out states that are close enough to the ideally expected ones. This result applies more generally to other cases in which a macroscopic system (not idealized as finite-dimensional) experiences decoherence due to interaction with its environment (see Donald 1998).

As said above, in the case of the MHI decoherence is not explicitly appealed to in order to account for the definite reading of the apparatus's pointer (neither in ideal nor in non-ideal measurements). However, there is still a relation with the decoherence program. In fact, the measuring apparatus is always a macroscopic system with a huge number of degrees of freedom, and the pointer must be a “collective” and empirically accessible observable; as a consequence, the many degrees of freedom corresponding to the degeneracies of the pointer play the role of a decohering “internal environment” (for details, see Lombardi 2010; Lombardi et al. 2011). The compatibility between the MHI and decoherence becomes clearer when the phenomenon of decoherence is understood from a closed-system perspective (Castagnino, Laura, and Lombardi 2007; Castagnino, Fortin, and Lombardi 2010; Lombardi, Fortin, and Castagnino 2012).

12. Open problems and perspectives

There are a number of open problems and perspectives in the modal program. Here we will consider some of them. Modal interpretations are based on the standard formalism of quantum mechanics (in the Hilbert space version or in the algebraic version). However, Brown, Suárez and Bacciagaluppi (1998) argue that there is more to quantum reality than what is described by operators and quantum states: they claim that gauges and coordinate systems are important to our description of physical reality as well, while modal interpretations (the AMI, the BDMI and the SDMI) have standardly not taken such things into consideration. In a similar vein, it has been argued that the Galilean space-time symmetries endow the formal skeleton of quantum mechanics with the physical flesh and blood that identifies the fundamental physical magnitudes and allows the theory to be applied to concrete physical situations (Lombardi and Castagnino 2008).
The set of definite-valued observables of a system should be left invariant by the Galilean transformations: it would be unacceptable for this set to change as a mere result of a change in the perspective from which the system is described. On the basis of this idea, the MHI rule of actualization has been reformulated in an explicitly invariant form, in terms of the Casimir operators of the Galilean group (Ardenghi, Castagnino, and Lombardi 2009; Lombardi, Castagnino, and Ardenghi 2010).

Another fundamental question is the relativistic extension of the modal approach. Dickson and Clifton (1998) have shown that a large class of modal interpretations of ordinary quantum mechanics cannot be made Lorentz-invariant in a straightforward way (see also Myrvold 2002). With respect to the extension to algebraic quantum field theory (see Dieks 2002; Kitajima 2004), Clifton (2000) proposed a natural generalization of the non-relativistic modal scheme, but Earman and Ruetsche (2005) showed that it is not yet clear whether it will be able to deal with measurement situations and whether it is empirically adequate. The problems revealed by these investigations are due to the non-relativistic nature of the formalism of quantum mechanics that is employed, in particular to the fact that the concept of a state of an extended system at one instant is central. In a local field-theoretic context this becomes different, and this may avoid conflicts with relativity (Earman and Ruetsche 2005). Berkovitz and Hemmo (2005) and Hemmo and Berkovitz (2005) propose a different way out: they argue that perspectivalism can come to the rescue here (see also Berkovitz and Hemmo 2006). In turn, in the context of the MHI, it has been argued that the actualization rule, expressed in terms of the Casimir operators of the Galilean group in non-relativistic quantum mechanics, can be transferred to the relativistic domain by changing the symmetry group accordingly: the definite-valued observables of a system would be those represented by the Casimir operators of the Poincaré group. Since the mass operator and the squared spin operator are the only Casimir operators of the Poincaré group, they would always be definite-valued observables. This conclusion agrees with a usual assumption in quantum field theory: elementary particles always have definite values of mass and spin, and those values are precisely what define the different kinds of elementary particles of the theory (Ardenghi, Castagnino, and Lombardi 2009; Lombardi, Castagnino, and Ardenghi 2010).

There are also specifically philosophical issues concerning ontological matters: about the nature of the items referred to by quantum mechanics, that is, about the basic categories of the quantum ontology. As we have seen, in general the properties of quantum systems are considered to be monadic, with the exception of the relational version of the BDMI and the PMI, where these properties are relational. In any case, it might be asked whether a quantum system has to be conceived as an individual substratum supporting properties or as a mere “bundle” of properties. Lombardi and Castagnino (2008) and da Costa, Lombardi and Lastiri (forthcoming) have suggested that, in the modal context, the bundle view might be appropriate to supply an answer to the problem of indistinguishability (see also French and Krause 2006).

Bibliography

• Albert, D. and B. Loewer, 1990, “Wanted dead or alive: two attempts to solve Schrödinger's paradox,” in Proceedings of the PSA 1990, Vol. 1, A. Fine, M. Forbes, and L. Wessels (eds.), East Lansing, Michigan: Philosophy of Science Association, pp. 277–285.
• –––, 1991, “Some alleged solutions to the measurement problem,” Synthese, 88: 87–98.
• –––, 1993, “Non-ideal measurements,” Foundations of Physics Letters, 6: 297–305.
• Ardenghi, J. S., M. Castagnino, and O. Lombardi, 2009, “Quantum mechanics: modal interpretation and Galilean transformations,” Foundations of Physics, 39: 1023–1045.
• Ardenghi, J. S. and O. Lombardi, 2011, “The Modal-Hamiltonian Interpretation of quantum mechanics as a kind of ‘atomic’ interpretation,” Physics Research International, 2011: 379604.
• Arntzenius, F., 1990, “Kochen's interpretation of quantum mechanics,” in Proceedings of the PSA 1990, Vol. 1, A. Fine, M. Forbes, and L. Wessels (eds.), East Lansing, Michigan: Philosophy of Science Association, pp. 241–249.
• Bacciagaluppi, G., 1995, “A Kochen-Specker theorem in the modal interpretation of quantum mechanics,” International Journal of Theoretical Physics, 34: 1205–1216.
• –––, 1996, Topics in the Modal Interpretation of Quantum Mechanics, Dissertation, Cambridge University.
• –––, 1998, “Bohm-Bell dynamics in the modal interpretation,” in The Modal Interpretation of Quantum Mechanics, D. Dieks and P. Vermaas (eds.), Dordrecht: Kluwer Academic Publishers, pp. 177–211.
• –––, 2000, “Delocalized properties in the modal interpretation of a continuous model of decoherence,” Foundations of Physics, 30: 1431–1444.
• Bacciagaluppi, G. and M. Dickson, 1999, “Dynamics for modal interpretations,” Foundations of Physics, 29: 1165–1201.
• Bacciagaluppi, G., M. Donald, and P. Vermaas, 1995, “Continuity and discontinuity of definite properties in the modal interpretation,” Helvetica Physica Acta, 68: 679–704.
• Bacciagaluppi, G. and M. Hemmo, 1994, “Making sense of approximate decoherence,” in Proceedings of the PSA 1994, Vol. 1, D. Hull, M. Forbes, and R. Burian (eds.), East Lansing, Michigan: Philosophy of Science Association, pp. 345–354.
• –––, 1996, “Modal interpretations, decoherence and measurements,” Studies in History and Philosophy of Modern Physics, 27: 239–277.
• Ballentine, L., 1998, Quantum Mechanics: A Modern Development, Singapore: World Scientific.
• Bell, J. S., 1984, “Beables for quantum field theory,” in Speakable and Unspeakable in Quantum Mechanics (1987), Cambridge: Cambridge University Press, pp. 173–180.
• Bene, G., 1997, “Quantum reference systems: a new framework for quantum mechanics,” Physica A, 242: 529–565.
• Bene, G. and D. Dieks, 2002, “A perspectival version of the modal interpretation of quantum mechanics and the origin of macroscopic behavior,” Foundations of Physics, 32: 645–671.
• Berkovitz, J. and M. Hemmo, 2005, “Can modal interpretations of quantum mechanics be reconciled with relativity?,” Philosophy of Science, 72: 789–801.
• –––, 2006, “A new modal interpretation in terms of relational properties,” in Physical Theory and its Interpretation: Essays in Honor of Jeffrey Bub, W. Demopoulos and I. Pitowsky (eds.), New York: Springer, pp. 1–28.
• Bohm, D., 1952, “A suggested interpretation of the quantum theory in terms of ‘hidden’ variables, I and II,” Physical Review, 85: 166–193.
• Brown, H., M. Suárez, and G. Bacciagaluppi, 1998, “Are ‘sharp values’ of observables always objective elements of reality?,” in The Modal Interpretation of Quantum Mechanics, D. Dieks and P. Vermaas (eds.), Dordrecht: Kluwer Academic Publishers, pp. 69–101.
• Bub, J., 1992, “Quantum mechanics without the projection postulate,” Foundations of Physics, 22: 737–754.
• –––, 1994, “On the structure of quantal proposition systems,” Foundations of Physics, 24: 1261–1279.
• –––, 1997, Interpreting the Quantum World, Cambridge: Cambridge University Press.
• Bub, J. and R. Clifton, 1996, “A uniqueness theorem for interpretations of quantum mechanics,” Studies in History and Philosophy of Modern Physics, 27: 181–219.
• Bub, J., R. Clifton, and S. Goldstein, 2000, “Revised proof of the uniqueness theorem for ‘no collapse’ interpretations of quantum mechanics,” Studies in History and Philosophy of Modern Physics, 31: 95–98.
• Castagnino, M., S. Fortin, and O. Lombardi, 2010, “Is the decoherence of a system the result of its interaction with the environment?,” Modern Physics Letters A, 25: 1431–1439.
• Castagnino, M., R. Laura, and O. Lombardi, 2007, “A general conceptual framework for decoherence in closed and open systems,” Philosophy of Science, 74: 968–980.
• Clifton, R., 1995a, “Independently motivating the Kochen-Dieks modal interpretation of quantum mechanics,” The British Journal for the Philosophy of Science, 46: 33–57.
• –––, 1995b, “Making sense of the Kochen-Dieks ‘no-collapse’ interpretation of quantum mechanics independent of the measurement problem,” Annals of the New York Academy of Science, 755: 570–578.
• –––, 1995c, “Why modal interpretations of quantum mechanics must abandon classical reasoning about the physical properties,” International Journal of Theoretical Physics, 34: 1302–1312.
• –––, 1996, “The properties of modal interpretations of quantum mechanics,” The British Journal for the Philosophy of Science, 47: 371–398.
• –––, 2000, “The modal interpretation of algebraic quantum field theory,” Physics Letters A, 271: 167–177.
• Cohen, D. W., 1989, An Introduction to Hilbert Space and Quantum Logic, New York: Springer-Verlag.
• Da Costa, N., O. Lombardi, and M. Lastiri, forthcoming, “A modal ontology of properties for quantum mechanics,” Synthese, DOI 10.1007/s11229-012-0218-4.
• De Witt, B. S. M., 1970, “Quantum mechanics and reality,” Physics Today, 23: 30–35.
• Dickson, M., 1994, “Wavefunction tails in the modal interpretation,” in Proceedings of the PSA 1994, Vol. 1, D. Hull, M. Forbes, and R. Burian (eds.), East Lansing, Michigan: Philosophy of Science Association, pp. 366–376.
• –––, 1995a, “Faux-Boolean algebras, classical probability, and determinism,” Foundations of Physics Letters, 8: 231–242.
• –––, 1995b, “Faux-Boolean algebras and classical models,” Foundations of Physics Letters, 8: 401–415.
• Dickson, M. and R. Clifton, 1998, “Lorentz-invariance in modal interpretations,” in The Modal Interpretation of Quantum Mechanics, D. Dieks and P. Vermaas (eds.), Dordrecht: Kluwer Academic Publishers, pp. 9–48.
• Dieks, D., 1988, “The formalism of quantum theory: an objective description of reality?,” Annalen der Physik, 7: 174–190.
• –––, 1989a, “Quantum mechanics without the projection postulate and its realistic interpretation,” Foundations of Physics, 19: 1397–1423.
• –––, 1989b, “Resolution of the measurement problem through decoherence of the quantum state,” Physics Letters A, 142: 439–446.
• –––, 1994a, “Objectification, measurement and classical limit according to the modal interpretation of quantum mechanics,” in Proceedings of the Symposium on the Foundations of Modern Physics, P. Busch, P. Lahti, and P. Mittelstaedt (eds.), Singapore: World Scientific, pp. 160–167.
• –––, 1994b, “Modal interpretation of quantum mechanics, measurements, and macroscopic behaviour,” Physical Review A, 49: 2290–2300.
• –––, 1995, “Physical motivation of the modal interpretation of quantum mechanics,” Physics Letters A, 197: 367–371.
• –––, 1998, “Preferred factorizations and consistent property attribution,” in Quantum Measurement: Beyond Paradox, R. Healey and G. Hellman (eds.), Minneapolis: University of Minnesota Press, pp. 144–160.
• –––, 2002, “Events and covariance in the interpretation of quantum field theory,” in Ontological Aspects of Quantum Field Theory, M. Kuhlmann, H. Lyre, and A. Wayne (eds.), Singapore: World Scientific, pp. 215–234.
• –––, 2005, “Quantum mechanics: an intelligible description of objective reality?,” Foundations of Physics, 35: 399–415.
• –––, 2007, “Probability in modal interpretations of quantum mechanics,” Studies in History and Philosophy of Modern Physics, 38: 292–310.
• –––, 2009, “Objectivity in perspective: relationism in the interpretation of quantum mechanics,” Foundations of Physics, 39: 760–775.
• –––, 2010, “Quantum mechanics, chance and modality,” Philosophica, 83: 117–137.
• Dieks, D. and P. Vermaas (eds.), 1998, The Modal Interpretation of Quantum Mechanics, Dordrecht: Kluwer Academic Publishers.
• Donald, M., 1998, “Discontinuity and continuity of definite properties in the modal interpretation,” in The Modal Interpretation of Quantum Mechanics, D. Dieks and P. Vermaas (eds.), Dordrecht: Kluwer Academic Publishers, pp. 213–222.
• Earman, J. and L. Ruetsche, 2005, “Relativistic invariance and modal interpretations,” Philosophy of Science, 72: 557–583.
• Elby, A., 1993, “Why ‘modal’ interpretations of quantum mechanics don't solve the measurement problem,” Foundations of Physics Letters, 6: 5–19.
• Everett, H., 1957, “Relative state formulation of quantum mechanics,” Reviews of Modern Physics, 29: 454–462.
• French, S. and D. Krause, 2006, Identity in Physics: A Historical, Philosophical and Formal Analysis, Oxford: Oxford University Press.
• Gambetta, J. and H. M. Wiseman, 2003, “Interpretation of non-Markovian stochastic Schrödinger equations as a hidden-variable theory,” Physical Review A, 68: 062104.
• –––, 2004, “Modal dynamics for positive operator measures,” Foundations of Physics, 34: 419–448.
• Healey, R., 1989, The Philosophy of Quantum Mechanics: An Interactive Interpretation, Cambridge: Cambridge University Press.
• –––, 1995, “Dissipating the quantum measurement problem,” Topoi, 14: 55–65.
• Hemmo, M. and J. Berkovitz, 2005, “Modal interpretations of quantum mechanics and relativity: a reconsideration,” Foundations of Physics, 35: 373–397.
• Hughes, R. I. G., 1989, The Structure and Interpretation of Quantum Mechanics, Cambridge, Mass.: Harvard University Press.
• Kitajima, Y., 2004, “A remark on the modal interpretation of algebraic quantum field theory,” Physics Letters A, 331: 181–186.
• Kochen, S., 1985, “A new interpretation of quantum mechanics,” in Symposium on the Foundations of Modern Physics 1985, P. Mittelstaedt and P. Lahti (eds.), Singapore: World Scientific, pp. 151–169.
• Kochen, S. and E. Specker, 1967, “The problem of hidden variables in quantum mechanics,” Journal of Mathematics and Mechanics, 17: 59–87.
• Laudisa, F. and C. Rovelli, 2008, “Relational quantum mechanics,” in The Stanford Encyclopedia of Philosophy, Fall 2008 Edition, Edward N. Zalta (ed.), URL = <>.
• Lombardi, O., 2010, “The central role of the Hamiltonian in quantum mechanics: decoherence and interpretation,” Manuscrito, 33: 307–349.
• Lombardi, O. and M. Castagnino, 2008, “A modal-Hamiltonian interpretation of quantum mechanics,” Studies in History and Philosophy of Modern Physics, 39: 380–443.
• Lombardi, O., M. Castagnino, and J. S. Ardenghi, 2010, “The modal-Hamiltonian interpretation and the Galilean covariance of quantum mechanics,” Studies in History and Philosophy of Modern Physics, 41: 93–103.
• Lombardi, O., S. Fortin, and M. Castagnino, 2012, “The problem of identifying the system and the environment in the phenomenon of decoherence,” in EPSA Philosophy of Science: Amsterdam 2009, H. W. de Regt, S. Hartmann, and S. Okasha (eds.), Dordrecht: Springer, pp. 161–174.
• Lombardi, O., S. Fortin, M. Castagnino, and J. S. Ardenghi, 2011, “Compatibility between environment-induced decoherence and the modal-Hamiltonian interpretation of quantum mechanics,” Philosophy of Science, 78: 1024–1036.
• Menzel, C., 2007, “Actualism,” in The Stanford Encyclopedia of Philosophy, Spring 2007 Edition, Edward N. Zalta (ed.), URL = <>.
• Myrvold, W., 2002, “Modal interpretations and relativity,” Foundations of Physics, 32: 1773–1784.
• Reeder, N. and R. Clifton, 1995, “Uniqueness of prime factorizations of linear operators in quantum mechanics,” Physics Letters A, 204: 198–204.
• Rovelli, C., 1996, “Relational quantum mechanics,” International Journal of Theoretical Physics, 35: 1637–1678.
• Rovelli, C. and M. Smerlak, 2007, “Relational EPR,” Foundations of Physics, 37: 427–445.
• Ruetsche, L., 1995, “Measurement error and the Albert-Loewer problem,” Foundations of Physics Letters, 8: 327–344.
• –––, 1996, “Van Fraassen on preparation and measurement,” Philosophy of Science, 63: S338–S346.
• –––, 2003, “Modal semantics, modal dynamics and the problem of state preparation,” International Studies in the Philosophy of Science, 17: 25–41.
• Schlosshauer, M., 2007, Decoherence and the Quantum-to-Classical Transition, Heidelberg-Berlin: Springer.
• Schrödinger, E., 1935, “Discussion of probability relations between separated systems,” Proceedings of the Cambridge Philosophical Society, 31: 555–563.
• Suárez, M., 2004, “Quantum selections, propensities and the problem of measurement,” The British Journal for the Philosophy of Science, 55: 219–255.
• Sudbery, A., 2002, “Diese verdammte Quantenspringerei,” Studies in History and Philosophy of Modern Physics, 33: 387–411.
• van Fraassen, B. C., 1972, “A formal approach to the philosophy of science,” in Paradigms and Paradoxes: The Philosophical Challenge of the Quantum Domain, R. Colodny (ed.), Pittsburgh: University of Pittsburgh Press, pp. 303–366.
• –––, 1974, “The Einstein-Podolsky-Rosen paradox,” Synthese, 29: 291–309.
• –––, 1991, Quantum Mechanics, Oxford: Clarendon Press.
• –––, 2010, “Rovelli's world,” Foundations of Physics, 40: 390–417.
• Vermaas, P., 1996, “Unique transition probabilities in the modal interpretation,” Studies in History and Philosophy of Modern Physics, 27: 133–159.
• –––, 1997, “A no-go theorem for joint property ascriptions in modal interpretations of quantum mechanics,” Physical Review Letters, 78: 2033–2037.
• –––, 1998, “The pros and cons of the Kochen-Dieks and the atomic modal interpretation,” in The Modal Interpretation of Quantum Mechanics, D. Dieks and P. Vermaas (eds.), Dordrecht: Kluwer Academic Publishers, pp. 103–148.
• –––, 1999, A Philosopher's Understanding of Quantum Mechanics: Possibilities and Impossibilities of a Modal Interpretation, Cambridge: Cambridge University Press.
• Vermaas, P. and D. Dieks, 1995, “The modal interpretation of quantum mechanics and its generalization to density operators,” Foundations of Physics, 25: 145–158.
• Vink, J., 1993, “Quantum mechanics in terms of discrete beables,” Physical Review A, 48: 1808–1818.
• Zurek, W. H., 1981, “Pointer basis of quantum apparatus: into what mixtures does the wave packet collapse?,” Physical Review D, 24: 1516–1525.
• –––, 2003, “Decoherence, einselection, and the quantum origins of the classical,” Reviews of Modern Physics, 75: 715–776.
For a time-dependent wavefunction, are the instantaneous probability densities meaningful? (The question applies to instants, or more generally to short lengths of time that are not multiples of the period.) What experiment could demonstrate the existence of a time-dependent probability density? Can an isolated system be described by a time-dependent wavefunction? How would this not violate conservation of energy? I see the meaning of the time-averaged probability density. Is the time dependence just a statistical construct?

1) Why do you believe that instantaneous probability densities are not meaningful?

2) Essentially any non-stationary state for which you need to compute time-dependent wavefunctions: e.g., chemical reaction dynamics, particle scattering, etc.

3) Yes, the time-dependent Schrödinger equation applies to isolated systems.

4) By definition, energy is conserved in an isolated system. Moreover, the Schrödinger equation conserves energy because the generator of time translations is the Hamiltonian and this commutes with itself, $[H,H]=0$, i.e., energy is conserved. For isolated systems, the Hamiltonian is time-independent (explicitly) and the time-dependent wavefunction $\Psi$ has the well-known form $\Psi = \Phi e^{-iEt/\hbar}$, with $E$ the energy of the isolated system.

5) I do not understand the question.

In (4), one needs the further condition that the Hamiltonian is itself time-independent, $\frac{\partial H}{\partial t} = 0$. – Stan Liou Nov 16 '12 at 9:34
As well, one has to distinguish energy certainty from energy conservation. – Vladimir Kalitvianski Nov 16 '12 at 15:17
Conservation, by definition, implies zero production, $d_iH/dt=0$. If the Hamiltonian has explicit time dependence then the equation of motion contains a ‘flow’ term $d_eH/dt$, but the production term continues to be zero. – juanrga Nov 16 '12 at 18:28
Not sure what you mean, but the conservation law $[H,H]=0$ is independent of the kind of quantum state. – juanrga Nov 16 '12 at 18:32
It's certainly correct and tautologous to say that energy is conserved in an isolated system. But if your last sentence were correct, all quantum systems whatever would conserve energy, because $[H,H] = 0$ is an exact identity and $H$ is always the generator of time translation. Hence, I expected the point of $[H,H] = 0$ to be a reference to $\frac{dA}{dt} = \frac{\partial A}{\partial t} + \frac{1}{i\hbar}[A,H]$ in the Heisenberg picture or analogous expectations in the Schrödinger picture. In the Lagrangian formalism, energy through Noether's theorem also needs no explicit time dependence. – Stan Liou Nov 16 '12 at 19:13

Yes, $|\psi(t)|^2$ is an instantaneous probability density. The passage of a wave packet can be experimentally observed. An isolated system can be in a superposition of different energy eigenfunctions. It does not violate the energy conservation law, because initially the system is not in an eigenstate: it has some energy uncertainty at $t=0$. This uncertainty evolves as any other uncertainty.
EDIT: Let us make a superposition of two states:
$$\psi(t)=c_1\psi_1(x)e^{-iE_1 t/\hbar}+c_2\psi_2(x)e^{-iE_2 t/\hbar}.$$
It means that in an experiment we can find the system in state 1 with probability $|c_1|^2$ and in state 2 with probability $|c_2|^2$. The system is free, and this is reflected in the coefficients $c_1$ and $c_2$ being constant in time (the occupation numbers do not depend on time). Measuring the system's energy will sometimes give $E_1$ and sometimes $E_2$, with the same probabilities. So initially and later on the system does not have a certain energy. The state $H\psi$ depends on time as
$$H\psi=c_1 E_1 \psi_1(x)e^{-iE_1 t/\hbar}+c_2 E_2 \psi_2(x)e^{-iE_2 t/\hbar}.$$
$\psi$ is not an eigenstate of the Hamiltonian, so the time derivative $\partial\psi/\partial t$ is not proportional to $\psi$. The Hamiltonian expectation value, however, does not depend on time:
$$\langle\psi|H|\psi\rangle = |c_1|^2 E_1 + |c_2|^2 E_2 = \text{const}.$$
In other words, it is the energy expectation value that is conserved, not the energy itself. The latter is undefined, uncertain, in this free state. You invoke the "energy conservation law" $dH/dt=0$, which is an operator relationship. If the system has a certain energy $E_n$ in the initial state, this value remains the system's energy at later moments, so your "conservation law" may be cast in the form $dE(t)/dt=0$, which means $E=\text{const}=E(0)=E_n$. But if the system does not have a certain energy in the initial state $\psi(0)$, then there is no $E(0)$ to conserve, and your operator relationship turns into conservation of the expectation value.
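As a concrete illustration of both answers (an editorial sketch, not part of the original thread; an infinite square well of width $L = 1$ with $\hbar = m = 1$ is an arbitrary choice):

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 201)
phi = lambda n: np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)  # box eigenstates
E = lambda n: (n * np.pi) ** 2 / 2.0                          # E_n = n^2 pi^2 / 2

c1 = c2 = 1.0 / np.sqrt(2.0)
psi = lambda t: (c1 * phi(1) * np.exp(-1j * E(1) * t)
                 + c2 * phi(2) * np.exp(-1j * E(2) * t))

# The instantaneous density |psi(x,t)|^2 at a fixed point really changes:
for t in (0.0, 0.1, 0.2):
    print(t, np.abs(psi(t)[50]) ** 2)

# ...while <H> = |c1|^2 E1 + |c2|^2 E2 stays constant: no conflict with
# energy conservation, the state simply has an energy uncertainty throughout.
print(abs(c1) ** 2 * E(1) + abs(c2) ** 2 * E(2))
```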
Quantum Mechanics: Wavepackets
By Dragica Vasileska (Arizona State University) and Gerhard Klimeck (Purdue University)
In physics, a wave packet is an envelope or packet containing an arbitrary number of wave forms. In quantum mechanics the wave packet is ascribed a special significance: it is interpreted to be a "probability wave" describing the probability that a particle or particles in a particular state will be measured to have a given position and momentum. By applying the Schrödinger equation in quantum mechanics it is possible to deduce the time evolution of a system, similar to the Hamiltonian formalism in classical mechanics. The wave packet is a mathematical solution to the Schrödinger equation. The squared magnitude of the wave packet, integrated over a region, is interpreted as the probability of finding the particle in that region. In the coordinate representation of the wave (such as the Cartesian coordinate system) the position of the wave is given by the position of the packet. Moreover, the narrower the spatial wave packet, and therefore the better defined the position of the wave packet, the larger the spread in the momentum of the wave. This trade-off between spread in position and spread in momentum is one example of the Heisenberg uncertainty principle.
• Wavepackets Description
• Homework Assignment on Wavepackets
Cite this work
Researchers should cite this work as follows:
• Dragica Vasileska; Gerhard Klimeck (2008), "Quantum Mechanics: Wavepackets"
In This Series
1. Reading Material: Wavepackets
2. Homework Assignment: Wavepackets
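The position-momentum trade-off described above is easy to check numerically. The sketch below (not part of the original series page; plain Python/NumPy with ħ = 1 assumed) builds Gaussian wave packets of different widths and evaluates Δx·Δp via a Fourier transform; for a Gaussian the product should sit at the minimum value ħ/2:

```python
import numpy as np

hbar = 1.0  # assumed units
N, L = 4096, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)   # momentum grid
dp_grid = p[1] - p[0]

def spreads(sigma):
    """Return (dx, dp) for a Gaussian packet of spatial width sigma."""
    psi = np.exp(-x**2 / (4 * sigma**2))
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)      # normalize in x
    phi = np.fft.fft(psi)                            # momentum amplitude (up to phase)
    prob_x = np.abs(psi)**2
    prob_p = np.abs(phi)**2
    prob_p /= np.sum(prob_p) * dp_grid               # normalize in p
    sx = np.sqrt(np.sum(x**2 * prob_x) * dx)         # <x> = 0 by symmetry
    sp = np.sqrt(np.sum(p**2 * prob_p) * dp_grid)    # <p> = 0 by symmetry
    return sx, sp

for sigma in [0.5, 1.0, 2.0]:
    sx, sp = spreads(sigma)
    print(f"sigma={sigma}:  dx={sx:.3f}  dp={sp:.3f}  dx*dp={sx*sp:.3f}")
# Narrower packets (small sigma) give larger dp; dx*dp stays ~ hbar/2 = 0.5.
```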
Complex number
From Wikipedia, the free encyclopedia
(Redirected from Imaginary part)
A complex number is a number that can be expressed in the form a + bi, where a and b are real numbers and i is the imaginary unit, which satisfies the equation i^2 = −1.[1] In this expression, a is the real part and b is the imaginary part of the complex number.
Complex numbers allow for solutions to certain equations that have no solutions in real numbers. For example, the equation
(x+1)^2 = -9
has no real solution, since the square of a real number cannot be negative. Complex numbers provide a solution to this problem. The idea is to extend the real numbers with the imaginary unit i, where i^2 = −1, so that solutions to equations like the preceding one can be found. In this case the solutions are −1 + 3i and −1 − 3i, as can be verified using the fact that i^2 = −1:
((-1+3i)+1)^2 = (3i)^2 = 3^2 i^2 = -9,
((-1-3i)+1)^2 = (-3i)^2 = (-3)^2 i^2 = -9.
According to the fundamental theorem of algebra, all polynomial equations with real or complex coefficients in a single variable have a solution in complex numbers.
The real number a is called the real part of the complex number a + bi; the real number b is called the imaginary part of a + bi. By this convention the imaginary part does not include the imaginary unit: hence b, not bi, is the imaginary part.[3][4] The real part of a complex number z is denoted by Re(z) or ℜ(z); the imaginary part of a complex number z is denoted by Im(z) or ℑ(z). For example,
\operatorname{Re}(-3.5 + 2i) = -3.5, \qquad \operatorname{Im}(-3.5 + 2i) = 2.
Hence, in terms of its real and imaginary parts, a complex number z is equal to \operatorname{Re}(z) + \operatorname{Im}(z) \cdot i. This expression is sometimes known as the Cartesian form of z.
A real number a can be regarded as a complex number a + 0i whose imaginary part is 0. A purely imaginary number bi is a complex number 0 + bi whose real part is zero. It is common to write a for a + 0i and bi for 0 + bi. Moreover, when the imaginary part is negative, it is common to write a − bi with b > 0 instead of a + (−b)i, for example 3 − 4i instead of 3 + (−4)i.
Some authors[5] write a + ib instead of a + bi, particularly when b is a radical. In some disciplines, in particular electromagnetism and electrical engineering, j is used instead of i,[6] since i is frequently used for electric current. In these cases complex numbers are written as a + bj or a + jb.
Complex plane[edit]
Main article: Complex plane
A position vector may also be defined in terms of its magnitude and direction relative to the origin. These are emphasized in a complex number's polar form. Using the polar form of the complex number in calculations may lead to a more intuitive interpretation of mathematical results. Notably, the operations of addition and multiplication take on a very natural geometric character when complex numbers are viewed as position vectors: addition corresponds to vector addition, while multiplication corresponds to multiplying their magnitudes and adding their arguments (i.e. the angles they make with the x axis). Viewed in this way, the multiplication of a complex number by i corresponds to rotating the position vector counterclockwise by a quarter turn (90°) about the origin:
(a+bi)i = ai + bi^2 = -b + ai.
History in brief[edit]
Main section: History
Many mathematicians contributed to the full development of complex numbers.
The rules for addition, subtraction, multiplication, and division of complex numbers were developed by the Italian mathematician Rafael Bombelli.[7] A more abstract formalism for the complex numbers was further developed by the Irish mathematician William Rowan Hamilton, who extended this abstraction to the theory of quaternions.
Two complex numbers are equal if and only if both their real and imaginary parts are equal. In symbols:
z_1 = z_2 \;\leftrightarrow\; \left(\operatorname{Re}(z_1) = \operatorname{Re}(z_2) \,\wedge\, \operatorname{Im}(z_1) = \operatorname{Im}(z_2)\right).
Because complex numbers are naturally thought of as existing on a two-dimensional plane, there is no natural linear ordering on the set of complex numbers.[8] There is no linear ordering on the complex numbers that is compatible with addition and multiplication. Formally, we say that the complex numbers cannot have the structure of an ordered field. This is because any square in an ordered field is at least 0, but i^2 = −1.
Elementary operations[edit]
The complex conjugate of z = a + bi is \bar{z} = a − bi. Formally, for any complex number z:
\bar{z} = \operatorname{Re}(z) - \operatorname{Im}(z) \cdot i.
Geometrically, \bar{z} is the "reflection" of z about the real axis. Conjugating twice gives the original complex number: \bar{\bar{z}} = z.
The real and imaginary parts of a complex number z can be extracted using the conjugate:
\operatorname{Re}(z) = \tfrac{1}{2}(z + \bar{z}), \qquad \operatorname{Im}(z) = \tfrac{1}{2i}(z - \bar{z}).
Conjugation distributes over the standard arithmetic operations:
\overline{z+w} = \bar{z} + \bar{w}, \quad \overline{z-w} = \bar{z} - \bar{w}, \quad \overline{zw} = \bar{z}\,\bar{w}.
Addition and subtraction[edit]
Two complex numbers are added by separately adding their real and imaginary parts:
(a+bi) + (c+di) = (a+c) + (b+d)i.
Similarly, subtraction is defined by
(a+bi) - (c+di) = (a-c) + (b-d)i.
Multiplication and division[edit]
(a+bi)(c+di) = ac + bci + adi + bdi^2 (distributive law)
 = ac + bdi^2 + bci + adi (commutative law of addition—the order of the summands can be changed)
 = (ac + bdi^2) + (bc+ad)i (commutative and distributive laws)
 = (ac-bd) + (bc+ad)i (fundamental property of the imaginary unit).
The division of two complex numbers is defined in terms of complex multiplication, which is described above, and real division. When at least one of c and d is non-zero, we have
\frac{a+bi}{c+di} = \frac{(a+bi)(c-di)}{(c+di)(c-di)} = \frac{ac+bd}{c^2+d^2} + \frac{bc-ad}{c^2+d^2}\,i.
As shown earlier, c − di is the complex conjugate of the denominator c + di. At least one of the real part c and the imaginary part d of the denominator must be nonzero for division to be defined. This is called "rationalization" of the denominator (although the denominator in the final expression might be an irrational real number).
The multiplicative inverse of a nonzero complex number z = x + yi is
\frac{1}{z}=\frac{\bar{z}}{z \bar{z}}=\frac{\bar{z}}{x^2+y^2}=\frac{x}{x^2+y^2} - \frac{y}{x^2+y^2}\,i.
This formula can be used to compute the multiplicative inverse of a complex number if it is given in rectangular coordinates.
Inversive geometry, a branch of geometry studying reflections more general than ones about a line, can also be expressed in terms of complex numbers. In the network analysis of electrical circuits, the complex conjugate is used in finding the equivalent impedance when the maximum power transfer theorem is used.
Square root[edit]
The square roots of a + bi (with b ≠ 0) are \pm(\gamma + \delta i), where
\gamma = \sqrt{\frac{a + \sqrt{a^2+b^2}}{2}}, \qquad \delta = \operatorname{sgn}(b)\sqrt{\frac{-a + \sqrt{a^2+b^2}}{2}},
where sgn is the signum function.
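The rectangular-form formulas above map directly onto code. As an illustrative sketch (not from the article; Python's built-in complex type serves as the reference implementation), the hand-derived product and quotient formulas agree with the language's own arithmetic:

```python
import cmath

def mul(a, b, c, d):
    # (a + bi)(c + di) = (ac - bd) + (bc + ad)i
    return complex(a*c - b*d, b*c + a*d)

def div(a, b, c, d):
    # (a + bi)/(c + di) = ((ac + bd) + (bc - ad)i) / (c^2 + d^2)
    denom = c*c + d*d
    return complex((a*c + b*d) / denom, (b*c - a*d) / denom)

z, w = complex(3, -4), complex(-1, 2)
assert cmath.isclose(mul(3, -4, -1, 2), z * w)
assert cmath.isclose(div(3, -4, -1, 2), z / w)
print(mul(3, -4, -1, 2))   # (5+10j)
print(div(3, -4, -1, 2))   # (-2.2-0.4j)
```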
This can be seen by squaring \pm(\gamma + \delta i) to obtain a + bi.[9][10] Here \sqrt{a^2 + b^2} is called the modulus of a + bi, and the square root sign indicates the square root with non-negative real part, called the principal square root; also \sqrt{a^2 + b^2} = \sqrt{z\bar{z}}, where z = a + bi.[11]
Polar form[edit]
Absolute value and argument[edit]
An alternative way of defining a point P in the complex plane, other than using the x- and y-coordinates, is to use the distance of the point from O, the point whose coordinates are (0, 0) (the origin), together with the angle subtended between the positive real axis and the line segment OP in a counterclockwise direction. This idea leads to the polar form of complex numbers.
The absolute value (or modulus) of a complex number z = x + yi is
|z| = \sqrt{x^2+y^2}.
Equivalently,
|z|^2 = z\bar{z} = x^2+y^2,
where \bar{z} is the complex conjugate of z.
The argument of z (in many applications referred to as the "phase") is the angle of the radius OP with the positive real axis, and is written as \arg(z). As with the modulus, the argument can be found from the rectangular form x + yi:[12]
\varphi = \arg(z) = \begin{cases} \arctan\left(\tfrac{y}{x}\right) & \text{if } x > 0, \\ \arctan\left(\tfrac{y}{x}\right) + \pi & \text{if } x < 0 \text{ and } y \ge 0, \\ \arctan\left(\tfrac{y}{x}\right) - \pi & \text{if } x < 0 \text{ and } y < 0, \\ \tfrac{\pi}{2} & \text{if } x = 0 \text{ and } y > 0, \\ -\tfrac{\pi}{2} & \text{if } x = 0 \text{ and } y < 0. \end{cases}
The value of φ is expressed in radians in this article. It can increase by any integer multiple of 2π and still give the same angle. Hence, the arg function is sometimes considered as multivalued. Normally, as given above, the principal value in the interval (−π, π] is chosen. Values in the range [0, 2π) are obtained by adding 2π if the value is negative. The polar angle for the complex number 0 is indeterminate, but an arbitrary choice of the angle 0 is common.
Together, r = |z| and φ give the polar form
z = r(\cos \varphi + i \sin \varphi).
Using Euler's formula this can be written as
z = r e^{i\varphi}.
Using the cis function, this is sometimes abbreviated to
z = r \operatorname{cis} \varphi.
In angle notation, often used in electronics, it is written as
z = r \angle \varphi.
Multiplication and division in polar form[edit]
Formulas for multiplication, division and exponentiation are simpler in polar form than the corresponding formulas in Cartesian coordinates. Given two complex numbers z1 = r1(cos φ1 + i sin φ1) and z2 = r2(cos φ2 + i sin φ2), because of the well-known trigonometric identities
\cos(a)\cos(b) - \sin(a)\sin(b) = \cos(a + b),
\cos(a)\sin(b) + \sin(a)\cos(b) = \sin(a + b),
we may derive
z_1 z_2 = r_1 r_2 \left(\cos(\varphi_1 + \varphi_2) + i \sin(\varphi_1 + \varphi_2)\right).
Similarly, division is given by
\frac{z_1}{z_2} = \frac{r_1}{r_2} \left(\cos(\varphi_1 - \varphi_2) + i \sin(\varphi_1 - \varphi_2)\right).
Euler's formula[edit]
Euler's formula states that, for any real number x,
e^{ix} = \cos x + i \sin x,
where e is the base of the natural logarithm. This can be verified by observing that the powers of i cycle with period four,
i^0 = 1, \quad i^1 = i, \quad i^2 = -1, \quad i^3 = -i, \quad i^4 = 1, \quad i^5 = i, \quad i^6 = -1, \quad i^7 = -i,
and so on, and by considering the Taylor series expansions of e^{ix}, cos(x) and sin(x).
Natural logarithm[edit]
Euler's formula allows us to observe that, for any complex number
z = r(\cos \varphi + i \sin \varphi),
where r is a non-negative real number, one possible value for z's natural logarithm is
\ln(z) = \ln(r) + \varphi i.
Because cos and sin are periodic functions, the natural logarithm may be considered a multi-valued function, with:
\ln(z) = \left\{ \ln(r) + (\varphi + 2\pi k)i \;\middle|\; k \in \mathbb{Z} \right\}.
Integer and fractional exponents[edit]
We may use the identity
\ln(a^b) = b \ln(a)
to define complex exponentiation, which is likewise multi-valued:
\ln(z^n) = \ln\left(\left(r(\cos \varphi + i\sin \varphi)\right)^n\right) = n \ln\left(r(\cos \varphi + i\sin \varphi)\right) = \{ n(\ln(r) + (\varphi + 2\pi k)i) \mid k \in \mathbb{Z} \} = \{ n\ln(r) + n\varphi i + 2\pi nki \mid k \in \mathbb{Z} \}.
When n is an integer, this simplifies to de Moivre's formula:
z^n = \left(r(\cos \varphi + i\sin \varphi)\right)^n = r^n\,(\cos n\varphi + i \sin n\varphi).
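A short check of the polar-form rules (an illustration of mine, using Python's standard cmath module): multiplying complex numbers multiplies moduli and adds arguments, and de Moivre's formula reproduces integer powers.

```python
import cmath, math

z1 = cmath.rect(2.0, math.pi / 6)   # r = 2, phi = 30 degrees
z2 = cmath.rect(3.0, math.pi / 4)   # r = 3, phi = 45 degrees

prod = z1 * z2
# Moduli multiply, arguments add:
assert math.isclose(abs(prod), 2.0 * 3.0)
assert math.isclose(cmath.phase(prod), math.pi / 6 + math.pi / 4)

# de Moivre: z^n = r^n (cos(n*phi) + i*sin(n*phi))
n, r, phi = 5, 2.0, math.pi / 6
lhs = z1 ** n
rhs = (r ** n) * complex(math.cos(n * phi), math.sin(n * phi))
assert cmath.isclose(lhs, rhs)
print(prod, lhs)
```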
The nth roots of z = r(cos φ + i sin φ) are given by
\sqrt[n]{z} = \sqrt[n]{r}\left(\cos\frac{\varphi + 2k\pi}{n} + i\sin\frac{\varphi + 2k\pi}{n}\right), \quad k = 0, 1, \ldots, n-1.
Field structure[edit]
The set C of complex numbers is a field: in particular, addition and multiplication are commutative,
z_1 + z_2 = z_2 + z_1, \qquad z_1 z_2 = z_2 z_1.
When the underlying field for a mathematical topic or construct is the field of complex numbers, the topic's name is usually modified to reflect that fact. For example: complex analysis, complex matrix, complex polynomial, and complex Lie algebra.
Formal construction[edit]
Formal development[edit]
The complex numbers can be defined as ordered pairs (a, b) of real numbers with the operations
(a, b) + (c, d) = (a + c, b + d),
(a, b) \cdot (c, d) = (ac - bd, bc + ad).
These operations satisfy the usual field axioms, for example distributivity:
(x + y)z = xz + yz.
Alternatively, consider polynomials a_0 + a_1 X + \cdots + a_n X^n, where the a0, ..., an are real numbers. The usual addition and multiplication of polynomials endows the set R[X] of all such polynomials with a ring structure. This ring is called the polynomial ring. The quotient ring R[X]/(X^2 + 1) can be shown to be a field. This extension field contains two square roots of −1, namely (the cosets of) X and −X, respectively. (The cosets of) 1 and X form a basis of R[X]/(X^2 + 1) as a real vector space, which means that each element of the extension field can be uniquely written as a linear combination in these two elements. Equivalently, elements of the extension field can be written as ordered pairs (a, b) of real numbers. Moreover, the above formulas for addition etc. correspond to the ones yielded by this abstract algebraic approach—the two definitions of the field C are said to be isomorphic (as fields). Together with the above-mentioned fact that C is algebraically closed, this also shows that C is an algebraic closure of R.
Matrix representation of complex numbers[edit]
Complex numbers a + bi can also be represented by 2 × 2 matrices that have the following form:
\begin{pmatrix} a & -b \\ b & a \end{pmatrix}.
The absolute value corresponds to the determinant:
|z|^2 = \det\begin{pmatrix} a & -b \\ b & a \end{pmatrix} = a^2 + b^2.
Though this representation of complex numbers with matrices is the most common, many other representations arise from matrices other than \bigl(\begin{smallmatrix}0 & -1 \\ 1 & 0\end{smallmatrix}\bigr) that square to the negative of the identity matrix. See the article on 2 × 2 real matrices for other representations of complex numbers.
Complex analysis[edit]
Main article: Complex analysis
Complex exponential and related functions[edit]
Equipped with the distance function d(z_1, z_2) = |z_1 - z_2|, the complex plane is a complete metric space, which notably includes the triangle inequality
|z_1 + z_2| \le |z_1| + |z_2|
for any two complex numbers z1 and z2. The exponential function satisfies |\exp(i\varphi)| = 1 for any real number φ; in particular,
\exp(i\pi) = -1.
For any nonzero complex number w, the equation
\exp(z) = w
has infinitely many solutions z, which leads to the multi-valued complex logarithm. Complex exponentiation z^\omega is defined as
z^\omega = \exp(\omega \ln z).
Holomorphic functions[edit]
Complex numbers have essential concrete applications in a variety of scientific and related areas such as signal processing, control theory, electromagnetism, fluid dynamics, quantum mechanics, cartography, and vibration analysis. Some applications of complex numbers are: control theory, improper integrals, fluid dynamics, and dynamic equations.
Electromagnetism and electrical engineering[edit]
Main article: Alternating current
In electrical engineering, the imaginary unit is denoted by j, to avoid confusion with I, which is generally in use to denote electric current, or, more particularly, i, which is generally in use to denote instantaneous electric current. To obtain the measurable quantity, the real part is taken:
v(t) = \operatorname{Re}\{V(t)\}.
The complex-valued signal V(t) is called the analytic representation of the real-valued, measurable signal v(t).[14]
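The 2 × 2 matrix picture is easy to test in code. Below is a small NumPy sketch (my illustration, not from the article) checking that the matrix map respects multiplication and that the determinant gives |z|²:

```python
import numpy as np

def M(z):
    """Represent z = a + bi as the real 2x2 matrix [[a, -b], [b, a]]."""
    a, b = z.real, z.imag
    return np.array([[a, -b], [b, a]])

z, w = complex(3, -4), complex(-1, 2)

# The map is a ring homomorphism: M(z*w) == M(z) @ M(w)
assert np.allclose(M(z * w), M(z) @ M(w))
# The determinant recovers the squared modulus: 3^2 + (-4)^2 = 25
assert np.isclose(np.linalg.det(M(z)), abs(z) ** 2)
# The matrix J = M(i) squares to minus the identity:
J = M(complex(0, 1))
assert np.allclose(J @ J, -np.eye(2))
print("matrix representation checks passed")
```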
Signal analysis[edit]
In signal analysis, a real signal x(t) is represented as the real part of a complex signal X(t):
x(t) = \operatorname{Re}\{X(t)\},
X(t) = A e^{i\omega t} = a e^{i\phi} e^{i\omega t} = a e^{i(\omega t + \phi)},
where ω represents the angular frequency and the complex number A encodes the phase and amplitude as explained above.
Quantum mechanics[edit]
The complex number field is intrinsic to the mathematical formulations of quantum mechanics, where complex Hilbert spaces provide the context for one such formulation that is convenient and perhaps most standard. The original foundation formulas of quantum mechanics—the Schrödinger equation and Heisenberg's matrix mechanics—make use of complex numbers.
Relativity[edit]
In special and general relativity, some formulas for the metric on spacetime become simpler if one takes the time component of the spacetime continuum to be imaginary. (This approach is no longer standard in classical relativity, but is used in an essential way in quantum field theory.) Complex numbers are essential to spinors, which are a generalization of the tensors used in relativity.
Geometry[edit]
Every triangle has a unique Steiner inellipse—an ellipse inside the triangle and tangent to the midpoints of the three sides of the triangle. The foci of a triangle's Steiner inellipse can be found as follows, according to Marden's theorem:[15][16] Denote the triangle's vertices in the complex plane as a = xA + yAi, b = xB + yBi, and c = xC + yCi. Write the cubic equation (x-a)(x-b)(x-c) = 0, take its derivative, and equate the (quadratic) derivative to zero. Marden's theorem says that the solutions of this equation are the complex numbers denoting the locations of the two foci of the Steiner inellipse.
Algebraic number theory[edit]
Construction of a regular pentagon using straightedge and compass.
Another example are the Gaussian integers, that is, numbers of the form x + iy, where x and y are integers, which can be used to classify sums of squares.
Analytic number theory[edit]
Analytic number theory studies numbers, often integers or rationals, by taking advantage of the fact that they can be regarded as complex numbers, in which analytic methods can be used. This is done by encoding number-theoretic information in complex-valued functions. For example, the Riemann zeta function ζ(s) is related to the distribution of prime numbers.
History[edit]
The earliest fleeting reference to square roots of negative numbers can perhaps be said to occur in the work of the Greek mathematician Hero of Alexandria in the 1st century AD, where in his Stereometrica he considers, apparently in error, the volume of an impossible frustum of a pyramid to arrive at the term \sqrt{81 - 144} = 3i\sqrt{7} in his calculations, although negative quantities were not conceived of in Hellenistic mathematics and Heron merely replaced it by its positive (\sqrt{144 - 81} = 3\sqrt{7}).[17]
The impetus to study complex numbers proper first arose in the 16th century when algebraic solutions for the roots of cubic and quartic polynomials were discovered by Italian mathematicians (see Niccolò Fontana Tartaglia, Gerolamo Cardano). It was soon realized that these formulas, even if one was only interested in real solutions, sometimes required the manipulation of square roots of negative numbers.
As an example, Tartaglia's formula for a cubic equation of the form x^3 = px + q[18] gives the solution to the equation x^3 = x as
\frac{1}{\sqrt{3}}\left(\left(\sqrt{-1}\right)^{1/3} + \left(\sqrt{-1}\right)^{-1/3}\right).
(Substituting the three cube roots of i in turn and simplifying recovers the three real solutions 1, −1 and 0, even though the intermediate expressions involve square roots of negative numbers.)
A further source of confusion was that the equation \sqrt{-1}^2=\sqrt{-1}\sqrt{-1}=-1 seemed to be capriciously inconsistent with the algebraic identity \sqrt{a}\sqrt{b}=\sqrt{ab}, which is valid for non-negative real numbers a and b, and which was also used in complex number calculations with one of a, b positive and the other negative. The incorrect use of this identity (and the related identity \frac{1}{\sqrt{a}}=\sqrt{\frac{1}{a}}) in the case when both a and b are negative even bedeviled Euler. This difficulty eventually led to the convention of using the special symbol i in place of \sqrt{-1} to guard against this mistake.[citation needed] Even so, Euler considered it natural to introduce students to complex numbers much earlier than we do today. In his elementary algebra text book, Elements of Algebra, he introduces these numbers almost at once and then uses them in a natural way throughout.
The common terms used in the theory are chiefly due to the founders. Argand called \cos \phi + i\sin \phi the direction factor, and r = \sqrt{a^2+b^2} the modulus; Cauchy (1828) called \cos \phi + i\sin \phi the reduced form (l'expression réduite) and apparently introduced the term argument; Gauss used i for \sqrt{-1}, introduced the term complex number for a + bi, and called a^2 + b^2 the norm. The expression direction coefficient, often used for \cos \phi + i\sin \phi, is due to Hankel (1867), and absolute value, for modulus, is due to Weierstrass.
Generalizations and related notions[edit]
The construction that extends the reals R to the complex numbers C can be iterated (the Cayley–Dickson construction), yielding the quaternions H and the octonions O. However, just as applying the construction to reals loses the property of ordering, more properties familiar from real and complex numbers vanish with increasing dimension. The quaternions are only a skew field, i.e. multiplication is not commutative: for some quaternions x, y one has x·y ≠ y·x. The multiplication of octonions fails, in addition to not being commutative, to be associative: for some octonions x, y, z one has (x·y)·z ≠ x·(y·z).
Reals, complex numbers, quaternions and octonions are all normed division algebras over R. However, by Hurwitz's theorem they are the only ones. The next step in the Cayley–Dickson construction, the sedenions, in fact fails to have this structure.
Hypercomplex numbers also generalize R, C, H, and O. For example, this notion contains the split-complex numbers, which are elements of the ring R[x]/(x^2 − 1) (as opposed to R[x]/(x^2 + 1)). In this ring, the equation a^2 = 1 has four solutions.
See also[edit]
References[edit]
1. ^ Charles P. McKeague (2011), Elementary Algebra, Brooks/Cole, p. 524, ISBN 978-0-8400-6421-9
2. ^ Burton (1995, p. 294)
4. ^ Aufmann, Richard N.; Barker, Vernon C.; Nation, Richard D. (2007), "Chapter P", College Algebra and Trigonometry (6th ed.), Cengage Learning, p. 66, ISBN 0-618-82515-0
5. ^ For example Ahlfors (1979).
6. ^ Brown, James Ward; Churchill, Ruel V. (1996), Complex Variables and Applications (6th ed.), New York: McGraw-Hill, p. 2, ISBN 0-07-912147-0: "In electrical engineering, the letter j is used instead of i."
7. ^ Katz (2004, §9.1.4)
8. ^
11. ^ Ahlfors (1979, p. 3)
12. ^ Kasana, H.S. (2005), "Chapter 1", Complex Variables: Theory and Applications (2nd ed.), PHI Learning Pvt. Ltd, p. 14, ISBN 81-203-2641-5
13. ^ Nilsson, James William; Riedel, Susan A. (2008), "Chapter 9", Electric Circuits (8th ed.), Prentice Hall, p. 338, ISBN 0-13-198925-1
15. ^ Kalman, Dan (2008a), "An Elementary Proof of Marden's Theorem", The American Mathematical Monthly 115: 330–38, ISSN 0002-9890
16. ^ Kalman, Dan (2008b), "The Most Marvelous Theorem in Mathematics", Journal of Online Mathematics and its Applications
17. ^ Nahin, Paul J. (2007), An Imaginary Tale: The Story of √−1, Princeton University Press, ISBN 978-0-691-12798-9, retrieved 20 April 2011
18. ^ In modern notation, Tartaglia's solution is based on expanding the cube of the sum of two cube roots: \left(\sqrt[3]{u} + \sqrt[3]{v}\right)^3 = 3 \sqrt[3]{uv} \left(\sqrt[3]{u} + \sqrt[3]{v}\right) + u + v. With x = \sqrt[3]{u} + \sqrt[3]{v}, p = 3 \sqrt[3]{uv}, q = u + v, u and v can be expressed in terms of p and q as u = q/2 + \sqrt{(q/2)^2-(p/3)^3} and v = q/2 - \sqrt{(q/2)^2-(p/3)^3}, respectively. Therefore, x = \sqrt[3]{q/2 + \sqrt{(q/2)^2-(p/3)^3}} + \sqrt[3]{q/2 - \sqrt{(q/2)^2-(p/3)^3}}. When (q/2)^2-(p/3)^3 is negative (casus irreducibilis), the second cube root should be regarded as the complex conjugate of the first one.
19. ^ Descartes, René (1954) [1637], La Géométrie | The Geometry of René Descartes with a facsimile of the first edition, Dover Publications, ISBN 0-486-60068-8, retrieved 20 April 2011
Mathematical references[edit]
Historical references[edit]
• Nahin, Paul J. (1998), An Imaginary Tale: The Story of √−1, Princeton University Press, ISBN 0-691-02795-1
• H.D. Ebbinghaus; H. Hermes; F. Hirzebruch; M. Koecher; K. Mainzer; J. Neukirch; A. Prestel; R. Remmert (1991), Numbers (hardcover ed.), Springer, ISBN 0-387-97497-0
Further reading[edit]
• Conway, John B., Functions of One Complex Variable I (Graduate Texts in Mathematics), Springer; 2nd edition (12 September 2005). ISBN 0-387-90328-3.
Is there any specific reason why so few consider the possibility that there might be something underlying the Schrödinger equation which is nonlinear? For instance, can't quantum gravity (QG) be nonlinear like general relativity (GR)?
6 Answers
There are nonlinear versions of the Schrödinger equation that are completely irrelevant to your question. These are like the Gross-Pitaevskii equation; they are nonlinear classical field equations that describe the flow of a self-interacting superfluid or BEC. These equations have nothing to do with the evolution of probability amplitudes, and I will not consider them further.
Probability theory is exactly linear
To understand why the concept of a nonlinear equation for probability amplitudes is not reasonable, and most likely completely impossible, consider first classical probability. Suppose I have a classical equation of motion of the form
$$ {dx\over dt} = V(x)$$
where the vector field V describes the future behavior as a flow on phase space, coordinatized by x. Now I can ask what is the evolution of a probability distribution $\rho(x)$, if I have incomplete knowledge of the initial position. The evolution equation is determined by considering the probability of ending in a little box surrounding x'. This probability is the sum, over all possible paths that lead to x', of the probability of being at the beginning of the path. This sum gives the continuity equation for the probability:
$${\partial \rho\over \partial t} = -V(x) \cdot {\partial \rho \over \partial x} - \rho(x)\,\nabla\cdot V $$
The point is that this equation is exactly linear, for fundamental reasons. It is impossible to even conceive of a nonlinear term in the evolution equation of a probability distribution, because the very definition of probability is lack of information, as represented by a linear space. Note that classical probability distributions are defined on the entire phase space, so they are enormous dimensional linear equations which completely include the nonlinear dynamics if you restrict to delta-function sharp probability distributions on x. The only difference with quantum mechanics is that there are no delta-function sharp distributions in the presence of non-commuting observables on all observables. Otherwise the two types of descriptions are similar.
Quantum mechanics mixes amplitudes and probabilities
If you have a quantum mechanical system, the wavefunction mixes with classical probability in a nontrivial way. If you consider a quantum system of two entangled spin-1/2 particles in a spin singlet, the projection of the wavefunction onto one of the two particles is a density matrix, which is a classical probability. This is extremely important to preserve, because the probabilities are nonlocally correlated, so if there were any way to extract the far-away component of the spin wavefunction, you would almost certainly be able to use this to signal faster than light, because you can collapse the wavefunction where you are, and the far-away density matrix would then not have a probability interpretation.
These types of nonlinear theories are so difficult to conceive that Weinberg suggested that quantum mechanics has absolutely no deformation of any kind which is consistent with no-signalling. Although this conjecture is not proved, to my knowledge, it is certainly plausible, and there are no nonlinear deformations which could serve as counterexamples (the link to this paper has just been posted as I write by Oda).
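The linearity claim above can be illustrated numerically. Here is a small sketch (my own, not from the answer; Python, a 1-D flow on a periodic grid, first-order upwind differencing) that evolves densities under the continuity equation and checks that evolving a mixture equals mixing the evolutions, i.e. that the equation is linear in ρ:

```python
import numpy as np

# 1-D continuity equation  d(rho)/dt = -d(V * rho)/dx  on a periodic grid,
# solved with a first-order upwind scheme (illustrative, not high accuracy).
N, L, dt, steps = 400, 10.0, 0.002, 500
x = np.linspace(0, L, N, endpoint=False)
dx = x[1] - x[0]
V = 1.0 + 0.5 * np.sin(2 * np.pi * x / L)   # smooth, everywhere-positive velocity

def step(rho):
    flux = V * rho
    return rho - dt / dx * (flux - np.roll(flux, 1))  # upwind for V > 0

def evolve(rho):
    for _ in range(steps):
        rho = step(rho)
    return rho

rho1 = np.exp(-(x - 3) ** 2)
rho2 = np.exp(-(x - 6) ** 2)
a, b = 0.3, 0.7

mixed_then_evolved = evolve(a * rho1 + b * rho2)
evolved_then_mixed = a * evolve(rho1) + b * evolve(rho2)
print("max difference:", np.max(np.abs(mixed_then_evolved - evolved_then_mixed)))
# ~1e-16: the evolution commutes with taking mixtures -- exact linearity in rho.
```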
It is wrong to think that there is any nonlinear deformation of the Schrödinger amplitude equation. Such modifications do not exist, and almost certainly cannot exist. If the world obeyed such an equation with a tiny nonlinearity, different Everett branches would become interacting, and we would be able to see the ghosts of our other selves, and other nonsense. It would rule out any form of hidden-variable interpretation of the wavefunction, and it would almost certainly lead to violations of no-signalling.
Thanks for your answer, and towards the end you touch upon exactly why I am asking. There is currently no Everettian interpretation that makes sense: it doesn't give us the Born rule, it doesn't give us an ontological structure, and when someone tries they run into problems with relativity etc. Hidden variables as in de Broglie-Bohm can indeed derive the Born rule, but there you have the nonrelativistic fact of the interpretation... People like Tim Palmer are working on "deeper" underlying theories as described here: physorg.com/news169725980.html Continued in next post – SchroedingersGhost Sep 7 '11 at 2:39
Gerard 't Hooft is also working on something deeper and more fundamental. And it seems this is the only way to restore determinism. So why couldn't it be nonlinear at a deeper level? – SchroedingersGhost Sep 7 '11 at 2:40
To fix Bohm for relativity, you can just consider a bosonic field as the variable which is doing the Bohmian motion. I haven't thought about this for fermionic fields, but I am sure it is doable. So it is not correct to say Bohm is nonrelativistic. I have read 't Hooft's stuff on quantum mechanics, and I have never been able to understand it (not for lack of trying). The key problem I have is that it is still amplitudes. A question focused on that would be good. If you use a 't Hooft style theory, the wavefunction (or density matrix) should be a derived quantity, obeying a linear equation. – Ron Maimon Sep 7 '11 at 3:13
I believe the standard Bohmian attitude towards fermionic fields is just to say that only the boson fields are "beables" - and that this is fine since the Higgs boson will still tell you where all the matter is. But Bohmian field theory's main problem with relativity is that it isn't covariant. As Bell pointed out in "Beables for quantum field theory", there is a preferred frame but it is experimentally undetectable, as in electrodynamics before Einstein. – Mitchell Porter Sep 7 '11 at 4:35
So both the likelihood of constructing a relativistic Bohmian interpretation and of discovering that the Schrödinger equation is nonlinear has vanished? – SchroedingersGhost Sep 7 '11 at 6:46
@Ron Maimon has given the canonical answer to this: the wavefunction is probabilities, and to preserve probabilities one must have a linear equation (indeed, also a norm-preserving evolution operator). I offer another viewpoint, in the style of how Einstein thought about relativity, i.e. two postulates. The postulate is that it is not possible to solve NP-complete problems in polynomial time. Abrams and Lloyd showed that if quantum mechanics were nonlinear at all, then this would be possible. Aaronson has a nice paper, the start of which references a large literature on why quantum mechanics has to be the way it is.
Thanks, yeah, I discovered the Aaronson paper right before you posted it and now feel that I have the answers to my questions.
Thanks – SchroedingersGhost Sep 8 '11 at 3:28
@genneth: the postulate could be stated more broadly: it should be impossible to compute anything with a physical system exponentially faster than we can compute it with a Turing machine. Why not state it this way? Because Shor showed us that quantum mechanics can do that! So this principle is not very compelling. If quantum computers can factor, what is the philosophical objection to them solving NP-complete problems too? – Ron Maimon Sep 11 '11 at 15:42
@Ron Maimon: you answered it yourself: because factorisation of integers is not NP-complete. And it's precisely because of this that one states NP-complete and not just exponentially faster than a Turing machine. In addition, the fact remains that we don't have non-trivial lower bounds on factorisation. – genneth Sep 11 '11 at 17:12
A classic paper on this is Weinberg's Testing Quantum Mechanics.
Thanks, I didn't see your post right away. I'll take a look at the paper now – SchroedingersGhost Sep 7 '11 at 2:54
I would advise the following reference. In this thesis, a variant of the nonlinear Schrödinger equation with a spatially inhomogeneous nonlinearity is studied. The thesis is divided into four parts. Following an introductory chapter on the nonlinear Schrödinger equation, the study of the inhomogeneous equation is divided into three blocks. In the first, results are given on the existence and stability of solutions of this equation, using various techniques such as variational methods, approximation techniques, dynamical systems, and eigenvalue problems. In the second block, once the existence of solutions is proven, analytical solutions of this equation are computed using different analytical methods, such as the method of Lie symmetries, similarity transformations, and so on. In the third and last block, some physical applications of this equation to Bose-Einstein condensates and nonlinear optics are discussed.
Ah indeed, I forgot: the reference is in Spanish – jormansandoval Sep 6 '11 at 23:57
Hehe thanks, yeah I don't speak or read Spanish, so that'll be a problem. However, can you tell me your personal opinion or what you conclude in that paper? – SchroedingersGhost Sep 7 '11 at 0:01
I will try to find you a better paper. – jormansandoval Sep 7 '11 at 0:56
@SchroedingersGhost First of all, the NLSE is a generalisation of the Schrödinger equation only on mathematical grounds, not physical -- it is a theory claiming that a gas of many bosons can be described by a single 3D object analogous to a wave function, driven by an equation that is mathematically equivalent to the SE of one particle plus this nonlinear term. And it is not even an ab initio theory -- it contains an empirical parameter. – mbq Sep 7 '11 at 11:53
In addition to the classic Weinberg paper cited above, there's this shorter version, and then follow-ups by Peres 1989 on how it violates the 2nd law, by Gisin on how it allows superluminal communications, and by Polchinski on how it would allow for an 'Everett phone'. More recently, there's this mathematical argument against nonlinear QM by Kapustin.
Although it's not a very satisfying (or informative) answer, nonlinear equations are a pain in the butt to solve, so we prefer to avoid them whenever possible. It makes sense that the first equation(s) developed to describe quantum systems would be linear, simply because they're the simplest.
That being said, there's no reason that the "true" theory underlying QM would have to be linear. In fact, for exactly the reason you pointed out (i.e. that general relativity is nonlinear), it's commonly believed that we will need some kind of nonlinear theory to properly explain the universe at its most basic level.
What? No! The wavefunction is exactly linear for much the same reasons that probability distribution functions are exactly linear. It is extraordinarily difficult, if not impossible, to deform QM with a nonlinearity. – Ron Maimon Sep 7 '11 at 2:09
David Zaslavsky, thanks for the answer. Could you elaborate a little on the last part of your post? Or perhaps you will do so in an answer to Ron Maimon, and I'll just watch and see if I get my answers from your disagreement – SchroedingersGhost Sep 7 '11 at 2:44
Quantum Mechanical Conceptual Problems
1. Apr 21, 2005 #1
Some straightforward problems I have encountered in QM; I'll post them gradually, otherwise it'd be a little long. Thanks.
1a) Wave/particle duality: A quantum wave, as formalized by Dirac and von Neumann, is a probability wave expressed by the Schrödinger equation, and thus implies a superposed state. A first conceptual problem I encounter here is that the superposition itself can never be observed directly. It can only be inferred, for example from the interference pattern of the double-slit experiment. Logically, the superposed wave function has undergone a "wave collapse" due to a certain form of measurement. Due to this wave collapse, the former probability wave will act as a single vector in Hilbert space, containing finite energy, thus being a point in Hilbert space. If we keep on using this definition of the quantum properties, I am very curious what exactly causes the wave collapse.
1b) The notion of the "particle"-being of the quantum as a single vector in Hilbert space with finite energy thus implies a major problem in explaining classical "rest mass". If a classically observable particle were fundamentally different in some way from the quantum vector, it could never generate an interference pattern (if we do a gedankenexperiment with two slits and bowling balls), just because the quantum properties of the probability wave needed for interference avoid the classical notion of matter. If, on the other hand, you don't make a difference on a fundamental level between a single quantum system and a classical system, what then "converts" your theoretical vector in Hilbert space into a classically observable system having a structural rest mass, in which E=mc² must play a major role?
1c) If indeed you don't make a fundamental difference between a single quantum state and a "classical" system (as being built up from single quantum states), then what contains the information to collapse the wave function of a whole system, thus creating a logically structured classical system?
Last edited: Apr 21, 2005
3. Apr 21, 2005 #2
I'm going to risk sounding like a broken record (broken CD?), but here goes. The consequences of superposition CAN be observed. When you make a measurement, you are forcing a definite state only on the corresponding commuting observable. It means that the observables that do NOT commute will still be represented by a superposition of states. For example, when you make a measurement of Lz, the other two orthogonal observables, Lx and Ly, REMAIN in an indefinite state. Lz does not commute with either Lx or Ly. This means that a measurement of Lz does NOT remove the superposition of states that may be describing Lx and Ly.
Thus, if something is in a superposition of states, if I can find a non-commuting observable, I can make that measurement and see if so-and-so values reflect the fact that there is some form of superposition going on. This is what has been observed in the Stony Brook/Delft SQUID experiments (I have made repeated references to this here and in my Journal entry). Thus, you can still detect the effect of such superposition without causing a total "collapse" of the wavefunction.
4. Apr 21, 2005 #3
Ok, so according to that paper, superposition is also noted in macroscopically distinct states. That's a clear and straight answer, thanks.
But that leaves most of my other questions unanswered. I still have conceptual problems with how a dimensionless "energy" vector such as the one posited by quantum mechanics can in some way or another be altered and converted into a "particle" observable in 3D, and above all, how such vectors can be combined in such a way that they form logically structured systems. In other words, how a single quantum state can be collapsed in some way or another and form, together with numerous other similarly collapsed functions, a "classical system". Thank you.
5. Apr 21, 2005 #4
Maybe this will help.
6. Apr 22, 2005 #5
There is no wave-particle duality. The superposition is just a description of the space/time distribution of some of the particle's or system's properties. These properties constitute a partial description of the particle and its fields. There is no real collapse. There is just redistribution of particle or system properties over space/time.
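As a numerical aside on the point made in post #2 (my illustration, not part of the thread; Python/NumPy with ħ = 1): using the ℓ = 1 angular momentum matrices, one can check that [Lx, Ly] = iLz and see that an Lz eigenstate remains a genuine superposition in the Lx basis.

```python
import numpy as np

# l = 1 angular momentum matrices in the |m> = |1>, |0>, |-1> basis (hbar = 1).
s = 1 / np.sqrt(2)
Lx = np.array([[0, s, 0], [s, 0, s], [0, s, 0]])
Ly = np.array([[0, -1j*s, 0], [1j*s, 0, -1j*s], [0, 1j*s, 0]])
Lz = np.diag([1.0, 0.0, -1.0])

# Canonical commutation relation [Lx, Ly] = i Lz:
assert np.allclose(Lx @ Ly - Ly @ Lx, 1j * Lz)

# Take the Lz eigenstate |m = +1>; measuring Lz gives +1 with certainty...
psi = np.array([1.0, 0.0, 0.0])
# ...but in the Lx eigenbasis the same state is a genuine superposition:
vals, vecs = np.linalg.eigh(Lx)
probs = np.abs(vecs.conj().T @ psi) ** 2
for v, p in zip(vals, probs):
    print(f"P(Lx = {v:+.0f}) = {p:.2f}")   # 0.25, 0.50, 0.25
```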
Green's function
From Wikipedia, the free encyclopedia
This article is about the classical approach to Green's functions. For a modern discussion, see fundamental solution.
In mathematics, a Green's function is the impulse response of an inhomogeneous differential equation defined on a domain, with specified initial conditions or boundary conditions. Via the superposition principle, the convolution of a Green's function with an arbitrary function f(x) on that domain is the solution to the inhomogeneous differential equation for f(x).
Green's functions are named after the British mathematician George Green, who first developed the concept in the 1830s. In the modern study of linear partial differential equations, Green's functions are studied largely from the point of view of fundamental solutions instead.
Under many-body theory, the term is also used in physics, specifically in quantum field theory, aerodynamics, aeroacoustics, electrodynamics and statistical field theory, to refer to various types of correlation functions, even those that do not fit the mathematical definition.
Definition and uses[edit]
A Green's function, G(x, s), of a linear differential operator L = L(x) acting on distributions over a subset of the Euclidean space Rn, at a point s, is any solution of
L\,G(x,s) = \delta(x-s), \qquad (1)
where \delta is the Dirac delta function. This property of a Green's function can be exploited to solve differential equations of the form
L\,u(x) = f(x). \qquad (2)
If the kernel of L is non-trivial, then the Green's function is not unique. However, in practice, some combination of symmetry, boundary conditions and/or other externally imposed criteria will give a unique Green's function. Also, Green's functions in general are distributions, not necessarily proper functions.
Green's functions are also useful tools in solving wave equations and diffusion equations. In quantum mechanics, the Green's function of the Hamiltonian is a key concept with important links to the concept of density of states. As a side note, the Green's function as used in physics is usually defined with the opposite sign; that is,
L\,G(x,s) = -\delta(x-s).
This definition does not significantly change any of the properties of the Green's function.
If the operator is translation invariant, that is, when L has constant coefficients with respect to x, then the Green's function can be taken to be a convolution operator, that is,
G(x,s) = G(x-s).
In this case, the Green's function is the same as the impulse response of linear time-invariant system theory.
See also: Spectral theory
Loosely speaking, if such a function G can be found for the operator L, then if we multiply the equation (1) for the Green's function by f(s), and then perform an integration in the s variable, we obtain:
\int L\,G(x,s) f(s) \, ds = \int \delta(x-s)f(s) \, ds = f(x).
The right-hand side is now given by the equation (2) to be equal to L u(x), thus:
L\,u(x)=\int L\,G(x,s) f(s) \, ds.
Because the operator L = L(x) is linear and acts on the variable x alone (not on the variable of integration s), we can take the operator L outside of the integration on the right-hand side, obtaining
L\,u(x)=L\left(\int G(x,s) f(s) \,ds\right),
which suggests
u(x)=\int G(x,s) f(s) \, ds. \qquad (3)
Thus, we can obtain the function u(x) through knowledge of the Green's function in equation (1) and the source term on the right-hand side in equation (2). This process relies upon the linearity of the operator L. In other words, the solution of equation (2), u(x), can be determined by the integration given in equation (3).
Although f(x) is known, this integration cannot be performed unless G is also known. The problem now lies in finding the Green's function G that satisfies equation (1). For this reason, the Green's function is also sometimes called the fundamental solution associated to the operator L.
Not every operator L admits a Green's function. A Green's function can also be thought of as a right inverse of L. Aside from the difficulties of finding a Green's function for a particular operator, the integral in equation (3) may be quite difficult to evaluate. However the method gives a theoretically exact result. This can be thought of as an expansion of f according to a Dirac delta function basis (projecting f over δ(x − s)) and a superposition of the solution on each projection. Such an integral equation is known as a Fredholm integral equation, the study of which constitutes Fredholm theory.
Green's functions for solving inhomogeneous boundary value problems[edit]
The primary use of Green's functions in mathematics is to solve non-homogeneous boundary value problems. In modern theoretical physics, Green's functions are also usually used as propagators in Feynman diagrams (and the phrase Green's function is often used for any correlation function).
Let L be the Sturm–Liouville operator, a linear differential operator of the form
L=\dfrac{d}{dx}\left[p(x) \dfrac{d}{dx}\right]+q(x)
and let D be the boundary conditions operator
Du= \begin{cases} \alpha_1 u'(0)+\beta_1 u(0) \\ \alpha_2 u'(l)+\beta_2 u(l). \end{cases}
Let f(x) be a continuous function in [0, l]. We shall also suppose that the problem
Lu = f, \quad Du = 0
is regular (i.e., only the trivial solution exists for the homogeneous problem).
Then there is one and only one solution u(x) that satisfies
Lu = f, \quad Du = 0,
and it is given by
u(x)=\int_0^\ell f(s) G(x,s) \, ds,
where G(x,s) is a Green's function satisfying the following conditions:
1. G(x,s) is continuous in x and s.
2. For x \ne s, L\,G(x,s)=0.
3. For s \ne 0, D\,G(x,s)=0.
4. Derivative "jump": G'(s_{+0}, s)-G'(s_{-0}, s)=1/p(s).
5. Symmetry: G(x,s) = G(s,x).
Advanced and retarded Green's functions[edit]
Sometimes the Green's function can be split into a sum of two functions, one with the variable positive (+) and the other with the variable negative (−). These are the advanced and retarded Green's functions, and when the equation under study depends on time, one of the parts is causal and the other anti-causal. In these problems usually the causal part is the important one.
Finding Green's functions[edit]
Eigenvalue expansions[edit]
If a differential operator L admits a set of eigenvectors \Psi_n(x) (i.e., a set of functions \Psi_n and scalars \lambda_n such that L \Psi_n=\lambda_n \Psi_n) that is complete, then it is possible to construct a Green's function from these eigenvectors and eigenvalues. "Complete" means that the set of functions \{\Psi_n\} satisfies the following completeness relation:
\delta(x-x')=\sum_{n=0}^\infty \Psi_n^\dagger(x) \Psi_n(x').
Then the following holds:
G(x, x')=\sum_{n=0}^\infty \dfrac{\Psi_n^\dagger(x) \Psi_n(x')}{\lambda_n},
where \dagger represents complex conjugation.
Applying the operator L to each side of this equation results in the completeness relation, which was assumed true. The general study of the Green's function written in the above form, and its relationship to the function spaces formed by the eigenvectors, is known as Fredholm theory.
There are several other methods for finding Green's functions, including the method of images, separation of variables, and Laplace transforms (Cole 2011).
Table of Green's functions[edit]
The following list gives an overview of Green's functions of frequently appearing differential operators, where \theta(t) is the Heaviside step function, r=\sqrt{x^2+y^2+z^2} and \rho=\sqrt{x^2+y^2}.[1]
• L = \partial_t + \gamma : G = \theta(t)\,e^{-\gamma t}
• L = \left(\partial_t + \gamma\right)^2 : G = \theta(t)\,t\,e^{-\gamma t}
• L = \partial_t^2 + 2\gamma\partial_t + \omega_0^2 : G = \theta(t)\,e^{-\gamma t}\,\frac{1}{\omega}\sin(\omega t), with \omega=\sqrt{\omega_0^2-\gamma^2} (one-dimensional damped harmonic oscillator)
• L = \Delta_\text{2D} = \partial_x^2 + \partial_y^2 : G = \frac{1}{2\pi}\ln \rho
• L = \nabla^2 = \partial_x^2 + \partial_y^2 + \partial_z^2 = \Delta : G = \frac{-1}{4\pi r} (Poisson equation)
• Helmholtz operator L = \Delta + k^2 : G = \frac{-e^{-ikr}}{4\pi r} (stationary 3D Schrödinger equation for a free particle)
• D'Alembert operator L = \square = \frac{1}{c^2}\partial_t^2 - \Delta : G = \frac{\delta(t-\frac{r}{c})}{4\pi r} (wave equation)
• L = \partial_t - k\Delta : G = \theta(t)\left(\frac{1}{4\pi kt}\right)^{3/2} e^{-r^2/4kt} (diffusion)
Green's functions for the Laplacian[edit]
Green's functions for linear differential operators involving the Laplacian may be readily put to use using the second of Green's identities.
To derive Green's theorem, begin with the divergence theorem (otherwise known as Gauss's theorem):
\int_V \nabla \cdot \vec A\; dV=\int_S \vec A \cdot d\hat\sigma.
Let \vec A=\phi\nabla\psi-\psi\nabla\phi and substitute into Gauss' law. Compute \nabla\cdot\vec A and apply the product rule for the \nabla operator:
\nabla\cdot\vec A = \nabla\cdot(\phi\nabla\psi - \psi\nabla\phi) = (\nabla\phi)\cdot(\nabla\psi) + \phi\nabla^2\psi - (\nabla\phi)\cdot(\nabla\psi) - \psi\nabla^2\phi = \phi\nabla^2\psi - \psi\nabla^2\phi.
Plugging this into the divergence theorem produces Green's theorem:
\int_V (\phi\nabla^2\psi-\psi\nabla^2\phi)\, dV=\int_S (\phi\nabla\psi-\psi\nabla\phi)\cdot d\hat\sigma.
Suppose that the linear differential operator L is the Laplacian, \nabla^2, and that there is a Green's function G for the Laplacian. The defining property of the Green's function still holds:
L\,G(x,x')=\nabla^2 G(x,x')=\delta(x-x').
Let \psi=G in Green's theorem. Then:
\int_V \left[ \phi(x') \delta(x-x')-G(x,x') \nabla^2\phi(x')\right]\, d^3x' = \int_S \left[\phi(x')\nabla' G(x,x')-G(x,x')\nabla'\phi(x')\right] \cdot d\hat\sigma'.
Using this expression, it is possible to solve Laplace's equation \nabla^2\phi(x)=0 or Poisson's equation \nabla^2\phi(x)=-\rho(x), subject to either Neumann or Dirichlet boundary conditions. In other words, we can solve for \phi(x) everywhere inside a volume where either (1) the value of \phi(x) is specified on the bounding surface of the volume (Dirichlet boundary conditions), or (2) the normal derivative of \phi(x) is specified on the bounding surface (Neumann boundary conditions).
Suppose the problem is to solve for \phi(x) inside the region. Then the integral
\int_V \phi(x')\delta(x-x')\, d^3x'
reduces to simply \phi(x) due to the defining property of the Dirac delta function, and we have:
\phi(x)=\int_V G(x,x') \rho(x')\, d^3x'+\int_S \left[\phi(x')\nabla' G(x,x')-G(x,x')\nabla'\phi(x')\right] \cdot d\hat\sigma'.
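As a sanity check on the damped-oscillator entry in the list above (my own sketch, using SymPy; not from the article), one can verify that G(t) = e^{-γt} sin(ωt)/ω solves the homogeneous equation for t > 0 and carries the unit jump in its first derivative at t = 0 that reproduces the delta function:

```python
import sympy as sp

t, gamma, w0 = sp.symbols('t gamma omega_0', positive=True)
w = sp.sqrt(w0**2 - gamma**2)

# Candidate Green's function for t > 0 (the theta(t) factor handled by hand):
G = sp.exp(-gamma * t) * sp.sin(w * t) / w

# 1) Homogeneous equation for t > 0:  G'' + 2*gamma*G' + w0^2 * G = 0
residual = sp.diff(G, t, 2) + 2 * gamma * sp.diff(G, t) + w0**2 * G
assert sp.simplify(residual) == 0

# 2) Matching at t = 0: G(0) = 0 and G'(0+) = 1, so the jump in G' across
#    t = 0 is 1, which makes (d^2/dt^2 + 2*gamma*d/dt + w0^2)[theta(t) G] = delta(t).
assert G.subs(t, 0) == 0
assert sp.simplify(sp.diff(G, t).subs(t, 0)) == 1
print("damped-oscillator Green's function verified")
```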
This form expresses the well-known property of harmonic functions that if the value or normal derivative is known on a bounding surface, then the value of the function inside the volume is known everywhere.
In electrostatics, \phi(x) is interpreted as the electric potential, \rho(x) as electric charge density, and the normal derivative \nabla\phi(x')\cdot d\hat\sigma' as the normal component of the electric field.
If the problem is to solve a Dirichlet boundary value problem, the Green's function should be chosen such that G(x,x') vanishes when either x or x′ is on the bounding surface. Thus only one of the two terms in the surface integral remains.
If the problem is to solve a Neumann boundary value problem, one might expect to choose the Green's function such that its normal derivative vanishes on the bounding surface, as this would seem to be the most logical choice. (See Jackson, J.D., Classical Electrodynamics, page 39.) However, application of Gauss's theorem to the differential equation defining the Green's function yields
\int_S \nabla' G(x,x') \cdot d\hat\sigma' = \int_V \nabla'^2 G(x,x')\, d^3x' = \int_V \delta(x-x')\, d^3x' = 1,
meaning the normal derivative of G(x,x') cannot vanish on the whole surface, because it must integrate to 1 over the surface. (Again, see Jackson, J.D., Classical Electrodynamics, page 39 for this and the following argument.)
The simplest form the normal derivative can take is that of a constant, namely 1/S, where S is the surface area of the surface. The surface term in the solution becomes
\int_S \phi(x')\nabla' G(x,x')\cdot d\hat\sigma' = \langle\phi\rangle_S,
where \langle\phi\rangle_S is the average value of the potential on the surface. This number is not known in general, but is often unimportant, as the goal is often to obtain the electric field given by the gradient of the potential, rather than the potential itself.
With no boundary conditions, the Green's function for the Laplacian (Green's function for the three-variable Laplace equation) is
G(x,x') = \frac{1}{|x-x'|}.
Supposing that the bounding surface goes out to infinity, and plugging in this expression for the Green's function, this gives the familiar expression for electric potential in terms of electric charge density as
\phi(x)=\int_V \dfrac{\rho(x')}{|x-x'|} \, d^3x'.
Example. Find the Green's function for the following problem:
Lu = u'' + k^2 u = f(x), \qquad u(0) = 0, \quad u\left(\tfrac{\pi}{2k}\right) = 0.
First step: The Green's function for the linear operator at hand is defined as the solution to
g''(x,s) + k^2 g(x,s) = \delta(x-s).
If x \ne s, then the delta function gives zero, and the general solution is
g(x,s)=c_1 \cos kx+c_2 \sin kx.
For x < s, the boundary condition at x=0 implies
g(0,s)=c_1 \cdot 1+c_2 \cdot 0=0, \quad c_1 = 0,
if x < s and s \ne \tfrac{\pi}{2k}. For x > s, writing the general solution as g = c_3 \cos kx + c_4 \sin kx, the boundary condition at x=\tfrac{\pi}{2k} implies
g\left(\tfrac{\pi}{2k},s\right) = c_3 \cdot 0+c_4 \cdot 1=0, \quad c_4 = 0.
The condition g(0,s)=0 is skipped for this branch for similar reasons. To summarize the results thus far:
g(x,s)= \begin{cases} c_2 \sin kx, & \text{for } x<s, \\ c_3 \cos kx, & \text{for } s<x. \end{cases}
Second step: The next task is to determine c_2 and c_3.
Ensuring continuity in the Green's function at x=s implies
c_2 \sin ks=c_3 \cos ks.
One can ensure the proper discontinuity in the first derivative by integrating the defining differential equation from x=s-\epsilon to x=s+\epsilon and taking the limit as \epsilon goes to zero:
c_3 \cdot \left(-k \sin ks\right)-c_2 \cdot \left(k \cos ks\right)=1.
The two (dis)continuity equations can be solved for c_2 and c_3 to obtain
c_2 = -\frac{\cos ks}{k}, \qquad c_3 = -\frac{\sin ks}{k}.
So the Green's function for this problem is:
G(x,s)= \begin{cases} -\frac{\cos ks}{k} \sin kx, & x<s, \\ -\frac{\sin ks}{k} \cos kx, & s<x. \end{cases}
Further examples[edit]
For the two-dimensional Laplacian on the quarter-plane x > 0, y > 0 with Dirichlet boundary conditions, the method of images gives
G(x, y, x_0, y_0) = \dfrac{1}{2\pi} \left[\ln\sqrt{(x-x_0)^2+(y-y_0)^2} - \ln\sqrt{(x+x_0)^2+(y-y_0)^2} - \ln\sqrt{(x-x_0)^2+(y+y_0)^2} + \ln\sqrt{(x+x_0)^2+(y+y_0)^2}\right].
See also[edit]
References[edit]
• S. S. Bayin (2006), Mathematical Methods in Science and Engineering, Wiley, Chapters 18 and 19.
• Eyges, Leonard, The Classical Electromagnetic Field, Dover Publications, New York, 1972. ISBN 0-486-63947-9. (Chapter 5 contains a very readable account of using Green's functions to solve boundary value problems in electrostatics.)
• G. B. Folland, Fourier Analysis and Its Applications, Wadsworth and Brooks/Cole Mathematics Series.
• K. D. Cole, J. V. Beck, A. Haji-Sheikh, and B. Litkouhi, "Methods for obtaining Green's functions", Heat Conduction Using Green's Functions, Taylor and Francis, 2011, pp. 101–148. ISBN 978-1-4398-1354-6
• Green, G., An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism (Nottingham, England: T. Wheelhouse, 1828), pages 10–12.
1. ^ Some examples taken from Schulz, Hermann: Physik mit Bleistift. Frankfurt am Main: Deutsch, 2001. ISBN 3-8171-1661-6 (German)
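To close the loop on the worked example, here is a short numerical check (my own sketch in Python/NumPy, not from the article): build u(x) = ∫ G(x,s) f(s) ds with the Green's function just derived and confirm that u'' + k²u reproduces f and that the boundary conditions hold.

```python
import numpy as np

k = 2.0
a, b = 0.0, np.pi / (2 * k)
s = np.linspace(a, b, 2001)
ds = s[1] - s[0]

def G(x, sv):
    # Green's function derived above for u'' + k^2 u = f, u(a) = u(b) = 0.
    return np.where(x < sv,
                    -np.cos(k * sv) * np.sin(k * x) / k,
                    -np.sin(k * sv) * np.cos(k * x) / k)

f = lambda xv: np.exp(xv)          # arbitrary smooth source term (assumed for the test)
x = np.linspace(a, b, 2001)
u = np.array([np.trapz(G(xi, s) * f(s), s) for xi in x])

# Check boundary conditions and the ODE (second-order finite differences):
upp = (u[2:] - 2 * u[1:-1] + u[:-2]) / ds**2
residual = upp + k**2 * u[1:-1] - f(x[1:-1])
print("u(a), u(b):", u[0], u[-1])                        # both ~ 0
print("max |u'' + k^2 u - f|:", np.abs(residual).max())  # small discretization error
```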
Lecture — A Century of Controversy over the Foundations of Mathematics [Originally published in C. S. Calude and G. Paun, Finite versus Infinite, Springer-Verlag, 2000, pp. 75-100.] This 1999 talk at UMass-Lowell was my last major lecture of the previous century, and it summarizes that century's work on the foundations of mathematics, discusses connections with physics, and proposes a program of research for the next century. Not to be confused with another talk with the same title, my Distinguished Lecture given at Carnegie-Mellon University in 2000. Prof. Ray Gumb We're happy to have Gregory Chaitin from IBM's Thomas J. Watson Research Lab to speak with us today. He's a world-renowned figure, and the developer as a teenager of the theory of algorithmic information. And his newest book which is accessible to undergraduates, and I hope will be of great appeal to our undergraduates in particular, is available on the Web and comes with LISP programs to run with it. It's kind of like a combination of mathematics, computer science, and philosophy. Greg--- Greg Chaitin Thanks a lot! Okay, a great pleasure to be here! [Applause] Thank you very much! I'm awfully sorry to be late! You've got a beautiful town here! Those old brick buildings and the canals are really breathtaking! And thanks for being here for this talk! It's such a beautiful spring day---I think one has to be crazy to be indoors! Okay, I'd like to talk about some crazy stuff. The general idea is that sometimes ideas are very powerful. I'd like to talk about theory, about the computer as a concept, a philosophical concept. We all know that the computer is a very practical thing out there in the real world! It pays for a lot of our salaries, right? But what people don't remember as much is that really---I'm going to exaggerate, but I'll say it---the computer was invented in order to help to clarify a question about the foundations of mathematics, a philosophical question about the foundations of mathematics. Now that sounds absurd, but there's some truth in it. There are actually lots of threads that led to the computer, to computer technology, which come from mathematical logic and from philosophical questions about the limits and the power of mathematics. The computer pioneer Turing was inspired by these questions. Turing was trying to settle a question of Hilbert's having to do with the philosophy of mathematics, when he invented a thing called the Turing machine, which is a mathematical model of a toy computer. Turing did this before there were any real computers, and then he went on to actually build computers. The first computers in England were built by Turing. And von Neumann, who was instrumental in encouraging the creation of computers as a technology in the United States, (unfortunately as part of a war effort, as part of the effort to build the atom bomb), he knew Turing's work very well. I learned of Turing by reading von Neumann talking about the importance of Turing's work. So what I said about the origin of the computer isn't a complete lie, but it is a forgotten piece of intellectual history. In fact, let me start off with the final conclusion of this talk... In a way, a lot of this came from work of Hilbert. Hilbert, who was a very well-known German mathematician around the beginning of this century, had proposed formalizing completely all of mathematics, all of mathematical reasoning---deduction. And this proposal of his is a tremendous, glorious failure! In a way, it's a spectacular failure. 
Because it turned out that you couldn't formalize mathematical reasoning. That's a famous result of Gödel's that I'll tell you about, done in 1931. But in another way, Hilbert was really right, because formalism has been the biggest success of this century. Not for reasoning, not for deduction, but for programming, for calculating, for computing, that's where formalism has been a tremendous success. If you look at work by logicians at the beginning of this century, they were talking about formal languages for reasoning and deduction, for doing mathematics and symbolic logic, but they also invented some early versions of programming languages. And these are the formalisms that we all live with and work with now all the time! They're a tremendously important technology. So formalism for reasoning did not work. Mathematicians don't reason in formal languages. But formalism for computing, programming languages, are, in a way, what was right in the formalistic vision that goes back to Hilbert at the beginning of this century, which was intended to clarify epistemological, philosophical questions about mathematics. So I'm going to tell you this story, which has a very surprising outcome. I'm going to tell you this surprising piece of intellectual history. The Crisis in Set Theory So let me start roughly a hundred years ago, with Cantor... Georg Cantor The point is this. Normally you think that pure mathematics is static, unchanging, perfect, absolutely correct, absolute truth... Right? Physics may be tentative, but math, things are certain there! Well, it turns out that's not exactly the case. In this century, in this past century there was a lot of controversy over the foundations of mathematics, and how you should do math, and what's right and what isn't right, and what's a valid proof. Blood was almost shed over this... People had terrible fights and ended up in insane asylums over this. It was a fairly serious controversy. This isn't well known, but I think it's an interesting piece of intellectual history. More people are aware of the controversy over relativity theory. Einstein was very controversial at first. And then of the controversy over quantum mechanics... These were the two revolutions in the physics of this century. But what's less well known is that there were tremendous revolutions and controversies in pure mathematics too. I'd like to tell you about this. It really all starts in a way from Cantor. Georg Cantor What Cantor did was to invent a theory of infinite sets. Infinite Sets He did it about a hundred years ago; it's really a little more than a hundred years ago. And it was a tremendously revolutionary theory, it was extremely adventurous. Let me tell you why. Cantor said, let's take 1, 2, 3, ... 1, 2, 3, ... We've all seen these numbers, right?! And he said, well, let's add an infinite number after this. 1, 2, 3, ... ω He called it ω, lowercase Greek omega. And then he said, well, why stop here? Let's go on and keep extending the number series. 1, 2, 3, ... ω, ω+1, ω+2, ... ω plus one, ω plus two, then you go on for an infinite amount of time. And what do you put afterwards? Well, two ω? (Actually, it's ω times two for technical reasons.) 1, 2, 3, ... ω ... 2ω Then two ω plus one, two ω plus two, two ω plus three, two ω plus four... 1, 2, 3, ... 2ω, 2ω+1, 2ω+2, 2ω+3, 2ω+4, ... Then you have what? Three ω, four ω, five ω, six ω, ... 1, 2, 3, ... 3ω ... 4ω ... 5ω ... 6ω ... Well, what will come after all of these? ω squared! 
Then you keep going, ω squared plus one, ω squared plus six ω plus eight... Okay, you keep going for a long time, and the next interesting thing after ω squared will be? ω cubed! And then you have ω to the fourth, ω to the fifth, and much later? 1, 2, 3, ... ω ... ω^2 ... ω^3 ... ω^4 ... ω^5 ω to the ω! And then much later it's ω to the ω to the ω an infinite number of times! 1, 2, 3, ... ω ... ω^2 ... ω^ω ... ω^ω^ω^... I think this is usually called epsilon nought. ε₀ = ω^ω^ω^... It's a pretty mind-boggling number! After this point things get a little complicated... And this was just one little thing that Cantor did as a warm-up exercise for his main stuff, which was measuring the size of infinite sets! It was spectacularly imaginative, and the reactions were extreme. Some people loved what Cantor was doing, and some people thought that he should be put in an insane asylum! In fact he had a nervous breakdown as a result of those criticisms. Cantor's work was very influential, leading to point-set topology and other abstract fields in the mathematics of the twentieth century. But it was also very controversial. Some people said, it's theology, it's not real, it's a fantasy world, it has nothing to do with serious math! And Cantor never got a good position and he spent his entire life at a second-rate institution. Bertrand Russell's Logical Paradoxes Then things got even worse, due mainly, I think, to Bertrand Russell, one of my childhood heroes. Bertrand Russell Bertrand Russell was a British philosopher who wrote beautiful essays, very individualistic essays, and I think he got the Nobel prize in literature for his wonderful essays. Bertrand Russell started off as a mathematician and then degenerated into a philosopher and finally into a humanist; he went downhill rapidly! [Laughter] Anyway, Bertrand Russell discovered a whole bunch of disturbing paradoxes, first in Cantor's theory, then in logic itself. He found cases where reasoning that seemed to be okay led to contradictions. And I think that Bertrand Russell was tremendously influential in spreading the idea that there was a serious crisis and that these contradictions had to be resolved somehow. The paradoxes that Russell discovered attracted a great deal of attention, but strangely enough only one of them ended up with Russell's name on it! For example, one of these paradoxes is called the Burali-Forti paradox, because when Russell published it he stated in a footnote that it had been suggested to him by reading a paper by Burali-Forti. But if you look at the paper by Burali-Forti, you don't see the paradox! But I think that the realization that something was seriously wrong, that something was rotten in the state of Denmark, that reasoning was bankrupt and something had to be done about it pronto, is due principally to Russell. Alejandro Garciadiego, a Mexican historian of math, has written a book which suggests that Bertrand Russell really played a much bigger role in this than is usually realized: Russell played a key role in formulating not only the Russell paradox, which bears his name, but also the Burali-Forti paradox and the Berry paradox, which don't. Russell was instrumental in discovering them and in realizing their significance. He told everyone that they were important, that they were not just childish word-play. Anyway, the best known of these paradoxes is called the Russell paradox nowadays. You consider the set of all sets that are not members of themselves.
And then you ask, ``Is this set a member of itself or not?'' If it is a member of itself, then it shouldn't be, and vice versa! It's like the barber in a small, remote town who shaves all the men in the town who don't shave themselves. That seems pretty reasonable, until you ask ``Does the barber shave himself?'' He shaves himself if and only if he doesn't shave himself, so he can't apply that rule to himself! Now you may say, ``Who cares about this barber!'' It was a silly rule anyway, and there are always exceptions to the rule! But when you're dealing with a set, with a mathematical concept, it's not so easy to dismiss the problem. Then it's not so easy to shrug when reasoning that seems to be okay gets you into trouble! By the way, the Russell paradox is a set-theoretic echo of an earlier paradox, one that was known to the ancient Greeks and is called the Epimenides paradox by some philosophers. That's the paradox of the liar: ``This statement is false!'' ``What I'm now saying is false, it's a lie.'' Well, is it false? If it's false, if something is false, then it doesn't correspond with reality. So if I'm saying this statement is false, that means that it's not false---which means that it must be true. But if it's true, and I'm saying it's false, then it must be false! So whatever you do you're in trouble! So you can't get a definite logical truth value, everything flip flops, it's neither true nor false. And you might dismiss this and say that these are just meaningless word games, that it's not serious. But Kurt Gödel later built his work on these paradoxes, and he had a very different opinion. Kurt Gödel He said that Bertrand Russell made the amazing discovery that our logical intuitions, our mathematical intuitions, are self-contradictory, they're inconsistent! So Gödel took Russell very seriously, he didn't think that it was all a big joke. Now I'd like to move on and tell you about David Hilbert's rescue plan for dealing with the crisis provoked by Cantor's set theory and by Russell's paradoxes. David Hilbert David Hilbert to the Rescue with Formal Axiomatic Theories One of the reactions to the crisis provoked by Cantor's theory of infinite sets, one of the reactions was, well, let's escape into formalism. If we get into trouble with reasoning that seems okay, then one solution is to use symbolic logic, to create an artificial language where we're going to be very careful and say what the rules of the game are, and make sure that we don't get the contradictions. Right? Because here's a piece of reasoning that looks okay but it leads to a contradiction. Well, we'd like to get rid of that. But natural language is ambiguous---you never know what a pronoun refers to. So let's create an artificial language and make things very, very precise and make sure that we get rid of all the contradictions! So this was the notion of formalism. Now I don't think that Hilbert actually intended that mathematicians should work in such a perfect artificial language. It would sort of be like a programming language, but for reasoning, for doing mathematics, for deduction, not for computing, that was Hilbert's idea. But he never expressed it that way, because there were no programming languages back then. So what are the ideas here? First of all, Hilbert stressed the importance of the axiomatic method. Axiomatic Method The notion of doing mathematics that way goes back to the ancient Greeks and particularly to Euclidean geometry, which is a beautifully clear mathematical system. 
But that's not enough; Hilbert was also saying that we should use symbolic logic. Symbolic Logic And symbolic logic also has a long history: Leibniz, Boole, Frege, Peano... These mathematicians wanted to make reasoning like algebra. Here's how Leibniz put it: He talked about avoiding disputes---and he was probably thinking of political disputes and religious disputes---by calculating who was right instead of arguing about it! Instead of fighting, you should be able to sit down at a table and say, ``Gentlemen, let us compute!'' What a beautiful fantasy!... So the idea was that mathematical logic should be like arithmetic and you should be able to just grind out a conclusion, no uncertainty, no questions of interpretation. By using an artificial math language with a symbolic logic you should be able to achieve perfect rigor. You've heard the word ``rigor'', as in ``rigor mortis'', used in mathematics? [Laughter] It's not that rigor! But the idea is that an argument is either completely correct or else it's total nonsense, with nothing in between. And a proof that is formulated in a formal axiomatic system should be absolutely clear, it should be completely sharp! In other words, Hilbert's idea was that we should be completely precise about what the rules of the game are, and about the definitions, the elementary concepts, and the grammar and the language---all the rules of the game---so that we can all agree on how mathematics should be done. In practice it would be too much work to use such a formal axiomatic system, but it would be philosophically significant because it would settle once and for all the question of whether a piece of mathematical reasoning is correct or incorrect. Okay? So Hilbert's idea seemed fairly straightforward. He was just following the axiomatic and the formal traditions in mathematics. Formal as in formalism, as in using formulas, as in calculating! He wanted to go all the way, to the very end, and formalize all of mathematics, but it seemed like a fairly reasonable plan. Hilbert wasn't a revolutionary, he was a conservative... The amazing thing, as I said before, was that it turned out that Hilbert's rescue plan could not work, that it couldn't be done, that it was impossible to make it work! Hilbert was just following the whole mathematics tradition up to that point: the axiomatic method, symbolic logic, formalism... He wanted to avoid the paradoxes by being absolutely precise, by creating a completely formal axiomatic system, an artificial language, that avoided the paradoxes, that made them impossible, that outlawed them! And most mathematicians probably thought that Hilbert was right, that of course you could do this---it's just the notion that in mathematics things are absolutely clear, black or white, true or false. So Hilbert's idea was just an extreme, an exaggerated version of the normal notion of what mathematics is all about: the idea that we can decide and agree on the rules of the game, all of them, once and for all. The big surprise is that it turned out that this could not be done. Hilbert turned out to be wrong, but wrong in a tremendously fruitful way, because he had asked a very good question. In fact, by asking this question he actually created an entirely new field of mathematics called metamathematics. Metamathematics is mathematics turned inward, it's an introspective field of math in which you study what mathematics can achieve or can't achieve. What is Metamathematics? That's my field---metamathematics!
In it you look at mathematics from above, and you use mathematical reasoning to discuss what mathematical reasoning can or cannot achieve. The basic idea is this: Once you entomb mathematics in an artificial language à la Hilbert, once you set up a completely formal axiomatic system, then you can forget that it has any meaning and just look at it as a game that you play with marks on paper that enables you to deduce theorems from axioms. You can forget about the meaning of this game, the game of mathematical reasoning, it's just combinatorial play with symbols! There are certain rules, and you can study these rules and forget that they have any meaning! What things do you look at when you study a formal axiomatic system from above, from the outside? What kind of questions do you ask? Well, one question you can ask is if you can prove that ``0 equals 1'' ? 0 = 1 ? Hopefully you can't, but how can you be sure? It's hard to be sure! And for any question A, for any affirmation A, you can ask if it's possible to settle the matter by either proving A or the opposite of A, not A. A ?     ¬A ? That's called completeness. A formal axiomatic system is complete if you can settle any question A, either by proving it (A), or by proving that it's false (¬A). That would be nice! Another interesting question is if you can prove an assertion (A) and you can also prove the contrary assertion (¬A). That's called inconsistency, and if that happens it's very bad! Consistency is much better than inconsistency! So what Hilbert did was to have the remarkable idea of creating a new field of mathematics whose subject would be mathematics itself. But you can't do this until you have a completely formal axiomatic system. Because as long as any ``meaning'' is involved in mathematical reasoning, it's all subjective. Of course, the reason we do mathematics is because it has meaning, right? But if you want to be able to study mathematics, the power of mathematics, using mathematical methods, you have to ``desiccate'' it to ``crystallize out'' the meaning and just be left with an artificial language with completely precise rules, in fact, with one that has a mechanical proof-checking algorithm. Proof-Checking Algorithm The key idea that Hilbert had was to envision this perfectly desiccated or crystallized axiomatic system for all of mathematics, in which the rules would be so precise that if someone had a proof there would be a referee, there would be a mechanical procedure, which would either say ``This proof obeys the rules'' or ``This proof is wrong; it's breaking the rules''. That's how you get the criterion for mathematical truth to be completely objective and not to depend on meaning or subjective understanding: by reducing it all to calculation. Somebody says ``This is a proof'', and instead of having to submit it to a human referee who takes two years to decide if the paper is correct, instead you just give it to a machine. And the machine eventually says ``This obeys the rules'' or ``On line 4 there's a misspelling'' or ``This thing on line 4 that supposedly follows from line 3, actually doesn't''. And that would be the end, no appeal! The idea was not that mathematics should actually be done this way. I think that that's calumny, that's a false accusation. I don't think that Hilbert really wanted to turn mathematicians into machines. But the idea was that if you could take mathematics and do it this way, then you could use mathematics to study the power of mathematics. 
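To make the mechanical-referee idea concrete, here is a toy sketch in Python. It is not anything Hilbert wrote; one axiom and two invented string-rewriting rules stand in for the axioms and rules of inference, and the checker accepts a proof only if every line is the axiom or follows from an earlier line by a rule:

    # A toy formal system, purely for illustration: the symbols mean nothing,
    # and the referee never needs them to mean anything.
    AXIOM = "MA"

    def one_step(line):
        """All strings derivable from `line` by a single rule application."""
        yield line + "T"                    # rule 1: you may append a T
        yield line.replace("A", "AA", 1)    # rule 2: you may double the first A

    def check_proof(proof):
        """Mechanical referee: accept iff every line is the axiom or follows
        from some earlier line by one rule. No interpretation involved."""
        for i, line in enumerate(proof):
            if line != AXIOM and not any(
                    line in one_step(earlier) for earlier in proof[:i]):
                return False                # "the step on line i breaks the rules"
        return True

    print(check_proof(["MA", "MAT", "MAAT"]))   # True: axiom, rule 1, rule 2
    print(check_proof(["MA", "MTA"]))           # False: no rule produces MTA

Everything the checker does is blind symbol manipulation, which is exactly what lets mathematics itself become an object of mathematical study.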
And that is the important new thing that Hilbert came up with. Hilbert wanted to do this in order to reaffirm the traditional view of mathematics, in order to justify himself... He proposed having one set of axioms and this formal language, this formal system, which would include all of mathematical reasoning, that we could all agree on, and that would be perfect! We'd then know all the rules of the game. And he just wanted to use metamathematics to show that this formal axiomatic system was good---that it was consistent and that it was complete---in order to convince people to accept it. This would have settled once and for all the philosophical questions ``When is a proof correct?'' and ``What is mathematical truth?'' Like this everyone could agree on whether a mathematical proof is correct or not. And in fact we used to think that this was an objective thing. In other words, Hilbert's just saying, if it's really objective, if there's no subjective element, and a mathematical proof is either true or false, well, then there should be certain rules for deciding that and it shouldn't depend, if you fill in all the details, it shouldn't depend on interpretation. It's important to fill in all the details---that's the idea of mathematical logic, to ``atomize'' mathematical reasoning into such tiny steps that nothing is left to the imagination, nothing is left out! And if nothing is left out, then a proof can be checked automatically, that was Hilbert's point, that's really what symbolic logic is all about. And Hilbert thought that he was actually going to be able to do this. He was going to formalize all of mathematics, and we were all going to agree that these were in fact the rules of the game. Then there'd be just one version of mathematical truth, not many variations. We don't want to have a German mathematics and a French mathematics and a Swedish mathematics and an American mathematics, no, we want a universal mathematics, one universal criterion for mathematical truth! Then a paper that is done by a mathematician in one country can be understood by a mathematician in another country. Doesn't that sound reasonable?! So you can imagine just how very, very shocking it was in 1931 when Kurt Gödel showed that it wasn't at all reasonable, that it could never be done! 1931 Kurt Gödel Kurt Gödel Discovers Incompleteness Gödel did this in Vienna, but he was from what I think is now called the Czech Republic, from the city of Brünn or Brno. It was part of the Austro-Hungarian empire then, but now it's a separate country. And later he was at the Institute for Advanced Study in Princeton, where I visited his grave a few weeks ago. And the current owner of Gödel's house was nice enough to invite me in when he saw me examining the house [laughter] instead of calling the police! They know they're in a house that some people are interested in for historical reasons. Okay, so what did Kurt Gödel do? Well, Gödel sort of exploded this whole view of what mathematics is all about. He came up with a famous incompleteness result, ``Gödel's incompleteness theorem''. And there's a lovely book explaining the way Gödel originally did it. It's by Nagel and Newman, and it's called Gödel's Proof. I read it when I was a child, and forty years later it's still in print! What is this amazing result of Gödel's?
Gödel's amazing discovery is that Hilbert was wrong, that it cannot be done, that there's no way to take all of mathematical truth and to agree on a set of rules and to have a formal axiomatic system for all of mathematics in which it is crystal clear whether something is correct or not! More precisely, what Gödel discovered was that if you just try to deal with elementary arithmetic, with 0, 1, 2, 3, 4... and with addition and multiplication +   ×   0, 1, 2, 3, 4, ... ---this is ``elementary number theory'' or ``arithmetic''---and you just try to have a set of axioms for this---the usual axioms are called Peano arithmetic---even this can't be done! Any set of axioms that tries to have the whole truth and nothing but the truth about addition, multiplication, and 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10... will have to be incomplete. More precisely, it'll either be inconsistent or it'll be incomplete. So if you assume that it only tells the truth, then it won't tell the whole truth. There's no way to capture all the truth about addition, multiplication, and 0, 1, 2, 3, 4... ! In particular, if you assume that the axioms don't allow you to prove false theorems, then it'll be incomplete, there'll be true theorems that you cannot prove from these axioms! This is an absolutely devastating result, and all of traditional mathematical philosophy ends up in a heap on the floor! At the time this was considered to be absolutely devastating. However you may notice that in 1931 there were also a few other problems to worry about. The situation in Europe was bad. There was a major depression, and a war was brewing. I agree, not all problems are mathematical! There's more to life than epistemology! But you begin to wonder, well, if the traditional view of mathematics isn't correct, then what is correct? Gödel's incompleteness theorem was very surprising and a terrible shock. How did Gödel do it? Well, Gödel's proof is very clever. It almost looks crazy, it's very paradoxical. Gödel starts with the paradox of the liar, ``I'm false!'', which is neither true nor false. ``This statement is false!'' And what Gödel does is to construct a statement that says of itself ``I'm unprovable!'' ``This statement is unprovable!'' Now if you can construct such a statement in elementary number theory, in arithmetic, a mathematical statement---I don't know how you make a mathematical statement say it's unprovable, you've got to be very clever---but if you can do it, it's easy to see that you're in trouble. Just think about it a little bit. It's easy to see that you're in trouble. Because if it's provable, it's false, right? So you're in trouble, you're proving false results. And if it's unprovable and it says that it's unprovable, then it's true, and mathematics is incomplete. So either way, you're in trouble! Big trouble! And Gödel's original proof is very, very clever and hard to understand. There are a lot of complicated technical details. But if you look at his original paper, it seems to me that there's a lot of LISP programming in it, or at least something that looks a lot like LISP programming. Anyway, now we'd call it LISP programming. Gödel's proof involves defining a great many functions recursively, and these are functions dealing with lists, which is precisely what LISP is all about. So even though there were no programming languages in 1931, with the benefit of hindsight you can clearly see a programming language in Gödel's original paper. 
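A modern shorthand for the self-reference at the core of Gödel's construction is a quine, a program that contains an exact description of itself. Here is a minimal sketch, in Python rather than the LISP of my books (my choice of language, purely for illustration):

    # A quine: running this program prints its own source text, exactly.
    s = 's = %r\nprint(s %% s)'
    print(s % s)

Gödel's sentence is assembled the same way, except that instead of saying ``I print myself'' it says ``I am unprovable'' about its own numerical encoding; all of that machinery is already implicit in the 1931 paper.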
And the programming language I know that's closest to it is LISP, pure LISP, LISP without side-effects, interestingly enough---that's the heart of LISP. So this was a very, very shocking result, and people didn't really know what to make of it. Now the next major step forward comes only five years later, in 1936, and it's by Alan Turing. 1936 Alan Turing Alan Turing Discovers Uncomputability Turing's approach to all these questions is completely different from Gödel's, and much deeper. Because Turing brings it out of the closet! [Laughter] What he brings out of the closet is the computer! The computer was implicit in Gödel's paper, but this was really not visible to any ordinary mortal, not at that time, only with hindsight. And Turing really brings it out in the open. Hilbert had said that there should be a ``mechanical procedure'' to decide if a proof obeys the rules or not. And Hilbert never clarified what he meant by a mechanical procedure, it was all words. But, Turing said, what you really mean is a machine, and a machine of a kind that we now call a Turing machine---but it wasn't called that in Turing's original paper. In fact, Turing's original paper contains a programming language, just like Gödel's paper does, what we would now call a programming language. But the two programming languages are very different. Turing's programming language isn't a high-level language like LISP, it's more like a machine language. In fact, it's a horrible machine language, one that nobody would want to use today, because it's too simple. But Turing makes the point that even though Turing machines are very simple, even though their machine language is rather primitive, they're very flexible, very general-purpose machines. In fact, he claims, any computation that a human being can perform should be possible to do using such a machine. Turing's train of thought now takes a very dramatic turn. What, he asks, is impossible for such a machine? What can't it do? And he immediately finds a question that no Turing machine can settle, a problem that no Turing machine can solve. That's the halting problem, the problem of deciding in advance if a Turing machine or a computer program will eventually halt. The Halting Problem So the shocking thing about this 1936 paper is that first of all he comes up with the notion of a general-purpose or universal computer, with a machine that's flexible, that can do what any machine can do. One calculating machine that can do any calculation, which is, we now say, a general-purpose computer. And then he immediately shows that there are limits to what such a machine can do. And how does he find something that cannot be done by any such machine? Well, it's very simple! It's the question of whether a computer program will eventually halt, with no time limit. If you put a time limit, it's very easy. If you want to know if a program halts in a year, you just run it for a year, and either it halted or it didn't. What Turing showed is that you get in terrible trouble if there's no time limit. Now you may say, ``What good is a computer program that takes more than a year, that takes more than a thousand years?! There's always a time limit!'' I agree, this is pure math, this is not the real world. You only get in trouble with infinity! But Turing shows that if you put no time limit, then you're in real difficulties. So this is called the halting problem. And what Turing showed is that there's no way to decide in advance if a program will eventually halt.
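The skeleton of Turing's argument fits in a few lines. Here is a sketch in Python, assuming a hypothetical oracle halts(program, data) that always answers correctly; the whole point is that no such oracle can exist:

    def make_paradox(halts):
        # halts() is the claimed oracle, passed in as an ordinary function.
        def paradox(program):
            if halts(program, program):
                while True:     # if program(program) would halt, loop forever
                    pass
            # ...and if program(program) would loop forever, halt immediately.
        return paradox

    # Whatever candidate oracle anyone supplies, it must answer wrongly somewhere.
    # Take, say, the oracle that always answers True:
    halts = lambda program, data: True
    paradox = make_paradox(halts)
    print(halts(paradox, paradox))   # True, yet paradox(paradox) plainly never halts

Feeding the construction to itself is the contradiction: paradox(paradox) halts if and only if the oracle says it doesn't. Either answer is wrong.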
The Halting Problem If it does halt, by running it you can eventually discover that, if you're just patient. The problem is you don't know when to give up. And Turing was able to show with a very simple argument which is just Cantor's diagonal argument---coming from Cantor's theory of infinite sets, by the way---I don't have time to explain all this---with a very simple argument Turing was able to show that this problem The Halting Problem cannot be solved. [For Turing's original proof, see the first chapter of my book on The Limits of Mathematics. For a modern proof using the notion of information, see the last lecture in this book.] No computer program can tell you in advance if another computer program will eventually halt or not. And the problem is the ones that don't halt, that's really the problem. The problem is knowing when to give up. So now the interesting thing about this is that Turing immediately deduces as a corollary that if there's no way to decide in advance by a calculation if a program will halt or not, well then there cannot be any way to deduce it in advance using reasoning either. No formal axiomatic system can enable you to deduce in advance whether a program will halt or not. Because if you can use a formal axiomatic system to always deduce whether a program will halt or not, well then, that will give you a way to calculate in advance whether a program will halt or not. You simply run through all possible deductions---you can't do this in practice---but in principle you can run through all possible proofs in size order, checking which ones are correct, until either you find a proof that the program will halt eventually or you find a proof that it's never going to halt. This is using the idea of a completely formal axiomatic system where you don't need a mathematician---you just run through this calculation on a computer---it's mechanical to check if a proof is correct or not. So if there were a formal axiomatic system which always would enable you to prove, to deduce, whether a program will halt or not, that would give you a way to calculate in advance whether a program will halt or not. And that's impossible, because you get into a paradox like ``This statement is false!'' You get a program that halts if and only if it doesn't halt, that's basically the problem. You use an argument having the same flavor as the Russell paradox. [This comes across particularly clearly in the LISP version of Turing's proof that I give in my book The Unknowable, Chapter IV.] So Turing went more deeply into these questions than Gödel. As a student I read Gödel's proof, and I could follow it step by step: I read it in Nagel and Newman's book, which is a lovely book. It's a marvelous book, it's so understandable! It's still in print, and it was published in 1958... But I couldn't really feel that I was coming to grips with Gödel's proof, that I could really understand it. The whole thing seemed too delicate, it seemed too fragile, it seemed too superficial... And there's this business in the closet about computing, that's there in Gödel, but it's hidden, it's not in the open, we're not really coming to terms with it. Now Turing is really going, I think, much deeper into this whole matter. And he's showing, by the way, that it's not just one particular axiomatic system, the one that Gödel studied, that can't work, but that no formal axiomatic system can work. But it's in a slightly different context. Gödel was really looking at 0, 1, 2, 3, 4... 
and addition and multiplication, and Turing is looking at a rather strange mathematical question, which is does a program halt or not. It's a mathematical question that did not exist at the time of Gödel's original paper. So you see, Turing worked with completely new concepts... But Gödel's paper is not only tremendously clever, he had to have the courage to imagine that Hilbert might be wrong. There's another famous mathematician of that time, von Neumann---whose grave I found near Gödel's, by the way, at Princeton. Von Neumann was probably as clever as Gödel or anyone else, but it never occurred to him that Hilbert could be wrong. And the moment that he heard Gödel explain his result, von Neumann immediately appreciated it and immediately started deducing consequences. But von Neumann said, ``I missed it, I missed the boat, I didn't get it right!'' And Gödel did, so he was much more profound... Now Turing's paper is also full of technical details, like Gödel's paper, because there is a programming language in Turing's paper, and Turing also gives a rather large program, which of course has bugs, because he wasn't able to run it and debug it---it's the program for a universal Turing machine. But the basic thing is the ideas, and the new ideas in Turing's work are just breathtaking! So I think that Turing went beyond Gödel, but you have to recognize that Gödel took the first step, and the first step is historically the most difficult one and takes the most courage. To imagine that Hilbert could be wrong, which never occurred to von Neumann, that was something! I Discover Randomness in Pure Mathematics Okay, so then what happened? Then World War II begins. Turing starts working on cryptography, von Neumann starts working on how to calculate atom bomb detonations, and people forget about incompleteness for a while. This is where I show up on the scene. The generation of mathematicians who were concerned with these questions basically passes from the scene with World War II. And I'm a kid in the 1950s in the United States reading the original article by Nagel and Newman in Scientific American in 1956 that became their book. And I didn't realize that mathematicians really preferred to forget about Gödel and go on working on their favorite problems. I'm fascinated by incompleteness and I want to understand it. Gödel's incompleteness result fascinates me, but I can't really understand it, I think there's something fishy... As for Turing's approach, I think it goes much deeper, but I'm still not satisfied, I want to understand it better. And I get a funny idea about randomness... I was reading a lot of discussions of another famous intellectual issue when I was a kid---not the question of the foundations of mathematics, the question of the foundations of physics! These were discussions about relativity theory and cosmology and even more often about quantum mechanics, about what happens in the atom. It seems that when things are very small the physical world behaves in a completely crazy way that is totally unlike how objects behave here in this classroom. In fact things are random---intrinsically unpredictable---in the atom. Einstein hated this. Einstein said that ``God doesn't play dice!'' By the way, Einstein and Gödel were friends at Princeton, and they didn't talk very much with anybody else, and I heard someone say that Einstein had brainwashed Gödel against quantum mechanics! 
[Laughter] It was the physicist John Wheeler who told me that he once asked Gödel if there could be any connection between quantum uncertainty and Gödel's incompleteness theorem, but Gödel refused to discuss it... Okay, so I was reading about all of this, and I began to wonder---in the back of my head I began to ask myself---could it be that there was also randomness in pure mathematics? The idea in quantum mechanics is that randomness is fundamental, it's a basic part of the universe. In normal, everyday life we know that things are unpredictable, but in theory, in Newtonian physics and even in Einstein's relativity theory---that's all called classical as opposed to quantum physics---in theory in classical physics you can predict the future. The equations are deterministic, not probabilistic. If you know the initial conditions exactly, with infinite precision, you apply the equations and you can predict with infinite precision the state at any future time, and even in the past, because the equations work either way, in either direction. The equations don't care about the direction of time... This is that wonderful thing sometimes referred to as Laplacian determinism. I think that it's called that because of Laplace's Essai Philosophique sur les Probabilités, a book that was published almost two centuries ago. At the beginning of this book Laplace explains that by applying Newton's laws, in principle a demon could predict the future arbitrarily far, or the past arbitrarily far, if it knew the exact conditions at the current moment. This is not the type of world where you talk about free will and moral responsibility, but if you're doing physics calculations it's a great world, because you can calculate everything! But in the 1920s with quantum mechanics it began to look like God plays dice in the atom, because the basic equation of quantum mechanics is the Schrödinger equation, and the Schrödinger equation is an equation that talks about the probability that an electron will do something. The basic quantity is a probability and it's a wave equation saying how a probability wave interferes with itself. So it's a completely different kind of equation, because in Newtonian physics you can calculate the precise trajectory of a particle and know exactly how it's going to behave. But in quantum mechanics the fundamental equation is an equation dealing with probabilities! That's it, that's all there is! You can't know exactly where an electron is and what its velocity vector is---exactly what direction and how fast it's going. It doesn't have a specific state that's known with infinite precision the way it is in classical physics. If you know very accurately where an electron is, then its velocity---its momentum---turns out to be wildly uncertain. And if you know exactly in which direction and at what speed it's going, then its position becomes infinitely uncertain. That's the infamous Heisenberg uncertainty principle, there's a trade-off, that seems to be the way the physical universe works... It's an interesting historical fact that before, people used to hate this---Einstein hated it---but now people think that they can use it! There's a crazy new field called quantum computing where the idea is to stop fighting it. If you can't lick them, join them! The idea is that maybe you can make a brand new technology using something called quantum parallelism. If a quantum computer is uncertain, maybe you can have it uncertainly do many computations at the same time!
So instead of fighting it, the idea is to use it, which is a great idea. But when I was a kid people were still arguing over this. Even though he had helped to create quantum mechanics, Einstein was still fighting it, and people were saying, ``Poor guy, he's obviously past his prime!'' Okay, so I began to think that maybe there's also randomness in pure mathematics. I began to suspect that maybe that's the real reason for incompleteness. A case in point is elementary number theory, where there are some very difficult questions. Take a look at the prime numbers. [A prime is a whole number with no exact divisors except 1 and itself. E.g., 7 is prime, and 9 = 3 × 3 is not.] Individual prime numbers behave in a very unpredictable way, if you're interested in their detailed structure. It's true that there are statistical patterns. There's a thing called the prime number theorem that predicts fairly accurately the over-all average distribution of the primes. But as for the detailed distribution of individual prime numbers, that looks pretty random! So I began to think about randomness... I began to think that maybe that's what's really going on, maybe that's a deeper reason for all this incompleteness. So in the 1960s I, and independently some other people, came up with some new ideas. And I like to call this new set of ideas algorithmic information theory. Algorithmic Information Theory That name makes it sound very impressive, but the basic idea is just to look at the size of computer programs. You see, it's just a complexity measure, it's just a kind of computational complexity... I think that one of the first places that I heard about the idea of computational complexity was from von Neumann. Turing came up with the idea of a computer as a mathematical concept---it's a perfect computer, one that never makes mistakes, one that has as much time and space as it needs to work---it's always finite, but the calculation can go on as long as it has to. After Turing comes up with this idea, the next logical step for a mathematician is to study the time, the work needed to do a calculation---its complexity. And in fact I think that around 1950 von Neumann suggested somewhere that there should be a new field which looks at the time complexity of computations, and that's now a very well-developed field. So of course if most people are doing that, then I'm going to try something else! My idea was not to look at the time, even though from a practical point of view time is very important. My idea was to look at the size of computer programs, at the amount of information that you have to give a computer to get it to perform a given task. From a practical point of view, the amount of information required isn't as interesting as the running time, because of course it's very important for computers to do things as fast as possible... But it turns out that from a conceptual point of view, it's not that way at all. I believe that from a fundamental philosophical point of view, the right question is to look at the size of computer programs, not at the time. Why?---Besides the fact that it's my idea so obviously I'm going to be prejudiced! The reason is because program-size complexity connects with a lot of fundamental stuff in physics. You see, in physics there's a notion called entropy, which is how disordered a system is. 
Entropy played a particularly crucial role in the work of the famous 19th century physicist Boltzmann, Ludwig Boltzmann and it comes up in the field of statistical mechanics and in thermodynamics. Entropy measures how disordered, how chaotic, a physical system is. A crystal has low entropy, and a gas at high temperature has high entropy. It's the amount of chaos or disorder, and it's a notion of randomness that physicists like. And entropy is connected with some fundamental philosophical questions---it's connected with the question of the arrow of time, which is another famous controversy. When Boltzmann invented this wonderful thing called statistical mechanics---his theory is now considered to be one of the masterpieces of 19th century physics, and all physics is now statistical physics---he ended up committing suicide, because people said that his theory was obviously wrong! Why was it obviously wrong? Because in Boltzmann's theory entropy has got to increase and so there's an arrow of time. But if you look at the equations of Newtonian physics, they're time reversible. There's no difference between predicting the future and predicting the past. If you know at one instant exactly how everything is, you can go in either direction, the equations don't care, there's no direction of time, backward is the same as forward. But in everyday life and in Boltzmann's statistical mechanics, there is a difference between going backward and forward. Glasses break, but they don't reassemble spontaneously! And in Boltzmann's theory entropy has got to increase, the system has to get more and more disordered. But people said, ``You can't deduce that from Newtonian physics!'' Boltzmann claimed that he could. He was looking at a gas. The atoms of a gas bounce around like billiard balls, it's a billiard ball model of how a gas works. And each interaction is reversible. If you run the movie backwards, it looks the same. If you look at a small portion of a gas for a small amount of time, you can't tell whether you're seeing the movie in the right direction or the wrong direction. But Boltzmann's gas theory says that there is an arrow of time---a system will start off in an ordered state and will end up in a very mixed up disordered state. There's even a scary expression in German, heat death. People said that according to Boltzmann's theory the universe is going to end up in a horrible ugly state of maximum entropy or heat death! This was the dire prediction! So there was a lot of controversy about his theory, and maybe that was one of the reasons that Boltzmann killed himself. And there is a connection between my ideas and Boltzmann's, because looking at the size of computer programs is very similar to this notion of the degree of disorder of a physical system. A gas takes a large program to say where all its atoms are, but a crystal doesn't take as big a program, because of its regular structure. Entropy and program-size complexity are closely related... This idea of program-size complexity is also connected with the philosophy of the scientific method. You've heard of Occam's razor, of the idea that the simplest theory is best? Well, what's a theory? It's a computer program for predicting observations. And the idea that the simplest theory is best translates into saying that a concise computer program is the best theory. What if there is no concise theory, what if the most concise program or the best theory for reproducing a given set of experimental data is the same size as the data?
Then the theory is no good, it's cooked up, and the data is incomprehensible, it's random. In that case the theory isn't doing a useful job. A theory is good to the extent that it compresses the data into a much smaller set of theoretical assumptions. The greater the compression, the better!---That's the idea... So this idea of program size has a lot of philosophical resonances, and you can define randomness or maximum entropy as something that cannot be compressed at all. It's an object with the property that basically the only way you can describe it to someone is to say ``this is it'' and show it to them. Because it has no structure or pattern, there is no concise description, and the thing has to be understood as ``a thing in itself'', it's irreducible. Randomness = Incompressibility The other extreme is an object that has a very regular pattern so you can just say that it's ``a million 0s'' or ``half a million repetitions of 01'', pairs 01, 01, 01 repeated half a million times. These are very long objects with a very concise description. Another long object with a concise description is an ephemeris, I think it's called that, it's a table giving the positions of the planets as seen in the sky, daily, for a year. You can compress all this astronomical information into a small FORTRAN program that uses Newtonian physics to calculate where the planets will be seen in the sky every night. But if you look at how a roulette wheel behaves, then there is no pattern, the series of outcomes cannot be compressed. Because if there were a pattern, then people could use it to win, and having a casino wouldn't be such a good business! The fact that casinos make lots of money shows that there is no way to predict what a roulette wheel will do, there is no pattern---the casinos make it their job to ensure that! So I had this new idea, which was to use program-size complexity to define randomness. And when you start looking at the size of computer programs---when you begin to think about this notion of program-size or information complexity instead of run-time complexity---then the interesting thing that happens is that everywhere you turn you immediately find incompleteness! You immediately find things that escape the power of mathematical reasoning, things that escape the power of any computer program. It turns out that they're everywhere! It's very dramatic! In only three steps we went from Gödel, where it's very surprising that there are limits to reasoning, to Turing, where it looks much more natural, and then when you start looking at program size, well, incompleteness, the limits of mathematics, it just hits you in the face! Why?! Well, the very first question that you ask in my theory gets you into trouble. What's that? Well, in my theory I measure the complexity of something by the size of the smallest computer program for calculating it. But how can I be sure that I have the smallest computer program? Let's say that I have a particular calculation, a particular output, that I'm interested in, and that I have this nice, small computer program that calculates it, and I think that it's the smallest possible program, the most concise one that produces this output. Maybe a few friends of mine and I were trying to do it, and this was the best program that we came up with; nobody did any better. But how can you be sure? Well, the answer is that you can't be sure. It turns out you can never be sure!
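Half of this picture is easy to play with. An off-the-shelf compressor is only a crude stand-in for ``the smallest program'', but a rough sketch in Python already shows the contrast between pattern and randomness:

    import os
    import zlib

    ordered = b"01" * 500_000        # half a million repetitions of "01"
    random_ = os.urandom(1_000_000)  # a million bytes with no usable pattern

    print(len(zlib.compress(ordered)))   # tiny: the pattern compresses away
    print(len(zlib.compress(random_)))   # about a million: nothing to exploit

Compressing is the easy direction: any short program you find gives an upper bound on the complexity. The impossible direction, it turns out, is certifying that no shorter program exists.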
You can never be sure that a computer program is what I like to call elegant, namely that it's the most concise one that produces the output that it produces. Never ever! This escapes the power of mathematical reasoning, amazingly enough. But for any computational task, once you fix the computer programming language, once you decide on the computer programming language, and if you have in mind a particular output, there's got to be at least one program that is the smallest possible. There may be a tie, there may be several, right?, but there's got to be at least one that's smaller than all the others. But you can never be sure that you've found it! And the precise result, which is one of my favorite incompleteness results, is that if you have N bits of axioms, you can never prove that a program is elegant---smallest possible---if the program is more than N bits long. That's basically how it works. So any given set of mathematical axioms, any formal axiomatic system in Hilbert's style, can only prove that finitely many programs are elegant, are the most concise possible for their output. To be more precise, you get into trouble with an elegant program if it's larger than a computerized version of the axioms---It's really the size of the proof-checking program for your axioms. In fact, it's the size of the program that runs through all possible proofs producing all possible theorems. If you have in mind a particular programming language, and you need a program of a certain size to implement a formal axiomatic system, that is to say, to write the proof-checking algorithm and to write the program that runs through all possible proofs filtering out all the theorems, if that program is a certain size in a language, and if you look at programs in that same language that are larger, then you can never be sure that such a program is elegant, you can never prove that such a program is elegant using the axioms that are implemented in the same language by a smaller program. That's basically how it works. So there are an infinity of elegant programs out there. For any computational task there's got to be at least one elegant program, and there may be several, but you can never be sure except in a finite number of cases. That's my result, and I'm very proud of it!---Another can of soda? Thanks a lot! My talk would be much more interesting if this were wine or beer! [Laughter] So it turns out that you can't calculate the program-size complexity, you can never be sure what the program-size complexity of anything is. Because to determine the program-size complexity of something is to know the size of the most concise program that calculates it---but that means---it's essentially the same problem---then I would know that this program is the most concise possible, I would know that it's an elegant program, and you can't do that if the program is larger than the axioms. So if it's N bits of axioms, you can never determine the program-size complexity of anything that has more than N bits of complexity, which means almost everything, because almost everything has more than N bits of complexity. Almost everything has more complexity than the axioms that you're using. Why do I say that? The reason for using axioms is because they're simple and believable. So the sets of axioms that mathematicians normally use are fairly concise, otherwise no one would believe in them! 
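Here is the obstruction written out as a would-be certifier, a sketch in Python. The two helpers, run(p), which executes a program text and returns its output, and programs_of_length(n), which enumerates all programs of a given size, are hypothetical, invented here purely for illustration; the fatal flaw is marked in the comments:

    def is_elegant(program, run, programs_of_length):
        """Try to certify that no shorter program produces the same output.
        This is NOT an algorithm: run(p) may never return."""
        target = run(program)
        for n in range(len(program)):          # every size shorter than ours
            for p in programs_of_length(n):    # hypothetical enumerator
                if run(p) == target:           # may loop forever: the halting
                    return False               # problem, smuggled back in
        return True

Skipping past the candidate programs that never halt is exactly what N bits of axioms cannot do for programs much larger than N bits.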
Which means that in practice there's this vast world of mathematical truth out there, which is an infinite amount of information, but any given set of axioms only captures a tiny finite amount of this information! And that's why we're in trouble, that's my bottom line, that's my final conclusion, that's the real dilemma. So in summary, I have two ways to explain why I think Gödel incompleteness is natural and inevitable rather than mysterious and surprising. The two ways are---that the idea of randomness in physics, that some things make no sense, also happens in pure mathematics, is one way to say it. But a better way to say it is that mathematical truth is an infinite amount of information, but any particular set of axioms just has a finite amount of information, because there are only going to be a finite number of principles that you've agreed on as the rules of the game. And whenever any statement, any mathematical assertion, involves more information than the amount in those axioms, then it's very natural that it will escape the ability of those axioms. So you see, the way that mathematics progresses is you trivialize everything! The way it progresses is that you take a result that originally required an immense effort, and you reduce it to a trivial corollary of a more general theory! Let me give an example involving Fermat's ``last theorem'', namely the assertion that x^n + y^n = z^n has no solutions in positive integers x, y, z, and n with n greater than 2. Andrew Wiles's recent proof of this is hundreds of pages long, but, probably, a century or two from now there will be a one-page proof! But that one-page proof will require a whole book inventing a theory with concepts that are the natural concepts for thinking about Fermat's last theorem. And when you work with those concepts it'll appear immediately obvious---Wiles's proof will be a trivial afterthought---because you'll have imbedded it in the appropriate theoretical context. And the same thing is happening with incompleteness. Gödel's result, like any very fundamental basic result, starts off by being very mysterious and complicated, with a long impenetrable proof. People said about Gödel's original paper the same thing that they said about Einstein's theory of relativity, which is that there are less than five people on this entire planet who understand it. The joke was that Eddington, astronomer royal Sir Arthur Eddington, is at a formal dinner party---this was just after World War I---and he's introduced as one of the three men who understands Einstein's theory. And he says, ``Let's see, there's Einstein, and there's me, but who's the other guy?'' I'm ruining this joke! [Laughter] So in 1931 Gödel's proof was like that. If you look at his original paper, it's very complicated. The details are programming details we would say now---really it's a kind of complication that we all know how to handle now---but at the time it looked very mysterious. This was a 1931 mathematics paper, and all of a sudden you're doing what amounts to LISP programming, thirty years before LISP was invented! And there weren't even any computers then! But when you get to Turing, he makes Gödel's result seem much more natural. And I think that my idea of program-size complexity and information---really, algorithmic information content---makes Gödel's result seem more than natural, it makes it seem, I'd say, obvious, inevitable. But of course that's the way it works, that's how we progress. Where Do We Go from Here?!
I should say, though, that if this were really true, if it were that simple, then that would be the end of the field of metamathematics. It would be a sad thing, because it would mean that this whole subject is dead. But I don't think that it is! You know, I've been giving versions of this talk for many years. I make a career, a profession out of it! It's tourism, it's the way I get to see the world! It's a nice way to travel!... In these talks I like to give examples of things that might escape the power of normal mathematical reasoning. And my favorite examples were Fermat's last theorem, the Riemann hypothesis, and the four-color conjecture. When I was a kid these were the three most outstanding open questions in all of mathematics. But a funny thing happened. First the four-color conjecture was settled by a computer proof, and recently the proof has been greatly improved. The latest version has more ideas and less computation, so that's a big step forward. And then Wiles settled Fermat's last theorem. There was a misstep, but now everyone's convinced that the new proof is correct. In fact, I was at a meeting in June 1993, when Wiles was presenting his proof in Cambridge. I wasn't there, but I was at a meeting in France, and the word was going around by e-mail that Wiles had done it. It just so happened that I was session chairman, and at one point the organizer of the whole meeting said, ``Well, there's this rumor going around, why don't we make an announcement. You're the session chairman, you do it!'' So I got up and said, ``As some of you may have heard, Andrew Wiles has just demonstrated Fermat's last theorem.'' And there was silence! But afterwards two people came up and said, ``You were joking, weren't you?'' [Laughter] And I said, ``No, I wasn't joking.'' It wasn't April 1st! Fortunately the Riemann hypothesis is still open at this point, as far as I know! But I was using Fermat's last theorem as a possible example of incompleteness, as an example of something that might be beyond the power of the normal mathematical methods. I needed a good example, because people used to say to me, ``Well, this is all very well and good, AIT is a nice theory, but give me an example of a specific mathematical result that you think escapes the power of the usual axioms.'' And I would say, well, maybe Fermat's last theorem! So there's a problem. Algorithmic information theory is very nice and shows that there are lots of things that you can't prove, but what about individual mathematical questions? How about a natural mathematical question? Can these methods be applied? Well, the answer is no, my methods are not as general as they sound. There are technical limitations. I can't analyze Fermat's last theorem with these methods. Fortunately! Because if I had announced that my methods show that Fermat's last theorem can't be settled, then it's very embarrassing when someone settles it! So now the question is, how come in spite of these negative results, mathematicians are making so much progress? How come mathematics works so well in spite of incompleteness? You know, I'm not a pessimist, but my results have the wrong kind of feeling about them, they're much too pessimistic! So I think that a very interesting question now is to look for positive results... There are already too many negative results! If you take them at face value, it would seem that there's no way to do mathematics, that mathematics is impossible. Fortunately for those of us who do mathematics, that doesn't seem to be the case. 
So I think that now we should look for positive results... The fundamental questions, like the questions of philosophy, they're great, because you never exhaust them. Every generation takes a few steps forward... So I think there's a lot more interesting work to be done in this area. And here's another very interesting question: Program size is a complexity measure, and we know that it works great in metamathematics, but does it have anything to do with complexity in the real world? For example, what about the complexity of biological organisms? What about a theory of evolution? Von Neumann talked about a general theory of the evolution of life. He said that the first step was to define complexity. Well, here's a definition of complexity, but it doesn't seem to be the correct one to use in theoretical biology. And there is no such thing as theoretical biology, not yet! As a mathematician, I would love it if somebody would prove a general result saying that under very general circumstances life has to evolve. But I don't know how you define life in a general mathematical setting. We know it when we see it, right? If you crash into something alive with your car, you know it! But as a mathematician I don't know how to tell the difference between a beautiful deer running across the road and the pile of garbage that my neighbor left out in the street! Well, actually that garbage is connected with life, it's the debris produced by life... So let's compare a deer with a rock instead. Well, the rock is harder, but that doesn't seem to go to the essential difference, which is that the deer is alive and the rock is a pretty passive object. It's certainly very easy for us to tell the difference in practice, but what is the fundamental difference? Can one grasp that mathematically? So what von Neumann was asking for was a general mathematical theory. Von Neumann used to like to invent new mathematical theories. He'd invent one before breakfast every day: the theory of games, the theory of self-reproducing automata, the Hilbert space formulation of quantum mechanics... Von Neumann, who had studied under Hilbert, wrote a book on quantum mechanics using Hilbert spaces, and said that this was the right mathematical framework for doing quantum mechanics. Von Neumann was always inventing new fields of mathematics, and since he was a childhood hero of mine, and since he talked about Gödel and Turing, well, I said to myself, if von Neumann could do it, I think I'll give it a try. Von Neumann even suggested that there should be a theory of the complexity of computations. He never took any steps in that direction, but I think that you can find someplace where he said that this has got to be an interesting new area to develop, and he was certainly right. Von Neumann also said that we ought to have a general mathematical theory of the evolution of life... But we want it to be a very general theory, we don't want to get involved in low-level questions like biochemistry or geology... He insisted that we should do things in a more general way, because von Neumann believed, and I guess I do too, that if Darwin is right, then it's probably a very general thing. For example, there is the idea of genetic programming, that's a computer version of this. Instead of writing a program to do something, you sort of evolve it by trial and error. And it seems to work remarkably well, but can you prove that this has got to be the case? Or take a look at Tom Ray's Tierra...
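(Another editorial aside: here is a minimal sketch, in Python, of the trial-and-error idea behind genetic programming. It is a toy mutation-plus-selection search in the spirit of Dawkins's "weasel" demonstration, not Tierra and not a real genetic-programming system; the target string, population size and mutation rate are arbitrary assumptions. Instead of writing the answer directly, random variation and selection are left to find it.)

import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"   # an arbitrary goal standing in for fitness
CHARS = string.ascii_uppercase + " "
POP, RATE = 100, 0.05                     # offspring per generation, mutation probability

def fitness(s):
    """Count the positions that match the target; higher is fitter."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    """Copy the parent, flipping each character with probability RATE."""
    return "".join(random.choice(CHARS) if random.random() < RATE else c for c in s)

parent = "".join(random.choice(CHARS) for _ in TARGET)
generation = 0
while parent != TARGET:
    generation += 1
    # keep the parent among the candidates so fitness never decreases
    parent = max([parent] + [mutate(parent) for _ in range(POP)], key=fitness)
print(f"reached the target in {generation} generations")

(It does work remarkably well in practice, typically converging in under a hundred generations; and, just as the talk says, nobody can prove in any generality that this kind of process has to keep working as the problems get harder.)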
Some of these computer models of biology almost seem to work too well---the problem is that there's no theoretical understanding of why they work so well. If you run Ray's model on the computer you get these parasites and hyperparasites, you get a whole ecology. That's just terrific, but as a pure mathematician I'm looking for theoretical understanding, I'm looking for a general theory that starts by defining what an organism is and how you measure its complexity, and that proves that organisms have to evolve and increase in complexity. That's what I want, wouldn't that be nice? And if you could do that, it might shed some light on how general the phenomenon of evolution is, and whether there's likely to be life elsewhere in the universe. Of course, even if mathematicians never come up with such a theory, we'll probably find out by visiting other places and seeing if there's life there... But anyway, von Neumann had proposed this as an interesting question, and at one point in my deluded youth I thought that maybe program-size complexity had something to do with evolution... But I don't think so anymore, because I was never able to get anywhere with this idea... So I think that there's a lot of interesting work to be done! And I think that we live in exciting times. In fact, sometimes I think that maybe they're even a little bit too exciting!... And I hope that if this talk were being given a century from now, in 2099, there would be another century of exciting controversy about the foundations of mathematics to summarize, one with different concerns and preoccupations... It would be interesting to hear what that talk would be like a hundred years from now! Maybe some of you will be there! Or give the talk even! Thank you very much! [Laughter & Applause]
Further Reading
1. G. J. Chaitin, The Unknowable, Springer-Verlag, 1999.
2. G. J. Chaitin, The Limits of Mathematics, Springer-Verlag, 1998.
Statistical physics
Statistical physics is a branch of physics that uses methods of probability theory and statistics, and particularly the mathematical tools for dealing with large populations and approximations, in solving physical problems. It can describe a wide variety of fields with an inherently stochastic nature. Its applications include many problems in the fields of physics, biology, chemistry, neurology, and even some social sciences, such as sociology. Its main purpose is to clarify the properties of matter in aggregate, in terms of the physical laws governing atomic motion.[1] In particular, statistical mechanics develops the phenomenological results of thermodynamics from a probabilistic examination of the underlying microscopic systems. Historically, one of the first topics in physics where statistical methods were applied was the field of mechanics, which is concerned with the motion of particles or objects when subjected to a force.
Statistical mechanics
Main article: Statistical mechanics
Statistical mechanics provides a framework for relating the microscopic properties of individual atoms and molecules to the macroscopic or bulk properties of materials that can be observed in everyday life, thereby explaining thermodynamics as a natural result of statistics, classical mechanics, and quantum mechanics at the microscopic level. Because of this history, statistical physics is often considered synonymous with statistical mechanics or statistical thermodynamics.[note 1] One of the most important equations in statistical mechanics (analogous to F = ma in mechanics, or the Schrödinger equation in quantum mechanics) is the definition of the partition function Z, which is essentially a weighted sum over all possible states q available to a system:

Z = \sum_q \mathrm{e}^{-\frac{E(q)}{k_B T}}

where k_B is the Boltzmann constant, T is the temperature and E(q) is the energy of state q. Furthermore, the probability of a given state q occurring is given by

P(q) = \frac{\mathrm{e}^{-\frac{E(q)}{k_B T}}}{Z}

A statistical approach can work well in classical systems when the number of degrees of freedom (and so the number of variables) is so large that an exact solution is not possible, or not really useful. Statistical mechanics is also applied in non-linear dynamics, chaos theory, thermal physics, fluid dynamics (particularly at high Knudsen numbers), and plasma physics. (A small numerical sketch of the two formulas above appears below.)
Scientists and universities
Significant contributions to the development of statistical physics were made (at different times) by James Clerk Maxwell, Albert Einstein, Enrico Fermi, Richard Feynman, Lev Landau, Vladimir Fock, Werner Heisenberg, Nikolay Bogolyubov and others. Statistical physics is studied at the nuclear research center at Los Alamos, and the Pentagon has organized a large department for the study of turbulence at Princeton University. Work in this area is also conducted at Saclay (near Paris), the Max Planck Institutes, the Netherlands Institute for Atomic and Molecular Physics and other research centers. Statistical physics has allowed us to explain and quantitatively describe superconductivity, superfluidity, turbulence, collective phenomena in solids and plasmas, and the structural features of liquids. It underlies modern astrophysics. It is statistical physics that helped create the intensively developing study of liquid crystals and to construct theories of phase transitions and critical phenomena.
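(As an illustration of the two formulas above, here is a minimal numerical sketch in Python. The three-level system and its energies are invented for the example; the code simply evaluates the Boltzmann weights, the partition function Z, and the state probabilities P(q).)

import math

k_B = 1.380649e-23                    # Boltzmann constant in J/K
T = 300.0                             # temperature in kelvin (assumed)
energies = [0.0, 1.0e-21, 2.0e-21]    # made-up energy levels E(q) in joules

# Partition function: Z = sum over states q of exp(-E(q) / (k_B * T))
weights = [math.exp(-E / (k_B * T)) for E in energies]
Z = sum(weights)

# Probability of each state: P(q) = exp(-E(q) / (k_B * T)) / Z
probs = [w / Z for w in weights]

for E, p in zip(energies, probs):
    print(f"E = {E:.2e} J  ->  P = {p:.4f}")
print("sum of probabilities =", sum(probs))   # equals 1 by construction

(Raising T flattens the distribution toward equal occupation of the states, while lowering T concentrates the probability in the ground state; that is the qualitative content of the Boltzmann factor.)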
Many experimental studies of matter are entirely based on the statistical description of a system. These include the scattering of cold neutrons, X-rays, visible light, and more.
References
1. ^ Huang, Kerson. Introduction to Statistical Physics (2nd ed.). CRC Press. p. 15. ISBN 978-1-4200-7902-9.
TOWNSEND -- There is little doubt that music is one of the most powerful tools for changing brain chemistry; it can make you euphoric, and it can give you the blues. But on Thursday, March 6, at 7 p.m., chemistry and the blues come together in a unique pairing of a blues guitar-playing UCLA chemistry professor and a former special-education teacher turned blues guitarist. The event, titled "Elements of the Blues," will take place in the North Middlesex Regional High School auditorium, and will feature UCLA chemistry professor Dr. Eric Scerri and W.C. Handy Blues Award winner Ronnie Earl. This unusual pairing was the brainchild of Earl's wife, NMRHS chemistry teacher Donna Horvath. "I have been teaching chemistry for several years now," said Horvath in a recent interview, "and I have observed that some kids like some parts of it and not others. But they seem to have an affinity with the periodic table. Even the least interested students seem to form an attachment to it." Horvath said that she had read Scerri's June 2013 "Scientific American" article "Cracks in the Periodic Table," which discusses a possible new staircase form of the periodic table of the elements. The staircase on the right side of the periodic table is a dividing line between metals and nonmetals. Scerri's article addresses the fact that some recent additions to the table may differ in their chemistry from the other elements in the same column, thus breaking the periodic rule that had defined the table for the past 150 years. "I just really love the article, because we talk about the new elements and we are starting a new row now" in class, Horvath said. A brief biography accompanying the article states that Scerri is not only a historian and philosopher of chemistry at UCLA, but also a serious blues guitarist. "I read his bio and wondered if I could make a personal connection with him and use my husband," Horvath revealed. She contacted Scerri and told him how she would use his article in her class, and asked if he would consider making a trip to the east coast. "He said that he would and that his wife would love to visit Boston," she said.
The Bluesman
Horvath's husband was born Ronald Horvath in Queens, New York, a first-generation American of Hungarian Jewish parents. Earl eventually moved to Boston to pursue a degree in special education at Boston University, but also took an interest in guitar. This, said Horvath, despite an early report card of Earl's stating that he had no aptitude for music. "He took piano as a child, but it was when he saw the Beatles that he knew he wanted to play guitar," Horvath said. "He thought it was so cool to play the guitar, but his parents wanted him to have a respectable career. Then, he went to a concert and heard B.B. King and Freddie King, and it really resonated with him." He eventually joined Roomful of Blues, and later formed the band Ronnie Earl and the Broadcasters. He has played with Eric Clapton, B.B. King, Muddy Waters, and many other blues greats, and has released nearly 30 albums during his 30-plus years as a professional musician. Still, in between gigs, Earl enjoys coming to her school to talk with her students, Horvath said. "He talks about what it means to be an artist and create music. Students may have a real passion for music, but that may not be their vocation. Ronnie talks with them about how people can blend their avocations with their jobs." Scerri, she discovered, chose to teach at UCLA because he wanted music in his life.
"And Ronnie had to make a choice, too. He gave up his job (teaching special education) to spend more time on music." The Professor In a recent coast-to-coast phone interview, Scerri said that he is delighted by the way the upcoming program has come together. "I have known of his playing for many years," he said of Earl, "and I cannot wait to jam with him." And just what does music have to do with the periodic table of the elements? "The history of the periodic table, the topic that I have published three books on, has featured a curious incident involving chemistry and music," Scerri replied.  "The London chemist, John Newlands, first proposed his law of octaves in the 1860s, and made an analogy with musical octaves whereby notes repeat after a certain interval just as elements seem to in the periodic table. When he presented this idea he was mocked by London's leading chemists," Scerri went on. "But the analogy is essentially correct!" "The periodic table is an arrangement of all the elements?the fundamental building blocks of nature. They are all very different and have characteristic properties, yet there is this underlying system that gathers them all together and makes sense of them." Scerri said that during the program at the high school, he is mainly going to speak about the periodic table, but will also try to make a connection with music. Besides the law of octaves, which found that each element was similar to the element eight places further on, there is a moving from the Bohr model of the atom to one of quantum mechanics known as the Schrödinger equation, which draws an analogy between physics and music, Scerri added. Put forward by Erwin Schrödinger, this partial differential equation describes how the quantum state of some physical system changes with time. "In order to understand that conceptual change," said Scerri, "I use the guitar and show how when you lightly touch the strings you can produce a harmonic." When a string is fixed at both ends, it makes an open string. If you touch the string lightly on the twelfth fret and strum it, you get a harmonic, he said. "By breaking up the string into halves, thirds, quarters, or fifths, it's a perfect analogy to the Schrödingerapproach to quantum mechanics," Scerri explained. "You just apply the math given those boundary conditions." And the Blues? "I absolutely love the blues. It's one of the reasons I moved to the states from England. During the '60s and '70s blues revival, the kids in England started listening to Eric Clapton, Jeff Beck, and The Rolling Stones, and then started listening to American blues when Americans were not listening to the blues. The Americans were listening to the British musicians, strangely," Scerri reminisced. "For me, although I discovered blues in London, the British lost interest but the Americans retained it. That's one of the reasons I wanted to be in America. (The American blues) were not sophisticated; the expression is the sophistication," he said. The Program "The Elements of the Blues" will be held in the NMRHS auditorium, located at 19 Main Street, Townsend, from 7-9 p.m. on Thursday, March 6, with a snow date of Friday, March 7. It is free and open to the public, and the auditorium is wheelchair accessible. For more information, contact Dr. 
Horvath at 978-587-8721. The program is supported, in part, by a grant from the local Cultural Councils of Ashby, Pepperell, and Townsend, and from the Amanda Dwight Entertainment Committee of Townsend, local agencies that are supported by the Massachusetts Cultural Council, a state agency.
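(A side note on Scerri's string-harmonic analogy quoted above: the allowed standing waves on a string fixed at both ends are exactly the fractions he lists, and the arithmetic is short enough to sketch in a few lines of Python. The scale length and open-string frequency below are assumed values for a typical guitar, not figures from the article.)

# Standing waves on a string fixed at both ends: f_n = n * f_1, and the
# n-th harmonic is isolated by touching the string at a node, L/n from the nut.
L = 0.65      # scale length in meters (assumed)
f1 = 110.0    # open-string fundamental in Hz (assumed: the A string)

for n in range(1, 6):   # n = 1 is just the open string itself
    node = L / n
    print(f"harmonic {n}: touch at {node * 100:5.1f} cm  ->  {n * f1:6.1f} Hz")

(Dividing the string into halves, thirds, quarters and fifths selects the second through fifth harmonics; this quantization by boundary conditions is the same mathematics the Schrödinger equation imposes on a confined particle.)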
Jewish culture
Jewish culture is the diverse international culture of the Jews. Since the formation of the Jewish nation in biblical times, the international community of Jewish people has been considered a tribe or an ethnoreligious group rather than solely a religion. Judaism guides its adherents in both practice and belief, so that it has been called not only a religion, but an orthopraxy.[1] Not all individuals or all cultural phenomena can be classified as either "secular" or "religious", a distinction native to Enlightenment thinking.[2] Jewish culture in its etymological meaning retains the linkage to the land of origin, the people named for the Kingdom of Judah, study of Jewish texts, practice of community charity, and Jewish history. The term "secular Jewish culture" therefore refers to many aspects, including religion and world view; literature, media, and cinema; art and architecture; cuisine and traditional dress; attitudes to gender, marriage, and family; social customs and lifestyles; and music and dance.[3] "Secular Judaism" is a distinct phenomenon related to Jewish secularization - a historical process of divesting all of these elements of culture from their religious beliefs and practices.[4] Secular Judaism, derived from the philosophy of Moses Mendelssohn,[5] arose out of the Haskalah, or Jewish Enlightenment, which was itself driven by the values of the Enlightenment. In recent years, the academic study of Jewish secularism has drawn on Jewish studies, history, literature, sociology, and linguistics. Historian David Biale[6] has traced the roots of Jewish secularism back to the pre-modern era. He and other scholars highlight the Dutch philosopher Baruch Spinoza, who was dubbed "the renegade Jew who gave us modernity" by the scholar and novelist Rebecca Newberger Goldstein[7] in an intellectual biography of him. Today, the subject of Jewish secularization is taught and researched at many North American and Israeli universities, including Harvard, Tel Aviv University, UCLA, Temple University and the City University of New York, which have significant Jewish alumni. Additionally, many schools include the academic study of Judaism and Jewish culture in their curricula. Throughout history, in eras and places as diverse as the ancient Hellenic world, in Europe before and after the Age of Enlightenment, in Al-Andalus, North Africa and the Middle East, in India and China, and in the contemporary United States and Israel, Jewish communities have seen the development of cultural phenomena that are characteristically Jewish without being at all specifically religious. Some factors in this come from within Judaism, others from the interaction of Jews with host populations in the Diaspora, and others from the inner social and cultural dynamics of the community, as opposed to religion itself. This phenomenon has led to considerably different Jewish cultures unique to their own communities.
History
Various stages of a Jewish male's life.
There has been no political unity of Jewish society since the united monarchy. Since then, Israelite populations have always been geographically dispersed (see Jewish diaspora), so that by the 19th century the Ashkenazi Jews were mainly in Eastern and Central Europe; the Sephardi Jews were largely spread among various communities in the Mediterranean region; Mizrahi Jews were primarily spread throughout Western Asia; and other populations of Jews were in Central Asia, Ethiopia, the Caucasus, and India. (See Jewish ethnic divisions.) Although there was a high degree of communication and traffic between these communities (many Sephardic exiles blended into the Ashkenazi communities in Central Europe following the Spanish Inquisition; many Ashkenazim migrated to the Ottoman Empire, giving rise to the characteristic Syrian-Jewish family name "Ashkenazi"; and Iraqi-Jewish traders formed a distinct Jewish community in India), many of these populations were cut off to some degree from the surrounding cultures by ghettoization, by Muslim laws of dhimma, and by the traditional discouragement of contact with polytheistic populations. Medieval Jewish communities in Eastern Europe continued to display distinct cultural traits over the centuries. Despite the universalist leanings of the Enlightenment (and its echo within Judaism in the Haskalah movement), many Yiddish-speaking Jews in Eastern Europe continued to see themselves as forming a distinct national group (" 'am yehudi", from the Biblical Hebrew); but, adapting this idea to Enlightenment values, they assimilated the concept as that of an ethnic group whose identity did not depend on religion, which under Enlightenment thinking fell under a separate category. Constantin Măciucă writes of "a differentiated but not isolated Jewish spirit" permeating the culture of Yiddish-speaking Jews.[8] This was only intensified as the rise of Romanticism amplified the sense of national identity across Europe generally. Thus, for example, members of the General Jewish Labour Bund in the late 19th and early 20th centuries were generally non-religious, and one of the historical leaders of the Bund was the child of converts to Christianity, though not a practicing or believing Christian himself. The Haskalah combined with the Jewish Emancipation movement under way in Central and Western Europe to create an opportunity for Jews to enter secular society. At the same time, pogroms in Eastern Europe provoked a surge of migration, in large part to the United States, where some 2 million Jewish immigrants resettled between 1880 and 1920. By 1931, shortly before the Holocaust, 92% of the world's Jewish population was Ashkenazi in origin. Secularism originated in Europe as a series of movements that militated for a new, heretofore unheard-of concept called "secular Judaism".
For these reasons, much of what is thought of by English-speakers and, to a lesser extent, by non-English-speaking Europeans as "secular Jewish culture" is, in essence, the Jewish cultural movement that evolved in Central and Eastern Europe, and was subsequently brought to North America by immigrants. During the 1940s, the Holocaust uprooted and destroyed most of the Jewish communities living in much of Europe. This, in combination with the creation of the State of Israel and the consequent Jewish exodus from Arab lands, resulted in a further geographic shift. Defining secular culture among those who practice traditional Judaism is difficult, because the entire culture is, by definition, entwined with religious traditions: the idea of separate ethnic and religious identity is foreign to the Hebrew tradition of an " 'am yisrael". (This is particularly true for Orthodox Judaism.) Gary Tobin, head of the Institute for Jewish and Community Research, said of traditional Jewish culture: The dichotomy between religion and culture doesn't really exist. Every religious attribute is filled with culture; every cultural act filled with religiosity. Synagogues themselves are great centers of Jewish culture. After all, what is life really about? Food, relationships, enrichment … So is Jewish life. So many of our traditions inherently contain aspects of culture. Look at the Passover Seder — it's essentially great theater. Jewish education and religiosity bereft of culture is not as interesting.[9] Yaakov Malkin, Professor of Aesthetics and Rhetoric at Tel Aviv University and the founder and academic director of Meitar College for Judaism as Culture[10] in Jerusalem, writes: "Today very many ..."[11] In North America, the secular and cultural Jewish movements are divided into three umbrella organizations, among them the Workmen's Circle.
Philosophy
Jewish philosophy includes all philosophy carried out by Jews, or in relation to the religion of Judaism. Jewish philosophy extends over several main eras in Jewish history, including the ancient and biblical era, the medieval era and the modern era (see Haskalah). Ancient Jewish philosophy is expressed in the Bible. According to Prof. Israel Efros, the principles of Jewish philosophy begin in the Bible, where the foundations of Jewish monotheistic belief can be found, such as the belief in one God, the separation of God from the world and from nature (as opposed to pantheism), and the creation of the world. Other biblical writings associated with philosophy are Psalms, which contains invitations to admire the wisdom of God through his works (from this, some scholars suggest, Judaism harbors a philosophical undercurrent),[12] and Ecclesiastes, which is often considered the only genuine philosophical work in the Hebrew Bible; its author seeks to understand the place of human beings in the world and life's meaning.[13] Other writings related to philosophy can be found in the Deuterocanonical books, such as Sirach and the Book of Wisdom. During the Hellenistic era, Hellenistic Judaism aspired to combine Jewish religious tradition with elements of Greek culture and philosophy. The philosopher Philo used philosophical allegory to attempt to fuse and harmonize Greek philosophy with Jewish philosophy. His work attempts to combine Plato and Moses into one philosophical system.[14] He developed an allegorical approach to interpreting holy scripture, in contrast to literal approaches to interpretation.
His allegorical exegesis was important for several Christian Church Fathers, and some scholars hold that his concept of the Logos as God's creative principle influenced early Christology. Other scholars, however, deny direct influence but say both Philo and early Christianity borrowed from a common source.[15]
The opening page of Spinoza's magnum opus, Ethics
Between the ancient era and the Middle Ages, most Jewish philosophy was concentrated in the rabbinic literature expressed in the Talmud and Midrash. In the 9th century, Saadia Gaon wrote the text Emunoth ve-Deoth, which is the first systematic presentation and philosophic foundation of the dogmas of Judaism. The Golden Age of Jewish culture in Spain included many influential Jewish philosophers such as Moses ibn Ezra, Abraham ibn Ezra, Solomon ibn Gabirol, Yehuda Halevi, Isaac Abravanel, Nahmanides, Joseph Albo, Abraham ibn Daud, Nissim of Gerona, Bahya ibn Paquda, Abraham bar Hiyya, Joseph ibn Tzaddik, Hasdai Crescas and Isaac ben Moses Arama. The most notable is Maimonides, who is considered, beyond the Jewish world, a prominent philosopher and polymath in both the Islamic and Western worlds. Outside of Spain, other philosophers were Natan'el al-Fayyumi, Elia del Medigo, Jedaiah ben Abraham Bedersi and Gersonides. Philosophy by Jews in the modern era was expressed by philosophers, mainly in Europe, such as Baruch Spinoza, founder of Spinozism, whose work included modern rationalism and biblical criticism and laid the groundwork for the 18th-century Enlightenment.[16] His work has earned him recognition as one of Western philosophy's most important thinkers. Others were Isaac Orobio de Castro, Tzvi Ashkenazi, David Nieto, Isaac Cardoso, Jacob Abendana, Uriel da Costa, Francisco Sanches and Moses Almosnino. A new era began in the 18th century with the thought of Moses Mendelssohn. Mendelssohn has been described as the "'third Moses,' with whom begins a new era in Judaism," just as new eras began with Moses the prophet and with Moses Maimonides.[17] Mendelssohn was a German Jewish philosopher to whose ideas the renaissance of European Jews, the Haskalah (the Jewish Enlightenment), is indebted. He has been referred to as the father of Reform Judaism, though Reform spokesmen have been "resistant to claim him as their spiritual father".[18] Mendelssohn came to be regarded as a leading cultural figure of his time by both Germans and Jews. Jewish Enlightenment philosophy included Menachem Mendel Lefin, Salomon Maimon and Isaac Satanow. The 19th century comprised both secular and religious philosophy, with philosophers such as Elijah Benamozegh, Hermann Cohen, Moses Hess, Samson Raphael Hirsch, Samuel Hirsch, Nachman Krochmal, Samuel David Luzzatto, Nachman of Breslov (founder of the Breslov movement) and Karl Marx (founder of the Marxist worldview). The 20th century included the notable philosophers Jacques Derrida, Karl Popper, Hilary Putnam, Alfred Tarski, Ludwig Wittgenstein, A. J. Ayer, Isaiah Berlin and Henri Bergson.
Philo (c. 25 BCE – c. 50 CE), Nahmanides (1194 – 1270), Maimonides (1135/1138 - 1204), Baruch Spinoza (1632 - 1677), Moses Mendelssohn (1729 - 1786), Karl Marx (1818 - 1883), Ludwig Wittgenstein (1889 - 1951), Jacques Derrida (1930 - 2004)
Education and politics
A range of moral and political views is evident early in the history of Judaism, which serves partly to explain the diversity apparent among secular Jews, who are often influenced by moral beliefs found in Jewish scripture and traditions.
In recent centuries, secular Jews in Europe and the Americas have tended towards the liberal political left, and played key roles in the birth of the 19th century's labor movement and socialism. While Diaspora Jews have also been represented on the conservative side of the political spectrum, even politically conservative Jews have tended to support pluralism more consistently than many other elements of the political right. Some scholars[19] attribute this to the fact that Jews are not expected to proselytize, a norm derived from Halakha. This lack of a universalizing religion is combined with the fact that most Jews live as minorities in diaspora countries, and that no central Jewish religious authority has existed since 363 CE.
Economic activity
David Ricardo (1772 - 1823), one of the most influential of the classical economists[20][21]
In the Middle Ages, European laws prevented Jews from owning land and gave them a powerful incentive to go into other professions that the indigenous Europeans were not willing to follow.[22] During the medieval period, there was a very strong social stigma against lending money and charging interest among the Christian majority. In most of Europe until the late 18th century, and in some places to an even later date, Jews were prohibited by Roman Catholic governments (and others) from owning land. On the other hand, the Church, because of a number of Bible verses (e.g., Leviticus 25:36) forbidding usury, declared that charging any interest was against the divine law, and this prevented any mercantile use of capital by pious Christians. As canon law did not apply to Jews, they were not liable to the ecclesiastical punishments which were placed upon usurers by the popes. Christian rulers gradually saw the advantage of having a class of men like the Jews who could supply capital for their use without being liable to excommunication, and so the money trade of western Europe by this means fell into the hands of the Jews. However, in almost every instance where large amounts were acquired by Jews through banking transactions, the property thus acquired fell, either during their life or upon their death, into the hands of the king. This happened to Aaron of Lincoln in England, Ezmel de Ablitas in Navarre, Heliot de Vesoul in Provence, Benveniste de Porta in Aragon, etc. It was often for this reason that kings supported the Jews, and even objected to their becoming Christians (because in that case they could not be forced to give up money won by usury). Thus, both in England and in France the kings demanded to be compensated for every Jew converted. This type of royal trickery was one factor in creating the stereotypical Jewish role of banker and/or merchant.
Science and technology
Illustration of God creating the cosmos, reflecting the biblical narrative
The strong Jewish tradition of religious scholarship often left Jews well prepared for secular scholarship. In some times and places this was countered by banning Jews from studying at universities, or by admitting them only in limited numbers (see Jewish quota). Over the centuries, Jews have been poorly represented among land-holding classes, but far better represented in academia, the professions, finance, commerce and many scientific fields.
The strong representation of Jews in science and academia is evidenced by the fact that 193 persons known to be Jews or of Jewish ancestry have been awarded the Nobel Prize, accounting for 22% of all individual recipients worldwide between 1901 and 2014;[23] of these, 26% were in physics,[24] 22% in chemistry[25] and 27% in physiology or medicine.[26] In the fields of mathematics and computer science, 31% of Turing Award recipients[27] and 27% of Fields Medal recipients in mathematics[28] were or are Jewish. Jews have also contributed to agricultural science.[30][31] The Mosaic code has provisions concerning the conservation of natural resources, such as trees (Deuteronomy 20:19-20) and birds (Deuteronomy 22:6-7).
German edition of the astronomy book De scientia motvs orbis, originally by Mashallah ibn Athari
A Jewish physician in traditional costume
During the medieval era, astronomy was a primary field among Jewish scholars and was widely studied and practiced.[32] Prominent astronomers included Abraham Zacuto, who published in 1478 his Hebrew book Ha-hibbur ha-gadol,[33] in which he wrote about the solar system, charting the positions of the Sun, Moon and five planets.[33] His work served Portugal's voyages of exploration and was used by Vasco da Gama and also by Christopher Columbus. The lunar crater Zagut is named in his honor. The mathematician and astronomer Abraham bar Hiyya Ha-Nasi authored the first European book to include the full solution to the quadratic equation x^2 - ax + b = 0,[34] and influenced the work of Leonardo Fibonacci. Bar Hiyya proved, by a geometro-mechanical method of indivisibles, the following equation for any circle: S = L×R/2, where S is the surface area, L is the circumference length and R is the radius[35] (with L = 2πR this reduces to the familiar S = πR²). Garcia de Orta, a Portuguese Renaissance Jewish physician, was a pioneer of tropical medicine. He published his work Colóquios dos simples e drogas da India in 1563, which deals with a series of substances, many of them unknown or the subject of confusion and misinformation in Europe at this period. He was the first European to describe Asiatic tropical diseases, notably cholera; he performed an autopsy on a cholera victim, the first recorded autopsy in India. Bonet de Lattes is known chiefly as the inventor of an astronomical ring-dial by means of which solar and stellar altitudes can be measured and the time determined with great precision by night as well as by day. Other related figures are Abraham ibn Ezra, after whom the lunar crater Abenezra is named; David Gans; Judah ibn Verga; and the astronomer Mashallah ibn Athari, after whom the lunar crater Messala is named.
Quantum mechanics and nuclear energy
Castle Romeo (nuclear test); a large number of Jewish scientists were involved in the Manhattan Project
The Manhattan Project was a research and development project that produced the first atomic bombs during World War II, and many Jewish scientists had a significant role in it.[39] The theoretical physicist Robert Oppenheimer, often considered the "father of the atomic bomb", was chosen to direct the Manhattan Project at the Los Alamos laboratory in 1942. Others included the physicist Leó Szilárd, who conceived the nuclear chain reaction; Edward Teller, "the father of the hydrogen bomb", and Stanislaw Ulam; Eugene Wigner, who contributed to the theory of the atomic nucleus and elementary particles; Hans Bethe, whose work included stellar nucleosynthesis and who was head of the Theoretical Division at the secret Los Alamos laboratory; Richard Feynman; Niels Bohr; Victor Weisskopf; and Joseph Rotblat.
The mathematician and physicist Alexander Friedmann pioneered the theory that the universe is expanding, governed by a set of equations he developed, now known as the Friedmann equations. Arno Allan Penzias, physicist and radio astronomer, was co-discoverer of the cosmic microwave background radiation, which helped establish the Big Bang theory; the scientists Robert Herman and Ralph Alpher had also worked in that field. In quantum mechanics the Jewish role was significant as well, and many of the most influential figures and pioneers of the theory were Jewish: Niels Bohr and his work on the structure of the atom; Max Born (the statistical interpretation of the wave function); Wolfgang Pauli; Richard Feynman (quantum electrodynamics); Fritz London (the London dispersion force and the London equations); Walter Heitler and Julian Schwinger (work on quantum electrodynamics); Asher Peres, a pioneer of quantum information; and David Bohm (the quantum potential). Sigmund Freud, known as the father of psychoanalysis, is one of the most influential scientists of the 20th century. In creating psychoanalysis, a clinical method for treating psychopathology through dialogue between a patient and a psychoanalyst,[40] Freud developed therapeutic techniques such as the use of free association and discovered transference, establishing its central role in the analytic process. Freud's redefinition of sexuality to include its infantile forms led him to formulate the Oedipus complex as the central tenet of psychoanalytical theory. His analysis of dreams as wish-fulfillments provided him with models for the clinical analysis of symptom formation and the mechanisms of repression, as well as for the elaboration of his theory of the unconscious as an agency disruptive of conscious states of mind.[41] Freud postulated the existence of libido, an energy with which mental processes and structures are invested and which generates erotic attachments, and a death drive, the source of repetition, hate, aggression and neurotic guilt.[42]
The first functioning laser, created by Theodore H. Maiman in 1960[43][44]
John von Neumann, a mathematician and physicist, made major contributions to a number of fields,[45] including the foundations of mathematics, functional analysis, ergodic theory, geometry, topology, numerical analysis, quantum mechanics, hydrodynamics and game theory.[46] He also did major work in computing and the development of the computer: he suggested and described a computer architecture now called the von Neumann architecture, and worked on linear programming, self-replicating machines, stochastic computing and statistics. Emmy Noether was an influential mathematician known for her groundbreaking contributions to abstract algebra and theoretical physics. Described by many prominent scientists as the most important woman in the history of mathematics,[47][48] she revolutionized the theories of rings, fields, and algebras. In physics, Noether's theorem explains the fundamental connection between symmetry and conservation laws.[49]
Israeli Shavit space launcher
Other remarkable contributors include the sociologist Georg Simmel. Besides scientific discoveries and research, Jews have created significant and influential innovations in a large variety of fields, for example: Siegfried Marcus, automobile pioneer and inventor of the first car; Emile Berliner, developer of the disc record phonograph; Mikhail Gurevich, co-designer of the MiG aircraft; Theodore Maiman, inventor of the laser; Robert Adler, inventor of the wireless remote control for televisions; Edwin H.
Land, inventor of the Land Camera; Bob Kahn, co-inventor of TCP/IP; Bram Cohen, creator of BitTorrent; Sergey Brin and Larry Page, creators of Google; László Bíró, inventor of the ballpoint pen; Simcha Blass, inventor of drip irrigation; Lee Felsenstein, designer of the Osborne 1; Zeev Suraski and Andi Gutmans, co-creators of PHP and founders of Zend Technologies; and Ralph H. Baer, "the Father of Video Games".
Albert Einstein, John von Neumann, Sigmund Freud, Niels Bohr, Emmy Noether, Richard Feynman, Robert Oppenheimer
Literature and poetry
In some places where there have been relatively high concentrations of Jews, distinct secular Jewish subcultures have arisen. For example, ethnic Jews formed an enormous proportion of the literary and artistic life of Vienna, Austria at the end of the 19th century, or of New York City 50 years later (and of Los Angeles in the mid-to-late 20th century). Many of these creative Jews were not particularly religious people. In general, Jewish artistic culture in various periods reflected the culture in which they lived.
Gutenberg Bible. The Bible was authored by Jews during the Iron Age and the Classical era. It comprises cultural values, basic human values, mythology and religious beliefs of both Judaism and Christianity[50]
Literary and theatrical expressions of secular Jewish culture may be in specifically Jewish languages such as Hebrew, Yiddish or Ladino, or in the language of the surrounding cultures, such as English or German. Secular literature and theater in Yiddish largely began in the 19th century and was in decline by the middle of the 20th century. The revival of Hebrew beyond its use in the liturgy is largely an early 20th-century phenomenon, and is closely associated with Zionism. Apart from the use of Hebrew in Israel, whether a Jewish community will speak a Jewish or non-Jewish language as its main vehicle of discourse is generally dependent on how isolated or assimilated that community is. For example, the Jews in the shtetls of Poland and on the Lower East Side of New York during the early 20th century spoke Yiddish at most times, while assimilated Jews in 19th and early 20th-century Germany spoke German, and American-born Jews in the United States speak English. Jewish authors have both created a unique Jewish literature and contributed to the national literature of many of the countries in which they live. Though not strictly secular, the Yiddish works of authors like Sholem Aleichem (whose collected works amounted to 28 volumes) and Isaac Bashevis Singer (winner of the 1978 Nobel Prize) form their own canon, focusing on the Jewish experience in both Eastern Europe and America. In the United States, Jewish writers like Philip Roth, Saul Bellow, and many others are considered among the greatest American authors, and incorporate a distinctly secular Jewish view into many of their works. The poetry of Allen Ginsberg often touches on Jewish themes (notably the early autobiographical works such as Howl and Kaddish).
Other famous Jewish authors who made contributions to world literature include Heinrich Heine, the German poet; Mordecai Richler, the Canadian author; Isaac Babel, the Russian author; Franz Kafka, of Prague; and Harry Mulisch, whose novel The Discovery of Heaven was revealed by a 2007 poll as the "Best Dutch Book Ever".[51]
Hebrew Book Week in Jerusalem
In Modern Judaism: An Oxford Guide, Yaakov Malkin, Professor of Aesthetics and Rhetoric at Tel Aviv University and the founder and academic director of Meitar College for Judaism as Culture in Jerusalem, writes: Secular Jewish culture embraces literary works that have stood the test of time as sources of aesthetic pleasure and ideas shared by Jews and non-Jews, works that live on beyond the immediate socio-cultural context within which they were created. They include the writings of such Jewish authors as Sholem Aleichem, Itzik Manger, Isaac Bashevis Singer, Philip Roth, Saul Bellow, S.Y. Agnon, Isaac Babel, Martin Buber, Isaiah Berlin, Haim Nahman Bialik, Yehuda Amichai, Amos Oz, A.B. Yehoshua, and David Grossman. It boasts masterpieces that have had a considerable influence on all of western culture, Jewish culture included - works such as those of Heinrich Heine, Gustav Mahler, Leonard Bernstein, Marc Chagall, Jacob Epstein, Ben Shahn, Amedeo Modigliani, Franz Kafka, Max Reinhardt (Goldman), Ernst Lubitsch, and Woody Allen.[11] Other notable contributors include the author of the A Song of Ice and Fire novels.[52][53][54] Another aspect of Jewish literature is the ethical, called Musar literature. Among recipients of the Nobel Prize in Literature, 13% were or are Jewish.[55] Hebrew poetry has been written by various poets in different eras of Jewish history. Biblical poetry is the poetry of biblical times as expressed in the Hebrew Bible and Jewish sacred texts. In medieval times, Jewish poetry was expressed mainly in piyyutim, by poets such as Yehuda Halevi, Samuel ibn Naghrillah, Solomon ibn Gabirol, Moses ibn Ezra, Abraham ibn Ezra and Dunash ben Labrat. Modern Hebrew poetry mostly belongs to the era of and after the revival of the Hebrew language; it was pioneered by Moshe Chaim Luzzatto in the Haskalah era and carried forward by poets such as Hayim Nahman Bialik, Nathan Alterman and Shaul Tchernichovsky.
Yehuda Halevi (c. 1075 – 1141), Heinrich Heine (1797 - 1856), Sholem Aleichem (1859 - 1916), Franz Kafka (1883 - 1924), Boris Pasternak (1890 - 1960), Isaac Asimov (1920 - 1992), Allen Ginsberg (1926 - 1997)
Yiddish theatre
Hana Rovina in The Dybbuk (1920), a play by S. Ansky
The Ukrainian Jew Abraham Goldfaden founded the first professional Yiddish-language theatre troupe in Iași, Romania in 1876. The next year, his troupe achieved enormous success in Bucharest. Within a decade, Goldfaden and others brought Yiddish theater to Ukraine, Russia, Poland, Germany, New York City, and other cities with significant Ashkenazic populations. Between 1890 and 1940, over a dozen Yiddish theatre groups existed in New York City alone, in the Yiddish Theater District, performing original plays, musicals, and Yiddish translations of theatrical works and opera. Perhaps the most famous of Yiddish-language plays is The Dybbuk (1919) by S. Ansky. Yiddish theater in New York in the early 20th century rivalled English-language theater in quantity and often surpassed it in quality. A 1925 New York Times article remarks, "…Yiddish theater… is now a stable American institution and no longer dependent on immigration from Eastern Europe.
People who can neither speak nor write Yiddish attend Yiddish stage performances and pay Broadway prices on Second Avenue." This article also mentions other aspects of a New York Jewish cultural life "in full flower" at that time, among them the fact that the extensive New York Yiddish-language press of the time included seven daily newspapers.[56] In fact, however, the next generation of American Jews spoke mainly English to the exclusion of Yiddish; they brought the artistic energy of Yiddish theater into the American theatrical mainstream, but usually in a less specifically Jewish form. Yiddish theater, most notably the Moscow State Jewish Theater directed by Solomon Mikhoels, also played a prominent role in the arts scene of the Soviet Union until Stalin's 1948 reversal in government policy toward the Jews. (See Rootless cosmopolitan, Night of the Murdered Poets.) Montreal's Dora Wasserman Yiddish Theatre continues to thrive after 50 years of performance.
European theatre
S. Ansky, 1910
From their Emancipation to World War II, Jews were very active and sometimes even dominant in certain forms of European theatre, and after the Holocaust many Jews continued to contribute to that cultural form. For example, in pre-Nazi Germany, where Nietzsche asked "What good actor of today is not Jewish?", acting, directing and writing positions were often filled by Jews. "In Imperial Berlin, Jewish artists could be found in the forefront of the performing arts, from high drama to more popular forms like cabaret and revue, and eventually film. Jewish audiences patronized innovative theater, regardless of whether they approved of what they saw."[57] The British historian Paul Johnson, commenting on Jewish contributions to European culture at the fin de siècle, writes that: The area where Jewish influence was strongest was the theatre, especially in Berlin. Playwrights like Carl Sternheim, Arthur Schnitzler, Ernst Toller, Erwin Piscator, Walter Hasenclever, Ferenc Molnár and Carl Zuckmayer, and influential producers like Max Reinhardt, appeared at times to dominate the stage, which tended to be modishly left-wing, pro-republican, experimental and sexually daring. But it was certainly not revolutionary, and it was cosmopolitan rather than Jewish.[58] Jews also made similar, if not as massive, contributions to theatre and drama in Austria, Britain, France, and Russia (in the national languages of those countries). Jews in Vienna, Paris and German cities found cabaret both a popular and effective means of expression, as German cabaret in the Weimar Republic "was mostly a Jewish art form".[59] The involvement of Jews in Central European theatre was halted by the rise of the Nazis and the purging of Jews from cultural posts, though many emigrated to Western Europe or the United States and continued working there.
English-language theatre
See also List of Jewish American musicals writers, List of Jewish Americans in theatre, List of Jewish American playwrights.
Yiddish theatre fed into the mainstream of American stage and film acting, and the predominance of Jewish composers and lyricists on Broadway has been traced to "the tradition established from New York's Yiddish theater."[60] Not only have "Jewish composers and lyricists always dominated Broadway musicals"[61] in New York City, but they were instrumental in the creation and development of the genre of musical theatre and of earlier forms of theatrical entertainment, as well as contributing to non-musical theatre in the United States. Brandeis University Professor Stephen J. Whitfield has commented that "More so than behind the screen, the talent behind the stage was for over half a century virtually the monopoly of one ethnic group. That is... [a] feature which locates Broadway at the center of Jewish culture".[62] New York University Professor Laurence Maslon says that "There would be no American musical without Jews… Their influence is corollary to the influence of black musicians on jazz; there were as many Jews involved in the form".[63] Other writers, such as Jerome Charyn, have noted that musical theatre and other forms of American entertainment are uniquely indebted to the contributions of Jewish-Americans, since "there might not have been a modern Broadway without the "Asiatic horde" of comedians, gossip columnists, songwriters, and singers that grew out of the ghetto, whether it was on the Lower East Side, Harlem (a Jewish ghetto before it was a black one), Newark, or Washington, D.C."[64] Likewise, in the analysis of Aaron Kula, director of The Klezmer Company, "…the Jewish experience has always been best expressed by music, and Broadway has always been an integral part of the Jewish-American experience… The difference is that one can expand the definition of "Jewish Broadway" to include an interdisciplinary roadway with a wide range of artistic activities packed onto one avenue--theatre, opera, symphony, ballet, publishing companies, choirs, synagogues and more. This vibrant landscape reflects the life, times and creative output of the Jewish-American artist".[65]
Fiddler on the Roof, original Broadway window card
In the 19th and early 20th centuries, Jews also shaped the European operetta: the co-librettist of Bizet's Carmen (not an operetta proper but rather a work of the earlier opéra comique form) was the Jewish Ludovic Halévy, nephew of the composer Fromental Halévy (Bizet himself was not Jewish, but he married the elder Halévy's daughter; many have suspected that he was the descendant of Jewish converts to Christianity, and others have noticed Jewish-sounding intervals in his music).[66] The Viennese librettist Victor Léon summarized the connection of Jewish composers and writers with the form of operetta: "The audience for operetta wants to laugh beneath tears—and that is exactly what Jews have been doing for the last two thousand years since the destruction of Jerusalem".[67] Another factor in the evolution of musical theatre was vaudeville, and during the early 20th century the form was explored and expanded by Jewish comedians and actors such as Jack Benny, Fanny Brice, Eddie Cantor, The Marx Brothers, Anna Held, Al Jolson, Molly Picon, Sophie Tucker and Ed Wynn. During the period when Broadway was monopolized by revues and similar entertainments, the Jewish producer Florenz Ziegfeld dominated the theatrical scene with his Follies. By 1910, Jews (the vast majority of them immigrants from Eastern Europe) were prominent among Broadway's writers, among them Morrie Ryskind.
From that time until the 1980s a vast majority of successful musical theatre composers, lyricists, and book-writers were Jewish (a notable exception is the Protestant Cole Porter, who acknowledged that the reason he was so successful on Broadway was that he wrote what he called "Jewish music").[68] Rodgers and Hammerstein, Frank Loesser, Lerner and Loewe, Stephen Sondheim, Leonard Bernstein, Stephen Schwartz, Kander and Ebb and dozens of others during the "Golden Age" of musical theatre were Jewish. Since the Tony Award for Best Original Score was instituted in 1947, approximately 70% of nominated scores and 60% of winning scores were by Jewish composers. Among successful British and French musical writers in both the West End and Broadway, Claude-Michel Schönberg and Lionel Bart are Jewish, among others. One explanation of the affinity of Jewish composers and playwrights to the musical points to the themes of the outsider and of tolerance that run through shows such as Finian's Rainbow, South Pacific and The King and I. Towards the end of the Golden Age, writers also began to openly and overtly tackle Jewish subjects and issues, as in Fiddler on the Roof and Rags; Bart's Blitz! also tackles relations between Jews and Gentiles. Jason Robert Brown and Alfred Uhry's Parade is a sensitive exploration of both anti-Semitism and historical American racism. The original concept that became West Side Story was set in the Lower East Side during Easter-Passover celebrations; the rival gangs were to be Jewish and Italian Catholic.[71] The ranks of prominent Jewish producers, directors, designers and performers include Boris Aronson, David Belasco, Joel Grey, the Minskoff family, Zero Mostel, Joseph Papp, Mandy Patinkin, the Nederlander family, Harold Prince, Max Reinhardt, Jerome Robbins, the Shubert family and Julie Taymor. Jewish playwrights have also contributed to non-musical drama and theatre, both on Broadway and regionally. Edna Ferber, Moss Hart, Lillian Hellman, Arthur Miller and Neil Simon are only some of the prominent Jewish playwrights in American theatrical history. Approximately 34% of the plays and musicals that have won the Pulitzer Prize for Drama were written and composed by Jewish Americans.[72] The Association for Jewish Theater is a contemporary organization that includes both American and international theaters that focus on theater with Jewish content. It has also expanded to include Jewish playwrights.
Hebrew and Israeli theatre
Habima theater, 2011
The earliest known Hebrew-language drama was written around 1550 by a Jewish-Italian writer from Mantua.[73] A few works were written by rabbis and Kabbalists in 17th-century Amsterdam, where Jews were relatively free from persecution and had both flourishing religious and secular Jewish cultures.[74] All of these early Hebrew plays were about Biblical or mystical subjects, often in the form of Talmudic parables. During the post-Emancipation period in 19th-century Europe, many Jews translated great European plays such as those by Shakespeare, Molière and Schiller, giving the characters Jewish names and transplanting the plot and setting to within a Jewish context. Modern Hebrew theatre and drama, however, began with the development of Modern Hebrew in Europe (the first professional Hebrew theatrical performance was in Moscow in 1918)[75] and was "closely linked with the Jewish national renaissance movement of the twentieth century.
The historical awareness and the sense of primacy which accompanied the Hebrew theatre in its early years dictated the course of its artistic and aesthetic development".[76] These traditions were soon transplanted to Israel. Playwrights such as Natan Alterman, Hayyim Nahman Bialik, Leah Goldberg, Ephraim Kishon, Hanoch Levin, Aharon Megged, Moshe Shamir, Avraham Shlonsky, Yehoshua Sobol and A. B. Yehoshua have written Hebrew-language plays. Common themes in these works include the Holocaust, the Arab-Israeli conflict, the meaning of Jewishness, and contemporary secular-religious tensions within Jewish Israel. The best-known Hebrew theatre company and Israel's national theatre is the Habima (meaning "the stage" in Hebrew), which was formed in 1913 in Lithuania and re-established in 1917 in Russia; another prominent Israeli theatre company is the Cameri Theatre, which is "Israel's first and leading repertory theatre".[77]
Film
In the era when Yiddish theatre was still a major force in the world of theatre, over 100 films were made in Yiddish. Many are now lost. Prominent films included Shulamith (1931); the first Yiddish musical on film, His Wife's Lover (1931); A Daughter of Her People (1932); the anti-Nazi film The Wandering Jew (1933); The Yiddish King Lear (1934); Shir Hashirim (1935); the biggest Yiddish film hit of all time, Yidl Mitn Fidl (1936); Where Is My Child? (1937); Green Fields (1937); Dybuk (1937); The Singing Blacksmith (1938); Tevya (1939); Mirele Efros (1939); Lang ist der Weg (1948); and God, Man and Devil (1950). The roster of Jewish entrepreneurs in the English-language American film industry is legendary: Samuel Goldwyn, Louis B. Mayer, the Warner Brothers, David O. Selznick, Marcus Loew, Adolph Zukor and William Fox, to name just a few, and continuing into recent times with such industry giants as super-agent Michael Ovitz, Michael Eisner, Lew Wasserman, Jeffrey Katzenberg, Steven Spielberg, and David Geffen. However, few of these brought a specifically Jewish sensibility either to the art of film or, with the sometime exception of Spielberg, to their choice of subject matter. The historian Eric Hobsbawm described the situation as follows:[78] It would be ... pointless to look for consciously Jewish elements in the songs of Irving Berlin or the Hollywood movies of the era of the great studios, all of which were run by immigrant Jews: their object, in which they succeeded, was precisely to make songs or films which found a specific expression for 100 per cent Americanness. A more specifically Jewish sensibility can be seen in the films of the Marx Brothers, Mel Brooks, or Woody Allen; other examples of specifically Jewish films from the Hollywood film industry are the Barbra Streisand vehicle Yentl (1983) and John Frankenheimer's The Fixer (1968).
Radio and television
The first radio networks, the Radio Corporation of America and the Columbia Broadcasting System, were created by the Jewish Americans David Sarnoff and William S. Paley, respectively. These Jewish innovators were also among the first producers of televisions, both black-and-white and color.[79] Among the Jewish immigrant communities of America there was also a thriving Yiddish-language radio scene, with its "golden age" from the 1930s to the 1950s. Although there is little specifically Jewish television in the United States (National Jewish Television, largely religious, broadcasts only three hours a week), Jews have been involved in American television from its earliest days.
From Sid Caesar and Milton Berle to Joan Rivers, Gilda Radner, and Andy Kaufman, to Billy Crystal and Jerry Seinfeld, Jewish stand-up comedians have been icons of American television. Other Jews who held prominent roles in early radio and television were Eddie Cantor, Al Jolson, Jack Benny, Walter Winchell and David Susskind. Later figures include Larry King, Michael Savage and Howard Stern. In the analysis of Paul Johnson, "The Broadway musical, radio and TV were all examples of a fundamental principle in Jewish diaspora history: Jews opening up a completely new field in business and culture, a tabula rasa on which to set their mark, before other interests had a chance to take possession, erect guild or professional fortifications and deny them entry."[80] One of the first televised situation comedies, The Goldbergs, was set in a specifically Jewish milieu in the Bronx. While the overt Jewish milieu of The Goldbergs was unusual for an American television series, there were a few other examples, such as Brooklyn Bridge (1991–1993) and Bridget Loves Bernie. Jews have also played an enormous role among the creators and writers of television comedies: Woody Allen, Mel Brooks, Selma Diamond, Larry Gelbart, Carl Reiner, and Neil Simon all wrote for Sid Caesar; Reiner's son Rob Reiner worked with Norman Lear on All in the Family (which often engaged anti-Semitism and other issues of prejudice); Larry David and Jerry Seinfeld created the hit sitcom Seinfeld; and Lorne Michaels, Al Franken, Rosie Shuster, and Alan Zweibel of Saturday Night Live breathed new life into the variety show in the 1970s. More recently, American Jews have been instrumental in "novelistic" television series such as The Wire and The Sopranos. Widely acclaimed as one of the greatest television series of all time, The Wire was created by David Simon, who also served as executive producer, head writer, and showrunner. Matthew Weiner produced the fifth and sixth seasons of The Sopranos and later created Mad Men. Other notable contributors include David Benioff and D. B. Weiss, creators of the TV series Game of Thrones; Ron Leavitt, co-creator of Married... with Children; Damon Lindelof and J. J. Abrams, co-creators of Lost; David Crane and Marta Kauffman, creators of Friends; Tim Kring, creator of Heroes; Sydney Newman, co-creator of Doctor Who; Darren Star, creator of Sex and the City and Melrose Place; Aaron Spelling, co-creator of Beverly Hills, 90210; Chuck Lorre, co-creator of The Big Bang Theory and Two and a Half Men; Gideon Raff, creator of Prisoners of War, on which Homeland is based; Aaron Ruben and Sheldon Leonard, co-creators of The Andy Griffith Show; Don Hewitt, creator of 60 Minutes; Garry Shandling, co-creator of The Larry Sanders Show; Ed. Weinberger, co-creator of The Cosby Show; David Milch, creator of Deadwood; Steven Levitan, co-creator of Modern Family; Dick Wolf, creator of Law & Order; David Shore, creator of House; and Max Mutchnick and David Kohan, creators of Will & Grace. Jews have also played a significant role in acting, with actors such as Sarah Jessica Parker, William Shatner, Leonard Nimoy, Mila Kunis, Zac Efron, Hank Azaria, David Duchovny, Fred Savage, Zach Braff, Noah Wyle, Adam Brody, Katey Sagal, Sarah Michelle Gellar, Alyson Hannigan, Michelle Trachtenberg, David Schwimmer, Lisa Kudrow and Mayim Bialik.
Music
Jewish musical contributions also tend to reflect the cultures of the countries in which Jews live, the most notable examples being classical and popular music in the United States and Europe.
(See: Jews in Classical Music and Jews in Mainstream and Jazz.) Some music, however, is unique to particular Jewish communities, such as Israeli music, Israeli folk music, Klezmer, Sephardic and Ladino music, and Mizrahi music.
Dance
Deriving from Biblical traditions, Jewish dance has long been used by Jews as a medium for the expression of joy and other communal emotions. Each Jewish diasporic community developed its own dance traditions for wedding celebrations and other distinguished events. For Ashkenazi Jews in Eastern Europe, for example, dances whose names corresponded to the different forms of klezmer music that were played were a staple of the wedding ceremony of the shtetl. Jewish dances were influenced both by surrounding Gentile traditions and by Jewish sources preserved over time. "Nevertheless the Jews practiced a corporeal expressive language that was highly differentiated from that of the non-Jewish peoples of their neighborhood, mainly through motions of the hands and arms, with more intricate legwork by the younger men."[81] In general, however, in most religiously traditional communities, members of the opposite sex dancing together, or dancing at times other than at these events, was frowned upon.
Humor
Jewish humor is the long tradition of humor in Judaism dating back to the Torah and the Midrash, but generally refers to the more recent stream of verbal, frequently self-deprecating and often anecdotal humor originating in Eastern Europe. Jewish humor took root in the United States over the last hundred years, beginning with vaudeville, and continuing through radio, stand-up, film, and television. A significant number of American comedians have been or are Jewish.
Visual arts and architecture
See also: List of Jews in the visual arts.
"Death of King Saul", by Elie Marcuse (1848). (Tel Aviv Museum of Art)
Compared to music or theater, there is less of a specifically Jewish tradition in the visual arts. The most likely and accepted reason is that, as has been previously shown with Jewish music and literature, before Emancipation Jewish culture was dominated by the religious tradition of aniconism. As most Rabbinical authorities believed that the Second Commandment prohibited much visual art that would qualify as "graven images", Jewish artists were relatively rare until they lived in assimilated European communities beginning in the late 18th century.[82][83] However, despite fears by early religious communities that art would be used for idolatrous purposes, Jewish sacred art is recorded in the Tanakh and extends throughout Jewish Antiquity and the Middle Ages.[84] The Tabernacle and the two Temples in Jerusalem form the first known examples of "Jewish art". During the first centuries of the Common Era, Jewish religious art was also created in regions surrounding the Mediterranean such as Syria and Greece, including frescoes on the walls of synagogues, of which the Dura-Europos Synagogue is the only survivor,[85] as well as the Jewish catacombs in Rome.[86][87]
Zodiac wheel mosaic in the great synagogue of Tzippori (5th century) in Galilee, Israel
Las Meninas, by Diego Velázquez (1656), who was of Jewish ancestry[88]
A Jewish tradition of illuminated manuscripts from at least Late Antiquity has left no survivors, but can be deduced from borrowings in Early Medieval Christian art. A number of luxury pieces of gold glass from the later Roman period have Jewish motifs.
Several Hellenistic-style floor mosaics have also been excavated in synagogues from Late Antiquity in Israel and Palestine, especially of the signs of the Zodiac, which were apparently acceptable in a low-status position on the floor. Some, such as that at Naaran, show evidence of a reaction against images of living creatures around 600 CE. The decoration of sarcophagi and walls at the cave cemetery at Beit She'arim shows a mixture of Jewish and Hellenistic motifs. However, for a period of several centuries between about 700 and 1100 CE there are scarcely any survivals of identifiably Jewish art.
Two Women (Dos Mujeres), portrait of Angelina Beloff and Maria Dolores Bastian by Diego Rivera (1914)
Medieval Rabbinical and Kabbalistic literature also contains textual and graphic art, most famously illuminated haggadahs such as the Sarajevo Haggadah, and other manuscripts like the Nuremberg Mahzor. Some of these were illustrated by Jewish artists and some by Christians; equally, some Jewish artists and craftsmen in various media worked on Christian commissions.[89] Johnson again summarizes this sudden change from a limited participation by Jews in visual art (as in many other arts) to a large movement by them into this branch of European cultural life: Again, the arrival of the Jewish artist was a strange phenomenon. It is true that, over the centuries, there had been many animals (though few humans) depicted in Jewish art: lions on Torah curtains, owls on Judaic coins, animals on the Capernaum capitals, birds on the rim of the fountain-basin in the 5th-century Naro synagogue in Tunis; there were carved animals, too, on timber synagogues in eastern Europe - indeed the Jewish wood-carver was the prototype of the modern Jewish plastic artist. A book of Yiddish folk-ornament, printed at Vitebsk in 1920, was similar to Chagall's own bestiary. But the resistance of pious Jews to portraying the living human image was still strong at the beginning of the 20th century.[90] There were few Jewish secular artists in Europe prior to the Emancipation, which spread throughout Europe with the Napoleonic conquests. There were exceptions: Salomon Adler was a prominent portrait painter in 18th-century Milan. The delay in participation in the visual arts parallels the lack of Jewish participation in European classical music until the nineteenth century, a lack that was progressively overcome with the rise of Modernism in the 20th century. There were many Jewish artists in the 19th century, but Jewish artistic activity boomed around the end of World War I. The Jewish artistic renaissance has its roots in the 1901 Fifth Zionist Congress, which included an art exhibition featuring the Jewish artists E. M. Lilien and Hermann Struck. The exhibition helped legitimize art as an expression of Jewish culture.[91] According to Nadine Nieszawer, "Until 1905, Jews were always plunged into their books but from the first Russian Revolution, they became emancipated, committed themselves in politics and became artists. A real Jewish cultural rebirth".[92] Individual Jews figured in the modern artistic movements of Europe. (With the exception of those living in isolated Jewish communities, most Jews listed here as contributing to secular Jewish culture also participated in the cultures of the peoples they lived with and nations they lived in. In most cases, however, the work and lives of these people did not exist in two distinct cultural spheres but rather in one that incorporated elements of both.)
During the early 20th century, Jews figured particularly prominently in the Montparnasse movement, and after World War II among the abstract expressionists: Alexander Bogen, Helen Frankenthaler, Adolph Gottlieb, Philip Guston, Al Held, Lee Krasner, Barnett Newman, Milton Resnick, Jack Tworkov, Mark Rothko, and Louis Schanker, as well as among Contemporary artists, Modernists and Postmodernists.[93] Many Russian Jews were prominent in the art of scenic design, particularly the aforementioned Chagall and Aronson, as well as the revolutionary Léon Bakst, who like the other two also painted. One Mexican Jewish artist was Pedro Friedeberg; historians disagree as to whether Frida Kahlo's father was Jewish or Lutheran. Gustav Klimt was not Jewish, but nearly all of his patrons and several of his models were. Among major artists, Chagall may be the most specifically Jewish in his themes. But as art fades into graphic design, Jewish names and themes become more prominent: Leonard Baskin, Al Hirschfeld, Ben Shahn, Art Spiegelman and Saul Steinberg. Jews have also played a very important role in media other than painting; in photography some notable figures are André Kertész, Robert Frank, Helmut Newton, Garry Winogrand, Cindy Sherman, Steve Lehman,[94] and Adi Nes; in installation art and street art some notable figures are Sigalit Landau,[95] Dede,[96] and Michal Rovner.
Diego Velázquez (1599–1660)
Camille Pissarro (1830–1903)
Amedeo Modigliani (1884–1920)
Diego Rivera (1886–1957)
Marc Chagall (1887–1985)
Comics, cartoons and animation
Stan Lee (left) and Jack Kirby made a major contribution to the American comic book industry. Their work includes Spider-Man, Captain America, Fantastic Four, Avengers and X-Men.
Graphic art, as expressed in the art of comics, has been a key field for Jewish artists as well. In the Golden and Silver Ages of American comic books, the Jewish role was overwhelming, and a large number of the medium's foremost creators have been Jewish.[97] Max Gaines was a pioneering figure in the creation of the modern comic book when, in 1935, he published the first one, Famous Funnies.[98] In 1939 he founded, with Jack Liebowitz and Harry Donenfeld, All-American Publications (the AA Group).[99] The publisher is known for the creation of several superheroes such as the original Atom, Flash, Green Lantern, Hawkman, and Wonder Woman. Donenfeld and Liebowitz were also the owners of National Allied Publications, which distributed Detective Comics and Action Comics; that company was also a precursor of DC Comics. In 1939 the pulp magazine publisher Martin Goodman formed Timely Publications,[100] a company to be known, since the 1960s, as Marvel Comics. At Marvel, artists such as Stan Lee, Jack Kirby,[101] Larry Lieber and Joe Simon created a large variety of characters and cultural icons including Spider-Man, Hulk, Captain America, Iron Man, Thor, Daredevil, and the teams Fantastic Four, Avengers, X-Men (including many of its characters) and S.H.I.E.L.D. Stan Lee attributed the Jewish role in comics to Jewish culture.[102] At DC Comics the Jewish role was significant as well; the character of Superman, created by the Jewish artists Joe Shuster and Jerry Siegel,[97] is partly based on the biblical figure of Samson.[103] It has also been suggested that Superman was partly influenced by Moses[104][105] and by other Jewish elements.
Also at DC Comics were Bob Kane, Bill Finger and Martin Nodell, creators of Batman,[97] Green Lantern and many related characters such as Robin, the Joker, the Riddler, Scarecrow and Catwoman, and Gil Kane, co-creator of the Atom and Iron Fist. Many of those involved in the later ages of comics are also Jewish, such as Julius Schwartz, Joe Kubert, Jenette Kahn, Len Wein, Peter David, Neil Gaiman, Chris Claremont and Brian Michael Bendis. There are also a large number of Jewish characters among comics superheroes, such as Magneto, Quicksilver, Kitty Pryde, the Thing, Sasquatch, Sabra, Ragman, Legion and Moon Knight, many of whom were and are influenced by events in Jewish history and elements of Jewish life.[106] In 1944 Max Gaines founded EC Comics.[107] The company is known for specializing in horror fiction, crime fiction, satire, military fiction and science fiction from the 1940s through the mid-1950s, notably the Tales from the Crypt series, The Haunt of Fear, The Vault of Horror, Crime SuspenStories and Shock SuspenStories. Jewish artists associated with the publisher include Al Feldstein, Dave Berg, and Jack Kamen. Will Eisner was an American cartoonist known as one of the earliest to work in the American comic book industry. He is the creator of the Spirit comics series and the graphic novel A Contract with God.[108] The Eisner Award was named in his honor, and is given to recognize achievements each year in the comics medium.
Ralph Bakshi is a director of animated and live-action films, known for films such as The Lord of the Rings, Wizards and Fire and Ice.
In 1952, William Gaines and Harvey Kurtzman founded Mad, an American humor magazine. It was widely imitated and influential, affecting satirical media as well as the cultural landscape of the 20th century, with editor Al Feldstein increasing readership to more than two million during its 1970s circulation peak.[109] Other well-known cartoonists include Lee Falk, creator of The Phantom and Mandrake the Magician; in Hebrew comics, Michael Netzer, creator of Uri-On, and Uri Fink, creator of Zbeng!; William Steig, creator of Shrek!; Daniel Clowes, creator of Eightball; and Art Spiegelman, creator of the graphic novel Maus and of Raw (with Françoise Mouly). In animation, Jewish contributions are also many: Genndy Tartakovsky is the creator of several animated TV series such as Dexter's Laboratory and Samurai Jack;[110] Matt Stone is co-creator of South Park; David Hilberman helped animate Bambi and Snow White and the Seven Dwarfs; Friz Freleng worked on Looney Tunes; Ralph Bakshi directed Fritz the Cat, Mighty Mouse: The New Adventures, Wizards, The Lord of the Rings, Heavy Traffic, Coonskin, Hey Good Lookin', Fire and Ice, and Cool World;[111] Alex Hirsch created Gravity Falls; Dave Fleischer and Lou Fleischer were founders of Fleischer Studios; and Max Fleischer animated Betty Boop, Popeye and Superman. Several companies producing animation were founded by Jews, such as DreamWorks, whose productions include Shrek, Madagascar, Kung Fu Panda and The Prince of Egypt, and Warner Bros., whose animation division is known for cartoons such as Looney Tunes, Tiny Toon Adventures, Animaniacs, Pinky and the Brain and Freakazoid!
Cuisine
Jewish cooking combines the food of many cultures in which Jews have settled, including Middle Eastern, Mediterranean, Spanish, German and Eastern European styles of cooking, all influenced by the need for food to be kosher. Thus, "Jewish" foods like bagels, hummus, stuffed cabbage, and blintzes all come from various other cultures.
The amalgam of these foods, plus uniquely Jewish contributions like tzimmes, cholent, gefilte fish and matzah balls, make up Jewish cuisine.
References
1. ^ Biale, David, Not in the Heavens: The Tradition of Jewish Secular Thought, Princeton University Press, 2011, p. 15.
2. ^ Biale, David, Not in the Heavens: The Tradition of Jewish Secular Thought, Princeton University Press, 2011, pp. 5–6.
3. ^ Torstrick, Rebecca L., Culture and Customs of Israel, Greenwood Press, 2004.
4. ^ Beit-Hallahmi, Benjamin, The Secular Israeli (Jewish) Identity: An Impossible Dream?, in Barry Alexander Kosmin, Ariela Keysar, eds., Secularism & Secularity: Contemporary International Perspectives, Institute for the Study of Secularism in Society and Culture, Trinity College, Hartford, 2007, p. 157.
6. ^ David Biale is the Emanuel Ringelblum Professor of Jewish History and the Chair of the Department of History at the University of California, Davis.
7. ^ Rebecca Newberger Goldstein, Betraying Spinoza: The Renegade Jew Who Gave Us Modernity, Schocken/Nextbook, 2006.
8. ^ Măciucă, Constantin, preface to Bercovici, Israil, O sută de ani de teatru evreiesc în România ("One hundred years of Yiddish/Jewish theater in Romania"), 2nd Romanian-language edition, revised and augmented by Constantin Măciucă. Editura Integral (an imprint of Editurile Universala), Bucharest (1998). ISBN 973-98272-2-5. See the article on the author for further information.
9. ^ The Emergence of a Jewish Cultural Identity, undated (2002 or later), reprinted from the National Foundation for Jewish Culture. Accessed 11 February 2006.
10. ^
11. ^ a b Malkin, Y. "Humanistic and Secular Judaisms." Modern Judaism: An Oxford Guide, p. 107.
13. ^ "Introduction to Philosophy" by Dr Tom Kerns.
14. ^
15. ^
16. ^
17. ^
18. ^ Wein (1997), p. 44. (Google Books)
19. ^ Daniel J. Elazar, Judaism and Democracy: The Reality. Undated. Jerusalem Center for Public Affairs. Accessed 11 February 2006.
20. ^ Sowell, Thomas (2006). On Classical Economics. New Haven, CT: Yale University Press.
21. ^
22. ^ The section on banking is drawn largely from the article "Usury" in the public domain Jewish Encyclopedia (1901–1906).
23. ^ Nobel Prize Laureates.
32. ^ Science in Medieval Jewish Scholarship.
34. ^
35. ^
36. ^ a b
37. ^ a b
38. ^ a b
39. ^ Jews & the Atom Bomb. [1]
40. ^ Ford & Urban 1965, p. 109.
43. ^
44. ^
45. ^
46. ^ Glimm, p. vii.
47. ^ Einstein, Albert (1 May 1935), "Professor Einstein Writes in Appreciation of a Fellow-Mathematician", New York Times (5 May 1935), retrieved 13 April 2008. Online at the MacTutor History of Mathematics archive.
48. ^ Alexandrov 1981, p. 100.
49. ^ Ne'eman, Yuval. "The Impact of Emmy Noether's Theorems on XXIst Century Physics", Teicher 1999, pp. 83–101.
50. ^ Biblical literature, Encyclopædia Britannica.
51. ^
52. ^
53. ^
54. ^
56. ^ Melamed, S. M., "The Yiddish Stage", New York Times, September 27, 1925 (X2).
57. ^ Berlin Metropolis: Jews and the New Culture, 1890–1918, on the site of The Jewish Museum, New York. Accessed 12 February 2006.
58. ^ Johnson, Paul (1987). A History of the Jews, p. 479. New York: Harper Perennial. – Erwin Piscator was a Lutheran Protestant (Nazi propagandists had claimed since 1927 that he was a "Jewish Bolshevik", though).
59. ^ Suzanne Weiss, Jewish cabaret singer brings songs of Berlin to Berkeley, The Jewish News Weekly of Northern California, September 27, 1996. Accessed 12 February 2006.
60. ^ Keith D. Cohen, John Kander to be honored in KC concerts. The Kansas City Jewish Chronicle, May 27, 2005. Accessed 11 February 2006.
61. ^ Chris Curcio, This 'Musical Journey' slips along the way, March 31, 2005, The Arizona Republic. Accessed 11 February 2006.
62. ^ Stephen J. Whitfield, Musical Theater (PDF). Brandeis Review, Winter/Spring 2000. Accessed 11 February 2006.
63. ^ Samantha M. Shapiro, The Arts: A Jewish Street Called Broadway. Hadassah Magazine, October 2004, Vol. 86, No. 2. Accessed 11 February 2006.
64. ^ Charyn, Jerome. "Early Broadway's un-Jewish Jews." Midstream 50.1 (January 2004): 19(7). Expanded Academic ASAP. Thomson Gale. UC Irvine (CDL). 9 March 2006.
65. ^ The Klezmer Company Breaks New Ground with Orchestral Klezmer Production "Jewish Broadway with Orchestra and Chorus" at FAU. Florida Atlantic University press release, February 8, 2005. Accessed 11 February 2006.
66. ^ Raphael Mostel, Carmen Comes Home, The Forward, May 7, 2004. Accessed 12 February 2006.
67. ^ Dr. Kenneth Libo, Ph.D., and Michael Skakun, The Persecution of Creativity: Jews, Music and Vienna, Center for Jewish History, April 16, 2004. Accessed 12 February 2006.
68. ^ Michael Billig, Creating the American Musical. Originally from Rock 'N' Roll Jews (Five Leaves Publications). Accessed 12 February 2006.
69. ^ Jacob Baron, Jewish Composers, Machar, The Washington Congregation for Secular Humanistic Judaism, June 2, 2005. Accessed 15 February 2006.
70. ^ Alan Gomberg, op. cit.
71. ^ Arthur Laurents, Theater: West Side Story; The Growth of an Idea, New York Herald Tribune, August 4, 1957. Accessed 12 February 2006.
73. ^ Shimon Levy, The Development of Israeli Theatre – a brief overview. Credited to Ministry of Foreign Affairs, Jerusalem, 2000. Accessed 12 February 2006.
74. ^ Jewish Encyclopedia. Could not access 12 February 2006.
75. ^ Shimon Levy, op. cit.
76. ^ Orna Ben-Meir, Biblical Thematics in Stage Design for the Hebrew Theatre, Assaph, Section C, no. 11 (July 1999), p. 141 et seq. Accessed 12 February 2006.
77. ^ History of Israeli Theatre, on a Geocities site.
78. ^
79. ^ Johnson, op. cit., pp. 462–463.
80. ^ Johnson, op. cit., pp. 462–463.
82. ^ Ismar Schorsch, Shabbat Shekalim Va-Yakhel 5755, commentary on Exodus 35:1–38:20. February 25, 1995. Accessed 12 February 2006.
83. ^ Velvel Pasternak, Music and Art, part of "12 Paths". Accessed 12 February 2006.
84. ^ Not a pretty picture, Haaretz.
85. ^ Jessica Spitalnic Brockman, A Brief History of Jewish Art. Accessed 12 February 2006.
86. ^ Michael Schirber, Did Christians copy Jewish catacombs?, MSNBC, July 20, 2005. Accessed 12 February 2006.
87. ^ Jona Lendering, The Jewish diaspora: Rome. Accessed 12 February 2006.
88. ^ Diego Rodriguez de Silva y Velazquez Biography.
89. ^ Roza Bieliauskiene and Felix Tarm, Brief History of Jewish Art, Jewish Art Network. Accessed January 14, 2010.
90. ^ Johnson, op. cit., p. 411.
91. ^ Artistic Expressions of the Jewish Renaissance.
92. ^ Rebecca Assoun, Jewish artists in Montparnasse. European Jewish Press, 19 July 2005. Accessed 12 February 2006.
93. ^ Jewish Artists, Jewish Virtual Library, 2005. Accessed 12 February 2006.
94. ^ John Levy, "Review of The Tibetans", photo 8; Lehman, Steve, The Tibetans: A Struggle to Survive (New York: How Town / Umbrage), 1998.
95. ^ See: Ohad Meromi in the online exhibition "Real Time".
96. ^
97. ^ a b c
98. ^ Famous Funnies at Toonopedia.
99. ^ How the Jews Created the Comic Book Industry, Part I: The Golden Age (1933–1955).
100. ^
101. ^
102. ^
105. ^
106. ^
107. ^ EC Comics at Toonopedia.
108. ^ A short biography.
109. ^
110. ^
111. ^ Filmography.
Further reading
• Landa, M. J. (1926). The Jew in Drama. New York: Ktav Publishing House (1969).
• Veidlinger, Jeffrey. Jewish Public Culture in the Late Russian Empire. Bloomington: Indiana University Press, 2009.
External links
• The City Congregation for Humanistic Judaism
• Congress of Secular Jewish Organizations
• Global Directory of Jewish Museums
• News and reviews about Jewish literature and books
• Festival of Jewish Theater and Ideas
• The Bezalel Narkiss Index of Jewish Art
IQT Group at Sussex University is Hiring
Want to help construct a quantum computer demonstrator device, operate a small-scale quantum computer, implement quantum simulations towards quantum supremacy or develop portable quantum sensors? The Sussex Ion Quantum Technology group is expanding its team. We are hiring four PhD students, three Research Fellows, and one electrical engineer. We are looking for outstanding individuals who can think outside the box and are ready to take on a challenge.
We are hiring three Research Fellows in Quantum Device Engineering with specialisation in:
• Quantum Logic Implementation (detailed information can be found here)
• Quantum Computing Operations (detailed information can be found here)
• Manufacturing Quantum Microchips (detailed information can be found here)
Successful applicants should have a PhD in physics, engineering or a related discipline. We also have a position as Electrical Engineer in Quantum Device Engineering (detailed information can be found here). The successful applicant should have a degree in electrical engineering or a related discipline.
In addition, we are hiring four PhD students with specialisation in:
• Developing a trapped-ion quantum computer demonstrator device (detailed information can be found here)
• Quantum algorithms on a trapped-ion quantum co-processor (detailed information can be found here)
• Advanced microchips for quantum technology devices (detailed information can be found here)
• Developing a portable quantum sensor (detailed information can be found here)
Successful applicants need to have a degree in physics, engineering or a related discipline.
The Sussex Ion Quantum Technology Group
The Sussex Ion Quantum Technology group is developing a quantum computer demonstrator device, a quantum simulation engine, and a portable quantum sensor. Detailed reading about their research can be found here. You can find more details on the group's web page. Please contact Prof. Winfried Hensinger for more information.
DNA's Histone Spools Hint at How Complex Cells Evolved
Molecular biology has something in common with kite-flying competitions. At the latter, all eyes are on the colorful, elaborate, wildly kinetic constructions darting through the sky. Nobody looks at the humble reels or spools on which the kite strings are wound, even though the aerial performances depend on how skillfully those reels are handled. In the biology of complex cells, or eukaryotes, the ballet of molecules that transcribe and translate genomic DNA into proteins holds center stage, but that dance would be impossible without the underappreciated work of histone proteins gathering up the DNA into neat bundles and unpacking just enough of it when needed. Histones, as linchpins of the apparatus for gene regulation, play a role in almost every function of eukaryotic cells. "In order to get complex, you have to have genome complexity, and evolve new gene families, and you have to have a cell cycle," explained William Martin, an evolutionary biologist and biochemist at Heinrich Heine University in Germany. "And what's in the middle of all this? Managing your DNA." New work on the structure and function of histones in ancient, simple cells has now made the longstanding, central importance of these proteins to gene regulation even clearer.
Billions of years ago, the cells called archaea were already using histones much like our own to manage their DNA — but they did so with looser rules and much more variety. From those similarities and differences, researchers are gleaning new insights, not only into how the histones helped to shape the origins of complex life, but also into how variants of histones affect our own health today. At the same time, though, new studies of histones in an unusual group of viruses are complicating the answers about where our histones really came from.
Dealing With Too Much DNA
Eukaryotes arose about 2 billion years ago, when a bacterium that could metabolize oxygen for energy took up residence inside an archaeal cell. That symbiotic partnership was revolutionary because energy production from that proto-mitochondrion suddenly made expressing genes much more metabolically affordable, Martin argues. The new eukaryotes suddenly had free rein to expand the size and diversity of their genomes and to conduct myriad evolutionary experiments, laying the foundation for the countless eukaryotic innovations seen in life today. "Eukaryotes are an archaeal genetic apparatus that survives with the help of bacterial energy metabolism," Martin said. But the early eukaryotes went through serious growing pains as their genomes expanded: The larger genome brought new problems stemming from the need to manage an increasingly unwieldy string of DNA. That DNA had to be accessible to the cell's machinery for transcribing and replicating it without getting tangled up in a hopeless spaghetti ball. The DNA also sometimes needed to be compact, both to help regulate transcription and replication, and to separate the identical copies of DNA during cell division. And one danger of careless compaction is that DNA strands can irreversibly bind together if the backbone of one interacts with the groove of another, rendering the DNA useless. Bacteria have a solution for this that involves a variety of proteins jointly "supercoiling" the cells' relatively limited libraries of DNA. But eukaryotes' DNA management solution is to use histone proteins, which have a unique ability to wrap DNA around themselves rather than just sticking to it. The four primary histones of eukaryotes — H2A, H2B, H3 and H4 — assemble into octamers with two copies of each. These octamers, called nucleosomes, are the basic units of eukaryotic DNA packaging. By curving the DNA around the nucleosome, the histones prevent it from clumping together and keep it functional. It's an ingenious solution — but eukaryotes didn't invent it entirely on their own. Back in the 1980s, when the cellular and molecular biologist Kathleen Sandman was a postdoc at Ohio State University, she and her adviser, John Reeve, identified and sequenced the first known histones in archaea. They showed how the four principal eukaryotic histones were related to each other and to the archaeal histones. Their work provided the early evidence that in the original endosymbiotic event that led to eukaryotes, the host was likely to have been an archaeal cell. But it would be a teleological mistake to think that archaeal histones were just waiting for the arrival of eukaryotes and the chance to enlarge their genomes. "A lot of these early hypotheses looked at histones in terms of their ability to allow the cell to expand its genome. But that doesn't really tell you why they were there in the first place," said Siavash Kurdistani, a biochemist at the University of California, Los Angeles.
As a first step toward those answers, Sandman joined forces several years ago with the structural biologist Karolin Luger, who solved the structure of the eukaryotic nucleosome in 1997. Together, they worked out the crystal structure of the archaeal nucleosome, which they published with colleagues in 2017. They found that the archaeal nucleosomes are "uncannily similar" in structure to eukaryotic nucleosomes, Luger said — despite the marked differences in their peptide sequences. Archaeal nucleosomes had already "figured out how to bind and bend DNA in this beautiful arc," said Luger, now a Howard Hughes Medical Institute investigator at the University of Colorado, Boulder. But the difference between the eukaryotic and archaeal nucleosomes is that the crystal structure of the archaeal nucleosome seemed to form looser, Slinky-like assemblies of varying sizes. In a paper in eLife published in March, Luger, her postdoc Samuel Bowerman, and Jeff Wereszczynski of the Illinois Institute of Technology followed up on the 2017 paper. They used cryo-electron microscopy to solve the structure of the archaeal nucleosome in a state more representative of a live cell. Their observations confirmed that the structures of archaeal nucleosomes are less fixed. Eukaryotic nucleosomes are always stably wrapped by about 147 base pairs of DNA, and always consist of just eight histones. (For eukaryotic nucleosomes, "the buck stops at eight," Luger said.) Their equivalents in archaea wind up between 60 and 600 base pairs. These "archaeasomes" sometimes hold as few as three histone dimers, but the largest ones consist of as many as 15 dimers. They also found that unlike the tight eukaryotic nucleosomes, the Slinky-like archaeasomes flop open stochastically, like clamshells. The researchers suggested that this arrangement simplifies gene expression for the archaea, because unlike eukaryotes, they don't need any energetically expensive supplemental proteins to help unwind DNA from the histones to make them available for transcription. That's why Tobias Warnecke, who studies archaeal histones at Imperial College London, thinks that "there's something special that must have happened at the dawn of eukaryotes, where we transition from just having simple histones … to having octameric nucleosomes. And they seem to be doing something qualitatively different." What that is, however, is still a mystery. In archaeal species, there are "quite a few that have histones, and there are other species that don't have histones. And even those that do have histones vary quite a lot," Warnecke said. Last December, he published a paper showing that there are diverse variants of histone proteins with different functions. The histone-DNA complexes vary in their stability and affinity for DNA. But they are not as stably or regularly organized as eukaryotic nucleosomes. As puzzling as the diversity of archaeal histones is, it provides an opportunity to understand the different possible ways of building systems of gene expression. That's something we cannot glean from the relative "boringness" of eukaryotes, Warnecke says: Through understanding the combinatorics of archaeal systems, "we can also figure out what's special about eukaryotic systems." The variety of different histone types and configurations in archaea may also help us deduce what they might have been doing before their role in gene regulation solidified.
A Protective Role for Histones
Because archaea are relatively simple prokaryotes with small genomes, "I don't think that the original role of histones was to control gene expression, or at least not in a manner that we are used to from eukaryotes," Warnecke said. Instead, he hypothesizes that histones might have protected the genome from damage. Archaea often live in extreme environments, like hot springs and volcanic vents on the seafloor, characterized by high temperatures, high pressures, high salinity, high acidity or other threats. Stabilizing their DNA with histones may make it harder for the DNA strands to melt in those extreme conditions. Histones also might protect archaea against invaders, such as phages or transposable elements, which would find it harder to integrate into the genome when it's wrapped around the proteins. Kurdistani agrees. "If you were studying archaea 2 billion years ago, genome compaction and gene regulation are not the first things that would come to mind when you are thinking about histones," he said. In fact, he has tentatively speculated about a different kind of chemical protection that histones might have offered the archaea. Last July, Kurdistani's team reported that in yeast nucleosomes, there is a catalytic site at the interface of two histone H3 proteins that can bind and electrochemically reduce copper. To unpack the evolutionary significance of this, Kurdistani goes back to the massive increase in oxygen on Earth, the Great Oxidation Event, which occurred around the time that eukaryotes first evolved more than 2 billion years ago. Higher oxygen levels must have caused a global oxidation of metals like copper and iron, which are critical for biochemistry (although toxic in excess). Once oxidized, the metals would have become less available to cells, so any cells that kept the metals in reduced form would have had an advantage. During the Great Oxidation Event, the ability to reduce copper would have been "an extremely valuable commodity," Kurdistani said. It might have been particularly attractive to the bacteria that were forerunners of mitochondria, since cytochrome c oxidase, the last enzyme in the chain of reactions that mitochondria use to produce energy, requires copper to function. Because archaea live in extreme environments, they might have found ways to generate and handle reduced copper without being killed by it long before the Great Oxidation Event. If so, proto-mitochondria might have invaded archaeal hosts to steal their reduced copper, Kurdistani suggests. The hypothesis is intriguing because it could explain why eukaryotes appeared when oxygen levels went up in the atmosphere. "There was 1.5 billion years of life before that, and no sign of eukaryotes," Kurdistani said. "So the idea that oxygen drove the formation of the first eukaryotic cell, to me, should be central to any hypotheses that try to come up with why these features developed." Kurdistani's conjecture also suggests an alternative hypothesis for why eukaryotic genomes got so big. The histones' copper-reducing activity only occurs at the interface of the two H3 histones inside an assembled nucleosome wrapped with DNA. "I think there's a distinct possibility that the cell wanted more histones. And the only way to do that was to expand this DNA repertoire," Kurdistani said. With more DNA, cells could wrap more nucleosomes and enable the histones to reduce more copper, which would support more mitochondrial activity.
"It wasn't just that histones allowed for more DNA, but more DNA allowed for more histones," he said. "One of the neat things about this is that copper is very dangerous because it will break DNA," said Steven Henikoff, a chromatin biologist and HHMI investigator at the Fred Hutchinson Cancer Research Center in Seattle. "Here's a place where you have the active form of copper being made, and it's right next to the DNA, but it doesn't break the DNA because, presumably, it's in a tightly packaged form," he said. By wrapping the DNA, the nucleosomes keep the DNA safely out of the way. The hypothesis potentially explains aspects of how the architecture of the eukaryotic genome evolved, but it has met with some skepticism. The key outstanding question is whether archaeal histones have the same copper-reducing ability that some eukaryotic ones do. Kurdistani is investigating this now. The bottom line is that we still don't definitively know what functions histones served in the archaea. But even so, "the fact that you see them conserved over long distances strongly suggests that they are doing something distinct and important," Warnecke said. "We just need to find out what it is."
Histones Are Still Evolving
Although the complex eukaryotic histone apparatus has not changed much since its origin about a billion years ago, it hasn't been totally frozen. In 2018, a team at the Fred Hutchinson Cancer Research Center reported that a set of short histone variants called H2A.B is evolving rapidly. The pace of the changes is a sure sign of an "arms race" between genes vying for control over regulatory resources. It wasn't initially clear to the researchers what the genetic conflict was about, but through a series of elegant crossbreeding experiments in mice, they eventually showed that the H2A.B variants dictated the survival and growth rate of embryos, as reported in December in PLOS Biology. The findings suggested that paternal and maternal versions of the histone variants are mediating a conflict over how to allocate resources to the offspring during pregnancy. They are rare examples of parental-effect genes — ones that don't directly affect the individual carrying them, but instead strongly affect the individual's offspring. The H2A.B variants arose with the first mammals, when the evolution of in utero development rewrote the "contract" for parental investment. Mothers had always invested a lot of resources in their eggs, but mammalian mothers also suddenly became responsible for the early development of their progeny. That set up a conflict: Paternal genes in the embryo had nothing to lose by demanding resources aggressively, while the maternal genes benefited from moderating the burden to spare the mother and let her live to breed another day. "That negotiation is still ongoing," said Harmit Malik, an HHMI investigator at the Fred Hutchinson Cancer Research Center who studies genetic conflicts. Exactly how the histones affect the growth and viability of offspring is still not completely understood, but Antoine Molaro, the postdoctoral fellow who led the work and who now leads his own research group at the University of Clermont Auvergne in France, is investigating it. Some histone variants may cause health problems, too. In January, Molaro, Malik, Henikoff and their colleagues reported that short H2A histone variants are implicated in some cancers: More than half of diffuse large B cell lymphomas carry mutations in them. Other histone variants are associated with neurodegenerative diseases.
But little is yet understood about how a single copy of a histone variant can produce such dramatic disease effects. The obvious hypothesis is that the variants affect the stability of nucleosomes and disrupt their signaling functions, changing gene expression in a way that alters cell physiology. But if histones can act as enzymes, then Kurdistani suggests another possibility: The variants may alter enzymatic activity inside cells.
An Alternative Viral Origin?
Despite the decades-old evidence from Sandman and others that eukaryotic histones evolved from archaeal histones, some intriguing recent work has unexpectedly opened the door to an alternative theory about their origins. According to a paper published on April 29 in Nature Structural & Molecular Biology, giant viruses of the Marseilleviridae family have viral histones that are recognizably related to the four main eukaryotic histones. The only difference is that in the viral versions, the histones that routinely pair up within the octamer (H2A with H2B, and H3 with H4) in eukaryotes are already fused into doublets. The fused viral histones form structures that are "virtually identical to canonical eukaryotic nucleosomes," according to the paper's authors. Luger's team posted a preprint about viral histones the same day, showing that in the cytoplasm of infected cells, viral histones stay near the "factories" that produce new viral particles. "Here's the thing that is really compelling," said Henikoff, who was among the authors on the new Nature Structural & Molecular Biology paper. "All of the histone variants turn out to be derived from a common ancestor that was shared between eukaryotes and giant viruses. By standard phylogenetic criteria, these are a sister group to eukaryotes." It makes a compelling case that this common ancestor is where the eukaryotic histones came from, he says. A "proto-eukaryote" that had histone doublets might have been ancestral to both the giant viruses and eukaryotes and could have passed the proteins along to both lines of organisms a very long time ago. Warnecke, however, is skeptical about inferring phylogenetic relationships from viral sequences, which are notoriously mutable. As he explained in an email to Quanta, reasons other than shared ancestry might explain how the histones ended up in both lineages. In addition, the idea would require that the histone doublets later "unfused" into the H2A, H2B, H3 and H4 histones, because there are no doublets of those histones in extant eukaryotes. "How and why that would have happened is unclear," he wrote. Although Warnecke is not convinced that the viral histones tell us much about the origin of eukaryotic histones, he is fascinated by their possible functions. One possibility is that they help to compact the viral DNA; another idea is that they could be disguising the viral DNA from the host's defenses. Histones have had myriad roles since the dawn of time. But it was really in the eukaryotes that they became the linchpins for complex life and countless evolutionary innovations. That's why Martin calls the histone "a basic building block that never could realize its full potential without the help of mitochondria."
Multi-time correlations in the positive-P, Q, and doubled phase-space representations
Piotr Deuar
Institute of Physics, Polish Academy of Sciences, Aleja Lotników 32/46, 02-668 Warsaw, Poland
A number of physically intuitive results for the calculation of multi-time correlations in phase-space representations of quantum mechanics are obtained. They relate time-dependent stochastic samples to multi-time observables, and rely on the presence of derivative-free operator identities. In particular, expressions for time-ordered normal-ordered observables in the positive-P distribution are derived which replace Heisenberg operators with the bare time-dependent stochastic variables, confirming extension of earlier such results for the Glauber-Sudarshan P. Analogous expressions are found for the anti-normal-ordered case of the doubled phase-space Q representation, along with conversion rules among doubled phase-space s-ordered representations. The latter are then shown to be readily exploited to further calculate anti-normal and mixed-ordered multi-time observables in the positive-P, Wigner, and doubled-Wigner representations. Which mixed-order observables are amenable and which are not is indicated, and explicit tallies are given up to 4th order. Overall, the theory of quantum multi-time observables in phase-space representations is extended, allowing non-perturbative treatment of many cases. The accuracy, usability, and scalability of the results to large systems is demonstrated using stochastic simulations of the unconventional photon blockade system and a related Bose-Hubbard chain. In addition, a robust but simple algorithm for integration of stochastic equations for phase-space samples is provided.
Multi-time correlations are important for answering many physical questions: for example, the determination of lifetimes, out-of-time-order correlations which are important indicators of quantum chaos, or the time resolution required to observe a transient effect. In general, however, they are more difficult to calculate in a quantum system than instantaneous correlations, and the difficulty grows with system size. Phase-space representations are a formulation of quantum mechanics in which the calculation of multi-time correlations has a particularly intuitive structure, and in which the difficulties of dealing with large systems are often alleviated. In this work, the framework for calculating multi-time correlations with phase-space representations has been strongly extended to a much wider range of correlations and representations than before, facilitating future studies of large systems, including systems with dissipation. The paper also describes a robust but simple algorithm for integration of phase-space stochastic equations, something that has been difficult to find in the literature to date.
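The mapping described in the abstract has a direct computational payoff: once trajectories of the positive-P variables have been generated and stored, a time-ordered, normal-ordered correlation is just an average of products of the stored samples. The following is a minimal NumPy sketch of such an estimator; it is not code from the paper, and the array names, shapes, and index conventions are assumptions made for illustration.

```python
import numpy as np

def g1_positive_p(alpha, beta, i1, i2):
    """Estimate the normally ordered correlation <a^dag(t1) a(t2)> with t2 >= t1.

    In the positive-P representation the Heisenberg operators are replaced by
    the bare time-dependent stochastic variables, a(t) -> alpha(t) and
    a^dag(t) -> beta(t), so the correlation becomes a plain trajectory average.

    alpha, beta : complex arrays of shape (n_trajectories, n_times)
    i1, i2      : time-grid indices with i2 >= i1 (time-ordered case)
    """
    assert i2 >= i1, "this estimator assumes the time-ordered case t2 >= t1"
    samples = beta[:, i1] * alpha[:, i2]
    mean = samples.mean()
    # one-sigma statistical uncertainty of the trajectory average
    err = samples.std(ddof=1) / np.sqrt(samples.shape[0])
    return mean, err
```

The paper provides its own robust integration algorithm for the sample equations; as a generic stand-in, the sketch below implements the widely used semi-implicit midpoint scheme in the style of Drummond and Mortimer (1991), a common choice for the stiff, multiplicative-noise equations obeyed by phase-space samples. The `drift` and `diffusion` callables, the fixed-point iteration count, and the real Gaussian noise are illustrative assumptions.

```python
def semi_implicit_midpoint_step(x, t, dt, drift, diffusion, rng, iters=4):
    """One step of a semi-implicit midpoint scheme for dx = A(x,t) dt + B(x,t) dW.

    A and B should be given in Stratonovich form. The midpoint value is found
    by fixed-point iteration, which improves robustness for multiplicative
    noise. x may be complex (as for positive-P variables); dW here is real.
    """
    dW = rng.normal(0.0, np.sqrt(dt), size=np.shape(x))
    x_mid = x  # initial guess for the midpoint value
    for _ in range(iters):
        x_mid = x + 0.5 * (drift(x_mid, t + 0.5 * dt) * dt
                           + diffusion(x_mid, t + 0.5 * dt) * dW)
    return 2.0 * x_mid - x  # x(t + dt) = 2 * x_mid - x(t)
```

Repeated application of such a step over a time grid, with the samples stored at each step, produces exactly the trajectory arrays consumed by the correlation estimator above.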
Phase-Space Methods for Simulating the Dissipative Many-Body Dynamics of Collective Spin Systems. SciPost Phys., 10: 45, 2021. URL https:/​/​​10.21468/​SciPostPhys.10.2.045. https:/​/​​10.21468/​SciPostPhys.10.2.045. [74] M. R. Hush, S. S. Szigeti, A. R. R. Carvalho, and J. J. Hope. Controlling spontaneous-emission noise in measurement-based feedback cooling of a Bose-Einstein condensate. New J. Phys., 15 (11): 113060, 2013. https:/​/​​10.1088/​1367-2630/​15/​11/​113060. [75] K. Husimi. Some formal properties of the density matrix. Proc. Phys. Math. Soc. Jpn., 22: 264–314, 1940. https:/​/​​10.11429/​ppmsj1919.22.4_264. [76] Nobuyuki Ikeda and Shinzo Watanabe. Stochastic Differential Equations and Diffusion Processes, volume 24 of North-Holland Mathematical Library. North Holland, 2nd edition, 1988. ISBN 0444861726, 9780444861726. [77] Juha Javanainen and Janne Ruostekoski. Symbolic calculation in development of algorithms: split-step methods for the Gross–Pitaevskii equation. Journal of Physics A: Mathematical and General, 39 (12): L179, 2006. https:/​/​​10.1088/​0305-4470/​39/​12/​L02. [78] Kai Ji, Vladimir N. Gladilin, and Michiel Wouters. Temporal coherence of one-dimensional nonequilibrium quantum fluids. Phys. Rev. B, 91: 045301, 2015. https:/​/​​10.1103/​PhysRevB.91.045301. [79] Guy Jumarie. Complex-valued Wiener measure: An approach via random walk in the complex plane. Statistics and Probability Letters, 42 (1): 61–67, 1999. https:/​/​​10.1016/​S0167-7152(98)00194-1. [80] Guy Jumarie. On the representation of fractional brownian motion as an integral with respect to $(dt)^a$. Applied Mathematics Letters, 18 (7): 739–748, 2005. https:/​/​​10.1016/​j.aml.2004.05.014. [81] P. L. Kelley and W. H. Kleiner. Theory of electromagnetic field measurement and photoelectron counting. Phys. Rev., 136: A316–A334, 1964. https:/​/​​10.1103/​PhysRev.136.A316. [82] K. V. Kheruntsyan, J.-C. Jaskula, P. Deuar, M. Bonneau, G. B. Partridge, J. Ruaudel, R. Lopes, D. Boiron, and C. I. Westbrook. Violation of the Cauchy-Schwarz inequality with matter waves. Phys. Rev. Lett., 108: 260401, 2012. https:/​/​​10.1103/​PhysRevLett.108.260401. [83] T. Kiesel, W. Vogel, V. Parigi, A. Zavatta, and M. Bellini. Experimental determination of a nonclassical Glauber-Sudarshan $P$ function. Phys. Rev. A, 78: 021804, 2008. https:/​/​​10.1103/​PhysRevA.78.021804. [84] S. Kiesewetter, Q. Y. He, P. D. Drummond, and M. D. Reid. Scalable quantum simulation of pulsed entanglement and Einstein-Podolsky-Rosen steering in optomechanics. Phys. Rev. A, 90: 043805, 2014. https:/​/​​10.1103/​PhysRevA.90.043805. [85] P. Kinsler and P. D. Drummond. Quantum dynamics of the parametric oscillator. Phys. Rev. A, 43: 6194–6208, 1991. https:/​/​​10.1103/​PhysRevA.43.6194. [86] Katja Klobas, Matthieu Vanicat, Juan P Garrahan, and Tomaž Prosen. Matrix product state of multi-time correlations. Journal of Physics A: Mathematical and Theoretical, 53 (33): 335001, 2020. https:/​/​​10.1088/​1751-8121/​ab8c62. [87] Peter E. Kloeden and Eckhard Platen. Numerical solution of stochastic differential equations. Stochastic Modelling and Applied Probability. Springer-verlag, Berlin Heidelberg, 1992. ISBN 978-3-540-54062-5. https:/​/​​10.1007/​978-3-662-12616-5. [88] J. K. Korbicz, J. I. Cirac, Jan Wehr, and M. Lewenstein. Hilbert’s 17th problem and the quantumness of states. Phys. Rev. Lett., 94: 153601, 2005. https:/​/​​10.1103/​PhysRevLett.94.153601. [89] F. Krumm, J. Sperling, and W. Vogel. 
Multitime correlation functions in nonclassical stochastic processes. Phys. Rev. A, 93: 063843, 2016. https:/​/​​10.1103/​PhysRevA.93.063843. [90] Ryogo Kubo, Morikazu Toda, and Natsuki Hashitsume. Statistical Physics II. Springer-Verlag, Berlin, 1985. ISBN 978-3-540-53833-2. https:/​/​​10.1007/​978-3-642-58244-8. [91] C. Lamprecht, M. K. Olsen, P. D. Drummond, and H. Ritsch. Positive-P and Wigner representations for quantum-optical systems with nonorthogonal modes. Phys. Rev. A, 65: 053813, 2002. https:/​/​​10.1103/​PhysRevA.65.053813. [92] Melvin Lax. Quantum noise. XI. multitime correspondence between quantum and classical stochastic processes. Phys. Rev., 172: 350–361, 1968. https:/​/​​10.1103/​PhysRev.172.350. [93] Hai-Woong Lee. Theory and application of the quantum phase-space distribution functions. Physics Reports, 259 (3): 147–211, 1995. https:/​/​​10.1016/​0370-1573(95)00007-4. [94] R. J. Lewis-Swan and K. V. Kheruntsyan. Proposal for demonstrating the Hong–Ou–Mandel effect with matter waves. Nature Commun., 5: 3752, 2014. https:/​/​​10.1038/​ncomms4752. [95] R. J. Lewis-Swan and K. V. Kheruntsyan. Proposal for a motional-state Bell inequality test with ultracold atoms. Phys. Rev. A, 91: 052114, 2015. https:/​/​​10.1103/​PhysRevA.91.052114. [96] T. C. H. Liew and V. Savona. Single photons from coupled quantum modes. Phys. Rev. Lett., 104: 183601, 2010. https:/​/​​10.1103/​PhysRevLett.104.183601. [97] Andreas M Läuchli and Corinna Kollath. Spreading of correlations and entanglement after a quench in the one-dimensional Bose-Hubbard model. Journal of Statistical Mechanics: Theory and Experiment, 2008 (05): P05018, 2008. https:/​/​​10.1088/​1742-5468/​2008/​05/​p05018. [98] Juan Maldacena, Stephen H. Shenker, and Douglas Stanford. A bound on chaos. Journal of High Energy Physics, 2016 (8): 106, 2016. https:/​/​​10.1007/​JHEP08(2016)106. [99] L. Mandel. Antinormally ordered correlations and quantum counters. Phys. Rev., 152: 438–451, 1966. https:/​/​​10.1103/​PhysRev.152.438. [100] Stephan Mandt, Darius Sadri, Andrew A Houck, and Hakan E Türeci. Stochastic differential equations for quantum dynamics of spin-boson networks. New Journal of Physics, 17 (5): 053018, 2015. https:/​/​​10.1088/​1367-2630/​17/​5/​053018. [101] Amy C. Mathey, Charles W. Clark, and L. Mathey. Decay of a superfluid current of ultracold atoms in a toroidal trap. Phys. Rev. A, 90: 023604, 2014. https:/​/​​10.1103/​PhysRevA.90.023604. [102] S. L. W. Midgley, S. Wüster, M. K. Olsen, M. J. Davis, and K. V. Kheruntsyan. Comparative study of dynamical simulation methods for the dissociation of molecular Bose-Einstein condensates. Phys. Rev. A, 79: 053632, 2009. https:/​/​​10.1103/​PhysRevA.79.053632. [103] Magdalena Moczała-Dusanowska, Łukasz Dusanowski, Stefan Gerhardt, Yu Ming He, Marcus Reindl, Armando Rastelli, Rinaldo Trotta, Niels Gregersen, Sven Höfling, and Christian Schneider. Strain-tunable single-photon source based on a quantum dot–micropillar system. ACS Photonics, 6 (8): 2025–2031, 2019. https:/​/​​10.1021/​acsphotonics.9b00481. [104] Ekaterina Moreva, Marco Gramegna, Giorgio Brida, Lorenzo Maccone, and Marco Genovese. Quantum time: Experimental multitime correlations. Phys. Rev. D, 96: 102005, 2017. https:/​/​​10.1103/​PhysRevD.96.102005. [105] J. E. Moyal. Quantum mechanics as a statistical theory. Mathematical Proceedings of the Cambridge Philosophical Society, 45 (01): 99–124, 1949. https:/​/​​10.1017/​S0305004100000487. [106] Ray Ng and Erik S. Sørensen. 
Exact real-time dynamics of quantum spin systems using the positive-P representation. J. Phys. A, 44: 065305, 2011. https:/​/​​10.1088/​1751-8113/​44/​6/​065305. [107] Ray Ng, Erik S. Sørensen, and Piotr Deuar. Simulation of the dynamics of many-body quantum spin systems using phase-space techniques. Phys. Rev. B, 88: 144304, 2013. https:/​/​​10.1103/​PhysRevB.88.144304. [108] A. A. Norrie, R. J. Ballagh, and C. W. Gardiner. Quantum turbulence in condensate collisions: An application of the classical field method. Phys. Rev. Lett., 94: 040401, 2005. https:/​/​​10.1103/​PhysRevLett.94.040401. [109] M. K. Olsen, L. I. Plimak, and M. Fleischhauer. Quantum-theoretical treatments of three-photon processes. Phys. Rev. A, 65: 053806, 2002. https:/​/​​10.1103/​PhysRevA.65.053806. [110] M. K. Olsen, A. B. Melo, K. Dechoum, and A. Z. Khoury. Quantum phase-space analysis of the pendular cavity. Phys. Rev. A, 70: 043815, 2004. https:/​/​​10.1103/​PhysRevA.70.043815. [111] Bogdan Opanchuk, Rodney Polkinghorne, Oleksandr Fialko, Joachim Brand, and Peter D. Drummond. Quantum simulations of the early universe. Annalen der Physik, 525 (10-11): 866–876, 2013. https:/​/​​10.1002/​andp.201300113. [112] Bogdan Opanchuk, Laura Rosales-Zárate, Margaret D. Reid, and Peter D. Drummond. Simulating and assessing boson sampling experiments with phase-space representations. Phys. Rev. A, 97: 042304, 2018. https:/​/​​10.1103/​PhysRevA.97.042304. [113] Bogdan Opanchuk, Laura Rosales-Zárate, Margaret D. Reid, and Peter D. Drummond. Robustness of quantum Fourier transform interferometry. Opt. Lett., 44 (2): 343–346, 2019. https:/​/​​10.1364/​OL.44.000343. [114] J. Pietraszewicz, M. Stobińska, and P. Deuar. Correlation evolution in dilute Bose-Einstein condensates after quantum quenches. Phys. Rev. A, 99: 023620, 2019. https:/​/​​10.1103/​PhysRevA.99.023620. [115] L. I. Plimak and M. K. Olsen. Quantum-field-theoretical approach to phase–space techniques: Symmetric Wick theorem and multitime Wigner representation. Annals of Physics, 351: 593 – 619, 2014. https:/​/​​10.1016/​j.aop.2014.09.010. [116] L. I. Plimak, M. K. Olsen, M. Fleischhauer, and M. J. Collett. Beyond the Fokker-Planck equation: Stochastic simulation of complete Wigner representation for the optical parametric oscillator. Europhysics Letters (EPL), 56 (3): 372–378, 2001. https:/​/​​10.1209/​epl/​i2001-00529-8. [117] L. I. Plimak, M. Fleischhauer, M. K. Olsen, and M. J. Collett. Quantum-field-theoretical approach to phase-space techniques: Generalizing the positive-P representation. Phys. Rev. A, 67: 013812, 2003. https:/​/​​10.1103/​PhysRevA.67.013812. [118] Anatoli Polkovnikov. Phase space representation of quantum dynamics. Annals of Physics, 325 (8): 1790 – 1852, 2010. http:/​/​​10.1016/​j.aop.2010.02.006. [119] Martin Ringbauer, Fabio Costa, Michael E. Goggin, Andrew G. White, and Fedrizzi Alessandro. Multi-time quantum correlations with no spatial analog. NPJ Quantum Information, 4: 37, 2018. https:/​/​​10.1038/​s41534-018-0086-y. [120] J. A. Ross, P. Deuar, D. K. Shin, K. F. Thomas, B. M. Henson, S. S. Hodgman, and A. G Truscott. Survival of the quantum depletion of a condensate after release from a harmonic trap in theory and experiment, 2021. URL https:/​/​​abs/​2103.15283. arXiv:2103.15283. [121] Mutsuo Saito and Makoto Matsumoto. Simd-oriented fast Mersenne twister: a 128-bit pseudorandom number generator. 
In Alexander Keller, Stefan Heinrich, and Harald Niederreiter, editors, Monte Carlo and Quasi-Monte Carlo Methods 2006, pages 607–622, Berlin, Heidelberg, 2008. Springer Berlin Heidelberg. ISBN 978-3-540-74496-2. https:/​/​​10.1007/​978-3-540-74496-2_36. [122] Sebastian Schmidt and Jens Koch. Circuit qed lattices: Towards quantum simulation with superconducting circuits. Annalen der Physik, 525 (6): 395–412, 2013. https:/​/​​10.1002/​andp.201200261. [123] C. Schneider, K. Winkler, M. D. Fraser, M. Kamp, Y. Yamamoto, E. A. Ostrovskaya, and S. Höfling. Exciton-polariton trapping and potential landscape engineering. Reports on Progress in Physics, 80 (1): 016503, 2016. https:/​/​​10.1088/​0034-4885/​80/​1/​016503. [124] Huitao Shen, Pengfei Zhang, Ruihua Fan, and Hui Zhai. Out-of-time-order correlation at a quantum phase transition. Phys. Rev. B, 96: 054503, 2017. https:/​/​​10.1103/​PhysRevB.96.054503. [125] Alice Sinatra, Carlos Lobo, and Yvan Castin. The truncated Wigner method for Bose-condensed gases: limits of validity and applications. Journal of Physics B: Atomic, Molecular and Optical Physics, 35 (17): 3599, 2002. https:/​/​​10.1088/​0953-4075/​35/​17/​301. [126] A. M. Smith and C. W. Gardiner. Simulations of nonlinear quantum damping using the positive P representation. Phys. Rev. A, 39: 3511–3524, 1989. https:/​/​​10.1103/​PhysRevA.39.3511. [127] Robert W. Spekkens. Negativity and contextuality are equivalent notions of nonclassicality. Phys. Rev. Lett., 101: 020401, 2008. https:/​/​​10.1103/​PhysRevLett.101.020401. [128] J. Sperling. Characterizing maximally singular phase-space distributions. Phys. Rev. A, 94: 013814, 2016. https:/​/​​10.1103/​PhysRevA.94.013814. [129] J Sperling and W Vogel. Quasiprobability distributions for quantum-optical coherence and beyond. Physica Scripta, 95 (3): 034007, 2020. https:/​/​​10.1088/​1402-4896/​ab5501. [130] Herbert Spohn. Kinetic equations from hamiltonian dynamics: Markovian limits. Rev. Mod. Phys., 52: 569–615, 1980. https:/​/​​10.1103/​RevModPhys.52.569. [131] M. J. Steel, M. K. Olsen, L. I. Plimak, P. D. Drummond, S. M. Tan, M. J. Collett, D. F. Walls, and R. Graham. Dynamical quantum noise in trapped Bose-Einstein condensates. Phys. Rev. A, 58: 4824–4835, 1998. https:/​/​​10.1103/​PhysRevA.58.4824. [132] E. C. G. Sudarshan. Equivalence of Semiclassical and Quantum Mechanical Descriptions of Statistical Light Beams. Phys. Rev. Lett., 10: 277–279, 1963. https:/​/​​10.1103/​PhysRevLett.10.277. [133] Brian Swingle, Gregory Bentsen, Monika Schleier-Smith, and Patrick Hayden. Measuring the scrambling of quantum information. Phys. Rev. A, 94: 040302, 2016. https:/​/​​10.1103/​PhysRevA.94.040302. [134] Tomasz Świsłocki and Piotr Deuar. Quantum fluctuation effects on the quench dynamics of thermal quasicondensates. Journal of Physics B: Atomic, Molecular and Optical Physics, 49 (14): 145303, 2016. https:/​/​​10.1088/​0953-4075/​49/​14/​145303. [135] Andrzej Syrwid, Jakub Zakrzewski, and Krzysztof Sacha. Time crystal behavior of excited eigenstates. Phys. Rev. Lett., 119: 250602, 2017. https:/​/​​10.1103/​PhysRevLett.119.250602. [136] Kishore Thapliyal, Subhashish Banerjee, Anirban Pathak, S. Omkar, and V. Ravishankar. Quasiprobability distributions in open quantum systems: Spin-qubit systems. Annals of Physics, 362: 261–286, 2015. https:/​/​​10.1016/​j.aop.2015.07.029. [137] Hidekazu Tsukiji, Hideaki Iida, Teiji Kunihiro, Akira Ohnishi, and Toru T. Takahashi. 
How to Solve Equations That Are Stubborn as a Goat

If you've ever taken a math test, you've probably met a grazing goat. Usually it's tied to a fence post or the side of some barn, left there by an absent-minded farmer to graze on whatever grass it can reach. When you meet a grazing goat, your job is to calculate the total area of the region it can graze on. It's a math test, after all.

Math teachers have stymied students by sticking goats in strangely shaped fields for hundreds of years, but one particular grazing goat problem has gotten the goat of mathematicians for more than a century. Until last year they were only able to find approximate answers to the problem, and it took a new approach with some very advanced mathematics to finally produce an exact solution. Let's take a look at how a question you might find on a math test can turn into a problem that stumps mathematicians for over a century.
The simplest kind of grazing goat problem has the hungry animal attached to the side of a long barn by a fixed length of rope. Usually in these problems we want to find the area of the region the goat has access to. What does that region look like? With the leash pulled taut the goat can make a semicircle and can reach anything inside it. The area of a circle is $A = \pi r^2$, so the area of a semicircle is $A = \frac{1}{2}\pi r^2$. If, for example, the rope has length 4, then the goat could graze in a region with area $A = \frac{1}{2}\pi \times 4^2 = 8\pi$ square units.

This straightforward setup doesn't pose much of a challenge to the math student or to the goat, so let's make it more interesting. What if the goat is tied to the side of a square barn? Let's say the rope and the side of the barn both have length 4, and that the rope is attached to the middle of one side. What's the area of the region the goat has access to now?

Well, the goat still has access to the same semicircle as in the first problem. But the goat can also continue around the corner of the barn. Once it's at the corner, the goat has two more units of rope to work with, so it can sweep out another quarter circle of radius 2 on either side of the barn. The goat can access the semicircle of radius 4 plus two quarter circles of radius 2, for a total area of $A = \frac{1}{2}\pi \times 4^2 + \frac{1}{4}\pi \times 2^2 + \frac{1}{4}\pi \times 2^2 = 10\pi$ square units.

You can make the problem more challenging by changing the shape of the obstruction. I've seen goats attached to triangles, hexagons and even concave shapes. You can also make a new math question from an old one by reversing it: Instead of starting with rope length and finding the area, you can start with the area and find the rope length. For example, let's stick with our square barn and ask a new question: How long would the rope have to be for the goat to have access to a total of 50 square units of area?

Reversing a math problem can breathe new life into an old idea, but it also makes this problem much more challenging. First, notice that the shape of the region depends on the length of the rope. For example, if the rope is shorter than 2 units in length, the goat can't get around the corner of the barn, so the region will only be a semicircle. If the rope is longer than 2 units, the goat can get around the corner, as we saw above. And if the rope is longer than 6 units, the goat can get behind the barn, creating another set of quarter circles to consider. (If the rope gets much longer, there will be overlap. See the exercises at the end of the column for an example of this.)

We want to find the rope length that gives us 50 square units of total area. The way to do this mathematically is to set our area formula equal to 50 and solve for $r$. But each kind of region has a different area formula. Which one do we use? Figuring this out requires a little casework. If $r \le 2$ the area of the region is $A = \frac{1}{2}\pi r^2$. The biggest area would occur when $r = 2$, which yields a total area of $A = \frac{1}{2}\pi \times 2^2 = 2\pi \approx 6.28$. This is less than 50, so we know we need more than 2 units of rope.
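The casework so far is easy to sanity-check numerically. The short Python sketch below (an aside added here, not part of the original column) encodes each region as a sum of fractions of circles and reproduces the areas found above.

import math

def sector_area(radius, fraction):
    """Area of the given fraction of a full circle of this radius."""
    return fraction * math.pi * radius**2

# Goat tied to a long, straight wall with rope length 4: a semicircle.
print(sector_area(4, 1/2) / math.pi)   # 8.0, i.e. 8*pi square units

# Goat tied to the middle of one side of a 4-by-4 barn with rope length 4:
# a semicircle of radius 4 plus two quarter circles of radius 4 - 2 = 2.
print((sector_area(4, 1/2) + 2 * sector_area(2, 1/4)) / math.pi)   # 10.0

# With rope length r <= 2 the goat never rounds a corner, so the largest
# semicircle-only area is 2*pi, well short of the 50 square units we want.
print(sector_area(2, 1/2))   # ~6.283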
If $2 < r \le 6$, this gives us the semicircle plus the two quarter circles we encountered before. The radius of the semicircle is $r$, and the radius of the quarter circles is $r - 2$, since two units of rope are needed to get to the corner and whatever rope remains acts like the radius of the quarter circle centered at the corner. The area of this semicircle is $\frac{1}{2}\pi r^2$, and the area of each quarter circle is $\frac{1}{4}\pi (r-2)^2$. Adding this up gives us a total area of

$$A = \frac{1}{2}\pi r^2 + \frac{1}{4}\pi (r-2)^2 + \frac{1}{4}\pi (r-2)^2 = \frac{1}{2}\pi r^2 + \frac{1}{2}\pi (r-2)^2.$$

We get the biggest possible area when $r = 6$, which gives an area of $A = \frac{1}{2}\pi \times 6^2 + \frac{1}{2}\pi \times 4^2 = 26\pi \approx 81.68$ square units. Since $50 < 26\pi$, the $r$ that will give us 50 square units of area must be less than 6.

Knowing that $r$ must be between 2 and 6 units settles the question of which area formula we should use: When $2 < r \le 6$, the area is $A = \frac{1}{2}\pi r^2 + \frac{1}{2}\pi (r-2)^2$. To find the exact value of $r$ that gives us 50 square units of area, we set up the equation

$$50 = \frac{1}{2}\pi r^2 + \frac{1}{2}\pi (r-2)^2.$$

Notice that this is another way in which our reversed question is more complicated than the original: Instead of just computing the area the goat can reach, we need to solve an equation to figure out the length of the rope. To do that, we need to isolate $r$. We have to use arithmetic and algebra to get $r$ by itself on one side of the equation, and that will tell us exactly what $r$ must be.

Our equation may look a little intimidating at first, but it's just a quadratic equation in $r$. There's a standard procedure for solving such equations: We rearrange it in the form $ar^2 + br + c = 0$ and then use the quadratic formula. A little algebra and arithmetic does the trick.

$$50 = \frac{1}{2}\pi r^2 + \frac{1}{2}\pi (r-2)^2$$
$$\frac{100}{\pi} = r^2 + (r-2)^2$$
$$\frac{100}{\pi} = 2r^2 - 4r + 4$$
$$0 = 2r^2 - 4r + 4 - \frac{100}{\pi}.$$

This may not be the most beautiful mathematical expression in the world, but it's just a quadratic equation, so we can apply the quadratic formula to solve exactly for $r$. This gives us an answer of

$$r = 1 + \sqrt{\frac{50}{\pi} - 1} \approx 4.86.$$

Because we were able to isolate $r$ in our equation, we now know exactly how long the rope must be to get an area of 50 square units. (Notice that the value of $r$ we found is between 2 and 6, as expected.)

As challenging as this reversed goat grazing problem was compared to the initial ones we looked at, mathematicians discovered that the problem becomes even more challenging when you stick the goat inside the barn. So challenging, in fact, that they couldn't solve it exactly.

Let's put the goat inside our square barn with side length 4 and attach the rope to the middle of a wall. How long does the rope need to be for the goat to have access to half the area inside the barn? As above, part of the challenge is that the shape of the region depends on the value of $r$. To get half the area of the square we need $r$ to be longer than half the side of the barn but shorter than the full side, which gives us a region made up of a circular sector capped by two right triangles.

Finding a formula for the area of this region isn't so easy. We can imagine the region as one sector of a circle of radius $r$ plus two right triangles, and then use some high school geometry to get a formula. But as we'll soon see, the mixing of circles and triangles is going to cause some trouble.

Let's start with the triangles. The Pythagorean theorem tells us that the length of the missing leg in each right triangle is $\sqrt{r^2 - 4}$. This makes the area of one of the triangles $\frac{1}{2} \times 2 \times \sqrt{r^2 - 4} = \sqrt{r^2 - 4}$, so the two triangles together have an area of $2\sqrt{r^2 - 4}$. Now for the circular sector.
The area of a sector is $A = \frac{1}{2}r^2\theta$, where $\theta$ is the measure of the central angle (in radians, not degrees). We need a formula for the area in terms of $r$, so we need to express the angle $\theta$ in terms of $r$. To do this, we'll use the law of cosines, an underappreciated theorem from high school trigonometry. Applying the law of cosines to the isosceles triangle with sides $r$, $r$ and 4 gives us $4^2 = r^2 + r^2 - 2r^2\cos\theta$, which we can solve for $\cos\theta$:

$$\cos\theta = \frac{2r^2 - 16}{2r^2} = \frac{r^2 - 8}{r^2}.$$

To isolate $\theta$, we need to take the inverse cosine, or arccosine, of both sides of the equation. This gives us

$$\theta = \arccos\left(\frac{r^2 - 8}{r^2}\right).$$

Now we have the angle $\theta$ in terms of $r$, so we can express the area of our sector in terms of $r$ alone:

$$A = \frac{1}{2}r^2\theta = \frac{1}{2}r^2\arccos\left(\frac{r^2 - 8}{r^2}\right).$$

Our final area formula is the sum of the sector area and the area of the two triangles, which is

$$A = \frac{1}{2}r^2\arccos\left(\frac{r^2 - 8}{r^2}\right) + 2\sqrt{r^2 - 4}.$$

We now have a formula for the area of the region accessible to the goat inside the square entirely in terms of $r$. Now we just need to find the value of $r$ that gives the goat access to half the square. The entire square has area 16, so all we have to do is plug $A = 8$ into our equation and solve for $r$ and we'll be finished.

There's just one small problem: It's not possible to solve for $r$ in this equation. That is, it's not possible to solve exactly for $r$ in this equation. We can use a calculator to approximate the value of $r$ that makes this equation true ($r \approx 2.331$), but we can't isolate $r$ in our equation. The mixing of trigonometric functions and polynomial functions in our equation creates obstacles we can't get around. We could try to get the $r$'s out from inside the arccosine function, but to do that we'd have to put the other $r$'s inside a cosine function. Either way we'd be dealing with an equation that involves a transcendental function, like an exponential or trigonometric function. Transcendental functions can't be simply expressed in terms of the usual algebraic operations like addition and multiplication, and so in general transcendental equations can't be solved exactly.

This issue lies at the heart of a famous grazing goat problem posed in the 19th century where the goat was placed inside a circular barn. As in our square barn problem, the goal was to determine how long the rope had to be for the goat to have access to half the region. The region accessible by the goat takes the shape of a "lens" — two circular segments stacked together. It's possible to use high school geometry to find the area of this lens in terms of the rope length $r$, but the formula is much more complicated than it is for the square. And when you set this equal to half the area of the circular barn, you run into the same problem we ran into inside the square: You just can't isolate $r$. You can approximate it, but you can't solve for $r$ exactly.

This sort of obstinacy is no more appealing in an equation than it is in a goat. For over 100 years, mathematicians tried to find an exact solution to this goat-in-a-circle puzzle, but it wasn't until last year that a German mathematician finally figured it out. He used complex analysis — mathematics far removed from the geometry of circles and squares most goat problems rely on — to solve explicitly for $r$.
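Both rope-length equations are easy to check numerically. The Python sketch below (an aside added here, not part of the original column; the tolerance is an arbitrary choice) solves the outside-the-barn quadratic with the quadratic formula and pins down the inside-the-barn transcendental equation by bisection on the interval $[2, 4]$, where the area formula is valid.

import math

# Outside the barn: 0 = 2r^2 - 4r + 4 - 100/pi, solved exactly.
a, b, c = 2, -4, 4 - 100/math.pi
print((-b + math.sqrt(b*b - 4*a*c)) / (2*a))   # ~4.86 = 1 + sqrt(50/pi - 1)

# Inside the barn: area(r) = 8 has no closed-form solution, so we bisect.
def area(r):
    sector = 0.5 * r**2 * math.acos((r**2 - 8) / r**2)
    triangles = 2 * math.sqrt(r**2 - 4)
    return sector + triangles

lo, hi = 2.0, 4.0
while hi - lo > 1e-12:
    mid = 0.5 * (lo + hi)
    if area(mid) < 8:
        lo = mid
    else:
        hi = mid
print(lo)   # ~2.331

Bisection works here because the accessible area grows steadily with the rope length, so there is exactly one crossing of 8 in the bracket.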
And while using something as advanced as a contour integral to find the length of a goat's leash may seem like overkill, there's always mathematical satisfaction in doing what couldn't be done before. And there's always the possibility that these new methods, even if they arise from studying a silly problem about goats, might lead to insights beyond the barnyard.

1. If the goat is attached to the middle of the side of a square barn with side length 4 by a rope of length 8, outside the barn, what's the area of the region the goat has access to?

2. If the goat is attached to the corner of a square barn with side length 4 by a rope of length 8, outside the barn, what's the area of the region the goat has access to?

3. Suppose the goat is inside an equilateral triangle of side 4 attached to a vertex. How long would the rope have to be for the goat to have access to half the triangle?

4. If the goat is attached to the middle of the side of a square barn with side length 4 by a rope of length 10, outside the barn, what's the area of the region the goat has access to?

Answer 1: The region is composed of a semicircle of radius 8, two quarter circles of radius 6, and two quarter circles of radius 2. Since 8 is equal to half the perimeter of the barn, the two quarter circles behind the barn meet up at the midpoint. The area of this region is $\frac{1}{2}\pi \times 8^2 + \frac{1}{2}\pi \times 6^2 + \frac{1}{2}\pi \times 2^2 = 52\pi$.

Answer 2: This region is composed of three-quarters of a circle of radius 8 and two quarter circles of radius 4. This area is $\frac{3}{4}\pi \times 8^2 + \frac{1}{2}\pi \times 4^2 = 56\pi$. As a challenge, think about what happens if the rope has length 10.

Answer 3: Since the angles of an equilateral triangle are 60 degrees, the region the goat has access to is one-sixth of a circle of radius $r$, which has area $\frac{1}{6}\pi r^2$. The area of an equilateral triangle of side length $s$ is $\frac{\sqrt{3}}{4}s^2$, so the area of the triangle of side length 4 is $\frac{\sqrt{3}}{4} \times 4^2 = 4\sqrt{3}$. We set the two areas equal, $\frac{1}{6}\pi r^2 = \frac{1}{2} \times 4\sqrt{3}$, and solve for $r$ to get $r = \sqrt{\frac{12\sqrt{3}}{\pi}}$. Notice how we can solve exactly for $r$ here, unlike when the region mixed circular sectors and triangles.

Answer 4: The region is apparently composed of a semicircle of radius 10, two quarter circles of radius 8, and two quarter circles of radius 4. This has an area of $\frac{1}{2}\pi \times 10^2 + \frac{1}{2}\pi \times 8^2 + \frac{1}{2}\pi \times 4^2 = 90\pi$. But the last two quarter circles overlap behind the barn. That overlap has been counted twice, so we need to subtract the overlapping area from $90\pi$. The overlapping area can be thought of as two sixths of circles of radius 4 minus an equilateral triangle of side 4. Its area comes to $2 \times \frac{1}{6}\pi \times 4^2 - \frac{\sqrt{3}}{4} \times 4^2 = \frac{16}{3}\pi - 4\sqrt{3}$. So the total area is $90\pi - \left(\frac{16}{3}\pi - 4\sqrt{3}\right) = \frac{254}{3}\pi + 4\sqrt{3}$. (Note: This overlap would be much more difficult to find if the two circles had different radii, which is why finding the area of the lens mentioned above is so difficult.) A numerical check of all four answers appears below.
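As with the earlier equations, the answers are easy to verify numerically. The Python sketch below (an aside added here, not part of the original column) checks all four; each print statement shows the computed area alongside the claimed closed form.

import math

def sector_area(radius, fraction):
    return fraction * math.pi * radius**2

# Answer 1: rope of length 8 tied to the middle of a side.
print(sector_area(8, 1/2) + 2*sector_area(6, 1/4) + 2*sector_area(2, 1/4),
      52*math.pi)

# Answer 2: rope of length 8 tied to a corner.
print(sector_area(8, 3/4) + 2*sector_area(4, 1/4), 56*math.pi)

# Answer 3: the claimed rope length gives exactly half the triangle.
r = math.sqrt(12*math.sqrt(3)/math.pi)
print(sector_area(r, 1/6), (math.sqrt(3)/4)*4**2 / 2)

# Answer 4: rope of length 10, subtracting the double-counted overlap.
naive = sector_area(10, 1/2) + 2*sector_area(8, 1/4) + 2*sector_area(4, 1/4)
overlap = 2*sector_area(4, 1/6) - (math.sqrt(3)/4)*4**2
print(naive - overlap, (254/3)*math.pi + 4*math.sqrt(3))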
Quantum Double-Slit Experiment Offers Hope for Earth-Size Telescope

Imagine being able to see the surface of an Earth-like planet orbiting another star, or watching a star get shredded by a black hole. Such precise observations are currently impossible. But scientists are proposing ways to quantum mechanically link up optical telescopes around the world in order to view the cosmos at a mind-boggling level of detail.

The trick is to transport fragile photons between telescopes, so that the signals can be combined, or "interfered," to create far sharper images. Researchers have known for years that this kind of interferometry would be possible with a futuristic network of teleportation devices called a quantum internet. But whereas the quantum internet is a far-off dream, a new proposal lays out a scheme for doing optical interferometry with quantum storage devices that are under development now.

The approach would represent the next stage of astronomy's obsession with size. Wider mirrors create sharper images, so astronomers are constantly designing ever-bigger telescopes and seeing more details of the cosmos unfold. Today they're building an optical telescope with a mirror nearly 40 meters wide, 16 times the width (and thus resolution) of the Hubble Space Telescope. But there's a limit to how much mirrors can grow. "We're not going to be building a 100-meter single-aperture telescope. That's insane!" said Lisa Prato, an astronomer at Lowell Observatory in Arizona. "So what's the future? The future's interferometry."

Earth-Size Telescope

Radio astronomers have been doing interferometry for decades. The first-ever picture of a black hole, released in 2019, was made by synchronizing signals that arrived at eight radio telescopes dotted around the world. Collectively, the telescopes had the resolving power of a single mirror as wide as the distance between them — an effectively Earth-size telescope. To make the picture, radio waves arriving at each telescope were precisely time-stamped and stored, and the data was then stitched together later on. The procedure is relatively easy in radio astronomy, both because radio-emitting objects tend to be extremely bright, and because radio waves are relatively large and thus easy to line up.

Optical interferometry is much harder. Visible wavelengths measure hundreds of nanometers long, leaving far less room for error in aligning waves according to when they arrived at different telescopes. Moreover, optical telescopes build images photon by photon from very dim sources. It's impossible to save these grainy signals onto normal hard drives without losing information that's vital for doing interferometry. Astronomers have managed by directly linking nearby optical telescopes with optical fibers — an approach that led in 2019 to the first direct observation of an exoplanet. But connecting telescopes farther apart than 1 kilometer or so is "extremely unwieldy and expensive," said Theo ten Brummelaar, director of the CHARA Array, an optical interferometric array in California. "If there was a way of recording photon events at an optical telescope with some kind of quantum device, that would be a great boon to the science."

Young's Slits

Joss Bland-Hawthorn and John Bartholomew of the University of Sydney and Matthew Sellars of the Australian National University recently proposed a scheme for doing optical interferometry with quantum hard drives. The principle behind the new proposal traces back to the early 1800s, before the quantum revolution, when Thomas Young devised an experiment to test whether light is made of particles or waves. Young passed light through two closely separated slits and saw a pattern of regular bright bands form on a screen behind.
This interference pattern, he argued, appeared because light waves from each slit cancel out and add together at different locations.

Then things got a whole lot weirder. Quantum physicists discovered that the double-slit interference pattern remains even if photons are sent toward the slits one at a time; dot by dot, they gradually create the same bands of light and dark on the screen. However, if anyone monitors which slit each photon goes through, the interference pattern disappears. Particles are only wavelike when undisturbed.

Now imagine that, instead of two slits, you have two telescopes. When a single photon from the cosmos arrives on Earth, it could hit either telescope. Until you measure this — as with Young's double slits — the photon is a wave that enters both. Bland-Hawthorn, Bartholomew and Sellars suggest plugging in a quantum hard drive at each telescope that can record and store the wavelike states of incoming photons without disturbing them. After a while, you transport the hard drives to a single location, where you interfere the signals to create an incredibly high-resolution image.

Quantum Memory

To make this work, quantum hard drives have to store lots of information over long periods of time. One turning point came in 2015, when Bartholomew, Sellars and colleagues designed a memory device made from europium nuclei embedded in a crystal that could store fragile quantum states for six hours, with the potential to extend this to days. Then, earlier this year, a team from the University of Science and Technology of China in Hefei demonstrated that you could save photon data into similar devices and later read it out. "It's very exciting and surprising to see that quantum information techniques can be useful for astronomy," said Zong-Quan Zhou, who co-authored the recently published paper.

Zhou describes a world in which high-speed trains or helicopters rapidly shuttle quantum hard drives between far-apart telescopes. But whether these devices can work outside laboratories remains to be seen. Bartholomew is confident that the hard drives can be shielded from errant electric and magnetic fields that disrupt quantum states. But they'll also have to withstand pressure changes and acceleration. And the researchers are working to design hard drives that can store photons with many different wavelengths — a necessity for capturing images of the cosmos.

Not everyone thinks it'll work. "In the long run, if these techniques are to become practical, they will require a quantum network," said Mikhail Lukin, a quantum optics specialist at Harvard University. Rather than physically transporting quantum hard drives, Lukin has proposed a scheme that would rely on a quantum internet — a network of devices called quantum repeaters that teleport photons between locations without disturbing their states.

Bartholomew counters that "we have good reasons to be optimistic" about quantum hard drives. "I think in a five-to-10-year time frame you could see tentative experiments where you actually start looking at real [astronomical] sources." By contrast, the construction of a quantum internet, Bland-Hawthorn said, is "decades from reality."
Quasinormal modes of black holes and Borel summation

Yasuyuki Hatsuda
Department of Physics, Rikkyo University, Toshima, Tokyo 171-8501, Japan
RUP-19-18

We propose a simple and efficient way to compute quasinormal frequencies of spherically symmetric black holes. We revisit an old idea that relates them to bound state energies of anharmonic oscillators by an analytic continuation. This connection enables us to achieve remarkably high-order computations of WKB series by Rayleigh–Schrödinger perturbation theory. The known WKB results are easily reproduced. Our analysis shows that the perturbative WKB series of the quasinormal frequencies turn out to be Borel summable divergent series both for the Schwarzschild and for the Reissner–Nordström black holes. Their Borel sums reproduce the correct numerical values.

1 Introduction

In black hole perturbation theory, characteristic oscillatory modes appear. Because of emission of gravitational radiation, these oscillations decay, and thus are called quasinormal modes. The quasinormal modes play a central role at the final stage, the ringdown phase, of the coalescence of two black holes, and have a direct connection with the recent observation of gravitational waves abbott2016; abbott2016a; abbott2016b. It is an important task to compute the quasinormal frequencies for various black holes as precisely as possible. Unfortunately, it is almost impossible for us to follow the huge number of references on this subject; we refer to a few comprehensive reviews kokkotas1999; berti2009; konoplya2011.

The purpose of the present work is to develop a widely applicable method to compute the quasinormal frequencies of spherically symmetric black holes. We combine a few new ideas with some known ones in various fields. The method that we propose here is simple and efficient. Anyone who has basic skills in Mathematica can compute accurate quasinormal frequencies for a wide class of black holes from now on! We explicitly demonstrate it for two simple examples: the Schwarzschild black hole and the Reissner–Nordström black hole. (Mathematica codes for these examples are available on request to the author.)

It is well known that the WKB approach, initiated by Schutz and Iyer in schutz1985, can be applied to many cases. (We should note that in the WKB approach, one does not necessarily have to expand the potential around its maximum. The general treatment leads to Bohr–Sommerfeld-like quantization conditions froman1992, which are more accurate than the approach in schutz1985 for high overtone numbers.) In iyer1987, Iyer and Will computed the third order correction, and Konoplya extended it to the sixth order konoplya2003. Recently, the computation up to the 13th order has been done by Matyjasek and Opala matyjasek2017 (see also konoplya2019). While these great computations actually improved approximate values of the quasinormal frequencies of black holes, it seems hard to answer more fundamental questions: Are the WKB series convergent or divergent? Do they receive nonperturbative corrections? Can we reconstruct the exact quasinormal frequencies from their WKB series? Of course, these questions are all interrelated. To answer them, much higher-order data are desirable. The high-order data also provide us more accurate numerical values as a result. In this work, we propose a simple way to do so. It easily reproduces the previous WKB results. We emphasize that the application range of our approach is as wide as that of the WKB approach.
We revisit an old idea originally proposed by Blome, Ferrari and Mashhoon blome1984; ferrari1984; Ferrari:1984zz. It is explained in those papers that the quasinormal frequencies of black holes are related to the bound state energies of anharmonic oscillators by an analytic continuation. We find that this connection allows us to use a powerful technique, à la Bender and Wu bender1969, in quantum mechanics. Surprisingly, no one has ever applied this famous technique to the computation of the quasinormal frequencies, though it has been known for half a century! Recently, from another motivation in high-energy physics, the Bender–Wu method has been beautifully packaged in Mathematica by Sulejmanpasic and Ünsal sulejmanpasic2018. To the author's knowledge, this is now the best tool to look into perturbative series deeply in quantum mechanics. Based on these excellent works, we show that the Bender–Wu method is indeed greatly useful in the computation of the quasinormal modes.

By using this method, we can compute the perturbative WKB expansion to extremely high orders. We have reached the 200th order for the Schwarzschild and for the Reissner–Nordström black holes, and one can easily go beyond it, depending on one's CPU power and time constraints. (The 50th order evaluation for a given overtone number will be done in 15-30 seconds, and the 200th order in 10-15 minutes on recent home computers.) Note that our approach also gives the spectrum of bound states in an associated eigenvalue problem as a bonus.

Such high-order data are useful to clarify the convergence or the divergence of the WKB series at the very quantitative level. We find that the WKB series is almost surely divergent, i.e., its radius of convergence is just zero. The same result has been observed in matyjasek2017; konoplya2019. Our result gives much stronger evidence. This result is not surprising because in quantum mechanics, in quantum field theories and in string theory, perturbative series are usually divergent. The Borel summation method tells us a lot of important information on exact functions from their perturbative series. In the examples in this paper, the WKB series of the quasinormal frequencies are always Borel summable, and this strongly implies that they do not receive any nonperturbative corrections. As a consequence, we conclude that the Borel summation of the WKB series gives the exact quasinormal frequency.

The organization of this paper is as follows. In the next section, we simply re-derive the WKB results in our approach. We map the problem to Rayleigh–Schrödinger perturbation theory in quantum mechanics. Section 3 is the main part of this paper. We show the Borel analysis for the Schwarzschild and for the Reissner–Nordström black holes. The singularity structure of the Borel transform reveals that the perturbative WKB series are Borel summable divergent series. It implies that the WKB series do not receive nonperturbative corrections. We can easily perform the Borel summation, and it agrees with the known numerical data with remarkable accuracy. We comment on some future directions in section 4. For the reader who is not familiar with the Borel summation method, we give a brief review in appendix A.

2 Semiclassical perturbative expansions

In blome1984; ferrari1984; Ferrari:1984zz, quasinormal frequencies of black holes are related to bound state energies. There, the relationship was applied to exactly solvable potentials.
We stress that this idea is quite general, and it is not restricted to that special application. Here we revisit this relation in a refined way. A similar computation is also found in zaslavskii1991, but our approach is much more efficient. The obtained result is directly compared with the WKB results in schutz1985; iyer1987; konoplya2003.

2.1 Basic idea

Throughout this paper, we focus on spherically symmetric black holes. Black hole perturbation theory leads to the following radial master equation:

$$\left[\hbar^2 \frac{d^2}{dx^2} + \omega^2 - V(x)\right]\phi(x) = 0, \qquad (1)$$

where $x$ is the tortoise variable. The black hole horizon is at $x = -\infty$, and the spatial infinity is at $x = +\infty$. We have introduced a formal parameter $\hbar$ that characterizes the WKB series. In the language of quantum mechanics, it of course corresponds to the Planck constant, and thus we often refer to the expansion around $\hbar = 0$ as the semiclassical expansion. Usually, $\hbar$ is set to be unity. Typically, the potential $V(x)$ has a shape shown in the left of figure 1, and it has a global maximum. Our procedure, however, is not restricted to such typical potentials.

Figure 1: (Left) A typical shape of the potential $V(x)$. (Right) The inverted potential $-V(x)$ usually has bound states.

In parallel, let us consider the Schrödinger equation with the inverted potential:

$$\left[-\hbar^2 \frac{d^2}{dx^2} - V(x)\right]\psi(x) = E\,\psi(x). \qquad (2)$$

It is clear that the inverted potential $-V(x)$ has a minimum, shown in the right of figure 1, and usually has bound states with $-V_{\max} < E < 0$. We denote the bound state energies by $E_n$ ($n = 0, 1, 2, \dots$). Now we consider the analytic continuation of $\hbar$. If setting $\hbar \to \pm i\hbar$, the Schrödinger equation (2) with $E = -\omega^2$ formally coincides with (1). Therefore, it is expected that the quasinormal frequency at $\hbar = 1$ is simply related to the bound state energy at $\hbar = \pm i$ by

$$\omega_n^2 = -E_n\big|_{\hbar = \pm i}. \qquad (3)$$

This is the key equation in our analysis. Of course, to prove (or disprove) this optimistic guess, we have to carefully see how the boundary conditions on both sides are related by the analytic continuation of $\hbar$. Roughly speaking, in the bound state problem, the exponentially decaying solution in $x \to +\infty$ behaves as $e^{-\sqrt{-E}\,x/\hbar}$, and it is analytically continued to the outgoing solution $e^{i\omega x}$, which is the boundary condition imposed at the spatial infinity for the quasinormal modes. Similarly, the decaying solution in $x \to -\infty$ is continued to the ingoing mode at the horizon. Therefore both the boundary conditions seem to be related appropriately by the analytic continuation. However, this argument is quite intuitive. There is a possibility that the decaying solution leads to a linear combination of the outgoing and the ingoing solutions after the analytic continuation due to the Stokes phenomenon. This phenomenon is invisible as far as one considers asymptotic solutions. The Stokes phenomenon in the WKB method was formulated in the seminal work voros1983 of Voros, and is now known as the exact WKB analysis. We expect that the relation (3) is rigorously (dis)proved by the exact WKB analysis, but it is beyond the scope of this work. We currently assume the relation (3) without any rigorous proofs. We will check it by comparing the results obtained in this way with the known ones. This is the main goal of this paper.

Let us comment on the difference from the original proposal in blome1984; ferrari1984; Ferrari:1984zz. The authors there considered the analytic continuation of the radial coordinates $r$ and $x$. To get the inverted potential, one has to do the further analytic continuation of other parameters (mass, charge, etc.) in the potential simultaneously, while the Planck constant is fixed. Here, we rather analytically continue only the Planck constant. Though these two complementary procedures seem equivalent, ours looks much simpler.
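As a concrete sanity check of relation (3), one can use the Pöschl–Teller potential, a standard exactly solvable example in this context. The sympy sketch below (an illustration added here, not taken from the paper; the symbol names are arbitrary) continues the exact bound-state energies of the inverted well in $\hbar$ and recovers the known Pöschl–Teller quasinormal frequencies.

import sympy as sp

hbar, alpha, U0 = sp.symbols('hbar alpha U0', positive=True)
n = sp.symbols('n', integer=True, nonnegative=True)

# Exact bound-state energies of the Poschl-Teller well -U0/cosh^2(alpha*x):
#   -hbar^2 psi'' - U0/cosh^2(alpha*x) psi = E psi
E = -(sp.sqrt(U0 + hbar**2*alpha**2/4) - hbar*alpha*(n + sp.Rational(1, 2)))**2

# Relation (3): omega_n^2 = -E_n at hbar = +i (one choice of branch).
omega2 = sp.expand(-E.subs(hbar, sp.I))

# Known quasinormal frequencies of the Poschl-Teller potential.
omega_known = sp.sqrt(U0 - alpha**2/4) - sp.I*alpha*(n + sp.Rational(1, 2))
print(sp.simplify(omega2 - sp.expand(omega_known**2)))   # -> 0

The continuation turns the square-root shift of the bound-state formula directly into the real part of the quasinormal frequency, and the level number $n$ into the overtone number.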
2.2 Re-deriving WKB results

We start with the Taylor expansion of the inverted potential:

\[ -V(x) = \sum_{k=0}^{\infty} v_k (x - x_0)^k, \qquad v_0 = -V_0, \quad v_1 = 0, \qquad (4) \]

where the inverted potential has its minimum at $x = x_0$. For our purpose, it is more useful to define $g = \hbar^{1/2}$ and $q = (x - x_0)/g$, and rewrite the Schrödinger equation (2) as

\[ \left[ -\frac{d^2}{dq^2} + v_2 q^2 + \sum_{k=3}^{\infty} g^{k-2} v_k q^k \right] \phi = \lambda \phi, \qquad \lambda = \frac{E + V_0}{\hbar}. \qquad (5) \]

We regard the equation (5) as an anharmonic oscillator with an infinite number of interaction terms. In this viewpoint, the Planck constant is set to be unity, and $g$ plays the role of the coupling constant of the interactions. The standard (but boring) textbook method in perturbation theory leads, in principle, to the expansion around $g = 0$ order by order zaslavskii1991 . Instead, a much more economical way is known bender1969 . We do not repeat the argument in bender1969 , but mention that there is a useful Mathematica package sulejmanpasic2018 to compute the perturbative expansion of the spectrum for a given potential by this method. (The correctness of this package has been confirmed for various quantum mechanical models studied in a recent revival of resurgence theory dunne2016 ; kozcaz2018 ; codesido2017a ; codesido2019 .) See these papers for details. With the help of this Mathematica package, we easily get the following perturbative expansion for the eigenvalue equation (5):

\[ \lambda_n = 2\sqrt{v_2}\, \nu + g^2 \lambda_n^{(1)} + g^4 \lambda_n^{(2)} + \cdots, \qquad (6) \]

where $\nu = n + \frac{1}{2}$. Clearly, the first term is due to the harmonic potential with the frequency $2\sqrt{v_2}$. The second term comes from the cubic and the quartic interactions. As in (6), the interaction term $g^{k-2} v_k q^k$ leads to contributions starting at order $g^{k-2}$. If one wants to know the perturbative series of $\lambda_n$ up to the $g^{2p}$-th order, one needs to compute the Taylor expansion of the potential up to the $(2p+2)$-th order. This result should be compared with the WKB result in iyer1987 . To do so, we use the relation

\[ \omega^2 = V_0 - \hbar\, \lambda_n, \qquad (7) \]

and compare the Taylor expansions on both sides. We find that the coefficients $v_k$ are simply (minus) the Taylor coefficients of the original potential at its peak, $v_k = -V^{(k)}(x_0)/k!$. Using these relations and setting $\hbar = \pm i$, one finds that our result (7) is in agreement with (1.5a) and (1.5b) in iyer1987 . Even if the master equation cannot be divided into the simple form $\omega^2 - V(x)$ with an $\omega$-independent potential, we can still regard the constant term as the “energy” in the Schrödinger equation. In this case, the extremal point $x_0$, as well as the Taylor coefficients $v_k$, depends on $\omega$.

3 Borel analysis

The greatest advantage of our approach is that one can push the semiclassical perturbative computation to very high orders. This high-order computation helps us to understand the analytic structure of $E_n(\hbar)$ and also of $\omega_n$. Here we show explicit computations for a few examples. We note that our method is widely applicable to many other cases.

3.1 Schwarzschild black hole

For the Schwarzschild black hole, the Regge–Wheeler potential is universally given by

\[ V(x) = \left( 1 - \frac{1}{r} \right) \left( \frac{\ell(\ell+1)}{r^2} + \frac{1 - s^2}{r^3} \right), \]

where the tortoise variable is given by $x = r + \ln(r - 1)$. We have set the Schwarzschild mass to be $2M = 1$. The cases of $s = 0, 1, 2$ correspond to scalar, electromagnetic and gravitational perturbations, respectively. We here show the explicit computation for the gravitational perturbation ($s = 2$) with $\ell = 2$. The other cases are completely the same. In this case, the (inverted) potential takes its minimal value at

\[ r = r_0 = \frac{27 + \sqrt{153}}{24} \approx 1.6404. \]

It is straightforward to compute the Taylor expansion of the potential around this point. In practice we work with approximate numerical values of the coefficients $v_k$ in (4). Of course, one can do this computation keeping the coefficients analytic; from a practical point of view, it is enough to compute these coefficients numerically, provided they are kept sufficiently accurate.
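Because the tortoise derivative obeys $d/dx = f(r)\, d/dr$, these Taylor coefficients are easy to generate symbolically. Here is a small Python/SymPy sketch of our own that does this for the potential above (the conventions, including $2M = 1$, are as reconstructed in this section; the helper name d_dx is ours):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
f = 1 - 1/r                          # metric function with 2M = 1
ell, s = 2, 2                        # gravitational perturbation, l = 2
V = f * (ell * (ell + 1) / r**2 + (1 - s**2) / r**3)

def d_dx(expr):
    """Derivative with respect to the tortoise coordinate: d/dx = f(r) d/dr."""
    return f * sp.diff(expr, r)

# Locate the peak of V (the minimum of -V); it lies outside the horizon r = 1.
r0 = sp.nsolve(d_dx(V), r, 1.5)
print('r0 =', r0, '  V0 =', sp.N(V.subs(r, r0)))

# Taylor coefficients v_k of -V around x0 (v_1 should vanish identically).
expr = -V
for k in range(1, 7):
    expr = d_dx(expr)
    print('v_%d =' % k, sp.N(expr.subs(r, r0) / sp.factorial(k)))
```

Feeding a few dozen of these coefficients into the Bender–Wu machinery then reproduces the expansion (6) to the corresponding order.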
Recall that, in order to get the $p$-th order perturbative correction to $\lambda_n$, the Taylor expansion of the potential up to the $(2p+2)$-th order is needed. In the current case, we can compute it up to any desired order (at least numerically). Once the Taylor expansion of the potential is known, the Mathematica package in sulejmanpasic2018 automatically computes the perturbative series of $E_n(\hbar)$. For example, the perturbative series of the ground state and the first excited state energies are obtained to the fifth order almost immediately. To get the quasinormal modes, we finally use the relation (3), i.e., $\omega_n^2 = -E_n(\pm i\hbar)$, and set $\hbar = 1$. The resulting numerical values, in which we have chosen the branch such that $\operatorname{Im} \omega_n < 0$, are in precise agreement with the sixth order WKB approximation in konoplya2003 . Note that, because of the extra factor of $\hbar$ in (7), the fifth order correction to $\lambda_n$ corresponds to the sixth order in $\omega_n^2$.

Let us write the all-order perturbative expansion of the energy as

\[ E_n(\hbar) \sim \sum_{\ell=0}^{\infty} E_n^{(\ell)}\, \hbar^{\ell}. \qquad (14) \]

How can we get the exact quasinormal frequency from this perturbative series? The answer is not so simple. It turns out that we need a technique from asymptotic analysis. We have computed $E_n^{(\ell)}$ up to $\ell = 200$. Everything can be done in Mathematica. The large order behavior of the absolute values of the ground state coefficients is shown in figure 2. The same behavior is also observed for the excited energies. This rapidly growing behavior strongly suggests that (14) is a divergent series, as in most quantum mechanical models. This is clearly confirmed by looking at the singularities of the Borel transform, as we will show below.

Figure 2: The large order behavior of $|E_n^{(\ell)}|$ for $n = 0$.

Note that the divergent series (14) works as an approximation only up to a certain finite order $\ell_*$. Beyond this order, the approximation by (14) gets worse. If one regards (14) as an approximate expression, one should truncate all the terms beyond the best order $\ell_*$. This is well known as the optimal truncation. This behavior is crucially different from that of convergent series. For convergent series, the approximation (inside the circle of convergence) gets better and better by computing higher and higher order corrections. This is not true for divergent series. In this sense, it is very important to know whether a given perturbative series is convergent or divergent. The perturbative series beyond the optimal order does not work as an approximate expression any more. The reader might think that the higher-order corrections in the perturbative series are then useless. This is not the case. There are ways to improve the approximation. One way is to use Padé approximants, as in matyjasek2017 . However, it seems difficult in this way to understand the analyticity properties of the WKB series, such as the Stokes phenomenon. Moreover, to the author's knowledge, the convergence of Padé approximants applied directly to divergent series is unclear. For these reasons, we here use another important method, well known as Borel summation.

The Borel summation is a basic tool in the analysis of divergent series. We review this method in appendix A. As in appendix A, we define the Borel transform of (14) by

\[ \mathcal{B}E_n(t) = \sum_{\ell=0}^{\infty} \frac{E_n^{(\ell)}}{\ell!}\, t^{\ell}. \qquad (15) \]

We denote its analytic continuation by $\widehat{\mathcal{B}}E_n(t)$. Then the Borel sum is given by the Laplace transform:

\[ \widehat{E}_n(\hbar) = \frac{1}{\hbar} \int_0^{\infty} dt\, e^{-t/\hbar}\, \widehat{\mathcal{B}}E_n(t). \qquad (16) \]

The Borel summed quasinormal frequency is finally defined by

\[ \widehat{\omega}_n^2 = -\widehat{E}_n(\pm i). \qquad (17) \]

We expect that $\widehat{\omega}_n$ gives the exact value of the quasinormal frequency. We have the perturbative data up to the 200th order. To perform the Borel summation, we need the analytically continued Borel transform $\widehat{\mathcal{B}}E_n(t)$. How do we get it from these finite data?
Probably, the Padé approximant is the best solution. (Note again that in matyjasek2017 the Padé approximant was applied to the original WKB series, which is divergent. Here we apply the Padé approximant to the Borel transform, which is convergent. As far as we know, the convergence of Padé approximants is guaranteed for convergent series, but we are not sure whether this is so for divergent series.) Let $[m/n]_f(t)$ denote the Padé approximant, with an order-$m$ numerator and an order-$n$ denominator, of a given function $f(t)$. It is well known that the Padé approximant works even outside the circle of convergence and, moreover, captures the singularity structure of the original function. (Since Padé approximants are rational functions, they never have branch point singularities. Nevertheless, Padé approximants tell us about branch cuts: a branch cut appears as a cluster of poles. Consider, for instance, the Padé approximant of a function with a branch cut, such as $\log(1+t)$; its poles accumulate along the cut.) Because of this nice property, the Padé approximant is suitable for our purpose. Finally, we perform the Laplace transform (16) by replacing $\widehat{\mathcal{B}}E_n(t)$ with its Padé approximant:

\[ \widehat{E}_n(\hbar) \approx \frac{1}{\hbar} \int_0^{\infty} dt\, e^{-t/\hbar}\, [m/n]_{\mathcal{B}E_n}(t). \qquad (18) \]

This practical prescription is sometimes called Borel–Padé summation.

Figure 3: The pole distributions of the Padé approximants of the Borel transform for $n = 0$ (left) and for $n = 1$ (right). There are no singularities on the positive imaginary axis in either case. A cluster of poles implies a branch cut.

In the computation of the quasinormal frequencies, we have to perform the analytic continuation $\hbar \to \pm i$. The integrand of the Laplace transform becomes $e^{\pm i t}\, \widehat{\mathcal{B}}E_n(t)$. Therefore it is important to see the singularities of $\widehat{\mathcal{B}}E_n(t)$ on the positive imaginary axis. In figure 3, we show the pole structure of the diagonal Padé approximant constructed from the 200th-order data. We do not find any singularities on the real axis or on the imaginary axis. We conclude that the perturbative expansion (14) is Borel summable both for $n = 0$ and for $n = 1$. We play the same game for other values of $\ell$ and $n$. The Borel–Padé summed quasinormal frequencies are shown in table 1. These values can be compared with known results obtained by other methods. We have checked that ours are in excellent agreement with the values obtained by Leaver's method leaver1985 .

Table 1: The Borel–Padé summed quasinormal frequencies in the odd-parity gravitational perturbations of the Schwarzschild black hole. We have computed the perturbative expansion of $E_n(\hbar)$ up to the 200th order, and have used the diagonal Padé approximant of the Borel transform. We show only the reliable stable parts of the numerical values. These values are consistent with all the data available in the literature up to now.

It is easy to see in table 1 that the convergence speed of the Borel–Padé summation gets better for larger multipole numbers $\ell$ and smaller overtone numbers $n$. This is because our method zooms in on the bottom of the inverted potential, and it in general works well for a potential with a deep well. This is a general property of perturbation theory. If one considers the scalar perturbations ($s = 0$), things get worse. In the case of $\ell = 0$, the Borel–Padé summation converges only slowly, even for $n = 0$; the precision should be improved by computing further higher order corrections.
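The whole Borel–Padé prescription (15)–(18) fits in a few lines of code. The sketch below is our own Python/mpmath illustration, applied to a toy factorially divergent series rather than to the black-hole coefficients (which we do not reproduce here); for the quasinormal modes one would feed in the $E_n^{(\ell)}$ and evaluate at $\hbar = \pm i$, provided no Borel singularities lie along that direction:

```python
import mpmath as mp

def borel_pade_sum(coeffs, hbar):
    """Borel-Pade sum of the formal series f(hbar) ~ sum_k coeffs[k] hbar^k:
    (i) Borel transform b_k = coeffs[k]/k!, (ii) diagonal Pade approximant
    of sum_k b_k t^k, (iii) Laplace transform back, as in (18).
    """
    b = [mp.mpmathify(c) / mp.factorial(k) for k, c in enumerate(coeffs)]
    m = (len(b) - 1) // 2
    p, q = mp.pade(b, m, m)
    B = lambda t: mp.polyval(p[::-1], t) / mp.polyval(q[::-1], t)
    return mp.quad(lambda s: mp.exp(-s) * B(s * hbar), [0, mp.inf])

# Toy model: a_k = (-1)^k Gamma(k + 1/2) diverges factorially, and its Borel
# transform sums to sqrt(pi)/sqrt(1 + t), analytic on the positive real axis.
mp.mp.dps = 30
g = mp.mpf('0.2')
coeffs = [(-1)**k * mp.gamma(k + mp.mpf(1) / 2) for k in range(40)]
print(borel_pade_sum(coeffs, g))                                     # ~1.63
print(mp.pi / mp.sqrt(g) * mp.exp(1 / g) * mp.erfc(1 / mp.sqrt(g)))  # exact
```

In this toy example the poles of the Padé denominator accumulate along the negative real axis, mimicking the branch cut of $\sqrt{\pi}/\sqrt{1+t}$; this is exactly the cluster-of-poles behavior described above.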
3.2 Reissner–Nordström black hole

The application to the Reissner–Nordström black hole is almost straightforward. In this case, the (odd-parity) potential is given by

\[ V_i(x) = f(r) \left( \frac{\ell(\ell+1)}{r^2} - \frac{q_i}{r^3} + \frac{4Q^2}{r^4} \right), \qquad f(r) = 1 - \frac{1}{r} + \frac{Q^2}{r^2}, \]

where $i = 1$ is the electromagnetic perturbation, and $i = 2$ is the gravitational perturbation. As in the Schwarzschild case, the mass is set to be $2M = 1$. The parameters are defined by

\[ q_{1,2} = \frac{3}{2} \mp \sqrt{\frac{9}{4} + 4Q^2 (\ell - 1)(\ell + 2)}, \qquad r_\pm = \frac{1 \pm \sqrt{1 - 4Q^2}}{2}. \]

The tortoise variable is given by

\[ x = r + \frac{r_+^2}{r_+ - r_-} \ln(r - r_+) - \frac{r_-^2}{r_+ - r_-} \ln(r - r_-). \]

In the limit $Q \to 0$, the potential reduces to the Schwarzschild case. In the limit $Q \to 1/2$, the black hole becomes extremal. Since $r_+ = r_-$ in this limit, we have to use the following expression instead:

\[ x = r + 2 r_+ \ln(r - r_+) - \frac{r_+^2}{r - r_+}. \]

The remaining computation is completely the same as in the non-extremal case. Note that Leaver's method leaver1990 does not work in the extremal case; one needs a modification onozawa1996 . Our approach has no difficulty in this limit.

Table 2: The Borel–Padé summed quasinormal frequencies with $\ell = 2$ in the odd-parity gravitational perturbations of the Reissner–Nordström black hole. We show only the reliable stable parts of the numerical values. These values are consistent with all the data available in the literature up to now. The case of $Q = 1/2$ corresponds to the extremal black hole.

We repeat all the computations that were done in the Schwarzschild case. The Borel transform again has no singularities on the positive imaginary axis. The Borel–Padé sums for the gravitational perturbations with $\ell = 2$ are shown in table 2. We have compared these numerical results with the known available data in leaver1990 ; andersson1993 ; onozawa1996 ; matyjasek2017 , and find perfect agreement.

4 Summary

In this paper, we propose a simple procedure to get the quasinormal frequencies of spherically symmetric black holes by combining two old ideas in blome1984 ; ferrari1984 ; Ferrari:1984zz ; bender1969 . Our recipe consists of the following four steps:

1. Compute the Taylor expansion of a given potential in the master equation (1).
2. Compute the perturbative series of the bound state energy in the inverted potential.
3. Sum it up by the Borel(–Padé) summation method.
4. Do the analytic continuation $\hbar \to \pm i$ at the end.

The second step is the most non-trivial, but the recent Mathematica package in sulejmanpasic2018 does it automatically! The high-order computation revealed that the perturbative series of the quasinormal frequencies are divergent and, more importantly, Borel summable. This strongly supports that the Borel summed frequency gives the exact quasinormal frequency. We indeed confirmed that the Borel–Padé summation precisely reproduces all the known results. Though we showed the explicit computations only for the Schwarzschild and for the Reissner–Nordström black holes, our procedure must be widely applicable, just as the WKB approach is. A simple direction is to apply our method to deformations of the Schwarzschild potential cardoso2019 .

In this work, we focused on the behavior near the top (or the bottom) of the potential (or the inverted potential). This restriction makes it hard to obtain accurate values of the quasinormal frequencies with high overtone numbers $n$. In addition, the convergence of the Borel–Padé summation in the $\ell = 0$ scalar perturbations is quite slow because of the shallow well of the inverted potential. These difficulties are probably resolved in the approach of froman1992 . In bound state problems, the energy spectrum is approximately obtained by Bohr–Sommerfeld quantization conditions. These quantization conditions are improved by taking into account high-order quantum corrections dunham1932 . (However, they sometimes receive nonperturbative corrections balian1978 ; voros1983 .) The same is true for quasinormal modes. Quantum corrected Bohr–Sommerfeld conditions are found in froman1992 . However, even for the Schwarzschild black hole, their exact forms have not been known.
It would be significant to perform the Borel analysis for the quantization conditions in froman1992 . Finally, our method heavily relies on the assumption (3). Though we have tested it for a few examples, it is important to (dis)prove the assumption (3) rigorously. As already mentioned, one has to take care of the Stokes phenomenon. This problem should be solvable by understanding the evolution of the (anti-)Stokes curves in the complex $x$-plane (or in the $\hbar$-plane) as $\hbar$ moves from $1$ to $\pm i$. A similar evolution in the pure quartic oscillator is found in voros1983 .

This work was inspired by a nice seminar on “Parametrized Black Hole Quasinormal Ringdown” by Masashi Kimura at Rikkyo University. His talk was clear for non-experts, and drew my attention to this fascinating topic. I also thank him for telling me a lot of basics and references on the quasinormal modes. This work is supported by JSPS KAKENHI Grant Number JP18K03657.

Appendix A Review of Borel summation method

In this appendix, we briefly review the Borel summation method. This method is important in physics because we can probe nonperturbative corrections from perturbative series. We refer the reader to an educational lecture note marino2014 and to a comprehensive review aniceto2019 on this topic. Let

\[ f(g) \sim \sum_{k=0}^{\infty} a_k g^k \qquad (25) \]

be a formal perturbative series of a given function $f(g)$, where $\sim$ means that both sides are equal in the asymptotic sense. This point is not crucial in our analysis. We assume that the sequence $a_k$ satisfies the following condition:

\[ |a_k| \leq A\, C^k\, k!, \qquad (26) \]

where $A$ and $C$ are constants. Mathematically, series (25) with the condition (26) are called Gevrey-1 series. Most perturbative series in physics belong to this class, and therefore throughout this appendix we consider only Gevrey-1 series. Gevrey-1 series are in general divergent. For a divergent series, there exist several functions that have the same perturbative series, because of the Stokes phenomenon. It is far from obvious how to reconstruct the exact function $f(g)$ from its divergent perturbative series. The Borel summation provides us a hint on this problem. We first define the Borel transform of (25) by

\[ \mathcal{B}f(t) = \sum_{k=0}^{\infty} \frac{a_k}{k!}\, t^k. \qquad (27) \]

Due to the Gevrey-1 condition (26), the Borel transform has a finite radius of convergence. The radius of convergence is determined by the singularity of the Borel transform nearest to the origin in the complex $t$-plane (sometimes called the Borel plane). Note that if (25) is a convergent series, then its Borel transform is an entire function and has no singularities (except at $t = \infty$). Inverting the logic, the existence of singularities in the Borel plane means that the sequence $a_k$ diverges factorially in $k$. The Borel transform is analytically continued outside the circle of convergence. We denote this continuation by $\widehat{\mathcal{B}}f(t)$. The Borel sum is finally defined by the Laplace transform

\[ \widehat{f}(g) = \frac{1}{g} \int_0^{\infty} dt\, e^{-t/g}\, \widehat{\mathcal{B}}f(t). \qquad (28) \]

The Borel sum (28) has the same asymptotic perturbative expansion as the original series (25), because the factorial has the integral representation

\[ k! = \int_0^{\infty} dt\, e^{-t}\, t^k. \qquad (29) \]

Does the Borel sum agree with the exact function $f(g)$? Sometimes the answer is yes, but in general it is not true. The reason is as follows. We suppose $g > 0$. The Borel transform may have singularities in the Borel plane. If the Laplace transform in (28) is well-defined, then the series (25) is called Borel summable. Sometimes, however, some singularities of $\widehat{\mathcal{B}}f(t)$ are located on the positive real axis, and the Laplace transform in (28) is not defined. This case is called Borel non-summable.
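Before turning to the non-summable case, a classic Borel-summable example makes these definitions concrete. For the Euler series $\sum_k (-1)^k k!\, g^k$ one has $a_k = (-1)^k k!$, which saturates the Gevrey-1 bound (26); the Borel transform (27) is geometric and sums to $\mathcal{B}f(t) = 1/(1+t)$, analytic on the positive real axis, so the Laplace transform (28) exists. The short Python/mpmath sketch below (our own illustration, not from the paper) also shows the optimal-truncation behavior discussed in the main text:

```python
import mpmath as mp

g = mp.mpf('0.1')
# Borel sum (28): the Borel transform of sum_k (-1)^k k! g^k is 1/(1 + t).
borel = mp.quad(lambda s: mp.exp(-s) / (1 + g * s), [0, mp.inf])

# Partial sums approach the Borel sum and then fly off: optimal truncation.
partial, errors = mp.mpf(0), []
for k in range(25):
    partial += (-1)**k * mp.factorial(k) * g**k
    errors.append(abs(partial - borel))
print(borel)                                        # 0.91563333...
print(min(range(25), key=lambda k: errors[k]))      # best order ~ 1/g = 10
```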
In the Borel non-summable case, one has to deform the integration contour to avoid these singularities. There are two possible directions to do so, and one can define two modified Borel sums $\widehat{f}_\pm(g)$ for these deformed contours. Importantly, these modified Borel sums have the same perturbative expansion (25), but they have an exponentially small difference:

\[ \widehat{f}_+(g) - \widehat{f}_-(g) = \mathcal{O}\!\left( e^{-t_c/g} \right). \qquad (30) \]

The difference is nonperturbative in $g$. Its magnitude is determined by the singularity of the Borel transform nearest to the origin, at $t = t_c$ marino2014 . The discontinuity (30) is nothing but the Stokes phenomenon in the Borel summation. The positive real axis is the Stokes line. Quite interestingly, such an ambiguity is removed by taking into account nonperturbative corrections to the original perturbative series. Then the exact function is reconstructed, without the ambiguity, from the combination of the perturbative series and the nonperturbative corrections. Very roughly, we have

\[ f(g) = \widehat{f}_\pm(g) + S_\pm\, e^{-t_c/g} + \cdots, \qquad (31) \]

where $S_\pm$ are Stokes constants. The asymptotic expansion now takes a form like

\[ f(g) \sim \sum_{k=0}^{\infty} a_k g^k + e^{-t_c/g} \sum_{k=0}^{\infty} b_k g^k + \cdots, \qquad (32) \]

where $\cdots$ denotes the higher order nonperturbative corrections. The asymptotic expansion (32) is called the transseries expansion. In summary, if a perturbative expansion is Borel summable, one can expect that its Borel sum agrees with the exact function. (However, there is a counterexample to this statement in string theory grassi2015 .) The analysis in the main text corresponds to this case. If a perturbative expansion is Borel non-summable, there is an ambiguity in defining the modified Borel sums, and we have to add nonperturbative corrections to cancel out the ambiguity. The perturbative sector and the nonperturbative sector are interrelated in a non-trivial way. In fact, the equations (30) and (31) can be regarded as constraints on the nonperturbative corrections if we know the perturbative part. This is why the Borel summation is important both in mathematics and in physics. See marino2014 ; aniceto2019 for more detail.

References

• (1) B. P. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration), Observation of Gravitational Waves from a Binary Black Hole Merger, Phys. Rev. Lett. 116 (2016) 061102.
• (2) B. P. Abbott et al. (LIGO Scientific and Virgo Collaborations), Tests of General Relativity with GW150914, Phys. Rev. Lett. 116 (2016) 221101.
• (3) B. P. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration), GW151226: Observation of Gravitational Waves from a 22-Solar-Mass Binary Black Hole Coalescence, Phys. Rev. Lett. 116 (2016) 241103.
• (4) K. D. Kokkotas and B. G. Schmidt, Quasi-Normal Modes of Stars and Black Holes, Living Rev. Rel. 2 (1999) 2.
• (5) E. Berti, V. Cardoso and A. O. Starinets, Quasinormal modes of black holes and black branes, Class. Quant. Grav. 26 (2009) 163001.
• (6) R. A. Konoplya and A. Zhidenko, Quasinormal modes of black holes: From astrophysics to string theory, Rev. Mod. Phys. 83 (2011) 793–836.
• (7) B. F. Schutz and C. M. Will, Black hole normal modes - A semianalytic approach, Astrophys. J. 291 (1985) L33–L36.
• (8) N. Fröman, P. O. Fröman, N. Andersson and A. Hökback, Black-hole normal modes: Phase-integral treatment, Phys. Rev. D 45 (1992) 2609–2616.
• (9) S. Iyer and C. M. Will, Black-hole normal modes: A WKB approach. I. Foundations and application of a higher-order WKB analysis of potential-barrier scattering, Phys. Rev. D 35 (1987) 3621–3631.
• (10) R. A. Konoplya, Quasinormal behavior of the D-dimensional Schwarzschild black hole and the higher order WKB approach, Phys. Rev. D 68 (2003) 024018.
• (11) J. Matyjasek and M. Opala, Quasinormal modes of black holes: The improved semianalytic approach, Phys. Rev. D 96 (2017) 024011.
• (12) R. A. Konoplya, A. Zhidenko and A. F. Zinhailo, Higher order WKB formula for quasinormal modes and grey-body factors: Recipes for quick and accurate calculations, arXiv:1904.10333 [gr-qc].
• (13) H.-J. Blome and B. Mashhoon, Quasi-normal oscillations of a Schwarzschild black hole, Phys. Lett. A 100 (1984) 231–234.
• (14) V. Ferrari and B. Mashhoon, Oscillations of a Black Hole, Phys. Rev. Lett. 52 (1984) 1361–1364.
• (15) V. Ferrari and B. Mashhoon, New approach to the quasinormal modes of a black hole, Phys. Rev. D 30 (1984) 295–304.
• (16) C. M. Bender and T. T. Wu, Anharmonic Oscillator, Phys. Rev. 184 (1969) 1231–1260.
• (17) T. Sulejmanpasic and M. Ünsal, Aspects of perturbation theory in quantum mechanics: The BenderWu Mathematica package, Comput. Phys. Commun. 228 (2018) 273–289.
• (18) O. B. Zaslavskii, Black-hole normal modes and quantum anharmonic oscillator, Phys. Rev. D 43 (1991) 605–608.
• (19) A. Voros, The return of the quartic oscillator. The complex WKB method, Annales de l'I.H.P. Physique théorique 39 (1983) 211–338.
• (20) G. V. Dunne and M. Ünsal, Deconstructing zero: Resurgence, supersymmetry and complex saddles, JHEP 2016 (2016) 2.
• (21) C. Kozçaz, T. Sulejmanpasic, Y. Tanizaki and M. Ünsal, Cheshire Cat Resurgence, Self-Resurgence and Quasi-Exact Solvable Systems, Commun. Math. Phys. 364 (2018) 835–878.
• (22) S. Codesido and M. Mariño, Holomorphic anomaly and quantum mechanics, J. Phys. A: Math. Theor. 51 (2017) 055402.
• (23) S. Codesido, M. Mariño and R. Schiappa, Non-perturbative Quantum Mechanics from Non-perturbative Strings, Ann. Henri Poincaré 20 (2019) 543–603.
• (24) E. W. Leaver, An analytic representation for the quasi-normal modes of Kerr black holes, Proc. R. Soc. Lond. A 402 (1985) 285–298.
• (25) E. W. Leaver, Quasinormal modes of Reissner–Nordström black holes, Phys. Rev. D 41 (1990) 2986–2997.
• (26) H. Onozawa, T. Mishima, T. Okamura and H. Ishihara, Quasinormal modes of maximally charged black holes, Phys. Rev. D 53 (1996) 7033–7040.
• (27) N. Andersson, Normal-mode frequencies of Reissner–Nordström black holes, Proc. R. Soc. Lond. A 442 (1993) 427–436.
• (28) V. Cardoso, M. Kimura, A. Maselli, E. Berti, C. F. B. Macedo and R. McManus, Parametrized black hole quasinormal ringdown: Decoupled equations for nonrotating black holes, Phys. Rev. D 99 (2019) 104077.
• (29) J. L. Dunham, The Wentzel-Brillouin-Kramers Method of Solving the Wave Equation, Phys. Rev. 41 (1932) 713–720.
• (30) R. Balian, G. Parisi and A. Voros, Discrepancies from Asymptotic Series and Their Relation to Complex Classical Trajectories, Phys. Rev. Lett. 41 (1978) 1141–1144.
• (31) M. Mariño, Lectures on non-perturbative effects in large N gauge theories, matrix models and strings, Fortsch. Phys. 62 (2014) 455–540.
• (32) I. Aniceto, G. Başar and R. Schiappa, A primer on resurgent transseries and their asymptotics, Phys. Rept. 809 (2019) 1–135.
• (33) A. Grassi, M. Mariño and S. Zakany, Resumming the string perturbation series, JHEP 2015 (2015) 38.
Schrödinger's equation — what is it?

Marianne Freiberger

Here is a typical textbook question. Your car has run out of petrol. With how much force do you need to push it to accelerate it to a given speed? The answer comes from Newton's second law of motion:

\[ F=ma, \]

where $a$ is acceleration, $F$ is force and $m$ is mass. This wonderfully straightforward, yet subtle law allows you to describe motion of all kinds and so it can, in theory at least, answer pretty much any question a physicist might want to ask about the world.

Schrödinger's equation is named after Erwin Schrödinger, 1887-1961.

Or can it? When people first started considering the world at the smallest scales, for example electrons orbiting the nucleus of an atom, they realised that things get very weird indeed and that Newton's laws no longer apply. To describe this tiny world you need quantum mechanics, a theory developed at the beginning of the twentieth century. The core equation of this theory, the analogue of Newton's second law, is called Schrödinger's equation.

Waves and particles

"In classical mechanics we describe a state of a physical system using position and momentum," explains Nazim Bouatta, a theoretical physicist at the University of Cambridge. For example, if you've got a table full of moving billiard balls and you know the position and the momentum (that's the mass times the velocity) of each ball at some time $t$, then you know all there is to know about the system at that time $t$: where everything is, where everything is going and how fast.

"The kind of question we then ask is: if we know the initial conditions of a system, that is, we know the system at time $t_0,$ what is the dynamical evolution of this system? And we use Newton's second law for that. In quantum mechanics we ask the same question, but the answer is tricky because position and momentum are no longer the right variables to describe [the system]."

The problem is that the objects quantum mechanics tries to describe don't always behave like tiny little billiard balls. Sometimes it is better to think of them as waves. "Take the example of light. Newton, apart from his work on gravity, was also interested in optics," says Bouatta. "According to Newton, light was described by particles. But then, after the work of many scientists, including the theoretical understanding provided by James Clerk Maxwell, we discovered that light was described by waves."

But in 1905 Einstein realised that the wave picture wasn't entirely correct either. To explain the photoelectric effect (see the Plus article Light's identity crisis) you need to think of a beam of light as a stream of particles, which Einstein dubbed photons. The number of photons is proportional to the intensity of the light, and the energy E of each photon is proportional to its frequency f:

\[ E=hf. \]

Here $h=6.626068 \times 10^{-34} m^2kg/s$ is Planck's constant, an incredibly small number named after the physicist Max Planck who had already guessed this formula in 1900 in his work on black body radiation. "So we were facing the situation that sometimes the correct way of describing light was as waves and sometimes it was as particles," says Bouatta.

The double slit experiment

Einstein's result linked in with the age-old endeavour, started in the 17th century by Christiaan Huygens and explored again in the 19th century by William Hamilton: to unify the physics of optics (which was all about waves) and mechanics (which was all about particles).
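Before going further, it is worth getting a feel for the numbers in $E=hf$ (a tiny Python sketch of our own; the frequency is a representative value for green light):

```python
h = 6.626068e-34        # Planck's constant in m^2 kg / s (i.e. joule-seconds)
f = 5.6e14              # a typical frequency of green light, in hertz
E = h * f               # energy carried by a single photon, in joules
print(E)                # ~3.7e-19 J
print(E / 1.602e-19)    # ~2.3 electron volts
```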
Inspired by the schizophrenic behaviour of light the young French physicist Louis de Broglie took a dramatic step in this journey: he postulated that not only light, but also matter suffered from the so-called wave-particle duality. The tiny building blocks of matter, such as electrons, also behave like particles in some situations and like waves in others. De Broglie's idea, which he announced in the 1920s, wasn't based on experimental evidence, rather it sprung from theoretical considerations inspired by Einstein's theory of relativity. But experimental evidence was soon to follow. In the late 1920s experiments involving particles scattering off a crystal confirmed the wave-like nature of electrons (see the Plus article Quantum uncertainty).

One of the most famous demonstrations of wave-particle duality is the double slit experiment. In it electrons (or other particles like photons or neutrons) are fired one at a time at a screen containing two slits. Behind the screen there's a second one which can detect where the electrons that made it through the slits end up. If the electrons behaved like particles, then you would expect them to pile up around two straight lines behind the two slits. But what you actually see on the detector screen is an interference pattern: the pattern you would get if the electrons were waves, each wave passing through both slits at once and then interfering with itself as it spreads out again on the other side. Yet on the detector screen, the electrons are registered as arriving just as you would expect: as particles. It's a very weird result indeed but one that has been replicated many times — we simply have to accept that this is the way the world works.

Schrödinger's equation

The radical new picture proposed by de Broglie required new physics. What does a wave associated to a particle look like mathematically? Einstein had already related the energy $E$ of a photon to the frequency $f$ of light, which in turn is related to the wavelength $\lambda $ by the formula $\lambda = c/f.$ Here $c$ is the speed of light. Using results from relativity theory it is also possible to relate the energy of a photon to its momentum. Putting all this together gives the relationship $\lambda =h/p$ between the photon's wavelength $\lambda $ and momentum $p$ ($h$ again is Planck's constant). (See Light's identity crisis for details.)

Following on from this, de Broglie postulated that the same relationship between wavelength and momentum should hold for any particle. At this point it's best to suspend your intuition about what it really means to say that a particle behaves like a wave (we'll have a look at that in the third article) and just follow through with the mathematics.

In classical mechanics the evolution over time of a wave, for example a sound wave or a water wave, is described by a wave equation: a differential equation whose solution is a wave function, which gives you the shape of the wave at any time $t$ (subject to suitable boundary conditions). For example, suppose you have waves travelling through a string that is stretched out along the $x$-axis and vibrates in the $xy$-plane. In order to describe the wave completely, you need to find the displacement $y(x,t)$ of the string in the $y$-direction at every point $x$ and every time $t$.
Using Newton's second law of motion it is possible to show that $y(x,t)$ obeys the following wave equation:

\[ \frac{\partial ^2y}{\partial x^2} = \frac{1}{v^2} \frac{\partial ^2 y}{\partial t^2}, \]

where $v$ is the speed of the waves.

A snapshot in time of a string vibrating in the xy-plane. The wave shown here is described by the cosine function.

A general solution $y(x,t)$ to this equation is quite complicated, reflecting the fact that the string can be wiggling around in all sorts of ways, and that you need more information (initial conditions and boundary conditions) to find out exactly what kind of motion it is. But as an example, the function

\[ y(x,t)=A \cos {\omega (t-\frac{x}{v})} \]

describes a wave travelling in the positive $x$-direction with an angular frequency $\omega $, so as you would expect, it is a possible solution to the wave equation.

By analogy, there should be a wave equation governing the evolution of the mysterious "matter waves", whatever they may be, over time. Its solution would be a wave function $\Psi $ (but resist thinking of it as describing an actual wave) which tells you all there is to know about your quantum system — for example a single particle moving around in a box — at any time $t$. It was the Austrian physicist Erwin Schrödinger who came up with this equation in 1926. For a single particle moving around in three dimensions the equation can be written as

\[ \frac{ih}{2\pi } \frac{\partial \Psi }{\partial t} = -\frac{h^2}{8 \pi ^2 m} \left(\frac{\partial ^2 \Psi }{\partial x^2} + \frac{\partial ^2 \Psi }{\partial y^2} + \frac{\partial ^2 \Psi }{\partial z^2}\right) + V\Psi . \]

Here $V$ is the potential energy of the particle (a function of $x$, $y$, $z$ and $t$), $i=\sqrt{-1},$ $m$ is the mass of the particle and $h$ is Planck's constant. The solution to this equation is the wave function $\Psi (x,y,z,t).$

In some situations the potential energy does not depend on time $t.$ In this case we can often solve the problem by considering the simpler time-independent version of the Schrödinger equation for a function $\psi $ depending only on space, i.e. $\psi =\psi (x,y,z):$

\[ \frac{\partial ^2 \psi }{\partial x^2} + \frac{\partial ^2 \psi }{\partial y^2} + \frac{\partial ^2 \psi }{\partial z^2} + \frac{8 \pi ^2 m}{h^2}(E-V)\psi = 0, \]

where $E$ is the total energy of the particle. The solution $\Psi $ to the full equation is then

\[ \Psi = \psi e^{-(2 \pi i E/h)t}. \]

These equations apply to one particle moving in three dimensions, but they have counterparts describing a system with any number of particles. And rather than formulating the wave function as a function of position and time, you can also formulate it as a function of momentum and time.
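The claim that $\Psi = \psi e^{-(2 \pi i E/h)t}$ solves the full equation whenever $\psi $ solves the time-independent one can be checked mechanically. Here is a Python/SymPy sketch of our own, written in one dimension for brevity:

```python
import sympy as sp

x, t, m, E, h = sp.symbols('x t m E h', positive=True)
V = sp.Function('V')(x)          # a potential that does not depend on time
psi = sp.Function('psi')(x)

# Time-independent equation: psi'' + (8 pi^2 m / h^2)(E - V) psi = 0
tise = sp.Eq(psi.diff(x, 2), -8 * sp.pi**2 * m / h**2 * (E - V) * psi)

Psi = psi * sp.exp(-2 * sp.pi * sp.I * E / h * t)
lhs = sp.I * h / (2 * sp.pi) * Psi.diff(t)
rhs = -h**2 / (8 * sp.pi**2 * m) * Psi.diff(x, 2) + V * Psi

# Substituting the time-independent equation makes both sides agree.
print(sp.simplify((lhs - rhs).subs(psi.diff(x, 2), tise.rhs)))   # 0
```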
Enter uncertainty

We'll see how to solve Schrödinger's equation for a simple example in the second article, and also that its solution is indeed similar to the mathematical equation that describes a wave. But what does this solution actually mean? It doesn't give you a precise location for your particle at a given time $t$, so it doesn't give you the trajectory of a particle over time. Rather it's a function which, at a given time $t,$ gives you a value $\Psi (x,y,z,t)$ for all possible locations $(x,y,z)$. What does this value mean? In 1926 the physicist Max Born came up with a probabilistic interpretation. He postulated that the square of the absolute value of the wave function,

\[ |\Psi (x,y,z,t)|^2 \]

gives you the probability density for finding the particle at position $(x,y,z)$ at time $t$. In other words, the probability that the particle will be found in a region $R$ at time $t$ is given by the integral

\[ \int _{R} |\Psi (x,y,z,t)|^2 dxdydz. \]

(You can find out more about probability densities in any introduction to probability theory.)

Werner Heisenberg, 1901-1976.

This probabilistic picture links in with a rather shocking consequence of de Broglie's formula for the wavelength and momentum of a particle, discovered by Werner Heisenberg in 1927. Heisenberg found that there is a fundamental limit to the precision to which you can measure the position and the momentum of a moving particle. The more precise you want to be about the one, the less you can say about the other. And this is not down to the quality of your measuring instrument, it is a fundamental uncertainty of nature. This result is now known as Heisenberg's uncertainty principle and it's one of the results that's often quoted to illustrate the weirdness of quantum mechanics. It means that in quantum mechanics we simply cannot talk about the location or the trajectory of a particle.

"If we believe in this uncertainty picture, then we have to accept a probabilistic account [of what is happening] because we don't have exact answers to questions like 'where is the electron at time $t_0$?'," says Bouatta. In other words, all you can expect from the mathematical representation of a quantum state, from the wave function, is that it gives you a probability.

Whether or not the wave function has any physical interpretation was and still is a touchy question. "The question was, we have this wave function, but are we really thinking that there are waves propagating in space and time?" says Bouatta. "De Broglie, Schrödinger and Einstein were trying to provide a realistic account, that it's like a light wave, for example, propagating in a vacuum. But [the physicists] Wolfgang Pauli, Werner Heisenberg and Niels Bohr were against this realistic picture. For them the wave function was only a tool for computing probabilities." We'll have a closer look at the interpretation of the wave function in the third article of this series.
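Returning to Born's rule for a moment, here is what it looks like numerically (a small Python sketch of our own; the Gaussian is just an arbitrary normalised wave function at one fixed moment in time):

```python
import numpy as np

sigma = 1.0
x = np.linspace(-10, 10, 20001)
# |Psi|^2 for a normalised Gaussian wave function at a fixed time t:
density = np.exp(-x**2 / sigma**2) / (sigma * np.sqrt(np.pi))

print(np.trapz(density, x))                  # total probability: 1.0
inside = np.abs(x) < sigma
print(np.trapz(density[inside], x[inside]))  # P(|x| < sigma) = erf(1) ~ 0.843
```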
Does it work?

Louis de Broglie, 1892-1987.

Why should we believe this rather fantastical set-up? In this article we have presented Schrödinger's equation as if it were plucked out of thin air, but where does it actually come from? How did Schrödinger derive it? The famous physicist Richard Feynman considered this a futile question: "Where did we get that [equation] from? It's not possible to derive it from anything you know. It came out of the mind of Schrödinger."

Yet, the equation has held its own in every experiment so far. "It's the most fundamental equation in quantum mechanics," says Bouatta. "It's the starting point for every quantum mechanical system we want to describe: electrons, protons, neutrons, whatever." The equation's earliest success, which was also one of Schrödinger's motivations, was to describe a phenomenon that had helped to give birth to quantum mechanics in the first place: the discrete energy spectrum of the hydrogen atom. According to Ernest Rutherford's atomic model, the frequency of radiation emitted by atoms such as hydrogen should vary continuously. Experiments showed, however, that it doesn't: the hydrogen atom only emits radiation at certain frequencies, there is a jump when the frequency changes. This discovery flew in the face of conventional wisdom, which endorsed a maxim set out by the 17th century philosopher and mathematician Gottfried Leibniz: "nature does not make jumps".

In 1913 Niels Bohr came up with a new atomic model in which electrons are restricted to certain energy levels. Schrödinger applied his equation to the hydrogen atom and found that his solutions exactly reproduced the energy levels stipulated by Bohr. "This was an amazing result — and one of the first major achievements of Schrödinger's equation," says Bouatta.

With countless experimental successes under its belt, Schrödinger's equation has become the established analogue of Newton's second law of motion for quantum mechanics. Now let's see Schrödinger's equation in action, using the simple example of a particle moving around in a box. We will also explore another weird consequence of the equation called quantum tunneling.

Read the next article: Schrödinger's equation — in action. But if you don't feel like doing the maths you can skip straight to the third article which explores the interpretation of the wave function.

About this article

Nazim Bouatta is a Postdoctoral Fellow in Foundations of Physics at the University of Cambridge. Marianne Freiberger is Editor of Plus. She interviewed Bouatta in Cambridge in May 2012. She would also like to thank Jeremy Butterfield, a philosopher of physics at the University of Cambridge, and Tony Short, a Royal Society Research Fellow in Foundations of Quantum Physics at the University of Cambridge, for their help in writing these articles.
At the beginning of the twentieth century, experimental evidence suggested that atomic particles were also wave-like in nature. For example, electrons were found to give diffraction patterns when passed through a double slit in a similar way to light waves. Therefore, it was reasonable to assume that a wave equation could explain the behaviour of atomic particles. Schrödinger was the first person to write down such a wave equation. Much discussion then centred on what the equation meant. The eigenvalues of the wave equation were shown to be equal to the energy levels of the quantum mechanical system, and the best test of the equation was when it was used to solve for the energy levels of the hydrogen atom, and the energy levels were found to be in accord with Rydberg's law. It was initially much less obvious what the wavefunction of the equation was. After much debate, the wavefunction is now accepted to be a probability distribution. The Schrödinger equation is used to find the allowed energy levels of quantum mechanical systems (such as atoms, or transistors). The associated wavefunction gives the probability of finding the particle at a certain position.

Answered by: Ian Taylor, Ph.D., Theoretical Physics (Cambridge), PhD (Durham), UK

The solution to the Schrödinger equation is a wave that describes the quantum aspects of a system. However, physically interpreting the wave is one of the main philosophical problems of quantum mechanics. The solution to the equation is based on the method of eigenvalues devised by Fourier, in which any mathematical function is expressed as the sum of an infinite series of other periodic functions. The trick is to find the correct functions that have the right amplitudes so that when added together by superposition they give the desired solution. So the solution to Schrödinger's equation, the wave function for the system, was replaced by the wave functions of the individual series, natural harmonics of each other, an infinite series. Schrödinger discovered that the replacement waves described the individual states of the quantum system, and that their amplitudes gave the relative importance of that state to the whole system.
Schrödinger's equation shows all of the wave-like properties of matter and was one of the greatest achievements of 20th-century science. It is used in physics and in most of chemistry to deal with problems about the atomic structure of matter. It is an extremely powerful mathematical tool and the whole basis of wave mechanics.

Answered by: Simon Hooks, Physics A-Level Student, Gosport, UK

The Schrödinger equation is the name of the basic non-relativistic wave equation used in one version of quantum mechanics to describe the behaviour of a particle in a field of force. There is the time-dependent equation used for describing progressive waves, applicable to the motion of free particles, and the time-independent form of this equation used for describing standing waves. Schrödinger's time-independent equation can be solved analytically for a number of simple systems. The time-dependent equation is of the first order in time but of the second order with respect to the co-ordinates, hence it is not consistent with relativity. The solutions for bound systems give three quantum numbers, corresponding to three co-ordinates, and an approximate relativistic correction is possible by including a fourth spin quantum number.

In the article, as a solution to the Schrödinger equation, the function is given (in Maple notation): Psi(t) := psi*exp(-(2*Pi*I*E/h)*t); If one then takes (|Psi(t)|)**2, one would expect the integral for t from zero to infinity to be equal to 1. The integral (in Maple notation): Int((abs(psi*exp(-(2*Pi*I*E/h)*t)))**2, t = 0 .. infinity); however, is equal to infinity. What error did I make?
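A short check shows where the reasoning goes astray (our note, in Python/SymPy rather than Maple): for real $E$ the time factor is a pure phase, so it drops out of $|\Psi|^2$ entirely; the integrand is the constant $|\psi|^2$, whose integral over all time is naturally infinite. The normalisation condition is instead an integral over space at a fixed time, $\int |\psi(x)|^2\, dx = 1$.

```python
import sympy as sp

t, E, h = sp.symbols('t E h', real=True, positive=True)
psi = sp.Symbol('psi')        # the spatial factor, at one fixed point x

Psi = psi * sp.exp(-2 * sp.pi * sp.I * E / h * t)
print(sp.simplify(sp.Abs(Psi)**2))   # Abs(psi)**2: all t-dependence is gone
```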
Quantum-Dynamical Theory of Electron Exchange Correlation

Burke Ritchie and Charles A. Weatherford, Advances in Physical Chemistry, vol. 2013, Article ID 497267, 8 pages, 2013. Received 4 Nov 2012; Revised 13 Jan 2013; Accepted 14 Jan 2013; Published 20 Mar 2013.

The relationship between the spin of an individual electron and Fermi-Dirac statistics (FDS), which is obeyed by electrons in the aggregate, is elucidated. The relationship depends on the use of spin-dependent quantum trajectories (SDQT) to evaluate Coulomb's law between any two electrons as an instantaneous interaction in space and time rather than as a quantum-mean interaction in the form of screening and exchange potentials. Hence FDS depends in an ab initio sense on the inference of SDQT from Dirac's equation, which provides for relativistic Lorentz invariance and a permanent magnetic moment (or spin) in the electron's equation of motion. Schroedinger's time-dependent equation can be used to evaluate the SDQT in the nonrelativistic regime of electron velocity. Remarkably, FDS is a relativistic property of an ensemble of electrons, even though it is of order $c^0$ in the nonrelativistic limit, in agreement with experimental observation. Finally it is shown that covalent versus separated-atoms limits can be characterized by the SDQT. As an example of the use of SDQT in a canonical structure problem, the energies of the 1Σg and 3Σu states of H2 are calculated and compared with the accurate variational energies of Kolos and Wolniewicz.

1. Introduction

One may consider that quantum chemistry is dominated by theoretical and computational efforts to achieve an accurate description of electron exchange correlation, evolving such workhorse methodologies as Hartree-Fock plus configuration interaction, density functional theory, and numerous variations on the theme of nonrelativistic quantum mechanics applied to problems of chemical interest. And yet, owing to historical happenstance, more heat than light has been generated concerning the fundamental physical understanding of exchange correlation. Even in early calculations in which correlation was built into the wave function, it was recognized that the concept of exchange tended to lose meaning in a calculation in which correlation was treated to high accuracy [1]. As another example, it was shown that a high-order perturbation calculation in which the electron-electron interaction is treated as the perturbation is able to achieve, order by order, the correct permutation symmetry of the wave function, starting in zeroth order with a simple unsymmetrized product of orbitals [2]. This conundrum can be easily understood for the two-electron, one-nucleus problem by a simple change from nucleus-centered to Jacobi coordinates, in which one vector connects the two electrons and the other connects the center of mass of the two electrons to the nucleus (where of course it is understood that the Born-Oppenheimer picture is being used). Then the wave function separates naturally into two linearly-independent wave functions which are either even or odd in the exchange of electrons.
In other words, the new choice of coordinates, in which correlation is achieved through a physically judicious choice of coordinates, also achieves the correct exchange symmetry, which is guaranteed by the symmetry of the Hamiltonian with respect to the electron exchange. Nonrelativistic quantum theory fails us, however, even for two-electron problems. According to experimental observation, the electrons have intrinsic angular momentum comprising two spin-1/2 states, such that the total wave function must be antisymmetric on electron exchange, which is satisfied either by a product of the even spatial wave function adduced above with a spin state which is odd on electron exchange (singlet state) or by a product of the adduced odd spatial wave function with a spin state which is even on electron exchange (triplet state). The generalization is of course the Slater determinantal wave function of spin orbitals, which guarantees the correct antisymmetric permutation symmetry for N spin-1/2 particles. Hence the standard methodologies [3], for all their success, simulate many-electron quantum states in an ad hoc manner, based on experimental observation, but tell us nothing fundamental about the physical basis of how the spin state of an individual electron is related to the Pauli exclusion principle and the observed Fermi-Dirac statistical behavior of an aggregate of electrons.

2. Spin-Dependent Quantum Trajectories and Electron Exchange Correlation

Readers should recognize that an electron's spin state and its spatial correlation with another electron are strongly related, as suggested by our observation that a successful simulation of correlation is also accompanied by the correct exchange symmetry. Notice, however, that such a relationship between electron spin and the electron-electron Coulomb interaction appears to be grossly at odds with our intuitive understanding of quantum chemistry, likely due to the absence of a particle-trajectory picture in the standard methodologies. Bohm formulated a quantum dynamical approach in which the phase-amplitude solution of Schroedinger's equation has a formal relationship to classical hydrodynamics [4, 5]. Bohmian dynamics has recently undergone a renewed interest [6]. It is a method of solving the time-dependent Schroedinger equation by assuming a form for the wavefunction, $\psi = R\, e^{iS/\hbar}$, where R (such that R is greater than or equal to zero for all x) is the amplitude and S is the action function. Upon substitution into the time-dependent Schroedinger equation, two coupled equations (the continuity equation and the quantum Hamilton-Jacobi equation) are obtained. The net result is to produce quantum trajectories. These equations have proven to be difficult to solve accurately, particularly for multiple electrons [6]. In fact, it is difficult to solve in three spatial dimensions even for one electron. Note that in the context of the Bohmian equations, the discretization is done in terms of what are called pseudoparticles. Thus, when it is said that, say, fifty particles are propagated, it typically means fifty pseudoparticles are used to describe the dynamics of one real particle. As far as we are aware, no one has computed the dynamics of two real particles using Bohmian dynamics, and in fact it does not seem possible to do so with current algorithms and computer hardware.
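To make the idea of a quantum trajectory concrete in the simplest possible setting, here is a minimal Python sketch of our own (not the authors' code): it integrates the Bohmian velocity field $v = (\hbar/m)\,\mathrm{Im}(\psi'/\psi)$ for one pseudoparticle riding a freely spreading one-dimensional Gaussian packet.

```python
import numpy as np

hbar, m, sigma0 = 1.0, 1.0, 1.0

def psi(x, t):
    """Freely spreading 1D Gaussian wave packet with zero mean momentum."""
    st = sigma0 * (1 + 1j * hbar * t / (2 * m * sigma0**2))
    return (2 * np.pi * st**2) ** -0.25 * np.exp(-x**2 / (4 * sigma0 * st))

def velocity(x, t, dx=1e-5):
    """Quantum (Bohmian) velocity field v = (hbar/m) Im(psi'/psi)."""
    dpsi = (psi(x + dx, t) - psi(x - dx, t)) / (2 * dx)
    return hbar / m * np.imag(dpsi / psi(x, t))

# Integrate one trajectory with a simple Euler step.
x, dt = 1.0, 1e-3
for step in range(5000):
    x += velocity(x, step * dt) * dt
# The trajectory rides outward with the spreading packet; the exact result
# is x0 * sqrt(1 + (hbar * t / (2 * m * sigma0**2))**2), about 2.69 at t = 5.
print(x)
```

For real interacting electrons one would of course propagate the wave functions themselves, as the authors do below; the sketch only illustrates how a velocity field extracted from a wave function defines a trajectory.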
We therefore seek a quantum trajectory approach for real electrons in which the spin of an individual electron is correctly accounted for, which means that we must look to Dirac's rather than to Schroedinger's equation. One of the desiderata of Dirac's program [7] for incorporating Einstein's special theory of relativity into quantum mechanics was that the quantum analog of the classical equation of continuity,

\[ \frac{\partial \rho}{\partial t} + \vec{\nabla} \cdot \vec{j} = 0, \qquad (1) \]

should be inferred from the equation of motion for the relativistic electron, which in two-component form reads

\[ i\hbar \frac{\partial \psi_L}{\partial t} = c\, \vec{\sigma} \cdot \vec{p}\; \psi_S + (V + mc^2)\, \psi_L, \qquad (2a) \]

\[ i\hbar \frac{\partial \psi_S}{\partial t} = c\, \vec{\sigma} \cdot \vec{p}\; \psi_L + (V - mc^2)\, \psi_S, \qquad (2b) \]

whereupon [7]

\[ \vec{j} = c \left( \psi_L^{\dagger}\, \vec{\sigma}\, \psi_S + \psi_S^{\dagger}\, \vec{\sigma}\, \psi_L \right), \qquad (3) \]

where $\vec{\sigma}$ is Pauli's vector, and the quantum density is

\[ \rho = \psi_L^{\dagger} \psi_L + \psi_S^{\dagger} \psi_S, \qquad (4) \]

where $\psi_L$ and $\psi_S$ are the large and small Dirac spinors, respectively, and the superscripts denote Hermitian conjugates. On eliminating (2b) in favor of (2a) and dropping all terms of order $c^{-2}$, which is the nonrelativistic limit, Schroedinger's equation is recovered for $\psi_L$, which was another desideratum in Dirac's program for a relativistic-electron theory. Now notice that a velocity field can be inferred from the current by writing

\[ \vec{j} = \rho\, \vec{v} \qquad (5) \]

and solving for $\vec{v}$, from which a position field can be calculated from the time integration of the velocity field,

\[ \frac{d\vec{r}}{dt} = \vec{v}(\vec{r}, t), \qquad (6) \]

and finally a trajectory can be calculated by finding the quantum expectation value of the position field,

\[ \langle \vec{r}(t) \rangle = \int d^3 r_0\, \rho(\vec{r}_0, 0)\, \vec{r}(t; \vec{r}_0). \qquad (7) \]

One more step is needed to show the relationship of electron spin and electron exchange correlation, namely the evaluation of the electron-electron Coulomb interaction using quantum trajectories. Consider the case of two electrons, one of which is described by (1)–(7). The Coulomb interaction for an electron with position vector $\vec{r}$ with the other electron with position vector $\vec{r}\,'$ can then be written as follows:

\[ V(\vec{r}, t) = \frac{e^2}{|\vec{r} - \vec{r}\,'(t)|}. \qquad (8) \]

Similarly the electron at $\vec{r}\,'$ is described by (1)–(7) in which $\vec{r}$ is replaced by $\vec{r}\,'$, and the interaction is given by

\[ V(\vec{r}\,', t) = \frac{e^2}{|\vec{r}\,' - \vec{r}(t)|}. \qquad (9) \]

Thus the electron-electron potential is evaluated as an instantaneous interaction in space and time rather than as a quantum-mean interaction in the form of Coulomb and exchange integrals. This is a critical step. Normally the interaction is written as $e^2/|\vec{r} - \vec{r}\,'|$, which is a function in 6-space well known to be incompatible with Lorentz invariance [8], which is a requirement for a correct relativistic theory. The authors' comment [8], following their equation (38.4), “the Coulomb term [e-e′ interaction] in (38.3), however, is not even approximately Lorentz invariant, and relativistic corrections to the interaction between the two electrons are furnished by quantum electrodynamics,” refers to the Lorentz-invariant form of the e-e′ interaction given by its representation in QED in terms of the exchange of virtual photons. Notice that in the present paper we are claiming that these relativistic corrections, at least for nonradiative interactions, are given by representing the e-e′ or e′-e interactions in the forms given by (8)-(9) and by calculating the trajectories using (2a)–(7). This is so because the 3-space and time form of (8)-(9) is fully compatible with the Lorentz invariance of Dirac's one-electron equation as given by (2a) and (2b), in which V is explicitly time dependent and can be written as $V(\vec{r}, t)$. In other words the time-independent 6-space potential $e^2/|\vec{r} - \vec{r}\,'|$ is represented in 3-space and the time by using the position vector of one electron and the trajectory of the other. Readers will note that Dirac's one-electron equation, as given for example in the literature for the hydrogen atom, is exact and therefore Lorentz invariant for the Coulomb attraction to the nucleus, $-Ze^2/r$. It is still exact and Lorentz invariant for the general scalar potential $V(\vec{r}, t)$.
This is a true statement because in the electron's 4-momentum, $p^\mu = (E/c, \vec{p})$, the scalar part $E$ can be expressed in terms of $V$ on using the operator form of the classical relativistic Hamiltonian, $E = \gamma m c^2 + V$, where $\gamma$ is the Lorentz factor $(1 - v^2/c^2)^{-1/2}$. We can only guess that workers have not used the present methodology to find a fully relativistic, Lorentz-invariant theory for many fermions due historically to the dominance of time-independent quantum theory in the absence of an explicitly time-dependent interaction such as the vector potential, $\vec{A}(\vec{r}, t)$, in matter-radiation physics, in which case the 4-momentum is written $p^\mu - \frac{e}{c} A^\mu$. Equations (8)-(9) are functions of 3-space and the time and are fully compatible with the Lorentz invariance of Dirac's one-electron Hamiltonian and with the Lorentz covariance of his one-electron wave equation [7]. Notice that we do not venture onto the ice of two-body Dirac theory [9]. Proceeding heuristically, we believe that it is possible to derive a correct many-electron relativistic quantum theory simply by replacing Newton's equation of motion for each electron in an aggregate of electrons by Dirac's equation for each electron in the aggregate, thusly, for an electron whose position vector is $\vec{r}$ interacting with another electron whose position vector is $\vec{r}\,'$,

\[ m \frac{d^2 \vec{r}}{dt^2} = -\vec{\nabla} V \;\longrightarrow\; i\hbar \frac{\partial \Psi}{\partial t} = \left[ c\, \vec{\alpha} \cdot \vec{p} + \beta\, m c^2 + V(\vec{r}, t)\, I \right] \Psi, \qquad (10) \]

where $\vec{p} = -i\hbar \vec{\nabla}$, $\Psi = \binom{\psi_L}{\psi_S}$, $\vec{\alpha} = \begin{pmatrix} 0 & \vec{\sigma} \\ \vec{\sigma} & 0 \end{pmatrix}$, $\beta = \begin{pmatrix} I & 0 \\ 0 & -I \end{pmatrix}$, $V(\vec{r}, t)$ comprises the nuclear attraction together with the interaction (8), and I is the identity matrix. Equations (2a) and (2b) are identical to the right-hand side of the arrow in (10), which is just another way of writing Dirac's equation. Pauli's exclusion principle is obeyed if (2a) and (2b) and their counterparts for a second electron are evolved in the time for identical spatial functions and opposite spin states at initial time (Figure 1), which corresponds to a singlet state, or for different spatial functions and the same spin states at initial time (Figure 2), which corresponds to a triplet state. Notice that Figure 1 shows quantum trajectories with slightly different frequencies, which can happen because spatial wave functions which are identical at initial time can develop differences over time, in analogy to the use of unrestricted Hartree-Fock theory in standard time-independent theory. It should be noted that, although one must choose a set of initial conditions for the time integration of the equations, the problem is not an initial-value problem in the same sense as in classical mechanics, in which, due to the deterministic nature of the trajectories, one must find the physical result as an average over a set of different initial conditions since we cannot know a unique set of initial conditions for the particles. The reason can be found in the Heisenberg uncertainty principle, $\Delta E\, \Delta t \geq \hbar/2$, which governs the time integration of the quantum wave equations: near initial time the energy is very broad, and it narrows with increasing time to give a spectrum of energy levels (Figure 3). Quantum dynamics, however, does share with classical dynamics the property that stationary (i.e., constant energy) solutions are obtained. The envelope of trajectory amplitudes shown in Figure 1 is observed to increase over time because the quantum trajectories (hereafter called spectral trajectories) are calculated using the time-dependent solutions (hereafter called the spectral solutions [10]), which are a superposition of quantum states including the continuum, such that the envelope growth over time reflects the excited and continuum content of the wave function.
In contrast to the spectral trajectories of Figure 1, the envelope of trajectory amplitudes shown in Figure 4 is not observed to increase over time because the trajectories (hereafter called eigentrajectories) are calculated using eigenfunctions as filtered from the spectral solutions at a specific stationary energy as determined from Figure 3, as described in [10]. In some of the calculations Hamiltonian expectation values calculated using eigenfunctions and eigentrajectories (Figure 5) still exhibit an oscillation in the time. In these cases the physical energies can be found from the time average of the Hamiltonian expectation values (Figures 5 and 6); in other words the time average is sensibly stationary.

3. Quantum Trajectories in the Nonrelativistic Limit

The time-independent solution of (2b) can be written as (11), $\psi_B = \frac{c \, \boldsymbol{\sigma} \cdot \mathbf{p}}{E + mc^2 - V} \, \psi_A$. Evaluating (11) in the nonrelativistic limit, for which $\psi_B \approx (\boldsymbol{\sigma} \cdot \mathbf{p}/2mc) \, \psi_A$, we immediately see that the current given by (3) is of order $v/c$ in this limit. Remarkably, this ordering of the quantum trajectories is in agreement with the experimental observation that Fermi-Dirac statistics is obeyed for all electron velocities, including those whose Dirac current is calculated in the regime of nonrelativistic velocity. The trajectories are quantum mechanical since they can be calculated either using Dirac's equation in the regime of relativistic velocities or Schrödinger's equation in the regime of nonrelativistic velocities. The quantum trajectories are spin dependent since they depend explicitly on Pauli's vector. Our result, for the first time, gives a relativistic correction which is of the same order as Schrödinger's contributions, all other relativistic corrections being of higher order in $v/c$. This relativistic correction is due exclusively to the spin of the electron and accounts for Fermi-Dirac statistics. Traditionally relativistic corrections have been considered to start at relative order $(v/c)^2$ for atomic fine structure. Readers will appreciate the apparent paradox that we had to appeal to Dirac's equation to explain the omnipresence of electron spin in all fermionic phenomena notwithstanding the relativistic or nonrelativistic nature of the motion. We believe that this unforeseen Dirac correction, which is of course of the same order as all Schrödinger's contributions to the Hamiltonian, is due to the Lorentz covariance [7] of Dirac's equation appropriate for a spin-1/2 particle, as discussed in the last section. This surmise is supported by the fact that the Klein-Gordon relativistic equation appropriate for a spin-0 particle, although it is Lorentz invariant, fails to satisfy the equation of continuity given by (1) for a physically acceptable current. In the nonrelativistic regime of electron velocity, (2a)–(9) may be evaluated using Schrödinger's equation, which follows on exactly eliminating (2b) in favor of (2a) and dropping contributions of order $(v/c)^2$ in the resulting equation for the large component, $\psi_A$. The current is evaluated in the nonrelativistic limit using $\psi_A$ and the $\psi_B$ given by (11), where $\psi_A$ obeys the time-dependent Schrödinger equation (12) and has up (plus sign) or down (minus sign) magnetic substates (i.e., spin-up or spin-down states), respectively.
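The order counting in (11) can be checked numerically. The sketch below (ours; atomic units with ħ = m = 1 and c ≈ 137, array layout assumed) applies σ·p to a large-component spinor on a grid to produce the small component, whose amplitude relative to ψ_A exhibits the order-(v/c) suppression just discussed.

import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def small_component(psi_A, dx, c=137.036):
    """psi_B ~ (sigma.p / 2mc) psi_A, cf. (11), in atomic units.

    psi_A : array (2, nx, ny, nz) - large-component spinor on a grid.
    Returns psi_B of the same shape; |psi_B|/|psi_A| ~ v/2c.
    """
    grads = [np.stack(np.gradient(psi_A[s], dx), axis=0) for s in (0, 1)]
    psi_B = np.zeros_like(psi_A)
    for k, sig in enumerate((sx, sy, sz)):
        # p_k = -i d/dx_k acting on both spin components
        pk = -1j * np.stack([grads[0][k], grads[1][k]], axis=0)
        psi_B += np.tensordot(sig, pk, axes=(1, 0))
    return psi_B / (2.0 * c)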
Written out explicitly in terms of the large component, the current given by (3) becomes (13), where we have used (11) and the identity $(\boldsymbol{\sigma} \cdot \mathbf{A})(\boldsymbol{\sigma} \cdot \mathbf{B}) = \mathbf{A} \cdot \mathbf{B} + i \, \boldsymbol{\sigma} \cdot (\mathbf{A} \times \mathbf{B})$, from which the identities useful in evaluating the current can be inferred (14). Written out explicitly for up (upper sign) or down (lower sign) spin states, the nonrelativistic current is

\mathbf{j} = \frac{\hbar}{m} \mathrm{Im}(\psi^* \nabla \psi) \pm \frac{\hbar}{2m} \frac{\partial (\psi^* \psi)}{\partial y} \hat{\mathbf{x}} \mp \frac{\hbar}{2m} \frac{\partial (\psi^* \psi)}{\partial x} \hat{\mathbf{y}} \qquad (15)

The first term on the right side of (15), which is independent of spin, is the familiar term contributed by Schrödinger's equation, while the second and third terms, which are transverse to the axis of quantization along z, are contributed uniquely by Dirac theory. Notice that all contributions to the current are of order $v/c$, as discussed earlier in this section.

4. Calculations and Results

The time-dependent Schrödinger equation [(12)] is solved in the time and three Cartesian coordinates for each electron of H2 using an algorithm described previously [11]. (Preliminary He-atom calculations were presented previously [12].) The computational grid box is cubic, with a uniform mesh of 32³ points. The accuracy of the calculation could be improved by refinement of the uniform mesh or by use of an adaptive mesh (which we do not have at our disposal); however, we find that for the internuclear distances considered here the present mesh is adequate, especially for the proof-of-principle calculations presented here. The two Schrödinger equations were integrated in the time for a length of 200 au using 4000 temporal mesh points (for one set of internuclear-distance calculations) or 8000 temporal mesh points (for all other internuclear-distance calculations). At the initial time a Slater-type orbital of the form $e^{-\zeta r}$, centered on each proton, is used for each electron for the singlet state, and two orbitals of different forms are used for the electrons of the triplet state. The Pauli principle is enforced at the initial time for electrons in the same spin state, and the electrons obey the Pauli principle for all subsequent times due to their mutual correlation. If the Pauli principle is violated at the initial time by assigning the same spatial orbital to electrons in the same spin state, then the electrons fail to correlate for subsequent times. The time-dependent wave function for each electron is evolved, and its spectrum of eigenvalues and eigenfunctions is obtained using the methods described in the pioneering paper of Feit et al. [10] on the spectral solution of the time-dependent Schrödinger equation. The results are sensitive to initial conditions only in the sense that a poor choice of initial orbital parameters (e.g., of the exponent $\zeta$) was found to lead to trajectory runaway and an unphysical solution. As in classical dynamics, care should be taken to use energy-conserving numerical methods to integrate the trajectory fields using (6). Trajectory runaway is easily recognized from a plot of trajectory versus time in which the trajectory is not periodic and grows with time, representing an unbound electron (self-ionization), or in which it is periodic at large distances from the nuclei, representing a spurious excited state (self-excitation). The energies of runaway trajectories are observed to lie below the true energies due to insufficient strength of the interelectronic potential. When sufficient care is taken to obtain energy-conserving trajectories, calculations tend to converge from below rather than from above experimental energies. Readers should be reminded that the evolution of the wave function by solving (12) is not an initial-value problem, although it may appear to be from the methodology just outlined. This is a point of confusion for workers whose experience is in time-independent structure theory.
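The propagation scheme of this section can be summarized in code. The sketch below (ours, not the implementation of [11]) advances two one-electron wave functions simultaneously, each in the instantaneous Coulomb field (8)-(9) of the other electron's trajectory; the soft-core regularization and the injected propagator `step` are assumptions for illustration.

import numpy as np

def coulomb_on_grid(grid_pts, r_other, soft=0.1):
    """Instantaneous e-e' potential 1/|r - r'(t)| (atomic units); the
    soft-core parameter, our assumption, avoids the grid singularity."""
    return 1.0 / np.sqrt(np.sum((grid_pts - r_other) ** 2, axis=-1) + soft ** 2)

def position_expectation(psi, grid_pts, dV):
    """Expectation value of the position field for one electron."""
    rho = np.abs(psi) ** 2
    return np.tensordot(rho, grid_pts, axes=(0, 0)) * dV / (np.sum(rho) * dV)

def evolve_pair(psi1, psi2, grid_pts, step, dt, nsteps, dV):
    """Advance two electrons; each sees the other's Coulomb field 'now'.

    `step(psi, V, dt)` is any single-electron TDSE propagator, injected
    here because the actual algorithm of [11] is not reproduced.
    """
    for _ in range(nsteps):
        r1 = position_expectation(psi1, grid_pts, dV)
        r2 = position_expectation(psi2, grid_pts, dV)
        psi1 = step(psi1, coulomb_on_grid(grid_pts, r2), dt)  # cf. (8)
        psi2 = step(psi2, coulomb_on_grid(grid_pts, r1), dt)  # cf. (9)
    return psi1, psi2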
Indeed, the Heisenberg uncertainty principle, $\Delta E \, \Delta t \gtrsim \hbar$, guarantees that at the initial time the spectral width of a line is infinitely broad and narrows with the length of the time integration. The positions of the spectral peaks (Figure 3) are accurately predicted even for early times and correspondingly wide spectral lines, but the integration should be continued for long enough times that the spectral peaks for different eigenvalues are well separated. This may be hard to accomplish when integrating (12) for heavy particles. Figure 1 shows quantum trajectories in the y direction for up and down spin states and identical spatial orbitals at the initial time. All calculations are performed for protons located at fixed positions along the z-axis. These are trajectories which belong to the singlet state. It is easy to see that the two opposite-spin trajectories are correlated in the sense that they keep apart and on opposite sides of the centers of attraction most of the time. Notice the expanding envelopes of the amplitudes with increasing time, which indicates the increasing excited and continuum-orbital contribution to the wave functions with increasing time. This behavior is possible because the time-dependent or spectral wave functions which are used to calculate the currents, whence the trajectories, are a superposition of the complete set of states of the Hamiltonians, as emphasized in [10]. Figure 3 shows the spectra at a given internuclear distance as calculated from the temporal Fourier transform of the overlaps of the wave functions at time t with the initial-time wave functions, as discussed in [10]. Having located the spectral energies at the positions of the peaks, eigenfunctions (Figures 7 and 8) are calculated from the Fourier transform of the spectral wave function precisely at the spectral peak positions. A window function [10], $w(t) = 1 - \cos(2\pi t/T)$, is used in both the eigenvalue and eigenfunction calculations in order to suppress side-lobe numerical artifacts. Figure 7 shows a z-slice of the eigenfunctions at the smaller internuclear distance, and Figure 8 similarly shows the eigenfunctions at the larger internuclear distance. The fixed locations of the protons are clearly depicted in both plots as the twin peaks of the eigenfunctions. The nascent separation into H-atom dissociation products is evident in Figure 8 by the dissymmetry of the wave functions with respect to the nuclear centers. The wave function which is symmetric about the proton centers has the interelectronic potential set equal to zero; thus the correct separation into H-atom dissociation products depends on the correlation of the two electrons. Figure 4 shows corresponding quantum trajectories in the z direction calculated using (7) in the nonrelativistic limit, as in Figure 1, but with the spectral wave function, which is a superposition of states, replaced by eigenfunctions obtained, as mentioned previously and described in [10], from the temporal Fourier transform of the spectral wave function at a spectral eigenvalue. These may be considered to be eigentrajectories, as opposed to the spectral trajectories appearing in Figure 1. Notice that at later times the eigentrajectories at the smaller internuclear distance are pulled into the internuclear region, reflecting the covalent nature of the bond. In contrast the eigentrajectories at the larger internuclear distance remain in the separated-atoms region for all times, reflecting the weak covalency of the bond at large internuclear separation.
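For concreteness, here is a minimal sketch of the spectral analysis of [10] as used here (our illustration; uniform time grid, Hanning window, and ħ = 1 are assumptions of the sketch).

import numpy as np

def spectrum_from_autocorrelation(psi_t, psi0, dt, dV):
    """Spectral method of Feit et al. [10]: peaks of |P(E)| locate eigenvalues.

    psi_t : (nt, ...) snapshots of the wave function; psi0 : initial state;
    dt : time step; dV : volume element of the spatial grid.
    """
    nt = psi_t.shape[0]
    t = np.arange(nt) * dt
    # overlap P(t) = <psi(0)|psi(t)>, a sum of terms |c_n|^2 exp(-i E_n t)
    P = np.array([np.vdot(psi0, p) * dV for p in psi_t])
    w = 1.0 - np.cos(2.0 * np.pi * t / t[-1])    # Hanning window vs. side lobes
    PE = np.fft.fft(w * P)
    E = -2.0 * np.pi * np.fft.fftfreq(nt, d=dt)  # sign: P(t) ~ e^{-iEt}, hbar=1
    order = np.argsort(E)
    return E[order], np.abs(PE)[order]

def filter_eigenfunction(psi_t, dt, E_peak):
    """Eigenfunction at a located peak: windowed Fourier transform of psi(t)."""
    nt = psi_t.shape[0]
    t = np.arange(nt) * dt
    w = 1.0 - np.cos(2.0 * np.pi * t / t[-1])
    phase = np.exp(1j * E_peak * t)              # undoes the e^{-iEt} evolution
    return np.tensordot(w * phase, psi_t, axes=(0, 0)) * dt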
We regard this dissociation behavior as strong justification for the present methodology, in which the correlation of the two electrons guarantees the correct dissociation products automatically, without having to build this behavior into the basis set in a conventional variational calculation. Figure 2 shows eigentrajectories for electrons having the same spin states and different initial orbitals. These are trajectories belonging to the triplet state. Notice that the trajectories are separated into a ground-state trajectory centered about one nucleus and an excited-state trajectory moving at large distances from the two centers of attraction. Such a state is known in quantum chemistry as antibonding. Figure 5 shows the late-time energies of the two states and the time averages of the energies. The energies are calculated from the expectation value of the two-electron Hamiltonian using the eigenfunctions inferred from the spectral solution, as discussed previously and in [10]. The interelectronic potentials are calculated using the eigentrajectories, as discussed previously, and the sum over the two electrons is divided by 2 in order to avoid double counting. Notice that the time dependence of the potential, as given by (8)-(9), means that the Hamiltonian expectation values tend to oscillate about a stationary time average, as shown in Figures 5 and 6. Finally, Figure 6 shows the energy time averages for four different internuclear distances. The agreement of the time averages with the variational calculations of Kolos and Wolniewicz [13], whose energies for the bonding and antibonding states are −1.174 au and −0.7842 au, respectively, is good. Readers are reminded that the binding energy is only 17.45% of the total electronic energy of −1.174 au for the ground state of molecular hydrogen. The agreement at the larger internuclear distances is not as good owing to smaller binding energies, whose magnitude falls into the last digit of accuracy of the calculation.

5. Conclusions

One cannot extract more than three significant figures from the calculations at the present level of accuracy, but we are confident that the results can be systematically improved for accuracy by further refinements of the grid meshes and especially by using improved, energy-conserving numerical methods to integrate the quantum trajectories in order to avoid runaway, the underestimation of the interelectronic potential energy, and as a consequence the overestimation of the binding energy. These problems notwithstanding, we believe that the quantum-trajectory theory (a) establishes the relationship between the spin of an individual electron and the Fermi-Dirac statistical behavior of an ensemble of electrons and (b) achieves the pair-wise correlation of electrons by evaluating Coulomb's law as an instantaneous interaction in the time rather than as a quantum stationary-state mean or exchange interaction.

The author is grateful to T. Scott Carman for his support of this work. This work was performed under the auspices of the Lawrence Livermore National Security, LLC (LLNS), under Contract no. DE-AC52-07NA273.

References

1. H. M. James and A. S. Coolidge, "The ground state of the hydrogen molecule," The Journal of Chemical Physics, vol. 1, no. 12, pp. 825–835, 1933.
2. M. E. Riley, J. M. Schulman, and J. I. Musher, "Nonsymmetric perturbation calculations of excited states of helium," Physical Review A, vol. 5, no. 5, pp. 2255–2259, 1972.
3. R. J. Bartlett and M. Musial, "Coupled-cluster theory in quantum chemistry," Reviews of Modern Physics, vol. 79, no. 1, pp. 291–352, 2007.
4. D. Bohm, "A suggested interpretation of the quantum theory in terms of 'hidden' variables. I," Physical Review, vol. 85, no. 2, pp. 166–179, 1952.
5. P. Holland, The Quantum Theory of Motion, Cambridge University Press, Cambridge, UK, 1993.
6. R. E. Wyatt, Quantum Dynamics with Trajectories: Introduction to Quantum Hydrodynamics, Springer, New York, NY, USA, 2005.
7. J. D. Bjorken and S. D. Drell, Relativistic Quantum Mechanics, McGraw-Hill, New York, NY, USA, 1964.
8. H. A. Bethe and E. E. Salpeter, Quantum Mechanics of One- and Two-Electron Atoms, Dover, New York, NY, USA, 2008.
10. M. D. Feit, J. A. Fleck, and A. Steiger, "Solution of the Schrödinger equation by a spectral method," Journal of Computational Physics, vol. 47, no. 3, pp. 412–433, 1982.
11. M. E. Riley and B. Ritchie, "Numerical time-dependent Schrödinger description of charge-exchange collisions," Physical Review A, vol. 59, no. 5, pp. 3544–3547, 1999.
12. B. Ritchie, "Quantum molecular dynamics," International Journal of Quantum Chemistry, vol. 111, no. 1, pp. 1–7, 2011.
13. W. Kolos and L. Wolniewicz, "Potential-energy curves for the X¹Σg⁺, b³Σu⁺, and C¹Πu states of the hydrogen molecule," The Journal of Chemical Physics, vol. 43, no. 7, pp. 2429–2441, 1965.

Copyright © 2013 Burke Ritchie and Charles A. Weatherford. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Quantum Matrix Diagonalization Visualized

For a complete explanation of what this program is doing, please see the related paper in the American Journal of Physics (preprint at arXiv:1905.13269 [physics.ed-ph]). Here is a brief summary \(\ldots\)

Our goal is to solve the time-independent Schrödinger equation (TISE), \begin{equation} \hat{H} \psi_n(x) = E_n \psi_n(x), \end{equation} where the Hamiltonian operator \(\hat{H} = -\frac{1}{2} \frac{d^2}{dx^2} + V(x)\) (in units where \(m = \hbar = 1\)) includes the potential energy function shown above (right). To do so we expand \(\psi_n(x)\) in terms of a set of normalized sine-wave basis functions, also initially shown above. Denoting these functions as \(\varphi_m(x)\), we write the expansion as \begin{equation} \psi_n = \sum_m c_{mn} \varphi_m, \end{equation} where the first subscript on the coefficient \(c_{mn}\) indicates which component, while the second subscript indicates which \(\psi_n\) eigenfunction. Inserting this expansion on both sides of the TISE and then taking the inner product with an arbitrary basis function \(\varphi_l\), we obtain (after a few algebraic steps) the TISE in matrix form: \begin{equation} \begin{bmatrix} H_{11} & H_{12} & \cdots \\ H_{21} & H_{22} & \cdots \\ \vdots & \vdots & \ddots \\ \end{bmatrix} \begin{bmatrix} c_{1n} \\ c_{2n} \\ \vdots \end{bmatrix} = E_n \begin{bmatrix} c_{1n} \\ c_{2n} \\ \vdots \end{bmatrix}. \end{equation}

Here the matrix elements \(H_{mn}\) are defined by integrals, \begin{equation} H_{mn} = \langle \varphi_m | \hat{H} | \varphi_n \rangle = \int\!\varphi_m(x)\hat{H}\varphi_n(x)\,dx, \end{equation} which the program computes when it loads, and whenever the matrix is reset. The results are displayed above (left) as colored squares.

To solve the matrix eigenvalue problem we use a method devised long ago by Jacobi. Note that the normalized column eigenvectors form an orthogonal matrix \(C\), which we break down into a product of elementary rotation matrices \(R_i\): \begin{equation} C = \begin{bmatrix} c_{11} & c_{12} & \cdots \\ c_{21} & c_{22} & \cdots \\ \vdots & \vdots & \ddots \\ \end{bmatrix} = R_1 R_2 R_3\ldots, \end{equation} where the subscripts on the \(R_i\) symbols indicate the sequence of rotations. Each rotation matrix has the form \begin{equation} R = \begin{bmatrix} \,1 \\[-5pt] & \ddots \\ && \cos\theta & \cdots & {-}\sin\theta \\[-3pt] && \vdots & 1 & \vdots \\ && \sin\theta & \cdots & \ \cos\theta \\[-3pt] &&&&& \ddots \\ &&&&&& 1\, \\ \end{bmatrix}, \end{equation} consisting of an identity matrix except for the four entries that accomplish a rotation by \(\theta\) in a particular plane. When you click on an off-diagonal matrix cell above, you are selecting the plane for one of these rotations. Turning the dial changes the rotation angle \(\theta\). Each rotation modifies the Hamiltonian matrix according to \begin{equation} H_\textrm{new} = R^T H_\textrm{old} R, \end{equation} so if you choose the rotation angle to (approximately) zero out an off-diagonal matrix element, the new Hamiltonian will be closer to diagonal than the old one.
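As a supplement to the description above (not part of the original page, which performs this computation internally in the browser), here is a minimal Python sketch of how the matrix elements \(H_{mn}\) can be computed in the sine-wave basis of a box of length L; the grid size, basis size, and example potential are our own assumptions.

import numpy as np

def hamiltonian_matrix(V, nmax=8, npts=2000, L=1.0):
    """H_mn = <phi_m|H|phi_n> in the particle-in-a-box sine basis (m = hbar = 1).

    V : callable potential V(x) on [0, L]; integrals by the trapezoid rule.
    """
    x = np.linspace(0.0, L, npts)
    phi = np.array([np.sqrt(2.0 / L) * np.sin(m * np.pi * x / L)
                    for m in range(1, nmax + 1)])
    Vx = V(x)
    # Kinetic term is diagonal in this basis: (m pi / L)^2 / 2
    T = np.diag([0.5 * (m * np.pi / L) ** 2 for m in range(1, nmax + 1)])
    # Potential matrix elements <phi_m|V|phi_n> by numerical quadrature
    U = np.array([[np.trapz(phi[m] * Vx * phi[n], x)
                   for n in range(nmax)] for m in range(nmax)])
    return T + U

# Example usage with an assumed potential, 100*(x - 1/2)^2:
H = hamiltonian_matrix(lambda x: 100.0 * (x - 0.5) ** 2)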
Eventually, after enough of these rotations, you will have diagonalized the Hamiltonian so that its diagonal elements are the energy eigenvalues. Meanwhile, the program accumulates the product of rotation matrices to form successive approximations to the matrix \(C\) of eigenvectors, and uses these (approximate) eigenvectors to construct and plot the (approximate) eigenfunctions \(\psi_n(x)\) as the calculation proceeds. The Rotate button calculates the optimum rotation angle according to the formula \begin{equation} \theta = \frac{1}{2} \tan^{-1} \biggl( \frac{2H_{mn}}{H_{mm} - H_{nn}} \biggr), \end{equation} where \(m\) and \(n\) are the chosen row and column numbers. To derive this formula, multiply out the matrix product \(R^T H R\) in the rotating two-dimensional subspace, and set either off-diagonal element to zero.
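Here is a matching sketch (again ours, not the page's own implementation) of the full Jacobi procedure: repeatedly pick the largest off-diagonal element, rotate by the optimal angle given above, and accumulate the product of rotations to obtain the eigenvector matrix C.

import numpy as np

def jacobi_diagonalize(H, tol=1e-10, max_iter=10000):
    """Diagonalize a real symmetric matrix by Jacobi rotations.

    Returns (eigenvalues ~ diag(H), C) with the columns of C the eigenvectors.
    """
    H = H.copy()
    n = H.shape[0]
    C = np.eye(n)
    for _ in range(max_iter):
        # pick the largest off-diagonal element as the rotation plane
        off = np.abs(H - np.diag(np.diag(H)))
        m, k = np.unravel_index(np.argmax(off), off.shape)
        if off[m, k] < tol:
            break
        # optimal angle; arctan2 handles the H_mm == H_kk case gracefully
        theta = 0.5 * np.arctan2(2.0 * H[m, k], H[m, m] - H[k, k])
        R = np.eye(n)
        c, s = np.cos(theta), np.sin(theta)
        R[m, m] = R[k, k] = c
        R[m, k], R[k, m] = -s, s
        H = R.T @ H @ R          # zeroes H[m, k] (and H[k, m])
        C = C @ R                # accumulate the product of rotations
    return np.diag(H), C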
Wednesday, June 05, 2019

1. Dear Dr. Hossenfelder
There is a paper about the social and economic impact of the LHC by the OECD in case you haven't seen it. Quite interesting I think: There is one part I particularly like: "Attempts have been made elsewhere to develop analytical methods, and to gather appropriate data, in order to derive statements of the form "X currency units invested by governments in fundamental research generates Y currency units in additional GDP (or jobs, or increases in quality of life indices)". But these attempts (and results) have been criticised on methodological grounds, and the Global Science Forum decided not pursue a quantitative study of this type." I agree with them. So much for any form of usable ROI. I pretty much changed my view during the last few months and I read quite a few of the proposals published here: Not all countries or groups are all that favorable towards a new machine. Therefore the FCC isn't a done deal at all. How about relocating half of CERN to Saint-Paul-lès-Durance? That could accelerate other important (at least just for saving the world) experiments?

1. Christian,
Yes, the EU has also done a variety of studies on the impact of investing into R&D. There are a number of other attempts that you find on the arXiv. But as I have said before, you don't need to know an absolute ROI to make statements about a relative ROI. More scientific promise for less money is better even if you cannot quantify the payoff in absolute terms. Yes, that's right, not all particle physics is high energy collider physics. But not all research in the foundations of physics is particle physics either. The idea that particle physicists decide whether to do particle physics or particle physics strikes me as too silly to even discuss. In any case, as I have hopefully made clear, my major issues are (a) the community does not fix problems that waste money but they want more money and (b) hype and lobbying. I worry about the latter not because I think people may accidentally believe it, but because I am sure they will not believe it, unless they are already believers. In other words, people who go around and proclaim we need a particle collider but fail to mention the obvious reasons against it do big damage to science.

2. I'm a computer scientist not much into physics, so I don't know the matter well. But didn't the experiments at the LHC produce a lot of inventions, patents and spin-off products that entered our lives? Would love to read your opinion on that side of the story.

1. Ahmet,
It is certainly the case that pouring money into developing large, supercooled magnets and high precision detectors has the occasional technological spin-off. It would be shocking if that wasn't so. The question you should ask, however, is this: If the spin-offs are what you are really after, wouldn't it be better to invest into this in the first place? Besides this, as I said above, the case about potential spin-offs can likewise be made about any other large-scale science investment. So it's really not a reason to build a larger collider.

2. Ahmet,
There is certainly an impressive list of spin offs, knowledge transfer and startups from CERN - see for more details. Dr Hossenfelder points out spin offs etc. can be expected from any large scale science investment. I'd go further and generalise this to say spin offs are almost inevitable from any project which comes up against the currently available technologies.
As an example take cross head (aka Phillips) screws - 'invented' to allow higher torque settings to be used in automobile mass production. Whether the expectation of known or unknown spin offs should contribute to the decision to fund a project is complex. Some (unknown) spin offs will arise due to the need to overcome unseen problems encountered during the project; others (known) can be foreseen - though the practical solution and any potential uses may be unclear - from the project requirements. I have this vision of a neolithic fashion designer approaching their tribal elders with a project to produce a fur coat - warm, windproof etc. - but to do so they'll need to overcome the problem of how to join fur pelts together. The tribal elders have to approve the time and effort needed to sort out a solution. Given approval, the neolithic Calvin Klein comes up with a needle and thread - the known problem has generated a solution which has the (unexpected and unknown) spin off which is still, 61000 years later, used in many areas of our society, not just fashion.

3. Dear Dr. H,
Can you please detail your stand on the dark matter direct detection experiments? Do you think they are poorly motivated as well? How about the axion searches?

1. C,
The reasons to believe in WIMPs and axions are arguments from naturalness or a numerical coincidence known as the WIMP miracle. These are metaphysical principles and are belief based, and therefore unscientific. Even if you think that the first searches for WIMPs and axions in the 1980s were worth a shot, tuning these models every time an experiment has not found the particles has not made the predictions any more reliable. The short answer is, yes, these experiments are poorly motivated. On the other hand, they are not as expensive as a larger particle collider, so I think they don't require as good a motivation.

4. A few copy edits:
Par 5: "A sociologists would be" --> "Sociologists would be"
Par 5: "But for what the physics is concerned" --> "But as far as physics is concerned"
Par 7: "for just to mention a few" --> "just to mention a few".
Par 8: "And that is only for what the foundations of physics are concerned" --> "And that is only as far as the foundations of physics are concerned". Or maybe "And that is only considering the foundations of physics".

1. Thanks, I have fixed that! Will try to keep this in mind...

5. It is such a refreshing change to hear articulate reasons why any huge scientific endeavor should be scrutinized with regards to the overall benefit to humanity. We are more likely to die of starvation once we destroy the ocean than from the angst of not knowing if physics models are complete to the degree of precision being sought. I studied the quantum theory of semiconductors and manufacture computerized gasoline detectors to protect drinking water, and physics works just fine with what we already know.

6. Sabine, re: "If we spend money on a larger particle collider, we risk that progress in physics stalls." In the 2030-40 timeframe, after HL-LHC completes its run, do you consider spending $7 billion to upgrade LHC magnets from 8.3 T to 16 T, thereby increasing collision energies from 14 TeV to 27 TeV, to be "building a larger collider"? The 27 km tunnel and most of the equipment will be re-used; only the magnets and injectors will be upgraded. IMO $7 billion to explore energies from 14 TeV to 27 TeV for HE-LHC is reasonable.

7.
Do you not accept the recent European effort to evaluate the future of particle physics, nor the ongoing preparation for the five year review of the same topic in the U.S., as good faith efforts to ask precisely the question you claim you want answered? (This question being: "What research approaches should we pursue in the next few decades?") Certainly in the U.S. possibilities beyond a VLHC are being discussed.

1. Don,
What do you mean with "not accept"? Do I "accept" that particle physicists hold meetings to make a strategy about particle physics? Sure. What is there not to "accept"? But is this the discussion we need to have? No. As I already said above, if you ask particle physicists if they want to do particle physics or particle physics, the answer will be particle physics. I don't need strategy reports to figure that out. The question I asked is not "what particle physics should we do in the next decades" but "what should we do to advance the foundations of physics". And, in case you missed the point, asking particle physicists may not be the wisest thing to do. I told you this previously, but particle physicists seem slow to get the message, so here we go again. This community has a problem with information processing that it isn't doing anything about. This alone is a major reason to not throw further money at them until they have done something about it. What have particle physicists learned from the past decades? What have they done to prevent fads from taking over again and wasting further money? What have they done to prevent the next hype cycle? The answer is: Nothing.

8. Dr. Hossenfelder
You do not have to convince me (anymore). I just have a problem with the use of ROI. It is after all a number and therefore has to be quantified in some way or another, and it does not apply to investments of a country (and I have yet to find the relative ROI, but I think I know what you mean). If we would use ROI in this case, we would have to use standard bookkeeping techniques for a company, but CERN is an international organisation and therefore ROI just does not work, because an ROI - if we could quantify the results in any way, which I doubt due to the complexity - would always be positive, even if nothing is found. Some of the proponents of the FCC even argue that finding nothing is a good result (Yeeehaaaa, awesome, now we finally know that the last 60 years were a complete waste of time and money and we can finally start looking elsewhere. Cool! Positive ROI). What you can do is compare benefits of results. Getting ITER to work is much more pressing than finding out that we looked at the wrong corner (just my pet example, I know). Hype and lobbying. Well that's democracy. There will always be those who point out the benefits and those who point out the shortcomings. In the end the governments will decide if they want to commit to the FCC. There is a high probability that the FCC will be built (but not 1), but that is the way it works and frankly I don't want it otherwise, even if the next generation of particle physicists will call the current generation morons for deciding something that is, with even higher probability, utterly useless. But frankly, whoever is going to decide on the FCC, the decision will be an informed one where all the pros and cons are taken into account (Yes, I am helplessly democratic), despite all the lobbying and hype.
I remember the talk in the Royal Institution and I was pretty annoyed about it, because these guys just don't understand how financing something like the FCC works. If I had to decide after this talk, I would rather have bought an aircraft carrier. Economics isn't always an exact science, if some aspects can even be called science, but some things work and sometimes even physicists should listen to an economist. They did not help their cause.

1. Christian,
I understand your objections to the ROI. I will admit that personally I don't think it's a particularly useful measure to evaluate basic research either. If anyone has a better argument, that's all fine with me. My issue is primarily that the discussion isn't even taking place. Regarding hype and lobbying. Science isn't a profession like any other. Scientists are not selling products or offering services. They have the task to find out how the world works. If they hype, if they lobby, if they market, they corrupt the principles of their own profession and, ultimately, hinder progress.

2. Christian,
Let me add that, regarding the decision for the FCC, I am not at all sure that the decision will be an informed one. I am saying this because I have, in the past year, had a number of truly shocking exchanges with experimental particle physicists who, by and large, actually did believe the arguments coming from theoretical physicists about the prospects of finding dark matter or supersymmetry. I am afraid that many of them still believe it. I don't blame them because, of course, much of science is highly specialized and relies on division of labor. They trust what the theorists say. Even if they personally don't buy it, it's good enough for them to repeat it and bring it up for funding decisions. And so here is the thing. Theorists are looking for new arguments for why there is supposedly something weird with the Higgs that requires further study. They do this as we speak, basically, because they need such an argument to get further funding. This, needless to say, is not a good initial condition to arrive at correct conclusions. Now, there is a chance that someone will actually come up with a good argument why the Higgs needs further study. If so, fine. I'll turn around 180 degrees tomorrow in that case and become the greatest fan of the FCC if there is a reason why it would pay off. What is much more likely, however, is that they'll come up with another nonsense reason, find enough people to believe it, and the sheer number of people who believe it will convince those who finally make the decision that there is something to it. That's a real risk, and it's a risk that particle physicists are not doing anything to prevent.

3. I see your point and that danger is real (Sounds a bit like Sheldon Cooper making fun of the experimental physicist - and lower life form - Leonard Hofstadter). There is also the danger that it's funded because it is CERN. They played their cards pretty well from a marketing point of view. But in a way I think CERN has lost the mojo it had 30 years ago. But let me ask you this. When the funding for the LHC was decided, what has changed since then? Are the arguments at the same "uncertainty" level as then? Are the people who actually prepare the papers for the politicians in the different countries better informed? Do they know both sides? Is "something weird" enough?
I think that the FCC has much less chance of getting the funding than the LHC many moons ago, because information is much more easily available than it was in the antediluvian time before HTML. But then again, the situation is somewhat similar, because then the SSC was on the horizon and now the Chinese want to build one. So the deciding body might want to build the FCC just to be in the game. Looking at the Chinese project, they have underestimated the cost quite a bit, though the situation could be, in the end, that the Chinese drop out (like the US from the SSC) and CERN could be again on top, just for the sake of having the longest. Many other factors decide the FCC, not just the research results. In the OECD paper there is some interesting information on why the companies building the FCC might not be as interested in the knowledge transfer as one might believe (no, that paper is not only about how to measure/forecast the benefits). Expected spin-offs will have a much lower impact on the decision today. One reason is that the problems companies work on today are much more "common" or just in another field. I don't think that anyone in the field expects the FCC to make a big impact on quantum computers or anything else. CERN isn't even on top of the game regarding AI. They are much too big to move as fast as a game programmer today. Even if the theorists play their selling game very well, I don't think that will be enough.

4. Christian,
What has changed is that for the FCC there is no good prediction that it would see anything that would help advance theory development. The Higgs was a good prediction. With the Higgs found, the standard model is complete. There is the issue with the neutrino masses, of course, but that doesn't require solution until ten orders of magnitude of energy higher. So, there is really no good reason to build the thing. If it wasn't all that costly you could say, well, you never know, let's just test the SM somewhat further. All right, there's that. But for that amount of money you need a better motivation, imo. Yes, of course there are political factors besides the scientific ones. I'm not the right person to ask about this. Personally I don't think the Chinese will go ahead with their collider plans. The reason is as follows. The Chinese, in my impression, seem to have two main goals with the collider. One is to bring international top researchers into the country. The other is prestige. You can import people with many large scale investments into science, think fusion, telescopes, quantum computing, and so on, so really that's not a good argument for a collider. What's good about the mega collider is that they might well be the only nation to have one. On the other hand, they risk being the first nation to invest lots of money into a high energy particle collider that doesn't find anything new, which would make them look stupid. I don't think they will take this risk. And in any case, for all I know CERN will have to make a decision about whether or not to push forward with the FCC plans in spring 2020, while the next 5-year plan of the Chinese isn't due until 2021. So the Europeans can't wait for the Chinese to make up their mind. When it comes to CERN, I think you are underestimating inertia. Particle physicists are not making a good case (in fact, they are pretty much not making any case), all right. The reason they don't bother is that they know as well as I that they have a good chance to get money just because they already have money.

5. Dear Dr.
Hossenfelder
Yep, we are on the same page here. I might underestimate CERN's inertia, but CERN might overestimate it. The Higgs hype is pretty much over. Ask anybody in the street what "the Higgs" does. Apart from "God particle," you won't get any response. I do count on envy. Some particle physicists might think that CERN has had its run and now somebody else should get a shot. Among the ECPP proposals were quite a few that do not need the FCC, and some of them are vastly underfunded. Whatever, we will see in '20 what happens, and going by past experience the FCC will be at least 10 years late (thank god it's not in Berlin, otherwise we would still be talking about first beam on the LHC). Around 2050 I will not care anymore, and ITER should be no more than 5 years away from first plasma.

9. Put $10 billion into Alzheimer's research; find a solution; use the resulting savings in future health care expenditures to fund a collider and 10 other research projects just as big.

10. Dr. Lee Smolin's recently published book "Einstein's Unfinished Revolution" makes the clear point, accessible to the general public, that fundamental work on the foundations of Quantum Mechanics needs to be accomplished. It clearly echoes your own book and postings.

11. I have no problem with beauty in physics. The variational principle is beautiful. Gauge invariance is beautiful. So is Dirac's electron equation. But those ideas have paid off, whereas string theory and supersymmetry have not. They're also not beautiful, mainly because their complexity betrays an underlying ugliness.

12. Sabine wrote: the cost for a larger particle collider could dramatically go down in the next 20-30 years with future technological advances
That's an interesting point.
Sabine wrote: If you encounter any such person, I recommend you ask them the following . . .
I've seen the responses to those questions in this blog. It's sort of like asking Trump supporters tough questions about Trump. Or worse, asking Trump himself about his own statements and policies. It seems like an exercise in futility. That being said, in a blog like this, at least the questions and answers are on the record, for anyone to see and make up their own mind. But it sure seems hard to persuade anyone to change his mind or even take the questions, or the facts, seriously. :-)

1. Tbh, I don't so much write this to convince anyone, I primarily write it to make my position clear so I can still look in the mirror tomorrow without flinching. It is beyond me how scientists can ask for $20 billion without even thinking about the reasons why there hasn't been progress in their field for 40 years. Especially those who want to shame other scientists into silence by claiming any scientist needs to be supportive of any major investment. Let me not name names. If you know who I mean, you know who I mean. And on the other line I have a climate scientist who can't get funding for making better predictions. I'm serious. That's the world we live in.

13. I guess it is not clear to me that this situation is that dire. I am not sure all of physics will wither away because of the FCC. I am still somewhat on the fence about this. I can honestly see points either way.

1. Lawrence,
Not any time soon, no. But progress in physics is ultimately driven by breakthroughs in the foundations. If atoms hadn't been discovered, there wouldn't be any condensed matter physics today. If we didn't understand light, there wouldn't be any interferometers.
All electronic gadgets we use today build on quantum mechanics, which is technology that benefits all areas of physics. Now, we aren't done squeezing the existing knowledge and can go on squeezing for some while. But returns are already diminishing and without any major breakthroughs, we'll just continue to incrementally improve a digit here and there. I know that this offends a lot of people, but I hope you will forgive me if I say that the foundations of physics are essential to progress overall. This doesn't only concern physics, but all other disciplines. Think of any imaging method used in the life sciences. X-rays, MRI, spectroscopy, electron microscopes - it's all physics. So this is why it matters we get the foundational work right. (And in any case, if you don't think it matters, this isn't a reason to build the FCC either.)

2. I turned away from particle physics in graduate school for a couple of reasons. One of them being that while I found supersymmetry fascinating, I saw it as a relationship between quantum statistics and spacetime boosts and physics. I had a hard time seeing supersymmetry as having much to do with the standard model, but more to do with quantum gravity. The other is that the whole framework for what was to be observed seemed to be largely worked out. The standard model was put in place around the time I was in 1st grade and there seemed to be few compelling reasons to consider theory beyond this. It appeared that particle physics had rendered itself somewhat uninteresting. I then chose gravitation instead, which turns out to be about as employable in that field as is linguistics. My sense is that if we are to probe physics in the 100 TeV range, we might consider finding new ways to accelerate particles. Laser-plasma systems, such as the plasma wakefield, as I recall it is called, might be considered for development. It is not hard to derive a longitudinal electric field for an EM wave if the dielectric constant ε = ε(E) or ε = ε(D). The equation is sometimes called the cubic or nonlinear Schrödinger equation. In this way a proton can be pushed with a much higher field density than we can achieve with the RF cavities used today. In that way a future LHC+ machine might be made that is comparable in size to 1970s accelerators.

3. I did not have time to write everything I wanted to. When it comes to technology there are still some possible impacts that particle physics can have. Dark matter detection may play a role in neutrino communications. Detectors that involve a putative DM weakly interacting particle and that measure phonons from such interactions may pave the way for neutrino signals and communication. Of course a lot has to be worked out, such as cryogenic cooling by solid state devices, which are in a research stage. If means can be found to produce neutrinos at low energy in small devices we may begin to get neutrino transmission and reception. Of course there are a lot of things that may not happen. In the movie 2001: A Space Odyssey there is "Luna City" and large interplanetary spaceships. Well, so far that has not happened, and it seems a long way off. We do not have flying cars or jet packs. Robots though are starting to make appearances. Nuclear fusion seems to be as far off as it ever was, in spite of New Scientist reports to the contrary. We must realize that with all the techy stuff with the internet, about 40% of it is the dark web that involves criminal activity.
Of the above-ground internet, over 25% involves pornography, about an equal amount games, and then most of the remainder is social websites. We call it progress. Of course from the perspective of civilization we do need to have great discoveries and breakthroughs. This is the main way we can compete against ignorance. I am not sure about the future here. There are of course other things that can serve this role besides physics. The discovery of an extrasolar planet that bears unmistakable signatures of biology would get people excited.

14. I got an idea ... let's abandon the Copenhagen interpretation and start over.

15. Null results are also important, but they don't strengthen the arguments for why positive results will be found where they are now thought to be. They could be just as wrong as before, or more likely to be wrong now. Spending huge money to test predictions of theories that have not been successful is unwise. Physicists don't think in terms of cost and benefit. It's the job of economists. Yes they are lobbying. Economists and policy makers may not have a good grasp of how solid the predictions (the benefit side of the equation) are. When there is an EU hearing on the FCC, you should volunteer as a resource person. It will be for the good of society.

16. Particle physicists who want a bigger collider are hoping the public is even dumber than when they sold the LHC to them. The argument goes like this: "The LHC is old technology, just like your iPhone; we need to upgrade to the new model to make new discoveries. Don't ask questions about a complex field that you don't understand. We're the professional scientists; give us the money and we will find the answers." For years I would hear that the reason the average citizen needs to learn science in public schools is so they can make an informed decision about funding projects just like this and act as a check and balance on radical science claims that need more funding. Well, it doesn't get any simpler than this. Thanks to the failure of the LHC to find anything new beyond the Higgs boson - the other predictions of exotic new particles didn't come true - I think Joe Public should be as knowledgeable about deciding funding on a new collider as the current President's science advisor (if he has one) would be. It has sadly come to that, where our public education (which includes understanding the scientific method) is just as good as any expert's opinion.

17. So, I take your arguments to another field: Dark matter searches have been going on for quite a while. Total costs could also be in the billions by now. They haven't found anything. So, let us stop it?

1. Quant,
I already commented on this above. Current direct searches for dark matter are not promising for the same reason as the FCC is not promising: There is no reason to think the postulated particles actually exist. The arguments for them are based on metaphysical principles that are essentially appeals to beauty. There is no good scientific reason to think this is correct. Also, as a matter of fact, it has not worked for 4 decades. Different matter entirely, however, with astrophysical searches that collect evidence about the behavior of dark matter (or its alternatives) through its gravitational influence. Collect more of that information, at better quality, and you might be able to pin down what particle it is (or conclude it's not a particle). Then you would know better what Earth-based experiment could test this particular hypothesis.
On the other hand, direct dark matter searches are far less costly than a mega collider, so you might not want to put the bar quite as high.

2. Sabine,
The logic here eludes me. There is no reason to think the postulated dark matter particles exist, but you want to study the behavior of those same particles you don't think exist?

3. bud rap,
Please read what I wrote carefully. I wrote there is no reason to think the postulated particles exist. I am referring to the *specific* particles that were postulated whose existence you could measure at the FCC. A particle collider is not a good machine to find any kind of particle. It is good to find specific types of particles. I am pointing out that we have no reason to think dark matter is of this specific type. If it is a particle to begin with. I didn't say I want to study the behavior of those particles, I said collect more details on the already known observational discrepancies, until you can either pin down what type of particle it is (or figure out that it's not a particle).

18. I'm just a layman who is curious about the Universe, so I read all this stuff. It's my layman's understanding that you do science by:
1) Positing a theory. The theory HAS to be testable, otherwise it's not science, it's closer to metaphysics.
2) Next, you devise one or more experiments to prove or disprove your theory. If you don't have the technology to test your theory you create it, as with the LHC, or you wait for a future time when you may be able to create a test.
From what Sabine says, the new collider is not this at all. It's a hugely expensive roulette wheel that people want to have to keep their careers. They want to just spin the electromagnetic wheel and see which numbers come up.

1. Mike,
As I explained here, a scientific theory has to be testable, yes, but just because a theory is testable doesn't mean it's scientific. All kinds of wild guesses are totally falsifiable, so that is an exceedingly poor criterion. So, what you say is correct, but not because the theories that particle physicists invent are not falsifiable. Most of them are falsifiable. Indeed, they have promptly been falsified. The problem is that they were not reliable predictions to begin with.

2. 👍 I was just trying to say that testability is necessary. I understand it's not sufficient.

19. Here's a cautionary tale that may be applicable to the current discussion. There are two very enjoyable reality-based television shows on cable (I think they're on the Discovery Channel or something similar). One is called Engineering Marvels, or words to that effect; the other is Engineering Catastrophes. A short while ago, I saw on Engineering Marvels a glowing tribute to the seawall project in Venice designed to prevent the constant flooding currently afflicting the city. It described the project's ambitious scope, state-of-the-art design, and insane cost, and had fabulous footage of the huge panels designed to rise up out of the sea to block the encroaching tidal surges. It was inspiring. A few weeks later, I tuned into Engineering Catastrophes to see...the same project! Turns out the paint on the panels doesn't resist corrosion, the joints are getting clogged with sand, and various ocean-dwelling critters are mucking up the works. The whole contraption looks to be as good as useless. It's still being built, at a cost of countless billions, with no end in sight because there's no "Plan B" thanks to the enormous cost, time, and resources already invested. Draw your own conclusions...

20.
Bruce Rout,
True Dat... /DropsMic
Ok seriously now. As much as I agree with the above, I think you're going after the wrong target. It's too esoteric for the lay public (and me too for that matter). What we should be telling the uninitiated is that there is no such thing as "observation" or "measurement" in QM, only interactions. To be more specific we should insist that all interactions change the level of correlation between quantum systems. In QM the useful information that results from an experiment is the combination of the nominal result AND the error bars. We can then move to explaining that the correlations between systems are the error bars and that we have discovered some hard limits on the error bars for sufficiently simple quantum systems. I like this twist because I think it eliminates the "superposition collapse" myth. It also guides people away from doing the observation-->observer inference which often leads incautious minds to start rumination on meta-physics... which I think is 'a long road that don't-go no-where'. As to many worlds or any sort of infinite inflation to solve one's aversion to fine tuning, why can't people see that nakedly inserting an (unprovable) infinity into an argument doesn't make it stronger?

21. @Sabine
It's not the first time you cite nuclear fusion as a project where more money should go, instead of a next collider. This is very ill-thought-out indeed. Nuclear fusion has a terrible reputation among politicians (indeed, among scientists from other fields) because it keeps promising clean energy "within the next 25 years", and this for about 50 years now (talk about lies and hype...). It is estimated that, since the 50s, the US alone has spent about 30 billion dollars on fusion. Counting the rest of the world might almost double that figure. And, with all that, we still run our old fission power plants. I fail to understand why you would like to pour even more money there.

1. Opamanfred,
I'll write some more about nuclear fusion in the future, then maybe you will understand. Let me just say that I think the investment is well justified. Frankly I think that given the enormous potential payoff it would easily justify much more investment.

2. You are falling into one of your own pitfalls. The question is not whether fusion has "enormous potential payoff". The question is (like for the next collider) whether these tens of billions of dollars are well invested in fusion, or should rather be better invested in other types of energy research. These are the terms into which you phrased the question about the collider, and they should remain the same for fusion as well.

3. If you think that is what I have said, you misunderstood. As I have explained many times, in the case of the FCC the situation is that we know it is extremely costly compared to other investments in the foundations of physics, and we know that investing in other directions is currently more promising for progress, which leads to the conclusion that a larger collider is not a good investment in the foundations of physics. It adds to this that it is not a good societal investment either because there isn't much you can do with particles that decay in less than a nanosecond. If you believe you can make a similar argument against fusion power, I would be curious to hear it.

4. I will make an argument against fusion power, at least until the physics community cleans up its theory.
That means (please) revisit the plasma physics that suggests fusion is possible, not just plausible, and stop burdening the fusion hypothesis with simplifications and omissions that render the mathematics more beautiful. A physicist assured me that achieving fusion ignition on a laboratory scale does not rely on quantum tunneling (QM) or Special Relativity. Great, that leaves first-law fundamentals to guide us. Fusion, I have read, occurs in the sun's core through quantum tunneling of (sparse) hydrogen nuclei, but on a grand scale. Eliminating the scaling necessary for fusion in the sun's core, as well as the dirty fusion assisted by a secondary fission as in a thermonuclear weapon, should simplify our laboratory experiment. Another physicist assured me ignition should occur when the triple product ρTτ is dense enough (ρ), hot enough (T), and confined long enough (τ). So far so good; the task appears simpler now. But what if first-law fundamentals show us that fusion cannot occur? What if a comprehensive first-law assessment of an energy balance shows there simply isn't enough energy in the system to allow fusion to occur, due to losses from neutron escape, thermal leaks, and energy losses the beautiful math left out? Has anybody actually done the math to confirm the predictions for fusion ignition? No, I don't mean sophisticated 3D code analysis, and no philosophical arguments to dispute the meaning of '=' as an abstraction of energy balance (thank you First Name Surname). We need a sanity check. Let's apply first-law fundamentals before we engineer D/T targets for RT instabilities, or build bigger and bigger tokamaks. Given the above I eagerly anticipate Sabine's future post on the promises of fusion, the 30-year kind, of course.

22. Sabine, for your next blog post, what do you think are the most important future particle physics experiments HEP and even QG should invest in? Future physics experiments in HEP that should be funded before any talk of an FCC 100 TeV scale collider. Obviously future electron/neutron EDM measurements should be on this list.

1. neo,
I have answered this question many times: I cannot as one person replace the work of a whole community.

23. Building a new particle collider must have a purpose. For example, the discovery of new phenomena. Or a better understanding of the way our universe acts (unification of particles and forces). Unfortunately the latter is fundamentally impossible with the help of phenomenological physics. So it is about new particles… That's not quite convincing.

24. Sabine, you are right regarding the particles: boring topic. Game over; the same with cosmology - a good seller for the general public looking at the heavens via TV show. But read the BACKPAGE of APS News from August 2018. The author is Katepalli Sreenivasan, a man socialized in India who has run for decades the Princeton Superpipe to study turbulence. He states that he is often asked whether it is true that "we" cannot compute the turbulent flow through a water pipe based on first principles - although flying to the moon and walking through exotic particle zoos. There's a contradiction. We need to realize that the so-called WEST is the youngest part of humankind, still close to simple barbarians when compared with the much older civilization of Asia where cooperation and solidarity are key to survival in an environment of mighty rivers etc. - but with huge chances, too. There only those survive who develop some sort of cooperative and solidaric ethics/habits. Others simply die off ...
The physics crisis is simply embedded in the general civilizational crisis of the WEST. 25. I think one should put things into perspective: The nuclear-armed nations spend about US$100 billion a year on their nuclear forces. These days hardly anybody seems to care anymore, even though the doomsday clock has been set to 2 minutes to midnight. Why care about US$20 billion spent once, where in the worst case we find no new particles, which is nothing compared to the eradication of mankind as the result of a nuclear war apocalypse. 26. Sabine, what is your take on the Electron-Ion Collider planned in the USA, probing the inner structure of protons and neutrons? 27. What's the point of the comment about quantum computers and fusion? We do have the former; IBM will let you run code on theirs if you ask nicely, and over the next few years the major tech firms will scale these up unless there is some unexpected problem like dynamical wavefunction collapse. And the latter requires a trillion-euro-sized investment, but is achievable. CERN isn't competing with these enterprises, but rather with all the rest of basic science that will give us the quantum computing and fusion of tomorrow. 1. Wibble, Needless to say I was speaking of quantum computers that can do better than classical computers and that are actually good for something. You could similarly say we have fusion reactions, it's just that those happen to not have a positive energy output. Well, yes, let's see how they scale these up. I know that CERN high energy particle physics is a more long-term investment than that. Why do you think you have to inform me of this? This doesn't mean you do not have to justify the investment. By your logic, if you invest in the farthest future, say 10 billion years from now, then any investment is justified. This is just economic nonsense. 2. I don't think that we disagree that CERN is a bad bet, but my understanding of 'real world' quantum computing and fusion is that these have in recent years become much more problems of engineering than of fundamental science, so that selling them as a possible return on investment in fundamental physics is risky. Advanced materials, biophysical insight, and quantum technology generally seem much more likely to arise as benefits from investment in multiple, cheaper, fundamental physics questions, which we probably agree is a better strategy than a new collider. 28. Whenever there is a large community representing and practicing any given highly skilled profession, there is a need to support that profession with a stable source of funding. If that highly skilled profession is deprived of that funding base, then the profession will decline and eventually die. Examples of this process are fusion physics with funding that comes from fusion reactor development, nuclear engineering with funding coming from the deployment of nuclear reactors, aerospace workers who depend on funding for rocket development and space missions, astrophysicists with the deployment of large telescopes, defense workers who depend on the funding for weapons systems development, or particle physics with the fielding of large particle accelerators. Some organization has to sign the paychecks, or the young people who are deciding what profession to devote their lives to will avoid a choice that has no funding base. By killing the funding, you kill the field. 1. Axil, in 1925, there was no field of experimental high-energy particle physics (Lawrence invented the cyclotron in 1929-30). In 1965, there was.
Somehow, we got going from scratch during those four decades. If we "kill the field" and it seems necessary to start it up again, we can do so. It might even be that the reboot would bring fresh new ideas that the old geezers overlooked. If you want to be more quantitative, you should figure in the cost of keeping the field going for decades when there is really no point to it vs. the start-up cost of starting afresh. In essence, you are making a "too big to fail" argument. Experience suggests that institutions that are "too big to fail" actually need to be put out of their misery and replaced by fresh new growth. Anyway, try refining your argument along the lines I have suggested, compare it with experience in, e.g., high-tech industry (IBM vs. Apple, RCA vs. Intel) and see where it leads. 2. When all the experts (aka old geezers) in the dying field are gone, who has the wisdom and the background to inform the decision makers that the field requires a comeback? In other fields, it is obvious that a field needs to be reactivated. Anybody can recognize that a new weapons system or energy source must be developed. But in particle physics, who is going to determine that a new particle is absolutely needed, to the tune of hundreds of billions in startup money? 3. The problem with "too-big-to-fail" is, they are too big to fail. In 2007-2009 we saw that, no matter what, these organisations will be saved. That raises the question: is CERN too big to fail? Would not building the FCC kill a whole field? Or to turn it around: does a whole field depend on one 100 km experiment? If that is the case, that field is bound to disappear anyway, so why not get it over with. I don't think that too big to fail applies to the FCC. If it does, we should under no circumstances build the FCC, because it would aggravate the situation. Too-big-to-fail is bad. Throwing money at it just to keep it alive is the worst possible solution, worse than not throwing money. Radical change must be part of any solution, and the FCC would not be the change needed. Dr. Hossenfelder, would you mind giving me a hint about IBM vs. Apple and RCA vs. Intel? I don't see any relation to the too-big-to-fail problem in the first pair and no connection between Intel and RCA. I just don't get it. 4. @Axil: We haven't been to the Moon for nearly 50 years. And yet, despite this fact, NASA is currently investing in the technology to send people to the Moon again. It will take some money to reinvent the technology, but compare that with the money it would have taken to keep sending people to the moon over the last 50 years. And would we really want to send people to the moon using 50-year-old technology today, anyway? There may be good arguments for building the next big accelerator, but if this is the best argument you can come up with, Sabine is absolutely right and it shouldn't be built. 5. Going to the moon and later to Mars is an ego trip for more than a few current politicians and multi-billionaires. Most of the Moon and Mars tech is privately funded anyway. Space tourism is about to become a money-making proposition. There are millions of Star Trek and Star Wars fans pushing the romance of space travel into the public political domain. Nobody has written sci-fi books or movies about discovering a new fundamental particle. The public does not care about particles. The concept and uses of a new fundamental particle are not tangible in the public mind.
When the level of brain power needed to understand something gets to a critical level, people just turn off. That level isn't very high. People can understand getting to the moon; they will never understand what a gaugino is, nor will they care. Particle physics has no popular constituency. The takeaway: once particle physics is dead, it will never rise again. 6. If particle physics is stopped it is possible it will not arise again. The comparison with flying astronauts to the moon is somewhat illustrative, in that the Apollo-Saturn program was worked out in 10 years, whereas trying to duplicate this effort now is taking longer and appears more difficult. As Peter indicates, though, it might revive with different methods. In particular the current approach to building particle accelerators is based on the idea Lawrence developed originally in the late 1930s and 1940s. The electric field oscillates in RF cavities to push a charged particle along. The electric field density can only be made so large. However, with a plasma you can get charge separations by optical means that have much larger field density. Other means might work as well, so we can increase the EM field density used to accelerate a particle by many orders of magnitude. In this way a putative 100 TeV accelerator might be in a ring a kilometer in diameter instead of the 60 km proposed with the FCC. It is the case that particle physics does not have the power appeal that large rockets do. Yet remember that particle physics got its early push due to the ultimate physics experiment conducted on July 16, 1945. Nuclear physics illustrated how the understanding of nuclei and particles led to this enormous power. Of course there has not been much follow-on of that sort with particle physics. The nuclear industry has stalled in ways, and the prospect of some new energy source seems remote. Maybe if we can push to higher energy we can find out whether the Higgs particle is a manifestation of the sphaleron. The sphaleron conserves baryon plus anti-lepton number and anti-baryon plus lepton number. As such this physics, if found, might permit us to directly convert matter into energy. That would be a game changer, though in the end the same sort of maniacs who got control of nuclear bombs will doubtless end up in control of this. 7. The various forms of wakefield acceleration are interesting technologies that have been studied for decades. However the beam emittance, energy spread, top energy, beam intensity, duty cycle, operational lifetime, reliability etc. that can be achieved are, multiplied together, double-digit orders of magnitude worse than the requirements of a 100 TeV collider. These radically different approaches should be, and continue to be, studied of course, but one should not believe they will present a magical lower-cost technology that would make the 100 TeV range economically accessible compared to current accelerator designs. Even the FCC is at the limit of accessible realistic technology in the 30-40 year time span. The reason why the "current approach" is so very resilient is that it is actually extremely good. 29. Optical telescopes offer an interesting analogy. The Palomar Observatory saw first light in 1949. The glass blank spent World War II cooling enough before it could be ground into a mirror. The Palomar was the largest optical telescope in the world until the mid-1980s, when new technologies, like the honeycomb active focus optics used in the Keck, were developed.
Since then a number of monster optical telescopes have been built, but there was nearly a 40-year gap. The old technology, grinding a huge glass blank that might take years to cool, had reached its limits. A new technology involving precision machining and computer controls was developed and augmented with advances in interferometry, laser guide star focusing and image processing. Starting in the 1930s, physicists developed more and more powerful particle accelerators that were basically the original frying-pan-sized cyclotron with higher magnetic fields, higher energies, other refinements and much improved sensors. The LHC is the Palomar reflector of the particle physics world. It's not as if no one did astronomy between 1949 and 1990. There were lots of big telescopes. There were all sorts of new bandwidths to explore. The universe offered a variety of surprises. You are absolutely right. Physics, even fundamental particle physics, could do a lot more with the money that would otherwise go to build a new LHC-like collider. 1. In our universe, only electrons and protons exist as stable particles that can be readily produced and artificially accelerated to the highest energies. They need to be accelerated in a vacuum to prevent them from colliding with air molecules. A magnetic field is needed to bend or focus the particles; an electric field is needed to accelerate the particles. The field should alternate at high frequency since this is the most efficient. Thus an accelerator becomes either circular or linear. This is not a matter of "more technology", it is a matter of principle. Other than these very basic principles, the 1930s cyclotrons and the superconducting LHC synchrotron have little in common. 30. 20 billion dollars is nothing on a big scale. Trump increased the military budget by 60 billion this year. Germany built an airport for 15 billion and it doesn't even work. And no country is funding this alone either, but multiple countries. Your mindless witch-hunt against particle physics has no meaning and is entirely pointless. 31. @Alex: Is that the best argument you can come up with? Out of the four top latest discoveries in high-energy physics, three have not come from large particle accelerators. (I'd say these four are neutrino mass, dark energy, gravitational waves, and the Higgs particle. Feel free to criticize this list, but note that three of these have won Nobel prizes.) Why don't we put the 20 billion into gravitational-wave detectors, neutrino experiments, and telescopes? Or at least ask the neutrino physicists, the gravitational-wave scientists, and the astronomers whether they want some of the 20 billion? (Hint: I am quite sure they can spend all of it in worthwhile ways.) 32. You really undermine the knowledge and technology that has been developed while finding the Higgs (and top quark etc.). Multiple countries are building neutrino detectors. eLISA is still going on. There's nothing to prevent a much better collider except mindless physicists who think money is an actual problem and not a marketing problem. 33. Alex: Your assertion that we're not canceling projects because of money is blatantly false. Look at the Wikipedia entry for the Overwhelmingly Large Telescope, which was canceled because a mere €1.5 billion was thought to be too expensive. You could build a dozen of them for the cost of the next-generation collider. Is the scientific value of the next collider twelve times as large as the scientific value of the Overwhelmingly Large Telescope?
Did anybody even think to do this comparison?
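As a postscript to the triple-product criterion raised in the fusion comment above: here is a minimal back-of-envelope check in Python. The threshold of about 3e21 keV·s/m³ is the commonly quoted D-T ignition figure near T ≈ 15 keV; the plasma parameters below are illustrative assumptions, not design points.

```python
# Minimal Lawson triple-product sanity check for D-T fusion.
# The threshold ~3e21 keV*s/m^3 is the commonly quoted D-T ignition figure
# near T ~ 15 keV; the example plasmas below are assumed, illustrative numbers.

IGNITION_NTT = 3.0e21  # keV * s / m^3

def triple_product(n_m3, T_keV, tau_s):
    """Density * temperature * energy confinement time."""
    return n_m3 * T_keV * tau_s

examples = {
    "tokamak-like (assumed ITER-ish numbers)": (1.0e20, 15.0, 3.0),
    "weakly confined plasma (assumed)": (1.0e20, 10.0, 0.01),
}

for name, (n, T, tau) in examples.items():
    ntt = triple_product(n, T, tau)
    verdict = "above" if ntt >= IGNITION_NTT else "below"
    print(f"{name}: nT tau = {ntt:.2e} keV*s/m^3 ({verdict} ignition threshold)")
```

This is only the zeroth-order bookkeeping the commenter asks for; it says nothing about the loss channels (neutron escape, thermal leaks) that a full first-law audit would have to include.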
Tuesday, August 19, 2014 Maldacena's bound on statistical significance When Juan Maldacena began his Strings 2014 talk, JM: Geometry and Quantum Mechanics, after so many speakers who had displayed their eloquence back in June, he was feeling like a soccer player who had to play against Argentina. ;-) The audience exploded in laughter; Juan is clearly an Argentine patriot. Despite his personal modesty on steroids, the 13-minute talk was filled with inspiring thoughts. Many of them have been discussed on this blog repeatedly. But let me focus on a rather new thing that starts to be covered around 6:00. Maldacena reminds us of the obvious and old observation that the spacetime inside the black hole interior (i.e. the lifetime and the Lebensraum of the poor infalling observers) is limited, which inevitably seems to affect the accuracy and reliability of the experiments. Such limitations are often described in terms of the usual uncertainty relations. Inside the hole, you can't measure the energy more accurately than with the \[ \Delta E = \frac{\hbar}{2\,\Delta t} \] error margin, and similarly for the momentum, and so on. But Juan chose to phrase his speculative ideas about the universal bound in a more invariant and more novel way, using the notion of entropy. A person who is falling into a black hole and wants to make a measurement must be sufficiently different from the vacuum. But after she is torn apart, hung by her balls, and destroyed (note that I am politically correct and "extra" nice to the women so I have used "she"), the space she once occupied is turned into the vacuum. The vacuum inside a black hole of a fixed mass is more generic, so the "emptying" means that the total entropy goes up. Juan says that the relative entropy is non-negative,\[ S(\rho|\rho_{\rm vac}) = \Delta K - \Delta S \geq 0 . \] Because we know that once she's destroyed at the singularity, the entropy jumps at least by her entropy, it is logical – and Juan is tempted – to interpret the life and measurements inside the black hole, and not just the fatal end, as a process in which she approaches equilibrium. So it's not possible to perform a sophisticated, accurate, and/or reliable experiment without sending something in. And if we send something in, the entropy will increase. An explicit inequality that Maldacena conjectured is the following inequality for the statistical significance:\[ p \gt \exp(-S) \] That's a formula written in the convention where the \(p\)-value is close to zero. If you prefer to talk about "\(P=\)99% certainty", you would write the same thing as\[ P \lt 1-\exp(-S) \] The certainty is smaller than 100% minus the exponential of the negative entropy, and I suppose that by \(S\), Juan only means the entropy of the object. It's still huge, which means that the statement above is very weak. The entropy of a human being exceeds \(10^{26}\) (in the dimensionless units of nats or, almost equivalently, in the less natural but more well-known bits) so the deviation from 100% is just \(\exp(-10^{26})\), which is a really small number, morally closer to the inverse googolplex than the inverse googol. There may be stronger inequalities like that. And I also suspect that many such inequalities could be applicable generally – outside the context of black hole interiors. Have you ever encountered such inequalities or proved them? Note that the \(p\)-value encoding the statistical significance is the probability of a false positive.
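To get a feeling for the magnitudes (my own back-of-envelope conversion, using the entropy figure just quoted):\[ S \sim 10^{26} \quad\Rightarrow\quad \exp(-S) = 10^{-S/\ln 10} \approx 10^{-4.3\times 10^{25}}, \] so the bound \( P \lt 1-\exp(-S) \) forbids only certainties within one part in \(10^{4.3\times 10^{25}}\) of 100%.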
If we're constrained to live in a finite-dimensional Hilbert space where all basis vectors ultimately get mixed up with each other or something else, it's probably impossible to be arbitrarily certain that your microstate isn't a particular one. But there are just \(\exp(S)\) basis vectors in the relevant Hilbert space and one of them may be right even if the "null hypothesis" holds, whatever it is. I am essentially trying to say that \(\exp(-S)\) is the minimum probability of a false positive. If someone thinks that she can formulate such comments more clearly or construct some evidence if not a conclusive proof (or proofs to the contrary), I will be very curious. If you allow me to return to the black hole interior issues: It seems to me that these "bounds on accuracy or significance" haven't played an important role in the recent firewall wars. But they're still likely to be a part of any complete picture of the black hole interior. For example, it's rather plausible that all the arguments (and instincts) directed against the state dependence violate these bounds. Juan tends to say that the rules of quantum mechanics may become approximate or inaccurate or emergent inside the black hole, and so on. He even says that "because the time is emergent inside, so is probably the whole quantum mechanics". Well, the answer may depend on which rule of quantum mechanics we exactly talk about. But quite generally, I don't believe that there can be any modification of quantum mechanics, even in the mysterious black hole interiors. In particular, the inequalities sketched by Maldacena himself might be derivable from orthodox quantum mechanics itself. And I would be repeating myself if I were arguing that ideas like ER-EPR and state dependence agree with all the postulates of quantum mechanics. Also, if we sacrifice the exact definition of time as a variable that state vectors or operators depend on – and we do so e.g. in the S-matrix description of string theory – it doesn't really mean that we deform quantum mechanics, does it? If we lose time, we no longer describe the evolution from one moment to another and we get rid of the explicit form of the Heisenberg or Schrödinger equations. But the "true core" of quantum mechanics – linearity and Hermiticity of operators, unitarity of transformation operators, and Born's rule – remains valid. What breaks down inside the black hole is the idea that exactly local degrees of freedom capture the nature of all the phenomena. But unlike locality, quantum mechanics doesn't break down. I should perhaps emphasize that even locality is only broken "spontaneously" – because the black hole geometry doesn't allow us to use the Minkowski spacetime as an approximation for the questions we want answered. 1. They're government workers. Of course 80% are going to require an operating system that was designed for mental defectives! Frankly, I'm surprised that the number is not even higher. I guess that's an indication of the partial success that the LiMux developers had in dumbing down the system to government-worker level -- a difficult task. I guess that Munich, in anticipation of the change, is transferring the budget for hiring a competent IT staff to purchasing third-party virus-protection software. "Penguins belong to the South Pole, not to European or American buildings." Except, apparently, Google datacenters. You do know that Google Web Server (which feeds this blog) runs on Linux, don't you? 2. Linux is fast and tight, Windows is pretty.
I did a 3-month calculation of a growing crystal lattice. Knoppix (booted from CD) ran 30% faster than Windows, and AMD ran 30% faster than Intel. Knoppix on AMD still ran three months - but the log-log plot of the output was longer. Past 32 Å radius it ran on blades. Theoretical slope is -2. The fun is in the intercept (smaller is better) and the bandwidth. Unix is not unfriendly, but it is selective about who its friends are. "the Linux solution is very expensive because it requires lots of custom programming." Bespoke vs. off the rack. 3. Nope, I have been using Linux for over 20 years, and I am in trouble only whenever I have to use a computer with Windows installed :-) 4. This is silly. Germany is (unlike Greece and others) a very well-functioning country with a healthy equilibrium between the commercial and government sector. So the people who work for the government are in principle the very same kind of people who work in the private sector, too. The government sector has a different way of being funded - it's stealing money from the productive citizens via the so-called "taxes" - but that doesn't really affect the work that the employees are doing there. I think that the Google web server running this blog should be moved to the South Pole, too. ;-) 5. I just cannot envision any modification of quantum mechanics whatsoever. I'll bet that Lubos is correct here. 6. "Time" is a whore concept. No reason to believe QM depends on its survival. 7. Interesting point that one cannot perform a measurement absent a source and a sink. If everything is at equilibrium, one can build a thermometer and read it, but not calibrate it to assign the output meaning. 8. Sadly, Windows taught people that (1) computers should be pretty and should be so easy a 3-year-old could use them and (2) computers should crash all the time. People expect lousy performance and don't care, as long as Facebook and Twitter come up most of the time. I don't use Windows at all now. I use open source software. I fully admit that most people have neither the training nor the ambition to do this. I pay nothing for my software and my computer works the way I want it to. I find Windows too confining. On the other hand, for those who want pretty, sparkly screens, and no thought required, Windows is the way to go. 9. OK, but having used Linux for 20 years should be classified as a medical disorder. ;-) 10. It's only strange because the "technical people" have been penetrated by anti-market zealots who suppress everyone else. It's much stranger to be a fan of such a thing. Unix is a system from the 1960s that should be as obsolete today as the cars or music from the 1960s. But it's not obsolete, especially because its modern clones have been promoted by a political movement. Unix, like Fortran and other things, should share the fate of Algol, Cobol, the Commodore 64 OS, and many other things, and go to the dumping ground of history where it has belonged for quite some time. 11. There is nothing wrong with a system being usable by a 3-year-old. Coffee machines, toasters, and vacuum cleaners have the same property. Kids are ultimately the best honest benchmarks to judge whether software is constructed naturally. When kids can learn it, it really means that an adult is spending less energy on things that could also have been made unnecessarily complicated, and that's a good thing. My Windows 7 laptop hasn't crashed for a year, since I stopped downloading ever newer graphics drivers etc.
I had freezes due to Mathematica's insane swapping to the disk - when it should say "I give up" instead - but that's a different thing. 12. "So the people who work for the government are in principle the very same kind of people who work in the private sector, too." Ah ... so can you show me the private sector equivalent, in principle, of the Potsdam Institute for Climate Impact Research? ;-) The United States also is a very well-functioning country with a healthy equilibrium between the commercial and government sector. (In fact, I would argue that the US is less socialist than Germany.) Surely, during your time in the US you must have been forced to deal with the New Jersey or Massachusetts DMV? (Here I use the generic term -- in New Jersey it's called the MVC, while in Massachusetts it's the RMV.) If not, consider yourself very fortunate. There's a little bit of Greece in every government bureaucracy. (In the US, we have to tell them not to defecate in the hall -- http://www.newser.com/story/189036/epa-to-workers-stop-pooping-in-the-hall.html -- yeah.) These are the folks who prefer a platform that is better suited for gaming, entertainment, and viruses than for getting quality work done. Hence, I agree with you, I think that Munich is leaning toward making the right decision. 13. Sure, I can. The commercial sector is literally drowning in similar šit, too. Try e.g. 14. Your taking COBOL out to the dumping ground of history may be a bit premature. It's still actively being used in blue-chip industries such as banking, insurance, and telecommunications. As far as new development goes, it's rarely (if ever) used in GUI-type applications but remains popular for high-volume back-end transaction processing in the blue-chip industries. My guess is that your recent Bank of America transactions were touched by COBOL at some point, most likely in the mission-critical application of updating your account. Not that I don't agree with your sentiment; it's just that it's incredibly difficult to get rid of. The business case for replacing existing back-end systems with a more modern platform is usually weak. 15. Keyboards and mice should theoretically be obsolete too, but after playing with tablets for a couple of years, many people are moving back to laptops and even desktops for "real work". Linux having its origins in the 1960s is not an argument at all against it. 16. LOL, right, it surely feels like the two debit cards were attempted to be sent to me by a COBOL robot. ;-) I understand it's hard to get rid of things when lots of stuff has been written in an old framework. 17. Eelco Hoogendoorn, Aug 19, 2014, 10:57:00 PM 'What I am really stunned by is the unbelievably complicated culture of installing things on Linux.' Indeed. The only thing such nonsense accomplishes is making people feel clever because they haxxored their computer with 1337 compilars. In the real world of people trying to get stuff done, such nonsense is known as a lack of encapsulation, which is simply objectively bad software design. 18. Wow, what a highly emotional and non-factual piece. I come here for science news, but the credibility of the blog just plummeted. So three-year-old user-friendliness is the main criterion for municipal desktop operating systems? Where did this criterion come from? If valid, there are several Linux distributions dedicated to three-year-olds. Dou Dou, for example. Come on Lubos, you can do better. Where is the meat (facts)? 19.
Have people who struggled with Linux run Windows computers for a long time before switching to a different operating system? Are there people who have always run Linux machines and never used Windows, but still feel unhappy about the Linux user experience? Just wondering, because my mother started using computers when she was 60 years old, and she always found Linux pretty straightforward to use. The only time she tried to use Windows she found it pretty disgusting and user-unfriendly. 20. Lubos is a theorist. All theorists use Windows, while almost all experimentalists use Linux (Scientific Linux is the official OS of Fermilab and CERN). I'll let someone else explain the reasons. 21. I think I get it already. Theorists tax the operating system as lightly as a three-year-old, whereas experimentalists need the system for real work. 22. Dear Eelco, thanks for making these observations clear with some adult terminology! ;-) 23. I think it is true to some extent and there is nothing to be ashamed of. Of course theorists often use computers in ways similar to writers (of literature), not really to compute, and they don't want to waste their time by forcing computers to do elementary things, because computers are supposed to make things simpler, not harder. Experimenters do lots of complicated things with computers, so they may sacrifice some friendliness without increasing the amount of wasted time by too high a percentage. For the Kaggle contest, I had to recreate an Ubuntu virtual machine because it seemed like the most plausible if not the only way to install software that helps one produce competitive scores. By now, someone has ported it to Windows. I would probably prefer that, but my experience with things like Visual Studio etc. is really non-existent, due to my Linux training, so the Linux path could have been easier for me due to the historical coincidences, too. 24. "it's been my point for years that the movement to spread Linux on desktop is an ideological movement" The reverse is true. Computing in the free world is subject to market forces. Linux has won hands down everywhere except for the desktop, where MS-Office-addicted persons obstruct innovation. Political and objective reasoning has placed Linux everywhere except the desktop. Grandmothers, children and some theorists have been well served on desktop Linux for a decade or more. I invite you to drill down to the objective reasons why that is. We will probably never know the truth about Munich IT management decisions, but the wider market tells a clear and dramatic story in favour of open (but profit-making) systems. If you find being called out for lack of meat obnoxious then I am sorry. This article happens to be the first protein-lacking one I have seen by you. Thank you for the Reference Frame. 25. Desktop - and increasingly often, mobile platforms - are the places where the actual work is being done and where the actual relevant features of operating systems are being tested. It's unambiguously clear that for the operating systems to do their work well, they should be profit-driven, company-protected systems. Whether the source is open or closed isn't too important. What's important is that a company has a financial interest in making it work. So Apple is doing the same thing for iOS and Google for Android that Microsoft is doing for Windows. The underlying mechanisms that make all these things usable are completely analogous and they require capitalism. 26. You call the sharing of IT ideas, architecture and open core modules "socialism".
By the same token you are a rabid socialist for openly discussing your physics theories. By all means let Apple and Microsoft tinker with buttons and pixels to accommodate the increasingly dumbed-down populations, but let the core architecture be defined by the open-source world. This massively benefits the corporate world as well as the rest of humanity, which is why the corporate world uses open solutions in one way or another. 27. Yes, I am an insane socialist, donating intellectual assets of multi-million value to others for free. But that's less unethical than forcing others to use unusable products. 28. It may be several hundred thousand generations behind the most obsolete flying saucer dimensional transfer management system in the galaxy, but .NET is the greatest thing in the known universe for sure. Do the Linux bug dwellers have anything remotely like this? I don't know since I haven't looked, but I seriously doubt it. Congratulations to the officials of Munich city who have belatedly achieved common sense. 29. Hmm, I think you have been brainwashed by Microsoft, Lubos - there are plenty of uses for Linux... even Google uses a lightly morphed version, as does Android, etc... here is a partial list of surprising adopters from Wikipedia: -- lots of free compilers as well for developers and programmers. 30. I have never communicated with Microsoft or read any of its opinions - unfortunately, I would say - so I couldn't have been "brainwashed by Microsoft". I am not saying that people aren't using all kinds of other products, and so am I. Concerning mobile OSes, I have devices with iOS, Android, as well as Windows Phone, and Android is the most expensive one. I am just warning against the political movement that is trying to force different systems upon desktop users, whose majority clearly and voluntarily prefers Microsoft Windows, as the market conditions unambiguously show. 31. Unlike benchtop chemistry and biology, physics can be mostly taught online, with engineers later being hired to do experiments. I sure would like Lubos to join an online university to create video lectures, at both advanced and entry-level physics. 32. Honest question: What's so great about it? Can you explain or give an example? Thanks. 33. I have to say that I fail to see the Linux world as some sort of sinister cabal that is forcing innocents to use unusable systems. Look at the Linux desktop market share, and you can at least say that they have failed. Windows is great for Microsoft-style word processing and spreadsheets. Perhaps it's even OK for TeX/LaTeX, if there's a decent and easy-to-install distribution for it (I know there is one for OSX, not sure about Windows). Linux seems popular for scientific computing, and where such users want a more polished and easy-to-use system for their work laptop/desktop, they choose OSX, which gives you Unix underneath and a polished user interface on top. That's why a progressive household would have all three operating systems on its computers. I know mine does. :) 34. OT: Which reminds me ... I'm feeling nostalgic. It's many decades since every other word in those horrible computer trade magazines seemed to be about the 'goto' statement and 'spaghetti code'. Now all is silent - as far as I know anyway. Oh, how I miss the tedium of it all! Anyone care to rekindle the exquisite ennui? Hey, how about a discussion on punched cards versus paper tape?
:) Incidentally, as far as operating systems go, I mostly use Windows simply because, reluctantly, that was all that was made available to me at one point (more accurately it started with that awful DOS), but I got used to it and I can do all I need to do with it. But most of all I use it these days because I'm buggered if I'm going to spend any time looking up the kind of stuff that I lost interest in and forgot about years ago just to make a change for the sake of Greater F#cking Spartan. Also, VBA behind Excel can be very handy for a quickie, a little like a fast shag behind the bicycle shed. Just the ticket sometimes. :) P.S. Many years ago, but again long past my interest date, I surprised myself by reading Bjarne Stroustrup's book on the genesis of C++ (I forget the title) and found it fascinating. I'm pretty sure I'm fully cured now though. :) 35. I just noticed that Microsoft is currently in the process of shifting its German operational center to München-Schwabing. Now that they are becoming a big taxpayer over there, it seems inconvenient for the municipal government to run on Linux. After all, Linux won't finance any pleasure ('amigo') trips for the local politicians; Microsoft perhaps will ... 36. Absent a source and a sink of time... everything happens? Or nothing happens? The event horizon is when happening stops? Can entropy be static? 37. "Suggestions the council has decided to back away from Linux are wrong, according to council spokesman Stefan Hauf." Some meat: 38. Dear FlanObrien, the committee to review the computing in the city was probably set up by the executive power in the city, which is why one should also respect the executive's interpretation of why it was done, and not only the council's. 39. Believing the world should run on the level of three-year-olds is really very disturbing. It may also explain why social media has become more and more juvenile over time. I figure if you need pretty pictures and shiny baubles, you're not really looking for a computer. More like an electronic playmate. It's interesting that your Vista computer worked so well. Mine crashed, despised the peripherals (all of which I replaced) and drove me to buy an Apple to escape the Microsoft curse. Maybe I just really use my computer more than most and expect it to function like I want it to, not like a three-year-old wants it. I'm a grown-up now. I want a grown-up computer.
Viewpoint: A New Twist on Relativistic Electron Vortices • Hugo Larocque, Department of Physics, University of Ottawa, 25 Templeton St., Ottawa, Ontario K1N 6N5, Canada • Ebrahim Karimi, Department of Physics, University of Ottawa, 25 Templeton St., Ottawa, Ontario K1N 6N5, Canada Physics 10, 26 APS/Hugo Larocque/Ebrahim Karimi Figure 1: Artistic rendering of a nonrelativistic electron vortex (left) and a relativistic electron vortex (right), in which probability-density currents (thin red and purple lines, respectively) rotate around the propagation axis. In nonrelativistic electron vortices, the spin angular momentum (thin blue lines) and orbital angular momentum (red helical surfaces) components of the electron's wave function are independent, and the probability-density current comes entirely from the electron's orbital angular momentum. Bialynicki-Birula and Bialynicka-Birula [1] and Barnett [2] determined wave functions that can be used to construct a relativistic electron vortex. In this case, the electron's orbital and spin angular momenta are coupled, and the probability-density currents must instead be attributed to the electron's total (orbital plus spin) angular momentum (purple cloud). Vortices are rotational currents around a stagnant point. They can appear in everyday contexts, like a whirlpool in a river. They can also form in a purely quantum entity like a free electron, where what rotates is the probability-density current associated with the particle's quantum-mechanical wave function. Electron vortices have several applications: they can be used to probe nanoscale magnetic materials and to manipulate nanoparticles. However, a fundamental question is whether a vortex can be formed from an electron at relativistic energies. Two papers, one by Iwo Bialynicki-Birula from the Center for Theoretical Physics in Warsaw, Poland, and Zofia Bialynicka-Birula from the Institute of Physics, also in Warsaw [1], the other by Stephen Barnett from the University of Glasgow, UK [2], address this question by developing a mathematical framework to describe relativistic electron vortices. Their approaches are different, and they reach differing opinions about whether a true vortex can be made from a relativistic electron, as experiments have claimed. An electron vortex is essentially a sum of electron plane waves, or a wave packet, with a special shape: a central null point surrounded by a coiling probability-density current. For nonrelativistic electrons, the vortex can be made with a wave packet that has a helical wave front and thus appears to spiral like a corkscrew (Fig. 1, left). In general, wave fronts consisting of ℓ intertwined helices, where ℓ is an integer, are said to have a vortex of strength ℓ. In most (but not all) cases, the helical wave front can be directly related to the orbital angular momentum carried by the electron [3]. Moreover, the orbital and spin angular momentum of nonrelativistic electrons are independently conserved because the two momenta are not coupled in the Hamiltonian that describes free electrons.
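To visualize this separable, nonrelativistic case, here is a minimal numerical sketch (our own illustration with an assumed Gaussian-type radial profile, not taken from either paper): the scalar part of the wave function is f(r)e^(iℓφ), which has a 2πℓ phase winding around the axis and a null point at the center.

```python
import numpy as np

# Sketch of a nonrelativistic electron vortex of strength l:
# psi(r, phi) = f(r) * exp(i * l * phi), with an assumed Gaussian-type f(r).
l = 1
x = np.linspace(-5.0, 5.0, 201)
X, Y = np.meshgrid(x, x)
R, PHI = np.hypot(X, Y), np.arctan2(Y, X)

psi = R**abs(l) * np.exp(-R**2 / 2.0) * np.exp(1j * l * PHI)
prob = np.abs(psi) ** 2

print("density on the axis:", prob[100, 100])   # null point at r = 0
print("peak density off axis:", prob.max())     # the coiling current surrounds it
```

The spin part would simply multiply this scalar profile by a constant spinor, and it is exactly this factorization that fails in the relativistic treatment discussed next.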
That decoupling means an electron vortex can be described as a product of two wave functions, one that carries information about the electron's orbital angular momentum and defines the vortex, and a second that carries information about the electron's spin. The formalism for describing nonrelativistic electron vortices was established in 2007 [4]. This led to a surge in experimental efforts to generate electron vortex beams, with researchers using devices such as spiral phase plates [5] and pitchfork holograms [6, 7] to impart a helical wave front, and thereby a nonrelativistic vortex, onto the electron wave packet. These (ultimately successful) efforts were largely motivated by an interest in using the beams as nanoscale magnetic probes [8]. But they also prompted other ideas, such as using the electron's orbital angular momentum to generate spin-polarized electron beams [9] and employing the electron's coiling probability-density currents to nondestructively measure its orbital angular momentum [10]. Our discussion so far pertains to electron wave packets built from wave functions that are solutions to the Schrödinger equation, which only describes nonrelativistic particles. To describe relativistic electron vortices, one must instead turn to the considerably more difficult-to-solve Dirac equation. Researchers first attempted to do this in 2011 [11], finding that the vortex of a relativistic electron has new features compared to that of its low-speed counterpart. Namely, because spin is directly built into the Dirac equation, there is an intrinsic coupling between the electron's spin and its orbital angular momentum, and these two quantities are no longer separately conserved. However, a limitation of this early work with relativistic electron vortices was that it entailed expressing the electron wave functions in terms of Bessel functions, which extend to infinity and don't reflect the finite nature of real electrons. In the two new papers, the Bialynicki-Birula duo and Barnett go beyond previous work by looking for finite wave-packet vortex solutions to the Dirac equation. But exactly how they do this varies between the two studies. The Bialynicki-Birula pair constructed helical solutions to the Klein-Gordon equation, a relativistic extension of the Schrödinger equation that is simpler to solve than the Dirac equation. The researchers then transformed these solutions into spinor wave functions, which they assembled to form solutions to the Dirac equation. Once they had these solutions, they derived expressions with helical wave fronts, which they used to construct finite wave-packet states that converge to standard vortex wave packets [4] in the nonrelativistic limit. In contrast, Barnett used the so-called Foldy-Wouthuysen transformation of the Dirac equation, which describes the electron in its rest frame. Following a series of fairly common approximations, he arrived at a form of the equation that could be more easily solved to find an expression for the electron vortices. He then transformed this expression so that it could be represented in an observer's rest frame. In addition, he transformed the spin and orbital angular momentum operators to the observer's rest frame and found two angular-momentum-like quantities that are separately conserved. The wave packets described in the two papers both have the quintessential structure of vortices. And in agreement with [11], they both display forms of spin-orbit coupling.
Moreover, they also have a coiling probability-density current about the wave's propagation axis, thus confirming the presence of an entity similar to a vortex. However, Bialynicki-Birula and Bialynicka-Birula additionally calculated the electron's vorticity along the propagation axis (an indicator of the vortex's strength) and found that the spin and orbital contributions to the vorticity canceled one another out. In simpler terms, this means there is no null point in the center of the vortex. As such, they question whether it's possible to construct vortices from relativistic electrons. If correct, their conclusions might affect the interpretation of experiments that have used vortices made from relativistic electrons. Regardless of which study [1, 2] presents the right picture, the new results aren't likely to affect practical applications of electron vortices. That's because applications primarily rely on the strength of the coiling probability-density current in the wave packet, and both papers agree that this can still be high at relativistic energies. Regardless of the practical implications, the question of whether it's possible to make relativistic electron vortices is of fundamental interest, as it concerns the nature of vortices themselves. This in itself could motivate experimental work to expand on the claims of both papers. This research is published in Physical Review Letters. 1. I. Bialynicki-Birula and Z. Bialynicka-Birula, "Relativistic Electron Wave Packets Carrying Angular Momentum," Phys. Rev. Lett. 118, 114801 (2017). 2. S. M. Barnett, "Relativistic Electron Vortices," Phys. Rev. Lett. 118, 114802 (2017). 3. M. V. Berry, "Paraxial Beams of Spinning Light," Proc. SPIE Int. Conf. on Singular Optics 3487, 6 (1998). 4. K. Y. Bliokh, Y. P. Bliokh, S. Savel'ev, and F. Nori, "Semiclassical Dynamics of Electron Wave Packet States with Phase Vortices," Phys. Rev. Lett. 99, 190404 (2007). 5. M. Uchida and A. Tonomura, "Generation of Electron Beams Carrying Orbital Angular Momentum," Nature 464, 737 (2010). 6. J. Verbeeck, H. Tian, and P. Schattschneider, "Production and Application of Electron Vortex Beams," Nature 467, 301 (2010). 7. B. J. McMorran, A. Agrawal, I. M. Anderson, A. A. Herzing, H. J. Lezec, J. J. McClelland, and J. Unguris, "Electron Vortex Beams with High Quanta of Orbital Angular Momentum," Science 331, 192 (2011). 8. J. Harris, V. Grillo, E. Mafakheri, G. C. Gazzadi, S. Frabboni, R. W. Boyd, and E. Karimi, "Structured Quantum Waves," Nat. Phys. 11, 629 (2015). 9. E. Karimi, L. Marrucci, V. Grillo, and E. Santamato, "Spin-to-Orbital Angular Momentum Conversion and Spin-Polarization Filtering in Electron Beams," Phys. Rev. Lett. 108, 044801 (2012). 10. H. Larocque, F. Bouchard, V. Grillo, A. Sit, S. Frabboni, R. E. Dunin-Borkowski, M. J. Padgett, R. W. Boyd, and E. Karimi, "Nondestructive Measurement of Orbital Angular Momentum for an Electron Beam," Phys. Rev. Lett. 117, 154801 (2016). 11. K. Y. Bliokh, M. R. Dennis, and F. Nori, "Relativistic Electron Vortex Beams: Angular Momentum and Spin-Orbit Interaction," Phys. Rev. Lett. 107, 174802 (2011). About the Authors Hugo Larocque is a graduate student in physics at the University of Ottawa. His current research interests center on the applications of structured optical and electron waves in quantum technologies, including the conception and implementation of devices that can generate and detect such waves.
Ebrahim Karimi is an Assistant Professor of Physics at the University of Ottawa, Canada, and an Adjunct Professor at the Institute for Advanced Studies in Basic Sciences in Iran. He received his Ph.D. from the University of Naples Federico II, Italy, where he studied the fundamentals and applications of angular momentum in photonics systems. He holds the Canada Research Chair in the field of structured light. His research focuses on experimental and theoretical studies of structured quantum waves, such as photon and electron waves, and their applications in quantum communication, quantum computation, and materials science.
Microwave ionization of hydrogen atoms From Scholarpedia Dima Shepelyansky (2012), Scholarpedia, 7(1):9795. doi:10.4249/scholarpedia.9795 Curator: Dima Shepelyansky Figure 1: Fraction of ionized hydrogen atoms as a function of microwave frequency \( \omega_0 \), measured in units of level spacing for the initially excited level \( n_0=66 \); data are obtained by numerical simulations of classical (circles) and quantum (crosses) evolution; the one-photon ionization threshold is marked by \( \omega_\phi \) (from [3]) Microwave ionization of hydrogen atoms is a process of electron ionization of excited hydrogen atoms by an electromagnetic microwave field when tens or hundreds of photons are required to ionize one electron. Even if the microwave field is relatively weak, this multiphoton ionization is much more efficient than direct one-photon ionization at high photon energies (see Fig. 1). Such rapid ionization happens due to a diffusive growth of the electron energy generated by dynamical chaos in the classical system. Quantum effects can suppress this diffusion with the emergence of photonic localization, which is similar to the Anderson localization in disordered solid-state systems. The diffusive photoeffect was first observed in the experiments of Bayfield and Koch (1974) [1], which happened to be the first experiments performed in a regime of quantum chaos. The quantum effects of photonic localization were first observed by the group of Koch (1988) [2]. System description The evolution of the classical system is governed by the Hamiltonian \[ H({\mathbf p}, {\mathbf r})= {\mathbf p}^2/2-1/|{\mathbf r}| - {\mathbf \epsilon} \cdot {\mathbf r} \, \cos{\omega t}, \] written in canonical momentum-coordinate variables in atomic units, with \( \epsilon, \omega \) being the amplitude and frequency of the microwave field. The quantum evolution is described by the corresponding Schrödinger equation. The classical dynamics depends only on the rescaled variables \( \epsilon_0=\epsilon\, {n_0}^4, \ \omega_0=\omega\, {n_0}^3\), where \( n_0 \) is the principal quantum number of the initially excited level. For ionization an electron should absorb \( N_I=n_0/(2\omega_0) \) photons. For typical experimental conditions with a microwave field of 10 GHz and \( n_0 =66 \) [1], \( \omega_0 \approx 0.43 \), so that about 76 photons are required to ionize one atom. Strong ionization takes place at a relatively weak field amplitude, several times smaller than the static-field ionization threshold \( \epsilon_0 = 0.13 \). Kepler map Figure 2: Poincaré section of the Kepler map at \( \epsilon_0=0.04, \omega_0=3 \) with rescaled electron energy \( E_0=\omega N n_0^2 \) and microwave phase taken at perihelion. The classical dynamics of the one-dimensional atom is well described by the Kepler map \[ {\bar N} = N + k \sin \phi, \;\; {\bar \phi} = \phi +2\pi \omega (-2\omega {\bar N})^{-3/2} , \] where \( N=-1/(2\omega n^2) \) is the number of photons at a given electron energy, \( \phi \) is the phase of the microwave field at the perihelion of the electron orbit, bars mark the new values of the canonically conjugate variables after one orbital period, and \( k=2.58 \epsilon/\omega^{5/3}\) is the dimensionless kick amplitude, corresponding to the energy change generated by the microwave field during one orbital period. This symplectic map is valid for \( \omega_0 \geq 1 \). The second equation can be locally linearized giving \( {\bar \phi}= \phi+T{\bar N} \) with \( T= 6\pi \omega^2 n_0^5\).
This local map is equivalent to the Chirikov standard map with the chaos parameter \( K=kT = \epsilon_0/\epsilon_c\). An example of the Poincaré section of the phase space is shown in Fig. 2; here the electron energy \( E_0=\omega N n_0^2 \), expressed via the number of photons \( N \), is plotted against the phase of the microwave field \( \phi=\omega t \) at the moment when the electron passes through the perihelion. Locally, the dynamics in the phase space can be described by the Chirikov standard map, with global chaos and diffusion appearing at \( K=\epsilon_0/\epsilon_c >1 \). As a result the chaotic diffusive ionization takes place above the chaos border [4],[5],[6] \[ \epsilon_0 > \epsilon_c \approx 1/(49 \omega_0^{1/3}) . \] Above the chaos border the classical chaos leads to a rapid diffusive ionization whose rate is significantly larger than that of direct one-photon ionization, as is clearly seen in Fig. 1. This diffusion proceeds through a sequence of absorptions and emissions of photons in which \( N \) changes by \( k \sin \phi \) at each orbital period of the electron. Chirikov localization of photonic transitions Figure 3: The Chirikov localization of photonic transitions: exponential probability distribution as a function of the number of absorbed photons \( N_\phi=N_I-1/(2\omega n^2) \) for \( n_0=100, \epsilon_0=0.04, \omega_0=3 \); open circles give the probability in a one-photon interval, full circles are obtained from the quantum Kepler map, the straight line shows exponential localization. The photonic transitions are characterized by classical diffusion with the diffusion coefficient \( D \approx k^2/2 \). Under certain conditions the quantum interference effects lead to the Chirikov (or dynamical) localization of this diffusion. As for the quantum Chirikov standard map, the localization length counted in the number of photons is given by the diffusion rate [5],[6],[7] \[ \ell_\phi \approx D \approx k^2/2 = 3.33 \epsilon^2/\omega^{10/3} . \] The localization length can also be expressed in another form, useful for a generic system with level density \( \rho \) excited by a monochromatic field of strength \( \epsilon \) and one-photon transition matrix element \( \mu \): \[ \ell_\phi \approx 2 \pi^2 \epsilon^2 \mu^2 \rho^2 . \] The above expressions are valid for \( \rho \omega > 1, \ell_\phi>1\). For the case of the hydrogen atom we have \( \rho=n^3, \mu = 0.41 /(\omega^{5/3} n^3) \). This localization is similar to the Anderson localization in disordered quasi-one-dimensional systems, with the important difference that here we have a purely dynamical system without any disorder. Due to localization the excitation probability on high levels drops exponentially, \( f_N =\langle|\psi|^2\rangle \propto \exp(-2|N-N_0|/\ell_\phi) \), as is well seen in the results of the numerical simulations shown in Fig. 3. The excitation process is approximately described by the quantum Kepler map \[ {\bar \psi}=e^{-i{\hat H}_0}{\hat P} e^{-ik \cos \phi}\psi , \] where \( {\hat H}_0=2\pi[-2\omega(N_0+{\hat N}_\phi)]^{-1/2}, N_0=-1/(2 \omega n_0^2), [N_\phi,\phi]=-i \) and \( {\hat P} \) is the projection operator on coupled states [6].
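The classical map above is simple enough to iterate directly. The following short Python sketch (our illustration, not the original codes of [3],[5],[6]) follows an ensemble of one-dimensional orbits under the Kepler map and counts the fraction that reach positive energy (N ≥ 0), i.e., ionize; the parameters are chosen near those of Fig. 2.

```python
import numpy as np

# Iterating the classical Kepler map (atomic units), near the parameters of Fig. 2.
n0, omega0, eps0 = 100, 3.0, 0.04
omega = omega0 / n0**3
eps = eps0 / n0**4
k = 2.58 * eps / omega**(5.0 / 3.0)               # dimensionless kick amplitude

rng = np.random.default_rng(1)
N = np.full(2000, -1.0 / (2.0 * omega * n0**2))   # photon variable, N = -1/(2 w n^2)
phi = rng.uniform(0.0, 2.0 * np.pi, N.size)       # field phase at perihelion
alive = np.ones(N.size, dtype=bool)               # not yet ionized

for _ in range(3000):
    N[alive] += k * np.sin(phi[alive])
    alive &= N < 0.0                              # N >= 0 means E >= 0: ionized
    phi[alive] += 2.0 * np.pi * omega * (-2.0 * omega * N[alive]) ** (-1.5)

print(f"k = {k:.2f}, ionized fraction after 3000 periods: {1 - alive.mean():.3f}")
```

Since \( \epsilon_0 = 0.04 \) lies well above the chaos border \( \epsilon_c \approx 1/(49\cdot 3^{1/3}) \approx 0.014 \), most of the ensemble ionizes diffusively within a few thousand orbital periods; lowering \( \epsilon_0 \) below \( \epsilon_c \) keeps the ensemble frozen near the initial energy.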
Figure 4: Scaled 10% ionization threshold field \( \epsilon_0 \) versus scaled microwave frequency \( \omega_0 =\omega n_0^3\): experimental results of the Koch group, taken from [2], are shown by circles, results of numerical simulations with the quantum Kepler map are shown by full circles; here \( \omega/2\pi = 36.02 \) GHz and the initial level changes from 45 to 80; the dashed curve shows the quantum delocalization border for the experimental conditions, the dotted curve shows the classical chaos border; no fit parameters (from [8]). The quantum delocalization border is reached when the localization length becomes larger than the number of photons required to ionize one electron, \( \ell_\phi > N_I \), which gives \[ \epsilon_0 > \epsilon_q \approx \omega_0^{7/6}/(6.6 n_0)^{1/2} \approx 0.4 \omega^{1/6}\omega_0 . \] At fixed microwave frequency the delocalization border grows with an increase of the initially excited level, since \( \epsilon_q \propto \omega_0 \propto n_0^3 \). This theoretical prediction was confirmed in the experiments of the Koch group [2]. The experimental results are in good agreement with the numerical simulations based on the quantum Kepler map, as shown in Fig. 4 from [8]. Further experiments of the Bayfield group with hydrogen atoms [9] and the Walther group with Rydberg atoms [10] also confirmed the predictions of the photonic localization theory. Ionization of three-dimensional atoms The above theory is based on a one-dimensional approximation of the excited states of the hydrogen atom. In fact, as explained in [6], this approximation describes well the ionization process of real three-dimensional atoms if the initial orbital number \( l < (3/\omega)^{1/3} \). The physical origin of the Kepler map validity is related to the Kepler degeneracy of the hydrogen atom, due to which the dynamics in the orbital momenta and conjugate phases is adiabatically slow and does not significantly influence the rapid dynamics in the \( N,\phi \) variables. Chirikov localization and Anderson localization The phenomenon of Anderson localization (1958) appears in disordered solids when a diffusive spreading in space, existing for classical trajectories, becomes exponentially localized due to quantum interference effects (a detailed description can be found in the internal references and recommended reading). In systems of dynamical chaos there is no disorder, but the classical diffusive spreading appears due to chaos. In a way similar to the Anderson localization, this chaotic diffusion can be localized by quantum interference effects. This localization of dynamical quantum chaos is known in the literature as dynamical or Chirikov localization. This term stresses the dynamical origin of the phenomenon, emerging in the absence of any disorder. Here we discuss the appearance of Chirikov localization for the microwave ionization of hydrogen atoms. More examples of this phenomenon are given in the internal references below. Stabilization in strong fields At very strong fields the classical dynamics of the three-dimensional atom becomes stable and ionization is suppressed. This stabilization regime exists for fields in the range \( 10 \omega/(|m|+1) < \epsilon < 20 \omega^2 n_0^2/(|m|+1)^2 \), where \( m \) is the initial magnetic quantum number [11]. The analogies between this stabilization, the Kapitsa pendulum and the channeling of charged particle beams in crystals are discussed in [11]. The stabilization theory still has not been verified experimentally.
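It is instructive to evaluate the two borders numerically for conditions close to the Koch experiment of Fig. 4. Below is a small sketch (our own illustration; the 36.02 GHz frequency is from the figure caption, and the conversion uses the standard atomic unit of angular frequency, 4.134e16 rad/s):

```python
import math

# Classical chaos border and quantum delocalization border (formulas above)
# for a 36.02 GHz field, as in Fig. 4. Valid for omega0 >~ 1 (Kepler map regime).
OMEGA_AU = 2.0 * math.pi * 36.02e9 / 4.134e16   # field frequency in atomic units

for n0 in (60, 66, 80):
    omega0 = OMEGA_AU * n0**3
    eps_c = 1.0 / (49.0 * omega0 ** (1.0 / 3.0))         # classical chaos border
    eps_q = omega0 ** (7.0 / 6.0) / math.sqrt(6.6 * n0)  # delocalization border
    print(f"n0 = {n0}: omega0 = {omega0:.2f}, eps_c = {eps_c:.4f}, eps_q = {eps_q:.4f}")
```

For these levels \( \epsilon_q \) lies well above \( \epsilon_c \), so there is a window of field strengths in which the classical dynamics is already chaotic yet the quantum excitation remains localized; this is the window probed in [2].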
Related physical systems

• Ionization of chaotic Rydberg atoms: A hydrogen atom is an integrable system, and a microwave field should be relatively strong to induce chaotic diffusive ionization. However, it is possible to have atoms which are chaotic in the absence of a microwave field: these can be hydrogen or Rydberg atoms in a magnetic field, or Rydberg atoms in a static electric field. For such atoms a diffusive excitation can take place even when the microwave frequency is significantly smaller than the Kepler frequency, so that hundreds of photons are required to ionize one electron. The properties of photonic localization for such atoms are analyzed in [12]. A similar type of photonic localization appears for microwave excitation of noninteracting electrons in metallic quantum dots of micron size, which can be viewed as artificial Rydberg atoms; the theory of photonic localization in such systems is described in [13].

• Chaotic autoionization of molecular Rydberg states: For Rydberg states of a molecule there is a coupling between the rotation of the dipole moment of the charged molecular core and the excited states of the electron. This coupling can be described in the framework of the Kepler map, which under certain conditions leads to a diffusive autoionization of molecular Rydberg states, as described in [14].

• Capture of dark matter by the Solar System: The capture of dark matter particles scattering on the Sun in the presence of a rotating planet is a process inverse to ionization. This capture process is also described by a simple map similar to the Kepler map. The energies of particles which can be captured and the capture cross-section are analytically determined in [15]. According to these results the cross-section diverges inversely with the particle energy and is much larger than the area of the planetary orbit. The properties of dark matter chaos in the Solar System are analyzed in [16].

Historical notes

The striking experiments of Bayfield and Koch [1], done at Yale in 1974, remained a theoretical puzzle for a long time. In 1978 Delone, Zon and Krainov [17] proposed to describe the excitation process by a diffusion equation in energy, using the random phase approximation and assuming that this approach is valid at \( \epsilon > 1/n^5 \), when the field perturbation becomes larger than the level spacing. This gave a correct estimate of the diffusion rate and ionization time scale but a wrong ionization border, which goes to zero in the semiclassical limit. Independently, in 1978 Leopold and Percival [18] performed numerical simulations of the classical dynamics showing that such an approach gives a fraction of ionized atoms close to the experimental values. However, the discussion of dynamical chaos as the origin of strong ionization appeared only in the work of Meerson, Oks and Sasorov in 1979 [19], where they gave a correct estimate of the ionization threshold on the basis of the Chirikov criterion of resonance overlap for the case \( \omega_0 \sim 1 \). The dependence of the chaos border on frequency was found in [4]. The first signatures of quantum suppression of classical diffusive excitation were found in [20] and later in advanced studies [21]. The numerical codes for quantum evolution developed in [20] were used in [3],[5],[6],[9],[21],[22]. The analytical theory of photonic localization was developed in [5],[6],[21],[22] and was confirmed by extensive numerical simulations performed in these works.
The first experimental confirmations of this theory were obtained by the Koch group [2] and later by the groups of Bayfield [9] and Walther [10]. The description of ionization by the classical and quantum Kepler map was developed in [6],[22]. The process of microwave ionization was also studied by Jensen [23] and by Blumel and Smilansky [24]. The classical Kepler map for hydrogen ionization was also obtained in [25]; for comet dynamics in the Solar System this map was derived by Petrosky [26], and a map description of the dynamics of comet Halley was developed by Chirikov and Vecheslavov [27]. Additional references and results for microwave ionization of excited atoms can be found in the reviews [28],[29],[30]. More recent experiments on microwave ionization of Rydberg atoms are presented in [31]. The term Chirikov localization was introduced in [32] to honor the pioneering contribution of Boris Chirikov to the discovery and investigation of this phenomenon.

References

• [1] Bayfield, J.E. and Koch, P.M. (1974). Multiphoton ionization of highly excited hydrogen atoms, Phys. Rev. Lett. 33: 258. doi:10.1103/physrevlett.33.258.
• [2] Galvez, E.J.; Sauer, B.E.; Moorman, L.; Koch, P.M. and Richards, D. (1988). Microwave ionization of H atoms: breakdown of classical dynamics for high frequencies, Phys. Rev. Lett. 61: 2011. doi:10.1103/physrevlett.61.2011.
• [3] Casati, G.; Chirikov, B.V.; Guarneri, I. and Shepelyansky, D.L. (1986). New photoelectric ionization peak in the hydrogen atom, Phys. Rev. Lett. 57: 823. doi:10.1103/physrevlett.57.823.
• [4] Delone, N.B.; Krainov, V.P. and Shepelyansky, D.L. (1983). Highly excited atom in electromagnetic field, Sov. Phys. Uspekhi 26: 551. doi:10.1070/pu1983v026n07abeh004445.
• [5] Casati, G.; Chirikov, B.V.; Shepelyansky, D.L. and Guarneri, I. (1987). Relevance of classical chaos in quantum mechanics: the hydrogen atom in a monochromatic field, Phys. Rep. 154: 77. doi:10.1016/0370-1573(87)90009-3.
• [6] Casati, G.; Guarneri, I. and Shepelyansky, D.L. (1988). Hydrogen atom in monochromatic field: chaos and dynamical photonic localization, IEEE J. Quant. Elect. 24: 1420. doi:10.1109/3.982.
• [7] Shepelyansky, D.L. (1987). Localization of diffusive excitation in multi-level systems, Physica D 28: 103. doi:10.1016/0167-2789(87)90123-0.
• [8] Casati, G.; Guarneri, I. and Shepelyansky, D.L. (1990). Classical chaos, quantum localization and fluctuations: a unified view, Physica A 163: 205. doi:10.1016/0378-4371(90)90330-u.
• [9] Bayfield, J.E.; Casati, G.; Guarneri, I. and Sokol, D.W. (1989). Localization of classically chaotic diffusion for hydrogen atoms in microwave fields, Phys. Rev. Lett. 63: 364. doi:10.1103/physrevlett.63.364.
• [10] Arndt, M.; Buchleitner, A.; Mantegna, R.N. and Walther, H. (1991). Experimental study of quantum and classical limits in microwave ionization of rubidium Rydberg atoms, Phys. Rev. Lett. 67: 2435. doi:10.1103/physrevlett.67.2435.
• [11] Shepelyansky, D.L. (1994). Kramers map approach for stabilization of hydrogen atom in a monochromatic field, Phys. Rev. A 50: 575. doi:10.1103/physreva.50.575.
• [12] Benenti, G.; Casati, G. and Shepelyansky, D.L. (1999). Chaotic enhancement in microwave ionization of Rydberg atoms, Eur. Phys. J. D 5: 311. doi:10.1007/s100530050261.
• [13] Prosen, T. and Shepelyansky, D.L. (2005). Microwave control of transport through a chaotic mesoscopic dot, Eur. Phys. J. B 46: 515. doi:10.1140/epjb/e2005-00282-4.
• [14] Benvenuto, F.; Casati, G. and Shepelyansky, D.L. (1994). Chaotic autoionization of molecular Rydberg states, Phys. Rev. Lett. 72: 1818. doi:10.1103/physrevlett.72.1818.
• [15] Khriplovich, I.B. and Shepelyansky, D.L. (2009). Capture of dark matter by the Solar System, Int. J. Mod. Phys. D 18(12): 1903. doi:10.1142/s0218271809015758.
• [16] Lages, J. and Shepelyansky, D.L. (2013). Dark matter chaos in the Solar System, Mon. Not. Roy. Astron. Soc. Lett. 430: L25. doi:10.1093/mnrasl/sls045.
• [17] Delone, N.B.; Zon, B.A. and Krainov, V.P. (1978). Diffusion mechanism of ionization of highly excited atoms in an alternating electromagnetic field (Zh. Eksp. Teor. Fiz. 75: 445 (1978)), Sov. Phys. JETP 48: 223.
• [18] Leopold, J.G. and Percival, I.C. (1978). Microwave ionization and excitation of Rydberg atoms, Phys. Rev. Lett. 41: 944. doi:10.1103/physrevlett.41.944.
• [19] Meerson, B.I.; Oks, E.A. and Sasorov, P.V. (1979). Stochastic instability of an oscillator and the ionization of highly-excited atoms under the action of electromagnetic radiation (Pis'ma Zh. Eksp. Teor. Fiz. 29: 79 (1979)), Sov. JETP Lett. 29: 72.
• [20] Shepelyansky, D.L. (1983). Quantum diffusion limitation at excitation of Rydberg atom in variable field, Preprint Inst. Nuclear Physics 83-61, Novosibirsk (also in Proc. Int. Conf. on Quantum Chaos (Como 1983), Plenum, p. 187 (1985)).
• [21] Casati, G.; Chirikov, B.V. and Shepelyansky, D.L. (1984). Quantum limitations for chaotic excitation of hydrogen atom in monochromatic field, Phys. Rev. Lett. 53: 2525. doi:10.1103/physrevlett.53.2525.
• [22] Casati, G.; Guarneri, I. and Shepelyansky, D.L. (1987). Exponential photonic localization for hydrogen atom in a monochromatic field, Phys. Rev. A 36: 3501. doi:10.1103/physreva.36.3501.
• [23] Jensen, R.V. (1982). Stochastic ionization of surface-state electrons, Phys. Rev. Lett. 49: 1365. doi:10.1103/physrevlett.49.1365.
• [24] Blumel, R. and Smilansky, U. (1989). Ionization of excited hydrogen atoms by microwave fields: a test case for quantum chaos, Phys. Scripta 40: 386.
• [25] Gontis, V. and Kaulakys, B. (1987). Stochastic dynamics of hydrogenic atoms in the microwave field: modelling by maps and quantum description, J. Phys. B: At. Mol. Opt. Phys. 20: 5051. doi:10.1088/0022-3700/20/19/016.
• [26] Petrosky, T.Y. (1986). Chaos and cometary clouds in the Solar System, Phys. Lett. A 117: 328. doi:10.1016/0375-9601(86)90673-0.
• [27] Chirikov, B.V. and Vecheslavov, V.V. (1989). Chaotic dynamics of comet Halley, Astron. Astrophys. 221: 146.
• [28] Jensen, R.V.; Susskind, S.M. and Sanders, M.M. (1991). Chaotic ionization of highly excited hydrogen atoms: comparison of classical and quantum theory with experiment, Phys. Rep. 201: 1. doi:10.1016/0370-1573(91)90113-z.
• [29] Koch, P.M. and van Leeuwen, K.H.A. (1995). The importance of resonances in microwave “ionization” of excited hydrogen atoms, Phys. Rep. 256: 289. doi:10.1016/0370-1573(94)00093-i.
• [30] Buchleitner, A.; Delande, D. and Zakrzewski, J. (2002). Non-dispersive wave packets in periodically driven quantum systems, Phys. Rep. 368: 409. doi:10.1016/s0370-1573(02)00270-3.
• [31] Gurian, J.H.; Overstreet, K.R.; Maeda, H. and Gallagher, T.F. (2010). Connecting field ionization to photoionization via 17- and 36-GHz microwave fields, Phys. Rev. A 82: 043415. doi:10.1103/physreva.82.043415.
• [32] Frahm, K.M. and Shepelyansky, D.L. (2009). Diffusion and localization for the Chirikov typical map, Phys. Rev. E 80: 016210. doi:10.1103/physreve.80.016210.
Internal references

Recommended reading

• T.F. Gallagher, "Rydberg Atoms", Cambridge University Press (1994).
• L.E. Reichl, "The Transition to Chaos: Conservative Classical Systems and Quantum Manifestations", Springer, Berlin (2004).
• Y. Imry, "Introduction to Mesoscopic Physics", Oxford University Press (1997).
Semantics of "Wave"

1. Dec 20, 2007 #1

Semantics of "Wave"

Hi all, I have two questions related to the use of the word "wave"...and I would like to know whether this actually represents a physical wave nature.

#1 The "wave"-function. This is what little I think I know about the wavefunction...it represents certain values about the subject (i.e. electron quantum numbers) and how they evolve with time, and the amplitude squared represents the probability of these values occurring. This of course may be wrong. Now I was wondering how the wavefunction is actually related to a sinusoidal wave. I don't think it means that the subject travels along a wavelike path (could someone confirm this please) - but is the shape of the wavefunction on a graph actually sinusoidal, or Gaussian? As it is related to probability, I would have said it was Gaussian, and this seems confusing...because it is then not exactly a "wave".

#2 De Broglie wavelength. So all I know about this is that it implies all matter has a specific wavelength, related to its momentum. I was wondering again how to interpret this. Does it mean that the mass actually "wiggles" along, travelling a sinusoidal path through spacetime (again a yes/no here would be helpful)? Or is it again somewhat like the wavefunction above, related to probabilities? Or is it another way of putting Heisenberg's Uncertainty Principle...I thought of this possibility after reading the following from http://en.wikipedia.org/wiki/Wave%E2%80%93particle_duality

Any ideas are welcome,

3. Dec 21, 2007 #2

There are a number of 'interpretations' of quantum mechanics (summarised here), all with differing ideas of what the wave function actually is. However, for the purposes of this post I will refer to the Copenhagen [non-real wave function] interpretation, since this is generally the most widely accepted (and taught) interpretation, although the 'Many Worlds' interpretation is gaining ground. Roughly speaking, the wave function is just a complex function or a 'mathematical abstraction' or tool used to describe the state of a physical system. In other words, the wave function itself has no physical observables. So, whereas classically the wave function of a vibrating string describes the periodic variation of real physical observables (amplitude etc.), there is no such corresponding observable for a quantum mechanical wave function. Therefore, the wave function of a particle tells us nothing of how it actually travels. As for the actual shape of the wave function, this very much depends on the system that the wave function is describing and furthermore, the specific state that the system is in. For example, the wave functions of a particle confined to a box (potential well) are generally sinusoidal; however, the wave function of a harmonic oscillator (e.g. diatomic molecule) can be Gaussian.

The de Broglie hypothesis states that all particles have a wave-like nature (wave-particle duality). The de Broglie hypothesis does not say anything about the wave function of a particle, only that a particle can be described by a classical wave of angular frequency [itex]\omega = E/\hbar[/itex]. However, the de Broglie hypothesis can be used to find the wave function of a 'free particle' (via the dispersion relation [itex]\omega =\hbar k^2/2m[/itex]), that is, a particle that has a non-zero constant probability of being found anywhere in space.
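(If it helps to see actual numbers, here is a small Python sketch, just an illustration of the relations above rather than anything from the links, evaluating the de Broglie wavelength and the free-particle dispersion for an electron.)

```python
import numpy as np

# Rough numerical illustration of the de Broglie relations quoted above:
# lambda = h/p, and the free-particle dispersion omega = hbar*k^2/(2m).
h = 6.626e-34        # Planck's constant, J s
hbar = h / (2 * np.pi)
m_e = 9.109e-31      # electron mass, kg

v = 1.0e6                          # electron speed, m/s (non-relativistic example)
p = m_e * v
lam = h / p                        # de Broglie wavelength
k = 2 * np.pi / lam                # wave vector
omega = hbar * k**2 / (2 * m_e)    # free-particle dispersion relation

print(f"lambda = {lam:.3e} m")                  # ~7.3e-10 m, comparable to atomic sizes
print(f"E = hbar*omega = {hbar*omega:.3e} J")   # equals p^2/(2m), the kinetic energy
```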
As you say, this is related to the Heisenberg Uncertainty Principle, since to apply the dispersion relation we must fix the momentum of the particle, and therefore by the HUP the uncertainty in the position of the particle approaches infinity. To describe a localised particle, that is a particle which is restricted to some finite region in space, we must make use of quantum wave packets, which are analogous to classical wave packets. To construct a wave packet we must integrate over all possible values of the wave vector k, which can be related back to the momentum of the particle. Since we are integrating over many values of k, we do not have a definite value of momentum to assign the particle and hence, although we have reduced our uncertainty in the position of the particle (by localising it), we have increased the uncertainty in the momentum of the particle. I hope that makes sense and apologise if it's a bit verbose in parts.

Last edited: Dec 21, 2007

4. Dec 21, 2007 #3

Thanks for the in-depth reply Hootenanny! Good, good. Ok now I am somewhat confused. You say the wavefunction represents the physical state of a system, however this is not a variable...is it not comprised of many variables...position, momentum, spin etc? So when you say the shape of the wavefunction can be a certain shape...in what way have you obtained this shape? If it is drawn on a graph, then what are the two axis variables? I don't believe you can plot quantum physical state on the y, and time on the x...

My next question stems from the answer to the previous one, but if you have a sinusoidal wavefunction...what is oscillating? Lets take a photon for example. When treated as a wave, the sinusoidal nature represents the oscillating EM field. If your particle in a box has a sinusoidal nature, what does this imply? Changing from a particle to an antiparticle (lol I know this isn't true)? Or does it indicate the probability of being at that point (for position in this case, as opposed to the whole quantum state), is changing from zero to a maximum? Again, somewhat confused.

And with de Broglie: I feel this is true in the sense that photons have wave-like nature, but it does not mean the photon wiggles along through space. So I don't think the masses wiggle through space, even though they have wave-like nature. Ok moving further with this idea then, would HUP and duality pretty much be the same thing? If one knows the momentum of something, its position is undefined, thus a wave. If one knows the position, it is a particle (with undefined momentum). So in a sense could not wave-like nature simply represent uncertainties in position? With respect to de Broglie wavelength...the bigger something is (more momentum) means a smaller wavelength...i.e. a more defined position, as one would expect by HUP and intuition. Thanks again for your help,

5. Dec 22, 2007 #4

To determine the wave function for a particular system you must solve the Schrödinger equation (SE) for that particular system. When you solve the SE you will obtain a set of allowed energy states (eigenvalues) and the wave functions (eigenfunctions) for that system; it is the quantum numbers (n, l, m etc.) that determine the state, and hence the energy, of the system. In the one-dimensional, time-independent case (the state of the system has no time evolution) the wave function is a function of a single variable (position), [itex]\psi = \psi(x)[/itex].
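(To make "solving the SE" concrete, here is a toy Python sketch of my own, with an arbitrary example potential, that diagonalises a finite-difference form of the one-dimensional time-independent Schrödinger equation; the eigenvalues are the allowed energies and the eigenvectors the wave functions.)

```python
import numpy as np

# Toy finite-difference solution of the 1D time-independent Schrodinger
# equation with hbar = m = 1:  -(1/2) psi'' + V(x) psi = E psi.
N, L = 1000, 10.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
V = 0.5 * x**2                        # harmonic oscillator as an example potential

# Hamiltonian matrix: central-difference second derivative plus the potential.
main = 1.0 / dx**2 + V
off = -0.5 / dx**2 * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(H)            # eigenvalues = energies, columns = wave functions
print(E[:4])                          # ~[0.5, 1.5, 2.5, 3.5]: the quantised spectrum
# psi[:, 0] is the Gaussian ground state; psi[:, n] has n nodes.
```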
Now, a wave function that is purely a function of position is known as a probability amplitude(1), and the values of this function are probability amplitudes. It is these values that we plot; in other words, when we say that a wave function is a certain shape (sinusoidal, Gaussian etc.), we mean that when we plot the values of the wave function against position we obtain a certain shape. So the oscillations represent how the probability amplitude varies with position. For example, the wave function for a particle undergoing one-dimensional simple harmonic oscillations in the ground state is given by;

[tex]\psi_0(x) = A_0\exp\left(-\frac{x^2}{2\sigma^2}\right)[/tex]

So, if we plot [itex]\psi(x)[/itex] against [itex]x[/itex] we obtain a Gaussian curve (see this figure). To reiterate, the wave function doesn't have a physical observable and hence the oscillations of the wave function don't have any physical significance (since the wave function is complex-valued). I hope that answered your first two questions.

I think you've got the general idea. However, in general you should note that there will be some uncertainty in both the position and momentum of a particle described by quantum mechanics. For example, as I said in my previous post, we can construct a wave packet by integrating over a range of values ([itex]\Delta k[/itex]) of the wave vector k. A wave packet is characterised by a zero probability amplitude over all space except for a small region [itex]\Delta x[/itex]. According to de Broglie, the spread in the wave vector k results in a spread of momentum [itex]\Delta p_x[/itex]. Hyperphysics - further reading and excellent pictorial representations.

I think you're confusing the issue a little here. The actual value of the momentum of a particle says nothing about the uncertainty in either position or momentum; a smaller wavelength does not mean less uncertainty in position, the uncertainty will remain unchanged. For example, let us take the [1D] free particle solution;

[tex]\Psi(x,t) = Ae^{i\left(kx-\omega t\right)}[/tex]

And then let us find the probability density;

[tex]P = \Psi\cdot\bar{\Psi} = Ae^{i\left(kx-\omega t\right)}\cdot Ae^{-i\left(kx-\omega t\right)}[/tex]

[tex]P = A^2[/tex]

Hence, the probability density is constant throughout all space and is independent of the wave vector k and hence the momentum. So to conclude, it is the width of the wave packet that determines the uncertainty in position, rather than the wavelength.

(1)It should be stressed that the probability amplitude is not equivalent to the probability density. The probability amplitudes are simply the values of the wave function at some position in space and are therefore complex values and as such have no physical observables.

Last edited: Dec 22, 2007

6. Dec 22, 2007 #5

Ok that's kind of what I expected about the position and time etc, after reading a bit on probability and the like, but I'm still confused with the sinusoidal wave nature of the wavefunction. When we square the wavefunction, for the case you are referring to, this then gives the probability for finding the particle at that certain position. But how come the probability oscillates from zero to a maximum and back again? For example in the link you gave previously... http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/imgqua/hoscom2.gif I would expect the shape of the bottom graph, but not the graphs above it. How come these sinusoidal ones imply that there is zero chance of finding a particle at some positions and a maximum at others?
I would have thought the probability of finding a particle would be shaped like the bottom graph...? Sorry I can't really put my problem into words...I hope you know what I'm asking.

Wow ok that's really neat...I totally like thought of that idea myself and you (and the hyperphysics page) just confirmed it! That is really cool!! Ok now after reading the hyperphysics page, and your explanation, I am somewhat confused. In the Hyperphysics page, it shows that summing some waves together (is this to simulate the uncertainties in momentum?) will give an interference pattern - the "wavepacket", and its width represents the uncertainty of position. However these waves were infinite weren't they, as they represented exact momenta? Therefore should not the interference pattern repeat itself, and it too be infinite...thus giving infinitely many wavepackets, and thus giving the particle an infinitely undefined position?

7. Dec 23, 2007 #6

From what I can gather, you believe that the probability density of a localised particle (i.e. harmonic oscillator, particle in a box etc.) should be Gaussian and you can't see why it would be otherwise? Furthermore, you can't understand how, when we find the probability density from a wave function, the probability density oscillates. If I'm wrong, please correct me. To simplify things, let's step away from the harmonic oscillator and stick to a particle in a box. Now classically, if you put a single particle in a sealed box away from any other influences you know what's going to happen. The particle will travel with a uniform velocity until it collides with a wall of the box, in which case it will bounce off the wall (accelerate) and then proceed with uniform motion once again. Therefore, there would be equal probability to find the particle anywhere in the box; it is equally probable to find the particle at any point in the box. However, if we use quantum mechanics to describe the 'particle in a box' system we find that the system doesn't behave as classically predicted. For a particle in a bound state (localised), the probability density will oscillate as a function of position and there will be points where the probability amplitude vanishes (nodes) and points where it is maximal (anti-nodes). How do we know this? We know this because when we solve the Schrödinger equation for a bound state, the solution we obtain does indeed oscillate. There is no classical explanation for this nor any intuitive explanation of why this is the case; it is a purely quantum mechanical effect. There are some 'analogies' I've seen (and indeed, was taught at undergraduate level) but they tend to only confuse the matter further.

Let us now take a concrete example so that you can see how we determine the probability density from a wave function. Let us take the example of the one-dimensional case of a particle in an infinite potential well (particle in a box) of width a. In this case the Schrödinger equation has solutions of the form;

[tex]\psi_n(x) = \sqrt{\frac{2}{a}}\sin\left\{\frac{n\pi}{a}\left(x+\frac{a}{2}\right)\right\}[/tex]

Now we find the probability density;

[tex]P_n(x) = \psi_n(x)\cdot\overline{\psi_n}(x) = \psi_n^2(x)[/tex]

[tex]P_n(x) = \frac{2}{a}\sin^2\left\{\frac{n\pi}{a}\left(x+\frac{a}{2}\right)\right\}[/tex]

So, we have essentially squared the wave function to obtain the probability density, which effectively squares the amplitudes and reflects the portion of the curve below the x-axis in the x-axis.
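(You can check this squaring step numerically; a quick sketch of my own, using the ψn above:)

```python
import numpy as np

# Check of the particle-in-a-box probability density quoted above:
# P_n(x) = (2/a) * sin^2( n*pi/a * (x + a/2) ) on |x| <= a/2.
a, n = 1.0, 2
x = np.linspace(-a / 2, a / 2, 100001)
P = (2.0 / a) * np.sin(n * np.pi / a * (x + a / 2)) ** 2

print(np.trapz(P, x))   # ~1.0: the total probability is normalised
# Nodes (P = 0) sit at the walls and, for n = 2, at the centre x = 0, so the
# density oscillates instead of being flat as classical intuition suggests.
```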
Hence, the probability density still oscillates, but is positive and has a greater amplitude than the wave function. Although you may not be satisfied with the explanation, hopefully now you can understand why (mathematically at least) the probability density of a localised particle oscillates.

I apologise for confusing you; I will attempt to qualitatively clarify the idea of wave packets here. Firstly, the reason we construct a wave packet is not to simulate the uncertainty in momentum, we do so to localise the wave function (and hence the particle) into some small region Δx. The resultant uncertainty in momentum is a necessary 'by-product', if you like, of this process. In addition, we can't just 'choose' any old waves to superimpose, the wave packet must be constructed from the eigenfunctions of the system, that is the waves must be solutions of the Schrödinger equation. As for your final point, yes, when the waves 'interfere' they will generate more than one 'wave packet'. However, the probability amplitudes of these 'secondary wave packets' will be small compared to the probability amplitude of the 'primary wave packet', analogous to the amplitudes of the diffraction pattern observed for the double slit experiment. Furthermore, the probability amplitudes rapidly decrease away from the center of the wave packet. Take note that wave packets are difficult to accurately explain without going into the mathematics, and the above is a 'rough and ready guide'; but I hope I managed to answer your questions. If you would like to study wave packets more rigorously I can recommend some texts, but beware that the mathematics required is not trivial.

I should also mention that the solution for a free particle (post #4) is not a true wave function, since a requirement for a valid wave function is that it should be square integrable(2). Hence, the solution given in post #4 is unphysical, and the reason for this relates to my comment earlier; In other words, a free particle cannot have an exactly defined momentum, this implies that there must be some uncertainty in the momentum of the particle and hence some spread in the wave vector k. Therefore, physically acceptable solutions take the form of a wave packet.

(2)If a function is square integrable over some interval, then the integral of the square of its absolute value over that interval must be finite. In the case of a free particle the interval in question would be [itex](-\infty,\infty)[/itex].

Last edited: Dec 23, 2007

8. Dec 23, 2007 #7

That is correct Yup, you figured out my problem. I was thinking like your classical example...there should be an equal probability of finding the particle anywhere in the box. I'm glad you pointed out then that there is no intuitive explanation for the probability oscillation, because that's what I was looking for. And unfortunately it seems that QM can't explain it, but I'm betting that in experiments QM will predict observations correctly. Thanks for this...normally I just pass over the maths because it's way over my head, but this I can follow. QM is a mathematical model, so I suppose looking at the mathematics occasionally will probably help me understand it :smile:. However, I have one last question to do with this probability oscillation. With your particle in a box example, you get the oscillating probability density. I must assume the placement of nodes and anti-nodes is independent of your point of reference for position? I.e.
the wavefunction will remain in the same place relative to the walls of the box, even if you take the position values from varying reference points - outside the box? Otherwise the wavefunction would move relative to the box, due to moving reference points; which, from the particle's point of view, should have nothing to do with it or its placement of the wavefunction in the box. Just looking over the equation you gave previously, I just saw the value a, as the width of the well. Therefore I assume position is measured relative to the box, so I assume my prior question is somewhat irrelevant, as if the reference is the box, then there should be no problem.

I see what you are saying about localizing the position by adding the waves, but this must only apply if the waves are finite...i.e. already have a somewhat localised position. Ah, this must be where this comes in: Previously I was thinking these waves that you add together were infinite. Adding simple sine waves like in the Hyperphysics example would not - I don't believe - give wavepackets that diminished, similar to diffraction patterns. There would be some point where they would all be in phase again and the process would repeat itself all over again. However if these waves are not infinite (as they have some certainty in position), then I can see how this could happen. However, then again, looking over the Schrodinger solution you showed me above, it seems to indicate that the wave is infinite...a normal sine wave, so maybe my train of thought is completely wrong. Apart from this little point on adding the waves, the answer to the following would be yes...thanks a bunch. I suspect that in the real world, one does not actually add the waves together...there is some other mathematical process that gives the wave packets (that are also diminishing). However as you said, the maths would be difficult, in which case I will probably avoid those books you refer to until my maths can keep up. Thanks again,

9. Dec 24, 2007 #8

Indeed, since physics is based on mathematics I firmly believe that one can never truly understand a phenomenon until one can follow the mathematics. One may be able to get the general idea from a qualitative analysis, but there will often be observations that at first seem counter-intuitive; it is only when one follows through the mathematics that it becomes clear. As for QM not being able to explain oscillating probability densities, it can mathematically; it follows directly from the postulates of quantum mechanics.

I shall try and answer all of the above in one fell swoop. You are entirely correct, we define our system relative to the box. So in the case of our one-dimensional infinite potential well, x=0 is in the middle and at the bottom of the well and the two walls are located at x = +/- (a/2) as shown here; Of course you can define your coordinate system however you like, but a symmetric system usually makes life simpler. Writing the solutions to the Schrödinger equation in that form isn't entirely correct. In this case, our potential function (V) is defined piecewise thus;

[tex]V(x) = \left\{ \begin{array}{cr} \infty & \left|x\right| > a/2 \\ 0 & \left|x\right| \leq a/2 \end{array}\right.[/tex]

Which in words means: the potential energy of the system goes to infinity if the distance from the origin is greater than a/2; otherwise, the potential is equal to zero. This restricts our particle to exist in a 'box' of width a.
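(In code the piecewise definition reads naturally as a conditional; a trivial sketch, with my own variable names:)

```python
import numpy as np

# The infinite-well potential defined piecewise above:
# V(x) = infinity for |x| > a/2, and V(x) = 0 for |x| <= a/2.
def V(x, a=1.0):
    return np.where(np.abs(x) > a / 2, np.inf, 0.0)

print(V(np.array([-1.0, 0.0, 0.3, 1.0])))   # [inf, 0, 0, inf] for a = 1
```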
Equally, we must define our wave function (ψn) in a similar fashion, since we have two distinct cases: the case where we have zero potential, and the case where our potential tends to infinity. Hence, we write the wave function for a particle of mass m in our one-dimensional potential well thus;

[tex]\psi_n(x) = \left\{ \begin{array}{cr} 0 & \left|x\right| > a/2 \\ \sqrt{\frac{2}{a}}\sin\left\{\frac{n\pi}{a}\left(x+\frac{a}{2}\right)\right\} & \left|x\right| \leq a/2 \end{array}\right.[/tex]

So you see that in actual fact the wave function of a particle in a box is finite; it only exists inside the box. Hopefully that will put your mind at rest in terms of the construction of wave packets and finite wave functions. Incidentally, something you may find interesting is if we consider a finite potential well, that is similar to the case above but where the potential energy function terminates at some finite value. We find that a particle with a kinetic energy that is less than the potential energy of the well has some non-zero probability to be found inside the walls of the potential well! This is forbidden classically and is the basis for Quantum Tunneling. The mathematical technique used to construct wave packets is called the Fourier transformation, which is part of the more general area of Fourier analysis. If you're looking to study Quantum Mechanics in any sort of depth, I would recommend taking courses in Linear Algebra and Analysis, specifically Functional Analysis. I hope I managed to clear everything up for you; if I didn't, I'm sure you'll be back. In the meantime have a very merry Christmas.

Last edited: Dec 24, 2007

10. Dec 24, 2007 #9

I suppose I best keep taking mathematics then, in an attempt to further understand QM! Firstly I'm not sure if you're a teacher or not already, but you should consider the profession! Very helpful replies! Ok thanks for going through the simple maths of it, I can now see how the wavepacket is finite, of width a, and although the waves forming it (differing values of n) are infinite sine waves, the wave packet itself is only defined within the potential well. So that's good! However I still do have some questions... = )

#1 Well firstly I think that this means the wavepacket has width a, so this represents the uncertainty in position (as a side question, I'm guessing the "main" packet is not as wide, so the width of the "main" packet does not indicate uncertainty in position, but actually the width of the whole wave packet). Anyway this would make classical sense; as the potential well and a get bigger, so does the uncertainty in position. So that's what I am assuming, however my question is: what happens as a tends to infinity - mathematically? I'm guessing, that by HUP, a and thus the uncertainty in position, approaching infinity would result in an exactly determined momentum. In the Hyperphysics pages this means a perfect sine wave, so I'm guessing as the value of a approaches infinity, the n value in the equations becomes negligible, so that all the waves produced as solutions of Schrodinger's equation are the same sine wave (or conversely ones with the same period) so as to produce a final wave packet that is a perfect sine wave, i.e. defined momentum. I wonder if the actual mathematical solution for when a approaches infinity agrees with what I've stated before??

#2 I just noticed the n in the equation. I assume the differing integer values of n give the differing sine waves that are added together to make the resultant wave packet (as referred to in Hyperphysics)? However I was wondering what the value for n actually represents?
I don't believe it is just to represent a general solution to a trigonometric equation...? Yes I'm considering whether to try and get into Cambridge, and take the natural sciences course. However for first year, you can do a course that is something like the mathematics involved in physics instead...and then progress to the natural sciences in 2nd year. I'm betting this would be helpful!! Anyways thanks, and you have a great Christmas too,

11. Dec 24, 2007 #10

Thank you for the kind words :smile: Okay, I'm going to address your second question and hopefully this will make sense of your first question. First and foremost, the solution that I gave in post #6 is not a wave packet, it is a pure sinusoidal wave. The solution to the infinite potential well does not require the use of wave packets and can be obtained trivially by directly solving the Schrödinger equation. It is a pure sinusoidal wave, with no interference or summations. The n in the solution does not refer to any summations or integrations, it simply defines the energy eigenstates of the system. The energy eigenstates for a particular system are the set of eigenvalues (energies) and eigenfunctions (wave functions) that satisfy the [time independent] Schrödinger equation. Perhaps it would have been prudent to mention it sooner, but in general, there isn't just one single solution that satisfies the Schrödinger equation for each system, there can be infinitely many solutions. This set of solutions are the energy eigenstates for that particular system. If we now go back to our particle in a box and examine our general solution together with the energy eigenvalues (just considering the case where [itex]\left|x\right| \leq a/2[/itex]);

[tex]\psi_n(x) = \sqrt{\frac{2}{a}}\sin\left\{\frac{n\pi}{a}\left(x +\frac{a}{2}\right)\right\} \hspace{5cm} E_n = \frac{h^2}{8ma^2}n^2[/tex]

And in this case, [itex]n\in\mathbb{Z}^+[/itex], that is, n must be a positive integer. Hence, we can start writing out our energy eigenstates;

[tex]\psi_1(x) = \sqrt{\frac{2}{a}}\sin\left\{\frac{\pi}{a}\left(x +\frac{a}{2}\right)\right\} \hspace{5cm} E_1 = \frac{h^2}{8ma^2}[/tex]

[tex]\psi_2(x) = \sqrt{\frac{2}{a}}\sin\left\{\frac{2\pi}{a}\left(x +\frac{a}{2}\right)\right\} \hspace{5cm} E_2 = \frac{h^2}{2ma^2}[/tex]

And so on. You can see a visual representation of the solutions here. So rather than n representing combinations of several waveforms into a single solution (wave packet), it represents individual discrete solutions. I hope that makes more sense to you.

While we're here we may as well discuss some consequences of the above solutions. Firstly, you should observe that we have quantised energy eigenvalues; the particle is only 'allowed' to have certain energies. For example, the particle can have an energy equivalent to E1 or E2 (or E3,4,5,...), but can't have anything in between. We say the particle has a discrete energy spectrum, which is in stark contrast to classical physics, where the energy spectrum is continuous. Secondly, note that our lowest permitted energy state (i.e. the energy eigenstate corresponding to n=1) is non-zero; this phenomenon is known as zero-point energy and I'm sure you've at least heard of it before. We can understand this phenomenon qualitatively in terms of the HUP, which states that the product of the uncertainties in two complementary measurements must be of the order of [itex]\hbar[/itex].
However, if a particle has zero energy it will be at rest, and therefore it will have a uniquely defined momentum (zero) and position, thus violating the HUP. We can take this concept further and say that the particle in the infinite potential well of width a is restricted to [itex]|x|\leq a/2[/itex], and hence has an associated uncertainty in position of [itex]\Delta x \approx a[/itex] (we know that the particle must be somewhere in the well, but we don't know where). Hence, we can write;

[tex]\Delta x \cdot \Delta p \approx \hbar \Rightarrow \Delta p \approx \frac{\hbar}{a}\hspace{5cm}(1)[/tex]

Furthermore, we know that kinetic energy is related to momentum thus, [itex]E = p^2/2m[/itex], hence we can write;

[tex]\Delta E \approx \frac{\hbar^2}{2ma^2} = \frac{h^2}{8\pi^2 ma^2}[/tex]

Which is 'qualitatively' in agreement with our first energy eigenvalue E1. Note that although this is a very 'rough and ready' analysis, a more formal treatment can show that the associated uncertainty in the energy is in exact agreement with the energy eigenvalues. Furthermore, if we examine equation (1), we find that the uncertainty in momentum is inversely proportional to the width (a) of the well, which intuitively makes sense. If we reduce the width of the well (as a approaches zero), we are increasing the spatial localisation of the particle and hence decreasing the uncertainty in position. Therefore, by the HUP we would expect the uncertainty in momentum to increase. Conversely, if we increase the size of the well (a approaches infinity), the particle becomes less localised and behaves more like a free particle; hence the uncertainty in momentum approaches zero. I think that, partially at least, answers your first question.

A good friend of mine took natural sciences at Cambridge and he had very good things to say about it. From what he said the course sounded interesting and apparently, after your first year, you have virtually free choice over which modules you take. It's good to be interested in quantum, but I wouldn't worry about understanding it all yet; any quantum mechanics you take in your first two years as an undergraduate will in all probability be fairly superficial. That said, there's no harm in getting ahead of the game, especially if you're interested in the subject! Well, at the outset I intended that to be quite a concise post, but it seems that it ran away with me a little. I apologise if it seemed a little hard going.

Last edited: Dec 24, 2007

12. Dec 24, 2007 #11

Ok these two confused me somewhat...in the first quote I read it as meaning the wave packet was constructed from the various solutions to the Schrodinger equation, which I then thought meant due to the differing values of n. I thought adding these waves together gave the wavepacket. However the second quote seems to go against this, and suggests the solutions to the Schrodinger equation are the wavepacket (which in this case is just sinusoidal)? Unless a wavepacket is only defined for each energy eigenstate... i.e. each n value gives a different wavepacket. In this case the solution to the Schrodinger equation is just the sinusoidal wavepacket? Actually thinking about it logically, you can't have a wavepacket for all n values, otherwise when added together it would be a mess, so to speak...as n goes to infinity. So then I'm guessing you can have a wave function to represent one variable (in this case position), however it only applies to certain eigenstates/eigenvalues...i.e. one n value?
I am assuming then that one must know the energy eigenstate the particle is in, otherwise one would not know which solution to the Schrodinger equation to apply...? Let's clear this up too...is the wave function the same as the "wave packet", which is the same as a particular solution to the Schrodinger equation? Hmmm Fourier analysis is representing a function as the sum of sinusoidal terms, right? So in the case of the wave packet - wave function - in the Hyperphysics page, where it referred to adding waves together to get the wave packet, do these waves actually have a physical meaning (like the differing n values I implied, although as you have said, this is wrong), or are they simply the basis functions in the Fourier analysis?

Thanks for this...I now know the role of the n value much more clearly! The relation to energy eigenstates is also neat...I assume that again one must know the energy eigenstate the system is in, for the wave function to be of any use?? Am I correct to assume that HUP still applies to energy, as momentum and energy are directly proportional? However this would then imply one could not know the exact energy eigenstate the system is in, so I am thus making contradictory assumptions!!

Anyways quantised energy...that's an interesting proposition. I've heard it before, but thanks for showing me the maths to prove it...well to some degree anyway. Also yes, that answers my first point. You've agreed with me on that one, so that problem's settled! The zero point energy is interesting...I have heard of it briefly before. I think applying this with quantum field theory results in the so-called "Vacuum energy". Not sure my classical side really likes the idea. As a completely off topic bit of info, while reading this page from Wikipedia http://en.wikipedia.org/wiki/Vacuum_energy I found the following: It's somewhat reminiscent of aether isn't it? Anyways thanks again for your replies,
In quantum physics, you can solve for the allowable energy states of a particle, whether it is bound, or trapped, in a potential well or is unbound, having the energy to escape. Take a look at the potential in the following figure. The dip, or well, in the potential means that particles can be trapped in it if they don't have too much energy.

A potential well.

The particle's kinetic energy summed with its potential energy is a constant, equal to its total energy: E = KE + V(x). If its total energy is less than V1, the particle will be trapped in the potential well, as you see in the figure; to get out of the well, the particle's kinetic energy would have to become negative to satisfy the equation, which is impossible according to classical mechanics. Quantum-mechanically speaking, there are two possible states that a particle with energy E can take in the potential given by the figure — bound and unbound.

Bound states happen when the particle isn't free to travel to infinity — it's as simple as that. In other words, the particle is confined to the potential well. A particle traveling in the potential well you see in the figure is bound if its energy, E, is less than both V1 and V2. In that case, the particle moves classically between x1 and x2. (Quantum mechanically, however, there is a small but nonzero probability of finding the particle outside this classically allowed region.) A particle trapped in such a well is represented by a wave function, and you can solve the Schrödinger equation for the allowed wave functions and the allowed energy states. You need to use two boundary conditions (the Schrödinger equation is a second-order differential equation) to solve the problem completely. Bound states are discrete — that is, they form an energy spectrum of discrete energy levels. The Schrödinger equation gives you those states. In addition, in one-dimensional problems, the energy levels of a bound state are not degenerate — that is, no two energy levels are the same in the entire energy spectrum.

If a particle's energy, E, is greater than the potential (V1 in the figure), the particle can escape from the potential well. There are two possible cases: V1 < E < V2 and E > V2.

Case 1: Particles with energy between the two potentials (V1 < E < V2)

If V1 < E < V2, the particle in the potential well has enough energy to overcome the barrier on the left but not on the right. The particle is thus free to move to negative infinity, so its classically allowed x region is between negative infinity and x2. Here, the allowed energy values are continuous, not discrete, because the particle isn't completely bound. The energy eigenvalues are not degenerate — that is, no two energy eigenvalues are the same. The Schrödinger equation is a second-order differential equation, so it has two linearly independent solutions; however, in this case, only one of those solutions is physical and doesn't diverge. The wave function in this case turns out to oscillate for x < x2 and to decay rapidly for x > x2.

Case 2: Particles with energy greater than the higher potential (E > V2)

If E > V2, the particle isn't bound at all and is free to travel from negative infinity to positive infinity. The energy spectrum is continuous and the wave function turns out to be a sum of a wave moving to the right and one moving to the left. The energy levels of the allowed spectrum are therefore doubly degenerate.
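To see how the boundary conditions pick out a discrete set of bound-state energies, here is a hedged Python sketch (an illustration of my own, not part of the original text): it finds the even-parity bound states of a finite square well, where matching the oscillating interior solution to the decaying exterior solution yields a transcendental condition with only a few roots.

```python
import numpy as np
from scipy.optimize import brentq

# Even-parity bound states of a finite square well (hbar = m = 1):
# well of half-width a and depth V0. Matching psi and psi' at the wall gives
# the standard condition k*tan(k*a) = kappa, with k = sqrt(2E),
# kappa = sqrt(2*(V0 - E)), and 0 < E < V0.
a, V0 = 1.0, 20.0

def f(E):
    k = np.sqrt(2 * E)
    kappa = np.sqrt(2 * (V0 - E))
    return k * np.tan(k * a) - kappa

# Scan for sign changes of f on (0, V0) and refine each root by bisection.
E_grid = np.linspace(1e-6, V0 - 1e-6, 20000)
vals = f(E_grid)
roots = []
for i in range(len(E_grid) - 1):
    if vals[i] * vals[i + 1] < 0:
        E_root = brentq(f, E_grid[i], E_grid[i + 1])
        if abs(f(E_root)) < 1e-6:       # reject spurious sign flips at tan() poles
            roots.append(E_root)

print(roots)   # a short, discrete list: the even bound-state energies
```

Only a handful of energies survive the matching condition, which is the discrete bound-state spectrum described above; for E above the well depth the condition disappears and the spectrum becomes continuous.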
shpalman

The futility of transcendental speculations

9th May 2008 (19:15)

Milgrom's futile transcendental speculations have been going on for six years. This latest paper is light on equations but heavy on pictures and mysticism, and further from science (and indeed reality) than ever. But it's still possible to find some things which are meaningful enough to be wrong. First we find complaints and special pleading to be allowed to overturn evidence-based medicine (EBM) and the double-blind randomized-controlled trial (DBRCT): “EBM and the DBRCT, like much of biomedical science, are rooted in the reductionist philosophy of logical positivism combined with local realism. The latter states that: (a), the universe is real and it exists whether we observe it or not; (b), legitimate conclusions and predictions can be drawn from consistent experimental outcomes and observations; and (c), no signal can travel faster than light [3,4,5]. In questioning (a) and (c) above, quantum theory transcends local realism [4] and the reductionism of biomedicine [5]. Attempts at explaining homeopathy's efficacy have made use of concepts generalized from the discourses of semiotics [6,7] and quantum theory [8,9,10].”

EBM just means that someone's checked that it actually works - if it could be demonstrated that homeopathy worked at actually curing diseases then it would be part of EBM. In fact anything in CAM would quickly become part of EBM if it worked. What we actually get are subjective reports of improvements in self-limiting or cyclic conditions, while journals publish flawed, biased articles on effects at the fringes of statistical significance [11,12,13,14,15]. There's nothing magical about DBRCTs either; they are just the most rigorous way of trying to sort out if there is actually any weak effect there. If homeopathy really worked as well as its proponents seem to suggest then the results should be blatantly obvious and there would be no need to dig so hard to find them. Let's pedantically consider each letter.

T for Trial: You have to test something to make sure that you aren't just remembering the positive anecdotes and forgetting the negative ones.

C for Control: You compare your treatment group with a group receiving no treatment, to make sure that it's really the treatment having an effect. It's usual to give the control group a placebo.

R for Randomized: To make sure that the patients in the treatment and control groups are similar, so that similar disease progressions would be expected in each group if the treatment were ineffective. Otherwise you could deliberately or subconsciously put the healthier people into the treatment group, and then of course they are likely to be healthier at the end of the trial. It's good to have large groups.

B for Blind: The patient shouldn't know whether they are getting the treatment or the control, because this could bias their self-reported symptoms and also their expectations.

D for Double: The doctor shouldn't know whether a patient is in the treatment or control group either, or else he or she can deliberately or subconsciously influence the patient.

A DBRCT is just the best way of minimizing all the possible biasing factors in the case that the effect of the treatment is less than blatantly obvious.
So it's not surprising that good quality DBRCTs tend to come out negative for homeopathy while less well-controlled trials show positive effects - that shows exactly that the positive effects of homeopathy have nothing to do with the remedies themselves [16,17,18,19]. And then, on with the entanglement, as if I haven't already explained how the Greenberger-Horne-Zeilinger [20] system actually works, or why most of what he says about entanglement isn't correct. “Entanglement is said to occur in a quantum system when its seemingly separate parts are so holistically matched or correlated, measurement of one part of the system instantaneously (i.e., not limited by the speed of light and without classical signal transmission) provides information about all its other parts, regardless of their separation in space and time, or their size [21].” Italics are his. I've tried to explain entanglement in other posts, and I've tried to clarify that “size” doesn't mean “number of interacting particles”, since even maintaining seven nuclear spins in a state of coherent quantum phase is quite hard [22]; macroscopic coherent states do not persist for very long at all [23,24]. Superconductors and superfluids work because of ways in which the particles in question are prevented from interacting [25,26]. Apparently, “the Memory of Water [27] also relies on macro-entangled coherence, albeit between large numbers of water molecules [28]”, which isn't true at all, and not just because there is no such thing [29]. The Memory of Water is supposed to be a physical effect whereby the structure of a sample of water depends on what used to be in solution in it: it's nothing to do with coherence of the quantum phase. (Del Giudice et al. [28] seem to be talking about coherence of dipoles in an electric field, not coherence of the quantum phase.)

Milgrom then goes on to explain: ‘Nonlocal correlation is not the only prerequisite for entanglement. A quantum system's processes must also be describable in terms of a “non-commuting algebra of complementary observables.” [4]’ All this means is that it matters what order you do certain pairs of measurements in, since eigenstates of one operator are not eigenstates of another. I just found the quote marks interesting, as if he's pasted that in without knowing how to explain it. To be fair, I can't be bothered to explain it either. But this complementarity means, according to Milgrom, that “To fully explain quantum phenomena, therefore, it is necessary to have two different but complementary concepts. The answer one obtains performing two different sets of observations depends entirely on the order in which they are performed; yet both are necessary in order to acquire a complete picture of the system.” A “complete picture of the system” is not actually possible in these terms. It is impossible to have a system in two complementary states at the same time. A “complete picture” in terms of macroscopic variables (such as position and momentum) therefore does not exist. We just have the idea that there's a wavefunction which exists but is not directly observable, on which we can operate in various ways in order to obtain observable results.

Having misunderstood and misrepresented quantum theory, Milgrom now goes on to do the same for weak quantum theory (WQT) [30]. Leick has already pointed out [31] that ‘Milgrom writes “Complementarity and indeterminacy are epistemological in origin not ontological”, [5] which is a serious misquote of the original paper, where it says that “[...]
there is no way to argue that complementarity and indeterminacy in weak quantum theory are of ontic rather than epistemic nature.[...] one would expect them to be of rather innocent epistemic origin in many cases.” [30] The difference between the two versions cannot be emphasized enough, as quantum effects such as entanglement are due to the ontic nature (i.e. not simply to our incomplete knowledge) of complementarity and indeterminacy!’ But what does it mean that WQT “relaxes several of its nanoscopically limiting axioms, including dependence on Planck's constant.”? Planck's constant h is what connects quantum theory with reality - it turns out that light comes in photons and the energy of each photon is proportional to the frequency of the light, with the constant of proportionality being h. This is how Planck was able to solve the problem of black-body radiation. If “complementarity and entanglement are not restricted by a constant like Planck's constant” then what do we have in its place, to connect WQT with reality? The simple answer is that there is no connection to reality, so it's not even a sensible question. The more involved answer is that “WQT has no interpretation in terms of probabilities”, which amounts to more or less the same thing. How can Milgrom then write that “the product Ψ*PPRΨPPR=|ΨPPR|2 presumably represents the probability of cure”? (If ΨPPR is properly normalized then Ψ*PPRΨPPR=1 and it says nothing about the “probability of cure” or anything - to find that he'd have to define a “cure” operator and calculate its expectation value.) By the way, it's often more convenient to work with ℏ = h/2π so you'll see that in some equations later on. I have already wondered what use WQT would be in answering objective questions like “does homeopathy work?” if it doesn't seem to have any interpretation in terms of observables. Medical effects are quantifiable.

Anyway, Milgrom then goes on to introduce Walach's use of semiotics [7] and there's a box-out which contains the unintentionally ironic Hahnemann quote. Semiotics is more linguistics than science; it's got no place here. The way we interpret signs and produce meaning has got nothing to do with the molecular biology of how actual pharmaceuticals work. The rest of the quote in the box-out explains that the observer “can take note of nothing in every individual disease, except the changes in the health of the body and of the mind (morbid phenomena, accidents, symptoms) which can be perceived externally by means of the senses... All these signs represent the disease in its whole extent, that is, together they form the true and only conceivable portrait of the disease.” The first part of that may have been true a couple of hundred years ago but it isn't true now. The second part was never true: we now know about germs, viruses, genes, DNA and molecular biology. Symptoms are part of the body's reaction to an underlying pathology. They are not the pathology itself. The same pathology can present in different ways in different people, and many symptoms are shared between different diseases.

A few kets finally turn up now, as Milgrom once again formulates his patient, practitioner and remedy wavefunctions. He then decides to attach one of Walach's semiotic sign-object-meaning triangles (each corner of which seems to represent an operator or possibly the expectation value of it) to each of the three corners of the patient-practitioner-remedy triangle.
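The normalization point above is elementary to demonstrate; a toy sketch (entirely my own, with an invented two-level "cure" observable purely for illustration):

```python
import numpy as np

# For any normalised state, psi* . psi = 1 regardless of what the state is
# supposed to "mean". A "probability of cure" would need an actual observable:
# here C is an invented, purely illustrative projector onto a "cured" basis
# state, and the probability is the expectation value <psi|C|psi>.
psi = np.array([0.6, 0.8j])             # some normalised two-level state
print(np.vdot(psi, psi).real)           # 1.0 -- says nothing about "cure"

C = np.array([[1.0, 0.0],
              [0.0, 0.0]])              # hypothetical "cure" projector
print(np.vdot(psi, C @ psi).real)       # 0.36 -- an actual expectation value
```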
It's meaningless, but where it becomes actually wrong is in the invocation of complex numbers and a strange sort of quantum origami. Already in part C of his Fig. 2 the bra-ket notation seems to have broken down - and how he manages to fold the “corners of the large triangle to create a pyramid with a hexagonal base” is beyond me, since a pyramid with a hexagonal base needs six sides and a triangle only has three corners. This folding appears to have turned the states into their complex conjugates, but then Milgrom reflects the whole thing so that it's upside down and then unfolds it and it turns out to be twisted through 60°. How is that supposed to happen? It's nonsense mathematically (not to mention scientifically) and I don't even think it makes geometrical sense. Which directions are real and which are imaginary doesn't seem to be made clear, for fairly obvious reasons - taking the complex conjugate means mirroring in the Real line, but each of the three corners is flipped over a different line in the 2-d plane, and then the whole “pyramid” is mirrored in the whole 2-d plane which apparently represents the “homeopathic operator, Πr”. This is all I suppose taking place in the ‘“therapeutic state space” [32] (an analogue of the complex mathematical Hilbert space more familiar from orthodox quantum theory) [4].’ In the nicest possible way, how many readers of J. Alt. Complement. Med. are familiar with Hilbert spaces? He seems to think that in an equation such as ⟨ΨPPR|Πr|ΨPPR⟩ = ⟨ΔSx⟩ it's the operator which is making the complex conjugate ⟨ΨPPR| out of |ΨPPR⟩, which just isn't the way it works at all. (Anyway, if you fold over the corners of an equilateral triangle so that you are left with a regular hexagon, the triangles will meet in the middle when they are flat against the hexagon - the pyramid they define has zero height. And each wavefunction exists in its own Hilbert space, so I don't know what it's supposed to mean to put them all in each others' spaces.) It's hardly worth looking at his Fig. 3 where he does it all again only with tetrahedra. The lack of any explicit conceptual difference between Figs. 2 and 3 demonstrates how arbitrary and meaningless it is, since he can apparently produce two completely different pictures to represent what is supposed to be the same thing, and this makes it useless trying to work out on which level to take it seriously - there isn't a level on which it makes any sense. (There are probably lots more versions of this quantum homeopathic origami nonsense coming soon to “peer reviewed” journals with low editorial standards near you.) Meanwhile, there's a second box-out on the Kochen-Specker theorem [33]. This is a theorem which says that it's not possible to find a direct correspondence between quantum mechanical observables and classical quantities. The first half of the box seems to be ok, up until the part where he claims that ‘signs and symptoms of disease are considered observable manifestations of an “invisible” disturbed vital force, Vf.’ This is apparently because Auyang [4] said “Eigenvalues are analogous to symptoms of a disease, which are disturbances of the body that show up and indicate something that does not show up. Just as a cold persists though its symptoms are suppressed, so a quantum system's wave function has a definite amplitude, even though it has no eigenvalue...” and Milgrom has taken this analogy far too seriously. Common cold viruses are not invisible.
(I'm not sure what “no eigenvalue” means in this context either: is it that the eigenvalue is zero or that the state is not an eigenstate? Measurement is supposed to collapse a mixed state into an eigenstate.) There's a mention of self-adjoint operators, which are those operators which operate on states to give physical observables. (There are, for example, ladder operators which operate on states to give new states.) It's not exactly true that “they consist only of real numbers” because for example the momentum operator for the x direction is −iℏ ∂/∂x - rather, it means that the operator is represented by a Hermitian matrix, which is equal to its own conjugate transpose and has real eigenvalues (but see also the spectral theorem). The Kochen-Specker theorem [33] knackers hidden variable theories, in which the quantum mechanical correlations leading to entanglement are explained by theorizing that the system somehow already “knows” which state it's going to turn out to be in when you measure, even if this information is not available from the wavefunction. It turns out that you can't assign definite, measurement-independent values to all the hidden variables corresponding to quantum mechanical observables at all times. This is because for classical quantities it shouldn't matter in what order you measure certain properties, but for certain complementary pairs of quantum mechanical observables it does indeed matter in what order you measure them. This is actually only a problem if the Hilbert space has three or more dimensions [34], and Milgrom decides that since his homeopathy Πr mirror is a 2d plane, the “therapeutic state space” is this 2d plane, on which the Kochen-Specker theorem need not apply. In fact he's drawn his mirror as a 2d plane embedded in a 3d space, and if he wants a pyramid which goes upside-down then he needs at least a 3d Hilbert space to do it in. It's clear that in the real world there are wavefunctions which really do “exist” in Hilbert spaces with three or more dimensions, out of which observable quantities can be extracted with the appropriate measurement operators: the theorem just says that these very observables were not somehow “in there” before we did the extraction. What comes out actually depends on the interaction between the measuring operation and the wavefunction, so the intrinsic properties of the wavefunction (and I maintain that it does have them) are not those which correspond exactly to things we are intuitively familiar with, such as position or momentum. So I don't think that the Kochen-Specker theorem is particularly relevant to what Milgrom is trying to do, and he wouldn't be able to get around it anyway because he's working in 3d not 2d. (What he's drawn isn't a Hilbert space anyway: states exist as rays in a Hilbert space, not polygons.) On to Fig. 3 anyway. As I mentioned, for some reason this time he folds up the big triangle into a tetrahedron. Does this represent a mathematical transformation of some kind? (No.) There are no bra-kets around the Ψs this time, perhaps that's the difference. The practitioner has a wavefunction ΨPr and therefore a triangle, but then apparently “sits at the center of tetrahedron” too. There's clearly no special reason for this apart from that Milgrom wanted it that way and thereby made it up (and in the text it's “the patient notionally at the tetrahedral epicenters”).
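(Here's a quick numpy sketch of that point about self-adjointness - my own aside, not from the paper: a discretized −i d/dx on a periodic grid, with ℏ set to 1, is full of imaginary entries, yet it equals its own conjugate transpose and its eigenvalues all come out real.)

# A Hermitian operator need not "consist only of real numbers":
# the symmetric-difference version of -i d/dx on a periodic grid.
import numpy as np

N, dx = 8, 1.0
P = np.zeros((N, N), dtype=complex)
for j in range(N):
    P[j, (j + 1) % N] = -1j / (2 * dx)   # +1 neighbour
    P[j, (j - 1) % N] = +1j / (2 * dx)   # -1 neighbour

print(np.allclose(P, P.conj().T))        # True: Hermitian despite the i's
ev = np.linalg.eigvals(P)
print(np.max(np.abs(ev.imag)))           # ~1e-16: eigenvalues are real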
And then of course the practitioner also has an operator Πr which is supposed to be a mirror which somehow also twists the tetrahedron in a way which doesn't make a huge amount of sense (and I don't think this is a self-adjoint operator if it flips between these two states). Then there's another box-out regarding chirality and there's nothing wrong with it, apart from that it's almost totally irrelevant, only serving to remind us that Milgrom used to be a chemist. The final step is to combine the original tetrahedron and the twisted one into the shape called the stella octangula which Hankey got so excited about. (But he also folds up the big triangle into a small flat triangle which apparently introduces a 60° twist. I don't think he runs with this; he was just getting carried away. I don't know why the Ψs have now moved to the corners where previously we had operators.) The twisting is supposed to be the practitioner showing the cure to the patient or some nonsense like that. It's not a real-space twist: it doesn't matter which way the patient is “looking” or “going”. States evolve through Hilbert space according to the time-dependent Schrödinger equation: HΨ = iℏ ∂Ψ/∂t, where the left side has the Hamiltonian operating on Ψ (which classically involves the kinetic and potential energies, where the former involves taking derivatives with respect to space - stationary states are energy eigenstates) and the right side involves taking the derivative with respect to time. (This equation is completely deterministic, by the way.) How should we describe pointing “the patient in the direction of cure” now exactly? The problem I always have with Milgrom is that I try to read it as if it were science. I assume that there's sense and meaning in there but the concepts are difficult and require work to get to. The problem is that there's no sense or meaning, and I end up doing a lot of work trying to get the right level into focus when there is no right level. It's meaningless. I don't think it's even correct geometrically. It's nearly finished though so that's good. We only have to deal with the stella octangula's role in quantum teleportation first [35,36,37,38]. I'm not interested in the stuff about the Platonic solids or the “classical four elements”, or the Merkabah. (Read Finding Moonshine if you want a more sensible discussion about symmetry and that.) It's the link back to quantum mechanics which is more troubling, since some might see that and think Milgrom's on to something. Let me assure you he isn't. The picture which Aravind [38] draws is a representation of operations described by Bennett et al. [36] when dealing with an entangled state of two spin-1/2 particles - it gives a way of understanding which combinations of spin states are more entangled than others, or something. The corners of a tetrahedron A, B, C and D represent four Bell states while the centre E represents a totally unpolarized state. Aravind explains: “The twirl operation can also be visualized readily on the Horodecki diagram. The effect of a twirl on an arbitrary Bell diagonal mixture is to project it orthogonally onto the line AE containing the Werner states. For a non-separable state in the A-sector of the tetrahedron, this reduction is achieved without any loss in entanglement but for states in the B, C and D sectors there is a complete loss of entanglement.
The proper way to reduce the latter states is to either subject them to a modified twirl [36] that projects them onto Werner-like states in their own sectors or else to transfer them into the A-sector (by a suitable unilateral rotation) and then apply the standard twirl.” There's an octahedron embedded in the tetrahedron, formed by the intersection of the tetrahedron and its inverse, within which lie all the separable states. How does this compare to Milgrom's picture? Milgrom built up his intersecting tetrahedra from at least three “particles” so he would need a different shape (probably in more than three dimensions) to represent all their states; the centre, representing complete unpolarization and being the most unentangled state in Aravind's picture, is the patient (probably) in Milgrom's picture, but the patient is also a face; in Aravind's picture the vertices of the tetrahedron represent maximally-entangled Bell states, while Milgrom seems to have expectation values or operators or something. So it's clear that just because he has contrived to arrive at the same shape doesn't mean that he's somehow doing something connected to what these guys are doing. (It may not be a total coincidence either that Sandu Popescu [39] is acknowledged by Aravind [38] and cited by Milgrom [40] in his reply to Leick [31].) To conclude, then: in order to avoid facing the fact that quantum mechanics is simply not relevant to the system of a homeopath and a patient [41], Milgrom concludes that the “state functions representing each of the Px, Pr, Rx, and the PPR entangled state are not related to quantifiable physical observables”, admitting how useless it all is for actually working anything out; but when he states that “it is clear that the nature of the therapeutic process requires its initial separation and ‘isolation’ from the usual external environment, as a necessary prerequisite for the coherence of entanglement to occur, and cure to begin,” he admits something I think we already knew: that it is necessary to be out of touch with reality to be a homeopath.
1. L. R. Milgrom, J. Alt. Comp. Med. 14, 329 (2008).
2. A. Hankey, J. Alt. Comp. Med. 14, 221 (2008).
3. K. R. Popper, The Logic of Scientific Discovery (Hutchinson, 1959).
4. S. Y. Auyang, How is Quantum Field Theory Possible? (Oxford University Press, 1995).
5. L. R. Milgrom, Homeopathy 96, 209 (2007).
6. H. Walach, Semiotica 83, 81 (1991).
7. H. Walach, Brit. Homeopathy J. 89, 127 (2000).
8. D. Gernert, Biosystems 54, 165 (2000).
9. D. Gernert, Frontier Perspectives 14, 8 (2005).
10. L. R. Milgrom, Homeopathy 91, 239 (2002).
11. M. Frass, C. Dielacher, M. Linkesch, C. Endler, I. Muchitsch, E. Schuster, et al., Chest 127, 936 (2005).
12. I. Chalmers and R. Matthews, The Lancet 367, 449 (2006).
13. A. Robertson, R. Suryanarayanan, and A. Banerjee, Homeopathy 96, 17 (2007).
14. E. Ernst, Homeopathy 96, 285 (2007).
15. A. Robertson, Homeopathy 96, 285 (2007).
16. K. Linde, N. Clausius, G. Ramirez, D. Melchart, F. Eitel, L. V. Hedges, et al., The Lancet 350, 834 (1997).
17. K. Linde, M. Scholz, G. Ramirez, N. Clausius, D. Melchart, and W. B. Jonas, J. Clin. Epidemiol. 52, 631 (1999).
18. K. Linde and W. Jonas, The Lancet 366, 2081 (2005).
19. A. Shang, K. Huwiler-Müntener, L. Nartey, P. Jüni, S. Dörig, et al., The Lancet 366, 726 (2005).
20. D. M. Greenberger, M. A. Horne, A. Shimony, and A. Zeilinger, Am. J. Phys. 58, 1131 (1990).
21. L. J. Landau, Lett. Math. Phys. 14, 33 (1987).
22. L. M. K. Vandersypen, M. Steffen, G. Breyta, C. S. Yannoni, M. H. Sherwood, and I. L. Chuang, Nature 414, 883 (2001).
23. M. Tegmark, Phys. Rev. E 61, 4194 (2000).
24. S. Hagan, S. R. Hameroff, and J. A. Tuszynski, Phys. Rev. E 65, 061901 (2002).
25. J. Bardeen, L. N. Cooper, and J. R. Schrieffer, Phys. Rev. 108, 1175 (1957).
26. L. Landau, Phys. Rev. 60, 356 (1941).
27. M. Schiff, The Memory of Water: Homoeopathy and the Battle of Ideas in the New Science (Thorsons, 1995).
28. E. Del Giudice, G. Preparata, and G. Vitiello, Phys. Rev. Lett. 61, 1085 (1988).
29. J. Teixeira, Homeopathy 96, 158 (2007).
30. H. Atmanspacher, H. Römer, and H. Walach, Found. Phys. 32, 379 (2002).
31. P. Leick, Homeopathy 97, 50 (2008).
32. L. R. Milgrom, Forsch. Komplementmed. 13, 174 (2006).
33. S. Kochen and E. Specker, Indiana Univ. Math. J. 17, 59 (1968).
34. A. Peres, J. Phys. A 24, L175 (1991).
35. R. Horodecki and M. Horodecki, Phys. Rev. A 54, 1838 (1996).
36. C. H. Bennett, D. P. DiVincenzo, J. A. Smolin, and W. K. Wootters, Phys. Rev. A 54, 3824 (1996).
37. C. H. Bennett, G. Brassard, S. Popescu, B. Schumacher, J. A. Smolin, and W. K. Wootters, Phys. Rev. Lett. 76, 722 (1996).
38. P. K. Aravind, Phys. Lett. A 233, 7 (1997).
39. S. Popescu, Nature Physics 2, 507 (2006).
40. L. R. Milgrom, Homeopathy 97, 96 (2008).
41. H. M. Wiseman and J. Eisert, arXiv:0705.1232v2 [physics] (2007).
Posted by: ( Posted at: 9th May 2008 20:12 (UTC) "The problem I always have with Milgrom is that I try to read it as if it were science." I was wondering if Milgrom might feel that the problem he has with you is that you try to read his work as if it were science - because, having done so, you then demonstrate that it isn't science.
Posted by: ( Posted at: 10th May 2008 14:39 (UTC) Excellent Analysis Once again, an excellent analysis. Nice de-mystification of the DBRCT methodology. My big problems with Milgrom's work are (as you observe): (a) removing the 'constraint' of Planck's constant breaks the link between (the now WQT) theory and reality; (b) not being clear if he is using quantum theory as a metaphor (or illustration, as he now has it in your ref. 40) or explanation; (c) not correcting his mistakes. This just leads to a body of work that has no link with any kind of practical reality and is not self-consistent; the latter point totally negates any value (not that there is any) in using quantum theory as a metaphor for the homeopathic intervention. The lack of clarity about what it is that he is really doing invites the mistake of homeopathic apologists seeing this work as showing that quantum theory can 'explain' how homeopathy 'works'.
Posted by: ((Anonymous)) Posted at: 10th May 2008 20:38 (UTC) Re: Excellent Analysis I agree with A.P. Gaylard: Once again, excellent and very thorough analysis! I must admit that I tried reading this latest paper from L.R. Milgrom, but only succeeded insofar as I managed, from the beginning to the end of the text, to decipher the words. I understood some "highlights", but the deeper meaning was lost somewhere in the shallow recesses of my materialistic brain. Anyway, there is actually a fourth big problem with Milgrom's work: even if you are willing to give him the benefit of the doubt, and start thinking within the framework of Weak Quantum Theory, his reasoning still makes no sense whatsoever. Shpalman's dissection of Milgrom's latest paper shows this in glorious detail (see above).
There is no logical train of thought, just one metaphor or illustration after the other. The connections between them are vague, with no compelling reason to move from one step to the next. The whole thing seems rather gratuitous... For example, there are plenty of other ways to play with triangles, and I see no justification for the lines of thought followed by Milgrom. Perhaps Milgrom's admission, in his response to my letter, that he has "implicitly adopted a post-modern stance", gives us a clue as to how we are supposed to read his papers. Certainly not as science: as you so rightly observe, this only leads to headaches. Philippe Leick
Posted by: shpalman (shpalman) Posted at: 10th May 2008 16:25 (UTC) Thanks. For those of you who appreciate a Feynman chaser:
Posted by: ((Anonymous)) Posted at: 12th May 2008 07:29 (UTC) Great stuff shpalman. You really should try to get this work published in a respectable journal. Given the time and effort involved you really should get some professional recognition.
Posted by: ((Anonymous)) Posted at: 12th May 2008 18:27 (UTC) Why would a respectable journal publish a critique of some articles that they themselves would never even consider for publication? The Skeptical Inquirer or some similar magazine might be the way to go. Or letters to the "offending" journal. Having done both myself, I know all too well how much work this is. And then, hasn't this blog already been knighted? In his recent editorial praising Milgrom's work, Alex Hankey cites Shpalman, writing that from "[...] the pain [Milgrom's] work has evidently caused in recent months", it is all too clear that he is a creative scientific mind. And then Milgrom himself: "However, it is one thing to invite open debate and criticism; quite another to allow the cynicism and disparagement that is the lingua franca of some sceptical blog-sites." The citations ending the last quote lead directly to this well-known site. I think, however, that these topics are beyond debate. Debate requires some common ground, which I just don't see here... Philippe Leick
Posted by: ((Anonymous)) Posted at: 15th May 2008 16:57 (UTC) I have to agree that there's probably not a lot of point in sending this stuff to a journal, unless you want to submit it to "Homeopathy", and have Milgrom write a lot of rubbish in response. I think it would definitely be worth trying to publish it somewhere more formal than a blog, though. There's no doubt that it's excellent work, and it must take up a reasonable amount of time, too. In any case, best wishes, and keep up the good work.
Posted by: ((Anonymous)) Posted at: 3rd August 2008 11:41 (UTC) I should say Thanks!
Posted by: ((Anonymous)) Posted at: 23rd January 2010 23:29 (UTC) Homeopathy Heals
Posted by: shpalman (shpalman) Posted at: 24th January 2010 07:19 (UTC) Re: Homeopathy Heals No it doesn't.
Posted by: ((Anonymous)) Posted at: 25th January 2010 01:28 (UTC) Re: Homeopathy Heals No it doesn't. Not any better than an allopathic placebo, anyway.
Posted by: ((Anonymous)) Posted at: 4th March 2011 14:37 (UTC) give me your facebook What is your real name? Maybe I can reply to you on your Facebook page.
Monday, May 29, 2006
Non-Relativistic QCD
This is another installment in our series about fermions on the lattice. In the previous posts in this series we had looked at various lattice discretisations of the continuum Dirac action, and how they dealt with the problem of doublers posed by the Nielsen-Ninomiya theorem. As it turned out, one of the main difficulties in this was maintaining chiral symmetry, which is important in the limit of vanishing quark mass. But what about the opposite limit -- the limit of infinite quark mass? As it turns out, that limit is also difficult to handle, but for entirely different reasons: The correlation functions, from which the properties of bound states are extracted, show an exponential decay of the form $$C(T,0)\sim e^{-maT}$$, where $$T$$ is the number of timesteps, and $$ma$$ is the product of the state's mass and the lattice spacing. Now for a heavy quark, e.g. a bottom, and the lattice spacings that are feasible with the biggest and fastest computers in existence today, $$ma\approx 2$$, which means that the correlation functions for an $$\Upsilon$$ will decay like $$e^{-4T}$$, which is way too fast to extract a meaningful signal. (Making the lattice spacing smaller is so hard because in order to fill the same physical volume you need to increase the number of lattice points accordingly, which requires a large increase in computing power.) Fortunately, in the case of heavy quark systems the kinetic energies of the heavy quarks are small compared to their rest masses, as evidenced by the relatively small splittings between the ground and excited states of heavy $$Q\bar{Q}$$ mesons. This means that the heavy quarks are moving at non-relativistic velocities $$v\ll c$$ and can hence be well described by a Schrödinger equation instead of the full Dirac equation, after integrating out the modes with energies of the order of $$E\gtrsim M$$. The corresponding effective field theory is known as Non-Relativistic QCD (NRQCD) and can be schematically written using the Lagrangian $$\mathcal{L} = \psi^\dag \left(\Delta_4 - H\right)\psi$$ where $$\psi$$ is a non-relativistic two-component Pauli spinor, $$\Delta_4$$ is a temporal lattice derivative, and the Hamiltonian is $$H = - \frac{\bm{\Delta}^2}{2M} + \textrm{(relativistic and other corrections)}$$ In actual practice, this is not a useful way to write things, since it is numerically unstable for $$Ma<3$$; instead one uses an action that looks like $$\mathcal{L} = \psi^\dag\psi - \psi^\dag\left( 1 - \frac{a\delta H}{2} \right) \left( 1 - \frac{aH_0}{2n} \right)^n U_4^\dagger\left( 1 - \frac{aH_0}{2n} \right)^n \left( 1 - \frac{a \delta H}{2} \right)\psi$$ where $$H_0 = - \frac{\bm{\Delta}^2}{2M}$$ whereas $$\delta H$$ incorporates the relativistic and other corrections, and $$n\ge 1$$ is a numerical stability parameter that makes the system stable for $$Ma>3/(2n)$$. This complicated form makes NRQCD rather formidable to work with, but it can be and has been successfully used in the description of the $$\Upsilon$$ system and in other contexts. In fact, some of the most precise predictions from lattice QCD rely on NRQCD for the description of heavy quarks. It should be noted that the covariant derivatives in NRQCD are nearest-neighbour differences -- the reasons for having to take symmetric derivatives don't apply in the non-relativistic case; hence there are no doublers in NRQCD.
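(As a toy illustration of where such a stability bound comes from - this is my own 1D back-of-the-envelope sketch, so the numbers differ from the 3D counting behind the $$3/(2n)$$ quoted above - consider the most ultraviolet free-field mode, for which $$aH_0 = 2/(Ma)$$ with the naive Laplacian in lattice units. Each timestep multiplies it by $$(1 - aH_0/2n)^{2n}$$, since the kinetic kernel appears twice, and if the magnitude of that factor exceeds 1 the evolution blows up:)

# Toy 1D stability check for the split kinetic kernel (1 - a H0/2n)^n,
# applied twice per timestep; a = 1 in lattice units, and
# a*H0_max = 2/(Ma) is the largest free-field kinetic eigenvalue in 1D.
def growth_factor(Ma, n):
    aH0max = 2.0 / Ma
    return (1.0 - aH0max / (2 * n)) ** (2 * n)

for n in (1, 2):
    for Ma in (0.2, 0.5, 1.0, 3.0):
        g = growth_factor(Ma, n)
        flag = "  <-- unstable" if abs(g) > 1 else ""
        print(f"n={n}, Ma={Ma}: per-step factor {g:+.3f}{flag}")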
On Particles Mass and the Universons Hypothesis
Jacques Consiglio
In the logic of the Universons assumption, we deduce the nature of the de Broglie wave and a periodic mass variation for particles. We verify consistency with quantum mechanics, in particular the Schrödinger equation. We analyze the hypothesis that elementary particle mass is momentum circulating at light speed. We discover resonance rules acting within elementary particles, leading to a formula governing the quantization of masses. Applying this formula to the electrons, muons, tauons and quarks, we find resonances that match current measurements. We deduce the energy of unknown massless sub-particles at the core of electrons, muons, and tauons. Geometrical constraints inherent to our formula lead to a possible explanation for there being only three generations of particles. Based on particle geometry, we verify the consistency of the deduced quark structure with QCD and raise the hypothesis that color charge is magnetic. We verify consistency with QCD symmetry and find that P and CP symmetry are broken by the interaction, in agreement with what is known of the weak force. Our logic leads us to re-interpret the Dirac condition on the magnetic monopole charge, explain why the detection of magnetic monopoles is so difficult and, when they are detected, why the magnetic charge can depart from the Dirac prediction. We deduce a possible root cause of gravitation, resulting in the Schwarzschild metric and the probable non-existence of dark matter.
Full Text: PDF Supp. DOI: 10.5539/apr.v4n2p144
This work is licensed under a Creative Commons Attribution 3.0 License.
Copyright © Canadian Center of Science and Education
Project info for QuantuMagic
Created 15 May 2000 at 21:22 UTC by Aspuru.
QuantuMagiC is a FORTRAN program based upon the quantum Monte Carlo (QMC) method for solving the electronic, non-relativistic, clamped-nuclei Schrödinger equation. The current version performs variational Monte Carlo (VMC) and fixed-node diffusion Monte Carlo (DMC) computations of the energy and other properties of atoms and molecules. Version 7.7 can perform both all-electron and effective-core-potential calculations. Before using this program I suggest you read as many of the following as possible:
1. P. J. Reynolds, D. M. Ceperley, B. J. Alder, and W. A. Lester, Jr., "Diffusion Monte Carlo for Atoms and Molecules," J. Chem. Phys. 77, 5593-5603 (1982).
2. W. A. Lester, Jr. and B. L. Hammond, "Fixed Node Quantum Monte Carlo for Molecules," Annu. Rev. Phys. Chem. 41, 283-311 (1990).
3. B. H. Wells, "Green's function Monte Carlo," in Methods in Computational Chemistry 1, 311-50 (1987).
4. M. H. Kalos and P. A. Whitlock, Monte Carlo Methods, Vol. 1: Basics. Wiley.
5. B. L. Hammond, W. A. Lester, Jr., and P. J. Reynolds, Monte Carlo Methods in ab initio Electronic Structure Theory. World Scientific Press, 1994 (ISBN 981-02-0322-5).
License: Modified BSD / Elsevier-Like
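(To give a flavour of what the VMC part of such a code does, here is a deliberately minimal Python sketch - my own toy, not QuantuMagiC's algorithm or code - for the hydrogen atom in atomic units. The trial function is psi = exp(-alpha*r), whose local energy is E_L = -alpha^2/2 + (alpha - 1)/r, so alpha = 1 gives exactly -0.5 hartree.)

# Minimal variational Monte Carlo for hydrogen: Metropolis-sample |psi|^2
# and average the local energy. No burn-in or error bars -- it's a sketch.
import numpy as np

rng = np.random.default_rng(42)

def vmc_energy(alpha, nsteps=50_000, step=0.5):
    r = np.array([0.1, 0.2, 0.3])           # electron position
    e_sum, n_acc = 0.0, 0
    for _ in range(nsteps):
        trial = r + step * rng.uniform(-1, 1, 3)
        # acceptance ratio |psi(trial)|^2 / |psi(r)|^2 = exp(-2 alpha (r'-r))
        if rng.random() < np.exp(-2 * alpha *
                                 (np.linalg.norm(trial) - np.linalg.norm(r))):
            r = trial
            n_acc += 1
        rr = np.linalg.norm(r)
        e_sum += -0.5 * alpha**2 + (alpha - 1.0) / rr   # local energy
    return e_sum / nsteps, n_acc / nsteps

for alpha in (0.8, 1.0, 1.2):
    E, acc = vmc_energy(alpha)
    print(f"alpha={alpha}: <E_L> ~ {E:.4f}  (acceptance {acc:.2f})")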
Inspired by this question: Are these two quantum systems distinguishable? and discussion therein. Given an ensemble of states, the randomness of a measurement outcome can be due to classical reasons (a classical probability distribution of states in the ensemble) and quantum reasons (an individual state can be a superposition of states). Because a classical system cannot be in a superposition of states, and in principle the state can be directly measured, the probability distribution is directly measurable. So any differing probability distributions are distinguishable. However in quantum mechanics, an infinite number of different ensembles can have the same density matrix. What assumptions are necessary to show that if two ensembles initially have the same density matrix, there is no way to apply the same procedure to both ensembles and achieve different density matrices? (i.e. that the 'redundant' information regarding what part of Hilbert space is represented in the ensemble is never retrievable even in principle) To relate to the referenced question, for example if we could generate an interaction that evolved: 1) an ensemble of states $|0\rangle + e^{i\theta}|1\rangle$ with a uniform distribution in $\theta$ 2) an ensemble of states $|0\rangle + e^{i\phi}|1\rangle$ with a non-uniform distribution in $\phi$ such a mapping of vectors in Hilbert space can be 1-to-1. But it doesn't appear it can be done with a linear operator. So it hints that we can probably prove an answer to the question using only the assumption that states are vectors in a Hilbert space, and the evolution is a linear operator. Can someone list a simple proof showing that two ensembles with initially the same density matrix can never evolve to two different density matrices? Please be explicit with what assumptions you make. Update: I guess to prove they are indistinguishable, we'd also need to show that non-unitary evolution, like the projection from a measurement, can't eventually allow one to distinguish the underlying ensemble either. Such as perhaps using correlation between multiple measurements, or possibly, instead of asking something with only two answers, asking something with more than two, so that finally the distribution of answers needs more than just the expectation value to characterize the results.
Hah! I addressed your update in my answer before I even saw it. –  Keenan Pepper Apr 6 '11 at 1:05
2 Answers
You only need to assume 1. the Schrödinger equation (yes, the same old linear Schrödinger equation, so the proof doesn't work for weird nonlinear quantum-mechanics-like theories) 2. the standard assumptions about projective measurements (i.e. the Born rule and the assumption that after you measure a system it gets projected into the eigenspace corresponding to the eigenvalue you measured) Then it's easy to show that the evolution of a quantum system depends only on its density matrix, so "different" ensembles with the same density matrix are not actually distinguishable. First, you can derive from the Schrödinger equation a time evolution equation for the density matrix. This shows that if two ensembles have the same density matrix and they're just evolving unitarily, not being measured, then they will continue to have the same density matrix at all future times.
The equation is $$\frac{d\rho}{dt} = \frac{1}{i\hbar} \left[ H, \rho \right]$$ Second, when you perform a measurement on an ensemble, the probability distribution of the measurement results depends only on the density matrix, and the density matrix after the measurement (of the whole ensemble, or of any sub-ensemble for which the measurement result was some specific value) only depends on the density matrix before the measurement. Specifically, consider a general observable (assumed to have discrete spectrum for simplicity) represented by a hermitian operator $A$. Let the diagonalization of $A$ be $$A = \sum_i a_i P_i$$ where $P_i$ is the projection operator into the eigenspace corresponding to eigenvalue (measurement outcome) $a_i$. Then the probability that the measurement outcome is $a_i$ is $$p(a_i) = \operatorname{Tr}(\rho P_i)$$ This gives the complete probability distribution of $A$. The density matrix of the full ensemble after the measurement is $$\rho' = \sum_i P_i \rho P_i$$ and the density matrix of the sub-ensemble for which the measurement value turned out to be $a_i$ is $$\rho'_i = \frac{P_i \rho P_i}{\operatorname{Tr}(\rho P_i)}$$ Since none of these equations depend on any property of the ensemble other than its density matrix (e.g. the pure states and probabilities of which the mixed state is "composed"), the density matrix is a full and complete description of the quantum state of the ensemble.
Oh, and for the case of an observable $A$ with a continuous spectrum, it works basically the same way. For mathematicians it might get more hairy, but as a physicist I have no problem just saying "replace all the summation signs with integrals". –  Keenan Pepper Apr 6 '11 at 0:59
You don't even need to assume the Schrödinger equation, but only the fact that the evolution of a quantum state is unitary. –  Frédéric Grosshans Apr 10 '11 at 19:14
Density matrices are an alternative description of quantum mechanics. Consequently, if two ensembles have the same density matrix, they are not distinguishable. Example: consider the unpolarized spin-1/2 density matrix, which can be modeled as a system that is half pure states in the +x direction and half in the -x direction, or alternatively, as half pure states in the +z direction (i.e. spin up) and half in the -z direction (i.e. spin down): $$\begin{pmatrix}0.5&0\\0&0.5\end{pmatrix} = 0.5\rho_{+x}+0.5\rho_{-x} = 0.5\rho_{+z}+0.5\rho_{-z}$$ Now compute the average value of an operator $H$ with respect to these ensembles. Let $$H = \begin{pmatrix}h_{11}&h_{12}\\h_{21}&h_{22}\end{pmatrix}$$ then the averages for the four states involved are: $$\begin{array}{rcl} \langle H\rangle_{+x} &=& 0.5(h_{11}+h_{12}+h_{21}+h_{22})\\ \langle H\rangle_{-x} &=& 0.5(h_{11}-h_{12}-h_{21}+h_{22})\\ \langle H\rangle_{+z} &=& h_{11}\\ \langle H\rangle_{-z} &=& h_{22} \end{array}$$ From the above, it's clear that taking the average over $\pm x$ will give the same result as taking the average over $\pm z$; that is, in both cases the ensemble will give an average of $$\langle H\rangle = 0.5(h_{11}+h_{22})$$ Any preparation of the system amounts to an operator acting on the states and so $H$ can stand for a general operation. Therefore there is no way of distinguishing an unpolarized mixture of ±x from an unpolarized mixture of ±z. The argument for general density matrices is similar, but I think this gets the point across.
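(A quick numpy check of the two answers above - my own addition, not part of the original thread: the ±x and ±z unpolarized ensembles build the same density matrix, hence identical statistics for any observable.)

import numpy as np

def rho_from(states, probs):
    # rho = sum_k p_k |psi_k><psi_k|
    return sum(p * np.outer(s, s.conj()) for s, p in zip(states, probs))

up   = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
plus_x  = (up + down) / np.sqrt(2)
minus_x = (up - down) / np.sqrt(2)

rho_x = rho_from([plus_x, minus_x], [0.5, 0.5])
rho_z = rho_from([up, down], [0.5, 0.5])
print(np.allclose(rho_x, rho_z))             # True: same density matrix

# Tr(rho H) agrees for both, for an arbitrary Hermitian H
rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = A + A.conj().T                            # make it Hermitian
print(np.trace(rho_x @ H).real, np.trace(rho_z @ H).real)   # equal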
Are you saying instead of representing a state as a vector in Hilbert space, it is sufficient to represent a state as a density matrix? It seems like this view would change the counting of physical states and would have an effect in statistical mechanics or thermodynamics of a system. It almost seems like you would be reducing the entropy by mixing two ensembles. –  Ginsberg Apr 6 '11 at 0:32
Either way, the whole point of the question was to see a concrete mathematical proof. Instead of just saying it is so, can you please show how it is so, such that I can learn more? –  Ginsberg Apr 6 '11 at 0:34
@Ginsberg: Yes, a density matrix is equivalent to a collection of pure states (presumably represented by state vectors) along with a probability density for the pure states. I've not found the reference I was looking for so I'll type up an outline of a proof and edit it in. –  Carl Brannen Apr 6 '11 at 0:45
Nonlinear Wave Equations and Applications
Org: Walter Craig (McMaster) and Catherine Sulem (Toronto)
STEPHEN ANCO, Brock University, St. Catharines, Ontario
Symmetry analysis of nonlinear wave equations in n > 1 dimensions
Symmetry analysis has several important uses in the study of nonlinear evolution equations, particularly for (1) identifying critical dimensions, (2) deriving conserved norms and conservation identities, and (3) finding explicit solutions with invariance properties. Applications to semilinear wave equations, Schrödinger equations, and generalized Korteweg-de Vries equations in n > 1 dimensions will be presented.
Long wave expansions for water waves over random bottom
We introduce a technique, based on perturbation theory for Hamiltonian PDEs, to derive the asymptotic equations of the motion of a free surface of a fluid over a rough bottom (one dimension). The rough bottom is described by a realization of a stationary mixing process which varies on short length scales. We show that the problem in this case does not fully homogenize, and random effects are as important as dispersive and nonlinear phenomena in the scaling regime. We will explain how these techniques can be generalized to higher dimensions.
CLEMENT GALLO, McMaster University, 1280 Main Street W., Hamilton, ON L8S 4K1
Transverse instability for the dark solitons of the cubic defocusing NLS equation
In one space dimension, the cubic defocusing nonlinear Schrödinger equation $i\partial_t u + \Delta u + (1-|u|^2)u = 0$, $(t,x) \in \mathbb{R} \times \mathbb{R}^d$, admits solitary waves which do not vanish at infinity, the so-called dark solitons. These dark solitons are orbitally stable for the dynamics of the one-dimensional equation (d=1). The dark solitons can also be seen as solutions of the two-dimensional equation (d=2), being constant in the transverse direction. The purpose of this talk is to show that they are nonlinearly unstable for the dynamics of the two-dimensional equation.
JIANSHENG GENG, McMaster University, Hamilton, Ontario L8S 4K1
Invariant Tori of Full Dimension for a Nonlinear Schrödinger Equation
In this talk, we consider the one-dimensional nonlinear Schrödinger equation $i u_t - u_{xx} + m u + f(|u|^2)u = 0$ with periodic boundary conditions or Dirichlet boundary conditions, where f is a real analytic function in some neighborhood of the origin satisfying f(0) = 0, f'(0) ≠ 0. We prove that for each given constant potential m, the equation admits a Whitney smooth family of small-amplitude, time almost-periodic solutions with all frequencies. The proof is based on a Birkhoff normal form reduction and an improved version of the KAM theorem. Thus, we give an affirmative answer to an open problem stated in Pöschel (Ergodic Theory Dynam. Systems 22 (2002), 1537-1549) and Bourgain (J. Funct. Anal. 229 (2005), 62-94).
PHILIPPE GUYENNE, Department of Mathematical Sciences, University of Delaware, Newark, DE 19176, USA
Hamiltonian formulation and long wave models for internal waves
We derive a Hamiltonian formulation of the problem of a dynamic free interface (with rigid lid boundary conditions), and of a free interface coupled with a free surface, in view of modeling internal waves in oceans. Based on the linearized equations, we highlight the discrepancies between the cases of rigid lid and free-surface boundary conditions, which in some circumstances can be significant.
We also derive systems of nonlinear dispersive long-wave equations in the large-amplitude regime, and numerically compute their solitary wave solutions. Comparisons with other weakly and fully nonlinear results show good agreement.
KONSTANTIN KHANIN, University of Toronto, Department of Mathematics, 40 St. George Street, Toronto, Ontario M5S 2E4
Localization and pinning for directed polymers
We shall present a few results (joint with Yu. Bakhtin) on localization for directed polymers. Directed polymers can be considered as random walks in a random potential. They play an important role in the analysis of the parabolic Anderson model and the randomly forced Burgers equation. We are mostly interested in the case when the random potential has a product structure. Namely, it is given by the product of two terms. The first one is a space-dependent potential, while the second is white noise in time. We show that the corresponding polymers are localized provided that the spatial part of the potential has a large maximum (or minimum). We also consider the case when the spatial potential is a stationary process. In this case we show that polymers at zero temperature (action-optimizing paths) have strong pinning properties. We calculate critical exponents for the optimal action fluctuations and for transversal fluctuations of optimal paths. We also show that the probability distribution for normalized optimal action fluctuations converges to a universal limit as t → ∞.
DAVID LANNES, McMaster University, Hamilton, Ontario, Canada
The Camassa-Holm and Degasperis-Procesi equations and water waves
The Camassa-Holm and Degasperis-Procesi equations are well-known bi-Hamiltonian equations with a very rich structure. The aim of this talk is to show how these equations can be seen as model equations in water wave theory and to point out their relevance to the description of wave breaking phenomena.
JEREMY QUASTEL, University of Toronto
Wiener meets Korteweg and de Vries
Gaussian white noise is an invariant measure for KdV on the circle. We explain what this means, and why it is true. Joint work with Benedek Valko.
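(A small numerical aside on the dark solitons in Gallo's abstract - my own illustration, using the standard stationary "black" soliton profile u(x) = tanh(x/√2) of the 1D equation; the finite-difference residual of u'' + (1 - u²)u should vanish up to discretization error.)

import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
u = np.tanh(x / np.sqrt(2))                  # stationary dark soliton

upp = np.gradient(np.gradient(u, dx), dx)    # second derivative, O(dx^2)
residual = upp + (1 - u**2) * u
print(np.max(np.abs(residual[5:-5])))        # of order 1e-5: profile solves
                                             # u'' + (1 - u^2) u = 0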
Open Journal of Molecular and Integrative Physiology (OJMIP), ISSN 2162-2159, Scientific Research Publishing, Vol. 7 (2017), pp. 53-56. DOI: 10.4236/ojmip.2017.74005
Rotational Flows, Thermodynamics, Angle Vibration and Action of Whirly Flows on Fishes
Lena J.-T. Strömberg, Department of Solid Mechanics, KTH Royal Institute of Technology, Stockholm, Sweden. E-mail: lena_str@hotmail.com
Received 13 August 2017; accepted 17 October 2017; published 20 October 2017.
Copyright © by the author and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0/
Abstract: A continuum thermodynamic model for how whirls can transform into thermal energy-forms determined by a functional relation for temperature is derived. This is used to describe how fishes maintain circulation in the vascular system at very low temperatures.
Keywords: Entropy; Temperature; Thermodynamics; Angular Velocity; Vorticity; Eddy; Harmonic Oscillator; Angle Vibration; Temperature Distribution; Spatial Cylindrical Coordinates; φ(t)
1. Introduction
Theories for interactions of different types of energy, e.g. kinetic, mechanical and thermal, are of importance in many fields. This also invokes transformations between them, dissipation, and the creation of sublevels and super levels. Here, we will scrutinize the ramifications of thermodynamics and continuum mechanics for a viscous incompressible fluid.
2. Balance Equation for Entropy
For a fluid with viscosity, continuum mechanics with thermodynamics and additional assumptions on heat capacity and heat flux gives an equation for the rate of entropy, reading
$\rho\theta\, d_t\eta = s - \kappa\Delta\theta + \lambda(\operatorname{tr}L)^2 + 2\mu\operatorname{tr}L^2$   (1)
where, with the usual notations, ρ, η, s, θ, tr L and $\lambda(\operatorname{tr}L)^2 + 2\mu\operatorname{tr}L^2$ are the density, entropy, heat radiation, temperature, trace of the velocity gradient, and the internal friction in the viscous motion, and $d_t$ denotes the total time derivative. Here, we consider a flow consisting of whirls such that L = skew(L) = w×, where × is the cross product (and w the vorticity vector). From thermodynamics, entropy is the energy conjugate of temperature, and there are different types of energies, cf. [1]. The energies are potential functions in so-called Legendre transformations, used to change between (independent) variables. The conjugated variables for mechanical energy are p/ρ and ρ, but these will not be used in the present analysis.
Entropic Forces in Elasticity and Continuum Mechanics
An example of an entropic system in material science is a study of biological tissues with some elasticity, and motion also governed by a statistical random distribution dependent on temperature, cf. [2]. It appears that in the harmonisation between the continuum mechanical formulation and classical statistical mechanics, the molecule mass enters in the statistical caloric equation of state. An exact evaluation gives that $p = (k/m_w)\rho\theta$, where k is Boltzmann's constant and $m_w$ is the molecular weight. If altering the example in [2] into continuum mechanics, then k should be replaced with $k/m_w$. Since these systems are often used with self-calibration, the exact expressions do not show, but instead changes and ratios. First, we shall adapt the framework in [2] to rotations instead of displacements. Hereby, interpreting $w^2$ as weighted from a statistical temperature distribution, we obtain that $w^2$ is proportional to $\theta k/m_w$. Then, with the notations of the balance Equation (1), we identify the left side with this, such that $\rho\, d_t\eta = k/m_w$.
3. Solutions in Terms of Independent Variables on a Manifold
To connect $w^2$ with kinematics and geometry, alternate frames could be chosen: with discrete memory as implied from Tti in anco, the acceleration may be from a previous state. This gives a change of direction, such that the angular acceleration is the square of the angular velocity. With no heat or temperature-dependent terms, the balance reads
$\rho\theta\, d_t\eta = 2\mu w^2$   (2)
Taking $d_t\eta$ proportional to temperature gives a "symmetry" such that the vorticity w is proportional to temperature. For systems with many (rotational) d.o.f., harmonic oscillators are often assumed as the point of departure. Therefore, only the inertia parts are exported to a multi-dimensional Hamiltonian; the individual potentials, as well as the potential energy, are invoked in some manner into the total energy, and the kinematic interactions are neglected. This is the foundation of the Schrödinger equation in QM. Here we will proceed with one d.o.f. on the meso-scale provided by continuum mechanics. Formulations in terms of energies depending on θ, η, w and the energy conjugate to w could be compared with the notations of a manifold, cf. [1]. Then, either a measure of whirl and temperature as forms without connection to coordinates, or the whirl and temperature expressed in R³ coordinates, could be used as the independent variable in the analysis. Next, the latter will be considered, to see how the fields matter in real space.
Solutions in Terms of Spatial Coordinates in R3
Since w is connected to the angle φ as $w = \varphi_t$, we will consider solutions $\theta = \theta(\varphi)$.
・ With $w^2 = \varphi_{tt}$, i.e. the angular acceleration, and θ proportional to the angle φ, (2) gives hyperbolic solutions as functions of angles.
・ When $d_t\eta = \text{const}$ there are solutions to (1): $\theta = C\exp(-a\varphi)$, where r, φ are cylindrical coordinates, and $(a/r)^2\kappa = \rho\, d_t\eta - b$, $2\mu w^2 = -s + b\theta$. When $d_t\eta = 0$, the solution parameters a, b can be expressed in statistical variables: $b = k/m_w$, $(a/r)^2\kappa = -b$.
・ Finally, a case with small-scale harmonic oscillator solutions will be derived. With Δθ proportional to θ, a solution is $\theta = D\sinh(a\varphi)$, where D and a are constants. With $w^2 = \varphi_{tt}$, insertion in (1) gives $2\mu\varphi_{tt} = -s + (\rho\, d_t\eta - \kappa(a/r)^2)\theta$. A linearised Taylor expansion of θ in φ now gives a harmonic oscillator for a sub-scale coordinate angle φ(t), and thus for the Taylor-expanded θ(φ). Hereby, from a spatial representation of temperature as sinus hyperbolicus(φ), we obtained (on a sub-level) a time-dependent harmonic oscillator for a sub-angle, i.e. $\varphi = \varphi_0\sin(\Omega_0 t)$, where $\Omega_0$ is a constant depending on the material parameters, the coordinate r, and $\rho\, d_t\eta$.
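(A hedged numerical illustration of the last bullet point - my own sketch, with made-up parameter values, not taken from the paper: linearising θ = D sinh(aφ) ≈ Daφ turns the angle equation into simple harmonic motion about an offset whenever the coefficient cDa is negative, with c = ρd_tη − κ(a/r)².)

import numpy as np
from scipy.integrate import solve_ivp

mu, s, D, a = 1.0, 0.2, 1.0, 1.0
c = -4.0                       # c = rho*dt_eta - kappa*(a/r)**2, chosen < 0

def rhs(t, y):
    # 2*mu*phi'' = -s + c*D*a*phi  (linearized theta)
    phi, phidot = y
    return [phidot, (-s + c * D * a * phi) / (2 * mu)]

sol = solve_ivp(rhs, (0, 20), [0.0, 0.0], max_step=0.01)
omega0 = np.sqrt(-c * D * a / (2 * mu))       # expected Omega_0 = sqrt(2)
print("Omega_0 =", omega0)
print("phi oscillates in", sol.y[0].min(), "...", sol.y[0].max())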
4. Applications
A plausible application is the vascular system of fishes in the North Sea, where it is cold. At the outside boundary of the fish, the flow creates whirl. This could be copied inside, but such a remote connection of shapes is not necessary to invoke in this modelling. It is sufficient to assume that any large-scale kinetic energy from outside can be transferred to the interior fluid, with a density and a vorticity flow. Another application, where whirls transform into other energy, is when fishes move up in a fall, cf. Figure 1 and [3]. First, we may consider it as a quantised system with two states, namely down, with whirls of energy V, and on its way almost up, with energy V − Vpot. Looking in more detail, it appears that the whirls provide an elastic ground to bump from. Then the description will materialize in spatial coordinates, but the input may remain similar, with energies. Functional expressions depending on coordinates in R³ are explained in [1], as exemplified above.
5. Conclusions
The model shows how whirls can transform into a thermal energy-form determined by a functional relation for temperature. Solutions are evaluated in cylindrical coordinates, such that temperature is a function of angle. The possibility for fishes of turning a larger-scale kinetic energy into heat inside a system is probably the most important application in this respect.
Acknowledgements: To Dr Chaffin and Dr Pudikievicz for providing guidance and references.
Cite this paper: Strömberg, L.J.-T. (2017) Rotational Flows, Thermodynamics, Angle Vibration and Action of Whirly Flows on Fishes. Open Journal of Molecular and Integrative Physiology, 7, 53-56. https://doi.org/10.4236/ojmip.2017.74005
References
[1] Starke, R. and Schober, G.A.H. (2016) Ab Initio Materials Physics and Microscopic Electromagnetism of Media. Subsection 4.1: Short Review of Thermodynamics. arXiv:1606.00445v2.
[2] Freund, L.B. (2014) Entropic Forces in the Mechanics of Solids. Procedia IUTAM, 10, 115-124. https://doi.org/10.1016/j.piutam.2014.01.013
[3] https://www.youtube.com/watch?v=iM0mn5unvoM
Pauli's Exclusion Principle
One very important concept and principle of quantum mechanics is the Pauli exclusion principle. Responsible for things such as the different electron orbitals, it can be observed in many places. The exclusion principle states that no two fermions can occupy the same quantum state. But what does this mean? It can be shown in terms of the wavefunction of the fermions, and in particular the wavefunction of a system comprising two neutrons. The wavefunction of a fermion describes mathematically its position and other such quantum properties. According to the Pauli exclusion principle the following is true about the wavefunction (Ψ) of a composite system of positions of two fermions:
Ψ(n1,n2) = −Ψ(n2,n1)
The formula above describes the fact that if the two neutrons are swapped, the composite wavefunction is multiplied by −1. However, if both occupy the same state, swapping them changes nothing:
Ψ(n1,n2) = Ψ(n2,n1)
However, the exclusion principle states that Ψ(n1,n2) = −Ψ(n2,n1). Thus for both of the above to be true:
Ψ(n1,n2) = −Ψ(n1,n2), i.e. Ψ(n1,n2) = 0
Therefore, it is mathematically impossible for two fermions to have the same wavefunction, and therefore occupy the same quantum state - unless one (or both) of the wavefunctions is equal to zero (meaning that they don't exist). I know the proof above is very basic, so I would love to learn how to properly prove it mathematically; I'm guessing something to do with the Schrödinger equation will help me?
PlanckTime - Amin
A lot of proofs of this are wrong, but yours is sweet and simple. The Pauli exclusion principle is formally stated like this: particles with half-integer spin, or fermions, are described by antisymmetric wavefunctions, i.e. ψ(a,b) = −ψ(b,a), while particles with integer spin, or bosons, are described by symmetric wavefunctions, ψ(a,b) = ψ(b,a). I described it quite simply here, and I am sure that you know spins and the antisymmetry relation, but I just wanted to be more specific in describing it.
That one guy that's always on
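(A small numpy sketch of the same argument - my own illustration of the standard construction, not part of the original post: the antisymmetric two-fermion state built from two single-particle orbitals, a 2x2 Slater determinant, changes sign under swapping and vanishes identically when the two orbitals coincide, which is exactly the exclusion principle.)

# Psi(x1, x2) = phi_a(x1) phi_b(x2) - phi_b(x1) phi_a(x2)
import numpy as np

x = np.linspace(-5, 5, 201)
phi_a = np.exp(-x**2 / 2)            # toy orbitals (not normalized)
phi_b = x * np.exp(-x**2 / 2)

def slater(f, g):
    # Psi on the (x1, x2) grid via outer products
    return np.outer(f, g) - np.outer(g, f)

Psi = slater(phi_a, phi_b)
print(np.allclose(Psi, -Psi.T))              # True: Psi(x1,x2) = -Psi(x2,x1)
print(np.allclose(slater(phi_a, phi_a), 0))  # True: same orbital => Psi = 0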
Tuesday, December 18, 2018
Killing The Schrodinger's Cat, at last and for good: part II
This post is one of a series of posts listed in the Appendix below. After writing my first reflection on the first two chapters of the book (Part I), I continued reading the book and kept notes while reading. In general, I enjoyed the reading, especially the parts about the personal history of various people. In that part my expectation turned out to be correct: Adam Becker offers a good account of the history. Once in a while the reading initiated an argument, and those I present below. Page 39 “When an electron is shot out into the tube, its wave function obeys the Schrödinger equation, undulating and propagating outward like a wave” This statement makes us think that a wave function describes an actual physical field, like an electric field. This is simply wrong, because a wave function describes a number distribution ("an amplitude") in space and time (related to the probability distribution). “So sometimes the electron behaves like a wave, and sometimes it behaves like a particle”. This statement is wrong, because an electron never behaves like a wave and always behaves like a particle. However, that particle demonstrates different macroscopic behavior under the same macroscopic conditions – which is different from the behavior of macroscopic particles, which always demonstrate the same macroscopic behavior under the same macroscopic conditions. Specifically, an electron hits a screen at different locations. Electrons – plural, many electrons – under the same macroscopic conditions demonstrate behavior visually similar to the behavior demonstrated by macroscopic waves. For example, when many electrons hit a screen at different locations, the resulting picture may look similar to the picture formed by waves traveling on a surface of water through two narrow slits. The difference between the waves in water and electrons is that every electron actually travels in space from a source to a screen, but water waves happen due to water molecules slowly moving about their equilibrium position and pushing on each other. The whole idea of “wave-particle duality” was developed as an attempt to make sense of theoretical concepts which could not fit into a well-developed classical picture. But since then physics has grown, and today, almost a hundred years later, we do not need to hold on to this mental bridge anymore. At the dawn of quantum mechanics the fact that a particle cannot demonstrate its location and velocity at the same time was a shock. Today, we just accept it as a fact: yes, a quantum object cannot demonstrate (note: I am not saying “have” – that is a different conversation, about possible interpretations of quantum mechanics) its location and velocity at the same time. The real “mystery” is why macroscopic objects, which are made of quantum objects, do demonstrate their location and velocity at the same time; how does that ability of the whole come from an inability of its parts? About a hundred years ago, when physicists would say something like “an incomplete description”, “incompatible variables”, “complementary”, they simply meant “different from classical”. Page 59 “For any entangled system, Einstein’s choice applied: either the system is nonlocal, or quantum physics can’t fully describe all the features of that system”. There is a third choice.
The parts of a system interact via a physical interaction of some sort which has a speed high enough to explain the behavior of the system – assuming the experiment is feasible at least in principle. The “high enough speed” condition may include interaction via agents which travel above the speed of light. Page 100 “How do … the photons … know you’re watching them at all?” (in a double-slit experiment). Answer – because “watching” means having photons interacting with a device which has one state in the absence of a photon and changes its state in the presence of a photon, and that interaction changes the photon as well. Placing a detector by each slit makes it necessary to include those detectors in the mathematical description of the experiment. The much more intriguing question is how does a photon “know” – after traveling through one of the slits (and we don’t know which) – where to hit a screen, or more importantly, where NOT to hit it? It seems like a photon “knows” that well before reaching a screen. That a photon “knows” must mean that there is an interaction between a photon and the environment which affects its motion toward a screen. But the Schrödinger equation gives NO information about such an interaction. Here is where Bohm’s theory steps in. Page 124 “Everett … insisted that a single universal wave function was all there was”. The idea of the existence of a single universal wave function for the whole existing universe is no different from the idea of a single universal Lagrangian for the whole world. It should be natural to every physicist who believes that our understanding of the universe should reflect the existence of the universe. However, the “many-worlds” idea is not logically connected with the idea of a single universal wave function; these two ideas do not demand each other. The term “many-worlds” implies the existence of many different worlds – at the same time at the same place (however one may see it). However, the passage (page 126) “universal wave function splits into more and more noninteracting parts” shows that those many worlds just represent different parts of the whole world, parts which exist at the same time at different locations. This picture is no different from any classical view of the world. The notion that every single event in the universe creates new universes which correspond to all possible outcomes of the event, and that an observer in each universe observes his own outcome, may be seen as an innovation, but it has nothing to do with science, because it does not help to make predictions, does not lead to new insights, and, instead of making things easier and clearer, makes them harder. This is a situation where the treatment is worse than the disease. Page 145 Bell’s quote “The great von Neumann … made assumptions in his proof that were entirely unwarranted”. Many thought experiments about entanglement make the same mistake. The human mind can imagine things which may seem natural but are physically “unwarranted”. It is not enough just to say “let’s assume these particles are entangled”; there has to be a specific physical mechanism in place for that to happen. If that specific mechanism of entanglement does not exist, the whole thought experiment makes no sense. Page 149 “Bell used … Bohm’s version of EPR involving photons with entangled polarization. … When a photon hits a polarizer, it either passes through or gets blocked” This is an example of a very commonly used interaction between a photon and an optical device (a polarizer, a mirror, a lens, etc.).
And it is an example of a very common misunderstanding of the physical phenomenon happening during this interaction. Every author bases his or her logic on the options of what may happen with a photon during this interaction; for example, a photon may be reflected, deflected, transmitted, blocked. And then the same photon keeps traveling (and something else happens to it). The fact of the matter is that the photon traveling away from a device is simply not the same photon which was approaching the device. When a photon starts interacting with a device, it collides with an atom inside the device (at least one); it most probably gets absorbed, then – after some microscopic time interval – a new photon is emitted, which may again be absorbed, emitted, absorbed, emitted, etc., and such a process eventually leads to a photon – a new one! – leaving the device. Any conclusion on what property that final photon has is probabilistic and has to be derived based on quantum electrodynamics (in general). Until this description is provided, any conclusions on the results of an experiment involving a photon-device interaction may be plausible, but not necessarily definite. A polarization axis (a transmission axis) of a polarizer is a macroscopic property of a device. When one photon encounters a polarizer, it encounters one atom or molecule. How would a photon “know” the direction of a transmission axis when it meets only one atom? That atom absorbs the photon, emits a new one, etc. The final result is probabilistic. Hence, when a single photon interacts with a polarizer, there is always a non-zero probability for a new photon to be emitted by the polarizer on the other side (what we call “passing through”). The phrase “a photon is polarized perpendicularly to the transmission axis” simply makes no sense. Hence, the statement that (page 150) “the two [entangled] photons will always pass through together or be blocked together” is just wrong. Even if one polarizer completely absorbs one photon, there is a non-zero probability to see a photon on the other side of the second polarizer. And that ruins the whole idea of the experiment – of any experiment with entangled photons (and also of the example with the casino).

Page 153

“That suggests a need for a radical revision of our conception of space and time, far beyond Einstein’s relativity.”

Bell’s theorem may have pointed in that direction, but today “a need for a radical revision” is nothing but obvious, because, clearly, that seems the only way toward quantum gravity – nothing else has been working.

Page 198

“Further work on the subject would extinguish his academic career.”

The book provides many insights into the world of science, but it also provides many insights into the world of scientists. Those two worlds are not identical. The world of scientists is actually not much different from the world of actors, or politicians. “You are wrong” does not mean “you made a logical mistake here and there” (as it should in the world of science), but “you think differently from me, and that is wrong” (as it often is in the world of scientists). And if you are not a member of “a pack”, you have a slim chance of finding a good position.

Page 231

“What causes the collapse of the system-apparatus-environment combined wave function?”

The answer is – the instability of immeasurable states.

Page 245

“The idea that the universe as a whole was a suitable subject for scientific investigation was difficult for some physicists to swallow.”
A good example to demonstrate the difference between one who is paid for doing something in the field called “physics” and a physicist (just as not everyone who has the job title of “a teacher” is actually a teacher).

Page 291

“But how can the photon “decide” whether to travel down just one path after it’s already passed through the first beam splitter?”

The answer is (again) – the photon does not need to “decide” anything. That photon disappears, being absorbed by the material of the beam splitter. Gone. The rest of the process does not include the original photon anymore.

Appendix I

After writing Part I, but before writing Part II, I also wrote two more pieces on the matter, which provide some additional points of view, including on probability, entanglement, “many-worlds”, philosophy, a “delayed choice experiment”, and more:

The Core Assumption of Every Known Single-Photon Experiment Is Wrong

I also have short pieces on a scientific method:

Three old pieces on physics:

Appendix II

The mission of a scientist, as an agent of that human practice, is discovering the truth about the universe and representing it in a testable form (e.g. verifiable, or falsifiable). When a faculty member tells students "Quantum Mechanics is a complete theory, and it means this ..." he or she is simply lying – hence, he or she stops being a scientist. The truth (the fact) is that there are (exist, whether one likes it or not) different views on the state of Quantum Mechanics, and denying that fact is not a scientific action. The mere fact that someone is involved in scientific research does not automatically make that one a scientist.

Appendix III

So many people and so much energy have been focused on a photon traveling through two slits, or two entangled photons or electrons, etc., that no one asks why waves on trillions of strongly interacting atoms behave in a way similar to the behavior of weakly interacting atoms in a dilute gas. A macroscopic number of strongly interacting microscopic particles does not follow the laws of a macroscopic world. Instead, there is a trick, a recipe – called “quantization” – which works like a charm. Why? The recipe works; what else do you need?
Licentiate thesis 2009-004

Numerical Methods for Quantum Molecular Dynamics

Katharina Kormann

9 October 2009

The time-dependent Schrödinger equation models the quantum nature of molecular processes. Numerical simulations of these models help in understanding and predicting the outcome of chemical reactions. In this thesis, several numerical algorithms for evolving the Schrödinger equation with an explicitly time-dependent Hamiltonian are studied and their performance is compared for the example of a pump-probe and an interference experiment for the rubidium diatom. For the important application of interaction dynamics between a molecule and a time-dependent field, an efficient fourth-order Magnus-Lanczos propagator is derived. Error growth in the equation is analyzed by means of a posteriori error estimation theory, and the self-adjointness of the Hamiltonian is exploited to yield a low-cost global error estimate for numerical time evolution. Based on this theory, an h,p-adaptive Magnus-Lanczos propagator is developed that is capable of controlling the global error. Numerical experiments for various model systems (including a three-dimensional model and a dissociative configuration) show that the error estimate is effective and that the number of time steps needed to meet a certain accuracy is reduced due to adaptivity.

Moreover, the thesis proposes an efficient numerical optimization framework for the design of femtosecond laser pulses with the aim of manipulating chemical reactions. This task can be formulated as an optimal control problem with the electric field of the laser being the control variable. In the algorithm described here, the electric field is Fourier transformed and it is optimized over the Fourier coefficients. Then, the frequency band is narrowed, which facilitates the application of a quasi-Newton method. Furthermore, the restrictions on the frequency band ensure that the optimized pulse can be realized by the experimental equipment. A numerical comparison shows that the new method can outperform the Krotov method, which is a standard scheme in this field.
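The workhorse behind a Lanczos propagator is a Krylov-subspace approximation of the matrix exponential. As a rough illustration of that core idea only (this is not the thesis's fourth-order Magnus-Lanczos scheme, which additionally averages the explicit time dependence of the Hamiltonian; the function name and the subspace size m are illustrative assumptions), a single Krylov time step psi(t + dt) ≈ exp(-i*H*dt) psi(t) for a frozen Hermitian Hamiltonian matrix H might look like this in Python:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def lanczos_expm_step(H, psi, dt, m=20):
    """One Krylov time step psi -> exp(-1j*H*dt) @ psi (hbar = 1).

    H   : Hermitian (n, n) ndarray (frozen Hamiltonian; Magnus corrections
          for explicit time dependence are deliberately omitted here)
    psi : state vector, shape (n,)
    m   : Krylov subspace dimension (no reorthogonalization; fine for small m)
    """
    n = psi.size
    V = np.zeros((n, m), dtype=complex)   # orthonormal Krylov basis vectors
    alpha = np.zeros(m)                   # diagonal of the tridiagonal T
    beta = np.zeros(m - 1)                # off-diagonal of T

    norm0 = np.linalg.norm(psi)
    V[:, 0] = psi / norm0
    w = H @ V[:, 0]
    alpha[0] = np.real(np.vdot(V[:, 0], w))
    w = w - alpha[0] * V[:, 0]
    for j in range(1, m):
        beta[j - 1] = np.linalg.norm(w)
        if beta[j - 1] < 1e-14:           # invariant subspace found early
            V, alpha, beta = V[:, :j], alpha[:j], beta[:j - 1]
            break
        V[:, j] = w / beta[j - 1]
        w = H @ V[:, j] - beta[j - 1] * V[:, j - 1]
        alpha[j] = np.real(np.vdot(V[:, j], w))
        w = w - alpha[j] * V[:, j]

    # exp(-i*dt*T) applied to the first unit vector, computed from the
    # eigendecomposition of the small tridiagonal matrix T
    evals, evecs = eigh_tridiagonal(alpha, beta)
    e1 = np.zeros(len(alpha))
    e1[0] = 1.0
    small = evecs @ (np.exp(-1j * dt * evals) * (evecs.T @ e1))
    return norm0 * (V @ small)
```

An h,p-adaptive scheme in the spirit of the thesis would then vary both the step size dt and the Krylov dimension m against a global error estimate.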
Thursday, June 4, 2009

The Schrödinger Equation - Corrections

In my last post, I claimed:

Additionally, we can extend from here that any quantum operator is written in terms of its classical counterpart by $\hat{\Omega} = \Omega(\hat{x}, \hat{p})$.

Peeter Joot correctly pointed out that this result does not follow from the argument involving the Hamiltonian. While it is true that any arbitrary unitary transformation, $\hat{U}(\epsilon)$, can be written as $\hat{U}(\epsilon) = e^{i\epsilon\hat{G}}$, where $\hat{G}$ is a Hermitian operator, the relationship between a classical generator and its quantum counterpart is not as straightforward as I claimed. In reality, we can only relate the classical Poisson brackets to the quantum mechanical commutators, and we must work from there. Perhaps I will discuss this further in a later post. In any case, though, the derivation of the Schrödinger equation only makes use of the relationship between the classical and quantum mechanical Hamiltonians, so the remainder of the derivation still holds. I am leaving the original post up as reference, but the corrected, restructured version (with some additional, although slight, notation changes) is below.

A brief walk through classical mechanics

Say we have a function $f(x)$ and we want to translate it in space to a point $x + a$, where $a$ need not be small. To do this, we'll find a ``space translation'' operator which, when applied to $f(x)$, gives $f(x+a)$. That is,

$\hat{T}(a)\, f(x) = f(x+a) \qquad (1)$

We'll expand $f(x+a)$ in a Taylor series:

$f(x+a) = \sum_{n=0}^{\infty} \frac{a^n}{n!} \frac{d^n f}{dx^n} \qquad (2)$

which can be simplified using the series expansion of the exponential¹ to

$f(x+a) = e^{a\frac{d}{dx}} f(x) \qquad (3)$

from which we can conclude that

$\hat{T}(a) = e^{a\frac{d}{dx}} \qquad (4)$

If you do a similar thing with rotations around the $z$-axis, you'll find that the rotation operator is

$\hat{R}(\phi) = e^{\phi\,\ell_z} \qquad (5)$

where $\ell_z$ is the $z$-component of the angular momentum. Comparing (4) and (5), we see that both have an exponential with a parameter (distance or angle) multiplied by something ($\frac{d}{dx}$ or $\ell_z$). We'll call the something the ``generator of the transformation.'' So, the generator of space translation is $\frac{d}{dx}$ and the generator of rotation is $\ell_z$. So, we'll write an arbitrary transformation operator through a parameter $\epsilon$ as

$\hat{T}(\epsilon) = e^{\epsilon G} \qquad (6)$

where $G$ is the generator of this particular transformation.² See [1] for an example with Lorentz transformations.

From classical to quantum

Generalizing (6), we'll postulate that any arbitrary quantum mechanical (unitary) transformation operator through a parameter $\epsilon$ can be written as

$\hat{U}(\epsilon) = e^{\epsilon \hat{G}} \qquad (7)$

where $\hat{G}$ is the quantum mechanical version of the classical operator $G$. We'll call this the ``quantum mechanical generator of the transformation.'' If we have a way of relating a classical generator to a quantum mechanical one, then we have a way of finding a quantum mechanical transformation operator. For example, in classical dynamics, the time derivative of a quantity $\omega$ is given by the Poisson bracket:

$\frac{d\omega}{dt} = \{\omega, H\} \qquad (8)$

where $H$ is the classical Hamiltonian of the system and $\{\omega, H\}$ is shorthand for a messy equation.[2] In quantum mechanics this equation is replaced with

$\frac{d\hat{\omega}}{dt} = \frac{1}{i\hbar}\,[\hat{\omega}, \hat{H}] \qquad (9)$

where the square brackets signify a commutation relation and $\hat{H}$ is the quantum mechanical Hamiltonian.[3] This holds true for any quantity $\omega$, and $i\hbar$ is a number which commutes with everything, so we can argue that the quantum mechanical generator corresponding to the classical Hamiltonian is related to the Hamiltonian operator by

$\hat{G} = \frac{1}{i\hbar}\,\hat{H} \qquad (10)$

Time translation of a quantum state

Consider a quantum state at time $t_0$ described by the wavefunction $\psi(x, t_0)$. To see how the state changes with time, we want to find a ``time-translation'' operator $\hat{U}(t)$ which, when applied to the state $\psi(x, t_0)$, will give $\psi(x, t_0 + t)$. That is,

$\hat{U}(t)\,\psi(x, t_0) = \psi(x, t_0 + t) \qquad (11)$

From our previous discussion we know that if we know the classical generator of time translation we can write $\hat{U}(t)$ using (7).
Classically, the generator of time translations is the Hamiltonian![4] So we can write

$\hat{U}(t) = e^{-\frac{i}{\hbar}\hat{H}t} \qquad (12)$

where we've made the substitution from (10). Then (11) becomes

$e^{-\frac{i}{\hbar}\hat{H}t}\,\psi(x, t_0) = \psi(x, t_0 + t) \qquad (13)$

This holds true for any time translation, so we'll consider a small time translation $\Delta t$ and expand (13) using a Taylor expansion,³ dropping all quadratic and higher terms:

$\left(1 - \frac{i}{\hbar}\hat{H}\,\Delta t\right)\psi(x, t) = \psi(x, t + \Delta t) \qquad (14)$

Moving things around gives

$\hat{H}\,\psi(x, t) = i\hbar\,\frac{\psi(x, t + \Delta t) - \psi(x, t)}{\Delta t} \qquad (15)$

In the limit $\Delta t \to 0$ the right-hand side becomes a partial derivative, giving the Schrödinger equation

$\hat{H}\,\psi(x, t) = i\hbar\,\frac{\partial \psi(x, t)}{\partial t} \qquad (16)$

For a system with conserved total energy, the classical Hamiltonian is the total energy

$H = \frac{p^2}{2m} + V(x) \qquad (17)$

which, making the substitution for the quantum mechanical momentum

$\hat{p} = -i\hbar\,\frac{\partial}{\partial x} \qquad (18)$

and substituting into (16), gives the familiar differential equation form of the Schrödinger equation

$-\frac{\hbar^2}{2m}\frac{\partial^2 \psi}{\partial x^2} + V(x)\,\psi = i\hbar\,\frac{\partial \psi}{\partial t} \qquad (19)$

[2] L.D. Landau and E.M. Lifshitz. Mechanics. Pergamon Press, Oxford, UK.

[3] L.D. Landau and E.M. Lifshitz. Quantum Mechanics. Butterworth-Heinemann, Oxford, UK.

[4] H. Goldstein, C. Poole, and J. Safko. Classical Mechanics. Addison Wesley, San Francisco, CA, 3rd edition, 2002.

2 There are other ways to do this, differing by factors of $i$ in the definition of the generators and in the construction of the exponential, but I'm sticking with this one for now.

3 Kind of the reverse of how we got to this whole exponential notation in the first place...

1. Hi Eli,
With the clarification made to 'From Classical to Quantum', there's a new guess in this writeup that stands out as requiring motivation (also tough). Namely, your "making the substitution $\vec{p} = i \hbar \nabla$". In Liboff, this is also pulled out of a magic hat. The only good motivation that I've seen for it was in Bohm's QT book, where it follows by Fourier transforms to switch from a position to a momentum representation for the wave function. If anybody actually successfully studies from Liboff's text, I imagine they also wonder where this comes from ... perhaps a topic for another blog ;)

2. Peeter, I actually think of eq. (16) as the S.E. Each step afterwards assumes a particular representation of the equation. In this post I chose the more familiar position representation. As to why, in this representation, the momentum operator is the differential operator - that is a topic for another post...

3. Okay, that makes sense. It never occurred to me that there ought to also be a momentum representation of (18). I'll have to play around with that, but am guessing I have to dig up my mechanics book covering the Hamiltonian x,p coordinate representation first.

4. Hi Eli, I would like to thank you. I am a graduate student. I would like to ask a question: Can I learn the principles of Quantum Mechanics by myself? Thank you again.

5. Hi Anonymous, I think you can learn the principles of QM by yourself. If you've never learned any of it before, I'd recommend starting with Griffiths' book. Peeter has taught himself most of this stuff, so you could probably ask him for advice, too.

6. I think it is hard to answer a broad question like the one posed without some idea of the starting point. I have an undergrad in engineering which provided good math fundamentals, but only basic physics. I'd rank my physics knowledge as perhaps (?) second-year undergrad level, so study of basic QM is timely (for me). My engineering program did include a small half-semester course in QM basics, and this has helped as a base for further self-study. That course used the MIT book by French, which has a number of admirable features for an introductory text. I've personally learned a lot from Bohm's book (Dover), which is cheap, and perhaps a bit old fashioned.
It covers both motivational aspects and specifics with good depth. It has taken me a long time to work through that book (perhaps true of any QM text), and I am still not done. One resource that I particularly liked was Cathryn Carson's History 181B: Modern Physics, on iTunesU. She is an excellent and engaging lecturer, and builds up to quantum field theory in a descriptive and highly accessible fashion, amazingly without requiring any mathematics (it's an arts course, not a physics or engineering one). I found this gave me an excellent high-level picture of why quantum theory is worth studying and how it fits in with many other aspects of physics.

7. Hello *, I agree with Peeter that all this is a nice relationship between CM and QM, but methodically it has nothing to do with a derivation of the Schrödinger equation. One methodically correct ansatz is the question of how the mechanics of an oscillator looks if there are no turning points. This question is admissible within Euler's representation of CM, because the basic laws/principles do not require turning points. It cannot be answered within CM; hence, it transcends CM (cf. Gödel's theorem). Best wishes,
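A quick numerical check of the step from (13) to (16) may also help: for any finite-dimensional stand-in Hamiltonian, the first-order factor from (14) agrees with the exact propagator from (12) up to terms of order $\Delta t^2$, which is exactly what the limit $\Delta t \to 0$ exploits. A minimal sketch (my own illustration, not from the post; it assumes $\hbar = 1$ and an arbitrary 2x2 Hermitian matrix standing in for $\hat{H}$):

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.3], [0.3, -0.5]])   # any Hermitian "Hamiltonian"
psi = np.array([1.0, 0.0], dtype=complex)

for dt in [0.1, 0.01, 0.001]:
    exact = expm(-1j * H * dt / hbar) @ psi          # eq. (12) applied to psi
    euler = (np.eye(2) - 1j * H * dt / hbar) @ psi   # eq. (14), linear in dt
    print(dt, np.linalg.norm(exact - euler))         # error shrinks like dt**2
```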
Canonical quantum gravity

From Wikipedia, the free encyclopedia

In physics, canonical quantum gravity is an attempt to quantize the canonical formulation of general relativity (or canonical gravity). It is a Hamiltonian formulation of Einstein's general theory of relativity. The basic theory was outlined by Bryce DeWitt[1] in a seminal 1967 paper, based on earlier work by Peter G. Bergmann[2] using the so-called canonical quantization techniques for constrained Hamiltonian systems invented by Paul Dirac.[3] Dirac's approach allows the quantization of systems that include gauge symmetries using Hamiltonian techniques in a fixed gauge choice. Newer approaches based in part on the work of DeWitt and Dirac include the Hartle–Hawking state, Regge calculus, the Wheeler–DeWitt equation and loop quantum gravity.

Canonical quantization[edit]

In the Hamiltonian formulation of ordinary classical mechanics the Poisson bracket is an important concept. A "canonical coordinate system" consists of canonical position and momentum variables that satisfy canonical Poisson-bracket relations,

$\{q_i, p_j\} = \delta_{ij}$

where the Poisson bracket is given by

$\{f, g\} = \sum_{i=1}^{N} \left( \frac{\partial f}{\partial q_i}\frac{\partial g}{\partial p_i} - \frac{\partial f}{\partial p_i}\frac{\partial g}{\partial q_i} \right)$

for arbitrary phase space functions $f(q_i, p_i)$ and $g(q_i, p_i)$. With the use of Poisson brackets, Hamilton's equations can be rewritten as

$\dot{q}_i = \{q_i, H\}, \qquad \dot{p}_i = \{p_i, H\}.$

These equations describe a ``flow'' or orbit in phase space generated by the Hamiltonian $H$. Given any phase space function $F(q, p)$, we have

$\frac{d}{dt} F(q, p) = \{F, H\}.$

In canonical quantization the phase space variables are promoted to quantum operators on a Hilbert space and the Poisson bracket between phase space variables is replaced by the canonical commutation relation:

$[\hat{q}, \hat{p}] = i\hbar.$

In the so-called position representation this commutation relation is realized by the choice:

$\hat{q}\,\psi(q) = q\,\psi(q) \qquad \text{and} \qquad \hat{p}\,\psi(q) = -i\hbar\,\frac{\partial \psi(q)}{\partial q}.$

The dynamics are described by the Schrödinger equation:

$i\hbar\,\frac{\partial \psi}{\partial t} = \hat{H}\,\psi$

where $\hat{H}$ is the operator formed from the Hamiltonian $H(q, p)$ with the replacements $q \mapsto \hat{q}$ and $p \mapsto -i\hbar\,\frac{\partial}{\partial q}$. (A minimal numerical illustration of this Poisson-bracket-to-commutator replacement is sketched below.)

Canonical quantization with constraints[edit]

Canonical classical general relativity is an example of a fully constrained theory. In constrained theories there are different kinds of phase space: the unrestricted (also called kinematic) phase space, on which the constraint functions are defined, and the reduced phase space, on which the constraints have already been solved. For canonical quantization in general terms, phase space is replaced by an appropriate Hilbert space and phase space variables are to be promoted to quantum operators. In Dirac's approach to quantization the unrestricted phase space is replaced by the so-called kinematic Hilbert space and the constraint functions are replaced by constraint operators implemented on the kinematic Hilbert space; solutions are then searched for. These quantum constraint equations are the central equations of canonical quantum general relativity, at least in the Dirac approach, which is the approach usually taken.

In theories with constraints there is also the reduced phase space quantization, where the constraints are solved at the classical level and the phase space variables of the reduced phase space are then promoted to quantum operators; however, this approach was thought to be impossible in general relativity, as it seemed to be equivalent to finding a general solution to the classical field equations. However, with the fairly recent development of a systematic approximation scheme for calculating observables of general relativity (for the first time) by Bianca Dittrich, based on ideas introduced by Carlo Rovelli, a viable scheme for a reduced phase space quantization of gravity has been developed by Thomas Thiemann. However, it is not fully equivalent to the Dirac quantization, as the `clock variables' must be taken to be classical in the reduced phase space quantization, as opposed to the case in the Dirac quantization.
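The following sketch (not part of the original article; the basis size n is an illustrative choice) makes the canonical commutation relation concrete. In a truncated harmonic-oscillator basis, the matrices for $\hat{q}$ and $\hat{p}$ built from ladder operators reproduce $[\hat{q}, \hat{p}] = i\hbar\,\mathbb{1}$ everywhere except in the last basis state, an artifact of cutting off the infinite-dimensional Hilbert space:

```python
import numpy as np

hbar, n = 1.0, 8
# annihilation operator a in a truncated number basis |0>, ..., |n-1>
a = np.diag(np.sqrt(np.arange(1, n)), k=1)
q = np.sqrt(hbar / 2) * (a + a.T)        # position operator (m = omega = 1)
p = 1j * np.sqrt(hbar / 2) * (a.T - a)   # momentum operator

comm = q @ p - p @ q                     # should be i*hbar*identity
print(np.allclose(comm[:-1, :-1], 1j * hbar * np.eye(n)[:-1, :-1]))  # True
print(comm[-1, -1])                      # truncation artifact: -(n-1)*1j*hbar
```

The non-trivial last diagonal entry is a reminder that the canonical commutation relation has no faithful finite-dimensional representation; the trace of a commutator of finite matrices always vanishes, while the trace of $i\hbar\,\mathbb{1}$ does not.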
A common misunderstanding is that coordinate transformations are the gauge symmetries of general relativity, when actually the true gauge symmetries are diffeomorphisms as defined by a mathematician (see the Hole argument) – which are much more radical. The first class constraints of general relativity are the spatial diffeomorphism constraint and the Hamiltonian constraint (also known as the Wheeler–DeWitt equation), and they imprint the spatial and temporal diffeomorphism invariance of the theory, respectively. Imposing these constraints classically is basically a set of admissibility conditions on the initial data; the constraints also generate the `evolution' equations (really gauge transformations) via the Poisson bracket. Importantly, the Poisson bracket algebra between the constraints fully determines the classical theory – this is something that must in some way be reproduced in the semi-classical limit of canonical quantum gravity for it to be a viable theory of quantum gravity.

In Dirac's approach it turns out that the first class quantum constraints imposed on a wavefunction also generate gauge transformations. Thus the two-step process in the classical theory of solving the constraints $C_I = 0$ (equivalent to solving the admissibility conditions for the initial data) and looking for the gauge orbits (solving the `evolution' equations) is replaced by a one-step process in the quantum theory, namely looking for solutions $\Psi$ of the quantum equations $\hat{C}_I \Psi = 0$. This is because it obviously solves the constraint at the quantum level and it simultaneously looks for states that are gauge invariant, because $\hat{C}_I$ is the quantum generator of gauge transformations. At the classical level, solving the admissibility conditions and evolution equations is equivalent to solving all of Einstein's field equations; this underlines the central role of the quantum constraint equations in Dirac's approach to canonical quantum gravity.

Canonical quantization, Diffeomorphism invariance and Manifest Finiteness[edit]

A diffeomorphism can be thought of as simultaneously `dragging' the metric (gravitational field) and matter fields over the bare manifold while staying in the same coordinate system, and so diffeomorphisms are more radical than invariance under a mere coordinate transformation. This symmetry arises from the subtle requirement that the laws of general relativity cannot depend on any a-priori given space-time geometry. This diffeomorphism invariance has an important implication: canonical quantum gravity will be manifestly finite, as the ability to `drag' the metric function over the bare manifold means that small and large `distances' between abstractly defined coordinate points are gauge-equivalent! A more rigorous argument has been provided by Lee Smolin:

“A background independent operator must always be finite. This is because the regulator scale and the background metric are always introduced together in the regularization procedure. This is necessary, because the scale that the regularization parameter refers to must be described in terms of a background metric or coordinate chart introduced in the construction of the regulated operator. Because of this the dependence of the regulated operator on the cutoff, or regulator parameter, is related to its dependence on the background metric.
When one takes the limit of the regulator parameter going to zero one isolates the non-vanishing terms. If these have any dependence on the regulator parameter (which would be the case if the term is blowing up) then it must also have dependence on the background metric. Conversely, if the terms that are nonvanishing in the limit the regulator is removed have no dependence on the background metric, it must be finite.”

In fact, as mentioned below, Thomas Thiemann has explicitly demonstrated that loop quantum gravity (a well-developed version of canonical quantum gravity) is manifestly finite even in the presence of all forms of matter! So there is no need for renormalization and the elimination of infinities.

In perturbative quantum gravity (from which the non-renormalization arguments originate), as with any perturbative scheme, one makes the assumption that the unperturbed starting point is qualitatively the same as the true quantum state – so perturbative quantum gravity makes the physically unwarranted assumption that the true structure of quantum space-time can be approximated by a smooth classical (usually Minkowski) spacetime. Canonical quantum gravity, on the other hand, makes no such assumption and instead allows the theory itself to tell you, in principle, what the true structure of quantum space-time is. A long-held expectation is that in a theory of quantum geometry such as canonical quantum gravity, geometric quantities such as area and volume become quantum observables and take non-zero discrete values, providing a natural regulator which eliminates infinities from the theory, including those coming from matter contributions. This `quantization' of geometric observables is in fact realized in loop quantum gravity (LQG).

Canonical quantization in metric variables[edit]

The quantization is based on decomposing the metric tensor as follows,

$g_{\mu\nu}\,dx^{\mu}\,dx^{\nu} = (-N^2 + \beta_k \beta^k)\,dt^2 + 2\beta_k\,dx^k\,dt + \gamma_{ij}\,dx^i\,dx^j$

where the summation over repeated indices is implied, the index 0 denotes time $t = x^0$, Greek indices run over all values 0, ..., 3 and Latin indices run over spatial values 1, ..., 3. The function $N$ is called the lapse function and the functions $\beta_k$ are called the shift functions. The spatial indices are raised and lowered using the spatial metric $\gamma_{ij}$ and its inverse $\gamma^{ij}$: $\gamma_{ij}\gamma^{jk} = \delta_i^k$, $\beta^i = \gamma^{ij}\beta_j$ and $\gamma = \det \gamma_{ij}$, where $\delta$ is the Kronecker delta. Under this decomposition the Einstein–Hilbert Lagrangian becomes, up to total derivatives,

$L = \int d^3x\, N\sqrt{\gamma}\,\left( K_{ij}K^{ij} - K^2 + {}^{(3)}R \right)$

where ${}^{(3)}R$ is the spatial scalar curvature computed with respect to the Riemannian metric $\gamma_{ij}$ and $K_{ij}$ is the extrinsic curvature,

$K_{ij} = \frac{1}{2N}\left( \nabla_j \beta_i + \nabla_i \beta_j - \frac{\partial \gamma_{ij}}{\partial t} \right)$

where $\mathcal{L}$ denotes Lie-differentiation, $n$ is the unit normal to surfaces of constant $t$ and $\nabla_i$ denotes covariant differentiation with respect to the metric $\gamma_{ij}$. Note that $\gamma_{\mu\nu} = g_{\mu\nu} + n_{\mu}n_{\nu}$. DeWitt writes that the Lagrangian "has the classic form 'kinetic energy minus potential energy,' with the extrinsic curvature playing the role of kinetic energy and the negative of the intrinsic curvature that of potential energy." While this form of the Lagrangian is manifestly invariant under redefinition of the spatial coordinates, it makes general covariance opaque.

Since the lapse function and shift functions may be eliminated by a gauge transformation, they do not represent physical degrees of freedom. This is indicated in moving to the Hamiltonian formalism by the fact that their conjugate momenta, respectively $\pi$ and $\pi^i$, vanish identically (on shell and off shell). These are called primary constraints by Dirac. A popular choice of gauge, called synchronous gauge, is $N = 1$ and $\beta_i = 0$, although they can, in principle, be chosen to be any function of the coordinates.
In this case, the Hamiltonian takes the form

$H = \int d^3x\,\mathcal{H}, \qquad \mathcal{H} = \frac{1}{2\sqrt{\gamma}}\left( \gamma_{ik}\gamma_{jl} + \gamma_{il}\gamma_{jk} - \gamma_{ij}\gamma_{kl} \right)\pi^{ij}\pi^{kl} - \sqrt{\gamma}\,{}^{(3)}R$

where $\pi^{ij}$ is the momentum conjugate to $\gamma_{ij}$. Einstein's equations may be recovered by taking Poisson brackets with the Hamiltonian. Additional on-shell constraints, called secondary constraints by Dirac, arise from the consistency of the Poisson bracket algebra. These are $\mathcal{H} = 0$ and $\mathcal{H}_i = 0$. This is the theory which is being quantized in approaches to canonical quantum gravity.

It can be shown that six Einstein equations describing time evolution (really a gauge transformation) can be obtained by calculating the Poisson brackets of the three-metric and its conjugate momentum with a linear combination of the spatial diffeomorphism and Hamiltonian constraints. The vanishing of the constraints, giving the physical phase space, yields the four other Einstein equations. That is, we have:

Spatial diffeomorphism constraints, $\mathcal{H}_i(x) = 0$, of which there are an infinite number – one for each value of $x$ – can be smeared by the so-called shift functions $N^i(x)$ to give an equivalent set of smeared spatial diffeomorphism constraints,

$H(\vec{N}) = \int d^3x\, N^i(x)\,\mathcal{H}_i(x).$

These generate spatial diffeomorphisms along orbits defined by the shift function $N^i(x)$.

Hamiltonian constraints, $\mathcal{H}(x) = 0$, of which there are an infinite number, can be smeared by the so-called lapse functions $N(x)$ to give an equivalent set of smeared Hamiltonian constraints,

$H(N) = \int d^3x\, N(x)\,\mathcal{H}(x).$

As mentioned above, the Poisson bracket structure between the (smeared) constraints is important because it fully determines the classical theory and must be reproduced in the semi-classical limit of any theory of quantum gravity.

The Wheeler–DeWitt equation[edit]

See also: Hamiltonian constraint of LQG

The Wheeler–DeWitt equation (sometimes called the Hamiltonian constraint, sometimes the Einstein–Schrödinger equation) is rather central as it encodes the dynamics at the quantum level. It is analogous to Schrödinger's equation, except that, as the time coordinate $t$ is unphysical, a physical wavefunction can't depend on $t$, and hence Schrödinger's equation reduces to a constraint:

$\hat{H}\,\Psi = 0.$

Using metric variables leads to seemingly insurmountable mathematical difficulties when trying to promote the classical expression to a well-defined quantum operator, and as such decades went by without making progress via this approach. This problem was circumvented, and the formulation of a well-defined Wheeler–DeWitt equation was first accomplished, with the introduction of Ashtekar–Barbero variables and the loop representation, this well-defined operator having been formulated by Thomas Thiemann.[4] Before this development the Wheeler–DeWitt equation had only been formulated in symmetry-reduced models, such as quantum cosmology.

Canonical quantization in Ashtekar–Barbero variables and LQG[edit]

Many of the technical problems in canonical quantum gravity revolve around the constraints. Canonical general relativity was originally formulated in terms of metric variables, but there seemed to be insurmountable mathematical difficulties in promoting the constraints to quantum operators because of their highly non-linear dependence on the canonical variables. The equations were much simplified with the introduction of Ashtekar's new variables. Ashtekar variables describe canonical general relativity in terms of a new pair of canonical variables closer to those of gauge theories. In doing so it introduced an additional constraint, on top of the spatial diffeomorphism and Hamiltonian constraints: the Gauss gauge constraint.

The loop representation is a quantum hamiltonian representation of gauge theories in terms of loops.
The aim of the loop representation, in the context of Yang–Mills theories, is to avoid the redundancy introduced by Gauss gauge symmetries, allowing one to work directly in the space of Gauss-gauge-invariant states. The use of this representation arose naturally from the Ashtekar–Barbero representation, as it provides an exact non-perturbative description and also because the spatial diffeomorphism constraint is easily dealt with within this representation. Within the loop representation Thiemann has provided a well-defined canonical theory in the presence of all forms of matter and explicitly demonstrated it to be manifestly finite! So there is no need for renormalization. However, as the LQG approach is well suited to describe physics at the Planck scale, there are difficulties in making contact with familiar low-energy physics and establishing that it has the correct semi-classical limit.

The problem of time[edit]

All canonical theories of general relativity have to deal with the problem of time. In quantum gravity, the problem of time is a conceptual conflict between general relativity and quantum mechanics. In canonical general relativity, time is just another coordinate as a result of general covariance. In quantum field theories, especially in the Hamiltonian formulation, the formulation is split between three dimensions of space and one dimension of time. Roughly speaking, the problem of time is that there is none in general relativity. This is because in general relativity the Hamiltonian is a constraint that must vanish. However, in any canonical theory, the Hamiltonian generates time translations. Therefore, we arrive at the conclusion that "nothing moves" ("there is no time") in general relativity. Since "there is no time", the usual interpretation of quantum mechanics measurements at given moments of time breaks down. This problem of time is the broad banner for all interpretational problems of the formalism.

The problem of quantum cosmology[edit]

The problem of quantum cosmology is that the physical states that solve the constraints of canonical quantum gravity represent quantum states of the entire universe and as such exclude an outside observer; however, an outside observer is a crucial element in most interpretations of quantum mechanics.

References[edit]

1. ^ DeWitt, B. (1967). "Quantum Theory of Gravity. I. The Canonical Theory". Physical Review. 160 (5): 1113–1148. Bibcode:1967PhRv..160.1113D. doi:10.1103/PhysRev.160.1113.

2. ^ Bergmann, P. (1966). "Hamilton–Jacobi and Schrödinger Theory in Theories with First-Class Hamiltonian Constraints". Physical Review. 144 (4): 1078–1080. Bibcode:1966PhRv..144.1078B. doi:10.1103/PhysRev.144.1078.

3. ^ Dirac, P. A. M. (1958). "Generalized Hamiltonian Dynamics". Proceedings of the Royal Society of London A. 246 (1246): 326–332. Bibcode:1958RSPSA.246..326D. doi:10.1098/rspa.1958.0141. JSTOR 100496.

4. ^ Thiemann, T. (1996). "Anomaly-free formulation of non-perturbative, four-dimensional Lorentzian quantum gravity". Physics Letters B. 380 (3): 257–264.
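As a schematic, hedged illustration of Dirac's one-step prescription described above (solving $\hat{C}_I \Psi = 0$ picks out the gauge-invariant physical states), one can take a small matrix standing in for a constraint operator and compute its null space; physical states span the kernel. The toy matrix below is arbitrary, and, unlike the real constraints, it acts on a finite-dimensional space:

```python
import numpy as np

# a stand-in "constraint operator" on a 4-dimensional toy Hilbert space
C = np.array([[ 1.0, -1.0, 0.0, 0.0],
              [-1.0,  1.0, 0.0, 0.0],
              [ 0.0,  0.0, 2.0, 0.0],
              [ 0.0,  0.0, 0.0, 0.0]])

# physical states solve C @ psi = 0: extract the null space via SVD
U, s, Vh = np.linalg.svd(C)
n_null = np.count_nonzero(s < 1e-12)
# singular values come out in descending order, so the null vectors
# are the last rows of Vh
physical = Vh[len(s) - n_null:]
print(physical)          # rows span the kernel, e.g. (1,1,0,0)/sqrt(2), (0,0,0,1)
print(C @ physical.T)    # ~ 0: both states are annihilated by the constraint
```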
Wednesday, July 2, 2014

Carroll goes nuts with many worlds

After defending bad science philosophy, physicist Sean M. Carroll goes off the deep end with his own bad quantum philosophy, and posts Why the Many-Worlds Formulation of Quantum Mechanics Is Probably Correct:

There are other silly objections to EQM, of course. The most popular is probably the complaint that it’s not falsifiable. That truly makes no sense. It’s trivial to falsify EQM — just do an experiment that violates the Schrödinger equation or the principle of superposition, which are the only things the theory assumes. Witness a dynamical collapse, or find a hidden variable. Of course we don’t see the other worlds directly, but — in case we haven’t yet driven home the point loudly enough — those other worlds are not added on to the theory. They come out automatically if you believe in quantum mechanics.

This is nonsense. I don't want to keep picking on Carroll, but it seems that more and more physicists are reciting such nonsense in favor of MWI. (Carroll also calls it Everettian quantum mechanics, EQM.) Lumo explains how Carroll is wrong. First, many worlds surely don't follow from quantum mechanics.

The 2013 paper, On Quantum Theory, by Berthold-Georg Englert also explains:

Quantum theory had essentially taken its final shape by the end of the 1920s and, in the more than eighty years since then, has been spectacularly successful and reliable — there is no experimental fact, not a single one, that contradicts a quantum-theoretical prediction. Yet, there is a steady stream of publications that are motivated by alleged fundamental problems: We are told that quantum theory is ill-defined, that its interpretation is unclear, that it is nonlocal, that there is an unresolved “measurement problem,” and so forth. It may, therefore, be worth reviewing what quantum theory is and what it is about.

That is correct. Quantum mechanics, as it has been described in textbooks for decades, is a perfectly good theory. It is strikingly successful, and yet, from Einstein in the 1930s to many well-known physicists today, they act as if the theory is broken. They even say that realizing that quantum mechanics requires many worlds is like the Copernican revolution. This modern rejection of quantum mechanics is just as crazy as if modern physicists went around claiming that particles can go faster than light because they don't believe relativity. I wonder how they ever passed their PhD qualifying exams without understanding quantum mechanics.

Carroll is proof that a physicist can lose the capacity for scientific thinking if his brain is infected with lousy philosophy. The principal reason for rejecting MWI is that it postulates an infinity of unobservable worlds without any physical benefit. It adds no practical or conceptual advantages over textbook quantum mechanics, and it is unscientific in having supernatural beliefs. It even has disadvantages, because it makes probabilities nearly impossible to interpret. Its advocates claim that it cures some philosophical defect of quantum mechanics, but there is no such defect.

The quantum computing folks, like David Deutsch, love MWI because the extra universes are supposedly where the super-Turing computation takes place. No such computation has ever been observed, and the whole field is a big funding scam.

1. "[Quantum theory is] spectacularly successful and reliable"

And it's completely useless in the real world of chemistry. Quantum chemistry cannot even predict crystal structures of organic compounds.
Nobody understands the pile of approximations; it's a big mess of computer codes. At least the motivation for a "bigger" computer is warranted. Hey Lubos, let me know when your string theory crap is good for making a better battery.

2. "whole field is a big funding scam"

Well, I guess you haven't looked at Berthold-Georg Englert's publication list, which has mainly been on quantum information processing the past few years. Or that he is the Principal Investigator at the Centre for Quantum Technologies, whose mission is: "Our mission is to conduct interdisciplinary theoretical and experimental research in quantum theory and its application to information technologies. The discovery that quantum physics allows fundamentally new modes of information processing has required that classical theories of computation, information and cryptography be superseded by their quantum generalizations. These hold out the promise of faster computation and more secure communication than is possible classically."

So, what gives, Roger? Both Motl and Englert, your voices of sanity on quantum mechanics, either actively work in a field (quantum computing) that "is a big funding scam" or endorse its legitimacy. So you are arguing that they are sane once in a while?

3. I would argue that what makes one right or wrong, correct or incorrect, is not your name, your position, or your pedigree. It is sometimes possible for even a child to take note that "The emperor has no clothes on", when everyone else is missing the obvious for various reasons. I also understand that broken clocks are correct twice a day.

4. How about putting the QM posts in a logical order? I'd like to go through them in some logical sequence.

5. I suggest that you read my posts on counterfactuals. Click on the link, and read in chronological order, starting with the introduction.
Quasi-exact solution of sextic anharmonic oscillator using a quotient polynomial

Spiros Konstantogiannis

Among the one-dimensional, real and analytic polynomial potentials, the sextic anharmonic oscillator is the only one that can be quasi-exactly solved, if it is properly parametrized. In this work, we present a new method to quasi-exactly solve the sextic anharmonic oscillator and apply it to derive specific solutions. Our approach is based on the introduction of a quotient polynomial and can also be used to study the solvability of symmetrized (non-analytic) or complex PT-symmetric polynomial potentials, where it opens up new options.

Keywords: quasi-exactly solvable potentials; sextic anharmonic oscillator; quotient polynomial; energy-reflection symmetry
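For readers unfamiliar with quasi-exact solvability, a standard textbook example (distinct from the quotient-polynomial construction of the paper above) can be verified in a few lines: for the sextic potential $V(x) = x^6 - 3x^2$, the function $\psi(x) = e^{-x^4/4}$ is an exact, normalizable zero-energy eigenfunction, while the remaining spectrum is not known in closed form. A quick symbolic check in Python:

```python
import sympy as sp

x = sp.symbols('x', real=True)
psi = sp.exp(-x**4 / 4)          # candidate eigenfunction
V = x**6 - 3*x**2                # a quasi-exactly solvable sextic potential

# Schrodinger operator with hbar = 2m = 1: apply -psi'' + V*psi
residual = sp.simplify(-sp.diff(psi, x, 2) + V * psi)
print(residual)                  # prints 0, i.e. H psi = 0, so E = 0 exactly
```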
Human molecule

Human Molecules (1910)

Example of an early 20th-century style human molecule themed article, entitled "Human Molecules" (1910), by American philosopher Mary Mesny, in which she defines a person as an atom or molecule and outlines a simple human chemical bonding theory modeled on affinity bonding (valences) of atoms. [69]

See also: Human molecule (Wikipedia); Human molecule (banned)

In human chemistry, the human molecule is the atomic definition of a person. [1] The latest calculation (2007) of the molecular formula for a typical 70 kg (154 lb) person is a 26-element formula, in which the notation EN, e.g. E22, means ten raised to the power N. [2] A 22-element empirical formula for a human was first calculated in 2000 by American limnologists Robert Sterner and James Elser. [11]

A molecule, according to the 1649 coining of the term by French thinker Pierre Gassendi, is a structure of two or more connected atoms, and a person, according to functional mass composition data, is composed of twenty-six types of elements (atoms); subsequently the term 'human molecule', or its synonyms: molecular person (George Scott, 1985), human chemical (Thomas Dreier, 1910), etc., is the scientific name for the chemical definition of one human. In this sense, from the perspective of chemical reactions between people, as captured in the motto "love the chemical reaction", such as in a couple-forming reaction:

A + B → AB (combination reaction)

the reactants (A + B) and the product (AB) in the human chemical reaction are technically "molecules", no different than any other molecule in the universe. The union of two molecules, AB, in this example, would be termed a dihumanide molecule, i.e. two human molecules chemically bonded. There do exist, to note, many characteristic differences between complex, multi-element human molecules and other simpler molecules, such as H2O, one prominent difference being that there exists a metabolic effect, or atomic turnover rate, in the body of the human molecule. Synonyms for the term 'human molecule' include: chemical species, human particles, human element, social atoms, human atomism, etc., depending on the framework of study. The human structure is no exception.

Periodic table (elements of the human molecule)

Functional elements (highlighted), from hydrogen (smallest) to iodine (largest), in the human molecule, according to 2002-2007 research of engineer Libb Thims, as shown (hyperlinked) on the hmolscience periodic table. [1]

Elements: 26 atoms in the human body

There are 92 types of atoms naturally occurring in the volumetric region of the earth. Each type of atom is characterized by the number of protons in its nucleus, the number determining which element the atom is, a number which varies from one to one-hundred-and-eighteen. Hydrogen, symbol H, containing one proton, is the smallest type of atom. Helium, symbol He, containing two protons, is the next largest type of atom. A bound-state structure of atoms is what is called a molecule. The human being is one such bound-state structure. The number of elements said to be actively functional in the composition of the human varies from 22 to 28, depending on the source.

Etymology | 1789

The English term "human molecule" originated as the French term molécule humaine.
The earliest documented use of the term ‘molécules humaines’, discovered thus far, is found in the 1789 edition of the multi-volume treatise Philosophy of Nature by French philosopher Jean Sales, who uses the term 'human molecule' functionally, by stating: [46]

Jean Sales

In 1789, French polymath Jean Sales coined the term 'molécule humaine', or human molecule (English).

“We conclude that [there exists] a principle of the human body [which] comes from the great [process] [in which] so many millions of atoms of the earth become many millions of human molecules.”

This French origin has to do with the fact that the term ‘molécule’ itself originated in France, supposedly first used either in the circa 1620 works of French philosopher René Descartes, who is said to have used the term to mean a small mass, or in the 1649 work of French thinker Pierre Gassendi, who used the term molecule in the sense of the attachment of two or more atoms.

The first English usage of the term ‘human molecule’ seems to come from the 1855 English translation of French composer Hector Berlioz's 1854 book Evenings with the Orchestra, in the original French version of which Berlioz used the term ‘molécule humaine’ to refer to children in a choir. Prior to (and after) this usage by Berlioz, however, there seems to exist a large, yet undocumented, usage of the term molécule humaine in French publications, e.g. Alphonse Esquiros (1840), Yves Guyot (1903), etc.

Human gas particle (diagram)

1999 artistic rendition of the human particle (human atom or human atomism) view of people conceived as Daniel Bernoulli-style kinetic theory gas particles. [67]

The first use of the term human molecule in a semi-modern scientific sense is the 1869 "psychology of the human molecule" usage by French historian Hippolyte Taine, a usage later adopted by those including: German physician Ernst Gryzanowski (1875), American historian Henry Adams (1885), and French education theorist Max Leclere (1894). The usage by Adams would carry through to influence others, such as American sociologist Robert Nisbet; Nisbet, for instance, employed the term 'social molecule' to refer to the attachment of two or more 'human particles', as he called them, affixed together by a 'social bond', a subject about which he wrote a book (1970).

Sciences: physics, chemistry, thermodynamics

The sciences that study the human molecule can be divided into three general groups: human physics, human chemistry, and human thermodynamics. Human physics tends to concern itself with the forces that act on human molecules, oftentimes modeling the human molecule as a particle, i.e. a human atom, social atom, or human particle. Human chemistry tends to concern the application of the principles of chemistry, particularly chemical bonding, collision theory, activation energy, molecular orbital theory, etc., to the interactions of human molecules and the structures they form. Human thermodynamics tends to study boundaried sets, or "systems", of interactive human molecules, which constitute thermodynamic systems, i.e. working bodies, according to the laws and principles of thermodynamics.

Human molecule diagram

Human molecular formula diagram from the chapter "The Human Molecule" in Human Chemistry (2007) by Libb Thims.
Short history

The earliest views of what the "human being" is include French philosopher René Descartes' 1637 animal machine hypothesis (see: human machine), the human motor view in the 18th century, German polymath Johann von Goethe's view (see: Goethe's human chemistry) of people as a type of reactive chemical species, and English chemist Humphry Davy's 1813 point atom view of man. [6] To complicate matters, in 1869 Russian chemist Dmitri Mendeleyev had famously arranged the 66 elements known at the time into a periodic table, listed in order of atomic weights, in such a manner that their properties repeated in a series of periodic intervals. [3] Following this point in history, it was beginning to become apparent that the human being may be a type of molecule, with a molecular formula in relation to these elements. The first to state this explicitly was American physician George Carey, who in his 1919 book Chemistry of Human Life stated that "man's body is a chemical formula in operation." [5] The first calculations of the empirical molecular formula for the human were made independently in 2000 by American limnologists Robert Sterner and James Elser (22-element formula), in 2002 by American chemical engineer Libb Thims (26-element formula), and in 2005 by New Scientist magazine (12-element formula). [12] In modern atomic detail, according to the most up-to-date mass composition estimates, the human being is a twenty-six element molecule, as shown pictured.

Which molecule has free will, is alive, is moral, has a brain?

Retinal molecule (an animate molecule): the "forced" input of a single photon (a force carrier) causes the three-element retinal molecule to "move" into a straightened position; when the light is no longer present, the retinal molecule reverts back to the bent position.

Human molecule (baby): the "forced" input of billions of photons (force carriers) causes the twenty-six-element human molecule to "move" into a straightened upright position; when the light is no longer present (e.g. nighttime), the human molecule reverts back to its bent position (e.g. curled in sleep).

Free will

In discussions of the idea of the person as a "molecule", the topic of free will, i.e. the conception that a person exercises control over the choices made in life, often comes to the fore. Russian bioelectrochemist Octavian Ksenzhek tells us in 2007 that "people are the molecules of which an economy consists", but also clarifies, in the context of water molecules forming frost on glass, that: [31]

“Molecules have neither free will nor any will at all.”

Ksenzhek goes on to state that “all a molecule can do is repel elastically from other chaotically moving molecules and sometimes, very seldom, lose some of its degrees of freedom and freeze in a larger collective.” Certainly there is a difference between water molecules and human molecules, but, nevertheless, the concept of 'free will' becomes defunct when each is considered as a molecule, pure and simple. Many, however, will not admit to this. In 1952, English physicist C.G. Darwin argued that humans are molecules governed by the laws of thermodynamics, but also conjectured that 'human molecules' have free will owing to their 'unpredictability'. [4] This, of course, is incorrect. Nobody in the history of science has ever found a molecule in possession of a free will.
[26] To clarify, in modern human chemistry and human thermodynamics, a human being is defined as a molecule, i.e. a "human molecule", and systems of humans are defined as thermodynamic systems, governed by the laws of chemistry and physics. In this view, the conception of a molecule, human or otherwise, with a free will becomes an absurdity. The modern view, conversely, shows the concept of free will to be a defunct scientific theory, replaced by more updated views, such as induced movement, or more generally the view of human chemical reactions governed by the spontaneity criterion, activation energy, collision theory, free energy coupling between human chemical bonds, among other basic concepts of chemistry applied to human movements.

Walking molecule (human atomic stick figure)

Humorous depiction of a human-like 'walking molecule' from the 2009 NY Times article "Experiments Show That Molecules Can Walk" by Henry Fountain. [54]

Walking molecules

See main: walking molecule, molecular carrier, molecular spider, and molecular car

It is an often neglected fact that humans are molecules that walk, run, and sometimes fly, on or above a 'surface', which in a chemical-definitions sense can be defined as either substrate or catalyst, depending on the context of discussion, which varies with the subject mode: surface chemistry, surface physics, or surface thermodynamics. From this perspective, an intuitive way to better understand human behavior (movement and reactions) is to use the conception, or reality, that humans are 'walking molecules' on a surface and, using this perspective, to study the behaviors and operation of smaller nano-sized 'walking molecules'.

The first operational walking molecules were developed in 2005, when German-born American physical chemist Ludwig Bartels at the University of California, Riverside, designed a molecule, called 9,10-dithioanthracene (DTA), that can walk in a straight line on a flat surface, like a little person. Recent years have seen spin-off varieties, such as molecular cars (2005), molecular carriers (2007), and DNA-based, four-legged molecular spiders (2010), among others. One interesting recent design (2009) is a 21-atom, 2-legged, track-affixed walker that moves down the track, step by step, when its environment oscillates between basic and acidic. [55] Here we can extrapolate to understand how humans move differently depending on how their environment oscillates, with factors including temperature and pressure, as well as more abstract oscillations, such as terrorism, war, or famine. Findings from these various walkers include the fact that more fuel (or energy) is needed when walkers carry a load, that most molecular walkers need some help, in the form of chemicals, to keep them going, and that they tend to wander.

Walking molecule (two-legged)

A two-legged 21-atom walking molecule (red) on a track, on which it walks when its environment oscillates between basic and acidic. [55]

When confronted with the question of what the difference is between a walking 'human molecule' (person) and a nano-sized 'walking molecule' (such as DTA), the person new to this mode of logic (even hardened non-religious scientists) will bring up antiquated objections, such as: a human is different because he or she has a brain, has consciousness, has free will, can choose their actions, is alive, among other nonsensical objections.
The religious-type person will quickly bring up the 7,000-year-old theory that a human being 'has a soul' (or spirit), which is beyond the atomic definitions of science. These few examples highlight the precipice of the revolution in human thinking that must take place, in the years to follow, in order to bring universal acceptance to the logic that humans are molecules that chemically react together on a surface, driven by solar heat, a view which defines the science of human chemistry.

Hector Berlioz | Hippolyte Taine (young)

French composer Hector Berlioz used the term “human molecule” in 1854, albeit, it seems, rather metaphorically. French philosopher Hippolyte Taine independently used the 'human molecule', in a scientific sense, in 1869, and considered it the central subject of study for both the psychologist and the historian.

1854: Berlioz

On the heels of the earlier 1789 usage of the 'human molecule' conception by French philosopher Jean Sales, in 1854 French composer Hector Berlioz used the term “molécules humaines”, which was translated the following year (in English) as ‘human molecules’, albeit in a rather poetic or artistic way. Berlioz used the term rather superficially, referring to the filling up of the boys and girls in the multi-leveled amphitheater of St. Paul’s Cathedral as being similar to the phenomenon of crystallization, a phenomenon which he had previously viewed microscopically. He states: [38]

“The points of this crystal of human molecules, constantly directed from the circumference towards the center, was bi-colored—the dark blue of the little boys’ coats on the upper stages, and the white of the little girls’ frocks and caps occupying the lower ranks. Besides this, as the boys wore either a polished brass badge or silver medal, their movements caused the light reflected by these metal ornaments to flash and produce the effect of a thousand sparks kindling and dying out every minute upon the somber background of the picture.”

In other words, Berlioz seems to view the movement of the choir as a larger shimmering crystal of human molecules.

Timeline video on the human molecule, themed on the 2008 song “Human” by The Killers (March 2009).

1869: Taine

French historian Hippolyte Taine, independent of and contrary to the prior metaphorical use of the term human molecules by Berlioz, was the first to use the term in a scientific sense, to build an argument on this concept, and to have others adopt his usage. In 1869, in the preface to the book On Intelligence, Taine stated that ‘it is now admitted that the laws which rule formation, nutrition, locomotion, for bird or reptile, are but one example and application of more general laws which rule the formation, nutrition, locomotion, of every animal.’ He continues: ‘in the same way we begin to admit that the laws which rule the development of religious conceptions, literary creations, scientific discoveries, in a nation, are only an application and example of laws that rule this same development at every moment and with all men.’ In other terms, Taine states, ‘the historian studies psychology in its application, and the psychologist studies history in its general forms.’ It is on this logic that Taine builds. [16]

In sum, in the preface to his book, he states that ‘for the last fifteen years I have contributed to these special psychologies’. Moreover, ‘I now attempt a general psychology.’ He notes, however, that ‘to embrace this subject completely, this theory of the Intelligence (faculty of knowing) needs a theory of the will added to it.’
Taine's human molecular philosophy had a significant influence on American historian Henry Adams, who became acquainted with Taine's philosophy as early as 1873. Adams' associate, German physician Ernst Gryzanowski, also seems to have adopted Taine's human molecule conception, using it in his 1875 article "Comtism", discussed below.

In circa 1872, in an effort to make a science out of economics, French economist Léon Walras began to develop a theory of economic equilibrium in which he considered people to be "economic molecules"; students of this school of thought include French-Italian mathematical engineer Vilfredo Pareto and Polish sociologist Léon Winiarski, who each developed human molecular theories of their own, the latter using Rudolf Clausius as a basis.

1870-1903: Lausanne school - Walras | Pareto | Winiarski
In 1870, French economist Léon Walras became professor of political economics (and later chair) at the University of Lausanne; together with his protégé, French-Italian mathematical engineer Vilfredo Pareto, and their followers, most notably Polish economist Léon Winiarski, this school of thought came to be known as the ‘Lausanne school’. [52] Walras considered people to be "economic molecules" and aimed to formulate an economic equilibrium theory based on mathematics and science. On this logic, in the years to follow, Pareto began to define a person explicitly as a ‘human molecule’ and to further outline a sociological theory based on human molecular interactions. In his 1896 Course on Political Economics, Pareto specifically defines a social system as follows:

“Society is a system of human molecules in a complex mutual relationship.”

In the context of economic satisfaction, Pareto posits that the human molecule acts only in response to the force of ophelimity.

2006 photo by American photographer Pierre Rousseau entitled “The Constant Flow of Human Molecules”, with the subtitle: “in blind service to Kant's Categorical Imperative. The newest psycho-babble craze is to get happy by preservation of "good" behaviors in acceptance of and slavery to the machine.” [59]

This was outlined further in his 1916 Treatise on General Sociology, wherein his goal, as it has been argued, was to construct a system of sociology analogous in its essential features to the generalized chemical thermodynamics system as outlined in American mathematical physicist Willard Gibbs’ 1876 On the Equilibrium of Heterogeneous Substances.

A later protégé of this school of logic was Polish sociologist Léon Winiarski, who formulated the subject of "social mechanics", a course taught at the University of Geneva (1894-1900), based on the dynamics of Italian-French mathematician Joseph Lagrange and the thermodynamics of German physicist Rudolf Clausius. To cite an excerpt of Winiarski's 1898 book Essay on Social Mechanics: [51]

“It is axiomatic to say that the fundamental forces soliciting the individual in society are egoism and altruism.
If we consider the individual as a molecule of the social aggregate, these two forces can be regarded as playing the same role that attraction and repulsion play in any material system of the universe.”

In another instance, Winiarski states:

“The society is therefore considered as an aggregate of individual molecules, each depending on the forces of desire, which together through their interaction tend the society towards maximum satisfaction.”

Winiarski was the leading thinker of the Lausanne school in regard to thermodynamic formulation.

American economist Henry Carey, described as the 'Newton of social science' for his use of physics and chemistry in explaining the social phenomenon of reactions between people, the 'molecules of society'.

1874: Henry Carey
In 1874, American economist Henry Carey outlined the subject of 'social science' as such: [61]

“Man, the molecule of society, is the subject of social science.”

Here, Carey alludes to the concept of the person as human molecule, and through his volumes of work in sociology he eventually came to be referred to as the ‘Newton of social science’ for his law of social gravitation, drawing extended commentary on the physics and chemistry of human molecules from those such as Austrian social economist Werner Stark. In introducing the topic of social heat between reactive human molecules in society, according to 1962 commentary by Stark, Carey states: [62]

“In the inorganic world, every act of combination is an act of motion. So it is in the social one. If it is true that there is but one system of laws for the government of all matter, then those which govern the movements of the various inorganic bodies should be the same with those by which is regulated the motion of society; and that such is the case can readily be shown.”

The terms 'organic world' (carbon-based) versus 'inorganic world' (non-carbon based), to note, are antiquated synonyms for the life (animate) versus non-life (inanimate) dichotomy; a play on the theory that living things are made of carbon. Next, in what seems to be a citation of the Berthelot-Thomsen principle, that the heat of a reaction is the true measure of affinity, Carey states: “to motion there must be heat, and the greater the latter, the more rapid will be the former.” This quotation, according to Stark, means that:

“In the physical universe, heat is engendered by friction. Consequently the case must be the same in the social world. The ‘particles’ must rub together here, as they do there. The rubbing of the human molecules, which produces warmth, light and forward movement, is the interchange of goods, services, and ideas.”

All in all, this is a very cogent and modern presentation of chemical affinity and reactions between human molecules, heat, and work, in the context of the economics and sociology of the exchange of goods and services; albeit without the modern conceptions of entropy, activation energy, free energy coupling, etc.
1875: Gryzanowski
In his 1875 article "Comtism", German physician Ernst Gryzanowski argues: [33]

“Civil law, commerce, political economy, and international ethics are all based on the assumption that the social body consists of such human molecules, and there is no reason why the methods of physical science should not be applied to the statics and dynamics of that society, the passions and rights of the individual man corresponding exactly to the chemical and physical forces inherent in the material molecule.”

This quote captures Gryzanowski's opinion of how the social physics of French sociologist Auguste Comte would play out in a modern sense. Gryzanowski seems to have adopted Taine's 1869 'human molecule philosophy', likely by coming across it through his association with The North American Review (wherein an article on Taine's human molecule philosophy was published two years prior) and through discussion with his friend Henry Adams, who had also adopted Taine's philosophy as his own.

American historian Henry Adams adopted Taine's 1869 'human molecular' philosophy, defining human chemistry as the study of the 'mutual attraction of equivalent human molecules' (1885); he also wrote two books using the human molecule perspective: one on Willard Gibbs' phase rule applied to the phases of humanity (1909) and another on the application of William Thomson's degradation version of the second law to collective sets of evolving human molecules, viewed historically (1910).

1885: Adams' human molecules
As early as 1873, American historian Henry Adams had come across Taine's 1869 'human molecule philosophy', when as the editor of The North American Review he accepted the article “Taine’s Philosophy”, by James Bixby, for publication, wherein Taine’s philosophy of history is presented as the applied psychology of human molecules. American biographer Ernest Samuels argues that Adams was significantly influenced by Taine’s suggestion that the object of the historian is to study and follow the transformations of human molecules, and to write history as the psychology of human molecules, a view Adams later adopted as his own. [27] To exemplify the influence of Taine on Adams, on 12 April 1885, while at an extended stay at work in Washington, Adams wrote to his wife: [28]

This logic is clearly seen in Adams’ 1910 A Letter to American Teachers of History, wherein Adams argues that history must be viewed as transformations of groups of human molecules subject to the second law of thermodynamics. The book presents a bivalent discussion of the paradoxical relationship between Lord Kelvin's 1852 take on the second law as a universal tendency towards the dissipation of energy and Charles Darwin's 1859 take on evolution as a universal tendency towards the elevation of mental energy. Specifically, Adams reasoned that "the laws of thermodynamics must embrace human history in its past as well as in its early phase" and that, from the point of view of a physicist, to explain the fall of potential, as embodied in the first and second law of thermodynamics, in relation to "Darwin's law of elevation", he must:

History, then, according to Adams, "would then become a record of successive phases of contraction, divided by periods of explosion, tending always towards an ultimate equilibrium in the form of a volume of human molecules of equal intensity, without coordination." In human chemistry terms, Adams was attempting to reconcile the second law, i.e.
that all natural systems are irreversible and tend to dissipate energy in their work cycles, by postulating that human systems compensate, or create new energy, by the act of the contraction of people in the formation of cities and world powers, similar to how the sun continuously releases energy by the gravitational contraction of mass. In modern terms, Adams' human molecule social contraction theory can be interpreted through the release of energy in the formation of new human chemical bonds in coupled coordination with the dissolution of old bonds.

1988 acrylic on canvas (27.5”x51.5”) painting, entitled “Human Molecule”, by Canadian aboriginal artist Norval Morrisseau, which seems to give the impression, possibly evolutionarily, that a human is a molecule, being part fish, part bird. [58]

1894: Leclerc
In 1894, French education theorist Max Leclerc, commenting on Taine’s 1875 text Growing Disagreement of School and Life (La Disconvenance Croissante de l’Ecole et de la Vie), wrote, according to one review, that “in our Lycées there is the same military discipline (as under Napoleon), the same aggregation of numbered human molecules, which the huge wheel, turned throughout France by the Minister’s pedal, grinds and reduces to human powder.” [35] This view comes from Leclerc’s 1894 book Education in the Middle Classes in England, wherein he discusses the views of Taine and comments that: [36]

“France has repeatedly changed its political constitution in this century but, through all vicissitudes, under many different governments, the regime founded by Napoleon Bonaparte has persisted and the mode of education has remained the same. Twenty years ago France sought to establish freedom with the Republic; she believes she has succeeded, and says she possesses freedom. But how does she prepare the new generations to use it? How have those born since 1870 been taught freedom? If the parliamentary monarchy of July had not the courage, if the Republic of 1848 had not the time, and if the Second Empire could not have the will to repudiate the dangerous legacy of Napoleon, has the Third Republic, which has the time and should have the courage and determination, undertaken what no one before it was able, willing, or dared to do? Does she understand the risk she runs in raising her citizens free while every means is combined to perpetuate the reign of the despotic one? Prefects and principals of the Republic today have no other conception of their role than once under the sword of Napoleon. In our schools there is the same military discipline, the same numbered piles of human molecules, which the huge wheel, turning in all of France under the pedal-stroke of the Minister, crushes and pulps to human powder.”

1898: Ramsay
In 1898, Scottish chemist William Ramsay used the ‘human molecule’ analogy in his discussion of German physicist Rudolf Clausius’ 1856 kinetic theory of gases, wherein he compared a body of gas to a football team of human molecules: [29]

“I find, in my own case, that it helps greatly to a clear understanding of a concept if a mental picture can be called up which will illustrate the concept, if even imperfectly. Some such picture may be formed by thinking of the motions of the players in a game of football. At some critical point in the game, the players are running, some this way, some that; one has picked up the ball and is running with it, followed by two or three others; while players from the opposite side are slanting towards him, intent upon a collision.
The backs are at rest, perhaps; but, on the approach of the ball to the goal, they quicken into activity, and the throng of human molecules is turned and pursues an opposite course. The failure of this analogy to represent what is believed to occur in a gas is that the players’ motion is directed and has purpose; that they do not move in straight lines, but in any curves which may suit their purpose; and that they do not, as two billiard-balls do, communicate their rates of motion to the other by collision. But, making such reservations, some idea may be gained of the encounters of molecules by the encounters in a football-field.”

American writer Thomas Dreier's 1948 book We Human Chemicals: the Knack of Getting Along with Everybody, written with consultation from Harvard chemist Gustavus Esselen, in which principles of chemistry are applied to facets of human interactions on the view that each person is a 'human chemical' constructed from elements of the periodic table. [49]

1910: Dreier’s human chemicals
With the construction of the periodic table in 1869 by Russian chemist Dmitri Mendeleyev, in which the then 66 known (and hypothesized) elements were listed by their atomic weights, in rows such that their properties repeated in a series of periodic intervals, some began to think of a human, invariably composed of these elements, by the name ‘human element’, ‘human chemical’, or ‘human chemical element’. One such person was American writer Thomas Dreier, who in 1910 published a 27-page pamphlet entitled “Human Chemicals”, extolling the view that each person is a ‘human chemical’, and that one might better come to understand human interactions if this view is used when considering the variants of human behavior, such as explosive behavior. [48] The following is an aggregate quote summarizing Dreier’s view on the matter from his 1948 book We Human Chemicals, an expanded version of his earlier article, written with consultation from Harvard chemist Gustavus Esselen: [49]

“In the world of science, the chemist works with 96 elements, 92 of the periodic table, plus four recently discovered. These elements can be combined to make anything and everything of a material nature. So it is with people. All of us are human chemicals. Some human chemicals can be mixed only with great difficulty; some explode if brought together; some excite each other beneficially; others are inert; others mix to form potent combinations; still others act as potent chemical catalysts, bringing about desirable changes in others when mixed with them, without themselves being changed.”

The cover of the 1948 book is pictured adjacent, where each ‘human chemical’ is shown on a sort of mock human periodic table; which, to note, is similar to Goethe’s human affinity table (1808), although the latter is more accurate in a chemical thermodynamic sense.

1911: Perris
In the 1911 book A Short History of War and Peace, English journalist George Perris, according to a 1913 review by American writer Alpheus Snow, argues that: [39]

“War to bring about peace seems paradoxical.
Yet it seems undoubtedly to be true, as Perris says, that war is often a process of evolution—an explosive process which occurs when the progressive movement of human molecules towards a reorganization making for equality of opportunity and a betterment of the law is unduly held back by the forces of standpatism and vested interests.”

This, however, seems to be an interpretation of Perris’ opening chapter ‘The Human Swarm’, wherein he states that:

“Modern thought points to nothing so certainly as the universality of change. We stand on a whirling ball, every atom and molecule of which is in perpetual movement. Individually, we are aware of being different men and women every day of our lives; the life of the world has undergone such a transformation even during our own generation that an unmoved character-basis of society is incomprehensible, a miracle in a realm of law—and what an evil miracle.”

English-born American naval engineer William Fairburn (1876-1947) not only viewed people as 'human chemicals' but also wrote the first book on human chemistry (1914).

1914: Fairburn's human chemical elements
In 1914, American naval engineer and industrial chemistry executive William Fairburn wrote Human Chemistry, the first attempt at a book on the subject of human chemistry, with the aim of helping the foreman and executive better understand his or her job, being that of facilitating the various reactions between people, considered as human chemical elements, in their daily work in the factory. The following is an aggregate opening quote summarizing the view followed by Fairburn in his booklet:

“All men are like chemical elements in a well-stocked laboratory, and the manager, foreman, or handler of men, in his daily work, may be considered as the chemist [whose] primary requirement [or] principle work is the analysis and synthesis of the reactions resulting from combinations of individuals.”

Fairburn goes on to state that there were 81 known chemical elements, each possessing different characteristics, and that similarly so too is each human chemical element different from his fellows in temperament and qualifications. Fairburn, to note, uses the terms ‘human chemical’ and ‘human chemical element’ interchangeably in his book, speculating on topics such as how entropy applies to reactive human chemicals.

French philosopher Pierre Teilhard wrote extensively on the use of atomic reductionism, defining people as human molecules, in attempts to reinterpret religion, evolution, and spirituality in modern scientific language, with focus on the evolution of the mind and the social collective in view of the future as described in his omega point theory.

1916: Teilhard's human molecules
One who wrote extensively, in a very dense conceptual style, on considering people as molecules having evolved over time from atoms, was French philosopher Pierre Teilhard. He began working on his theory, through various unpublished essays, in 1916, and continued until his death in 1955, after which his voluminous works were published posthumously.
The following is a representative quote from 1947, alluding to his concept of the noosphere, or global sphere of connected minds:

“The scope of each human molecule, in terms of movement, information, and influence, is becoming rapidly coextensive with the whole surface of the earth.”

The following quote gives a precursory outline of the very dense subject of the human chemical bond: [3]

“If the power of attraction between simple atoms is so great, what may we expect if similar bonds are contracted between human molecules?”

In sum, during the years 1916 to 1955, Teilhard outlines a theory of evolution from atom to man, the latter of which he considers as a complex molecule or "human molecule", a term that he uses throughout his writings. In his articles on Human Energy, a collection of essays on morality and love written between 1931 and 1939, for instance, he conceives of man as a “human molecule” (1936). Similarly, in his follow-up essay "Activation Energy", he theorizes that the concept of reaction activation energy, i.e. the barrier to transition, applies to human interactions. In his 1947 essay “The Formation of the Noosphere”, he outlines the global view that, due to the growing interconnectedness of human molecules, they are forming a layer of mind ("noo-") over the biosphere. In particular, he states “no one can deny that a network (a world network) of economic and psychic affiliations is being woven at ever increasing speed which envelops and constantly penetrates more deeply within each one of us. With every day that passes it becomes a little more impossible for us to act or think otherwise than collectively.” In other words, according to Teilhard, human molecules are forming a connective sheath or skin around the globe of the earth.

1919: Patten
In his 1919 address “The Message of the Biologist”, American zoologist William Patten attempted to outline how the modern person might go about deriving a science-based system of morality and a future governing constitution for a ‘molecular society’ of people, considered as ‘human social atoms’ (social atoms) or ‘human molecules’, based on the pure science teachings of chemistry, physics, and astronomy. [30]

American physician George Carey, first to state that a human being is actually a chemical formula (1919).

1919: George Carey | chemical formula in operation
A significant turning-point thinker in the history of the human molecule concept was American physician George Carey who, in his 1919 book The Chemistry of Human Life, made an attempt to integrate biochemistry with chemical affinity logic, along with knowledge of the active elements, into a synthesis of a chemistry of the human being. Although his work is marred a bit by religious themes and mineral-elixir types of healing remedies, he does outline a few gems. In a truncated opening quote, for instance, Carey points to the idea that: [5]

“The human organism is an intelligent entity that works under the guidance which man has designated as chemical affinity.”

Technically, to note, what Carey in full said is that the 'mineral salts' of the human organism are 'intelligent entities' that work 'under divine guidance', which man has designated as chemical affinity; the reworded quote above, however, is the correct modern statement. Carey then goes on to state that the human body is a storage battery that must be supplied with the proper elements (chemicals) to set up motion at a rate that will produce what we please to call a live body.
In commentary on how the laws of chemistry apply universally, he states: “there can be but one law of chemical operation in vegetable or animal organisms. When man understands and cooperates with that life chemistry, he will have solved the problem of physical existence.” The most interesting point in his book is his statement that:

“Man’s body is a chemical formula in operation.”

It would be eighty-one years before Sterner and Elser would actually make an attempt at calculating this formula. And the addendum ‘in operation’ is a huge topic not yet even begun to be simplified, as exemplified by the fact that human molecules have a 46 percent annual atomic turnover rate, whereas other molecules, such as H2O, do not seem to have such a turnover rate.

2008 poll "Are You a Giant Molecule?" conducted online by English physicist Jim Eadon (graph from Thims' 2008 The Human Molecule), which shows that about 57% of Internet users think they are a molecule. [17]

1942: Schumpeter
In 1942, Austrian economist Joseph Schumpeter speculated on how certain human molecules move up and down in social class over time: [34]

1949 diagram of ‘the social atom’, in the Jacob Moreno scheme, of an interviewed female (#3), by American sociologist Leslie Zeleny. [56]

1940s: Moreno | social atom theory (psychology)
In 1917, Romanian-born American psychologist Jacob Moreno began to develop a modified Freudian psychology, focused on spontaneity states in the dynamics of human movement, and by circa 1951 had begun to quantify his theory using constructs of social entropy, employing a sort of three-dimensional social atom theory, explained in terms of tele relationships, or distal and proximal bonds, to other social atoms. The way he uses the term social atom, along with other varieties, such as cultural atom or acquaintance atom, etc., is a bit ambiguous: [47]

“Social atom, operational definition: plot all the individuals a person chooses and those who choose him or her, all the individuals a person rejects and those who reject him or her; all the individuals who do not reciprocate either choices or rejections. This is the ‘raw’ material of a person’s social atom.”

An interesting facet of Moreno's approach is his attempt at applying the Bohr model of the atom, in which quantum energy inputs to the orbitals of atoms cause shifting of electrons, up or down, in orbital structure, to changes or shifts in human relationships, e.g. as in an infant child's orbit about his mother. A 1949 attempt at an illustration of Moreno’s social atom theory was undertaken by American sociologist Leslie Zeleny, who diagrammed the sociometric findings of the attraction-repulsion aspects of the relationships surrounding various people. A diagram of the social atom for female number three is shown adjacent. [56] The fact that the diagram is titled sociogram of 'the social atom of person number three', versus sociogram of 'a social atom', highlights that Moreno's theory is a more exotic extrapolation of the basic atomic model applied to human relationships, not easy to describe in a single definition. According to Zeleny, her study of the social atom of #3 shows #3 to be a much desired associate of her classmates, but one who is very ‘choosy’ about those with whom she will ‘pal’, as shown by the various dynamics of attraction and rejection, reciprocated or unreciprocated.

English physicist C.G.
Darwin (grandson of Charles Darwin) defined the science of 'human thermodynamics' as the study of systems of 'human molecules' (1952).

1952: Darwin | thermodynamics of human molecules
The conception of a set of people as a collection of "human molecules", who interact according to the laws of physics, particularly statistical thermodynamics, and whose history and future are determined by the laws of thermodynamics, was first stated by English physicist Charles Galton Darwin, the grandson of Charles Darwin, in his 1952 book The Next Million Years. [4] The following is an excerpt of the 1953 review by Time magazine: [68]

“In Darwin's view, the human molecules have one fundamental property that dominates all others: they tend to increase their numbers up to the absolute limit of their food supply.”

In his opening chapter, C.G. Darwin sets out to view society on the ideal gas model:

“We may, so to speak, reasonably hope to find the Boyle’s law which controls the behavior of those very complicated molecules, the members of the human race, and from this we should be able to predict something of man’s future. When I compare human beings to molecules, the reader may feel that this is a bad analogy, because unlike a molecule, a man has a free will, which makes his actions unpredictable. This is far less important than might appear at first sight, as is witnessed by the very high degree of regularity that is shown by such things as census returns. Thus, though the individual collisions of human molecules may be a little less predictable than those of gas molecules, census returns show that for a larger population the results average out with great accuracy. The internal principle [internal energy] then of the human molecules is human nature itself.”

In this book, Darwin goes on to define the new future science of 'human thermodynamics' as the thermodynamic study of systems of 'human molecules', a significant turning point in human thought.

Pages three through five from the 2002 book Ecological Stoichiometry, by American limnologists Robert Sterner and James Elser, showing the first published calculation of the molecular formula of a human being. [11]

1970: Nisbet | social molecules
Influenced by the work of Henry Adams and Brooks Adams, American sociologist Robert Nisbet, in his 1970 book The Social Bond, considers people to be ‘elementary human particles’, refers to the adhesion between two human particles as a ‘social bond’, and refers to the attachment of two or more human particles as a ‘social molecule’. Below is a representative quote:

“Just as modern chemistry concerns itself with what it calls the chemical bond, seeking the forces that make atoms stick together as molecules, so does sociology investigate the forces that enable biologically derived human beings to stick together in the ‘social molecules’ in which we actually find them from the moment, quite literally, of their conception.”

Beyond this, Nisbet spends considerable time discussing his conception of ‘social entropy’ and how this relates to human bonding.
[45]

1998: Müller | human molecules (lecture)
In the 1998 article "Human Societies: A Curious Application of Thermodynamics", Venezuelan chemical engineer Erich Müller defined humans to be analogous to molecules (human molecules), then quantified inter-human molecular love and hate in terms of basic thermodynamic pair bonds, and quantified social forces as a type of van der Waals dispersion force. [9] In 2006, Müller was interviewed by journalist Laura Gallagher of Reporter magazine, owing to the popularity of his invigorating thermodynamics lectures, in which he draws analogies between molecules and people. [10]

1998: Goleman
In 1998, American emotional intelligence theorist Daniel Goleman commented that: [37]

“Virtually everyone who has a superior is part of at least one vertical ‘couple’; every boss forms such a bond with each subordinate. Such vertical couples are a basic unit of organizational life, something akin to human molecules that interact to form the lattice work of relationship that is the organization.”

Here, Goleman seems to be making an attempt to discuss aspects of human chemical bonding.

1999: Prugh and Costanza
In 1999, American ecological economists Thomas Prugh and Robert Costanza state that: [32]

“The welfare of human society is best served by the view of people as ‘human molecules’ who, by pursuing their own interests through the market, inevitably promote the general good.”

American limnologists Robert Sterner and James Elser calculated a 22-element molecular formula for one average human being, based on actual mass composition measurements, in April 2000, as found in their 2002 textbook Ecological Stoichiometry.

2000: Sterner and Elser | empirical molecular formula
See main: Human molecular formula

The first calculation of the empirical molecular formula for the human being was made in April of 2000 by American limnologists Robert Sterner and James Elser. [15] Sterner and Elser published their results in the 2002 book Ecological Stoichiometry: the Biology of Elements from Molecules to the Biosphere. In outlining their subject, Sterner and Elser state: “The stoichiometric approach considers whole organisms as if they were single abstract molecules.” They were led to this by studying differences in carbon, nitrogen, and phosphorus levels in similar species. In their chapter one, as to the human being, they state that “from the information on the quantities of individual elements, we can calculate the stoichiometric formula for a living human being to be”, taking cobalt (Co) as unity, and state that the following “formula combines all compounds in a human being into a single abstract ‘molecule’”: [11]

H375,000,000 O132,000,000 C85,700,000 N6,430,000 Ca1,500,000 P1,020,000 S206,000 Na183,000 K177,000

This amounts to a 22-element human empirical molecular formula. They continue, “our main purpose in introducing this formula for the ‘human molecule’ is to stimulate you to begin to think about how every human being represents the coming together of atoms in proportions that are, if not constant, at least bounded and obeying some rules”.
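A minimal sketch of the arithmetic behind such a formula, assuming hypothetical mass-fraction data (not Sterner and Elser's actual measurements): each element's mass fraction is divided by its atomic weight to get a mole ratio, which is then normalized to the scarcest element.

```python
# Hedged sketch of an empirical-formula calculation of the Sterner-Elser
# type: mass fractions are converted to mole ratios and normalized so the
# least-abundant element (cobalt, as in their formula) gets subscript 1.
# The mass-percent values below are illustrative placeholders only.

ATOMIC_WEIGHT = {"H": 1.008, "O": 15.999, "C": 12.011, "N": 14.007, "Co": 58.933}

mass_percent = {"H": 10.0, "O": 65.0, "C": 18.0, "N": 3.0, "Co": 2e-6}  # placeholders

# Moles of each element per 100 g of body mass.
moles = {el: pct / ATOMIC_WEIGHT[el] for el, pct in mass_percent.items()}

# Normalize to cobalt = 1 and print subscripts in descending order.
unity = moles["Co"]
for el, n in sorted(moles.items(), key=lambda kv: -kv[1]):
    print(f"{el}{n / unity:,.0f}", end=" ")
print()
```

With real measured mass fractions for all 22 elements in place of the placeholders, this normalization procedure reproduces a formula of the kind quoted above.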
2001: Peachey | marriage and human molecules
In the 2001 book Leaving and Clinging, American author Paul Peachey devotes his first chapter, entitled “The Marital Bond as the Human Molecule”, to the development of the view that each person can be considered as an atom and that attachments of human atoms, in families and marriage, constitute a human molecule. To cite one example quote: [42]

“The question is whether the symbiosis of these polarities, i.e. the molecular (family) versus the atomic (individual) dimension of human existence, is a given in nature, or whether as humans we can replace this way of creating and sustaining the basic human molecule.”

In September 2002, American chemical engineer Libb Thims independently calculated a 26-element molecular formula for an average human being, based on actual mass composition measurements, as found in his 2007 textbook Human Chemistry.

Peachey, to note, seems to cull many of his ideas on human bonding and human molecules from the prior work of American sociologist Robert Nisbet.

2002: Thims | molecular formula (death)
In 1995, American chemical engineer Libb Thims began to study the spontaneity criterion (ΔG < 0), i.e. that a reaction (human reaction or chemical reaction) needs to show a negative change in the Gibbs free energy if it is to be spontaneous (energetically feasible or successful), in relation to the basic human reproduction reaction, in which a man M and a woman W conceive a new baby B:

M + W → B

In circa 2002, Thims began to meditate on the issue of what exactly these entities, M, W, and B, are, from a chemical, atomic, or fundamental particle point of view, which he had been aiming to quantify enthalpically and entropically for the previous seven years. In September of 2002, independent of Sterner and Elser, Thims calculated a 26-element molecular formula for the average human being. [12]

A molecular evolution table showing key structures in the synthesis of human beings (human molecules) over the last 13.7 billion years.

Thims included his calculation results in the 2002 manuscript Human Thermodynamics (Volume One), in the 2005 IoHT Molecular Evolution Table (online), and in the 2007 book Human Chemistry (Volume One). [12] Thims states, on page 190 of the 2002 manuscript, based on a mass percent table of the 26 elements found to have function in the human body, that at approximately 200,000 years ago "the universe had expanded/reacted enough to form a molecule made of these specific elements that we now define as homo sapien", as can be represented by the following "crude empirical formula for the molecular human", taking vanadium (V) as unity: [13]

H2.5E9 O9.7E8 C4.9E8 N4.7E7 P9.0E6 Ca8.9E6 K2.0E6 Na1.9E6 S1.6E6 Cl1.3E6 Mg3.0E5 Fe5.5E4 F5.4E4 Zn1.2E4 Si9.1E3 Cu1.2E3 B7.1E2 Cr98 Mn93 Ni87 Se65 Sn64 I60 Mo19 Co17 V

This amounts to a 26-element human empirical molecular formula. Thims concludes: "by describing the existence of a human being in this form we are by no means making attempts to degrade our existence, we are only trying to help elucidate our understanding of this existence."

The need or drive for Thims to calculate the molecular formula originated in a short chapter of the newly forming manuscript (2001-2004) Human Thermodynamics, called "What Happens to a Person When They Die" (a precursor to the science of cessation thermodynamics), which sought to define exactly, from a fundamental particle point of view, what a "human being" is. In other words, what fundamental particles constitute the totality of a person at the moment of death, in both bodily structure form and bond structure form, if these quantities are to be conserved according to the law of energy-matter conservation?
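A minimal sketch of the spontaneity criterion mentioned above, using the standard relation ΔG = ΔH − TΔS (which the text assumes but does not write out); the enthalpy and entropy values are illustrative placeholders, not measured human-reaction data.

```python
# Hedged sketch: evaluate dG = dH - T*dS for a generic combination reaction
# A + B -> AB and report whether it is spontaneous (dG < 0). The numbers
# are placeholders chosen only to show the temperature dependence.

def gibbs_change(dH, dS, T):
    """Gibbs free energy change (J/mol): enthalpy change dH (J/mol),
    entropy change dS (J/(mol*K)), absolute temperature T (K)."""
    return dH - T * dS

dH = -40_000.0  # J/mol, exothermic bond formation (placeholder)
dS = -100.0     # J/(mol*K), two free bodies ordering into one (placeholder)

for T in (298.0, 310.0, 500.0):
    dG = gibbs_change(dH, dS, T)
    verdict = "spontaneous" if dG < 0 else "not spontaneous"
    print(f"T = {T:5.1f} K: dG = {dG:+10.1f} J/mol -> {verdict}")
```

Note the design of the criterion: for these placeholder values the negative entropy term is outweighed by the negative enthalpy term at body temperature, so the combination is spontaneous, but it ceases to be at high enough temperature.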
Subsequently, from a chemical point of view, or a first law of thermodynamics point of view, the composition of a person technically is a twenty-six-element molecule combined with its substrate materials (personal wealth) and its consortium of interpersonal human chemical bonds. In the years to follow, using more accurate mass composition tables, refinements on this formula were made by Thims.

2002: Hodgson | Little Fun Book of Molecules Humans
In 2002, similar in theme to Greek philosopher Empedocles' human chemistry analogies of how people who like each other mix like water and wine, American writer John Hodgson published the 102-page The Little Fun Book of Molecules Humans, a short booklet containing ninety-eight chemistry-aphorism-style sayings intended to look at the similarities existing between humans and molecules, so as to, as stated in his preface, unearth clues to scientific information that might lead to new research. The second, 2010, edition of Hodgson's book, retitled as molecules humans, seems to have taken its cover design from American chemical engineer Libb Thims' 2008 book The Human Molecule (a 120-page historical overview of the concept of the human molecule); which in turn took its cue from Italian polymath Leonardo da Vinci's 1487 theory of the Vitruvian man, a geometrical model of a human where each person is considered as a tiny micro-universe; which in turn took its cues from Roman architect Marcus Vitruvius and his hints at correlations of the ideal human geometric proportions, found in his 25BC book On Architecture.

Original circa 1487 drawing of the Vitruvian man by Italian polymath Leonardo da Vinci (CB IQ=200), a man depicted as a theoretical geometric figure, representative of a tiny universe, analogous to the structure of the surrounding universe. 2002 first edition of American writer John Hodgson's chemical-aphorism-style book on the similarities between humans and molecules. [41] 2008 depiction of da Vinci's Vitruvian man defined as a 26-element molecule, shown on the cover of the 122-page book The Human Molecule by American chemical engineer Libb Thims. [2] 2010 depiction of da Vinci's Vitruvian man, representative of a human defined as similar, analogous, or aphorismic to a molecule, shown on the cover of the second edition of American writer John Hodgson's book, re-titled as molecules humans. [63]

Each page of Hodgson's book contains a different aphorism, of which below are shown a few representative examples: [41]

“Different molecules or humans behave differently having different reactions or behaviors to changing situations.”

“When molecules or humans mesh they have chemical or physical reaction and or reproduction.”

Online publication (2005) of the formula for one human molecule (with rotating break-dancer and caffeine stick-figure representations) by American chemical engineer Libb Thims. [66]

“With experiment we can better understand these molecules or humans like we never knew before.”

“Molecules and humans take in elements or food.”

“Molecules and humans engage in different behaviors and or sex.”

“Molecules and humans make or change common bonds.”

All in all, Hodgson’s book contains 98 of these sayings; although most, to note, are rather incoherent and negligibly connected to actual human chemistry.
Human molecular orbitals | transition state theory
In 1923, French physicist Louis de Broglie conceived the wave-particle duality theory of matter, which states that all bodies in the universe have both a wave and a particle nature. [64] This subject is bound up in the famous and puzzling double-slit experiment invented by Thomas Young in 1801. Molecules as large as 60-atom buckyballs have been shown to exhibit wave-particle duality. It is probable that human molecules also have not only a particle-like movement behavior, but also a wave-like behavior. This was first noted in Ernst Mach’s 1885 conception of “turning tendencies”. A modern version of this logic is human molecular orbital theory.

The study of the extrapolation of standard molecular orbital theory to the time-accelerated analysis of the dynamic structure, formation, and dissolution of chemical bonds between human molecules invokes the theoretical conception of 'human molecular orbitals', as defined by human molecular orbital theory. When human movement over the surface of the earth is viewed at a time-accelerated pace, such as viewing the total weekly, monthly, or yearly movements of one person, via for example GPS tracking, in a sped-up five-minute video clip, one begins to see an orbital picture of human movement. Tracking humans, considered as material points, over spans of months or years, compressed mathematically into short few-minute trajectory clips, leads to a view of human activity orbitals changing dynamically over time, as bonds are formed and broken (a minimal sketch of this binning procedure is given at the end of this section).

Shown below is a typical generic picture of the transition state model of the male-female reaction, where two human molecules, male Mx and female Fy, collide in time and begin to interact in their common "school orbital" S, of their various possible daily orbitals of work W, gym G, and so on; whereafter, by day 90, the two are orbiting in mutual friends' houses F1 and F2; whereafter, by day 365, owing to the energy-stabilizing effect of the ongoing reaction, the pair marry, thus combining orbitals into the formation of one nuclear family, with a single joint home H acting as the centralized nucleus of the dihumanide molecule. This is depicted below:

Day one: Two people, i.e. human molecules, Mx and Fy, meet in their school orbitals, and begin to associate. Day 90: The two human molecules develop more orbital overlap (stability) by hanging out at the houses of mutual friends. Day 365: The two human molecules fuse, by combining their previous separate nuclei into one (they move in together).

A molecular orbital, by technical definition, is a solution of the Schrödinger equation that describes the ninety-percent-probable location of an electron relative to the nuclei in a molecule, and so indicates the nature of any bond in which the electron is involved. In simple terms, it is understood that electrons (and molecules) act as both waves and particles, tending to have orbital motions in their trajectories.

Opening section from the 2005 New Scientist article "That's Life", in which a 12-element empirical formula for a human is given. [70]
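The sketch referenced above: a hedged illustration of how time-accelerated position tracks could be binned into an occupancy grid, so that the high-density cells (home, work, gym) stand out as the lobes of an activity "orbital". The anchor points, weights, and track data are synthetic placeholders, not real GPS measurements.

```python
# Hedged sketch: bin a year of hourly (x, y) position samples into a 1 km
# grid; the most-occupied cells approximate the lobes of the person's
# activity "orbital". All data below are synthetic placeholders.
from collections import Counter
import random

random.seed(0)

# Hypothetical anchor locations (km coordinates) and time-share weights.
anchors = {"home": (0.0, 0.0), "work": (5.0, 2.0), "gym": (1.0, 4.0)}
weights = {"home": 0.6, "work": 0.3, "gym": 0.1}

# Synthesize a year of hourly samples, jittered around the anchors.
samples = []
for _ in range(365 * 24):
    place = random.choices(list(weights), weights=list(weights.values()))[0]
    x, y = anchors[place]
    samples.append((x + random.gauss(0, 0.2), y + random.gauss(0, 0.2)))

# Occupancy histogram: the time-accelerated "orbital" density picture.
grid = Counter((round(x), round(y)) for x, y in samples)
for cell, hours in grid.most_common(5):
    print(f"grid cell {cell}: {hours} hours")
```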
Starting with the conservation of energy, which assumes that the total energy of a system is equal to the sum of its potential energy and kinetic energy, a descriptive time-dependent 'wave equation' can be derived which describes the movement or behavior, and thus the structure, of the nuclei and electrons that comprise an atom or molecule. This description is particularly intuitive when electrons are shared between two different atoms or molecules, creating a chemical bond, which actuates through the means of an orbital overlap effect. The translation of this logic to the bonding transition states of human interpersonal interactions provides a robust means of understanding human chemical bonding.

2005: New Scientist
In a 2005 article entitled "That's Life", New Scientist magazine gave a 12-element empirical molecular formula for a human, which they define as one's "chemical formula". [70]

2005: Molecular evolution tables
See main: Molecular evolution table

During the writing of the manuscripts for Human Thermodynamics (Volumes 1-3), Thims began to make an evolution table, putting the hydrogen atom in the top row and the human molecule in the bottom row, filling in known intermediates in the middle rows (monkey, shrew, fish, bacteria, etc.), and calculating approximate molecular formulas for each intermediate structure. This was first posted online in 2005 (IoHT Molecular Evolution Table). These tables, later published in various locations, have become a focal point of discussion and debate for many scientists in this field. A section of the latest version is shown below, with columns spanning from seconds after the big bang and 13.7 BYA (billion years ago), through 13.2 BYA, 4.4 BYA, 4.1 BYA, 3.9 BYA, and 45 MYA, to 150,000 years ago. In other words, it is a matter of filling in the blanks, so to speak, to connect the mechanism of synthesis of the human molecule, starting with hydrogen.

American physicist Mark Buchanan's 2007 book The Social Atom, in which he applies physics principles to the modeling of people in mass as collectives of social atoms. [43]

2007: Buchanan | The Social Atom (book)
In 2007, American physicist Mark Buchanan wrote the book The Social Atom, in which he attempts to model each human actor as an individual atom in the crowd of the masses. In addressing the matter as to how to view people atomically, Buchanan remains two-sided as to whether to use the particle (atom) view or the human molecular view:

The platform of the book is American economist Thomas Schelling’s 1971 paper “Dynamic Models of Segregation”, the very same paper which seems to have originated the now-famous ‘tipping point theory’, and which concluded to the effect that even if every trace of racism were to vanish tomorrow, something akin to a law of physics might still make the races separate, much like oil and water. [44] This view, to note, is similar to American law professor Richard Delgado’s 1990 law of racial thermodynamics. The subject is discussed further by others in integration and segregation thermodynamics. Some of Buchanan's conclusions, however, are rather incoherent, particularly in his effort to salvage the theory of free will:

“The laws of physics are beginning to provide a new picture of the human atom or [rather] social atom—and this is a picture that does not conflict with the existence of individual free will.
Just as atomic-level chaos gives rise to the clockwork precision of thermodynamics, so can free individuals come together into predictable patterns.”

This is similar to English physicist C.G. Darwin's 1952 comment that human molecules have free will owing to their unpredictability, which of course is incorrect, just as is Buchanan's view.

2008: Thims | The Human Molecule (book)
The first book on the subject of the "human molecule", focused on its significance and history, was the 2008 booklet The Human Molecule, 120 pages in length, by American chemical engineer Libb Thims, as previously pictured above, which steps through the views of the three dozen or so individuals to have used this concept in discussion or philosophy. [20] The following chemistry-aphorism-style quote from the 1999 novel Milton's Progress by Forbes Allan, for instance, is the opening quote to The Human Molecule: [25]

“People are like particles, they behave in groups as if they were molecules in a test-tube.”

Thims' The Human Molecule can be read at (linked below).

Human chemical reactions
See also: Human chemical reaction (history)

The dynamic, evolving attachment of human molecules together into structures, e.g. A≡B, such as marriage pairs, friendships, family units, etc., actuates according to the function of human chemical bonds. The rearrangement of bonds, the formation of new bonds, or the dissolution of old bonds defines the process of a human chemical reaction, such as shown below:

A + B → AB (combination reaction)
AB → A + B (dissolution reaction)

Indian-born American mechanical engineer Kalyan Annamalai and American mechanical engineer Carlos Silva’s 2011 engineering thermodynamics textbook definition of a human, formulaically, as a “26-element energy/heat driven dynamic atomic structure”, based on the work of American chemical engineer Libb Thims (2002). [71]

Human thermodynamics | Engineering thermodynamics
Thinkers including Henry Adams (1890s) and C.G. Darwin (1952) were the first to initiate the study of humans viewed conceptually as “molecules” in the context of thermodynamics—the latter specifically defining the science of human thermodynamics as the study of systems of human molecules. In human thermodynamics, a set of human molecules partitioned off by an "energetic boundary", i.e. a quantitative spatial demarcation, such as a town border, a social barrier, state lines, corporate boundaries, occupational orbitals, social circles, family boundaries, etc., comprises a closed thermodynamic system of working molecules, i.e. a working body in the words of Clausius, according to which first and second law energy balances apply in the production of system external work W due to the action of cyclical solar heat input Qin (a minimal numerical sketch of this balance is given at the end of this section).

In 2011, Indian-born American mechanical engineer Kalyan Annamalai and American mechanical engineer Carlos Silva, in their second edition Advanced Thermodynamics Engineering, citing Libb Thims (2002), in their “formula” section, give the following thermodynamic definition of a human: [71]

“Humans may be called a 26-element energy/heat driven dynamic atomic structure.”

Annamalai and Silva, of note, are the authors of the 2009 article “Entropy Generation and Human Aging”, on aging theory (or anti-aging) and thermodynamics. [72]
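The sketch referenced above: a hedged first-law balance for a cyclical working body, where the external work per cycle equals the heat taken in minus the heat rejected, W = Qin − Qout. The numbers are placeholders, not measurements of any actual social system.

```python
# Hedged sketch of a first-law balance over one cycle of a working body:
# W = Q_in - Q_out. Values are placeholders in arbitrary energy units.

def cycle_work(q_in, q_out):
    """External work produced in one cycle (first law, cyclic process)."""
    return q_in - q_out

q_in = 100.0   # heat received per cycle, e.g. daily solar input (placeholder)
q_out = 80.0   # heat dissipated back to the surroundings (placeholder)

w = cycle_work(q_in, q_out)
print(f"external work per cycle: {w}")
print(f"conversion ratio W/Q_in: {w / q_in:.0%}")
```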
Recent views
In 2006, after being introduced to the human molecule concept the previous year, Russian physical chemist Georgi Gladyshev began to incorporate the human molecule perspective into his theories, commenting that “the conclusions of hierarchical thermodynamics correspond excellently to conception of Libb Thims about the thermodynamics of human molecules”. [57] As of 2010, Gladyshev believes that the aging process of molecules can be explained using his theory of hierarchical thermodynamics.

In 2007, as mentioned previously, Russian bioelectrochemist Octavian Ksenzhek stated that: "The economy of mankind is a very large and extremely complicated system [and] people are the 'molecules' of which it consists." Ksenzhek goes on to use energy and entropy to study the ways in which the "various associations of people constitute its structural components." [23]

In 2010, Martin Gardiner, of the Annals of Improbable Research, the group that administers the Ig Nobel Prizes, which aim to spotlight research that makes people laugh and then think, ran a four-part, three-day article on the work of Libb Thims, entitled “I Am Not A Molecule”, subtitled 'Inside the IoHT', discussing topics such as Thims' 2008 book The Human Molecule, the Human Chemistry 101 video lectures on the human molecule, the Institute of Human Thermodynamics, and the Journal of Human Thermodynamics, among other topics. Gardiner considers the subject of the chemistry and thermodynamics of human molecules to be an emergent intellectual development.

An oft-quoted passage from the 2010 book Employees First, Customers Second, by Indian engineer and business executive Vineet Nayar, signifies the logic that employees are the molecular components of the mega-molecular structure of the corporation (corporate molecule), and that if each employee is instilled with the vision of an entrepreneurial attitude, the corporation will accelerate with a higher energy quotient. [60] Nayar spends considerable time discussing ideas on how hidden or latent energy exists in the employees of corporations.

Octillion | atoms in one person
See main: Number of atoms in

To give an idea as to the magnitude of the number of atoms in one human molecule, as compared to, for instance, the number of atoms in one water molecule (three) or the number of atoms comprising the earth (sexdecillion), the following table lists names of the common larger numbers. [53] The third column, Exp, shows the old-fashioned calculator shorthand for large numbers, in which E is short for exponent, in the sense that, for instance, E9 is short for 10^9. Exponent shorthand is useful in writing out molecular formulas for biological entities.
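Before the table, a minimal back-of-envelope sketch of why one person lands at the octillion (E27) mark in the table: estimate the atom count of a 70 kg body from rough mass fractions of its four most abundant elements, and print the result in the E-notation just described. The fractions are round illustrative figures, not precise composition data.

```python
# Hedged sketch: order-of-magnitude atom count for a 70 kg human body,
# using rough mass fractions for the four most abundant elements.
AVOGADRO = 6.022e23

# element: (approximate mass fraction of body, atomic weight in g/mol)
composition = {
    "O": (0.65, 16.0),
    "C": (0.18, 12.0),
    "H": (0.10, 1.0),
    "N": (0.03, 14.0),
}

body_mass_g = 70_000.0
atoms = sum(body_mass_g * frac / wt * AVOGADRO
            for frac, wt in composition.values())
print(f"approximate atom count: {atoms:.1E}")  # about 7E27, octillion scale
```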
Number | Name | Exp | Number of atoms (example)
10^2 | hundred | E2 |
10^3 | thousand | E3 |
10^6 | million | E6 |
10^9 | billion | E9 | ten bacteria molecules
10^12 | trillion | E12 |
10^15 | quadrillion | E15 | ten pre-aquatic worms
10^18 | quintillion | E18 |
10^21 | sextillion | E21 | one small fish
10^24 | septillion | E24 |
10^27 | octillion | E27 | one person (human molecule)
10^30 | nonillion | E30 |
10^33 | decillion | E33 |
10^36 | undecillion | E36 |
10^39 | duodecillion | E39 |
10^42 | tredecillion | E42 |
10^45 | quattuordecillion | E45 |
10^48 | quindecillion | E48 |
10^51 | sexdecillion | E51 | one earth molecule (the earth)
10^54 | septendecillion | E54 |
10^57 | octodecillion | E57 | one sun molecule (the sun)
10^60 | novemdecillion | E60 |
10^63 | vigintillion | E63 |
10^66 | unvigintillion | E66 | the Milky Way galaxy
10^69 | duovigintillion | E69 |
10^72 | tresvigintillion | E72 |
10^75 | quattuorvigintillion | E75 |
10^78 | quinquavigintillion | E78 |
10^81 | sesvigintillion | E81 | the observable universe
10^84 | septemvigintillion | E84 |

Using these larger number names in context, one would say, for instance, that one human molecule is comprised of an octillion atoms, of twenty-six types of 'active' elements.

"I am a molecule!" (Apr 2009) [9:32 min] | "I am not a molecule!" (Apr 2009) [6:10 min]

Objections to
Since the 1809 publication of Goethe's Elective Affinities, wherein the characters are said to mirror the activities and behaviors of the chemicals, there has been a never-ending stream of criticism regarding the chemical nature of the human being. [21] In 1810, for instance, Goethe's fellow author and neighbor Christoph Wieland sent a letter (which he suggested should be burned after it is read) to his close friend, German philologist and archeologist Karl Böttiger, stating that: [22]

"To all rational readers, the use of the chemical theory is nonsense and childish fooling around."

In modern terms, the debate still continues; where, according to recent Internet polls, about 57% of people agree that they are a giant molecule.
[17] Likewise, according to standard molecular evolution tables, it is visually obvious that humans are evolved molecules. In spite of these known perspectives, many maintain that humans are in some way different than molecules, particularly when it comes to choice and free will. In 1996, for instance, Austrian-born American theoretical physicist Fritjof Capra stated incorrectly that "human beings can choose whether and how to obey a social rule; molecules cannot choose whether or not they should interact." [18]

In 2005, American science philosopher and sociologist Steve Fuller, a noted intelligent design advocate, published his New Scientist article "I Am Not a Molecule", arguing against the atomic reductionism in sociology used in recent publications, most notably English physical chemist Philip Ball's 2004 book Critical Mass: How One Thing Leads to Another, American evolutionary biologist Jared Diamond’s 2005 book Collapse: How Societies Choose to Fail or Succeed, and Canadian-born American evolutionary psychologist Steven Pinker’s 2002 book The Blank Slate: the Modern Denial of Human Nature, all of which, according to Fuller, are "infuriating social scientists"; presumably himself most significantly. [24]

Likewise, in 2007, Canadian chemist Stephen Lower considered the statement "people are viewed as chemical species, or specifically human molecules, A or B, and processes such as marriage or divorce are viewed as chemical reactions between individuals..." to be crackpot, meaning something akin to an eccentric or lunatic notion, and listed it among a grouping of pseudoscience subjects. [19]

See also
Human chemical
Human chemical element
Human element
Human particle
People are not molecules
Social atom
Joseph Dewey

References
1. Thims, Libb. (2007). Human Chemistry (Volume One) (preview) (ch. 2: "The Human Molecule", pgs. 15-35). Morrisville, NC: LuLu.
2. (a) Thims, Libb. (2008). The Human Molecule (issuu) (preview) (Google Books) (docstoc). LuLu.
(b) Molecular Evolution Table - Institute of Human Thermodynamics.
5. Carey, George W. (1919). The Chemistry of Human Life. Los Angeles: The Chemistry of Life Co.
6. Rabinbach, A. (1990). The Human Motor: Energy, Fatigue, and the Origins of Modernity. Berkeley: University of California Press.
8. Thims, Libb. (2002). Human Thermodynamics (Volume One). Chicago: Institute of Human Thermodynamics.
11. Sterner, Robert W. and Elser, James J. (2002). Ecological Stoichiometry: the Biology of Elements from Molecules to the Biosphere (chapter one) (pgs. 2-3, 47, 135). Princeton: Princeton University Press.
12. (a) Thims, Libb. (2002). Human Thermodynamics (Volume One), Sept. Chicago: Institute of Human Thermodynamics.
(b) Note: Thims only became aware of Sterner's calculation on February 17, 2008, after doing a Google book search on the keywords "human molecule thermodynamics"; Thims then emailed Sterner within the hour (after which Sterner explained how and when he did his calculation).
Author. (2005). “That’s Life”, New Scientist, Dec 03.
13. The formula shown is the more accurate 2005 version (as found in the IoHT's molecular evolution table, ref. #2 above). The 2002 calculation was based on less accurate mass percent data sets, taking nickel as unit.
14.
14. Molecule (definition): "a molecule may be thought of either as a structure built of atoms bound together by chemical forces or as a structure in which two or more nuclei are maintained in some geometrical configuration by attractive forces from a surrounding swarm of negative electrons." Source: Licker, Mark D. (2002). McGraw-Hill Concise Encyclopedia of Chemistry. New York: McGraw-Hill.
16. (a) Taine, Hippolyte. (1870). De l'Intelligence (On Intelligence) (Parts I-II; pgs. xi-xii). London: L. Reeve and Co. (b) Sparks, Jared. (1873). The North American Review (section: Taine's philosophy, pg. 403, keyword "human molecule"). Vol. CXVII. Boston: James R. Osgood and Co.
17. Running poll: "Are You A Giant Molecule?" (by English physicist Jim Eadon), 2001-2008+.
18. Capra, Fritjof. (1996). The Web of Life: a New Scientific Understanding of Living Systems (pg. 212). New York: Anchor Books.
19. Lower, Stephen. (2007). "List of Flim-flam, Pseudoscience, and Nonsense", online listings.
22. Wieland, Christoph Martin. (1810). "Letter to Karl August Böttiger", July 16, Weimar. Quoted from Tantillo 2001, pgs. 9-10.
23. Ksenzhek, Octavian S. (2007). Money: Virtual Energy - Economy through the Prism of Thermodynamics (pg. 162). Universal Publishers.
24. Fuller, Steve. (2005). "I am not a molecule", New Scientist, Issue 2502, June 4.
25. Forbes, Allan. (1999). Milton's Progress (ch. 21). Rowanlea Grove Press.
26. Gardiner, Martin. (2010). "Inside the IoHT: I am not a molecule (parts 1, 2, 3, 4)", Improbable Research, Jun 04-06.
27. Samuels, Ernest. (1989). Henry Adams (human molecule, pg. 115). Harvard University Press.
28. Adams, Henry. (1885). "Letter to Wife", April 12; in: The Letters of Henry Adams: 1858-1868, Volume 1 (pg. xxviii). Harvard University Press, 1982.
29. Ramsay, William. (1898). "The Kinetic Theory of Gases and Some of its Consequences" (human molecules, pg. 685). The Contemporary Review, 74: 681-91.
30. (a) Patten, William. (1919). "The Message of the Biologist", address of the vice-president and chairman of Section F, Zoology, American Association for the Advancement of Science, St. Louis, Jan 31. (b) Patten, William. (1920). "The Message of the Biologist", Science, pgs. 93-101, Jan 30.
31. Ksenzhek, Octavian S. (2007). Money: Virtual Energy - Economy through the Prism of Thermodynamics (pgs. 162, 170). Universal Publishers.
32. Prugh, Thomas and Costanza, Robert. (1999). Natural Capital and Human Economic Survival (economic molecules, pg. 15; human molecules, pg. 17). CRC Press.
33. Gryzanowski, Ernst. (1875). "Comtism" (human molecules, social molecule, pg. 276). The North American Review, 120: 237-80, April.
34. Schumpeter, Joseph. (1942). Capitalism, Socialism, and Democracy (human molecules, pg. 204). Routledge.
35. Anon. (1894). "As Others See Us" (human molecules, pg. 217). Journal of Education, Apr 01.
36. Leclere, Max. (1894). L'Education des Classes Moyennes et Dirigeantes en Angleterre (The Education of the Middle and Ruling Classes in England) (molécules humaines, pg. 65). Paris: Armand Colin et Cie.
37. Goleman, Daniel. (1998). Working with Emotional Intelligence (human molecules, pg. 215). Random House.
38. (a) Berlioz, Hector. (1854). Les Soirées de l'Orchestre (Evenings with the Orchestra) (molécules humaines, pg. 259), édition entièrement revue et corrigée. (b) Novello, Sabilla. (1855). "Translation from Hector Berlioz's 'Soirées de l'orchestre'", The Musical Times, Jul 15.
39. Snow, Alpheus Henry. (1913). "Book Review: A Short History of War and Peace by G. H. Perris". The American Journal of International Law, 7: 427-29.
40. Perris, George. (1911). A Short History of War and Peace (molecule, pgs. 7-8). H. Holt and Co.
41. Hodgson, John. (2002). Little Fun Book of Molecules/Humans. 1st Books.
43. Buchanan, Mark. (2007). The Social Atom: Why the Rich Get Richer, Cheaters Get Caught, and Your Neighbor Usually Looks Like You (pgs. x-xi, 13). New York: Bloomsbury.
44. (a) Schelling, Thomas. (1971). "Dynamic Models of Segregation", Journal of Mathematical Sociology, 1: 143-86. (b) Tipping point (sociology) - Wikipedia.
45. Nisbet, Robert A. (1970). The Social Bond: an Introduction to the Study of Society (social molecule, pgs. 38, 45, etc.). New York: Alfred A. Knopf.
46. Sales, Jean. (1798). De la Philosophie de la Nature: ou Traité de morale pour le genre humain, tiré de la philosophie et fondé sur la nature (The Philosophy of Nature: Treatise on Human Moral Nature, from Philosophy and Nature), Volume 4 (molécules humaines, pg. 281). Publisher.
47. Cukier, Rosa. (2007). Words of Jacob Levi Moreno: a Glossary of the Terms Used by J. L. Moreno (terms: ambivalence of choice, pg. 42; atom, cultural atom, social atom, pgs. 47-51, 358; energy, pg. 141; social entropy, pgs. 88, 360; spontaneity, pg. 393; warming-up process, pgs. 495-503). Lulu.
49. Dreier, Thomas. (1948). We Human Chemicals: the Knack of Getting Along with Everybody. Updegraff Press.
50. Fairburn, William Armstrong. (1914). Human Chemistry. The Nation Valley Press, Inc.
51. Winiarski, Leon. (1967). Essais sur la Mécanique Sociale (molécules, pgs. 11-12, 34, 95, 163-64, 170, 195). Librairie Droz.
52. Garrouste, Pierre and Ioannides, Stavros. (2001). Evolution and Path Dependence in Economic Ideas (Lausanne school: Winiarski a member, pg. 184). Edward Elgar Publishing.
53. Names of larger numbers - Wikipedia.
54. Fountain, Henry. (2009). "Experiments Show that Molecules Can Walk, but Can They Dance?", New York Times, Science, Apr 07.
55. Hadlington, Simon. (2009). "Two-legged Molecular Walker Takes a Stroll", Dec 21.
56. Zeleny, Leslie D. (1949). "A Note on the Social Atom: an Illustration", Sociometry, 12: 341-43.
57. Gladyshev, Georgi P. (2006). "The Principle of Substance Stability is Applicable to all Levels of Organization of Living Matter", International Journal of Molecular Sciences, 7: 98-110.
58. Human Molecule (1988 acrylic on canvas) - Authentic Norval Morrisseau Blog.
59. Rousseau, Pierre. (2006). "The Constant Flow of Human Molecules".
60. (a) Murali, D. (2010). "Human Particles in the Corporate Molecule", The Hindu, May 29. (b) Nayar, Vineet. (2010). Employees First, Customers Second (quote, pg. 165). Harvard Business Books.
61. Carey, H.C. and McKean, Kate. (1874). Manual of Social Science: Being a Condensation of 'The Principles of Social Science' of H.C. Carey (ch. 1: Social Science, pg. 25; molecule, pg. 37). Industrial Publisher.
62. Stark, Werner. (1962). The Fundamental Forms of Social Thought (Carey, pgs. 143-59; human molecules, pgs. 87-90, 126, 146 (quote), 243, 25). Routledge.
63. Hodgson, John. (2010). Molecules Humans.
64. (a) Louis de Broglie - Wikipedia. (b) Wave-particle duality - Wikipedia.
66. Human molecule (definition) - Human Thermodynamics Glossary.
67. Macrone, Michael and Lulevitch, Tom. (1999). Eureka!: 81 Key Ideas Explained (section: Entropy, pgs. 129-33; image, pg. 130). Barnes & Noble Publishing.
68. Staff. (1953). "Science: the Million-Year Prophecy", Time, Jan 19.
69. Mesny, Mary B. (1910). "Human Molecules", The Smart Set: a Magazine of Cleverness, 31: 100.
70. Author. (2005). "That's Life", New Scientist, Dec 03.
71. Annamalai, Kalyan, Puri, Ishwar K., and Jog, Milind A. (2011). Advanced Thermodynamics Engineering (§14: Thermodynamics and Biological Systems, pgs. 709-99, contributed by Kalyan Annamalai and Carlos Silva; §14.4.1: Human Body | Formulae, pgs. 726-27; Thims, ref. 88). CRC Press.
72. Silva, Carlos A. and Annamalai, Kalyan. (2009). "Entropy Generation and Human Aging: Lifespan Entropy and Effect of Diet Composition and Caloric Restriction Diets", Journal of Human Thermodynamics, Jan 23.

Further reading
"Humans resemble molecules?", Pravda, 24 May 2004.
Ockham, Edward. (2012). "The Human Molecule", Beyond Necessity: Philosophy, Medieval Logic and the London Plumbing Crisis, May 13.

External links
The Human Molecule - Human molecules - WikiSocial, a Wikia wiki.
The Human Molecule (discussion thread)
Human molecule - EoHT symbol

Discussion threads

Petrologist: "Why I'm not a molecule" (Aug 24 2009)
First, I think that using thermodynamics, with its concept of efficiency, might be a very productive way of exploring some sciences other than the physical. To show this, one needs to create at least one theory, test it whenever possible, and show it makes interesting, new predictions of value to us. To do the above, one need not slavishly adopt terminology from the physical sciences. When I write about the thermodynamics of rock interactions, I find it convenient to create new terms. These terms, however, are of value only if they are synonyms of terms in other sciences, mathematics, or everyday language. Here the human molecule of the social sciences fails. Sometimes, when discussing giving birth or dying, one wants instead the chemical substances of human beings, as the quantity or mass of the unit NaCl in a halite-bearing rock. At other times, one wants a collection of separate but equivalent entities whose bonds one can define in psychological terms. These should be different terms. Each of the above has some properties of a molecule, but not all. 'Molecule' now brings to mind a discrete substance (floating about) made of the same number and kinds of atoms, bonded in the same manner. They differ only in the physical properties 'isotopic mass' and 'handedness'. Geologists use instead 'substance', a much more flexible term. Substances react, and classical thermodynamics studies them. The chemical formulae above represent chemical compositions of the human substance. I have no suggestion for people in a crowd but, perhaps, 'person'. These are just first impressions. I might change my mind if 'flash' worked on my computer. :-)

D.Boss: "orbitals" (Aug 4 2012)
Can you point to applications of the human molecular orbitals? Seems like an interesting theory.

Anonymous: "factors outside chemistry" (Jun 21 2012)
Hi. I was wondering if you acknowledge other factors than chemistry?
For instance, if we have the human molecule, do you think that can explain and predict everything that a human consists of?

GGladyshev: "The term 'human molecule'" (Jul 3 2010)
I believe that we should add a note on the term "human molecule". A "human molecule" is a particle of variable chemical and supramolecular composition. These compositions depend on the age of the body. The same situation takes place in phylogeny and evolution. This is a consequence of the action of hierarchical thermodynamics. It was first shown in my work (Gladyshev, G.P. "On the Thermodynamics of Biological Evolution", Journal of Theoretical Biology, 1978, Vol. 75, Issue 4, Dec 21, pp. 425-441; preprint, May 1977). There have since been some publications which confirm this on the basis of quantitative calculations (Gladyshev, G.P. "On the Principle of Substance Stability and Thermodynamic Feedback", Biology Bulletin, 2002, Vol. 29, No. 1, pp. 1-4). I will be in Moscow in September only.
Durham e-Theses

Quantum field theories with fermions in the Schrödinger representation

Nolland, David John (2000). Quantum field theories with fermions in the Schrödinger representation. Unspecified thesis, Durham University.

Item Type: Thesis (Unspecified)
Thesis Date: 2000
Copyright: Copyright of this thesis is held by the author
Deposited On: 01 Aug 2012 11:49

Abstract: This thesis is concerned with the Schrödinger representation of quantum field theory. We describe techniques for solving the Schrödinger equation which supplement the standard techniques of field theory. Our aim is to develop these to the point where they can readily be used to address problems of current interest. To this end, we study realistic models such as gauge theories coupled to dynamical fermions. For maximal generality we consider particles of all physical spins, in various dimensions, and eventually, curved spacetimes. We begin by considering Gaussian fields, and proceed to a detailed study of the Schwinger model, which is, amongst other things, a useful model for (3+1)-dimensional gauge theory. One of the most important developments of recent years is a conjecture by Maldacena which relates supergravity and string/M-theory on anti-de Sitter spacetimes to conformal field theories on their boundaries. This correspondence has a natural interpretation in the Schrödinger representation, so we solve the Schrödinger equation for fields of arbitrary spin in anti-de Sitter spacetimes, and use this to investigate the conjectured correspondence. Our main result is to calculate the Weyl anomalies arising from supergravity fields, which, summed over the supermultiplets of type IIB supergravity compactified on AdS_5 × S^5, correctly matches the anomaly calculated in the conjecturally dual N = 4 SU(N) super-Yang-Mills theory. This is one of the few existing pieces of evidence for Maldacena's conjecture beyond leading order in N.
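For concreteness, the "Schrödinger representation" referred to in the abstract treats the state of a quantum field theory as a wave functional of the field configuration. A standard textbook illustration (not taken from the thesis itself) is the free scalar field, whose functional Schrödinger equation reads

i\hbar\,\partial_t\,\Psi[\phi,t] = \int d^{d}x\,\Big[-\frac{\hbar^{2}}{2}\,\frac{\delta^{2}}{\delta\phi(x)^{2}} + \frac{1}{2}\big(\nabla\phi\big)^{2} + \frac{m^{2}}{2}\,\phi^{2}\Big]\,\Psi[\phi,t]

and whose ground state is the Gaussian wave functional

\Psi_{0}[\phi] \propto \exp\Big(-\frac{1}{2\hbar}\int \frac{d^{d}k}{(2\pi)^{d}}\,\tilde{\phi}(-k)\,\omega(k)\,\tilde{\phi}(k)\Big), \qquad \omega(k)=\sqrt{k^{2}+m^{2}},

an example of the Gaussian fields with which, per the abstract, the thesis begins.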
SciELO - Scientific Electronic Library Online

HTS Theological Studies
Print version ISSN 0259-9422
Herv. teol. stud. vol. 68 n. 1, Pretoria, Jan. 2012

Can matter and spirit be mediated through language? Some insights from Johann Georg Hamann

Detlev Tönsing
School of Religion and Theology, University of KwaZulu-Natal, Pietermaritzburg Campus, South Africa; Lutheran Theological Institute, University of KwaZulu-Natal, Pietermaritzburg Campus, South Africa

Abstract
The Enlightenment introduced to European philosophy and thought-patterns the strict dichotomy between res extensa and res cogitans; that is, matter and spirit. How to overcome this dichotomy and conceive of the interactions between these planes of reality has since become an overarching issue for philosophers. The theory of evolution, as founded by Charles Darwin, understands human beings, with their ability to think, to have arisen in the evolutionary process. Neuroscience utilises insights from the theory of complex systems to attempt to understand how perception, thought and self-awareness can arise as a consequence of the complex system that is the brain. However, already at the height of the Enlightenment, a contemporary and critic of Immanuel Kant, Johann Georg Hamann, suggested a metaphor for understanding the interrelationship of matter and thought. This metaphor is language. The appropriateness of this metaphor can be seen both in the importance that language abilities play in the evolutionary transition to the human species and in the characteristics of complex adaptive systems.

Mind, matter, Calvin and Darwin
In 2009, the second centennial of Darwin's birth and the fifth centennial of the birth of Calvin, the theologian of the Holy Spirit, were celebrated. This conjunction can be used as an occasion to reflect on the relationship between spirit and the evolution of matter. In this article, I will use the term 'spirit' in the sense of that which defines the human being and makes humans different from animals. This is in the sense of Genesis 2, where the breath of God makes the human being into what it essentially is. This passage, as well as the passage in Genesis 1 about the imago dei in humans, shows a link between the human spirit and the Spirit of God. At the end of this article, I will draw implications for what the concept of 'Spirit of God' may imply from my conclusions about the human spirit itself. The concept of the human spirit or soul has been widely and controversially discussed and I will not enter into discussions on the relationship between spirit and soul, their distinction, and whether one should distinguish between body and spirit or body, soul and spirit in humans. Instead, I will focus on the use of 'spirit' as that which defines humans as distinct from (other) animals. Calvin (1843:171), as with Luther and medieval Catholicism, distinguishes the soul of the human being, which he acknowledges can also be termed 'spirit', from the body. It is this spirit that is the locus of the imago dei in humans and distinguishes humans from other beings. Examining the further distinctions of the soul - into intellect and will (Calvin 1843:180) - is beyond the scope of this article.

Transcendental categories and language
Kant and Hamann
In between the theories of Calvin and Darwin lie those of Descartes and Kant, representing the culmination of Enlightenment thought.
The distinction between spirit and body, or matter, is developed further and entrenched as both Descartes and Kant separated the concept of spirit from matter - the res cogitans from the res extensa, the noumena from the phenomena. Matter, with the main property of extension, is separated from intellect, which has the main ability of perception and thought. What is observed by the perceiving spirit is distinguished from the thing in itself. Kant reproduced the distinction between intellect and will within the spirit in his distinction between theoretical and practical reason in his two seminal works: Critique of pure reason (1781) and Critique of practical reason (1788). The division between spirit and matter was heightened to such an extent that one of the main difficulties of philosophy became how to explain the ability of the human spirit to acquire information from the bodily senses and how the human spirit could issue instructions to the human body; that is, the mind-body problem (Carrier & Mittelstrass 1995:16, 26). The consequences of this separation are present with us today and can be seen in the issue of the observer in quantum mechanics. The wave function develops continuously and predictably in accordance with the Schrödinger equation - until there is an observation, at which point the wave function, which assigns probabilities to the outcomes of possible observations, collapses to the definite state of one definite observed answer. Which answer is observed is not predictable, but probabilistic. The wave function then resumes its continuous development - until the next observation. What exactly constitutes an observation, and what this implies for unobserved states, is still a topic of discussion in interpretations of quantum mechanics (Laurikainen 1991:198; Shimony 2001:5). Another consequence of the mind-body dualism is the relative devaluation of the body, as opposed to the intellect, prevalent in Western culture. This can be seen in the relative value attached to manual and intellectual labour, as well as in the valuation of objects primarily in terms of the specific human contribution to their existence (Smith 1843 [1776]:20). Yet, in this conception, because value is given only to the human input in production, those aspects of nature untouched by humans are devalued. If that which is not made by humans is conceived as valueless, such a conception can be argued to be a major contributor to environmental degradation. Mind-body, spirit-matter dualism is therefore arguably at the root of some of the most important issues of our time. So how can this dualism be overcome? As a contemporary critic and friend of Immanuel Kant, Johann Georg Hamann may contribute to finding an approach to answering this question. Hamann assisted Kant in the publication of the Critique of pure reason. Having therefore had access to the text before publication, he wrote a response very soon after its appearance, although this was only published much later out of respect for his friendship with Kant. He named this response Metakritik über den Purismus der Vernunft (1825 [1784]), thereby coining the term 'metacritique'.
He concludes this response, a mere 14 pages long in comparison with the 440 pages of Kant's work, with the words:

This last possibility to draw the form of an empirical perception without object or sign from the pure and empty property of our external and internal mind is the Archimedean fulcrum 'give me to stand' and 'origin of the deception', the cornerstone of critical idealism and its tower- and lodge-building of pure reason. The given or taken materials belong to the categoric and idealistic forests, peripatetic and academic store rooms. The analysis is nothing more than a cut according to the fashion, as the synthesis is just an artful seam of a good leather- or clothes tailor. For the sake of the weak reader, I have interpreted that which the transcendental philosophy metagrabolises (sounds out at length) to the sacrament of language, the letters of its elements, the spirit of its institution. I leave everyone to unfold the closed fist into an open hand.1 (Hamann 1825 [1784]:16, [author's own translation])

This passage, as with most of Hamann's writing, needs interpretation - the closed fist needs to be expounded into an open hand. In his Critique of pure reason, Kant (1855 [1781]:31-43) attempts to demonstrate - with clarity and not hypothetically, but with apodictic certainty - that the true basis of thought lies in the universal conditions of perception which exist in all humans before the diverse vagaries of experience can arise: the categories of space, time, number and causality being the chief of these. In his response, Hamann (1825 [1784]:6) denies the possibility of universal reason, free from the vagaries of tradition and experience, as reason always depends on language and all language arises out of experience handed on, as sensus communis, in the process of tradition. Hamann (1825 [1784]:8) denies that a perfect, well-defined, abstract language without reference to everyday language is possible, calling this an ens rationis [pure thought] 'nothing'. Consequently, he decries the whole project of universal and controlling reason of an isolated individual, which Kant (1855 [1781]:17) proposes, as ill-conceived (Hamann 1825 [1784]:9). Instead, Hamann (1825 [1784]:9) starts with the insight that all thought is language and therefore participates in the particularity of experience and language. Universals are just particular concepts, arising out of particular experiences, which have been given very extensive fields of meaning. However, in language, the material - the sound of a syllable, the shape of a letter, of a word - becomes a carrier of meaning (Hamann 1825 [1784]:12). Hamann sees this joining of the material basis to the content of meaning as fundamentally analogous to that of the sacrament - where the meaning of the word of grace is joined to the material symbol of bread, wine or water. Hamann (1825 [1784]:16) suggests that the key to overcoming the matter-spirit dualism lies in the 'sacrament of language'. Before we explore the consequences of such an approach, let us see whether there is evidence that would support this claim.

Language, mind and evolution
This question brings us to Darwin and the theory of evolution. Whilst the general philosophical approach in Darwin's time still presupposed spirit-matter dualism, the theory of the evolution of humans presupposes a continuum between matter and spirit. It suggests that the human mind and spirit arose in a continuous process.
At some stage, the beings that evolved from early apes, and later were our ancestors, began cognition and became human - as the name Homo sapiens implies. When did this occur? And what change made them human, cognitive, inspirited beings? What in matter can give rise to spirit? Indeed, the opposition to the suggestion of continuity between matter and spirit, between animal bodies and human minds, was one of the main reasons for contemporaries of Darwin to reject his theory (Peacocke 1985:101-102). In human palaeontology, tool manufacture had often been used as the indicator of humanness because it implies planning and self-awareness. This approach corresponds to Kant's interpretation of what human spirit, human mind, is - a theoretical mind capable of accurately and objectively perceiving objects and a practical mind capable of manipulating objects to its will, capable of intention and planning (Kant 1855 [1781]:464). This corresponds to the definition of human beings common to much of philosophical discourse - a rational being with free will. The fundamental nature of the human being perceived in this approach corresponds to the names given to human ancestors: Homo habilis and Homo ergaster (Wood & Collard 1999:197). However, recent research in palaeoanthropology indicates that tool use, and even tool construction, is an insufficient criterion for humanness. Studies indicate that crows and apes can also modify objects to serve as tools. Also, the development of artefacts, cultural settlements and complex behaviour suddenly increased in speed approximately fifty thousand years ago - indicating a qualitative shift from biological adaptation to cultural adaptation and cultural transmission of information. This is now taken as the true indication of the origin of humans as we experience ourselves. It is linked to the appearance of complex, grammatical language, associated with Broca's area of the brain (Cela-Conde & Marty 1998:448; Diamond 1992:141). Religious behaviour originates at the same time, indicated by burial rituals and cultural construction, by the decoration of artefacts and production of paintings, as well as the construction of musical instruments (Ambrose 2001:1749; Cross, Zubrow & Cowan 2002:28; McBrearty & Brooks 2000:458). Therefore, although it is more difficult to detect than tool construction, language use arguably is the defining characteristic of being human, much more so than tool use or construction, which does not seem to denote an equally dramatic shift from pre-human hominids. Spirit seems therefore to be linked to language, in terms of the evolutionary origin of human beings. One intriguing fact in human evolution is the extraordinary capacity, complexity and information-processing ability of the human mind. The brain evolved in a hunter-gatherer society, where the information-processing needs of the average member were vastly less than those required of humans in our post-industrial information society. The brain has not changed dramatically in structure since then. The same brain that today can devise sub-quantum physical theories and process thousands of pages or screens of information then only needed to do a little more than the average baboon still does today - gather tubers and insects.
The ability to devise theories that span the evolution of the universe, that can project back 15 billion years and forward as equally long, and construct devices and societies as complex as supercomputers and mega-cities, seems to outweigh by far the evolutionary needs of the hunter-gatherers in which this mind evolved. Added to this is the significant evolutionary cost of a large brain: the problem of giving early birth to large-skulled children, caring for them in a long infancy, and the fact that the brain consumes an inordinate proportion of the energy of the body: 20% - 30%. The one parallel in other species to an organ that rapidly develops in size and complexity beyond the direct needs of survival is found in organs that display sexual competence and health (Miller 2000:130-132). Some examples of this are the tails and crests of whydahs, paradise birds and peacocks, as well as the antlers of elk. The hypothesis therefore is that brain size and function evolved to demonstrate health and ability as a preferential partner in mate selection. Yet how was this brain size demonstrated? Rapid growth in brain size occurred directly after the formation of language areas, and it therefore stands to reason that brain size was demonstrated through linguistic ability in the mate-selection process. The implication is that we developed our mind in order to compose love songs, not in order to make tools.2 The second linkage of matter and spirit, according to science, originates in the area of complexity. Instead of asking how our humanness arose historically in the process of evolution, the question here is how does our humanness actually arise out of the way we are made? How does the mind that I am arise out of the body that I am, too? Whilst the interaction of mind and body had, in the history of philosophy, been linked to the pineal gland or explained in terms of pre-stabilised harmonies of monads, the attempt of modern neuroscience to answer this question lies in the theory of complex, non-linear systems. The theory of complex systems indicates that these systems develop a mode of behaviour wherein the system behaves as a whole and its behaviour must be studied on the level of the system as a whole, using different categories from the study of the parts of the system (Clayton 2006:677, 681; Peacocke 1986:28-29, 90-91, 1993:224-225, 2000:135). This mode of behaviour is also called supervenience or emergence. A typical feature of this is that the higher-level reality - the meaning - can be instantiated in different ways in the lower level - the representation - and that the representation level, whilst constrained by lower-level laws, is flexible enough to take different configurations that are of equal energy, but carry different meaning. There is no energetic difference between different arrangements of the bases in a DNA string, but the different arrangements carry different information. This is therefore similar to the relationship between the physical representation of words in sounds or written signs and the meaning of those signs in a certain language-context (Murphy 1998:476-477).

Humanity and language, rationality and relationality
If, then, it can be reasonably held that the humanity of human beings lies in their ability to use language in pursuit of relationships, and not primarily in their ability to grasp the world rationally nor in their ability to manipulate the world technically, what then would be the consequences for the conception of humanity?
This conception would define humans not purely as rational beings, but rather as relational beings, thus coming closer to the African understanding of muntu ngumuntu ngabantu.3 Language is not something that can be used purely to describe the world (Miller 2000:359), something purely theoretical and interpreted in the mode of seeing as the primary sense, but rather something relational, with hearing - and answering - being the fundamental sense and mode of perception. The implication is that the human is not a distant and unrelated observer, nor a controlling artificer, but a communicant, a partner and participator in a communicative process (Hamann 1822 [1760]:261). Following this, human self-understanding would shift by necessity and the source of self-worth would have to be redefined. I am more human not when I know more, nor when I have more power, but when I relate more, and more deeply, through the language I use. This would especially be true if these relationships are truly relationships of communication and not relationships of the exercise of power - for in such relationships, both parties would be, and would make themselves, vulnerable to one another and, through these relationships, a community would be created where the search for the common good would outweigh the competition for position. In our broken world, this may be a simplistic expectation - and real relationships, in our experience, most often contain struggles for power and recognition; however, a tendency towards humans being viewed more relationally and less in terms of abilities may be indicated by this perspective. If this were accepted as the basis of society, then our society would spend less effort on controlling reality and exploiting it for knowledge or goods or power and more on the development of relationships. Even knowledge would be conceived of differently - for the ideal of knowledge in the age of science is that originating from Francis Bacon (1825:219), who defined knowledge in terms that related it to technical mastery over the world: knowledge is power. Knowledge based on a fundamentally relationally conceived language would be closer to the Hebrew understanding of yada', where knowledge is the establishment of an intimate and understanding relationship between knower and known. I believe this shift would result in a more respectful approach to our world, which could result in a healthier relation to our environment - which would then lead to a better chance of our survival on this planet.

Relationality, language and God's Spirit
Furthermore, if that which makes us truly human is the image of God in us, and our spirit is an echo of the Spirit of God, then our concept of God would shift as well. Whilst humans who conceptualise their humanity fundamentally as their ability to discern or to master other creatures would define God in terms of mastery over creation, those who would understand themselves fundamentally as relational beings would conceive of God rather as one who is deeply relational. The classical theistic definition of God - in terms of omnipotence and omniscience - defines God in terms of power, and is rightly criticised by Feuerbach (1981:262). The conception of God in terms of relations corresponds better, in my view, to the God of Jesus of Nazareth, whose main interest is the establishment of relationships with us and who is, as Trinity, fundamentally relational.
Of course, part of entering into a relationship is becoming vulnerable - contrary to the detached observer or the master, who is invulnerable. The vulnerability of God, God's pain at broken relationships, can clearly be seen in the biblical witness, from Hosea to Jesus (Fretheim 1984:155; Moltmann 1972:261). This understanding of God also has consequences for our fundamental approach to the world. In the 18th and 19th centuries, an age where God was conceived of as Master and designer, the world was conceived of as subject and machine, subjected to absolute and unchanging laws (Barrett 2000:58). However, if God's Spirit that penetrates and underlies the created order is conceived of in terms of language, then the world, too, will be interpreted in the metaphor of language and relationship. The laws of this world will be conceptualised as being akin to the laws of grammar and good style, rather than as the absolute laws of an absolute monarch. The laws of grammar underlie the possibility of expressing meaning in language - but they are, in their particular shape, neither necessary, nor is obedience to them absolutely mandatory - for whilst wholesale disregard for the rules of grammar destroys meaningful communication, the rules of grammar and style can be bent or broken occasionally with poetic license, when it serves the communication of meaning by an artful author (Hamann 1821 [1758]:138, 1822 [1759a]:17, 1821 [1759b]:508). In this conception, attention is focused away from an exclusive concern with the laws and onto an attempt to understand the meaning of the writing. A world seen only in terms of absolute laws has no meaning - but a world in which the laws of grammar undergird the communication of a particular text can have deep and profound meaning in its overall development, for each of its parts can contribute to that meaning. Does such a description fit our understanding of the world? The sciences of complexity indicate that it does, that the laws of this world are precisely such as to allow construction of intricate patterns that are not predetermined by the laws, but can develop because of the stability and freedom these laws provide. This balance between freedom and structure is sometimes referred to as the edge of chaos (Gutowitz & Langton 1995:52; Miller & Page 2007:129). Can we understand this meaning? Can we read the language of the world? To understand a language, one needs a key, a Rosetta stone. For, in language, individual patterns or words do not have intrinsic meaning - rather, their meaning is fixed by usage. There is nothing in the letters 'wand' that predestines one to interpret them as a side of a room, in German, or as a magic stick, in English. Meaning is a priori arbitrary, but a posteriori necessary. What can be the Rosetta stone for understanding the meaning of history? Hamann answers that it is revelation, specifically the revelation of God in Christ, that allows us to find the key to the language of history (Hamann 1821 [1758]:148, 1822 [1760]:263). The difference, then, between a believer and an unbeliever in the interpretation of the world is simply that between someone who has had access to the key to learn to understand the language and one who has not.

Language as sacrament
But why call language a sacrament?
Hamann (1825 [1784]:16) does so in reference to the definition of a sacrament that underlies the Lutheran understanding and is often quoted in Lutheran writings: 'accedat verbum ad elementum et fit sacramentum' (Luther 2000 [1530]:468).4 When the word, or more properly, the meaning, is joined to the element, the sacrament results. In this perspective, a sacrament has three constituent aspects: a material sign; a meaning that is attached to this sign; and the relation of this meaning to the gospel of the gracious self-communication of God. These three elements can be seen in language. Indeed, is language not a prime example of the joining of meaning to a material entity; that is, an auditory signal or a written sign? And is language, after all we have said, not a sign of the gracious communication of God, who desires relationship and has given himself to us in a world that is suited for, and geared toward, the establishment of relationships? Therefore, does language, by its very existence, not give us an indication of the meaning of the world - a continuous growth in complexity and in the depth and extent of relationships amongst the beings in it? If language can be regarded as indicative of the communication of God with us - in the world, in the word and in the central sacrament of the incarnation of Christ - then language itself participates in the sacramental nature of the word, the word that was in the beginning, was God, and yet became flesh. It is the nearness of language to this incarnated Word that gives it sacramental character. In this sacramental character, we see both God's self-communication as the ultimate source of language - for God created the world such that it has the character of language, so that he may communicate through it and with it - and God's intention of establishing communication embodied in relationships within God's creation. It is in such a relationship, in such a communication, that the dichotomy of matter and spirit is overcome. The consequences of this approach have been indicated in the development of this argument: if the meaning of the world is seen in language, then relationships of communication - and self-communication - become essentially important, rather than those of abstraction or manipulation. Such an approach can contribute to a healing of the divisions engendered by the modern dualisms.

Acknowledgements
Competing interests

References
Ambrose, S.H., 2001, 'Paleolithic technology and human evolution', Science 291(5509), 1748-1753. PMid:11249821
Bacon, F., 1825, The works of Francis Bacon, Lord Chancellor of England: A new edition, vol. 1, ed. B. Montague, William Pickering, London.
Barrett, P., 2000, Science and Theology since Copernicus: The search for understanding, University of South Africa, Pretoria.
Calvin, J., 1843, Institutes of the Christian religion, vol. 1, transl. J. Allen, Presbyterian Board of Publication, Philadelphia, PA.
Carrier, M. & Mittelstrass, J., 1995, Mind, brain, behavior: The mind-body problem and the philosophy of psychology, Walter de Gruyter, Berlin.
Cela-Conde, C. & Marty, G., 1998, 'Beyond biological evolution: Mind, morals and culture', in R.J. Russell, W.R. Stoeger & F.J. Ayala (eds.), Evolutionary and molecular biology: Scientific perspectives on divine action, pp. 445-451, Vatican Observatory, Rome.
Clayton, P., 2006, 'Emergence from physics to theology: A panoramic view', Zygon 41(3), 677-681.
Cross, I., Zubrow, E. & Cowan, F., 2002, 'Musical behaviours and the archaeological record: A preliminary study', in J. Mathieu (ed.), Experimental Archaeology, British Archaeological Reports International Series 1035, pp. 25-34, Archaeopress, Oxford.
Diamond, J., 1992, The third chimpanzee: The evolution and future of the human animal, Harper Perennial, New York, NY.
Feuerbach, L., 1981, Gesammelte Werke, vol. IV, ed. W. Schuffenhauer, Akademie-Verlag, Berlin.
Fretheim, T., 1984, The suffering of God, Fortress, Philadelphia, PA.
Gutowitz, H. & Langton, C., 1995, 'Mean field theory of the edge of chaos', in F. Morán (ed.), Advances in artificial life: Third European conference on artificial life proceedings, Granada, Spain, June 04-06, 1995, pp. 52-64, Springer, Berlin.
Hamann, J.G., 1821 [1758], 'Brocken', in Hamann's Schriften, vol. I, ed. F. Roth, pp. 125-148, G. Reimer, Berlin.
Hamann, J.G., 1822 [1759a], 'Sokratische Denkwürdigkeiten', in Hamann's Schriften, vol. II, ed. F. Roth, pp. 1-50, G. Reimer, Berlin.
Hamann, J.G., 1821 [1759b], 'To Kant, 30 Okt 1759', in Hamann's Schriften, vol. I, ed. F. Roth, pp. 508-509, G. Reimer, Berlin.
Hamann, J.G., 1822 [1760], 'Aesthetica in nuce', in Hamann's Schriften, vol. II, ed. F. Roth, pp. 255-308, G. Reimer, Berlin.
Hamann, J.G., 1825 [1784], 'Metakritik über den Purismum der reinen Vernunft', in Hamann's Schriften, vol. VII, ed. F. Roth, pp. 1-15, G. Reimer, Berlin.
Kant, I., 1855 [1781], Critique of pure reason, transl. J.M.D. Meiklejohn, Henry G. Bohn, London.
Kant, I., 1976 [1788], Critique of practical reason and other writings in moral philosophy, transl. L.W. Beck, University of Chicago Press, Chicago, IL.
Laurikainen, K.V., 1991, 'Causality and quantum mechanics', Foundations of Physics Letters 4(2), 197-201.
Luther, M., 2000 [1530], 'The large Catechism', in R. Kolb & T.J. Wengert (eds.), The Book of Concord: The confessions of the Evangelical Lutheran Church, pp. 377-480, Fortress, Minneapolis, MN.
McBrearty, S. & Brooks, A.S., 2000, 'The revolution that wasn't: A new interpretation of the origin of modern human behavior', Journal of Human Evolution 39(5), 453-563. PMid:11102266
Miller, G., 2000, The mating mind: How sexual choice shaped the evolution of human nature, Heinemann, London.
Miller, J.H. & Page, S.E., 2007, Complex adaptive systems: An introduction to computational models of social life, Princeton University Press, Princeton, NJ.
Moltmann, J., 1972, Der gekreuzigte Gott, Kaiser, Munich.
Murphy, N., 1998, 'Supervenience', in R.J. Russell, W.R. Stoeger & F.J. Ayala (eds.), Evolutionary and molecular biology: Scientific perspectives on divine action, pp. 474-478, Vatican Observatory, Rome.
Peacocke, A., 1985, 'Biological evolution and Christian theology - Yesterday and today', in J. Durant (ed.), Darwinism and divinity, pp. 101-130, Blackwell, Oxford.
Peacocke, A., 1986, God and the new biology, JM Dent & Sons, London.
Peacocke, A., 1993, Theology for a scientific age, SCM Press, London.
Peacocke, A., 2000, 'Chance and law', in R.J. Russell, N. Murphy & A.R. Peacocke (eds.), Chaos and complexity: Scientific perspectives on divine action, pp. 123-146, Vatican Observatory, Rome.
Shimony, A., 2001, 'The reality of the quantum world', in R.J. Russell, P. Clayton, K. Wegter-McNelly & J. Polkinghorne (eds.), Quantum mechanics: Scientific perspectives on divine action, pp. 3-16, Vatican Observatory, Rome.
Smith, A., 1843 [1776], An inquiry into the nature and causes of the wealth of nations, viewed 18 June 2009.
Wood, B. & Collard, M., 1999, 'The changing face of Genus Homo', Evolutionary Anthropology 8(6), 195-207.

Correspondence to: Detlev Tönsing
Postal address: Private Bag X1, Scottsville 3209, South Africa
Received: 22 Oct. 2010
Accepted: 03 Mar. 2011
Published: 06 Feb. 2012

1. The German original is as follows: Diese letzte Möglichkeit, die Form einer empirischen Anschauung ohne Gegenstand noch Zeichen aus der reinen und leeren Eigenschaft unseres äußern und innern Gemüths herauszuschöpfen, ist eben das δός μοι ποῦ στῶ und πρῶτον ψεῦδος, der ganze Eckstein des kritischen Idealismus und seines Thurm- und Logenbaues der reinen Vernunft. Die gegebenen oder genommenen Materialien gehören den kategorischen und idealischen Wäldern, peripatetischen und akademischen Vorrathskammern. Die Analyse ist nichts mehr als jeder Zuschnitt nach der Mode, wie die Synthese die Kunstnaht eines zünftigen Leder- oder Zeugschneiders. Was die Transcendental-Philosophie metagrabolisirt, habe ich, um der schwachen Leser willen, auf das Sakrament der Sprache, den Buchstaben ihrer Elemente, den Geist ihrer Einsetzung gedeutet, und überlasse es einem jeden, die geballte Faust in eine flache Hand zu entfalten.
2. This theory was developed in a somewhat light-hearted conversation with my wife over coffee, and later verified by consulting the literature - Hamann (1822 [1760]:258) supposed something similar.
3. This common African expression can be translated as: 'A human being is human through other humans.'
4. That is, 'The word joins the element and makes the sacrament.'
Complex number

2008/9 Schools Wikipedia Selection. Related subjects: Mathematics

In mathematics, a complex number is a number which can be formally defined as an ordered pair of real numbers (a, b), often written

a + bi

where i² = −1. Complex numbers have addition, subtraction, multiplication, and division operations defined, with behaviours which are a strict superset of the real numbers, as well as having other elegant and useful properties. Notably, negative real numbers can be obtained by squaring complex numbers. Complex numbers were invented when it was discovered that solving some cubic equations required intermediate calculations containing the square roots of negative numbers, even when the final solutions were real numbers. Additionally, from the fundamental theorem of algebra, the use of complex numbers as the number field for polynomial algebraic equations means that solutions always exist. The set of complex numbers forms an algebraically closed field, in contrast to the set of real numbers, which is not algebraically closed. Complex numbers are used in many different fields including applications in engineering, electromagnetism, quantum physics, applied mathematics, and chaos theory. When the underlying field of numbers for a type of mathematics is the field of complex numbers, the name usually reflects that fact. Examples are complex analysis, complex matrix, complex polynomial and complex Lie algebra.

Although other notations can be used, complex numbers are very often written in the form

a + bi

Real numbers may be expressed as complex numbers with an imaginary part of zero; that is, the real number a is equivalent to the complex number a + 0i. Complex numbers with a real part which is zero are called imaginary numbers. For example, 3 + 2i is a complex number, with real part 3 and imaginary part 2. If z = a + ib, the real part (a) is denoted Re(z), and the imaginary part (b) is denoted Im(z). In some disciplines (in particular, electrical engineering, where i is a symbol for current), the imaginary unit i is instead written as j, so complex numbers are sometimes written as a + jb.

[Figure: domain coloring plot of the function f(x) = (x² − 1)(x − 2 − i)² / (x² + 2 + 2i); the hue represents the function argument, while the saturation represents the magnitude.]

The set of all complex numbers is usually denoted by C, or in blackboard bold by ℂ. The real numbers, R, may be regarded as a subset of C by considering every real number as a complex number: a = a + 0i. Two complex numbers are equal if and only if their real parts are equal and their imaginary parts are equal. That is, a + bi = c + di if and only if a = c and b = d. Complex numbers are added, subtracted, multiplied, and divided by formally applying the associative, commutative and distributive laws of algebra, together with the equation i² = −1:

• Addition: (a + bi) + (c + di) = (a + c) + (b + d)i
• Subtraction: (a + bi) − (c + di) = (a − c) + (b − d)i
• Multiplication: (a + bi)(c + di) = ac + bci + adi + bdi² = (ac − bd) + (bc + ad)i
• Division: (a + bi) / (c + di) = ((ac + bd) / (c² + d²)) + ((bc − ad) / (c² + d²))i

(Division of complex numbers is further defined later.) A short computational sketch of these four rules follows.
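The following minimal Python sketch (an illustration added here, not part of the original article) implements the four rules directly on ordered pairs, anticipating the formal pair definition in the next section; the class name Cx is arbitrary.

# Complex arithmetic defined on ordered pairs (re, im), mirroring the
# rules listed above; division multiplies by the denominator's conjugate.
from dataclasses import dataclass

@dataclass(frozen=True)
class Cx:
    re: float
    im: float

    def __add__(self, other):
        return Cx(self.re + other.re, self.im + other.im)

    def __sub__(self, other):
        return Cx(self.re - other.re, self.im - other.im)

    def __mul__(self, other):
        # (a + bi)(c + di) = (ac - bd) + (bc + ad)i
        return Cx(self.re * other.re - self.im * other.im,
                  self.im * other.re + self.re * other.im)

    def __truediv__(self, other):
        d = other.re ** 2 + other.im ** 2   # (c + di)(c - di) = c^2 + d^2
        return Cx((self.re * other.re + self.im * other.im) / d,
                  (self.im * other.re - self.re * other.im) / d)

i = Cx(0.0, 1.0)
assert i * i == Cx(-1.0, 0.0)   # i^2 = -1

The division rule used here is exactly the multiply-by-the-conjugate recipe described under "Complex fractions" below.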
The field of complex numbers

Formally, the complex numbers can be defined as ordered pairs of real numbers (a, b) together with the operations:

(a, b) + (c, d) = (a + c, b + d)
(a, b) · (c, d) = (ac − bd, bc + ad)

So defined, the complex numbers form a field, the complex number field, denoted by C (a field is an algebraic structure in which addition, subtraction, multiplication, and division are defined and satisfy certain algebraic laws; for example, the real numbers form a field). The real number a is identified with the complex number (a, 0), and in this way the field of real numbers R becomes a subfield of C. The imaginary unit i can then be defined as the complex number (0, 1), which verifies

(a, b) = a · (1, 0) + b · (0, 1) = a + bi   and   i² = (0, 1) · (0, 1) = (−1, 0) = −1.

In C, we have:
• additive identity ("zero"): (0, 0)
• multiplicative identity ("one"): (1, 0)
• additive inverse of (a, b): (−a, −b)
• multiplicative inverse (reciprocal) of non-zero (a, b): (a / (a² + b²), −b / (a² + b²)).

Since a complex number a + bi is uniquely specified by an ordered pair (a, b) of real numbers, the complex numbers are in one-to-one correspondence with points on a plane, called the complex plane. C can also be defined as the topological closure of the algebraic numbers or as the algebraic closure of R, both of which are described below.

The complex plane

[Figure: geometric representation of z and its conjugate z̄ in the complex plane.]

A complex number z can be viewed as a point or a position vector in a two-dimensional Cartesian coordinate system called the complex plane or Argand diagram (named after Jean-Robert Argand) - see the figure at right. The point, and hence the complex number z, can be specified by Cartesian (rectangular) coordinates. The Cartesian coordinates of the complex number are the real part x = Re(z) and the imaginary part y = Im(z). The representation of a complex number by its Cartesian coordinates is called the Cartesian form or rectangular form or algebraic form of that complex number.

Absolute value, conjugation and distance

The absolute value (or modulus or magnitude) of a complex number z = re^{iφ} is defined as |z| = r. Algebraically, if z = x + yi, then |z| = √(x² + y²). One can check readily that the absolute value has three important properties:

|z| = 0 if and only if z = 0
|z + w| ≤ |z| + |w| (triangle inequality)
|z · w| = |z| · |w|

for all complex numbers z and w. It then follows, for example, that |1| = 1 and |z / w| = |z| / |w|. By defining the distance function d(z, w) = |z − w| we turn the set of complex numbers into a metric space and we can therefore talk about limits and continuity.

The complex conjugate of the complex number z = x + yi is defined to be x − yi, written as \bar{z} or z*. As seen in the figure, \bar{z} is the "reflection" of z about the real axis. The following can be checked:

\overline{z \cdot w} = \bar{z} \cdot \bar{w}
\bar{z} = z if and only if z is real
\bar{z} = −z if and only if z is purely imaginary
z^{-1} = \bar{z} \cdot |z|^{-2} if z is non-zero.

The latter formula is the method of choice to compute the inverse of a complex number if it is given in rectangular coordinates. That conjugation commutes with all the algebraic operations (and many functions; e.g. \sin\bar{z} = \overline{\sin z}) is rooted in the ambiguity in the choice of i (−1 has two square roots).
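As a quick numerical illustration of these identities (again a sketch added here, using Python's built-in complex type rather than anything from the article):

import cmath

z, w = 3 + 4j, 1 - 2j

# conjugation commutes with multiplication
assert (z * w).conjugate() == z.conjugate() * w.conjugate()
# the absolute value is multiplicative and satisfies the triangle inequality
assert cmath.isclose(abs(z * w), abs(z) * abs(w))
assert abs(z + w) <= abs(z) + abs(w)
# inverse via the conjugate: z^-1 = conj(z) / |z|^2
assert cmath.isclose(z.conjugate() / abs(z) ** 2, 1 / z)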
It is important to note, however, that the function f(z) = \bar{z} is not complex-differentiable (see holomorphic function).

Complex fractions

We can divide a complex number (a + bi) by another complex number (c + di) ≠ 0 in two ways. The first way has already been implied: to convert both complex numbers into exponential form, from which their quotient is easily derived. The second way is to express the division as a fraction, then to multiply both numerator and denominator by the complex conjugate of the denominator. The new denominator is a real number:

\frac{a + bi}{c + di} = \frac{(a + bi)(c - di)}{(c + di)(c - di)} = \frac{(ac + bd) + (bc - ad)i}{c^2 + d^2} = \left(\frac{ac + bd}{c^2 + d^2}\right) + i\left(\frac{bc - ad}{c^2 + d^2}\right).

Geometric interpretation of the operations on complex numbers

[Figures: X = A + B; X = AB; X = A*]

Consider a plane. One point is the origin, 0. Another point is unity, or 1. The sum of two points A and B is the point X = A + B such that the triangles with vertices 0, A, B, and X, B, A, are congruent. The product of two points A and B is the point X = AB such that the triangles with vertices 0, 1, A, and 0, B, X, are similar. The complex conjugate of a point A is the point X = A* such that the triangles with vertices 0, 1, A, and 0, 1, X, are mirror images of each other. This geometric interpretation allows problems of geometry to be translated into algebra. The problem of the geometric construction of the 17-gon is thus translated into the analysis of the algebraic equation x^17 = 1.

Polar form

Alternatively to the Cartesian representation z = x + iy, the complex number z can be specified by polar coordinates. The polar coordinates are r = |z| ≥ 0, called the absolute value or modulus, and φ = arg(z), called the argument or the angle of z. For r = 0 any value of φ describes the same number. To get a unique representation, a conventional choice is to set arg(0) = 0. For r > 0 the argument φ is unique modulo 2π; that is, if any two values of the complex argument differ by an exact integer multiple of 2π, they are considered equivalent. To get a unique representation, a conventional choice is to limit φ to the interval (−π, π], i.e. −π < φ ≤ π. The representation of a complex number by its polar coordinates is called the polar form of the complex number.

Conversion from the polar form to the Cartesian form

x = r cos φ
y = r sin φ

Conversion from the Cartesian form to the polar form

r = √(x² + y²)
φ = arg(z) = atan2(y, x)

(See the arg function and atan2.) The resulting value for φ is in the range (−π, +π]; it is negative for negative values of y. If instead non-negative values in the range [0, 2π) are desired, add 2π to negative results.

Notation of the polar form

The notation of the polar form as

z = r (cos φ + i sin φ)

is called trigonometric form. The notation cis φ is sometimes used as an abbreviation for cos φ + i sin φ. Using Euler's formula it can also be written as

z = r e^{iφ}

which is called exponential form.

Multiplication, division, exponentiation, and root extraction in the polar form

Multiplication, division, exponentiation, and root extraction are much easier in the polar form than in the Cartesian form. Using sum and difference identities, it is possible to obtain that

r_1 e^{iφ_1} · r_2 e^{iφ_2} = r_1 r_2 e^{i(φ_1 + φ_2)}

and that

\frac{r_1 e^{iφ_1}}{r_2 e^{iφ_2}} = \frac{r_1}{r_2} e^{i(φ_1 − φ_2)}.
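A small added Python sketch of the Cartesian-to-polar conversion and of the product rule just stated (the standard library's cmath.polar performs the same conversion):

import math

def to_polar(z: complex) -> tuple:
    """Return (r, phi), with phi = atan2(Im z, Re z) in (-pi, pi]."""
    return abs(z), math.atan2(z.imag, z.real)

z1, z2 = 1 + 1j, 2j                  # arguments pi/4 and pi/2
r1, p1 = to_polar(z1)
r2, p2 = to_polar(z2)

r, p = to_polar(z1 * z2)
assert math.isclose(r, r1 * r2)      # moduli multiply
assert math.isclose(p, p1 + p2)      # arguments add (no wrap past pi here)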
Exponentiation with integer exponents: according to de Moivre's formula,

\big(r\,e^{i\varphi}\big)^n = r^n\,e^{in\varphi}.

Exponentiation with arbitrary complex exponents is discussed in the article on exponentiation.

The addition of two complex numbers is just the vector addition of two vectors, and multiplication by a fixed complex number can be seen as a simultaneous rotation and stretching. Multiplication by i corresponds to a counter-clockwise rotation by 90 degrees (π/2 radians). The geometric content of the equation i^2 = −1 is that a sequence of two 90 degree rotations results in a 180 degree (π radians) rotation. Even the fact (−1) · (−1) = +1 from arithmetic can be understood geometrically as the combination of two 180 degree turns.

All the roots of any number, real or complex, may be found with a simple algorithm. The nth roots are given by

\sqrt[n]{r e^{i\varphi}}=\sqrt[n]{r}\ e^{i\left(\frac{\varphi+2k\pi}{n}\right)}

for k = 0, 1, 2, …, n − 1, where \sqrt[n]{r} represents the principal nth root of r.

Some properties

Matrix representation of complex numbers

While usually not useful, alternative representations of the complex field can give some insight into its nature. One particularly elegant representation interprets each complex number as a 2×2 matrix with real entries which stretches and rotates the points of the plane. Every such matrix has the form

\begin{bmatrix} a & -b \\ b & \;\; a \end{bmatrix}

where a and b are real numbers. The sum and product of two such matrices is again of this form, and the product operation on matrices of this form is commutative. Every non-zero matrix of this form is invertible, and its inverse is again of this form. Therefore, the matrices of this form are a field, isomorphic to the field of complex numbers. Every such matrix can be written as

\begin{bmatrix} a & -b \\ b & \;\; a \end{bmatrix} = a \begin{bmatrix} 1 & \;\; 0 \\ 0 & \;\; 1 \end{bmatrix} + b \begin{bmatrix} 0 & -1 \\ 1 & \;\; 0 \end{bmatrix},

which suggests that we should identify the real number 1 with the identity matrix

\begin{bmatrix} 1 & \;\; 0 \\ 0 & \;\; 1 \end{bmatrix}

and the imaginary unit i with

\begin{bmatrix} 0 & -1 \\ 1 & \;\; 0 \end{bmatrix},

a counter-clockwise rotation by 90 degrees. Note that the square of this latter matrix is indeed equal to the 2×2 matrix that represents −1. The squared absolute value of z corresponds to the determinant of the matrix:

|z|^2 = \det \begin{bmatrix} a & -b \\ b & \;\; a \end{bmatrix} = a^2 + b^2.

If the matrix is viewed as a transformation of the plane, then the transformation rotates points through an angle equal to the argument of the complex number and scales by a factor equal to the complex number's absolute value. The conjugate of the complex number z corresponds to the transformation which rotates through the same angle as z but in the opposite direction, and scales in the same manner as z; this can be represented by the transpose of the matrix corresponding to z.

If the matrix elements are themselves complex numbers, the resulting algebra is that of the quaternions. In other words, this matrix representation is one way of expressing the Cayley-Dickson construction of algebras. It should also be noted that the two eigenvalues of the 2×2 matrix representing a complex number are the complex number itself and its conjugate.
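The claims above (commutativity, i as the 90-degree rotation matrix, |z|² as the determinant, conjugation as transposition) can be spot-checked numerically. A small editor's sketch with numpy, not from the original text:

```python
# z = a + bi represented as the 2x2 real matrix [[a, -b], [b, a]].

import numpy as np

def M(a, b):
    return np.array([[a, -b], [b, a]], dtype=float)

one, i = M(1, 0), M(0, 1)

# i^2 = -1: the rotation-by-90-degrees matrix squares to minus the identity
assert np.allclose(i @ i, -one)

# Multiplication of matrices of this form is commutative ...
z, w = M(3, 4), M(1, -2)
assert np.allclose(z @ w, w @ z)

# ... |z|^2 is the determinant, and the conjugate is the transpose
assert np.isclose(np.linalg.det(z), abs(3 + 4j) ** 2)
assert np.allclose(z.T, M(3, -4))
print("all checks passed")
```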
Real vector space

C is a two-dimensional real vector space. Unlike the reals, the set of complex numbers cannot be totally ordered in any way that is compatible with its arithmetic operations: C cannot be turned into an ordered field. More generally, no field containing a square root of −1 can be ordered. R-linear maps C → C have the general form

f(z) = az + b\bar{z}

with complex coefficients a and b. Only the first term, az, is C-linear, and only that term is holomorphic; the second term, b\bar{z}, is real-differentiable but does not satisfy the Cauchy-Riemann equations. The function z ↦ az corresponds to rotations combined with scaling, while the function z ↦ b\bar{z} corresponds to reflections combined with scaling.

Solutions of polynomial equations

A root of the polynomial p is a complex number z such that p(z) = 0. A surprising result in complex analysis is that all polynomials of degree n with real or complex coefficients have exactly n complex roots (counting multiple roots according to their multiplicity). This is known as the fundamental theorem of algebra, and it shows that the complex numbers are an algebraically closed field. Indeed, the complex number field C is the algebraic closure of the real number field, and Cauchy constructed the field of complex numbers in this way. It can also be characterized as the quotient ring of the polynomial ring R[X] over the ideal generated by the polynomial X² + 1:

\mathbb{C} = \mathbb{R}[ X ] / ( X^2 + 1).

This is indeed a field because X² + 1 is irreducible, hence generating a maximal ideal, in R[X]. The image of X in this quotient ring is the imaginary unit i.

Algebraic characterization

The field C is (up to field isomorphism) characterized by the following three facts:
• its characteristic is 0
• its transcendence degree over the prime field is the cardinality of the continuum
• it is algebraically closed

Consequently, C contains many proper subfields which are isomorphic to C. Another consequence of this characterization is that the Galois group of C over the rational numbers is enormous, with cardinality equal to that of the power set of the continuum.

Characterization as a topological field

As noted above, the algebraic characterization of C fails to capture some of its most important properties. These properties, which underpin the foundations of complex analysis, arise from the topology of C. The following properties characterize C as a topological field:
• C is a field.
• C contains a subset P of nonzero elements satisfying:
  • P is closed under addition, multiplication and taking inverses.
  • If x and y are distinct elements of P, then either x − y or y − x is in P.
• C has a nontrivial involutive automorphism x → x*, fixing P and such that xx* is in P for any nonzero x in C.

Given these properties, one can then define a topology on C by taking the sets

B(x,p) = \{y \mid p - (y-x)(y-x)^*\in P\}

as a base, where x ranges over C and p ranges over P. To see that these properties characterize C as a topological field, one notes that P ∪ {0} ∪ −P is an ordered Dedekind-complete field and thus can be identified with the real numbers R by a unique field isomorphism. The last property is easily seen to imply that the Galois group over the real numbers is of order two, completing the characterization.

Pontryagin has shown that the only connected locally compact topological fields are R and C. This gives another characterization of C as a topological field, since C can be distinguished from R by noting that the nonzero complex numbers are connected, while the nonzero real numbers are not.

Complex analysis

The study of functions of a complex variable is known as complex analysis and has enormous practical use in applied mathematics as well as in other branches of mathematics. Often, the most natural proofs for statements in real analysis or even number theory employ techniques from complex analysis (see prime number theorem for an example).
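Returning to the quotient-ring characterization C = R[X]/(X² + 1) above, the construction can be mimicked directly: multiply polynomials and keep only the remainder modulo X² + 1. An editorial illustration using numpy's polynomial helpers (coefficient lists run from the constant term upward):

```python
# C = R[X]/(X^2 + 1): residue-class arithmetic reproduces complex arithmetic.

import numpy as np

def mul_mod(p, q):
    """Multiply two polynomials and reduce mod X^2 + 1."""
    prod = np.polynomial.polynomial.polymul(p, q)
    # divide by X^2 + 1 (coefficients [1, 0, 1]); keep the remainder
    _, rem = np.polynomial.polynomial.polydiv(prod, [1.0, 0.0, 1.0])
    return rem

x = [0.0, 1.0]               # the image of X, i.e. the imaginary unit i
print(mul_mod(x, x))         # [-1.], so X^2 = -1 in the quotient

z, w = [3.0, 4.0], [1.0, -2.0]   # 3 + 4i and 1 - 2i as residue classes
print(mul_mod(z, w))         # [11. -2.], matching (3+4j)*(1-2j) = 11 - 2j
```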
Unlike real functions, which are commonly represented as two-dimensional graphs, complex functions have four-dimensional graphs and may usefully be illustrated by colour coding a three-dimensional graph to suggest four dimensions, or by animating the complex function's dynamic transformation of the complex plane.

The words "real" and "imaginary" were meaningful when complex numbers were used mainly as an aid in manipulating "real" numbers, with only the "real" part directly describing the world. Later applications, and especially the discovery of quantum mechanics, showed that nature has no preference for "real" numbers and its most real descriptions often require complex numbers, the "imaginary" part being just as physical as the "real" part.

Control theory

In control theory, systems are often transformed from the time domain to the frequency domain using the Laplace transform. The system's poles and zeros are then analyzed in the complex plane. If a system has poles that are
• in the left half plane, it will be stable,
• in the right half plane, it will be unstable,
• on the imaginary axis, it will have marginal stability.

Signal analysis

In electrical engineering, the Fourier transform is used to analyze varying voltages and currents. The treatment of resistors, capacitors, and inductors can then be unified by introducing imaginary, frequency-dependent resistances for the latter two and combining all three in a single complex number called the impedance. (Electrical engineers and some physicists use the letter j for the imaginary unit, since i is typically reserved for varying currents.) This approach is called phasor calculus. This use is also extended into digital signal processing and digital image processing, which utilize digital versions of Fourier analysis (and wavelet analysis) to transmit, compress, restore, and otherwise process digital audio signals, still images, and video signals.

Improper integrals

In applied fields, complex numbers are often used to compute certain real-valued improper integrals by means of complex-valued functions; several methods exist to do this (see contour integration).

Quantum mechanics

The complex number field is relevant in the mathematical formulation of quantum mechanics, where complex Hilbert spaces provide the context for one such formulation that is convenient and perhaps most standard. The original foundation formulas of quantum mechanics – the Schrödinger equation and Heisenberg's matrix mechanics – make use of complex numbers.

Relativity

In special and general relativity, some formulas for the metric on spacetime become simpler if one takes the time variable to be imaginary. (This is no longer standard.) Complex numbers are essential to spinors, which are a generalization of the tensors used in relativity.

Applied mathematics

In differential equations, it is common to first find all complex roots r of the characteristic equation of a linear differential equation and then attempt to solve the system in terms of base functions of the form f(t) = e^{rt}.

Fluid dynamics

In fluid dynamics, complex functions are used to describe potential flow in two dimensions.

Fractals

Certain fractals are plotted in the complex plane, e.g. the Mandelbrot set and Julia sets.
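As a concrete instance of the phasor calculus described under signal analysis above, here is a short editor's sketch (component values are arbitrary illustrations) combining a resistor, an inductor, and a capacitor into a single complex impedance:

```python
# Series RLC impedance as one complex number: Z = R + j*w*L + 1/(j*w*C).

import math

def series_rlc_impedance(R, L, C, f):
    w = 2 * math.pi * f          # angular frequency
    return R + 1j * w * L + 1 / (1j * w * C)

Z = series_rlc_impedance(R=50.0, L=1e-3, C=1e-6, f=5e3)
print(abs(Z))                                     # magnitude in ohms
print(math.degrees(math.atan2(Z.imag, Z.real)))   # phase angle in degrees
```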
History

The earliest fleeting reference to square roots of negative numbers perhaps occurred in the work of the Greek mathematician and inventor Heron of Alexandria in the 1st century AD, when he considered the volume of an impossible frustum of a pyramid, though negative numbers were not conceived in the Hellenistic world. Complex numbers became more prominent in the 16th century, when closed formulas for the roots of cubic and quartic polynomials were discovered by Italian mathematicians (see Niccolò Fontana Tartaglia, Gerolamo Cardano). It was soon realized that these formulas, even if one was only interested in real solutions, sometimes required the manipulation of square roots of negative numbers. For example, Tartaglia's cubic formula gives such a solution for the equation x³ − x = 0.

The 18th century saw the labors of Abraham de Moivre and Leonhard Euler. To de Moivre is due (1730) the well-known formula which bears his name, de Moivre's formula:

(\cos \varphi + i\sin \varphi)^n = \cos n\varphi + i\sin n\varphi,

and to Euler (1748) Euler's formula of complex analysis:

\cos \varphi + i\sin \varphi = e^{i\varphi}.

The existence of complex numbers was not completely accepted until the geometrical interpretation (see below) had been described by Caspar Wessel in 1799; it was rediscovered several years later and popularized by Carl Friedrich Gauss, and as a result the theory of complex numbers received a notable expansion. The idea of the graphic representation of complex numbers had appeared, however, as early as 1685, in Wallis's De Algebra tractatus.

Wessel's memoir appeared in the Proceedings of the Copenhagen Academy for 1799, and is exceedingly clear and complete, even in comparison with modern works. He also considers the sphere, and gives a quaternion theory from which he develops a complete spherical trigonometry. In 1804 the Abbé Buée independently came upon the same idea which Wallis had suggested, that \pm\sqrt{-1} should represent a unit line, and its negative, perpendicular to the real axis. Buée's paper was not published until 1806, in which year Jean-Robert Argand also issued a pamphlet on the same subject. It is to Argand's essay that the scientific foundation for the graphic representation of complex numbers is now generally referred. Nevertheless, in 1831 Gauss found the theory quite unknown, and in 1832 published his chief memoir on the subject, thus bringing it prominently before the mathematical world. Mention should also be made of an excellent little treatise by Mourey (1828), in which the foundations for the theory of directional numbers are scientifically laid. The general acceptance of the theory is not a little due to the labors of Augustin Louis Cauchy and Niels Henrik Abel, and especially the latter, who was the first to boldly use complex numbers with a success that is well known.

The common terms used in the theory are chiefly due to the founders. Argand called cos φ + i sin φ the direction factor, and r = \sqrt{a^2+b^2} the modulus; Cauchy (1828) called cos φ + i sin φ the reduced form (l'expression réduite); Gauss used i for \sqrt{-1}, introduced the term complex number for a + bi, and called a² + b² the norm. The expression direction coefficient, often used for cos φ + i sin φ, is due to Hankel (1867), and absolute value, for modulus, is due to Weierstrass.

Following Cauchy and Gauss have come a number of contributors of high rank, of whom the following may be especially mentioned: Kummer (1844), Leopold Kronecker (1845), Scheffler (1845, 1851, 1880), Bellavitis (1835, 1852), Peacock (1845), and De Morgan (1849). Möbius must also be mentioned for his numerous memoirs on the geometric applications of complex numbers, and Dirichlet for the expansion of the theory to include primes, congruences, reciprocity, etc., as in the case of real numbers.

A complex ring or field is a set of complex numbers which is closed under addition, subtraction, and multiplication. Gauss studied complex numbers of the form a + bi, where a and b are integral, or rational (and i is one of the two roots of x² + 1 = 0). His student, Ferdinand Eisenstein, studied the type a + bω, where ω is a complex root of x³ − 1 = 0. Other such classes (called cyclotomic fields) of complex numbers are derived from the roots of unity x^k − 1 = 0 for higher values of k.
This generalization is largely due to Kummer, who also invented ideal numbers, which were expressed as geometrical entities by Felix Klein in 1893. The general theory of fields was created by Évariste Galois, who studied the fields generated by the roots of any polynomial equation F(x) = 0. The late writers (from 1884) on the general theory include Weierstrass, Schwarz, Richard Dedekind, Otto Hölder, Bonaventure Berloty, Henri Poincaré, Eduard Study, and Alexander MacFarlane. The formally correct definition using pairs of real numbers was given in the 19th century.
Advances in Graphene-Based Science and Application
Yasuhiro HATSUGAI, University of Tsukuba

1. Introduction
Graphene is a two-dimensional array of carbon atoms on a honeycomb lattice. Its experimental realization [1,2] opened a new world in physics and materials science, paving the road to Stockholm once again, as its zero-dimensional (0D) and 1D analogues, C60 and polyacetylene, had done before. These carbon-based materials come in a further large variety, such as quasi-1D carbon nanotubes, 3D diamond, and graphite. They are clearly key ingredients for the coming development of nanoscience and technology. Physically, most of them belong to a class of insulators/semiconductors characterized by a finite excitation gap. Within this family, graphene is special: it is a zero-gap semiconductor. Since the energy gap vanishes, no standard description remains applicable. The law governing the behavior of electrons in graphene is then not the usual Schrödinger equation but Dirac's relativistic equation for vanishing mass.

2. More than a new material
It is true that graphene can be a useful and groundbreaking new material for nanotechnology and supplies a basic platform for various industrial applications. At the same time, graphene is physically fundamental, since it is a perfect 2D crystal and the electrons living in it are relativistic quantum particles. One of the surprises of the papers is that a famous theoretical "theorem" (the Mermin-Wagner theorem) prohibits the isolation of a perfect 2D crystal, although one is in fact realized. The other is that the realization of a zero-gap semiconductor implies that many fancy predictions made for massless Dirac fermions by high-energy particle physicists should be confirmable within ordinary labs. Graphene is a stage for the condensed-matter realization of quantum theory with relativity and gauge symmetries.

3. Conclusions
The significance of graphene's experimental realization is at least twofold: its huge possibilities as a groundbreaking new material, and its importance for fundamental physics. Let me stress the latter in the talk, which I hope provides key ideas useful in graphene-based technology over the long term of several decades. Although the massless Dirac fermions living in graphene are anomalous, they are at the same time quite universal, in that they also appear in many different physical systems, such as d-wave superconductors and topological insulators (another hot topic in recent condensed matter physics, with its relation to possible spintronics applications). I will also put the focus on this universality without going into mathematical details.
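As a concrete illustration of the zero-gap statement above, the standard nearest-neighbour tight-binding model of graphene's π bands (an editor's sketch; the model and the hopping value t ≈ 2.7 eV are textbook assumptions, not taken from this abstract) shows the band gap closing at the corner K of the Brillouin zone:

```python
# Tight-binding pi bands of the honeycomb lattice:
# E(k) = +/- t |1 + exp(i k.a1) + exp(i k.a2)| vanishes at the K point.

import numpy as np

a, t = 1.0, 2.7                          # lattice constant (arb.), hopping in eV
a1 = np.array([1.5,  np.sqrt(3) / 2]) * a
a2 = np.array([1.5, -np.sqrt(3) / 2]) * a

def gap(k):
    f = 1 + np.exp(1j * k @ a1) + np.exp(1j * k @ a2)
    return 2 * t * abs(f)                # splitting between the two bands

K = np.array([2 * np.pi / (3 * a), 2 * np.pi / (3 * np.sqrt(3) * a)])
print(gap(np.zeros(2)))                  # finite at the zone centre (Gamma)
print(gap(K))                            # ~0 at K: the massless Dirac point
```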
What is the reason for the observation that, across the board, fields in physics are generally governed by second order (partial) differential equations?

If someone on the street would flat out ask me that question, then I'd probably mumble something about physicists wanting to be able to use the Lagrangian approach. And to allow a positive rotation and translation invariant energy term, which allows for local propagation, you need something like $-\phi\Delta\phi$. I assume the answer goes in this direction, but I can't really justify why more complex terms in the Lagrangian are not allowed or why higher orders are a physical problem. Even if these require more initial data, I don't see the a priori problem. Furthermore you could come up with quantities in the spirit of $F\wedge F$ and $F \wedge *F$ and okay yes... maybe any made up scalar just doesn't describe physics or misses valuable symmetries. On the other hand, in the whole renormalization business, they seem to be allowed to use lots and lots of terms in their Lagrangians. And if I understand correctly, supersymmetry theory is basically a method of introducing new Lagrangian densities too. Do we know the limit for making up these objects? What is the fundamental justification for order two?

First of all, it's not true that all important differential equations in physics are second-order. The Dirac equation is first-order. The number of derivatives in the equations is equal to the number of derivatives in the corresponding relevant term of the Lagrangian. These kinetic terms have the form $$ {\mathcal L}_{\rm Dirac} = \bar \Psi \gamma^\mu \partial_\mu \Psi $$ for Dirac fields. Note that the term has to be Lorentz-invariant – a generalization of rotational invariance for the whole spacetime – and for spinors, one may contract them with $\gamma_\mu$ matrices, so it's possible to include just one derivative $\partial_\mu$. However, for bosons, which have an integer spin, there is nothing like $\gamma_\mu$ acting on them. So the Lorentz invariance, i.e. the disappearance of the Lorentz indices in the terms with derivatives, has to be achieved by having an even number of them, as in $$ {\mathcal L}_{\rm Klein-Gordon} = \frac{1}{2} \partial^\mu \Phi \partial_\mu \Phi, $$ which inevitably produces second-order equations as well. Now, what about the terms in the equations with fourth or higher derivatives? They're actually present in the equations, too. But their coefficients are powers of a microscopic length or distance scale $L$ – because the origin of these terms is short-distance phenomena. Every time you add a derivative $\partial_\mu$ to a term, you must add $L$ as well, not to change the units of the term. Consequently, the coefficients of higher-derivative terms are positive powers of $L$, which means that these coefficients including the derivatives, when applied to a typical macroscopic situation, are of order $(L/R)^k$, where $1/R^k$ comes from the extra derivatives $\partial_\mu^k$ and $R$ is a distance scale of the macroscopic problem we are solving here (the typical scale on which the field changes by 100 percent or so). Consequently, the coefficients with higher derivatives may be neglected in all classical limits. They are there but they are negligible.
Einstein believed that one should construct "beautiful" equations without the higher-derivative terms, and he could guess the right low-energy approximate equations as a result. But he was wrong: the higher-derivative terms are not really absent. Now, why don't we encounter equations whose lowest-order derivative terms are absent? It's because their coefficient in the Lagrangian would have to be strictly zero, but there's no reason for it to be zero. So it's infinitely unlikely for the coefficient to be zero. It is inevitably nonzero. This principle is known as Gell-Mann's anarchic (or totalitarian) principle: everything that isn't prohibited is mandatory.

Thanks for the answer. What is the reason that "their coefficients are powers of a microscopic scale or distance scale $L$"? In the last paragraph you use this again, where it's implied that the lower order derivatives are a priori related to a bigger scale, which then outweighs the later ones associated with higher orders. Is there a justification which goes back to axiomatic assumptions, or is it "just" an empirical insight from dealing with effective field theories? – NikolajK Dec 21 '11 at 15:08

Dear @Nikolaj, $L$ determining the coefficients is microscopic because microscopic scales are the natural ones for the formulation of the laws of physics. By definition, microscopic scales are the scales associated with the elementary particles. These general discussions talk about many things at the same moment. For example, in GR, the typical scale is the Planck length, $10^{-35}$ meters, which is the shortest one. In other theories, the typical scale is longer. But it's always microscopic because it determines the internal structure/behavior of the fields and particles, which are small. – Luboš Motl Dec 21 '11 at 16:06

The comment that the derivatives are not just related to, but produce, the long scale was meant as a self-evident tautology. What I mean is that if we consider a field that is changing in space, e.g. as a wave with wavelength $R$, then the derivative will pick up a factor of order $1/R$, too. For example, the derivative of $\sin(x/R)$, the wave of length $2\pi R$, is $\cos(x/R)/R$. Cos and sin are almost the same thing, of the same order 1, and we therefore picked up an extra factor of $1/R$. All these things are order-of-magnitude estimates. Macroscopic usage of field theory has a macroscopic $R$. – Luboš Motl Dec 21 '11 at 16:09

I'm not sure if I successfully pointed out my problem in the comment. My question is: What is the justification for assuming the coefficient of smaller orders would describe a bigger scale? What speaks against a situation where the fourth order term has a small coefficient, but the second order term has an even smaller one? Then in the classical limit, just the fourth order expression would survive. – NikolajK Dec 21 '11 at 18:36

Dear @Nikolaj, it's likely that I don't understand your continued confusion at all. Whether a term may be neglected depends on the relative magnitude of the two terms, the neglected one and the surviving one. So I am estimating the ratio of higher-derivative terms and two-derivative terms, and it scales like $(L/R)^k$, a small number, so the higher-derivative terms may be neglected if the two-derivative terms are there. It doesn't matter how you normalize both of these terms in an "absolute way". What matters for being able to neglect one term is the ratio of the two terms.
– Luboš Motl Dec 21 '11 at 18:53

One can rewrite any PDE of any order as a system of first-order PDEs, hence the assumption behind the question is somewhat questionable. Also, there exist first-order PDEs of relevance to physics (the Dirac equation and the Burgers equation, to name just two). However, it is common that quantities in physics appear in conjugate pairs of potential fields and their associated field strength, defined by the potential gradient. Now the gradients of the field strength act as generalized forces that try to move the system to an equilibrium state at which these gradients vanish. (They will succeed only if there is sufficient friction and no external force.) In a formulation where only one half of each conjugate pair is explicit in the equations, a second-order differential equation results.

Here we will for simplicity limit ourselves to systems that have an action principle. (For fundamental and quantum mechanical systems, this is often the case.) Let us reformulate OP's question as follows: Why do the Euler-Lagrange equations of motion for a relativistic (non-relativistic) system have at most two spacetime-derivatives (time-derivatives), respectively? (Here the precise number of derivatives depends on whether one considers the Lagrangian or the Hamiltonian formulation, which are related via Legendre transformation. In case of a singular Legendre transformation, one should use the Dirac-Bergmann or the Jackiw-Faddeev method to go back and forth between the two formalisms. See also this Phys.SE post.) The higher-derivative terms are in certain theories suppressed for dimensional reasons by the natural scales of the problem. This may e.g. happen in renormalizable theories. But the generic answer is that the equations of motion actually don't have to be of order $\leq 2$. However, for a generic higher-order quantum theory, if higher-derivative terms are not naturally suppressed, this typically leads to ghosts of the so-called bad type, with the wrong sign of the kinetic term, negative norm states, and unitarity violation. At the naive level, explicit appearances of higher time-derivatives may be removed in formulas by introducing more variables, either via the Ostrogradsky method, or equivalently, via the Lagrange multiplier method. However, the positivity problem is not cured by such rewritings due to the Ostrogradsky instability, and the quantum system remains ill-defined. See also e.g. this and this Phys.SE answer. Hence one can often not make consistent sense of higher-order theories, and this may be why OP seldom faces them. Finally, let us mention that it is nowadays popular to study effective higher-derivative field theory, with the possibly unfounded hope that an underlying, supposedly well-defined, unitary description, e.g. string theory, will cure all pathologies.

The reason for the equations of physics being at most second order is the so-called Ostrogradskian instability (see the paper by Woodard). This is a theorem which states that equations of motion with higher-order derivatives are in principle unstable or non-local. This is easily shown using the Lagrangian and Hamiltonian formalism. The key point is that in order to get an equation of motion of third order in the derivatives, we need a Lagrangian that depends on the coordinates and the generalized velocities and accelerations: $L(q,\dot{q},\ddot{q})$.
By performing a Legendre transformation to obtain the Hamiltonian, this implies that we need two generalized momenta. The Hamiltonian turns out to be linear in at least one of the momenta and is therefore unbounded from below (it can become negative). This corresponds to a phase space in which there are no stable orbits. I would like to write the proof here, but it was already answered in this post. There the question is why Lagrangians only have one derivative, but it is actually closely related, since one can always find the equations of motion from a Lagrangian and vice versa. Citing Woodard: "It has long seemed to me that the Ostrogradskian instability is the most powerful, and the least recognized, fundamental restriction upon Lagrangian field theory. It rules out far more candidate Lagrangians than any symmetry principle. Theoretical physicists dislike being told they cannot do something and such a bald no-go theorem provokes them to envisage tortuous evasions. ... The Ostrogradskian instability should not seem surprising. It explains why every single system we have so far observed seems to be described, on the fundamental level, by a local Lagrangian containing no higher than first time derivatives. The bizarre and incredible thing would be if this fact was simply an accident."

This is correct. However, physical evolution equations are second-order (in time) hyperbolic equations. In fact, each component of the Dirac spinor follows a second-order equation, namely the Klein-Gordon equation.

"They're actually present in the equations, too."

Neither the Standard Model (SM) Lagrangian nor the Einstein-Hilbert (EH) action contains higher than second order temporal derivatives. These are the actions which are experimentally tested, and these two theories are the most fundamental scientific theories we have. We know that there is physics beyond these two theories, and people have good candidates for the underlying theories, but physics is an experimental science and these theories are not experimentally verified. The effective SM Lagrangian (a Lorentz-invariant theory with the gauge symmetries of the SM but with irrelevant operators) does contain higher than second order temporal derivatives. Equally for the EH action plus higher-order scalars. Two clarifications are however in order:

• These irrelevant terms are not experimentally verified. Almost everyone is sure that neutrino mass terms (which are irrelevant operators but do not contain higher order derivatives) exist in order to explain neutrino oscillations, but so far we do not have direct measurements of neutrino masses, thus we are not allowed to claim that these terms exist. Summarizing: the effective SM is not a verified theory.

• The origin of these irrelevant terms is a consequence of integrating out fields with a mass much greater than the energy scale we are interested in. This could be the case of the neutrino mass term and a right-handed neutrino. For instance, in quantum electrodynamics, if one is interested in the physics at much lower energies than the electron mass, one can integrate out (or nature integrates out) the electron field, obtaining an effective Lagrangian (the Euler-Heisenberg Lagrangian) with terms with higher order derivatives like $\frac{\alpha ^2}{m_e^4}\,F_{\mu\nu}\,F^{\mu\nu}\,F_{\rho\sigma}\,F^{\rho\sigma}$ (which contains four derivatives). These are terms suppressed by coupling constants ($\alpha$) and high-energy scales ($m_e$).
There are terms with an arbitrarily high number of derivatives, and they come from inverses of differential operators. This means that the higher order derivatives do not enter the zeroth-order equations of motion. However, in a fundamental theory (in contrast to an effective one), finite higher order derivatives are not allowed in interactive theories (there are some exceptions with gauge fields, but for example a generic $f(R)$ theory of gravity is inconsistent). The reason is that those theories are not bounded from below (see Why are only derivatives to the first order relevant?) or, in some quantizations, contain negative norm states. These terms are among the forbidden operators in Gell-Mann's totalitarian principle. In summary, evolution equations are of second order because of the existence of a normalizable vacuum state and unitarity (including here the fact that physical states must have positive norm). Newton was right when he wrote $$\ddot x=f(x,\dot x)$$

Weinberg gives a pretty good answer for this in Volume 1 of his QFT opus: 2nd-order differential equations appear in the field theories relevant to particle physics because of the relativistic mass-shell condition $p^2 = m^2$. If we have a quantum field $\phi$, and we think of its Fourier modes $\phi(p)$ as creating particles with 4-momentum $p$, then the mass-shell condition provides a constraint: $(p^2 - m^2)\phi(p) = 0$, because we don't want particle creation off-shell. Fourier-transform this back to position space, and you find that $\phi$ has to obey a 2nd-order differential equation.

This doesn't apply to general relativity, where nevertheless equations are of second order. – Arnold Neumaier Nov 12 '12 at 16:39

It does tell you that the linearized Einstein equations should be second order. And it explains why the renormalization flow should be defined in such a way that the kinetic term is fixed, which is an important assumption implicit in Lubos' answer. – user1504 Nov 12 '12 at 16:42

Actually, evolution equations are even more than just second order in time: they don't depend naively on first-order derivatives, that is, on "velocity". This can be easily understood as the fact that there exist no privileged inertial frames. The change (that is, what is absolute) is given by acceleration and not velocity. If it depended naively on some velocity terms, then it would imply that there's a privileged frame. Let us make some analogy with Newtonian mechanics. If we were living in an Aristotelian universe with a privileged frame of reference, then $F = mv$. Motion would therefore be absolute, and so would be velocity. Because there is no such privileged frame of reference, but a whole class of privileged ones (the inertial ones), $F = ma$. Why couldn't it be that we live in a universe where $F = m \dot a$? Simply because of Galilean principles. If you believe that acceleration and velocities are "cancellable", and that real change is given by the derivative of acceleration, then you would have to believe in a second-order Galilean principle of invariance and inertia. A second-order principle of invariance would tell you that the laws of physics have to be the same in all inertial frames and all uniformly accelerated frames; otherwise it would mean that there is a way to discriminate them, and thus that there is no equivalence between being inertial and being uniformly accelerated.
This in particular implies that if you're inside one of these frames and you see someone uniformly accelerated with respect to your $x$ axis, that is, $x_1(t) = gt^2/2$, and you also see someone accelerated in the opposite direction, that is, $x_2(t) = -gt^2/2$, then from the point of view of the second observer, the first object will be described by $x(t) = gt^2$. This implies that you would be able to see objects with arbitrarily high acceleration, and this without the need to consume any "energy". This is not what we observe in this universe: you don't uniformly accelerate an object "for free". So it looks like nature chose to be as simple as possible in order to keep a symmetry between all inertial frames: dynamics is second order in time, not third or even worse. Note that one could say that it's Machian, that is, that it is symmetric to all orders in acceleration. This would imply that there is no difference at all between rotation and being inertial. That is to say, if I look at a guy spinning with a ball in his hands who will eventually let it go, the ball will then make a spiral movement and its angular velocity will keep increasing as it goes farther from the guy who launched it (indeed, the latter has to see it going in a straight line, by Galileo's principle of inertia). The universe is therefore not Machian either. Then why does Schrödinger's equation depend on first order in time? Because it is a modal equation: it needs an observer to make sense and to make measurements. Hence, there is one Schrödinger equation per observer (the Hamiltonian depends on the observer and the system he is looking at; see the relational interpretations). At least, this is my interpretation of it.

It was already noted in other answers that fields in physics are not always governed by second-order partial differential equations (PDEs). It was said, e.g., that the Dirac equation is a first-order PDE. However, the Dirac equation is a system of PDEs for four complex functions, the components of the Dirac spinor. It was also mentioned that any PDE is equivalent to a system of PDEs of the first order. I mentioned previously that the Dirac equation in an electromagnetic field is generally equivalent to a fourth-order partial differential equation for just one complex component, which component can also be made real by a gauge transform (my article published in the Journal of Mathematical Physics). Let me also mention my article where it is shown that the equations of spinor electrodynamics (the Dirac-Maxwell electrodynamics) are generally equivalent to a system of PDEs of the third order for the complex four-potential of the electromagnetic field (producing the same electromagnetic field as the usual real four-potential).

(Adding comment as answer.) Actually all classical mechanics (and quantum mechanics) can be formulated with only first-order derivatives, at the expense of adding extra dimensions (i.e., phase space, the Hamiltonian formalism). This indeed makes for a dynamic description of a physical system. Furthermore, any order of differential equations can be made into first order by the same token. Non-linear dynamics (i.e., chaos theory) makes heavy use of only first-order dynamical laws in its studies. Adding more orders to dynamical laws needs more information to be added (initial conditions) and becomes intractable to solve explicitly or algorithmically in most cases.
Furthermore, first-order dynamical laws do provide at least good approximations to, or even complete coverage of, the dynamical evolution of a system under study.
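To illustrate the point made at the top of the thread, that a kinetic term with one time derivative in the Lagrangian yields second-order equations of motion, here is a short editor's sketch using sympy's Euler-Lagrange helper (the harmonic oscillator is just a convenient example):

```python
# L = (1/2) m qdot^2 - (1/2) k q^2  ->  m qddot + k q = 0 (second order).

import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')

L = sp.Rational(1, 2) * m * q(t).diff(t)**2 - sp.Rational(1, 2) * k * q(t)**2
print(euler_equations(L, q(t), t))
# [Eq(-k*q(t) - m*Derivative(q(t), (t, 2)), 0)] -> Newton's second-order law
```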
Electron Phases

In physical chemistry, electron phases describe the sign (positive or negative) of the wave function, which is a solution to the Schrödinger equation. When two wave functions describing two atomic orbitals on the same atom are combined, a hybrid orbital is created. When wave functions for atomic orbitals on two different atoms are combined, the result is a molecular orbital. In terms of molecular orbitals (MOs), the type of interaction between the two wave functions being added depends on their signs. This interaction may be either constructive (both wave functions are of the same sign) or destructive (opposite signs).

The phases of a wave function can be represented by a simple sine curve, y = sin x, in which the values above the x-axis are positive and the values below the x-axis are negative. When two wave functions of the same phase (both positive sine curves) are combined, in-phase overlap is observed and, mathematically speaking, the amplitude of the wave increases. When two wave functions of opposite phases (one a positive sine curve while the other is negative) are combined, the phases "cancel out" and the addition of the wave functions results in a straight line with zero amplitude. Bonding orbitals are created via constructive interaction, while antibonding orbitals are created by destructive interaction.

When two atomic orbitals come in contact, they may form a sigma bond, pi bond, delta bond, or other form of interaction. These atomic orbitals may undergo in-phase overlap to produce a bonding orbital, as well as out-of-phase overlap to produce an antibonding orbital. In the language of phase space, an electron has a position and a momentum, and when two electron orbitals merge and form a bonding interaction, the electrons from both orbitals must have the same position and momentum. Thus, when the phases of both electron orbitals are similar, a bond is formed; when they are different, the two orbitals repel each other to form an antibonding pair. Antibonding relationships are denoted by the orbital letter (sigma, pi, delta, etc.) followed by an asterisk (*) and are pronounced, for example, "sigma star".

Regarding energy, bonding pairs are always more stable (lower in energy) than the two original atomic orbitals that combined; antibonding pairs are higher in energy than the atomic orbitals. The number of atomic orbitals that overlap during this process is equal to the number of molecular orbitals that result; one MO is lower in energy (bonding) and the other is higher in energy (antibonding).
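A small numerical sketch (an editor's addition; the exponential orbitals are a crude one-dimensional stand-in for hydrogen-like 1s functions) of the constructive and destructive overlap described above:

```python
# In-phase addition piles up amplitude between the nuclei (bonding);
# the out-of-phase combination has a node there (antibonding).

import numpy as np

x = np.linspace(-6.0, 6.0, 1201)
phi_A = np.exp(-np.abs(x + 1.0))   # orbital centred on nucleus A at x = -1
phi_B = np.exp(-np.abs(x - 1.0))   # orbital centred on nucleus B at x = +1

sigma = phi_A + phi_B              # in-phase: constructive overlap
sigma_star = phi_A - phi_B         # out-of-phase: destructive overlap

mid = len(x) // 2                  # midpoint between the nuclei, x = 0
print(sigma[mid])                  # large amplitude -> bonding
print(sigma_star[mid])             # ~0: node between the nuclei -> antibonding
```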
I have read about the existence of a nonlinear Schrödinger equation. What is its utility and application? And how can it be derived? Is it in a relativistic or non-relativistic context?

Wikipedia article didn't help? – Marek Jan 29 '11 at 11:38

Dear Boy, in this case, and many others, Wikipedia (and others) says that the utility and applications don't exist because they don't exist. This is a purely mathematical variation of Schrödinger's equation that doesn't describe any quantum systems because it violates a basic postulate of quantum mechanics, the linearity of operators (including the Hamiltonian that produces evolution). Some people have tried to write down nonlinear Schrödinger equations to "explain" the measurement or the "collapse" of the wave functions but none of these papers makes any sense. – Luboš Motl Jan 29 '11 at 12:07

I want to say that Wikipedia clearly answers your question, and, deducing from your comment above, you are even aware of the answer that it gives you - but you seem to dislike the answer, right? The answer is correct, however. Such equations can't describe any physics at the fundamental level. – Luboš Motl Jan 29 '11 at 12:08

More misinformation and propaganda in these comments I'm afraid. The NLSE is a widely used tool in theoretical physics, independent of its application to questions of wavefunction collapse. This question is valid and relevant. +1 – user346 Jan 29 '11 at 13:08

I would be interested in reading one of these proposals about collapsing the wave function by adding a nonlinear term; could someone provide me a reference? – toot Jun 28 '12 at 10:02

4 Answers

Broadly speaking, nonlinear Schrödinger equations are used to describe wave propagation in nonlinear media. There are many examples, like deep-water waves (where the linear shallow water approximation to the Navier-Stokes equations is not valid), or nonlinear optics. There is for example an effect called the "Kerr effect" in optical fibers, where the refractive index depends on the intensity of the optical pulse. You can find a detailed derivation of a nonlinear Schrödinger equation describing electromagnetic waves in such a medium in the book

• M. J. Ablowitz, B. Prinari, A. D. Trubatch: "Discrete and Continuous Nonlinear Schrödinger Systems", Cambridge University Press, 2004

Edit: Fluids/Hydrodynamics: It is not easy to find a reference for the use of Schrödinger equations in hydrodynamics, maybe because most authors write for a target audience that is not fluent in quantum mechanics, I don't know. But here is one:

• Andrew Majda and Andrea Bertozzi: "Vorticity and Incompressible Flow", chapter 7.1, "Simplified Asymptotic Equations for Slender Vortex Filaments: The Self-Induced Approximation, Hasimoto's Transform and the Nonlinear Schrödinger Equation".

Thank you! Your reference is what I needed. And do you know a similar reference about a detailed derivation in fluids? – Boy Simone Jan 29 '11 at 12:48

@Boy Simone: I tried to answer your question by adding it to my primary answer instead of a comment, see above :-) – Tim van Beek Jan 29 '11 at 15:38

I didn't understand a word of what you said but +1 for a superb answer. – Trufa Jan 29 '11 at 16:56

An important practical application in nonlinear fiber optics is soliton waves. They are derived using nonlinear Schrödinger equations and are often used in telecommunications.
– mirk Aug 15 '12 at 12:46

The equation is also quite common in laser-plasma interaction problems (self-focusing). For a derivation you can read: ijens.org/Vol%2011%20I%2002/118802-3030%20IJBAS-IJENS.pdf – Shaktyai Aug 22 '12 at 21:23

You can get some more information about the derivation from the related Wikipedia article about the Gross-Pitaevskii equation, which is used in the context of Bose-Einstein condensation. There, one starts with a quantum field theory whose Hamiltonian is nonlinear in the fields, and replaces the field operator by a classical function that describes the condensate. So the reason why the same equation appears in so many different systems is that they are all described by an effective field theory with a nonlinear (fourth-order) interaction term. (The form of this effective theory is dictated by symmetry and the relevant degrees of freedom at low energy.) In the semiclassical approximation, this effective field theory reduces to the nonlinear "Schrödinger" equation.

Thanks for your answer! A thing: can you specify what you mean by "The form of this effective theory is dictated by symmetry and the relevant degrees of freedom at low energy"? Or can you give an example? In your Gross-Pitaevskii equation I have seen that we introduce the delta potential of interaction between bosons. And in the nonlinear Schrödinger equation, what is the origin of the "additional terms in the potential"? Or is it an effort, without specific physical origin, to simply see the effects of a term of that type? – Boy Simone Jan 29 '11 at 12:43

Dear Tomáši, that's great but if the function entering the equation describes a classically measurable condensate, then it is not a wave function and the equation is not a Schrödinger equation in the physical sense, is it? Moreover, the form of the field equations for this object won't mimic the non-relativistic Schrödinger equation - it will follow the classical field theories that are typically second-order in time etc. – Luboš Motl Jan 29 '11 at 12:59

@Lubos, if you are going to address people in comments, please append their names with an ampersand so that they are notified and have a chance to respond appropriately. – user346 Jan 29 '11 at 13:12

@space_cadet: you mean prepend the name with an at-sign? :) – Marek Jan 29 '11 at 16:11

@Luboš Motl: I have never said that the condensate function is a wave function of an actual quantum system, have I? Please notice that I put the "Schrödinger" in quotation marks. Personally I think that using the term nonlinear Schrödinger equation just leads to confusion, as is clear from your comments; the question is not whether the variable in that equation is a quantum wave function, I guess nobody sane claims that. As to the form of the field equations, you can get a nonrelativistic kinetic term (one time derivative) even in a fully relativistic context due to the dense medium - BEC. – Tomáš Brauner Jan 29 '11 at 21:46

The nonlinear Schrödinger equation (NLSE) in one dimension is $$ i\frac{\partial\psi}{\partial t}~+~\frac{\partial^2\psi}{\partial x^2}~+~f(|\psi|)\psi~=~0. $$ For $f(|\psi|)~=~|\psi|^2$ this is the cubic Schrödinger equation. This is related to a couple of relativistic equations, such as the quartic equation for the Higgs field. The nonlinear Dirac equation $i\gamma^\mu\partial_\mu\psi~+~({\bar\psi}\gamma^\mu \psi)\gamma_\mu\psi~=~0$ is the Thirring model of fermion condensates, related to BCS theory. This is a related nonlinear wave equation.
In general the NLSE describes the motion of quantum waves through nonlinear media, or some process where the locality of standard (quadratic, etc.) Lagrangians is replaced with some nonlocal field. A separable solution to the NLSE, $\psi(x,t)~=~u(x)v(t)$, may be found with $v(t)~=~\exp(-i\omega t)$; the spatial function is then given by the differential equation $u_{xx}~+~\omega u~+~f(|u|)u~=~0$, whose solution is defined implicitly by $$ \int\frac{du}{\sqrt{Au^2~-~2F(u)~+~B}}~=~C~\pm~x,\qquad F(u)~=~\int uf(|u|)du, $$ which is a traveling wave solution.

Sorry, but what do you mean by "nonlocal field"? – Boy Simone Jan 29 '11 at 14:15

I should have said theory or Lagrangian. A nonpolynomial potential or $f(\psi)$, or some function which is not defined at a point, can give a nonlocal field theory. – Lawrence B. Crowell Jan 29 '11 at 15:02

I've seen this used in the context of quantum simulations of bosonic systems. If you have a system of many identical bosons all in the same state, you can treat the squared magnitude of the single-particle wave function as a density for all of the particles. Then a single particle interacting with this density becomes a good model for a system of interacting bosons. See for example Yepez, Phys. Rev. Lett. 103, 084501 (2009).
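As a concrete, hedged illustration of the cubic NLSE discussed in this thread, the following editor's sketch propagates a bright soliton with a split-step Fourier method. The normalization i ψ_t + ψ_xx + 2|ψ|²ψ = 0 and its sech-shaped soliton are standard assumptions, not taken from the answers above:

```python
# Split-step Fourier propagation of i psi_t + psi_xx + 2|psi|^2 psi = 0.
# The bright soliton psi = sech(x) e^{it} should keep its shape.

import numpy as np

N, Lbox, T, steps = 1024, 40.0, 1.0, 2000
x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=Lbox / N)   # angular wavenumbers
dt = T / steps

psi = 1 / np.cosh(x)                        # bright-soliton initial condition
half_kinetic = np.exp(-1j * k**2 * dt / 2)  # exact linear step in Fourier space

for _ in range(steps):
    psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))  # half kinetic step
    psi *= np.exp(2j * np.abs(psi)**2 * dt)            # full nonlinear step
    psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))  # half kinetic step

# Deviation of |psi(T)| from the initial sech profile: should be small
print(np.max(np.abs(np.abs(psi) - 1 / np.cosh(x))))
```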
Nuclear Physics in a Nutshell
Carlos A. Bertulani
Series: In a Nutshell
Copyright Date: 2007
Edition: STU - Student edition
Pages: 488

Book Description: Nuclear Physics in a Nutshell provides a clear, concise, and up-to-date overview of the atomic nucleus and the theories that seek to explain it. Bringing together a systematic explanation of hadrons, nuclei, and stars for the first time in one volume, Carlos A. Bertulani provides the core material needed by graduate and advanced undergraduate students of physics to acquire a solid understanding of nuclear and particle science. Nuclear Physics in a Nutshell is the definitive new resource for anyone considering a career in this dynamic field. The book opens by setting nuclear physics in the context of elementary particle physics and then shows how simple models can provide an understanding of the properties of nuclei, both in their ground states and excited states, and also of the nature of nuclear reactions. It then describes: nuclear constituents and their characteristics; nuclear interactions; nuclear structure, including the liquid-drop model approach, and the nuclear shell model; and recent developments such as the nuclear mean-field and the nuclear physics of very light nuclei, nuclear reactions with unstable nuclear beams, and the role of nuclear physics in energy production and nucleosynthesis in stars. Throughout, discussions of theory are reinforced with examples that provide applications, thus aiding students in their reading and analysis of current literature. Each chapter closes with problems, and appendixes address supporting technical topics.

eISBN: 978-1-4008-3932-2
Subjects: Physics

Table of Contents
1. Front Matter (pp. i-vi)
2. Table of Contents (pp. vii-xiv)
3. Introduction (pp. 1-3)
The most accepted theory for the origin of the universe assumes that it resulted from a great explosion, soon after which the primordial matter was extremely dense, compressed and hot. This matter was mainly composed of elementary particles, such as quarks and electrons. As it expanded and cooled down, the quarks united to form heavier particles, called hadrons, which contain 3 quarks (baryons) or 2 quarks (mesons). The protons and neutrons (which are baryons) formed nuclei, and the electrons were captured in orbits around the nuclei forming atoms. The larger and heavier nuclei were created inside stars, which were formed...
4. 1 Hadrons (pp. 4-30)
The scattering experiments made by Rutherford in 1911 [Ru11] led him to propose an atomic model in which almost all the mass of the atom was contained in a small region around its center called the nucleus. The nucleus should contain all the positive charge of the atom, the rest of the atomic space being filled by the negative electron charges. Rutherford could, in 1919 [Ru19], by means of the nuclear reaction ${}_2^4{\rm{He + }}{}_7^{14}{\rm{N }}\ \to\ {\rm{ }}{}_8^{17}{\rm{O + p,}}$ (1.1) detect the positive charge particles that compose the nucleus, called protons. The proton, with symbol $p$, is the nucleus of the hydrogen atom; it has charge...
5. 2 The Two-Nucleon System (pp. 31-70)
The study of the hydrogen atom is relatively simple due to the fact that the Coulomb force between the proton and the electron is very well known.
The solution of this quantum problem resulted in the determination of a group of states of energy allowed for the system, permitting direct comparison with the measured values of the electromagnetic transitions between those states. Ever since, there has been great progress in understanding the hydrogen atom and atoms with many electrons. Nowadays, there are only small discrepancies between quantum theory and experimental data. Nuclear systems are much more complex than atomic ones. Already...
6. 3 The Nucleon-Nucleon Interaction (pp. 71-97)
The starting point for any dynamical description of a physical system is knowledge of the relevant degrees of freedom and of the interaction. In the previous chapters we have seen that nucleons are the basic components of nuclei. Their degrees of freedom are determined by the position r_i, momentum p_i, spin s_i, and isospin τ_i of the ith nucleon. For the interaction one first takes the simplest assumption that it is a two-body interaction that can be described by a potential. A further extension of the model introduces three- and many-body interactions for a deeper understanding of the many-body system...
7. 4 General Properties of Nuclei (pp. 98-118)
The basic properties of nucleons were presented in chapters 1, 2, and 3, together with the development of the deuteron theory. Our purpose in this and the following chapters is to study the physics of nuclei with any number A of nucleons, to establish the systematics of their properties, and to present the theories that aim to explain them. However, the approach we have followed for the deuteron is not applicable here. The Schrödinger equation is already not exactly soluble for a three-nucleon system, and to establish the properties of a heavy nucleus starting from the interaction of all its...
8. 5 Nuclear Models (pp. 119-169)
In the previous chapters we have talked about the impossibility of obtaining the properties of a system of A nucleons starting from its constituents and their underlying interactions, and it was clearly evidenced that there is a need to use models that represent some aspects of the real problem. The models are essentially of two classes. The first class of models assumes that the nucleons interact strongly in the interior of the nucleus and that their mean free path is small. This is a situation identical to that of molecules of a liquid, and the liquid drop model belongs to...
9. 6 Radioactivity (pp. 170-184)
The stable isotopes are located in a narrow band of the nuclear chart called the $\beta$-stability line, alongside of which nuclei unstable by ${\beta ^ + }$ or ${\beta ^ - }$ emission are located. For A > 150 the emission of an $\alpha$-particle is energetically favorable, and in this region one finds several $\alpha$-emitters. Heavy nuclei also release energy if divided in two nearly equal parts and can, for this reason, fission spontaneously. A radioactive substance, which contains some unstable isotope, is in permanent transformation by the action of one or more of these processes. The physics of each of them will be studied later...
10. 7 Alpha-Decay (pp. 185-194)
The emission of an $\alpha$-particle is a possible nuclear disintegration process in situations in which (5.12) is satisfied. In contrast with the restricted existence of emitters of light fragments, $\alpha$-emitter nuclei are abundant, largely due to the large binding energy of the $\alpha$-particle. In turn, the $\alpha$-emitting process is energetically advantageous in practically all nuclei with A ≳ 150.
Figure 7.1, based on the balance of masses, exhibits the energy available by emission of several nuclei from ²³⁹Pu. We see that $\alpha$-emission is the only energetically possible process. Very rarely one detects emission of heavier fragments, with A > 4. Examples are...
11. 8 Beta-Decay (pp. 195-217)
The most common form of radioactive disintegration is $\beta$-decay, detected in isotopes of practically all elements, with the exception, up to now, of the very heavy ones at the extreme of the chart of nuclides. It consists in the emission of an electron and an antineutrino (${\beta ^ - }$-decay) or in the emission of a positron and a neutrino (${\beta ^ + }$-decay), keeping the nucleus, in both cases, with the same number of nucleons, according to the equations $_{\rm{Z}}^{\rm{A}}{{\rm{X}}_N} \to _{{\rm{Z + 1}}}^{\rm{A}}{{\rm{Y}}_{N - 1}} + {e^ - } + v$ (8.1) $_{\rm{Z}}^{\rm{A}}{{\rm{X}}_N} \to _{{\rm{Z}} - {\rm{1}}}^{\rm{A}}{{\rm{Y}}_{N + 1}} + {e^ + } + v.$ (8.2) The mechanisms of $\alpha$- and $\beta$-emission differ in an essential aspect: whereas the nucleons that form the $\alpha$-particle already reside in...
12. 9 Gamma-Decay (pp. 218-257)
The quantum system of A nucleons that form the nucleus has, above its state of lowest energy (ground state), a large number of possible excited states that can be accessed if enough energy is given to the system. The transitions among these states, either through excitation or through de-excitation, are accomplished mainly through $\gamma$-radiation, which embraces a high energy region of the electromagnetic spectrum. This region is located basically between 0.1 MeV and 10 MeV, a 1 MeV $\gamma$-ray being of the order of 3 × 10⁵ times more energetic than violet light. Besides the energy released, another characteristic parameter is the...
13. 10 Nuclear Reactions—I (pp. 258-297)
The collision of two nuclei can give rise to a nuclear reaction where, similarly to a chemical reaction, the final products can be different from the initial ones. This process happens when a target is bombarded by particles coming from an accelerator or from a radioactive substance. It was in the latter way that Rutherford observed, in 1919, the first nuclear reaction produced in the laboratory, $\alpha {\rm{ + }}_7^{14}{\rm{N}}\; \to \;_8^{17}{\rm{O + p}},$ (10.1) using $\alpha$-particles coming from a ²¹²Bi sample and using as the target nitrogen contained in a reservoir. As in (10.1), other reactions were induced using $\alpha$-particles, the only projectile available initially. With...
14. 11 Nuclear Reactions—II (pp. 298-333)
We have already mentioned the existence of reactions that occur within a short duration of the projectile-target interaction. Several of these mechanisms of direct reaction are known. This reaction type becomes more probable as one increases the energy of the incident particle: the wavelength associated with the particle decreases and localized areas of the nucleus can be "probed" by the projectile. In this context, importance is placed on peripheral reactions, where only a few nucleons of the surface participate. These direct reactions happen during a time of the order of 10⁻²² s; reactions in which the formation of a compound nucleus...
15. 12 Nuclear Astrophysics (pp.
334-384) The hydrogen, deuterium, and most of the helium atoms in the universe are believed to have been created some 20 billion years ago in a primary formation process referred to as the Big Bang, while all other elements have been formed—and are still being formed—in nuclear reactions in the stars. These reaction processes can only be understood in an astrophysical context, as briefly outlined in this chapter, which also describes how nuclear science has provided much understanding about the universe, our solar system and our planets. The evolution of the universe is the object of study of cosmology... 16. 13 Rare Nuclear Isotopes (pp. 385-400) The study of nuclear physics demands beams of energetic particles to induce nuclear reactions on the nuclei of target atoms. It was from this need that accelerators were born. Over the years nuclear physicists have devised many ways of accelerating charged particles to ever increasing energies. Today we have beams of all nuclei from protons to uranium ions available at energies well beyond those needed for the study of atomic nuclei. This basic research activity, driven by the desire to understand the forces that dictate the properties of nuclei, has spawned a large number of beneficial applications. Among its many... 17. Appendix A Angular Momentum (pp. 401-418) 18. Appendix B Angular Momentum Coupling (pp. 419-431) 19. Appendix C Symmetries (pp. 432-439) 20. Appendix D Relativistic Quantum Mechanics (pp. 440-458) 21. Appendix E Useful Constants and Conversion Factors (pp. 459-460) 22. References (pp. 461-468) 23. Index (pp. 469-473)
Taken111111 14 points 5 months ago
"I would rather be a brainless beast than a heartless monster." Goku, Frieza arc

krazyalbert 6 points 5 months ago
Absolutely! 😇

valschermjager 12 points 5 months ago
It's well known that Feynman would say things he didn't strictly believe, for dramatic effect, and to help make a point that wasn't always literally apparent. His working-class NYC accent only helped make the expression of his ideas more accessible, or at least feel that way. Anyone who's watched his lectures and read his stuff knows very well that he believed it's our job as adults today, with respect both to those who came before us and to the future adults who are due to come after us, to do both: to continue pushing on those questions that have not yet been answered, and to always question those answers that others offer. Science requires the relentless pursuit of trying to prove yourself wrong (also from Feynman, I believe; I may be paraphrasing). In short, there is absolutely no reason to believe that this is an either-or proposition. Our respect for history and our obligation to the future demand that we always consider both, and work hard to move the ball forward.

CAustin3 2 points 5 months ago
What's the either-or, though? Unless you're part of a group of political extremists or religious dogmatists or something, what's the good of "answers that can't be questioned"? To paraphrase Feynman, part of the reason he taught the introductory physics class (and took such passion in doing so) was because freshmen with blank slates sometimes asked challenging questions that someone too mired in the culture of science would be blind to. He was a showman to be sure, but this quote seems to pretty deeply reflect his accomplishments and philosophy. Successful challenges to the most sacred of cows are the story behind just about every significant breakthrough in physics, including his own.

valschermjager 1 point 5 months ago
Yeah, I mean, those are good points of course, and even that RF quote didn't actually state it as a strict this-or-that. But he did position one as better than the other, when I don't actually think he believed that, literally. I think he was making a point to provoke thought, and quite well, as usual. I didn't know that about why he taught undergrads. That's awesome. Thanks for sharing that. Also, I didn't mean to imply that answers shouldn't be questioned. That's what I was trying to say with the "scientists should always try to prove themselves wrong" thing. I wish more of us would adopt that approach, actually. It seems more often that when people form an opinion, they put more energy into searching for things to prove themselves right. Which is pointless, because regardless of what you believe, that's typically not hard to do.

Maybe_Neo 2 points 5 months ago
I would rather shit in the sink than sink in the shit.

igo4vols2 1 point 5 months ago
Not Feynman. As usual.

optiongeek -10 points 5 months ago
Try questioning something in physics, like, say, whether the Schrödinger equation is correct. There is no field as dogmatic as physics.

_why_do_U_ask 1 point 5 months ago
Physics is science, and science should always be questioned.

optiongeek 2 points 5 months ago
And yet the dogmatics are out in force, finger on the downvote button, ready to silence anyone who actually tries to question. That's my point.

_why_do_U_ask 0 points 5 months ago
The downvoting is pure childishness.

Silamoth 5 points 5 months ago
This is completely untrue. Even in the past century, physics has undergone several major revolutions. Einstein's discovery of special and general relativity fundamentally changed how we view physics. Then the discovery of quantum phenomena and the revolution of quantum mechanics ruptured physics again. Even now, there are a lot of major unresolved problems in physics, like the unification of general relativity with quantum mechanics to create a theory of quantum gravity. This will almost certainly change our understanding on a deep level. Physics isn't at all dogmatic. That's not how progress is made in physics, or really in science in general. Questioning is absolutely encouraged, as that's how new discoveries are made. Of course, those questions must be honest and in good faith, not crackpot "theories" that display a fundamental ignorance of physics.

optiongeek -3 points 5 months ago
What if I told you it's possible to create a form of hydrogen that turns paramagnetic in molecular form? And that the theory that predicted this over a decade ago is inconsistent with Schrödinger? You'd think I was a crackpot.

Silamoth 1 point 5 months ago
I'd ask you for evidence, of course. Have you done such a thing? Can you replicate it? Are there any reputable publications regarding this idea? And keep in mind I'm no physics expert; have you brought this to any experts or tried to get it peer-reviewed (wherein it can be judged by experts)?

optiongeek 1 point 5 months ago
There are plenty of peer-reviewed papers I could show you. Several are referenced in the following link. However, part of the "dogma" involved in physics is the very notion of insisting that only papers that appear in "high quality" journals may be discussed in polite company. Read the following paper and make up your own mind. The author, W. Hagen, is one of the world's leading experts on a scientific technique called Electron Paramagnetic Resonance, which is a very mainstream tool used to analyze materials. Any well-equipped lab will have an EPR device. And Hagen literally wrote the textbook on how to do EPR.

alexmijowastaken -1 points 5 months ago
yeah cause you are

optiongeek -1 points 5 months ago
Tell me what's wrong with Hagen's analysis. Or just admit your resistance is borne out of adherence to dogma. (Let the record show this comment was downvoted before enough time had elapsed to have read the paper. Dogmatic.)

CAustin3 1 point 5 months ago
It's interesting that you bring up the Schrödinger equation and its analytical uses as an example of dogma in physics when looking at a Feynman quote. Do you know what a Feynman diagram is, or what he won his Nobel Prize for? Are you trolling? It'd be like bringing up Newton when looking at an Einstein quote about questioning dogma.

optiongeek 0 points 5 months ago
Dogma is dogma, whether it's Catholic or Islamic. Just different ways to assuage the unbearable awfulness of not understanding the universe.

cojoco -1 points 5 months ago
Feynman was around a long time ago. But there is heaps of evidence for the Schrödinger equation. You'll have more fun questioning some of physics' sacred cows for which there is little direct evidence, such as dark matter and string theory.

Kyndlyon 0 points 5 months ago
Without faith-based presuppositions, rational thought is impossible.

SenorBurns 0 points 5 months ago
Yeah, Feynman didn't say this. However, it is an aphorism so old and ubiquitous that it goes back centuries, and it's a great way to frame philosophy (which includes science) versus religion. One is based on critical thinking, the other on faith.
Spintronics | School of Mathematics and Statistics

Spintronics is a hugely important scientific field which attempts to exploit the spin of the electron in addition to its electric charge. The field spans a large area of scientific endeavour, from theoretical physics to industry, and is a sub-discipline of nanotechnology. The subject area began in the early 1990s, after Albert Fert and Peter Grünberg (who later received the 2007 Nobel Prize for their finding) discovered that the electrical resistance of certain magnetic nanostructures could change drastically in response to an applied magnetic field. The effect, called giant magnetoresistance, relies on the conservation of the spin direction of electrons flowing through the structure. This discovery was quickly developed commercially, and today all hard disk drives employ read heads which use this effect. In fact, the modern-day hard-disk read head is the first commercial device to utilise the spin of an electron rather than its electric charge.

Other spintronic devices currently being developed include novel types of ultra-fast non-volatile memory and spin-based transistors and logic gates that use electron spin for information processing. Such devices may work with virtually no applied current, superseding present-day electronic technology, and might provide a basis for scalable quantum computers.

Within the department, the work on spintronics focuses on both fundamental aspects of spin transport in magnetic nanostructures and the modelling of structures with real-world applications. The work involves solving the Schrödinger equation for both simple models and realistic structures, and employs both analytical pencil-and-paper calculations and high-powered computational modelling. For more information about this research area, see Andrey Umerski's research pages or the article on Spintronics and Applications to Hard Disks.
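To make the spin dependence concrete, here is a minimal sketch of the two-current (Mott) resistor model commonly used to explain giant magnetoresistance. This is not code from this research group, and the channel resistances below are illustrative assumptions, not measured values.

```python
# Two-current (Mott) model of giant magnetoresistance: spin-up and
# spin-down electrons traverse the multilayer as independent parallel
# channels, each seeing a different resistance in each magnetic layer.

def parallel(r1, r2):
    """Combined resistance of two conduction channels in parallel."""
    return r1 * r2 / (r1 + r2)

R_low, R_high = 1.0, 4.0  # assumed resistances for majority/minority spins

# Parallel alignment: one spin channel sees the low resistance in both layers.
R_P = parallel(R_low + R_low, R_high + R_high)

# Antiparallel alignment: every electron sees the high resistance in one layer.
R_AP = parallel(R_low + R_high, R_high + R_low)

print(f"R_parallel     = {R_P:.3f}")
print(f"R_antiparallel = {R_AP:.3f}")
print(f"GMR ratio      = {(R_AP - R_P) / R_P:.1%}")
```

In the parallel configuration one spin channel effectively short-circuits the stack, so the overall resistance is low; in the antiparallel configuration both channels are partially blocked and the resistance rises, which is the resistance change the read head detects.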
Chapter 9: Scattering in One Dimension

We now consider another one-dimensional problem, the scattering problem. In doing so we need to consider scattering-type solutions and what they mean. For standard scattering situations, the wave functions we use are usually those valid for regions of constant potential energy, such as complex exponentials (plane waves) when E > V0 and real exponentials when E < V0.¹

¹There is one other possibility that is not often considered. If E = V0, the Schrödinger equation yields a linear solution.
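As a quick illustration of these solution types, the sketch below evaluates the standard textbook reflection probability for a particle incident on a potential step. This is a generic example rather than code from the Physlet materials; the units (ħ = m = 1) and the energies chosen are assumptions made for illustration.

```python
import numpy as np

# Scattering from a potential step of height V0 (standard textbook result,
# in units where hbar = m = 1): plane waves on both sides when E > V0,
# a decaying real exponential (total reflection) when E < V0.
V0 = 1.0

def reflection_coefficient(E, V0):
    """Probability of reflection for a particle incident on a step."""
    k1 = np.sqrt(2 * E)              # wave number in the incident region
    if E > V0:
        k2 = np.sqrt(2 * (E - V0))   # transmitted plane wave for E > V0
        return ((k1 - k2) / (k1 + k2)) ** 2
    return 1.0                       # E < V0: evanescent tail, R = 1

for E in (0.5, 1.5, 4.0):
    R = reflection_coefficient(E, V0)
    print(f"E/V0 = {E / V0:<4}  R = {R:.4f}  T = {1 - R:.4f}")
```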
## Znidarsic, Frank P.E., Part 2

Online preview of the history of "Zero Point Technologies" (open the original page links to see the numerous pictures and videos):

(see picture on original webpage) This author conducted a cryogenic zero point energy experiment. The results were published by Hal Fox in New Energy News, vol. 5, p. 19.

In the mid-1970s Ray Frank (the owner and president of the Apparatus Engineering Company, one of my former employers) assigned this author the task of building a ground monitoring relay. In an effort to complete this assignment, I began to experiment with coils, current transformers, and magnetic amplifiers. I succeeded in developing the device. We sold many hundreds of them to many mining companies. I applied the knowledge that I gained to the design of an electronic levitational device. To my dismay, I discovered that no combination of electrical coils would induce a gravitational field.

In the mid-1980s, a friend, Tom C. Frank, gave me the book "The Quest for Absolute Zero" by Kurt Mendelssohn. In his book, Mendelssohn disclosed that the relationship between the forces changed at cryogenic temperatures. This was the clue that I needed. Things began to come together. In 1989, I wrote my first book on the subject, "Elementary Antigravity". This book caught the eye of Ronald Madison, a far-sighted manager at the Pennsylvania Electric Company (my past employer). In 1991, Ron persuaded me to go to Texas and visit with Dr. Harold Puthoff. Puthoff's work is based on the ideas of Andrei Sakharov. My work is based on the work of Kurt Mendelssohn. It is truly astounding that Puthoff and I, each following separate paths, independently arrived at almost the same conclusions. Prior to meeting Puthoff, I knew that the relationship between the forces changed in condensed cryogenic systems. Puthoff explained that the phenomenon that I had discovered was the zero point interaction. Zero point interactions have now been discovered in other, non-cryogenic condensed systems. Zero point phenomena are exhibited in cold fusion cells and in the gravitomagnetic effects produced by rotating superconductors.

In 1989 this author came out with his first book, "Elementary Antigravity". In chapter 10 of this book the relationship between the forces within cryogenic systems was examined. In 1996 an intermediate version of this material was published in the Journal of New Energy, Vol. 1, No. 2. The remainder of this chapter is essentially a rewrite of these two works. Now as then, the study of cryogenic phenomena is very instructive. A better understanding of all zero point technologies can be gained through the study of cryogenic phenomena.

"Zero point energy is the energy that remains after a substance is cooled to absolute zero." Dr. Hal Puthoff

It is well known that superconductors offer no resistance to electrical currents. Less well known, but even more amazing, are the low temperature superfluids. These fluids flow without friction. Once set into motion, they never slow down. Quantum interactions are limited to atomic distances in normal substances. In superconductors and superfluids quantum interactions are observed on a macroscopic scale. The normal interaction of the magnetic and electric field is very different in a superconductor. In normal conductors changing fields are required to induce other fields.
In superconductors static fields can also induce other fields.¹ At the root of these effects lies a dramatic change in the permittivity and permeability of a superconductor (electron condensation). Ordinarily this change only affects the electromagnetic field. This author has developed techniques to coerce the gravitational and nuclear forces to participate in the condensation. A new look at the electromagnetic effects will lead to a deeper understanding of the zero point interaction.

The electric field of an isolated electric charge is that of a monopole. Dr. George Mathous, instructor at the Indiana University of Pennsylvania, commonly described the electric field of an isolated charge by stating, "The field drops off with the square of the distance and does not saturate." In lay words this means that the field diverges outward and extends to infinity. It is instructive to look at the range and the strength of the electric field. The range of the field associated with an isolated electric charge is infinite. Isolation requires resistance. The electrical resistance of a superconductor is zero. No isolated charges can exist within a superconductor. The infinite permittivity of a superconductor confines the electric field within the superconductor. No leakage flux escapes. The maximum range of the electric field equals the length of the superconductor.

Magnetic flux lines surround atoms and nucleons. The length of the shortest of these flux lines is measured in fermis (femtometers). The magnetic permeability of a superconductor is zero. All magnetic flux lines are expelled. This phenomenon is known as the Meissner effect. The minimum range of the magnetic field equals the circumference of the superconductor.

The permittivity and permeability of superconductors also affect the quantum forces. The quantum forces normally have a very short range of interaction. This range is confined to atomic dimensions. Quantum interactions are observed on a macroscopic scale in superconductors and superfluids. Superconductors only accept currents that are integer multiples of one another. Superfluid helium will spin in a small cup only at certain rotational speeds. These low temperature phenomena vividly demonstrate that the range of the quantum interaction has increased to macroscopic dimensions.

(Audio clip on the original webpage: a quote from the lecture at the University of Illinois, September 1999.)

(A main point. Chart on the original webpage: how the range of force interaction changes in a vibrationally reinforced condensate.)

Moses Chan and Eun-Seong Kim cooled and compressed helium and discovered a new phase of matter. They produced the first supersolid.⁸ The results surprised the physics community. Parts of the solid mass passed through other parts of the solid mass without friction. Friction is produced by the interaction of the electrical forces that bind solids together. These electrical forces are known as Coulombic potentials. The individual Coulombic potentials of normal matter become smoothed out in a supersolid. The electrical forces act in unity to produce a smooth frictionless surface. Nuclear fusion is regulated by the Coulombic potential of the nucleus. The Coulombic potential of the nucleus is also smoothed out in certain Bose condensates. Nuclear fusion can progress by unusual routes in these condensates.

This author attended the American Nuclear Society's Low Level Nuclear Reaction Conference in June of 1997. The conference was held at the Marriott World Trade Center in Orlando.
At that conference James Patterson presented his new composite beads. These beads reduce the radioactivity of nuclear waste. George Miley described his discovery of heavy element transmutations within the CETI beads. The discovery of heavy element transmutations has given the field a new name. It is no longer called "cold fusion." The process is now called "low level nuclear transmutation".

The nucleus is surrounded by a strong positive charge. This charge strongly repels other nucleons. Conventional wisdom has it that the only way to get two nucleons to fuse together is to overcome this repulsive effect. In hot fusion scientists have been attempting (for 50 years now) to fuse nucleons together by hurling them at each other at high speed. The nucleons must obtain at least ten thousand electron volts of kinetic energy to overcome the electrostatic barrier. The process of surmounting the repulsive electrostatic barrier is akin to traveling swiftly over a huge speed bump. In the case of the speed bump, a loud crash will be produced. In the case of the electrostatic barrier, gamma and X-rays are produced. In conventional hot fusion huge quantities of radiation are given off by this process. If the Patterson cell worked by this conventional process, everyone near it would have been killed. ...

During the conference in Orlando, Professor Heinrich Hora, Emeritus Professor of the University of New South Wales, presented his theory of how the electrostatic barrier was being overcome. Hora said that the repulsive positive charges of the nucleons were "screened" by a negatively charged electron cloud.² Dr. Hora's theory cannot explain the lack of radiation. In his model the nucleons must still pass over the electrostatic potential barrier. When they do, high energy signatures will be produced. If the range of the strong nuclear force increased beyond the electrostatic potential barrier, a nucleon would feel the nuclear force before it was repelled by the electrostatic force. In this situation nucleons would pass under the electrostatic barrier without producing any radiation. Could this author's original idea, that electron condensations increase the range of the nuclear forces, be correct?

Since the Orlando conference several new things have come to light.

1. It is now known that John J. Ruvalds discovered high temperature thin film nickel hydrogen superconductors. Light water cold fusion cells (the CETI cell) are thin film nickel hydrogen structures. Patent number 4,043,809 states:

"High temperature superconductors and method. ABSTRACT: This invention comprises a superconductive compound having the formula Ni(1-x) M(x) Z(y), wherein M is a metal which will destroy the magnetic character of nickel (preferably copper, silver or gold); Z is hydrogen or deuterium; x is 0.1 to 0.9; and y, correspondingly, 0.9 to 0.1; and a method of conducting electric current with no resistance at the relatively high temperature of T > 1 K, comprising a conductor consisting essentially of the superconducting compound noted above."

This patent was issued on August 23, 1977, long before cold fusion was discovered. The bulk of the nickel hydrogen material becomes superconductive at cryogenic temperatures; however, this author believes that small isolated areas of superconductivity exist within the material at room temperature.

2. F. Celani, A. Spallone, P. Tripodi, D. Di Gioacchino, and S. Pace (INFN Laboratori Nazionali di Frascati, via E. Fermi 40, 00044 Frascati, Italy) discovered superconductivity in palladium deuterium systems.
"... A wire segment (1/4 of the total, the most cathodic) showed a very low resistance behavior in some tests (corresponding to R/R0 values much less than 0.05, and in one case less than 0.01) ..."

It appears that the palladium deuterium structure is a room temperature superconductor. Heavy water cold fusion cells are constructed of palladium impregnated with deuterium (deuterium is heavy hydrogen).

3. "Superconductors have no need to be negative", New Scientist, issue 2498, May 2005: "Now physicist Julian Brown of the University of Oxford is arguing that protons can also form pairs and sneak through the metallic lattice in a similar manner. In theory, the protons should superconduct, he says."

It's now known that cold fusion cells contain small superconductive regions.³,⁴ Nuclear reactions proceed in these regions after thermal energy is added to the system. The thermal vibrations invite protons to participate in the condensation. The permeability and the permittivity of the condensation now affect the nuclear forces. This author contends that the range of the strong nuclear force extends beyond the range of the electrostatic potential barrier. The increase in range allows nuclear transmutations to take place without radiation.

The Griggs Machine and Potapov's Yusmar device have been claimed to produce anomalous energy. The conditions inside a cavitational bubble are extreme and can reach tens of thousands of degrees C at pressures of 100 million atmospheres or more.⁵ These horrific pressures and temperatures are still several orders of magnitude too small to drive a conventional hot fusion reaction. In January of 1998 P. Mohanty and S.V. Khare of Maryland reported ("Sonoluminescence as a Cooperative Many Body Phenomenon", Physical Review Letters, Vol. 80, No. 1, January 1998): "... The long range phase correlation encompassing a large number of component atoms results in the formation of a macroscopic quantum coherence ..." A superconductor is a macroscopic quantum coherence. This author believes that the condensed plasma within a cavitation bubble is superconductive. Cavitational implosions produce extreme shock. This shock invites the nuclear force to participate in the condensation. The range of the nuclear force is increased. Nuclear transmutations proceed within the condensation.

It was believed, during the first half of the 20th century, that antigravity would be discovered shortly. This never happened. By the second half of the 20th century, mainstream scientists believed that the unification of gravity and electromagnetism could only be obtained at very high energies. These energies would forever be beyond the reach of man's largest accelerators. Antigravity was relegated to the dreams of cranks. In 1955 Major Donald E. Keyhoe wrote in "The Flying Saucer Conspiracy", pages 252-254: "Even after Einstein's announcement that electricity, magnetism, and gravity were all manifestations of one force, few people had fully accepted the thought that we might someday neutralize gravity. ... I still had no idea how such a G-field could be created."

In the last decade of the 20th century, Podkletnov applied mechanical shock to a superconductor. A gravitational anomaly was produced.⁶ Znidarsic wrote that the vibrational stimulation of a Bose condensate adjoins the gravitational field with the condensate. The gravitational force is then affected by the permittivity and permeability of the condensate.
The range of the gravitational interaction decreases by the same order of magnitude that the range of the nuclear force has increased. The strength of the gravitational field within the condensate greatly increases. Gravitomagnetic flux lines are expelled. This takes place at low energies. It has to do with the path of the quantum transition. The relationship will be qualified in later chapters. Hopefully this idea will someday be universally recognized and antigravity will finally become a reality.

On July 12, 1998 the University at Buffalo announced its discovery: CARBON COMPOSITES SUPERCONDUCT AT ROOM TEMPERATURE.

"LAS VEGAS — Materials engineers at the University at Buffalo have made two discoveries that have enabled carbon-fiber materials to superconduct at room temperature. The related discoveries were so unexpected that the researchers at first thought that they were mistaken. Led by Deborah D.L. Chung, Ph.D., UB professor of mechanical and aerospace engineering, the engineers observed negative electrical resistance in carbon-composite materials, and zero resistance when these materials were combined with others that are conventional, positive resistors. ... This finding of negative resistance flies in the face of a fundamental law of physics: opposites attract. Chung explained that in conventional systems, the application of voltage causes electrons — which carry a negative charge — to move toward the high, or positive, end of the voltage gradient. But in this case, the electrons move the other way, from the plus end of the voltage gradient to the minus end. ... 'In this case, opposites appear not to attract,' said Chung. The researchers are studying how this effect could be possible. ... A patent application has been filed on the invention. Previous patents filed by other researchers on negative resistance have been limited to very narrow ranges of the voltage gradient. In contrast, the UB researchers have exhibited negative resistance that does not vary throughout the entire gamut of the voltage gradient."⁷

Electrical engineers know that when electrons "move toward the high, or positive end, of the voltage gradient" power is produced. Have the University at Buffalo scientists discovered how to produce electricity directly from a zero point process?

The ranges of the electric and magnetic fields are strongly affected by a superconducting Bose condensate. An element of shock invites nuclear and gravitational participation. The shock produces vibration. The vibration lowers the elasticity of the space within the condensate. The reduced stiffness is expressed in several ways. The ranges of the natural forces tend towards the length of the superconductor. The strength of the forces varies inversely with their range. The constants of the motion tend toward the electromagnetic. The effect of the vibration is qualified in Chapters 10 and 11. The development, or reduction to practice, of zero point technologies will be of great economic and social importance.

1. K. Mendelssohn, "The Quest for Absolute Zero", McGraw-Hill, New York, 1966.
2. Hora, Kelly, Patel, Prelas, Miley, and Tompkims, "Screening in cold fusion derived from D-D reactions", Physics Letters A, 1993, 138-143. Dr. George Miley's "Swimming Electron Theory" is based on the idea that electron clusters (a form of condensation) exist between the metallic surfaces of cold fusion electrodes.
3. Cryogenic phenomena are commonly associated with the spin pairing of electrons.
The Chubb-Chubb theory points out the fact that electrons pair in the cold fusion process.
4. A. G. Lipson, et al., "Generation of the Products of DD Nuclear Fusion in High-Temperature Superconductors YBa2Cu3O7-x Near the Superconducting Phase Transition", Tech. Phys., 40 (no. 8), 839 (August 1995).
5. "Can Sound Drive Fusion in a Bubble", Robert Pool, Science, vol. 266, 16 Dec 1994.
6. "A Possibility of Gravitational Force Shielding by Bulk YBa2Cu3O7-x Superconductor", E. Podkletnov and R. Nieman, Physica C 203 (1992), pp. 441-444. "Tampere University Technology report", MSU-95 chem, January 1995. "Gravitoelectric-Electric Coupling via Superconductivity", Torr, Douglas G. and Li, Ning, Foundations of Physics Letters, Aug 01 1993, Vol. 6, No. 4, p. 371.
7. Companies that are interested in technical information on the invention should contact the UB Office of Technology Transfer at 716-645-3811 or by e-mail. "Apparent negative electrical resistance in carbon fiber composites", Shoukai Wang, D.D.L. Chung, Composites, September 1999.
8. "Probing Question: What is a supersolid?", May 13, 2005.

// End of chapter 4

# CHAPTER #5, GENESIS

A version of this chapter was published in Infinite Energy, Vol. 1, #5 & #6, 1996.

- 1 - Edward P. Tryon, Nature, Vol. 246, December 14, 1973.
- 2 - Technically, nothing can exist outside of the universe. The universe is a closed structure in which, according to the cosmological principle, all positions are equivalent. The model presented in this paper, in which an object falls from an infinite distance away to the edge of the universe, does not represent reality. The model does, however, allow for the calculation of the negative gravitational potential shared by all objects within the universe.
- 3 - "...the Universe is, in fact, spherical...", Lisa Melton, Scientific American, July 2000, page 15.
- 4 - Fritz Zwicky proposed that 90% of the matter in the universe is "dark" in 1933. He came to this conclusion from the study of clusters of galaxies. Vera Rubin confirmed that 90% of the universe's matter is composed of the so-called "dark matter" from her study of the rotational speeds of galaxies in 1977. Solgaila and Taytler found that the universe contains 1/4 proton of ordinary matter and 8 protons of dark matter per cubic meter. "New Measurements of Ancient Deuterium Boost the Baryon Density of the Universe", Physics Today, August 1996, page 17.
- 5 - A satellite gyroscope experiment actually measured the gravitomagnetic field. Schwinger, "Einstein's Legacy", page 218, The Scientific American Library.
- 6 - "Inertia is a much-discussed topic. The Graneaus give a beautiful account in which they relate theories of inertia back to those of Mach at the beginning of the century. They view inertia as the result of the gravitational interaction of all particles in the universe on the body in question...", Professor John O'M. Bockris, Journal of Scientific Exploration, Spring 1995.
- 7 - The genesis process may currently be creating particles in interstellar space. "Energy from Particle Creation in Space", Cold Fusion (12/94), No. 5, page 12; Wolff, Milo. Paul Davies and John Gribbin, "The Matter Myth", Touchstone Publishing, 1992.
- 8 - Hal Puthoff, Physical Review A, March 1989; Hal Puthoff and D.C. Cole, Physical Review E, August 1993; Hal Puthoff, "Squeezing Energy From a Vacuum", OMNI, 2/91. Puthoff manufactured very dense plasma while working with Jupiter Technologies, 1990.
"Compendium of Condensed-Charge Technology Documentation", internal report, Jupiter Technologies, 1989. A patent on the process was obtained by Ken Shoulders, #5,018,180.
- 9 - A ball lightning experiment in Japan appeared to produce excess energy. Y.H. Ohtsuki & H. Ofuruton, Nature, Vol. 246, March 14, 1991.
- 10 - Dr. McKubre's cold fusion experiments at SRI in the USA continue to produce unexplained excess energy. Jerry E. Bishop, The Wall Street Journal, 7/14/94.
- 11 - Dr. Miley of the Hot Fusion Studies Lab at the University of Illinois developed his "Swimming Electron Theory". This theory shows that high density electron clouds exist in the CETI cold fusion cell.
- 12 - Andrei Sakharov, Soviet Physics Doklady, Vol. 12, May 1968, page 1040.
- 13 - V. Arunasalam, "Superfluid Quasi Photons in the Early Universe", Physics Essays, Vol. 15, No. 1, 2002.
- 14 - See Chapter 12, page 1. "Is Radiation of Negative Energy Possible?", F.X. Vacanti and J. Chauveheid, Physics Essays, Vol. 15, No. 4, 2002.

Additional reading on synthetic life:
- "In the Business of Synthetic Life", James J. Collins, Scientific American, April 2005.
- "Yikes! It's Alive!", Bennett Daviss, Discover, December 1990.
- In 1953 in Chicago, Stanley Miller produced the first synthetic amino acids; since then scientists have manufactured synthetic proteins.

// End of chapter 5

The principles upon which modern science is based can be traced back to the original notions of ancient philosophers. The greatest of these early philosophers was Aristotle (384-322 BC). Aristotle developed his ideas from within, through a process of introspection. His ideas are based on the concepts of truth, authenticity, and perfection. The conclusions he came to form the basis of western culture and were held as the absolute truth for nearly 2,000 years. Aristotle devised a planetary system that placed the earth at the center of the universe. In the second century A.D. Aristotle's system was revised by Ptolemy. In this system, the stars and planets are attached to nine transparent crystalline spheres, each of which rotates above the earth. The ninth sphere, the primum mobile, is the closest to heaven and is, therefore, the most perfect.

(Animated GIF on the original webpage.)

Aristotle's universe is composed of four worldly elements, earth, fire, water, and air, and a fifth element which is pure, authentic, and incorruptible. The stars and planets are composed of this fifth element. Due to its proximity to heaven, this fifth element possesses God-like properties and is not subject to the ordinary physical laws. In the seventeenth century, Aristotle's teachings were still considered to be a fundamental truth. In 1600, William Gilbert published his book "The Magnet". Little was known of magnetism in Gilbert's time except that it was a force that emanated from bits of lodestone. Aristotle's influence upon Gilbert is apparent from Gilbert's conclusion that magnetism is a result of the pure, authentic nature of lodestone. Gilbert also claimed that the earth's magnetic field was a direct result of the pure, authentic character of the deep earth. In some ways, according to the philosophy of Gilbert, the heavens, the deep earth, and lodestone were close to God.

In 1820, Hans Christian Oersted of the University of Copenhagen was demonstrating an electric circuit before his students. In Oersted's time electrical current could only be produced by crude batteries.
His battery consisted of 20 cups, each containing a dilute solution of sulfuric and nitric acid. In each cup he placed a copper and a zinc electrode. During one of the experiments he placed a compass near the apparatus. To the astonishment of his students, the compass deflected when the electric circuit was completed. Oersted had discovered that an electric current produces a magnetic field. In the nineteenth century, experiments and discoveries like those of Oersted began to overturn the long-held ideas of Aristotle. Gilbert's idea that magnetism is due to a God-like influence was also brought into doubt.

In 1824, Michael Faraday argued that if an electrical current affects a magnet, then a magnet should affect an electrical current. In 1831, Faraday wound two coils on a ring of soft iron. He imposed a current on coil #1 and he knew, from the discovery of Oersted, that the current in coil #1 would produce a magnetic field. He expected that this magnetic field would impose a continuous current in coil #2. The magnetic field did impose a current, but not the continuous current that Faraday had expected. Faraday discovered that the imposed current in coil #2 appeared only when the strength of the current in coil #1 was varied.

The work of Oersted and Faraday in the first half of the nineteenth century was taken up by James Clerk Maxwell in the latter half of the same century. In 1865, Maxwell wrote a paper entitled "A Dynamical Theory of the Electromagnetic Field". In the paper he developed the equations that describe the electromagnetic interaction. These equations, which are based on the concept of symmetry, quantify the symmetrical relationship that exists between the electric and magnetic fields. Maxwell's equations show that a changing magnetic field induces an electrical field and, conversely, that a changing electrical field (a current) induces a magnetic field. Maxwell's equations are fundamental to the design of all electrical generators and electromagnets. As such, they form the foundation upon which the modern age of electrical power was built. Maxwell was the first to quantify a basic principle of nature. In particular, he showed that nature is constructed around underlying symmetries. The electric and magnetic fields, while different from each other, are manifestations of a single, more fundamental force. The ideas of Aristotle and Gilbert about the pure, the authentic, and the incorruptible were replaced by the concept of symmetry. Since Maxwell's discovery, the concept that nature is designed around a deep, underlying symmetry has proven to be true time and time again. Today most advanced studies in the field of theoretical physics are based upon the principle of symmetry.

In 1687, Isaac Newton published his book "The Principia", in which he spelled out the laws of gravitation and motion. In order to accurately describe gravity, Newton invented the mathematics of calculus. He used his invention of calculus to express the laws of nature. His equations attribute the gravitational force to the presence of a field. This field is capable of exerting an attractive force. The acceleration produced by this force changes the momentum of a mass. In 1915, Albert Einstein published his General Theory of Relativity. The General Theory of Relativity is also a theory of gravity. Einstein's theory, like the theory of Newton, also demonstrates that gravitational effects are capable of exerting an attractive force. This force can change the momentum of a mass.
Einstein's theory, however, goes beyond Newton's theory in that it shows that the converse is also true. Any force which results in a change in momentum will generate a gravitational field. Einstein's theory, for the first time, exposed the symmetrical relationship that exists between force and gravity. This concept of a gravitational symmetry was not, as in the case of the electromagnetic symmetry, universally applied. In the twentieth century, it was applied in a limited fashion by physicists in the study of gravitational waves.

(Diagram on the original webpage: the symmetrical relationship between the original and induced fields.)

In 1989, this author wrote his first book, "Elementary Antigravity". In this book he revealed a modified model of matter. This model is based on the idea that unbalanced forces exist within matter and that these forces are the source of the gravitational field of matter. In this present work, the symmetrical relationship that exists between the forces is fully developed. The exposure of these relationships will lead to technical developments in the fields of antigravity and energy production. These developments will parallel the developments in electrical technology that occurred following Maxwell's discovery of the symmetrical electromagnetic relationship. In review, a changing electric field (a current) induces a magnetic field and, conversely, a changing magnetic field induces an electric field. Likewise, a gravitational field induces a force and, conversely, a force induces a gravitational field. The relationship between force, gravity, and the gravitomagnetic field has been known for 100 years. This author was the first to place force in a model of matter. This author's work is fundamental to the development of zero point technologies. This author's work on the force/gravity symmetry was published in Infinite Energy, Vol. 4, Issue 22, November 1998.

To get an idea of the magnitude of the force required to produce a gravitational field, consider the gravitational field produced by one gram of matter. The amount of gravity produced by one gram of matter is indeed tiny. Now assume that this one gram of matter is converted into energy. A vigorous nuclear explosion will result. If this explosion is contained within a vessel, the outward force on the vessel would be tremendous. This is the amount of force necessary to produce the gravitational field of one gram of matter. This force is produced naturally by the mechanisms that contain the energy of mass within matter. (A numerical check of this example appears below.)

A third symmetrical relationship exists between the strong nuclear force and the nuclear spin-orbit interaction. A moving nucleon induces a nuclear-magnetic field. The nuclear-magnetic field is not electromagnetic in origin. It is much stronger than the electromagnetic spin-orbit interaction found within atoms. The nuclear-magnetic field tends to couple like nucleons pairwise into stable configurations. The nuclear spin-orbit interaction favors nuclei with equal and even numbers of protons and neutrons. The nuclear spin-orbit interaction accounts for the fact that nuclei tend to contain equal numbers of protons and neutrons (Z = A/2). It also accounts for the fact that nuclei with even numbers of protons and neutrons tend to be stable. The formulation of the nuclear spin-orbit interaction has the same structure as the electromagnetic and force-gravity interactions; however, the constants of the motion are different.
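The numerical check promised above: a minimal sketch that puts numbers on the one-gram example using only the standard relations E = mc² and Newtonian gravity g = Gm/r². The 1 cm distance is an arbitrary illustrative choice, not a value from the text.

```python
from scipy.constants import G, c

m = 1e-3  # one gram, expressed in kilograms

# Rest energy of one gram of matter, E = m c^2.
E = m * c**2
print(f"Rest energy of 1 g:     {E:.2e} J")  # roughly 9e13 J

# Newtonian gravitational acceleration 1 cm away from a 1 g point mass.
r = 0.01  # metres; an arbitrary distance chosen for illustration
g = G * m / r**2
print(f"Gravity of 1 g at 1 cm: {g:.2e} m/s^2")  # roughly 7e-10 m/s^2
```

The two numbers differ by more than twenty orders of magnitude, which is the contrast the paragraph above is drawing: an enormous contained energy accompanies an almost immeasurably small gravitational field.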
The next portion of this chapter is devoted to developing the mathematical relationship that exists between the forces. The reader who is not interested in mathematical details may skip forward to the conclusion without missing any of the chapter's essential concepts. Following Maxwell's laws, the known electromagnetic relationship will be derived. Then, by following the same procedure, the unknown gravitational force relationship will be found. The interaction between the strong nuclear force and the nuclear spin-orbit interaction will also be explored.

The magnetic field produced by a changing electrical field (a current) is described by Maxwell's electromagnetic relationship. This particular formulation, given in Equation #1, is known as Gauss' Law. In words, Equation #1 states that the change in the number of electric flux lines passing through a closed surface is equivalent to the amount of charge that passes through the surface. The product of this charge and the electrical permittivity of free space is the current associated with the moving charge.

$$I = \epsilon_0 \frac{d}{dt} \oint \mathbf{E} \cdot d\mathbf{s} \qquad \text{(Equation #1)}$$

The current produced by an electron passing through a closed surface. Here $\epsilon_0$ is the electrical permittivity of free space, $I$ is the current in amps, and $\mathbf{E}$ is the electrical potential in newtons/coulomb (the bold script means that $\mathbf{E}$ is a vector having a magnitude and a direction).

The product of this current and the magnetic permeability of free space, $\mu_0$, yields Equation #2, the magnetic flux through any closed loop around the flow of current.

$$\Phi = \mu_0 \epsilon_0 \frac{d}{dt} \oint \mathbf{E} \cdot d\mathbf{s} \qquad \text{(Equation #2)}$$

The magnetic flux surrounding an electrical current. Here $\Phi$ is the magnetic flux in webers and $\mu_0$ is the magnetic permeability of free space.

Substituting the charge, $q/\epsilon_0$, for the electrical potential yields Equation #3.

$$\Phi = \mu_0 \epsilon_0 \frac{d(q/\epsilon_0)}{dt} \qquad \text{(Equation #3)}$$

The magnetic flux surrounding an electrical current. Simplifying Equation #3 yields Equation #4. (Equation #3 was simplified by taking the derivative using the chain rule; in this process $\epsilon_0$ comes out as a constant.)

$$\Phi = \mu_0 \frac{dq}{dt} \qquad \text{(Equation #4)}$$

The magnetic flux surrounding a current carrying conductor. Equation #4 states that the magnetic flux around a conductor equals the product of the current flow (in coulombs per second, $dq/dt$) and the permeability of free space.

$$i = \frac{dq}{dt} = \frac{qv}{L} \qquad \text{(Equation #5)}$$

The current flow through a closed surface. Equation #5 relates the current flow $i$ in coulombs per second to the product of the charge $q$, the velocity $v$, and the reciprocal of the distance $L$ around which the charge flows. Substituting Equation #5 into Equation #4 yields Equation #6.

$$\Phi = \mu_0 i \qquad \text{(Equation #6)}$$

The magnetic flux around a current carrying conductor. Equation #6 gives the magnetic flux around a conductor carrying a constant current. The flux carries the momentum of the moving electrical charges. Its magnitude is proportional to the product of the current $i$ carried in the conductor and the permeability of free space.

The derivative of Equation #6 was taken to introduce an acceleration into the system. This acceleration manifests itself as a change in the strength or direction of the current flow. This acceleration generates a second electrical field. This second electrical field contributes a force to the system. This force opposes the acceleration of the electrical charges. The electrical field described by Equation #7 expresses itself as a voltage (joules/coulomb) across an inductor.
$$E_2 = \mu_0 \frac{di}{dt}; \qquad E_2 = L \frac{di}{dt} \ \text{volts} \qquad \text{(Equation #7)}$$

The voltage produced by accelerating charges.

A second analysis was done. This analysis derives the relationship between gravity and a changing momentum. The analysis employs the same procedure that was used to derive the electromagnetic relationship. Disturbances in the gravitational field propagate at the speed of light.¹,²,³,⁴,⁵ During the propagation interval, induced fields conserve the momentum of the system. Disturbances in a gravitational system induce a gravitomagnetic field. Equation #8 is the gravitational equivalent of Equation #2. Equation #8 states that the momentum of a moving mass is carried by a gravitomagnetic field. The strength of the gravitomagnetic field is proportional to the number of gravitational flux lines that pass through an infinite surface.

$$\Phi_g = \mu_0 \epsilon_0 \frac{d}{dt} \oint \mathbf{E}_g \cdot d\mathbf{s} \qquad \text{(Equation #8)}$$

The gravitomagnetic flux surrounding a moving mass. Here $\mathbf{E}_g$ is the vector gravitational potential in newtons/kg and $\Phi_g$ is the gravitomagnetic flux.

Substituting the mass $m$ for the gravitational potential yields Equation #9, the gravitational equivalent of Equation #3.

$$\Phi_g = \mu_0 \epsilon_0 \frac{d(Gm)}{dt} \qquad \text{(Equation #9)}$$

The gravitomagnetic flux surrounding a moving mass. Here $m$ is the mass in kg and $G$ is the gravitational constant.

Maxwell discovered a relationship between light speed, permittivity, and permeability. This relationship is given by Equation #10.

$$\mu_0 \epsilon_0 = \frac{1}{c^2} \qquad \text{(Equation #10)}$$

Maxwell's relationship. Substituting Maxwell's relationship into Equation #9 yields Equation #11, the gravitational equivalent of Equation #4.

$$\Phi_g = \frac{G}{c^2} \frac{dm}{dt} \qquad \text{(Equation #11)}$$

The gravitomagnetic flux surrounding a moving mass.

Equation #12 states that the mass flow $I_g$ in kilograms per second ($dm/dt$) is the product of the mass $m$, the velocity $v$, and the reciprocal of the length $L$ of a uniform body of mass. Equation #12 is the gravitational equivalent of Equation #5. It is the gravitational mass flow.

$$I_g = \frac{dm}{dt} = \frac{mv}{L} \qquad \text{(Equation #12)}$$

The mass flow through a closed surface. Substituting Equation #12 into Equation #11 yields Equation #13, the gravitational equivalent of Equation #6.

$$\Phi_g = \frac{G}{c^2} \frac{mv}{L} \qquad \text{(Equation #13)}$$

The gravitomagnetic flux around a moving mass. Equation #13, the gravitomagnetic field, carries the momentum $mv$ of a mass moving at a constant velocity. The magnitude of this field is proportional to the product of the mass flow in kilograms per second and the ratio of the gravitational constant $G$ to light speed $c$ squared.

The momentum $p$ is substituted for the product $mv$. Taking the derivative of the result introduces acceleration into the system. This acceleration generates a second gravitational field. This field is given by Equation #14. This second gravitational field contributes a force to the system which opposes the acceleration of the mass.

$$E_{2g} = \frac{G}{c^2} \cdot \frac{dp/dt}{L} \qquad \text{(Equation #14)}$$

The distance $L$ is the length of the moving mass $m$. In a gravitational system, this length is equivalent to the gravitational radius $r$ of the mass. Substituting $r$ for $L$ yields Equation #15. Equation #15 gives the intensity of the induced inertial force. The field described by Equation #15 expresses itself as an applied force (newtons/kilogram).

$$\text{Induced inertial force} = \frac{G}{c^2 r} \frac{dp}{dt} \qquad \text{(Equation #15)}$$

Equation #15 is the general formula of gravitational induction. This formula will be extensively applied in upcoming chapters. (A numerical evaluation of Equations #10 and #15 follows the chapter notes below.)
Here $G$ (the gravitational constant) = 6.67 × 10⁻¹¹ N·m²/kg²; $c$ (light speed) = 3 × 10⁸ meters/second; $dp/dt$ is the applied force in newtons; and $r$ is the gravitational radius.

A third analysis was attempted. This analysis derives the relationship between the strong nuclear force and the nuclear spin-orbit interaction. This analysis employs the same procedure that was used in deriving the electromagnetic and gravitational relationships. The principle of symmetry requires a nuclear-magnetic field to be produced by the movement of a nucleon. Equation #16 is the nuclear equivalent of Equation #2. Equation #16 states that a change in the strong nuclear force induces a nuclear-magnetic field. The strength of this field is proportional to the number of strong nuclear flux lines that pass through an infinite surface.

$$\Phi_n = \mu_0 \epsilon_0 \frac{d}{dt} \oint \mathbf{E}_n \cdot d\mathbf{s} \qquad \text{(Equation #16)}$$

The nuclear-magnetic (spin-orbit) field.

The strong nuclear force is nonlinear. The analysis cannot be fully developed. An estimate of $R$ at the surface of a nucleon may be obtained empirically: $R$ = 23 MeV. The nuclear field described by the equation below expresses itself as a potential (MeV/nucleon) associated with the spin of a nucleon.

$$\text{Nuclear-magnetic energy} = R \, \frac{d(\text{spin})}{dt}$$

Nature is constructed around underlying symmetries. This idea has proven to be correct time and time again. The electro-weak theory, for example, is based on the principle of symmetry. The first natural symmetry to be discovered was the relationship between the electric and magnetic fields. A second symmetrical relationship exists between force and gravity. A third symmetrical relationship exists between the strong nuclear force and the nuclear spin-orbit interaction. The nature of these symmetries was explored. The mathematics used to describe the electromagnetic relationship was applied to the gravitational/force relationship. This analysis yielded the general formula of gravitational induction (Equation #15). The mathematical analysis performed shows that gravity produces a force and, conversely, that a force produces gravity. Each relationship has a similar formulation and involves the element of time. The relationships differ in that electromagnetism involves a change in the magnetic field while the gravitational/force relationship involves a change in momentum. The nuclear spin-orbit interaction was also explored. In light of what was learned from the study of electromagnetism and gravity, it was determined that the nuclear spin-orbit interaction involves a change in the strong nuclear force. The formulation of each relationship is the same; however, the constants of the motion ($L$, $G/c^2$, and $R$) are radically different. Some very profound conclusions have been obtained through the application of the simple concept of symmetry. The remainder of this text will explore, expand, and develop these constructs into a synthesis that will become the foundation of many new futuristic technologies.

(A main point. Chart on the original webpage: how the constants of the motion discussed in this chapter change in a vibrationally reinforced condensate.)

1. S. Kopeikin (2001), "Testing the relativistic effect of the propagation of gravity by very long baseline interferometry", Astrophys. J. 556, L1-L5.
2. T. Van Flandern (2002).
3. H. Asada (2002), Astrophys. J. 574, L69-L70.
4. S. Kopeikin (2002).
5. T. Van Flandern (1998), "The speed of gravity - what the experiments say", Phys. Lett. A 250, 1-11.
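The numerical evaluation promised above: a minimal sketch that checks Maxwell's relationship (Equation #10) against tabulated constants and evaluates the coupling in Equation #15 as it is written in this chapter. The meganewton force and one-metre radius are arbitrary illustrative inputs, not values from the text.

```python
from scipy.constants import G, c, epsilon_0, mu_0

# Equation #10: Maxwell's relationship, mu_0 * epsilon_0 = 1 / c^2.
print(f"mu_0 * eps_0 = {mu_0 * epsilon_0:.6e}")
print(f"1 / c^2      = {1 / c**2:.6e}")

# Equation #15 as written above: induced field = (G / (c^2 r)) * dp/dt.
dp_dt = 1.0e6  # an applied force of one meganewton (illustrative)
r = 1.0        # a one-metre gravitational radius (illustrative)
induced = G / (c**2 * r) * dp_dt
print(f"Induced field for a 1 MN force at r = 1 m: {induced:.2e} N/kg")
```

The prefactor $G/c^2$ is of order 10⁻²⁷, so even a meganewton of applied force yields an induced field of only about 7 × 10⁻²² N/kg, consistent with the chapter's point that enormous forces accompany ordinary gravitational fields.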
// End of chapter 6

The study of the form, function, and composition of matter has been, and continues to be, one of the greatest intellectual challenges of all time. In ancient times the Greek Empedocles (495-435 B.C.) came up with the idea that matter is composed of earth, air, fire, and water. In 430 B.C. the idea of Empedocles was rejected by the Greek Democritus of Abdera. Democritus believed that the substances of the creation are composed of atoms. These atoms are the smallest bits into which a substance can be divided. Any additional subdivision would change the essence of the substance. He called these bits of substance "atomos", from the Greek word meaning "indivisible". Democritus was, of course, correct in his supposition; however, at the time, no evidence was available to confirm this idea. Ancient technology was primitive and could not confirm or contest any of these ideas. Various speculations of this sort continued to be offered and rejected over the next 2,000 years.

The scientific revolution began in the seventeenth century. With this revolution came the tools to test the theories of matter. By the eighteenth century these tools included methods of producing gases through the use of chemical reactions, and the means to weigh the resultant gases. From his studies of the gaseous by-products of chemical reactions, French chemist Antoine Lavoisier (1743-1794) discovered that the weight of the products of a chemical reaction equals the weight of the original compound. The principle of "the conservation of mass" was born. For his achievements Lavoisier is today known as the father of modern chemistry.

Late in the eighteenth century, another new tool began to be applied to test the theories of matter: electrical technology. The first electrical technology to be applied to the study of matter, electrolysis, involves the passing of an electrical current through a conductive solution. If an electrical current is passed through a conductive solution, the solution tends to decompose into its elements. For example, if an electric current is passed through water, the water decomposes, producing the element hydrogen at the negative electrode and the element oxygen at the positive electrode. With the knowledge obtained from the use of these new technologies, English schoolteacher John Dalton (1766-1844) was able to lay down the principles of modern chemistry. Dalton's theory was based on the concept that matter is made of atoms, all atoms of the same element are identical, and atoms combine in whole number ratios to form compounds.

Electrical technology became increasingly more sophisticated during the nineteenth century. Inventions such as the cathode ray tube (a television picture tube is a cathode ray tube) allowed atoms to be broken apart and studied. The first subatomic particle to be discovered was the electron. In 1897, J.J. Thomson demonstrated that the beams seen in cathode ray tubes were composed of electrons. In 1909, Robert Millikan measured the charge of the electron in his now-famous oil drop experiment. Two years later, Ernest Rutherford ascertained the properties of the atomic nucleus by observing the angle at which alpha particles bounce off of the nucleus. Niels Bohr combined these ideas and, in 1913, placed the newly discovered electron in discrete planetary orbits around the newly discovered nucleus. The planetary model of the atom was born.
With the appearance of the Bohr model of the atom, the concept of the quantum nature of the atom was established. As the temperature of matter is increased, it emits correspondingly shorter wavelengths of electromagnetic energy. For example, if a metal poker is heated it will become warm and emit long wavelength infrared heat energy. If the heating is continued the poker will eventually become red hot. The red color is due to the emission of shorter wavelength red light. If heated hotter still, the poker will become white hot, emitting even shorter wavelengths of light. An astute observer will notice that there is an inverse relationship between the temperature of the emitter and the wavelength of the emission. This relationship extends across the entire electromagnetic spectrum. If the poker could be heated hot enough it would emit ultra-violet light or X-rays. The German physicist Max Karl Ludwig Planck studied the light emitted from matter and came to a profound conclusion. In 1900, Planck announced that light waves were given off in discrete particle-like packets of energy called quanta. Today Planck's quanta are known as photons. The energy in each photon of light varies inversely with the wavelength of the emitted light. Ultraviolet, for example, has a shorter wavelength than red light and, correspondingly, more energy per photon than red light. The poker in our example, while only red hot, cannot emit ultraviolet light because its atoms do not possess enough energy to produce ultraviolet light. The sun, however, is hot enough to produce ultraviolet photons. The ultraviolet photons emitted by the sun contain enough energy to break chemical bonds and can sunburn the skin. The radiation spectrum cannot be explained by any wave theory. This spectrum can, however, be accounted for by the emission of a particle of light, or photon. In 1803, Thomas Young discovered interference patterns in light. Interference patterns cannot be explained by any particle theory. These patterns can, however, be accounted for by the interaction of waves. How can light be both a particle and a wave? In 1924, Prince Louis de Broglie proposed that matter possesses wave-like properties [1]. According to de Broglie's hypothesis, all moving matter should have an associated wavelength. De Broglie's hypothesis was confirmed by an experiment conducted at Bell Labs by Clinton J. Davisson and Lester H. Germer. In this experiment an electron beam was bounced off of a diffraction grating. The reflected beam produced a wave-like interference pattern on a phosphor screen. The mystery deepened; not only does light possess particle-like properties, but matter possesses wave-like properties. How can matter be both a particle and a wave? Throw a stone in a lake and watch the waves propagate away from the point of impact. Listen to a distant sound that has traveled to you from its source. Shake a rope and watch the waves travel down the rope. Tune in a distant radio station; the radio waves have traveled outward from the station to you. Watch the waves in the ocean as they travel into the shore. In short, waves propagate; it is their nature to do so, and that is what they invariably do. Maxwell's equations unequivocally demonstrate that the fields propagate at light speed. Matter waves, however, remain “stuck” in the matter. Why do they not propagate? What “sticks” them?
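Before coming to that question, the Planck relation described above can be pinned down numerically: the energy per photon is E = hc/λ. A minimal sketch, using standard constants:

    # Photon energy E = h*c / wavelength: ultraviolet photons carry more
    # energy per photon than red ones, as described above.
    h = 6.626e-34    # Planck's constant, J s
    c = 2.998e8      # speed of light, m / s
    eV = 1.602e-19   # joules per electron-volt

    for name, lam in [("red", 700e-9), ("violet", 400e-9), ("ultraviolet", 250e-9)]:
        E = h * c / lam
        print(f"{name:12s} {lam * 1e9:5.0f} nm -> {E / eV:4.2f} eV per photon")

A few eV is enough to break a chemical bond, which is why the ultraviolet photon can sunburn while the red one cannot.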
An answer to this question was presented by Erwin Schrödinger and Werner Heisenberg at the Copenhagen conventions. The Copenhagen interpretation states that elementary particles are composed of particle-like bundles of waves. These bundles are known as wave packets. The wave packets move at velocity V. These wave packets are localized (held in place) by the addition of an infinite number of component waves. Each of these component waves has a different wavelength or wave number. An infinite number of waves, each with a different wave number, is required to hold a wave packet fixed in space. This argument has two major flaws: it does not describe the path of the quantum transition, and an infinite number of real component waves cannot exist within a finite universe. Max Born attempted to sidestep these problems by stating that the wave packets of matter are only mathematical functions of probability. Only real waves can exist in the real world; therefore an imaginary place of residence, called configuration space, was created for the probability waves. Configuration space contains only functions of kinetic and potential energy. Forces are ignored in configuration space. “Forces of constraint are not an issue. Indeed, the standard Lagrangian formulation ignores them… In such systems, energies reign supreme, and it is no accident that the Hamiltonian and Lagrangian functions assume fundamental roles in a formulation of the theory of quantum mechanics.” — Grant R. Fowles, University of Utah. Ordinary rules, including the rules of wave propagation, do not apply in configuration space. The propagation mystery was supposedly solved. This solution sounds like, and has much in common with, those of the ancient philosophers. It is dead wrong! “Schrödinger never accepted this view, but registered his concern and disappointment that this transcendental, almost psychical interpretation had become universally accepted dogma.” — Modern Physics, Serway, Moses, Moyer, 1997. Einstein also believed that something was amiss with the whole idea. His remark, “God does not play dice,” indicates that he placed little confidence in these waves of probability. For the most part, the error made little difference; modern science advanced, and bigger things were discovered. It did, however, make at least one difference: it forestalled the development of gravitational and low level nuclear technologies for an entire century. Matter is composed of energy and fields of force. Matter can be mathematically modeled, but a mathematical model does not make matter. Matter waves are real; they contain energy, are the essence of mass, and convey momentum. Disturbances in the force fields propagate at light speed. “This result is rather surprising… since electrons are observed in practice to have velocities considerably less than the velocity of light it would seem that we have a contradiction with experiment.” — Paul Dirac, whose equations suggested that the electron propagates at light speed [11]. Matter does not disperse because it is held together by forces. These forces generate the gravitational field of matter, establish the inertial properties of matter, and set matter's dynamic attributes. The understanding of the nature of the restraining forces provides insight into the quantum transition. The remainder of this chapter will be spent qualifying these forces and the relationship that they share with matter. The ideas to follow are central to this author's work.
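The localization that the Copenhagen picture relies on can be made concrete: summing component waves with a spread of wave numbers produces a packet that is large near one point and small elsewhere. A minimal sketch follows, with a finite Gaussian-weighted sum standing in for the formally infinite sum the text objects to; all numbers are arbitrary.

    import numpy as np

    # Build a localized wave packet as a finite sum of plane-wave components
    # with a Gaussian spread of wave numbers about k0.
    x = np.linspace(-50.0, 50.0, 2001)
    k0, dk = 2.0, 0.2                                 # central wave number, spread
    ks = np.linspace(k0 - 4 * dk, k0 + 4 * dk, 81)
    weights = np.exp(-0.5 * ((ks - k0) / dk) ** 2)

    packet = sum(w * np.exp(1j * k * x) for w, k in zip(weights, ks))
    packet /= np.abs(packet).max()

    # The envelope is peaked near x = 0 and negligible far away:
    print(abs(packet[len(x) // 2]), abs(packet[0]))   # ~1.0 vs ~0.0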
Readers who have no interest in math may skip to the conclusion without missing the essential details of this chapter. Essentially the math shows that forces within matter are responsible for many of the properties of matter. This concept will be extended in Chapter 10. The various fields that compose matter have radically different ranges and strengths. The force that pins the various fields within matter is generated when the amplitude of a field exceeds the elastic limit of space. A version of this section was published in “Infinite Energy,” Vol. 4, #22, 1998. The matter wave function is composed of various fields. Photons were employed to represent these various fields. Photons exhibit the underlying relationship between momentum and energy of a field (static or dynamic) in which disturbances propagate at luminal velocities. Consider photons trapped in a massless, perfectly reflecting box. The photon in a box is a simplistic representation of matter. Light has two transverse modes of vibration and carries momentum in the direction of its travel. All three modes need to be employed in a three dimensional model. For the sake of simplicity this analysis considers only a single dimension. The photons in this model represent the matter wave function and the box represents the potential well of matter. As the photons bounce off of the walls of the box, momentum “p” is transferred to the walls of the box. Each time a photon strikes a wall of the box it produces a force. This force generates the gravitational mass associated with the photon in the box. The general formula of gravitational induction, as presented in the General Theory of Relativity [3, 4] (this equation was derived in Chapter 6), is given by Equation #2.

Eg = [G / (c²r)] (dp/dt)   Equation #2   The gravitational field produced by a force

Eg = the gravitational field in newtons/kg ; G = the gravitational constant ; r = the gravitational radius ; dp/dt = force

Each time the photon strikes the wall of the box it produces a gravitational field according to Equation #2. The gravitational field produced by an impact varies with the reciprocal of distance, 1/r. The gravitational field produced by matter varies as the reciprocal of distance squared, 1/r². This author has ascertained how the 1/r² gravitational field of matter is produced by a force. It will now be shown that the superposition of a positive field that varies at a 1/r rate over a negative field that varies at a 1/r rate produces the 1/r² gravitational field of matter. An exact mathematical analysis of the gravitational field produced by the photon in the box will now be undertaken.

L = the dimensions of the box ; p = momentum ; t = the time required for the photon to traverse the box = L/c ; r = the distance to point X

The far gravitational field at point X is the vector sum of the fields produced by the impacts on walls A and B. This field is given below.

Eg at x = (1/r field from wall A) – (1/r field from wall B)   Equation #3   Showing the superposition of two fields

Eg at x = (G / [c²(r+L)]) (dp/dt) – (G / [c²r]) (dp/dt)   Equation #4   Simplifying

Eg at x = – (G/c²) (dp/dt) [L / (r² + rL)]   Equation #5   Taking the limit to obtain the far field

Eg at x = lim (r ≫ L) of – (G/c²) (dp/dt) [L / (r² + rL)]

The result, Equation #7, is the far gravitational field of matter. Far, in this example, means greater than the wavelength of an elementary particle. In the case of a superconductor, far means longer than the length of the superconductor.
Eg at x = – (G/c²) (dp/dt) L/r²   Equation #7

The momentum of an energy field that propagates at light speed is given by the equation below [2].

p = E/c ; E = the energy of the photon ; c = light speed ; p = momentum (radiation pressure)

The amount of force (dp/dt) that is imparted to the walls of the box depends on the dimensions of the box, L. Equation #8 gives the force on the walls of the box.

dp/dt = Δp/Δt = (2E/c) / (L/c) = 2E/L   Equation #8

Equation #8 was substituted into Equation #7 and a factor of 1/2 was added. The factor of 1/2 is required because the resultant field is produced by two impacts and the energy can only impact one wall at a time. The result, Equation #9, is the far gravitational field produced by energy bouncing in a box.

Eg at x = – (1/2) (G/c²) (2E/L) (L/r²)   Equation #9

Equation #10 is Einstein's relationship between matter and energy.

M = E/c²   Equation #10

Substituting mass for energy yields Equation #11, Newton's formula for gravity [5].

Eg at x = – GM/r²   Equation #11

This analysis clearly shows that unbalanced forces within matter generate the gravitational field of matter [6, 10]. These forces result from the impact of energy which flows at luminal velocities.

Note: a version of this section was published in “The Journal of New Energy,” Vol. 5, September 2000.

In 1924 Prince Louis deBroglie proposed that matter has a wavelength associated with it [1]. Schrödinger incorporated deBroglie's idea into his famous wave equation. The Davisson and Germer experiment demonstrated the wave nature of the electron. The electron was described as both a particle and a wave. The construct left many lingering questions. How can the electron be both a particle and a wave? Nick Herbert writes in his book “Quantum Reality,” pg. 46: “The manner in which an electron acquires and possesses its dynamic attributes is the subject of the quantum reality question. The fact of the matter is that nobody really these days knows how an electron, or any other quantum entity, actually possesses its dynamic attributes.” Louis deBroglie suggested that the electron may be a beat note [7]. The formation of such a beat note requires disturbances to propagate at light speed. Matter propagates at velocity V. DeBroglie could not demonstrate how the beat note formed. This author's model demonstrates that matter vibrates naturally at its Compton frequency. A standing Compton wave is pinned in place at the elastic limit of space (Chapter 10). A traveling wave component is associated with moving matter. The traveling wave component bounces off the discontinuity produced at the elastic limit of space. The reflected wave Doppler shifts as it bounces off of the discontinuity. The disturbances propagate at luminal velocities. They combine to produce the dynamic deBroglie wavelength of matter: the deBroglie wave is the superposition of the original and the Doppler shifted waves. The harmonic vibration of a quantum particle is expressed by its Compton wavelength. Equation #1A expresses the Compton wavelength.

λc = h / Mc

Equation #2A gives the relationship between frequency f and wavelength λ. Please note that the phase velocity of the wave is c.

c = f λ

Substituting Eq. #2A into Eq. #1A yields Eq. #3A, the Compton frequency of matter.

fc = Mc² / h

A Doppler shifted component of the original frequency is produced by the reflection at matter's surface. Classical Doppler shift is given by Eq. #4A.
f₂ = f₁ (1 ± v/c)

A beat note is formed by the mixing of the Doppler shifted and original components. This beat note is the deBroglie wave of matter. Equation #5A and the above express a function “F” involving the sum of two sine waves. A minimum in the beat note envelope occurs when the component waves are opposed in phase. At time zero the angles differ by π radians; time zero is a minimum in the beat note envelope. A maximum in the beat envelope occurs when the component waves are aligned in phase. The phases were set equal, in Equation #7A, to determine the time at which the aligned-phase condition occurs. The result, Equation #10A, is the deBroglie wavelength of matter. Reflections result from a containment force. These reflections combine to produce the deBroglie wavelength of matter.

An analysis was done that described inertial mass in terms of a restraining force. This force restrains disturbances that propagate at luminal velocities. Consider energy trapped in a perfectly reflecting containment. This energy in a containment model is a simplistic representation of matter. In this analysis no distinction will be made between baryonic, leptonic, and electromagnetic waves. The wavelength of the energy represents the Compton wavelength of matter. The containment represents the surface of matter. The field propagates at light speed. Its momentum is equal to E/c. The containment is at rest. The energy is ejected from wall “A” of the containment; its momentum is p₁. The energy now travels to wall “B.” It hits wall B and immediately bounces off; its momentum is p₂. The energy now travels back to wall “A,” immediately bounces off, and its momentum is again p₁. This process repeats continuously. If the energy in the containment is evenly distributed throughout the containment, the momentum carried by this energy will be distributed evenly between the forward and backward traveling components. The total momentum of this system is given in Equation #1C.

pt = (p₁/2 – p₂/2)   (Eq. #1C)

The momentum of a flow of energy is given by Equation #2C.

p = E/c   (Eq. #2C) ; E = energy ; c = light speed ; p = momentum

Substituting Eq. #2C into Eq. #1C yields Eq. #3C.

pt = [E₁/2c – E₂/2c]   (Eq. #3C)

Given that the containment is at rest, the amount of energy in the containment remains fixed, and the quantity of energy traveling in the forward direction equals the quantity of energy traveling in the reverse direction. This is shown in Equation #4C.

E₁ = E₂   (Eq. #4C)

Substituting Eq. #4C into Eq. #3C yields Eq. #5C.

pt = (E/2c)(1 – 1)   (Eq. #5C)

Equation #5C is the total momentum of the system at rest. If an external force is applied to the system its velocity will change. The forward and the reverse components of the energy will then Doppler shift after bouncing off of the moving containment walls. The momentum of an energy flow varies directly with its frequency. Given that the number of quanta of energy is conserved, the energy of the reflected quanta varies directly with their frequency. This is demonstrated by Equation #6C.

E₂ = E₁ [ff / fi]   (Eq. #6C)

Substituting Eq. #6C into Eq. #5C yields Eq. #7C.

pt = (E/2c)[(ff1/fi1) – (ff2/fi2)]   (Eq. #7C)

Equation #7C is the momentum of the system after all of its energy bounces once off of the containment walls. Equation #7C shows a net flow of energy in one direction; it is the momentum of a moving system. The reader may desire to analyze the system after successive bounces of its energy.
This analysis is quite involved and unnecessary. Momentum is always conserved. Given that no external force is applied to the system after the first bounce of its energy, its momentum will remain constant. Relativistic Doppler shift is given by Equation #8C.

(ff / fi) = [1 – v²/c²]^(1/2) / (1 ± v/c)   (Eq. #8C) ; v = velocity with respect to the observer ; c = light speed ; ff/fi = frequency ratio ; + or – depends on the direction of motion

Substituting Equation #8C into Equation #7C yields Equation #9C. Substituting mass for energy, M = E/c², the result, Equation #14C, is the relativistic momentum of moving matter. This first analysis graphically demonstrates that inertial mass is produced by a containment force at the surface of matter. A fundamental change in the frame of reference is produced by the force of containment. This containment force converts energy, which can only travel at light speed, into mass, which can travel at any speed less than light speed [8].

Note: a version of this analysis has been published in this author's book Elementary Antigravity, Vantage Press, 1989, ISBN 0-533-08334-6.

According to existing theory the matter wave emerges from the Fourier addition of component waves. This method requires an infinite number of component waves. Natural infinities do not exist within a finite universe. The potential and kinetic components of a wave retain their phase during a Fourier localization. The aligned phase condition is a property of a traveling wave. The Fourier process cannot pin a field or stop a traveling wave. Texts in quantum physics commonly employ the Euler formula in their analysis. The late Richard Feynman said, “The Euler formula is the most remarkable formula in mathematics. This is our jewel.” The Euler formula is given below:

e^(iθ) = cos θ + i sin θ

The Euler formula describes the simple harmonic motion of a standing wave. The cosine component represents the potential energy of a standing wave. The sine component represents the kinetic energy of a standing wave. The kinetic component is displaced by 90 degrees and has an i associated with it. The localization of a traveling wave through a Fourier addition of component waves is in error. To employ this method of localization and then to describe the standing wave with the Euler formula is inconsistent. This author corrected this error through the introduction of restraining forces. The discontinuity produced at the elastic limit of space restrains the matter wave. The potential and kinetic components of the restrained wave are displaced by 90 degrees. A mass bouncing on the end of a spring is a good example of this type of harmonic motion. At the end of its travel the mass has no motion (kinetic energy = zero) and the spring is drawn up tight (potential energy = maximum). One quarter of the way into the cycle the spring is relaxed and the mass is moving at its highest velocity (kinetic energy = maximum). A similar harmonic motion is exhibited by the force fields. The energy of a force field oscillates between its static and magnetic components. Mass energy (Em) is a standing wave. A standing wave is represented on the -j axis of a complex plane. The phase of a standing wave is 90 degrees. All standing waves are localized by restraining forces. A traveling wave has its kinetic and potential components aligned in phase. An ocean wave is a good example of this type of harmonic motion. The wave's height (potential energy) progresses with the kinetic energy of the wave.
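The chain from Eq. #7C and Eq. #8C to the relativistic momentum of Eq. #14C can be checked numerically: inserting the relativistic Doppler ratios for the forward- and backward-moving energy into Eq. #7C reproduces Mv/(1 – v²/c²)^(1/2) exactly. A short verification, with unit mass and c = 1 chosen purely for convenience:

    import math

    # Verify p_t = (E/2c) * [f_f1/f_i1 - f_f2/f_i2]  (Eq. #7C with Eq. #8C)
    # against p = M v / sqrt(1 - v^2/c^2)            (Eq. #14C).
    c, M = 1.0, 1.0
    E = M * c**2

    for v in (0.1, 0.5, 0.9):
        root = math.sqrt(1.0 - (v / c) ** 2)
        ratio_fwd = root / (1.0 - v / c)   # component moving with the containment
        ratio_bwd = root / (1.0 + v / c)   # component moving against it
        p_containment = (E / (2.0 * c)) * (ratio_fwd - ratio_bwd)
        p_relativistic = M * v / root
        print(v, p_containment, p_relativistic)   # the two agree exactly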
The energy “E” contained by a wave carrying momentum “P” is expressed below.

E = Pc

The traveling wave expresses itself through its relativistic momentum “P.”

P = Mv / (1 – v²/c²)^(1/2)

Substituting yields the amount of energy that is in motion, “Eq.” Energy flows are represented on the X axis of a complex plane. The vector sum of the standing (Em) and traveling (Eq) components equals the relativistic energy (Er) of moving matter. The relativistic energy is represented by the length of the hypotenuse on a complex plane. The ratio of standing energy to the relativistic energy, [Em / Er], reduces to (1 – v²/c²)^(1/2). This function expresses the properties of special relativity. The arcsine of this ratio is the phase:

b = arcsin (1 – v²/c²)^(1/2)

The phase b expresses the angular separation of the potential and kinetic energy of matter. The physical length of a standing wave is determined by the spatial displacement of its potential and kinetic energy. This displacement varies directly with the phase b. The phase b varies inversely with the group velocity of the wave. This effect produces the length contraction associated with special relativity. Time is represented on the Z (out of the plane) axis on a complex diagram. The rotation of a vector around the X axis into the Z axis represents the change in potential energy with respect to time. The rotation of a vector around the Y axis into the Z axis represents a change in potential energy with respect to position. Relativistic energy is reflected on both axes. The loss in time by the relativistic component Er is compensated for by a gain in position. The phase b of a wave expresses the displacement of its potential and kinetic energy. When placed on a complex diagram the phase directly determines the relativistic momentum, mass, time, and length. These effects reconcile special relativity and quantum physics. The analysis reveals information not provided by special relativity. The ratio of traveling energy to the relativistic energy (Eq / Er) reduces to v/c. The simplicity of the ratio suggests that it represents a fundamental property of matter. In an electrical transmission line this ratio is known as the power factor. The power factor is a ratio of the flowing energy to the total energy. The construct of special relativity may be derived a priori from the premise that the group velocity of the matter wave is V and the phase velocity of the matter wave is c. The difference between these two velocities is produced by reflections. Reflections result from restraining forces. The same principles apply to all waves in harmonic motion. This model requires a restrained matter wave function. What is the nature of this restraining force? Are forces beyond the four known forces required? This author will show that no additional forces are required. The restraining force is produced through the action of the known forces. The nature of this restraining force will be presented in Chapter 10. An analysis of this restraining force, in Chapter 12, revealed the path of the quantum transition. This model produces a second result. This result contains negative mass with a lagging phase angle. It is located along the +j axis. Is this second result that of anti-matter? Capacitors are used to cancel reactance along a power line. On the complex plane the capacitance at vector -j cancels out the reactance at vector +j, leaving only the X axis traveling wave component. Is this what occurs when matter meets antimatter?
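The two ratios quoted above also follow from the standard relation Er² = (Mc²)² + (Pc)²: with Em = Mc² on one leg of the triangle and Eq = Pc on the other, Em/Er = (1 – v²/c²)^(1/2) and Eq/Er = v/c drop out. A brief numerical check of the triangle picture (c = M = 1 for convenience):

    import math

    # Check the "energy triangle": Em = M c^2 (standing), Eq = P c (traveling),
    # Er = hypotenuse = relativistic energy, phase b = arcsin(Em / Er).
    c, M = 1.0, 1.0

    for v in (0.2, 0.6, 0.95):
        root = math.sqrt(1.0 - (v / c) ** 2)
        Em = M * c**2                        # rest (standing) energy
        Eq = (M * v / root) * c              # P c with relativistic momentum P
        Er = math.hypot(Em, Eq)              # equals gamma * M * c^2
        b = math.degrees(math.asin(Em / Er))
        print(f"v={v}: Em/Er={Em/Er:.4f} (expect {root:.4f}), "
              f"Eq/Er={Eq/Er:.4f} (expect {v/c:.4f}), b={b:.1f} deg")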
This author believes that matter's leading power factor (kinetic energy leads potential energy) is an effect of an elastic anomaly. This anomaly is associated with the restraint of the wave function. These ideas will be developed in Chapters 10 and 11. Einstein's principle of equivalence states that gravitational and inertial mass are always in proportion. The photon has no rest mass and a fixed inertial mass. What is its gravitational mass? General relativity states that gravity warps space. Photons take the quickest path through this warped space. The path of a photon is affected by gravity. The effect has been measured: the light of a star was bent as it passed near the sun. The momentum of the light was altered. The principle of the conservation of momentum requires that the sun experience an equivalent change in momentum. The bending light must generate a gravitational field that pulls back on the sun. Bending light has a gravitational mass. Photons from the extremes of the universe have traveled side by side for billions of years. These photons do not agglomerate. The slightest agglomeration would result in a decrease in entropy. This decrease would be in violation of the laws of thermodynamics. Photons traveling in parallel paths exert no gravitational influence upon each other. Matter gives up energy during the process of photon ejection. The principle of the conservation of energy requires that the negative gravitational potential and the positive energy of the universe remain in balance. The ejected photon must carry a gravitational mass that is equivalent to the gravitational mass lost by the particle. These conditions are satisfied with a variable gravitational mass. The gravitational mass of the photon varies directly with the force it experiences. This force is expressed as a changing momentum (dp/dt). Hubble's constant expresses the expansion of space in units of 1/time. Ordinarily, the effects resulting from the Hubble expansion are quite tiny. At great distances and at high velocities, however, significant effects do take place. As a photon travels through space at the high velocity of light it red shifts. This red shift may be considered to be the result of an applied force. This force is produced by the acceleration given in Equation #1D.

Acceleration = Hc   (Eq. #1D) ; H = Hubble's constant, given in units of 1/sec ; c = light speed

To demonstrate the gravitational relationships of a photon, the principle of the conservation of momentum will be employed. According to this principle, exploding bodies conserve their center of gravitational mass. Mass M ejects a photon while over the pivot I. The gravitational center of mass must remain balanced over the pivot point I. Mass M is propelled to the left at velocity v₁ and the photon travels to the right at velocity c. The product of the velocity and time is the displacement S. Setting the products of the displacements S and the gravitational masses Mg equal yields Equation #2D.

Mg1 S₁ = Mg2 S₂   (Eq. #2D)

The general formula of gravitational induction, as presented in the General Theory of Relativity [3, 4] (this equation was derived in Chapter 6), is substituted for the gravitational mass of the photon on the right side of the equation below.

GMS/r² = (G/c²)(force) S/r²

GMS/r² = (G/c²)(dp/dt) S/r²

GM(v₁t) = (G/c²)(dp/dt)(ct) r   (Eq. #5D)

Substituting,

dp/dt = Ma = MHc = (E/c²)Hc = EH/c   (Eq. #6D)

G(Mv₁)t = (G/c²)(E/c) Hctr

Substituting momentum p for Mv₁ and E/c:

Gp₁t = (G/c²) p₂ Hctr

Setting the momenta equal,
p₁ = p₂ gives c = Hr   (Eq. #11D)

The result, Equation #11D, shows that the gravitational mass of a photon is generated by the force it experiences as it accelerates through Hubble's constant. The result is true only under the condition where the speed of light is equal to the product of Hubble's constant and the radius of the universe. This qualification is essentially consistent with the measured cosmological values. The radius and expansion rate are independent properties of this universe. The speed of light and the gravitational constant G are dependent upon these properties. It will be shown in upcoming chapters that the values of the natural constants are dependent upon the mass, radius, and expansion rate of the universe [12]. The gravitational radius of the photon is the radius of the universe. On the largest scales the gravitational effect of photonic energy is equal to the gravitational effect of mass energy. The result demonstrates that the negative gravitational potential and the positive energy of the universe remain in balance.

Schrödinger's Wave Equation Revisited

Schrödinger's wave equation is a basic tenet of low energy physics. It embodies all of chemistry and most of physics. The equation is considered to be fundamental and not derivable from more basic principles. The equation will be produced (not derived) using an accepted approach. Several assumptions are fundamental to this approach. The flaws within these assumptions will be exposed. This author will derive Schrödinger's wave equation using an alternate approach. It will be shown that Schrödinger's wave equation can be fundamentally derived from the premise that the phase velocity of the matter wave is luminal. Restraining forces confine the luminal disturbances. The new approach is fundamental and yields known results.

The accepted approach

The wave equation describes a classical relationship between velocity, time, and position. The velocity of the wave packet is v. It is an error to assume that the natural velocity of a matter wave is velocity v. Disturbances within force fields propagate at light speed c. The matter wave Ψ is also a field; like all fields, it propagates at velocity c. Equation #3 below expresses the wave equation as a function of position and time. The exponential form of the sine function (e^(jωt)) is introduced. This function describes a sine wave. The j in the exponent states that the wave contains real and imaginary components. The potential energy of a wave is represented by the real component and the kinetic energy of a wave is represented by the imaginary component. In a standing wave these components are 90 degrees out of phase. Standing waves are produced by reflections. The required reflections are not incorporated within current models; current models include (e^(jωt)) ad hoc. The deBroglie relationship is introduced. It is also incorporated ad hoc. The introduction of the deBroglie wave was questioned by Professors Einstein and Langevin. H. Ziegler pointed out in a 1909 discussion with Einstein, Planck, and Stark that relativity would be a natural result if all of the most basic components of mass moved at the constant speed of light [13]. This author's work is based on the premise that disturbances in the force fields propagate at light speed. This analysis has shown that the deBroglie wavelength is a beat note. This beat note is generated by a reflection of matter's Compton wave. Disturbances in the Compton wave propagate at luminal velocity.
The result below is the time independent Schrödinger equation. The Schrödinger equation states that the total energy of the system equals the sum of its kinetic and potential energy. Energy is a scalar quantity; scalar quantities do not have direction. This type of equation is known as a Hamiltonian. The Hamiltonian ignores restraining forces. The unrestrained matter wave propagates at velocity v. It is an error to assume that an unrestrained wave propagates without dispersion.

A new approach

The phase velocity of the matter wave is c. The matter wave is pinned into the structure of matter by restraining forces. The resultant force has a magnitude and a direction. The solution is Newtonian. The superposition of the Compton wave and its Doppler shifted reflection is the deBroglie wavelength of matter. The electron's natural vibrational frequency was determined in Chapter 10 of this text. This frequency is known as the Compton frequency of the electron. Refer to equation A1. Substituting ∇² for acceleration divided by light speed squared embodies the idea that disturbances in the matter wave propagate at luminal velocities. The time independent Schrödinger equation has been derived from a simple technique. It has been demonstrated that the matter wave contains the forces of nature. Disturbances within these fields propagate at luminal velocities. Restraining forces prevent dispersion. This influence extends through the atomic energy levels. The movement of ordinary matter does not produce a net magnetic field. The movement of charged matter does produce a net magnetic field. Charged matter is produced by the separation of positive and negative charges. The derivation used to develop Newton's formula of gravity (Equation #3) shows that matter may harbor positive and negative near field gravitational components. The wavefunctions of superconductors are collimated. The collimated wave functions act in unison like a single macroscopic elementary particle. The near field gravitational components of a superconductor are macroscopic in size. The rotation of these local gravitational fields is responsible for the gravitational anomaly observed at Tampere University [9]. The forces of nature are pinned into the structure of matter by forces. Forces generate the gravitational mass of matter, determine matter's relativistic properties, and determine matter's dynamic properties. The nature of the bundling force will be presented in Chapters 10 & 11. The understanding of the nature of the bundling force reveals the path of the quantum transition.

1. French aristocrat Louis de Broglie described the electron's wavelength in his Ph.D. thesis in 1924. De Broglie's hypothesis was verified by C. J. Davisson and L. H. Germer at Bell Labs.
2. Gilbert N. Lewis demonstrated the relationship between external radiation pressure and momentum. Gilbert N. Lewis, Philosophical Magazine, Nov. 1908.
3. A. Einstein, Ann. d. Physik 49, 1916.
4. Einstein's principle of equivalence was experimentally confirmed by R. v. Eötvös in the 1920s: R. v. Eötvös, D. Pekar, and Feteke, Ann. d. Phys., 1922. Roll, Krotkov and Dicke followed up on the Eötvös experiment and confirmed the principle of equivalence to an accuracy of one part in 10¹¹ in the 1960s: R. G. Roll, R. Krotkov and R. H. Dicke, Ann. of Physics 26, 1964.
6. Jennison, R. C., “What is an Electron?”, Wireless World, June 1979, p. 43.
“Jennison became drawn to this model after having experimentally demonstrated the previously unestablished fact that a trapped electromagnetic standing wave has rest mass and inertia.” Jennison & Drinkwater, Journal of Physics A, vol. 10, pp. 167-179, 1977; Jennison & Drinkwater, Journal of Physics A, vol. 13, pp. 2247-2250, 1980; Jennison & Drinkwater, Journal of Physics A, vol. 16, pp. 3635-3638, 1983.
7. B. Haisch & A. Rueda of the California Institute for Physics and Astrophysics have also developed the deBroglie wave as a beat note.
8. Znidarsic, F., “The Constants of the Motion,” The Journal of New Energy, Vol. 5, No. 2, September 2000.
9. “A Possibility of Gravitational Force Shielding by Bulk YBa₂Cu₃O₇₋ₓ,” E. Podkletnov and R. Nieminen, Physica C, vol. 203, 1992, pp. 441-444.
10. Puthoff has shown that the gravitational field results from the cancellation of waves; this author's model is an extension of this idea. H. E. Puthoff, “Ground State Hydrogen as a Zero-Point-Fluctuation-Determined State,” Physical Review D, vol. 35, 1987; H. E. Puthoff, “Gravity as a Zero-Point Fluctuation Force,” Physical Review A, vol. 39, Number 5, March 1989.
11. Ezzat G. Bakhoum, “Fundamental Disagreement of Wave Mechanics with Relativity,” Physics Essays, Volume 15, Number 1, 2002.
12. John D. Barrow and John K. Webb, “Inconstant Constants,” Scientific American, June 2005.
13. Albert Einstein, “Development of our Conception of Nature and Constitution of Radiation,” Physikalische Zeitschrift 22, 1909.
Effective Field Theory for Low-Energy Two-Nucleon Systems

Tae-Sun Park, Kuniharu Kubodera, Dong-Pil Min and Mannque Rho

Dept. of Physics and Center for Theoretical Physics, Seoul Nat'l University, Seoul 151-742, Korea; Dept. of Physics and Astronomy, University of South Carolina, Columbia, SC 29208, U.S.A.; Service de Physique Théorique, CEA Saclay, 91191 Gif-sur-Yvette Cedex, France

December 9, 1997

We illustrate how effective field theories work in nuclear physics by using an effective Lagrangian in which all other degrees of freedom than the nucleonic one have been integrated out to calculate the low-energy properties of two-nucleon systems, viz. the deuteron properties, the low-energy scattering amplitude and the transition amplitude entering into the radiative capture process np → dγ. Exploiting a finite cut-off regularization procedure, we find all the two-nucleon low-energy properties to be accurately described with little cut-off dependence, in consistency with the general philosophy of effective field theories.

PACS numbers: 21.30.Fe, 13.75.Cs, 03.65.Nk, 25.40.Lw
preprint: hep-ph/9711463, SNUTP 97-160

Effective field theories (EFTs) have long proven to be a powerful tool in particle and condensed matter physics [1, 2], so it is quite natural that considerable attention is nowadays paid to the role of EFTs in nuclear physics, where phenomenological approaches have traditionally been tremendously successful. Some authors have focused on nucleon-nucleon interactions and two-nucleon systems [3, 4, 5, 6, 7, 8] while some [9] have focused on many-body systems, including dense matter relevant to relativistic heavy-ion processes and compact stars. One of the most spectacular cases was the recent chiral perturbation calculation of the radiative np capture at thermal energy [4], with an agreement with experiment within 1%. What was calculated in [4] was, however, the meson-exchange current corrections relative to the single-particle matrix element, with the latter borrowed from the accurate Argonne v18 [10] phenomenological two-nucleon wave function. In this respect, one cannot say that it was a complete calculation in the framework of the given EFT, namely chiral perturbation theory (ChPT), although it was following the strategy of [3] of using ChPT for computing “irreducible graphs” only. The purpose of this Letter is to supply the “missing link” that can render Ref. [4] a “first-principle calculation,” that is, to obtain the single-particle matrix element within the framework of EFTs [11]. In so doing we will compute the static properties of the bound state (the deuteron) and the scattering amplitude in the ¹S₀ channel. The results come out to be in a surprisingly good agreement with the data, offering a first glimpse of how EFTs work in nuclei. Since we shall be interested in very low-energy processes, with energy scales of at most a few MeV, we will integrate out all massive fields as well as the pion field [6], leaving only the nucleon matter field, which can be treated in heavy-fermion formalism (HFF). Since the anti-nucleon field also is integrated out in HFF, there are no “irreducible” loops (there will be, however, “reducible” loops to all orders in solving the Lippmann-Schwinger equation) and the EFT becomes non-relativistic quantum mechanics, where all the interactions appear in the potential. Now the states we shall study are all very close to the threshold: they are either weakly bound (the deuteron) or almost bound (the ¹S₀ state).
Bound states are not accessible by perturbation expansion, and a scattering state with a large scattering length has a correspondingly small momentum scale, making the convergence of EFTs highly non-trivial. We shall circumvent these difficulties by summing “reducible diagrams” – which amounts to solving the Schrödinger (or Lippmann-Schwinger) equation – and by using a cut-off regularization instead of the usual dimensional regularization. Due to the nonperturbative nature of the Schrödinger equation, unlike perturbative cases, it does matter which regularization scheme one uses in effective theories. Kaplan, Savage and Wise [6] and Luke and Manohar [7] have found that with the dimensional regularization the EFT breaks down at a very small momentum scale when the scattering length a is large compared with the effective range r₀, and that this problem cannot be ameliorated by introducing the pionic degree of freedom. As pointed out by Beane et al. [8] and Lepage [12], the problem can however be resolved if one uses a cut-off regularization. In effective theories, the cut-off has a physical meaning and hence it should not be taken to infinity as one does in renormalizable theories [12]. In fact the strategy of effective field theories is such that one should pick neither too low a cut-off nor too high a cut-off: if one chooses too low a cut-off, one risks the danger of throwing away relevant degrees of freedom – and hence correct physics – while if one chooses too high a cut-off, one introduces irrelevant degrees of freedom and hence makes the theory unnecessarily complicated. The art of doing EFTs is in choosing the proper cut-off. Thus with our effective Lagrangian, in which the lightest degree of freedom integrated out is the pion, the natural cut-off scale is set by the pion mass. We find that the optimal cut-off in our case is of order 200 MeV, as one can see from the results in Table 1 and Figures 1 and 2. We shall do the calculation to the next-to-leading order (NLO). The potential of the EFT is local and hence of zero range in coordinate space, requiring regularization. In order to do the calculation algebraically, we choose a form of regularization appropriate to a separable potential given by the local Lagrangian, with a regulator that suppresses the contributions from momenta above the cut-off and a vertex that is a finite-order polynomial in momenta. Up to NLO, the most general form of the potential involves the nucleon mass and a rank-two tensor operator that is effective only in the spin-triplet channel. Note that the coefficients are (spin) channel-dependent and that the tensor term is effective only in the spin-triplet channel. Thus we have five parameters: two in the ¹S₀ channel and three in the ³S₁-³D₁ channel. In principle, these parameters are calculable from a fundamental Lagrangian (i.e., QCD), but nobody knows how to do this. So in the spirit of EFTs, we shall fix them from experiments. Since the explicit form of the regulator should not matter [12], we shall choose the Gaussian form, with Λ the cut-off. (This form of the cut-off function, strictly speaking, upsets the chiral counting [8], on which we will have more to say later.) The Lippmann-Schwinger (LS) equation for the wavefunction – written in terms of the free wavefunction and the free two-nucleon propagator depending on the total energy – leads, for the potential (2), to an s-wave function expressed through a set of regulated integrals. With the regulator (4), these integrals can be evaluated in closed form. The phase shifts can be calculated by looking at the large-distance behavior of the wavefunction.
To do this, it is convenient to separate the pole contributions of the integrals, Eqs. (6, 7); this defines two functions, both of which are real, and the subtraction renders the remaining integral finite. The phase shift then takes a closed form involving a ratio to be given below (see (25)), which vanishes for the ¹S₀ channel. In order to fix the two coefficients, we compare (15) to the effective-range expansion, k cot δ = −1/a + (r₀/2) k² + …, and obtain the couplings in terms of the scattering length a and the effective range r₀. These are essentially the “renormalization conditions” in the standard renormalization procedure. Two important observations to make here: (a) We note that there is an upper bound on the cut-off if one requires the coefficients to be real (with the leading one positive). That is, above this bound the potential of the EFT becomes non-Hermitian. With the ¹S₀ scattering length and effective range taken from the Argonne v18 potential [10] (which we take to be “experimental”), the corresponding maximum cut-off can be determined; (b) the cut-off value defined such that the NLO coefficient vanishes is quite special. At this point the NLO contribution is identically zero. This corresponds to the leading-order calculation with the coupling chosen to fit the experimental value of the effective range r₀. A similar observation was made by Beane et al. [8] using a square-well potential in coordinate space, with the well radius playing the role of the inverse cut-off. The resulting phase shift is plotted in Fig. 1. We see that the agreement with the result taken from the Argonne v18 potential [10] is perfect up to a center-of-mass momentum of a couple of hundred MeV. Beyond that, we should expect corrections from the next-to-next-order and higher-order terms. In Fig. 2, we show how the phase shift for a fixed center-of-mass momentum varies as the cut-off is changed. The solid curve is our NLO result, the dotted one the LO result, and the horizontal dashed line the result taken from the v18 potential (“experimental”). We confirm that our NLO result is remarkably insensitive to the value of the cut-off over a wide range. It is instructive to compare our result (15) with that obtained with the dimensional regularization [6], given in Eq. (19). Expanding (19) in powers of momentum, we find that the coefficient of the n-th order term grows rapidly with n when the scattering length a is large, disagreeing strongly with the fact that the low-energy scattering is well described by just two terms of the effective range expansion in (16). This observation led the authors of [6] to conclude that the critical momentum scale at which the EFT expansion breaks down is very small for a very large scattering length. We arrive at a different conclusion. With the cut-off regularization, the scattering length is replaced by an effective one that stays of natural size for large a. This agrees with the findings of Beane et al. [8] and Lepage [12]. With this counting, the n-th order coefficient comes out of natural size, as one would expect on general grounds. The next quantity to consider is the transition amplitude for np → dγ and the deuteron structure. For the capture, we need both the scattering wavefunction and the deuteron wavefunction. The initial-state wavefunction can be written in terms of the center-of-mass momentum and the phase shifts obtained above.

Figure 1: ¹S₀ phase shift (degrees) vs. the center-of-mass (CM) momentum. Our theory is given by the solid line, and the results from the Argonne v18 potential [10] (“experiments”) by the solid dots.

Figure 2: ¹S₀ phase shift (degrees) vs. the cut-off for a fixed CM momentum. The solid curve represents the NLO result, the dotted curve the LO result and the horizontal dashed line the result from the v18 potential [10].
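The effective-range expansion used for the matching, k cot δ = −1/a + (r₀/2)k², already reproduces the low-momentum ¹S₀ phase shift on its own. The sketch below evaluates it with commonly quoted np ¹S₀ values (a ≈ −23.7 fm, r₀ ≈ 2.7 fm), which are assumed here rather than taken from the paper.

    import math

    # 1S0 np phase shift from the two-term effective-range expansion
    # k cot(delta) = -1/a + (r0/2) k^2. The a, r0 below are commonly
    # quoted values, assumed for illustration.
    hbarc = 197.327          # MeV fm
    a, r0 = -23.7, 2.7       # fm

    def delta_deg(p_mev):
        """Phase shift (degrees) at CM momentum p (MeV)."""
        k = p_mev / hbarc                        # fm^-1
        kcot = -1.0 / a + 0.5 * r0 * k * k       # fm^-1
        return math.degrees(math.atan2(k, kcot))

    for p in (5, 20, 60, 100, 200):
        print(f"p = {p:3d} MeV  ->  delta ~ {delta_deg(p):5.1f} deg")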
As for the coupled ³S₁-³D₁ channel relevant for the final state of the capture, we use the eigenphase parametrization [13] with its mixing angle. The so-far-undetermined parameter can be fixed by the asymptotic D/S ratio of the deuteron [10], together with the binding energy of the deuteron. Given these inputs, and for a given cut-off, all other quantities are predictions. The binding energy of the deuteron is determined by the pole position. The renormalization procedure is the same as for the ¹S₀ channel. The only difference is that the cut-off value at which one NLO coefficient vanishes does not coincide with that at which the other vanishes. Using the scattering length and effective range for the ³S₁ channel [10], we find the corresponding special values of the cut-off. The resulting (S-wave and D-wave) radial wavefunctions of the deuteron follow. We now have all the machinery to calculate the deuteron properties: the wavefunction normalization factor As, the radius rd, the quadrupole moment Qd and the D-state probability PD. The magnetic moment of the deuteron is related to PD through the isoscalar nucleon magnetic moment. Finally, the one-body isovector transition amplitude relevant for np → dγ at threshold [4] is listed in the last row of Table 1.

Table 1: Deuteron properties and the transition amplitude entering into the np → dγ capture for four increasing values of the cut-off Λ (the specific cut-off values did not survive in this copy), compared with experiment and the Argonne v18 potential [10]. The binding energy B = 2.225 MeV is common to all columns.

Quantity         Λ₁      Λ₂      Λ₃      Λ₄      Exp. [10]   v18 [10]
As (fm^-1/2)     0.869   0.877   0.878   0.878   0.8846(8)   0.885
rd (fm)          1.951   1.960   1.963   1.969   1.966(7)    1.967
Qd (fm²)         0.231   0.277   0.288   0.305   0.286       0.270
PD (%)           2.11    4.61    5.89    9.09    —           5.76
μd               0.868   0.854   0.846   0.828   0.8574      0.847
Amplitude (fm)   4.06    4.01    3.99    3.96    —           3.98

The (parameter-free) numerical results are listed in Table 1 for various values of the cut-off. We see that the agreement with the experiments is excellent, with very little dependence on the precise value of the cut-off. It may be coincidental but highly remarkable that even the quadrupole moment, which, as the authors of [10] stressed, the v18 potential fails to reproduce, comes out correctly. We believe we have demonstrated the power of EFTs in low-energy nuclear physics, allowing us to be as close as one can hope to the fundamental theory in the sense put forward in Refs. [1, 2]. In particular, it is satisfying that the classic capture process can be completely understood from a “first-principle” approach. Here the cut-off regularization was found to be highly efficient: with the dimensional regularization the matrix element was found to be in total disagreement with the result of the Argonne potential. As mentioned, the Gaussian cut-off brings in terms of higher order than NLO which, to be consistent, would require corresponding “counter terms” in the potential, although our results indicate that the latter cannot be significant. The next task is to incorporate pions into the picture and go up in energy. This would enable us to explore the interplay between the breakdown of EFT and the emergence of “new physics,” an important and generic issue currently relevant in particle physics, where going beyond the Standard Model is the Holy Grail. These issues will be addressed in a forthcoming publication. The work of TSP and DPM was supported in part by the Korea Science and Engineering Foundation through CTP of SNU and in part by the Korea Ministry of Education under the grant (BSRI-97-2441, BSRI-97-2418), and the work of MR by a Franco-German Humboldt Research Prize and Spain's IBERDROLA Visiting Professorship. KK is partially supported by the NSF Grant No. PHYS-9602000. TSP would like to thank H.K. Lee for his support while he was in Hanyang University, where part of this work was done. • [1] See, e.g., S.
Weinberg, The Quantum Theory of Fields II (Cambridge Press, 1996); Nature 386, 234 (1997).
• [2] J. Polchinski, in Recent Directions in Particle Theory, eds. J. Harvey and J. Polchinski (World Scientific, Singapore, 1994); R. Shankar, Rev. Mod. Phys. 66, 129 (1994).
• [3] S. Weinberg, Phys. Lett. B251, 288 (1990); Nucl. Phys. B363, 3 (1991); Phys. Lett. B295, 114 (1992).
• [4] T.-S. Park, D.-P. Min and M. Rho, Phys. Rev. Lett. 74, 4153 (1995); Nucl. Phys. A596, 515 (1996).
• [5] C. Ordonez, L. Ray and U. van Kolck, Phys. Rev. Lett. 72, 1982 (1994); Phys. Rev. C53, 2086 (1996).
• [6] D.B. Kaplan, M.J. Savage and M.B. Wise, Nucl. Phys. B478, 629 (1996).
• [7] M. Luke and A.V. Manohar, Phys. Rev. D55, 4129 (1997).
• [8] S.R. Beane, T.D. Cohen and D.R. Phillips, nucl-th/9709062 and references therein.
• [9] M.A. Nowak, M. Rho and I. Zahed, Chiral Nuclear Dynamics (World Scientific, Singapore, 1996).
• [10] R.B. Wiringa, V.G.J. Stoks and R. Schiavilla, Phys. Rev. C51, 38 (1995) and references therein.
• [11] We thank Jim Friar for his challenge in 1995 that a “first-principle” calculation for the capture process should be feasible.
• [12] G.P. Lepage, nucl-th/9706029.
• [13] J.M. Blatt and L.C. Biedenharn, Phys. Rev. 86, 399 (1952).
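As a rough illustration of the leading-order version of this scheme (a sketch under stated assumptions, not the authors' calculation), the LS equation for a rank-one separable potential V(p, p') = C g(p) g(p') with a Gaussian regulator g(q) = exp(−q²/Λ²) can be solved algebraically: T(k) = C g(k)² / (1 − C I(k)), with I(k) the regulated two-nucleon bubble. Below, C is tuned to a commonly quoted ¹S₀ scattering length (a ≈ −23.7 fm) at an assumed cut-off of 200 MeV; both inputs are assumptions for the sketch.

    import numpy as np

    # LO sketch: one contact term with a Gaussian regulator.
    # T(k) = C g(k)^2 / (1 - C I(k)), with the regulated bubble
    # I(k) = M int d^3q/(2pi)^3 g(q)^2/(k^2 - q^2 + i0)
    #      = (M/2pi^2) * (principal-value integral) - i (M/4pi) k g(k)^2.
    hbarc = 197.327           # MeV fm
    M = 938.92 / hbarc        # nucleon mass, fm^-1
    Lam = 200.0 / hbarc       # cut-off, fm^-1 (assumed value)
    a_target = -23.7          # 1S0 scattering length, fm (assumed value)

    def g(q):
        return np.exp(-(q / Lam) ** 2)

    def I_real(k, qmax=60.0, n=200001):
        q = np.linspace(1e-8, qmax, n)
        dq = q[1] - q[0]
        # Subtraction makes the integrand regular at q = k; the principal
        # value of int_0^inf dq/(k^2 - q^2) vanishes, so nothing is added back.
        num = q**2 * g(q) ** 2 - k**2 * g(k) ** 2
        return (M / (2 * np.pi**2)) * np.sum(num / (k**2 - q**2)) * dq

    # Fix C from the scattering length: 4*pi*a/M = C / (1 - C I(0))
    A = 4 * np.pi * a_target / M
    C = A / (1 + A * I_real(1e-6))

    def phase_deg(p_mev):
        k = p_mev / hbarc
        I = I_real(k) - 1j * (M / (4 * np.pi)) * k * g(k) ** 2
        T = C * g(k) ** 2 / (1 - C * I)
        f = -(M / (4 * np.pi)) * T          # f = 1/(k cot(delta) - i k)
        kcot = (1.0 / f).real
        return np.degrees(np.arctan2(k, kcot))

    for p in (10, 40, 80):
        print(f"p = {p} MeV  ->  1S0 phase ~ {phase_deg(p):.1f} deg")

At leading order the effective range generated this way is set by 1/Λ rather than by its empirical value, which is precisely the kind of deficiency the NLO term in the paper is there to repair.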
Bohr and the quantum atomic model

Issue: 2012-3 Section: 14-16

Bohr's atomic model was introduced in 1913 by Niels Bohr. The Bohr atom is a planetary model in which the electrons sit in stationary circular orbits and circle the nucleus at set distances; it is based on Planck's laws and on Einstein's photoelectric effect. Bohr's atomic model was an expansion of the Rutherford model and overcame several of its flaws. Rutherford's atomic model left many open questions, because Rutherford's electrons would lose energy and collapse onto the nucleus; moreover, Rutherford's atomic model was not compatible with Maxwell's laws. Bohr's atomic model is important because it describes most of the accepted features of atomic theory and explains the Rydberg formula. The Bohr model also shows that the electrons sit in orbits of differing energy around the nucleus. For Bohr, the energy of an electron is quantized; that is, there is nothing between one orbit and another. The energy level that an electron normally occupies is called the ground state. When an electron changes orbit it makes a quantum leap. The energy difference between the two orbits (the ground state and the electron's excited state) is emitted by the atom as a photon. Bohr's atomic model shows that each electron has a set energy. From the excited state, the electron can return to its ground state. Bohr also discovered that the closer an electron is to the nucleus, the less energy it needs; conversely, the further an electron is from the nucleus, the more energy it needs. He also discovered that each energy level may contain varying quantities of electrons. Even if it contains some errors, the Bohr model is very important. The energy of an orbit is related to its size. Bohr used Planck's constant and obtained a formula for the energy levels of the hydrogen atom. Bohr postulated that the angular momentum of the electron is quantized.

Planck's laws and the photoelectric effect

For Bohr, light is electromagnetic radiation of a particular nature. Planck thought that light was formed of packets of energy, called quanta; each quantum has an associated frequency. Planck's law was formulated to explain the radiation emitted by a blackbody. For a blackbody that does not exceed a few hundred degrees, most of the radiation emitted is in the infrared part of the electromagnetic spectrum. At higher temperatures, part of the radiation is radiated as visible light. The value of Planck's constant is 6.62606957 × 10⁻³⁴ joule·second, with a standard uncertainty of 0.00000029 × 10⁻³⁴ joule·second. To explain the photoelectric effect, Einstein said that light is formed of packets of energy, called photons; a bright light emits many photons. Einstein's photoelectric effect is based on experiments in which a metal foil is bombarded with electromagnetic energy; electrons are expelled only if the frequency of the incident energy is high enough. Bohr took an atom of hydrogen and administered energy to it; once the threshold was overcome, the electron changed orbit (1 to 2, 2 to 3, …). Bohr identified seven orbits in the hydrogen atom. This phenomenon corresponded with the studies of Balmer, who discovered that the spectrum is formed of various discrete wavelengths, each with its own frequency.
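Bohr's quantization gives the hydrogen levels Eₙ = −13.6 eV/n², and the wavelengths Balmer catalogued follow from differences between levels. A small sketch using the standard 13.6 eV value:

    # Hydrogen levels in the Bohr model, E_n = -13.6 eV / n^2, and the
    # Balmer wavelengths emitted when the electron drops to orbit n = 2.
    h = 6.626e-34     # J s
    c = 2.998e8       # m / s
    eV = 1.602e-19    # J

    def E(n):
        return -13.6 / n**2   # eV

    for n in range(3, 8):
        dE = (E(n) - E(2)) * eV          # energy released, J
        lam = h * c / dE                 # emitted wavelength, m
        print(f"n = {n} -> 2 : {lam * 1e9:5.1f} nm")

The n = 3 → 2 transition comes out near 656 nm, the red Hα line of the Balmer series.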
Emission Spectrum

The energy released by electrons occupies the portion of the electromagnetic spectrum that we detect as visible light. Small variations are seen as light of different colors. All colors of the visible spectrum are visible when white light is diffracted with a prism. But when the light emitted by a hydrogen atom is split up, not all colors are present. Since Bohr thought that the electrons sit in different energy levels, he found seven energy levels in the hydrogen atom. Bohr hypothesized that when an electron receives energy from outside, the electron changes orbit; when the electron returns to the original orbit, it emits a photon and an energy transition occurs. In addition to the Balmer series, there are the Lyman series and the Paschen series. As in Rutherford's atomic model, the electron revolves in a circular, stationary orbit. Bohr's atomic model is thus an improvement of Rutherford's atom, and each orbit corresponds to certain energy values. The energy of the emitted photon corresponds to the energy difference between two orbits. Bohr explained how electrons could jump from one orbit to another only by emitting or absorbing energy in fixed quanta. Bohr's atomic model works well for simple atoms (such as hydrogen), but not for more complex atoms. It does not explain the Zeeman effect, it violates the Heisenberg uncertainty principle, and it does not work with complex atoms. Ten years later, contradictions in Bohr's atom appeared. It was necessary to make new hypotheses that captured the properties of particles such as the electron, the proton and the atom. Despite all this, Bohr's atomic model was very important, especially for astronomy. From the atom of Bohr, the quantum atomic model was born.

Quantum mechanical atomic model

The quantum mechanical atomic model is based on quantum theory: matter has characteristics of both particles and waves. According to the uncertainty principle, it is impossible to know the exact position and momentum of an electron at the same time; only one or the other. The quantum mechanical atomic model is therefore based on probability: it uses orbitals, regions of space in which the electron is likely to be found, which can have complex shapes. To describe the electrons and their orbitals, four quantum numbers were introduced: n, l, m and ms.

Schrödinger's equation

The Schrödinger equation is very important for quantum physics. Thanks to it one can work out the wave function of a particle and, from it, the probabilities for the particle's position and momentum. The Schrödinger equation describes the behavior of a dynamical system and is a wave equation that predicts the likelihood of events. The energy is quantized, labeled by a quantum number n, and can never be zero.

Quantum numbers and orbitals

The quantum numbers describe the position of the electron and the number of electrons that can occupy an orbital. The quantum number n describes the distance of the orbital from the nucleus and the size of the orbit. It takes positive integer values: 1, 2, 3, … The angular momentum quantum number l defines the shape of the orbital. It takes integer values from 0 to n−1; the values of l correspond to the s, p, d and f orbitals. Subshells are orbitals that have the same value of the quantum number n but different values of the quantum number l. The quantum number m describes the orientation of the orbital in space.
It takes values from −l through 0 to +l. Example: l = 1 gives m = −1, 0, +1. Spin quantum number "ms": specifies the orientation of the axis of rotation of an electron. It takes the values +½ or −½. The Pauli exclusion principle states that an orbital can hold only two electrons, with opposite spins. The electron configuration is the arrangement of the electrons among the orbitals of an atom; the arrangement of electrons follows the Aufbau principle.

The Bohr and quantum atomic models are very important for science: Bohr's atomic model is an expansion of Rutherford's atomic model, describes most of the accepted features of atomic theory and explains the Rydberg formula; the quantum atomic model introduced the concept of the orbital and explains the quantum numbers. Bohr's atomic model also matters for the discovery of the emission spectrum, which is very important in astronomy.
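The counting rules for the quantum numbers can be made concrete with a short sketch. The following illustrative Python code (not from the article) enumerates the allowed (l, m, ms) combinations for each n and recovers the familiar shell capacity 2n²:

```python
# Enumerate the allowed quantum numbers (n, l, m, ms) for the first few
# shells and confirm the shell capacity 2*n^2 implied by the Pauli principle.

SUBSHELL = "spdf"  # l = 0, 1, 2, 3

for n in range(1, 5):
    states = [(l, m, ms)
              for l in range(n)            # l = 0 .. n-1
              for m in range(-l, l + 1)    # m = -l .. +l
              for ms in (+0.5, -0.5)]      # two spin orientations
    print(f"n={n}: capacity {len(states)} electrons (2n^2 = {2*n*n})")
    for l in range(n):
        print(f"  {n}{SUBSHELL[l]}: holds {2*(2*l+1)} electrons")
```

Running it prints capacities 2, 8, 18 and 32 for n = 1 to 4, matching the periodic-table shell structure.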
Quantum 101

Reference: John Preskill’s notes on Quantum Information and Computation.

In quantum mechanics, a state is a complete description of a physical system. Mathematically, it is given by a ray (an equivalence class of vectors) in Hilbert space, {\mathcal{H}}, which is a vector space endowed with an inner product, and which is complete with respect to the norm induced by the latter. Simply put, Hilbert space is the abstract vector space in which quantum states “live”. Hilbert spaces can be real or complex, finite- or infinite-dimensional; for definiteness we’ll assume complex, finite-dimensional Hilbert spaces here. The inner product is simply a map that associates an element of the field to pairs of elements in the vector space; in this case, {\left<\cdot,\cdot\right>:\mathcal{H}\times\mathcal{H}\rightarrow\mathbb{C}}. In Dirac’s bra-ket notation, vectors (states) in {\mathcal{H}} are denoted {\left|\psi\right>}, and dual vectors (linear functionals on the states) are denoted {\left<\psi\right|}. The properties of the inner product {\left<x,y\right>\equiv\left<y|x\right>} may then be written as follows:

1. Positivity: {\left<\psi|\psi\right>\geq0}, with equality iff {\left|\psi\right>=0}.
2. Linearity: {\left<\phi\right|\left( a\left|\psi_1\right>+b\left|\psi_2\right>\right)=a\left<\phi|\psi_1\right>+b\left<\phi|\psi_2\right>}.
3. Skew symmetry: {\left<\phi|\psi\right>=\left<\psi|\phi\right>^*}.

The inner product induces a norm, {||\psi||=\left<\psi|\psi\right>^{1/2}}, which in turn induces a distance between states in {\mathcal{H}}. Any inner product space with such a distance function is a metric space, also known as a pre-Hilbert space. The aforementioned completeness criterion is what elevates a pre-Hilbert space to a Hilbert space: a pre-Hilbert space is complete if every Cauchy sequence converges with respect to the norm to an element in the space (intuitively, there are no “missing points”). The completeness criterion is important for infinite-dimensional Hilbert spaces, where it ensures the convergence of eigenfunction expansions that one encounters in, e.g., Fourier analysis. Note that we are free to choose the normalization {\left<\psi|\psi\right>=1}, since this merely amounts to choosing a representative of the equivalence class of vectors that differ by a nonzero complex scalar. In this sense, both {\left|\psi\right>} and {e^{i\alpha}\left|\psi\right>} represent the same state; only relative phase changes between states in a superposition are physically meaningful.

Although states are the basic mathematical objects in this formalism, we never measure them. Rather, we measure observables, which are self-adjoint (a.k.a. Hermitian) operators that act as linear maps on states, {A:\mathcal{H}\rightarrow\mathcal{H}}. Such an operator has a spectral representation, meaning that its eigenstates form a complete orthonormal basis in {\mathcal{H}}. This allows us to write an observable {A} as

\displaystyle A=\sum_n a_n P_n~, \ \ \ \ \ (1)

where {P_n} is the orthogonal projection onto the space of eigenvectors with eigenvalue {a_n} (such orthogonal projections can be proven to exist for complete inner product spaces, e.g., {\mathcal{H}}). In the simple case where {a_n} is non-degenerate, {P_n} is the projection onto the corresponding eigenvector, {P_n=\left|n\right>\left<n\right|}. Of course, given the orthonormality of the eigenstates, the projection operators satisfy {P_nP_m=\delta_{mn}P_n} and {P_n^\dagger=P_n}.
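A finite-dimensional toy example may help fix ideas. The following is an illustrative NumPy sketch (my example, not from Preskill's notes); the matrix A is an arbitrary Hermitian choice, and we verify the spectral representation (1) and the projector algebra numerically:

```python
# Spectral representation A = sum_n a_n P_n for a Hermitian observable.
import numpy as np

# An arbitrary Hermitian observable on a 3-dimensional Hilbert space
A = np.array([[2, 1, 0],
              [1, 2, 0],
              [0, 0, 5]], dtype=complex)

eigvals, eigvecs = np.linalg.eigh(A)   # real eigenvalues, orthonormal eigenvectors

# Orthogonal projectors P_n = |n><n| onto each (non-degenerate) eigenvector
projectors = [np.outer(v, v.conj()) for v in eigvecs.T]

# Reassemble A from its spectral representation
A_rebuilt = sum(lam * P for lam, P in zip(eigvals, projectors))
assert np.allclose(A, A_rebuilt)

# Verify the projector algebra P_n P_m = delta_{mn} P_n and P_n^dagger = P_n
for n, Pn in enumerate(projectors):
    assert np.allclose(Pn, Pn.conj().T)
    for m, Pm in enumerate(projectors):
        expected = Pn if n == m else np.zeros_like(Pn)
        assert np.allclose(Pn @ Pm, expected)
print("spectral representation and projector algebra verified")
```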
(Note that the spectral theorem is more subtle for unbounded operators in infinite-dimensional spaces, but that will not concern us here).

The numerical result of a measurement in quantum mechanics is given by the eigenvalue of the observable in question, {A}. This implies that the system is left, at the instant of measurement, in an eigenstate of {A} with the measured eigenvalue. If the quantum state immediately prior to a measurement is {\left|\psi\right>}, then the outcome {a_n} is obtained with probability

\displaystyle \mathrm{Prob}\left( a_n\right)=||P_n\left|\psi\right>||^2=\left<\psi|P_n|\psi\right>~, \ \ \ \ \ (2)

and the normalized quantum state with eigenvalue {a_n} is therefore

\displaystyle \frac{P_n\left|\psi\right>}{\left<\psi|P_n|\psi\right>^{1/2}}~. \ \ \ \ \ (3)

This is the point at which probability notoriously enters the picture; we’ll have more to say about this later. Note that, since the system is now in an eigenstate, immediately repeating the measurement will yield the same eigenvalue with probability 1. The fact that the measurement process appears to induce such decisiveness on the part of the state leads to the notion of “wave function collapse”, which is a horribly misleading oversimplification to which we will return. Suffice to say that wave functions don’t collapse, but we’ve a bit more math to cover before the reality can be made precise.

So much for states and observables. What about dynamics? As in classical mechanics, the Hamiltonian {H} is the generator of time translations, and its expectation value gives the energy of the state. The latter is a measurable quantity, which implies that in order to be a well-defined physical observable, the Hamiltonian operator must be self-adjoint, {H^\dagger=H}. By Stone’s theorem, the exponential of a self-adjoint operator is unitary; thus if {U=e^{-iHt}}, then {U} is a bounded linear operator on {\mathcal{H}} that satisfies {U^\dagger U=1}. Mathematically, this is why time evolution in quantum mechanics is unitary. Physically, this is simply the statement that time evolution preserves the inner product; i.e., that probabilities continue to sum to 1 (since, under time evolution by a unitary operator {U}, {\left<x|y\right>\rightarrow\left<x|U^\dagger U|y\right>=\left<x|y\right>}). Note that here we’re implicitly assuming that {H} is time-independent, in order to write the time translation operator as the exponential thereof. For time-dependent Hamiltonians, one can still construct a unitary evolution operator as a time-ordered exponential; it’s just less elegant. Given such an operator {U}, the evolution of a state over some finite interval {t} is unitary, and may be written

\displaystyle \left|\psi(t)\right>=U(t)\left|\psi(0)\right>=e^{-iHt}\left|\psi(0)\right>~, \ \ \ \ \ (4)

where in the second equality we’ve assumed {H} to be time-independent. If we then consider an infinitesimal transformation {\delta t} and expand both the left- and right-hand sides to first order, we have

\displaystyle \left|\psi(\delta t)\right>=\left|\psi(0)\right>+\delta t\frac{\mathrm{d}}{\mathrm{d} t}\left|\psi(0)\right>=\left(1-iH\delta t\right)\left|\psi(0)\right>~. \ \ \ \ \ (5)

Comparing terms at linear order, we recognize the Schrödinger equation,

\displaystyle \frac{\mathrm{d}}{\mathrm{d} t}\left|\psi(t)\right>=-iH\left|\psi(t)\right>~, \ \ \ \ \ (6)

which describes the evolution of states in the Schrödinger picture, wherein states are time-dependent while operators (including observables) are constants.
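The measurement rule (2)-(3) and the unitary evolution (4) are easy to check numerically for a qubit. A minimal sketch, assuming NumPy and SciPy are available; the state, Hamiltonian, and time below are arbitrary illustrative choices:

```python
# Born-rule probabilities and unitary time evolution for a qubit (hbar = 1).
import numpy as np
from scipy.linalg import expm

# Projectors onto the eigenstates of sigma_z
P_up = np.array([[1, 0], [0, 0]], dtype=complex)
P_dn = np.array([[0, 0], [0, 1]], dtype=complex)

psi = np.array([1, 1j], dtype=complex) / np.sqrt(2)   # a normalized state

# Prob(a_n) = <psi| P_n |psi>, and the probabilities sum to 1
p_up = (psi.conj() @ P_up @ psi).real
p_dn = (psi.conj() @ P_dn @ psi).real
assert np.isclose(p_up + p_dn, 1.0)

# Post-measurement state (3): P_n|psi> / <psi|P_n|psi>^{1/2}
psi_after = P_up @ psi / np.sqrt(p_up)
assert np.isclose(np.linalg.norm(psi_after), 1.0)

# Time evolution (4): U = exp(-iHt) is unitary for self-adjoint H
H = np.array([[1, 0.5], [0.5, -1]], dtype=complex)    # a Hermitian Hamiltonian
U = expm(-1j * H * 0.7)
assert np.allclose(U.conj().T @ U, np.eye(2))          # U^dagger U = 1
psi_t = U @ psi                                        # the norm is preserved
print(f"Prob(up) = {p_up:.3f}, evolved norm = {np.linalg.norm(psi_t):.3f}")
```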
This is precisely the opposite of the Heisenberg picture, wherein operators carry the time-dependence while states are constant. The two pictures are related by a change of basis, analogous to the relation between active and passive transformations. A third picture, the interaction picture, is often introduced later as a rather ham-fisted compromise between these two; it forms a fantastically successful premise for perturbation theory, but it doesn’t actually exist (see Haag’s theorem).

Note that unitary evolution, as encapsulated in the Schrödinger equation, is entirely deterministic: specification of an initial state {\left|\psi(0)\right>} allows us to predict the state at any future time. But as described above, measurement is probabilistic: despite our ability, in principle, to predict future states exactly, we cannot make definite predictions about measurement outcomes. One of the deepest (and most controversial) aspects of quantum mechanics is how deterministic evolution can nonetheless lead to probabilistic outcomes. Preskill quite aptly refers to this juxtaposition as a “disconcerting dualism”, and we shall return to it below.

Another interesting observation is that according to the Schrödinger equation, quantum mechanical evolution is linear, in contrast to the non-linear evolution often encountered in classical theories. This is tied up with the issue of probability above: probability theory is fundamentally linear. But the connection isn’t quite so straightforward. As an aside, the linearity of quantum mechanics is why quantum chaos is so subtle. Naïvely, quantum systems should be incapable of supporting chaos, since small perturbations to the initial state don’t wildly change the evolution in the case of linear dynamics. However, two states which are close in Hilbert space can nonetheless yield wildly different measurements. Quantum chaos has important implications in a number of areas, particularly holography and black holes; but that’s a subject for another post.

So, to summarize, we’ve seen that in quantum mechanics, states are vectors in Hilbert space, observables are Hermitian operators, symmetries are unitary operators, and measurements are orthogonal projections. Now here’s the kicker: everything we’ve said so far applies only to a single, isolated system. This is an idealization that simply does not exist. Another way to phrase this is that the formulation above only holds if applied to the entire universe. Even ignoring the issue of how one would make a measurement in such a scenario, this is clearly not a realistic description. In fact, it’s frankly wrong: in general (that is, when considering subsystems) states are not rays, measurements are not orthogonal projections, and evolution is not unitary!

The simplest extension of the above is to consider a bipartite system, the Hilbert space for which is a tensor product of the Hilbert spaces of the constituents, {\mathcal{H}=\mathcal{H}_A\otimes\mathcal{H}_B}. Given an orthonormal basis {\{\left|i\right>_A\}} for {\mathcal{H}_A} and {\{\left|j\right>_B\}} for {\mathcal{H}_B}, an arbitrary pure state of {\mathcal{H}_A\otimes\mathcal{H}_B} can be expanded as

\displaystyle \left|\psi\right>_{AB}=\sum_{i,j}a_{ij}\left|i\right>_A\otimes\left|j\right>_B~, \ \ \ \ \ (7)

where normalization requires the coefficients to satisfy {\sum_{i,j}|a_{ij}|^2=1}. We’ve referred to this as a pure state in contrast to a mixed state; the former correspond to rays in the total Hilbert space, while the latter do not. This is the first crucial correction alluded to above.
Let us now consider an observable that acts only on subsystem {A}, {M_A\otimes I_B}. Its expectation value is

\displaystyle \begin{aligned} \left<M_A\right>&={}_{AB}\left<\psi\right|M_A\otimes I_B\left|\psi\right>_{AB}\\ &=\sum_{mn}a_{mn}^*\left({}_A\left<m\right|\otimes{}_B\left<n\right|\right)\left( M_A\otimes I_B\right)\sum_{ij}a_{ij}\left(\left|i\right>_A\otimes\left|j\right>_B\right)\\ &=\sum_{ijm}a_{mj}^*a_{ij}{}_A\left<m\right| M_A\left|i\right>_A =\mathrm{tr}{M_A\rho_A}~, \end{aligned} \ \ \ \ \ (8)

where we’ve introduced the reduced density matrix

\displaystyle \rho_A=\mathrm{tr}_B\left(\left|\psi\right>_{AB~AB}\left<\psi\right|\right)~. \ \ \ \ \ (9)

In contrast to the trace, which is a scalar-valued function given by the sum of eigenvalues, the partial trace w.r.t. {B} is an operator-valued function given by summing over the basis elements of {B}:

\displaystyle \mathrm{tr}_B\left(\left|\psi\right>_{AB~AB}\left<\psi\right|\right) =\sum_j\big._B\left<j|\psi\right>_{AB~AB}\left<\psi|j\right>_B =\sum_{ijm}a_{mj}^*a_{ij}\left|i\right>_{A~A}\left<m\right|~. \ \ \ \ \ (10)

The elegant expression for the expectation value {\left<M_A\right>} above then follows by the cyclic property of the trace. The reduced density matrix will play a central role in what follows, so it’s worth elaborating the properties that follow from the definition above (in particular the explicit form (10)):

1. Hermiticity: {\rho_A=\rho_A^\dagger}.
2. Non-negativity (of its eigenvalues): {\forall~\left|\psi\right>_A}, {\big._A\left<\psi\right|\rho_A\left|\psi\right>_A=\sum_j\big|\sum_ia_{ij}\big._A\left<\psi|i\right>_A\big|^2\ge0}.
3. Unit norm: {\mathrm{tr}{\rho_A}=\sum_{ij}|a_{ij}|^2=1} (since {\left|\psi\right>_{AB}} is normalized).

As mentioned above, pure states are rays in Hilbert space, but mixed states are not. However, both are described by a reduced density matrix, which therefore provides a suitably general definition of quantum states. In the case of a pure state, {\rho_A=\left|\psi\right>_A\big._A\left<\psi\right|}, which is the projection operator onto the state (that is, onto the one-dimensional space spanned by {\left|\psi\right>_A}). The density matrix for a pure state is therefore idempotent, {\rho^2=\rho}. In contrast, for a general (mixed) state in the diagonal basis {\{\left|\psi_a\right>\}},

\displaystyle \rho_A=\sum_ap_a\left|\psi_a\right>\left<\psi_a\right|~, \ \ \ \ \ (11)

where the eigenvalues satisfy {0<p_a\le1} and {\sum_ap_a=1}. It follows that a pure state has only a single non-zero eigenvalue, which must be 1, while a mixed state contains two or more terms in the sum (and {\rho^2\neq\rho}).

As alluded to above, in a coherent superposition of states, the relative phase is physically meaningful (i.e., observable). This is merely a consequence of the linearity of the Schrödinger equation: any linear combination of solutions is also a solution. In contrast, the mixed state {\rho_A} is an incoherent superposition of eigenstates {\{\left|\psi_a\right>\}}, meaning that the relative phases are experimentally unobservable. This gives rise to the concept of entanglement: when two systems {A} and {B} interact, they become entangled (i.e., correlated). This destroys the coherence of the original states, such that some of the phases in the superposition become inaccessible if we measure {A} alone. Henceforth we will reserve the unqualified “superposition” to refer to the former case.
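For a two-qubit pure state with coefficient matrix a_ij, the partial trace (10) reduces to the matrix product ρ_A = a a†. A short illustrative NumPy sketch (my example, not Preskill's) using the Bell state, which also checks the three properties above and exhibits a pure global state with a maximally mixed reduced state:

```python
# Reduced density matrix of a two-qubit pure state via the partial trace.
import numpy as np

# Bell state |psi> = (|00> + |11>)/sqrt(2): coefficients a[i, j] of |i>_A |j>_B
a = np.array([[1, 0],
              [0, 1]], dtype=complex) / np.sqrt(2)

# rho_A = tr_B |psi><psi|  ->  (rho_A)_{im} = sum_j a_ij a*_mj, as in (10)
rho_A = a @ a.conj().T

# Defining properties: Hermitian, non-negative, unit trace
assert np.allclose(rho_A, rho_A.conj().T)
assert np.all(np.linalg.eigvalsh(rho_A) >= -1e-12)
assert np.isclose(np.trace(rho_A).real, 1.0)

# The global state is pure, but rho_A is maximally mixed: rho^2 != rho
print(rho_A)                                  # [[0.5, 0], [0, 0.5]]
print(np.allclose(rho_A @ rho_A, rho_A))      # False -> mixed

# Expectation value of M_A acting on A alone: <M_A> = tr(M_A rho_A), cf. (8)
M_A = np.array([[1, 0], [0, -1]], dtype=complex)
print(np.trace(M_A @ rho_A).real)             # 0.0 for the Bell state
```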
We should note that probability again enters this updated picture when we consider that the expectation value of any observable {M} acting on the subsystem described by {\rho} is

\displaystyle \left<M\right>=\mathrm{tr}{M\rho}=\sum_ap_a\left<\psi_a|M|\psi_a\right>~, \ \ \ \ \ (12)

which leads to the interpretation of {\rho} as describing a statistical ensemble of pure states {\{\left|\psi_a\right>\}}, each of which occurs with probability {p_a}. But we’re not quite ready to address the associated interpretive questions just yet.

As a concrete example, consider the spin state

\displaystyle \left|\uparrow_x\right>=\frac{1}{\sqrt{2}}\left(\left|\uparrow_z\right>+\left|\downarrow_z\right>\right)~, \ \ \ \ \ (13)

which is a (coherent) superposition of spins along the {z}-axis. Measuring the spin along the z-axis will result in {\left|\uparrow_z\right>} or {\left|\downarrow_z\right>} with probability {\frac{1}{2}} each; e.g., from (2):

\displaystyle \mathrm{Prob}\left(\uparrow_z\right)=||P_{\uparrow_z}\left|\uparrow_x\right>||^2 =\frac{1}{2}\left(\left<\uparrow_z|\uparrow_z\right>\right)^2=\frac{1}{2}~. \ \ \ \ \ (14)

In contrast, the ensemble in which each of these states occurs with this probability is

\displaystyle \rho=\frac{1}{2}\left(\left|\uparrow_z\right>\left<\uparrow_z\right|+\left|\downarrow_z\right>\left<\downarrow_z\right|\right)=\frac{1}{2}I~. \ \ \ \ \ (15)

But since the identity is invariant under a unitary change of basis ({U^\dagger IU=I}), we can obtain the state along an arbitrary axis {\left|\psi(\theta,\phi)\right>} by applying a suitable unitary transformation to {\left|\uparrow_z\right>} without changing the right-hand side. As a consequence, measuring the spin along any axis yields a completely random result:

\displaystyle \mathrm{tr}{\left|\psi(\theta,\phi)\right>\left<\psi(\theta,\phi)\right|\rho}=\frac{1}{2}~. \ \ \ \ \ (16)

In other words, we obtain spin up or down with equal probability, regardless of what we do. This is a reflection of the fact that the relative phases in a superposition are observable, but those in an ensemble are not. A mixed state can thus be thought of as an ensemble of pure states in many different ways, all of which are experimentally indistinguishable. (As an aside, further clarity on these relationships can be gained by studying the Bloch sphere, which I shall not digress upon here).

A bipartite pure state can be expressed in a standard form, which is often very useful. One begins by observing that an arbitrary state {\left|\psi\right>_{AB}\in\mathcal{H}_A\otimes\mathcal{H}_B} may be expanded as

\displaystyle \left|\psi\right>_{AB}=\sum_{i,j}a_{ij}\left|i\right>_A\left|j\right>_B=\sum_i\left|i\right>_A\left|\tilde i\right>_B~, \ \ \ \ \ (17)

where {\{\left|i\right>_A\}} and {\{\left|j\right>_B\}} are the orthonormal bases defined in (7), and in the second equality we’ve defined a new basis {\left|\tilde i\right>_B=\sum_ja_{ij}\left|j\right>_B}. A priori, {\{\left|\tilde i\right>_B\}} need not be orthonormal. However, {\{\left|i\right>_A\}} is, and we are free to choose it such that {\rho_A} is diagonal (cf. (11)), in which case we can write the reduced density matrix that describes subsystem {A} alone as

\displaystyle \rho_A=\sum_ip_i\left|i\right>_{A~A}\left<i\right|~.
\ \ \ \ \ (18)

However, by definition (9), this is also equivalent to tracing out system {B},

\displaystyle \begin{aligned} \rho_A&=\mathrm{tr}_B\left(\left|\psi\right>_{AB~AB}\left<\psi\right|\right) =\mathrm{tr}_B\left(\sum_{ij}\left|i\right>_A\left|\tilde i\right>_{B~A}\left<j\right|\big._{B}\left<\tilde j\right|\right)\\ &=\sum_{ij}\left|i\right>_{A~A}\left<j\right|\sum_k\big._B\left<\tilde j|k\right>_B\big._B\left<k|\tilde i\right>_B =\sum_{ij}\big._B\left<\tilde j|\tilde i\right>_B\left|i\right>_{A~A}\left<j\right|~. \end{aligned} \ \ \ \ \ (19)

And therefore, it must be the case that

\displaystyle \big._B\left<\tilde j|\tilde i\right>_B=\delta_{ij}p_i~, \ \ \ \ \ (20)

i.e., the new basis {\{\left|\tilde i\right>_B\}} is orthogonal after all! Furthermore, by simply rescaling the vectors as {\left|i'\right>_B\equiv p_i^{-1/2}\left|\tilde i\right>_B}, we find that we can express the bipartite state (17) as

\displaystyle \left|\psi\right>_{AB}=\sum_i\sqrt{p_i}\left|i\right>_A\left|i'\right>_B=\sum_i\left|i\right>_A\left|\tilde i\right>_B~, \ \ \ \ \ (21)

which is the Schmidt decomposition of the bipartite pure state {\left|\psi\right>_{AB}} in terms of a particular orthonormal basis of {\mathcal{H}_A\otimes\mathcal{H}_B}. Note that our derivation was completely general; any bipartite pure state can be expressed in this form, though of course the particular orthonormal basis employed will depend on the state (that is, we can’t simultaneously expand {\left|\psi\right>_{AB}} and {\left|\phi\right>_{AB}} using the same orthonormal basis for {\mathcal{H}_A\otimes\mathcal{H}_B}).

Observe that by tracing over one of the Hilbert spaces in (21), we find that both {\rho_A} and {\rho_B} have the same nonzero eigenvalues, e.g.,

\displaystyle \rho_B=\mathrm{tr}_A\left(\left|\psi\right>_{AB~AB}{\left<\psi\right|}\right)=\sum_ip_i\left|i'\right>_B\big._B\left<i'\right|~, \ \ \ \ \ (22)

though since the dimensions of {\mathcal{H}_A} and {\mathcal{H}_B} need not necessarily be equal, the number of zero eigenvalues can still differ. Provided that {\rho_A} and {\rho_B} have no degenerate non-zero eigenvalues, they uniquely determine the Schmidt decomposition: one can diagonalize the reduced density matrices, and then pair up eigenstates with the same eigenvalue to determine (21). (There is still the potential for ambiguity in the basis if either {\rho_A} or {\rho_B} individually has degenerate eigenvalues—to wit, which {\left|i'\right>_B} gets paired with which {\left|i\right>_A}).

The Schmidt decomposition is useful for characterizing whether pure states are separable or entangled (for mixed states, the situation is more subtle). In particular, the bipartite pure state above, {\left|\psi\right>_{AB}}, is separable iff there is only one non-zero Schmidt coefficient {p_i}. Otherwise, the state is entangled. If all the Schmidt coefficients are equal (and non-zero), then the state is maximally entangled. On account of this classification, it is common to associate a Schmidt number to the state {\left|\psi\right>_{AB}}, which is the number of non-zero eigenvalues (equivalently, the number of terms) in the decomposition (note that this implies the Schmidt number is a positive integer). Thus a pure state is separable iff its Schmidt number is 1.
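Numerically, the Schmidt decomposition (21) is just the singular value decomposition of the coefficient matrix a_ij, with singular values √p_i; this is a standard fact, though not stated in the text above. An illustrative NumPy sketch with unequal subsystem dimensions:

```python
# Schmidt decomposition of a random bipartite pure state via the SVD.
import numpy as np

rng = np.random.default_rng(0)
dA, dB = 2, 3
a = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
a /= np.linalg.norm(a)                      # normalize the pure state

U, s, Vh = np.linalg.svd(a)                 # a = U diag(s) Vh
p = s**2                                    # Schmidt coefficients p_i
print("Schmidt coefficients:", p, "sum =", p.sum())

schmidt_number = np.count_nonzero(p > 1e-12)
print("Schmidt number:", schmidt_number,
      "-> entangled" if schmidt_number > 1 else "-> separable")

# rho_A and rho_B share the same nonzero spectrum {p_i}, cf. (18) and (22)
rho_A = a @ a.conj().T
rho_B = a.conj().T @ a
print(np.sort(np.linalg.eigvalsh(rho_A))[::-1])   # p_1, p_2
print(np.sort(np.linalg.eigvalsh(rho_B))[::-1])   # p_1, p_2, 0
```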
When the Schmidt number is 1, we can write the state as a direct product of states in {\mathcal{H}_A} and {\mathcal{H}_B}: {\left|\psi\right>_{AB}=\left|\phi\right>_A\otimes\left|\chi\right>_B}, which further implies that {\rho_A=\left|\phi\right>_A\big._A\left<\phi\right|} and {\rho_B=\left|\chi\right>_B\big._B\left<\chi\right|} are each pure. In contrast, an entangled state, with Schmidt number greater than 1, has no such direct product expression, in which case {\rho_A} and {\rho_B} are mixed. Entanglement is quantified by the von Neumann entropy. Entanglement entropy is a tremendously rich topic in itself, to say nothing of its connections to other areas of physics, and thus we defer further discussion elsewhere.

To summarize, it is only in the case of idealized, isolated systems (i.e., the entire universe) that quantum states may be described by rays in Hilbert space. In reality, since we always deal with subsystems, states are given by (reduced) density matrices defined by tracing out the complement of the Hilbert space under consideration. (The Hilbert spaces themselves are associated with spatial regions, and thus what we mean is that we trace over all degrees of freedom localized in the complement of our subregion. As mentioned elsewhere however, this is generally still too naïve).

It remains to justify our earlier claim that generic measurements are not orthogonal projections, and evolution non-unitary. In the course of doing so, we shall resolve Preskill’s “disconcerting dualism” between determinism and probability, and explain why the notion of wave-function collapse is an illusion. But this will require us to develop slightly beyond the basic mathematical machinery above, and as such we partition the discussion into Part 2.
This answer of mine has been strongly criticized on the ground that it is no more than philosophical blabbering. Well, it may well be. But people seem to be of the opinion that the HUP alone does not ensure randomness, and that you need Bell's theorem and other features for the randomness in QM. However, I still believe it is the HUP which is all one needs to appreciate the probabilistic feature of QM. Bell's theorem and other such results only reinforce this probabilistic view. I am very much curious to know the right answer.

Asking a separate question instead of abusing the comments is a very good idea! –  Sklivvz Apr 9 '11 at 10:17
@Sklivvz: except that this question is not interested in the past discussion (as per @sb1's comment to my answer) so the whole talk about Bell's theorem (which is just misunderstanding on sb1's part anyway) shouldn't be present in this question at all. –  Marek Apr 9 '11 at 10:29
I'm appalled that so many people piled on your previous Answer without leaving any comments. Kudos to Marek for having left a comment; however, the part of his comment that I agree with is that your Answer was not much to the point. It may be that the other downvoters felt that you didn't Answer the Question, not that it was philosophical blathering. I didn't downvote your Answer, but nor did I upvote it. –  Peter Morgan Apr 9 '11 at 13:03
@Peter: No, there were comments exchanged which were not quite friendly. I guess the moderator has removed them all except the first comment. –  user1355 Apr 9 '11 at 13:26
I am totally clueless about the above comment made by @Marek. He seems to assume a lot of things which makes one quite surprised and detested! –  user1355 Apr 9 '11 at 13:30

6 Answers

The title question is: Does the HUP alone ensure the randomness of QM? I claim that the answer to this question is: No.

The HUP has the basic forms:

$$\Delta E\,\Delta t \ge \hbar/2$$
$$\Delta x\,\Delta p \ge \hbar/2$$

Furthermore, quantum mechanics books prove that for non-commuting observables, $$[P,Q] \neq 0~,$$ one has the general (Robertson) relation $$\Delta P\,\Delta Q \ge \tfrac{1}{2}\left|\langle[P,Q]\rangle\right|~.$$

So the HUP is proven generally as a consequence of the non-commutativity of the observables. Understanding why there are non-commuting observables in QM takes us to the rest of the postulates of QM, and so explains why the other answers say that the HUP is a consequence of QM in toto. However, there is more to the topic of "QM randomness" than this, and we have not yet responded to your remarks about Bell's Theorem.

The first point to note is that in classical engineering there is a concept of the time domain and the frequency domain (for a wave), and the associated law

$$\Delta \omega\,\Delta t \ge 1~.$$

This law is a consequence of the Fourier transform between these domains, whose kernel is $$e^{-i\omega t}~.$$ So the HUP formula is more widespread than just quantum mechanics. Of course, if one puts $$E=\hbar \omega$$ then one obtains the quantum relation once again!

So where does quantum randomness (assuming, for the moment, that that is the correct term) come from? One published book that makes this point explicitly is Roger Penrose, "The Emperor's New Mind", p. 297:

[In quantum collapse..] these real numbers play a role as actual probabilities for the alternatives in question. Only one of the alternatives survives into the actuality of physical experience.. It is here, and only here, that the non-determinism of quantum theory makes its entry.
The italics are mine (and this is where Penrose introduces his R definition for describing quantum wave function reduction). Thus, if you are familiar with quantum mechanics, this is the reduction postulate (in words). So we have several different concepts in play here: HUP, QM postulates, Bell's Theorem, randomness.

I have no confusion about the fundamentals of quantum mechanics, any more than anybody else here. Your engineering example is cute but wrong in the sense that the error in measurement in that case can be made arbitrarily small by more accurate instruments. Any theory that comes with a HUP-like principle, where the uncertainty can't be made arbitrarily small, has to have probabilistic features. That's my understanding. –  user1355 Apr 9 '11 at 16:02
@sb1 : I think the moral of how this site (has to) work is that if a question appears to be asking for a textbook explanation of something, that is what will be provided by default. If one means to challenge, or extend, the textbooks on an apparently basic topic (which fundamental ones are) then the question formation needs to be referenced and so on. Phrases like "People believe.." don't convey exactly what was intended. So yes, there will be misunderstandings about what was intended here. I worked on the question title itself this time as my source for your meaning. Try again though. –  Roy Simpson Apr 9 '11 at 21:18
@sb1, I think the "engineering example" Roy cites is relevant to this discussion, but I point out that it emerges in deterministic signal processing, that stochastic SP is not needed (which you both may know). I find that a good author on this issue in SP is Leon Cohen, who I think writes very clearly. His "Time-Frequency Distributions - A Review", PROCEEDINGS OF THE IEEE, VOL. 77, NO. 7, JULY 1989, page 941, where he discusses the relationship between quantum and SP from 30 years' experience, is 40 pages that are well worth reading. –  Peter Morgan Apr 9 '11 at 22:32
@sb1 The error in measurement in the SP case cannot be made arbitrarily small because the concept of measuring the amplitude of the signal at a given frequency requires measurement of the signal at all times, so that a perfect Fourier transform of the signal can be constructed. If we measure the signal for only a finite time, we can effectively only compute the Fourier components of the signal we want in convolution with a window function. –  Peter Morgan Apr 9 '11 at 22:41
@Peter, thanks for the link. Actually this Answer is only part of a larger Answer I had developed for this question, which developed the point about the time-frequency domain (and another example) much further. But when I checked with the OP question I found that my conclusions had nothing much to do with the original question, so I truncated the answer to what the OP seemed to be asking. This Answer is now being downvoted probably because it doesn't address an ambiguous question, so I will probably delete it and not answer any more ambiguous questions of this type. –  Roy Simpson Apr 10 '11 at 16:08

I think I'm largely going to repeat what Roy, Vladimir, and Jaskey13 have already said, but perhaps, I hope, not so totally that this won't be Useful. I take it that the HUP, despite its grandiose title, is not a principle; it's derived as a consequence of the various mathematical structures of QM. As such, the HUP is a part of a characterization of the properties of QM.
The HUP is, however, something of a lesser part of that characterization, because it is not enough to characterize all the differences between classical stochastic physics and QM. It is possible, as Roy says, to construct local classical models for which, under a reasonable physical interpretation of the mathematics, the HUP is true.

I'm not completely sure what you mean by "HUP alone does not ensure randomness"? I suppose the interpretation of QM is all probability all the time. In various comments you protest, and I believe, that you know the axioms of QM and their basic interpretation well enough. What I take you to mean is that "HUP alone does not ensure intrinsic randomness". This qualification, which is fairly commonly used, makes sense, to me, of your following comment, with my qualification inserted, that "you need Bell's theorem and other features for the [intrinsic] randomness in QM", whereas the relevance of Bell inequalities to your Question seems to have troubled other people here. I take "intrinsic" to be a rather coded way to say that a classical probability theory is not isomorphic to quantum probability theory.

I've previously cited on Physics SE the presentations of Bell-CHSH inequalities that I think best make this clear, due to Landau and to de Muynck, here, where I note that you also left a notably Useful(8) Answer. Their derivations use the CCRs in a way that is not significantly more obscure than does the derivation of the HUP. I take the Bell-CHSH inequalities to be a reasonable lowest-order characterization of the difference. There is of course confusion concerning the relevance of locality to the Bell inequalities, which I think could get in the way of my discussion here, but I see that you have a relatively sophisticated view of that confusion.

The UP can be derived from the Schrödinger equation, and introductory textbooks normally derive it. But in advanced courses one learns that the Schrödinger equation can be derived from the basic axioms of quantum theory. These axioms are held to be the most fundamental postulates about nature, and they lead directly to the general uncertainty relationship. The catch here, imho, is that the UP encompasses the gist of the theory. In order to be a quantum theory, all a theory needs is to be consistent with the UP. It is truly a fundamental principle of QT in this sense. –  user1355 Apr 10 '11 at 15:10
-1 is not mine. –  user1355 Apr 10 '11 at 15:14
@sb1 Downvoting was all too likely for my Answer. Downvotes are meaningless unless someone is wise enough to be able to say why, at least for other readers, if not for the Answerer. Your idea that the HUP is truly a principle, and enough to make a theory a quantum theory, seems to me quite radical. I think I don't see that in quantum logic or axiomatic approaches? It's often done from the CCRs, which give CHSH, etc. Is there a proof that a theory that satisfies the HUP (and what other conditions?) must violate the Bell inequalities? Otherwise, what you're proposing seems rather different from QM. –  Peter Morgan Apr 10 '11 at 17:23

I'm not sure if an undergrad's perspective would be useful here - but I'll give it a shot (at worst I'll learn something new). David Griffiths's "Introduction to Quantum Mechanics" takes great care to motivate the uncertainty principle from more basic founding postulates of Q.M. First, Hilbert space and the state vector, as the description of the particle, are defined.
Next, classical observables are formulated as operators on the state vector. Eigenvalues and the bases of the operators are explored, and it is revealed that for certain (conjugate) operators, the state vector cannot be written in the same basis if a unique value for those operators' corresponding observables is desired. It is shown that such operators do not commute. It is finally shown that from non-commutativity the uncertainty principle can be mathematically derived.

So the point of this summary (all of which I'm sure you already know well) is the order in which things are done. Griffiths is so far my favorite textbook author, and I'm sure there is a reason he laid things out so explicitly. He stresses the classical nature of the observables and how the state vector is truly fundamental. It always seemed to me (and thus how I understand it) that what he was getting at is that observables like position and momentum are classical, and what we are doing is trying to perform classical observation on a quantum system. When we attempt to do this, we are putting limitations on the state vector that nature simply doesn't impose on her own. The result is that we end up with observables that are not co-measurable, simply because of our classical bias in "translating" the true state of the particle, which is not completely expressible solely in terms of classical observables.

To me this - what Q.M. is actually doing - seems more fundamental than the HUP. Perhaps it borders on metaphysics, but it seems to be the logical conclusion of the math/algorithms. And because Bell's Theorem is mentioned: the inputs for this theorem are already there in Q.M. - the theorem simply tells us how to properly combine them and then conclude the character of the correlations between observables. In a way (once again, as it seems to me) it "measures" what kind of probabilities we are expressing in our theory.

It's true that the uncertainty principle is derived, but what you say in your third paragraph doesn't make much sense. There's not really anything classical about observables. In fact, they act very non-classically, since they have nontrivial commutation relations with other things. Observables are operators on the Hilbert space of states, and "project" out (in some sense) the information contained in the state vector you're looking for. The "classical" things are more related to expectation values, not the operators. I think Griffiths discusses this somewhere in the exercises. –  Mr X Apr 10 '11 at 14:48
@Jeremy Price What I was getting at is that things like "momentum" and "position" are not true quantum mechanical properties - rather, they are classical measures that we apply to the quantum world. But it is a true interpretation about the expectation values, from what I've read. Which is the root of the uncertainty principle, is it not - as the uncertainty of an observable is expressed as a deviation from the expectation value? (In Griffiths's derivation, at least.) –  jaskey13 Apr 10 '11 at 15:16
@jaskey13 I don't think it's right to say that about momentum and position. They're very real things quantum mechanically; we still have quantum mechanical analogues of, e.g., conservation of momentum and energy, despite the fact that they are not "well-defined" in a classical sense.
In fact, if you look at how to derive the Schroedinger equation, you replace operators into E = p^2/2m and act this on a function, usually as a function of position, which is surely taking all of these properties very seriously and fundamentally! –  Mr X Apr 12 '11 at 16:55
@Jeremy Price Are these analogues those that come from an application of Ehrenfest's theorem? If not, could you please tell me what they are? –  jaskey13 Apr 12 '11 at 21:48
My edition is surprisingly scant when it comes to conservation laws. Maybe it is time to move on to something more advanced. –  jaskey13 Apr 12 '11 at 22:56

In my opinion the HUP is not a "principle" but a consequence of the mathematical framework of QM - it is derived rather than "postulated". Randomness or uncertainty in measuring some variable in some state is not strictly related to the uncertainty of its canonically conjugate variable. The HUP establishes some limitation on them, and that's it. What I want to underline is that, say, the uncertainty in momentum is determined by the given QM state itself.

About randomness, it is easy to understand if we remember that the information is gathered with the help of photons. When the number of photons in one "observation" is large, their average is well determined, and this is what classical physics deals with. If the number of photons is small, the uncertainty gives the impression of strong randomness in measuring, say, the position of a body. Even the Moon's position is uncertain if based on few-photon measurements.

Uncertainty in measurements is a fundamental feature of states in physics. Determinism is possible only for "well-averaged" measurements. Look at the Ehrenfest equations - they involve average (expectation) values. That implies many, many measurements. In other words, classical determinism is due to its inclusive character.

Well, you misinterpreted what I (and others) said in at least two important ways.

1. Bell's theorem surely isn't responsible for randomness in QM. That's because it doesn't actually tell you anything about QM itself, only about other theories trying to reproduce the same results that QM (and nature) produces. The reason I mentioned it is that it (severely) restricts the class of non-random theories that can describe nature. Without such a theorem one might hope (and people still do) that it is possible to construct a deterministic framework that could be compatible with observations. So the HUP certainly doesn't imply intrinsic randomness. You need further work to establish that no viable theory (and not just QM) is deterministic. Measurement of the violation of Bell's inequalities is what does it (at least if one assumes locality).

2. QM is based on lots of principles. The HUP is fundamental (and is built in by including non-commutative operators in the framework) but no less fundamental than other postulates. Trying to isolate one particular feature of a theory doesn't always make sense. You could try to obtain deterministic QM by removing the HUP, but that essentially means letting $\hbar \to 0$ and obtaining classical physics, thereby losing all the other special effects of QM.

In other words, your statement that the "HUP is all one needs to appreciate the probabilistic feature of QM" couldn't be further removed from reality. To appreciate this probabilistic aspect, one needs to master the mathematical formalism of QM, the way it connects to experiment, and the way measurements are interpreted.
The HUP is only a small part of it and, actually, the one thing you almost never care about, as it is built into the theory from the start.

You have misunderstood my point as well. I am well aware of all the fundamental postulates of QM. You need all those postulates for a fully functional Q.T. However, my point is that the UP is the postulate for the essential qualitative element of the randomness in the theory. –  user1355 Apr 9 '11 at 8:43
@sb1: that might be the case but you start your question with "people seem to be of the opinion that..." which is simply not the case. People talked about something completely different last time so I am not sure why you bring that in now if you only intend to give downvotes for people's replies. If you only want to talk about pure QM then I suggest you edit your question in order not to confuse people further. –  Marek Apr 9 '11 at 8:49
@sb1 I think there's a Useful Answer in here (my +1). –  Peter Morgan Apr 9 '11 at 13:37
@Peter: thank you. Well, I believe all I said is correct and relevant, but whether I've read @sb1's mind correctly as to what his intents were with this question, that's another story... –  Marek Apr 9 '11 at 14:14

It's very strange for someone to say that "Bell's theorem ensures something in quantum mechanics". Bell's theorem is a theorem - something that can be mathematically proved to hold given the assumptions. It's valid in the same sense as $1+1=2$. Is $1+1=2$ needed for something in physics? Maybe - but the question clearly makes no sense. Mathematics is always valid in physics - and everywhere else.

However, even the assumptions of Bell's theorem surely can't be "necessary building blocks" for some results in quantum mechanics, because Bell's theorem is not a theorem about quantum mechanics at all. It is a theorem (an inequality) about local realist theories - exactly the kind of theories that quantum mechanics is not. Whether someone needs $1+1=2$ doesn't matter, because this fact is imposed upon him anyway. Any proof may be modified so that $1+1=2$ is needed, and any proof may be modified so that $1+1=2$ is not needed.

But even if one ignores the comment about "Bell's theorem and other such results" - which can't possibly have anything to do with the question - it's nontrivial to make the question precise. The uncertainty principle is normally formulated as a part of quantum mechanics - we say that $\Delta x$ and $\Delta p$ can't have well-defined sharp values at the same moment. What does it mean for them not to have sharp values? Well, obviously, it means that one measures their values with an error margin, and the fluctuations, or the choice of the measured value from the allowed distribution, have to be random. If it were not random, there would have to be another quantity for which one could repeat the same discussion. Again, if the uncertainty principle applied to these hidden variables (and a complementary one), it would imply that their values have to be random.

Do you allow me to assume that the HUP holds for whatever variables we have? If you do, obviously, there have to be random things in the Universe. But even the term "random" is too ill-defined. Do you require some special vanishing of correlations etc.? If you do, shouldn't you describe what those requirements are? So I don't think it's possible to fully answer vague questions of this kind.
I would add a related comment: quantum mechanics - with its random character - is the only mathematically possible and self-consistent framework that is compatible with certain basic observations of quantum phenomena. The outcomes in quantum mechanics take place randomly, with probabilities and probability distributions that can be calculated from the squared probability amplitudes, and all other attempts to modify the basic framework of quantum mechanics have been ruled out. If it's so, and it is so, there's really no point in trying to decompose the postulates of quantum mechanics into pieces, because the pieces only combine into a viable theoretical structure - able to explain the behavior of important worlds such as ours - when all these postulates are taken seriously at the same moment.
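A numerical footnote to the commutator-based derivations quoted in the answers above: the Robertson bound Δx·Δp ≥ ℏ/2 can be checked directly on a grid. The following sketch (illustrative Python with ℏ = 1; the Gaussian width and grid are arbitrary choices) shows that a Gaussian wave packet saturates the bound:

```python
# Numerical check of dx * dp >= 1/2 (hbar = 1) for a Gaussian wave packet.
import numpy as np

N, L = 2048, 40.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]

sigma = 1.3
psi = np.exp(-x**2 / (4 * sigma**2))          # Gaussian wave packet
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

def expval(op_psi):
    """<psi| O |psi> on the grid, given O acting on psi."""
    return np.real(np.sum(np.conj(psi) * op_psi) * dx)

mean_x = expval(x * psi)
var_x = expval(x**2 * psi) - mean_x**2

# Momentum via the derivative representation p = -i d/dx
dpsi = np.gradient(psi, dx)
d2psi = np.gradient(dpsi, dx)
mean_p = expval(-1j * dpsi)
var_p = expval(-d2psi) - mean_p**2

print(f"dx*dp = {np.sqrt(var_x * var_p):.4f}  (bound: 0.5)")
# A Gaussian saturates the bound: the product comes out ~0.5, never below.
```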
Wednesday, September 08, 2010

The Narrow Cosmic Performance Envelope

The Cosmos must have a very particular performance envelope if evolution is going to get anywhere very fast (i.e. 0 to Life in a mere 15 billion years). Brian Charlwood has posted a comment on my Blog post Not a Lot of People Know That. As it's difficult to work with those narrow comment columns I thought I would put my reply here. Brian's comments are in italics.

You say //So evolution is not a fluke process as it has to be resourced by probabilistic biases.// so it is either a deterministic system or it is a random system.

I am not happy with this determinism vs. randomness dichotomy. To appreciate this, consider the tossing of a coin. The average coin gives a random configuration of heads/tails with a fifty/fifty mix. But imagine some kind of "tossing" system where the mix was skewed in favour of heads. In fact, imagine that on average tails only turned up once a year. This system is much closer to a "deterministic" system than it is to the maximally random system of a 50/50 mix. To my mind the lesson here is that the apparent dichotomy of randomness vs. determinism does no justice to what is in fact a continuum.

A deterministic system requires two ingredients: 1/ A state space 2/ An updating rule. For example, a pendulum has as a state space all possible positions of the pendulum, and as updating rules the laws of Newton (gravity, F=ma), which tell you how to go from one state to another, for instance from the pendulum in the lowest position to the pendulum in the highest position on the left.

Fine, I'm not averse to that neat way of modeling general deterministic systems as they develop in time, but for myself I've scrapped the notion of time. I think of applied mathematics as a set of algorithms for embodying descriptive information about the "timeless" structure of systems. This is partly a result of an acquaintance with relativity, which makes the notion of a strict temporal sequencing across the vastness of space problematical. Also, don't forget that these mathematical systems can also be used to make "predictions" about the past (or post-dictions), a fact which also suggests that mathematical models are "information"-bearing descriptive objects rather than being what I can only best refer to here as "deeply causative ontologies".

A random system is a bit more intricate. It can be built up with 1/ A state space 2/ An updating rule. Huh? Looks the same. Yeah, but now note what the rule is updating. Contrary to deterministic systems, the updating rule does not tell us what the next state is going to look like given a previous state; it only tells us how to update the probability of a certain state. Actually, that is only one possible kind of random system; one could also build updating rules which are themselves random. So you have a lot of possibilities: on the level of probabilities, a random system can look like a deterministic system, but it is really only predicting probabilities. It can also be random on the level of probabilities, requiring a kind of meta-probabilistic description.

If I understand you right then the Schrödinger equation is an example of a system that updates probabilities deterministically. The meta-probabilistic description you talk of is, I think, mathematically equivalent to conditional probabilities. This comes up in a random walk, where steps to the left or right by a given distance are assigned probabilities.
But conceivably step sizes could also vary in a probabilistic way, thus superimposing probabilities on probabilities, i.e. conditional probabilities. In the random walk scenario the fascinating upshot of this intricacy is that it has no effect on the general probability distribution as it develops in space (see the "central limit theorem").

Anyway, these are technical details, but let's look at what happens when we have a deterministic system and we introduce the slightest bit of randomness. Take again the pendulum. What might happen is that we don't know the initial state with certainty; the result is that you still have a deterministic updating rule, but you can now only predict how the probability of having a certain state will evolve. Now, this is still a deterministic system; the probability only creeps in because we have no knowledge of the initial state. But suppose the pendulum was driven by a genuine random system. Say that the initial state of the pendulum is chosen by looking at the state of a radioactive atom. If the atom decayed in a certain time interval, we let the pendulum start on the left; if not, on the right. The pendulum as such is still a deterministic system. But because we have coupled it to a random system, the system as a whole becomes random. This randomness would be irreducible.

This would classify as one of those systems on the deterministic/random spectrum. The mathematics of classical mechanics means that not just any behavior is open to the pendulum system, and therefore it is not maximally random; the system is constrained by classical mechanics to behave within certain limits. The uncertainty in initial conditions, when combined with the mathematical constraint of classical mechanics, would produce a system that behaves randomly only within a limited envelope of randomness; the important point to note is that it is an envelope, that is, an object with limits, albeit fuzzy limits like a cloud. Limits imply order. Thus, we have here a system that is a blend of order and randomness; think back to that coin tossing system where tails turned up randomly but very infrequently.

So, if you want to say that there is a part of evolution that is random, the consequence is that the whole of it is random and therefore it is all one big undesigned fluke.

No, I don't believe we can yet go this far. Your randomly perturbed pendulum provides a useful metaphor: relative to the entire space of possibility the pendulum's behavior is highly organized, its degrees of freedom very limited. Here, once again, the probabilities are concentrated in a relatively narrow envelope of behavior, just as they must be in any working evolutionary system - unless, of course, one invokes some kind of multiverse, which is one (speculative) way of attempting to maintain the "It's just one big fluke" theory. Otherwise, just how we ended up with a universe that has a narrow probability envelope (i.e. an ordered universe) is, needless to say, the big contention that gets people hot under the collar.
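The random-walk claim above (that layering randomness on randomness still yields the familiar Gaussian envelope) is easy to see in simulation. A minimal Python sketch, with a biased-coin direction and an exponentially distributed step size as arbitrary illustrative choices:

```python
# Random walk with "probabilities on probabilities": the direction is a
# biased coin (heads-heavy, like the once-a-year-tails machine), and the
# step size is itself drawn at random. By the central limit theorem the
# final positions still settle into a narrow Gaussian-like envelope.
import numpy as np

rng = np.random.default_rng(42)
n_walkers, n_steps = 20_000, 200

directions = rng.choice([+1, -1], p=[0.8, 0.2], size=(n_walkers, n_steps))
sizes = rng.exponential(scale=1.0, size=(n_walkers, n_steps))

positions = (directions * sizes).sum(axis=1)

print(f"mean = {positions.mean():.1f}, std = {positions.std():.1f}")
inside = np.mean(np.abs(positions - positions.mean()) < 3 * positions.std())
print(f"fraction within 3 sigma: {inside:.4f}")   # ~0.997, Gaussian-like
```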
Quantum Mechanics: Hydrogen Atom and Electron Spin

By Dragica Vasileska (Arizona State University) and Gerhard Klimeck (Purdue University)

A hydrogen atom is an atom of the chemical element hydrogen. The electrically neutral atom contains a single positively-charged proton and a single negatively-charged electron bound to the nucleus by the Coulomb force. The most abundant isotope, hydrogen-1, protium, or light hydrogen, contains no neutrons; other isotopes contain one or more neutrons. This article primarily concerns hydrogen-1.

The hydrogen atom has special significance in quantum mechanics and quantum field theory as a simple two-body problem physical system which has yielded many simple analytical solutions in closed form. In 1913, Niels Bohr obtained the spectral frequencies of the hydrogen atom after making a number of simplifying assumptions. These assumptions, the cornerstones of the Bohr model, were not fully correct but did yield the correct energy answers. Bohr's results for the frequencies and underlying energy values were confirmed by the full quantum-mechanical analysis which uses the Schrödinger equation, as was shown in 1925/26. The solution to the Schrödinger equation for hydrogen is analytical. From this, the hydrogen energy levels and thus the frequencies of the hydrogen spectral lines can be calculated. The solution of the Schrödinger equation goes much further than the Bohr model, however, because it also yields the shape of the electron's wave function ("orbital") for the various possible quantum-mechanical states, thus explaining the anisotropic character of atomic bonds.

The Schrödinger equation also applies to more complicated atoms and molecules. However, in most such cases the solution is not analytical, and either computer calculations are necessary or simplifying assumptions must be made. The solution of the Schrödinger equation for the hydrogen atom is provided below:

• Slides on the solution of the Schrödinger equation for the hydrogen atom

In physics and chemistry, spin refers to a non-classical kind of angular momentum intrinsic to a body, as opposed to orbital angular momentum, which is the motion of its center of mass about an external point. A particle's spin is essentially the direction a particle turns along a given axis, which in turn can be used to determine the particle's magnetism.[1] Although this special property is only explained in the relativistic quantum mechanics of Paul Dirac, it plays a most important role already in non-relativistic quantum mechanics, e.g., it essentially determines the structure of atoms.

In classical mechanics, any spin angular momentum of a body is associated with self-rotation, e.g., the rotation of the body around its own center of mass. For example, the spin of the Earth is associated with its daily rotation about the polar axis. On the other hand, the orbital angular momentum of the Earth is associated with its annual motion around the Sun. In fact, in classical theories there is no analogue to the quantum mechanical property meant by the name spin. The concept of this nonclassical property of elementary particles was first proposed in 1925 by Ralph Kronig, George Uhlenbeck, and Samuel Goudsmit; but the name most closely associated with spin in physics is Wolfgang Pauli.
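The claim that the hydrogen spectral lines follow from the energy levels can be made concrete with the Rydberg formula. A minimal Python sketch (illustrative only; the constant below is the standard Rydberg constant for hydrogen):

```python
# Hydrogen spectral-line wavelengths from the Rydberg formula:
# 1/lambda = R_H * (1/n1^2 - 1/n2^2), the result both the Bohr model and
# the full Schroedinger treatment reproduce.
R_H = 1.0967758e7  # Rydberg constant for hydrogen, 1/m

def line_nm(n1, n2):
    """Wavelength (nm) for the transition n2 -> n1, with n2 > n1."""
    inv_lambda = R_H * (1.0 / n1**2 - 1.0 / n2**2)
    return 1e9 / inv_lambda

for name, n1 in [("Lyman", 1), ("Balmer", 2), ("Paschen", 3)]:
    lines = [f"{line_nm(n1, n2):.1f}" for n2 in range(n1 + 1, n1 + 5)]
    print(f"{name} series (down to n={n1}): {', '.join(lines)} nm")
# The Balmer series gives ~656.5, 486.3, 434.2, 410.3 nm: the visible
# hydrogen lines.
```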
Applications of spin in nanoelectronics are given in the presentation slides below:

• The story of the two spins

Cite this work
Researchers should cite this work as follows:
• Dragica Vasileska; Gerhard Klimeck (2008), "Quantum Mechanics: Hydrogen Atom and Electron Spin"

In This Series
1. Quantum Mechanics: Hydrogen Atom (09 Jul 2008 | Teaching Materials | Contributor(s): Dragica Vasileska, Gerhard Klimeck)
The solution of the Schrödinger equation (wave equation) for the hydrogen atom uses the fact that the Coulomb potential produced by the nucleus is isotropic (it is radially symmetric in space and only depends on the distance to the nucleus). Although the resulting energy eigenfunctions (the...
2. Quantum Mechanics: The story of the electron spin
One of the most remarkable discoveries associated with quantum physics is the fact that elementary particles can possess non-zero spin. Elementary particles are particles that cannot be divided into any smaller units, such as the photon, the electron, and the various quarks. Theoretical and...
Pilot wave
From Wikipedia, the free encyclopedia

Couder experiments,[1][2] "materializing" the pilot wave model.

In theoretical physics, the pilot wave theory was the first known example of a hidden variable theory, presented by Louis de Broglie in 1927. Its more modern version, the Bohm interpretation, remains a controversial attempt to interpret quantum mechanics as a deterministic theory, avoiding troublesome notions such as instantaneous wave function collapse and the paradox of Schrödinger's cat.

The pilot wave theory[edit]

The pilot wave theory is one of several interpretations of quantum mechanics. It uses the same mathematics as other interpretations of quantum mechanics; consequently, it is also supported by the current experimental evidence to the same extent as the other interpretations. The pilot wave theory is a hidden variable theory. Consequently:

• the theory has realism (meaning that its concepts exist independently of the observer);
• the theory has determinism.

A collection of particles has an associated matter wave, which evolves according to the Schrödinger equation. Each particle follows a deterministic trajectory, which is guided by the wave function; collectively, the density of the particles conforms to the magnitude of the wave function. The wave function is not influenced by the particle and can exist also as an empty wave function.[3]

The theory brings to light the nonlocality that is implicit in the non-relativistic formulation of quantum mechanics and uses it to satisfy Bell's theorem. These nonlocal effects are nevertheless compatible with the no-communication theorem, which prevents us from using them for faster-than-light communication.

The pilot wave theory shows that it is possible to have a realistic and deterministic hidden variable theory which reproduces the experimental results of ordinary quantum mechanics. The price which has to be paid for this is manifest nonlocality.[citation needed]

Mathematical foundations[edit]

To derive the de Broglie–Bohm pilot wave for an electron, the quantum Lagrangian

L(t) = \frac{1}{2}mv^2 - (V+Q)~,

where Q is the potential associated with the quantum force (the particle being pushed by the wave function), is integrated along precisely one path (the one the electron actually follows). This leads to the following formula for the Bohm propagator:

K^Q(X_1, t_1; X_0, t_0) = \frac{1}{J(t)^ {\frac{1}{2}} } \exp\left[\frac{i}{\hbar}\int_{t_0}^{t_1}L(t)\,dt\right].

This propagator allows one to track the electron precisely over time under the influence of the quantum potential Q.

Derivation of the Schrödinger equation[edit]

Pilot wave theory is based on Hamilton–Jacobi dynamics[4] rather than Lagrangian or Hamiltonian dynamics. Using the Hamilton–Jacobi equation

H\left(\mathbf{q},{\partial S \over \partial \mathbf{q}},t\right) + {\partial S \over \partial t}\left(\mathbf{q},t\right) = 0

it is possible to derive the Schrödinger equation:

Consider a classical particle whose position is not known with certainty. We must deal with it statistically, so only the probability density \rho (x,t) is known. Probability must be conserved, i.e. \int\rho\,d^3x = 1 for each t. Therefore it must satisfy the continuity equation

\partial \rho / \partial t = - \nabla \cdot (\rho v) \quad(1)

where v(x,t) is the velocity of the particle.
In the Hamilton–Jacobi formulation of classical mechanics, velocity is given by
v(x,t) = \frac{\nabla S(x,t)}{m}
where S(x,t) is a solution of the Hamilton–Jacobi equation
- \frac{\partial S}{\partial t} = \frac{\left(\nabla S\right)^2}{2m} + V \quad(2)
We can combine (1) and (2) into a single complex equation by introducing the complex function \psi = \sqrt{\rho}e^\frac{iS}{\hbar}; then the two equations are equivalent to
i \hbar \frac{\partial \psi}{\partial t} = \left( - \frac{\hbar^2}{2m} \nabla^2 +V-Q \right)\psi \quad with \quad Q = - \frac{\hbar^2}{2m} \frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}}
This is the time-dependent Schrödinger equation with an extra potential, the quantum potential Q, which is the potential of the quantum force and is proportional (in approximation) to the curvature of the amplitude of the wave function.
Mathematical formulation for a single particle[edit]
The matter wave of de Broglie is described by the time-dependent Schrödinger equation:
i \hbar \frac{\partial \psi}{\partial t} = \left( - \frac{\hbar^2}{2m} \nabla^2 +V \right)\psi
The complex wave function can be represented as:
\psi = \sqrt{\rho} \; \exp \left( \frac{i \, S}{\hbar} \right)
By plugging this into the Schrödinger equation, one can derive two new equations for the real variables. The first is the continuity equation for the probability density \rho:[5]
\partial \rho / \partial t + \nabla \cdot ( \rho v) =0 \; ,
where the velocity field is defined by the guidance equation
\vec{v} (\vec{r},t) = \frac{\nabla S(\vec{r},t)}{m}\; .
According to pilot wave theory, the point particle and the matter wave are both real and distinct physical entities (unlike standard quantum mechanics, where particles and waves are considered to be the same entities, connected by wave-particle duality). The pilot wave guides the motion of the point particles as described by the guidance equation.
Ordinary quantum mechanics and pilot wave theory are based on the same partial differential equation. The main difference is that in ordinary quantum mechanics, the Schrödinger equation is connected to reality by the Born postulate, which states that the probability density of the particle's position is given by \rho = |\psi|^2. Pilot wave theory considers the guidance equation to be the fundamental law, and sees the Born rule as a derived concept.
The second equation is a modified Hamilton–Jacobi equation for the action S:
- \frac{\partial S}{\partial t} = \frac{\left(\nabla S\right)^2}{2m} + V + Q \; ,
where Q is the quantum potential defined by
Q = - \frac{\hbar^2}{2m} \frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}} \; .
By neglecting Q, our equation is reduced to the Hamilton–Jacobi equation of a classical point particle. (Strictly speaking, this is only a semiclassical limit[clarification needed], because the superposition principle still holds and one needs a decoherence mechanism to get rid of it. Interaction with the environment can provide this mechanism.) So, the quantum potential is responsible for all the mysterious effects of quantum mechanics.
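To make the guidance equation concrete, here is a minimal numerical sketch (an editorial illustration, not part of the original article): it integrates dx/dt = (ħ/m) Im(∂ψ/∂x / ψ) for a free Gaussian packet with ħ = m = 1. The analytic wavefunction, the finite-difference step, and the Euler integrator are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch of integrating the guidance equation v = (hbar/m) Im(psi'/psi)
# for a free Gaussian packet (hbar = m = 1, initial width s0).

s0 = 1.0

def psi(x, t):
    st = 1.0 + 1j * t / (2 * s0 ** 2)             # complex spreading factor
    return (2 * np.pi * s0 ** 2) ** -0.25 * st ** -0.5 * np.exp(
        -x ** 2 / (4 * s0 ** 2 * st))

def velocity(x, t, h=1e-5):
    dpsi = (psi(x + h, t) - psi(x - h, t)) / (2 * h)   # central difference
    return np.imag(dpsi / psi(x, t))

dt, steps = 0.01, 1000
x0 = np.array([-2.0, -1.0, 0.5, 1.0, 2.0])        # initial particle positions
xs = x0.copy()
for n in range(steps):
    xs = xs + dt * velocity(xs, n * dt)           # Euler step along the flow

# Trajectories fan out as the packet spreads; the exact free-Gaussian result
# x(t) = x(0) * sqrt(1 + (t / (2 s0^2))^2) is reproduced to Euler accuracy:
print(xs)
print(x0 * np.sqrt(1 + (steps * dt / (2 * s0 ** 2)) ** 2))
```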
One can also combine the modified Hamilton–Jacobi equation with the guidance equation to derive a quasi-Newtonian equation of motion
m \, \frac{d}{dt} \, \vec{v} = - \nabla( V + Q ) \; ,
where the hydrodynamic time derivative is defined as
\frac{d}{dt} = \frac{\partial}{\partial t} + \vec{v} \cdot \nabla \; .
Mathematical formulation for multiple particles[edit]
The Schrödinger equation for the many-body wave function \psi(\vec{r}_1, \vec{r}_2, \cdots, t) is given by
i \hbar \frac{\partial \psi}{\partial t} =\left( -\frac{\hbar^2}{2} \sum_{i=1}^{N} \frac{\nabla_i^2}{m_i} + V(\bold{r}_1,\bold{r}_2,\cdots\bold{r}_N) \right) \psi
The complex wave function can be represented as:
\psi = \sqrt{\rho} \; \exp \left( \frac{i \, S}{\hbar} \right)
The pilot wave guides the motion of the particles. The guidance equation for the jth particle is:
\vec{v}_j = \frac{\nabla_j S}{m_j}\; .
The velocity of the jth particle explicitly depends on the positions of the other particles. This means that the theory is nonlocal.
Empty wave function[edit]
Lucien Hardy[6] and John Stewart Bell[3] have emphasized that in the de Broglie–Bohm picture of quantum mechanics there can exist empty waves, represented by wave functions propagating in space and time but not carrying energy or momentum,[7] and not associated with a particle. The same concept was called ghost waves (or "Gespensterfelder", ghost fields) by Albert Einstein.[7] The empty wave function notion has been discussed controversially.[8][9][10] In contrast, the many-worlds interpretation of quantum mechanics does not call for empty wave functions.[3]
In his 1926 paper,[11] Max Born suggested that the wave function of Schrödinger's wave equation represents the probability density of finding a particle. From this idea, de Broglie developed the pilot wave theory, and worked out a function for the guiding wave.[12] Initially, de Broglie proposed a double solution approach, in which the quantum object consists of a physical wave (u-wave) in real space which has a spherical singular region that gives rise to particle-like behaviour; in this initial form of his theory he did not have to postulate the existence of a quantum particle.[13] He later formulated it as a theory in which a particle is accompanied by a pilot wave. He presented the pilot wave theory at the 1927 Solvay Conference.[14] However, Wolfgang Pauli raised an objection to it at the conference, saying that it did not deal properly with the case of inelastic scattering. De Broglie was not able to find a response to this objection, and he and Born abandoned the pilot-wave approach. Unlike David Bohm, de Broglie did not complete his theory to encompass the many-particle case.[13]
Later, in 1932, John von Neumann published a paper claiming to prove that all hidden variable theories were impossible.[15] (A result found to be flawed by Grete Hermann three years later, though this went unnoticed by the physics community for over fifty years.) However, in 1952, David Bohm, dissatisfied with the prevailing orthodoxy, rediscovered de Broglie's pilot wave theory. Bohm developed pilot wave theory into what is now called the de Broglie–Bohm theory.[5][16]
The de Broglie–Bohm theory itself might have gone unnoticed by most physicists, if it had not been championed by John Bell, who also countered the objections to it. In 1987, John Bell[17] rediscovered Grete Hermann's work, and thus showed the physics community that Pauli's and von Neumann's objections really only showed that the pilot wave theory did not have locality.
The de Broglie–Bohm theory is now considered by some to be a valid challenge to the prevailing orthodoxy of the Copenhagen interpretation, but it remains controversial.
Yves Couder and co-workers recently discovered a macroscopic pilot wave system in the form of walking droplets. This system exhibits behaviour of a pilot wave, heretofore considered to be reserved to microscopic phenomena.[1]
1. ^ a b Couder, Y.; Boudaoud, A.; Protière, S.; Moukhtar, J.; Fort, E. (2010). "Walking droplets: a form of wave-particle duality at macroscopic level?". Europhysics News 41 (1): 14–18. Bibcode:2010ENews..41...14C. doi:10.1051/epn/2010101.
2. ^ "How Does The Universe Work?". Through the Wormhole. 13 July 2011. Season 2, Episode 6, 15min 23s.
3. ^ a b c Bell, J. S. (1992). "Six possible worlds of quantum mechanics". Foundations of Physics 22 (10): 1201–1215. Bibcode:1992FoPh...22.1201B. doi:10.1007/BF01889711.
4. ^ Towler, M. (10 February 2009). "De Broglie-Bohm pilot-wave theory and the foundations of quantum mechanics". University of Cambridge. Retrieved 2014-07-03.
5. ^ a b Bohm, D. (1952). "A Suggested Interpretation of the Quantum Theory in Terms of Hidden Variables, I". Physical Review 85 (2): 166–179. Bibcode:1952PhRv...85..166B. doi:10.1103/PhysRev.85.166.
6. ^ Hardy, L. (1992). "On the existence of empty waves in quantum theory". Physics Letters A 167 (1): 11–16. Bibcode:1992PhLA..167...11H. doi:10.1016/0375-9601(92)90618-V.
7. ^ a b Selleri, F.; Van der Merwe, A. (1990). Quantum Paradoxes and Physical Reality. Kluwer Academic Publishers. pp. 85–86. ISBN 0-7923-0253-2.
8. ^ Zukowski, M. (1993). ""On the existence of empty waves in quantum theory": a comment". Physics Letters A 175 (3–4): 257–258. Bibcode:1993PhLA..175..257Z. doi:10.1016/0375-9601(93)90837-P.
9. ^ Zeh, H. D. (1999). "Why Bohm's Quantum Theory?". Foundations of Physics Letters 12: 197–200. arXiv:quant-ph/9812059. doi:10.1023/A:1021669308832.
10. ^ Vaidman, L. (2005). "The Reality in Bohmian Quantum Mechanics or Can You Kill with an Empty Wave Bullet?". Foundations of Physics 35 (2): 299–312. arXiv:quant-ph/0312227. Bibcode:2005FoPh...35..299V. doi:10.1007/s10701-004-1945-2.
11. ^ Born, M. (1926). "Quantenmechanik der Stoßvorgänge". Zeitschrift für Physik 38 (11–12): 803–827. Bibcode:1926ZPhy...38..803B. doi:10.1007/BF01397184.
12. ^ de Broglie, L. (1927). "La mécanique ondulatoire et la structure atomique de la matière et du rayonnement". Journal de Physique et le Radium 8 (5): 225–241. doi:10.1051/jphysrad:0192700805022500.
13. ^ a b Dewdney, C.; Horton, G.; Lam, M. M.; Malik, Z.; Schmidt, M. (1992). "Wave-particle dualism and the interpretation of quantum mechanics". Foundations of Physics 22 (10): 1217–1265. Bibcode:1992FoPh...22.1217D. doi:10.1007/BF01889712.
14. ^ Institut International de Physique Solvay (1928). Electrons et Photons: Rapports et Discussions du Cinquième Conseil de Physique tenu à Bruxelles du 24 au 29 Octobre 1927. Gauthier-Villars.
15. ^ von Neumann, J. (1932). Mathematische Grundlagen der Quantenmechanik. Springer.
16. ^ Bohm, D. (1952). "A Suggested Interpretation of the Quantum Theory in Terms of Hidden Variables, II". Physical Review 85 (2): 180–193. Bibcode:1952PhRv...85..180B. doi:10.1103/PhysRev.85.180.
17. ^ Bell, J. S. (1987). Speakable and Unspeakable in Quantum Mechanics. Cambridge University Press. ISBN 978-0521334952.
How is the following classical optics phenomenon explained in quantum electrodynamics?
• Reflection and Refraction
Are they simply due to photons being absorbed and re-emitted? How do we get to Snell's law, for example, in that case?
Split by request: See the other part of this question here.
Nice question! – student Dec 18 '10 at 11:03
Feynman's little pop-sci book on QED addresses these questions very well. I'd recommend it for anyone who doesn't want to muddle through the nitty-gritty of the math. Heck, it's a fun read even if you do want to work through the math. – dmckee Dec 18 '10 at 15:33
By taking the classical limit, then explained classically. Correspondence principle. – user1708 Dec 18 '10 at 16:58
@kalle43: that goes without saying. – Sklivvz Dec 18 '10 at 17:01
As I said in the comments below, the question is too broad right now and it would perhaps be wise to split it up. – Marek Dec 19 '10 at 10:30
2 Answers (accepted answer first)
Hwlau is correct about the book but the answer actually isn't that long, so I think I can try to mention some basic points.
Path integral
One approach to quantum theory, called the path integral, tells you that you have to sum probability amplitudes (I'll assume that you have at least some idea of what a probability amplitude is; QED can't really be explained without this minimal level of knowledge) over all possible paths that the particle can take. Now for photons the probability amplitude of a given path is $\exp(i K L)$ where $K$ is some constant and $L$ is the length of the path (note that this is a very simplified picture, but I don't want to get too technical, so this is fine for now). The basic point is that you can imagine that amplitude as a unit vector in the complex plane. So when doing a path integral you are adding lots of short arrows (this terminology is of course due to Feynman). In general, for any given trajectory I can find many shorter and longer paths, so this will give us destructive interference (you will be adding lots of arrows that point in random directions). But there can exist some special paths which are either longest or shortest (in other words, extremal) and these will give you constructive interference. This is called Fermat's principle.
Fermat's principle
So much for the preparation, and now to answer your question. We will proceed in two steps. First we will give the classical answer using Fermat's principle and then we will address the other issues that arise.
Let's illustrate this first on the problem of light traveling between points $A$ and $B$ in free space. You can find lots of paths between them, but if a path is not the shortest one, it won't actually contribute to the path integral, for the reasons given above. The only one that will contribute is the shortest one, so this recovers the fact that light travels in straight lines. The same answer can be recovered for reflection. For refraction you will have to take into account that the constant $K$ mentioned above depends on the index of refraction (at least classically; we will explain how it arises from microscopic principles later). But again you can arrive at Snell's law using just Fermat's principle.
Now to address the actual microscopic questions. First, the index of refraction arises because light travels slower in materials. And what about reflection? Well, we are actually getting to the roots of QED, so it's about time we introduced interactions.
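(Editorial aside: the "adding arrows" picture above is easy to reproduce numerically. The sketch below sums one phasor per candidate bounce point on a mirror; the wavenumber K, the geometry, and the window around the stationary point are arbitrary demo choices, not anything from the thread.)

```python
import numpy as np

# Toy "adding arrows" sum: light goes from A to B via a bounce point x on a
# mirror along y = 0, one phasor exp(i*K*L(x)) per candidate path.

A, B = (0.0, 1.0), (1.0, 1.0)
K = 200.0                                    # plays the role of 2*pi/wavelength
x = np.linspace(-3.0, 4.0, 20001)            # candidate bounce points

L = np.hypot(x - A[0], A[1]) + np.hypot(x - B[0], B[1])   # total path length
phasors = np.exp(1j * K * L)

near = phasors[np.abs(x - 0.5) < 0.3].sum()  # around the equal-angle point
far = phasors[np.abs(x - 0.5) >= 0.3].sum()  # all the other paths
print(abs(near), abs(far))
# The arrows far from the classical reflection point x = 0.5 nearly cancel;
# the stationary-phase neighbourhood carries essentially the whole sum, which
# is Fermat's principle (and the law of equal angles) emerging from the sum.
```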
Amazingly, there is actually only one interaction: an electron absorbs a photon. This interaction again gets a probability amplitude and you have to take this into account when computing the path integral. So let's see what we can say about a photon that goes from $A$, then hits a mirror, and then goes to $B$. We already know that the photon travels in straight lines both between $A$ and the mirror and between the mirror and $B$. What can happen in between? Well, the complete picture is of course complicated: the photon can get absorbed by an electron, then it will be re-emitted (note that even if we are talking about the photon here, the emitted photon is actually distinct from the original one; but it doesn't matter that much), then it can travel for some time inside the material, get absorbed by another electron, be re-emitted again, and finally fly back to $B$. To make the picture simpler we will just consider the case where the material is a 100% real mirror (if it were e.g. glass you would actually get multiple reflections from all of the layers inside the material, most of which would destructively interfere, and you'd be left with reflections from the front and back surfaces of the glass; obviously, I would have to make this already long answer twice as long :-)). For mirrors there is only one major contribution, namely that the photon gets scattered (absorbed and re-emitted) directly on the surface layer of electrons of the mirror and then flies back.
Quiz question: and what about the process where the photon flies to the mirror and then changes its mind and flies back to $B$ without interacting with any electrons? This is surely a possible trajectory we have to take into account. Is this an important contribution to the path integral or not?
+1, makes sense to me, but what about color? :-) – Sklivvz Dec 18 '10 at 17:47
@Sklivvz: oh, I completely missed that part of the question. Thanks. But my answer is already too long, so I guess I will suggest OP to ask that as a separate question. Actually, @hwlau gives a correct first view on the problem (quantum mechanical), but the question actually deserves a lot more, I think. – Marek Dec 18 '10 at 17:55
@Sklivvz: oh, you are OP :-D I am so sorry :-D – Marek Dec 18 '10 at 17:55
+1, pretty good with this length. The difficult part is to explain why the other paths can cancel exactly in a few sentences, such as why the path of refraction bends toward the normal ;) – hwlau Dec 19 '10 at 10:47
It really deserves a long discussion. You may be interested in the book "QED: The Strange Theory of Light and Matter" written by Richard Feynman (or the corresponding video), which gives a comprehensive introduction with almost no numbers and formulas.
For the solution of the Schrödinger equation of the hydrogen atom, the energy levels are discrete, so its absorption spectrum is also discrete. In this case, only a few colors can be seen. However, in a solid the atoms interact strongly with each other, and the resulting absorption spectrum can be very complicated. This interaction depends strongly on the structure and the outer electrons. Temperature can play an essential role in structural change; a phase transition can occur and so can a color change. I think there is no easy explanation for the exact absorption spectrum, or color, of a material without doing complicated calculations.
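(Editorial aside: the "discrete levels, few colors" point in the last answer can be made concrete with the Rydberg formula. The snippet below is a standard textbook calculation, not taken from the thread.)

```python
# Visible (Balmer) lines of hydrogen from the Rydberg formula.
R = 1.097e7                      # Rydberg constant in 1/m

for n in range(3, 8):            # transitions n -> 2
    inv_lam = R * (1 / 2**2 - 1 / n**2)
    print(f"n={n} -> 2 : {1e9 / inv_lam:.1f} nm")
# ~656, 486, 434, 410, 397 nm: a handful of sharp colors rather than a
# continuum, exactly because the energy levels are discrete.
```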
Cardiff School of Mathematics
Prelude for Strings
The problems of spectral theory are among the very first questions in scientific history, and have served as a starting-point for the development of mathematics. The history of this discipline shows how the investigation of questions which, at a first glance, appear quite abstract and academic, can yield methods of enormous practical value, often after decades or centuries of research.
The Pythagoreans (6th - 5th century BC) studied the vibrations of a taut string, finding that harmonically consonant sounds are produced when the string is divided in simple numerical ratios. This observation corroborated their tenet that everything in the world is governed by relations of numbers. The Pythagorean Hippasos of Metapontum, credited, among other things, with the invention of the regular dodecahedron and the irrational numbers, is said to have investigated the vibrations of metal plates as well.
At the medieval universities, music - essentially the ancient theory of harmony - was taught in the quadrivium (fourfold way) along with arithmetic, geometry and astronomy; together with the trivium of grammar, logic and rhetoric, these constituted the seven liberal arts.
Johannes Kepler (1571-1630) tried to explain the radii of the planetary orbits in terms of geometrical ratios, the so-called harmony of the spheres. As a by-product of his speculations on this question (which is still unsolved today) he found Kepler's laws, the basis of the modern picture of the solar system.
John Wallis (1616-1703) described the relation between the harmonics of a vibrating string and the number of its nodes of vibration. In 1717, Brook Taylor (1685-1731) published the first mathematical paper on the vibrating string, but he did not yet know the differential equation of wave propagation. In 1746, Jean le Rond d'Alembert (1717-1783) wrote down the one-dimensional wave equation and found a method of its solution for `arbitrary' initial data; the solution is a superposition of two waves, travelling to the right and left, respectively.
The question how `arbitrary' the initial data can really be, i.e. which functions are admissible as solutions of the wave equation, was taken up by d'Alembert, Daniel Bernoulli (1700-1782) and Leonhard Euler (1707-1783). This was the beginning of the struggle for an exact definition of the concept of a function, which became one of the principal achievements of 19th century mathematics.
D. Bernoulli saw a general method of solving the wave equation in the superposition of (infinitely many) simple vibratory modes, an idea previously used by Euler in specific situations. This technique, along with separation of variables, became prominent in Joseph Fourier's (1768-1830) treatment of the heat equation (1822). However, the actual scope of the method remained obscure at first because of the limit process involved. The idea of separation of variables, which reduces problems in two or more dimensions to a family of one-dimensional problems, is also the basis of tomography, a widely used tool of contemporary medicine.
The fundamental question of Fourier's method, viz. which functions can be expanded in their Fourier series, turned out to be very complicated and very fertile for the further development of mathematics.
For piecewise continuous and piecewise monotonic functions and the trigonometric Fourier series, it was answered in 1829 by Peter Gustav Lejeune-Dirichlet (1805-1859); in its general form, it is the core of the mathematical field of harmonic analysis. Fourier analysis is of great importance in many areas of science and technology; e.g. it is used in astronomy to enhance the optical resolution of telescopes.
Separation of variables, when applied to the fundamental equations of mathematical physics and differential geometry, often gives rise to ordinary differential equations of a general type studied by Charles-François Sturm (1803-1855) and Joseph Liouville (1809-1882). Liouville investigated the representation of solutions of this equation by generalised Fourier series. However, a satisfactory answer to this question could only be found in the framework of Hilbert space, named after David Hilbert (1862-1943), which is a corner-stone of functional analysis and the modern treatment of differential equations.
Hilbert space is also a central notion of quantum mechanics. The Schrödinger equation (1926), the fundamental equation of non-relativistic quantum mechanics, is closely related to the wave equation. Years before, Hermann Weyl (1885-1955) had paved the way for a study of this equation by his spectral analysis of the singular Sturm-Liouville equation. The colour of the light emitted by a gas of atoms is determined by the spectrum of the associated Schrödinger equation, i.e. the frequencies of its simple vibratory modes. Thus Erwin Schrödinger (1887-1961) found the harmony of the spheres, sought in vain in the cosmos by Kepler, in the subatomic microcosmos.
K. M. Schmidt fecit MMVIII
AQME: Advancing Quantum Mechanics for Engineers
by Tain Lee Barzso, Dragica Vasileska, Gerhard Klimeck
Introduction to Advancing Quantum Mechanics for Engineers and Physicists
Discovery that is Possible through Quantum Mechanics
Nanotechnology has yielded a number of unique structures that are not found readily in nature. Most demonstrate an essential quality of Quantum Mechanics known as quantum confinement. Confinement is the idea of keeping electrons trapped in a small area, about 30 nm or smaller. Quantum confinement comes in several dimensions. 2-D confinement, for example, is restricted in only one dimension, resulting in a quantum well (or plane). Lasers are currently built from this dimension. 1-D confinement occurs in nanowires, and 0-D confinement is found only in the quantum dot.
Confinement also increases the efficiency of today's electronics. The laser is based on a 2-D confinement layer that is usually created with some form of epitaxy such as Molecular Beam Epitaxy or Chemical Vapor Deposition. The bulk of modern lasers created with this method are highly functional, but these lasers are ultimately inefficient in terms of energy consumption and heat dissipation. Moving to 1-D confinement in wires or 0-D confinement in quantum dots allows for higher efficiencies and brighter lasers. Quantum dot lasers are currently the best lasers available, although their fabrication is still being worked out.
Confinement is just one manifestation of quantum mechanics in nanodevices. Tunneling and quantum interference are two other manifestations of quantum mechanics in the operation of scanning tunneling microscopes and resonant tunneling diodes, respectively.
For more information on the theoretical aspects of Quantum Mechanics check the following resources:
Quantum Mechanics for Engineers: Podcasts
Quantum Mechanics for Engineers: Course Assignments
Because understanding quantum mechanics is so foundational to an understanding of the operation of nanoscale devices, almost every Electrical Engineering department (in which there is a strong nanotechnology experimental or theoretical group) and all Physics departments teach the fundamental principles of quantum mechanics and their application to nanodevice research. Several conceptual sets and theories are taught within these courses. Normally, students are first introduced to the concept of particle-wave duality (the photoelectric effect and the double-slit experiment), the solutions of the time-independent Schrödinger equation for open systems (piece-wise constant potentials), tunneling, and bound states. The description of the solution of the Schrödinger equation for periodic potentials (Kronig-Penney model) naturally follows from the discussion of double well, triple well and n-well structures. This leads the students to the concept of energy bands and energy gaps, and the concept of the effective mass that can be extracted from the pre-calculated band structure by fitting the curvature of the bands. The Tsu-Esaki formula is then investigated so that, having calculated the transmission coefficient, students can calculate the tunneling current in resonant tunneling diodes and Esaki diodes.
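The transmission-coefficient calculation mentioned above can be sketched compactly with the standard transfer-matrix method for piecewise-constant potentials. The snippet below is an editorial illustration with ħ = m = 1; the barrier heights, widths, and energy grid are made-up numbers, not defaults of the AQME tools.

```python
import numpy as np

# Transfer-matrix transmission through piecewise-constant potentials (hbar = m = 1).

def transmission(E, V, edges):
    """Regions with potentials V (len n) separated at positions `edges` (len n-1)."""
    k = np.sqrt(2 * (E - np.asarray(V)) + 0j)     # complex k handles E < V
    M = np.eye(2, dtype=complex)
    for j, x in enumerate(edges):                 # match psi and psi' at each edge
        k1, k2 = k[j], k[j + 1]
        p, q = 1 + k1 / k2, 1 - k1 / k2
        M = 0.5 * np.array(
            [[p * np.exp(1j * (k1 - k2) * x), q * np.exp(-1j * (k1 + k2) * x)],
             [q * np.exp(1j * (k1 + k2) * x), p * np.exp(-1j * (k1 - k2) * x)]]) @ M
    t = M[0, 0] - M[0, 1] * M[1, 0] / M[1, 1]     # no wave incident from the right
    return abs(t) ** 2 * (k[-1].real / k[0].real)

# Double barrier: two barriers of height 5 and width 0.5, separated by a 2-wide well
V = [0.0, 5.0, 0.0, 5.0, 0.0]
edges = [0.0, 0.5, 2.5, 3.0]
E = np.linspace(0.05, 4.5, 2000)
T = np.array([transmission(e, V, edges) for e in E])
print("resonance near E = %.3f with T = %.3f" % (E[T.argmax()], T.max()))
# A sharp transmission resonance appears near the quasi-bound level of the well,
# the mechanism behind the resonant tunneling diode's negative differential resistance.
```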
After establishing basic principles of quantum mechanics, the harmonic oscillator problem is then discussed in conjunction with understanding vibrations of a crystalline lattice, and the idea of phonons is introduced as well as the concept of creation and annihilation operators. The typical quantum mechanics class for undergraduate/first-year graduate students is then completed with the discussion of the stationary and time-dependent perturbation theory and the derivation of the Fermi Golden Rule, which is used as a starting point of a graduate level class in semiclassical transport. Coulomb Blockade is another discussion a typical quantum mechanics class will include.
Particle-Wave Duality
A wave-particle dual nature was discovered and publicized in the early debate about whether light was composed of particles or waves. Evidence for the description of light-as-waves was well established at the turn of the century when the photoelectric effect introduced firm evidence of a light-as-particle nature. This dual nature was found to also be characteristic of electrons. Electron particle-nature properties were well documented when the de Broglie hypothesis, and subsequent experiments by Davisson and Germer, established the wave nature of the electron.
Particle-Wave Duality: an Animation
This movie helps students to better distinguish when nano-things behave as particles and when they behave as waves. The link below connects to an exercise on these concepts.
Introductory Concepts in Quantum Mechanics: an Exercise
Solution of the Time-Independent Schrödinger Equation
Piece-Wise Linear Barrier Tool in AQME – Open Systems
Bound States Lab in AQME
The Bound States Lab in AQME determines the bound states and the corresponding wavefunctions in a square, harmonic, and triangular potential well. The maximum number of eigenstates that can be calculated is 100. Students clearly see the nature of the separation of the states in these three prototypical confining potentials, with which students can approximate realistic quantum potentials that occur in nature. The panels show the energy eigenstates of a harmonic oscillator (left), the probability density of the ground state, which demonstrates purely quantum-mechanical behavior (middle), and the probability density of the 20th subband, which demonstrates more classical behavior as the well opens (right).
Energy Bands and Effective Masses
Periodic Potential Lab in AQME
The Periodic Potential Lab in AQME solves the time-independent Schrödinger Equation in a 1-D spatial potential variation. Rectangular, triangular, parabolic (harmonic), and Coulomb potential confinements can be considered. The user can determine energetic and spatial details of the potential profiles, compute the allowed and forbidden bands, plot the bands in a compact and an expanded zone, and compare the results against a simple effective mass parabolic band. Transmission is also calculated. This lab also allows the students to become familiar with the reduced zone and expanded zone representation of the dispersion relation (E-k relation for carriers).
Available resources: Periodic Potentials and Bandstructure: an Exercise
Band Structure Lab in AQME
Band structure of Si (left panel) and GaAs (right panel).
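Before moving on to band structure, here is a minimal numerical companion to the Bound States Lab described above (an editorial sketch, not the tool itself): diagonalizing the finite-difference Hamiltonian on a grid, with the harmonic well chosen because its exact levels n + 1/2 make the check easy. The grid sizes are arbitrary choices.

```python
import numpy as np

# Finite-difference bound states of H = -1/2 d^2/dx^2 + V(x), hbar = m = 1.
N, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

V = 0.5 * x ** 2                                  # harmonic confining potential
H = (np.diag(1.0 / dx ** 2 + V)                   # kinetic diagonal + potential
     + np.diag(-0.5 / dx ** 2 * np.ones(N - 1), 1)
     + np.diag(-0.5 / dx ** 2 * np.ones(N - 1), -1))

E, psi = np.linalg.eigh(H)
print(E[:5])                                      # ~ [0.5, 1.5, 2.5, 3.5, 4.5]
# Replacing V with a square or triangular well reproduces the other two
# prototypical level patterns the lab demonstrates.
```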
In solid-state physics, the electronic band structure (or simply band structure) of a solid describes ranges of energy that an electron is "forbidden" or "allowed" to have. It is due to the diffraction of the quantum mechanical electron waves in the periodic crystal lattice. The band structure of a material determines several characteristics, in particular its electronic and optical properties.
The Band Structure Lab in AQME enables the study of bulk dispersion relationships of Si, GaAs, and InAs. Plotting the full dispersion relation of different materials, students first get familiar with the band structure of direct band gap (GaAs, InAs) as well as indirect band gap semiconductors (Si). For the case of multiple conduction band valleys, students must first determine the Miller indices of one of the equivalent valleys; from that information they can then deduce how many equivalent conduction bands are in Si and Ge, for example. In advanced applications, users can apply tensile and compressive strain and observe the variation in the band structure, band gaps, and effective masses. Advanced users can also study band structure effects in ultra-scaled (thin body) quantum wells and nanowires of different cross sections. Band Structure Lab uses the sp3s*d5 tight-binding method to compute E(k) for bulk, planar, and nanowire semiconductors.
Available resource: Bulk Band Structure: a Simulation Exercise
The first Brillouin zone of the FCC lattice, which is also the first Brillouin zone for all diamond and zinc-blende materials (C, Si, Ge, GaAs, InAs, CdTe, etc.), has 8 hexagonal faces (normal to 111) and 6 square faces (normal to 100). The sides of each hexagon and each square are equal.
Supplemental Information: Specification of High-Symmetry Points
Γ: center of the Brillouin zone (all lattices)
Simple Cubic: M: center of an edge; R: corner point; X: center of a face.
Face-Centered Cubic: K: middle of an edge joining two hexagonal faces; L: center of a hexagonal face; U: middle of an edge joining a hexagonal and a square face; W: corner point; X: center of a square face.
Body-Centered Cubic: H: corner point joining four edges; N: center of a face; P: corner point joining three edges.
Hexagonal: A: center of a hexagonal face; H: corner point; K: middle of an edge joining two rectangular faces; L: middle of an edge joining a hexagonal and a rectangular face; M: center of a rectangular face.
Real World Applications
Schred Tool in AQME
The Schred Tool in AQME calculates the envelope wavefunctions and the corresponding bound-state energies in a typical MOS (Metal-Oxide-Semiconductor) or SOS (Semiconductor-Oxide-Semiconductor) structure and in a typical SOI structure by solving self-consistently the one-dimensional (1-D) Poisson equation and the 1-D Schrödinger equation. The Schred tool is specifically designed for the Si/SiO2 interface and takes into account the mass anisotropy of the conduction bands, as well as different crystallographic orientations.
1-D Heterostructure Tool in AQME
Resonant Tunneling Diode Lab in AQME
Put a potential barrier in the path of electrons, and it will block their flow. But if the barrier is thin enough, electrons can tunnel right through due to quantum mechanical effects.
Even more surprising, if two or more thin barriers are placed closely together, electrons will bounce between the barriers and, at certain resonant energies, flow right through the barriers as if there were none. Explore the Resonant Tunneling Diode Lab in AQME, which lets you control the number of barriers and their material properties, and then simulate current as a function of bias. Devices exhibit a surprising negative differential resistance, even at room temperature. This tool can be run online in your web browser as an active demo.
Quantum Dot Lab in AQME
Scattering and Fermi's Golden Rule
Coulomb Blockade
AQME constituent tools:
Piece-Wise Constant Potential Barriers Tool
Bound States Calculation Lab
Band Structure Lab
Periodic Potential Lab
1D Heterostructure Tool
Resonant Tunneling Diode Simulator
Quantum Dot Lab
Bulk Monte Carlo Lab
Coulomb Blockade Simulation
Items tagged with: academic
I am getting an error dialog when I try to Browse an array I created programmatically. The title of the dialog is RTable Browse Error, with an error that says Empty RTable Structure. Now the data is a 0..20 x 0..1 Array, Datatype: anything, Storage: rectangular, Order: Fortran_Order. If I do a showall, I can see the data, I just cannot browse it. Any help would be appreciated. Code is:
for i from 1 to N do end do;
The attached presentation is the first one of a sequence of three that we wanted to do on Quantum Mechanics using Computer Algebra. The level is that of an advanced undergraduate QM course. Tackling this topic within a computer algebra worksheet in the way it's done below, however, is an entire novelty, and illustrates well the kind of computations that can be done today with Maple & Physics.
Ground state of a quantum system of identical boson particles
Pascal Szriftgiser(1) and Edgardo S. Cheb-Terrab(2)
(1) Laboratoire PhLAM, UMR CNRS 8523, Université Lille 1, F-59655, France
(2) Maplesoft
Departing from the Energy of a quantum system of identical boson particles, the field equation is derived. This is the Gross-Pitaevskii equation (GPE). A continuity equation for this system is also derived, showing that the velocity flow satisfies ∇ × v = 0, i.e., is irrotational.
The Gross-Pitaevskii equation
Problem: derive the field equation describing the ground state of a quantum system of identical particles (bosons), that is, the Gross-Pitaevskii equation (GPE).
Background: The Gross-Pitaevskii equation is particularly useful to describe Bose-Einstein condensates (BEC) of cold atomic gases [3, 4, 5], that is, an ensemble of identical quantum boson particles that interact with each other with an interaction constant G. The temperature of these cold atomic gases is typically in the ~100 nano-Kelvin range. The atom-atom interactions are repulsive for G > 0 and attractive for G < 0 (which could lead to some instabilities). The GPE is also widely used in non-linear optics to model the propagation of light in optical fibers. In this area, the GPE is known as the "non-linear Schrödinger equation", and the non-linearity comes from the Kerr effect [6].
Continuity equation for a quantum system of identical particles
[1] Gross-Pitaevskii equation (wiki)
[2] Continuity equation (wiki)
[3] Bose–Einstein condensate (wiki)
[4] Bose-Einstein Condensation in Dilute Gases, C. J. Pethick and H. Smith, Second Edition, Cambridge (2008), ISBN-13: 978-0521846516.
[5] Advances In Atomic Physics: An Overview, Claude Cohen-Tannoudji and David Guery-Odelin, World Scientific (2011), ISBN-10: 9812774963.
[6] Nonlinear Fiber Optics, Fifth Edition (Optics and Photonics), Govind Agrawal, Academic Press (2012), ISBN-13: 978-0123970237.
Edgardo S. Cheb-Terrab, Physics, Maplesoft
color subsystems... (November 22 2013, Bendesarts)
It would help to have a clearer presentation of my system. Does anybody know whether it is possible to color a subsystem? Thank you for help.
In connection with recent developments in the Physics package, we now have mathematical typesetting for all the inert functions of the mathematical language. Hey!
This is within the Physics update available on the Maplesoft Physics: Research & Development webpage. I think this is an interesting development that will concretely change the computational experience with these functions: it is not the same to compute with something you see displayed as %exp(x) as it is to compute with the same object nicely displayed as an exponential function with the e in grey, reflecting that Maple understands it as the exponential inert function, with known properties (all those of the active exp function), so Maple can compute with the inert one taking these properties into account while not executing the function itself - and this is the essence of the inert function behaviour. Introducing mathematical display, copy and paste for all these inert functions of the mathematical language concretely increases the mathematical expressiveness of the system, for teaching, working and also for presenting ideas. Attached is a brief illustration.
Edgardo S. Cheb-Terrab, Physics, Maplesoft
InertMathematicalFun.pdf
Is there any way to write a function that determines the area of any n-sided polygon determined by a sequence of points, i.e. [[x_1, y_1], [x_2, y_2], ..., [x_n, y_n]], while returning 0 if any two segments intersect, and otherwise printing the area? Thanks for any help.
I've been on this question a week now and still have no conclusive answer! What I need is a function that produces the inequalities that determine a triangle given the 3 points and then, using a 4th point, prints true if the 4th point satisfies 2 or 3 of the inequalities and prints false if it only satisfies 1 or none of the inequalities. I need to have this solved by tonight, so any quick help would be greatly appreciated!
So, newb question here. I've done my best to debug this line of code, but to no avail. For some reason this function is NOT getting solved correctly for the zeros:
cos(L*x) - x*sin(L*x)
This is not an extremely complex equation either, so I'm at a loss for why my while loop continues to sit there forever. I've got it set to find N number of zeros, but it'll just keep going forever, never finding any zeros. I've tried mixing up the start point, and even changed the range in which it's searching for them, but nothing seems to get me any closer. Please help!
> restart; with(plots);
> a := 0; b := 1/2; N := 5; w := 1; L := b-a;
Eigenvalue equation
> w := cos(L*x)-sin(L*x)*x;
> plot(w, x = 0 .. 50);
> lam := array(0 .. N+1);
> nn := 0; kk := 5;
> while nn < N do zz := fsolve(w(x) = 0, x = kk .. kk+1); if type(zz, float) then printf("lam(%d)=%f\n", nn, zz); lam[nn] := zz; nn := nn+1 end if; kk := kk+1 end do
In MapleSim, is it possible to constrain a rigid body in a multibody system to an arbitrary constraint? Similarly to how a mass at the end of a simple pendulum can only move along a circle defined by the length of the pendulum, I was hoping to constrain one to a shape like an ellipse or sine wave. I could then give it initial velocities and see how it would move along the shape over time. I'd like to be able to define this constraint with an equation.
Quantum Gravity and String Theory
Quantized Space-Time and Internal Structure of Elementary Particles: a New Model
Authors: Hamid Reza Karimi
In this paper we present a model in which time and length are considered quantized. We try to explain the internal structure of the elementary particles in a new way. In this model a super-dimension is defined to separate the beginning and the end of each time and length quantum from other time and length quanta. The beginning and the end of the dimension of the elementary particles are located in this super-dimension. This model can describe the basic concepts of inertial mass and internal energy of the elementary particles in a better way. By applying this model, some basic calculations mentioned below can be done in a new way:
1. The charge of elementary particles such as electrons and protons can be calculated theoretically. Up to now this quantity has only been measured experimentally.
2. By using the equation for the particle charge obtained in this model, the energy of the different layers of atoms such as hydrogen and helium is calculated. This approach is simpler than using the Schrödinger equation.
3. A calculation of the maximum speed of particles such as electrons and positrons in accelerators is given.
Comments: 23 pages.
Submission history: [v1] 16 Nov 2009
Recently there have been some interesting questions on standard QM, especially on the uncertainty principle, and I enjoyed reviewing these basic concepts. And I came to realize I have an interesting question of my own. I guess the answer should be known but I wasn't able to resolve the problem myself, so I hope it's not entirely trivial.
So, what do we know about the error of simultaneous measurement under time evolution? More precisely, is it always true that for $t \geq 0$
$$\left<x(t)^2\right>\left<p(t)^2\right> \geq \left<x(0)^2\right>\left<p(0)^2\right>$$
(here the argument $(t)$ denotes expectation in the evolved state $\psi(t)$, or equivalently for the operator in the Heisenberg picture). I tried to get general bounds from the Schrödinger equation and decomposition into energy eigenstates, etc., but I don't see any way of proving this.
I know this statement is true for a free Gaussian wave packet. In this case we obtain equality, in fact (because the packet stays Gaussian and because it minimizes the HUP). I believe this is in fact the best we can get and for other distributions we would obtain strict inequality.
So, to summarize the questions:
1. Is the statement true?
2. If so, how does one prove it? And is there an intuitive way to see it is true?
Why do you think it would apply? You can't really make a measurement that way (either you measure at $t=0$ or at $t=T$, but never both), so you basically have two different $\psi$ solutions. Both will obey the principle independently. Am I misunderstanding your question? – Sklivvz Mar 19 '11 at 16:06
If your wavepacket, to begin with, saturates the uncertainty bound (i.e. is a coherent state) then this is trivially true - coherent states stay coherent under time-evolution. If your initial state is not a coherent state then the evolution is clearly more involved, but in that case you could expand your arbitrary initial state in the coherent state basis - so that this inequality (as established for coherent states) could still be used, component by component, to show that it remains true for the arbitrary state. Or perhaps not. Chug and plug, baby, chug and plug. – user346 Mar 19 '11 at 16:08
I don't think the statement is true. Put the minimum uncertainty wave packet at t=0. What was the uncertainty before, at t<0? It was larger, so it has been decreasing before t=0. More generally, you cannot derive time asymmetric statements from time symmetric laws. – user566 Mar 19 '11 at 16:39
@Moshe: there are loopholes in your argument: there might be no minimum for a given system (just infimum) and if there is a minimum, it might be preserved in evolution (as for the free Gaussian). Still, nice idea and I'll try to use it to find a counterexample in some simple system. As for the second statement: right, so I am sure you'll tell me that we can't obtain the second law too... just kiddin', I don't want to get into this discussion that made Boltzmann commit suicide :) – Marek Mar 19 '11 at 16:47
@Marek, in any example you can solve the Schrodinger equation, you'll find that the quantity you are interested in grows away from t=0, both towards the past and towards the future; this is guaranteed by symmetry. As for the general statement, it is also true for the second law. You cannot derive time asymmetric conclusions from time symmetric laws without extra input, this is just basic logic, nothing to do with physics. The whole discussion is what is that extra input and where does it come in.
– user566 Mar 19 '11 at 16:57
5 Answers
Accepted answer:
The question asks about the time dependence of the function
$$f(t) := \langle\psi(t)|(\Delta \hat{x})^2|\psi(t)\rangle \langle\psi(t)|(\Delta \hat{p})^2|\psi(t)\rangle,$$
$$\Delta \hat{x} := \hat{x} - \langle\psi(t)|\hat{x}|\psi(t)\rangle, \qquad \Delta \hat{p} := \hat{p} - \langle\psi(t)|\hat{p}|\psi(t)\rangle, \qquad \langle\psi(t)|\psi(t)\rangle=1.$$
We will here use the Schroedinger picture where operators are constant in time, while the kets and bras are evolving.
Edit: Spurred by remarks of Moshe R. and Ted Bunn, let us add that (under assumption (1) below) the Schroedinger equation itself is invariant under the time reversal operator $\hat{T}$, which is a conjugate-linear operator, so that
$$\hat{T} t = - t \hat{T}, \qquad \hat{T}\hat{x} = \hat{x}\hat{T}, \qquad \hat{T}\hat{p} = -\hat{p}\hat{T}, \qquad \hat{T}^2=1.$$
Here we are restricting ourselves to Hamiltonians $\hat{H}$ so that
$$[\hat{T},\hat{H}]=0.\qquad (1)$$
Moreover, if
$$|\psi(t)\rangle = \sum_n\psi_n(t) |n\rangle$$
is a solution to the Schroedinger equation in a certain basis $|n\rangle$, then
$$\hat{T}|\psi(t)\rangle := \sum_n\psi^{*}_n(-t) |n\rangle$$
will also be a solution to the Schroedinger equation, with a time reflected function $f(-t)$. Thus if $f(t)$ is non-constant in time, then we may assume (possibly after a time reversal operation) that there exist two times $t_1<t_2$ with $f(t_1)>f(t_2)$. This would contradict the statement in the original question. To finish the argument, we provide below an example of a non-constant function $f(t)$.
Consider a simple harmonic oscillator Hamiltonian with the zero point energy $\frac{1}{2}\hbar\omega$ subtracted for later convenience:
$$\hat{H}:=\frac{\hat{p}^2}{2m}+\frac{1}{2}m\omega^{2}\hat{x}^2 -\frac{1}{2}\hbar\omega=\hbar\omega\hat{N},$$
where $\hat{N}:=\hat{a}^{\dagger}\hat{a}$ is the number operator. Let us put the constants $m=\hbar=\omega=1$ to one for simplicity. Then the annihilation and creation operators are
$$\hat{a}=\frac{1}{\sqrt{2}}(\hat{x} + i \hat{p}), \qquad \hat{a}^{\dagger}=\frac{1}{\sqrt{2}}(\hat{x} - i \hat{p}), \qquad [\hat{a},\hat{a}^{\dagger}]=1,$$
or conversely,
$$\hat{x}=\frac{1}{\sqrt{2}}(\hat{a}^{\dagger}+\hat{a}), \qquad \hat{p}=\frac{i}{\sqrt{2}}(\hat{a}^{\dagger}-\hat{a}), \qquad [\hat{x},\hat{p}]=i,$$
$$\hat{x}^2=\hat{N}+\frac{1}{2}\left(1+\hat{a}^2+(\hat{a}^{\dagger})^2\right), \qquad \hat{p}^2=\hat{N}+\frac{1}{2}\left(1-\hat{a}^2-(\hat{a}^{\dagger})^2\right).$$
Consider Fock space $|n\rangle := \frac{1}{\sqrt{n!}}(\hat{a}^{\dagger})^n |0\rangle$ such that $\hat{a}|0\rangle = 0$. Consider the initial state
$$|\psi(0)\rangle := \frac{1}{\sqrt{2}}\left(|0\rangle+|2\rangle\right), \qquad \langle \psi(0)| = \frac{1}{\sqrt{2}}\left(\langle 0|+\langle 2|\right).$$
$$|\psi(t)\rangle = e^{-i\hat{H}t}|\psi(0)\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle+e^{-2it}|2\rangle\right),$$
$$\langle \psi(t)| = \langle\psi(0)|e^{i\hat{H}t} = \frac{1}{\sqrt{2}}\left(\langle 0|+\langle 2|e^{2it}\right),$$
$$\langle\psi(t)|\hat{x}|\psi(t)\rangle=0, \qquad \langle\psi(t)|\hat{p}|\psi(t)\rangle=0.$$
$$\langle\psi(t)|\hat{x}^2|\psi(t)\rangle=\frac{3}{2}+\frac{1}{\sqrt{2}}\cos(2t), \qquad \langle\psi(t)|\hat{p}^2|\psi(t)\rangle=\frac{3}{2}-\frac{1}{\sqrt{2}}\cos(2t),$$
because $\hat{a}^2|2\rangle=\sqrt{2}|0\rangle$. Therefore,
$$f(t) = \frac{9}{4} - \frac{1}{2}\cos^2(2t),$$
which is non-constant in time, and we are done.
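(Editorial aside: the closed-form f(t) above is easy to cross-check numerically with truncated oscillator matrices, using H = N exactly as in the answer; the truncation size below is an arbitrary convergence choice.)

```python
import numpy as np

# Numerical cross-check of f(t) = 9/4 - (1/2) cos^2(2t), hbar = m = omega = 1.
n = 40                                            # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, n)), 1)          # annihilation operator
x = (a.conj().T + a) / np.sqrt(2)
p = 1j * (a.conj().T - a) / np.sqrt(2)

psi0 = np.zeros(n, dtype=complex)
psi0[0] = psi0[2] = 1 / np.sqrt(2)                # (|0> + |2>)/sqrt(2)

for t in [0.0, 0.3, np.pi / 4, 1.0]:
    psi = np.exp(-1j * np.arange(n) * t) * psi0   # e^{-iHt}|psi(0)>, H = N

    def var(A):
        mean = (psi.conj() @ A @ psi).real
        return (psi.conj() @ A @ A @ psi).real - mean ** 2

    # numerical f(t) vs the closed-form expression from the answer
    print(t, var(x) * var(p), 9 / 4 - 0.5 * np.cos(2 * t) ** 2)
```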
Or alternatively, we can complete the counter-example without the use of the above time reversal argument by simply performing an appropriate time translation $t\to t-t_0$.
I was thinking of trying to work out some harmonic oscillator example myself (because I have a few further questions and it seems like the simplest system where something nontrivial is happening) but you've beat me to it. Thanks! – Marek Mar 20 '11 at 18:57
Although there is one thing that bugs me. I believe the calculation is essentially right, however we have $f(0) = 1/4$ which means it minimizes the HUP (unless I am misunderstanding your conventions) and therefore $\psi(0)$ would have to be Gaussian - a contradiction with your initial state. Is there a little mistake in the calculation somewhere or do I have a flaw in my argument? – Marek Mar 20 '11 at 19:02
Okay, I fixed it (I hope) :) – Marek Mar 20 '11 at 19:20
Dear @Marek: I agree, there were powers of $2$ missing in three formulas. – Qmechanic Mar 20 '11 at 19:32
One thing that's worth noting: you say that the Schrodinger equation is not invariant under time reversal. It's true that simply substituting $t\to -t$ is not invariant, but simultaneously changing $t\to -t$ and complex conjugating $\psi\to\psi^*$ does leave the equation invariant. That means that, for every solution $\psi(t)$, there is a corresponding solution $\psi^*(-t)$ that "looks like" the same state going backwards in time (and in particular has the same expectation values for all operators). That's what people mean when they say that the Schrodinger equation has time-reversal symmetry. – Ted Bunn Mar 21 '11 at 13:02
The Schrodinger equation is time-symmetric. The answer is therefore No. From all of the comments, I feel like I must be oversimplifying or missing something, but I can't see what.
I'm with you, but it is probably useful for Marek to see for himself how this works in the simple example to be convinced of the general statement. – user566 Mar 19 '11 at 17:19
Yes, this seems like a good argument to settle the original question. But it brings in further questions :) In particular, Moshe's solution (minimum growing towards both future and past) is a kind of bounce. But on both sides of that bounce I suppose the inequality would be satisfied. In other words, would the statement hold if we allowed these simple bouncy solutions and the time "t=0". Or to put it more clearly: I should've asked the more general question of what the uncertainty as a function of time looks like... We now know it need not be monotone but perhaps it has other nice properties. – Marek Mar 19 '11 at 18:07
I can't make heads or tails of this sentence: In other words, would the statement hold if we allowed these simple bouncy solutions and the time "t=0". I don't know if anything interesting in general can be said about the time evolution of $\Delta x\,\Delta p$, other than of course that it's bounded below. – Ted Bunn Mar 19 '11 at 18:09
@Ted: ah, that was indeed not very clear. The best rephrasing is probably this: whether there exists a time $t_0$ such that the inequality holds for all times $t \geq t_0$. But it is a different question. – Marek Mar 19 '11 at 20:15
I think that @Marek and I are in complete agreement. Just to be explicit, let me answer @Carl's question about how we know $\Delta p$ is constant. Marek is right: For a free particle, $p^n$ commutes with the Hamiltonian, so all expectation values $\langle p^n\rangle$ are constant.
So $\Delta p^2=\langle p^2\rangle-\langle p\rangle^2$ is constant. (Indeed, the entire probability distribution for $p$ is constant in time.) As a result, a Gaussian wave packet for a free particle does not remain minimum-uncertainty for all time. It spreads in real space while remaining the same in momentum space. – Ted Bunn Mar 20 '11 at 14:05
No. Here's a simple example where it shrinks: You have a particle that has a 50% chance of being on the left going right, and a 50% chance of being on the right going left. This has a macroscopic error in both position and momentum. If you wait until it passes half way, it has a 100% chance of being in the middle. This has a microscopic error in position. There will also only be a microscopic change in momentum. (I'm not entirely sure of this as the possibilities hit each other, but if you just look right before that, or make them miss a little, it still works.) As such, the error in position decreased significantly, but the error in momentum stayed about the same.
Think in terms of Harmonic Functions and their Maximum Principle (or Mean Value Theorem). For simplicity (and, in fact, without loss of generality), let's just think in terms of a free particle, ie, $V(x,y,z) = 0$. When the Potential vanishes, the Schrödinger equation is nothing but a Laplace one (or Poisson equation, if you want to put a source term). And, in this case, you can apply the Mean Value Theorem (or the Maximum Principle) and get a result pertaining your question: in this situation you saturate the equality.
Now, if you have a Potential, you can think in terms of a Laplace-Beltrami operator: all you need to do is 'absorb' the Potential in the Kinetic term via a Jacobi Metric: $\tilde{\mathrm{g}} = 2\, (E - V)\, \mathrm{g}$. (Note this is just a conformal transformation of the original metric in your problem.) And, once this is done, you can just turn the same crank we did above, ie, we reduced the problem to the same one as above. ;-)
I hope this helps a bit.
I am sorry but I don't see how this is related to uncertainty and time evolution. Could you explain that? – Marek Mar 19 '11 at 20:51
@Marek: the point was made explicit by Qmechanic, in his answer above. If you apply what i said in the Schrödinger picture, you get evolving states whose magnitude is always bound by the Mean Value Theorem. (If we were talking about bounded operators, this could be made rigorous with a bit of Functional Analysis.) – Daniel Mar 20 '11 at 19:32
A physical way of seeing this is that the phase space volume of a system is preserved. Hamiltonian mechanics preserves the volume of a system on its energy surface H = E, which in quantum mechanics corresponds to the Schrodinger equation. The phase space volume on the energy surface of phase space is composed of units of volume $\hbar^{2n}$ for the momentum and position variables plus the $\hbar$ of the energy $i\hbar\partial\psi/\partial t~=~H\psi$. This is then preserved. Any growth in the uncertainty $\Delta p\Delta q~=~\hbar/2$ would then imply the growth in the phase space volume of the system. This would then mean there is some dissipative process, or the quantum dynamics is replaced by some master equation with a thermal or environmental loss of some form. For a pure unitary evolution however the phase space volume of the system, or equivalently the $Tr\rho$ and $Tr\rho^2$, are constant.
This means the uncertainty relationship is a Fourier transform between complementary observables which preserve an area $\propto~\hbar$.
-1, this is completely irrelevant to my question. I am interested just in pure states and for those the phase volume is always zero and so trivially conserved. But this doesn't give any information on the behavior of uncertainty. – Marek Mar 21 '11 at 13:20
The volume a system occupies in phase space defines entropy as $S~=~k~log(\Omega)$ for $\Omega$. The von Neumann entropy $$ S~=~-k~Tr~\rho log(\rho). $$ A mixed state has each element of $\rho~=~1/n$ and the trace is $\sum(1/n)log(1/n)$ $~=~log(n)$. A pure state then occupies a phase space region that is normalized to unit volume --- not zero. – Lawrence B. Crowell Mar 21 '11 at 14:45
Modal Interpretation of Quantum Mechanics
The term modal interpretation is ambiguous. It is a proper name that refers to a number of particular interpretations of quantum mechanics. And it is a term that singles out a class of conceptually similar interpretations, which includes proposals that are not generally referred to as modal ones. This ambiguity was already present when Bas C. van Fraassen coined the term in the 1970s by transposing the semantic analysis of modal logics to quantum logic. The resulting modal interpretation of quantum logic defined a class of interpretations of quantum mechanics, of which van Fraassen developed one instance in detail, called the Copenhagen modal interpretation. In the 1980s Simon Kochen and Dennis Dieks developed independently an interpretation of quantum mechanics that became known as the modal interpretation, turning the term into a proper name. In the 1990s further research produced new proposals, broadening attention to the class of modal interpretations.
The development of modal interpretations can be positioned as attempts to understand quantum mechanics as a theory according to which some but not all observables of physical systems have definite values. Quantum mechanics predicts the outcomes of measurements of observables pertaining to systems and is typically silent about whether these observables have values themselves. Attempts to add to quantum mechanics descriptions of systems in which all quantum-mechanical observables have values became deadlocked in the 1960s: Kochen and Ernst Specker's no-go theorem proved that such descriptions are inconsistent if these values have to comply with the same mathematical relations as the observables themselves; John S. Bell's inequalities showed that the descriptions easily lead to nonlocal phenomena at odds with relativity theory (Redhead 1987). Modal interpretations add descriptions to quantum mechanics according to which only a few preferred observables have values, and avoid in this way specifically the Kochen-Specker theorem.
A second common element is that modal interpretations do not ascribe one state to a system, as quantum mechanics does, but two: a dynamical state and a value state. By doing so another peculiarity of quantum mechanics is overcome, namely that states of systems evolve alternately by two mutually incompatible laws: the Schrödinger equation that yields smooth state evolution in between measurements, and the projection postulate that yields discontinuous evolution at measurements. In modal interpretations dynamical states of systems evolve with the Schrödinger equation only, and value states evolve typically discontinuously. A particular modal interpretation is now characterized by the value states it assigns to systems; value states fix the preferred definite-valued observables and their values.
Finally there is the claim that modal interpretations stay close to quantum mechanics. The dynamical states that modal interpretations assign can be taken as the states that quantum mechanics assigns, the only difference being that the former do not evolve by the projection postulate. Modal interpretations may thus be said to incorporate quantum mechanics instead of replacing it, as some hidden-variables theories do.
Quantum-Mechanical Hilbert-Space Mathematics
In quantum mechanics the state and observables of a physical system are represented by mathematical entities defined on a Hilbert space associated with the system.
A Hilbert space H contains vectors |ψ⟩, and if it is an n-dimensional space, there exist sets {|e_1⟩, |e_2⟩, …, |e_n⟩} of n vectors that are pair-wise orthogonal. Such a set is called a basis of the space, which means that any vector |ψ⟩ in H can be decomposed as a weighted sum of the elements of the basis: |ψ⟩ = Σ_i c_i|e_i⟩. The Hilbert space associated with two disjoint physical systems consists of the tensor product H_1 ⊗ H_2 of the Hilbert spaces associated with the separate systems. If {|e_1⟩, …, |e_n⟩} is a basis of H_1 and {|f_1⟩, …, |f_m⟩} a basis of H_2, then any vector |Ψ⟩ in H_1 ⊗ H_2 can be decomposed as a sum |Ψ⟩ = Σ_{i,j} C_{ij}|e_i⟩|f_j⟩ (a double summation). Linear operators A on a Hilbert space are linear mappings within that space. The operator that projects any vector on the vector |ψ⟩ is called a projector and is written as |ψ⟩⟨ψ|. In quantum mechanics the state of a system is represented by such a projector, or by a density operator W which is a convex sum Σ_i λ_i|ψ_i⟩⟨ψ_i| of projectors. An observable pertaining to a system (e.g., its momentum or spin) is represented by a self-adjoint operator A. Self-adjoint operators and density operators can be decomposed in terms of their eigenvalues a_i and projectors on their pair-wise orthogonal eigenvectors |a_i⟩, that is, A = Σ_i a_i|a_i⟩⟨a_i|. (Complications due to degeneracies, phase factors, and infinities are ignored.)

Particular Modal Interpretations

In all interpretations named modal, the dynamical state of a system is represented by a density operator W on the system's Hilbert space. This dynamical state evolves with the Schrödinger equation and has the usual quantum-mechanical meaning in terms of measurement outcomes: if observable A is measured, its eigenvalue a_i is found with probability p(a_i) = ⟨a_i|W|a_i⟩. The value state of a system is represented by a vector |v⟩ and determines the values of observables by the rule: A has value a_i iff |v⟩ is equal to the eigenvector |a_i⟩ of A. This rule leaves many observables without values; a specific value state is an eigenvector of only a few operators, which then represent the preferred observables. Particular modal interpretations fix the value states of systems differently.

In van Fraassen's (1973, 1991) Copenhagen modal interpretation |v⟩ is a vector in the support of the dynamical state (which implies that W can be written as a convex sum of |v⟩⟨v| and other projectors). Van Fraassen is more specific about value states after measurements. If an observable A of a system is measured, the dynamical state of the composite of system and measurement device may become |Ψ⟩⟨Ψ|, with |Ψ⟩ = Σ_i c_i|a_i⟩|R_i⟩. The vectors |a_i⟩ are eigenvectors of the measured observable, and the |R_i⟩'s are eigenvectors of a device observable that represents the outcomes (the pointer readings). The value states after this measurement are, according to van Fraassen, with probability |c_i|² simultaneously given by |a_i⟩ for the system and by |R_i⟩ for the measurement device, respectively.

The decomposition |Ψ⟩ = Σ_i c_i|a_i⟩|R_i⟩ is mathematically special because it contains one summation (as said, a decomposition of a vector |Ψ⟩ in a product space H_1 ⊗ H_2 relative to bases of the separate Hilbert spaces has usually a double summation). This special single-sum decomposition is called the bi-orthogonal decomposition of |Ψ⟩, and a theorem (Schrödinger 1935) states that every vector |Ψ⟩ in H_1 ⊗ H_2 determines exactly one basis {|e_1⟩, …, |e_n⟩} for H_1 and one basis {|f_1⟩, …, |f_m⟩} for H_2 for which its decomposition becomes such a bi-orthogonal decomposition.
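As a concrete illustration, the bi-orthogonal decomposition of a given coefficient matrix C_{ij} can be computed numerically with a singular value decomposition (a minimal sketch; the example matrix is hypothetical):

```python
import numpy as np

# Sketch: the bi-orthogonal (Schmidt) decomposition via SVD. Writing the
# coefficient matrix as C = U diag(c) V^dagger turns the double sum
# |Psi> = sum_ij C_ij |e_i>|f_j> into the single sum |Psi> = sum_i c_i |u_i>|v_i>.
C = np.array([[0.5, 0.5],
              [0.5, -0.5]])               # example coefficients, norm 1

U, c, Vh = np.linalg.svd(C)
for i in range(len(c)):
    print(f"c_{i} = {c[i]:.4f}")
    print("  H_1 basis vector |u_i>:", U[:, i])
    print("  H_2 basis vector |v_i>:", Vh[i, :].conj())

print("sum of |c_i|^2:", np.sum(c**2))    # 1 for a normalized |Psi>
```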
Kochen (1985) and Dieks (1989) use this decomposition to define value states in their modal interpretation: if two disjoint systems have a composite dynamical state |Ψ⟩⟨Ψ| and the bi-orthogonal decomposition of the vector |Ψ⟩ is |Ψ⟩ = Σ_i c_i|e_i⟩|f_i⟩, then the value states are with probability |c_i|² simultaneously |e_i⟩ for the first system and |f_i⟩ for the second. Kochen adds a perspectival twist to this proposal, absent in Dieks's earlier writing: for Kochen the first system witnesses the second to have value state |f_i⟩ iff it has itself value state |e_i⟩ (which is the case with probability |c_i|²), and the second system then witnesses, conversely, the first to have value state |e_i⟩.

The Kochen-Dieks proposal applies to two systems with a composite dynamical state represented by a projector |Ψ⟩⟨Ψ| only. The spectral modal interpretation by Pieter Vermaas and Dieks (1995) generalizes this proposal to n disjoint systems with an arbitrary composite dynamical state W. This composite state fixes the dynamical states of all subsystems. Let W(x) be the dynamical state of the x-th system part of the composite and let it have an eigenvalue-eigenvector decomposition W(x) = Σ_i w_i(x)|w_i(x)⟩⟨w_i(x)|. The value state of this x-th system is then |w_i(x)⟩ with probability w_i(x). Vermaas and Dieks gave, moreover, joint probabilities that the disjoint systems have simultaneously their value states |w_i(1)⟩, |w_j(2)⟩, etcetera.

In the spectral modal interpretation a composite system, say, system 1+2 composed of the disjoint systems 1 and 2, has an eigenvector |w_k(1+2)⟩ of its dynamical state W(1+2) as its value state. The atomic modal interpretation by Guido Bacciagaluppi and Michael Dickson (1999) fixes the value states of such composite systems differently. Bacciagaluppi and Dickson assume that there exists a set of disjoint atomic systems, for which the value states are determined similarly as in the spectral modal interpretation, and propose that the value states of composites of those atoms are tensor products of the value states of the atoms: the value state of the composite of atoms 1 and 2 is |w_i(1)⟩|w_j(2)⟩ iff the value states of the atoms are |w_i(1)⟩ and |w_j(2)⟩, respectively.

The Class of Modal Interpretations

The class of modal interpretations comprises those proposals according to which only a few observables have values, and that can be formulated in terms of dynamical and value states. The interpretations by Richard Healey (1989) and by Jeffrey Bub (1997) have this structure quite explicitly and are therefore often called modal ones (Healey's proposal has a number of similarities with the Kochen-Dieks proposal; in Bub's the value state of a system is an eigenvector of an observable fixed independently of the system's dynamical state). One may argue that David Bohm's mechanics (1952) is also a modal interpretation.

The development and application of modal interpretations have led to mixed results. The maximum set of observables that can have values by modal interpretations without falling prey to the Kochen-Specker theorem has been determined (Vermaas 1999). Bub and Rob Clifton showed that this set is the only one that satisfies a series of natural assumptions on descriptions of single systems (Bub, Clifton, and Goldstein 2000). The evolution of value states, which determines the description of systems over time, can be given (Bacciagaluppi and Dickson 1999).
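In the same spirit, the spectral modal prescription can be illustrated numerically (a hypothetical two-qubit sketch): the candidate value states of a subsystem are the eigenvectors of its reduced dynamical state, with the eigenvalues as their probabilities.

```python
import numpy as np

# Sketch: spectral modal interpretation for subsystem 1 of a two-qubit
# composite in the entangled state sqrt(0.8)|00> + sqrt(0.2)|11>.
psi = np.array([np.sqrt(0.8), 0.0, 0.0, np.sqrt(0.2)])
W_12 = np.outer(psi, psi)                  # composite dynamical state

# Reduced state W(1): partial trace over system 2.
W_1 = np.trace(W_12.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# Eigenvalues w_i(1) are the probabilities of the value states |w_i(1)>.
w, vecs = np.linalg.eigh(W_1)
for wi, vi in zip(w, vecs.T):
    print(f"probability {wi:.2f} for value state {vi}")
```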
This evolution was, however, shown not to be Lorentz-covariant for the spectral and atomic modal interpretations and, to a lesser extent, for Bub's interpretation, revealing that the assumption that only a few quantum-mechanical observables have values may still lead to problems with relativity theory (Dickson and Clifton 1998, Myrvold 2002). Moreover, even though this assumption yields consistent descriptions of single systems, joint descriptions of systems were still proved to be problematic. First, it is commonly assumed in quantum mechanics that the observable of a system 1 represented by the operator A defined on H_1, and the observable of a composite system 1+2 represented by the operator A ⊗ I_2 on H_1 ⊗ H_2 (I_2 is the identity operator on H_2), are one and the same observable. The Copenhagen, Kochen-Dieks, and spectral modal interpretations have the debatable consequence that these observables should be distinguished (Clifton 1996). Second, the spectral modal interpretation cannot give joint probabilities that systems 1, 2, …, and their composites, 1+2, …, have simultaneously their value states |w_i(1)⟩, |w_j(2)⟩, |w_k(1+2)⟩, etcetera (Vermaas 1999, ch. 6). These negative results motivated in part the formulation of the atomic modal interpretation but can also be avoided by adopting Kochen's perspectivalism, which implies that one accepts constraints on describing different systems simultaneously.

Finally, the Kochen-Dieks, spectral, and atomic modal interpretations have problems with properly describing measurements, casting doubt on their empirical adequacy. David Albert and Barry Loewer (1990) argued that after a measurement, the dynamical state of the system-device composite need not be |Ψ⟩⟨Ψ| with |Ψ⟩ = Σ_i c_i|a_i⟩|R_i⟩, and that the mentioned interpretations then need not yield descriptions in which the device displays an outcome (Bacciagaluppi and Hemmo 1996).

These results allow critical conclusions about particular modal interpretations and raise doubts about the viability of the class of modal interpretations. Three remarks can be made about this assessment. First, an evaluation of the results may depend on what one expects from interpretations. If interpretations are to provide descriptions that allow realist positions about quantum mechanics, the inability of, say, the spectral modal interpretation to give joint probabilities that systems have simultaneously their value states proves this interpretation problematic. But if interpretations, in line with van Fraassen's view, are to yield understanding of what quantum mechanics means, this inability of the spectral modal interpretation is an interesting conclusion about how quantum-mechanical descriptions of systems differ from those of other physical theories. The result that some modal interpretations may be empirically inadequate is, however, fatal independently of one's expectations for interpretations. Second, the set of particular modal interpretations analyzed so far does not exhaust the class of modal interpretations. Research therefore continues (e.g., Bene and Dieks 2002). Third, these results are relevant to the project of interpreting quantum mechanics in general. Existing and new interpretations, modal or not, according to which only some observables have definite values, are constrained by the negative results and can now be assessed as such; and they may benefit from the positive results about modal interpretations.
See also Bell, John, and Bell's Theorem; Bohm, David; Quantum Mechanics; Van Fraassen, Bas.

Albert, David Z., and Barry Loewer. "Wanted Dead or Alive: Two Attempts to Solve Schrödinger's Paradox." In Proceedings of the 1990 Biennial Meeting of the Philosophy of Science Association, Vol. 1, edited by Arthur Fine, Micky Forbes, and Linda Wessels. East Lansing, MI: Philosophy of Science Association, 1990.
Bacciagaluppi, Guido, and Michael Dickson. "Dynamics for Modal Interpretations." Foundations of Physics 29 (1999): 1165–1201.
Bacciagaluppi, Guido, and Meir Hemmo. "Modal Interpretations, Decoherence and Measurement." Studies in History and Philosophy of Modern Physics 27 (1996): 239–277.
Bene, Gyula, and Dennis Dieks. "A Perspectival Version of the Modal Interpretation of Quantum Mechanics and the Origin of Macroscopic Behavior." Foundations of Physics 32 (2002): 645–671.
Bohm, David. "A Suggested Interpretation of Quantum Theory in Terms of 'Hidden Variables.'" Physical Review 85 (1952): 166–193.
Bub, Jeffrey. Interpreting the Quantum World. Cambridge, U.K.: Cambridge University Press, 1997.
Bub, Jeffrey, Rob Clifton, and Sheldon Goldstein. "Revised Proof of the Uniqueness Theorem for 'No Collapse' Interpretations of Quantum Mechanics." Studies in History and Philosophy of Modern Physics 31 (2000): 95–98.
Clifton, Rob. "The Properties of Modal Interpretations of Quantum Mechanics." British Journal for the Philosophy of Science 47 (1996): 371–398.
Dickson, Michael, and Rob Clifton. "Lorentz-Invariance in Modal Interpretations." In The Western Ontario Series in Philosophy of Science, Vol. 60, The Modal Interpretation of Quantum Mechanics, edited by Dennis Dieks and Pieter E. Vermaas. Dordrecht, Netherlands: Kluwer, 1998.
Dieks, Dennis. "Quantum Mechanics Without the Projection Postulate and Its Realistic Interpretation." Foundations of Physics 19 (1989): 1397–1423.
Healey, Richard A. The Philosophy of Quantum Mechanics: An Interactive Interpretation. Cambridge, U.K.: Cambridge University Press, 1989.
Kochen, Simon. "A New Interpretation of Quantum Mechanics." In Symposium on the Foundations of Modern Physics, edited by Pekka Lahti and Peter Mittelstaedt. Singapore: World Scientific, 1985.
Myrvold, Wayne C. "Modal Interpretations and Relativity." Foundations of Physics 32 (2002): 1773–1784.
Redhead, Michael. Incompleteness, Nonlocality, and Realism: A Prolegomenon to the Philosophy of Quantum Mechanics. Oxford: Clarendon Press, 1987.
Schrödinger, Erwin. "Discussion of Probability Relations Between Separated Systems." Proceedings of the Cambridge Philosophical Society 31 (1935): 555–563.
van Fraassen, Bas C. Quantum Mechanics: An Empiricist View. Oxford: Clarendon Press, 1991.
van Fraassen, Bas C. "Semantic Analysis of Quantum Logic." In The University of Western Ontario Series in Philosophy of Science, Vol. 2, Contemporary Research in the Foundations and Philosophy of Quantum Theory, edited by C. A. Hooker. Dordrecht, Netherlands: Reidel, 1973.
Vermaas, Pieter E. A Philosopher's Understanding of Quantum Mechanics: Possibilities and Impossibilities of a Modal Interpretation. Cambridge, U.K.: Cambridge University Press, 1999.
Vermaas, Pieter E., and Dennis Dieks. "The Modal Interpretation of Quantum Mechanics and Its Generalization to Density Operators." Foundations of Physics 25 (1995): 145–158.

Pieter E. Vermaas (2005)
Why the Universe does not revolve around the Earth

Refuting absolute geocentrism

Published 12 February 2015; last update 19 July 2017

Questions about how the universe works are not always easy to answer. For many centuries most people (scientists and philosophers included) thought the earth was at its center and that the planets, moon, sun, and stars revolved around us. This is called "geocentrism" or the "geocentric view of the universe". It took years of painstaking work, spread out over multiple centuries, to show that this was false as an absolute claim. Today, we accept a "geokinetic" (moving-earth) view based on the work of Newton and Einstein. For the student of history and/or science, how we came to the modern view is an amazing exploration of how things work and a testimony to the amazing ability to reason that God uniquely put into people.

We live in a created universe, meaning its existence did not come about through naturalistic processes alone. We also live in a well-ordered universe, meaning it behaves according to a set of rules. This is consistent with it being created by an ultimate Lawgiver, who is not fickle and acts in a consistent manner, according to His very nature (cf. 1 Corinthians 14:33, James 1:17). Therefore, we can explore the way things work and expect rational results from our experiments. However, it is far more difficult to take these experiments and use them to explain the origin of everything. When a person tries to predict backwards to infinity, this type of science breaks down. Philosophically, there are paradoxes waiting around every corner. For example, we are either in a steady-state universe that defies the 2nd Law of Thermodynamics, or we are in a universe that has a beginning but without a cause. Scientifically, we see how appeals to big bang physics have led to much speculation, including inflation theory, dark matter, dark energy, the fine tuning of numerous constants in order to get the models pointing in the right direction, etc. Therefore, even after we have learned all this about the mechanics of the universe, once we begin trying to explain how it all began we get into the realm of faith. True, there are still puzzles to be explained in the "young-earth" position, but since evolutionists explain away their puzzles with 'it's science's job to solve these puzzles', the same allowances should be made for creationist scientists.

The question about whether or not earth is at the center is not as easy to answer as the "flat earth" question. Not only are these two ideas not the same, but no significant evidence exists for flat earth beliefs among scientists going back to the Greeks. Indeed, a Greek scientist named Eratosthenes of Cyrene (276–194 BC) calculated the circumference of the earth (to an amazing degree of accuracy). Within the circles of Christian scholarship, no notable theologian seems to have believed in a flat earth, not only because it so obviously is not, but also because the Bible does not claim it is. Notable theologians throughout the Christian era believed the earth is spherical. Even in the midst of the falsely-named "Dark Ages", the leading Anglo-Saxon scholar and monk 'the Venerable' Bede (AD 673–735), one of the most widely-read scholars for the next 1000 years, wrote that the earth is a globe.

The relationship between the spherical earth and the universe, however, was a notorious nut to crack, with many famous scientists weighing in on the difficulty.
The main problem is that we are here on earth and, to us, it appears that everything revolves around our planet. We don't feel like we are sailing through the heavens. We don't feel like we are moving at all. Is it possible to sort out fact from fiction in this subject? Actually, yes. The answer is both elegant and satisfying, but we must do a little digging to answer the riddle.

Biblical phenomenological language

Well-meaning Christian geocentrists basically say, "The Bible says the sun rises and sets and that the earth doesn't move; that settles it." However, does the Bible really say that absolute geocentrism is true? The use of language complicates this subject. Even today, in both writing and in common speech, people often use "phenomenological language". Indeed, it would be almost impossible to have many conversations if we did not talk about things like "sunrise" (go ahead, try to describe a sunrise or sunset without sounding like you are stationary and the sun is moving, and compare with our attempt below). And it's not just us: the leading Roman poet Virgil (70–19 BC) wrote, "We set out from harbour, and lands and cities recede" (Aeneid 3:72). This line was quoted by both Copernicus and Kepler.

Thus, even when consulting biblical passages, we must be wary of the use of language. This was recognized in the Middle Ages by scientist-clergy such as the priest Jean Buridan (c. 1300–c. 1360), the bishop Nicole Oresme (c. 1320–1382),2,3 and Cardinal Nicholas of Cusa (1401–1464).4 If you think these men are perhaps insignificant, Buridan's formulation anticipated the principle of describing motion with respect to reference frames, which paved the way for Galileo, Newton, and Einstein. His idea of impetus anticipated Galileo's concept of inertia and Newton's First Law of Motion.5 Historian of science James Hannam comments:

Like many medieval Christians, Buridan expected God to have arranged things in an elegant way, always allowing that he could do as he pleased. However, although there was also a presumption towards elegance, you still had to check the empirical facts to see if God really operated this way.6

Living almost exactly 100 years after Buridan, Nicholas of Cusa wrote eloquently on the subject, and it is clear that he believed the earth moved through space; he also clearly understood the principle of frames of reference (discussed in more detail below). Buridan and Nicholas predate the Copernican Revolution, meaning later scientists did not come up with their ideas on their own.8 Centuries of scholarship had been working in this direction.

When we consider the biblical 'proof texts', most are taken out of context by those few people who want to argue for absolute geocentrism (the view that the earth is fixed and does not rotate while everything in the universe rotates about us once every day). This taking out of context is done both by biblioskeptics and, unfortunately, modern geocentrists who take their views as gospel.

There are multiple verses that have a generic reference to "sunrise", including Genesis 19:23, Exodus 22:3, Judges 5:31, Judges 9:33, Job 9:7, Psalm 104:22, Ecclesiastes 1:5, Nahum 3:17, Matthew 5:45, Mark 16:2, and James 1:11. There are also a number of verses that use "sunrise" in relation to the direction "east", which makes perfect sense, including Numbers 2:3, Numbers 3:38, Numbers 34:15, Joshua 1:15, Joshua 12:1, Joshua 13:5, Joshua 19:12, and Joshua 19:13. Indeed, the normal Greek word for 'east', ἀνατολή (anatolē, e.g.
Matthew 2:1), has the primary meaning of 'rising', usually of the sun. In other places, "sunrise" is used in a prophetic or poetic sense, including Luke 1:78 (also anatolē), which comes in the middle of the prophecy of Zechariah, father of John the Baptist, and is comparing Christ to the sunrise "that shall visit us from on high". This is similar to the prophecy of Malachi 4:2 that claims "the sun of righteousness shall rise with healing in its wings." Additional references can be found in Psalm 50:1 ("The Mighty One, God the Lord, speaks and summons the earth from the rising of the sun to its setting."), Malachi 1:11 ("For from the rising of the sun to its setting my name will be great among the nations…"), and Psalm 113:3 ("From the rising of the sun to its setting, the name of the Lord is to be praised!").

There are also multiple verses that have a reference to "sunset", including Genesis 28:11, Deuteronomy 16:6, Deuteronomy 23:11, Deuteronomy 24:13, Deuteronomy 24:15, Joshua 8:29, Joshua 10:27, 1 Kings 22:36, 2 Chronicles 18:34, Psalm 50:1, Psalm 104:19, Psalm 113:3, Ecclesiastes 1:5, Daniel 6:14, Malachi 1:11, and Luke 4:40. None of these verses is a challenge to geokinetic theory and none actually supports geocentrism, for all are acceptable uses of phenomenological language and, as mentioned earlier, we use similar phrases every day with no intention of misleading anyone into thinking we are geocentrists. (But all of these verses refute certain modern models of a flat earth where the sun orbits at a constant distance above a flat disk!) Modern geokinetic astronomers teach using a planetarium, which treats the earth as the center of an infinite celestial sphere, and is full of phenomenological 'geocentric' terms such as zenith, nadir, celestial poles, and equator. Language conventions like this are necessary for simple communication.

There are other passages, however, that require a more careful exegesis. After the Israelites crossed the Jordan into Canaan, they defeated the cities of Jericho and Ai (Joshua 1–8). Soon after that the residents of Gibeon tricked Israel into entering a covenant with them (Joshua 9). Gibeon was to the west of Ai and an obvious next target for the invading army. The other peoples in the area were angry and went to war against the Gibeonites. Israel came to their aid and a great battle was fought (Joshua 10). In the midst of this battle, the Bible says that the sun stood still over Gibeon and the moon over the Valley of Aijalon (Joshua 10:12–13).

This very famous passage describes Joshua's Long Day, and is often used to support geocentric views, but what is it saying, really? Obviously, the statements are being given in a local frame of reference. Why? Because the sun standing over Gibeon would not appear to be overhead anywhere except in the geographic vicinity of Gibeon. The valley of Aijalon is to the west of Gibeon. Therefore, the moon would not appear to be to the west of Gibeon to someone standing in Aijalon; it would be out over the Mediterranean.9 Many claim this passage teaches that God stopped the moving sun and moon. Yet there is nothing here to say that he did not temporarily slow down a rotating earth (as well as the hydrosphere and atmosphere). This would produce the same effect. Or He could have stopped the movement of everything in the universe. Same result. That something universal really happened in history is shown by legends of a long night in people groups on the other side of the globe.10 Note that the mention of the moon is a mark of authenticity.
The Amorites were sun worshippers, so it makes sense for God to show His power over the false god. But if His means really was slowing down the earth, as we suggest, then this would also affect the relative motion of the moon, which otherwise need not have been mentioned.

And let us not forget the reversing of the course of the sun in the time of Hezekiah (2 Kings 20:5–11, Isaiah 38:1–7), an event that was noticed, or at least enquired about, by astronomers outside of Jerusalem (2 Chronicles 32:24–31). These deviations from the scientific norm are what allow us to identify miracles when they occur. In a geocentric universe, everything is one giant miracle with no simple explanation (see below). Certainly, a geocentrist would not expect the sun to stop or to move backward, but why not? There is no rational explanation for the way the universe operates, so why could something out of the ordinary not happen?

Psalm 96:10 is another critical verse for us to understand. It says that the world is established and "shall never be moved". Similar statements that "the earth shall not be moved" appear in Psalm 93:1 and Psalm 104:5. Do these verses not say that the earth does not move? No, they do not, for one very simple reason: the Hebrew word מוֺט (mot) means "to totter, shake, or slip"11 and is often translated such in other places. The opposite of "shake" can be "unmoving", as in these verses, but it can also be accurately translated "unshaken". Using the same word, Psalm 55:22 and Psalm 112:6 say the righteous will never be moved. Same word, similar context, but obviously this does not mean people are fixed in place! Yet, if the righteous can move, so can the earth. Following on that theme, Psalm 121 is titled, "The Righteous shall never be moved." Verse 3 says God will never let your foot be moved, yet a few verses later talks about "coming in" and "going out", meaning the feet must be moving and the earlier use of "shall not be moved" must be a metaphoric or poetic expression for "firm" or "unshaken". Also, Psalm 16:8 says, "I shall not be moved," and most biblioskeptics and geocentrists would not think that the Psalmist was in a strait jacket! Finally, Psalm 125:1 says those who trust in the Lord are like Mt. Zion, which cannot be moved and abides forever. This is perhaps a better place to use "cannot be moved", for we are talking about a mountain, but even that will be burned up in the future (according to most views on eschatology), so the poetic expression is clear.

One other problem is the use of the word "firmament" in Genesis 1 in the King James Version. That word comes straight out of the geocentric views of Ptolemy (AD 90–168) and his predecessors, albeit by a long route. Around 250 BC, Jewish scholars in Alexandria, Egypt, translated the Hebrew Bible into Greek to make the Septuagint (LXX). Unfortunately, they imbibed some of the Greek cosmologies—a spherical earth surrounded by concentric crystalline spheres—by translating the Hebrew word רקיע (rāqîya‘) into steréōma (στερέωμα). This comes from the word στερεόω (stereoō)—"to make or be firm or solid." We see this meaning carried over into Jerome's Latin Vulgate, firmamentum. This was basically transliterated into the KJV's "firmament". Thus, this is an example where the science of the day influenced Bible translation, and vestiges remained for almost 2,000 years! Another example of how the Greek cosmology influenced Jewish translators comes from Josephus. He referred to the rāqîya‘ created on Day 2 as a κρύσταλλος (crystallos, i.e.
crystalline sphere) around the earth (Antiquities of the Jews 1(1):30). [Note: This is not a slam on the KJV necessarily. CMI does not take any particular stand on Bible translations, but this one word is demonstrably taken from the scientific views of the time.]

There is some debate among creationists on the meaning of rāqîya‘ in this context. Kulikovsky points out:

Note also that the semantic ranges of stereōma and firmamentum do not match rāqîya‘. The Hebrew word rāqîya‘ refers to something flexible or malleable which has been stretched out. As Livingston puts it: "The emphasis in the Hebrew word raqia is not on the material itself but on the act of spreading out or the condition of being expanded."[12] Stereōma and firmamentum, on the other hand, refer to something hard, solid and inflexible.[13] Indeed, Seely admits that his historical etymology of rāqîya‘ and rāqa "does not absolutely prove that rāqîya‘ in Genesis 1 is solid."[14]15

J.P. Holding put it this way:

… the description of the raqiya‘ is so equivocal and lacking in detail that one can only read a solid sky into the text by assuming that it is there in the first place. One can, however, justifiably understand Genesis to be in harmony with what we presently know about the nature of the heavens.16

Thus, even though multiple interpretations could equally well fit, rāqîya‘ does not mean "solid dome". And as will be seen, most of the debate was about the science; or as philosopher of science Thomas Kuhn (1922–1996) put it, a shift in scientific paradigms.17,18 Most people in history spoke in geocentric terms, as most people do today—we say, "The sun is setting" not "The earth's rotation is now bringing our line of sight to the sun into a tangent at my position on the earth's surface." But this does not mean that most of us today are geocentrists!

Thus, there is no real biblical problem with a geokinetic view. This is not the same argument as "is evolution true?" or "can we add millions of years of earth history to the Bible?" This is not using "science" to inform us about biblical theology, which all attempts at merging evolutionary time and the Bible end up doing. The nature of the relationship of the earth to the heavens is an open subject that begs for exploration. This is an example of the difference between the ministerial and magisterial uses of science. Geokineticism is ministerial in that it helps us to elucidate texts that could go either way. In contrast, long-age views of evolution are based on a magisterial abuse of science in order to override Scripture, with baneful theological consequences, like death before Adam's sin.

Logic and science

This study is designed both to help Christians refute critics and to understand why geokineticism is both good science and biblically allowable. Here's the main logical problem with absolute geocentrism: it's not that we could not construct a geocentric cosmology, as one of many allowable reference frames. It's that there is no scientific or biblical reason why we would—there is no dynamic model to explain it, i.e. in terms of forces as efficient causes of motions. Therefore it has essentially no predictive value. Yes, it could describe planetary positions accurately enough for pre-telescope astronomy, admittedly a great achievement, but it fails to explain the orbital motions of satellites of other planets. An earth-centered reference frame is useful in some respects, however: for launching things into orbit, for pointing earth-based antennae at geostationary satellites, for plotting the position of stars, etc.
Yet, because it lacks predictive power, a fully-comprehensive geocentric model would be very, very complicated. Its proponents would need to add terms almost at random to account for the thousands of variations easily explained by geokineticism. There is another, perhaps stronger, point to make: geokinetics is the best way to understand the physics. The equations of motion are the simplest for particles that orbit in a center-of-mass system and when the center is used as the origin in the co-ordinate frame. Science thrives on making predictions, and Newton's three Laws of Motion and theory of gravity (with Einstein's further refinements) are one of the most amazing predictive engines in history. Since Scripture doesn't demand that a stationary earth is the only valid reference frame (absolute geocentrism), why would we hold to an earth-centered, earth-fixed reference frame?

Here's the main scientific problem with geocentrism: if absolute geocentrism is true, then the laws of physics are not universal. That is, experiments we do on earth cannot apply to things outside the atmosphere, because Newton's laws of motion and gravity cannot explain what we are seeing. This is a big problem, for every time we do something in outer space everything behaves as it would here on earth. Absolute geocentrism requires a universe that does not work according to Newton's laws. Yes, you can attempt to describe the way things revolve around the earth in an absolute geocentric system, but gravity cannot be used to explain the motion of those objects; another force is required to glue the universe together. Where does the change occur? Certainly before we get to the moon, for that must orbit the earth once a day. But we cannot detect any such transition! We can fly a plane, launch a satellite, send things to the outer solar system, and there is no place where Newtonian mechanics does not apply.

For example, late in 2014, the Rosetta spacecraft from the European Space Agency successfully arrived at and orbited comet 67P/Churyumov–Gerasimenko. In a delicate and complex series of maneuvers, the craft deposited the Philae lander on the surface of the comet. Everything about that rendezvous is explained by Newtonian physics, and it is the same physics that works here on earth. If everything out there behaves as expected based on experiments here on earth, does this not mean that geokineticism is true and absolute geocentrism is not? If you can't use gravity to explain the motion of objects in the solar system, you can't use gravity to explain the motion of space probes flying among those objects. It is that simple.

Absolute geocentrism is then nothing more than 'stamp collecting'. One cannot make many predictions. One can only describe what is seen. Essentially, they can describe observations without being able to explain those observations. The power of the geokinetic model lies in the fact that it is based on a simple observation that can then be used to explain multiple phenomena, as the sketch below illustrates. The Achilles' heel for those few who still believe the earth does not move is that their "model" is nothing more than a list of unrelated phenomena.
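To make that contrast concrete before turning to the history, here is a minimal sketch (standard published constants, nothing fitted to the data): one Newtonian formula, T = 2π√(a³/GM), recovers every planet's year from its distance alone, exactly the kind of economy an absolute geocentric model cannot offer.

```python
import math

# Sketch: orbital periods from Newton's law of gravity alone.
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30        # mass of the sun, kg
AU = 1.496e11           # astronomical unit, m
YEAR = 365.25 * 86400   # seconds in a Julian year

for name, a_au in [("Mercury", 0.387), ("Venus", 0.723),
                   ("Earth", 1.000), ("Mars", 1.524)]:
    a = a_au * AU                                  # semi-major axis, m
    T = 2 * math.pi * math.sqrt(a**3 / (G * M_SUN))
    print(f"{name}: {T / YEAR:.3f} years")         # 0.241, 0.615, 1.000, 1.881
```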
The Greeks

The main protagonist in the geocentrism debate is a man named Claudius Ptolemy (AD 90–168), a Greek scholar living in the Egyptian city of Alexandria in the second century AD. He had a profound influence on this debate, to the point that today the terms "geocentric" and "Ptolemaic" are interchangeable. Prior to him, however, there was no unanimity among Greek thinkers. In fact, several solar-centric views predated Ptolemy's geocentrism. The Greek scholar Aristarchus of Samos (310–230 BC) is but one of those people. Interestingly, he also said that the sun must be further away than the moon (because the moon can eclipse the sun). Since they have the same apparent size, he reasoned the size of the sun must be proportional to its distance behind the moon. He underestimated the size of the sun (and thus its distance) by a factor of 10. But even his estimate was much bigger than the earth, so he reasoned that the earth orbited the sun. And he was not the only ancient to struggle with it. The debate was known to famous people like Archimedes (287–212 BC), Seneca (4 BC–AD 65), Pliny the Elder (AD 23–79), and Plutarch (AD 45–120).

There were good reasons for most early people to believe in geocentrism, and the scholars listed multiple evidences in support of it. Nicolaus Copernicus (1473–1543) summarized the arguments in Chapter 7 of his book On the Revolutions of the Celestial Spheres.19 Copernicus uses the Aristotelian terminology of his opponents, where "violent" simply means "caused by an outside force", and no one then knew Newton's Second Law. For example, a book falling from a table is 'natural motion', while picking it up is 'violent' motion. Yet think about the implications of this Aristotelian view: if any outside force is 'violent', experimental science is invalid, because any experimental manipulation cannot then be 'natural'. Some of the ancients tried to argue that, if the earth rotated, it would fly apart, people and animals would be flung from the surface, falling objects would curve as they fall to the surface, and there should be a perpetual east wind, as Copernicus explained. But he then takes the argument and turns it back on itself, issuing an even greater challenge in Chapter 8:

For these and similar reasons forsooth the ancients insist that the earth remains at rest in the middle of the universe, and that this is its status beyond any doubt. Yet if anyone believes that the earth rotates, surely he will hold that its motion is natural, not violent … Ptolemy has no cause, then, to fear that the earth and everything earthly will be disrupted by a rotation created through nature's handiwork … But why does he not feel this apprehension even more for the universe, whose motion must be the swifter, the bigger the heavens are than the earth? Or have the heavens become immense because the indescribable violence of their motion drives them away from the center? Would they also fall apart if they came to a halt? Were this reasoning sound, surely the size of the heavens would likewise grow to infinity. For the higher they are driven by the power of their motion, the faster that motion will be, since the circumference of which it must make the circuit in the period of twenty-four hours is constantly expanding; and, in turn, as the velocity of the motion mounts, the vastness of the heavens is enlarged. In this way the speed will increase the size, and the size the speed, to infinity. Yet according to the familiar axiom of physics that the infinite cannot be traversed or moved in any way, the heavens will therefore necessarily remain stationary.

As we will see, not only has 'the earth will fly apart' argument been answered, but so have the other arguments some of the ancients attempted to make.

The Church Fathers

The few Church Fathers who discussed the issue were geocentrists.
However, it is not quite fair for modern geocentrists to quote the early Church Fathers in support. First, all the pagans of their day also supported geocentrism, so the Church Fathers just reflected common sense, common contemporary scientific ideas, or common use of language. They were hardly making a principled theological opposition to geokineticism. Second, they were influenced by the faulty translation of the rāqîya‘ in the available Greek and Latin translations. Third, their geocentrism was Ptolemaic geocentrism, while modern geocentrists actually hold the Tychonian (or Tychonic) hybrid geo-heliocentric view (see below). Since no Church Father held this modern view, how can one quote them in support? Fourth, the first genuinely intellectual challenge to absolute geocentrism came from devout adherents to a broadly biblical world view.

The Middle Ages

Due to the work of leading lights like Boëthius (AD 480–525), who was following the lead in this case of Aristotle and Ptolemy (they were not wrong about everything, after all), scholars in the Middle Ages knew the earth was just a point compared to the vastness of space, saying:

As you have heard from the demonstrations of the astronomers, in comparison to the vastness of the heavens, it is agreed that the whole extent of the earth has the value of a mere point; that is to say, were the earth to be compared to the vastness of the heavenly sphere, it would be judged to have no volume at all.21

Yet most of them accepted the geocentric views of their day. Thomas Aquinas (1225–1274) had a great deal of influence in nearly fixing Aristotelian philosophy, and its cousin, Ptolemaic astronomy, in the minds of his contemporaries. However, after Aquinas, some clergy-scientists in the Middle Ages directly questioned Aristotelian philosophy. In fact, the Middle Ages saw the birth of the universities, where questioning authority was often encouraged.22 Because of the infinitesimally small size of the earth compared to the heavens, Buridan and Oresme proposed that it might be more elegant that the earth itself rotated rather than the cosmos revolving around it (following in the steps of several Greek philosophers who said the same). They answered most of the biblical and scientific objections that would be thrown at Galileo a few centuries later, but stopped short of asserting geokineticism as a fact, as Hannam explains:

What Oresme had done was prepare the groundwork. He refuted most of the objections to a moving earth two centuries before Copernicus had suggested that it might actually be in motion.23

A common thought in the Middle Ages was that the centre of the universe was the worst place to be. For example, Dante's Divine Comedy (c. 1310) has nine circles of Hell inside the Earth, getting worse as they approach the center. Satan was right at the centre of a (spherical) earth, at the centre of the universe. In the opposite direction, the nine celestial spheres of heaven increased in virtue and closeness to God as they got further from the center. We certainly do not hold to Dante's vision, but in this light moving the earth away from the center was a promotion in the eyes of people in the Middle Ages, not a demotion, as 21st century anachronistic skeptics claim.

Was heliocentrism the result of Hermetic paganism?

Some recent historians have tried to make the claim that Copernican theory was driven by some sort of Hermetic24 sun worship, but this is grossly anachronistic.
By taking the 'perfect' sun and putting it at the center, instead of worshiping the sun, Copernicans were demoting it to the worst place.25 And even though the Hermetica was widely read among the scholars of Copernicus' time (the Renaissance), we do not believe Copernicus was among the adherents. Copernicus had one passing mention of Hermes among other ancient writings:

At rest, however, in the middle of everything is the sun. For in this most beautiful temple, who would place this lamp in another or better position than that from which it can light up the whole thing at the same time? For, the sun is not inappropriately called by some people the lantern of the universe, its mind by others, and its ruler by still others. (Hermes) the Thrice Greatest labels it a visible god, and Sophocles' Electra, the all-seeing. Thus indeed, as though seated on a royal throne, the sun governs the family of planets revolving around it. Moreover, the earth is not deprived of the moon's attendance. On the contrary, as Aristotle says in a work on animals, the moon has the closest kinship with the earth. Meanwhile the earth has intercourse with the sun, and is impregnated for its yearly parturition.

If this is a problem, then what about the Apostle Paul quoting pagan poets with approval: Aratus (Acts 17:28), Menander (1 Corinthians 15:33), and Epimenides (Titus 1:12)? Copernicus also cited Scripture with approval:

For would not the godly Psalmist (92:4) in vain declare that he was made glad through the work of the Lord and rejoiced in the works of His hands, were we not drawn to the contemplation of the highest good by this means, as though by a chariot?

Then consider the passage alleged to show Hermetic heliocentrism: it is hardly science at all, but mystical nonsense. So if any heliocentrist was influenced by Hermeticism, it was surely Giordano Bruno (1548–1600), a New-Agey non-scientist beloved of atheist Neil deGrasse Tyson. Furthermore, that passage talks about a sphere surrounding the earth, and only the other planets surrounding the sun. Thus Hermeticism is also probably even more compatible with the Tychonian geo-heliocentric hybrid beloved of modern geocentrists (see below). They would undoubtedly take umbrage if they were accused of being Hermeticists, so they should practise "do unto others" when it comes to accusing geokineticists. A final point: geokineticism does not fall even if Copernicus was a rabid Hermeticist (this would be the genetic fallacy), and in any case, this objection can't touch Copernicus' medieval predecessors or most other geokineticists. What we have to do is assess the evidence for and against absolute geocentrism and not resort to ad hominem distractions.

Did the Church suppress geokinetic theory?

Others have argued that the "Church" suppressed scientific advance by persecuting those who argued against absolute geocentrism, but history paints a very different picture. The Catholic Church, instead of being opposed to astronomy, spent tremendous amounts of money on it. Why? Because once the "Church" covered a significant portion of the globe, the calculation of the date of Easter became problematic. "The first Sunday after the first full moon after the vernal equinox" (Council of Nicea, AD 325) sounds like a precise formula, but it was entirely probable that different observers, even without making a mistake, could celebrate Easter on different days in different parts of the world.
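Just how delicate that calculation is can be seen from the rule as finally settled for the Gregorian calendar. Here is a sketch using the widely published "anonymous Gregorian algorithm" (Meeus/Jones/Butcher), a much later refinement of what the 4th-century church had to work with:

```python
# Sketch: the "anonymous Gregorian algorithm" for the date of Easter.
def easter(year):
    a = year % 19                        # position in the 19-year lunar cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30   # epact-like term: age of the moon
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1                # month 3 = March, 4 = April

print(easter(2015))                      # (4, 5): Easter fell on 5 April 2015
```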
Add to that the fact that the Julian calendar was causing the calendar year to get farther and farther from the solar year (10 days off by the 1500s) and they had a real problem. To resolve these issues, cathedrals were enlisted as giant pinhole cameras projecting onto meridian lines (meridiane, singular meridiana). Thus the sun's path through the sky could be accurately recorded, as documented by science historian John Heilbron (b. 1934).26 The cathedrals were ideal because they were huge, works of architectural genius, and were old enough for the foundations to have stabilized, so the positions of the meridiane would not shift. They were even more accurate astronomical instruments than the best telescopes of the day; telescopes did not surpass the meridiane until the mid-18th century.

(Pictured: the meridian line in the basilica Santa Maria degli Angeli e dei Martiri, Rome, just a few minutes before noon.)

The result of this work was the adoption of the Gregorian calendar in 1582, which we still use today. The calendar change occurred 50 years before the trial of Galileo and was "based on computations that made use of Copernicus' work", as Kuhn pointed out.27 So already the new astronomy of Copernicus had shown its practical superiority, also showing that the Church permitted this view as a working mathematical hypothesis. After that, the work was refined even further. Interestingly, by 1655 (13 years after Galileo's death) observations made in the Cathedral of Bologna by Giovanni Cassini (1625–1712) answered a great debate of the time, and gave concrete evidence that Kepler's theory was correct and that Ptolemy's was not. He also showed that the distance to the sun changed over time, meaning circular orbits were out of the question, so Kepler was right about elliptical orbits.28

Timeline of events—a fun romp through history

There are many names that enter into the story. Too many, in fact, to do justice to them all. Yet, it is good to put several of the more important names in a proper historical perspective. When the subject comes up, most people immediately think of Galileo and his trials, but he was actually not the first nor the most important figure. Nicolaus Copernicus ("the man who stopped the sun and moved the earth"29) died more than two decades before Galileo was even born, and Galileo's censure did not occur until he was 70 years old.

~200 BC: Aristarchus of Samos estimates the distance to and size of the sun.
~200 BC: Eratosthenes of Cyrene calculates the circumference of the earth with remarkable accuracy.
~150 AD: Claudius Ptolemy writes the Syntaxis (Almagest), which became the main astronomy textbook of the Middle Ages. This established absolute geocentrism as the ruling scientific paradigm for almost 1,500 years.
~500 AD: Boëthius acknowledges the vast size of the universe, compared to which the whole earth was just a point, in his Consolation of Philosophy. This was one of the most widely read books of the Middle Ages.
~700 AD: 'The Venerable' Bede writes that the earth is a globe.
~1230: John Sacrobosco publishes Tractatus de Sphaera (On the Sphere of the World), a textbook that explained what was then known about astronomy. This clearly explained that the earth must be a sphere, and taught that even the smallest star we see is larger than the earth. The Sphere was required reading by students in all Western European universities for the next four centuries, which meant that the leading clergy of the day were taught from it.
~1250: Thomas Aquinas nearly fixes Aristotelian Ptolemaic astronomy in the minds of his contemporaries. In his magnum opus, Summa Theologica, he also reaffirmed that the earth is a globe, as an example of an obvious and objective fact that everyone knew.
~1350: Jean Buridan discovers the law of inertia centuries before Galileo, and proposes a geokinetic idea as a mathematically elegant hypothesis.
~1380: Nicole Oresme invents graphs of motion centuries before Galileo, and addresses most scientific and theological objections to geokineticism.
~1450: Cardinal Nicholas of Cusa proposes that the earth would be moving relative to reference frames of heavenly bodies.
1543: Nicolaus Copernicus, "the man who stopped the sun and moved the earth", publishes On the Revolutions of the Celestial Spheres.
1582: The Gregorian calendar, aided by the Copernican model, is adopted by the Catholic world.
1600: Tycho Brahe makes thousands of astronomical observations that would be used later to further develop Copernicus' theory. Brahe proposed a model that compromised between Ptolemy's and Copernicus' models.
1610: Galileo Galilei makes the first telescope observations of moons orbiting other planets and phases of Venus, and becomes the most controversial proponent of Copernican heliocentrism.
1619: Johannes Kepler proposes his eponymous Three Laws of Planetary Motion.
1639: Jeremiah Horrocks makes the first observation of a transit of Venus.
1651: Giovanni Battista Riccioli publishes his Almagestum Novum, which defended the Tychonian system, mostly on scientific grounds.
1655: Giovanni Cassini proves the distance to the sun changes over the seasons, consistent with Kepler's First Law (planets move in elliptical orbits around the sun).
1687: Isaac Newton's Universal Law of Gravity, three Laws of Motion, and calculus explain Kepler's model.
1716: Edmund Halley suggests that we could use the transit of Venus across the sun to determine the AU; he also noted the moon was slowing down.
1729: James Bradley documents the aberration of starlight and calculates the speed of light.
1759: Alexis-Claude Clairaut calculates the return of Halley's comet.
1769: James Cook successfully records the transit of Venus from Tahiti.
1772: Joseph-Louis Lagrange describes the remaining two Lagrange points, the first three having been predicted by Euler.
1781: Sir Frederick William Herschel discovers Uranus, the first new planet known since classical times.
1838: Friedrich Bessel makes the first stellar parallax measurement, of 61 Cygni.
1846: Urbain Le Verrier predicts an undiscovered planet based on disturbances in the orbit of Uranus.
1846: Johann Gottfried Galle discovers Neptune in the place predicted by Le Verrier, another triumph for Newton's theories.
1859: Urbain Le Verrier shows that Mercury's orbit is slightly off from Newtonian predictions (perihelion precession).
1873: James Clerk Maxwell's equations of electrodynamics.
~1900: Hendrik Lorentz's Lorentz transformations.
1905: Jules Henri Poincaré re-works the Lorentz transformation and paves the way for Einstein.
1905: Albert Einstein's Special Theory of Relativity.
1915: Albert Einstein's General Theory of Relativity solves Le Verrier's problem about Mercury.
21 July 1969: Neil Armstrong takes the first human footsteps on the moon.
25 August 2012: Voyager 1 crosses the heliopause at 121 AU (18 billion km) from the sun, thus becoming the first man-made object to leave the heliosphere and enter interstellar space.
14 July 2015: New Horizons becomes the first spacecraft to fly by Pluto.
New Horizons was launched on 19 January 2006, when Pluto was considered to be the outermost planet, but later that year (13 September) the International Astronomical Union (IAU) demoted planet Pluto to dwarf planet 134340 Pluto.

The quest to solve this mystery was pushed by people with a Christian worldview who more or less believed the Bible. They saw no conflict between science and faith. Even the great astronomer Johannes Kepler said of his work that it was "like thinking God's thoughts after him". But there was resistance to geokinetic ideas. This was mainly led by other scientists, not the "Church". The views of Copernicus were known to the Pope and many bishops of the day, and they supported him. This is not to say that his views were not controversial, but that neither the Protestant nor Catholic churches summarily rejected geokineticism. Later, Galileo was encouraged in his work by Pope Urban VIII, at first a close friend, but they later became bitter enemies after Galileo insulted him by putting the Pope's words in the mouth of Simplicio (the fool) in a book that argued against geocentrism.31 Only a few decades after the deaths of Galileo and Urban VIII, Jesuit astronomers were teaching geokineticism to astronomers in China. Giorgio de Santillana (1902–1974), philosopher and historian of science at MIT, wrote:

It has been known for a long time that a major part of the church's intellectuals were on the side of Galileo, while the clearest opposition to him came from secular ideas.32

Considering that the argument had been going on for centuries, it should not be surprising that there was a controversy among the scholars. Some of this came through the Protestant/Catholic divide, some of it came through the hard-headedness of various people, and much of it has been manufactured by 19th-century anti-Christian polemicists.33,34

Nicolaus Copernicus

Perhaps the most important name in our brief tour of history is Nicolaus Copernicus. Copernicus was not only an astronomer but also a linguist, classical scholar, physician, doctor in canon law of the church, and an insightful economist.35 Although his geokinetic ideas were in place decades before his death, and even though he shared his views with many other people, he delayed publication of his opus De revolutionibus orbium coelestium (On the Revolutions of the Celestial Spheres) until right before his death in 1543. This major event in the history of science triggered what we now call the "Copernican Revolution". He took the same observational data that others were using, but added a much simpler explanatory model—that the planets, now including Earth, orbited the sun.

"Occam's Razor" (named after William of Ockham, AD 1287–1347) is a well-established principle in science. It states that when two theories are in conflict, the one with the fewest assumptions is more likely to be correct. Copernicus' model was much simpler than the Ptolemaic system. In the same way, the modern geokinetic system is much simpler than the modern absolute geocentric system. In fact, modern versions of absolute geocentrism are far more complex than even the Ptolemaic system, because they have to deal with many more phenomena than Ptolemy was aware of. Therefore, Occam's Razor 'cuts' them deeply.

However, there was room for improvement. For example, he still claimed the planets orbited in perfect circles, and clung to the Ptolemaic idea that the stars orbited in a crystalline sphere far above. Thus he also needed some epicycles to make his theory fit observations.
Yet, his logic, math, and observational evidence started a fuse burning. In fact, he played a great role in re-starting the Scientific Revolution during the Renaissance, after the Medieval scientific revolution had been stalled by the Black Death.36 There were still obstacles to overcome, however, and the fact that there was no observed parallax among the stars was used as strong evidence against his theory by many detractors.

What is Parallax?

Put your finger on your nose. Now alternately open and shut one eye at a time. Your finger should move to the left and right as you look at it from each eye. This is called parallax. Because you are looking at something from two different angles, its position seems to shift relative to the background. Now hold your arm out straight and point your finger. Again, alternately open and close each eye. Your finger should still move back and forth, but less than before. Why? Because the difference in the angle between your finger and each eye is much less.

Parallax is very useful in astronomy. The earth's orbit is 150 million kilometers in radius. Thus, when we look at a star in the summer and in the winter, that is like having two eyes that are very, very far apart. If the star is close, its position will change through the seasons. However, most stars do not measurably change position because they are too far away for us to measure the change in angle. The few that do are closer to us than the ones that do not. Therefore, we can infer the distance to nearby stars, we can infer that different stars are at different distances, and we can infer that some things are very far away. All of this is consistent with geokinetics. In fact, all of this answers one of the gravest objections to geokinetics: the perceived lack of parallax in the early years.

Parallax was also used to determine how far away the earth is from the sun. This distance is called the 'astronomical unit', or AU for short, and for a long time we did not know its value. Edmund Halley (1656–1742) suggested that we could use the transit of Venus across the sun to determine the AU by getting multiple people to view it and by accounting for the difference in parallax between their locations. This was difficult on many fronts. First, transits occur in pairs separated by several years, but the pairs are separated by 121.5 or 105.5 years! Second, you needed to know exactly where on the earth each observer was, and we had not yet perfected the measurement of longitude, so we could only use a parallax based on latitude. Third, you needed to accurately time the event, and clocks were only just beginning to be built with enough precision to do this. Fourth, even with a very precise clock, the time difference (seconds) between the start of the transit from one place to the next would be negligible. However, he reasoned that if multiple people measured the total time from start to finish (hours), the accuracy would be high enough to get a good figure. He was correct.

The first recorded transit of Venus across the sun was made in 1639 by a young minister named Jeremiah Horrocks (1618–1641), who projected an image of the sun through a telescope onto a piece of paper. [Warning: do NOT try this without proper eye protection, and children should never do this without adult supervision. Concentrating sunlight in this way can permanently damage your eyes.] Horrocks was able to estimate the size of Venus as well as the AU: 59.4 million miles.
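The arithmetic behind the parallax method is simple trigonometry. A minimal sketch, using the rough 61 Cygni figure quoted below:

```python
import math

# Sketch: distance from annual parallax, d = baseline / tan(angle).
# The baseline is the earth-sun distance (1 AU).
AU_KM = 1.496e8                  # km
LY_KM = 9.4607e12                # km per light year

parallax_deg = 0.00009           # roughly Bessel's value for 61 Cygni
d_km = AU_KM / math.tan(math.radians(parallax_deg))
print(d_km / LY_KM, "light years")   # ~10 light years
```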
This was about ⅔ the true value, but it was the most accurate measurement to date. Further transit observations in 1761/1769 and 1874/1882 involved some of the greatest international scientific collaborations ever. In fact, the great navigator Captain James Cook (1728–1779), the first person recorded to circumnavigate New Zealand, was sent to Tahiti with the express purpose of recording the 1769 transit (this was successful).

But parallax can also be used to measure the distance to stars. Friedrich Bessel (1784–1846) made the first stellar parallax measurement, of the star 61 Cygni, in 1838! He concluded the star was 10.3 light years distant (less than 10% off), although the parallax amounted to less than 0.00009 degrees. By the close of the 19th century, we had a very good idea of the AU, the size of the solar system, and that parallax was very useful for measuring great distances, and all of that helped to solidify geokinetic theory.37 The recently launched European Space Agency (ESA) Gaia unmanned observatory will be able to measure parallax out to tens of thousands of light years (about 1% of the diameter of the Milky Way).
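The arithmetic behind such distance claims is short. Here is a minimal sketch in Python (the conversion rule is the standard parsec definition from footnote 37; the code itself is ours, not the article’s):

```python
# A minimal sketch of the parallax distance rule d [parsecs] = 1 / p [arcseconds]
# (see footnote 37), applied to Bessel's 61 Cygni measurement quoted above.
p_arcsec = 0.00009 * 3600   # the parallax angle above, converted to arcseconds
d_parsec = 1 / p_arcsec
print(d_parsec * 3.26)      # ~10 light years, just as Bessel concluded
```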
Since stars are obviously at different distances, there is no single ‘crystal sphere’. Are there different spheres? One for each star, perhaps? Maybe the stars are embedded in a universal solid? Perhaps a series of high-tension wires? Or, maybe the universe is geokinetic after all!

Galileo Galilei

Venus transiting the sun, 2012. (Image: NASA)

Galileo Galilei (1564–1642) was the first person to point a telescope at celestial bodies—and contrary to popular myth, there is no record of anyone ‘refusing’ to look through Galileo’s telescope.38 He was the first to see the moons of Jupiter (and correctly interpreted them as such) and the rings of Saturn. He was the first to see sunspots, which refuted the Aristotelian and Medieval idea of perfect heavenly bodies. He noticed that Venus grew considerably larger and smaller over time, and, with his telescope, observed that Venus went through phases like the moon. The Ptolemaic theory had Venus orbiting the earth quite close to the sun, because we only ever observe it near the sun. However, under that scenario, the apparent size would not change by a factor of almost 7.39 The change in size is explained by the fact that Venus orbits the sun at an average distance of 108 million km, while Earth orbits at 150 million km, so its closest approach to Earth is about 42 million km (150–108) and furthest about 258 million km (150+108).40 The phases are also inexplicable under the Ptolemaic model: with Venus always between the earth and the sun, we could never observe full and gibbous phases. But Venus orbiting the sun explains the huge difference in apparent size, the phases, and why the crescent phase is by far the brightest, since at that time Venus is closest to Earth.41

Galileo was the first to suggest a (correct, and workable when on land) way to solve the “Longitude Problem”, by compiling a table of Jovian moon cycles for a reference location (including times and angles), then making observations of these events (same times, different angles) at a location whose longitude is unknown. There is a lot more to this man than most people realize! Much has been written on his trial by the Catholic Church, and urban myths on the subject abound. Let us just say that the Church did not actively suppress geokinetic theory so much as Galileo insulted the Pope in such a way as to permanently break their friendship, at which point his opponents gleefully used the occasion to bring him to trial for heresy. However, Heilbron points out:

Galileo’s heresy, according to the standard distinction used by the Holy Office, was “inquisitorial” rather than “theological”. This distinction allowed it to proceed against people for disobeying orders or creating scandals, although neither offence violated an article defined and promulgated by a pope or general council. … Since, however, the church had never declared that the Biblical passages implying a moving sun had to be interpreted in favour of a Ptolemaic universe as an article of faith, optimistic commentators … could understand “formally heretical” to mean “provisionally not accepted”.42

A map of known stars within 14 light years of earth, based on parallax measurements. Importantly, they are not just moving with respect to earth; they are moving with respect to each other, meaning the constellations will change shape over time in a predictable manner. Absolute geocentrism can’t explain this in any practical way.

As shown above, it was really science vs. science, but Galileo also did not have all the science on his side. His favourite ‘proof’ for geokineticism was the tides, now known to be fallacious. Bede had proposed the right explanation centuries earlier: the moon was the main cause of tides. So the usual atheopathic historiography of science vs. ignorant religious geocentrism is based on historical ignorance and anachronism: many of the geocentrists were following what they thought was the best scientific evidence they had at that time. The detractors err by reading back modern science into people who could not have had this knowledge. We should not make the opposite error by ignoring modern science and adopting absolute geocentrism.

Tycho Brahe

Tycho Brahe (1546–1601) was yet another man of intelligence and diligence who left his mark on history. Without the aid of a telescope, he compiled careful astronomical observations over multiple decades, with a precision equivalent to the width of a US quarter seen at a distance of 100 meters. After the supernova of 1572, Brahe argued that the celestial sphere was not immutable, as Aristotle taught. He then argued that the Great Comet of 1577 traveled through the supposed crystal spheres (meaning they must not exist). In the end, he proposed a hybrid of the Ptolemaic and Copernican systems: the sun, the moon,43 and the stars circled the fixed earth, while the other planets circled the sun. This was supported by the fact that Copernicus’ model did not fit the newest available data (but this was because of Copernicus’ perfect-circle assumption). Brahe thought that his system combined the mathematical elegance of the Copernican model with what he took to be the science of the Ptolemaic, in proving that the earth could not move. This Tychonian geo-heliocentric model was compatible with Galileo’s observations of the phases of Venus and the moons of Jupiter.

Brahe also made a good point: if the earth moved around the sun, then we should see parallax with the stars. Copernicus had already answered, rightly as it turns out, that the stars were even more distant than the vast distances already imagined. However, not so fast!
At the time, stars were thought to have a definite apparent size, and stars like Vega were perceived to be larger than Polaris, for example. So Brahe calculated that if the stars were as far away as Copernicus required, they must also be unimaginably huge, dwarfing the sun. These arguments would soon be answered by the geokineticists, though some of the answers were weak. One Copernican, Christoph Rothmann, answered Brahe’s point about the huge stars, essentially saying, “Who cares how big the stars are?” because size means nothing to an infinite God. This turns the usual atheopathic science vs. religion canard on its head: here the geocentrist was appealing to science—including looking through a telescope just as Galileo asked—while a Copernican resorted to a sort of ‘God of the Gaps’44 response.

The ‘giant stars’ argument was a major and unanswered argument in Book 9 of the encyclopedic Almagestum Novum (New Almagest) (1651) by the astronomer and Jesuit priest Giovanni Battista Riccioli (1598–1671), who was also the first person to measure precisely the gravitational acceleration of falling bodies. He discussed 126 arguments of variable quality about Earth’s motion—49 for and 77 against. Most of these arguments were scientific, and Riccioli thought that the weight of the science favoured a fixed earth. So he defended the Tychonian geo-heliocentric model as the one that best fit the science of his day.45

However, it was not known—either by Brahe and Riccioli, or by their heliocentric opponents—that the apparent sizes of stars are an optical illusion: almost all stars are really point light sources when viewed from the earth, and the ‘size’ is a refraction or scattering effect. Even with a telescope, diffraction causes the ‘Airy Disc’, as realized by the 19th-century scientist Sir George Biddell Airy (1801–1892).46 Both sides of the debate thought that the Airy Disc was the star itself. In reality, Vega, which Brahe thought was huge, is only 2.36 times as big as the sun, but it’s quite close. Polaris, which Brahe thought was a lesser star, is 43 times the sun’s size. The first direct image of a star outside our solar system, in the sense of the stellar disk as opposed to a point of light, was a Hubble Space Telescope picture of Betelgeuse, taken in 1996.47 But Betelgeuse really is a huge star, bigger even than the diameter of Jupiter’s orbit, and relatively close (about 643 light years), so it was possible to image its size. However, Airy disc diffraction was not understood until the 19th century, so we can conclude that Brahe was acting as a real scientist, using the best data available at the time (as was Riccioli after him). And although Brahe took Scripture seriously, he based his modified geocentric model on what he thought was the best scientific evidence.48

It is not surprising that for a few years Brahe’s geo-heliocentric theory, and many competing but similar theories, were more popular. Several of those competing theories involved a rotating earth in a geocentric universe, but every one of these, like their Greek progenitors, had a short shelf life. Once the earth spins, the pseudo-biblical arguments (e.g., “The Bible speaks of ‘sunrise’ so the universe must be geocentric”) go up in smoke. And once the earth spins, all the supposed observational evidence for geocentrism suddenly disappears.

Johannes Kepler

Kepler’s Platonic solid model of the Solar system (1596).

Johannes Kepler (1571–1630) worked for Brahe and inherited his data upon Brahe’s death.
Unlike Brahe, however, he was an early convert to Copernicus’ heliocentrism, believing that its mathematical elegance reflected the glory of the Trinitarian God of the Bible, and tried to refine it further. His first attempt was ingenious, even though he eventually abandoned it: he proposed that the orbits of the six known planets had the radii of imaginary spheres that circumscribed one of the five Platonic solids (octahedron, icosahedron, dodecahedron, tetrahedron, and cube), with each solid nested in the next, its vertices touching the sphere inscribed in the next larger solid, and with all centered on the sun.49

Kepler found that the predictions differed from Brahe’s observations of Mars’ orbit by a mere 8 arcminutes. Since 1 arcminute = 1/60 of a degree, there was thus a tiny difference between observation and theory. The moon’s angular diameter as seen from earth is between 29.3 and 34.1 arcminutes, and Ptolemy’s and Copernicus’ earlier work was precise only to 10 arcminutes. However, Kepler held Brahe’s observational skill in such high esteem (his measurements were precise down to 2 arcminutes) that this tiny difference was enough to abandon the theory:

If I had believed that we could ignore these eight minutes [of arc], I would have patched up my hypothesis accordingly. But, since it was not permissible to ignore, those eight minutes pointed the road to a complete reformation in astronomy.

He then developed what are now called Kepler’s Three Laws of Planetary Motion:
1. All planets orbit the sun in elliptical orbits with the sun at one of the two foci
2. Planets sweep out equal areas in equal times
3. The square of orbital period is proportional to the cube of the semi-major axis of the orbital ellipse

His ideas were not universally accepted (e.g., Galileo and Descartes both rejected them), but his book Epitome of Copernican Astronomy would become the most-read astronomy text of the era.50 Still, what was lacking was a physical reason for the way things were. At that point in history, astronomy was allied with astrology and mathematics and was thus deeply steeped in philosophy. Physics was treated as an entirely separate subject, and Kepler received criticism for even his minor attempts to bridge the two realms.

Table 1: Planetary orbital data.
Planet | Mass (10^24 kg) | Diameter (km) | Gravity (m/s²) | Distance from Sun (10^6 km) | Orbital Period (days) | Orbital Velocity (km/s) | Orbital Eccentricity | Perihelion (10^6 km) | Aphelion (10^6 km)
Mercury | 0.33 | 4,879 | 3.7 | 57.9 | 88 | 47.4 | 0.205 | 46 | 69.8
Venus | 4.87 | 12,104 | 8.9 | 108.2 | 224.7 | 35 | 0.007 | 107.5 | 108.9
Earth | 5.97 | 12,756 | 9.8 | 149.6 | 365.2 | 29.8 | 0.017 | 147.1 | 152.1
Mars | 0.64 | 6,792 | 3.7 | 227.9 | 687 | 24.1 | 0.094 | 206.6 | 249.2
Jupiter | 1,898 | 142,984 | 23.1 | 778.6 | 4,331 | 13.1 | 0.049 | 740.5 | 816.6
Saturn | 568 | 120,536 | 9.0 | 1,433.5 | 10,747 | 9.7 | 0.057 | 1,352.6 | 1,514.5
Uranus | 86.8 | 51,118 | 8.7 | 2,872.5 | 30,589 | 6.8 | 0.046 | 2,741.3 | 3,003.6
Neptune | 102 | 49,528 | 11.0 | 4,495.1 | 59,800 | 5.4 | 0.011 | 4,444.5 | 4,545.7
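Kepler’s third law can be checked directly against Table 1. Below is a minimal sketch in Python (ours, not the article’s; the dictionary simply re-keys two columns of the table): if T² is proportional to a³, then T²/a³ should come out the same for every planet.

```python
# A minimal check of Kepler's third law against Table 1:
# if T^2 is proportional to a^3, then T^2/a^3 is the same for every planet.

# (distance from sun in 10^6 km, orbital period in days), taken from Table 1
planets = {
    "Mercury": (57.9, 88.0),
    "Venus": (108.2, 224.7),
    "Earth": (149.6, 365.2),
    "Mars": (227.9, 687.0),
    "Jupiter": (778.6, 4331.0),
    "Saturn": (1433.5, 10747.0),
    "Uranus": (2872.5, 30589.0),
    "Neptune": (4495.1, 59800.0),
}

for name, (a, T) in planets.items():
    print(f"{name:8s} T^2/a^3 = {T**2 / a**3:.5f}")

# All eight ratios land between about 0.0392 and 0.0399 (days^2 per (10^6 km)^3),
# i.e. constant to within ~2%; the small spread reflects the rounded table values
# and the use of mean distance in place of the true semi-major axis.
```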
Isaac Newton

Isaac Newton (1642–1727) was simply one of the greatest scientists who ever lived. Among his many accomplishments, he developed the universal theory of gravity (1687), which stated that all objects in the universe attract all other objects, and that the attractive force is related to the mass of the two objects and their distance apart.51 He also gave us the Three Laws of Motion:
1. An object at rest stays at rest, and an object in motion continues in a straight line, unless acted upon by an outside force
2. Force, mass, and acceleration are related by the formula F = ma
3. There is always an equal and opposite reaction to any force applied to an object

Think about it: Galileo first saw that smaller moons orbit larger planets. Newton then gave the reason for this. Apply this thought to the solar system. We know the size of the sun, and the sun has much more mass than the earth. If moons orbit their more massive planets, then the earth (and other planets) must orbit the much more massive sun. At larger scales, we see globular clusters where stars are orbiting their common center of mass, as best as super-computer modeling can tell. However, this is where ‘dark matter’ is often added to the mix, so obviously there is more to learn (e.g., maybe we should adopt the new physics of Carmelian Relativity).52

Importantly, Kepler’s three laws of planetary motion can be directly derived from Newton’s work (in fact, Newton did this). When one uses Newton’s law of gravitation and his 1st and 2nd laws in a heliocentric system, it turns out that one of the foci of the Keplerian ellipses is the barycenter (the center of mass of the system).

But Newton went farther than that, and this is part of his brilliance. Newton very carefully explained that evidence for his theory should only be applied to limited sets of data. In building up explanations for various phenomena, results could be pooled into larger and larger explanatory models, but also any deviations from what is expected should be attributed to specific causes. This is one of the most important developments in the history of experimental science, for it led to more and more observational measurements and more and more refinement of his models. The simple idea that every particle in the universe is attracting every other particle can now explain, to an amazing degree of accuracy, the observational evidence, and that evidence stands in sharp contradiction to absolute geocentrism. There are many chaotic effects even in the solar system (perturbations from objects not yet cataloged), and as a result the math is not as precise as perfect clockwork, as one might expect. But these are small effects.

An amazing number of phenomena can be explained. For example, Edmund Halley (1656–1742) announced that the moon’s motion appeared to be changing over time (based on ancient eclipse data). Multiple theories were put forth by very smart scientists (e.g., Euler and Laplace), but Newtonian mechanics eventually won the argument (in the mid 1800s it was concluded that tidal friction was the cause, slowing the earth’s rotation and driving the moon’s resultant recession53). More and smaller discrepancies were noted over time, and in the early 1900s the Hill–Brown theory was developed. On purely Newtonian grounds, it accounted for the many small variations in the moon’s motion relative to the earth, and most of this was explained by accounting for irregularities in the structure of the earth itself—in other words, further refinements of the Law of Gravity.

Newtonian theory works to an incredibly high degree of precision here on earth, explains satellites, works on the moon, and basically works everywhere we have tried it. If absolute geocentrism were true, none of these things should necessarily be true. Nor could we have derived such simple laws, resulting in many true predictions, from a geocentric universe. Therefore, why would we seek an alternative explanation to geokineticism?
For those who think the Bible demands absolute geocentrism, it is notable that Newton wrote even more about theology than science, and thought his greatest work was an exposition of the prophecy of Daniel.54 He saw the solar system as evidence of the biblical God, and he was scathing of the atheism that dominates so much of academia today. Also, despite accusations to the contrary, Newton was a confirmed Trinitarian, though he was not able to assent to all of Anglican doctrine.57

Before the turn of the 20th century, several problems had become evident in Newtonian mechanics. Urbain Jean Joseph Le Verrier (1811–1877) first noted that Mercury’s orbit had deviated from Newtonian predictions by a tiny ~40 arcseconds per century. Albert Einstein (1879–1955) answered this by concluding that Newtonian mechanics are valid approximations at low gravity, but at more extreme levels (e.g., the orbit of Mercury), gravity distorts space and time. In his 1916 paper on general relativity, he suggested three tests of his theory: 1) the anomalous precession of Mercury could thus be explained, 2) light should be deflected by the sun, and 3) light should be gravitationally redshifted. All three, and much more, have been confirmed. Einstein’s theories also argue against an absolute-geocentric universe. This is a one-two punch: geocentrism must address the experimental verification of both Newton and Einstein. Einstein famously said, “A thousand experiments cannot prove me right. A single experiment can prove me wrong” (rough translation). And his theories have been tested, and passed those tests, thousands of times. Thus, it is absolute geocentrism that lacks experimental validation and suffers from experimental contradictions, and its supporters are forced to resort to more and more exotic ideas in order to explain away the many contradictions. But Einstein’s theories are based on those of James Clerk Maxwell (1831–1879) and his famous equations of electrodynamics, Hendrik Lorentz (1853–1928) and his equally famous Lorentz Transformations, and Jules Henri Poincaré (1854–1912), whose re-working of the Lorentz Transformation paved the way for Einstein. Thus, geocentrism runs into even more problems with experimental science. The universe is unintelligible under a system of absolute geocentrism, and almost everything we think we know about the most profound astronomical discoveries of all time must be wrong.

Frames of Reference

We have often pointed out that when discussing astronomy the Bible is simply making a valid choice of reference frame. Someone sitting in a train does not seem to be moving with respect to the train but seems to be moving quickly compared to the world outside. Likewise, someone standing on the ground outside the train sees the person zipping by at the same speed as the train. The difference is that the two people have different frames of reference. Thus, for someone here on earth, the sun, moon, planets, and stars appear to be circling us, so why should the Bible not use earth as a frame of reference? And we always talk about a ‘stopped’ car, meaning stopped relative to the ground. Speed limits and stop signs are likewise set relative to the ground, and the GPS in many of our cars uses a car-centered reference frame!
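Even ‘stopped’ is frame-dependent in a quantifiable way. Here is a minimal sketch in Python (the constants and the spherical-earth simplification are ours, not the article’s) of how fast a parked car moves in a non-rotating, earth-centred frame; the next paragraph puts these numbers in context.

```python
import math

# A minimal sketch: rotational speed of a point "stopped" on the earth's surface,
# as seen in a non-rotating, earth-centred frame. Assumes a spherical earth.
EQUATORIAL_RADIUS_KM = 6378.0  # equatorial radius
SIDEREAL_DAY_H = 23.9345       # one rotation of the earth relative to the stars

def ground_speed_kmh(latitude_deg):
    """Speed of the earth's surface at a given latitude, in km/h."""
    equator_speed = 2 * math.pi * EQUATORIAL_RADIUS_KM / SIDEREAL_DAY_H
    return equator_speed * math.cos(math.radians(latitude_deg))

print(round(ground_speed_kmh(0)))   # ~1674 km/h at the equator
print(round(ground_speed_kmh(45)))  # ~1184 km/h at 45 degrees latitude
```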
Only a pedant would point out that in the geokinetic system a car stopped at the equator travels at about 1,670 km/h (about 1,000 mph) relative to the centre of mass of the earth due to the earth’s rotation on its axis,58 is orbiting at about 108,000 km/h around the sun, and is traveling at about 800,000 km/h around the centre of the galaxy. Sir Fred Hoyle (1915–2001), no friend to Christianity, affirmed:

The relation of the two pictures [geocentricity and geokineticism] is reduced to a mere coordinate transformation and it is the main tenet of the Einstein theory that any two ways of looking at the world which are related to each other by a coordinate transformation are entirely equivalent from a physical point of view. Today we cannot say that the Copernican theory is ‘right’ and the Ptolemaic theory ‘wrong’ in any meaningful physical sense.59

Note that Hoyle is speaking about a coordinate transform between any two reference frames in a geokinetic universe, not a dynamical explanation of the physics involved in how things move in a geokinetic vs. absolute geocentric model. Once you have a set of equations of motion for a geocentric and a heliocentric model, you can switch between them. Fine. But once you bring in physics (gravity), one model works with the physics and the other is just a set of equations of motion with no relation to physics.

Also, one of Stephen Hawking’s collaborators, the South African cosmologist and theistic evolutionist George Ellis, made a related point in a published interview.60 Ellis was speaking about the big bang vs. other cosmological models, not geocentrism vs. geokinetics. The point here is that philosophy often intrudes itself into arguments about how the universe works. Ellis could have said the same about a literally-earth-centred frame. Geocentrists often quote gleefully about supposed geocentric evidence in cosmology, but this is on a galactic scale – a scale far too large to differentiate heliocentrism from geocentrism.

If choice of reference frame were the only issue, we would not have a problem with a geocentric reference frame in ordinary usage. However, this is not what modern geocentrists claim. Rather, they insist that the earth is the only valid reference frame, often combined with abusive ad hominem attacks on the faith of the Christian geokinetic pioneers. Hoyle, Einstein, and Ellis (as well as Cardinal Nicholas of Cusa back in the 15th century) all said we could switch from one to the other just by transforming coordinates. But why would we want to, for any sort of study of motions in the solar system, galaxy, or cosmos? It is true that you can easily switch between Copernican, Tychonic, and Ptolemaic systems, because they all relied on circular orbits. You could build a more complex geocentric model with elliptical orbits, but you are still going to fall short, because in order to make a comprehensive geocentric model, you would need dozens if not hundreds of ad hoc parameters added almost willy-nilly to explain the many small perturbations that Newton’s model explains with the simple Law of Gravity. Geocentrism does not really have a “model” in a mathematical sense. Thus, the mathematics for converting from a geokinetic to a geocentric universe are almost unbelievably cumbersome. Many modern ‘geocentrists’ make another ad hoc adjustment that should doom their theory by definition: placing the earth off-center—a tacit admission that Kepler was right all along that the sun was at the focus, not the centre, of elliptical orbits.
Thus, they have a neo-Tychonic system in which the moon and sun orbit the earth but the planets orbit the sun, and all with elliptical orbits. This bait-and-switch is hardly solved by their preferred neologism ‘geocentricity’.61 It was the accumulation of these ad hoc parameters in geocentric models that made scientists seek a better explanation in the first place. The transform only works in practice at the most basic of levels.

The geokinetic argument starts with a very simple Newtonian law: all objects in the universe attract each other according to the inverse square law. Everything else follows naturally from there. Why does everything in the solar system not collapse into the sun? Orbital angular momentum balances the attractive force of gravity. This works until you get to the size of galaxies and galactic clusters, but this is an ongoing field of study among evolutionists and creationists, with multiple competing models. Also, because of Newton’s second law, acceleration = force/mass, the more massive objects accelerate less with the same applied force. So when dealing with objects of vastly different masses, it makes more sense as an approximation to treat the most massive object as an unmoving centre. In reality, everything in the solar system revolves around the centre of mass (the barycenter). For the earth-sun system, this is 450 km from the centre of the sun (0.065% of the radius of the sun),62 so treating the sun as the center is a very good approximation.63

The location of the barycenter of the solar system changes over time based on the position of the planets.

In many ways, the geocentrism debate is akin to the “Did men really land on the moon?” conspiracy. Why could they not have done so, when everything we know about experimental science (from the force of gravity, the properties of accelerating objects, the workings of jet engines, geometry, trigonometry, calculus, etc., etc.) argues that it is entirely possible? In fact, a motivated high school student could work out many of the necessary calculations. In the same way, the simplicity, elegance, and far-reaching predictive value of geokinetics puts a huge burden of proof on the geocentrist.

In science, there are many useful reference frames. For example, electrical engineers often find it most convenient to use a ‘bug on the rotor’ as the reference frame when studying induction motors, to understand the way the rotating magnetic field ‘slips’. But if you average out the motions of all the stars in our local cluster, we are moving about 70,000 km/h in the direction of Lyra (geokinetic), and in reference to the galactic center we are moving about 800,000 km/h. To say that all frames of reference are valid, as some do, is the central point of Relativity. However, to then say that a geocentric frame is the only true or valid frame is to break the very rule they try to invoke. And what is to prevent someone from claiming the center of the universe is at the tip of their nose (‘idiocentrism’), since that is 100% in agreement with every personal observation any person has ever made?

Supporting Evidence (or, why the earth cannot be at the absolute center)

The rate of acceleration of objects in the universe

According to Newton’s first law, an object in motion will tend to go in a straight line. Thus, in order to orbit something, an object must turn. In other words, it must accelerate—to a physicist, this means any change of speed or direction.
Newton’s second law states that the force required is proportional to the mass and the acceleration (F = ma). If the entire universe is rotating (accelerating) around the earth, how much force would be required to keep things from flying apart? And the farther away the object, the greater the orbital radius, and the more acceleration is required. Remember, there is overwhelming evidence against solid spheres holding the stars and planets in place, and since we can measure the distance to many stars using parallax, there is no single “sphere” upon which they are stuck. Based on Newton’s laws, we can estimate the mass of many stellar objects and guess at the mass of many more. The force required to hold them in circular orbits around the earth at faster-than-light speeds (see below) would be astronomically huge.64

The speed of objects in the universe

If objects are rotating around the earth, we can calculate the speed at which they are moving, and the speed depends on their distance: they must travel the circumference of their orbit every day. In big bang theory at least, there is nothing preventing stars from moving faster than the speed of light. This is called ‘superluminal speed’, and big bang cosmologists assume that anything outside one Hubble radius (about 14 billion light years) is receding from us at greater than c. But in a geocentric universe any object beyond the orbit of Neptune would be moving faster than c, because it would take more than one day to travel a circle of that circumference at the speed of light.

If geocentrism is true, there should be a ‘spatial Coriolis’ effect seen in the Pioneer probes and other objects we have sent into the heavens. Here on earth, the Coriolis force is seen when objects traverse an inertial reference frame other than the one in which they started; the same should happen in space, because objects leaving earth start in an inertial reference frame radically different from the one to which they are travelling. If we aimed them at a planet, they should miss—by millions of miles! Note that this argument is exactly the same as the one Copernicus quoted from Ptolemy above, only here instead of a curving falling object we have a curving rising object.65 In order to get to a planet, the ship would have to accelerate to unbelievable speeds. Where does this extra propulsive force come from? And if that acceleration did not happen, if one of our ships happened to run into one of the distant planets, it would smack into it at such a high velocity as to completely obliterate the planet. This underscores the hopelessness of deriving any dynamical model for geocentrism once we leave the vicinity of the earth.

Here is another example of the speed problem: the moon orbits the earth at about 1 km/s, with an average distance from the center of the earth of 385,000 km (this is based on simple trigonometry). In a geocentric universe, instead of orbiting every 27.32 days, it orbits daily, meaning it must move about 27 km/s. This is much faster than the Apollo spacecraft sent to the Moon in the late 1960s and early 1970s. In fact, it is faster than the 11.2 km/s required to reach escape velocity. The Moon should sail away into space, but it does not, because it is not orbiting at that speed and is held nicely in place by the force of gravity.
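Both speed claims in this section take only a few lines to check. A minimal sketch (in Python; the constants are ours, not the article’s): first the radius at which a once-a-day circuit reaches the speed of light, then the daily-orbit speed of the moon.

```python
import math

C_KM_S = 299_792.458  # speed of light
DAY_S = 86_400        # one day, in seconds

# Radius at which a once-a-day circular path reaches the speed of light:
r_c_km = C_KM_S * DAY_S / (2 * math.pi)
print(f"{r_c_km:.2e} km")  # ~4.1e9 km; Neptune orbits at ~4.5e9 km (Table 1)

# Speed of the moon if it circled the earth once a day (the geocentric picture):
MOON_DIST_KM = 385_000
print(2 * math.pi * MOON_DIST_KM / DAY_S)  # ~28 km/s, in line with the rough figure above
```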
And think about what would be required to bring a long-period comet in from the apex of its orbit (aphelion) to a close approach with the sun (perihelion). We can estimate the mass of many different comets (and after the Rosetta/Philae rendezvous described above, we know the mass of one comet to a high degree of precision), and thus we know how much force it would take to account for the acceleration necessary to bring them in closer within a geocentric universe. To go from a speed greater than c to a speed much less than c, and then back again, comets would have to come equipped with warp drive.

Yet another speed problem comes from our own humble satellites orbiting the earth. Under Newtonian physics and a rotating earth, a satellite will appear stationary in the sky if it has a circular orbit over the equator and revolves in the same direction as, and at the same rate as, the earth’s rotation. That is, it has an orbital period of one sidereal day (about 23.9345 hours). Such a satellite is called geostationary, and for this to work, the satellite’s altitude must be 35,786 km (22,236 mi) above sea level. Only at that height will the earth’s gravity provide the right centripetal acceleration to produce the orbit with the right period. (Note: geostationary orbits are a subset of geosynchronous orbits. The latter merely have a period of one day, so they keep up with the earth’s rotation, but if the orbit is elliptical or slanted, the satellite will not appear stationary.) But if the earth doesn’t move, then the satellite must also be unmoving. So much handwaving must be invoked to explain how a rotating universe manages to suspend a satellite motionless in space at just that altitude rather than succumb to the earth’s gravity.

Aberration of starlight

The direction of the earth’s velocity changes continually as it orbits the sun, and therefore the apparent positions of the stars shift slightly over the course of a year. In the same way that rain seems to fall at an angle while driving in a car on a rainy day, the perceived direction to various stars shifts as the earth revolves around the sun. This was first noticed in the 1500s, but it defied explanation and interfered with the search for stellar parallax. Aberration was first explained by James Bradley (1693–1762) in 1729. He also provided a decent approximation of the speed of light (183,000 miles per second, about 98.2% of the true value). Aberration is a direct effect of the earth’s movement about the sun and is perfectly consistent with Newtonian physics. Under geocentrism, however, arbitrary explanations must be invoked to explain it.

Think about it. If the universe revolves around the earth, stars circle the earth 365 times a year. For a star exactly 10 light years away, the star would revolve 3,652.42 times before its light reached earth. In other words, the light beam should trace out a path that looks more like a very tight spiral, with arms 24 light-hours apart (assuming a finite and constant speed of light). This would be easily measurable. And, since we have sent multiple space probes (with cameras) far enough away from earth, this would have been discovered by now. Thus, the stars do not rotate about a stationary earth.
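The size of the aberration effect is simple to estimate: the deflection angle is roughly the ratio of the earth’s orbital speed to the speed of light. A minimal sketch (constants are ours, drawn from Table 1):

```python
import math

# A minimal sketch: the aberration angle is approximately v/c (in radians),
# where v is the earth's orbital speed.
V_ORBIT_KM_S = 29.8      # earth's orbital speed (Table 1)
C_KM_S = 299_792.458     # speed of light

angle_rad = V_ORBIT_KM_S / C_KM_S
print(math.degrees(angle_rad) * 3600)  # ~20.5 arcseconds, the classical constant of aberration
```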
The Discovery of Neptune

In 1781, Sir Frederick William Herschel (1738–1822) discovered the planet Uranus. Upon subsequent observations, its orbit was worked out by Anders Johan Lexell (1740–1784). However, slight disturbances in the measured orbit of Uranus led to the prediction of another, undiscovered planet by Le Verrier in 1846. Neptune was discovered by Johann Gottfried Galle (1812–1910) the same evening Le Verrier’s letter to him predicting the existence of an undiscovered planet arrived at the Berlin Observatory. This was perhaps the greatest achievement of the Newtonian system, and it ranks as one of the greatest achievements of experimental science. The perturbations of Jupiter and Saturn on Uranus are greater than those of Neptune, and it was only by applying Newtonian gravitational theory to the situation (by factoring out the effects of Jupiter and Saturn) that Neptune could be discovered. What is even more amazing is that Uranus, with an orbital period of 84 years, had not even completed one orbit of the sun before it was used to find Neptune! We were able to better estimate the mass of Neptune after the Voyager 2 flyby. This, in turn, answered a riddle that was created by earlier, less exact estimates, and the need for a hypothesized 10th planet to account for certain discrepancies simply vanished. Can you see how Newton’s methodology has led to further and further successful refinements of the geokinetic system?

Absolute geocentrism could never have predicted Uranus and Neptune from orbital mechanics. Remember, both the Ptolemaic and Tychonian models are kinematic: they merely describe how planets are observed to move. Any observed deviations are just tacked on to the model—what’s another epicycle here or there? Only under a dynamic model, with forces causing motions, can a deviation from predictions have any real meaning.

The Return of Halley’s Comet

Alexis-Claude Clairaut (1713–1765) successfully calculated the return of Halley’s comet to perihelion in 1759. In order to do so, he had to account for the gravitational effects of Jupiter and Saturn on the comet, and the effects of Jupiter on the sun. Using the most advanced mathematics of the day (calculus), his detailed calculations took years. In the end, he was off by about 1 month, within his margin of error. This was taken as a triumph of Newtonian gravity theory and helped tremendously to bring mathematics and physics together. Prior to this, many thought math was just pure, applied logic and that the physical world was nothing if not mysterious. Theory and fact were not always expected to mesh together. This changed after 1759.66

Delicate orbital mechanics

There are five places in any sun-planet system, called Lagrange points, where the gravitational pulls of the sun and the planet combine in such a way that an object can orbit at the same rate as the planet even though it is at a different distance from the sun. The first three Lagrange points were discovered by the great mathematician, and staunch Christian, Leonhard Euler (1707–1783). In 1772, his able student and successor Joseph-Louis Lagrange (1736–1813) described the remaining two. These discoveries (and their later confirmation) were squarely based on Newtonian theory.

In a fine example of applied Newtonian physics, ESA’s Gaia space telescope is placed at a Lagrange point (L2, specifically). It was already known that L2 is unstable (small deviations from equilibrium grow exponentially over time), so in order to keep the satellite in place while using the smallest possible amount of fuel to fine-tune the position, it was placed in a looping, Lissajous orbit that also had the effect of keeping it out of earth’s shadow. This elegant dance was made possible by geokinetic theory.
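For the curious, the distance to L2 can be estimated with the standard Hill-radius approximation, r ≈ a·(m/3M)^(1/3). This sketch and its constants are ours, not part of the historical record above:

```python
# A minimal sketch estimating where the sun-earth L2 point sits, using the
# textbook Hill-radius approximation r ~ a * (m / 3M)^(1/3).
M_SUN_KG = 1.989e30   # mass of the sun
M_EARTH_KG = 5.97e24  # mass of the earth (Table 1)
A_KM = 149.6e6        # mean earth-sun distance (Table 1)

r_l2_km = A_KM * (M_EARTH_KG / (3 * M_SUN_KG)) ** (1 / 3)
print(f"L2 is roughly {r_l2_km:,.0f} km beyond the earth")  # ~1.5 million km, where Gaia operates
```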
The equatorial bulge

Newton noticed that Jupiter had an equatorial bulge and reasoned that this was due to the fact that it was rotating, causing a fictitious centrifugal force in Jupiter’s reference frame.67,68 He then reasoned that earth must have a bulge as well and set about to estimate its magnitude. It turns out that sea level at the equator is about 21 km ‘higher’ than at the poles. Other rotating bodies also have an equatorial bulge, including Mars, Saturn, Uranus, Neptune, and the asteroid Ceres. At the equator, there is a reduction in apparent surface gravity of about ½ of 1% compared to the poles. 70% of that is due to the ‘centrifugal force’ counteracting the attractive force of gravity. The remainder is due to the difference in distance from the center of the earth caused by the bulge. The bulge is also enough to make the furthest surface point from the earth’s center the summit of the equatorial volcano Chimborazo, not Everest.
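Those percentages are easy to check. A minimal sketch (in Python; the constants are ours): the centrifugal acceleration at the equator is ω²R, and dividing by g gives the fraction of apparent gravity it cancels.

```python
import math

# A minimal sketch comparing centrifugal acceleration at the equator with g,
# to check the "70% of half a percent" figure above.
OMEGA = 2 * math.pi / 86_164  # earth's rotation rate (one sidereal day), rad/s
R_EQ_M = 6.378e6              # equatorial radius, m
G = 9.8                       # surface gravity, m/s^2

centrifugal = OMEGA**2 * R_EQ_M  # ~0.034 m/s^2
print(centrifugal / G)           # ~0.0035, i.e. about 70% of the ~0.5% total reduction
```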
The equatorial bulges of objects in near space are due to rotation. Earth has a similar bulge. Geocentrism must claim the two phenomena are due to different causes, which is nonsensical.

To be fair, geocentrists could in theory solve the problem with relativity. Max Born (1882–1970), Nobel laureate and quantum mechanics pioneer, pointed out:

Thus we may return to Ptolemy’s point of view of a ‘motionless earth’ … One has to show that the transformed metric can be regarded as produced according to Einstein’s field equations, by distant rotating masses. This has been done by Thirring. He calculated a field due to a rotating, hollow, thick-walled sphere and proved that inside the cavity it behaved as though there were centrifugal and other inertial forces usually attributed to absolute space. Thus from Einstein’s point of view, Ptolemy and Copernicus are equally right.69

But here again, Born just said it was possible, not mandatory or even practical. For example, earthquakes can affect the earth’s rotation because they can redistribute mass, and this can be calculated relatively straightforwardly. But using the absolute-geocentric explanation would entail that an earthquake affects the entire universe. And ironically, many absolute geocentrists reject relativity, since they don’t want to concede that non-geocentrism is even as right as geocentrism.70

The oddly wiggling universe

If the earth is the center of everything, we must explain why events happening here on earth affect the rest of the universe. For example, Bradley discovered that the earth wobbles on its axis much like a spinning top wobbles as it revolves. ‘Nutations’ like this are explained by Newtonian theory to a high degree of accuracy, but would be nothing more than arbitrary changes in the rotation of the cosmos under geocentrism. And earthquakes, like the one that caused the massive tsunami that hit Japan in 2011, are known to affect the rotation of the earth. Scientists actually measured a change in the rate of rotation of the earth after that event. If geocentrism is true, nutations and earthquakes change the rotational speed of the universe instead. Yet, strangely, even though there is no reason to believe all objects in the universe are connected, they all change their rates of rotation at the same time. And these objects are at vastly different distances from the earth. Thus, there is a time delay that must be accounted for. Do objects further out change earlier than objects closer in, and are all these sequential changes timed to future events here on earth? No. We see everything in the universe changing at the same time because it is the earth itself that is changing its rotational speed.

Coriolis force

This is named after the French engineer and mathematician Gaspard-Gustave Coriolis (1792–1843). Newton’s Laws of Motion say that an object will move in a straight line unless an outside force acts upon it. This applies to any motion across the earth or any rotating body—any outside observer would see straight-line motion. But from the viewpoint of a stationary observer on the rotating body itself, the motion would appear to be deflected. This is due to the fact that an object, decoupled from the moving and rotating earth, will travel in a straight line irrespective of what the earth itself does. So to apply Newton’s Laws, a fictitious force or pseudo-force must be postulated to cause this ‘deflection’. This is the ‘Coriolis Force’, acting perpendicular to both the rotation axis and the object’s motion. This is important for cyclones, a large-scale weather pattern where air flows into a low-pressure region. Instead of flowing straight in, the air is deflected, so that cyclones flow counter-clockwise in the northern hemisphere, but clockwise in the southern hemisphere. Because the earth is rotating so slowly—once per day—the Coriolis effect is negligible except over long distances.71,72 There is simply no good reason to attribute the observations to the universe spinning around a stationary earth.73

And when we look at the Great Red Spot in the southern hemisphere of Jupiter, we note that it behaves exactly like hurricanes do in the northern hemisphere here on earth—rotating anticlockwise (counterclockwise). That’s because hurricanes have wind rotating inwards towards a very low pressure area, while the Spot is an anticyclone (winds are spiraling outwards from a high pressure area). And, of course, the Spot is larger than any earth hurricane—in fact, larger than the whole earth. From all appearances, its behavior is due to the Coriolis Force acting on an anticyclonic gyre moving across a spinning planet. We can observe Jupiter spinning. We see evidence of the physical effects of that spin. Now look at earth. We see evidence of the physical effects of spin in the Coriolis Force. Does this not mean that the earth also spins?
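Footnote 71 gives the quantitative version of the ‘negligible except over long distances’ claim. A minimal sketch (in Python; the example scales and constants are ours) of the Rossby number defined there:

```python
import math

# A minimal sketch of the Rossby number Ro = v / (L * f), with f = 2*omega*sin(lat),
# from footnote 71: small Ro means Coriolis effects dominate the motion.
OMEGA = 2 * math.pi / 86_164  # earth's rotation rate, rad/s (one sidereal day)

def rossby(v_m_s, length_m, lat_deg):
    f = 2 * OMEGA * math.sin(math.radians(lat_deg))
    return v_m_s / (length_m * f)

print(rossby(10, 1_000_000, 45))  # large weather system: Ro ~ 0.1 (Coriolis matters)
print(rossby(1, 0.3, 45))         # draining sink: Ro ~ 32,000 (Coriolis negligible; cf. footnote 72)
```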
There are many other geokinetic examples we could have brought into this discussion. We decided to stick to these few examples only, and we ordered them starting with the most important. When all is said, it is clear that absolute geocentrism has extreme problems. We would encourage anyone dabbling with non-Newtonian ideas to let go and let the earth find its own place in the heavens. The triumph of geokinetic theory is one of the greatest examples of the pursuit of science in the history of man. It was pioneered by scientists with a biblical worldview, affirmed by theologians with a biblical worldview, and is accepted today by people with a biblical worldview. It also fits all the relevant data. These are the reasons why we support it.

The greatest contribution of Western science, pioneered by Christians as it were, is the idea that the universe is rational. This is in line with the biblical presupposition that the universe behaves in an orderly manner because the Ultimate Lawgiver would not have created something that goes against his very nature. Our God is unchanging. There is no ‘shadow of turning’ with him (James 1:17). He is not fickle. He is not like pagan gods. He is not like Zeus, sitting on Mount Olympus waiting to throw down a lightning bolt whenever he wants to mess up a person’s life (or experiment). He is not ‘chaos’, which would prevent rational interpretations of events. He is not ‘nature’ – if nature were alive, it would have a volition of its own and science would not be possible. No, our God has created a universe for us to live in, and one that exalts His name. He has also told us to use our minds and to understand the universe He made for us. This universe, therefore, should be understandable, and geokinetic theory makes such an understanding possible.

References and notes

1. Sarfati, J., The flat earth myth, Creation 35(3):20–23, 2013. Return to text.
2. Nicole Oresme, Le Livre du Ciel et du Monde (The Book of Heaven/Sky and the World), 1377. Return to text.
3. Hannam, J., God’s Philosophers: How the Medieval World Laid the Foundations of Modern Science, Icon Books, ch. 12, 2010. Published in the USA as The Genesis of Science: How the Christian Middle Ages Launched the Scientific Revolution. Return to text.
4. Hannam, J., Ref. 3. Return to text.
5. Graney, C.M., Mass, speed, direction: John Buridan’s 14th-century concept of momentum, The Physics Teacher 51(7):411–414, October 2013. Return to text.
6. Hannam, J., Ref. 3. Return to text.
7. Nicholas of Cusa, De Docta Ignorantia (On Learned Ignorance) 2(12), 1440, translated by Jasper Hopkins. Return to text.
8. In a similar way Charles Darwin certainly did not think up evolutionary theory on his own! See Bergman, J., Did Darwin plagiarize his evolution theory? Journal of Creation 16(3):58–63, 2002. See also Sutton, M., A Bombshell for the History of Discovery and Priority in Science, 2013. Return to text.
9. Interestingly, this informs us about the time of month of this battle. The moon was to the west of the sun during the day, meaning it was late in the month and the moon was waning, or past full. Return to text.
10. Most New Zealanders know about the Māori legend of the demigod Maui capturing the sun before it could rise, then beating it so it slowed down. The paganism, as always, was a later addition to the older belief in a single supreme Creator God, Io. Return to text.
11. Brown, F., Driver, S.R., and Briggs, C.A., A Hebrew and English Lexicon of the Old Testament, Hendrickson Publishers, UK, 1996; available online. Return to text.
12. Livingston, G. Herbert et al., Beacon Bible Commentary, Volume 1: Genesis through Deuteronomy, p. 32, 1969. Return to text.
13. See BAGD, Louw–Nida. Return to text.
14. Seely, P.H., The three-storied universe, J. American Scientific Affiliation 21(1):19, 1969. Return to text.
15. Kulikovsky, A.S., Creation, Fall, Restoration, p. 131, 2009. Return to text.
16. Holding, J.P., Is the raqiya’ (‘firmament’) a solid dome? Equivocal language in the cosmology of Genesis 1 and the Old Testament: a response to Paul H. Seely, J. Creation 13(2):44–51, 1999. Return to text.
17. Kuhn, T., The Copernican Revolution, Harvard University Press, 1957. Return to text.
18. Kuhn, T., The Structure of Scientific Revolutions, University of Chicago Press, 1962. Return to text.
19. Nicolaus Copernicus, De revolutionibus orbium coelestium (On the Revolutions of the Celestial Spheres), 1543. Return to text.
20. Better known today as the Almagest. Copernicus uses a short form of the original name of Ptolemy’s tome: Hē Mathēmatikē Syntaxis (Ἡ Μαθηματικὴ Σύνταξις = The Mathematical Treatise).
This became so admired it was called simply Hē Megalē Syntaxis (Ἡ Μεγάλη Σύνταξις = The Great Treatise). Then Arab scientists used the superlative Megistē (Μεγίστη), and named it al-kitabu-l-mijisti (The Greatest Treatise), which was Latinized to Almagest. Return to text.
21. Boëthius, The Consolation of Philosophy (De consolatione philosophiae) 2(7)3–7, AD 524. This book was one of the most widely read and influential works in the west during the entire Middle Ages. Return to text.
22. Rodney Stark, How the West Won: The Neglected Story of the Triumph of Modernity, Intercollegiate Studies Institute, 2014. Return to text.
23. Hannam, J., Ref. 3. Return to text.
24. Based on writings attributed to a mythical figure called Hermes Trismegistus (Greek Hermēs ho Trismegistos Ἑρμῆς ὁ Τρισμέγιστος, ‘thrice-greatest Hermes’). The writings advocated an esoteric monotheism with reincarnation, and taught that man could control nature with rituals (theurgy), alchemy, and astrology. Return to text.
25. For an essay on this topic with many interesting quotes, see Return to text.
26. Heilbron, J.L., The Sun in the Church: Cathedrals as Solar Observatories, Harvard University Press, 1999. Return to text.
27. Kuhn, Ref. 17. Return to text.
28. Broad, W.J., How the Church Aided ‘Heretical’ Astronomy, New York Times Learning Network, 19 October 1999. Return to text.
30. Johannes Kepler, De fundamentis astrologiae certioribus (Concerning the More Certain Fundamentals of Astrology), Thesis 20, 1601. Return to text.
31. Galileo Galilei, Dialogo sopra i due massimi sistemi del mondo (Dialogue Concerning the Two Chief World Systems), 1632. Return to text.
32. de Santillana, G., The Crime of Galileo, p. xii, University of Chicago Press, Chicago, 1955. Return to text.
33. See the discussion of Luther’s supposed antagonism to geokinetic theory, which was really a hearsay account of his rejection because it was new-fangled, in Sarfati, J., Refuting Compromise, Creation Book Publishers, Powder Springs, GA, chapter 3. Return to text.
34. See also Sarfati, J., Galileo Quadricentennial: myth vs fact, Creation 31(3):49–51, 2009. Return to text.
35. Copernicus seems to have been the first to realize that increasing the money supply (or modern-day ‘printing money’ or ‘quantitative easing’) would likely cause price inflation (Memorandum on monetary policy, 1517). Return to text.
36. Sarfati, J., The biblical roots of modern science, Creation 32(4):32–36, 2010. Return to text.
37. Today, parallax is the basis of the standard distance measure for professional stellar astronomers: the parsec (from parallax-second): the distance at which an AU subtends an angle of 1 arcsecond (1/3,600 of a degree). This is 3.26 light years or 206,000 AU. A parsec is shorter than the distance to even the nearest star outside our solar system, Proxima Centauri, at a distance of 1.301 pc. Return to text.
38. Hannam, J., Who refused to look through Galileo’s telescope?, 20 November 2006: “According to popular legend, when Galileo presented his telescope to senior cardinals/Jesuits/Aristotelian philosophers/the Inquisition they refused to even look through it. This tale has become a standard trope for when we want to attack anyone who won’t accept ‘obvious’ evidence. … So who refused to look through Galileo’s telescope? According to the historical record, no one did for certain. The argument was over what they could see once they did look.” Return to text.
39. Williams, D.R., Venus Fact Sheet, 9 May 2014.
Venus’ angular size ranges from 9 to 66.7 minutes of arc. Return to text.
40. Those figures assume circular orbits as a first approximation. In reality, since the orbits are elliptical, the closest and furthest distances are 38 and 261 million km. See Coffey, J., Venus Distance From Earth, 8 May 2008. Return to text.
41. This is why the difference in apparent magnitude is not as great as the difference in apparent size: −4.9 brightest, and −3 dimmest: the crescent phase simply has far less of the surface reflecting light towards us. NB, this is a logarithmic scale, where a magnitude 1 star is 2.512 times brighter than a magnitude 2 star. This number means that every five magnitude steps is a brightness factor of 100. So Venus ranges in brightness by a factor of 5.7 (2.512^1.9). Return to text.
42. Heilbron, Ref. 26, pp. 202–3. Return to text.
43. Note that in our Newtonian system, in a sun-centered frame the moon orbits the sun, not the earth. As viewed from outer space, the moon always follows a convex path towards the sun. The earth only perturbs the moon’s path in its journey around the sun. The monthly orbit of the moon around the earth is only an apparent one, and only exists in earth’s reference frame. But note that in this frame, the moon follows Kepler’s laws. An absolute geocentrist must explain why the moon follows these laws but apparently all other heavenly bodies are exempt. Return to text.
44. Creationists are not generally guilty of ‘God of the Gaps’ arguments, despite dishonest caricatures by atheopaths and their churchian allies. See Weinberger, L., Whose god? The theological response to the god-of-the-gaps, J. Creation 22(1):120–127, 2008. Return to text.
45. Graney, C.M., Regarding how Tycho Brahe noted the absurdity of the Copernican Theory regarding the Bigness of Stars, while the Copernicans appealed to God to answer, 9 December 2011. See also Sanderson, K., Galileo duped by diffraction: Telescope pioneer foiled by optical effect while measuring distance to the stars, Nature, 2 September 2008 | doi:10.1038/news.2008.1073, and Galileo backed Copernicus despite data: Stars viewed through early telescopes suggested that Earth stood still, Nature, 5 March 2010 | doi:10.1038/news.2010.105. See also Graney’s book: Setting Aside All Authority: Giovanni Battista Riccioli and the Science against Copernicus in the Age of Galileo, University of Notre Dame Press, 2015. Return to text.
46. In the atmosphere it is pure refraction. When using a telescope, however, there is the added problem of diffraction due to the aperture size (diffraction angle ~ wavelength/diameter of aperture). Return to text.
47. Hubble Space Telescope captures first direct image of a star, 10 December 1996. Return to text.
48. The Prof says: Tycho was a scientist, not a blunderer and a darn good one too!, The Renaissance Mathematicus, 6 March 2012; refuting the notorious Christophobe David Barash, whom CMI has refuted on another issue. Return to text.
49.
Johannes Kepler, Prodromus dissertationum cosmographicarum, continens mysterium cosmographicum, de admirabili proportione orbium coelestium, de que causis coelorum numeri, magnitudinis, motuumque periodicorum genuinis & proprijs, demonstratum, per quinque regularia corpora geometrica (Forerunner of the Cosmological Essays, Which Contains the Secret of the Universe; on the Marvelous Proportion of the Celestial Spheres, and on the True and Particular Causes of the Number, Magnitude, and Periodic Motions of the Heavens; Established by Means of the Five Regular Geometric Solids), 1596. Return to text.
50. Johannes Kepler, Astronomia Nova ΑΙΤΙΟΛΟΓΗΤΟΣ seu physica coelestis, tradita commentariis de motibus stellae Martis ex observationibus G.V. [Generositas Vestra] Tychonis Brahe (New Astronomy, Based upon Causes, or Celestial Physics, Treated by Means of Commentaries on the Motions of the Star Mars, from the Observations of [your generosity] Tycho Brahe, Gent.), 1609. Return to text.
51. Newton’s formula for calculating gravitational attraction: F = −GMm/R². The negative sign means attraction, because it’s in the opposite direction to the vector from one of the bodies travelling to the other. The force is proportional to the masses of the objects and inversely proportional to the square of their distance apart—hence an inverse square law. Return to text.
52. Hartnett, J., Has dark matter really been proven? Clarifying the clamour of claims from colliding clusters, 8 September 2006. Return to text.
53. Henry, J., The moon’s recession and age, J. Creation 20(2):65–70, 2006. Return to text.
54. The Chronology of Ancient Kingdoms Amended, posthumously published in 1728; Observations Upon the Prophecies of Daniel and the Apocalypse of St. John, 1733. Return to text.
57. Newton actually denied arguments for the Trinity from dubiously attested biblical texts, such as the Johannine Comma in 1 John 5:7. Most informed Trinitarians today would agree that the texts are dubious. A very detailed defense of Newton’s Trinitarianism is Van Alan Herd, The theology of Sir Isaac Newton, Doctoral Dissertation, University of Oklahoma, 2008. This documents much evidence, including Newton’s words refuting tritheism and affirming Trinitarian monotheism, e.g.: “That to say there is but one God, ye father of all things, excludes not the son & Holy ghost from the Godhead because they are virtually contained & implied in the father. … To apply ye name of God to ye Son or holy ghost as distinct persons from the father makes them not divers Gods from ye Father. … Soe there is divinity in ye father, divinity in ye Son, & divinity in ye holy ghost, & yet they are not three forces but one force.” The argument against Newton is like someone 300 years from now citing our page ‘Arguments we think creationists should NOT use’ and claiming that CMI is anti-creationist. Return to text.
58. Depending on the latitude, of course—multiply by the cosine. Return to text.
59. Hoyle, F., Nicolaus Copernicus, Heinemann Educational Books Ltd., London, p. 78, 1973. Return to text.
60. Gibbs, W.W., Profile: George F.R. Ellis; Thinking Globally, Acting Universally, Scientific American 273(4):28–29, 1995. Return to text.
61. Bouw, G.D., Geocentricity: A Fable for Educated Man? Return to text.
62. Since Jupiter is so much more massive than Earth, and much further away, the barycenter of the Sun-Jupiter system is just outside the sun’s surface. A hypothetical alien astronomer would be able to deduce Jupiter’s presence from the sun’s ‘wobble’.
63. In chemistry, we use the Born–Oppenheimer approximation to simplify the Schrödinger equation for the atomic wavefunction—this treats the nucleus as essentially stationary compared to the electrons, because each proton and neutron in it is almost 2,000 times more massive than an electron. Return to text.
64. In Newtonian physics, the force required to keep a body of mass m moving in a circle of radius r at speed v is given by F = mv²/r. See also Sarfati, J., More space travel problems: g-forces, 9 February 2012. Return to text.
65. Ptolemy was correct, by the way. Objects should curve as they fall, but he had no way to measure the effect because he could not get high enough to drop an object and see the curve. Indeed, when manned space ships are re-entering earth’s atmosphere, rocket scientists must account for both the horizontal motion of the ship and the rotational speed of the earth in order to land in the correct place. If a non-orbiting object (e.g., something orbiting the sun in the vicinity of earth) were to fall, say, from the altitude of a geostationary satellite, it would NOT fall in a straight line. In fact, it would appear to curve as the earth rotated beneath the falling object. Return to text.
66. Wilson, C., Clairaut’s calculation of the eighteenth-century return of Halley’s Comet, Journal of the History of Astronomy 24(1–2):1–16, February 1993. Return to text.
67. Under Newton’s First Law, any object with no force acting on it keeps moving in a straight line. So an object moving in a circle has a tendency to fly off along a tangent, just because of its inertia, with no force needed. But to observers in the rotating reference frame, it seems as if there is a force acting that pushes objects away from the center, i.e. centrifugal (‘center-fleeing’). This force does not exist in inertial reference frames. Return to text.
68. In rotational spectroscopy, gas molecules are treated as rigid rotors to a first approximation. But molecular rotation pushes the atoms apart, increasing the molecule’s moment of inertia, so a centrifugal distortion parameter is applied to correct for this. Return to text.
69. Born, M., Einstein’s Theory of Relativity, pp. 344–345, Dover, 1962. (German: Die Relativitätstheorie Einsteins und ihre physikalischen Grundlagen, Springer, 1920.) Return to text.
70. Such as Gerardus Bouw, probably the best known geocentrist today. Bouw, G.D., Geocentricity, pp. 267–269, Association for Biblical Astronomy, Cleveland, 1992. Return to text.
71. In technical terms, this is the Rossby number (Ro), named after the Swedish meteorologist Carl-Gustaf Rossby (1898–1957). Ro = v/(Lf), where v is the flow velocity, L is the length scale, and f = 2Ω sin φ, where Ω is the angular frequency of planetary rotation and φ the latitude. For small Ro (caused by large length scales or fast spin), Coriolis effects are very important. For large Ro, caused by slow spin, small scale, or low latitude (near the equator), Coriolis effects are negligible. Return to text.
72. Some claim that the Coriolis effect causes water to drain counter-clockwise from a sink in the northern hemisphere and clockwise in the south. This is a myth—rather, an irregularity in the shape and latent water motion would almost always cause some turning in the flow towards the hole. As the water flow converges on the hole, the diameter shrinks, so the rotation rate increases. This is because of the Law of Conservation of Angular Momentum, which also explains why a spinning ice skater speeds up when she pulls her arms in. Return to text.
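Notes 71 and 72 can be tied together with a rough calculation. A minimal Python sketch follows; the hurricane and sink numbers (a ~500 km storm with ~10 m/s inflow, a ~0.3 m sink draining at ~0.1 m/s, both at latitude 30°) are illustrative assumptions of ours, not figures from the article:

    import math

    OMEGA = 7.292e-5  # rad/s, earth's rotation rate

    def rossby(v, L, lat_deg):
        """Ro = v / (L * f), with f = 2 * Omega * sin(latitude)."""
        f = 2 * OMEGA * math.sin(math.radians(lat_deg))
        return v / (L * f)

    print(rossby(v=10.0, L=5e5, lat_deg=30))  # hurricane: Ro ~ 0.3, Coriolis dominates
    print(rossby(v=0.1, L=0.3, lat_deg=30))   # bathroom sink: Ro ~ 4600, negligible

The sink’s enormous Rossby number is why its swirl direction is set by basin geometry and leftover motion, as note 72 says, and not by the hemisphere.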
73. In rotational-vibrational spectroscopy, if the molecule is rotating very fast, the atoms will experience Coriolis effects in the molecule’s rotating reference frame as they vibrate. So there is a need for correction terms known as Coriolis zeta coupling constants. Return to text.
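The size of the centrifugal correction mentioned in note 68 is easy to illustrate. A minimal Python sketch for a diatomic treated as a slightly non-rigid rotor; the carbon monoxide constants below (B ≈ 1.9225 cm⁻¹, D ≈ 6.12×10⁻⁶ cm⁻¹) are textbook values we have assumed for illustration, not figures from the article:

    # Non-rigid rotor term values: F(J) = B*J*(J+1) - D*[J*(J+1)]**2,
    # so the J -> J+1 absorption line sits at 2B(J+1) - 4D(J+1)**3.
    B, D = 1.9225, 6.12e-6   # cm^-1, CO (assumed textbook values)

    def line_position(J):
        rigid = 2 * B * (J + 1)
        return rigid, rigid - 4 * D * (J + 1) ** 3

    for J in (0, 10, 20):
        rigid, corrected = line_position(J)
        print(J, round(rigid, 3), round(corrected, 3))
    # J=20: 80.745 vs 80.518 cm^-1 -- the faster the rotation, the more the
    # bond stretches and the lines fall below the rigid-rotor prediction.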
Readers’ comments

Edward H.
You stated: “this passage [Isaiah 38:8] is entirely ambiguous. It does nothing to inform us of the workings of the universe.” You seem to think that saying something makes it so. There is not any ambiguity in the passage, and that passage speaks directly to the workings of the universe. The foundation of your biblical argument is a “phenomenological language” tradition that requires three assumptions: 1) someone other than God is viewing the phenomenon, 2) that someone is telling us what he saw, and 3) what he saw did not actually happen as he described it. First of all, it is God speaking in Isaiah 38:8. Verse 4 states “then came the word of the Lord to Isaiah.” Isaiah was speaking the words of God Almighty. God stated what he would do: “Behold, I will bring again the shadow of the degrees, which is gone down in the sun dial of Ahaz, ten degrees backward.” God then stated exactly how he did it: “So the sun returned ten degrees, by which degrees it was gone down.” God says, without ambiguity, that the sun returned the very ten degrees that it had gone down. Yet, you state the passage is ambiguous. However, the alleged ambiguity, according to you, only cuts one way, and that is directly contrary to the express meaning of the words used. Under your “phenomenological language” tradition, you are saying that in fact the sun did not return the ten degrees it had gone down. Please explain to me how your “phenomenological language” tradition does not contradict what God clearly states? How is what you are saying any different than that for which Jesus rebuked the scribes and Pharisees? “Making the word of God of none effect through your tradition, which ye have delivered.” Mark 7:13.

Robert Carter
Simple. And I have already done so. Both geocentrism and geokinetic theory can be satisfied by one of the stated mechanisms. If you or I were standing there looking at that shadow, measuring the angle to the sun, you and I would both say the shadow and the sun went back. It is "ambiguous" because the language cannot be used to separate the two theories. Yup, without ambiguity, the Bible says the shadow went back. You are putting words in my mouth by claiming I believe the 'sun did not return the ten degrees it had gone down'. The passage is not ambiguous as to what happened. If you wanted to use that as an experimental setup, however, the results would be ambiguous because you cannot use them to decide between the competing hypotheses. A "Type C" experiment, as I explained in an earlier comment. Caution: You are taking up a lot of space and time. This is not an open-ended forum. You've made your best case against phenomenological language and I have replied. If you want to stay on the same topic, we are not under any obligation to keep responding. FYI.

Tim H.
I agree with BOTH of you (smile), both Edward H and Robert Carter, about the sun dial, because any and all of the discussions about any and all motion of any physical objects in the universe is mathematically RELATIVE, just as Einstein et al said. It is only a matter of choosing one’s coordinate system that changes. Therefore Mars-centricity is just as valid mathematically. Grab the universe in any given point and hold it still and the motion of all the other objects in space will not change observationally. “So which is real, the Ptolemaic or the Copernican system? Although it is not uncommon for people to say that Copernicus proved Ptolemy wrong, that is not true … one can use either picture as a model of the universe, for observation of the heavens can be explained by assuming either the earth or the sun to be at rest.” -- Stephen Hawking, Physicist. “Let it be understood at the outset that it makes no difference, from the point of view of describing planetary motion, whether we take the Earth or the Sun as the center of the solar system. Since the issue is one of relative motion only, there are infinitely many exactly equivalent descriptions referring to different centers – in principle any point will do … So the passions loosed on the world by the publication of Copernicus’ book De revolutionibus orbium coelestium libri VI, were logically irrelevant …” – Fred Hoyle, Astronomer. What is needed is scientifically observable independent evidence of whether the earth moves or not. And the Michelson-Morley experiments provided that evidence. A serious IN DEPTH discussion about Michelson-Morley needs to take place in this forum.

Robert Carter
Yup, from the point of view of describing planetary motion. We said as much in the article. Mathematical models of motion can go either way. However, explanations of the physics involved cannot. This was the main conclusion of the article and this is what our detractors are assiduously avoiding. You did read the article, right? Sadly, this comment thread is not a suitable venue for an in-depth discussion of every aspect of M-M. But I have already answered the main contentions (in two places) above. There is not much more to say. And, as I have told two other people, we've given you enough space to make your case. Blessings to you.
Edward H.
You subtitled the article “Refuting absolute geocentrism,” but you did no such thing. A refutation is proof that something is wrong. You presented arguments against geocentricity, but your article can hardly be termed a refutation. To refute geocentricity would require proof. However, you have craftily dodged and avoided some strong scientific and biblical arguments that support geocentricity. For example, you made passing reference to Isaiah 38:1–7, but you did not cite or discuss the operative passage that proves geocentricity, which appears in verse 8. Why not? If you are refuting geocentricity, one would think that you would address Isaiah 38:8. “Behold, I will bring again the shadow of the degrees, which is gone down in the sun dial of Ahaz, ten degrees backward. So the sun returned ten degrees, by which degrees it was gone down.” Isaiah 38:8. That passage clearly states that “the sun returned ten degrees, by which degrees it was gone down.” God is stating clearly that he moved the shadow on the sun dial ten degrees by moving the sun back in its path by ten degrees. Your statement that God is using “phenomenological language” in the bible passages that support geocentricity is just a clever way of saying that God is telling us a fib, because we are too dense to understand what is really happening. God could have easily stated that he moved the earth back ten degrees in its rotation, if that is what he had done. I think we could understand that phenomenon. God did not say that, because he did not do that. God is not lying to us about what he did by using what you call “phenomenological language.” God explained exactly what happened: “the sun returned ten degrees, by which degrees it was gone down.” Isaiah 38:8.

Robert Carter
We have not ‘craftily dodged’ anything, and your argument really amounts to nothing. The refutation is in the physics, and so far nobody has attempted to seriously grapple with it, which is telling. Regarding Isaiah 38:8, the shadow went backwards in the reference frame of those doing the observing. If that happened today, the first-hand reports would use the exact same language even though later pundits would state the earth must have temporarily reversed its spin. I note that God did not say the universe reversed its spin around the earth, only that the sun/shadow went back, which could happen in a geocentric universe if God reversed the spin of the universe OR if He temporarily decoupled the sun (and planets!) from the rotation of the universe. In a geokinetic universe, God could reverse the spin of the earth OR move the sun and planets in their place to produce the same effect (thank you for making me think of this last bit). Therefore, this passage is entirely ambiguous. It does nothing to inform us of the workings of the universe. Nobody is being duplicitous.
David S.
Thank you, Dr. Carter and Dr. Sarfati, for your willingness to engage in what is still a hot topic centuries after Galileo and Copernicus. I consider myself not fully convinced of either side in this debate, and this article did little to help that, unfortunately. A chief reason for that is the lack of any mention of the fairly recent work by Robert Sungenis on this topic. See especially his work with Robert Bennett called "Galileo Was Wrong, The Church Was Right". It's a 3-volume work detailing the problems with heliocentrism (or a geokinetic earth, as you say) and defending geocentrism. I believe the thrust of these volumes may be in direct contradiction to your statements in this article. Since I didn't see this work or its authors mentioned in either the article or the comments, I was wondering what your thoughts were on this work. If you disagree with the assertions and conclusions made by Sungenis and Bennett, I would be interested to hear why. If you haven't read it, I recommend it; I think it belongs in the conversation. Like you, I want to follow the truth where it leads. It was following the truth that led me to stop accepting the lie of evolution. CMI helped me tremendously in that capacity. Perhaps CMI can help me with this, as well. Thanks for your hard work on this article and the many other articles you both have written!

Robert Carter
Sungenis was the main driver behind the geocentrist movie The Principle (there is a link to a review of that movie in a reply to a comment from Gre T. above). As a Catholic, he has great trouble with several of the great Solas of the Reformation (Sola Scriptura -- the Bible alone, and Sola Fide -- by faith alone, specifically). He and Bennett co-wrote Galileo was Wrong, The Church was Right. As I said in a comment above, our purpose was to put the best case forward for geokinetic theory and not to allow ourselves to get bogged down by addressing the claims of the geocentrists. As you can see from the comments, however, we have now handled many of their claims. So in the end their book was brought into the conversation. Indirectly, but according to plan. For those out there (like you) who might be sitting on the fence: Put on your thinking caps. Look at the evidence for both sides. Look at the way each side handles Scripture. Look at how each side understands the fundamental processes involved. It is our contention that they are mishandling Scripture and that they do not understand what they are talking about on fundamental levels. The train of recent comments should make that clear. There is a rational viewpoint that is also a faithful viewpoint and that takes into account all of the relevant scientific and Scriptural data and savages neither. It is geokinetics.
Tim H.
Thanks, Robert, for your correction! My apologies. My analogy of the moving train should’ve only had one boy, catching a pitch from a stationary man. But now a simpler analogy might prove my point: Picture instead a turntable with an LP record (the fabric of space) painted with white dots (stars) spinning once in 24 hours around a small stationary bug (a rocket ship) perched on a stationary spindle (earth) preparing to travel to a dot (Pluto) on the LP’s outer edge. Admittedly the bug would have to exert a certain angular force in order to safely step on to and adjust to the rotational momentum of the spinning LP and point itself to Pluto. BUT, once he adjusts to this spinning reference frame AFTER ONLY A FEW STEPS and aligns himself with Pluto, then he only has to travel in a STRAIGHT LINE to get there, requiring only LINEAR thrust from that point on. From the bug’s PERSPECTIVE the stationary spindle (earth) is now spinning behind him; while the stationary spindle perceives the bug as being on a curved trajectory towards Pluto. But, again, this is only a PERCEIVED curve, not a real curve (after alignment), because it is a different (a spinning) reference frame. From the bug’s perspective, he is now walking straight, with Pluto straight in front of him, exerting only a LINEAR speed, NOT an angular speed. The bug’s angular speed is only a PERCEIVED speed “seen” by the spindle. The point is that the bug only had to exert angular force BRIEFLY at the beginning near the earth to get aligned, NOT for the entire trip “to catch up to” a speed-of-light-traveling Pluto. To him Pluto is now stationary. Does that work? The math should work out the same in a geokinetic system. Corrections welcomed.

Robert Carter
You are having difficulty coming up with working analogies because your basic premise is incorrect. In this case you are demonstrably wrong again. Space is not a 'hard' medium; the record is. There is essentially no friction in space (and many experiments have already shown this). As a ship moves outward in a geocentric universe, it will have to accelerate sideways or it will not travel in a 'straight' line. In fact, your analogy will break down as soon as the ship gets far enough out on the record that friction ceases to hold it in place. The ship, if traveling, is not anchored to the medium. And if it was, it could not move. You are failing to account for inertia. If the interstellar medium is 'thick' and simply carries the ship along in the universal rotation field, there should be a lag (i.e. acceleration), unless it is infinitely thick (like if you were moving inside the vinyl record). But if it is infinitely thick, how does any object move through it? Or is it absolutely thin in the direction of travel and absolutely thick orthogonal to the direction of travel? I think any reasonable person would conclude you were just making things up if you tried to argue this. NASA never takes into account any 'certain angular force' in order to 'step on to' any interstellar medium, yet their space ships fly straight and true. Why is this? It is because geocentrism is an incorrect physical model. Thank you for trying. Thank you for explaining to the best of your ability. I am glad to have this exchange saved for posterity so that our readers can see which is the stronger of the two views. You have had four tries and, unless you have some really stellar new argument that no one has ever thought of before, this conversation will end here. Blessings.

Morris Y.
All this invective language which has come about as a result of this solidly written article highlights the reason why it was needed in the first place. Onlookers here may think geocentrism is a religion in itself. I’ll quickly say, I admire the geocentrist’s commitment to upholding what he feels is biblical truth. However, I wish that their faith in God would be stronger than something which MUST be propped up by the precarious tenet of geocentrism. God created man to dwell on, subdue, and have dominion over the earth, so it’s our reference point. From that point, it is perfectly valid to say the sun moves around the earth. Likewise, it’s okay for me to say I am cold when my temp drops to 97 degrees even though my body is still basking in heat hundreds of degrees above absolute zero. Since we use this type of language to help others understand our experiences, it is only logical that God would also use phenomenological language, in our reference point, using our terms, to help us understand His Word. Imagine if God had chosen that the entire Bible were written on His omniscient terms from His over-arching perspective. We’d find it hard to understand a word of it. Even if geocentrists postulate that the fabric of the universe itself (space-time) is turning once every 24 hours, dragging all stars with it, there would still be problems. The earth would be compelled to turn with the medium as well at the same rate, and the stars and sun would not go across the sky at all. A fix to this problem would be to say the earth is turning opposite the rotating space-time but, alas, that would also violate their beliefs; the sun would not be moving but the earth would. Kudos on the article! It’s been a pleasure to read Dr Carter's responses and to meet Dr Sarfati Sunday.
Tim H.
Dear Robert, I sincerely appreciate your light-speed response. Further regarding the speed of objects measured in different reference frames, I think our disagreement boils down to the term “in relation to”. A simplified example for the point I’m advocating is the classic illustration of a man on the ground observing a ball being thrown by two boys in a passing train. We may still disagree, but what I am saying is that the ball the man “observes” being thrown actually does NOT travel at a faster speed than it is thrown by one boy to the other. I’m saying that there can only be one TRUE speed of the ball, which can only be measured in one chosen reference frame, not across two; therefore, not “in relation to” another reference frame. This means that the ball’s speed “in relation to” the man on the ground (the train’s speed + the thrown ball’s speed) is fictitious, and doesn’t exist in reality. Why? Because the instant the man begins to add the speed of the train he has changed his measuring reference frame to the train, thus eliminating the “in reference to himself” portion of his equation. Likewise, if the man chooses to start measuring the ball’s speed the instant it’s thrown, for him to get the ball’s true speed he must stay in his reference frame (ignoring the train’s speed, which is irrelevant, because from the man’s reference point, at the INSTANT the boy throws the ball, the boy must be considered motionless – for an instant is not a span of time). The ball’s speed according to the man must be calculated as if the boys did not traverse further in the moving train. I simply applied this principle to the stars’ rotational motion. But I’m open to correction.

Robert Carter
Perfect analogy, but you are missing the critical part. We are not concerned about two boys in the same reference frame (throwing a ball to each other on a 'moving' object). We are concerned with you getting a ball to one of those boys on the speeding train. If you wait until the boy is almost even with you, you are going to have to throw the ball in the direction the train is traveling and at close to the speed of the train. If you simply lob the ball across, you are going to kill him with a 100 mph fastball to the head (because the ball is basically at rest with respect to the earth, not the speeding train). If you throw it faster than the train, the ball will slowly sail past him (take your throw speed and subtract the speed of the train to get the relative velocity as the boy sees it). In the same way, I am on earth. The boy is on a planet circling the earth. In order to get the ball to the boy, I have to throw the ball at the apparent (to me!) speed of the planet. And that speed, to me!, is many millions of miles per hour. Because we can use simple trigonometry to determine the distance to objects in our solar system, and because those objects must circle us once per day in your model, the relative speed of any object to us at a distance beyond Neptune is >c. How do we accelerate space craft to that velocity? And, since nobody at NASA takes into account the idea that the place we are trying to get to will spin around the earth many, many times before we get there, how on earth (pun intended) can we actually hit the target? Based on your reaction to this, I will be forced to either conclude you are deliberately playing fast and loose with your terminology or you don't (or didn't) understand relative reference frames.
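The arithmetic behind that reply is a one-liner. A minimal Python sketch; the round distances below (Neptune ~4.5×10¹² m, Alpha Centauri ~4.1×10¹⁶ m) are standard values we have assumed, not figures from the article:

    import math

    C = 2.998e8  # m/s, speed of light

    def daily_tangential_speed(R_m):
        """Speed needed to circle a stationary earth once per day at radius R."""
        return 2 * math.pi * R_m / 86400.0

    for name, R in [("Neptune", 4.5e12), ("Alpha Centauri", 4.1e16)]:
        v = daily_tangential_speed(R)
        print(f"{name}: {v:.3g} m/s = {v / C:.3g} c")
    # Neptune: ~3.3e8 m/s ~ 1.1 c; Alpha Centauri: ~3e12 m/s ~ 10,000 c

Anything farther out than roughly Neptune would have to move faster than light to lap a stationary earth once per day, which is the point of the reply above.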
Tim H.
Regarding the Speed of Objects in the Universe, you say that the rotational speed of an object depends on its distance from earth. This is a physical fallacy. An object's speed is only calculable within its own reference frame. The notion of calculating speed across reference frames is nothing more than an illusion, despite what Einstein said. From the earth’s perspective a star or galaxy 100 million light years away still travels at a rotational speed of 24,000 miles per day in relationship to the earth, but the star itself does not travel AT ALL in relation to its frame of reference and the other stars. Its frame of reference is the fabric of space. And even if space stretched it would not change the distance because measurements would stretch too. To put it another way, for all it cares, a star doesn’t travel at all (it only sees the earth “spinning”), whereas the earth sees any distant “stationary” object as simply traveling the circumference of the earth in one 24-hour period. This is why we say that “to the observer” (in a given reference frame) an object in a different reference frame “appears” to be traveling at a different speed. But in reality it is precisely BECAUSE the two different reference frames are not the SAME reference frame that one cannot legitimately claim that one frame CAUSES a speed change of an object in the other. Therefore it is incorrect to say that an object circling the earth travels at a different speed depending on its distance from the earth. Any assumptions beyond that are only calculations of PERCEIVED speeds, not of ACTUAL speeds. Perceived speed is not real speed.

Robert Carter
What a curious amalgamation of disparate notions. And no amount of fancy language is going to obscure the fact that you have entirely missed the point. You believe the earth is sitting still. Now, a space ship on earth is launched at a distant object (let's say Pluto). That space ship MUST accelerate to the speed of the object if it wants to get to that object without 1) missing entirely or 2) smashing into it at faster-than-light speed. You cannot bring up relativity, for we do not care how the earth appears to that distant object. Our only concern is the fact that the ship must get to a significant radius away from the earth and that the horizontal speed relative to the earth (for the ship started at rest -- the reference frame of earth) must match the circumference of a circle with that radius every 24 hours. Physical fallacy? An object's speed can only be calculated within its own reference frame? Nonsense! We are talking about the speed of an object in reference to the earth. I will ignore the fact that you claim the stars only have to travel 24,000 miles per day. Adding "in relation to the earth" is nonsensical. There's a lot about geometry, even in our article, you seem to have missed.

Tim H.
In a number of places in this comment section you say that writers make refutations but don’t provide evidence or documentation. But how can we provide adequate proof and documentation when we are only allowed 300 words to respond to a paper that is many, many pages long? I have written very solid refutations of many of the key points I disagree with, but have no way of posting them for everyone to see. Therefore you have the advantage because you were able to post your whole paper, but we can’t sufficiently respond with only 300 words. Granted, you say to use the contact page for long responses. But if we do, will our responses be posted for all to see?

Robert Carter
We are all doing our best with the technology available to us. Yes, the limit is 300 words and, no, we cannot post something longer. However, one can say a lot in 300 words. Instead of trying to write a full-on broadside against geokinetics, how about just firing your best shot? So far, several have put forth ambiguous experiments, claimed the Bible absolutely means X when it does not necessarily mean that, or claimed the evidence we presented was nothing but lies. No one has yet attempted to substantively answer our main scientific arguments, and there is space enough to do so. Indeed, I have managed to answer every comment so far in the same amount of space.
Scott M.
Thank you so much for your compliment. Joshua's Long Day occurs after Genesis 2:2, so miracles were still occurring and thus naturalism would have been an invalid interpretation of that phenomenon. The article you linked to me was actually the reason for my question: your view contradicts the view in that article slightly. That article rejects naturalism as a rule. It essentially argues that miracles are so infrequent that scientists can still do well enough by assuming God isn't interfering with their experiments. But to be sure, that article says that God could, if He wanted, interfere at any time and suspend naturalism. You say the universe behaves according to a set of rules, but this other article makes it clear that there are an infinite number of rules that God may choose to apply at any moment based on His literally infinite wisdom. For all we know, the rapture could start tomorrow. Lastly, has there been a study on the frequency of miracles? Could there be many miracles that no one knows about? Because I actually like the argument that we can still do science if miracles are infrequent, but it would be cool to get a frequency estimate.

Robert Carter
The two articles are not in conflict. God can do with His creation whatever He sees fit, and at whatever time He chooses. The fact that He chooses to generally operate according to a set of self-appointed rules means that science can work. This also means that the times He does not (the Virgin Birth, for example) can be identified as miracles. I am not aware of the frequencies of miracles, nor indeed do I believe it is something that can be calculated. Miraculous events in the Bible, before Christ, were separated by decades, sometimes centuries, but this does not mean the Bible recorded every one. Etc.

Marc K.
To summarise your article and its intent: BECAUSE Man has gone to the moon and sent out Rosetta to a distant rock THEN Geokineticism (moving earth) must be TRUE because it is the only model that explains them. These are lies and illusions generated through computer imagery. Why do the people imagine a vain thing? (Psalm 2). Furthermore, you would have us believe that what we read in the Bible in relation to the movement of the stars, sun, and moon has to be understood in phenomenological language. This is a fancy way of saying that the Bible does not mean what it says and that God has not, or cannot, clearly articulate his creation. This casts doubt on God’s Word and echoes the first words of the serpent: “Yea, hath God said...? (Gen. 3)” – That old serpent called the Devil, and Satan which deceiveth the whole world (Rev. 12). In essence, your article gives more credence to the lies of Satan than the truth of God’s Word. Your article dares to affirm that the burden of proof lies with Geocentricism (stationary earth). Geokinetic theory evolves to give an appearance of credibility to lie upon lie whereas Geocentricism is centred on the truth of God’s Word. There may be no turning back for you and your colleagues at CMI. God is gracious and full of mercy. He gives space to repent but His judgment is quick.

Robert Carter
Let the interested reader note that this person believes the universe is a grand illusion and that the Christians involved in NASA are rotten, filthy liars in league with the Devil himself. We have one bona fide rocket scientist on staff, other speakers who work or worked with the aerospace industry, have interviewed more than one physicist in Creation magazine, and listed multiple people who called on the name of Christ for salvation in our article. All of these men are deceived, at best. As far as phenomenological language is concerned, we see it at various points in the Bible. God is not, actually, an "all consuming fire", etc. If we are to talk about the "truth" of God's word, we first have to decide what the language means. Taking a straightforward approach is not the same thing as taking a literalistic approach. Let the reader also note that we wait patiently for the "quick" judgment of God. It is a humbling thing to know that we will one day be judged and can only plead the Blood of Christ in our defense. Maranatha!
John C.
In the Jan-Feb (1977) issue of Bible-Science Newsletter, an article was published challenging a Geocentric believer (Gerardus Bouw?) to refute his article. He stated that in order for a stable orbit to be maintained, the mutual gravitational force must be perfectly offset by the orbiting body’s centrifugal force. He set up the equation and found that the orbital period is T = 2πr^(3/2)/√(GM), i.e. 2π times the radius taken to the 3/2 power, all divided by the square root of the Universal Gravitational Constant times the mass of the stationary object. If the earth were in the center, it would take the sun 211,226.9 days to complete one orbit. Sun centered, it would take the earth 365.8 days, just as we see today. I never saw an answer to the article. I have the math if anyone's interested.

Robert Carter
Of course, at the given earth-sun distance and with a mutual gravitational attraction as calculated by Newton's Law of Gravity, the sun must be moving very slowly to prevent it from flying away from us. But this is assuming the Law of Gravity is applicable. Geocentrists reject it, so the argument dies on the vine. They do not appeal to known physics and do not accept the introduction of physics into the discussion unless it suits them.
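John C.'s figures are easy to reproduce. A minimal Python sketch of the circular-orbit period he describes; the constants are standard values we have assumed, and small differences from his 211,226.9 and 365.8 days simply reflect the constants he used:

    import math

    G = 6.674e-11                         # m^3 kg^-1 s^-2
    M_SUN, M_EARTH = 1.989e30, 5.972e24   # kg
    r = 1.496e11                          # m, earth-sun distance

    def period_days(M_central):
        """Gravity = centripetal force -> T = 2*pi*r**1.5 / sqrt(G*M)."""
        return 2 * math.pi * r**1.5 / math.sqrt(G * M_central) / 86400

    print(period_days(M_SUN))    # ~365 days: the earth around the sun
    print(period_days(M_EARTH))  # ~211,000 days: the sun 'orbiting' a central earth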
Greg T.
Drs. Carter and Sarfati: Some of the evidence you present in this article against absolute geocentrism will need to be reconciled with your YEC earth age. Your YEC earth age is approximately 6,000 years. However, given the distance from earth to distant stars, or the distance between stars and galaxies, as you state in the article, would it not take longer than 6,000 years for the light to travel those great distances?

Robert Carter
You need to consult our Astronomy and Astrophysics Q&A page, specifically articles under the heading How can we see light from stars millions of light years away?

Robert B.
I feel you are being too charitable here. Jesus cautioned us to "know them by their fruits." When Hugh Ross used outright lies and deceptions to make his points, as CMI has caught him doing, benignly embracing him as a Christian Brother in spite of all that is not even the best thing for him, compared to calling out the apparent source of his self-deception. Jesus did indeed warn us to "judge not" lest we be judged, but the Bible also tells us that we have the sober responsibility TO judge! In that context, we are reminded that we will even be responsible for judging the angels! When Jesus turned to Peter and said "Get thee behind me Satan" he wasn't consigning Peter to the pit of Hell forever. But He was telling him in no uncertain terms that he was being used as a tool of the Devil in at least that instance. The forces of Hell are much more arrayed against you and CMI's ministry than they are against a ministry that has a message that undermines the Gospel. I sense that the current crop of geocentrists are on the same team as the outspoken old Earth Christians. Satan likely has little objection to their message receiving a lot of play.

Robert Carter
We are getting called all sorts of names from that side; I am not of a mind to hurl the invective back at them. In light of 1 Pet 3:15, we are called to answer humbly. Let the reader judge the merits of our answers. I will also take this opportunity to note once again: No detractor has attempted to seriously grapple with our list of evidences against geocentrism. "The Bible says..." and "These experiments prove..." have been the only answers, but we dealt with both of these areas in the article and preceding comments.
David O.
Genesis 1:16–19. If the Earth revolves around the sun, what did it revolve around for the first 3 days of creation? If it was still for the first 3 days and God set it in motion about the (presumably larger) sun after He created this on the 4th, would that not be an act worth noting? The bible is as clear that the Earth doesn’t move as it is that God didn’t use evolution to create. I understand that science also disproves a moving Earth, unless one can accept such contradictory (non-science) theories as Einstein’s General/Special Relativity, and Lorentz contraction (experimental apparatus shrinking during the experiment). Such are excuses to explain away the logical conclusions of experimental results, similar to many evolutionary “theories”. The most basic proof that the Earth does not move is that you cannot feel it. Were we really spinning about the sun due to gravity, we (the Earth) would be constantly accelerating toward the sun. Whenever one undergoes centripetal acceleration, one can feel it. This is different to a constant linear velocity (say, in a car), and is better likened to a car speeding about a circular track. The constant acceleration of the car toward the center point of its circular path would be experienced by any passenger or driver (despite a constant speed), unlike the case were the velocity constant (i.e. speed constant in a single linear direction). The pathway to evolution (to make respectable the rejection of Christ) could not have occurred, had the Earth’s central position in God’s Creation not first been displaced in the eyes of men. Thank you for the article. We disagree now, but many Christians are coming to realise we have been lied to, and are starting to return to the infallible words of the only wise God our Saviour. God bless you.

Robert Carter
If the earth were not spinning, why would there be 'day and night'? Was there an orbiting light source NOT called the sun prior to day 4? Or was the universe pulsing with light for those first couple of days? Either way, the language becomes strained if the earth is not rotating. But, as we said in the article, once the earth rotates there is no reason to hold to a geocentric universe. You understand the science disproves a moving earth? Then you did not read the list of evidences we provided at the end. You cannot feel the earth rotating? One word: hurricanes. Also, did you not see that we directly answered this claim in the article? "At the equator, there is a reduction in apparent surface gravity of ½ of 1% compared to the poles. 70% of that is due to the ‘centrifugal force’ counteracting the attractive force of gravity. The remainder is due to the difference in distance from the center of the earth caused by the bulge." This reduction in the force of gravity is exactly what one would expect based on the difference in distance to the center of the earth between the poles and the equator plus a 1,000 mph tangential rotational speed at the equator. Ah, but you are not talking about the earth rotating. You are talking about the earth orbiting the sun. Have you considered the degree of curvature of that orbit? On small scales (i.e. that which we could 'feel') it approximates a straight line. Also, the distance from the near side of the earth to the far side of the earth is negligible compared to the distance of the earth to the sun. Thus, there is no 'feel' here either. As I said in other comments, it did not take geokinetics to displace God. Most ancient cultures were geocentrist and most rejected God. True, there is, in the Western world, a series of events that led from theism to atheism, but it was not science but scientism that caused this. Yes, we disagree, but we are at least encouraged that more and more Christians are coming to realize the sensible nature of the universe comports to the divine attributes of God himself and that we do not have to reject simple, observational, operational science if we want to hold to the divine inspiration of Scripture.
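The figures quoted in that reply can be checked directly. A minimal Python sketch; the sidereal day length and equatorial radius are standard values we have assumed:

    import math

    omega = 2 * math.pi / 86164   # rad/s, one sidereal day
    R_eq = 6.378e6                # m, equatorial radius
    g = 9.81                      # m/s^2

    a_c = omega**2 * R_eq         # 'centrifugal' reduction at the equator
    print(a_c, a_c / g)           # ~0.034 m/s^2, ~0.35% of g

That ~0.35% is the '70% of ½ of 1%' quoted in the reply; the remaining ~0.15% of the pole-to-equator difference comes from the equatorial bulge putting the surface farther from the earth's center.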
Tim R.
Very informative! You marshal a lot of arguments to make your point. I'm wondering whether Foucault's pendulum experiments apply here as well. I understand that these also can be readily explained by a geokinetic model but would involve a miracle in the geostationary view of the cosmos. Presumably it's not a real miracle if we can produce it whenever we like.

Robert Carter
There are always geocentrist counter-arguments to the best geokinetic evidence, but most of them make no sense. One deals with the spinning universe producing an 'advanced signal', meaning the pendulum can sense what the universe will do in the future. It is not my intention to discuss the counter-arguments at length, but let this serve as an example. The simple observation that all objects in the universe are attracting each other explains even Foucault's pendulum. The geocentrist assumption that the earth is unmoving at the center of the universe requires a different explanation for each phenomenon, and many of these explanations are nothing if not mysterious.

Bernie M.
Thanks for your answer re: revolving/orbiting (and rotating). I read the "morning-has-broken-but-when" article, and it doesn't actually touch nor resolve this issue: if the premise is that on days 1–3 the observable universe was not as we know it today, and the universe is finite, how (or, really, why?) from the text should we infer that on one of those days God made the earth start revolving around the sun? Was it at the "separation of the waters?" Why is it less plausible that the whole thing looks like a giant whirling snow globe (or fish bowl) with the earth at its centre? If it's not so hard to believe that all the stars were made on the same day, after the plants, then couldn't your proposal that "God could easily have moved every atom in the earth at the same time" be just as easily applied the other way: God moved every atom around the earth? There's evidence for both, right? Both "scientifically" and mathematically, right? If one's faith is strong enough for a literal creation week, against the wisdom of this age, why would it not be for a God-sized mind-blowing geocentric view? Just wondering. May you and all y'all at CMI be blessed in your vital ministry.

Robert Carter
God 'moving every atom on earth at the same time' as part of creation week is not the same thing as God 'moving every atom in the universe around the earth' today. One deals with fiat creation. The other deals with ongoing processes. The first is not scientifically testable. The latter requires experimental science to be flat out wrong. Is there evidence for geocentrism? Yes, but you have to include ALL the evidence. Naked-eye astronomy appears to show that the universe revolves around the earth, but Newtonian physics says otherwise. The only way to combine these two is to conclude that naked-eye astronomy only gives us an appearance that is based on an earth-centric reference frame.

Robert B.
What a well reasoned and scholarly treatment this article is. I am blown away by you guys continually, but this article is a fresh reminder of why I love CMI. But when I got to the comments section, it was quite an eye opener. Although I was familiar with Hugh Ross and the valiant fight CMI wages against his sort of Christian Old-Earth silliness, now I see that the enemy is waging a similar war on other fronts as well. It seems that Satan's objectives are to oppose the Gospel by sowing confusion and depicting Christians as fools. You just exposed that depiction as false; keep up your excellent work!

Robert Carter
As I said in a comment just above, let us be careful not to insult our fellow brothers and sisters in Christ. Certainly, sowing discord among us is an effective tactic used by the opposition, but that does not mean geocentrists (or us!) are tools of Satan. Of course, you don't believe this, but someone from the other side might take your words in a different way.
Gre T.
Thank you for this fascinating article. Very enlightening. Have you seen promotions of a new movie hitting theaters (albeit limited) called "The Principle"? It looks interesting, in that it claims to thoroughly refute the Copernican principle and that there is strong evidence that the earth is at or near the center. [URL removed as per site rules] p.s. I see that Dr. Hartnett is interviewed for the movie, so I assume you are aware of it. Thoughts?

Robert Carter
Oh yes, we are aware of it. In fact, it was one of the catalysts that led to this article. With full approval from Dr. Hartnett, we decided to nip this in the bud. The movie takes an absolute geocentric position. He does not. Neither do we. If you see it, watch for how they craft his words and see if what he says comports to what he believes. See also his review on his personal website: The Cosmological Principle and geocentrism.

David B.
Superstition and pseudoscience rule... always have and always will... PT Barnum knew it and used it to make millions. Bottom line: IGNORANCE is a bigger business than the "first business."

Robert Carter
Careful, David, these are our brothers and sisters in Christ and we do not want to insult them for any belief they hold close to their hearts. Non-Christian ideas should be pointed out and vigorously opposed, but geocentrism vs. geokinetics is something that begs for frank, give-and-take discussion, not insults.

There is no way around it, Robert. The author of The Bible (God) was a Geocentrist and a Young Earth Creationist. If Geocentricity is false and/or Young Earth Creation is false, then The Bible is wrong and God was wrong; and if God that cannot lie was wrong, then neither can The Gospel be trusted. Paul put that much forth in 1 Cor chapter 15: since our Faith is based on objective truths, not subjective religious beliefs, our faith is falsifiable. The thing is, if Airy's failure, the Michelson-Morley Experiment, the Sagnac Experiment and the Michelson-Gale Experiment had been done (been able to be done) back in 1620 or so, the whole Copernican Principle would have been deemed falsified by those science experiment results, and then everybody today would be a Geocentrist, and billions of years/millions of years and Evolution would never have even become an issue, and CMI probably would not have any need to exist. I get ridicule for all 3 of my views, but I believe if any are proven wrong then Christianity is false. King James Bible believer, Pure Cambridge Edition PCE 1900. If that's not the inerrant infallible word of God preserved by God in 2015 then there is no inerrant infallible word of God preserved by God in any language to be the final authority. There is no "The Greek text" or "The Hebrew text". There are many of each that differ. If Young Earth Creation is false, then The Bible is wrong and the Gospel in it cannot be trusted. If Geocentricity is false, then The Bible is wrong and The Gospel ???

Robert Carter
Brian, this is comment #4. Please understand that this is not a blog site. Neither is it a convenient place to hash out a difficult argument. Neither are we obliged to answer or post every comment made by every person. Therefore, I get the last word.
1) You have gone from empirical arguments for and against geocentrism to King James Onlyism. Thus, your argument has gotten convoluted. The discussion has almost nothing to do with Bible translations.
2) Yes, if those experiments had been done earlier, they would have affected the course of scientific discovery, but why would they run the experiments before the issue came up? And, even if delayed, science would eventually have caught up to what we know today because somebody would have eventually sent a rocket into outer space and noticed that it did not behave the way they expected.
3) You do not need Copernicus to have billions of years. Ask any Hindu.
4) This argument has nothing to do with the King James Bible, because every modern translation uses similar phraseology in the relevant passages, with the lone exception being "firmament" in Gen 1:6. If your only argument is "firmament", you do not have much of an argument. We discussed this, and the other passages, in the article.
5) I am very sorry that you believe that if geocentrism is proven false your Christianity is also false. I am not of that persuasion. Neither is anybody at CMI. This is an issue of 'where do we draw the line', and, as I said in a comment above, that line is between operational and historical science. Operational science works among the planets. Yet, it cannot if geocentrism is true.
6) Thank you for the candid discussion. Thank you also for educating us about your views. Frankly, we were surprised, yet you are probably not the only one out there, so this discussion is of some benefit.
7) We note that even after all these comments have been made, not one person has attempted to address our main arguments. Instead, they have fallen back on sweeping statements about the 'big' experiments that supposedly proved geokinetics wrong (but that are actually equivocal) or sweeping statements about biblical language ('the Bible says it, therefore I believe it') without addressing the phenomenological aspects of communication.
D. M.
Good study. However, we could be the relative center of the universe, not the actual pinpoint. Hubble discovered evidence of our centrality in his studies of the expansion and the discovery of the redshift. However, he let atheism dictate his position simply to nullify the possibility that we were central, a fact he thought would prove God's existence. He decided on the assumption of uniformity. He said: “The assumption of uniformity has much to be said in its favour. If the distribution were not uniform, it would either increase with distance, or decrease. But we would not expect to find a distribution in which the density increases with distance, symmetrically in all directions. Such a condition would imply that we occupy a unique position in the universe, analogous, in a sense, to the ancient conception of a central earth. The hypothesis cannot be disproved but it is unwelcome and would be accepted only as a last resort in order to save the phenomena. Therefore, we disregard this possibility and consider the alternative, namely, a distribution which thins out with distance."

Robert Carter
You are correct. See a comment above with links to articles that discuss the relationship of the earth to the universe in general.

Jeff P.
This article did not need to be so long to answer the simple question: is the earth at the center of the universe? Here are the facts available to a non-scientific reader: (a) the observable universe (what we know about) is about 93 billion light years in diameter; (b) the Virgo Supercluster containing the Milky Way is in the center of the observable universe; (c) the Earth is part of the Milky Way galaxy and therefore is also in the center of the observable universe. (Keep in mind that the dimensions of the observable universe are so large that when we say the earth, the Milky Way Galaxy or the Virgo Supercluster are at the center of the Universe we don't quite know where the Center really is, but we're close enough to satisfy the "naked eye"!) There... wasn't that easy? But hey, you don't have to take my word for it. Check it out for yourself, and have an astronomically nice day: [link deleted, as per site rules]

Robert Carter
Also, see Our galaxy is the centre of the universe, ‘quantized’ redshifts show and Our galaxy—at the center of the universe after all!, and other articles, at least one of which was mentioned in a comment above.

Scott M.
Good article. The second paragraph is interesting because you say (and I agree) that naturalism works today. That is, I can do an experiment and try to figure out the laws of physics, confident that these laws are consistent and reproducible. However, you also say that there were supernatural events in the past that created the universe, and other supernatural events such as Joshua's Long Day. Anyone witnessing these supernatural events would not be able to determine any laws of physics from them and would not be able to reproduce them. In fact, they would see these supposed "laws" broken and lose their faith in naturalism entirely. I wonder at what point in time naturalism became valid. When did supernatural causes cease?

Robert Carter
Great question. The answer is after "God rested from his labors" (Gen 2:2). Please see Miracles and Science, and the DVD titled Are Miracles Scientific (same link).
Why doesn't CMI discuss Airy's failure, the Michelson-Morley Experiment, the Sagnac Experiment and the Michelson-Gale Experiment? Why does CMI discuss Galileo, Copernicus, and Ptolemy but never present the Neo-Tychonic Cosmos (updated Tycho Brahe model)? I think that CMI, though willing to take the heat from Evolutionists, is not willing to take the heat they would take from EVERYBODY if they took a Biblical stand on Geocentricity. Considering kids from age 5 up are indoctrinated into the Copernican Principle, billions of years and Macro Evolution, for CMI to skip over the Copernican Principle and then try to start the fight at the age of the Earth and Evolution is missing the foundation of it all. People are taught to believe it is "common sense" that the Earth is spinning 1,000+ MPH at the equator and hurtling through space at approx. 66,000+ mph. They have no clue why they are supposed to believe that but accept it just like they accept billions of years and evolution (macro). Airy's failure, the Michelson-Morley Experiment, the Sagnac Experiment and the Michelson-Gale Experiment, as demonstrated by Malcolm Bowden, show that the denial of Geocentricity is the foundation of Satan's deception that directly led to billions of years and Macro Evolution. If people were to realize the Earth is the stationary center mass of the entire universe then they would have to acknowledge the One who put it there, and acknowledge that He put it there approximately 6,000 years ago with no macro evolution in there anywhere at all. I understand people can argue flat Earth from the Bible as well as sphere Earth, though 281 Geocentric references are too many 2 ignore.

Robert Carter
Simply refer to our answers to the previous comments. All of this has already been discussed.

If Geocentricity and YEC are wrong, then The Bible is wrong.

Robert Carter
These two things are not the same. Note our comment about the "ministerial" use of science (here used to flesh out ambiguous passages) vs. the "magisterial" use of science (as old-age evolution does invariably when it trumps the biblical timeline). "It is the glory of God to conceal things, but the glory of kings is to search things out" (Prov 25:2, ESV). The relationship between the earth and the heavens was concealed for a very long time, but that is no longer the case. If the physical universe is knowable, rational, and consistent (all three of these tenets are derived from Christian theology, btw), geocentrism fails. But this does NOT mean that YEC fails, that the Bible is not inspired, etc. One does not have to let go of the doctrine of inspiration to let go of the earth. Far from it, in fact. One must draw the line somewhere, but the line is between historical vs. operational science, not within operational science.
This topic is where CMI, like [another creationist organization], fails to be biblical; and actually CMI and [another creationist organization] treat Geocentrists the way that the atheist Evolutionists treat CMI and [another creationist organization]. How can a group stand up for the authority of The Bible then ignore 281 Geocentric references in The Bible? [URL deleted] Whole sermon series about it on [URL to an online sermon database] [URL deleted] The NEO-TYCHONIC GEOCENTRIC COSMOS is the only model that accounts for all observations in the sky as well as Airy's failure, the Michelson-Morley Experiment, the Sagnac Experiment, and the Michelson-Gale Experiment, plus the CMB WMAP and PLANCK data. How can any "Bible Believing Christians" read the Bible and the below books and not be a biblical Geocentrist: Geocentricity: Christianity in the Woodshed by Gerardus D. Bouw, PhD (Astronomy); He Maketh His Sun to Rise: A Look at Biblical Geocentricity, by Dr. Thomas M. Strouse, Dean of Bible Baptist Theological Seminary, Cromwell, CT; The Bible and Geocentricity, by James N. Hanson, Professor Emeritus of the Cleveland State University; The Copernican Revolution: A fable for educated men, ed. by G. Bouw, a collection of responses submitted in response [to an article by another creationist organization]. Each of these was submitted to Creation Ex Nihilo [sic: Journal of Creation?] but were rejected for publication for reasons not always clearly

Robert Carter
[Note: this comment was edited to remove URLs, as per site rules, and references to other creation organizations and speakers outside CMI.]
1) We are accused of failing to be biblical, but the accusation falls flat in reference to our Statement of Belief.
2) We are accused of treating geocentrists unfairly, but where is the evidence for this in this article? It is unfair to bring up personal history we cannot defend against in this format.
3) How can we stand for the authority of the Bible in light of the many uses of phenomenological language in the Bible? See our discussion.
4) Sermons are preached on all sorts of things, and you and we would consider the contents of many to be heretical. So what if some are preached on geocentrism? I just read a story about an Islamic scholar who teaches geocentrism. That also means nothing.
5) We discuss the neo-tychonic model in the article, and why it fails. The other experiments were discussed in a reply to a comment above. And let's not confuse absolute geocentrism with the relative position of the earth in the universe. CMB, PLANCK, and WMAP all indicate we are either near the center of a spherically symmetric universe, or the data are corrupt, as recent claims against the BICEP2 results would indicate.
6) How can a Bible believing Christian read the Bible and not be a geocentrist? Reading the Bible certainly does not automatically make one a geocentrist. The opposite is true, in fact, as attested by the fact that the majority of conservative Bible believers are not. How can a Bible believer read those books and not be a geocentrist? Read the article and the replies to these many comments.

Bernie M.
Great article, clearly loads of good solid research. Wondering about all this and how it fits into the 6-day chronology of Genesis 1, I would like to ask: what was the Earth rotating around for days 1, 2, & 3 if the Sun wasn't created until Day 4? Asked another way, which model better fits a literal Genesis account?

Robert Carter
I think you mean "revolving/orbiting", not "rotating". In the geokinetic model, the earth was simply spinning in space, and gravitational motion relative to the sun would have begun when the sun was created. Would this have caused the earth to "jerk" forward? Not according to the physics involved in Joshua's long day (God knows the physics required and can account for all parameters), and God could easily have moved every atom in the earth at the same time, or the earth and sun could have been created with the correct relative motion to the other to avoid any jarring. Which model better fits a "literal" interpretation? The one that can account for both the text and the physics without forcing an interpretation on one or the other. See Light before the sun?
(2) The Michelson-Gale experiment, which showed an aether was passing across the surface of the earth within 2% of one rotation per day - either the earth was turning OR the aether was rotating around us. (3) Airy's "failure" showed that it was the aether rotating around US. The Earth was stationary. (4) Sagnac's experiment demonstrated that the aether existed. To overcome the M-M experiment, Einstein produced Relativity, which abolished the aether. Thus Sagnac disproved Relativity. Barbour and Bertotti showed that the geocentric universe was perfectly viable, as did Popov. This article cherry-picked all the evidence against geocentrism and completely ignored the SCIENTIFIC evidence that supports it! I have about 8 YouTube videos on geocentricity. Would you allow a reply article? Malcolm Bowden.

Robert Carter
1) M-M was equivocal. Either the earth is not spinning or light speed is constant in every direction. 2) M-G is equivocal, as you said. 3) Airy's experiment is equivocal, and trivial as far as Relativity is concerned, as I discussed in an earlier comment. 4) Sagnac's experiment was equivocal, and is consistent with Relativity, as von Laue showed two years prior to Sagnac. You're right, we did not discuss these. Perhaps we need an addendum to explain why the main things used to support geocentrism are all equivocal. The careful reader will note that we sought to bring up unequivocal evidence in support of geokinetics.

S. W.
I find a real double standard between taking a literal 6 days of creation and the geocentric debate. Who is to decide what the Holy Spirit meant? What about Joshua's long day? As others have mentioned, all the arguments provided against geocentrism can be refuted, so why on this issue is science 'so-called' taken to override the plain reading of scripture?

Robert Carter
Did you actually read the article? We discussed Joshua's long day and other things. "Who are we"? Nobody, actually. We are not some holy authority, lording it over others, and we are not deciding anything for anybody. How could you get that out of what we wrote? And, as many others did, you fail to bring up substantive arguments and instead resort to the catch-all "all the arguments against geocentrism can be refuted". So, your entire argument boils down to the "plain" reading of scripture. God said it, that settles it? You must address what God actually said and put that into common language conventions. Our outline should help.

Robin R.
To be succinct, you have missed every point entirely because of not doing your homework. I strongly suggest that you begin by reading "GEOCENTRICITY: CHRISTIANITY IN THE WOODSHED" and "SUN, STAND THOU STILL" (Henry Schuman Inc.), also published as "THE WORLD OF COPERNICUS". Also, it is instructive to see the following video: Nova, Season 30, Episode 4, Galileo's Battle for the Heavens (29 Oct. 2002), and GALILEO'S DAUGHTER (Penguin Books). Having spent 20 years working inside NASA science, I can tell you that Romans 1:18-19 is truer than you can imagine. Many NASA scientists secretly agree, but would lose everything (materially and family) if they acknowledged it. Your article is woefully incomplete.

Robert Carter
[note: this comment was edited prior to publication to remove URLs, as per site rules, and to clean up some formatting issues] Clearly, we did not win over every reader. However, all this is elephant hurling. You have provided nothing substantive to answer. You say "many" NASA scientists are, what, geocentrists? I seriously doubt that "many" are, but so what?

Kyle H.
The detail in your article is appreciated, however... K-I-S-S... RELATIVITY. No wonder Galileo Galilei hedged his bets!

Robert Carter
Due to the nature of the arguments, we had to do more than just discuss relativity. I hope you can see that.

Terry S.
The Bible is not a book about science, but where it touches on science it is always correct; if you accept that it is God's word then you would expect nothing less. The creation sequence as stated in Genesis is detailed in such a way that theories like evolution are automatically excluded. In a similar way the Bible portrays a geocentric universe; if read in a plain straightforward way it excludes heliocentric ideas. The earth is at the heart and centre of the universe, the centre of God's attention. Heliocentricity detracts from this and gives the evolutionist the excuse that the earth is just another planet, one of many. The Bible also describes the purpose or function of the universe, that is, its supporting role: the stars are for signs and seasons and declare the glory of God, the sun to rule the day and the moon to rule the night, etc. The earth also has foundations so that it can never be shaken or moved. There is no scientific reason to compel us to believe anything different. Observations of the movements of stars, planets and questions of parallax can be explained by both models, but experimental evidence carried out in the past would support a stationary, non-spinning earth with a rotating, incredibly dense aether. The Michelson-Morley is one such experiment, the results of which indicated that the earth is not moving. There are problems with a spinning earth, the effect it would have on the atmosphere for instance. Nothing in creation is simple; rather, creation shows an incredible complexity, for example the so-called simple cell. The more we find out about God's design, the more we marvel at the sophistication and elegance of His solutions to design problems. The geocentric model fits the Bible very well.

Robert Carter
You have broached multiple topics and space is limited, but I must reply to some. 1) Your opening comments are spot on. 2) The Bible does NOT portray a geocentric universe. All descriptions are phenomenological or conventional, as we discuss in the article. 3) Yes, the earth is at the heart of creation, but this does not mean it is at the physical center of creation. You are extrapolating from the spiritual to the physical. 4) The evidence for extrasolar planets is mounting steadily. There is no reason for God to NOT create other planets around other stars (this does not mean life is out there, but that is yet another topic). Thus, geocentrism does not have the upper hand here, for those planets exist no matter the physical explanation of the universe. Or do we have to reject this data as well? 5) The stars etc. as signs for the seasons have nothing to do with our relative motion in respect to those things. 6) The earth can never be shaken -- now I am thinking you did not read the article! 7) We outlined a great deal of experimental evidence that supports geokinetic theory. M-M was equivocal, and its purpose was to contrast two opposing theories of aether more than it was to decide geocentrism vs. geokineticism. 8) Nothing in creation is simple, so why hold to the 'simple' geocentric theory? As you just said, complexity is no reason to reject any aspect of creation.

J. S.
I love your site, but I seem to be a lone objector on this one!
I have researched geocentrism and am convinced of it scripturally, and pardon me for saying I have a 'gut feeling' that it is the truth. It has been said that the devil could not have gotten evolution rolling without first setting up Copernicus and the heliocentric idea. I think that if you were to give geocentrism even a breath of consideration, the howls you now get from a large portion of the Christian community for your stand on a YE 6-day creation (which I firmly believe) will seem like whimpers!

Robert Carter
'Gut' feelings are not good arguments. When emotions conflict with observation, one of them is generally wrong. And evolutionary ideas were found among the Greek philosophers, most of whom were geocentrists, so Copernicus was not needed for evolution. Instead, Copernicus was a leading figure in the Scientific Revolution, which was an outgrowth of Christian appeals to the use of logic in theology. It was the detour to atheistic 'scientism' during the Enlightenment (at the hands of those who hated the then religious underpinnings of science) that is to blame. And, as our track record certainly shows, we are not afraid of reprobation from even the Christian community. We took this stand on this issue because we feel it was the correct stand to take. Thus, there is no reason to give geocentrism even a breath of consideration.

Eric H.
I'm surprised that you didn't mention the recent book by Dr Hartnett, "Starlight, Time and the New Physics". He shows that a development of the Cosmic Special Relativity of Moshe Carmeli fits/explains the observable astronomical data well and does away with the need for dark matter and dark energy. I've forgotten too much maths to fully follow his arguments, but his conclusions are persuasive. Sadly, atheists will cling to the cosmological principle, which requires there to be no centre to the universe despite contradictory data.

Robert Carter
See footnote #52.

terry C.
The difficulty for me is why there are no scientific experiments showing the forward movement of our earth. Airy's experiment seems to prove our earth is stationary. Should we not reason from the general to the particular? I want to believe you are correct, because I mentioned my geocentrism to a few people, and now am facing religious persecution. I am so tired of being a misfit.

Robert Carter
But if the earth is not moving, Neptune is moving at nearly light speed. We have sent space probes past Neptune, but we do not have the ability to accelerate objects to faster-than-light speed. What other proof is needed? Earth and Neptune are both moving through space at orbital speeds predicted by Newtonian mechanics. Most experiments designed to test if the earth is moving were equivocal. "Type A" experiments can decide between two hypotheses, depending on the outcome. These are the ideal experiments. "Type B" experiments can decide between hypotheses if the outcome goes in one direction, but cannot decide between them if the outcome goes in the other direction. These are very common in science. "Type C" experiments cannot decide between hypotheses no matter what happens. In a paper titled "Airy's Water Telescope" (The Observatory 60:103-107, 1937), the editors stated, "In Relativity theory, Airy's experiment is trivial, as the observer is at rest relative to the telescope." Thus, Airy's experiment was a Type C. It tells us nothing, for both hypotheses predict no aberration. There is nothing wrong with being a misfit, with standing one's ground on biblical ideas, but let us pick the right battles.
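For a sense of the scale involved in the Airy test (standard numbers, not taken from the exchange above): stellar aberration has a magnitude set by the ratio of Earth's orbital speed to the speed of light,

$$ \tan\alpha \approx \frac{v}{c} = \frac{3.0\times10^{4}\ \text{m/s}}{3.0\times10^{8}\ \text{m/s}} = 10^{-4}\ \text{rad} \approx 20.6\ \text{arcseconds}, $$

a minuscule tilt, which is why so much rode on whether filling the telescope with water changed it.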
Kenneth D.
I have often been impressed with the intellect God has given to Jonathan Sarfati and the others at CMI. Congratulations on a very thorough refutation. My first thought was: why put so much effort into refuting something very few people actually believe? A quick internet search revealed that perhaps there are more absolute geocentrists out there than I realized. Unfortunately, it appears that many are basing their arguments on Biblical terminology, thus creating a possible point of contention among Bible believers. It seems as though the enemy of our souls will use anything he can to discredit God’s Word and get our focus off of the truth. Keep up the good work, and may God bless your efforts at disseminating truth with humility.

Robert Carter
We did not write this article only to rebut a minority position. It was also a chance to extol the great men who went before us, teach about applied logic, discuss one of the greatest scientific mysteries of the ages, and marvel at the universe we have under our feet and above our heads. There are so many threads wrapped up in this story that it is nothing short of amazing. Thus, the joy of writing was also one of our motivators. Thanks for the kudos.

Joe T.
Almost every Biblical verse says the sun is moving or standing still. God could just as easily have said we are moving.

Robert Carter
Of course, but the Bible is also not written to be a comprehensive science book. It is a book of history that focuses on the story of the Bride of Christ. God could have said a lot of things about all sorts of scientific issues that are not fleshed out in the text. And the sun IS moving in an earth-based reference frame, so there are no textual problems here. We dealt with this at length in the article.

David M.
I have never seen such a lengthy article on this site, which implies this much energy has never been spent on any other topic. This is quite disappointing as, I believe, with some extra research one would’ve found that each and every point made in this article has been properly refuted already. Conveniently, they've also been packaged up neatly into a book, Geocentricity: Christianity in the Woodshed, published last year and authored by Gerardus D. Bouw, PhD (Astronomy). The reason I feel this book is important, to this article in particular, is because in footnote 70 the article authors admit Bouw is “probably the best known geocentrist today”, however the only other footnote regarding his work is one where he coins the term ‘geocentricity’. All other arguments in the article refute the arguments of someone less preeminent in this field, if at all. I find that odd, but it is the reason I feel Bouw’s book covering every aspect of this article is essential reading.

Robert Carter
Note: this comment was modified prior to its publication. One hyperlink was deleted as per site rules and several personal comments were removed. And, although this person made a generous offer to supply interested people with the book he mentioned, this is simply inappropriate for a place like this. Readers who are interested in the subject can find such information offsite. We trust that, after carefully reading this article, the arguments from the small geocentric community will not be convincing. Concerning our rationale, it was not necessary to list and attempt to refute every argument made by modern geocentrists. Instead, we took a positive approach, built up our case carefully, and applied the best available scientific evidence to the problem.
The burden of evidence is squarely on the shoulders of the other side, yet I note that not one example of how our points have already been ‘refuted’ was provided. Of course, a comment box like this is not a place where these things can be debated. While we appreciate your heart for the subject, it would have helped to have at least one point made instead of the ‘elephant hurling’ tactic of dismissing all points in one fell swoop without citing any actual evidence.

Paul S.
This is an interesting and informative article. One thing that I would like to point out, however, is that nothing therein constitutes an argument against the thesis that the earth is, approximately, at the center of the universe. That thesis and the thesis that ‘everything revolves around the earth’ need to be kept clearly separate in discussing these matters. Like isotropy and homogeneity (the so-called Cosmological Principle), the thesis that the earth is, approximately, at the center of the universe belongs to the realm of metaphysical assumptions or speculations rather than that of empirical science.

Robert Carter
You are 100% correct, we did not discuss the possible position of the earth in the universe. However, there are several articles on this site that discuss this matter. Start here: Where are we in the universe?

R. D.
This is a really interesting article, and almost certainly the most thorough anti-geocentric treatment which has yet been penned by genuine Christians with a complete fidelity to God’s Word. Until now, the lengthiest which we had was Danny Faulkner's article from 1999, which—I think—unsuccessfully attempted to address the claims of Gerardus Bouw. Having read both sides, it was firmly my opinion that Dr. Bouw had the better of that exchange. I never quite accepted geocentricity, because—even though I had not examined the issues in anywhere near such depth as they are presented here—it always struck me as requiring too much of what I call non-regular God-force, even though—as this article freely acknowledges, which I think is very important—it IS possible to construct models which fit the observations. After reading this piece, I can begin to appreciate just how much physics has to be rewritten in order for geocentricity to work. I particularly like the emphasis on how, on many occasions, the seventeenth-century geocentrists were often more scientific than those who opposed them! One thing which I might suggest as an addition—though I know this is already much lengthier than the vast majority of CMI articles—is a handful of diagrams to show the various models. I know what many of them look like because I've read quite a bit on this matter before, but for those who are entirely new to it, diagrams of the Ptolemaic, original Copernican, original Tychonic, modern acentric (secular cosmology), modern geocentric, modern galactocentric (CMI’s favoured model), etc. models would, I think, greatly aid conception. Whether you choose to add these or not though—a very fine overview indeed. My compliments to Dr. Sarfati and Dr. Carter.

Robert Carter
You picked up the main point of the article: “I can begin to appreciate just how much physics has to be rewritten in order for geocentricity to work.” Thank you for the compliments and, as far as extra diagrams go, that is always a possibility …
ab initio quantum mechanical methods
Methods of quantum mechanical calculations independent of any experiment other than the determination of fundamental constants. The methods are based on the use of the full Schrödinger equation to treat all the electrons of a chemical system. In practice, approximations are necessary to restrict the complexity of the electronic wavefunction and to make its calculation possible. This definition supersedes an earlier definition of ab initio calculations.
Source: PAC, 1999, 71, 1919 (Glossary of terms used in theoretical organic chemistry), on page 1921.
Vigilantism Wins — Posted Friday November 19 2021
17-year-old vigilante killer Kyle Rittenhouse has been found not guilty of killing two people and seriously wounding another, based on his claim that he felt threatened, despite the fact that he instigated the events while being a minor and illegally carrying an assault rifle. Does this mean that as a Californian I can unlawfully take my registered large-caliber handguns over state lines to an open-carry state and shoot any number of people because I feel "threatened" by them? The Rittenhouse case has now set a precedent that says I can, and that I can always walk away a free man. Not just that, but I can do it again and again without fear of a guilty conviction—just as Rittenhouse is free to do now. It gives me some comfort knowing that, as a white man, I can now kill anyone knowing that all I have to do is claim fear of being physically harmed by them, regardless of the circumstances. God help the black man or minority who tries the same thing.

What's Wrong With Gravity? — Posted Sunday November 14 2021
PBS Spacetime's latest video questions our basic understanding of gravity, a force that was known to mankind ever since the first Homo erectus fell off a cliff. It wasn't described mathematically until Isaac Newton formalized his theory of gravity in 1687, building on his own laws of mechanics. That theory stood the test of time until the mid-1800s, when astronomers noticed that it did not correctly describe the orbit of the planet Mercury. Unwilling to consider the possibility that Newton's theory was wrong, astronomers believed that Mercury was being gravitationally shifted by another planet (dubbed Vulcan) orbiting between Mercury and the Sun, hidden from view by the Sun's glare. The Vulcan idea persisted until early 1916, when Einstein used his new theory of general relativity to precisely calculate Mercury's orbit (Einstein was delirious with joy for days following his discovery, since he knew then that Newtonian gravity was only an approximation to the truth). Then in the 1930s astronomers noticed that stars rotating around galaxies were moving too fast based on the amount of measured galactic mass. In order to preserve Einstein's theory, astronomers assumed there had to be extra hidden mass in and around galaxies to account for the faster stellar motions. Thus was born the notion of dark matter, said to be "dark" because it couldn't be seen and because it didn't seem to interact with anything (except gravity), including itself. But unlike the Vulcan theory, dark matter is today part of the Standard Model of Cosmology, in spite of the fact that for decades billions of dollars have been spent worldwide by experimentalists trying to detect the stuff, all to no avail. By comparison, little effort has been expended looking for a theoretical alternative to dark matter, an effort akin to the replacement of Newtonian gravity by Einstein's. The leading contenders to date have been modified Newtonian dynamics (MOND) and its relativistic cousin tensor-vector-scalar gravity (TeVeS), which despite some successes have failed to equal the explanatory power of dark matter. It seems to me that the current game plan for modifying Einsteinian gravity is to take the original theory and add various vector and scalar fields to it in the hope that something works. This appears to be something like curve fitting, since the addition of parameters to the theory is being done simply to match observation without a sound theoretical justification. Dark matter skirts this problem by simply adding more dark matter here and there until things jive. Consequently, dark matter is essentially non-predictive, since one can always cheat by tossing in more or less dark matter.
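To make MOND's basic move concrete, here is a minimal numerical sketch (my own illustration, not from the video), assuming a point-mass galaxy model and the commonly used "simple" interpolating function \(\mu(x) = x/(1+x)\); the mass and radii are arbitrary round numbers:

```python
import numpy as np

# Compare Newtonian and MOND circular velocities for a point-mass "galaxy".
G  = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
a0 = 1.2e-10           # Milgrom's acceleration scale, m/s^2 (empirical)
M  = 1e11 * 1.989e30   # ~10^11 solar masses, in kg

def v_newton(r):
    """Newtonian circular velocity from v^2/r = GM/r^2."""
    return np.sqrt(G * M / r)

def v_mond(r):
    """MOND circular velocity: solve mu(a/a0)*a = GM/r^2 with mu(x) = x/(1+x).

    That interpolating function gives a quadratic in a; the positive root is
    taken below. At large r the solution tends to a = sqrt(gN * a0)."""
    gN = G * M / r**2
    a = 0.5 * (gN + np.sqrt(gN**2 + 4.0 * gN * a0))
    return np.sqrt(a * r)

for r in np.logspace(19.5, 21, 4):   # roughly 1 to 30 kiloparsecs, in meters
    print(f"r = {r:.2e} m:  Newton {v_newton(r)/1e3:6.1f} km/s   MOND {v_mond(r)/1e3:6.1f} km/s")
```

The point of the exercise: as \(r\) grows, the Newtonian velocity falls off as \(1/\sqrt{r}\), while the MOND velocity flattens toward the constant \((GMa_0)^{1/4}\), the flat-rotation-curve behavior usually credited to dark matter, at the cost of the hand-inserted parameter \(a_0\), which is exactly the curve-fitting worry raised above.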
Here's one other thought: any theory of gravity should include one or more of the Riemann geometrical trio \(R, R_{\mu\nu}\) and \(R_{\mu\nu\alpha\beta}\). In some TeVeS theories, none of these are apparent, while scalar and vector quantities dominate. Here's the video, and you can decide for yourself where all this might be going:

Our Wallflower Attorney General — Posted Tuesday November 9 2021
Showing my age here, but older people will remember MGM's Droopy Dog character from the 1940s and 1950s. Slow and methodical to the point of being completely uninteresting and ineffectual, the character reminds me of current Attorney General Merrick Garland. You may recall that Garland was President Obama's pick to replace Antonin Scalia on the Supreme Court, but his nomination was effectively cancelled by Senator Mitch McConnell. Democrats cried foul, saying that Garland wuz robbed. But instead of fighting for Garland, Obama decided that he'd let it go, likely not wanting to appear as an uppity black man. Of course, once Obama was out of office the Republican Senate immediately approved President Trump's right-wing selection, Neil Gorsuch. That may be water under the bridge now, but President Biden's selection of Garland for Attorney General went ahead unimpeded, since it was only a consolation prize anyway (Supreme Court justices have lifelong appointments, whereas Attorneys General come and go with presidential terms). Garland now has the unprecedented opportunity to bring Trump and Friends to justice for their traitorous attempt to overthrow the government of the United States, but Garland is nowhere to be seen. As I've noted before, Garland is like the timid party attendee who's always scrooching to the back wall, trying not to be noticed. This is a heck of a way for an Attorney General to behave, given the fact that our country's very existence is at stake today. As Droopy would say, "Oh, dear!"

Speaking About Time \(\ldots\) and 1963 in Particular — Posted Saturday November 6 2021
The Moving Finger writes; and, having writ,
Moves on: nor all thy Piety, nor thy Wit
Shall lure it back to cancel half a Line,
Nor all thy Tears wash out a Word of it.
— The Rubaiyat of Omar Khayyam

You see, the past doesn't want to be changed. — Al Templeton, 11.22.63

In early 1963 I was a 13-year-old 8th-grade student in Mrs. Daphne Cunningham's English class. The class had been studying The Rubaiyat, and for the life of me I had no idea what it was all about. One day Mrs. Cunningham kept me after class, and we sat together to discuss the problems I was having with the poem. I remember the occasion distinctly, as all I could think about were my teacher's beautiful face and shapely legs dangling before me, physical aspects of womanly beauty that I was only then becoming aware of. Because of my stupidity she had to explain the poem to me, along with its underlying notions of love, death and the meaning of life, but it really made no difference, as all I could think about was that gorgeous woman sitting in front of me. Thinking back, maybe that was what the poem was all about.
While the future holds promise (and fear) for many people, few wish to travel into the distant future to find out what happens, assuming the return trip is impossible. But traveling to the past is something entirely different. We would all like to go back into the past, primarily for two reasons. One is to find out what really happened, to recover information that we believe has been lost forever, like who really killed JFK. The other is to relive the past in some sense—correcting our mistakes, marrying the girl we truly loved, or redirecting our lives into more productive avenues. But the past is past, and nothing can change it, while the scary and unknowable but otherwise uninteresting future is all that's left for us. Speaking about time (and JFK), you may want to watch the excellent 2016 mini-series 11.22.63, an eight-part television series adapted from Stephen King's overly long novel of the same name. Yes, it's all about preventing the assassination of JFK in 1963, and while the time-travel technology involved is rather nonsensical, it's entertaining just the same. Like many older people, I will never forget that day when, as a freshman high school student, I learned about the assassination. Awareness of girls, entry into high school (which I detested), and the JFK murder. Yes, 1963 was a heck of a year, but one I wouldn't want to revisit.

The Future is Quantum Physics, the Past is Classical Physics — Posted Saturday November 6 2021
I think it was Socrates or Augustine or one of those ancient guys who said that he understood what time was, but then didn't know whenever he was asked what it was. That seems to be just as relevant today as it was ages ago. In this new video from technologist Arvin Ash, the concept of time is raised for the umpteenth time in history. He doesn't get too far with it either, but he does bring up a few interesting points, motivated by this recent paper by noted physicist Lee Smolin. In a nutshell, Smolin says the past and future are respectively definite and indefinite, connected by the instantaneous and infinitesimally ephemeral present. This observation reminds us of the Copenhagen interpretation of quantum mechanics, which asserts that a quantum state is inherently unknowable and complex-valued until a physical measurement or observation is made, at which time the state's associated wave function collapses instantaneously to a single eigenstate with a unique, single-valued, real measured value. Wave function collapse is assumed to be instantaneous, and in that regard it resembles the present that Smolin discusses in his paper. Similarly, the future represents the infinite multitude of uncollapsed wave functions in the universe, while the past represents the infinite multitude of fixed, knowable eigenstates. Ash ties all this in with the notion of entropy, itself a topic that is subject to many interpretations. States of high entropy may be highly disordered (e.g., the churning molecules in a small box of compressed gas) and unknowable (the positions and momenta of the individual molecules are probabilistic and cannot be known with any precision), while systems of low entropy are highly ordered and can be known to high precision (the gas molecules slow down to a crawl or halt entirely). Ash points out that this characteristic of entropy is similar to that of wave functions and eigenstates, or the future and the past. But where does the present fit into all this? In the end, all this talk about quantum states and entropy doesn't tell us what time is.
And like the nature or purpose of human consciousness, we may never know.

Air Taxis Are Coming — Posted Friday November 5 2021
I've been following all-electric vertical take-off and landing (eVTOL) airplane technology for several years now, mainly as something that I might want to invest in. It's definitely coming along, and the technology might be sufficiently robust within five years or so, at least for the high-end air taxi service sector. Here's a reliable update on the possibility of near-term eVTOL aircraft, which seems well within technological capability provided that the energy capacity of on-board batteries is substantially improved in the next few years. My only criticism is that it doesn't adequately address the redundancy issue, which has to do with mid-air battery or propulsion failure, bird strikes or similar mishaps. The article is based in part on this recent paper by researchers at Carnegie Mellon University. Companies such as German-based Lilium are heavily invested in eVTOL aircraft with significant redundancy. Recognizing that a major percentage of energy expenditure occurs during take-off and landing, the company seeks to minimize energy loss during horizontal flight (cruising) using what is known as boundary-layer air ingestion (also the Coanda effect) in its electric ducted fans (see this YouTube video for details).

American Insanity is Thriving — Posted Thursday November 4 2021
When I received my third and final Moderna COVID vaccination yesterday, all I ended up with was a slightly sore right arm. I didn't glow in the dark. But when the respected science journal Bulletin of the Atomic Scientists reports that the lunatic conspiracy group QAnon claims that scientists have put the bioluminescent chemical luciferase into the vaccine supply so that the federal government could track commie-loving leftists around the country, I took notice. QAnon is the same outfit that (among other outrageous claims) published the claim that John F. Kennedy and his son John Jr. would show up alive in Texas, thus disproving their reported deaths (which of course was just another government conspiracy). Consider this: all it takes to cause a nationwide disaster is one lunatic with a "God-given" 100-round assault rifle slaughtering dozens of innocent people, although he may represent only a tiny fraction of the total population. Consider also that some 30-40% of American voters believe that Donald Trump won the 2020 presidential election, and that this same electorate believes a monomaniacal sexual predator and pathological liar should be restored to the presidency—by insurrection, if necessary—and be made Dictator-for-Life. Finally, consider the fact that if only several percent of Americans truly believe QAnon's lies (a figure that the polls support), then American democracy surely has no chance of surviving.

On Optimization — Posted Tuesday November 2 2021
Roughly three millennia ago, the Iron Age ushered in a revolution in tool and weapons manufacture. Iron ore was plentiful and cheap, but iron was expensive to produce, requiring vast reserves of wood to burn to create the high temperatures needed to refine the ore. Very early on, iron was forged into nails for building, and due to their high cost they were routinely reused (there is evidence that nails used in crucifixions were recovered for future use). Because of their value, nail dimensions underwent great refinement through trial and error. Too long or too thin, and nails would bend or break when hammered into wood; too short or too thick meant a waste of material. With the advent of calculus, it became a simple matter to employ even the early, rudimentary science of strength of materials to calculate the optimum dimensions of nails and other metallic fastening materials, such as rivets, bars and beams. The mathematics of optimization is used everywhere today, from packaging, scheduling and shipping to structural design and building. At the most fundamental level, optimization seems to have been built into Nature herself—the mathematical quantity known as action has the units of energy times time (or equivalently momentum times distance), and Nature seeks to extremalize (and usually minimize) the amount of energy, time, momentum or distance that some activity requires for completion (indeed, when the calculus of variations was discovered in the late 1600s, the power and inherent beauty of the action principle were collectively considered proof of the existence of God). But can every optimization problem actually be solved? Apparently not. The traveling salesperson problem is a famous example of an optimization problem that continues to elude an efficient exact solution. And according to this recent Quanta Magazine article, there are likely many such problems whose solutions are intractable.
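To see what "intractable" means here, consider the brute-force approach, a minimal sketch (my own, with made-up coordinates): checking every possible tour is perfectly exact, but the number of tours grows factorially with the number of cities.

```python
import itertools, math

# Hypothetical city coordinates, purely for illustration.
cities = [(0, 0), (1, 5), (4, 2), (6, 6), (3, 1)]

def tour_length(order):
    """Total length of the closed tour visiting cities in the given order."""
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Fix city 0 as the starting point so rotations of the same tour aren't recounted.
best = min(itertools.permutations(range(1, len(cities))),
           key=lambda p: tour_length((0,) + p))
print("best tour:", (0,) + best, "length:", round(tour_length((0,) + best), 3))
```

With the start fixed, n cities leave (n − 1)! candidate tours: 24 for the five cities above, but roughly \(8.8\times10^{30}\) for thirty cities, which is why exact optimization can defeat any computer and why the Quanta article's pessimism is warranted.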
Optimization has led to great discoveries in the efficient use of resources, but it is useless when those resources disappear. One of those resources is fossil fuels, whose extraction from the ground, refinement, distribution and use have been continually optimized since their discovery. But mathematical optimization should also include the costs of future consequences, such as environmental degradation and climate disruption. To date, humankind has intentionally chosen to ignore these costs, mainly because we are a short-term-thinking species that believes getting rich first is all that matters.

Script Writers and TV Viewers, Please Learn a Little Science — Posted Tuesday November 2 2021
My late wife and I were fans of the otherwise excellent mystery series Monk, starring the Lebanese-American actor Tony Shalhoub. But on occasion the plots (like those of many other shows) were so patently ridiculous that we eventually lost interest. Case in point: in the second season's "Mr. Monk Gets Married," a writer has hidden a fortune in gold with the single clue "It's in my journals." Turns out it really was—the writer melted his gold, mixed it with black ink and used the mixture to pen the pages of a thousand bound journals, so the entire fortune was hidden in plain sight on his bookshelves. Trouble is, gold melts at around 1,950 degrees Fahrenheit, while ink is carbon-based and would be instantly destroyed. Furthermore, the mixture couldn't be used to write anything anyway, as the gold would solidify and be useless as a writing material. I recently saw this episode again, and the stupidity of the plot ruined it once more for me. So long, Mr. Monk.

Neutrino Oscillation — Posted Friday October 29 2021
Neutrinos—chargeless elementary particles of spin 1/2 and very small mass, first postulated in 1930 and then experimentally verified in 1956—are known to come in three types: the electron neutrino \(\nu_e\), the muon neutrino \(\nu_\mu\) and the tau neutrino \(\nu_\tau\). For several decades they have been known to oscillate into one another, so that an electron neutrino created in the Sun can become one of the others by the time it arrives on Earth.
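In the usual two-flavor approximation, the probability of a flavor change follows a simple closed formula; here is a small sketch (my own illustration, with parameter values that are round numbers near published atmospheric-oscillation fits, used purely to show the oscillatory behavior):

```python
import numpy as np

def p_flavor_change(L_km, E_GeV, dm2_eV2=2.5e-3, sin2_2theta=1.0):
    """Two-flavor vacuum oscillation probability.

    Standard textbook form: P = sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E),
    with dm^2 in eV^2, baseline L in km, and neutrino energy E in GeV."""
    return sin2_2theta * np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Probability of a 1 GeV muon neutrino changing flavor vs. distance traveled.
for L in (100, 300, 500, 1000):
    print(f"L = {L:5d} km  ->  P = {p_flavor_change(L, 1.0):.3f}")
```

The probability rises, peaks and falls again as the baseline grows, which is the sense in which a neutrino born as one type "becomes" another somewhere along its flight path.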
As late as the 1980s it was hoped that neutrinos might account for the existence of dark matter, assuming that their mass was sufficiently large. But that mass has since been estimated to be millions of times less than the mass of the electron. In addition, such a small mass would give neutrinos relativistic velocities upon their creation, making them difficult to accumulate around galaxies under gravitational attraction. The assumed tiny mass of neutrinos is what has prevented them from being identified with dark matter. There has also been the possibility of one or more much more massive neutrinos, including the hypothetical sterile neutrino, also chargeless and of spin 1/2. Is it possible, I've often wondered, that neutrino oscillation can account for the creation of the sterile neutrino as a fourth member alongside the known three? After all, a composite of three neutrinos would still have a spin of 1/2 while remaining chargeless, and the binding energy might result in a much more massive particle. Recently, the presumed even distribution of neutrino oscillations has come into question, lending this possibility some credence. A new article in Quanta Magazine doesn't exactly address this possibility, but it does question the distribution issue. As a Christian, I've also wondered about the seeming oddity of God the Father, the Son and the Holy Spirit being one and the same. The neutrino was my answer to this, along with the electric and magnetic fields being one and the same due to special relativity. But I'll admit that's probably all wrong! There's still much more to be learned about neutrinos, which I see as the last remaining possible answer to dark matter.

My Biggest Fear — Posted Monday October 25 2021
Yale philosophy professor Jason Stanley talks about ten aspects of fascism in this new Big Think video. It's all so obviously true, but the allure of fascist leaders and strongmen never goes away in spite of all the education and intelligence of their deluded followers. Former President Donald Trump and most of his sycophants and enablers exhibit all of these traits, yet fully 74 million Americans voted for him in his failed bid for re-election in November 2020. We've had nearly a year since that time to discover, review, digest and analyze all the lies that came out of the man before and during his reign of terror, and yet there's a real chance that he'll return in 2024. And one thing seems certain to me—once the Republican Party gains control again over this country, they'll never let it go. Aside: At the video's 8:12 mark, you'll see the Arbeit Macht Frei ("Work shall set you free") sign that greeted doomed Jewish prisoners upon their arrival at the Auschwitz death camp of World War II. The sign was fabricated and erected by Jewish prisoners, but they had one opportunity to show their resistance—they intentionally set the letter "B" upside down. Their German overseers apparently didn't notice, but we can see it today as a subtle form of defiance. When fascism takes over Amerika, how will such defiance be demonstrated when every form of communication has been subdued by the GOP?

Stupidity — Posted Thursday October 21 2021
If someone were to call you stupid, how would you feel? The same as me: either angry, ready for a fight, or willing to walk away. Unless one is truly mentally impaired, I prefer the definition of stupidity as a learned cultural, emotional or political resistance to generally accepted facts and evidence.
It has nothing to do with intelligence, but everything to do with one's cultural conditioning. Here's a new video on "stupidity theory," which despite my objections to the definition of stupidity seems quite accurate: If 51% of the population were homicidal, would you feel secure wandering out in public? Certainly not! How about 49%? Again, certainly not. But what if only 0.01%, a figure that might represent the actual population? You'd venture out, but you might want to be cautious—but not paranoid. But now consider the fact that some 47% of the American population believes that Trump won the 2020 presidential election, that the COVID vaccine is to be avoided at all cost, and that (to an admittedly much lesser extent) Democratic Party leaders are drinking the blood of children to obtain the supposed life-extending benefits of the blood chemical adrenochrome. Sure, such people represent a minority of the American electorate, but are you comfortable with that? And are you also comfortable with the probability that this minority is currently driving American politics toward another Trump presidency? CNN's Don Lemon is despairing over the Democratic Party's seeming inability or unwillingness to deal with the stupidity of American voters, despite the fact that most Americans agree with President Biden's efforts to alleviate the negative impacts of homelessness, poverty, climate change and dwindling energy resources. But the polls say otherwise: the stupidity of Republican Party leaders is leading the way toward another Trumpian era. The Republicans seem to be saying that an immoral, pussy-grabbing monster is okay with them because "he gets things done." If Democrats can't get things done in spite of their majority in the executive and congressional branches, then all is lost.

A Different Approach to Modified Gravity — Posted Thursday October 21 2021
The current issue of the British science magazine New Scientist has an article by the American astrophysicist Paul M. Sutter entitled "Gravity, With a Twist." The title is appropriate, as it describes a theory of gravity that includes something called "torsion," or more generally teleparallelism. (I had a friend in high school who had to have a testicle removed because of torsion, but that's a different use of the word.) In Einstein's general relativity, there are two fundamental quantities that describe all of what we currently know about gravity. One is the metric tensor \(g_{\mu\nu}\) (which also has the upper-index form \(g^{\mu\nu}\)), which itself is the most fundamental of all tensors. It's assumed to be symmetric, so that \(g_{\mu\nu} = g_{\nu\mu}\). The other is the non-tensor connection \(\Gamma_{\,\mu\nu}^\lambda\), so-called because it "connects" vectors undergoing parallel transport and because it (and its first derivatives) make up what is known as the Riemann curvature tensor \(R_{\,\,\mu\nu\alpha}^\lambda \), which vanishes in flat spacetime but is non-zero whenever there's a massive object nearby. The connection is also assumed to be symmetric in its lower indices, so \(\Gamma_{\,\mu\nu}^\lambda = \Gamma_{\,\nu\mu}^\lambda \).
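For what it's worth (this definition is standard differential geometry, not something spelled out in Sutter's article), torsion is precisely what appears when that last symmetry assumption is dropped. The torsion tensor is the antisymmetric part of the connection,

$$ T_{\,\mu\nu}^\lambda = \Gamma_{\,\mu\nu}^\lambda - \Gamma_{\,\nu\mu}^\lambda, $$

which vanishes identically in general relativity. Teleparallel theories go the opposite way, working with a connection whose curvature is zero while its torsion is not, so that gravity is encoded in the twisting rather than the curving of spacetime.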
In Einsteinian gravity, the metric tensor and connection are related by the rather complicated expression $$ \Gamma_{\,\mu\nu}^\lambda = \frac{1}{2}\, g^{\lambda\alpha} \left( \partial_\mu g_{\alpha\nu} + \partial_\nu g_{\alpha\mu} - \partial_\alpha g_{\mu\nu} \right) $$ With these quantities, Einstein's gravity theory has passed every test ever made since its publication in November 1915, and its predictive power far exceeds that of the more familiar Newtonian theory of gravitation. In the 1920s, Einstein (and many others) tried to generalize the theory to incorporate a purely geometric provenance for electromagnetism. Einstein's idea was to abandon the notion that the metric tensor and connection were symmetric, and he hoped that the contracted quantity \(\Gamma_{\,\mu\lambda}^\lambda - \Gamma_{\,\lambda\mu}^\lambda\) might represent the electromagnetic source vector \(S_\mu\) (the connection is not a tensor, although the difference is a true tensor). But I digress. Sutter's New Scientist article describes how torsion might be able to resolve a number of perplexing problems in cosmology, including the nature of dark matter, dark energy and the Hubble tension, and how torsion might also be able to connect gravity with quantum mechanics via string theory. But adding torsion to general relativity is exceedingly complicated, as all of the theory's resulting equations have to be re-solved in a way that still accounts for terrestrial and cosmological observations. I've never been tempted to explore torsion for this very reason, but then I'm a lay idiot. One of the oldest books on my bookshelves is Einstein's The Meaning of Relativity, published in 1955 (the same year as his death). It's a fairly readable (if mathematical) account of general relativity, but it includes a chapter called "Relativistic Theory of the Non-Symmetric Field," which is essentially the theory of torsion. Completed in the late 1940s, Einstein thought he had finally achieved a consistent version of the theory, although it didn't have any predictive power. Upon Einstein's 70th birthday on March 14, 1949, his colleagues at Princeton's Institute for Advanced Study (which included Hermann Weyl, himself an early researcher into gravity theory) threw Einstein a party. His long-time secretary, Helen Dukas, baked him a cake whose icing summarized the key equations of his non-symmetric theory. When Einstein died of a ruptured abdominal aneurysm in April 1955, the attending nurse found several pages of calculations lying on the floor next to his hospital bed. They described more generalizations of his non-symmetric connection term, so we know that Einstein was at it right up to the very end. If he made it to Heaven, I can almost hear him saying to God, "I never would have guessed the answer was so simple!"

An Old Man's Memory — Posted Thursday October 21 2021
It just dawned on me that in two months (appropriately, on the first day of winter) I'll turn 73. Oh Lord, take me now. As I've noted many times before, I'm a complete idiot but I do have a great long-term memory. I can recall my mother bathing me in the kitchen sink at the age of two (me, not her), and spotting a mouse in the kitchen around the same time. But my most enduring memory is when my maternal grandmother came to visit from Illinois in the summer of 1952. I was only three, but already a dedicated fan of the early TV show Time for Beany (I didn't know it then, but Einstein was also a regular watcher of the show!)
My grandmother once caught me talking with two of my own imaginary finger characters, "BiBi" and "Saucer" (no doubt my take on Beany and Cecil), and I recall wondering how in hell she knew about them. Time for Beany (not to be mistaken for the much later stupid cartoon show of the 1960s) first aired in 1949. It was a puppet show, featuring many characters from the minds of Stan Freberg and Daws Butler, the show's creators (I remember always being scared when Dishonest John would show up). I distinctly recall one of the puppets, which at the age of three I presumed was a black crow. Now I realize it was intended as an African native, its stereotypical features being an artifact of the racist attitudes of the early 1950s. (I also remember my parents watching Amos and Andy, an even worse exemplar of racist 1950s America.) Many Time for Beany episodes can be viewed on YouTube, and they're of historical interest only. Don't worry, I've moved on.

Apparently, Even Tiny Sizes Matter — Posted Wednesday October 20 2021
Some 2,700 years ago, the value of the transcendental number \(\pi\) (pi) was known to be roughly 3, and that was good enough for most purposes back then:

And he made a molten sea, ten cubits from one brim to the other. It was round all about, and its height was five cubits; and a line of thirty cubits did compass it round about. — 1 Kings 7:23

Millennia later, pi got to be better known, first as 22/7, then 355/113 (try it!), then to hundreds of digits, and today to trillions of digits. According to Einstein's general relativity theory, clocks tick faster the farther they are from a gravitating mass. This has been proven experimentally many times for clocks, astronauts and satellites in Earth orbit; the time difference between terrestrial clocks and those orbiting above amounts to mere microseconds per day, but the calculated differences agree perfectly with those observed. Amazingly, the clock difference has now been measured across a height of just one millimeter at Earth's surface, according to this paper. A more readable version can be had here.
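A quick back-of-the-envelope check of the scale involved (my own numbers, not the paper's): for two clocks separated by a height \(\Delta h\) in Earth's gravity, the fractional frequency shift is approximately

$$ \frac{\Delta f}{f} \approx \frac{g\,\Delta h}{c^2} = \frac{(9.8\ \text{m/s}^2)(10^{-3}\ \text{m})}{(3\times10^{8}\ \text{m/s})^2} \approx 1.1\times10^{-19}, $$

so resolving a single millimeter of height means resolving about one part in \(10^{19}\) in frequency, which gives a feel for just how stable today's optical lattice clocks have become.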
Although a millimeter is still huge compared with quantum scales, it makes me wonder how time might vary at the subatomic level, where tiny but possibly important massive particles quickly pop into and out of existence due to quantum fluctuations and the Heisenberg uncertainty principle. This could not have been surmised over a hundred years ago, when the German mathematical physicist Hermann Weyl (who's still my hero) proposed that time might vary from one point to another depending on the path taken between the points. On the basis of a purely classical argument, Einstein nullified Weyl's idea, but today things look a lot better for Weylian theory, particularly as it applies to gravity and cosmology. See many of the articles I have posted on my weylmann.com site for further details.

Metropolis — Posted Wednesday October 20 2021
As a silent film buff, one of my favorite movies is 1927's Metropolis, produced in Germany over the period 1925-1927 and directed by the great Austrian-German filmmaker Fritz Lang. Various grainy versions of the film can be found on YouTube, but a fantastic, nearly complete remastered version is available at Amazon. It's a first in many respects—a gorgeous humanoid robot, tricky special effects, and a huge cast, filmed at enormous expense at a time when Germany's economy had not yet recovered from its World War I drubbing—but its socialist message did not sit well with the coming Nazi regime, which viewed underclasses with scorn. (Another great Lang film, 1931's M, also was not popular with the Nazis, due to the film's seemingly kindly attitude towards the mentally ill.) Here's a recent video that talks about some of Metropolis' lesser-known facts. I found it informative and interesting.

And On It Goes — Posted Tuesday October 19 2021
This is the first time I've come across these folks, who in this video discuss the ongoing battle between dark matter and modified gravity (MG). They also address the possibility that cold neutrinos (which would fit nicely into Einsteinian gravity with no modification) may be responsible for what otherwise appears to be dark matter, but such neutrinos have not yet been seen (they probably don't exist). I remain hopeful that dark matter is a myth, and that a modification of general relativity (Einstein's gravity) will eventually explain the mystery of dark matter. Based on this 1989 paper by Mannheim and Kazanas, I always felt sure that MG was the answer, but then this paper by Hobson and Lasenby came along this year that made me question everything. I try to explain the issue in this paper, but now I'm truly clueless about what the real truth might be. Dark matter remains what is arguably the most pressing problem in cosmology today.

Colin Powell — Posted Monday October 18 2021
General Colin Powell, ex-Secretary of State under George W. Bush, has died at the age of 84 due to COVID-related complications. My last full-time job was as Assistant Executive Officer in an engineering group oddly dominated by mostly pro-Bush Republicans. On the day that Powell gave his ill-fated February 5, 2003 address to the UN Security Council on the dangers of Saddam Hussein's supposed weapons of mass destruction, my office rolled out a television into the conference room, where the staff (but not me) gleefully listened to Powell's carefully orchestrated presentation, complete with exhibits, photographs, video and other evidence that Iraq had amassed nuclear and biochemical weapons scheduled to be unleashed upon Hussein's Middle East enemies (namely Israel). All hell would break loose unless the United States and her (reluctant and largely unconvinced) allies stopped Hussein in his tracks. It was all a sham, as the world was to learn later. The mobile biological weapons turned out to be weather balloons, and the aerial photos of amassed nuclear bombs turned out to be domestic and military equipment misread by American "intelligence" agencies. Powell subsequently apologized for letting the Bush Administration abuse his personal and military credibility, but it did not prevent the slaughter of hundreds of thousands of Iraqi civilians, the expenditure of some three trillion dollars' worth of American treasure, the loss of tens of thousands of US troops to death and injury in the war, and the attendant loss of America's credibility around the world. Meanwhile, American citizens yawned. They're still yawning. But the airwaves are now full of nothing but praise for Powell, despite his having misled the country into the most tragic military disaster since Vietnam.
One might be willing to forgive Powell, as misled as he was himself by Bush and Bush's pro-war cronies, but it seems nothing but glory will now be heaped upon Powell's memory. Meanwhile, George W. Bush continues to paint and tend his garden. His record as former President of the United States is also doing well, because the collective memory of American citizens is short-lived at best, while their stupidity and willful gullibility persist unhindered. After all, they elected Trump.

On Not Thinking — Posted Monday October 18 2021
This is a kind of follow-up to my last post, which had to do with the lure of authoritarianism for people in difficult times. It's also a follow-up to a post from long ago in which I discussed Nobel Laureate Daniel Kahneman's best-selling 2011 book Thinking, Fast and Slow, which asserts (among other things) that stressed people tend to rely on gut feelings and emotions rather than logic, facts and evidence. In short, when you're frightened you don't think, and that's when authoritarian leaders jump in and take advantage of things. Along this same line of thought is this new video from Big Think: Dang it all, God gave us humans the ability to think, reason and analyze, and He gave us the ability to use science as a defense against the hardships of disease, injury and tough times. When we resort to fear-based, emotional, gut-level actions (like electing amoral demagogues like Donald Trump), we're essentially telling God that we don't need the brains He gave us after all.

Applebaum on Authoritarianism — Posted Sunday October 17 2021
Political writer Anne Applebaum on conservative media commentator Laura Ingraham [my emphasis]:

The America of the present is a dark, nightmarish place where God speaks to only a tiny number of people; where idealism is dead; where civil war and violence are approaching; where democratically elected politicians are no better than foreign dictators and mass murderers; where the "elite" is wallowing in decadence, disarray, death. The America of the present, as she sees it, and so many others see it, is a place where universities teach people to hate their country, where victims are more celebrated than heroes, where older values have been discarded. Any price should be paid, any crime should be forgiven, any outrage should be ignored if that's what it takes to get the real America, the old America, back. — Anne Applebaum, Twilight of Democracy: The Seductive Lure of Authoritarianism, 2020

You might want to pick up Applebaum's short (120 pages) book to get an idea of how authoritarianism seems to be the go-to political ideology when regional and world affairs go awry and when frightened people look for any semblance of order despite its tragic cost. My only disagreement with Applebaum is her repeated use of the word "unity" when she really means the collective obedience, conformity and loyalty of ill-informed and scared people to those in power, regardless of how immoral, craven and power-hungry they are proven to be. Some 74 million Americans voted for Donald Trump in the 2020 presidential election. Although Joe Biden won the election with seven million more votes, Trump's tally is a truly frightening harbinger of what America has become.

Holy Authoritarian Takeover, Batman! — Posted Friday October 15 2021
Wow! At only $1.75, cartoonist Ruben Bolling's domestic insurgent set is a fantastic bargain with over 5,000 pieces.
The Donald Trump action figure is sold separately, but the set does include the Insane Pillow Man figure, a true collectible that's sure to increase in value when the country's democracy is overturned in 2024 by America's Republican traitors. President Biden, we hardly knew ye.

Unfinished Business — Posted Sunday October 10 2021
It's now been 115 weeks since the worst day of my life, and I'm far from over it. Hmmm, I had this grave marker installed over a year ago, but one of the dates seems to be unfinished. Hopefully that will be corrected before too long.

Going Strong — Posted Sunday October 10 2021
Indeed, politics since 2000 has been marked by the rise of populists—politicians who spurn "out-of-touch experts" and who claim to speak on behalf of millions of people with whom they in fact have no authentic connection, and in whom they have no genuine interest beyond securing votes to support their own often very personal agendas. — Fiona Hill, There's Nothing For You Here, October 2021

Yes, America, your favorite pussy-grabbing, pathologically lying, narcissistic sexual pervert Donald J. Trump is not only still around, but now fully in charge of the Republican Party. Noted political authors have repeatedly warned us of this, notably Alvin Toffler, Naomi Wolf and others. But most recently we have Fiona Hill, whose semi-autobiographical 2021 book There's Nothing For You Here explains the political and cultural crises America is now experiencing. It's all about populism and authoritarianism, which invariably arise whenever civilization undergoes an overwhelming shock or change of some kind. For America, it came with the 9/11 attacks, then the financial crisis of 2008, followed by the demanded rights of minorities and a twice-elected black President, all of which ushered in a panic on the part of undereducated (I mean stupid) white evangelical Americans, whose longing for the mythic 1950s days of Leave It To Beaver and Father Knows Best has diseased their minds into thinking that a thrice-married sexual predator (Republicans: I'm talking about Herr Trump) should be the glorious leader of our country (aside: in Shakespeare's Hamlet, the word "country" is really meant as cuntry). Meanwhile, the Democratic Party is viewed by right-wingers as the inverse of their desired perpetual Trumpian dictatorship, and by everyone else as a bunch of wusses and wimps. President Joe Biden (the so-called adult in the room) appears glaringly boring and near-senile, while much of his administration is viewed as ineffective. We all remember the days with then-Attorney General William Barr, whose obvious criminal support of Trump kept the media dancing, but whose replacement by shoulda-been-Supreme-Court-Justice Merrick Garland has resulted in what can only be described as a quantum-mechanical human vacuum state. He reminds me of the perpetual loser who somehow finds himself at a party, constantly scrooching himself against a back wall so no one will notice him. (Wait a minute—I was that loser in high school.) I don't watch much television these days, but on CNN today it was revealed that the media and its listeners only want negative news, as it's much more interesting than positive news. This is nothing more than a reflection of what American viewers want as entertainment, which is why we're hearing so much about Gabby Petito, Kim Kardashian and all those other jerks. Will Trump run in 2024? I'd bet on it, and I'd also lay odds on his winning. America is truly a pathetic joke.
Batteries Have Come a Long Way — Posted Sunday October 10 2021

We will not get serious about alternative energy and climate change until the last drop of oil on earth is burned. — My suggested mission statement for the American Petroleum Institute

For the past year I've been watching YouTuber Dave Borlace's Just Have a Think site, in which the technology expert addresses mostly issues involving climate change and emerging energy technologies, especially high energy-density battery technology. The latter is particularly interesting, as companies such as Tesla, Litton and others are making significant improvements to existing battery chemistry (both liquid and solid state), and it appears that within five to ten years the world will have low-mass batteries that can not only power automobiles over 1,000 miles on a single charge, but limited taxi-like electric airplanes as well. [Borlace is a hard guy to track down. He's a highly informed and interesting technical commentator, but my efforts to find out more about him have proved fruitless. The most I could find about him is this brief Medium.com article.]

At any rate, Borlace's latest video talks about the advantages (and serious challenges) of lithium-sulfur batteries versus the now-standard and pervasive lithium-ion technology. Tesla is currently making great improvements to the latter (which powers the Tesla series of electric automobiles), but the technology will likely hit a plateau at some point in terms of energy density, fire safety, recharge cycles and recharge time. Meanwhile, efforts to surpass lithium-ion are being made for various alternative lithium applications, including lithium-iron and lithium-aluminum, and the advantages of these technologies will only improve with time. Even if some of these can never be utilized in automobiles or aircraft, the promise of large, fixed-site energy-storage facilities utilizing these technologies looks extremely good. I'm now looking into investment opportunities for the coming wave of high energy-density battery technology, as its future looks very bright. You might want to watch Borlace's latest video, which will give you an idea of how fast research and development of battery technology is moving:

Speaking of the End of the World \(\ldots\) — Posted Tuesday October 5 2021

The End is Nearer Than We Think — Posted Monday October 4 2021

"Truly I tell you, this generation will certainly not pass away until all these things have happened. Heaven and earth will pass away, but My words will never pass away." — Mark 13:30-31

The noted Christian writer C.S. Lewis once referred to this passage as "the most embarrassing" verse of the New Testament, as we all know that when Jesus of Nazareth uttered it (around 30 A.D.) the end of the world didn't come to pass. Christian apologists have written books on the passage in an attempt to make Jesus's words true, even though the fabled "End Times" definitely did not occur within the lifetimes of the disciples he addressed (Mark tells us that Peter, James, John and Andrew were present, but the full complement of disciples may also have attended). Yet, some 40 years after Jesus's crucifixion the Roman army under Vespasian and his son Titus burned Jerusalem and the Jews' holy Temple to the ground, carting away its treasures and slaughtering hundreds of thousands of Jews in the process. This event certainly occurred within the lifetimes of most of Christ's disciples and followers. The word "generation" might refer to the expected lifetime of someone living in the 1st century A.D.
(my interpretation), although it could also refer to a host of other meanings depending on the exact sense of the Greek word \(\gamma\epsilon\nu\epsilon\alpha\). Many modern theologians believe that Jesus was just an itinerant apocalyptic preacher whose prediction of the fall of Jerusalem in 70 A.D. was either a lucky guess or an after-the-fact postscript of the early Gospel writers. But we also know that several of Christ's disciples were martyred in Rome around 64 A.D., less than 40 years after the words of Mark 13:30 were spoken, so the true meaning of the verse remains open to debate. In the Coptic Orthodox Church (my church), the word "generation" refers to all believers at all times, which is as good as any other interpretation. But regardless of all this, Christians today remain hopeful that Christ will indeed return in the End Days, although that time will most probably occur sometime long after we die.

This is the primary hope of all Christians, yet it would seem that physicists are also falling under the sway of a similar hope, at least when it comes to their ultimate fate. Last month's edition of Scientific American included this article by science writer John Horgan, who contends that the teleological matter of "what's the point of it all" weighs very much on the minds of many scientists today, whose theories to date have taken them about as far as experimentally possible. In his article, Horgan also raises the issue of terror management theory, which has to do with how we sentient humans have learned to cope with the knowledge of our inevitable demise. At its extreme, the theory says that all human endeavor—getting educated, making money, developing hobbies, having children and whatever—is nothing more than our subconscious effort to distract us from the fear of death, a fear that only we humans seem to be acutely aware of.

Horgan's article touches upon an issue that to my knowledge was first raised by noted physicist Frank Tipler, whose 1994 book\(^{**}\) The Physics of Immortality: Cosmology, God and the Resurrection of the Dead says that we'll all come back again at the "Omega Point," a time in the unimaginably distant future after the universe has undergone complete heat death. But it ain't literal heat you should be worried about: heat death means the universe has achieved maximum entropy through the dissolution and decay of all the matter, stars, galaxies and black holes, leaving nothing but a thin fog of stray photons. And it is this fate, Horgan writes—this unimaginably boring, bland, seemingly pointless fate—that is now keeping many physicists up at night.

The good news—if you are capable of accepting it—is that you and I will never have to worry about this fate of the universe (which modern cosmology says is certain and unavoidable), as we'll all be dead long before it takes place. And that's the entire point of what I'm saying here. The end is nearer than we think—we should be ever ready for the return of Jesus Christ, because for each of us the End Times come when we draw our last breath. Maybe that was what Christ was getting at all along.
\(^{**}\) Tipler's book opens with this moving dedication:

Dedicated to the grandparents of my wife,
the great-grandparents of my children

Jozefa Basarewska and Adam Rokicki
Shot to death by the Nazis in 1939, for the crime of being Poles

Jozef Basarewska
Tortured by the Gestapo, and died shortly thereafter

All three being citizens of Torun, Poland, the birthplace of Copernicus,
Who died in the hope of the Universal Resurrection
And whose hope, as I shall show in this book,
Will be fulfilled near the End of Time

Free Speech? — Posted Sunday October 3 2021

Felicia Hofner is a young native German from Munich currently living in Cincinnati, Ohio. Her popular YouTube site German Girl in America deals with cultural and political differences between Germany and America. Her latest video addresses a number of actions that are surprisingly illegal in Germany.

My first visit to Berlin and Munich impressed me with how open and sincerely apologetic Germans are regarding the Holocaust. Despite the ongoing presence of neo-Nazis and a distrust of immigrants (held by only a tiny minority of Germans), laws have been enacted in Germany designed both to secure the dignity of all persons living there and to affirm Germany's responsibility for the horrors of World War II. I already knew that it is illegal to say "Heil Hitler" or "Sieg Heil" in Germany, but according to Ms. Hofner it is also unlawful to publicly deny the Holocaust, whatever one's opinions might be on the subject. She explains that this is not a violation of free speech per se, but an acknowledgment of the inherent dignity of those living and dead who suffered as a consequence of Germany's past persecution of the Jews.

I could not help but compare the situation in Germany with that here in my country, where one can pretty much say anything in public, in the social media or anywhere else, regardless of how preposterously wrong, stupid or hurtful it might be. I am reminded of the American ignoramuses (almost solely white evangelicals from Red States) who openly denigrate minorities and deny mass shootings, climate change and vaccine efficacy, often with horrendous consequences. This is "free speech" in America. However, the state of Wisconsin recently did formally ban certain forms of speech, albeit speech involving critical race theory and historical slavery. "Multiculturalism, equity, racial bias, social justice" and other terms are now banned from official state legislative language, school districts and other public agencies. Of course, the bill was passed predominantly by Republican legislators in that state. God help this country.

Dark Energy Again — Posted Thursday September 30 2021

Astrophysicist Ethan Siegel's latest article discusses the nature of dark energy, which scientists believe accounts for some 68% of all the energy in the universe. Ordinary matter and radiation (protons, neutrons, electrons, neutrinos and photons) contribute only 5%, while the remaining 27% is thought to be dark matter, a strange form of matter that interacts with nothing except gravity. Over the past three decades billions of dollars have been spent looking for experimental evidence of dark matter, but to date all efforts have come up empty. Siegel posits the possibility that dark energy is just a property of spacetime itself, and is not a particle or a field.
Basic cosmological theory says that the universe is not expanding into anything (like a surrounding empty space), but that the expansion itself creates space, and this space may contain a constant energy density that happens to be dark energy. The few reliable estimates we have indicate that this energy density is very small but non-zero, and that it is positive and so results in a kind of antigravity field that tends to push ordinary matter apart. This would explain the observations of Perlmutter, Riess and Schmidt, who jointly won the 2011 Nobel Prize in Physics for their 1998 discovery of the accelerating expansion of the universe.

I once thought that the energy lost by photons (via the red shift) as they are stretched by expanding space might account for dark energy, but this idea has been discounted. Others have thought that the zero-point energy of the vacuum might be dark energy, but the nature of vacuum energy to date does not have a sound theoretical basis. Einstein himself proposed the cosmological constant \(\Lambda\) in 1917, which (when added to the gravitational field equations) can fully account for dark energy, but this too lacks a firm experimental basis. Meanwhile, Einstein's gravity theory itself holds that mass-energy is not strictly conserved, which violates the once-traditional law of mass and energy conservation. So what is dark energy, and where does it come from?

Stop Looking for Superman — Posted Thursday September 23 2021

Oh Superman where are you now
When everything's gone wrong somehow
The men of steel, the men of power
Are losing control by the hour.

— Genesis, Land of Confusion (1986)

Just months into his administration, President Biden is being blamed for all of America's faults, not only by conservatives but by his own party as well. Meanwhile, hideous moral monsters like Donald Trump cruise along, supported by willfully ignorant and stupid conservative fans who've led us once again into another COVID-19 disaster:

Meanwhile, COVID deaths are now averaging 2,000 a day. Great job, morons. Case in point: former Trump national security advisor Michael Flynn is now claiming that Biden operatives are putting COVID vaccine into salad dressing, and Trumpists are swallowing the lie whole hog. God help us.

As the Genesis song indicates, the human race has too many people doing too many wrong things. That was 35 years ago, but it applies even more today. With some 7.8 billion people on the planet now, all it takes is a tiny minority to ruin everything.

Racist Hypocrisy — Posted Thursday September 23 2021

Gabby Petito—so young, so beautiful, so white\(\ldots\)so much more interesting and important than missing women of color. This happens on a regular basis—a beautiful, young white woman goes missing or is found murdered, and all the news networks drop everything to focus on her. Police, volunteers, helicopters, surveillance aircraft and all manner of other resources combine forces to solve the mystery, and Americans are glued to their television sets waiting for the latest news. Meanwhile, many more women of color undergo the same fate, but are ignored completely by the media. Yet Americans pride themselves on being beyond racism. It's all because of ratings, and the media know perfectly well that viewership depends heavily on the color of the unfortunates. Shame on us.

Turtles — Posted Wednesday September 22 2021

And then, a sail appeared; it was The Rachel. The Rachel, who in her long melancholy search for her missing children found \(\ldots\) another orphan.
— Herman Melville, Moby-Dick

Today I watched 1999's The Thirteenth Floor again for the first time since my wife passed away. It's based on the possibility that we're living in a computer simulation, programmed and run by future programmers (human or otherwise), and I still believe the concept has some merit, although I don't truly think it describes the reality of our world. The concept has some religious aspects I won't go into here, except to say that some fellow members of my church also believe it's at least possible.

The 1999 film is based broadly on the 1964 story Simulacron-3 by American science fiction writer Daniel F. Galouye. It concerns the near future in which some computer programmers develop an artificial world inhabited by sentient people of their own design, all unaware that they're simulated. In the movie, one of the simulated characters discovers "it ain't real":

Later, "real-world" programmer Douglas Hall takes his own trip and makes a similar disturbing discovery:

Many philosophical articles have been written about the simulation hypothesis, including the possibility that it's not only just turtles all the way down but turtles all the way up as well. I've sometimes pondered the possibility that the linkage is somehow circular, so that the simulations all derive from one another, and that nothing is either real or unreal.

Coincidence? — Posted Thursday September 16 2021

"Once is happenstance. Twice is coincidence. The third time it's enemy action." — Auric Goldfinger

In quantum mechanics, anything that is allowed to happen, no matter how improbable, will happen. This is encapsulated in the Quantum Totalitarian Principle of Murray Gell-Mann, the late Caltech physicist and winner of the 1969 Nobel Prize in Physics, which states that "Everything that is not expressly forbidden is compulsory." And according to the so-called Law of Truly Large Numbers, even the most outrageously improbable event imaginable is likely to occur at some time. For example, over the complete history of our race some 110 billion human beings have lived out their lives and, with the countless events they've witnessed, at least a few seemingly impossible events will have taken place. Barring acts of God or clever tricks by humans, they can all be dismissed as coincidences.

Some forty years ago, the Japanese particle physicist Yoshio Koide discovered a formula that many scientists believe simply must have a scientific explanation. Given the currently known masses of the electron \(m_e\) and its identical (with the exception of mass) cousins the muon \(m_\mu\) and the tau \(m_\tau\), Koide's formula is
$$ \frac{m_e + m_\mu + m_\tau}{\left( \sqrt{m_e} + \sqrt{m_\mu} + \sqrt{m_\tau} \right)^2} = 0.666661\ldots \approx \frac{2}{3} $$
The slight difference from the exact 2/3 figure can be completely attributed to the uncertainties in the known masses. These uncertainties were larger when Koide first made his observation, but the formula has continued to converge on 2/3 since that time. It is entirely possible that with more accurate mass measurements, the formula will indeed yield precisely 2/3.

The formula may still be just a coincidence, but if improved measurements of the masses of these particles are able to add two or three more 6's to the formula, then physicists will likely have to accept the fact that new physics is involved. This is the take of astrophysicist Ethan Siegel, who discusses the formula in his latest online article. But I have an even bigger question: how in the world did Koide come up with this crazy formula in the first place?!
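Out of curiosity, the formula is easy to check for yourself. Here's a minimal Python sketch (the lepton masses are approximate Particle Data Group central values in MeV; treat the last digits as uncertain):

```python
from math import sqrt

# Charged lepton masses in MeV (approximate PDG central values)
m_e   = 0.51099895
m_mu  = 105.6583755
m_tau = 1776.86

# Koide's ratio Q = (sum of masses) / (sum of square roots)^2
Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))**2

print(f"Q   = {Q:.6f}")    # prints Q   = 0.666661
print(f"2/3 = {2/3:.6f}")  # prints 2/3 = 0.666667
```

Note that the tau mass is by far the least precisely measured of the three, so it dominates the uncertainty in \(Q\).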
What an Age — Posted Monday September 13 2021

My father was born in Illinois on January 8 1905, four years after the Victorian Period officially ended. That era spanned Queen Victoria's reign in England, from her accession in 1837 to her death in 1901, but its influences in terms of culture, politics and technology spread all the way to America. One of those influences had to do with the dressing of young boys as girls; here is my father in mid-1908 with long blonde locks, looking rather like a girlish Buster Brown, a popular comic strip character of the time:

The life expectancy in America when my father was born was only 49 years, so people had to grow up quickly compared to now. Education was still hit and miss—if you managed to graduate from high school, that was great, but college was a distant hope for most. Articles from my father's high school yearbooks (1919-1924) reflect the need for most kids to leave school, get a job, make money, get married, have a family, and hope for the best.

Oddly enough, germ theory was well established when my father was born, but routine chlorination of drinking water didn't arrive until 1906. Prior to that time, people died in droves from water-borne typhoid and cholera, caused mostly by contamination of the drinking water supply. Case in point: my father had three brothers, all of whom lived to old age except for Russell (born 1896), who unexpectedly fell dead on the floor in 1902, presumably because of cholera. I have an original page of his kindergarten scribbles, a rare artifact preserved by my late sister.

This and other issues are addressed by Joe Scott, an aficionado of the Victorian Period, although he generally confines himself to science issues. In the following video he discusses the Victorian era in some detail, and it's amazing how many of its cultural and technological issues compare with those of today's America:

My late wife and I were big fans of British television shows and movies, and one thing that always stood out for us was the strict formality that invariably existed between parents and their children in Victorian times (and even much later). Mr. Scott explains why this is so: the child death rate was about 60% before the age of ten, so parents were apparently reluctant to get too personally close to their kids, knowing that their survival into adulthood was very doubtful. With the advent of photography in Victoria's time, it became common to have a dead child or relative photographed to preserve their memory. Photographic memento mori (remembrance of the dead) dates back to the 1840s, but it reached its heyday in the 1850-1880 period. YouTube is replete with such photographs, which range from the serenely beautiful to the macabre.

Pretty Neat — Posted Friday September 10 2021

The Pasadena area experienced a great lightning and thunderstorm last night, sadly without much of the rain we so desperately need. But on my walk this morning I was pleasantly greeted with this sight, something I haven't seen in ages. It's no big deal in many parts of the country, but rare around here.

The Night of the Hunter (1955) Documentary — Posted Wednesday September 8 2021

The acclaimed actor Charles Laughton directed only one film, which was 1955's The Night of the Hunter starring Robert Mitchum, Shelley Winters, Lillian Gish, Peter Graves and James Gleason. The movie was not a financial success when it was first released, but it has since become a classic noir-ish film with subtle sexual and religious motifs and overtones.
The plight of the two children, desperately fleeing a homicidal maniac played brilliantly by Mitchum (in his favorite role of all time), underscores the recurrent theme of innocence fighting off the evil of the world while surrounded by hapless adults who are either unaware of the evil or are unable to recognize what is going on around them. This morning my older son alerted me to this recent 2-hour, 40-minute YouTube documentary, in which Laughton is depicted discussing the film and directing the actors during various outtakes and scenes. It's brilliant, and I'm so grateful that it's now available for viewing. (Thanks, Kristofer!)

At one point in the film, an exhausted, sleep-deprived John (played by Billy Chapin) is awakened in the dead of night by the idle singing of the pursuing killer, who always seems to be just one step behind John and his younger sister Pearl. "Don't he never sleep?!" he intones to himself, which to me represents the constant threat of unrelenting evil that pervades our world. Mitchum's repeated rendition of the old Gospel hymn "Leaning on the Everlasting Arms" is both charming and malevolent at the same time, and once you've seen the film the hymn will never sound the same again. Don't miss the film or the video.

Nuts on the Proof of the Gatic Theorem — Posted Saturday September 4 2021

The late cartoonist Gahan Wilson was a favorite of mine in the 1970s, and his NUTS series was a great take-off on the Peanuts strip. I remember this one, as it reminds me of Miss Woods' high school geometry class. I was never sick in high school, but I swear every day in her class was like this one.

Hossenfelder Again — Posted Saturday September 4 2021

I apologize for posting yet another of German physicist Sabine Hossenfelder's videos, but she addresses an issue that many cosmologists have been questioning for the past few decades, which concerns the validity of the Cosmological Principle. In short, the principle states that if one looks out at great distances, the universe appears to be both isotropic and homogeneous—that is, wherever one looks, the density of matter and radiation appears to be uniform. The Cosmological Principle is a bedrock assumption in standard cosmology, because it's really the only way the dynamics of the universe can be understood analytically. For example, the Friedmann equations (which to date have been remarkably accurate in describing the dynamical universe) could not be derived from Einstein's general relativity theory without the simplifications that the Cosmological Principle provides.

As Hossenfelder points out in the video, recent research has shown that if the Cosmological Principle does not hold absolutely, then the assumed existence of dark energy (and possibly dark matter) might be invalid. A universe completely devoid of dark energy and dark matter would most likely halt its observed expansion and recollapse in the distant future, possibly resulting in a new Big Bang. I've always wondered if the Cosmological Principle might be made more realistic if some kind of clumpiness or non-uniformity could be embedded in the formalism, perhaps by introducing some sort of random "fuzziness" in the large-scale distribution of matter and radiation. Maybe this kind of adjustment would explain the divergence seen in Type 1a supernovae data, which seems to imply that the universe is expanding at an accelerated rate. Indeed, the following graph published in 1999 won its researchers the 2011 Nobel Prize in Physics. Just wondering.
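For reference (my addition, not from Hossenfelder's video): under perfect homogeneity and isotropy, Einstein's equations reduce to the first Friedmann equation for the cosmic scale factor \(a(t)\),
$$ \left( \frac{\dot{a}}{a} \right)^2 = \frac{8 \pi G}{3}\, \rho - \frac{k c^2}{a^2} + \frac{\Lambda c^2}{3} $$
where \(\rho\) is the assumed-uniform density of matter and radiation, \(k\) is the spatial curvature constant and \(\Lambda\) is the cosmological constant. Every term here leans on the Cosmological Principle, and any clumpiness smuggled into \(\rho\) would feed directly into the values we infer for \(k\) and \(\Lambda\), which is why a failure of the principle could masquerade as dark energy.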
A Discovered Notebook — Posted Wednesday September 1 2021

I met my future wife Munira when she joined our laboratory in May 1972. She would usually sit alone eating her lunch and reading the Bible, but on occasion she would read from an old, tiny book that had wood covers. Later I learned that the book was The Imitation of Christ, written anonymously in the 15th century but usually attributed to one Thomas à Kempis. It was originally published in Latin, but has since been translated into many other languages.

During lunch I would occasionally sit and chat with Munira in the little garden outside the lab. One day she loaned me the book, although at the time I could hardly consider myself to be a devout Christian. But I did read it, and returned the book to her in late 1972. I honestly didn't give it much thought at the time. When Munira died in July 2019 I found the book in her handbag, where without my knowledge she had carried it throughout our 42 years of marriage.

Then yesterday, while going through some of her things in the garage, I found a copy of The Imitation of Christ that Munira had laboriously handwritten into one half of a notebook from her first year at the University of Cairo, Egypt in late 1963 (which is ironic—she started college the same year I started high school). She wrote it in English, her second language, and with some amusement I spotted a few grammatical errors that she made, although the content was correct. The other half of her notebook contained detailed notes she transcribed from lectures she had taken in her first year at university. Amazingly (to me), those notes were from her first physics course, which included elementary atomic physics. So there was my future wife, studying calculus-based physics when I was just getting comfortable with Algebra I in high school. In fact, I never took physics in high school, despite being fascinated with the subject from an early age.

I now treasure these notes from my dear late wife, who went on to earn a Master's Degree in chemical engineering while maintaining her Christian faith throughout her life. And I consider the juxtaposition of her physics notes with The Imitation of Christ to be a very profound message to me from her spirit, and I am deeply moved. Three months ago I was ordained a deacon in the Coptic Orthodox Church, the same church my wife grew up with in her native Egypt. During our marriage we attended the church only infrequently, but now it's a lifeline for me as well as an ongoing connection with her and with the Christ she loved. With God's help I hope to join her again someday in the Kingdom of Heaven.

The Chandrasekhar Limit — Posted Monday August 30 2021

In 1930, the brilliant Indian physicist Subrahmanyan Chandrasekhar derived the theoretical limit of how massive a white dwarf star can be before collapsing into a neutron star or a black hole. That limit today is about 1.44 solar masses, and of the hundreds of white dwarfs observed to date, none have been found to exceed it, confirming Chandrasekhar's work. I lost interest in stellar physics years ago, mostly because Chandrasekhar's mathematics was just too daunting. But here's a wonderful new YouTube video on the derivation of a decent estimate of the Chandrasekhar limit using mostly just algebra:

I find it interesting that Chandrasekhar's limit was ignored for decades, mostly because of the refusal of colleagues to believe it was valid.
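In the same back-of-the-envelope spirit as the video, here's a minimal Python sketch of the limit using the standard textbook formula \( M_{Ch} = \frac{\omega \sqrt{3\pi}}{2} \left( \frac{\hbar c}{G} \right)^{3/2} \frac{1}{(\mu_e m_H)^2} \), where \(\omega \approx 2.018\) is the relevant Lane-Emden constant and \(\mu_e = 2\) is the mean molecular weight per electron for a helium, carbon or oxygen white dwarf:

```python
from math import sqrt, pi

# Physical constants (SI units)
hbar  = 1.054571817e-34  # reduced Planck constant, J s
c     = 2.99792458e8     # speed of light, m/s
G     = 6.67430e-11      # gravitational constant, m^3 kg^-1 s^-2
m_H   = 1.6735575e-27    # mass of a hydrogen atom, kg
M_sun = 1.98892e30       # solar mass, kg

omega = 2.018236         # Lane-Emden constant for an n = 3 polytrope
mu_e  = 2.0              # mean molecular weight per electron

# Chandrasekhar mass from fundamental constants alone
M_ch = (omega * sqrt(3 * pi) / 2) * (hbar * c / G)**1.5 / (mu_e * m_H)**2

print(f"M_Ch = {M_ch / M_sun:.2f} solar masses")  # prints about 1.43
```

Remarkably, the limit is fixed entirely by \(\hbar\), \(c\), \(G\) and the hydrogen mass; no stellar details enter at all.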
In time his colleagues did come around, and in 1983 Chandrasekhar was awarded the Nobel Prize in Physics for his work. I can't help but feel that at least some of the difficulties he faced had to do with the same racist attitudes that the similarly gifted Indian mathematician Srinivasa Ramanujan had to deal with. On the basis of a series of brilliant but unsolicited calculations that Ramanujan sent to the noted Cambridge mathematician Godfrey Hardy, he was invited to England, almost out of curiosity on Hardy's part. Hardy became Ramanujan's mentor, but the racist attitudes and cultural differences Ramanujan had to endure in England were profoundly difficult for the young man, and he died at the age of just 32. A decent overview of Ramanujan's life can be glimpsed in the 2015 film The Man Who Knew Infinity.

Gone 25 Months — Posted Saturday August 28 2021

And now I'm lost, so gone and lost
Not even God can find me.

— They Call the Wind Maria

Siegel on Dark Energy — Posted Thursday August 26 2021

This recent Medium article by astrophysicist Ethan Siegel deals with the nature of dark energy and its possible effects on the future of the universe. Dark energy is represented by the cosmological constant \( \Lambda\) in Einstein's gravitational field equations, given by
$$ R^{\mu\nu} - \frac{1}{2}\, g^{\mu\nu} R + \Lambda g^{\mu\nu} = \frac{8 \pi G}{c^4}\, T^{\mu\nu} \tag{1} $$
The standard model of cosmology says that as the universe expands, ordinary matter (including dark matter) and radiation get diluted, but dark energy maintains a constant density \(\rho_\Lambda\), so in effect more of it is created as space grows. Dark energy has a kind of antigravity effect, so as the universe gets bigger the antigravity effect grows, resulting in the runaway, accelerated expansion of the universe. If the standard model is correct, and if \(\Lambda\) is truly a constant, then the universe will definitely expand without bounds forever. Consequently, matter and radiation will become so diluted that the universe in effect will simply disappear into a bleak nothingness. But Siegel raises the possibility that \(\Lambda\) may not be a true constant—it may increase in strength or grow weaker with time, with vastly different effects on the fate of the universe.

I've always felt that the \(\Lambda g^{\mu\nu} \) term in Einstein's field equations was an easy plug-in, since the covariant divergence of (1) (which implies a kind of mass-energy conservation) is assumed to vanish. The metric tensor \(g^{\mu\nu}\) is a constant under covariant differentiation, but if \(\Lambda\) is not a constant but a differentiable scalar then all bets are off with respect to the model. Similarly, it is also possible that the metric tensor does not vanish under covariant differentiation. This is called non-metricity, and the concept has been around almost since the dawn of Einstein's general relativity theory. Even if the combination \(\Lambda(x) g^{\mu\nu}(x) \) is somehow covariantly constant, the cosmological consequences of Einstein's equations would be radically different and far-reaching. Many theories have been proposed to date that examine these consequences, but as yet they have not agreed with observation. As the early 20th century astrophysicist Arthur S. Eddington once noted, "The universe is not only stranger than we imagine, it is stranger than we can imagine."
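To make the divergence argument above concrete: the contracted Bianchi identities guarantee that \(\nabla_\mu \left( R^{\mu\nu} - \frac{1}{2} g^{\mu\nu} R \right) = 0\), and metricity gives \(\nabla_\mu g^{\mu\nu} = 0\). Taking the covariant divergence of equation (1) then leaves
$$ g^{\mu\nu}\, \partial_\mu \Lambda = \frac{8 \pi G}{c^4}\, \nabla_\mu T^{\mu\nu} $$
so a position-dependent \(\Lambda(x)\) forces \(\nabla_\mu T^{\mu\nu} \neq 0\); mass-energy would have to be exchanged with the \(\Lambda\) term, and the usual conservation bookkeeping breaks down.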
Yes, Everything Vibrates — Posted Monday August 23 2021

German physicist Sabine Hossenfelder's latest blog was inspired by "two lovely ladies" who expressed their belief that they wouldn't contract COVID-19 because their personal "vibrational frequencies" are out of sync with that of the virus. I've heard of crazy beliefs like magnet therapy, homeopathic medicine, vaccine magnetization, astrological forecasting and other nonsense, but this one takes the cake. Hossenfelder's opinion mirrors mine, but she notes that there is some truth to the notion that "everything vibrates." This is true even at a temperature of absolute zero (0 Kelvin)—a particle at that temperature jiggles a tiny bit, due to the Heisenberg uncertainty principle. But I fear that Hossenfelder's guarded admission will be taken completely out of context by the women appearing in her video, assuming they bother to watch something on the Internet that does not feature the American flag, guns, Trump and competitive Krispy Kreme doughnut eating.

Back in the late 1960s we had harmless hippie behavior like love-ins and navel gazing, but modern pseudoscience like vaccine avoidance and willful stupidity can have truly tragic consequences. If the estimated 30% or so of Americans opt not to get vaccinated (out of allegiance to Trump or preferences for hydroxychloroquine, Clorox injections or veterinary drugs), then the current and projected variants of COVID-19 will be with us forever.

Airliner Disaster Over Duarte, or When World Lines Intersect — Posted Monday August 23 2021

I had long forgotten this event when I stumbled across this YouTube video, which describes an airline disaster that I witnessed back in June 1971. I left this comment on the YouTube site:

I had just graduated from college, and was walking home to my parents' house in Duarte, California from the Alpha Beta supermarket when I heard a distant explosion. Looking east, I saw something plummet straight down from the sky. After dropping off my groceries, I drove to the Fish Canyon area of Duarte only to find that the roads had been blocked off by police cars. I returned home, turned on the TV and learned that it had been an in-flight collision between a commercial passenger jet and a military aircraft. Later, there were rumors that the military jet had been doing a victory roll just prior to the collision, but it was subsequently judged to be an evasive maneuver by the pilot. God save the souls of those who perished.

In physics, a world line is the trace of one's life sequence through space and time. I find it tragically coincidental that the world lines of two aircraft might find themselves at the same point in spacetime. Since that day fifty years ago I have flown many times, but because of this event I still do not like flying.

The Horror! The Horror! — Posted Friday August 20 2021

As a junior at Duarte High School in 1966 I saw a film screened in the school's auditorium on the dangers of premarital sex. A U.S. Army film made back in 1944 to warn overseas soldiers of the hazards of venereal disease, it traces the sad tale of one unlucky soldier who discovers he's contracted a "fine dose" of VD. Just before announcing his discovery to a pal, his buddy asks "What's eating you?" I didn't see the unintended humor in that remark in 1966, but it's all too obvious to me today! While walking out of the auditorium that day, I recall one girl saying to her friend "Did you see that chancre sore? Ugh!"
I remember it distinctly because I had a fatal crush on that girl at the time. I suppose the film had its intended effect on her, but the message was largely lost on me, as I had no chance of having even a date in high school, much less experiencing sex. Back in those days kids talked about getting the "clap" (gonorrhea), which was easily treated with penicillin, while syphilis was another matter entirely. There was a widespread rumor in high school back then that the cafeteria workers were putting saltpeter (potassium nitrate) in the food to counteract the raging hormones thought to be prevalent in boys at the time. I'm still not sure if that rumor was true, nor if saltpeter even has such an effect. You can watch the 30-minute film at this YouTube site. Meanwhile, the ultimate venereal disease remains the one depicted in the critically-acclaimed 2014 horror film It Follows. It's recommended watching, and it might just change your mind about illicit sexual conduct.

The Kal-Els at Home — Posted Tuesday August 10 2021

I fell in love with science (especially physics and chemistry) while reading comic books as a child in the late 1950s and early 1960s. Staple fare then was Superman, Superboy, Action Comics, Adventure Comics and World's Finest Comics. "Kal-El" was Superman's name on his home planet Krypton, which exploded due to an instability of the planet's core, minutes after the young Kal-El was saved by his parents Jor-El and Lara, who shot their son into space via rocket. Kryptonians had no super powers because their sun was red, but when Kal-El landed on Earth he attained super powers due to our sun's yellow color. The remnants of Krypton were highly radioactive due to their conversion to kryptonite, but it was toxic only to Kryptonians. Pieces of Krypton occasionally found their way to Earth, providing many plot lines for the comic books. I never questioned the existence of a radioactive element such as kryptonite, although in 1962 I knew there were only 103 known chemical elements at the time.

My 8th grade science teacher at the time was Mrs. Wilson, a real battle-ax you really didn't want to mess with. She caught me talking in class one day, and demanded that I give her the number of known chemical elements. To her surprise I answered correctly, but she told me never to speak in class again while she was talking. Mrs. Wilson was probably around 40 years old then. She had a big impact on my life, and I wonder what happened to her.

By the way, names with "el" often have religious significance, the syllable being related to the God of Judaism and Christianity, with Michael, Gabriel and Raphael being the principal angels of orthodox Christianity. Conversely, names with "bel" are often related to wicked people, due to the connection with Baal, the pagan god of the ancient Canaanites and others. For example, Israel's 9th century BC King Ahab was seduced by Jezebel, who was ignominiously pushed out of a window, her body eaten by dogs. Meanwhile, my name is Bill, and I can only wonder what disaster awaits me.

Cuomo Resigns — Posted Tuesday August 10 2021

With some eleven women testifying that New York Governor Andrew Cuomo sexually harassed them, Cuomo has resigned. Meanwhile, former President Donald Trump, accused by fifteen women of sexual harassment and molestation, skates off unblemished. Something's wrong here, but the worshipers of Trump couldn't care less.

Learning Can Be Fun When You're Miserable — Posted Friday August 6 2021

First there was the COVID-19 alpha (\(\alpha\)) variant.
Then came the beta (\(\beta\)) and gamma (\(\gamma\)) variants, but they were minor players, as far as viruses are concerned. The delta (\(\delta\)) variant followed, and it's currently all the rage (literally). Now there's news of the epsilon (\( \epsilon \)) variant, and even a lambda (\(\lambda\)) variant. It appears that America is gonna have to learn the entire Greek alphabet before (or if) this thing is over. If it doesn't end, I hope we'll start learning the Arabic alphabet, as I need the practice.

Merritt's MOND Book — Posted Monday July 26 2021

I received my copy of David Merritt's 2020 book A Philosophical Approach to MOND (MOdified Newtonian Dynamics). At only 250 pages, it's a fairly quick and easy read (provided you have at least an undergraduate degree in physics and you skip over the book's preliminary philosophical stuff). I found that it deserves all the praise it has received from Germany's Sabine Hossenfelder and many others, although I don't think it will soon overturn the conventional Standard Model of Cosmology, which asserts that dark matter exists.

MOND basically says that gravity acts a little differently than what's taught from the usual \( F = ma \) point of view, with the gravitational force falling off as \(1/r \) rather than \( 1/r^2 \) at galactic distances. This means that at great distances stars will orbit their galactic centers at roughly constant speeds, which is what is observed for nearly every galaxy. Dark matter, on the other hand, accounts for these constant velocities by assuming the presence of dark matter "haloes" surrounding the galaxies, which preserves the usual Newtonian \( F = ma \) force law.

First proposed by Israeli physicist Mordehai Milgrom in 1983, MOND received sparse attention despite its ability to accurately predict stellar motions and other cosmological phenomena discovered in subsequent years. Its main drawback was that it was not a relativistic theory, although in later years modifications to Milgrom's theory appeared that removed this constraint via extended versions of Einstein's general theory of relativity.

Merritt's book presents three approaches to MOND. The first is Milgrom's basic idea (which seems to have been little more than an inspired guess). It's followed by a relativistic version developed by Milgrom and colleague Jacob Bekenstein. The third approach is a kind of mixture of MOND and dark matter, which Hossenfelder believes might be a dark matter superfluid. The Milgrom/Bekenstein theory came out in 1984, and to relativistically reproduce the successes of the original MOND theory they had to introduce a universal scalar field \( \psi \) into the formalism. But a flaw in this theory led Bekenstein to a 2004 revision involving a vector field \(A_\mu \) along with \( \psi\) that he planted into the fundamental metric tensor \( g_{\mu\nu} \). This in turn required the theory to have the kinetic part \(F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu \), which looks a lot like the electromagnetic field tensor. The final theory, which appears as the expression given by Equation (6.19) in the book, has at least five adjustable constant parameters, making it look (to me) a lot like curve fitting. Furthermore, the Ricci scalar \( R \), which traditionally accounts for the gravitational field, is nowhere to be seen (at least I can't find it). By comparison, Milgrom's original theory added just one constant parameter, \(a_0\), which causes the gravitational potential to fall off more slowly.
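To see how far that single parameter goes: in the deep-MOND regime the circular orbital speed around a mass \(M\) becomes independent of radius, with \(v^4 = G M a_0\). Here's a quick Python sketch (the galaxy mass is a made-up round number for illustration):

```python
# Deep-MOND regime: v^4 = G * M * a0, independent of the orbital radius
G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30    # solar mass, kg
a0    = 1.2e-10     # Milgrom's acceleration constant, m/s^2

M = 1.0e11 * M_sun  # hypothetical galaxy of 10^11 solar masses

v = (G * M * a0)**0.25
print(f"Asymptotic rotation speed: {v / 1000:.0f} km/s")  # about 200 km/s
```

That 200 km/s figure is just what flat galactic rotation curves typically show, which is the kind of success that made MOND hard to ignore.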
This reminds me of the theories of Hermann Weyl and Mannheim/Kazanas (which I have discussed many times elsewhere on this website), which look far more promising. Still, the accounts of the successes of MOND described in detail in Merritt's book are hard to overlook or dismiss, and I'm even more convinced that a consistent relativistic version of MOND will eventually prove that dark matter is nothing more than a convenient myth.

Two Years — Posted Sunday July 25 2021

Today marks exactly two years (104 weeks) since my dear wife Munira passed away. I'm trying to move on, get off the Prozac and the grief therapies, but it's just not happening yet. Here we are on her 45th birthday in 1991:

May God bless you and keep you always, dear Mimi, as I await and pray for the day when I will see you again in Heaven.

The Party of "NO" — Posted Wednesday July 21 2021

The Republican Party, spearheaded by Kentucky Senator Mitch McConnell and Texas Senator Ted Cruz (but really dominated by the disgruntled and still very dangerous ex-president Trump), has become the party of "NO," determined to block all of President Biden's plans to reinvigorate the economy, deal with climate change and infrastructure issues, and pursue serious COVID-19 mitigation. This situation reminds me of my favorite opera of all time, the five-act Italian opera Mefistofele, composed by Arrigo Boito in 1868. It's based on the famous story of Goethe's Faust, a scholar who sells his soul to the devil in exchange for beauty and knowledge. The opera includes the famous "Whistle Aria," in which the devil reveals his true nature:

I am the Spirit who denies
Everything always: the stars, the flowers.
My sneering and hostility
Disturb the Creator's leisure.
I want Nothingness and the Universal ruin of Creation.
My vital atmosphere is
What is called Sin, Death and Evil!
I laugh and snarl this monosyllable:
I destroy, I tempt, I roar, I hiss:
I bite, I ensnare, I destroy, tempt, roar, hiss.
I whistle, whistle, whistle, whistle, eh!
[whistles loudly]
I am part of the innermost
Recesses of the great All: Obscurity.
I am the Child of Darkness!

What an appropriate description of today's Republican Party.

Why Distance and Size Don't Matter — Posted Wednesday July 21 2021

Science writer Ben Brubaker talks about Bell's inequality in today's online Quanta Magazine, where he summarizes the notions of locality and non-locality of events in spacetime, and how the late physicist John Bell destroyed the idea that two events occurring far apart cannot take place simultaneously. Violation of Bell's inequality has since been demonstrated many times experimentally. Some years ago I showed that the inequality is truly violated in elementary quantum mechanics, and that two distant but entangled events can indeed occur at the same time.

[There is one simple way to show that this is true for one extreme case. The distance or interval between two events is expressed by the invariant line element \(ds\), which in Cartesian coordinates is given by
$$ ds^2 = c^2 dt^2 - dx^2 - dy^2 - dz^2 $$
At the speed of light, \( ds = 0\), which means that physical distances (and time itself) become meaningless. If you were a photon, you would perceive yourself as existing everywhere in the universe at the same time, and in this sense you would be immortal as well since you would not perceive the passage of time at all.
This might seem to be a paradox to a human, since turning on a light and letting the photons impinge on your cornea would make it seem that a photon is born at one instant and destroyed shortly later. Thus, to a human a photon's lifetime is finite, while the photon would see itself as having no birth or death. But this is nothing but the famous twin paradox taken to its ultimate extent.]

But all this made no sense to Einstein, and he tried to refute it in 1935 in a famous paper he wrote with collaborators Boris Podolsky and Nathan Rosen. It was not a proof, exactly, as Einstein simply argued that the concept of locality prevented the effects of one event from affecting a distant event faster than the speed of light. But quantum mechanics asserts that two entangled particles share the same quantum state, and so the distance between the two makes no difference.

I see this as meaning that the physical distance between two events in spacetime is in a sense meaningless, and since distance is a measure of the size of an object, the concept of size is essentially meaningless as well. This is the gist of a theory proposed by the German mathematical physicist Hermann Weyl in 1918, in which he asserted that the distance between two points in spacetime is affected by the path that one takes to get from one point to the other. It was Weyl's initial intention to show that the path is affected by a combination of gravitational and electromagnetic fields, but this idea was quickly overturned by Einstein himself. It was subsequently shown in 1929 that Weyl's idea applied not to gravity but to quantum theory, as the principle of gauge invariance. Gauge invariance is now a fundamental cornerstone of quantum physics, where it is believed to lie at the basis of all physics.

David Merritt on Dark Matter — Posted Tuesday July 20 2021

David Merritt, former professor of physics at the Rochester Institute of Technology, writes in Aeon Magazine that it's about time the standard model of cosmology began to accept the very real probability that dark matter, like the 19th-century idea of the luminiferous aether, simply does not exist, and should be replaced by a relativistic version of modified Newtonian dynamics (MOND). Merritt's 2020 book on the subject, A Philosophical Approach to MOND, has received great reviews, along with the support of notable physicists like Germany's Sabine Hossenfelder. I purchased Merritt's book from Amazon, but I haven't received it yet. However, from what I've read about it so far I tend to be in complete agreement with his views.

Einstein's gravitational field equations are traditionally derived by a variation of the simple action quantity
$$ S = \int \!\! \sqrt{-g}\, \left( R - 2\Lambda \right) \, d^4x $$
where \(R\) is the Ricci scalar and \(\Lambda\) is the cosmological constant, with \(R\) being a function of the fundamental metric tensor \(g^{\mu\nu}\) and its derivatives. With \( T^{\mu\nu}\) being the energy-momentum tensor representing mass-energy, the variation (together with a matter action) yields Einstein's field equations
$$ R^{\mu\nu} - \frac{1}{2}\, g^{\mu\nu} R + \Lambda g^{\mu\nu} = \frac{8 \pi G}{c^4}\, T^{\mu\nu} $$
where \( R^{\mu\nu}\) is the ten-component Ricci tensor (in most applications, \( R^{\mu\nu}\) and \( T^{\mu\nu}\) have only the four non-zero diagonal components \( R^{00}, R^{11}, R^{22}, R^{33} \) and \( T^{00}, T^{11}, T^{22}, T^{33} \)).
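For concreteness (a standard illustration on my part, not something taken from Merritt's article): in the vacuum outside a static mass \(M\), with \(T^{\mu\nu} = 0\) and \(\Lambda = 0\), these equations yield the classic Schwarzschild line element
$$ ds^2 = \left( 1 - \frac{2GM}{c^2 r} \right) c^2 dt^2 - \left( 1 - \frac{2GM}{c^2 r} \right)^{-1} dr^2 - r^2 \left( d\theta^2 + \sin^2 \theta \, d\phi^2 \right) $$
from which most of the classic solar-system tests follow.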
Solutions of the four associated differential equations yield fantastically successful predictions of planetary motion, the deflection of starlight, gravitational lensing and gravitational redshift, all of which have been observed to nearly perfect precision. But observation of certain galactic-scale phenomena (stellar velocities and galactic clustering) indicates that there is more gravitational attraction than the visible matter can account for, which has led to the now-accepted notion of dark matter making up the missing mass. Einstein's field equations are still valid, but the now decades-long search for dark matter particles and/or fields has come up completely empty, leading to the idea that dark matter does not in fact exist and that Einstein's gravity theory needs to be modified. I'll comment on Merritt's book once I've read it, but I've long believed (like Einstein himself) that Einsteinian gravity theory is only an approximation of the truth. To me, relativistic MOND is a step in the right direction.

Recent Stuff — Posted Wednesday July 14 2021

Here's a recent paper by some Russian physicists on cosmological aspects of Hermann Weyl's 1918 gravity theory, which suggests that the Weyl vector \(\phi_\mu\) might have something to do with dark matter. I think it's rather far-fetched; I once believed that Weyl's theory was at least partially true (it may still be), but I never connected it with dark matter (which I don't believe exists).

Meanwhile, I'm reading German physicist and neuroscientist Alexander Unzicker's 2015 book Einstein's Lost Key, which posits that if the speed of light were not an absolute constant (an idea that Einstein himself thought might hold some water) then notions like early cosmological inflation theory might be wrong. This idea piggybacks on other notions that the Newtonian gravitational constant \(G \) might vary with time (which Paul Dirac once proposed) or that the fine-structure constant \(\alpha\) might vary as well. Such ideas have never been demonstrated experimentally, and I've always thought they were crazy, but then we humans have only been around a relatively short time, so we wouldn't be able to detect the changes even if they did occur. Here's a YouTube video posted by Unzicker on the ideas he proposes in his book. It's interesting, along with many of his other videos, but I've always wondered who he's lecturing to, as his audience is never shown.

McGaugh on Dark Matter — Posted Monday July 12 2021

Noted observational cosmologist Stacy McGaugh of Case Western Reserve University has long railed against the conventional belief that dark matter exists, despite the hundreds of millions of dollars spent so far trying to detect it experimentally. It supposedly doesn't interact with anything (including itself) except gravity, and is otherwise invisible, tasteless and odorless, yet it somehow accounts for some 80% of all the matter in the universe. A whole slew of ideas have been proposed regarding its nature, including cold neutrinos, sterile neutrinos, axions and massive photons, although pixie dust has presumably been eliminated. McGaugh leans toward modified gravity theories as the preferred alternative to dark matter, the simplest version of which is modified Newtonian dynamics (MOND). There are many different types of modified gravity, all based on Einstein's original theory of general relativity, and many of these theories agree with observations, with some important shortcomings.
The advantage of dark matter is that it can be made to agree with observations by simply sprinkling it around here and there as needed. McGaugh's latest post on the subject appears on his website Triton Station, along with many earlier posts on the same topic. He also composed the following graphic, which perfectly summarizes my own views on dark matter, which forms half of the lambda-cold-dark-matter (\(\Lambda \)CDM) model of modern cosmology:

Most Assuredly, It's Safe and Effective — Posted Monday July 5 2021

The chemical element radium has numerous isotopes, all highly radioactive. The most common is radium-226, which was discovered by the Curies in 1898. Although the Curies quickly discovered its toxic properties, by the mid-1910s it had acquired a false reputation as a cure-all for any number of illnesses and conditions. It was also used in luminescent paint, as it glowed green in the dark. With America's entry into World War I, the paint was commonly used to coat the dials of wrist watches, which was of particular value to the military during night operations.

Radium-based paint was applied to watch dials by manufacturers of wrist watches and other timepieces, most often by low-paid young women who used small, fine brushes to apply the paint. To achieve a fine point on the brush, the women (now referred to as the radium girls) would regularly pass the brush tips along their moistened lips. This introduced radium into their bodies by direct oral exposure and by ingestion. But at the time radium was considered not only harmless but beneficial, so there was no concern regarding the health of the employees. Within a few years, however, these women began to exhibit severe health problems, including oral cancer and tooth and jaw degradation. But the companies providing the radium (notably U.S. Radium, whose paint was sold under the brand name Undark) and the watch factories aggressively denied the role that radium played in these problems. They continued to avoid any responsibility for the thousands of severe illnesses and deaths that resulted from their products, and it took nearly three decades for health authorities to fully recognize radium's dangers.

Here's a 1921 magazine ad for Undark extolling radium's supposed health and beauty benefits. It represents one of the earlier examples of the modern scientific and chemical quackery common to the era, which included the use of drugs such as cocaine and morphine in over-the-counter medications. But the practice continues to this day, in the form of homeopathic products and treatments, many megavitamins, innumerable beauty products, hair-growth formulas and magnet therapy. The now-prevalent conspiracy theory phenomenon is largely based upon such ignorance, despite overwhelming logical and scientific evidence to the contrary. God help us.

Ruben Bolling Does It Again — Posted Saturday July 3 2021

There's so much packed into this strip that I can't begin to summarize it.

The Consequences of Infinite Energy — Posted Saturday July 3 2021

I took a single graduate class in cosmology/astrophysics many years ago, and remember not liking it at all. Now it seems it's all I think about. Here's physicist Sabine Hossenfelder (again) talking about cosmological inflation, energy non-conservation in general relativity and the creation of baby universes.
It's true that the old high-school law of energy conservation does not hold in general relativity, which to me means that infinite energy is always available, an idea that I immediately attribute to the Almighty Creator. In a nutshell, here's why energy conservation does not occur in general relativity. The ten Einstein field equations are given by
$$ G^{\mu\nu} = R^{\mu\nu} - \frac{1}{2}\, R g^{\mu\nu} + \Lambda g^{\mu\nu} = \frac{8 \pi G}{c^4}\, T^{\mu\nu} $$
where the covariant divergence (you'll have to look that up) of both the Einstein tensor \( G^{\mu\nu}\) and the energy-momentum tensor \( T^{\mu\nu} \) vanishes. This indeed implies a kind of conservation. But true mass-energy conservation depends upon the vanishing of ordinary partial-derivative divergences. This difference is profound and irreconcilable, so mass-energy is not conserved. Bottom line: don't worry where the energy comes from. It just comes.

If You're 72 Like Me, It's Later Than You Think — Posted Thursday July 1 2021

It's July already?!

Sketchy Data — Posted Thursday July 1 2021

An ancient human-like skull has come to light. What did its owner really look like? We've all seen forensic reconstructions of faces associated with ancient and modern skulls, often in attempts to identify unknown murder victims. Forensic scientists and artists use their skills to reconstruct the faces, but how often have you seen comparisons of the reconstructions with actual photos of the victims? Almost all of the time there's very little real resemblance other than agreement with cursory or basic facial features. It's the same story with artists' sketches of criminals based on eyewitness descriptions. Sketchy data often leads to sketchy facts.

I often find myself coming back to the following graph, which is now over twenty years old. It also involves what I consider to be sketchy data:

The work behind this graph won its researchers the Nobel Prize in Physics in 2011. It basically says that the farther away certain standard candles (known as Type 1a supernovae) are from us, the fainter they appear compared with what the Standard Model of Cosmology was telling us back in 1999. The data begin to diverge from the expected straight line at great distances, implying that the expansion of the universe is accelerating with time. This phenomenon has been linked to dark energy which, if it strengthens with time, may result in the so-called Big Rip, a point in the very distant future in which stars, galaxies and even atoms will be torn apart by cosmological expansion. Although I believe that dark matter is likely non-existent, I do believe in dark energy, which is modeled by Einstein's prematurely discarded cosmological constant \( \Lambda \). It's really only a matter of whether \( \Lambda\) is truly constant, or increasing or decreasing with time.

But like facial reconstruction, I've not seen any significant improvements in the data that the above graph summarizes. There's simply too much scatter in the data points at high red shifts, and the exact nature of Type 1a supernovae themselves is not precisely known. For example, the metallicity of pre-nova stars (the abundance of elements in the stars beyond hydrogen and helium) must surely influence their characteristics during supernova events, such that the assumed 1.44 solar mass limit (see my previous post) is violated to some extent. Nevertheless, the validity of the Type 1a model is generally accepted, and is now part of the \( \Lambda\)CDM ("lambda-cold-dark-matter") model of modern cosmology.
I find this perplexing because, despite enormous theoretical research and experimental effort, no one yet has any real idea what dark energy and dark matter might be. The Straw That Breaks The Cosmic Camel's Back — Posted Tuesday June 29 2021 A cold, incoherent object that exceeds approximately 1.44 solar masses will collapse in on itself, creating one of two possible remnants—a neutron star or a black hole. A black hole is a simple object that contains only mass, angular momentum and possibly electric charge (although instreaming charged particles would eventually cancel the hole's net charge). By comparison, a neutron star is highly complex, consisting of a thin atmosphere of stray neutrons and a solid crust, mantle and core of unimaginably dense neutronium (in actuality, an element with an atomic number of zero!) The star would also likely be spinning at a high rate, a consequence of the conservation of angular momentum of the uncollapsed star. The gravitational pull of the star is so strong that even the slightest imperfection on its surface would be smashed flat, something akin to a microbe on the surface of a bowling ball. The extreme pressure at the star's core is itself a source of gravity (in general relativity, pressure gravitates) which, combined with the star's already incredible gravitational compressive force, is simply mind-boggling. But should the star somehow gain sufficient mass, via infalling matter such as interplanetary gas, planets, asteroids or other material, to exceed the neutron-star stability limit (the Tolman-Oppenheimer-Volkoff limit, thought to be roughly two to three solar masses), it is then expected to collapse into the comparatively featureless object we call a black hole. All ordinary massive objects obey an equation of state (EoS), which simply relates the object's density to its internal pressure. The familiar ideal gas law \(PV = nRT\) is an EoS, but for solids it can be much more complicated. Indeed, no one knows what the EoS might be for a typical neutron star, assuming a well-defined one even exists. It is easy to imagine a neutron star that's right on the cusp of becoming a black hole. Might dropping a paper clip on the star result in its collapse? Unlikely, as the star's EoS is probably forgiving enough to prevent it. But just how forgiving can it be, since the star must certainly be gaining mass over time as it absorbs infalling matter? One wonders how neutron stars can exist at all, since that stability limit must surely be reached over reasonable cosmic time periods. Perhaps most neutron stars are so far below the limit to begin with that it would take many billions of years for them to accrete sufficient matter to become black holes. Now astrophysicists have detected two instances of a neutron star "accreting" a black hole, with the stunning result that the black hole simply swallows the star whole, leaving a more massive black hole and a huge outgoing gravitational wave. Such waves have now been detected, giving us an even more interesting example of gravitational astronomy. Hossenfelder on Coincidences and Conspiracies — Posted Saturday June 5 2021 If you have a grid of evenly spaced straight lines on a piece of paper and you toss a bunch of needles onto the paper, you can approximate the value of the transcendental number \( \pi \) by counting how often the needles cross the lines (the classic Buffon's needle problem). Non-mathematicians may wonder how \( \pi \) shows up in this exercise, but it's not a coincidence.
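Curious readers can check this numerically. Here's a minimal Monte Carlo sketch of the experiment (Python assumed; the needle length is taken equal to the line spacing, and the function name is mine, purely for illustration). Note the small cheat: sampling the needle's angle already uses \( \pi \), so this illustrates the geometry rather than deriving \( \pi \) honestly from scratch:

```python
import random, math

def estimate_pi(n_needles=1_000_000, length=1.0, spacing=1.0):
    """Buffon's needle: estimate pi by tossing needles onto ruled paper."""
    hits = 0
    for _ in range(n_needles):
        y = random.uniform(0.0, spacing / 2)      # needle center's distance to nearest line
        theta = random.uniform(0.0, math.pi / 2)  # needle angle relative to the lines
        if y <= (length / 2) * math.sin(theta):   # needle crosses a line
            hits += 1
    # For length <= spacing, P(cross) = 2*length / (pi*spacing),
    # so pi is approximately 2*length*n / (spacing*hits).
    return 2 * length * n_needles / (spacing * hits)

print(estimate_pi())   # typically prints something near 3.14
```

With a million tosses the estimate usually lands within a hundredth or so of \( \pi \), which is about what the slow \(1/\sqrt{N}\) convergence of Monte Carlo methods allows.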
Noted German physicist Sabine Hossenfelder's latest video addresses the coincidence issue via the notions of human pattern identification and false positives, explaining how and why we tend to assign underlying agenticity and conspiracies to purely coincidental events, much like the Peanuts gang's interpretation of overhead clouds: Hossenfelder's talk is one of the best I've seen, and it explains a lot about the foibles and tragedies of human behavior, in particular the sad tendency of weak-minded right-wingers to persistently believe in PizzaGate, the drinking of children's blood by Democratic Party leaders and similar crazy conspiracies actively or passively promoted by the Republican Party. AI Lincoln — Posted Monday May 31 2021 I have an old book called The Lincoln Reader, which I've had forever; it relates many personal stories and anecdotes about Abraham Lincoln, my favorite president. Although he lost a chunk of his lower jaw during a molar extraction in the early 1840s, those who knew him said he had a fine set of healthy white teeth and frequently smiled, often when telling an off-color joke. Early photos of the day almost never showed people smiling, a consequence of the proprieties of the times and the fact that early photography usually took several seconds to minutes to capture an image. So people would just sit there, motionless and expressionless, and that's the way we see them today. With the advancing technology of artificial intelligence we now have relatively accurate ways of showing people in old photos as they might actually have appeared. Here's a new YouTube video that shows a smiling Lincoln, along with varying hair styles. Enjoy. Sometimes artificial intelligence doesn't work too well. George Washington wore almost full dentures in his later years, sporting false teeth fashioned from human, horse and cow teeth. Here he looks just like Soupy Sales in drag. Fine Tuning — Posted Saturday May 15 2021 In his latest video, YouTube's "Who Gives a Bleep?" Arvin Ash talks about the fine-tuning argument, which deals with the nature of the fundamental constants of our universe, such as the gravitational constant, the fine-structure constant, the magnitude of the elementary electric charge and many other parameters that govern the workings of the observable universe. The problem involves the realization that if these constants were even slightly different (either individually or collectively), the universe would either not exist or would not be capable of creating or sustaining life. It's a short video, and well worth watching: As I see it, there are only two possibilities: either there is a creator (God or an external entity such as a computer programmer in a simulated universe), or the multiverse theory (or the many-worlds interpretation of quantum mechanics) is valid, in which case there are numerous (possibly infinitely many) universes, each having varying fundamental constants, and we just happen to be living in one in which life is possible. Perhaps the most important aspect of Ash's talk is his assertion that we cannot talk about the probability of which argument is right, since we have no statistics on which to base a probabilistic argument (we observe only one universe, and a sample size of one is insufficient for any calculation). This situation perfectly mirrors the familiar "science vs religion" argument, and it's unlikely that we'll ever be able to resolve it (but perhaps both are true).
The Goat Problem \(\ldots\) Again — Posted Friday May 7 2021 If you tie a goat to the outside of a barn and let it graze, how much grass can it clear? This might have been a practical math problem centuries ago, but Quanta Magazine seems to think it's still relevant today. What's interesting is that the problem becomes much more difficult if the goat is tied to the inside of a closed barn, regardless of the barn's shape. That couldn't have been much of a problem way back when, but it's still occupying mathematicians today. I addressed the problem of a goat in a circular fenced yard on December 11 last year, but now Quanta has upped the ante with a square yard. This time I didn't bite, but you can try it yourself if you have nothing better to do this weekend. More Geek Stuff — Posted Friday April 30 2021 German physicist Sabine Hossenfelder's views on dark matter vs modified gravity have changed. For the details, you can watch her latest video on the subject here. For the few who care about such things, back on March 27 I posted some comments on a recent paper claiming that the Mannheim-Kazanas (MK) spacetime metric did not predict flat stellar rotation curves in galaxies. I disagreed, and I've been waiting for Philip Mannheim to publish a rebuttal. But now there's another paper that supports the claim that the MK spacetime does indeed predict flat rotation curves, by analyzing the effective potential associated with the metric. In the paper's analysis, the \(\gamma\) term plays an important role in defining the radius at which stellar velocities approach constant values far from their galactic centers. This radius is the same one I came up with. The MK metric is the only spacetime I've seen that is free of any added ingredients (such as scalar and vector fields) yet still reproduces the predictions of the standard Einstein field equations. Like the Einstein-Hilbert action, the MK action (first derived by Hermann Weyl 100 years ago) is pure geometry. Alien Visitation — Posted Saturday April 24 2021 There are countless YouTube videos depicting UFO sightings, and an equal number of people claiming they've been abducted by extraterrestrials for experimental purposes. Writer Devan Taylor talks about the difficulties associated with interplanetary visitation by advanced alien species, perhaps the biggest being "Why the hell would they come here in the first place?" If extraterrestrials could ever visit Earth, there are only a few possible scenarios. One, they're malevolent, in which case we're already toast, since they would obviously be far more technologically advanced than we are (discounting the entertaining but otherwise preposterous Independence Day film); two, they're benevolent, in which case we'd have already made contact with them; and three, they're just unobtrusive observers, in which case we have today's situation. I personally believe there is intelligent life out there, but we'll never make contact with them because they're just too far away and just too darned spread out, so stop thinking about UFOs, aliens and all that nonsense. But meanwhile you can muse over the purely imaginary value of alien visitation, as depicted in this video clip where the Simpsons get abducted (obviously, aliens will need to have powerful tractor beams): What's Up with the Arizona Ballot Audit?
— Posted Saturday April 24 2021 The Arizona Republican Party has hired a right-leaning Florida-based company called "Cyber Ninjas" (I kid you not) to assist in the audit of some 2.1 million ballots cast in the state during the November 2020 presidential election. Although no irregularities have been reported, the Arizona GOP just wants to be sure that Biden won fair and square. Reports immediately surfaced that the auditors were illegally using blue and black pens to verify the ballots, in violation of the state's audit rule that only red pens may be employed (computer scanners can falsely count ballots altered with black or blue ink). When the violation was reported by an Arizona reporter, the auditors switched to red pens, after which the reporter was banned from the audit site. I bring up this story because on April 20 I posted some thoughts on data fluctuations and their significance. When it comes to counting election ballots, there will always be a few innocent mistakes, such as human and computer counting errors, damaged ballots and other systematic and non-systematic errors. Typically, if these mistakes account for less than a tiny percentage of the total count, then the election is declared fair. If not, one or more recounts may be requested. In the Arizona case, no such irregularity was seen, but Arizona Republicans decided to proceed with a recount anyway, just to be "sure." Given that the Arizona ballot hen house is now under watch by Republican foxes, it will be interesting to see how the recount ends. They don't have to falsify enough ballots to give Donald Trump the victory in the state, just enough to declare the election invalid. This would then spread to other Red states with anticipated similar results, giving the GOP the chance to declare Biden a fraudulently elected president. I sincerely believe this is their game plan. America may see a new Civil War after all. Is It Something I Said or Did? — Posted Thursday April 22 2021 Let us always guard our tongue, not that it should always be silent, but that it should speak at the proper time. — St. John Chrysostom It was June 9, 1962, the very last day of the 7th grade. Puberty had apparently hit the girls hard earlier that year, as they were all fawning over a classmate who'd been a good friend of mine for as long as I could remember. At morning recess the girls were all crowding around him with their cameras, and I felt jealous for the first time in my life ("Get out of the way, Bill, we want to take B's picture!") One girl in particular was practically foaming at the mouth over B, her hormones in overdrive, and in a moment of rage I blurted out "Susan, you make me sick!" Susan wasn't particularly attractive, and I certainly had no interest in her, but I resented her obsequious behavior over my friend. Near the end of the day, Cindy F. asked me why I was so mean to Susan, but I don't recall how I answered her. I just learned from an old classmate that some years later Susan committed suicide, having been clinically depressed ever since grade school. This news hit me like a sledgehammer, as I instantly recalled that day in 1962 and my heartless remark. Now I'm filled with remorse over something that happened almost sixty years ago and whatever part I might have played in that poor girl's demise. Watch what you say, because you might not have a chance to make up for it. An Inspector Calls, 2015.
I'm Still Confused — Posted Tuesday April 20 2021 In 2001 the E821 experiment at Brookhaven National Laboratory in Upton, N.Y., found hints that the muon's magnetic moment diverged from theory. At the time, the finding was not robust enough because it had a statistical significance of only 3.3 sigma: that is, if there were no new physics, then scientists would still expect to see a difference that large once out of 1,000 runs of an experiment because of pure chance. The result was short of 5 sigma — a one-in-3.5-million fluke — but enough to pique researchers' interest for future experiments. — Scientific American You may have read about "5 sigma," the statistical criterion for determining whether a new scientific discovery is valid or not. Most recently, the magnetic dipole moment of the muon (a heavier but otherwise identical version of the electron) has been measured to great accuracy, but it differs very slightly from the theoretical value determined via quantum field theory. This difference is currently irreconcilable, and it amounts to something called 4.2 sigma. "Sigma" is just a measure of the standard deviation, computed assuming the applicability of the normal (Gaussian) distribution to experimental measurements. The relevant quantity is the area under the Gaussian curve lying outside a symmetric range about the mean, the endpoints of the range being expressed in standard deviations. For example, for a standard deviation of 3.3 the area under the curve in the range \(z = -3.3 \) to \( z = 3.3 \) for the Gaussian integral $$ y = \frac{1}{\sqrt{2 \pi}}\, \int_{-3.3}^{3.3} e^{-z^2/2} dz \tag{1} $$ is about 0.99903. The area outside this range (\(1-y\)) is 0.00097, the inverse of which is about 1,030. This is taken to mean that there is only about one chance in a thousand that the measurement is an erroneous statistical fluke. Similarly, for a standard deviation of 4.2 (based on the latest muon data) the chances are only about one in 40,000 that the measurement is a fluke. What I don't understand is that all the reports are claiming that the 5 sigma "discovery" statistic means only one chance in about 3.5 million. But a straightforward calculation using (1) shows that this is only one chance in 1.75 million, exactly half of the number being reported. Is this a one-tailed or two-tailed thing? The answer is yes: the quoted one-in-3.5-million figure is the one-tailed probability (the area in a single tail of the curve, the kind usually quoted as a p-value), while (1) computes the two-tailed probability, which is exactly twice as large. So it's basically one-tailed vs two-tailed, no real difference in substance, although I wish to hell they'd be consistent with what they're reporting. Update: German physicist Sabine Hossenfelder's latest video addresses the issue of sigma significance, noting that even 6 sigma can be misleading. Math For Fun — Posted Tuesday April 13 2021 For years I've watched the math videos that Steve Chow has posted on his popular YouTube channel Blackpenredpen, which mostly covers undergraduate-level calculus. Chow is an instructor at Los Angeles Pierce College, and while he doesn't have a PhD in mathematics he's the guy I wish I'd had when I was in school. Most of his videos present innovative and clever methods and tricks used to do seemingly impossible integrals, but he often changes gears and does purely algebraic problems. One of those that still fascinates me is the infinite tower-power problem $$ x^{x^{x^{x^{\cdots}}}} = 2 $$ (which, to the best of my knowledge, has no practical application whatsoever, but it's a fun problem).
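Numerically, you can truncate the tower at a large finite height and then bisect on \(x\). Here's a minimal sketch (Python assumed; the function names are mine) that lands on the answer derived next:

```python
def tower(x, height=200):
    """Evaluate the truncated power tower x^(x^(x^...)) of the given height."""
    t = x
    for _ in range(height - 1):
        t = x ** t
    return t

def solve_tower(target=2.0, lo=1.0, hi=1.44):
    """Bisect on x so that the (deep) tower equals the target.
    hi is kept below e^(1/e) = 1.4446..., beyond which the infinite
    tower no longer converges."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if tower(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(solve_tower())   # 1.41421356..., i.e. sqrt(2)
```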
It's easy to derive the solution \(x = \sqrt{2}\) (if the tower equals 2, then the exponent sitting on the bottom \(x\) is itself the same tower, so \(x^2 = 2\)), but the seemingly easier finite tower-power problem $$ x^{x^{x}} = 2 $$ is far trickier. It turns out that exponentiation is not associative: \( x^{(x^x)}\) is not the same as \( (x^x)^x\). In this video, Chow uses Newton's iterative method to solve for \(x\), getting \( x = 1.476684\ldots\). As is easily shown, however, the solution to \( (x^x)^x = 2\) is just \(x=\sqrt{2}\) (which is also the solution to the infinite tower-power problem), but this disagrees with Chow's solution. I'm puzzled by this, and I wonder if anyone can show me where I'm going wrong. Chow starts out Newton's method with the guess \(x = 1\), so perhaps that's a bad start to the process. Okay, I just found this video that explains everything. It's called tetration, which I knew about (but didn't know there was a convention for it: towers are evaluated from the top down, so \(x^{x^x}\) means \(x^{(x^x)}\)). My Ignorance — Posted Thursday April 8 2021 Today is my late wife's birthday. She would have been 75, and it still breaks my heart that she is not here today. Here she is on her 60th birthday, as beautiful as ever. The last few days I've been going through boxes containing many hundreds of old letters that my wife and her brother received from (and wrote to) their parents and friends in Cairo, Egypt and in Kinshasa, Zaire, where my father-in-law taught chemistry. He took the job because it paid in American dollars (not Egyptian pounds), and it was therefore a godsend, as it helped finance his adult children's emigration to America. They came to Los Angeles in 1970, as the weather was nearly the same as that in their native Cairo, and because they had a few friends in the church there to help them adjust to their new country. Munira and I were married 42 years, not counting the 5 years I knew her from before. Outside of some limited and very broken Arabic, I never learned the language, and while looking at the letters (all of which are 40 to 50 years old), I now deeply regret my ignorance, as I would love to read them now. I will be sending all of them to my wife's brother, the only one remaining in the family who can translate them. Hopefully, with his help I'll learn more about my wife's earlier years, although I also hope to see her again before too much longer. Is It God, or a "Smelly Hacker"? — Posted Thursday April 8 2021 There are currently a lot of puzzles in the physics world, but two stand out. One is the discrepancy in the Hubble parameter, which has been measured to pretty decent accuracy by two different methods that don't agree with one another, and the other is the measured and calculated magnetic dipole moment of the muon, which are also significantly different. The Hubble parameter is a measure of the expansion rate of the universe, and it's either about 67 or 73 kilometers per second per megaparsec. There's some error in the observed values, but the error bars don't overlap, hence the discrepancy. On the other hand, the measured and calculated magnetic moments of the muon (a much heavier variant of the electron) are extremely close to one another, but there is a slight difference that physicists cannot explain. It might be experimental error or a statistical fluke, but those explanations are considered highly unlikely. Physicists are hopeful that the difference points to new physics beyond the Standard Model.
But there's another possibility, one that arises from the famous Simulation Hypothesis, which posits that all reality (including human consciousness) is the creation of an advanced computer programmer or hacker (smelly or not) who resides outside of the simulated universe. If the hypothesis is correct, then even the most advanced programmer (perhaps even God) could not be precisely perfect in creating the simulation, and there would always be glitches that show up in the simulated universe, glitches that might be observable to the universe's inhabitants. This possibility is explored in this recent segment of Closer to Truth: In view of the large disparity between the two measured values of the Hubble parameter, I highly doubt that it's a glitch of any kind; more likely it's experimental error or an erroneous assumption in the Standard Model of Cosmology. The muon dipole moment problem is far more interesting, as the predicted value is based on highly reliable calculational methods that have been used to fantastic success in other applications. Could the one-part-in-a-million difference between experiment and calculation be a glitch, or could it be "new physics"? More advanced experiments are planned, so hopefully time will tell. PS: Sometimes I imagine that I'm a very old man, getting up in the morning, making coffee, then walking over to my large living room window to watch what's going on out there. What I see today is a fabricated Cretaceous landscape of herbivorous and carnivorous dinosaurs going about their business, thanks to a high-resolution, three-dimensional holographic glass pane that serves as my window to the world outside. Of course, the outside world is the usual boring place of cars going by and neighbors walking their dogs, and by turning off my fully programmable window I can see what's really there. But I prefer to gaze out upon the Grand Canyon, the Eiffel Tower or the Andromeda Galaxy, or if I choose I can watch the pyramids being constructed during Egypt's Old Kingdom. The possibilities are endless, and it also makes me wonder whether my own existence isn't a computer fabrication of some kind. The Waste Land (Television, Not T.S. Eliot) — Posted Thursday April 8 2021 I just can't watch cable TV news anymore (although I have AT&T TV, not cable), given the non-stop coverage of ugly politics, COVID-19, the ongoing racist treatment of minorities, perpetual mass shootings and all the rest. I put up a flat antenna on my wall and am now watching over-the-air (OTA) television programs, which don't cost anything. The local news is still depressing, but the number of available programs is about as good as basic cable TV. It's also given me the opportunity to tune into shows that my family and I watched in the 1950s, 1960s and 1970s, which were almost de rigueur viewing in those days because there wasn't anything else to watch. With the sole exception of The Honeymooners, all the shows are pathetically insipid, with paper-thin plots and nonsensical situations. The family-oriented sitcoms are the worst (Family Affair, The Partridge Family, Donna Reed, etc.), with their well-coiffed, squeaky-clean (and white) children and their invariably highly educated and successful parents (doctors, lawyers, architects, businessmen and engineers) posing as ordinary middle-income folks raising preposterously talented offspring who danced, sang, played instruments and put on fantastic stage shows.
And of course no single parent was divorced back then (only widowed), thus preserving the sanctity of marriage. And all the men had served in the military, which offered plenty of opportunity for the shows to feature patriotic tableaus of one sort or another, always to comedic and heartwarming effect. Good Lord, how much of that garbage did I consume without thinking in those days? I can only imagine how much better I'd have turned out if my parents had never purchased a television in the first place. "April is the cruelest month \(\ldots\)" Enough — Posted Monday April 5 2021 Truth be told, I'm sick to death of the non-stop Derek Chauvin trial coverage on CNN and MSNBC, not to mention the glib commentary by the various hosts of those news networks. The videos and testimonies of eyewitnesses pretty much made the case against Chauvin for me, but the extensive trial coverage has become nothing more than a partisanized joke. The liberal-leaning CNN and MSNBC are of course pushing for a murder conviction, while the ultra-conservative Fox News, One America Now and other racist networks are either playing the trial down or not covering it at all. Let's be honest here: if we could replace Derek Chauvin with a black police officer and George Floyd with an addicted white civilian who passed a counterfeit bill, the network biases would be reversed, and the trial would have ended long ago with the black police officer convicted of first- or second-degree murder. Instead, by dragging out the trial ad nauseam the justice system will be forced into a lot of hemming and hawing over who's guilty due to the various legal subtleties and medical minutiae being presented, with the result that there's a very good chance the confused jury will hang and Chauvin will walk. His police career will be over, but then he'll write a best-selling book and make repeated guest appearances on Fox News, so he won't be hurting financially. Fortunately, there's still a civil trial pending, and I can only hope that Floyd's family will end up getting a big chunk of whatever money Chauvin earns for the rest of his pathetic racist life. Come On, Feet, Don't Fail Me Now — Posted Wednesday March 31 2021 It has always killed me when lay people ooh-and-aah over the weightlessness of astronauts aboard the International Space Station, which orbits the Earth at some 17,300 miles per hour. At roughly 200 miles above the Earth's surface, the gravitational attraction is still over 90% of what we all feel here, and it's the orbital velocity and the associated centrifugal force that provide the weightlessness. So it's not outer space we're talking about, people, but the effects of orbital speed. It's easy to calculate that if you were to travel at about 17,700 miles per hour, you could orbit the Earth weightlessly at an altitude of 1 foot, barring air resistance and collisions with buildings, trees, Uncle Jack and other obstacles. General relativity states that time runs slower the closer one gets to a gravitating mass, a proven feature of the theory that is crucial to the functioning of the Global Positioning System (GPS) that our iPhones utilize to enable Google Maps (and for the National Security Agency to monitor our every move). So don't rob a bank with your iPhone in your pocket, as you'll be easily tracked down and arrested.
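For the curious, here's a minimal back-of-the-envelope sketch (Python assumed; spherical, non-rotating, airless Earth in the weak-field approximation) that checks the ground-level orbital speed quoted above and quantifies the head-to-feet time lag taken up next:

```python
import math

G = 6.674e-11   # gravitational constant (m^3 kg^-1 s^-2)
M = 5.972e24    # Earth mass (kg)
R = 6.371e6     # Earth mean radius (m)
c = 2.998e8     # speed of light (m/s)

# Circular orbital speed at (essentially) ground level: v = sqrt(GM/R)
v = math.sqrt(G * M / R)
print(f"ground-level orbital speed: {v:.0f} m/s = {v * 2.237:.0f} mph")  # ~17,700 mph

# Weak-field gravitational time dilation between feet and head:
# fractional rate difference ~ g*h/c^2 for a height difference h
g = G * M / R**2      # surface gravity (m/s^2)
h = 1.7               # head-to-feet height (m)
frac = g * h / c**2
print(f"fractional rate difference: {frac:.2e}")                  # ~1.9e-16
print(f"feet lag head over 80 years by {frac * 80 * 365.25 * 86400:.1e} s")  # a few hundred ns
```

Over an 80-year lifetime your feet age a few hundred nanoseconds less than your head, which is all the next item is really saying.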
It's a minuscule effect at Earth's surface, but as astrophysicist Ethan Siegel writes in his latest article, time really does run slower for your feet than your head because they're closer to the Earth. The effect would be much more pronounced at the surface of a neutron star, but then you'd be squashed flatter than an atom before you'd notice it. In fact, neutron stars are so smooth that even an irregularity the height of a human hair could be observed due to the wobble it would produce on the star's rotation, as explained in this fascinating Sixty Symbols YouTube video. Say It Ain't So, Matt! — Posted Wednesday March 31 2021 Today's Republican Party—Come for the tax breaks, stay for the paranoia, the lies, the hypocrisy and the pedophilia! That one may smile, and smile, and be a villain! — Hamlet, Act 1, Scene 5 Hey, we've all sinned and fallen short of the glory of God, and I know I'm no different than anyone else. But we tend to hold our political leaders just a tad higher than ourselves, and when they fall short we expect them to come clean, admit their transgressions and repent. Sadly, that's not how it works in the Republican Party, where if you can lie and lie and get away with it, it's the same as telling the truth. Three days ago, 38-year-old Florida GOP Congressional representative Matt Gaetz appeared on Fox News' Tucker Carlson show to plead his innocence over claims that he had sexual relations with a 17-year-old girl and had transported her over state lines for related purposes. I don't know for sure if Gaetz is guilty, but I especially enjoyed how he referred to the unnamed child as a 17-year-old "woman." I suppose that's better than a 12-year-old woman. Gaetz also claimed that his family was being extorted for $25 million to keep his guilt secret, noting that his father is wearing a wire to record the alleged extortionist's schemes. If true, it must be a pretty stupid extortionist to think that he/she can pull off the crime, given that it's now open to the public. It's also revealing that Gaetz and his family have that kind of money to begin with. The FBI is investigating both the allegations against Gaetz and the extortion claim, and only time will tell if the Trump-loving Gaetz will find himself in the dock. My guess is that he will, but with a suspended sentence, as it's probably only his fourth or fifth offense. He will then disappear for a while, be forgiven by his Republican backers and faith-based supporters, and then be restored to full Christian grace. Want proof? Just remember the likes of Jimmy Swaggart, Jim and Tammy Bakker and Ted Haggard, just to name a few from a long list of Republican criminals and hypocrites. Just Nip It, Mr. President — Posted Saturday March 27 2021 "Nip it! Nip it in the bud!" — Some slightly out-of-context (and off-color) advice from Deputy Barney Fife in his currently out-of-print bestseller, Nip It: Barney Fife's Guide to Blistering Hot Married Sex Pulitzer Prize winner Maureen Dowd's opinion piece in today's New York Times is a reminder to President Biden that the Republican Party despises his guts, and will never agree with any of his policies, regardless of how popular they are with your average Republican voter. Dowd writes "So while you're modulating, Mr. President, here's a suggestion: Ditch that old habit of yours, bending over backward to appease Republicans \(\ldots\) Bipartisanship ain't happening now." I fully agree with Ms. Dowd. Biden is wasting his and his administration's time with all this talk about bipartisanship.
Just look at the Republicans' current voting habits and their ongoing fawning allegiance to Donald Trump, whose lies and anti-democratic antics continue unabated. The current spate of Republican-led voting restriction bills and laws in Red States is proof that the GOP intends never to let the Presidency, House and Senate slip from their grubby hands again. If they could outright block blacks, Hispanics and Asians from voting, period, they'd friggin' do it. So just nip all that talk of getting Mitch McConnell, Ted Cruz, Rand Paul and their ilk to see the light, Mr. President, and go on an Executive Order rampage. Geek Saturday — Posted Saturday March 27 2021 The classical Einstein-Hilbert action with a cosmological constant \(\Lambda\) $$ S_{EH} = \int\!\! \sqrt{-g} \, \left( R - 2 \Lambda \right) d^4x $$ underlies a theory that has passed every experimental and observational test of general relativity since Einstein proposed it in November 1915. However, in the weak-field limit its field equations do not account for the observed near-constant velocities of stars on the outskirts of their galactic centers, which has led to the proposal that some kind of unobservable dark matter exists around galaxies and galaxy clusters. Assuming that dark matter does exist, then Einstein's theory still holds, so now the search is on for the elusive dark matter. It has been proposed that dark matter may consist of cold neutrinos, axions, massive photons and other exotic weakly-interacting species, none of which has been detected despite herculean and costly experimental efforts over the past three decades. Einstein's action is fully Lorentz and coordinate invariant, but it is not invariant with respect to a change in scale, in which the metric tensor is varied according to \(g_{\mu\nu} \rightarrow \Omega(x)^2 g_{\mu\nu} \), where \(\Omega\) is an arbitrary function of space and time. In 1929, the German mathematical physicist Hermann Weyl showed that modern quantum theory demands a local invariance of this type, originally conceived as scale invariance but later recognized as phase or gauge invariance, and that it accounts for the conservation of electric charge. Consequently, many physicists today believe that scale invariance should hold not only at the quantum level but on cosmological levels as well. In 1989, University of Connecticut physics professor Philip Mannheim and his colleague Demosthenes Kazanas proposed that Einstein's action should be replaced by a scale-invariant version of general relativity that reduces to Einstein's under less general conditions. Again, it was Weyl in 1921 who derived a unique scale-invariant tensor quantity \(C^\lambda_{\,\,\mu\nu\alpha}\) composed solely of the Riemann curvature tensor \(R^\lambda_{\,\,\mu\nu\alpha}\) and its two contracted variants \(R_{\mu\nu}\) and \(R = R^\mu_{\,\,\mu}\), and Mannheim and Kazanas used the associated action $$ S = \int\!\! \sqrt{-g}\, C_{\mu\nu\alpha\lambda}\, C^{\mu\nu\alpha\lambda} \,d^4x \tag{1} $$ to derive the equations of motion for free space in spherical coordinates. After what must have been laborious effort, they found the Schwarzschild-like solution $$ ds^2 = e^\nu c^2 dt^2 - e^{-\nu} dr^2 - r^2 d\theta^2 - r^2 \sin^2 \theta \,d\phi^2, \quad e^\nu = 1 - 3 \beta \gamma -\frac{\beta(2-3\beta\gamma)}{r} + \gamma r - k r^2 $$ where \(\beta, \gamma, k\) are constants of integration. As is easily seen, for \( \gamma = k = 0 \) the solution reduces to the familiar Schwarzschild line element, where \( \beta\) plays the role of the central gravitating mass.
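To get a feel for the numbers, here's a rough numerical sketch of the rotation curve this metric implies, using the tangential-velocity relation taken up further below. Everything here is assumed for illustration only; the enclosed mass and the value of \(\gamma\) are order-of-magnitude guesses, not fitted galaxy parameters:

```python
import math

c    = 2.998e8      # speed of light (m/s)
G    = 6.674e-11    # gravitational constant
Msun = 1.989e30     # solar mass (kg)
kpc  = 3.086e19     # kiloparsec (m)

# Illustrative (assumed) values only -- not fitted galaxy parameters:
M     = 1e11 * Msun           # enclosed galactic mass
beta  = G * M / c**2          # beta ~ GM/c^2, as in the text
gamma = 3e-28                 # 1/m; order-of-magnitude guess for the MK linear term
k     = 0.0                   # neglected, as in the text

def v_tangential(r):
    """v^2 = (1/2) r e^{-nu} d(e^nu)/dr, with e^nu = 1 - 2*beta/r + gamma*r - k*r^2
    (the tiny beta*gamma cross-terms are dropped, i.e. beta*gamma << 1)."""
    e_nu  = 1 - 2 * beta / r + gamma * r - k * r**2
    de_nu = 2 * beta / r**2 + gamma - 2 * k * r
    return c * math.sqrt(0.5 * r * de_nu / e_nu)

r_turn = math.sqrt(2 * G * M / (gamma * c**2))   # turnover radius derived below
print(f"turnover radius ~ {r_turn / kpc:.0f} kpc")
for r in (5, 15, 45, 135):                       # galactocentric radii in kpc
    print(f"r = {r:3d} kpc -> v ~ {v_tangential(r * kpc) / 1000:.0f} km/s")
```

With these assumed inputs the velocities come out in the 150-300 km/s range and flatten out near a turnover radius of a few tens of kiloparsecs, which is the right ballpark for a typical spiral; that ballpark agreement is all the sketch is meant to show.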
The terms proportional to \(r\) and \(r^2\) serve as acceleration parameters, which initially provided hope that scale invariance would provide an answer for dark energy and possibly for dark matter as well. Indeed, subsequent studies showed that the Mannheim-Kazanas metric accurately predicted the observed flat rotation curves for stars in many galaxies. Several days ago, a preprint paper appeared on arXiv.org claiming that the Mannheim-Kazanas metric in fact does not accurately predict flat rotation curves under generally assumed galactic conditions. The paper's Cambridge researchers (M.P. Hobson and A.N. Lasenby) showed that a clever coordinate change in the radial parameter \(r\) could be used to eliminate the \(3\beta\gamma\) and \(\gamma r\) terms, thereby reducing the Mannheim-Kazanas metric to $$ e^\nu = 1 - \frac{k_1}{r^\prime} - k_2 (r^\prime)^2 \, \,\,\, * $$ where \(r^\prime\) is the new radial parameter and \(k_1\) and \(k_2\) are constants. This metric is that of Schwarzschild-de Sitter spacetime, in which there is a cosmological constant (proportional to \(k_2\)) and a central mass but no other matter in the universe (as our universe continues to expand, radiation and matter density will decrease to the point where such a metric will be perfectly valid). By eliminating the term linear in \(r\) in the Mannheim-Kazanas metric, the Cambridge researchers show that the velocities of stars far from their galactic centers fall off like \(1/\sqrt{r}\) (in regions of interest for a typical galaxy) as in the pure Schwarzschild case, leaving no region of flat rotation curves. It can be shown that the tangential velocity \(v\) of a rotating star is given by $$ v^2 = \frac{1}{2}\, r e^{-\nu} \frac{d e^\nu}{dr} $$ For the Schwarzschild metric this gives \(v^2 = GM/r\), so the velocity decreases as the inverse square root of the distance, as is classically expected (and in disagreement with observation). However, the situation changes for the Mannheim-Kazanas metric. The velocity will be extremalized (maximized) when its derivative with respect to \(r\) vanishes. With the reasonable assumptions that \( e^\nu \approx 1\), \( \beta \approx GM/c^2\) and \(\beta\gamma \ll 1\), we get the condition $$ \gamma r^2 - 4 k r^3 = \frac{2GM}{c^2} $$ Assuming further that \(k \approx \Lambda\) (and so can be neglected), we find that maximum stellar velocities will occur at the radius $$ r = \sqrt{ \frac{2GM}{\gamma c^2}} $$ Given the expected smallness of \(\gamma\), this will be a very large distance from the galactic center, and is thus in agreement with observation. From this, I judge that the Mannheim-Kazanas analysis does indeed match the effects of dark matter. I'm studying the various galactic conditions that Hobson and Lasenby used to justify their conclusions but, as I'm not very familiar with galaxy dynamics, I cannot be assured that the assumptions I've made here are legitimate. I've long been a fan of the work of Mannheim and Kazanas (both their independent and joint research), and I can only hope that they can find convincing arguments rebutting the Hobson and Lasenby paper. PS: Einstein's full gravitational field equations are given by $$ R^{\mu\nu} - \frac{1}{2}\, g^{\mu\nu} R + \Lambda g^{\mu\nu} = \frac{8 \pi G}{c^4}\, T^{\mu\nu} \tag{2} $$ where \(T^{\mu\nu}\) is the energy-momentum (or stress-energy) tensor, which accounts for the presence of matter and radiation (which in turn affects the geometry).
Einstein viewed the left-hand side (which is pure geometry) as being made of fine marble, whereas the right-hand side is cheap plastic (but matter and radiation have to be tacked on somehow, he reasoned). If \(\mathcal{L}_M\) is the Lagrangian density for matter and radiation (so that the matter action is \(S_M = \int\!\! \sqrt{-g}\, \mathcal{L}_M \, d^4x\)), then $$ T^{\mu\nu} = -\frac{2}{\sqrt{-g}}\, \frac{\delta (\sqrt{-g}\, \mathcal{L}_M)}{\delta g_{\mu\nu}} $$ where the \(\delta\) operator means "variation." It is easily seen that both sides of (2) have the dimensions of length\(^{-2}\). But the Lagrangian in (1) is dimensionless, meaning that the proportionality term \(8 \pi G/c^4 \) would be unnecessary for a stress-energy that is also dimensionless. Go figure that out! * This is Equation (15) in the paper, which is the key identity in the researchers' argument. I've been unable to derive it, and I wonder if it's correct. Changes? We Don't Need No Stinking Changes! — Posted Wednesday March 24 2021 You've by now heard about the Ruger AR-556 "pistol," the weapon that was used to murder 10 people in Colorado two days ago. It's classified as a pistol because previous mass killings motivated the National Rifle Association to request minor changes to the weapon, which was originally classified as an assault rifle. Pistol or rifle, it shoots NATO 5.56-mm jacketed ammo, whose diameter is a little less than a quarter inch. This seemingly small bullet belies the huge cartridge and powder charge in the round, which can push the bullet up to about 2,800 feet per second. The energy delivered is sufficient to cut a person in half—there are no flesh wounds with these things, as you're lucky to come away as a paraplegic or quadriplegic. [It always bugs me when I watch a World War II movie in which a soldier gets hit with a machine gun or aircraft round, invariably clutching his chest or stomach while heroically uttering "They got me, Joe!" This is nonsense, and perhaps it's time to start airing autopsy photos of the children who were slaughtered by such weapons in places like Sandy Hook, Columbine, Parkland and Rancho Tehama.] The Ruger pistol (on the left) has roughly the same killing power as the Vietnam-era M16 assault rifle (on the right), including a 30-round magazine that also holds the 5.56-mm round. The major difference is that the M16 can be fired in either semi-automatic or fully-automatic mode, while the Ruger is semi-automatic only. Clever gun enthusiasts have found ways to illegally modify the trigger and housing to fire the weapon like a machine gun, although fully-automatic weapons are essentially banned in America (even in Red States). But search the Ruger website and you'll find that the AR-556 is listed as a pistol, thanks to the NRA. The NRA probably patted itself on the back getting it listed as such, but deep down you know they hated doing it. Conservatives hate change, which is why they will never allow a ban on assault rifles or any other deadly firearm that can be easily concealed, then whipped out to mow down dozens of victims. Republican lawmakers have even bragged that if President Biden proceeds with any related gun control measure, they'll see to it that the required 60-vote Senate supermajority is never achieved. Their tried-and-true argument is that some 50,000 Americans die in auto accidents every year, but no one calls for a ban on cars and trucks. It's the same with the COVID-19 vaccines—Republicans weren't getting vaccinated last year, so why change now?
Add to that the fact that 47% of Republicans say they will not get vaccinated, thanks to Trump's persistent effect on their tiny, pathetic minds. Consequently, America will likely not achieve herd immunity, guaranteeing that the virus and its variants will go on killing many thousands more. Freedom! Liberty! Second Amendment! Huzzah! "You Mean I've Been Eating Hamburger From 1960?!" — Posted Monday March 22 2021 My high school's cafeteria had an outdoor lunch service window that I often frequented, and I'd always buy their hamburger for 25 cents. It had a sauce that was heavy on mustard, but I loved it. I'd sit at one of the outdoor tables having lunch with Dan E. and John Z., talking about what we would do when we left high school. Dan planned on becoming an entomologist, but ended up with a PhD in Art History. Years later I'd often see John tooling around in his yellow VW Thing, a rather bizarre-looking auto that enjoyed a brief popularity in the late 1960s. I don't know what became of John, but I went on to study chemistry. I still remember the taste of those hamburgers, and I was reminded of those days while binge-watching 11.22.63, the 2016 eight-episode mini-series based on Stephen King's 2011 science fiction novel of the same name (which I've also read). The book and series tell the story of a man (Jake Epping) who travels back in time to stop the assassination of President John F. Kennedy. Unsure of Lee Harvey Oswald's guilt, he first has to determine whether Oswald had earlier tried to assassinate General Edwin Walker in April 1963. Walker was a real-life anti-communist zealot whom Oswald deemed a fascist threat (Oswald's involvement in the assassination attempt on Walker was later corroborated by Oswald's wife, Marina). Assured of Oswald's guilt, Epping then proceeds to track down Oswald and prevent JFK's assassination. I distinctly recall the early afternoon of Friday, November 22, 1963, when my high school French teacher (Mrs. Eleanor Farrell) was interrupted by a student from the hallway who handed her a note. In emotionless tones, Farrell recited "President Kennedy was assassinated at 1:00 pm today in Dallas, Texas. Vice President Johnson is now the President of the United States." She then returned to the blackboard without further comment, which I thought was odd at the time. Since then I've read about a dozen books on the assassination. None have revealed the true story, which will probably never be known. The best I've seen to date is Jeremy Bojczuk's 2014 book A Brief Guide to the JFK Assassination, but it too falls short of what probably really happened, or of those who were actually responsible. But "Since then, 'tis centuries," as Emily Dickinson once wrote (although only 57 years have passed since Kennedy's murder). Who knows what might be revealed in the coming decades (or centuries). The remains of Kennedy's brain (which record the true path of the backward- or forward-tracking bullet) are under lock and key by the Kennedy family, and may never be revealed for definitive forensic analysis. At any rate, 11.22.63 is a great series, and to me it's also tops as a time-travel adventure. You'll have to watch it to understand the hamburger connection. Racism and Our Gang — Posted Thursday March 18 2021 In the early 1970s, Munira and I would regularly go for lunch with our lab co-workers to El Tepeyac, a Mexican restaurant in East Los Angeles.
The restaurant featured Manuel's Special, a pillow-sized burrito of rice, beans and beef that was on the house if you could eat the whole thing. Munira and I would invariably split one, joking about the lack of neighborhood stray dogs and cats and the possible nature of the restaurant's meat given its low prices. We'd also frequent another local restaurant called Sambo's, whose wall paintings depicted the eponymous little black child being chased by a tiger. Thank God the restaurant either went out of business or changed its name to something less offensive. I recalled this last memory while reading Our Gang - A Racial History of the Little Rascals by Julia Lee, a Chinese-American associate professor of English at Loyola University in Los Angeles. I have most of the Our Gang films, which were produced by Hal Roach Studios as silents (1922-1929) and early sound films (and later produced by MGM until 1944). I also have every extant Laurel and Hardy film that Roach produced from 1927 to 1940. In 1990 Munira and I attended a memorable night with Hal Roach at the Raymond Theatre here in Pasadena, where several Laurel and Hardy films were presented along with personal reminiscences of early Hollywood by Roach and Joe Cobb, a child actor and early member of Our Gang. Lee's book was a revelation, told from the perspective of American racist attitudes that existed in early Hollywood. But I was also struck by its purely historical information on Roach, his studio and the directors and child actors who starred in and produced the Our Gang films. Surprisingly, the inclusion of black child actors in the films was a natural outgrowth of the relative innocence of the times, as the white and black members of Our Gang showed no innate racism in the episodes, although "Sunshine Sammy," "Farina" and "Stymie" (my favorite) were often stereotypically shown as ignorant, spook-fearing, watermelon-loving pickaninnies. The fact that the book was written by a Chinese-American also resonated with me, especially in view of the recent horrific attacks on Asian Americans, spurred on by former President Trump's racist remarks regarding the "Chinese virus" and "Kung Flu." If you have the slightest interest in the racism of early Hollywood films and the sorry state of our country's politics today, please buy and read Prof. Lee's book. PS: In 1927, Hal Roach (1892-1992) put two of his film comedians together to form the team of Laurel & Hardy. They quickly became a favorite of young and old alike, and to this day are considered the best comedic film duo of all time. In 1983 I took my four-year-old son Kris to see a Laurel & Hardy retrospective viewing of five of their silent films, all of which had been remastered and restored. Kris enjoyed the movies, but I was dumbfounded by the superb quality of the films, which looked as if they had been shot yesterday. Since then, L&H films have been released in countless collections, but to this day they still await definitive video and audio restoration. The best I've seen to date is the fantastic 10-volume set The Lost Films of Laurel and Hardy, of which all DVDs are sold separately (and expensively). The nearest thing in terms of quality is Laurel and Hardy - The Essential Collection, which I purchased and gave to several family members and friends, all of whom are devoted L&H fans. The Economic Elephant in the Room — Posted Sunday March 14 2021 President Reagan proved that deficits don't matter. — Former Bush Vice President Dick Cheney Tax cuts always pay for themselves.
— The Perpetual Delusional Mantra of the Republican Party I watched Fareed Zakaria's GPS cable program this morning, in which economists Larry Summers and Paul Krugman talked about the pros, cons and likely consequences of President Biden's recently enacted $1.9-trillion American Rescue Plan Act (ARPA). A former president of Harvard University, Summers was a Treasury Secretary and economic advisor to presidents Clinton and Obama, while Krugman is a political writer, economic advisor and columnist who was awarded the 2008 Nobel Prize in Economics. Summers and Krugman are friends and colleagues, but they disagree on the costs, benefits and outcome of the ARPA. In a nutshell, Summers feels that the ARPA does not have sufficient long-term monetary resources to pay for itself, and he worries about its inflationary potential and the deficits that are likely to result. Krugman likened the ARPA to a wartime act that—like the New Deal and the response to the 2008 financial collapse—was necessary to pull the country out of the health and economic disasters caused by the COVID-19 pandemic, which has so far killed over 530,000 Americans. Krugman also noted that while the ARPA is costly, it is not likely to result in runaway inflation. I think Summers made a good point about the lack of any obvious financial resources to pay for the ARPA (such as public investments like bonds), much less for Biden's plans for infrastructure improvement and climate change mitigation. Krugman's response was that similar cash outlays and fiscal borrowing occurred during the Korean and Vietnam wars, which led to manageable inflationary problems and deficits. During the discussion, neither economist brought up the topics of taxes, America's ongoing commitment to its expansive foreign military presence or the booming stock market, which has inexplicably soared throughout nearly all of the 2020-21 pandemic year. I can understand why most politicians today won't touch the issues of raising taxes or cutting back on America's military, but the trillions of dollars now being amassed by corporations and their stockholders represent a hugely significant source of the funds needed to pay for public programs, infrastructure improvements and climate change mitigation. Roughly 55% of Americans are currently invested in the stock market one way or another (through actual stock ownership, pension plans or other indirect investments), and they are now seeing rates of return far in excess of what banks are paying on savings accounts. Furthermore, the tax rate on capital gains is still only 15%, far below what most middle-class Americans are paying on their combined federal and state income taxes. To me, this is the elephant in the room: if America wants a responsible pay-as-you-go economy, reliable infrastructure and climate mitigation, it had better tap into the corporations and their stockholders, and this means raising taxes on those who are benefiting the most. The market is now poised for an even bigger boom as we get COVID-19 under control, and if we don't want dangerous bridges, unreliable public utilities and a ruined climate while giving a free pass to Bahamas-bound wealthy freeloaders, stock market-specific taxation is the way to go. Why Most Women Don't Marry Scientists — Posted Thursday March 11 2021 By the way, Gary Larson is back! Dark Matter Detection Fails Again — Posted Thursday March 11 2021 The great philosopher of science Karl Popper famously assigned a critical element to what he defined as "science", which is that it must be falsifiable.
That is, every scientific theory must not only stand the test of ongoing experimental validation, but it must also be subject to being proved wrong. For example, while Einstein's general theory of relativity (gravitation) has perfectly passed every test thrown at it for over 100 years, a single confirmed observation that disagrees with the theory would disprove it, requiring that the theory be either modified or discarded in favor of something better. Consequently, there are no true laws of Nature, only theories awaiting revision or refutation. Popper's falsification idea has not always been popular, but it's a good start. Related to this is the notion of reproducibility, which involves applying the same or similar experimental techniques, methods and materials used to establish a theory in the first place. If you conduct an experiment (or series of identical experiments) that appears to uphold a theory you've proposed, but someone else cannot reproduce the same results, then your theory remains either a hypothesis or a conjecture (or it's simply pure bunk). A case in point is the set of experiments performed by a research team in Italy called DAMA/LIBRA, which uses a mass of thallium-doped sodium iodide to detect dark matter particles. But rather than detect these hypothetical particles themselves, its approach has been to look at whatever detection events occur over time, the idea being that as the Earth is swept along with the Milky Way's rotation, it must encounter an annually modulating number of particle events, depending on whether the Earth's orbital motion is carrying it with or against the galaxy's rotation. In a sense, the exact nature of the detection event doesn't matter; only the sinusoidal pattern of detected events matters. This approach would then rule out any human, systematic or random experimental errors and noise in the detection apparatus. The DAMA/LIBRA team has indeed reported such an annual pattern in its data, appearing to confirm the presence of something akin to dark matter. You might recall that the famous Michelson-Morley experiment of the late 1880s tried the exact same approach in an attempt to detect the luminiferous aether, a universal substance that was assumed to exist to provide a means for light waves to travel in space (the rationale was that since sound waves need air to move through and water waves need water, then light must also have something to "wave" against in order to propagate in space). The experiment famously failed, demonstrating that light can indeed move through the vacuum of space. However, the DAMA/LIBRA team did detect something, and until now it provided at least indirect evidence for dark matter. But more recently a series of identical experiments conducted by the ANAIS dark matter team at Spain's University of Zaragoza have consistently failed to reproduce the DAMA/LIBRA results to 99% confidence. The disagreement is detailed in today's Medium article by astrophysicist Ethan Siegel. While any limited series of disagreeing experiments is not sufficient to conclusively disprove the existence of dark matter, I'm still hoping that dark matter's presumed effects on galaxies and galactic clusters will instead be explained by modified Einsteinian gravity. (Only time will tell, but at 72 my remaining years are dwindling away, and I'm getting impatient.) The Very Definition of Spin — Posted Thursday March 11 2021 And I'm not referring to quantum-mechanical spin.
The phrase "It's not a flaw, it's a feature" once applied to bug-ridden computer programs, but the Republican Party has turned it into a political term, what we currently know as spin. Mississippi Republican Senator Roger Wicker, who joined every single one of his fellow Republican senators and congressional representatives in vehemently opposing President Biden's $1.9 trillion COVID-19 relief legislation, is now trying to take credit for the benefits the legislation will provide to his constituents and Mississippi businesses. In defense of Wicker's comments, prominent GOP members are using Wicker's words as proof of Republican bipartisan support for Biden's efforts, which they have uniformly opposed ever since Biden took office. I only wish that Sen. Wicker, an avowed devout Christian, would more carefully read the Gospel of Matthew, in which Jesus admonishes hypocrites and hypocrisy no fewer than 15 times. Since when did hypocritical spin become a treasured feature of Republican politics? Coordinated Guessing — Posted Tuesday March 9 2021 The following is intended mainly for budding structural design engineers and nerds, but it has wide application to Newtonian and relativistic gravitational physics, electronic circuit design, fluid flow and similar computationally intense problems. Let's say you want to build a 100-story office building. You start by designing the basic structure from scratch, using a 3-dimensional grid of steel beams, girders and columns. You also know the various loads that the structure will have to safely support, including gravity, wind, snow, earthquake and occupancy loads. The basic structure will consist of thousands of nodes (where the beams come together) and links (the horizontal and vertical beams that connect the nodes). Your design will have to take all these things into account, including a safety factor. Where do you begin? In olden days you'd rely on rules of thumb, experience and slide rules. Nowadays the computer does most of the work, checking that your design is not only safe but can be built at the lowest possible material and labor cost. Given a few input parameters, the computer may also do the basic design as well. Trust me that your 100-story office building will involve thousands of nodes, and these are the basic elements that the computer uses to check for allowable beam stress, strain and deflection. Also trust me that for a structure involving \( N \) nodes, the computer will need to construct an \( N \times N \) matrix (almost always symmetric), and that it will have to solve a system of \( N \) simultaneous linear equations of the form \( A X = Y \), where \( A \) is the matrix, \( X \) is the \(N-\)dimensional vector of unknowns (say, beam stresses) and \( Y \) is the vector expressing all the known constraints on the structure. The computer must somehow solve the system via \( X = A^{-1}\,Y \), where \( A^{-1} \) is the the inverse of the matrix \( A\). Lastly, trust me that calculating inverse matrices is computationally a total bitch. For a system of thousands of nodes, no computer can do it directly, and a time-consuming process known as iteration must be used, where each iteration essentially involves an increasingly more accurate guess of the unknowns. Designing a safe and cost effective office building is thus very complicated, but the number of elements is still finite. What if you need to design a continuous structure, say an aircraft wing? 
Such a structure is not described by a finite number of elements like beams and girders; the wing's stresses, strains and deflections must be calculated over what is seemingly an infinite number of design elements. What do you do now? Of course, you discretize the structure using a finite grid of nodes and links, an approach known as finite-element analysis. The computations become more accurate the more nodes and links you use, but the calculation is essentially the same as in the office building example.

The importance of solving large-scale linear (or linearized) simultaneous equations cannot be overemphasized, and as noted earlier the problem spans many technological applications now essential to modern life. It is therefore not surprising that an enormous amount of effort has been expended on finding efficient computational solution methods. The number of computer iterations or steps required to achieve a solution is then key, since it largely defines the effort required to solve the problem. For some problems, that number can be enormous, which, due to cumulative floating-point errors and time constraints, may result in erroneous solutions. The latest Quanta magazine spells out the problem in more detail, and it also describes a new technique called "coordinated randomness" that has proved (at least marginally) useful. While solution guesses are helpful, it's often difficult to know just how to arrive at them. The Quanta article describes such an approach based on random guesses, and it implies that in some cases even random guessing can be useful (unlike on the SAT and GRE exams you took).

The article interested me for a number of reasons: I used to do large-scale numerical analysis of both linear and nonlinear systems, and I still think there might be some validity to the simulation hypothesis, which is the idea that something or someone (God, a post-human computer programmer or even intelligent Nature itself) is simulating our universe using an unimaginably complex computer program. Our entire observable universe is composed of roughly \( 10^{90} \) particles and fields, which is mind-bogglingly huge but still finite, and therefore subject to computer simulation. Sorry for this overly long and nerdy post, but it concerns a fascinating and important mathematical problem.
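Geek footnote: here's a minimal sketch, in Python, of the iterative idea described above. This is not the Quanta team's new method, just the classic conjugate-gradient algorithm, and the matrix below is a made-up, dense, symmetric positive-definite toy; a real structural package would use sparse storage, preconditioning and far more care:

    # A toy iterative solver for A x = y (conjugate-gradient method).
    # Assumes A is symmetric positive-definite; for illustration only.
    import numpy as np

    def conjugate_gradient(A, y, tol=1e-10, max_iter=1000):
        x = np.zeros_like(y)           # initial guess: all unknowns zero
        r = y - A @ x                  # residual: how wrong the guess is
        p = r.copy()                   # current search direction
        rs_old = r @ r
        for k in range(max_iter):
            Ap = A @ p
            alpha = rs_old / (p @ Ap)  # best step length along p
            x += alpha * p             # refine the guess
            r -= alpha * Ap            # update the residual
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:  # residual tiny: call it solved
                return x, k + 1
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return x, max_iter

    # A tiny 3-"node" system with made-up numbers:
    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    y = np.array([1.0, 2.0, 3.0])
    x, n = conjugate_gradient(A, y)
    print(x, "found in", n, "iterations")

Each pass through the loop is one progressively better guess, and the residual measures how far the current guess is from satisfying \( A X = Y \). In exact arithmetic the method converges in at most \( N \) steps; in floating point, and for the huge ill-conditioned systems mentioned above, matters are much less forgiving.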
Stanley Tucci — Posted Sunday March 7 2021

I first became aware of American actor Stanley Tucci in the excellent 1993 action thriller The Pelican Brief, starring Julia Roberts, Denzel Washington and Sam Shepard (didja know that Shepard was also a 1961 graduate of Duarte High School, my alma mater?) In the film Tucci plays a cold-blooded assassin, a far cry from his comedic turn in the hilarious 1998 film The Impostors with Oliver Platt, which regularly had my wife, me and our kids literally in stitches. Don't miss the annoying German ship steward's immortal line "It has made me also \(\ldots\) moist" or Billy Connolly's "Powerful enough to snap the neck of a small beast, and yet sensitive enough to caress the tender throat of a young castrato — coax a song out of him!" Now Tucci is starring in the popular CNN reality show Searching for Italy, in which he travels to various Italian cities visiting that country's historical treasures and sites, all while stuffing his face on the local cuisine. My family members love it, but I see it as just another "food porn" show, and I can't get into it. I did like the PBS series Rick Steves' Europe, which is far more educational and entertaining (but still has Steves stuffing his face much of the time). Perhaps my dislike of food shows stems from the food I'm eating now as an aging widower: basic stuff like fruit, vegetables, bread and the wonderful "Beyond Meat" plant products that have made me almost a vegetarian. Like the bumper stickers say, "Eat Right, Exercise, Die Anyway."

Dirac on the Constancy of the Gravitational Constant — Posted Saturday March 6 2021

Although I dearly love the work of Isaac Newton, Albert Einstein and Hermann Weyl, my favorite physicist just has to be "Father of Modern Physics" Paul Adrien Maurice Dirac (1902-1984), whose foundational contributions to physics spanned everything from quantum mechanics to general relativity. In a far saner and less ignorant world, Dirac's name would be as recognized as those of Newton and Einstein.

In the early 1930s Dirac began to question the nature of large dimensionless numbers, such as the ratio of the electric force to the gravitational force (roughly \( 10^{39} \)), along with the curiously small value of the fine structure constant \( \alpha \), which is about 1/137. He went so far as to wonder if these figures were true constants of Nature, or if they might vary over geological or cosmological spans of time. For example, the Newtonian gravitational constant \( G \) is about \( 6.67 \times 10^{-11} \) in the kg-m-s unit system, but humans have only known about it for a little over 300 years. It's entirely possible that it might be slowly but measurably changing over periods of thousands or millions of years. We just don't know.

This situation reminds me of the story of the ephemeral fly, which I recounted long ago on this site. It's a small insect that lives for only a few hours after its hatching, and as it flits around desperately seeking a suitable mate during its brief life span it might consider the trees and flowers of its world as everlasting and permanent. We humans live far longer, but we might also find ourselves fooled into thinking that the things we observe and measure are similarly truly constant. On the basis of what we know about modern cosmology today, the universe is doomed to perpetual and accelerating expansion, with all matter eventually being transformed into low-energy radiation, a view that is predicated upon the constancy of dark energy. But if the density of dark energy is not constant but decreasing, then the universe might halt its expansion and begin contracting, perhaps transforming itself again into a primordial mass-point and igniting a new Big Bang.

Here is Dirac in 1979, talking about the constancy of \( G \). He died in 1984, sadly having never learned of the accelerating expansion of the universe, which was announced in 1998:

PS: Dirac's eccentricities are nearly as famous as his physics. Forced to speak only French in his home while growing up, he became extremely taciturn in his adult life (his students coined the term Dirac unit, meaning one word per hour). He was notoriously aloof but not unfriendly, preferring long stretches of solitude and daily walks to social interaction. At a party on board an ocean liner with fellow Nobel Laureate Wolfgang Pauli, he was asked to dance. "Come on, there are lots of nice women here, Dirac!" said Pauli, to which Dirac responded "How do you know they're nice?" While discovering quantum field theory, antimatter, the Dirac relativistic electron equation (the greatest equation of all time, in my opinion), the Dirac delta function and many other fundamental things, Dirac managed to marry and have two daughters with the sister of fellow Nobel Laureate Eugene Wigner (Dirac's friends and colleagues were stunned that Dirac had not only a romantic side to his personality but a sex drive as well). He invariably introduced his wife Margit to others as "Wigner's sister." Weary of the cold weather in England, Dirac took an emeritus position at Florida State University in 1972. I equate this event to Einstein arriving at a community college and asking for a teaching job. These and many other stories are recounted in Graham Farmelo's great 2009 Dirac biography The Strangest Man: The Hidden Life of Paul Dirac.
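Geek PS: Dirac's big ratio is easy to check with today's values (my quick arithmetic, not Dirac's). For an electron and a proton, the ratio of the electrostatic to the gravitational attraction is
$$ \frac{e^2/4\pi\epsilon_0}{G\, m_e\, m_p} \approx \frac{2.31 \times 10^{-28}\ \mathrm{J \cdot m}}{1.02 \times 10^{-67}\ \mathrm{J \cdot m}} \approx 2.3 \times 10^{39} $$
a pure number, independent of the distance between the two particles (both forces fall off as \( 1/r^2 \), so the separation cancels).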
Physics Leads the Way — Posted Wednesday March 3 2021

Good news: the American Physical Society will not hold any future physics meetings or conferences in American cities that tolerate social injustice. I believe this will apply to many Red State cities, whose ignorant denizens don't believe in science anyway.

Geek Tuesday — Posted Tuesday March 2 2021

I occasionally get emails asking what programs I use to write and post mathematical text. For online math, I use MathJax, which is free and requires only a basic familiarity with the LaTeX typesetting system. For documents, over the years I've used numerous computer programs, going all the way back to PCTeX, Scientific Workplace, MathType, LyX and most recently TeXstudio, the latter two of which are free. In 1986 I induced my office to spend $300 for Lotus Manuscript, which had rudimentary math typesetting ability. It was okay as a word processor, but took hours to render readable math. Today there are more free and not-free math programs than I can name. I still use the amazingly powerful Mathematica on occasion (I still have the 1992 DOS version), but only to do calculations I'm either too lazy or too stupid to do myself.

Retired Stanford University math professor Donald Knuth is considered the father of modern mathematical typesetting. His many books include The TeXbook, from which I learned the TeX programming language many years ago. Written in a highly amusing style, the book features illustrations by Duane Bibby that are both clever and hilarious:

Dirty Tricks!

A New Media Technology — Posted Monday March 1 2021

Older folks like me will recall the "colorization craze" that took place in the 1980s, which attempted to make old movies look more realistic (or at least more interesting). It failed (the phrase "putting lipstick on a pig" comes to mind), mainly because it predated the ensuing "remastering" technology that removed scratches and other defects in the original films. Even later (in the 2000s), remastering techniques had progressed to the point where frame-by-frame toning consistency, interpolation, frame speed correction and similar repair techniques made the films look almost new. Perhaps the best example of this is the remarkable 2018 film They Shall Not Grow Old, which stunningly presents a series of World War I films that appear to have been shot yesterday. As amazing as this technology is, it's still far from perfect. Depending on the condition of the original material, facial features like hair, teeth and skin complexion cannot be rendered accurately.
This is quite apparent in the above-mentioned film when the subjects smile, revealing dental features that look pretty awful (although the subjects' teeth were often bad in real life). There's an interesting recent video on YouTube describing a new technology for not only repairing and remastering old photos and films but enhancing them to near-lifelike quality using neural network learning techniques. While it relies in part on the use of existing models and morphing, the results are nothing short of amazing:

The video cites a paper that describes the technology in more detail, although I had some trouble understanding it (I have a friend with a Caltech PhD in computer recognition who might want to explain it to me). As a fan of old silent and classic films, I'd love to see the day when they can be rendered using this or an even better technology. However, at the same time I fear its misuse in deepfake applications that can generate phony photos and videos that are indistinguishable from real life. I can easily imagine Fox News airing a deepfake video showing a progressive political candidate having sex with a young child, which would quickly go viral among Republicans. Even if such videos were quickly reported as fakes, the images would persist in the minds of gullible conservative voters, destroying the candidate. Only time will tell where things will go from here. I'm hopeful (imagine watching crystal-clear Laurel & Hardy films), but I'm still realistic when it comes to the current state of the world.

PS: Some years ago I regularly exchanged emails with Dr. John Sotos, whose book The Physical Lincoln, which examined Abraham Lincoln the way a physician would, revealed physical and medical aspects of our 16th president in fascinating detail (for example, a country dentist, while pulling an infected molar from Lincoln, inadvertently took out a piece of his lower jaw bone as well). Based on his examinations of Lincoln's features, behavior and medical history, Sotos believes that Lincoln suffered from a genetic malady that would soon have killed him even if Booth's bullet had missed. I highly recommend the book.

My note on a recent YouTube post:

The Misinformation Pandemic — Posted Sunday February 28 2021

I caught CNN's Fareed Zakaria on his popular GPS show this morning, and as usual he addressed a number of important issues of the day. He had multi-billionaire Bill Gates on hand to talk about climate change (again), and although the Bill and Melinda Gates Foundation is doing some worthy things to benefit the poor and disadvantaged in Third World countries, I strongly suspect that as a highly intelligent and educated man Gates knows full well that the human race is probably permanently screwed. One of the reasons I feel this is the case is that Gates also talked about the country's (and world's) misinformation problem, which is largely responsible for our unwillingness to seriously address many existential problems, of which climate change is just one.

Prior to the Gates segment, Zakaria talked about the incredibly brutal 2018 assassination of Washington Post journalist Jamal Khashoggi by Saudi agents acting under the direction of Saudi Arabia's Crown Prince Mohammed bin Salman. In particular, Zakaria addressed then-presidential candidate Joe Biden's promise to hold bin Salman responsible for Khashoggi's horrific murder and dismemberment, a promise that now-President Biden has hypocritically walked back due to supposed Realpolitik realities.
As Zakaria notes, Biden has decided not to punish bin Salman because ever-expanding American empire-building ambitions require preserving the good graces of Saudi Arabia for both economic and military purposes. Biden's decision to let bin Salman walk and his recent decision to hit Iranian-supported facilities in Syria are just two reminders to me that while Biden may be a far better presidential choice than Donald Trump, America's global military ambitions far outweigh any morally admirable intentions that the U.S. tries to promote around the world. And on that note I'll point out that the real purpose of misinformation is to promote evil while hypocritically posing it as good. When Khashoggi was murdered and dismembered, and the pieces carried out of his hotel in suitcases by his Saudi killers, the world was shocked that then-President Trump chose to believe in bin Salman's innocence. But the Republican Party expressed little shock, since its members viewed the journalist Khashoggi as just another dirty brown Arab. Misinformation feeds on a combination of fear, ignorance and arrogance—traits that perfectly define the Republican Party—and it is anyone's guess if President Biden will be able to escape its lure or consequences.

Geek Extra: In a straw poll taken today, former president Donald Trump got the support of 68% of CPAC attendees, or one standard deviant deviation.

What the Hell is Quantum Holonomy Theory? — Posted Sunday February 28 2021

Technophile Arvin Ash has an interesting series of science videos on his YouTube website, with an emphasis on modern physics. Of particular interest to me is that Ash has addressed the dark matter and dark energy problems numerous times, always citing current research and progress in these areas. His latest video addresses something called quantum holonomy theory, a subject I admittedly had never heard of before. But it sounds interesting—Ash notes that it's a theory of quantum gravity that's focused on the fundamental simplicity of Nature, an asset sadly missing from superstring theory, loop quantum gravity, supersymmetry and even the Standard Model of physics itself. Its sole drawback, at least from what I've managed to glean from the lengthy 2015 paper Ash cites in his video, is that it involves non-commutative geometry. While non-commutative algebra presents no problems in other areas of physics (rotations, matrix multiplication, differential operators, elementary quantum mechanics, etc.), its application to pure geometry is highly non-intuitive (at least for me). Still, quantum holonomy theory assumes that we live in just the three space dimensions we've come to know and love, and it focuses on the properties that physical objects exhibit when they're moved from one point to another. Conceptually this is very appealing, as you can't get much simpler than that. The video is about 16 minutes long, including the usual plug for Magellan TV (which you can skip over):

Shame on America's Conservative Republican Christians — Posted Saturday February 27 2021

The GOP's Conservative Political Action Conference is being held in Orlando, Florida this weekend. It will prominently feature a full-size golden idol of former President Donald Trump, seemingly in violation of the Second Commandment, which states that "You shall not make nor bow down to any graven image or idol." The Republican Party is no longer a political party, but a Trump cult.

Nerd Saturday — Posted Saturday February 27 2021

First, some history.
At nearly the same time that Einstein published his theory of general relativity in November 1915, the famous German mathematician David Hilbert found a simple way to arrive at the same gravitational physics by extremalizing the action quantity
$$ S = \int \!\! \sqrt{-g}\, R\, d^4x \tag{1} $$
where \( g \) is the determinant of the metric tensor \( g_{\mu\nu} \) and \( R \) is the Ricci scalar. If we vary this action with respect to the metric tensor, we arrive at the Einstein gravitational field equations for free space,
$$ R^{\mu\nu} - \frac{1}{2}\, g^{\mu\nu} R = 0 $$
where \( R^{\mu\nu} \) is the Ricci tensor (anyone bothering to read this will know what I'm talking about). Einstein also realized that the free-space equations could be extended to account for the presence of matter by setting the equations equal to a quantity known as the energy-momentum or stress-energy tensor \( T^{\mu\nu} \), or
$$ R^{\mu\nu} - \frac{1}{2}\, g^{\mu\nu} R = \frac{8 \pi G}{c^4}\, T^{\mu\nu} $$
where \( G \) is Newton's gravitational constant (the proportionality constant is chosen so that the field equations are consistent with the classical Newtonian gravitational result; it is also dimensionally consistent with that of the action). Einstein's solution was complicated and drawn out, but his field equations matched those of Hilbert's vastly simpler approach, which must have really pissed off Einstein. Although Einstein got credit for the discovery, the action \( S \) is now referred to as the Einstein-Hilbert action.

In 1918, the noted German mathematical physicist Hermann Weyl proposed that a better action was the quadratic quantity
$$ S = \int \!\! \sqrt{-g}\, R^2\, d^4x \tag{2} $$
whose equations of motion are
$$ R \left( R^{\mu\nu} - \frac{1}{4}\, g^{\mu\nu} R \right) + \nabla^\mu \nabla^\nu R - g^{\mu\nu} g^{\alpha\beta} \nabla_\alpha \nabla_\beta R = k T^{\mu\nu} $$
where \( \nabla \) is the covariant derivative and \( k \) is some constant. Although the action in (2) is equivalent to that of Hilbert's (subject to a simple constraint on \( R \), essentially that it be constant), the action is of dimension zero, complicating the inclusion of the energy-momentum tensor (which is of dimension inverse length squared, so that \( k \) must be dimensionless). As it stands, then, we have no way of connecting Weyl's action with the energy-momentum tensor. A few solutions have been proposed, including the division or multiplication of \( R^2 \) by an appropriate scalar quantity, like the energy-momentum scalar \( T \) or some scalar field \( \phi(x) \) that fixes the dimensionality problem. Such fixes are called \( f(R, T, \phi) \) theories, all of which have been examined extensively by researchers with little or no progress. Further complicating the situation is the fact that the Bianchi identities
$$ \nabla_\nu \left( R^{\mu\nu} - \frac{1}{2}\, g^{\mu\nu} R \right) = 0 $$
are inviolate and cannot be avoided, and so must somehow play a role in whatever equations of motion result from the choice of \( f(R, T, \phi) \). As readers of this site will know, I've long been fascinated by modifications of Einstein's gravity theory that might explain the problems of dark energy and dark matter, and I hope that the answer will be found someday.
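Geek aside: one handy manipulation implicit in all of the above is taking the trace of the matter-coupled field equations. Contracting with \( g_{\mu\nu} \) (and using \( g_{\mu\nu}\, g^{\mu\nu} = 4 \) in four spacetime dimensions) gives
$$ g_{\mu\nu} \left( R^{\mu\nu} - \frac{1}{2}\, g^{\mu\nu} R \right) = R - 2R = -R = \frac{8 \pi G}{c^4}\, T $$
so the Ricci scalar is completely fixed by the trace \( T \) of the energy-momentum tensor, and in free space (\( T = 0 \)) we get \( R = 0 \) identically.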
Them Thar Soshalists Had Best Listen Up — Posted Saturday February 27 2021

Here's another timely comic by Ruben Bolling, drawn again in the style of famed Disney cartoonist Carl Barks. I see Bolling's Hollingsworth Hound as the modern counterpart of one of the Beagle Boys (The Beagle Boys! The Dreaded Beagle Boys!) And it's true—Texas' power grid is completely free-market, with prices driven solely by supply and customer demand. When the Big Freeze hit Texas last week, customers saw their electricity bills going into the thousands of dollars for only a few days' worth of demand. And gosh-durn it all, they're just gonna have to pay up one way or another, despite calls for energy regulation by them-there danged progressive types.

Still Coping — Posted Thursday February 25 2021

As a food bank volunteer, I deliver food to the homebound in Pasadena, Arcadia, Monrovia and Duarte, and my route takes me right past my wife's grave. I can see our joint headstone as I drive by, and I always whisper Sabahelkhayr, ya hayati, ana behebik! (Good morning, my dear, I love you!) With 19 months now gone since her passing, my eyes still invariably fill with tears. Although I'm tapering off my antidepressant medication and my grief therapy sessions, I'm now convinced that I will never recover from the loss of my wife. Other than my family members, my only consolation is that with global climate disruption, ongoing COVID-19 isolation, the virus variants and the insane anti-truth, anti-science authoritarian crap still running rampant in this country, she's not here to suffer through it. That's a hell of a consolation.

Life on Mars? — Posted Thursday February 18 2021

NASA's Perseverance rover has landed successfully on the surface of Mars!! Part of its mission is to detect evidence of life, and I can only wonder what sociological and religious impact it will have on the human race if it finds it.

And It's Gonna Be A Long, Long Time — Posted Thursday February 18 2021

I've been going to the gym for forty years, but in February 2020 my gym closed down due to COVID-19. It then underwent bankruptcy and restructuring and only recently reopened in Arcadia. I've hardly exercised since my wife passed away in July 2019, but I tried it again this Tuesday, barely lasting 15 minutes. I went again this morning and lasted 22 minutes, so I guess I'm on my way. But it's a far cry from the 90 minutes I used to work out. I could only lift about half the weight I used to, and the number of reps I can do is very limited. My body was saying "Hey, I remember this, but I just can't do it any more!" so out of respect for my now-frail 72-year-old frame I'm not gonna push it. I hope your day is going better than mine.

Laying the Blame Where It Belongs — Posted Thursday February 18 2021

Schadenfreude is a German word that means taking joy or pleasure in the misfortune of others. I would be feeling Schadenfreude over the self-inflicted misery that Texas is experiencing right now, with millions of people going without power and water during a record-setting cold snap, but I know that millions of children and progressive adults are also suffering. Texas Governor Greg Abbott and Senator Ted Cruz are blaming renewable-energy utilities for the problem, despite the fact that less than 10% of Texas' power is generated by wind turbines and solar cells. Texas experienced a similar event in 2011, but conservative hatred of regulation and "green" energy blocked progressive efforts to winterize the state's power infrastructure. The blame is laid bare in today's New York Times article. Before flying off to Cancun to escape the cold while his constituents suffered, Sen. Cruz went so far as to point the finger at California, which he described as a dystopia with constant power blackouts and water shortages due to overregulation.
You may recall that under the Obama administration Texas expressed a strong desire to secede from the Union. With the persistent and apparently entrenched animosity of Texas and other Red States against President Biden and the progressive Democratic agenda, I think secession of these states might be a very good idea indeed. I suggest they move to Antarctica.

PS: Speaking of Schadenfreude, is it okay to start celebrating Rush Limbaugh's death? I'm not celebrating, but in truth I couldn't care less. However,

No man is an island, entire of itself; every man is a piece of the continent, a part of the main. If a clod be washed away by the sea, Europe is the less, as well as if a promontory were, as well as if a manor of thy friends or of thine own were. Any man's death diminishes me, because I am involved in mankind; and therefore never send to know for whom the bell tolls; it tolls for thee. — John Donne

But still,

Hegemony, the Great Storm and Boltzmann Brains — Posted Monday February 15 2021

Sorry for abridging Bolling's latest comic, I just didn't find all of it that funny:

On American Christianity in the Age of Trump — Posted Monday February 15 2021

As a Christian, I've often wondered how Donald Trump took hold of (and still holds) his American Christian evangelical base with a vise-like grip, despite the obvious anti-Christian views that he has demonstrably espoused throughout his life. Here is an article written by Isaac Bailey of Davidson College, who poses the same question that I have. My response is "They're not true Christians," but it still raises the question of how so many supposed Christians could be so misled.

Hossenfelder on the Simulation Hypothesis — Posted Saturday February 13 2021

Many practicing hydraulic engineers deal solely with one-dimensional flow regimes (especially those involving open-channel flow), while researchers will get involved in more complicated two- and three-dimensional situations, where the Navier-Stokes equations are utilized. These are highly non-linear equations whose full solutions still elude even computerized analysis. Did you ever wonder how the turbulent flow of a waterfall might be completely described by a Navier-Stokes analysis? Yet Nature does it instantaneously and seemingly without effort. This has led me to wonder exactly how Nature pulls off such things, and whether there's some complicated calculation being conducted somewhere behind the scenes. I've even thought that maybe this is why time itself exists, which might be necessary for Nature to do her stuff.

The idea that there's some calculation going on in Nature is not a new one, and it's just one simple example of why Bostrom's simulation hypothesis is embraced by many scientists. Yet, as the noted German physicist Sabine Hossenfelder points out in her latest video, there is zero empirical evidence for the simulation hypothesis, so it must be relegated to the realm of religious faith, not science. I responded to Hossenfelder's post, noting that if the notions of a creator God and the simulation hypothesis are both wrong, then there is really only one remaining explanation for the profound and consistent physical laws of Nature we observe in our world: that we live in one of a multiverse of possible worlds (or in one branch of a many-worlds universe). As a Christian I believe the answer is God (which if you think about it is pretty much the same as an omniscient simulator), but if I'm wrong then I prefer the many-worlds interpretation, which does not violate the laws of quantum mechanics.
Whichever is true, there remains an inescapable curtain of ignorance that separates us from the actual reality of our universe. It is my fervent hope that when we die we'll find out just what the heck it is. Here is Hossenfelder's video, which runs about 10 minutes:

PS: The Navier-Stokes problem is one of seven mathematical problems featured in the Clay Mathematics Institute's Millennium Prize offering. The prize is $1 million, not an inconsiderable sum for what amounts to an idle pastime for a true geek.

Declare War, President Biden — Posted Friday February 12 2021

Today's news includes reports that President Biden is "anxiously awaiting" the Senate conviction vote against Donald Trump, but I'm sure he has no doubt what the result will be—acquittal, which Trump will view as "exoneration." My own guess is that no more than six or seven Republican senators will vote to convict, leaving Trump in the clear to run for election again or incite another insurrection. If I were Biden and all the other Democrats, I'd see the Senate's acquittal as proof positive that the GOP has no intention of working with the Biden administration, just as it had no intention of working with President Obama's. Biden should therefore announce that he has no faith in the GOP on anything, and proceed to issue as many executive orders as he can get away with. Biden should also make public Trump's taxes and all the information he has on Trump's four-year occupation of the White House, including every scrap of data he has on Trump's dealings with Russia, Stormy Daniels and every other trollop that Trump has bedded since his election.

Fifty Years Ago — Posted Tuesday February 9 2021

Fifty years ago exactly (it was also a Tuesday), I woke up on the floor of my studio apartment in Long Beach sometime just after 6 am. The place was rocking violently, and I realized it was an earthquake. I was in my pajamas, and I rushed out through the door onto the second-floor balcony. Right away I noticed two things: the water in the swimming pool below was sloshing crazily back and forth, and two attractive girls in the apartment next to mine had come out wearing only their panties and bras. My first thought was: Why hadn't I noticed these girls before? And my second thought was: Yeah, this just had to happen on the second day of my last semester at California State College at Long Beach! I later learned that the earthquake's epicenter was forty miles away in Sylmar, with a Richter magnitude of only 6.5, and I was surprised that it was felt so strongly in Long Beach. But there was very little damage where I was, although the aftershocks kept me awake for several days after. We had several other moderate earthquakes years later, but on January 17, 1994 (my older son's 15th birthday and on the MLK holiday), we experienced the 6.7-magnitude Northridge earthquake, which shook our Pasadena house with a violence hard to describe. My wife and I checked on our sons, and we called on other family members, but thank God everyone was alright. Our house underwent moderate damage, but some of my co-workers' homes fared far worse. The next few days at work were interesting. The aftershocks shook my 14th-floor workplace pretty good, and one unnerved long-term employee took early retirement as a result. We're long overdue for another major earthquake in Southern California, if not the "Big One," the predicted 7.6-Richter San Andreas event that's sure to come someday. God save us all.
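Geek PS: A back-of-the-envelope note (my own arithmetic) on what a "7.6" would mean. Seismic energy release scales with magnitude \( M \) roughly as \( E \propto 10^{1.5 M} \) (the Gutenberg-Richter energy relation), so the energy ratio between two quakes is
$$ \frac{E_2}{E_1} \approx 10^{\,1.5\,(M_2 - M_1)} $$
A magnitude 7.6 event would thus release about \( 10^{1.5 \times 0.9} \approx 22 \) times the energy of the 6.7 Northridge quake, and roughly \( 10^{1.5 \times 1.1} \approx 45 \) times that of the 6.5 Sylmar quake.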
Why History Repeats Itself — Posted Friday February 5 2021

BTW, Entschuldigen Sie means "Excuse me," and meine Dame would be much more polite than Fräulein. Otherwise, Ruben Bolling's got the Hitler-Trump analogy spot on.

PS: It's obvious that some of Bolling's characterizations intentionally mimic the work of other comic artists (notably Disney's Carl Barks). In the above comic, Billy Dare is very similar to the character Tintin, an adventurous European youth created in 1929 by the Belgian artist Georges Remi, better known as Hergé. I discovered Tintin many years ago, and my Egyptian wife Munira was also very fond of the comic strip and the many books that Remi published. Our sons grew up with Tintin books, although their school friends had no idea the character existed. In 2011, directors Steven Spielberg and Peter Jackson released the computer-animated film The Adventures of Tintin, which I highly recommend.

On the Fine Tuning Conundrum — Posted Thursday February 4 2021

I remember in early college wondering why the speed of light \( c \) in vacuum is some 2.99792458\(\times 10^8\) meters per second. I thought that maybe there was some deep law of fundamental physics that would provide the answer, perhaps as some complicated combination of \( \pi, e, \epsilon \) and other fundamental numbers that would give the speed of light. That's probably wrong—the speed of light \( c \) is likely just a pure constant of Nature. Much more recently, I discovered that the cosmological constant \( \Lambda \), which appears in the full expression of Einstein's gravitational field equations
$$ R_{\mu\nu} - \frac{1}{2}\, g_{\mu\nu} R + \Lambda g_{\mu\nu} = \frac{8 \pi G}{c^4}\, T_{\mu\nu} $$
is not only a small positive number, but that if it differed by more than one part in \( 10^{123} \) then our universe would never have been possible. But it too is probably just a constant of Nature.

I'm quickly becoming addicted to neuroscientist Robert Lawrence Kuhn's long-running series of PBS and YouTube videos entitled Closer to Truth, whose 30-minute episodes delve into the basic question of why we (or anything) exists. Most of the episodes feature a blend of physics, religion, biology and the science of the human mind, but those dealing with what is known as fine tuning are particularly fascinating. Fine tuning refers to the fact that the observed magnitudes of various physical constants (electric charge, the strength of gravity, the fine structure constant, etc.) are so tightly constrained that if they were even slightly different, life (and possibly even the universe itself) could not exist. Kuhn's most recent venture into the topic is perhaps his best to date, as he interviews the physicists Martin Rees, Leonard Susskind, Russell Stannard, Alex Vilenkin and Roger Penrose regarding their views on the subject. Kuhn comes away with four basic answers to fine tuning: sheer accident; fundamental but as-yet unknown physics; God; and the multiverse. Susskind also addresses the cosmological constant, and while he is an atheist he admits that the observed tiny value of \( \Lambda \) is balanced on a knife edge of unexplainable, mind-boggling precision. At the end of the video Kuhn notes that most physicists today believe that we live in a multiverse (or a "megaverse," as Susskind calls it) of possible universes. If that number is infinite (or finite but preposterously large), then at least a few would be nearly identical to our own, and we just happen to be in one of them.
The video is about 26 minutes, and well worth watching:

A Physicist Speaks Out on the Social Over-Dominance of the Internet — Posted Thursday February 4 2021

New York Times writer Charlie Warzel interviewed retired physicist Michael Goldhaber, a noted scientist who predicted many of the social problems we're currently dealing with, including the rise of inane reality television, website attention-seeking, political shamelessness, rampant celebrity-worship and the influence of terrorists on social media. Goldhaber is just one in a family of distinguished physicists who worked with many notable quantum physicists from the 1940s on. Goldhaber's interest in politics and social issues goes back to his childhood. With the rise of the Internet in the 1990s he predicted what might collectively be called the "attention economy," which spans not only the Internet and information technology but trends in the entertainment industry as well. I encourage you to read the article, along with an earlier interview with Goldhaber posted by the American Institute of Physics in 1995. I cannot help but see myself in some of the issues raised by Goldhaber in the interview, and it seems that my interest in posting articles on my own website may be nothing more than an attempt to grab attention and vent my feelings and opinions. But I also see what I'm doing as a kind of online diary that my children, grandchildren and future progeny can look at long after I'm gone. Hopefully, I will be a positive influence on them in a scary future world that will need all the optimism it can muster.

How Times Have Changed — Posted Tuesday February 2 2021

I received my first Moderna vaccination today, and with the exception of a slightly sore left arm I feel fine. While growing up in Duarte, California in the 1950s, I would eagerly await the ice cream truck coming down Bloomdale Street. When I heard the truck's jingle (which I still remember) I would beg my mother for a dime, which at the time would buy what was called a "Sidewalk Sundae" ice cream bar. Fortunately, outside of the ice cream truck and the occasional visit of Little Oscar's Wienermobile, I never saw this guy:

Those of you my age will remember getting polio booster shots in grade school. We'd have to bring a permission slip from our parents (with a dollar bill attached), then line up for the dreaded shot at the nurse's office. Bruce, John, Greg and I would try to look brave to impress the girls, but inside we were scared to death of those needles.

PS: One day the Wienermobile stopped in front of our house, and my mother and I went inside. I was only about four years old, but I towered over Little Oscar (a middle-aged "little person") whose face and wrinkles made him look strangely old to me. I was glad to get out of there, although I came away with a "Weenie Whistle."

Will They Ever Change? — Posted Monday February 1 2021

Shortly after the stoning of the first Christian martyr Stephen (which the future apostle Paul personally witnessed and approved of), Paul set out with his assistants to the city of Damascus, where he intended to arrest evangelical Christians and bring them back to Jerusalem to be tried for apostasy. Along the way, he encountered Jesus Christ, and was both blinded and enlightened. Paul's Road to Damascus experience is one that many Christians share (I sure did), and in Paul's case it represented a complete reversal of his religious philosophy.
He was totally and forever changed, yet he would always refer to himself as the "chief sinner" for his persecution of the early Christian church, which he always regretted. Today we have millions of supposedly devout Christians who view former president Donald Trump as the new Messiah. They believe that he lost the 2020 election due to criminal conspiracy and fraud, that Democrats drink the blood of slaughtered babies to obtain a precious substance called adrenochrome, and that the government of the United States must be violently overthrown in order to restore a proven sociopath and sexual deviant (Trump) to power. Will they ever experience their own Road to Damascus moment? I highly doubt it.

For Geeks Only: Chameleon Gravity — Posted Monday February 1 2021

Have you ever heard of something called chameleon gravity? I just caught wind of it, but scientific and popular articles have been around on the topic for some time now. It's supposed to explain how galaxies form, as well as providing a window into dark matter and dark energy. It turns out to be yet another \( f(R) \) gravity theory, but in this case \( f(R) \) is a kind of substitute for the cosmological constant \( \Lambda \). It also involves a scalar field \( \phi(x) \), which means it has to have a mass and a kinetic term in the Lagrangian. It's guaranteed to provide additional terms in the equations of motion, which is pretty much what \( f(R) \) theories are designed to do. In July 2019, an article in Nature came out that caught the attention of the popular press, and for a while there was a flurry of Internet articles reporting what might be a major new discovery in cosmology. It all reminded me of how the press made a big fuss over a supposed great discovery that Einstein made in 1928. As Abraham Pais described it in his book Subtle is the Lord: The Science and Life of Albert Einstein, it was much ado about nothing. Still, chameleon gravity is an attempt to dispense with the notion of dark matter by way of modifying Einstein's 1915 gravity theory, an attempt that I welcome since I believe dark matter does not exist. By the way, if the Ricci scalar \( R \) is a pure constant in cosmological theory (and it almost certainly is), then it is easily shown that our universe will inexorably expand forever (a constant positive \( R \) acts like an effective cosmological constant, and the Friedmann equations then admit the de Sitter solution \( a(t) \propto e^{\sqrt{\Lambda/3}\, c\, t} \), in which the scale factor grows without limit). And if the cosmological constant \( \Lambda \) is a true constant, then the rate of the expansion will increase without bound with time.

Q-Nuts, Featuring Good Ol' Charlie Brown — Posted Friday January 29 2021

I never realized Linus was insane.

We Cahn't See! — Posted Friday January 29 2021

I was supposed to get my first Moderna COVID-19 vaccination this morning, but it was canceled due to heavy rain, so I'm stuck indoors with little to do but surf the Internet. Jim Backus' character from 1963's It's a Mad, Mad, Mad, Mad World can't see because his eyes are closed, but there are things we can't see even with giant optical and radio telescopes. The most important of these is dark matter, which I have my doubts even exists. Beginning in the 1930s, astronomers noticed that there didn't seem to be enough observable matter (stars, gas and dust) in galaxies and galaxy clusters to account for their properties, notably the extreme velocities of rotating stars far from galactic centers. This gave rise to the notion of "dark matter" that would have to exist in gigantic halos surrounding galaxies and in between galactic clusters.
Much more recently, detailed studies of gravitational lensing showed that there had to be more matter in galaxies than could be accounted for, thus supporting the idea that some kind of invisible matter existed in the universe besides ordinary matter. In defiance of the dark matter concept, in the early 1980s it was proposed that ordinary Newtonian physics could be modified slightly to account for the effects of these observational discrepancies. A theory called modified Newtonian dynamics (MOND) arose that gave some promise along these lines, but it was deficient in numerous areas (and being a modified classical theory, MOND was also not relativistically correct). To fix this, further theories came along that modified Einstein's general theory of relativity in order to patch up the discrepancies. But these theories were complicated and required too many arbitrary parameters to fit the observational data, although they're still a topic of current research.

Although his 1915 theory has admirably passed all tests to date, Einstein himself believed that it was only an approximation of the truth, although a very good one. Might some simple but reasonable modification of the 1915 theory be closer to the truth? This too is a topic of current research, but everything I've seen to date seems too hopelessly complicated to be considered valid. Although dark matter might actually exist and be the answer to all our observational problems, I'm still clinging to the hope that Einstein's theory might yet be modified to provide a successful theory. One approach, and one that I think deserves more attention, was originally proposed in 1989 by Mannheim and Kazanas, who laboriously worked out the calculations associated with the fully conformal gravity theory deriving from the action
$$ S = \int \!\! \sqrt{-g}\, \left( R_{\mu\nu} R^{\mu\nu} - \frac{1}{3}\, R^2 \right) d^4x $$
This action automatically yields a solution with three parameters that not only reduces to that of Einstein's 1915 theory but also explains much of the data now being attributed to dark matter and dark energy. Meanwhile, here's an interesting video posted by the University of Oxford's Rebecca Smethurst, who questions the existence of dark matter while discussing a recent study that appears to explain the effects of neighboring galaxies on galaxies exhibiting dark matter-like properties.

A Year and a Half — Posted Sunday January 24 2021

My dear wife Munira died exactly 18 months ago. I'm still taking an antidepressant and undergoing grief therapy, but things are gradually getting better. We were married 42 years, and it still doesn't seem real to me, as I vacillate between shock, grief and feeling sort of okay. Our church has been a tremendous support to me, as has been my family. Never stop appreciating those you love, as you won't have them forever.

Geek Saturday — What is a Spinor? — Posted Saturday January 23 2021

All of the ordinary matter in the universe, the "stuff" that everything is made out of, is composed of particles called fermions. They include elementary particles like electrons, quarks and neutrinos, along with composite particles like protons and neutrons. They're all characterized by half-integral quantum spin numbers (\(\pm\)1/2, \(\pm\)3/2 \(\ldots\)), while all the other things—the force carriers like photons, gluons and the Z\(^0\) and W\(^\pm\) particles—are called bosons, having integer spins (0, 1, 2 \(\ldots\)).
Bosons get along with one another, and can even be found clumped together in the same quantum state, while fermions are antisocial loners that tend to avoid other fermions. Being fermions, multiple electrons can be found in atoms, but every electron must have its own unique set of quantum numbers. The helium atom, for example, can have two electrons in the same orbital, but one will have a spin of 1/2 while the other's spin will be -1/2. This is a consequence of the famous Pauli exclusion principle, which everyone learns in high school.

One might guess that fermions and bosons have to be treated differently in quantum mechanics, and this is true. Unfortunately, the algebra that describes fermions (which make up all the familiar stuff) differs substantially from that of bosons, and that algebra is a tough act to follow. While the quantum-mechanical equations of bosons are fairly understandable, those of fermions are complicated and non-intuitive. They are described by something called spinor algebra; in practice a spinor is often referred to as a kind of "square root" of an ordinary vector in Euclidean geometry. In his latest post, Columbia University mathematical physicist Peter Woit also asks about spinors, but being quite conversant in the subject he seems to have no problem dealing with them. Compare this attitude with just about anyone else's, including that of the late Sir Michael Atiyah, who often expressed frustration with spinors. If you're a mathematician, spinors will take you into all kinds of arcane areas, like group theory and topology (often in higher dimensions), but physicists prefer sticking to more mundane applications, where spinors are still difficult to comprehend because much of the math cannot be avoided. I remember one professor at USC touching on the subject of spinors, but he didn't elaborate, probably because he preferred to stay away from the subject. While learning the Dirac relativistic electron equation (which incorporates the Dirac spinor), I got hopelessly confused. It wasn't until many years later that I tried to improve my situation by writing an elementary paper on them, but even today I do not really understand the damned things. I still get a little ticked off when I think that God made the most common, ordinary stuff in the universe so difficult to comprehend.
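Geek footnote: here's a tiny numerical taste of spinor weirdness, a sketch of my own in Python (not from Woit's post). Rotate a two-component spinor about the z-axis with the operator \( U = e^{-i\theta\sigma_z/2} \): a full 360-degree turn does not bring it back to itself; only a 720-degree turn does:

    # Rotating a spin-1/2 spinor: a 360-degree turn flips its sign.
    import numpy as np

    def rotate_spinor(psi, theta):
        # U = exp(-i * theta * sigma_z / 2), written out explicitly
        U = np.array([[np.exp(-1j * theta / 2), 0],
                      [0, np.exp(1j * theta / 2)]])
        return U @ psi

    psi = np.array([1.0 + 0j, 0.0 + 0j])   # "spin-up" state
    print(rotate_spinor(psi, 2 * np.pi))   # picks up an overall minus sign
    print(rotate_spinor(psi, 4 * np.pi))   # back to the original state

An ordinary vector returns to itself after 360 degrees; the extra minus sign is the hallmark of the "square root of a vector" business mentioned above, and it's perfectly physical: the 720-degree periodicity has been confirmed in neutron interferometry experiments.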
Thank God, the Trump Nightmare is Finally Over — Posted Wednesday January 20 2021

In the dark days of the George W. Bush presidency and in the far darker days of the Donald Trump presidency, I would often think to myself that the biggest mistake that both Lincoln and General William Tecumseh Sherman made in 1864-65 was to first allow the South to be spared total annihilation, and then to welcome Southerners back into the Union, "With Malice Toward None." What we got was a century of Jim Crow, lynched and murdered African Americans and "Lost Cause" stupidity, all under the guise of "Southern gentility" mixed with the enduring battle cry of "The South Shall Rise Again." And rise again it did. As the sentiment expressed in this article in today's USA Today makes clear, let us hope that President Biden will not repeat Lincoln's and Sherman's mistake. Sure, let us act like true Christians and try to be understanding and kumbaya and all that, but let us always remember that the Red States still hate progressives with a passion, and they now have vengeance and destruction in mind. If they will not turn away from their collective racism, bigotry and anti-science, anti-fact retardation, the best we can do is to cut them off from the Union entirely and let them cope on their own, perhaps as a collection of seceded states. I don't really want any more of my tax dollars going there now anyway, since they have been and continue to be a net drain on federal assistance.

Is It Too Late? — Posted Tuesday January 12 2021

When was the last time you had a heated discussion with another person over some ideological issue, only to come to the conclusion that she was right and you were wrong? And not only did you recognize that you were wrong, you wholeheartedly changed your mind and adopted your opponent's position?

The 2020 presidential election ended with Joe Biden winning 80 million votes against Donald Trump's 74 million votes. That 6-million vote difference was decisive, but the total vote split was uncomfortably close—52% to 48%—meaning that roughly half the country's voters went for Trump. Not only that, but nearly 80% of Trump's supporters today devoutly believe a rampant conspiracy theory that Biden stole the election through fraud, despite some 60 high-level court cases and numerous bipartisan investigations that confirmed the legitimacy of Biden's victory. Add to this the number of crazy conspiracy theories that have sprung up following Trump's 2016 presidential win, including the claims that Hillary Clinton operated a child sex ring out of a Washington D.C. pizza parlor, that global climate disruption is a hoax designed to give China an economic edge over the U.S., that Barack Obama was born in Kenya and was thus ineligible to be President, and that the deaths attributed to COVID-19 are actually due to other causes like colds and influenza. I don't care if people want to believe in extraterrestrials or that magnets can cure cancer, but the insane beliefs of nearly 50% of Americans are too much for me to accept. Today's conservatives seem to believe that personal opinions, emotions and feelings are more legitimate than facts and objective evidence, and the tragedy of January 6 proves that these beliefs have taken a very dangerous turn.

My son's best friend at university is now a respected medical doctor and specialist in nephrology. He's one of the most intelligent people I've ever met, but despite his education and training he's completely bought into the lies and conspiracy theories that Trump and his followers have promoted. This scares the hell out of me—if people as smart and educated as he is can be so completely and irreversibly hoodwinked by utter nonsense, then what hope can we have that this country can ever be truly sane? Alluding to my first paragraph, can Trump supporters ever change their minds, or are they permanently stuck in anti-fact, anti-science fantasy? Ross Douthat (pictured), a writer for the New York Times whose conservative views I have often disagreed with, wonders in a recent column whether the Republican Party can break away from the Trumpian insanity now threatening the country, or itself be broken permanently through its adherence to lies, paranoia and fantasy. It's well worth reading, and I hope it's not too late for the country to recover.

Fury — Posted Saturday January 9 2021

Video clips of the shameful, violent and murderous attack by Trump right-wingers on the Capitol Building this week reminded me of the classic 1936 film Fury, starring Spencer Tracy and Sylvia Sidney and directed by the noted German film maker Fritz Lang.
The movie features a mob attack on a jail, filmed by news camera crews whose footage is later used in court to identify and convict the attackers. I urge readers to find and watch the film (most libraries have the DVD) as a lesson in how the mob mentality of 85 years ago is strikingly similar to what we saw earlier this week. Here's a brief YouTube clip of the final court scene, which I hope is repeated when those responsible for the Capitol attack are brought to justice:

Trump IS the Snake — Posted Saturday January 9 2021

In 2016 then-candidate Trump read a poem to adoring followers at a Florida rally. Called "The Snake," the poem talks about a tender-hearted woman who kindly nurses an injured snake back to health, only to have the snake bite her. "You knew I was a snake when you picked me up!" says the serpent. (The story has been told many times in different forms, usually as a frog who gives a lift to a scorpion across a river.) Trump's retelling was intended as a warning that being kind to immigrants was a mistake, as they would surely turn on us (becoming thieves, rapists, killers, etc.) The crowd ate it up. I liken Trump's crowd to the Israelites in Exodus 32 who, weary of waiting for Moses to come down from the mountain, fashioned the golden calf and partied. When Moses returned, things quickly went downhill for the Israelites. Now it's America's turn.

The incredible irony of Trump's story is too obvious today, but it was evident long before he ran for president. A serial womanizer and sexual molester, the thrice-married Grab-'em-by-the-Pussy Chief Executive had shown his true colors from the time he was a young man. On numerous occasions I sadly posted several links on Trump's current wife Melania, who posed for a series of sickening girl-on-girl nude photos twenty years ago here and here (Warning: graphic). "This is your new First Lady!" I raged at the time, although it seemed no one was paying attention. But no, Trump was a rich man, and his wives were all beautiful and hot, and that's all that mattered to the majority of American voters. Now Trump has finally been exposed for the monster he is, but his sycophantic base is sticking with him, now turned violent and murderous. Worst of all, they're all self-professed conservative Christians. Meanwhile, the GOP is still the Party of Trump. And it appears that things will only get worse for America.

I'm Moving! (Someday) — Posted Thursday January 7 2021

From Hamlet, Act V, Scene I:

Assistant: The gallows-maker, for that frame outlives a thousand tenants.

Gravedigger: I like thy wit well, in good faith. The gallows does well, but how does it well? It does well to those who do ill. Now thou dost ill to say the gallows is built stronger than the church ...

Assistant: [So] Who builds stronger than either the mason, the shipwright, or the carpenter?

Gravedigger: Cudgel thy brains no more about it, for thy dull ass will not mend his pace with beating. And when you are asked this question next, say "a grave-maker." The houses that he makes last till doomsday.

King Tut — Posted Sunday January 3 2021

I've long been captivated by ancient Egypt, and I'm planning to visit the new billion-dollar Grand Egyptian Museum this year. I went to the Valley of the Kings with my late Egyptian wife and our sons several years ago, and now I'm longing to see the shriveled Mr. Tutankhamun* in person again,
along with the other tombs in the Valley (my wife didn't think it was a big deal, having already seen everything as a child and as a Cairo University chemical engineering student). Here's a fascinating recent video of the 1922 discovery of Tut's tomb:

* He's got a condo made of stone-a

What is Consciousness? — Posted Sunday January 3 2021

The unexamined life is not worth living. — Socrates

When I was little I had numerous pets, including cats, dogs, snakes and lizards. I remember wondering what it would be like to have a dog's brain, to see how I would think and view the world around me. That was certainly the first time I pondered the notion of consciousness, although I had no idea what the concept meant. Years later I would wonder why humans were so much different from other animals, even those considered to be intelligent in some sense. I learned that chimpanzees could make simple tools (like moistening a stick to catch termites), that killer whales used clever tactics and teamwork to catch their prey efficiently, and that porpoises could be trained to do complex tricks (which I witnessed during visits to the now long-defunct Marineland in Southern California*). More interesting were studies I read about in which animals were placed before mirrors to see what reactions they might exhibit. Many animals ignored the mirrors, some felt threatened, believing they had encountered a rival or enemy, while a few exhibited interesting behaviors, as if they somehow knew the reflections were themselves but couldn't fully comprehend what was going on.

In college I was required to take a course in psychology. I had little interest in the subject, but I was exposed to the concept of self-awareness, and I recall wondering why some animals like apes and porpoises might express some degree of intelligence but did little more than eat, sleep, reproduce and avoid danger. More importantly, unlike humans they did not progress over many millions of years, apparently lacking the ability to develop true thought or self-awareness. Consequently, chimpanzees, porpoises and elephants will never discover calculus or writing (or develop nuclear weapons). But in spite of all our fantastic mental abilities, we still do not understand the biological basis behind consciousness, despite many decades of technological progress in the neural, physiological and cognitive fields of study. Where does self-awareness reside in the human brain, and how does it manifest itself? Indeed, what is the purpose of consciousness, if millions of years of evolution would have been sufficient to ensure our survival as a species without it?

It's entirely possible that self-awareness is a distinctive trait of humans alone, a subject that is addressed in this recent Aeon article, whose writer implies that it is indeed uniquely human. But better insight can be obtained from the work of the noted neuroscientist Robert Lawrence Kuhn, the brilliant creator and host of the great television and YouTube series Closer to Truth. Kuhn has pondered these questions many times, always seeking out top experts in the field to help him find the answers (Kuhn is equally passionate about other notable puzzles, especially in the fields of religion, physics and biology, and I encourage you to seek him out). His latest venture into the issue of consciousness appears in the following video:

But as is often the case with many of the questions he tries to answer, Kuhn largely comes away empty-handed.
Of the many mysteries of our universe and existence, consciousness may be the most unanswerable of all.

By far, my greatest regret in life (other than having been a non-Christian and total self-centered jerk until my later years) is that I wasn't really self-aware for most of my life. Descartes famously said "I think, therefore I am," but thinking alone isn't enough. The most important aspects of self-awareness are empathy, humility and true concern for the well-being of others. This is what Christ taught, and I wish to God I had been aware of it earlier.

* I went to Marineland often as a kid, and later took my wife and children there many times. It was an occasional feature of the popular 1950s television series Sea Hunt, which I still love.

Rambling Thoughts on Quantum Immortality — Posted Saturday January 2 2021

Happy New Year, and good riddance to 2020.

Newton and many who came after him believed that if one knew the precise position and velocity of every particle in the universe, then one could rewind or fast-forward that information to know the past and future of everything perfectly. In particular, the initial conditions of the universe would mean that everything would evolve deterministically, with the result that all events in the future would be set for all time. There would therefore be no free will for humans, whose (admittedly very complex) collections of atoms and molecules would only be following a fixed set of events. Did you ever rob a bank or cheat on your wife? Don't feel too bad, you couldn't help yourself—like Calvinism, it was predestined from the start. Or so some believe. But it's impossible to know such information to infinite precision (you'd have to know everything out to an infinite number of decimal places), and chaos theory would seem to guarantee that anything can happen randomly if such precision is not available.

So what does this have to do with anything? It means that free will really does exist (at least to some extent), that you weren't predestined to murder your mother-in-law (so it's to prison you're a-going), and more importantly that the probability of any past or future event remains largely random and undecided.

Indeed, probability has everything to do with it. Whatever happens, its probability is either zero, one, or something in between. It's illogical to think that the likelihood of an event is 150% or less than zero. This principle is behind what is known as unitarity in quantum mechanics, which was outlined in a recent PBS Spacetime episode on the nature of quantum information (which you can watch below). Host Dr. Matt O'Dowd explains that unitarity involves both the preservation of probability (never less than zero or greater than one) and the time-reversal symmetry of physics, in which the replacement \( t \rightarrow -t \) keeps fundamental physics intact.

Not so, you might think. While a film of the elastic collision of billiard balls can be run backward without anyone knowing which way the film is running, the dropping of an uncooked egg on the floor and its breaking cannot be filmed without revealing the proper sequence of events—it's a fundamental property of entropy, right? But the collision of real billiard balls invariably involves the dissipation of energy (which can be revealed by a flash of infrared light when the balls are struck, or by the slowing of some of the balls due to friction), and this would give away the sequence. When no energy is dissipated there's no change in entropy, and time-reversal symmetry is preserved.
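To make the unitarity claim concrete, here is the textbook statement in equations (a standard quantum mechanics sketch of my own, not anything specific to the PBS episode):

```latex
% Unitary time evolution generated by a time-independent Hamiltonian H:
\[ \psi(t) = U(t)\,\psi(0), \qquad U(t) = e^{-iHt/\hbar}, \qquad U^\dagger U = \mathbb{1}. \]
% Unitarity preserves total probability at all times,
\[ \langle \psi(t)\,|\,\psi(t) \rangle
   = \langle \psi(0)\,|\,U^\dagger U\,|\,\psi(0) \rangle = 1, \]
% and running the film backward is just the inverse map:
\[ U(-t) = U(t)^{-1} = U(t)^\dagger . \]
```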
As the PBS Spacetime episode explains, the fundamental Schrödinger equation of quantum physics preserves time-reversal symmetry, but the collapse of the wave function (according to the generally accepted Copenhagen interpretation of quantum mechanics) does not. Many physicists believe that this fact alone invalidates the Copenhagen interpretation, a problem that has notably given rise to ideas like the many-worlds interpretation, in which wave function collapse does not occur (albeit at the expense of other universes branching off due to the mere act of observation). If true, this would seem to mean that the simple act of creating or transferring information by sentient minds or otherwise conscious observers somehow gives rise to multiple universes.

The connection of information with unitarity culminates in the notion that information, once created, can never be destroyed. Exactly how information is preserved is a mystery—some believe that the universe stores everything on its boundary surface via the holographic principle. This principle (actually a conjecture) would seem to answer the so-called black hole information paradox, in which information falling into a black hole is either lost or somehow preserved when the black hole evaporates.

All this is getting pretty metaphysical if not downright religious in tone, so I'm stopping here. Meanwhile, enjoy the video:
Effective Interactions (II)

To a certain extent, a way out of the explosion of dimensionality discussed in Sec. 4.1 may consist in using a better single-particle space. Instead of parametrizing fields $a^+_x$ by space-spin-isospin points $x$, one can use a parametrization by the shell-model orbitals $\phi_{i}(x)$ that are active near the Fermi surface of a given nucleus, i.e., by fields

\[ a^+_i = \int\!\!\!\!\sum {\rm d}x\;\phi_{i}(x)\, a^+_x . \tag{64} \]

When a complete set of orbitals is used, the descriptions in terms of the creation operators $a^+_i$ and $a^+_x$ are equivalent. However, one can also attempt a drastic reduction of the set $a^+_i$ to a finite number, $i$=1...$M$, of ``most important'' orbitals, much as we previously used finite sets of space-spin-isospin points instead of continuous variables. The reduction is now not a mere question of discretizing continuous fields, but involves a serious limitation of the Hilbert space.

In quantum mechanics one can always split the Hilbert space into two subspaces, $\vert\Psi\rangle=P\vert\Psi\rangle+Q\vert\Psi\rangle$, where $P$ and $Q$ are projection operators such that $P+Q=1$. Then, the Schrödinger equation $H\vert\Psi\rangle$=$E\vert\Psi\rangle$ is strictly equivalent to the following 2$\times$2 matrix of equations,

\[ \left(\begin{array}{cc} PHP & PHQ \\ QHP & QHQ \end{array}\right)
   \left(\begin{array}{c} P\vert\Psi\rangle \\ Q\vert\Psi\rangle \end{array}\right)
 = E \left(\begin{array}{c} P\vert\Psi\rangle \\ Q\vert\Psi\rangle \end{array}\right) . \tag{65} \]

Using the second equation, one can now formally express the ``excluded'' component, $\vert\Psi_Q\rangle$$\equiv$$Q\vert\Psi\rangle$, of the wave function through the ``kept'' component, $\vert\Psi_P\rangle$$\equiv$$P\vert\Psi\rangle$, i.e.,

\[ \vert\Psi_Q\rangle = \frac{1}{E-QH}\,QH\,\vert\Psi_P\rangle, \tag{66} \]

and put it back into the first equation. This gives the Schrödinger equation reduced to the ``kept'' Hilbert space,

\[ H_{\mbox{\scriptsize eff}}\,\vert\Psi_P\rangle = E\,\vert\Psi_P\rangle, \tag{67} \]

where the effective Hamiltonian $H_{\mbox{\scriptsize eff}}$ is given by the Bloch-Horowitz equation [Blo58],

\[ H_{\mbox{\scriptsize eff}} = H + H\,\frac{1}{E-QH}\,QH . \tag{68} \]

The main question is, of course, whether the Bloch-Horowitz effective interaction, $V_{\mbox{\scriptsize eff}}$=$H_{\mbox{\scriptsize eff}}-T$, can be replaced by a simple phenomenological interaction and used to describe real systems. In particular, when a two-body, energy-independent interaction is postulated in a very small phase space, one obtains the shell model, which has been used successfully for many years in nuclear structure physics.

Figure 9: Dimension of the shell-model space for calculations of $N$=$Z$ nuclei within the $pf$ space. (Picture courtesy: W. Nazarewicz, ORNL/University of Tennessee/Warsaw University.)

In order to illustrate the dimensions of the shell-model Hilbert space, in Fig. 9 we show the numbers of many-fermion states that are obtained when states in $N$=$Z$ medium-heavy nuclei are described within the $pf$ space (20 s.p. states for protons and 20 for neutrons). Currently, complete solutions for the $pf$ space are becoming available, i.e., dimensions of the order of $10^9$ can effectively be treated. Progress in this domain closely follows the progress in the size and speed of computers, i.e., one order of magnitude is gained about every two to three years.
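As a concrete illustration of Eqs. (65)-(68), here is a minimal numerical sketch (mine, not from the lecture notes) that builds the Bloch-Horowitz effective Hamiltonian in a small "kept" space for a random Hermitian toy H and recovers an exact eigenvalue; note that $H_{\rm eff}$ depends on the exact energy $E$, so it must be solved self-consistently:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Hamiltonian on a 6-dimensional Hilbert space; keep a 2-dimensional P space
n, m = 6, 2
A = rng.normal(size=(n, n))
H = (A + A.T) / 2                             # real symmetric stand-in for Hermitian H

P = np.zeros((n, n)); P[:m, :m] = np.eye(m)   # projector on the "kept" orbitals
Q = np.eye(n) - P                             # projector on the excluded space

def h_eff(E):
    """Bloch-Horowitz effective Hamiltonian in the P space, Eq. (68)."""
    QHQ = Q @ H @ Q
    # Resolvent 1/(E - QHQ), restricted to the Q subspace (pinv leaves P untouched)
    G = np.linalg.pinv(E * Q - QHQ)
    Heff = H + H @ G @ Q @ H
    return (P @ Heff @ P)[:m, :m]

# Self-consistency: E is an exact eigenvalue of H iff it is an eigenvalue of
# h_eff(E).  A simple fixed-point iteration (may need damping in general):
E = np.linalg.eigvalsh(H)[0] + 0.1
for _ in range(50):
    E = np.linalg.eigvalsh(h_eff(E))[0]

print("fixed point:", E)
print("exact ground state:", np.linalg.eigvalsh(H)[0])  # the two should agree
```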
We shall not discuss these methods in any more detail, because dedicated lectures have been presented on this subject during the Summer School.

Jacek Dobaczewski, 2003-01-27
Non Local SETI

Concerning quantum entanglement, it is an ascertained reality that when two particles have first interacted together and are then separated, even by the greatest distance, they remain linked: in addition to Dr. Aspect's experiment in 1982 and the earlier EPR (Einstein, Podolsky, Rosen) "gedanken" experiment of 1935, what fully proves the reality of entanglement are experiments on the quantum teleportation of simple particles such as photons or electrons. Up to this point everyone agrees. But several new hypotheses postulate that this mechanism might be active not only in the micro world, but also in the mesoscopic (intermediate-scale) and even macroscopic domains. And not just this. Some, including Prof. John Wheeler with his hypothesis of so-called retrocausation, are convinced that, considering just particles, some hidden link may exist between all particles in the Universe, because at time zero (the Big Bang) all particles were connected and strictly interacting. It is not yet known which particle parameters are affected here, in addition to spin and polarization; maybe the quark color too. But if this hypothesis is true then, at a certain level, everything should be non-locally linked inside this universe, and possibly also between multiverses and possible other dimensions. Maybe a sort of "fossil link" might still be present now.

The framework of this idea stands upon the so-called "implicate order" elaborated many years ago by the quantum physicist David Bohm, who created the mathematical apparatus that describes what happens in the entanglement process by expanding the Schrödinger equation (the most important equation of quantum mechanics) with an additional parameter called the "quantum potential", whose character is non-local (namely, instantaneous). According to Bohm's physics, reality is constituted of two interconnected domains: a local and a non-local one. The first obeys Newton/Einstein physics (finite light speed, etc., on which no one argues); the second obeys another law of which quantum theory is only the tip of the iceberg. Some, mostly philosophers, think that the second realm is just "consciousness" while the first is matter/energy. In reality, if particles all over the universe maintain some hidden link together, this means that even the cells of our body are affected. In particular: neurons.

And now I come to Dr. Thaheld's hypothesis of "non-local astrobiology". The hypothesis is that neurons (being themselves constituted of particles) are able to receive non-locally some kind of sentient information, which is then expressed in brainwaves (alpha, delta, theta). From this point on all the investigation becomes absolutely conventional, because whatever the method for sending information, that information is deposited inside neurons whose electrical activity produces brainwaves. So you just have to look into them, first using an EEG apparatus (of a very high-resolution kind, in this specific case) and then using a specific algorithm (Fourier, Karhunen-Loève, multi-scale computational procedures, or even a simple time-series analysis) which is able to extract a signal from the background noise inside brainwaves (in fact, what is of interest here is not the shape of the brainwave itself but whether something is deposited inside it). There might be a very structured message that can be decoded by such an algorithm, so that the analysis becomes essentially identical to the one used in standard SETI.
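To make the proposed analysis concrete, here is a minimal sketch of the simplest version of such a pipeline (my illustration, not Thaheld's actual procedure): looking for a persistent narrowband component buried in noisy EEG-like data with an averaged Fourier power spectrum. The `eeg` array and the 7 Hz "message" tone are purely hypothetical stand-ins:

```python
import numpy as np

fs = 256.0                          # sampling rate (Hz), typical for EEG
t = np.arange(0, 60, 1 / fs)        # one minute of data

# Synthetic stand-in for an EEG trace: broadband noise plus a weak,
# hypothetical 7 Hz narrowband component playing the role of a "message".
rng = np.random.default_rng(1)
eeg = rng.normal(scale=1.0, size=t.size) + 0.2 * np.sin(2 * np.pi * 7.0 * t)

# Averaged periodogram (Welch-style): split into segments, average spectra
seg = 4096
n_seg = t.size // seg
spectra = [
    np.abs(np.fft.rfft(eeg[i * seg:(i + 1) * seg] * np.hanning(seg))) ** 2
    for i in range(n_seg)
]
power = np.mean(spectra, axis=0)
freqs = np.fft.rfftfreq(seg, 1 / fs)

# Flag bins that stick out far above the median noise floor
floor = np.median(power)
candidates = freqs[power > 10 * floor]
print("candidate narrowband components (Hz):", candidates)   # ~7.0 Hz
```

A real search would of course need artifact rejection, many subjects and sessions, and a statistical threshold corrected for the number of bins examined; this only shows the shape of the detection step.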
The only difference between NLSETI and SETI is that in the first case information is assumed to be received instantaneously through the quantum entanglement mechanism, and in the second case through radio or optical photons whose intensity decreases with the inverse square of the distance. Of course many detractors of this hypothesis will say that the entanglement mechanism is not able to transfer information, because we acknowledge a quantum entanglement state only when we observe one of the two particles, and at that moment we make the wave function linking them collapse; which is true per se, of course. But they persist in not understanding that once a non-local link like quantum entanglement is established, it can be used to transfer information from a quantum state to a classical one (such as the neurons in the brain), in the form of information in the brainwave, which we can indeed measure.

It is obvious that standard SETI is totally limited by the distance factor: the probability of finding an intelligent signal increases with the source's distance, but at the same time the signal amplitude diminishes as the inverse square of that distance. Here is the trap. We have tried to increase radiotelescope aperture and acquisition modes (such as the recent Square Kilometre Array technique, for instance), receiver sensitivity, amplifier power, the number of channels that can be detected simultaneously (up to one billion, nowadays) through multi-channel spectrum analyzers, the power of the analysis algorithms, etc. After 50 years, according to the SETI Institute's protocol (see the note * at the end about the SETI PROTOCOL), the result is just discouraging. Therefore trying the NLSETI way is not that bad, and not even so expensive. Of course it has nothing to do with telepathy, because the quantitative analysis is intended to be done directly, by making measurements on neurons through brainwaves.

If something true were found (after checking all possible sources of systematic error or interference) we would have two results: a) the entanglement mechanism is extended everywhere in the universe; b) an informative sentient message could be quantitatively decoded. And, apart from the hypothesis per se, I am interested only in the quantitative/mathematical aspect of the test.

Where does the "message" come from? We can only hypothesize. Assuming that all the sources of background noise can be eliminated, we have two possibilities: a) someone particularly intelligent has sent the message through non-local means, maybe using "quantum repeaters" placed somewhere in the universe (in order to avoid decoherence); b) the test subject himself has been able to connect non-locally to a sort of "server" that is placed not in cyberspace but rather in the quantum void, where a sort of "big informative library" has been deposited for eons. Maybe everyone everywhere spontaneously uploads this kind of information there all the time without even knowing it. If completely new information is found, this means that the test subject has downloaded something from there and then transferred it to the neural electrical activity, which then manifests the information, and we reconstruct it technically. This is, grossly speaking, the assumption of NLSETI. As you see, it is of double importance: for fundamental physics and for SETI. An attempt does not hurt.

Some other considerations:

1. It is potentially possible to send an answer in quasi-real time by irradiating neural cells using a nanopulsed laser (with a modulated structure) and/or a magnetic field.
Some experiments have already been done in medical labs concerning the entanglement between two test tubes containing neural cells that were previously linked together through a chemical substance such as an anesthetic.

2. Independently of all this, a quantum theory of the brain already exists, due to the mathematical physicist Roger Penrose and the anesthesiologist Stuart Hameroff. In brief, this theory says that the microtubules inside each neuron work in a so-called "orchestrated" entanglement (their "orchestrated objective reduction"). They are all together (the entire ensemble of them) described by a wave function (the typical equation of quantum mechanics). Normally this wave function collapses when a quantum system is observed, before which all possibilities coexist in superposition. Unlike normal quantum systems, in the brain the wave function collapses spontaneously more or less every 1/40 of a second: this collapse is a physical (geometrodynamic, in terms of spacetime) collapse at the Planck scale (quantum void: 10^-33 cm), where both relativity and quantum theory (due to the micro-scale involved here) are required. What, then, in simple terms does the wave function collapse consist of? It is a "consciousness moment". We normally experience it a million times or so every day. Therefore so-called "consciousness", in order to manifest itself, needs a neural correlate: the brain. Otherwise the wave function remains suspended and doesn't collapse. This means, in a few words, that consciousness and physical matter cannot exist one without the other (thus contradicting almost everything in religions of any kind).

Prof. Hameroff found that microtubules are precisely the ideal physical vectors to permit entanglement inside the brain, because they are well insulated from any kind of interaction that might destroy quantum information. The quantity of consciousness depends on how much energy is inside the brain, namely how much mass of active elements (microtubules) is present and able to trigger full consciousness, whose "power" is inversely proportional to the duration of the process, i.e., the time taken for the wave function (uniting all the microtubules in the brain in one quantum-coherent domain) to collapse. So: according to this hypothesis, if it is demonstrated to be true, the brain is a purely quantum system. From this (even if Penrose & Hameroff are maybe unaware of Dr. Thaheld's hypothesis on NLSETI) it is not difficult to deduce that: a) if all brains are quantum systems based on the entanglement mechanism within their components, they are ideal communication centers; b) whatever comes from outside also affects consciousness. But we can measure only neurons, and not what a person "feels". This doesn't exclude that at the consciousness (and not neural) level a person can potentially acquire ideas suddenly: namely, be able to connect to a universal "server" or to receive "non-local emails" directly from someone (by the way: what exactly is a genius? And how exactly does one become a genius?). It is exactly the same mechanism as the Internet; the only difference is that the mechanism here is non-local. Therefore NLSETI, being experimental and not speculative, makes it possible to quantitatively prove or disprove the hypothesis of a connection between intelligent beings in the universe through quantum entanglement.
Concerning the "quantum mind" theory of Penrose & Hameroff, in spite of Max Tegmark's rebuttal (and others'), it is based on the fact that microtubules (inside neurons) are highly isolated by a specific gel; therefore there is sufficient time to transfer information before the overall wave function collapses due to thermal effects. I am sure that those who are largely more advanced than us did two things: a) used and manipulated the quantum vacuum (playing with virtual particles, using them as the elements of a quantum computer) in the same way we do with a silicon chip, in order to organize a library of universal information of every kind; b) deliberately sent information everywhere, hoping that someone would catch it. Of course some persons catch the message unconsciously but not scientifically: "they" know it, and so they decided to leave a trace in the brainwave too, in order to permit us to demonstrate the mechanism scientifically. Our duty is to verify this scientifically and, if something is present, to decode the information, doing many trials and using many test subjects. Fortunately this is not science fiction. I simply think it is time to turn the page, if we really want to attempt a real communication with alien intelligence. I have the impression that if we really want to know more about true alien intelligence, we have to understand better what exactly "reality" is.

* SETI PROTOCOL – They say that a SETI signal is considered as such only if it is persistent in time, namely that it always comes from the same alpha-delta (right ascension-declination) coordinates, which of course must be verified by many observers everywhere in the world. Correct, of course, but at the same time highly limited. In the truly bureaucratic way of the above protocol, everything else is excluded: not only internal/external noise or interference (as often happens), but also possible high-proper-motion sources, namely possible sources transiting inside the solar system (in substance, they throw out the baby with the bathwater). Of course it is not so difficult to sweep the antenna inside an error circle that grows slightly bigger and bigger until we find the same signal again at a slightly different coordinate position, so that we can reconstruct the orbit and track it like a comet (to the full happiness of Dr. Freeman Dyson). But this is not done. Why? Because the SETV branch of SETI is not politically correct. But this is not a scientific attitude; this is religion, or even politics. Of course I still support standard SETI (though I no longer practice it): sooner or later we'll find something. But that something will be the result of a pure selection effect: just like trying to find some kind of monkey-type aliens while wearing dark glasses or looking through a smoked filter. There is more out there, methinks.
Canonical Analysis of Radiating Atmospheres of Stars in Equilibrium

ZOLTÁN KOVÁCS, LÁSZLÓ Á. GERGELY and ZSOLT HORVÁTH
Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
Departments of Theoretical and Experimental Physics, University of Szeged, Dóm tér 9, H-6720 Szeged, Hungary

(Research supported by OTKA grants no. T046939 and TS044665, the János Bolyai Fellowships of the Hungarian Academy of Sciences, the Pierre Auger grant 05 CU 5PD1/2 via DESY/BMF and the EU Erasmus Collaboration between the University of Szeged and the University of Bonn. Z.K. and L.Á.G. thank the organizers of the 11th Marcel Grossmann Meeting for support.)

The spherically symmetric, static spacetime generated by a cross-flow of non-interacting null dust streams can be conveniently interpreted as the radiation atmosphere of a star which also receives exterior radiation. Formally, such a superposition of sources is equivalent to an anisotropic fluid. Therefore there is a preferred time function in the system, defined by this reference fluid. This internal time is employed as a canonical coordinate, in order to linearize the Hamiltonian constraint. This turns out to be helpful in the canonical quantization of the geometry of the radiating atmosphere.

Keywords: canonical gravity, spherical symmetry, null dust

The quantum theory of gravitational collapse has motivated many authors to study models with both in- and outgoing thin null dust shells in a spherically symmetric geometry. Such models can equally apply to other phenomena, like radiative domains around stars in thermodynamical equilibrium. The model of a radiative stellar atmosphere composed of two null dust streams provides good prospects for carrying out a complete canonical analysis and quantization. We present here an overview of the Hamiltonian description of two cross-streaming radiation fields with spherical symmetry, and the first steps towards the Dirac quantization of this constrained Hamiltonian system.

Letelier demonstrated that the energy-momentum tensor of two superimposed, counter-propagating radiation fields is equivalent to the energy-momentum tensor of a specific anisotropic fluid [Letelier]. Based on this algebraic equivalence we have recently shown that the dynamics derived by extremizing the matter Lagrangians of these two models are the same [HKG]. For the purpose of canonical analysis the two cross-flowing radiation fields can therefore be substituted with a single anisotropic fluid (with radial pressure equaling the energy density and no tangential pressures). The equivalence with the fluid model is crucial for our purposes, since earlier works on the Hamiltonian formalism of two cross-flowing radiation fields with spherically symmetric geometry, although achieving important results, could not solve the problem of the absence of an internal time [BicakHajicek]. The possibility of replacing the two-component null dust with an anisotropic fluid raises the possibility of introducing the proper time as an internal time in the Hamiltonian formalism, in analogy with the case of the incoherent dust [BrownKuchar].

We foliate the static and spherically symmetric geometry by spherically symmetric leaves labelled by the parameter time t:

\[ ds^2 = -\left(N^2 - \Lambda^2 (N^r)^2\right) dt^2 + 2\Lambda^2 N^r\, dt\, dr + \Lambda^2 dr^2 + R^2 d\Omega^2 , \tag{1} \]

where Λ and R are the metric functions, and N and N^r are the lapse function and the non-vanishing component of the shift vector, respectively [BCMN].
A static, spherically symmetric space-time describing the cross-flow of two null dust streams (or, equivalently, an anisotropic fluid) has been found [Gergely]; we refer to its line element, expressed in terms of the time and radial coordinates of the fluid particles, as the metric (2). Motivated by this exact solution, we chose the scalar fields appearing in the metrics (1) and (2) as the canonical coordinates of gravity and of the matter source. The proper time of the fluid particles provides the internal time for the colliding radiation fields, whereas the radial coordinate gives the Lagrangian coordinate of the fluid particles for constant angular coordinates.

In order to provide the Hamiltonian description of this model, we perform the Legendre transformation of the Lagrangian describing two non-interacting null dust streams with time-independent energy density, which propagate along the two null congruences. We perform the transformation by decomposing the tangent vectors of the two null congruences with respect to the gradients of the matter variables, and by introducing the momenta canonically conjugated to those variables. The matter Lagrangian can then be rewritten in the "already Hamiltonian" form, from which the super-Hamiltonian and supermomentum constraints of the system consisting of the two null dust streams follow (3).

By eliminating the comoving density from the Hamiltonian constraint, and employing the fact that the super-Hamiltonian and supermomentum constraints of the total system weakly vanish, we are able to solve the constraints with respect to the matter momenta. The vacuum constraints are expressed in terms of the preferred canonical variables of spherically symmetric vacuum gravity [Kuchar], in which one of the original canonical pairs is replaced by the Schwarzschild mass and its canonical momentum. After solving the constraints with respect to the momenta we can introduce a new set of linearized constraints, equivalent to Eq. (3), in which the momenta of the matter variables are separated from the rest of the canonical data [HKG]. This linearized form of the constraints is advantageous for two reasons. First, the Hamiltonian constraint is resolved with respect to the momentum canonically conjugated to the internal time. Second, the new constraints have strongly vanishing Poisson brackets and as such form an Abelian algebra, instead of the Dirac algebra of the old constraints.

In the canonical quantization of gravity coupled to the two null dust streams with spherically symmetric geometry, the super-Hamiltonian constraint becomes an operator equation on the state functional of gravity, restricting the allowed states. Since classically the super-Hamiltonian constraint was resolved with respect to the momentum conjugated to the internal time, the operator condition leads to the functional Schrödinger equation (4). The operator version of the supermomentum constraint applied to the state functional ensures that the quantum states are independent of the dust frame [BrownKuchar]. Besides the Hilbert space structure of the solutions to Eq. (4), the other advantage of the linearized constraints is that their Abelian algebra turns into a true Lie algebra of vacuum gravity. These promising achievements point towards a possible consistent canonical quantization of the presented superposed null dust system.
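The functional Schrödinger equation (4) itself is not reproduced above; by analogy with the Brown-Kuchař incoherent-dust construction that the text invokes, it has the schematic form below. This is a hedged reconstruction, with τ(r) the internal time field and h(r) denoting the true Hamiltonian density obtained by resolving the constraint; the precise expression is given in [HKG]:

```latex
% Schematic form of Eq. (4): once the super-Hamiltonian constraint is
% resolved for the momentum conjugate to the internal time \tau(r),
% the quantum constraint acts as a functional Schrodinger equation,
\[
  i\hbar\,\frac{\delta \Psi}{\delta \tau(r)} \;=\; h(r)\,\Psi ,
\]
% where \Psi is the state functional of the remaining canonical data and
% h(r) is the Hamiltonian density fixed by the resolved constraint.
```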
Journal of Modern Physics, 2015, 6, 1156-1161 (ISSN 2153-1196, Scientific Research Publishing). DOI: 10.4236/jmp.2015.68119

Theoretical Study of the Triplet Electronic States of the BP Molecule

Mahdi Mansour (1), Nayla El-Kork (2), Mahmoud Korek (1)*
(1) Faculty of Science, Beirut Arab University, Beirut, Lebanon
(2) Khalifa University, Sharjah, United Arab Emirates

Received 13 April 2015; accepted 25 July 2015; published 28 July 2015. Copyright © by the authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY).

The Complete Active Space Self Consistent Field (CASSCF) with Multi Reference Configuration Interaction (single and double excitation with Davidson correction) MRCI + Q method has been used to investigate the potential energy curves of the 17 low-lying triplet electronic states of the molecule BP. The harmonic vibrational frequency ωe, the internuclear distance at equilibrium Re, the rotational constant Be, the electronic energy with respect to the minimum ground state energy Te, and the permanent dipole moment have also been calculated. A literature review shows a strong correlation between our investigated data and those previously published, either theoretically or experimentally. This work introduces, for the first time, a study of 14 new electronic states. Our spectroscopic data can be conducive to further work on the BP molecule in both experimental and theoretical research.

Keywords: Ab Initio Calculation; Electronic Structure; Spectroscopic Constants; Potential Energy Curves; Dipole Moments

1. Introduction

Although the molecular III-V family has many unique chemical and physical properties, boron phosphide has good transparency, good adhesion, a small ionicity index and a low internal stress level. It also constitutes an excellent example of an almost perfectly covalent heteronuclear diatomic molecule. This low heteropolarity gives striking features to the BP system. Many experimental studies have been done on diatomic molecules by considering either one electronegative and one electropositive atom, or two electronegative atoms. On the other hand, no extensive studies have been done on diatomic molecules whose two atoms belong to electropositive groups [1] [2]. Boron phosphide has a wide range of applications in refractory materials and solid state physics, and in a new kind of infrared optical materials [3] [4].

In the literature, the BP molecule was originally studied by Gingerich et al. [5]. Later, Boldyrev and Simons [6] investigated 5 electronic states of the molecule BP. They concluded that the ³Π state is the ground state, while the first excited state ¹Σ⁺ lies 0.3 eV above the ground state. Since the BP molecule has very close electronic states of different multiplicity, it is difficult to determine its molecular parameters in the ground and excited states. Using Bader's theory of atoms in molecules [7] [8], the BP compound, in both crystalline and molecular structure, was studied by Mori-Sánchez et al. [9]-[11], who analyzed the transfer of charge and the polarity inversion under pressure in the electron density of the BP crystal. Mori-Sánchez [11] analyzed the electron density of the diatomic BP molecule at different computational levels, namely HF, CASSCF and CISD; he found a competition between the three lowest-energy states ¹Σ⁺, ³Σ⁻ and ³Π, and concluded that ³Π is the ground state, in agreement with the results of Boldyrev and Simons [6].
More theoretical calculations have been done using different techniques such as HF, CASSCF, CCSD and FCI [12]-[16], where a few spectroscopic constants for the low excited electronic states and the dissociation equilibrium have been obtained. The focus of this study is an accurate description of the ground and electronically excited triplet states. In order to emphasize the accuracy of our work, we used the multi-reference configuration interaction (MRCI+Q) model expansion. The spectroscopic constants Re, Te, Be, ωe, ... for each of the corresponding electronic states have been investigated, along with the static dipole moment.

2. Method of Calculations

The correlation energy for a certain state is the difference between the exact eigenvalue of the Hamiltonian and its expectation value in the HF approximation. The configuration interaction (CI) treatment of this electron correlation is obtained by adding to the HF wavefunction terms that represent promotion of electrons from occupied to virtual (unoccupied) orbitals, i.e., terms that are singly, doubly, and triply excited. Each Slater determinant D, or linear combination of a few determinants, represents an idealized configuration, called a configuration state function (CSF). The CI wavefunctions, then, are linear combinations of CSFs. Among the most widely used implementations of CI is the multiconfigurational self-consistent field (MCSCF) method. In this method, one writes the molecular wavefunction as a linear combination of CSFs and varies not only the expansion coefficients but also the forms of the molecular orbitals in the CSFs. CASSCF wavefunctions are often used as the starting point for an MRCI calculation. When a CASSCF wavefunction is used for an MRCISD calculation, the number of CSFs may be too large to deal with, so one procedure used to reduce the amount of computation is the internally contracted MRCI.

The low-lying singlet and triplet electronic states of the molecule BP have been studied by using the state-averaged Complete Active Space Self-Consistent Field (CASSCF) procedure [17] [18], followed by multireference singly and doubly excited configuration interaction with the Davidson correction (MRDSCI+Q) [19]-[22]. The entire CASSCF configuration space was used as the reference in the MRDSCI calculations, which were done via the computational chemistry program MOLPRO [23], taking advantage of the graphical user interface Gabedit [24]. This software is intended for high-accuracy correlated ab initio calculations. MOLPRO was run on a PC with a Linux-type operating system. The boron and phosphorus species are treated in an all-electron scheme. The basis sets chosen for the s, p and d functions were aug-cc-pCVTZ for the B atom and aug-cc-pVTZ for the P atom, taken from the MOLPRO library. The CASSCF active space contains the 6σ (B: 2p0, 3s; P: 3p0, 4s, 3d0, 4p0), 4π (B: 2p±1; P: 3p±1, 3d±1, 4p±1) and 1δ (P: 3d±2) orbitals; in the C2v point group these molecular orbitals are distributed into the irreducible representations a1, b1, b2 and a2 as 6a1, 4b1, 4b2, 1a2, denoted [6, 4, 4, 1]. The 1s²2s² electrons of B and the 1s²2s²2p⁶3s² electrons of P were frozen in the MCSCF procedure. The numbers of active orbitals and valence electrons are 16 and 8, respectively.

3. Results and Discussion

Potential energy curves (PECs) of the 17 triplet electronic states of the molecule BP, in the 2s+1Λ(±) representation, have been computed at 189 internuclear distances. These curves are given in Figure 1.
One can notice the deep potential wells for the low-lying states and the shallower wells for the higher excited electronic states. Moreover, some crossings and avoided crossings, at abscissas Rc and Rac respectively, have been obtained between the potential energy curves. If the two curves correspond to states of different symmetry, they may cross; the crossing in this case is strictly allowed. But if the wavefunctions have the same symmetry, the two states will only be diabatic solutions of the problem. They will mix with each other to give two adiabatic solutions which no longer cross, and the crossing becomes avoided. The adiabatic solutions of the Schrödinger equation, Ψ1 and Ψ2, are obtained by linear combinations of the diabatic ones, where the variation method is used to solve for the coefficients. Such crossings or avoided crossings can dramatically alter the stability of molecules and the shape of the potential energy curves. From Figure 1 (potential energy curves of the triplet electronic states of the molecule BP), we find avoided crossings between the electronic states (1)³Δ/(2)³Δ at 2.23 Å and (2)³Δ/(3)³Δ at 2.77 Å. Moreover, the wells are absent from some states, which are unbound. The electronic state (2)³Π presents a double well, for which we calculated the spectroscopic constants in Table 1.

The potential energy curves for the 17 low-lying triplet electronic states of the molecule BP have been calculated using the MRSDCI method in the 2s+1Λ(±) representation. These curves are drawn versus the internuclear distance over 1.2 Å ≤ R ≤ 5.5 Å in Figure 1. The equilibrium bond distances Re, the harmonic vibrational frequencies ωe, the relative energy separations Te, and the rotational constants Be for these electronic states have been calculated and are given in Table 1.

Table 1. Spectroscopic constants of the potential wells of the electronic states of the BP molecule.

| State | Te (cm⁻¹) | Re (Å) | ωe (cm⁻¹) | Be (cm⁻¹) |
|---|---|---|---|---|
| X³Π | 0.0 (a) | 1.740 (a); 1.7595 (b); 1.7478 (c1); 1.7520 (c2); 1.7539 (d1); 1.7538 (d2); 1.757 (e1); 1.764 (e2); 1.747 (e3); 1.758 (f); 1.747 (g) | 1026.08 (a); 941.39 (c2); 934.69 (d1); 934.800 (d2); 1148 (e1); 897 (e2); 941 (e3); 1148 (f); 973 (g) | 0.6985 (a); 0.6762 (c2); 0.674706 (d1); 0.674727 (d2) |
| (1)³Σ | 5783.98 (a); 6708.3 (b); 7612 (c1); 6822 (c2); 7469 (c3); 7412 (c4); 6987.19 (d1); 7062.91 (d2) | 1.933 (a); 1.9730 (b); 1.9616 (c1); 1.9690 (c2); 1.9623 (d1); 1.963 (d2); 1.9424 (e1); 1.9684 (e3); 1.943 (f); 1.694 (g) | 773.14 (a); 633.75 (c2); 636.72 (d1); 642.760 (d2); 585 (e1); 634 (e3); 585 (f); 925 (g) | 0.5622 (a); 0.5353 (c2); 0.53877 (d1); 0.539617 (d2) |
| (2)³Π, 1st minimum | 25628.34 (a) | 1.903 (a); 1.7539 (d); 2.019 (g) | 892.2 (a); 934.748 (d); 522 (g) | 0.5808 (a); 0.674698 (d) |
| (2)³Π, 2nd minimum | 30737.85 (a) | 2.649 (a) | 302.9 (a) | 0.2999 (a) |
| (1)³Σ⁺ | 33989.58 (a) | 1.945 (a) | 566.19 (a) | 0.556 (a) |
| (2)³Σ | 35650 (a) | 3.303 (a) | 153.4 (a) | 0.1923 (a) |
| (5)³Σ, 1st minimum | 55241.67 (a) | 1.879 (a) | 143.02 (a) | 0.595 (a) |
| (5)³Σ, 2nd minimum | 42569.96 (a) | 4.843 (a) | 573.69 (a) | 0.804 (a) |

(a) Present theoretical study. (b) Ref. [15]. (c) Ref. [16]. (d) Ref. [17]. (e) Ref. [18]. (f) Ref. [6]. (g) Ref. [12].

The comparison of our calculated internuclear distance Re for the ground state X³Π with those given in the literature, either experimental or theoretical, shows a very good agreement, with relative differences of 0.4% (Ref. [16]) ≤ ΔRe/Re ≤ 1.4% (Ref. [18]). A similarly good agreement is obtained by comparing our calculated value of Re for the first excited state (1)³Σ with those given in the literature, where the relative difference is 1.5% (Ref. [16]) ≤ ΔRe/Re ≤ 2.1% (Ref. [15]); this relative difference becomes ΔRe/Re = 7.7% for the state (2)³Π. One can notice the large discrepancy between the values given in the literature for the harmonic frequency ωe, but our calculated value is found to lie between the lowest and highest values: 934.69 cm⁻¹ < 1026.1 cm⁻¹ < 1148 cm⁻¹.
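As a side note on how constants like ωe and Be follow from a computed potential energy curve, here is a minimal sketch (my own illustration, not the authors' code) that extracts them from the curvature and minimum of a model PEC; the Morse parameters are illustrative stand-ins, while the reduced mass uses the real ¹¹B and ³¹P isotope masses:

```python
import numpy as np

# Physical constants (c in cm/s so that results come out in cm^-1)
h, c, u = 6.62607015e-34, 2.99792458e10, 1.66053907e-27

# Reduced mass of 11B31P
mB, mP = 11.0093, 30.9738
mu = (mB * mP / (mB + mP)) * u

# Model PEC: a Morse potential standing in for the computed MRCI curve.
# De (J), a (1/m), Re (m) are illustrative numbers, not the paper's data.
De, a, Re = 5.0e-19, 1.8e10, 1.74e-10
def V(R):
    return De * (1.0 - np.exp(-a * (R - Re))) ** 2

# Harmonic force constant from a numerical second derivative at the minimum
dR = 1e-13
k = (V(Re + dR) - 2 * V(Re) + V(Re - dR)) / dR**2

omega_e = np.sqrt(k / mu) / (2 * np.pi * c)   # harmonic frequency, cm^-1
B_e = h / (8 * np.pi**2 * c * mu * Re**2)     # rotational constant, cm^-1
print(f"omega_e ~ {omega_e:.1f} cm^-1, B_e ~ {B_e:.4f} cm^-1")
```

With Re = 1.74 Å this gives B_e ≈ 0.69 cm⁻¹, of the same order as the ground-state value in Table 1, which is expected since Be depends only on the reduced mass and the equilibrium distance.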
Our calculated value of ωe is greater than those given in the literature for the electronic state (1)³Σ and smaller for the state (2)³Π. The values of the rotational constant Be given in the literature are in very good agreement with our calculated values for the two electronic states X³Π and (1)³Σ, and larger for the (2)³Π state. The calculated value of the energy Te with respect to the ground state is in acceptable agreement with the theoretical data given in the literature. From this agreement between our investigated data and those calculated in the literature, we can assert the accuracy of the new constants obtained for the newly investigated excited electronic states.

4. Permanent Dipole Moment

The electric dipole moment function of a molecule is given by the matrix element \( \mu_{ij}(r) = \langle \Psi_i | \hat{M} | \Psi_j \rangle \), where Ψi and Ψj are respectively the electronic wavefunctions of two electronic states and M̂ is the dipole moment operator; the diagonal element (i = j) is the permanent dipole moment. This permanent dipole moment has been investigated for the considered electronic states of the molecule BP by taking the boron (B) atom at the origin, with the phosphorus (P) atom moving along the positive z-axis. All the calculations were performed with the MOLPRO [23] program. The dipole moment is among the most reliably predicted physical properties, because the quantum mechanical operator is a simple sum of one-electron operators. The expectation value of this operator is sensitive to the nature of the least energetic and most chemically relevant valence electrons. A positive sign of the dipole moment corresponds to a charge transfer from the P atom towards the B atom. To obtain the best accuracy, MRCI wavefunctions were constructed using the MCSCF active space. The values of the dipole moments for the investigated electronic states are given in atomic units (a.u.) as a function of the internuclear distance R in Figure 2 (dipole moment curves of the triplet electronic states of the molecule BP). At large internuclear distances, the dipole moment of each investigated electronic state either tends to a small constant, indicating a nearly covalent character of the corresponding state, or smoothly approaches zero, which is theoretically the correct behavior for a molecule that dissociates into neutral fragments.

5. Conclusion

In the present work, an ab initio investigation of 17 low-lying electronic states of the BP molecule, in the 2s+1Λ(±) representation, was performed via CASSCF/MRSDCI methods for the electronic excited states. The potential energy curves, the electronic energy with respect to the ground state Te, the harmonic frequency ωe, and the internuclear distance Re have been calculated along with the rotational constant Be. Comparisons of the present results with the available values in the literature show an overall very good agreement. To the best of our knowledge, 14 new electronic states have been investigated for the first time in this work. With the recent interest in this molecule, the present study of these new excited electronic states may assist in searching for bound states of the BP molecule, and may lead to new experimental work on this molecule.

Cite this paper: Mansour, M., El-Kork, N. and Korek, M. (2015) Theoretical Study of the Triplet Electronic States of the BP Molecule. Journal of Modern Physics, 6, 1156-1161. doi: 10.4236/jmp.2015.68119

References

Huber, K.P. and Herzberg, G. (1979) Molecular Spectra and Molecular Structure. Vol. 4, Van Nostrand Reinhold, New York.
, E. (1992) Chemical Reviews, 92, 141.
, G.F. (1995) Infrared Technology, 5, 23.
Min, X.M., Cai, K.F. and Nan, C.W. (1998) Chinese Journal of Computation Physics, 15, 445.
Gingerich, K.A. (1972) The Journal of Chemical Physics, 56, 4239.
Boldyrev, A.I. and Simons, J. (1993) The Journal of Physical Chemistry, 97, 6149-6154.
Bader, R.F. (1990) Atoms in Molecules: A Quantum Theory. Oxford University Press, Oxford.
Bader, R.F. (1994) Physical Review B, 49, 13348.
Martín Pendás, A., Blanco, M.A., Costales, A., Mori-Sánchez, P. and Luaña, V. (1999) Physical Review Letters, 83, 1930.
Mori-Sánchez, P. and Luaña, V. (2001) Physical Review B, 63, 125103.
Mori-Sánchez, P. (2002) Densidad electrónica y enlace químico. De la molécula al cristal. Ph.D. Thesis, Universidad de Oviedo, Asturias.
Miguel, B., Omar, S., Mori-Sánchez, P. and García de la Vega, J.M. (2003) Chemical Physics Letters, 381, 720-724.
, A., García de la Vega, J.M. and Miguel, B. (1997) Journal of the Chemical Society, 93, 29-32.
García de la Vega, J.M. and Miguel, B. (2000) Theoretical Chemistry Accounts, 104, 189-194.
, Z.T., Grant, D.J., Harrison, R.J. and Dixon, D.A. (2006) The Journal of Chemical Physics, 125, Article ID: 124311.
, R., Komiha, N., Oswald, R., Mitrushchenkov, A. and Rosmus, P. (2008) Chemical Physics, 346, 1-7.
, W.B., Kun, Y., Zhang, X.M. and Liu, Y.F. (2014) Acta Physica Sinica, 63, Article ID: 073302.
Shinsuke, H. (2008) Theoretical Study of Electronic Structure and Spectroscopy of Molecules Containing Metallic Atoms. Ph.D. Thesis, Université Paris-Est, Paris.
Werner, H.J. and Knowles, P.J. (1988) The Journal of Chemical Physics, 89, 5803.
Knowles, P.J. and Werner, H.-J. (1988) Chemical Physics Letters, 145, 514-522.
Langhoff, S.R. and Davidson, E.R. (1974) International Journal of Quantum Chemistry, 8, 61-72.
, A., Buenker, R.J. and Peyerimhoff, S.D. (1978) Chemical Physics, 28, 305-312.
Werner, H.J., Knowles, P.J., Lindh, R., Manby, F.R., Schütz, M., Celani, P., Korona, T., Rauhut, G., Amos, R.D., Bernhardsson, A., Berning, A., Cooper, D.L., Deegan, M.J.O., Dobbyn, A.J., Eckert, F., Hampel, C., Hetzer, G., Lloyd, A.W., McNicholas, S.J., Meyer, W., Mura, M.E., Nicklass, A., Palmieri, P., Pitzer, R., Schumann, U., Stoll, H., Stone, A.J., Tarroni, R. and Thorsteinsson, T. (2012) MOLPRO: A Package of Ab Initio Programs.
Allouche, A.R. (2010) Journal of Computational Chemistry, 32, 174-182.
The two pillars of modern physics are quantum mechanics and Einstein's special/general relativity. We generally use quantum mechanics to describe how reality behaves at the quantum level (i.e., the level of atoms and subatomic particles). In our everyday world, the macro level, we generally use Einstein's special or general relativity to describe the behavior of reality. Thus, it appears we need two separate sets of "laws" to describe reality, one for the quantum level and one for the macro level. In addition, quantum mechanics is incompatible with general relativity. This suggests that quantum mechanics may not apply at the macro level. However, in 2010 an important experiment suggested that quantum mechanics is applicable to the macro level.

What happened in 2010? Scientists at the University of California, Santa Barbara, published a paper, "Quantum mechanics applies to the motion of macroscopic objects." They made a clear demonstration that the theory of quantum mechanics applies to the mechanical movement of an object large enough to be seen with the naked eye.

Now, before we go into the details, just think about the implications and questions this raises if it proves to be true. Do macroscopic objects have a particle-wave duality? Can macroscopic objects be modeled using wave equations, like the Schrödinger equation? Will macroscopic reality behave, under the right circumstances, similarly to microscopic reality? To approach an answer, let's take a look at what the UC Santa Barbara scientists demonstrated.

Our story starts with Dr. Markus Aspelmeyer, an Austrian quantum physicist, who in 2009 performed an experiment coupling a photon to a micromechanical resonator: a micromechanical system typically created on a silicon chip, sometimes as part of an integrated circuit, which may or may not contain additional electronics. The micromechanical resonator can be caused to resonate, i.e., move up and down much like a plucked guitar string. The interesting part is that Dr. Aspelmeyer was able to establish an interaction between a photon and a micromechanical resonator, creating so-called strong coupling (i.e., a strong and noticeable interaction), able to transfer quantum effects to the macroscopic world. This is the first recorded time in history that the quantum world communicated with the macro world.

So what happened at UC Santa Barbara in 2010? Andrew Cleland and John Martinis at the University of California (UC), Santa Barbara, worked with Ph.D. student Aaron O'Connell. This team became the first to experimentally induce and measure a quantum effect in the motion of a human-made object. The work, released in March 2010, was voted by Science and AAAS (the publisher of Science Careers) as the 2010 Breakthrough of the Year "in recognition of the conceptual ground their experiment breaks, the ingenuity behind it and its many potential applications," according to an AAAS press release.

What exactly did they do? They showed that a mechanical resonator (in this case a small metal strip that can vibrate freely) could be cooled to its quantum ground state, demonstrating that quantum mechanics works at the macro level. To understand how profound this is, we need to understand the ground state. The ground state is the lowest level of vibrational energy a physical entity may have according to quantum mechanics. It is not zero energy, but close.
A particle with zero energy would violate the Heisenberg uncertainty principle: we would know where it is and how fast it is going simultaneously! Putting a system in its ground state had never been done before.

Their method was brilliant. They first built a mechanical resonator that vibrates at a microwave frequency. Then the resonator was physically connected to a superconducting qubit (a quantum system controlled with great precision, used in quantum computing research). The assembly was then cooled to near absolute zero. But how could they be sure that they had reached the ground state? They used the qubit as a thermometer and demonstrated that the mechanical resonator contained no extra vibration. In other words, it had been cooled to its ground state energy.

At this point the experiment was already in the history books, but wait: an even bigger punch line is coming. The mechanical resonator is as close as possible to being perfectly still, i.e., in the ground state. The scientists then added a single quantum of energy, a phonon, the smallest physical unit of mechanical vibration, and thus the lowest possible level of excitation. What they observed next is astounding. When the mechanical resonator absorbed this fundamental unit of energy, the qubit and the resonator became entangled quantum mechanically (i.e., quantum entanglement). This means that any change in the quantum state of one of them will immediately cause a change in the state of the other. Measurements of the vibrational energy showed that the results exactly followed the predictions of quantum mechanics. The quantum level and the macro level, given the appropriate physical circumstances, follow the same natural laws. This one experiment may have put us one step closer to a unified theory of everything, the "holy grail" of physics.
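To see why the experiment needed both a very high resonator frequency and millikelvin temperatures, one can compute the mean thermal phonon number from the Bose-Einstein distribution. The 6 GHz and 25 mK figures below are representative values for this kind of setup, not the paper's exact numbers:

```python
import numpy as np

hbar = 1.054571817e-34   # reduced Planck constant (J*s)
kB = 1.380649e-23        # Boltzmann constant (J/K)

def mean_phonons(f_hz, T_k):
    """Mean thermal occupation of a harmonic mode at frequency f and temperature T."""
    x = hbar * 2 * np.pi * f_hz / (kB * T_k)
    return 1.0 / np.expm1(x)

# A ~6 GHz mechanical mode at 25 mK sits essentially in its ground state:
print(mean_phonons(6e9, 0.025))    # ~1e-5 -> negligible thermal excitation
# The same mode at room temperature is swamped by thermal phonons:
print(mean_phonons(6e9, 300.0))    # ~1e3
```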
Magnetic field equation

Magnetic Field Formula - Definition, Equations, Examples

1. The magnetic field formula contains the constant \( \mu_0 \). This is known as the permeability of free space and has the value \( \mu_0 = 4\pi \times 10^{-7}\ \mathrm{T \cdot m/A} \). The unit of the magnetic field is the tesla (T).

2. A magnetic field is a vector field that describes the magnetic influence on moving electric charges, electric currents, and magnetic materials. A moving charge in a magnetic field experiences a force perpendicular to its own velocity and to the magnetic field. A permanent magnet's magnetic field pulls on ferromagnetic materials such as iron, and attracts or repels other magnets.

3. Equation [1] states that the magnitude of the magnetic field decreases with distance as 1/R from the wire. The magnetic field is also directly proportional to the current I. The magnetic field is a vector quantity like the electric field. The magnitude of the magnetic field is given by Equation [1], and its direction doesn't point away from, towards, or along the wire, but wraps around the wire.

Defining equations (table fragment): magnetic field (field strength, flux density, induction field), symbol B, SI unit T; gyromagnetic ratio (for charged particles in a magnetic field), symbol γ, SI units Hz T⁻¹, dimension [M]⁻¹[T][I]. DC circuits, general definitions: terminal voltage for a power supply, V_ter, SI units J C⁻¹.

The magnetic field penetrates through space and acts as a driving force on moving electric charges and magnetic dipoles. The Lorentz force is given by the equation Force = Charge × Velocity × Magnetic field (B). The above equation can be rearranged to find the magnetic field as Magnetic field = Force × [Charge × Velocity]⁻¹ → (1). We know that the velocity can be written in terms of ...

The Ampère-Maxwell equation says that the integral of the magnetic field around a closed loop is equal to the current through any surface spanning the loop, plus a term depending on the rate of change of the electric field through the surface. This term, the second term on the right, is the displacement current. For applications with no time-varying electric fields (unchanging charge density) it is zero.

Magnetic Force Equation. Suppose a charge q is moving with a velocity v in a magnetic field of strength B; the formula for the magnetic force is \( \vec{F} = q\,\vec{v} \times \vec{B} \). Here F is represented as a vector, and v × B is the cross product of v and B.

The magnetic field at point P has been determined previously. Since the currents are flowing in opposite directions, the net magnetic field is the difference between the two fields generated by the coils. Using the given quantities in the problem, the net magnetic field is then calculated.

Magnetic field - Wikipedia

Maxwell's equations in integral form explain how electric charges and electric currents produce magnetic and electric fields. The equations describe how the electric field can create a magnetic field and vice versa. Maxwell's first equation is based on Gauss's law of electrostatics, which states that the closed-surface integral of the electric flux density equals the charge enclosed by that surface.

Classical field equations describe many physical properties like the temperature of a substance, the velocity of a fluid, stresses in an elastic material, and electric and magnetic fields from a current. They also describe the fundamental forces of nature, like electromagnetism and gravity.
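As a quick illustration of the two formulas quoted above (the 1/R field of a straight wire and the Lorentz force), here is a small sketch; the current, charge and velocity values are arbitrary:

```python
import numpy as np

mu0 = 4 * np.pi * 1e-7   # permeability of free space (T*m/A)

# Field magnitude a distance R from a long straight wire: B = mu0*I/(2*pi*R)
I, R = 10.0, 0.05        # 10 A wire, observation point 5 cm away
B_mag = mu0 * I / (2 * np.pi * R)
print(f"B = {B_mag:.1e} T")   # 4.0e-05 T

# Lorentz force on a moving charge: F = q v x B
q = 1.602e-19                         # proton charge (C)
v = np.array([1e5, 0.0, 0.0])         # velocity along x (m/s)
B = np.array([0.0, 0.0, B_mag])       # field along z
F = q * np.cross(v, B)
print(F)   # force along -y, perpendicular to both v and B
```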
They also describe the fundamental forces of nature, like electromagnetism and gravity The concept of magnetic field intensity also turns out to be useful in a certain problems in which \(\mu\) is not a constant, but rather is a function of magnetic field strength. In this case, the magnetic behavior of the material is said to be nonlinear. For more on this, see Section 7.16 Learn about the magnetic field strength equation The magnetic field is strongest at the poles, where the field lines are most concentrated. Field lines also show what happens to the magnetic fields of two magnets during attraction or repulsion The magnetic field both inside and outside the coaxial cable is determined by Ampère's law. Based on this magnetic field, we can use Equation \ref{14.22} to calculate the energy density of the magnetic field. The magnetic energy is calculated by an integral of the magnetic energy density times the differential volume over the cylindrical. Magnetic fieldLorentz Force - Torques - Electric Motors (DC) - OscilloscopeThis lecture is part of 8.02 Physics II: Electricity and Magnetism, as taught in S.. Magnetic field of the Helmholtz Coils In order to derive the equation for a magnetic field at the point half-way on the axis between the two Helmholtz coils, or radii R, and separated by the same distance R, we shall use the Bio-Savart's law for elementary magnetic fielddB G, which is produced by the element of current, I, with length ds. According to this law 0 2 ˆ 4 Ids r dB r µ π. Magnetic Field Formula Solenoid, A solenoid is a coil wound into a tightly packed helix. When a current passes through it, it creates a nearly uniform magnetic field inside. Learn more about Magnetic Field In A Solenoid Equation and solved example Magnetic fields may be represented mathematically by quantities called vectors that have direction as well as magnitude. Two different vectors are in use to represent a magnetic field: one called magnetic flux density, or magnetic induction, is symbolized by B; the other, called the magnetic field strength, or magnetic field intensity, is symbolized by H According to above equation, the value of magnetic field strength is affected by magnet's grade, dimension and testing position. It should be noted that the measured value of Nickel-coated magnet's magnetic field strength will lower than Biot-Savart simulation value due to shielding effect from ferromagnetism Nickel coating. For multi-pole magnetization and complex conditions, the designer. Learn what magnetic fields are and how to calculate them. Learn what magnetic fields are and how to calculate them. If you're seeing this message, it means we're having trouble loading external resources on our website. If you're behind a web filter, please make sure that the domains *.kastatic.org and *.kasandbox.org are unblocked. Courses. Search. Donate Login Sign up. Search for courses. an equation giving the magnetic field at a point produced by a current-carrying wire: diamagnetic materials: their magnetic dipoles align oppositely to an applied magnetic field; when the field is removed, the material is unmagnetized : ferromagnetic materials: contain groups of dipoles, called domains, that align with the applied magnetic field; when this field is removed, the material is. A magnetic field is a vector field in the neighbourhood of a magnet, electric current, or changing electric field, in which magnetic forces are observable. 
A magnetic field is produced by moving electric charges and by the intrinsic magnetic moments of elementary particles associated with a fundamental quantum property known as spin. The magnetic field and the electric field are interrelated.

Magnetic field equations. We will summarize the basic equations for the magnetic field and their applications. The Lorentz force equation defines the force exerted on a particle of charge q moving through a magnetic field B at velocity v. The Lorentz force equation is used to derive the force exerted on a current-carrying wire of length l in a magnetic field B. The Biot-Savart law gives the field produced by a current element.

You do not need to know the meaning of this equation for A-level. F, E, v and B are all underlined, meaning that they are all vectors, because they are all quantities that have direction. v × B is known as a 'cross product', which accounts for charges moving through a magnetic field in directions that aren't perpendicular to it. Often we deal with only a magnetic field or only an electric field.

I don't know if this equation has any particular name, but it plays the same role for static magnetic fields that Poisson's equation plays for electrostatic fields. No matter what the distribution of currents, the magnetic vector potential at any point must obey Equation 15.6.5. (Jeremy Tatum, University of Victoria, Canada)

The Magnetic Field - Maxwell's Equations. Introduction. Maxwell's equations and the Lorentz force law together comprise the e/m field equations, i.e., those equations determining the interactions of charged particles in the vicinity of electric and magnetic fields and the resultant effect of those interactions on the values of the e/m field.

How do I generate a magnetic vector field using equations? How do I calculate each vector's magnitude and direction in a vector field representing the magnetic field of a magnetic dipole, given some initial values? I am considering using a pair of point charges, positive and negative.

For ease of visualization, only the field lines in the medial plane of the magnet are shown. The three-dimensional field can easily be pictured by virtue of the cylindrical symmetry about the axis. The lines of force originate from the north pole on the right and terminate on the south pole on the left. Magnetic-induction magnitudes are not emphasized in this demonstration, only the geometry.

So a toroidal solenoid satisfies the equation for the magnetic field of a closely wound long straight solenoid. In the case of an ideal solenoid, it is approximated that the loops are perfect circles and the winding of the loops is compact, that is, the solenoid is tightly wound. In such a case we can conclude that the magnetic field outside the solenoid (for paths 1 and 3) is zero, as also suggested by Ampère's law.
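The solenoid and toroid formulas mentioned above reduce to one-line functions. A minimal sketch, assuming the ideal (tightly wound) geometry described in the preceding paragraph; the turn counts and currents are illustrative choices.

```python
import math

MU_0 = 4e-7 * math.pi  # T*m/A

def solenoid_field(turns_per_meter, current):
    """Ideal long solenoid: B = mu0 * n * I inside, approximately zero outside."""
    return MU_0 * turns_per_meter * current

def toroid_field(total_turns, current, radius):
    """Ideal toroid: B = mu0 * N * I / (2*pi*r) at radius r inside the windings."""
    return MU_0 * total_turns * current / (2 * math.pi * radius)

print(f"solenoid: {solenoid_field(1000, 2.0):.3e} T")    # 2.513e-03 T
print(f"toroid:   {toroid_field(500, 2.0, 0.10):.3e} T") # 2.000e-03 T
```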
List of electromagnetism equations - Wikipedia

In the case under consideration, where we have a charged particle carrying a charge q moving in a uniform magnetic field of magnitude B, the magnetic force acts perpendicular to the velocity of the particle. Here we say that no work is done by the magnetic force on the particle, and hence no change in the speed of the particle can be seen.

The Earth's magnetic field at the surface is about 0.5 gauss. Field on the axis of a current loop: the application of the Biot-Savart law on the centerline of a current loop involves integrating the z-component and exploiting the symmetry.

The magnetic field is strongest at the poles, where the field lines are most concentrated. Two bar magnets: the magnetic field pattern when two magnets are used is shown in this diagram.

The above observations can be summarized with the following example. A uniform magnetic field pointing in the +y direction is applied. Find the magnetic force acting on the straight segment and the semicircular arc. Solution: let B = B ĵ, and let F1 and F2 be the forces acting on the straight segment and the semicircular parts, respectively.

Motivated by the fact that the equations of the geodesics are particular Lorentz equations (when the Lorentz force vanishes identically, the geodesics may be regarded as particular magnetic trajectories), a magnetic field is a closed 2-form, and the Lorentz force corresponding to it is a (1,1)-type tensor field.

Unlike electric fields, magnetic fields do not have 'charge' counterparts. In other words, there are no sources or sinks of magnetic fields; there can only be a dipole. Anything that can produce a magnetic field comes with both a source and a sink, i.e., there is both a north pole and a south pole. In many ways, the magnetic dipole is the fundamental unit that can produce a magnetic field.

Structure of Magnetic Fields. Many of the most interesting plasmas are permeated by or imbedded in magnetic fields. As shown in Fig. 3.1, the magnetic field structures in which plasmas are immersed are very diverse; they can also be quite complicated. Many properties of magnetic fields in plasmas can be discussed without specifying a model for the field.

EMC Formulas & Equations; Magnetic Field Conversions. Here are easy-to-use conversion tables for common magnetic field measurement units. With magnetic field testing it sometimes becomes necessary to convert from one unit of measure to another. This magnetic field conversion chart provides the conversion relationship between different types of magnetic field units.

The magnetic field inside a toroidal coil (Equation 7.7.5) depends only on distance from the central axis and is proportional to winding density and current. Now let us consider what happens outside the coil.

Any magnetic field must obey Maxwell's equations. For a static field in a region free of current and magnetic materials, the magnetic field B can be expressed as B = ∇Φ, where the scalar field Φ satisfies Laplace's equation: ∇²Φ = 0. When treated as a boundary-value problem, this equation can sometimes be solved via separation of variables.
The Electromagnetic Field Notes Pdf (EMF Notes Pdf) book starts with topics covering electrostatic fields, Laplace's and Poisson's equations, the electric field inside a dielectric material, magnetostatics (static magnetic fields, Ampère's circuital law and its applications, moving charges in a magnetic field, the scalar magnetic potential and its limitations), time-varying fields, and Faraday's law.

A rotating magnetic field is a magnetic field which rotates in space about some point or axis. The north and south poles continuously rotate with a specific speed, called the synchronous speed. All polyphase electrical machines are associated with a rotating magnetic field in the air gap. Therefore, an understanding of the rotating magnetic field produced by a polyphase winding is very important for analyzing such machines.

This means that there are no magnetic charges which would create a magnetic field in the same way as electric charges create an electric field. There are four scalar equations (36.5) and (36.6) for determining the three components of the magnetic induction vector. This, however, does not make the system of equations overdetermined (see Sec. 58).

Magnetic fields are a consequence of special relativity, as follows: given two charges A and B, and looking at the effect of A on B, you will get the correct result without any thought for magnetic fields unless both charges have a velocity. In any frame where one of the charges is motionless, the magnetic field has no effect.

Dimensions of Magnetic Field - Formula and Equation

The energy density B²/(2µ) has µ in the denominator, so the field energy is lower in the iron than in the air, and the further the flux can go through the iron, the lower the energy. Think of current flow through a resistor: the current has an easier time going through a low resistance than a high resistance. Flux goes more easily through high permeability than through low.

When electric and magnetic fields act simultaneously on a charge, the total force on the charge is given by the Lorentz force equation, F = q(v × B + E), where E is the electric field. The path of a charged particle of mass m projected with a velocity v perpendicular to a magnetic field B is a circle of radius r given by r = mv/(qB).

As described above, the stator magnetic field rotates in an AC machine, and therefore the rotor cannot catch up with the stator field and is in constant pursuit of it. The speed of rotation of the rotor will therefore depend on the number of magnetic poles present in the stator and in the rotor. The magnitude of the torque produced in the machine is a function of the angle γ between the stator and rotor magnetic fields.

Not necessarily; the answer is no. Maxwell's equation only says that there are no magnetic charges (sources) associated with the magnetic field, i.e., there are always two poles for the magnetic field.

Physics equations/Magnetic field calculations - Wikiversity

The magnetic field B is proportional to the current I in the coil. The expression is an idealization to an infinite-length solenoid, but provides a good approximation to the field of a long solenoid.

The electric field does not depend on the magnetic field, just as the magnetic field does not depend on the electric field. A capacitive (electric-field) element generates VARs; on the contrary, an inductive (magnetic-field) element absorbs VARs.
The electric field may be a monopole or a dipole, while the magnetic field can only be a dipole.

The magnetic field strength at that distance is 599 gauss. Now, look at a big 1″ cube (BX0X0X0). We'll plug in a distance value equal to 1″ in this case, and the calculator again indicates 599 gauss. The bigger magnet is projecting the magnetic field over a much larger area and distance than the little one. What doesn't the surface field number tell me? Surface field is just the magnetic field measured at the surface of the magnet.

The magnetic circuit of Fig. 9.7.3 might be used to produce a high magnetic field intensity in the narrow air gap. An N-turn coil is wrapped around the left leg of the highly permeable core. Provided that the length g of the air gap is not too large, the flux resulting from the current i in this winding is largely guided along the magnetizable material.

J. Dolbeault, M. Esteban and M. Loss, "Characterization of the critical magnetic field in the Dirac-Coulomb equation", Journal of Physics A, 41, 185303 (2008); preprint hal-00201095, version 1, 23 Dec 2007.

Magnetic Force: Definition, Equation, and Example

Magnetic fields are generated by moving charges or by changing electric fields. Maxwell's equations predict that regardless of wavelength and frequency, every light wave has the same structure. This means Maxwell's equations predicted that radio and X-ray waves existed, even though they hadn't actually been discovered yet.

The inaccuracy of the classical magnetic field integral equation (MFIE) is a long-studied problem. We investigate one of the potential approaches to solve the accuracy problem: higher-order discretization schemes. While these are able to offer increased accuracy, we demonstrate that the accuracy problem may still be present. We propose an advanced scheme based on a weak-form discretization.

A magnetic field applied to the electron gas of a solid breaks the time-reversal symmetry, giving access to information at the atomic scale that is inaccessible otherwise. The de Haas-van Alphen oscillations of the magnetic susceptibility and the Shubnikov-de Haas oscillations of the conductivity are among the classical experimental tools used in high-magnetic-field facilities worldwide.

Researching the Internet produces many complex equations, most indicating that the magnetic field varies inversely with the third power of distance, in other words an inverse cube law. Since it all seemed vague, or at best theoretical, I decided to test it for myself. First trial: measure magnetic attraction using a precise scale.

These include Maxwell's equations, which describe the interaction between magnetic fields and electric currents; the Navier-Stokes equation, which describes the fluid motion in the outer core; and equations describing the gravity potential, the heat flow, and many other parameters. Each equation is, in turn, dependent on the boundary conditions and the initial conditions chosen.
Visualizing Magnetic Fields: Numerical Equation Solvers in Action provides a complete description of the theory behind a new technique, a detailed discussion of the ways of solving the equations (including a software visualization of the solution algorithms), the application software itself, and the full source code. Most importantly, there is a succinct, easy-to-follow description of each solution method.

12.5: Magnetic Field of a Current Loop - Physics LibreTexts

Magnetic Field Conversion Factors. Add the indicated value to convert from the row unit to the column unit (a small conversion helper based on this table is sketched below):

                                  dBµV/m  dB gauss  dBpT  dBµA/m  dBWb/m²  dB gamma
0 dB microvolt-per-meter              0    -209.5  -49.5   -51.5   -289.5    -109.5
0 dB gauss (1)                    209.5         0    160     158      -80       100
0 dB picotesla                     49.5      -160      0      -2     -240       -60
0 dB microampere-per-meter         51.5      -158      2       0     -238       -58
0 dB weber-per-square-meter (2)   289.5        80    240     238        0       180
0 dB gamma                        109.5      -100     60      58     -180         0

I am trying to find an equation that tells the strength of a magnetic field a given distance away from the source. It would be very helpful if all terms are defined, since the internet is notorious for not saying what variables mean. Thanks.

We should determine the particle's trajectory, then find an equation for the particle's motion and solve it. Analysis: a particle is placed in an electromagnetic field which is characterized by two vectors perpendicular to each other, an electric field E and a magnetic field B. Both the electric and magnetic fields act on the particle with forces.

The dynamical Schrödinger equation in a magnetic field, Mourad Bellassoued (University of Carthage, Tunisia, and Fédération Denis-Poisson, LE STUDIUM Institute for Advanced Studies, Orléans, France), Groupe de Travail Contrôle, Université de Paris 6, 13 December 2013.

Magnetic field strength and position can be changed at the ends to try to compensate for the deposition falloff. The problem with this can be that, with the increased erosion, the target is worn through faster at the ends than at the rest of the target, and this results in an overall reduction in the efficiency of material use. For web coaters, it is common to have cathodes that extend beyond the substrate.

Solutions to the Schrödinger equation for a charged particle in a magnetic field.

The magnetic flux density for a uniformly magnetized body can be calculated by formula (1), a scalar potential of flux density, where M is a constant vector of magnetization and ϕ is a scalar potential of the same body charged with unit charge density.

Strength of Magnetic Field Formula. The equation for calculating the strength of the (dipole) magnetic field is B = (µ0 m / 4π r³) √(1 + 3 sin² λ), where µ0 is the permeability of free space (µ0 = 4π × 10−7 H·m−1 ≈ 1.2566 × 10−6 H·m−1 or N·A−2), m is the dipole moment (VADM = virtual axial dipole moment), r is the distance from the center, and λ is the magnetic latitude.

Mathematical descriptions of the electromagnetic field. Maxwell's laws lead directly to a wave equation for the electric and magnetic fields. Faraday's law: ∮ E · dl = −dΦB/dt. Ampère-Maxwell law: ∮ B · dl = µ0 (I + ε0 dΦE/dt). In differential form, ∇ × E = −∂B/∂t and ∇ × B = µ0 ε0 ∂E/∂t. Taking the curl of the first equation, ∇ × (∇ × E) = −∂(∇ × B)/∂t, gives the wave equation ∇²E = µ0 ε0 ∂²E/∂t². We will assume E and B vary sinusoidally.
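The dB table above encodes fixed ratios between the linear units (1 T = 10⁴ gauss = 10¹² pT, 1 gamma = 1 nT), each dB entry being 20·log10 of the corresponding ratio. A minimal sketch, restricted to flux-density units in free space; the function names are mine, not from the quoted chart:

```python
import math

# linear ratios relative to 1 tesla
TESLA_PER_UNIT = {
    "T": 1.0,
    "gauss": 1e-4,
    "gamma": 1e-9,   # 1 gamma = 1 nT
    "pT": 1e-12,
}

def convert(value, frm, to):
    """Convert a flux density between tesla, gauss, gamma and picotesla."""
    return value * TESLA_PER_UNIT[frm] / TESLA_PER_UNIT[to]

def db_offset(frm, to):
    """The dB value to *add*, as in the conversion table: 20*log10 of the ratio."""
    return 20 * math.log10(TESLA_PER_UNIT[frm] / TESLA_PER_UNIT[to])

print(convert(0.5, "gauss", "T"))   # Earth's surface field, 5e-05 T
print(db_offset("gauss", "pT"))     # 160.0, matching the dB gauss row
print(db_offset("gamma", "gauss"))  # -100.0
```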
Jefimenko's equations give the E field and the B field produced by an arbitrary charge or current distribution, of charge density ρ and current density J:

E(r, t) = (1/(4πϵ0)) ∫ [ ( ρ(r′, tr)/|r − r′|³ + (1/(c |r − r′|²)) ∂ρ(r′, tr)/∂t ) (r − r′) − (1/(c² |r − r′|)) ∂J(r′, tr)/∂t ] d³r′

Magnetic Field Strength Formula - Definition, Equations

1. A current loop is a magnetic dipole and has its own magnetic field similar to that of a permanent magnet (see Figure 3 and Figure 4). The magnetic field can be further increased by using materials such as iron, as in the electromagnet shown in Figure 1.
2. Explains how to find the magnetic field due to multiple wires. This is at the AP Physics level.
3. Here H is the magnetizing force, N is the number of turns of the coil and l is the effective length of the coil. Putting the expressions for L and I into the equation for U gives a new expression for the stored energy: the energy stored in the electromagnetic field of a conductor can be calculated from its dimensions and flux density, as the energy density B²/(2µ) times the volume.
4. Calculating the electric field using a given magnetic field equation (Maxwell-Faraday law). If you are comfortable with the magnetic field going in a circle when solving ∇ × B = µ0 J, then you should be equally happy with solving ∇ × E = −∂B/∂t. The same techniques work for both.
5. In Sinnoh, Mt. Coronet contains a special magnetic field. Magneton and Nosepass may evolve anywhere in Mt. Coronet, including exterior areas, the Spear Pillar, and the Hall of Origin. In Unova, Chargestone Cave contains a special magnetic field. This magnetism is also evident in the cave's puzzle: the player must push stone fragments blocking some passages so they are attracted by larger ones.

Magnetism and Magnetic Fields - Boundless Physics

1. The Magnetic Field Diffusion Equation Including Dynamic Hysteresis: A Linear Formulation of the Problem (M. A. Raulet, B. Ducharne, J. P. Masson, and G. Bayada). Abstract: the introduction of accurate material modeling, such as the hysteresis phenomenon, in numerical field calculation leads to numerical problems induced by the nonlinear properties of the initial system. We focus on the solution of this problem.
2. Magnetic field of two Helmholtz coils: to set up a Helmholtz coil, two similar coils with radius R are placed at the same distance R. When the coils are connected so that the current through the coils flows in the same direction, the Helmholtz coils produce a region with a nearly uniform magnetic field.
3. The Ampère-Maxwell law says that a changing electric field (changing with time) produces a magnetic field. The combination of equations 3 and 4 can explain electromagnetic waves (such as light), which can propagate on their own: a changing magnetic field produces a changing electric field, and this changing electric field produces another changing magnetic field.
4. Magnetic field of a current: the magnetic field lines around a long wire which carries an electric current form concentric circles around the wire. The direction of the magnetic field is perpendicular to the wire and is in the direction the fingers of your right hand would curl if you wrapped them around the wire with your thumb in the direction of the current.
5. To get the given magnetic field, the voltage has to be U(t) = (1/C) Q(t) = (1/C) ∫ (dQ/dt) dt = (1/C) ∫ (B(r) A / (µ0 r)) dt = (−k/√(a² + t²)) (A/µ0) + const.
6. Finite element analysis of a stationary magnetic field: the medium is not moving (v = 0) and the electric and magnetic quantities are invariable in time (∂/∂t = 0). A stationary magnetic field in a conducting domain satisfies the following system of equations: the magnetic circuit law (Ampère's theorem), rot H = J (1), and the magnetic flux law (local form), div B = 0 (2).
7. The beam passes through a magnetic field and splits into two beams. These two beams represent the two states, α and β, of the silver (spin 1/2) nuclei. The nuclear spin has an intrinsic angular momentum, a vector that is represented by the symbol I (vectors will be in bold). Vectors have 3 orientations (x, y, and z) and a length. However, the Heisenberg uncertainty principle tells us that we can only know one orientation at a time.

Maxwell's equations - Wikipedia

1. The magnetic field from a single moving charge has a lot of complicated features. When you sum the fields from many moving charges, to get the field from a current-carrying wire, you lose a lot of these features. For example, the field from a single moving charge decays as 1/r² with the distance r between the observer and the charge; the field from a (straight) current-carrying wire decays as 1/ρ with the (shortest) distance ρ between the observer and the wire.
2. In this equation, the angle of the field intensity vector H is shown in the figure. The direction of the field intensity Haa′(t) can be described by the right-hand rule: if we curl our right-hand fingers in the direction of the current, then the thumb points in the direction of the magnetic field.
3. Magnetic field units: the standard SI unit for the magnetic field is the tesla, which can be seen from the magnetic part of the Lorentz force law, Fmagnetic = qvB, to be composed of (newton × second)/(coulomb × meter). A smaller magnetic field unit is the gauss (1 tesla = 10,000 gauss). The magnetic quantity B which is being called the magnetic field here is sometimes called the magnetic flux density.

Maxwell's Equations

1. Circular motion of a charged particle in a magnetic field: a negatively charged particle moves in the plane of the page in a region where the magnetic field is perpendicular into the page (represented by the small circles with x's, like the tails of arrows). The magnetic force is perpendicular to the velocity, and so the velocity changes in direction but not magnitude: uniform circular motion.
2. The magnetic field can be obtained by using Ampère's law: ∮ B · ds = µ0 Ienc (13.1.1). The equation states that the line integral of a magnetic field around an arbitrary closed loop is equal to µ0 Ienc, where Ienc is the conduction current passing through the surface bounded by the closed path.
3. It follows that, if in one frame you have non-zero electric and magnetic fields that are perpendicular (so that B · E = 0) such that c²B² − E² > 0, then it is possible to go to a frame where the electric field is zero and the magnetic field is non-zero.
4. EMF induced in a rod traveling through a magnetic field; Faraday's law for generating electricity.
5. An example: motion in a constant magnetic field. We'll take a constant magnetic field pointing in the z-direction, B = (0, 0, B), and take E = 0. The particle is free in the z-direction, with the equation of motion m z̈ = 0. The more interesting dynamics takes place in the (x, y)-plane, where the equations of motion are m ẍ = qB ẏ and m ÿ = −qB ẋ (6.3). (These equations are integrated numerically in the sketch below.)

Magnetic electro-mechanical machines. Lorentz force: a magnetic field exerts a force on a moving charge. The Lorentz equation is f = q(E + v × B), where f is the force exerted on the charge q, E is the electric field strength, v is the velocity of the moving charge, and B is the magnetic flux density. Consider a stationary straight conductor perpendicular to a vertically-oriented magnetic field.

Earth's magnetic field. Introduction: Earth's magnetic field, also known as the geomagnetic field, is the magnetic field that extends from the Earth's interior to where it meets the solar wind, a stream of charged particles emanating from the Sun. Its magnitude at the Earth's surface ranges from 25 to 65 microteslas (0.25 to 0.65 gauss).

A magnetic field B will also exert a force on a charge q, but only if the charge is moving (and not moving in a direction parallel to the field). The direction of the force exerted by a magnetic field on a moving charge is perpendicular to the field, and perpendicular to the velocity (i.e., perpendicular to the direction the charge is moving). The equation that gives the force on a moving charge is the Lorentz force law above.
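The equations of motion m ẍ = qB ẏ, m ÿ = −qB ẋ quoted in the example above can be integrated numerically to confirm the uniform circular motion described in the snippets: the speed is conserved (the magnetic force does no work) and the orbit radius is the Larmor radius r = mv/(qB). A minimal sketch with arbitrary unit values:

```python
import numpy as np
from scipy.integrate import solve_ivp

q, m, B = 1.0, 1.0, 2.0      # charge, mass, field in arbitrary units
omega = q * B / m            # cyclotron frequency

def rhs(t, s):
    # s = (x, y, vx, vy); m*ax = q*B*vy, m*ay = -q*B*vx
    x, y, vx, vy = s
    return [vx, vy, q * B * vy / m, -q * B * vx / m]

v0 = 3.0
sol = solve_ivp(rhs, [0, 2 * np.pi / omega], [0.0, 0.0, v0, 0.0],
                rtol=1e-10, atol=1e-12)

speed = np.hypot(sol.y[2], sol.y[3])
print(np.allclose(speed, v0))   # True: the magnetic force does no work
print(v0 / omega)               # Larmor radius r = m*v/(q*B) = 1.5
```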
Philosophy and Metaphysics

For Albert Einstein, locality was one aspect of a broader philosophical puzzle: Why are we humans able to do science at all? Why is the world such that we can make sense of it? (location 39) Like philosophers, their intellectual siblings, they [physicists] are driven by the conviction that the universe is within the human power to understand and that if you look beneath its variety and intricacy, you will find comprehensible rules. (location 699) In daily life, when we ask "Why?" what we're usually after is what drove a person to do what they did. (location 751) His [Immanuel Kant] overarching interest was to analyze how we know and what we know, or think we know. (location 1089)

Like Albert Einstein, the author is confusing and conflating three different methods of inquiry: metaphysics, philosophy, and science. Science is the method of inquiry arising from observations made with our five senses: Why is the sky blue? Why is the half-life of cobalt-60 about 5 years? Metaphysics arises from observations rooted in our ability to make ourselves the subject of our own knowledge: What is free will? What is the conscious knowledge of human beings as opposed to the sense knowledge of animals? Philosophical questions are above and about the more fundamental methods of inquiry: What is the best way to do science? What does the mathematical function used in the Schrödinger equation for quantum mechanics correspond to in reality? In my opinion, you can't understand science, especially quantum mechanics, unless you understand the difference between the three methods of inquiry. Neither of the questions in the first quote is a philosophical question. The first is a metaphysical question because animals don't ask questions about what they see. The second question assumes that human beings can understand the world. It too is a metaphysical question because it raises the question: What does it mean to understand something?

Comprehensibility of the Universe

Much of what happens defies reason (especially when romance or driving is involved). (location 42) When a friend wrote to ask Einstein what he'd meant by the comprehensibility remark, he wrote back, "A priori one should expect a chaotic world which cannot be grasped by the mind in any way". (location 47) Ultimately, though, instrumentalism ["Shut up and calculate"] is just a tactical retreat. In the end most people still crave a picture of what the universe is really like, what lies under the surface of our perceptions. (location 1035) Relativity vindicates the age-old intuition that nonlocality would render the universe incomprehensible. (location 1388) For Copenhagenists, indeterminism was a lesson of modernity, an antidote to a misplaced Enlightenment trust in reason, which German intellectuals in the 1920s widely held responsible for their country's defeat in the First World War. A number of historians have traced this cultural mood to the magical and Romantic belief that nature is beyond rational understanding. (location 1488) Einstein was struck by the fact that the universe is comprehensible in so many ways, and he thought it strange to suppose that particles would prove to be an exception. Either the universe should be intelligible or it should be inscrutable, but not half and half. Furthermore, Einstein saw that indeterminism would entail nonlocality. (location 1494)

The crowning achievement of metaphysics is the argument for God's existence.
This argument was explained by Étienne Gilson in the 1920s and uses the metaphysics of Thomas Aquinas, not his famous five ways. The argument is based on the assumption or hope that the universe is intelligible. Since all religions in the world tell us that we will pay for our sins after we die, it is reckless to wait until you are on your deathbed before deciding whether or not the universe is intelligible. "God's wrath comes upon a sudden," to quote St. Augustine. Assuming the universe is intelligible in science is a double-edged sword because scientists have to decide which questions to spend time on. This assumption may have helped Johannes Kepler, who spent years working on his famous laws. A counterexample is how much time Einstein spent trying to improve quantum mechanics.

Cognitive Dissonance

Yet locality is for our own good. It grounds our sense of self, our confidence that our thoughts and feelings are our own…We are insulated from one another by seas of space, and we should be grateful for it. Were it not for locality, the world would be magical—and not in a happy, Disneyesque way. (location 76)

According to Bernard Lonergan, the human mind is structured like the scientific method.
1. At the lowest level, we make observations, which requires paying attention.
2. At the level of inquiry, we ask questions about what we observe. We have a drive to know and understand everything, and want to know the cause of things, the relationships between things, and the unity between things. At this level we create answers to the questions, theories, or insights. This requires intelligence.

Our sense of self does not come from the observation that we are in some room or on some street corner. It comes from reflecting on our own existence. As René Descartes put it, "I think, therefore I am." George Berkeley, Bishop of Cloyne, did not even think the material universe was real. He arrived at this theory while sitting on a rock in a forest. He figured he existed because God created him. He assumed God created the rock so that he would have a place to sit. But God, having infinite power, could just as easily create the illusion of the rock in God's mind as the rock itself. According to Berkeley, there is no reason to assume the rock is anything but an illusion. This metaphysical theory is called idealism. Berkeley's interaction with the rock was entirely passive. Our interaction with other people is not passive at all. Other people affect our consciousness in ways that we have no control over. It is very clear that other people exist, which means that our existence is limited to ourselves. In other words, we are finite beings. We are not insulated from one another at all, but we are insulated from tables and chairs. According to Martin Buber, we have an "I-it" relationship with chairs and traffic cops. But we have an "I-thou" relationship with people we have intimate conversations with. In my opinion, the above quote indicates that Musser is suffering from cognitive dissonance. Cognitive dissonance refers to the mental and emotional stress we suffer when we believe something that is in conflict with some aspect of reality. My guess is that Musser believes life ends in the grave. The aspect of reality this conflicts with is the fact that the Near Eastern, Chinese, and Indian religions say there is life after death. Musser makes himself feel better by ignoring the metaphysical reasons for our "sense of self" and focusing attention on the observation that the three-dimensional location of our bodies can be specified.
Cause and Effect

Every effect has a cause linked to it by a chain of events unbroken in space and time. There's no point at which you have to wave your hands and mumble, "Then a miracle occurs." (location 88) First and foremost, breaking the speed limit would muck up sequences of cause and effect. Different people would disagree not only on what "now" is, but on what is "before" and "after." (location 1348) In addition to maintaining the direction of causality, the speed limit ensures that the very concept of a law of physics makes sense. (location 1370) [Bohr to Einstein] "The whole foundation for causal spacetime description is taken away by quantum theory." (location 1526) You can derive time from the sequence of cause and effect. (location 2850)

There are three kinds of causality:
1. In human action, if you spend 20 minutes washing your car, the final cause is a clean car.
2. In metaphysics, a being that begins to exist at some point in time, and a being that is a composition of two incomplete beings or metaphysical principles, needs a cause. Cause and effect, in metaphysics, occur simultaneously. The cause precedes the effect in the order of causality, not in the order of time.
3. In physics, a causal system is one where the total energy is constant. When you throw a ball into the air, the only force acting on the ball is gravity, which depends only on location, not time. One might say that the initial position and velocity of the ball "causes" or "determines" the final position and velocity of the ball. In physics, the cause precedes the effect in the order of time.

Hard Core Materialism

Our thoughts are impulses zapping along pathways in space. (location 106)

This is proof that Musser suffers from cognitive dissonance. Why refer to "thoughts" and not a particular kind of thought? An example of a kind of thought is the mental image we create in our minds of something we see with our eyes. We ask the metaphysical question: What is a mental image? Musser's theory is that a mental image has to do with "an impulse zapping along pathways in space." There is no evidence for this theory. Musser judges it to be true because it makes him feel better. It makes him feel better because it supports his belief that human beings are "assemblages of particles." A materialist is like a person collecting minerals and arranging them according to color. He builds a chest of drawers and labels the drawers one of the colors of the rainbow. He finds a blue mineral and puts it in the blue drawer. He puts a red one in the red drawer, and so on. One day he finds a white mineral. He goes back to his chest of drawers and says, "White minerals don't exist."

Probability and Spooky Entanglement

If influences can leap across space as though it weren't there, the natural conclusion is: space isn't really there. (location 157) In all these examples [black holes], physics enters a twilight zone. Things can outrun light; cause and effect can be reversed; distance can lose meaning; two objects may actually be one. The universe becomes spooky. (location 677)

As Musser explains in the book, we observe influences leaping across space (spooky entanglement) with our eyes. We ask the scientific question: What causes spooky entanglement? We have a drive to know and understand everything, and we want to understand spooky entanglement. I don't see any evidence for the theory, "space isn't really there." This theory raises the question: What is space? This is Bernard Lonergan's explanation of what a straight line is.
Look at a line drawn on a piece of paper with a straight edge and a pencil. Create the image of this lead ribbon in your mind. Now, suppose that the ribbon is infinitely thin and long. My understanding of space is this: Look at the room you are in with all its furniture and walls. Create an image of this in your mind. Now, suppose that there is no furniture and there are no walls. In my opinion, space is a mental being, not a real being. But space corresponds to a real thing, just like a straight line corresponds to a ribbon. I see a connection between the question of what causes spooky entanglement and the question of what causes a radioactive atom to decay at the particular time it does decay. The half-life of cobalt-60 is about 5 years. This means that 1 gram of cobalt-60 will decay into 1/2 gram of nickel and 1/2 gram of cobalt-60 in 5 years. Looking at a single cobalt-60 atom for five years is like flipping a coin: half the time the coin comes up heads (decays) and half the time tails (does not decay). One of the triumphs of the Schrödinger equation is that it explains this observation. This raises the philosophical question: What does the Schrödinger function correspond to in reality? The theory given in physics textbooks is that it is a probability wave function. There is a lot of evidence that it is a wave, but there is no evidence, in my opinion, that it involves probability. Probability always involves two events. In the case of a coin, there is the flipping and the landing. If you step on someone's foot in a dark movie house, the probability that it is a man is 50%. The two events are selecting the person and deciding on the sex of the person. We understand perfectly well what causes the coin to come up heads when that happens and tails when that happens. We do not know what causes a radioactive atom to decay at the time it does decay, but we want to know, just like we want to know what causes spooky entanglement. In my opinion, we should be open-minded about the meaning of the word cause. We can certainly exclude the causality of human action. It is tempting to exclude metaphysical causality (for every effect there is a cause) because it can't be defined or explicated. But this may be a mistake because it leaves us only the time-based causality of physics.

Philosophy vs Science

Whenever those concept questions did come up, physicists deemed them "philosophical," which wasn't intended as a compliment, but as a way to deny that the questions were even worth asking. (location 318) Philosophy is distinguished not just by its interests, but by its methods: philosophers are trained in logic as opposed to mathematics or experiments. (location 322) There's no sharp boundary between philosophical issues and physical issues, merely a porous border with lots of trade across it. (location 1045) Modern physicists call interpretation a "philosophical" issue, implying that it requires a different frame of mind or a different academic discipline altogether. (location 1023)

The author does not agree with my opening paragraph, which implies that metaphysics and physics are equals and philosophy is superior to both. To the author, philosophy and physics are equal, "siblings" as he put it somewhere else, and metaphysics is not a method of inquiry. According to the author, physics differs from philosophy because of different methods. Physics uses experimentation and philosophy uses logic.
In my opinion, all methods of inquiry have observations, theories, and evidence because this is the way the human mind is structured. A person who judges a theory in physics, metaphysics, or philosophy to be true when there is very little evidence has poor judgment.

Quantum Mechanics and Observers

I was struck by her [Fotini Markopoulou] thinking about how physics theories need to assimilate that we are part of the universe, rather than outsiders looking in. (location 551)

The idea that humans are "part of the universe" could be a way of expressing the metaphysical theory that human beings are collections of molecules. It could also be a reference to the philosophical and scientific questions arising from double-slit experiments. When water waves hit a double slit, we can observe the interference pattern that results. When a beam of light or a beam of electrons hits a double slit, we cannot observe the interference pattern directly. We only see the interference pattern if we place the slits between a screen and a source of photons or electrons. In other words, we can't just think of ourselves as being outsiders, because we were the ones who decided where to place the slits.

Natural and Supernatural

Naturalistic explanations tend to be local. In our experience, when you want something to move, you can't will it to do so; you need to go over and push on it or send someone to do it for you. (location 758)

Musser is referring to natural explanations because he mentioned the theory that Poseidon causes earthquakes. Musser undoubtedly thinks there are no supernatural explanations because there are no supernatural beings. The reason there are no supernatural explanations is that there are no natural explanations. The only kinds of explanations that exist are good and bad ones. Good explanations are supported by evidence and judged to be true by rational people. Bad explanations are just the opposite.

Free Will

Human beings make decisions for any number of reasons, not all of them entirely sensible. Those reasons follow from earlier events, and ultimately the chain of causes can be traced to the origin of the universe. (location 1744) That doesn't necessarily mean your will is unfree: freedom can be an emergent property, one that particles do not possess, but assemblages of particles do. As far as you're concerned, your choices can be entirely open until you make them. (location 1754)

If A decides to marry B, the final cause is being married to B. If the reason A decided on this final cause is that B is rich, A is not a sensible person because every sensible person knows you can't buy happiness. B's wealth cannot be traced back to the origin of the universe. It can only be traced back to someone's decision to work hard and save. If A decided on B because B is beautiful, this is a different matter. It may very well be that you can trace B's beauty to the origin of the universe. Consider this statement: Free will is an emergent property of a human being. The word emergent is an adjective, and is an attempt to answer a question that arises from the metaphysical observation that human beings have free will: What is free will? The circular answer is: Free will means I can move my hand around any way I decide to, but if I lose my hand in an accident, I still continue to exist. My hand is something that I have. This answer gives rise to the more substantial metaphysical question: What is the relationship between my body and myself? There are three theories in addition to Berkeley's idealism.
Materialism is the theory that human beings are "assemblages of particles" and free will is an illusion. Dualism is the theory that there is a spiritual substance or "little man" inside a human being that controls the body like a stagecoach driver controls a coach. The third theory, which is the one supported by the evidence and judged to be true by rational people, is that it is an unsolvable mystery. In other words, human beings are embodied spirits or spirited bodies. This brings us to the question of what Musser means by using the adjective emergent to modify free will. My guess is that he does not grasp or is unaware of the theory that human beings are embodied spirits. In his mind, the two possible theories are dualism and materialism. He uses the adjective emergent to express his judgment in favor of materialism.

Hardcore Atheism

Musicians call the difference between pitches a musical "interval," which has connotations of distance, as if our brains really do think of the differences between pitches as spatial separation. (location 3122) Emergent-spacetime models also give us a new way to understand the big bang. The genesis of the universe has always presented something of a paradox. Nothing is supposed to precede it, yet something must precede it to set the cosmos in motion. But the paradox dissipates when we think of the big bang not as an abrupt moment of creation, but as a transitional process. If space emerges from spaceless building blocks just as life emerges from lifeless atoms, then the birth of the universe is no more inscrutable than the birth of a living creature. (location 3191)

A lot of people mistakenly think that the Big Bang is evidence of God's existence. It is rather evidence that God does not exist because it is evidence that the universe is not intelligible. The Big Bang is, however, a reason to believe God inspired the human authors of the Bible because the Bible is filled with the idea that God created the universe from nothing. There is no evidence for the "emergent-spacetime models," just as there is no evidence for the conjectures about how life emerged from lifeless atoms.

All the references are to the Kindle edition of "Spooky Action at a Distance: The Phenomenon That Reimagines Space and Time—and What It Means for Black Holes, the Big Bang, and Theories of Everything," by George Musser. Published by Scientific American/Farrar, Straus and Giroux, New York, 2015.
Trimestre "Le Monde Quantique" A quantum dynamics including the Schrödinger evolution and the von Neumann spontaneous collapse by Prof. Franck LALOË (Lab Kastler Brossel, ENS, Paris) Amphithéâtre Léon Motchane (IHES) Amphithéâtre Léon Motchane The linear Schrödinger equation does not predict the uniqueness of  measurement results;  it does not predict that macroscopic bodies should  be located at one place in space only. This is the origin of the so  called measurement problem, Schrödinger cat paradox, etc. Theories such  as GRW (Ghirardi-Rimini-Weber) and CSL (Continuous spontaneous  localization) theories solve the problem by adding stochastic terms to  the Schrödinger equation. In this talk we will propose another approach  to reach the same unified dynamics, but without requiring the  introduction of stochastic Wiener processes acting in all space. The  method combines ideas of the dBB (de Broglie Bohm) interpretation and of  CSL. It introduces an attraction between the space density of Bohmian  position and the space density operators, with a deterministic dynamics;  randomness arises only from the initial random positions of the Bohmian  positions. Various microscopic or macroscopic consequences of this  dynamics will be discussed. Notes de la conférence Your browser is out of date!
Design of Feedback Control Laws for Information Transfer in Spintronics Networks

Sophie G Schirmer, Edmond Jonckheere, Frank C Langbein

Supported by the Welsh Government and Higher Education Funding Council for Wales through the Sêr Cymru National Research Network in Advanced Engineering and Materials (NRN082). Supported by ARO MURI. SGS is with the College of Science (Physics), Swansea University, Swansea, SA2 8PP, UK. EJ is with the Dept. of Electrical Engineering, Univ. of Southern California, Los Angeles, CA 90089, USA. FCL is with the School of Computer Science & Informatics, Cardiff University, Cardiff, CF24 3AA, UK.

Abstract—Information encoded in networks of stationary, interacting spin-1/2 particles is central for many applications ranging from quantum spintronics to quantum information processing. Without control, however, information transfer through such networks is generally inefficient. Currently available control methods to maximize the transfer fidelities and speeds mainly rely on dynamic control using time-varying fields and often assume instantaneous readout. We present an alternative approach to achieving efficient, high-fidelity transfer of excitations by shaping the energy landscape via the design of time-invariant feedback control laws without recourse to dynamic control. Both instantaneous readout and the more realistic case of finite readout windows are considered. The technique can also be used to freeze information by designing energy landscapes that achieve Anderson localization. Perfect state or super-optimal transfer and localization are enabled by conditions on the eigenstructure of the system and signature properties for the eigenvectors. Given the eigenstructure enabled by super-optimality, it is shown that feedback controllers that achieve perfect state transfer are, surprisingly, also the most robust with regard to uncertainties in the system and control parameters.

I Introduction: Spintronics Devices

Encoding information in spin degrees of freedom has the potential to revolutionize information technology through the development of novel devices utilizing electron spin. Information encoded in spin degrees of freedom can be transferred via spin-polarized currents. Information stored in spin states can also propagate through a network of coupled spins without charge transport, mediated directly by quantum-mechanical interactions. This is of particular interest as devices that do not rely on charge transport are not limited by heat dissipation due to resistance—potentially enabling higher component densities and greater energy efficiency [1, 2].

The realization of novel spintronic devices presents many technological challenges in device design and fabrication. Utilizing information encoded in spin degrees of freedom especially requires efficient, controlled on-chip transfer of excitations in spin networks. In quantum mechanical language, this transfer or transport of an excitation from one site to another requires steering the system from one quantum state to another, a problem akin to the well-known unit step response of linear Single Degree of Freedom (SDoF) tracking controllers—with the significant difference of the presence of a global phase factor in the tracking error. As propagation of spin-based information is fundamentally governed by quantum mechanics and the Schrödinger equation, however, excitations in a spin network propagate, disperse and refocus in a wave-like manner.
Controlling information transport in such networks is thus a highly non-classical control problem. Previous work has shown that natural transmission of information does occur, but without active control the propagation of spin-based information in such networks can be slow and inefficient [3]. In this paper we consider how we can optimize transport in terms of transfer efficiency, speed and robustness using control. This requires an approach quite different from modern robust control, where time-domain specifications are substituted for conventional singular value Bode plots. The need for state-selective transfer makes the architecture depart from the SDoF configuration and precludes control designs that ensure asymptotic stability of the target state. Instead, we rely on the concept of Anderson localization [4, 5], which is utilized to hold the system at or around the desired target state for future use. We explore how information transfer or localization in spin networks can be controlled simply by shaping the energy landscape of the system. We show how the latter problem can be viewed in terms of feedback control laws, and that feedback control designs that achieve the best performance w.r.t. transfer fidelity also achieve the best robustness. This is unlike the traditional limitations observed for SDoF classical control and demonstrates the advantages of two degrees-of-freedom controllers [6, 7], and is the setup adopted here. The deeper message of this paper is that quantum transport presents many challenges and opportunities for control and a rich source of new problems and paradigms relating to the foundation of classical control theory.

In Section II relevant theory of quantum spin networks and control paradigms are reviewed. The control objectives, conditions for perfect state transfer and speed limits for excitation transfer are discussed in Section III, followed by eigenstructure analysis of the dynamic generators and signature properties for the eigenvectors to establish general conditions for optimality in Section IV. In Section V the sensitivity of the design to uncertainty in the dynamical generators of the system is analyzed, and the result of vanishing sensitivity for superoptimal controllers is proven. Numerical optimization and sensitivity results are presented in Section VI. We conclude with a discussion of classical vs quantum robust control in Section VII and general conclusions and directions for future work in Section VIII.

II Theory and Definitions

II-A Networks of Coupled Spins

Let $X$, $Y$ and $Z$ be the Pauli spin operators

$$X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad Y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \qquad (1)$$

and let $X_n$ ($Y_n$, $Z_n$) be a tensor product of $N$ operators, all of which are the identity $I$, except for a single $X$ ($Y$, $Z$) operator in the $n$th position. With this notation, the Hamiltonian of a system of $N$ spin-$\frac{1}{2}$ particles with onsite potentials $\zeta_n$ and two-body interactions between pairs of spins $(m,n)$ is

$$H = \sum_{n=1}^{N} \zeta_n Z_n + \sum_{m<n} J_{mn} \left( X_m X_n + Y_m Y_n + \kappa Z_m Z_n \right), \qquad (2)$$

where $J_{mn} = J_{nm}$ for all $(m,n)$ due to the symmetry of the interaction. The constants $\zeta_n$ and $J_{mn}$ are measured in units of frequency. $\kappa$ is a parameter that depends on the coupling type: isotropic Heisenberg coupling ($\kappa = 1$) or XX coupling ($\kappa = 0$). The coupling constants $J_{mn}$ are determined by the topology of the network. For a chain with nearest-neighbor coupling we have $J_{mn} = 0$ unless $|m - n| = 1$, and similarly for a ring, except that $J_{1N} = J_{N1} \neq 0$. A chain can be thought of as a type of quantum wire and a ring as a basic routing element to distribute information encoded in the network, e.g., via chains attached to nodes of the ring. A network is uniform or homogeneous if all non-zero couplings have a fixed strength $J$.
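To make the notation concrete, the following minimal Python sketch (not part of the paper; the function names and the N = 4 ring are illustrative choices) assembles the Hamiltonian (2) from Kronecker products:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_at(op, n, N):
    """Tensor product with `op` at position n (0-based) and identities elsewhere."""
    return reduce(np.kron, [op if k == n else I2 for k in range(N)])

def hamiltonian(J, zeta, kappa):
    """Eq. (2): sum_n zeta_n Z_n + sum_{m<n} J_mn (X_m X_n + Y_m Y_n + kappa Z_m Z_n)."""
    N = len(zeta)
    H = sum(zeta[n] * op_at(Z, n, N) for n in range(N))
    for m in range(N):
        for n in range(m + 1, N):
            if J[m, n] != 0.0:
                H = H + J[m, n] * (op_at(X, m, N) @ op_at(X, n, N)
                                   + op_at(Y, m, N) @ op_at(Y, n, N)
                                   + kappa * op_at(Z, m, N) @ op_at(Z, n, N))
    return H

# uniform XX ring of N = 4 spins: J_{n,n+1} = J_{1N} = 1
N = 4
J = np.zeros((N, N))
for n in range(N - 1):
    J[n, n + 1] = 1.0
J[0, N - 1] = 1.0
print(hamiltonian(J, np.zeros(N), kappa=0.0).shape)  # (16, 16)
```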
Spin networks of this type are widely applicable to modeling nuclear spin systems, electron spins in quantum dots and pseudo-spin systems consisting of trapped ions or atoms and even superconducting qubits. Systems coming very close to reproducing the ideal dynamics of a one-dimensional Heisenberg chain have been realized [1, 8, 9, 11, 10]. Using the Dirac notation, a (pure) state of a system of $N$ spin-$\frac{1}{2}$ particles is a linear combination of the product states of the single spin eigenstates $|\uparrow\rangle$ and $|\downarrow\rangle$, which are eigenstates of the operator $Z$, $Z|\uparrow\rangle = |\uparrow\rangle$, $Z|\downarrow\rangle = -|\downarrow\rangle$:

$$|\Psi\rangle = \sum_{s_1, \ldots, s_N \in \{\uparrow, \downarrow\}} c_{s_1 \cdots s_N} |s_1 s_2 \cdots s_N\rangle. \qquad (3)$$

The operator $Z_n$ applied to a product state thus returns $+|s_1 \cdots s_N\rangle$ if the $n$th spin is $\uparrow$, and $-|s_1 \cdots s_N\rangle$ if it is $\downarrow$. Hence, $\frac{1}{2}\sum_{n=1}^{N}(Z_n + I)$ effectively counts the number of spins that are in the excited state $|\uparrow\rangle$. The Hamiltonian (2) commutes with the total excitation operator, $S_z = \frac{1}{2}\sum_{n=1}^{N}(Z_n + I)$. As commuting operators are simultaneously diagonalizable, it can easily be shown that the Hilbert space of the system decomposes into excitation subspaces [12] that are invariant under the dynamics. If we assume that only a single excitation (or bit of information) propagates through the network at any given time, then the Hamiltonian can be reduced to the single excitation subspace Hamiltonian

$$\tilde{H} = \sum_{n=1}^{N} (\zeta_n + D_n) |n\rangle\langle n| + \sum_{m<n} J_{mn} \left( |m\rangle\langle n| + |n\rangle\langle m| \right), \qquad (4)$$

where the $D_n$ form the diagonal for the single excitation subspace of the $\kappa Z_m Z_n$ terms of $H$, which can be absorbed into the $\zeta_n$. $|n\rangle$ can be thought of as a column vector with zero entries except for a $1$ in the $n$th position, $\langle n|$ can be thought of as a row vector with zero entries except for a $1$ in the $n$th position and $|m\rangle\langle n|$ can be thought of as a matrix that is zero except for a $1$ in the $(m,n)$ position. $|n\rangle$ denotes a single excitation state with the excitation localized at the $n$th spin.

The Hamiltonian of the system determines the time evolution of pure states via $|\Psi(t)\rangle = U(t)|\Psi(0)\rangle$, where $U(t)$ is a one-parameter group of unitary operators governed by the Schrödinger equation

$$i\hbar \frac{d}{dt} U(t) = H U(t), \qquad U(0) = I, \qquad (5)$$

where $I$ is the identity operator and $\hbar$ is the reduced Planck constant (see, e.g., [34, Eq. (1)]). By choosing time in units of $\hbar/J$ and energy in units of the coupling strength $J$, we get $\hbar = 1$ and can drop $\hbar$ in the following.
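As a quick illustration of Eqs. (4)-(5) (a sketch, not taken from the paper; the ring size and times are arbitrary choices), the single-excitation Hamiltonian of a uniform ring is just an N×N matrix, and the free evolution U(t) = exp(−iHt) with ħ = 1 exhibits the wave-like dispersion of an initially localized excitation:

```python
import numpy as np
from scipy.linalg import expm

N = 5                                 # uniform ring of 5 spins
J = np.zeros((N, N))
for n in range(N):
    J[n, (n + 1) % N] = J[(n + 1) % N, n] = 1.0

H = J.copy()                          # Eq. (4) with zeta_n = 0, ZZ diagonal absorbed

psi0 = np.zeros(N, dtype=complex)
psi0[0] = 1.0                         # excitation localized on spin 1

for t in [0.0, 0.5, 1.0, 2.0]:
    U = expm(-1j * t * H)             # solution of Eq. (5) with hbar = 1
    p = np.abs(U @ psi0) ** 2         # site populations
    print(t, np.round(p, 3))          # the excitation disperses around the ring
```

At every time the populations sum to one, reflecting the unitarity of U(t).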
Moreover, any measurement to obtain information about the evolution or current state of the system has a backaction that disturbs the system and thus acts as a co-actuator. In MFC, the state of the system is therefore usually replaced by an estimated state, which represents our state of knowledge about the system. It is obtained by state estimation based on continuous weak measurements. Incorporating the measurement backaction and the probabilistic nature of quantum measurements further leads to stochastic differential equations and non-unitary evolution. CFC is another paradigm for quantum feedback, based on coherent interaction between system and controller. This implicitly assumes that both the system to be controlled and the controller are quantum systems. See [14, 29, 31] for good introductions to quantum control from a control engineering perspective.

All of these control paradigms play important roles in quantum control and are necessary to solve different problems [15]. MFC, for instance, is an important tool for deterministic state reduction and initial state preparation [16]. CFC can be used to stabilize quantum networks against noise and external perturbations [17]. Dynamic open-loop control has found many applications, from the preparation of quantum states of special interest, such as entangled states, and the implementation of quantum gates for quantum information processing, to the control of spin dynamics in nuclear magnetic resonance (NMR), electron spin resonance (ESR), magnetic resonance imaging (MRI), and electronic, vibrational and rotational states of atoms and molecules [18]. All of these paradigms, however, also have limitations and drawbacks. Dynamic control, for example, requires the ability to temporally modulate interactions, often at significant speed and time resolution. Besides, for networks with a high degree of symmetry, such as rings with uniform coupling, controllability is often limited by dynamic symmetries, which imposes restrictions on what can be achieved, especially with local actuators [12].

Fig. 1: Schematic of a direct feedback loop for a quantum network (left) and a conventional operational amplifier (right).

Here we focus on the paradigm of finding constant interaction strengths as an alternative to dynamic control. Specifically, we wish to design simple FCLs with time-invariant biases D_n, giving rise to a linear control system

(d/dt) U(t) = (A + Σ_n D_n B_n) U(t),   (8)
U(0) = I,   (9)

with A = −i H_0 and B_n = −i |n⟩⟨n|, where I is the identity matrix. A and the B_n are complex operators, but we could transform the system into a real system. Decomposing the Schrödinger equation (5) as (8)-(9) and interpreting the control term as a feedback control law is not only conceptually but practically useful, as it brings control insights to the problem. The addition of the control term creates a no-measurement "direct feedback loop," a concept reminiscent of the seminal work of Bode [37], where even though feedback exists no measurements are needed. Fig. 1 attempts to illustrate the quantum control-feedback amplifier metaphor. Dynamic control problems have been formulated in terms of model-based feedback, and techniques such as Lyapunov control have been successfully applied to these problems, e.g., [19, 20]; even dynamic open-loop control schemes can be reformulated as time-varying FCLs. However, our aim here is to find constant FCLs for certain tasks, while at the same time restricting the Hamiltonian to have a simple form. Restricting the control of a bilinear system such as Eqs.
(5)-(6) to be time-invariant reduces the design to a linear, but unconventional, control design [33].

III Design of Optimal Feedback Control Laws for Excitation Transport

III-A Control Objectives

Our main control objective is to transfer an initial state |in⟩, corresponding to the initial excitation of the system on one spin, to a desired target state |out⟩, corresponding to the excitation on another spin, for any given pair of initial and target spins. Mathematically, we formulate the problem of arbitrary state transfer (not limited to single excitation states) as finding an input-output map given by a unitary operator U(T) that maximizes the (squared) fidelity or probability of successful transfer from |in⟩ to |out⟩ in an amount of time T:

F(T) = |⟨out| U(T) |in⟩|².   (10)

In practice, readout of information is generally not instantaneous but takes place over a finite time window. In this case it is more advantageous to maximize the average transfer fidelity for a given readout time window of width δt around T,

F̄(T) = (1/δt) ∫ from T−δt/2 to T+δt/2 of |⟨out| U(t) |in⟩|² dt.   (11)

Setting |out⟩ = |in⟩ and choosing a large readout time window, we can suppress transport away from the initial node and localize or freeze excitations at a particular node for later use by maximizing F̄. Unitarity of U(T) ensures selectivity of the transfer: if U(T) maps the input state to the target state, then no other (orthogonal) state can be mapped to the target state. Quantitatively, an initial preparation error maps to a terminal error of the same magnitude as that of the initial error. The flipside of this selectivity requirement is that we cannot hope to engineer a process that renders the target state asymptotically stable, but can only expect Lyapunov stability or Anderson localization [4, 5].

Fig. 2: Spin ring with an energy landscape created by localized potentials.

We are interested in control of information transfer by shaping the potential energy landscape of the system (see Fig. 2). The extent to which the energy landscape can be controlled in an actual device is subject to constraints, the precise nature of which depends on the physical realization. However, there is generally some freedom to shape the energy landscape. For example, there are proposals for semiconductor architectures consisting of quantum dots with surface gates that control the energy levels via the Stark shift. In other architectures, magnetic fields (Zeeman shift) can be used to locally or globally control the energy landscape. In atom traps, control of the energy landscape can be achieved by deforming the optical lattice [9]. As this paper is mainly concerned with the development of a theoretical framework, details of experimental realizations and constraints are beyond the scope of the current work and are left for future work.

Controlling the energy landscape means we wish to find a FCL with H_C = Σ_n D_n |n⟩⟨n| and constant biases D_n that maximizes the probability of information transfer given by Eq. (10) or (11). For a network with fixed topology defined by the couplings J_{mn}, this corresponds to applying local potentials that are constant in time, resulting in a constant Hamiltonian and an input-output map U(t) that is the solution of the Schrödinger Eq. (5) with H = H_0 + H_C. The objective is to find a control parameter vector D = (D_1, ..., D_N) that maximizes the instantaneous transfer fidelity or the average transfer fidelity at some time T. We can fix T, require T ≤ T_max with an upper bound T_max, or aim to achieve the transfer with maximum fidelity in minimum time. We also wish to consider the sensitivity of the transfer with regard to uncertainties in system parameters, such as coupling strengths and local potentials, as well as disturbances such as environmental noise.
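As a hedged illustration of the objectives (10) and (11), the following sketch evaluates the instantaneous and window-averaged transfer probabilities for a constant-bias controller. It reuses the hypothetical h1_ring helper from the earlier sketch; the bias values are arbitrary examples.

```python
import numpy as np
from scipy.linalg import expm

def fidelity(H, m, n, T):
    """Instantaneous transfer probability |<n| exp(-iHT) |m>|^2 (0-indexed)."""
    return abs(expm(-1j * H * T)[n, m]) ** 2

def avg_fidelity(H, m, n, T, dt, samples=201):
    """Average transfer probability over the readout window [T-dt/2, T+dt/2]."""
    ts = np.linspace(T - dt / 2, T + dt / 2, samples)
    return np.mean([fidelity(H, m, n, t) for t in ts])

H = h1_ring(5, D=[0.0, 8.0, 0.0, 1.0, 1.0])   # some constant biases
print(fidelity(H, 0, 2, 3.0))                 # instantaneous, Eq. (10)
print(avg_fidelity(H, 0, 2, 3.0, 0.5))        # window-averaged, Eq. (11)
```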
For practical applications, it is often preferable to modify the objective slightly and aim to find a FCL that achieves a desired transfer in minimum time with a certain margin of error, as we do not necessarily require perfect state transfer but only that the final state be sufficiently close, up to a global phase factor, to the desired target state. Finding FCLs for information transfer in spin networks thus reduces to an optimization problem, which can be solved using standard optimization tools. However, the optimization landscape is very challenging, in particular when the goal is to find a control that achieves the highest possible fidelity in the shortest time possible, possibly subject to various other constraints.

III-B Perfect State Transfer & Speed Limits

There are many open questions regarding the existence of FCLs that achieve perfect state transfer, in finite time or asymptotically, and the control resources required. Perfect state transfer from |in⟩ to |out⟩ at time T requires realization of a U(T) such that U(T)|in⟩ equals |out⟩ up to phase. For perfect state transfer we have ⟨out|U(T)|in⟩ = e^{iφ} with a global phase factor e^{iφ}. Hence, if the fidelity reaches its upper bound, we need not have U(T)|in⟩ = |out⟩, but only U(T)|in⟩ ∈ [|out⟩], where [|out⟩] denotes the equivalence class of a unit vector of the state space in the complex projective space. It is easy to see that perfect state transfer is always possible between any pair of states in time T for any T > 0 if there are no constraints on the control Hamiltonian H_C, as we can simply choose H_C so that the total Hamiltonian generates a rotation taking |in⟩ to |out⟩ in time T. However, the existence of FCLs that achieve perfect state transfer when the actuators are constrained is not obvious. Furthermore, even if such FCLs exist, information transfer is usually subject to speed limits. While it is nontrivial to derive speed limits for arbitrary quantum networks, we can derive lower bounds on the transfer time in certain cases, which can be used as performance indicators for the optimization.

If the distance between initial and target spin is 1, we can reduce the network to a two-spin system with direct coupling by applying large biases to sites other than the input and output spins, yielding an effective two-spin Hamiltonian with coupling J and biases D_1, D_2 on the diagonal. This system undergoes Rabi oscillations with the Rabi frequency Ω = (J² + Δ²/4)^{1/2}, Δ = D_1 − D_2, and it can easily be shown that the transfer probability is p(t) = (J²/Ω²) sin²(Ω t). The maximum of 1 is achieved, for t = π/(2J), if and only if Δ = 0, i.e., equal biases on the input and output spins. Similarly, if the distance between input and output spin is 2, the network can be reduced to a three-spin chain by quenching it and assuming zero bias on the three spins of the chain. In this case we can easily show that the transfer probability is p(t) = sin⁴(√2 J t / 2). Here we have perfect state transfer for t = π/(√2 J).

More generally, we can derive speed limits by quenching rings to chains and using the eigenstructure symmetries. If the biases in a ring with uniform nearest-neighbor couplings are mirror symmetric about the axis through the midpoint between input and output spin, then the Hamiltonian commutes with the corresponding permutation, with permutation matrix P, i.e., PH = HP. Let H = VΛVᵀ be an eigendecomposition of H with eigenvectors v_k and eigenvalues λ_k. Then P v_k = ±v_k, i.e., the input and output components of each eigenvector agree up to sign, and the tracking error with a global phase factor takes the form of Eq. (21) derived in Section IV. This expression vanishes if each phase λ_k T is an appropriate integer multiple of π. For a chain of length three with no bias, the eigenvalues are 0 and ±√2 J, and we achieve perfect state transfer for t = π/(√2 J), setting the global phase factor to −1.

IV Eigenstructure and Symmetry

The observations about the role of symmetries and the Hamiltonian eigenstructure motivate a careful analysis of the role the latter play in the design of FCLs for information transfer in spin networks.
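The three-spin speed-limit example can be checked directly. A minimal verification, assuming a uniform chain with J = 1 as above: the end-to-end transfer probability is sin⁴(√2 t / 2), perfect at t = π/√2, with transfer amplitude −1, i.e., a global phase factor of −1.

```python
import numpy as np
from scipy.linalg import expm

H3 = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]], dtype=complex)   # three-spin chain, J = 1
t = np.pi / np.sqrt(2)
amp = expm(-1j * H3 * t)[2, 0]              # <3| U(t) |1>
print(abs(amp) ** 2)                        # ~1.0: perfect state transfer
print(np.round(amp, 6))                     # ~(-1+0j): global phase -1
```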
IV-A Eigenstructure

Consider the eigendecomposition of the Hamiltonian

H = Σ_k λ_k Π_k,

where Π_k is the projector onto the kth eigenspace associated with the eigenvalue λ_k, and the sum runs over the distinct eigenspaces. The eigenvalues are real, as the Hamiltonian is Hermitian. Furthermore, as in our case H is a real symmetric matrix, the projectors are also real symmetric. The associated input-output map is U(t) = Σ_k e^{−iλ_k t} Π_k. For the objective of maximizing the transfer fidelity at time T, we have

|⟨out|U(T)|in⟩| = |Σ_k e^{−iλ_k T} ⟨out|Π_k|in⟩| ≤ Σ_{k∈K'} |⟨out|Π_k|in⟩|,

where K' is the subset of the eigenspaces that have non-zero overlap with the input and output state, ⟨out|Π_k|in⟩ ≠ 0, and a global phase factor e^{iφ} does not affect the norm. This means the maximum is achieved if (but not only if)

• the phases of the exponentials e^{−iλ_k T} cancel the phases of the projections ⟨out|Π_k|in⟩, up to a global phase factor that is absorbed by the absolute value, and

• Σ_{k∈K'} |⟨out|Π_k|in⟩| is maximized simultaneously with the phase assignment.

The transfer is perfect if the upper bound is attained, in which case we call the controller superoptimal. To prove that the preceding conditions are not only sufficient but necessary, we observe that equality in the triangle inequality above requires all terms e^{−iλ_k T} ⟨out|Π_k|in⟩ to share the same phase. This yields

Theorem 1. Necessary and sufficient conditions for superoptimality are: 1. the eigenprojections of H satisfy Σ_{k∈K'} |⟨out|Π_k|in⟩| = 1; 2. the eigenvalues are such that the λ_k T are, modulo the global phase, even or odd multiples of π depending on whether the ⟨out|Π_k|in⟩ are positive or negative, resp.

Corollary 1. For any bias controller and any T it is impossible for all ⟨out|Π_k|in⟩, k ∈ K', to have the same sign. As {Π_k} is a resolution of identity and ⟨out|in⟩ = 0,

Σ_k ⟨out|Π_k|in⟩ = ⟨out|in⟩ = 0.   (20)

In the special case of K' containing two elements, e.g., K' = {1, 2}, Eq. (20) yields ⟨out|Π_1|in⟩ = −⟨out|Π_2|in⟩ for any controller, with the remaining states being dark, i.e., ⟨out|Π_k|in⟩ = 0 for k ∉ K'. The resulting freedom could be used to secure the phase condition along with |⟨out|Π_1|in⟩| + |⟨out|Π_2|in⟩| = 1, which yields F(T) = 1, i.e., perfect state transfer.

IV-B Signature Property in the Case of Distinct Eigenvalues

In the generic case when H has distinct eigenvalues, we have Π_k = v_k v_kᵀ, where {v_k} is the (real) orthonormal frame of eigenvectors of H. Taking a_k and b_k to be the projections of the (real) input and output states onto the kth eigenvector of H, the tracking error becomes a function of the products a_k b_k and the phases λ_k T (Eq. (21)). It assumes its global minimum of 0 if and only if the phase condition of Theorem 1 holds and Σ_k |a_k b_k| = 1, where the global phase factor is real at optimality. Noting Σ_k a_k² = Σ_k b_k² = 1, the previous condition is equivalent to |a_k| = |b_k| for all k. Setting s_k = sign(a_k b_k), we get

b_k = s_k a_k.   (24)

Even though only the input and output components of the eigenvectors matter in perfect state transfer, the signature property extends to other components related by symmetry.

IV-C Symmetries & Full Signature Property of Eigenvectors

The key to finding good feedback control laws by optimization lies in understanding the symmetries of the system and using the biases to enforce or annul certain symmetries. For this we constrain the controls to ensure that the first condition, |a_k| = |b_k| for all k, is satisfied for all admissible controls. Let H = Σ_k λ_k v_k v_kᵀ be an eigendecomposition of H. If there is a unitary operator R that commutes with H, RH = HR, then if v_k is a unit eigenvector with eigenvalue λ_k, so is R v_k. If the eigenvalues of H are distinct, then both vectors can only differ by a phase, R v_k = e^{iθ_k} v_k; in particular, the magnitudes of the components are preserved. Hence, we need to find a unitary operator R that commutes with H and satisfies R|in⟩ = |out⟩.

Example 1. For a chain of length N with uniform coupling we have inversion symmetry, i.e., the system Hamiltonian commutes with the permutation operator R defined by R|n⟩ = |N + 1 − n⟩ for n = 1, ..., N. If the control Hamiltonian also commutes with R, then |a_k| = |b_k| for all k whenever the input and output nodes satisfy |out⟩ = R|in⟩.

Example 2. For a ring of N spins with uniform coupling, we have translation invariance in addition to inversion symmetry. Therefore, we can always choose biases such that |a_k| = |b_k| for all k. For a ring with uniform coupling we can show that Eq.
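The eigenstructure conditions above can be probed numerically. The sketch below is hedged: it assumes a 5-ring with biases chosen mirror-symmetric about the axis through the input-output pair, and reuses our hypothetical h1_ring helper. It computes the upper bound Σ_k |⟨out|v_k⟩⟨v_k|in⟩| on the transfer amplitude and checks the signature property |a_k| = |b_k|.

```python
import numpy as np

def eigen_bound(H, m, n):
    """Upper bound on |<out|U(T)|in>| and a check of |a_k| = |b_k|."""
    lam, V = np.linalg.eigh(H)
    a, b = V[m, :], V[n, :]      # input/output components of each eigenvector
    return np.sum(np.abs(a * b)), bool(np.allclose(np.abs(a), np.abs(b)))

# Transfer 1 -> 3 (0-indexed 0 -> 2); the mirror fixes spin index 1 and swaps
# 0 <-> 2 and 3 <-> 4, so we pick biases symmetric under that mirror.
H = h1_ring(5, D=[0.0, 5.0, 0.0, 1.0, 1.0])
print(eigen_bound(H, 0, 2))      # (bound <= 1, True)
```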
(24) not only holds for the input-output components, but also for those components related by the permutation symmetry. Motivated by the cyclic symmetry of the ring requiring modulo operations, we relabel the indices of the spins starting at 0 rather than 1, and the indices are taken modulo N without indicating this explicitly, to keep the notation simple. By convention, the labeling of the vertices is clockwise around the ring.

Theorem 2. For a ring of N spins with uniform coupling between adjacent spins only, the eigenvectors are signature symmetric in the sense of Eq. (24) under mirror symmetry of the biases about the axis through the midpoint between input and output spin. Furthermore, a stronger statement holds if N is even. The proof is given in Appendix A.

V Sensitivity to Uncertainties

V-A General Sensitivity

We analyze the sensitivity of the (squared) fidelity or probability of successful transfer relative to uncertainties in the couplings or other parameters. Let

H_δ = H_0 + H_C + Σ_μ δ_μ S_μ

be the total Hamiltonian of the perturbed system, where H_0 is the ideal system Hamiltonian, H_C is the control Hamiltonian, here assumed to be time-invariant, and the δ_μ S_μ, for μ = 1, 2, ..., are perturbations. S_μ reflects the structure of the perturbation and δ_μ its amplitude. For uncertainty in the coupling J_{mn}, we take S_μ = |m⟩⟨n| + |n⟩⟨m|. The transfer operator of the perturbed system is U_δ(t) = exp(−i H_δ t). The design sensitivity is determined by the partial derivative of the fidelity w.r.t. δ_μ. As H generally does not commute with the perturbation S_μ, we use the general formula

(∂/∂δ_μ) U_δ(t) = −i ∫ from 0 to t of U(t − τ) S_μ U(τ) dτ

to evaluate the partial derivative, which remains valid around δ_μ = 0 (see [21, 22]). From the eigendecomposition of the perturbed Hamiltonian, H_δ = Σ_k λ_k(δ) Π_k(δ), where Π_k(δ) is the projector onto the eigenspace associated with the eigenvalue λ_k(δ), it is readily found that the integrand decomposes into a double sum over pairs of eigenspaces, and evaluation of the integral gives terms oscillating at the eigenvalue differences λ_k − λ_l. Inserting this into Eq. (30), using the reality of the projectors, and defining the projected overlaps of the input and output states onto the eigenspaces, gives the sensitivity formula Eq. (34), whose terms are weighted by sines of the phases (λ_k − λ_l)T/2.

V-B Sensitivity at the Transfer Time

Up to now, the sensitivity could have been evaluated at any t. From here on, we restrict the discussion to t = T. One of the implications of Theorem 1, which says that at superoptimality each λ_k T is a multiple of π modulo the global phase factor, is that for k, l ∈ K' the argument of the sine in Eq. (34) is a multiple of π (the global phase factors cancel), and thus the sine vanishes. Therefore, the sum in Eq. (34) can be restricted to pairs with at least one index outside K'. Next, observe that the projections onto the dark eigenspaces, ⟨out|Π_k|in⟩ for k ∉ K', vanish. This allows us to isolate the sum over K', which takes the value 1 at superoptimality. Finally, observe that the remaining sine factor vanishes when its argument is a multiple of 2π. Putting everything together, the sensitivity formula becomes Eq. (35), where the sum is restricted to those k such that λ_k T is an odd multiple of π.

Fig. 3: Results of optimizing the information propagation between two pairs of input and output spins (left and right panels) for an XX-ring, using L-BFGS optimization with exact gradients over the spatial biases for a grid of fixed times. Each data point represents the infidelity achieved for the corresponding time by a single optimization run, with multiple restarts from different initial values for each time. The optimization gets trapped often, but it can still find good solutions for certain times. In particular, good solutions were found at the speed limit for both transfers.

V-C Vanishing Sensitivity to Symmetric Perturbations at Optimality

We prove that the sensitivity of the fidelity relative to a real perturbation structured as S = |m⟩⟨n| + |n⟩⟨m| vanishes. This includes a perturbation of the coupling J_{mn}, and the case where the perturbation is on the control bias, in which case S = |n⟩⟨n|.
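Before the proof, a brief numerical aside: the sensitivity defined in V-A can be estimated directly by central finite differences, as in this hedged sketch (structure S = |i⟩⟨j| + |j⟩⟨i| for an uncertain coupling; H is any single-excitation Hamiltonian, e.g., from our earlier h1_ring helper).

```python
import numpy as np
from scipy.linalg import expm

def coupling_sensitivity(H, m, n, T, i, j, eps=1e-6):
    """d/d(delta) of |<n| exp(-i (H + delta*S) T) |m>|^2 at delta = 0."""
    S = np.zeros_like(H)
    S[i, j] = S[j, i] = 1.0
    f = lambda d: abs(expm(-1j * (H + d * S) * T)[n, m]) ** 2
    return (f(eps) - f(-eps)) / (2.0 * eps)
```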
The vanishing of the latter sensitivity is quite trivial; indeed, if the controller is differentiably optimal, the first-order conditions require that the directional derivative of the fidelity along any control direction, in particular the bias direction S = |n⟩⟨n|, must vanish. By a real Gram-Schmidt orthonormalization process, we express the perturbation in the frame of unperturbed eigenvectors, so that Eq. (35) can be rewritten as a double sum over eigenvector indices k, l (Eq. (36)). Observe that the first factor in each term is symmetric relative to the indices k, l. Likewise, the second factor is symmetric relative to the indices k, l, because S is a real symmetric matrix and the eigenvectors were taken to be real. If on the right-hand side of Eq. (36) we add the same right-hand side with interchanged indices k and l, we obtain twice the partial derivative of the squared fidelity relative to δ. Next, we use the signature property (24) to show that the symmetrized terms cancel pairwise. Therefore, the right-hand side of Eq. (37) vanishes, and the sensitivity vanishes.

Theorem 3. Consider a spin ring in its single excitation subspace with biases which differentiably maximize the fidelity. At optimality, the sensitivity of the fidelity relative to any real, symmetric perturbation vanishes.

VI Optimization in a Challenging Landscape

We present numerical optimization results for instantaneous and average information transfer and localization, as well as the corresponding sensitivities of the controllers versus their performance. Initial results on controlling instantaneous information flow in spin networks are available in [23]; they are summarized and expanded here. The results are computed for uniform rings with XX couplings. All results for the full range of ring sizes considered are available in a separate data set [24].

Solving the optimization problem in Eq. (12) directly for a fixed target time T is challenging even without constraints on the biases, as the landscape is extremely complicated, with many local extrema, resulting in trapping of local optimization approaches such as quasi-Newton methods. Fig. 3 shows the results of various runs at fixed times for a ring, with the objective being to propagate the excitation from the initial spin to each of two target spins. While good solutions are found for certain times, including the minimum times given by the quantum speed limits (see Section III-B), the optimization clearly gets trapped frequently, making finding good solutions very expensive. Instead of fixing the transfer time, we add the time as an additional parameter to optimize over.

The results in Section IV show that the structure of the eigenvalues and eigenvectors must fulfill a specific condition to be able to maximize the transfer fidelity. While there are many potential structures that fulfill the condition, we can choose a specific one to provide a guide for good initial values and a restricted domain for the search. The idea is to quench the ring into a chain from the initial spin to the target spin. Previous work showed that this can easily be achieved by applying a very strong potential in the middle between initial and target spin [3]. If we can control the potentials of all spins, then we can generalize this to quench the ring just before the initial and just after the target spin, giving two options for a chain connecting the two nodes, where either could provide a solution. Furthermore, applying mirror-symmetric potentials across the axis through the middle between initial and target state in the ring gives rise to an eigenstructure satisfying the optimality conditions.
Consequently, we choose such symmetric potentials, in combination with the approximate times where the maximum fidelity is achieved in the related chains, as initial values for the optimization. This significantly improves the efficiency of finding controls for maximum information transfer in minimum time, as already observed in [23]. The symmetry constraint can easily be applied in the optimization by reducing the biases to be found to those on one side of the symmetry axis between initial and target spin. Convergence of the optimization can be further improved by randomly selecting constants, peaks or troughs as initial biases between initial and target spin on both sides of the ring, and by selecting initial times from the transition times required for a spin chain of length corresponding to the distance between input and output spin. A sketch of such an optimization loop is given after this paragraph block.

Fig. 4: Optimization results for the information transfer probability between two spins of an XX-ring over the spatial biases and the time. The left column shows the biases and the evolution (in blue, vs. the natural evolution in red) giving the best fidelity found. The middle column shows the fastest solution found with a fidelity greater than the threshold. The right column shows the overall solutions found by repeated optimization, plotting time vs the logarithm of the infidelity, together with a histogram of the logarithm of the infidelity. The bottom row shows the eigenstructure of the best and the fastest solutions and their symmetries, with the eigenvectors being the columns of the matrices (in cyan; green and red rows indicate the input and output states, resp.) and the corresponding eigenvalues at the bottom (in purple).

Figs. 4 and 5 show the optimization results for two ring sizes for a fixed input-output transition. We report the solution with the highest fidelity and the fastest solution with a fidelity larger than a set threshold. Typically the highest-fidelity solutions are found at longer times, but good solutions for short times are also achieved. However, many restarts of the optimization are required, and many runs fail with substantially smaller fidelities. Observe the eigenstructure symmetries for the solutions, consistent with Section IV. Shortest-time solutions are found, while the best solution is at a different time. We also report results for optimizing the average transfer fidelity, Eq. (11): see Figs. 6 and 7 for two ring sizes, each for a fixed transition. We show the solution with the highest fidelity and the fastest solution with a fidelity larger than a threshold, lower than in the instantaneous case as the average fidelities are smaller as well.

Figs. 8 and 9 summarize the shortest times achieved for instantaneous and average fidelities above the respective thresholds for a range of ring sizes. Due to the symmetry of the connections, only transitions up to half the ring are reported. For the nearest target spins, the fastest times are generally consistent with the speed limits in Section III-B, but the shortest times could not always be achieved. All individual results can be accessed in a separate data set [24]. The cases where no minimum-time solution satisfying the minimum fidelity requirements was found further show the difficulty of finding good controllers. Improved optimization strategies will be explored in future work. Optimizing the average information transfer fidelity per Eq. (11) can also be used to localize the excitation at a particular spin by maximizing F̄ with |out⟩ = |in⟩, as noted in Section III-A. Numerical results for two ring sizes are shown in Figs. 10 and 11 for a given holding time.
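Here is the promised hedged sketch of the optimization loop, using SciPy's L-BFGS-B over the biases and the transfer time jointly, with random restarts. The paper uses exact gradients and symmetry-adapted initial values; here the gradient is numerical and the initial values are random, so expect frequent trapping. The h1_ring helper is from the earlier sketch.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

def infidelity(params, N, m, n):
    """1 - transfer probability; params = (biases D_1..D_N, time T)."""
    D, T = params[:-1], params[-1]
    H = h1_ring(N, D=D)
    return 1.0 - abs(expm(-1j * H * T)[n, m]) ** 2

N, m, n = 5, 0, 2
runs = []
for _ in range(20):                      # random restarts
    x0 = np.append(np.random.uniform(0, 10, N), np.random.uniform(1, 10))
    runs.append(minimize(infidelity, x0, args=(N, m, n), method="L-BFGS-B"))
best = min(runs, key=lambda r: r.fun)
print(best.fun, best.x)                  # best infidelity; biases and time
```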
Theorem 3 indicates that at superoptimality the sensitivity vanishes, which is further explored numerically here. Results for instantaneous transfer and time-window average transfer, shown in Figs. 12, 13 and 14, indicate a positive correlation between the sensitivity of the controllers and the infidelity. The specific sensitivity measure used here is the norm of the vector of sensitivities w.r.t. uncertainties in the couplings. Among controllers indexed in decreasing order of fidelity, where we only show those with fidelities greater than a threshold, the very best controllers, nearly achieving the upper bound on the fidelity (i.e., near-vanishing tracking error), have nearly vanishing sensitivity; furthermore, with the deterioration of the fidelity the sensitivity increases.

Fig. 5: Results for optimizing the information transfer probability for a larger ring, similar to Fig. 4 (without eigenstructures).

VII Classical versus Quantum Robust Control

Given a loop matrix L, a classical result is that the sensitivity S = (I + L)⁻¹, mapping the reference to the tracking error, and the complementary sensitivity T = L(I + L)⁻¹, which is the logarithmic sensitivity derived from d(log S) = −T d(log L), are in conflict, since S + T = I. Horowitz [25, Chap. 6] was probably the first to point out that the limitation imposed by the SISO single-degree-of-freedom configuration could be overcome by means of a two-degrees-of-freedom configuration. Ever since this fundamental observation, many MIMO two-degrees-of-freedom architectures have been proposed [6, 7]. The energy landscape controller is, in a certain sense, a two-degrees-of-freedom controller, as it depends on both the current state and the target state, and does not explicitly depend on the tracking error. However, as already alluded to in Sec. III-B, the information transfer controller does not have a tracking error in the classical sense, but a projective tracking error.

Fig. 6: Results for optimizing the average information transfer probability for a ring, similar to Fig. 4, with the only difference that the fastest solution with a fidelity greater than a lower threshold (lower due to the averaging) has been selected.

Fig. 8: Shortest times achieved for instantaneous transition fidelities greater than the threshold for rings of various sizes and transitions up to half the ring. Note that for a few transitions no solution with fidelity greater than the threshold was found, so no fastest results are reported. The color of the bars indicates the infidelity of the fastest solution.

Fig. 9: Shortest times achieved for average transition fidelities greater than the threshold for rings of various sizes and transitions up to half the ring. Note that for one transition no solution with fidelity greater than the threshold was found, so no fastest results are reported. The color of the bars indicates the infidelity of the fastest solution.

Fig. 10: Optimization results for localizing the excitation at a spin of a ring over the spatial biases. The left column shows the biases and the evolution (in blue, vs. the natural evolution in red) giving the best fidelity for a given localization time.

Fig. 11: Optimization results for localizing the excitation in a larger ring, similar to Fig. 10.

To proceed towards classical Laplace-domain control, consider the quantum mechanical projective tracking error

e(t) = (e^{iφ(t)} |out⟩ − U(t)|in⟩) 𝟙(t),

where 𝟙(t) denotes the unit step. The phase factor e^{iφ(t)} is a generalization of the phase factor of Section III-B, securing that

‖e(t)‖² = 2 (1 − |⟨out|U(t)|in⟩|);

that is, minimization of the tracking error is equivalent to maximization of the fidelity. It is easily seen that the phase factor securing the above equality is e^{iφ(t)} = ⟨out|U(t)|in⟩ / |⟨out|U(t)|in⟩|. This creates an unconventional (adaptive) feedback from U(t) to the phase factor.
Instead of minimizing the tracking error or maximizing the fidelity over the biases at a specific time, we could optimize in a time-average sense, opening the road to Laplace transform techniques. The Laplace transform of the error involves the transform of the adaptive phase factor, convolved with the unit-step reference 1/s applied to the target state, minus the resolvent (sI + iH)⁻¹ applied to the input state, where the target is related to the input by a permutation matrix P, |out⟩ = P|in⟩, and the convolution is taken in the Laplace domain. Since this mapping takes the unit-step reference to the error, it can be interpreted as a sensitivity matrix, but it differs significantly from the classical sensitivity matrix of the single-degree-of-freedom loop.
Vishnu Jejjala,a Michael Kavic,b Djordje Minic,b Chia Tze,b
a IHES, Le Bois-Marie, 35, route des Chartres, F-91440 Bures-sur-Yvette, France
b Department of Physics, Virginia Tech

With the new Large Hadron Collider (LHC) becoming operational in the near future, our understanding of quantum chromodynamics (QCD) is essential for analyzing the data to be collected. One area in which we lack understanding is the nonperturbative effects of the theory. Understanding the non-perturbative dynamics of Yang-Mills theory will bring us one step closer to this goal. Currently, lattice gauge theory has made great progress in understanding QCD; analytic understanding, however, has not come as far. In our present work, we consider Yang-Mills theory in (2+1) and (3+1) dimensions with a large number of colors. The primary focus of our research is to construct the spectrum of gauge-invariant glueball states. In the 2+1 case, we use a Hamiltonian approach proposed by Karabali, Kim, and Nair (1997) in which the theory is rewritten in terms of gauge-invariant "corner" variables. Using this approach, analytic computations can be done. In the 3+1 case, the Karabali, Kim, and Nair formalism is extended from 2+1 to 3+1 using corner variables (Bars 1978). This extension allows us to compute our results in 3+1 using the same physical insight and analytic tools as in the 2+1 case.

Vacuum Wave Functional
• In 2+1 dimensions, we take a vacuum wavefunctional ansatz for which the Schrödinger equation reduces, up to a divergent vacuum energy E0, to an equation for the kernel.
• The kernel equation has a general solution in terms of Bessel functions.
• Only one solution is normalizable and has the correct asymptotics in the UV and IR limits.
• In 3+1 dimensions, we take an analogous vacuum wave functional ansatz.

Comparison With Lattice (panels: (2+1) 0++ states; (2+1) 0−− states; (2+1) 2++ and 2−+ states; (3+1) J++ states; (3+1) J−+ states)

Summary of Results
• Determined a new non-trivial form of the vacuum wavefunctional by solving the Schrödinger equation for (2+1) and (3+1) Yang-Mills theory.
• Computed the glueball mass spectrum in (2+1) and (3+1) Yang-Mills theory. The 0++ glueball mass in (2+1) is statistically indistinguishable from the lattice result.
• Computed the string tension to within 1% of the lattice result.
Setup
• We begin with (2+1) Yang-Mills theory. The Hamiltonian is written in terms of the gauge field A and its conjugate electric field E in the temporal gauge A0 = 0, and the system is quantized canonically.
• Observable quantities and physical states must be gauge-invariant as a consequence of Gauss' law.
• Gauge-invariant variables are constructed from the corner variables; the volume measure of the configuration space is given by a hermitian Wess-Zumino-Witten action, and the volume of the configuration space is finite.
• In terms of the current built from these variables, the Yang-Mills Hamiltonian takes a form amenable to analytic computation. The parameter m is the 't Hooft coupling.
• The Schrödinger equation with this wave functional yields the kernel equation; note that in the 2+1 case, again, only one normalizable solution with the correct asymptotics is found.

QCD String
• By calculating the expectation value of a large spatial Wilson loop, the string tension is determined. This agrees within 1% of the lattice result.
• Background Independent Matrix Theory: we parameterize the gauge fields by a matrix M, which transforms linearly under gauge transformations.

Mass Spectrum
• Glueball states may be found by computing the equal-time correlators of gauge-invariant probe operators with the correct JPC quantum numbers. For example, 0++ is probed using Tr(B²).
• Expanding the kernel in terms of Bessel functions, at large separation distances we find contributions of single-particle poles.
• The mass constituents Mn are given by the ordered zeros of Bessel functions (the zeros j2,n of J2 for one sector, with analogous ordered zeros for the other quantum-number sectors); glueball masses are sums of their constituents.
• The Bessel function is essentially sinusoidal, so its zeros are evenly spaced (better for large n). Thus, the predicted spectrum has approximately evenly spaced levels, organized into bands concentrated around a given level.
• Preliminary counting suggests that there is an approximate (in the sense that the degeneracies are not exact) Hagedorn spectrum of states. We believe that this is a basic manifestation of the QCD string.

Future Prospects
• Extension of the method to include fundamental fermions (QCD) and other types of matter.
• Application to Yang-Mills theories at finite temperature.
• Computation of the spectrum of baryons.
• Computation of scattering amplitudes.
• Extension to supersymmetric and superconformal gauge theories.
• Condensed matter and statistical mechanics applications: the 3D Ising model, high-Tc superconductivity, etc.

References
R. G. Leigh, D. Minic and A. Yelnikov, Phys. Rev. Lett. 96:222001 (2006); hep-th/0512111.
R. G. Leigh, D. Minic and A. Yelnikov, hep-th/0604060.
L. Freidel, R. G. Leigh and D. Minic, Phys. Lett. B641:105-111 (2006); hep-th/0604184.
L. Freidel, hep-th/0604185.
L. Freidel, R. G. Leigh, D. Minic and A. Yelnikov, arXiv:0801.1113 [hep-th].
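The evenly-spaced-zeros claim is easy to see numerically. A small hedged illustration using SciPy's jn_zeros; the overall mass scale is left symbolic, so only the spacings are meaningful here.

```python
import numpy as np
from scipy.special import jn_zeros

zeros_J2 = jn_zeros(2, 8)      # first eight ordered zeros of the Bessel J2
print(zeros_J2)
print(np.diff(zeros_J2))       # spacings approach pi: J2 is asymptotically
                               # sinusoidal, giving the nearly evenly spaced
                               # constituent masses described above
```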
Chemical Science - Hydrogen Atom - Lecture 6

Video Lectures - Lecture 6
Topics covered: The Hydrogen Atom
Instructor: Prof. Sylvia Ceyer

Transcript - Lecture 6

All right. Let's get going. Where were we? We were at the point where we started out the course wondering about the structure of the atom, how the electron and the nucleus hung together. And we saw that we could not explain how that nucleus and electron hung together using classical ideas: classical physics, classical mechanics and classical electromagnetism.

And so we put that discussion aside and started to talk about the wave-particle duality of light and matter. And we saw that both light and matter can behave as a wave or as a particle. And we needed that discussion in order to come back to talk about the structure of the atom. In particular, what was so important last time we met is that we saw the results of an experiment, the Davisson, Germer and George Thomson experiment, that demonstrated that a particle with mass could exhibit wave-like behavior.

We saw the interference pattern of electrons reflected from a nickel single crystal. The original paper that reports that result is on our website. You're welcome to take a look at it. But that really was the impetus, this observation of the wave-like behavior of matter. That was the impetus for this gentleman here, Erwin Schrödinger, in 1926-27 to write down an equation of motion for waves. That is, he thought maybe the answer here is that if particles can behave like waves, then maybe we have to treat the wave-like nature of particles, the wave-like nature of electrons in particular, in the case when the wavelength, the de Broglie wavelength of the particle or the electron, is comparable to the size of its environment.

Maybe in those cases we have to use a different kind of equation of motion, a wave equation of motion. And that is what he did. So, he wrote down this equation. We briefly looked at it last time. This equation has some kind of operator called the Hamiltonian operator. It has a hat on it, a caret on the top of it. That tells us it is an operator that operates on this thing, psi. That psi is what's going to represent our particle. That psi is a wave; since we're going to give it a functional form in another day or so, we're going to call it a wave function.

Somehow that psi represents our particle. Exactly how it represents our particle is something we're going to talk about again in a few days. But right now the important thing is to realize that this psi represents the presence of a particle, in this case the electron. And when H operates on psi we get back psi. We do this operation and out comes psi again, the original wave function, but it's multiplied by something. That something is the energy. It's the binding energy. It's the energy with which the electron is bound to the nucleus.

This equation is an equation of motion. This wave equation, Schrödinger's equation, is to this new kind of mechanics, called quantum mechanics, what Newton's equations of motion, and I show you just the second law here, are to classical mechanics. This equation here tells us how psi changes with position and also with time. It tells us something about where the electron is, and also where the electron is as a function of time.
It's an equation of motion. And this is what Schrödinger realized: maybe when the wavelength of a particle is on the order of the size of its environment, you have to treat it with a different equation of motion. You can no longer use F = ma, the classical equation of motion. You have to use a different equation of motion. And that's Schrödinger's equation, this wave equation. Can we dim the front lights a little bit? Because the screen is just a little bit hard to see, I think.

We are going to let your electron be represented by this wave psi. And so psi, since it's going to be representing the electron, is going to be a function of some position coordinates, and also, in the broadest sense, a function of time. Now, we, of course, can label psi in Cartesian coordinates, giving it an x, y and z. If I gave you x, y and z for this electron in this kind of coordinate system, where the nucleus is at the origin of the coordinate system, if I give you x, y and z coordinates, you'd know where the electron was. But it turns out that this problem of the hydrogen atom is really impossible to solve if I use a Cartesian coordinate system. So, I am going to use a spherical coordinate system. How many of you are familiar with and have used a spherical coordinate system before? A few of you. Not all of you. Well, it's not hard to understand. And it is important that you understand it. So, instead of giving you an x, y and z to locate this electron, this particle, in space, we're going to give you an r, a theta and a phi.

And the definitions of r, theta and phi are the following. If here is my nucleus at the origin and here is the electron, r is the distance of that electron from the nucleus. It's just the length of this line right here. That's one coordinate. A second coordinate is theta. Theta is the angle that this r makes with the z-axis. And then the third coordinate is phi. Phi is the following. If I take that electron and I just drop it perpendicular to the xy-plane, and I then draw a line in the xy-plane from that point of intersection to the origin, the angle that that line makes with the x-axis here is phi.

So, I am going to give you an r, a theta and a phi. r is just the distance of the electron from the nucleus. Theta and phi tell us something about the angular position. And then, as I said, in the largest sense, there is also time. But we'll talk about time a little bit later. Psi represents our electron. Now, what does the Schrödinger equation, specifically for a hydrogen atom, actually look like? This Hψ = Eψ is a kind of generic Schrödinger equation. And now we've got to write a specific one, one specific for the hydrogen atom. We need to know what this is, H. That's our Hamiltonian, our operator. And so the operator here for the hydrogen atom is this.

What it is, essentially, is three second derivatives: one here is with respect to r, a second is effectively a second derivative with respect to theta, and the final one is a second derivative with respect to phi. In other words, if this whole Hamiltonian is operating on psi, what you're going to do is essentially take the first derivative of psi with respect to r, multiply it by r², and then take a first derivative with respect to r again and multiply it by 1/r². And add to that the first derivative of psi with respect to theta multiplied by sin theta, etc. You don't have to know this. I'm just showing this to you so you recognize it later on. This is a differential equation. In 18.03, you learned how to solve these differential equations.
And then there is another very important term here, so it's all of this plus this: u(r). What is u(r)? The potential energy of interaction. And the potential energy of interaction, of course, is the Coulomb potential energy right here, with its one-over-r dependence. We've talked about the Coulomb force. This is the potential energy of interaction that corresponds to the Coulomb force. So, that's the specific Schrödinger equation for the hydrogen atom. Now, what we have got to do is solve this equation for the hydrogen atom. And when I say solve this equation, what I mean is we're going to have to find E, these binding energies of the electron to the nucleus.

That is part of our goal when we say solve this differential equation: figuring out what E is. And, actually, this is what we're going to do today, finding those energies. But then the second goal is to find psi. We want to find: what is the functional form of psi that represents the electron in the hydrogen atom? Therefore, we're going to want to find the wave functions for psi. And, you know what, those wave functions are nothing other than what you already sort of know, and that is orbitals. You talked in high school about s orbitals and p orbitals and d orbitals. Those orbitals are nothing other than wave functions. They come from solving Schrödinger's equation for the hydrogen atom. That's where they come from.

Now, specifically the orbital is something called the spatial part of the wave function, as opposed to the spin part. But for all intents and purposes they are the same. We're actually going to use these terms interchangeably: orbital-wave function, wave function-orbital. The bottom line is that when you solve Schrödinger's equation for the energy and the wave function, it makes predictions for the energies and the wave functions that agree with our observations, as we're going to see today, in particular for the case of the binding energies of the electron to the nucleus. This equation predicts having a stable hydrogen atom, a hydrogen atom that seemingly lives forever, in contrast to when we use classical equations of motion. When we used classical equations of motion we got a hydrogen atom that lived for all of about 10⁻¹⁰ seconds. But here we finally have some way to understand the stability of the hydrogen atom. The Schrödinger equation makes predictions that agree with our observations of the world we live in. And, therefore, we believe it to be correct. That is it: it agrees with the observations that we make.

Let's start. We're actually not going to solve the equation, as I said, but you will do so if you take 5.61, which is the quantum course in chemistry, or 8.04, I think it is, which is the quantum course in physics, after you take differential equations so that you know how to solve the differential equations. But we're going to write down the solution, in particular here for E, the binding energies of the electrons to the nucleus. Now we're going to need this. We've got Hψ = Eψ. And when we solve that equation, we get the following expression for E, these binding energies:

E = −(1/n²)(m e⁴ / 8 ε₀² h²).

That is what we get out of it, and there is a minus sign out in front. Now, what is m? m is the mass of the electron. What is e? e is the charge on the electron. ε₀ is this permittivity of vacuum that we talked about before. It's really just a unit conversion factor here. h is Planck's constant. Here comes Planck's constant again. It is ubiquitous. It's everywhere.
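Since the lecture assembles the binding-energy prefactor from fundamental constants, a quick numerical check is easy. A sketch using rounded CODATA-style values; the variable names are ours.

```python
m_e  = 9.1093837015e-31    # electron mass, kg
e    = 1.602176634e-19     # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
h    = 6.62607015e-34      # Planck constant, J s

R_H = m_e * e**4 / (8 * eps0**2 * h**2)
print(R_H)                               # ~2.1799e-18 J
for n in (1, 2, 3):
    print(n, -R_H / n**2)                # quantized binding energies, J
```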
And what we do is we typically take all of these constants and group them together into another constant that we call the Rydberg constant. And we denote it as R_H. So the binding energy is E = −R_H / n². And the value of R_H, that Rydberg constant, and this is something you're going to need to use a lot in the next few weeks, is 2.17987 × 10⁻¹⁸ joules.

But you also see, in this expression for the binding energies of the electron to the nucleus, that there is this n here. What's n? n is an integer. When you solve that differential equation, you find that n has only certain allowed values. n can be as low as 1, then 2, 3, and n can go all the way up to infinity. n is what we call the principal quantum number. I am going to explain that a little bit more by looking right now at an energy level diagram.

That is the expression. But now let's plot it out so that we can understand what is going on here a little more. We are going to be plotting this expression as n goes from 1 to infinity. I have the energy axis here. Energy is going to be going up in that direction. When n is equal to 1, the binding energy of that electron to the nucleus is effectively minus the Rydberg constant. Here I rounded it off: minus 2.18 × 10⁻¹⁸ joules.

But our expression here says that there can be another binding energy of the electron to the nucleus. It says that n can be equal to 2. And if n is equal to 2, well, then the binding energy of the electron is minus one-quarter of the Rydberg constant, because it is the Rydberg constant over 2². If n is equal to 3, well, our expression says that the binding energy is minus one-ninth of the Rydberg constant. If n is equal to 4, it is minus one-sixteenth of the Rydberg constant; n equal to 5, minus one-twenty-fifth; n equal to 6, minus one-thirty-sixth; n equal to 7, minus one-forty-ninth; all the way up to n equals infinity.

And you know what the value of the binding energy is when n is equal to infinity? Zero. Our equation says that the electron can be bound to the nucleus with this much energy or this much energy or this much energy and so on, but it cannot be bound to the nucleus with this much energy, somewhere in between, or this much energy or that much energy. It has to be exactly this, exactly this, exactly this, so on and so forth. That is important. What we see here is that the binding energies of the electron to the nucleus are quantized; that binding energy can only have specific allowed values. It doesn't have a continuum of values for the binding energy.

Yes? Those are identically the same size. Because this is an operator, right? I left the hat off here. Remember that we took a second derivative of psi? So, you cannot cancel this. This is an operator taking the derivative of psi. You cannot just cancel that. On this side you multiply psi by E; this is E times psi, but not over here. That's really important. We have this quantization of the allowed binding energies of the electron to the nucleus. Where did that quantization come from? That quantization came from solving the Schrödinger equation. It drops right out of solving the Schrödinger equation. How did that happen?
Well, in differential equations, as you will see, when you solve a differential equation, what you have to do to solve it so that it adequately describes your physical situation is you often have to impose boundary conditions onto the problem. And it's that imposition of boundary conditions that gives you that quantization. That is where it comes from mathematically. In other words, remember one of those angles that I showed you, the phi angle? You can see it would run from zero to 360 degrees. But you also know, if you go 90 degrees beyond 360 degrees, suppose you go to 450 degrees, well, that should give you the same result as if you had phi = 90. What you have to do is cut off your solution at 360 degrees. When you cut off that solution, well, then that gives you, in these differential equations, this quantization. That is physically where it comes from. Again, this is not something you're responsible for, but when you do differential equations later on in 18.03, you will see how that happens.

Let's talk some more about these allowed energy levels. When n = 1, the language we use is that we say that the hydrogen atom, or the electron, is in the ground state. We call this the ground state because this is the lowest energy state. It has the most negative energy. It's the lowest energy state. We call n = 1 the ground state of the hydrogen atom or the ground state for the electron. We use those terms, of the electron or the hydrogen atom, interchangeably.

Now, what's the significance of this binding energy? And this is important. The significance is that the binding energy is minus the ionization energy for the hydrogen atom, because if I put this energy in, from here to there, into the system, then I will be ripping off the electron and will have a free electron. So, the ionization energy is minus the value of this binding energy. The ionization energy is always positive. The binding energy, the way we're going to treat this, is going to be negative because the electron is bound. And in the separated limit, the electron far away from the nucleus, well, that energy is zero. So, the binding energy is minus the ionization energy, or conversely the ionization energy is minus the binding energy. That is the physical significance of these binding energies. And when we talk about an ionization energy for an atom, we are typically talking about the ionization energy when the atom is in the ground state. This is the ionization energy we're talking about.

But we also said that the binding energy of the electron can be this much, meaning it's in the n = 2 state. That can be possible also. Not at the same time as it's in the n = 1 state, but you can have a hydrogen atom in a state which is the n = 2 state. What that means is that the electron is bound by less energy. When that is the case, we talk about the hydrogen atom being in the first excited state. This is the ground state, this is the first excited state, where n is equal to 2. In that case, the electron is not as strongly bound, because it is going to require less energy to rip that electron off. The binding energy at n = 2 is minus the ionization energy if you have a hydrogen atom in the first excited state.

Does that make sense to you? Yeah. OK, so we can have atoms in this state, too. Then the ionization energy is less. It takes less energy to pull the electron off. Yes? In everything that we are going to deal with, we are going to have binding energies that are negative. Let's do that.
You can, of course, have a binding energy that is positive, but the problem is that that isn't a stable situation. OK. Good. Other questions? Yes. When we're dealing with a solid, we talk about a work function as opposed to an ionization energy. When we're dealing with an atom or a molecule, we talk about an ionization energy as opposed to the work function. It's really the same thing. Historically there is a reason for calling the ionization energy off of a solid the work function.

Oh, one other thing I just wanted to point out again right here is that when n is equal to infinity, the binding energy is zero. That is the ionization limit. That is when the electron is no longer bound to the nucleus. Now, one other point here is that this solution to the Schrödinger equation for the hydrogen atom works: it predicts the allowed energy levels for any one-electron atom. What do I mean by a one-electron atom? Well, helium plus is a one-electron atom. Because helium usually has two electrons, but if you take one away you have only one electron left. And so this helium plus ion, that's a one-electron atom, or, if you want to say it more precisely, a one-electron ion. Or lithium double plus, that's a one-electron atom or a one-electron ion. Because lithium usually has three electrons, but if you take two away and you only have one left, that's a one-electron atom. Uranium plus 91 is a one-electron atom. Because you took 91 of them away and one is left, that's a one-electron atom or an ion. And the bottom line is that this expression for the energy levels predicts all of the binding energies for one-electron atoms, as long as you remember to put in the Z² up here. For a hydrogen atom that is, of course, Z = 1, so we just have −R_H / n². But for these other one-electron atoms you have to have the Z in there, the charge on the nucleus. Why? Well, because that Z comes from the potential energy of interaction. The Coulomb potential energy of interaction is the charge on the electron times Ze, the charge on the nucleus. That is where the Z comes from. That is important.

How do we know that the Schrödinger equation is making predictions that agree with our observations? Well, we've got to do an experiment. And the experiment we're going to do is we're going to take a glass tube like this. We're going to pump it out and we're going to fill it with hydrogen, H₂. And then in this glass tube there are two electrodes, a positive electrode and a negative electrode. And what I am going to do here is crank up the potential difference between these positive and negative electrodes, higher, higher, higher, until at a point we're going to have the gas break down; a discharge is going to be ignited, just like I am going to do over here. Did I ignite a discharge? Yes. There it is. And the gas is going to glow. We are going to have a plasma formed here. Oh, and what happens in this plasma is that the H₂ is broken down into hydrogen atoms. And these hydrogen atoms are going to emit radiation. That is some of the radiation that you're seeing here in this particular discharge lamp. We are going to take that radiation and we're going to disperse it. That is, we're going to send the light to a diffraction grating. This is kind of like the two-slit experiment.
And when you look at it you're going to see constructive and destructive interference. But when you look at the bright spots of constructive interference, you're going to find that those bright spots now are broken down into different colors: purple, blue, green, etc. And that is because the different colors of light have different wavelengths. And if they have different wavelengths, well, then the points in space of maximum constructive interference are going to be a little different. And so we're going to literally separate the light out in space depending on the colors. And we're going to see what colors come out of this.

And so now, in order to help you do that, we've got some diffraction grating glasses for you. You should put them on and look into this light. And you will be able to see, off to your left and to your right, some very distinct lines. And if you look into the lights above you can see all different colors from the white light. All right. Do you see the hydrogen lamp? I know that the white lights above the room are more interesting because there is a whole rainbow there. I am going to turn the lamp a little bit, since not all of you, if you are way on the side, can see it. I am going to start over here and I am going to turn the lamp a bit. Can you see that now? You should see a bunch of lines here to your left and some to your right. And then, of course, you will see some up here. But they will probably be dispersed best to your left and to your right. Pardon? Can we dim the bay lights? Can we dim those big lights over there? Probably not.

I am going to turn it over here. Can you see it? The spectrum that you should see is what I am showing on the center board there. You see it? Pardon? You have to look at the light. Oh, thank you. Thank you very much. Can you see that better? I will turn it back there. Do you see the emission spectrum now? It's a little better. Let's see if we can try to understand this emission spectrum that you're seeing. What you should see the brightest is a purple line. No? Well, let's see. The purple line is actually rather weak, I have to say. If you come really close you can see it. And you're invited to come up a little bit closer. The purple line is kind of weak. What did I do? [LAUGHTER] Oh, I see. Yes. Interference phenomena work. Hey, look at that. [LAUGHTER] Fantastic. All right. The purple line is kind of weak, but the blue line is really strong. And then there is a green line, which is also a little bit weak. And I can see because I'm really close, well, I'm not going to tell you that. There is a green line there. And then there is this red line.

Let's see if we can understand where these lines are coming from. What is happening is that this discharge not only pulls the H₂ apart, breaks bonds, makes hydrogen atoms, but it puts some of those hydrogen atoms into these excited states. And so a hydrogen atom might be in this excited state, this initial excited state, a high energy state. And, of course, that's a high energy state. It is unstable. The system wants to relax. It wants to relax to a lower energy state. And when it does so, because it's going to a lower energy state, it has to emit radiation. And that radiation is going to come out as a photon whose energy is exactly the energy difference between these two states.

That's the quantum nature here of the hydrogen atom. The photon that comes out has to have an energy ΔE which is exactly the energy of the initial state minus that of the final state.
And, therefore, the frequency of that radiation is going to have one value, given by this energy difference divided by Planck's constant h.

That is what's happening in the discharge. What we've got is some hydrogen atoms excited to, say, for example, this state, which is a lower energy state, and so when it relaxes there is a small energy difference between here and this bottom state.

Therefore, you are going to have a low frequency of radiation. If you have some other hydrogen atoms in the discharge that are excited to this state up here, well, this is a big energy difference. And so ΔE is going to be large.

And, therefore, you're going to have some radiation emitted that's at a high frequency, because ΔE is large. If it's at a high frequency, it's going to have a short wavelength. The first hydrogen atoms are going to have a low frequency emission.

It's going to be a long wavelength. So we've got a mixture of atoms in this state or in this state or in any other state in this discharge. Now, let's try to understand this spectrum. And to do that I have drawn an energy level diagram for the hydrogen atom.

Here is the n = 1 state, n = 2, n = 3, n = 4, all the way up to n = ∞ here on the top. They get closer and closer together as we go up. This purple line, it turns out, or the purple color, comes from a transition made from a hydrogen atom in the n = 6 state to the n = 2 state.

The final n here is 2. The blue line comes from a hydrogen atom that has made a transition from n = 5, also to n = 2. The green line is from a hydrogen atom that makes a transition from n = 4 to n = 2, and then the red line from n = 3 to n = 2.

Of course, the transition from n = 3 to n = 2 is the smallest energy. Therefore, it is going to be the longest wavelength. n = 6 to n = 2 is the largest energy. Therefore, it is going to have the smallest wavelength.

Now, how do we know that these frequencies agree with what Schrödinger predicted they should be? Well, to know that, what we're going to do is we're going to write down this equation here, which is just telling us what the frequency of the radiation should be, ΔE over h.

But we're going to use the predictions from the Schrödinger equation and plug them into here to calculate what the frequency should be. We were told here that the energy, say, of the initial state given by the Schrödinger equation is −R_H divided by the initial quantum number squared.

We're going to plug that into there. The final state, well, that's also the expression for the energy; we're going to plug that into there. We're then going to rearrange that equation so we get that the frequency is the Rydberg constant over h times the quantity 1/nf² − 1/ni².

And since I told you here that for all of these lines the final quantum number is 2, we can plug that in. And then we can just go in and put in 3, 4, 5, 6 and get predictions for what the frequency ν should be.

And what you would find is that the predictions that this makes, that the Schrödinger equation makes, agree with the observations of the frequencies of these lines to one part in 10⁸. There is really just remarkable agreement between the energies, or the frequencies, predicted by the Schrödinger equation and what we actually observe for the hydrogen atom.

Here is another diagram of the energies of the hydrogen atom: n = 1, n = 2, n = 3. And the four lines that we were looking at were shown right here. These are the four lines. Here is n = 6 to n = 2, n = 5 to n = 2.

These lines are actually called the Balmer series.
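To make the agreement concrete, here is a minimal numerical sketch (not part of the lecture; Python is assumed, and the constants are standard reference values) that evaluates the emission frequency ν = (R_H/h)(1/nf² − 1/ni²) for the four visible Balmer lines and converts each to a wavelength:

# Balmer-series lines of hydrogen: nf = 2, ni = 3..6.
R_H = 2.179872e-18   # Rydberg energy for hydrogen, in joules
h   = 6.62607e-34    # Planck's constant, in joule-seconds
c   = 2.99792458e8   # speed of light, in meters per second

for ni in (3, 4, 5, 6):
    nu = (R_H / h) * (1.0 / 2**2 - 1.0 / ni**2)   # emission frequency, Hz
    print(f"n = {ni} -> 2 : {c / nu * 1e9:6.1f} nm")

This prints roughly 656 nm, 486 nm, 434 nm and 410 nm, matching the red, green, blue and purple lines described above.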
I want you to know that there is also a transition from n = 6 to n = 1. It is over here. But you can see that that transition is a very high energy transition.

That transition occurs in the ultraviolet range of the electromagnetic spectrum. And, therefore, you cannot see it, but it is there. Actually, what you can see is that there are transitions from these higher energy states to the ground state, transitions from all of them to the ground state, but they're all in the ultraviolet range of the electromagnetic spectrum.

That is why you cannot see that right now. But those lines are called the Lyman series. And then there are transitions here to the n = 3 state. These transitions from the larger quantum numbers to n = 3 are called the Paschen series.

They occur in the near infrared. Brackett series in the infrared. Pfund series in the far infrared. I got that backwards. And these different series are all labeled by the final state. And they're labeled by the names of the discoverers.

And the reason there are so many different discoverers is because in order to see the different kinds of radiation, you have to have a different kind of detector. And, depending on what kind of a detector an experimentalist had, well, that will dictate what he actually can see, what kind of radiation, which one of these transitions he can view.

Now, we looked at emission. But it is also possible for there to be absorption between these allowed states of a hydrogen atom. That is, we can have a hydrogen atom here in a low energy state, the initial state Ei.

And if there is a photon around whose energy matches the energy difference between these two states, well, then a photon can be absorbed by the hydrogen atom. Again, the energy of that photon has to be exactly the difference in energy between those two states.

It cannot be a little larger. If it is a little larger, that photon is not going to be absorbed. That's important. That's the quantum nature again of the hydrogen atom. There are specific energies that are allowed and nothing in between.

And then from knowing the energy of the photon you can get the frequency. And then in the case of absorption, the frequencies of the radiation that can be absorbed by a hydrogen atom are given by this expression.

This expression differs from the frequencies for emission only in that I've reversed these two terms. This is 1/ni². This is 1/nf². I have reversed them so that you come out with a frequency that is a positive number.

Frequencies do have to be positive. So we've got two different expressions here for the frequency, depending on whether we're absorbing a photon or we're emitting a photon. Questions? I cannot see anybody.

There. Epsilon naught is a conversion factor for electrostatic units. That is all you need at the moment. In 8.02 maybe you will go through the unit conversion there to get you to SI units.
Creation and annihilation operators

Creation and annihilation operators are mathematical operators that have widespread applications in quantum mechanics, notably in the study of quantum harmonic oscillators and many-particle systems.[1] An annihilation operator (usually denoted a) lowers the number of particles in a given state by one. A creation operator (usually denoted a^\dagger) increases the number of particles in a given state by one, and it is the adjoint of the annihilation operator. In many subfields of physics and chemistry, the use of these operators instead of wavefunctions is known as second quantization.

Creation and annihilation operators can act on states of various types of particles. For example, in quantum chemistry and many-body theory the creation and annihilation operators often act on electron states. They can also refer specifically to the ladder operators for the quantum harmonic oscillator. In the latter case, the raising operator is interpreted as a creation operator, adding a quantum of energy to the oscillator system (similarly for the lowering operator). They can be used to represent phonons.

The mathematics for the creation and annihilation operators for bosons is the same as for the ladder operators of the quantum harmonic oscillator.[2] For example, the commutator of the creation and annihilation operators that are associated with the same boson state equals one, while all other commutators vanish. However, for fermions the mathematics is different, involving anticommutators instead of commutators.[3]

Ladder operators for the quantum harmonic oscillator

In the context of the quantum harmonic oscillator, one reinterprets the ladder operators as creation and annihilation operators, adding or subtracting fixed quanta of energy to the oscillator system.

Creation/annihilation operators are different for bosons (integer spin) and fermions (half-integer spin). This is because their wavefunctions have different symmetry properties.

First consider the simpler bosonic case of the phonons of the quantum harmonic oscillator. Start with the Schrödinger equation for the one-dimensional time independent quantum harmonic oscillator,

\left(-\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \frac{1}{2}m\omega^2 x^2\right)\psi(x) = E\,\psi(x)

Make a coordinate substitution to nondimensionalize the differential equation

x = \sqrt{\frac{\hbar}{m\omega}}\,q.

The Schrödinger equation for the oscillator becomes

\frac{\hbar\omega}{2}\left(-\frac{d^2}{dq^2} + q^2\right)\psi(q) = E\,\psi(q).

Note that the quantity \hbar\omega = h\nu is the same energy as that found for light quanta and that the parenthesis in the Hamiltonian can be written as

-\frac{d^2}{dq^2} + q^2 = \left(-\frac{d}{dq}+q\right)\left(\frac{d}{dq}+q\right) + \frac{d}{dq}q - q\frac{d}{dq}.

The last two terms can be simplified by considering their effect on an arbitrary differentiable function f(q),

\left(\frac{d}{dq}q - q\frac{d}{dq}\right)f(q) = \frac{d}{dq}\big(q f(q)\big) - q\frac{df(q)}{dq} = f(q),

which implies \frac{d}{dq}q - q\frac{d}{dq} = 1, coinciding with the usual canonical commutation relation [q,p] = i\hbar, in position space representation: p := -i\hbar\frac{d}{dx}. Therefore

-\frac{d^2}{dq^2} + q^2 = \left(-\frac{d}{dq}+q\right)\left(\frac{d}{dq}+q\right) + 1

and the Schrödinger equation for the oscillator becomes, with substitution of the above and rearrangement of the factor of 1/2,

\hbar\omega\left[\frac{1}{\sqrt{2}}\left(-\frac{d}{dq}+q\right)\frac{1}{\sqrt{2}}\left(\frac{d}{dq}+q\right) + \frac{1}{2}\right]\psi(q) = E\,\psi(q).

If one defines

a^\dagger = \frac{1}{\sqrt{2}}\left(-\frac{d}{dq}+q\right)

as the "creation operator" or the "raising operator" and

a = \frac{1}{\sqrt{2}}\left(\frac{d}{dq}+q\right)

as the "annihilation operator" or the "lowering operator", the Schrödinger equation for the oscillator reduces to

\hbar\omega\left(a^\dagger a + \frac{1}{2}\right)\psi(q) = E\,\psi(q).

This is significantly simpler than the original form. Further simplifications of this equation enable one to derive all the properties listed above thus far.
Letting p = -i\frac{d}{dq}, where p is the nondimensionalized momentum operator, one has

[q, p] = i

and

a = \frac{1}{\sqrt{2}}(q + ip), \qquad a^\dagger = \frac{1}{\sqrt{2}}(q - ip).

Note that these imply

[a, a^\dagger] = \frac{1}{2}[q + ip,\, q - ip] = \frac{1}{2}\big(-i[q,p] + i[p,q]\big) = 1.

The operators a and a^\dagger may be contrasted to normal operators, which commute with their adjoints.[4]

Using the commutation relations given above, the Hamiltonian operator can be expressed as

\hat H = \hbar\omega\left(a\,a^\dagger - \frac{1}{2}\right) = \hbar\omega\left(a^\dagger a + \frac{1}{2}\right). \qquad (*)

One may compute the commutation relations between the a and a^\dagger operators and the Hamiltonian:[5]

[\hat H, a] = -\hbar\omega\,a, \qquad [\hat H, a^\dagger] = \hbar\omega\,a^\dagger.

These relations can be used to easily find all the energy eigenstates of the quantum harmonic oscillator as follows. Assume that \psi_n is an eigenstate of the Hamiltonian, \hat H\,\psi_n = E_n\,\psi_n. Using these commutation relations, it follows that[5]

\hat H\,(a\,\psi_n) = (E_n - \hbar\omega)\,(a\,\psi_n), \qquad \hat H\,(a^\dagger\psi_n) = (E_n + \hbar\omega)\,(a^\dagger\psi_n).

This shows that a\,\psi_n and a^\dagger\psi_n are also eigenstates of the Hamiltonian, with eigenvalues E_n - \hbar\omega and E_n + \hbar\omega respectively. This identifies the operators a and a^\dagger as "lowering" and "raising" operators between adjacent eigenstates. The energy difference between adjacent eigenstates is \hbar\omega.

The ground state can be found by assuming that the lowering operator possesses a nontrivial kernel: a\,\psi_0 = 0 with \psi_0 \neq 0. Application of the above formula for the Hamiltonian yields

\hat H\,\psi_0 = \hbar\omega\left(a^\dagger a + \frac{1}{2}\right)\psi_0 = \frac{\hbar\omega}{2}\,\psi_0.

So \psi_0 is an eigenfunction of the Hamiltonian. This gives the ground state energy E_0 = \hbar\omega/2, which allows one to identify the energy eigenvalue of any eigenstate \psi_n as[5]

E_n = \left(n + \frac{1}{2}\right)\hbar\omega.

Furthermore, it turns out that the first-mentioned operator in (*), the number operator N = a^\dagger a, plays the most important role in applications, while the second one, a\,a^\dagger, can simply be replaced by N + 1. The time-evolution operator is then

U(t) = \exp\!\left(-it\hat H/\hbar\right) = \exp\!\left(-it\omega\left(a^\dagger a + \tfrac{1}{2}\right)\right).

Explicit eigenfunctions

The ground state \psi_0(q) of the quantum harmonic oscillator can be found by imposing the condition that

a\,\psi_0(q) = 0.

Written out as a differential equation, the wavefunction satisfies

q\,\psi_0 + \frac{d\psi_0}{dq} = 0

with the solution

\psi_0(q) = C\exp\!\left(-\frac{q^2}{2}\right).

The normalization constant C is found to be 1/\sqrt[4]{\pi} from \int_{-\infty}^{\infty}\psi_0^{*}\,\psi_0\,dq = 1, using the Gaussian integral. Explicit formulas for all the eigenfunctions can now be found by repeated application of a^\dagger to \psi_0.[6]

Matrix representation

The matrix expression of the creation and annihilation operators of the quantum harmonic oscillator with respect to the above orthonormal basis is

a^\dagger = \begin{pmatrix} 0 & 0 & 0 & \cdots \\ \sqrt{1} & 0 & 0 & \cdots \\ 0 & \sqrt{2} & 0 & \cdots \\ \vdots & \vdots & \ddots & \ddots \end{pmatrix}, \qquad a = \begin{pmatrix} 0 & \sqrt{1} & 0 & \cdots \\ 0 & 0 & \sqrt{2} & \cdots \\ 0 & 0 & 0 & \ddots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}.

These can be obtained via the relationships a^\dagger|n\rangle = \sqrt{n+1}\,|n+1\rangle and a\,|n\rangle = \sqrt{n}\,|n-1\rangle. The eigenvectors |n\rangle are those of the quantum harmonic oscillator, and are sometimes called the "number basis".
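The ladder action and the commutator can be checked directly on a truncated number basis. The following is a minimal sketch (assuming Python with NumPy; it is not part of the article). Because the basis is cut off at N states, [a, a†] = 1 holds in every diagonal entry except the last, which is an artifact of the truncation.

import numpy as np

N = 6                                          # keep the states |0> through |5>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # a|n> = sqrt(n) |n-1>
adag = a.T.conj()                              # the creation operator is the adjoint

comm = a @ adag - adag @ a                     # should be the identity matrix
print(np.diag(comm))                           # -> [ 1.  1.  1.  1.  1. -5.], truncation artifact at the end

H = adag @ a + 0.5 * np.eye(N)                 # Hamiltonian in units of hbar*omega
print(np.diag(H))                              # -> [0.5 1.5 2.5 3.5 4.5 5.5]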
Generalized creation and annihilation operators

The operators derived above are actually a specific instance of a more generalized notion of creation and annihilation operators. The more abstract form of the operators is constructed as follows. Let H be a one-particle Hilbert space (that is, any Hilbert space, viewed as representing the state of a single particle).

The (bosonic) CCR algebra over H is the algebra-with-conjugation-operator (called *) abstractly generated by elements a(f), where f ranges freely over H, subject to the relations

[a(f), a(g)] = [a^\dagger(f), a^\dagger(g)] = 0, \qquad [a(f), a^\dagger(g)] = \langle f \mid g \rangle,

in bra–ket notation. The map f \mapsto a(f) from H to the bosonic CCR algebra is required to be complex antilinear (this adds more relations). Its adjoint is a^\dagger(f), and the map f \mapsto a^\dagger(f) is complex linear in H. Thus H embeds as a complex vector subspace of its own CCR algebra. In a representation of this algebra, the element a(f) will be realized as an annihilation operator, and a^\dagger(f) as a creation operator.

In general, the CCR algebra is infinite dimensional. If we take a Banach space completion, it becomes a C* algebra. The CCR algebra over H is closely related to, but not identical to, a Weyl algebra.

For fermions, the (fermionic) CAR algebra over H is constructed similarly, but using anticommutator relations instead, namely

\{a(f), a(g)\} = \{a^\dagger(f), a^\dagger(g)\} = 0, \qquad \{a(f), a^\dagger(g)\} = \langle f \mid g \rangle.

The CAR algebra is finite dimensional only if H is finite dimensional. If we take a Banach space completion (only necessary in the infinite dimensional case), it becomes a C* algebra. The CAR algebra is closely related to, but not identical to, a Clifford algebra.

Physically speaking, a(f) removes (i.e. annihilates) a particle in the state |f\rangle whereas a^\dagger(f) creates a particle in the state |f\rangle. The free field vacuum state is the state |0\rangle with no particles, characterized by

a(f)\,|0\rangle = 0 \quad \text{for all } f.

If |f\rangle is normalized so that \langle f \mid f \rangle = 1, then a^\dagger(f)\,a(f) gives the number of particles in the state |f\rangle.

Creation and annihilation operators for reaction-diffusion equations

The annihilation and creation operator description has also been useful to analyze classical reaction-diffusion equations, such as the situation when a gas of molecules A diffuse and interact on contact, forming an inert product: A + A \to \emptyset. To see how this kind of reaction can be described by the annihilation and creation operator formalism, consider n_i particles at a site i on a one-dimensional lattice. Each particle moves to the right or left with a certain probability, and each pair of particles at the same site annihilates each other with a certain other probability.

The probability that one particle leaves the site during the short time period dt is proportional to n_i, let us say a probability \alpha n_i\,dt to hop left and \alpha n_i\,dt to hop right. All n_i particles will stay put with a probability 1 - 2\alpha n_i\,dt. (Since dt is so short, the probability that two or more will leave during dt is very small and will be ignored.)

We can now describe the occupation of particles on the lattice as a "ket" of the form

|\dots, n_{-1}, n_0, n_1, \dots\rangle.

It represents the juxtaposition (or conjunction, or tensor product) of the number states |n_{-1}\rangle, |n_0\rangle, |n_1\rangle, \dots located at the individual sites of the lattice. Recall that

a\,|n\rangle = \sqrt{n}\,|n-1\rangle \quad \text{and} \quad a^\dagger|n\rangle = \sqrt{n+1}\,|n+1\rangle

for all n \geq 0, while

[a, a^\dagger] = 1.

This definition of the operators will now be changed to accommodate the "non-quantum" nature of this problem, and we shall use the following definition:

a\,|n\rangle = n\,|n-1\rangle, \qquad a^\dagger|n\rangle = |n+1\rangle;

note that even though the behavior of the operators on the kets has been modified, these operators still obey the commutation relation

[a, a^\dagger] = 1.

Now define a_i so that it applies a to |n_i\rangle. Correspondingly, define a_i^\dagger as applying a^\dagger to |n_i\rangle. Thus, for example, the net effect of a_{i-1}^\dagger a_i is to move a particle from the i-th to the (i-1)-th site while multiplying with the appropriate factor. This allows writing the pure diffusive behavior of the particles as

\partial_t |\psi\rangle = -\alpha \sum_i \left(a_i^\dagger - a_{i+1}^\dagger\right)\left(a_i - a_{i+1}\right)|\psi\rangle,

where the sum is over i.

The reaction term can be deduced by noting that n particles can interact in n(n-1) different ways, so that the probability that a pair annihilates is \lambda n(n-1)\,dt, yielding a term

\lambda \sum_i \left(a_i a_i - a_i^\dagger a_i^\dagger a_i a_i\right),

where number state n is replaced by number state n-2 at site i at a certain rate. Thus the state evolves by

\partial_t |\psi\rangle = -\alpha \sum_i \left(a_i^\dagger - a_{i+1}^\dagger\right)\left(a_i - a_{i+1}\right)|\psi\rangle + \lambda \sum_i \left(a_i^2 - a_i^{\dagger 2} a_i^2\right)|\psi\rangle.

Other kinds of interactions can be included in a similar manner. This kind of notation allows the use of quantum field theoretic techniques in the analysis of reaction-diffusion systems.

Creation and annihilation operators in quantum field theories

In quantum field theories and many-body problems one works with creation and annihilation operators of quantum states, a_i^\dagger and a_i. These operators change the eigenvalues of the number operator,

N = \sum_i n_i = \sum_i a_i^\dagger a_i,

by one, in analogy to the harmonic oscillator. The indices (such as i) represent quantum numbers that label the single-particle states of the system; hence, they are not necessarily single numbers. For example, a tuple of quantum numbers (n, l, m, s) is used to label states in the hydrogen atom.

The commutation relations of creation and annihilation operators in a multiple-boson system are

[a_i, a_j^\dagger] = \delta_{ij}, \qquad [a_i^\dagger, a_j^\dagger] = [a_i, a_j] = 0,

where [\ ,\ ] is the commutator and \delta_{ij} is the Kronecker delta.
For fermions, the commutator is replaced by the anticommutator \{\ ,\ \}:

\{a_i, a_j^\dagger\} = \delta_{ij}, \qquad \{a_i^\dagger, a_j^\dagger\} = \{a_i, a_j\} = 0.

Therefore, exchanging disjoint (i.e. i \neq j) operators in a product of creation or annihilation operators will reverse the sign in fermion systems, but not in boson systems.

If the states labelled by i are an orthonormal basis of a Hilbert space H, then the result of this construction coincides with the CCR algebra and CAR algebra construction in the previous section but one. If they represent "eigenvectors" corresponding to the continuous spectrum of some operator, as for unbound particles in QFT, then the interpretation is more subtle.

References

• Feynman, Richard P. (1998) [1972]. Statistical Mechanics: A Set of Lectures (2nd ed.). Reading, Massachusetts: Addison-Wesley. ISBN 978-0-201-36076-9.
• Albert Messiah, 1966. Quantum Mechanics (Vol. I), English translation from French by G. M. Temmer. North Holland, John Wiley & Sons. Ch. XII.

1. ^ (Feynman 1998, p. 151)
2. ^ (Feynman 1998, p. 167)
3. ^ (Feynman 1998, pp. 174–5)
4. ^ A normal operator has a representation A = B + iC, where B, C are self-adjoint and commute, i.e. BC = CB. By contrast, a has the representation a = (q + ip)/\sqrt{2} where q, p are self-adjoint but [q, p] = i. Then B and C have a common set of eigenfunctions (and are simultaneously diagonalizable), whereas p and q famously don't and aren't.
5. ^ a b c Branson, Jim. "Quantum Physics at UCSD". Retrieved 16 May 2012.
6. ^ This, and further operator formalism, can be found in Glimm and Jaffe, Quantum Physics, pp. 12–20.
As we can see in the picture on this website, it is striking that the bound state wavefunction always reaches its largest peak near the boundary of its classically forbidden region (not inside the region). Is it true that this phenomenon holds for all bound state wavefunctions? I think that the reflected wave may interfere with the original one, thus creating the peak near the forbidden region, but I can't explain why it is the largest peak, or why there is no peak inside the classically forbidden region. Thanks for your attention.

Answer:

Yes, the wavefunction will peak near the boundary of the forbidden region, and this effect will increase at higher energy levels. In the limit of very high energy levels the quantum harmonic oscillator must reproduce the classical result, and a classical harmonic oscillator is more likely to be found near the endpoints of its motion, since that is where it is moving much more slowly than in the center, where it has maximum velocity and maximum kinetic energy.

A quote from Wikipedia:

"Note that the ground state probability density is concentrated at the origin. This means the particle spends most of its time at the bottom of the potential well, as we would expect for a state with little energy. As the energy increases, the probability density becomes concentrated at the classical "turning points", where the state's energy coincides with the potential energy. This is consistent with the classical harmonic oscillator, in which the particle spends most of its time (and is therefore most likely to be found) at the turning points, where it is the slowest. The correspondence principle is thus satisfied."

The Wikipedia article also has animated images showing the wavefunction for eigenstates, as well as animations for wavefunctions of states that are not eigenstates, which begin to approximate the classical behavior of moving back and forth from one limit state to the other.

[Figure: Some trajectories of a harmonic oscillator according to Newton's laws of classical mechanics (A-B), and according to the Schrödinger equation of quantum mechanics (C-H). In (A-B), the particle (represented as a ball attached to a spring) oscillates back and forth. In (C-H), some solutions to the Schrödinger equation are shown, where the horizontal axis is position, and the vertical axis is the real part (blue) or imaginary part (red) of the wavefunction. (C,D,E,F), but not (G,H), are energy eigenstates. (H) is a coherent state, a quantum state which approximates the classical trajectory.]
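This behavior is easy to verify numerically. Below is a small sketch (assuming Python with NumPy; it is not part of the original answer) that evaluates the unnormalized n = 10 oscillator eigenfunction and locates its largest probability peak:

import numpy as np
from numpy.polynomial.hermite import hermval

n = 10
q = np.linspace(0, 6, 2001)
coeffs = np.zeros(n + 1)
coeffs[n] = 1.0                               # select the physicists' Hermite polynomial H_n
psi = hermval(q, coeffs) * np.exp(-q**2 / 2)  # unnormalized eigenfunction
prob = psi**2

print("classical turning point:", np.sqrt(2 * n + 1))  # about 4.58
print("largest peak at q =", q[np.argmax(prob)])       # slightly inside the turning point

Beyond the turning point the density decays monotonically, so no peak appears inside the classically forbidden region.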
Finite potential well

The finite potential well (also known as the finite square well) is a concept from quantum mechanics. It is an extension of the infinite potential well, in which a particle is confined to a box, but one which has finite potential walls. Unlike the infinite potential well, there is a probability associated with the particle being found outside the box. The quantum mechanical interpretation is unlike the classical interpretation, where if the total energy of the particle is less than the potential energy barrier of the walls it cannot be found outside the box. In the quantum interpretation, there is a non-zero probability of the particle being outside the box even when the energy of the particle is less than the potential energy barrier of the walls (cf. quantum tunnelling).

Particle in a 1-dimensional box

For the 1-dimensional case on the x-axis, the time-independent Schrödinger equation can be written as:

-\frac{\hbar^2}{2 m} \frac{d^2 \psi}{d x^2} + V(x) \psi = E \psi \quad (1)

where

\hbar = \frac{h}{2 \pi},

h is Planck's constant, m is the mass of the particle, \psi is the (complex valued) wavefunction that we want to find, V(x) is a function describing the potential energy at each point x, and E is the energy, a real number, sometimes called the eigenenergy.

For the case of the particle in a 1-dimensional box of length L, the potential is zero inside the box, but rises abruptly to a value V_0 at x = -L/2 and x = L/2. The wavefunction is considered to be made up of different wavefunctions at different ranges of x, depending on whether x is inside or outside of the box. Therefore the wavefunction is defined such that:

\psi = \begin{cases} \psi_1, & \mbox{if } x < -L/2 \mbox{ (the region outside the box)} \\ \psi_2, & \mbox{if } -L/2 < x < L/2 \mbox{ (the region inside the box)} \\ \psi_3, & \mbox{if } x > L/2 \mbox{ (the region outside the box)} \end{cases}

Inside the box

For the region inside the box, V(x) = 0 and Equation 1 reduces to

-\frac{\hbar^2}{2 m} \frac{d^2 \psi_2}{d x^2} = E \psi_2 .

Letting

k = \frac{\sqrt{2mE}}{\hbar},

the equation becomes

\frac{d^2 \psi_2}{d x^2} = -k^2 \psi_2 .

This is a well-studied differential equation and eigenvalue problem with a general solution of

\psi_2 = A \sin(kx) + B \cos(kx)

with

E = \frac{k^2 \hbar^2}{2m} .

Here, A and B can be any complex numbers, and k can be any real number.

Outside the box

For the region outside of the box, since the potential is constant, V(x) = V_0, and Equation 1 becomes:

-\frac{\hbar^2}{2 m} \frac{d^2 \psi_1}{d x^2} = (E - V_0) \psi_1

There are two possible families of solutions, depending on whether E is less than V_0 (the particle is bound in the potential) or E is greater than V_0 (the particle is free).

For a free particle, E > V_0, and letting

k' = \frac{\sqrt{2m(E - V_0)}}{\hbar}

produces

\frac{d^2 \psi_1}{d x^2} = -k'^2 \psi_1

with the same solution form as the inside-well case:

\psi_1 = C \sin(k' x) + D \cos(k' x)

This analysis will focus on the bound state, where V_0 > E. Letting

\alpha = \frac{\sqrt{2m(V_0 - E)}}{\hbar}

produces

\frac{d^2 \psi_1}{d x^2} = \alpha^2 \psi_1

where the general solution is exponential:

\psi_1 = F e^{-\alpha x} + G e^{\alpha x}

Similarly, for the other region outside the box:

\psi_3 = H e^{-\alpha x} + I e^{\alpha x}
Now in order to find the specific solution for the problem at hand, we must specify the appropriate boundary conditions and find the values for A, B, F, G, H and I that satisfy those conditions.

Finding wavefunctions for the bound state

Solutions to the Schrödinger equation must be continuous, and continuously differentiable. These requirements are boundary conditions on the differential equations previously derived. In this case, the finite potential well is symmetrical, so symmetry can be exploited to reduce the necessary calculations.

Summarizing the previous section, we found \psi_1, \psi_2 and \psi_3 to be:

\psi_1 = F e^{-\alpha x} + G e^{\alpha x}

\psi_2 = A \sin(k x) + B \cos(k x)

\psi_3 = H e^{-\alpha x} + I e^{\alpha x}

We see that as x goes to -\infty, the F term goes to infinity. Likewise, as x goes to +\infty, the I term goes to infinity. As the wave function must have finite total integral, this means we must set F = I = 0, and we have:

\psi_1 = G e^{\alpha x} \quad \text{and} \quad \psi_3 = H e^{-\alpha x}

Next, we know that the overall \psi function must be continuous and differentiable. In other words, the values of the functions and their derivatives must match up at the dividing points:

\psi_1(-L/2) = \psi_2(-L/2)

\psi_2(L/2) = \psi_3(L/2)

\frac{d\psi_1}{dx}(-L/2) = \frac{d\psi_2}{dx}(-L/2)

\frac{d\psi_2}{dx}(L/2) = \frac{d\psi_3}{dx}(L/2)

These equations have two sorts of solutions, symmetric, for which A = 0 and G = H, and antisymmetric, for which B = 0 and G = -H. For the symmetric case we get

H e^{-\alpha L/2} = B \cos(k L/2)

-\alpha H e^{-\alpha L/2} = -k B \sin(k L/2)

so taking the ratio gives the equation for the quantized energy levels,

\alpha = k \tan(k L/2).

Similarly for the antisymmetric case we get

\alpha = -k \cot(k L/2).

Recall that both \alpha and k depend on the energy. What we have found is that the continuity conditions cannot be satisfied for an arbitrary value of the energy. Only certain energy values, which are solutions to one or other of these two equations, are allowed. Hence we find, as always, that the bound-state energies are quantized.

The energy equations cannot be solved analytically. Graphical or numerical solutions are aided by rewriting them a little. If we introduce the dimensionless variables u = \alpha L/2 and v = k L/2, and note from the definitions of \alpha and k that u^2 = u_0^2 - v^2, where u_0^2 = m L^2 V_0/2 \hbar^2, the master equations read

\sqrt{u_0^2 - v^2} = \begin{cases} v \tan v, & \mbox{(symmetric case)} \\ -v \cot v, & \mbox{(antisymmetric case)} \end{cases}

For u_0^2 = 20, solutions exist where the semicircle u = \sqrt{u_0^2 - v^2} intersects the curves v \tan v and -v \cot v. Each branch of those curves represents a possible solution v_i within the range \frac{\pi}{2}(i-1) \leq v_i < \frac{\pi}{2} i. The total number of solutions, N (i.e., the number of branches that are intersected by the semicircle), is therefore determined by dividing the radius of the semicircle, u_0, by the range of each solution \pi/2 and then rounding the result up to the nearest integer:

N = \left\lceil \frac{2 u_0}{\pi} \right\rceil

In this case there are exactly three solutions, since N = \lceil 2\sqrt{20}/\pi \rceil = \lceil 2.85 \rceil = 3. They are

v_1 = 1.28, \quad v_2 = 2.54, \quad v_3 = 3.73,

with the corresponding energies

E_n = \frac{2\hbar^2 v_n^2}{m L^2}.

If we want, we can go back and find the values of the constants A, B, G, H in the equations now (we also need to impose the normalisation condition).
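The graphical construction above can be reproduced numerically. Here is a small sketch (assuming Python with NumPy and SciPy; not part of the article) that finds the three roots for u_0^2 = 20 by bracketing each branch between its singularities:

import numpy as np
from scipy.optimize import brentq

u0 = np.sqrt(20.0)

def symmetric(v):       # zero where sqrt(u0^2 - v^2) = v tan v
    return np.sqrt(u0**2 - v**2) - v * np.tan(v)

def antisymmetric(v):   # zero where sqrt(u0^2 - v^2) = -v cot v
    return np.sqrt(u0**2 - v**2) + v / np.tan(v)

v1 = brentq(symmetric, 0.1, np.pi / 2 - 0.01)               # even state
v2 = brentq(antisymmetric, np.pi / 2 + 0.01, np.pi - 0.01)  # odd state
v3 = brentq(symmetric, np.pi + 0.01, u0)                    # even state; u0 caps the physical range
print(np.round([v1, v2, v3], 2))                            # -> [1.28 2.54 3.73]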
[Figure: the energy levels and wave functions in this case, where x_0 \equiv \hbar/\sqrt{2m V_0}.]

We note that however small u_0 is (however shallow or narrow the well), there is always at least one bound state.

Two special cases are worth noting. As the height of the potential becomes large, V_0 \to \infty, the radius of the semicircle gets larger and the roots get closer and closer to the values v_n = n\pi/2, and we recover the case of the infinite square well.

The other case is that of a very narrow, deep well - specifically the case V_0 \to \infty and L \to 0 with V_0 L fixed. As u_0^2 \propto V_0 L^2 = (V_0 L)\,L, it will tend to zero, and so there will only be one bound state. The approximate solution is then v^2 = u_0^2 - u_0^4, and the energy tends to E = -m L^2 V_0^2/2\hbar^2. But this is just the energy of the bound state of a delta function potential of strength V_0 L, as it should be.

Note: The above derivation does not consider the possibility that the effective mass of the particle could be different inside the potential well and in the region outside the well.

Spherical cavity

The results above can be used to show that, contrary to the one-dimensional case, there is not always a bound state in a spherical cavity. The ground state of a spherically symmetric potential will always have zero orbital angular momentum, and the reduced wave function U(r) \equiv r \psi(r) satisfies the equation

-\frac{\hbar^2}{2 m}\frac{d^2 U}{d r^2} + V(r) U(r) = E U(r)

This is identical to the one-dimensional equation, except for the boundary conditions. As before, U(r) and its first derivative must be continuous at the edge of the well r = R. However, there is another condition, that \psi(0) must be finite, and that requires U(0) = 0. By comparison with the solutions above, we can see that only the antisymmetric ones have nodes at the origin. Thus only the solutions to

\alpha = -k \cot(k R)

are allowed. These correspond to the intersection of the semicircle with the -v \cot v curves, and so if the cavity is too shallow or small, there will be no bound state.
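The two counting rules just described, N = ⌈2u_0/π⌉ bound states in one dimension versus the threshold u_0 > π/2 for a spherical cavity, can be packaged in a few lines. This is a small sketch with hypothetical helper names (assuming Python; not part of the article), where the spherical case is mapped onto the antisymmetric branch of a one-dimensional well of width L = 2R:

import math

def u0(m, L, V0, hbar=1.054571817e-34):
    # dimensionless well-strength parameter: u0^2 = m L^2 V0 / (2 hbar^2)
    return math.sqrt(m * L**2 * V0 / (2.0 * hbar**2))

def bound_states_1d(m, L, V0):
    return math.ceil(2.0 * u0(m, L, V0) / math.pi)

def has_spherical_bound_state(m, R, V0):
    # only solutions with U(0) = 0 survive, so the first -v cot v branch must be reached
    return u0(m, 2.0 * R, V0) > math.pi / 2.0

m_e, eV = 9.109e-31, 1.602e-19
print(bound_states_1d(m_e, 1e-9, 1.0 * eV))              # electron in a 1 nm, 1 eV well -> 2
print(has_spherical_bound_state(m_e, 0.5e-9, 1.0 * eV))  # -> True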
Essay: Rebuttal to Counterexamples to Relativity

The cited article says absolutely nothing about the number of black holes, modeled or observed. It is about a discrepancy between computer models predicting the masses of black holes at the centers of galaxies and the observed masses. The article suggests a heretofore-unmodeled mechanism whereby galaxy evolution would cause less mass to go into the black holes. The article in no way suggests that black holes don't exist.

The objection has been raised that this would still be a counterexample to Relativity. Yes, if it were actually true that the Moon's orbit is undergoing some kind of anomalous perturbation, it would indeed be a disproof of Relativity, Newtonian mechanics, and, in fact, all of physics since Galileo. Actually, the average radius of the Moon's orbit is in fact increasing, by 38 mm a year. This was first predicted in the late 19th century and has actually been measured since at least the early 1970s, and more accurately thereafter thanks to the mirrors left for that purpose by the Apollo astronauts. The reason for this is well-known and simple enough to be explained on science-oriented TV shows from time to time. To put it simply: the Moon pulls on the Earth, causing the tides and slowing down its rotation slightly, lengthening its day by 2 milliseconds every 100 years. Reciprocally, the Earth pulls on the Moon and accelerates it slightly, thus increasing the height of its orbit, and energy is conserved. For a more complete explanation see [1]. This behaviour, predicted over 100 years ago, observed and measured, is in no way "anomalous", and relativity, general, special or otherwise, doesn't really concern itself with tidal mechanics, so on that point at least physics, Galilean, Newtonian and Einsteinian, is quite safe for now.

3. The Pioneer anomaly.

4. The solar flattening is ... too small to agree with that predicted from its surface rotation.

This observation is interesting, but the predictions of oblateness come from analysis of fluid mechanics; relativity is not involved, and the cited paper makes no mention of relativity. We expect a very large rotating body to show oblateness according to well-known principles involving centrifugal force. This is particularly easy to observe with Jupiter, though the other planets, including Earth, show it too. Scientists do not know why the Sun exhibits nearly zero oblateness, but relativity is not believed to be the cause.

Now it happens that there is a connection between solar oblateness and the calculation of the precession of the perihelion of Mercury, and that involves General Relativity. When checking the observed precession against the effect predicted by Relativity, one needs to subtract the effect of solar oblateness, along with the other effects, such as equinoctial precession and the gravitational effects of other planets. The effect of solar oblateness is a mere 0.0254 arcseconds per century, insignificant in comparison with precession (about 5500) and other planets' gravity (about 550). The figure of 0.0254 was calculated based on what fluid dynamics predicted, and was subtracted in the figures quoted below in item #9. When that tiny effect is removed, the error bars still overlap those for the GR prediction.

5. Quantum entanglement near the event horizon of a black hole ....
It is well known that quantum gravity (that is, the "Theory of Everything") is an unsolved problem, and it is the subject of much discussion in the physics community. The whole topic of string theory, for example, revolves around this. Among the places where General Relativity and Quantum Mechanics collide most notoriously is within the Planck length of a black hole's event horizon. Among the issues involved is the notion of "quantum loss of information", which was the subject of a famous bet among Stephen Hawking, Kip Thorne, and John Preskill. (Hawking and Thorne lost.) Basically, physicists do not know just what happens within quantum-mechanical distances (the Planck length) of a black hole. But no one doubts the behavior of black holes at reasonable distances, or, for that matter, General Relativity at reasonable distances.

Update: The problem seems to have been caused by a faulty cable connection between a computer and a GPS unit. When the connection was repaired, the travel time increased by 60 nanoseconds, which had been the amount of the anomaly [3] [4]. The claims of faster-than-light neutrinos have now been refuted very thoroughly[5][6].

The objection has been raised that a recent news report from the BBC ("now we are 100% sure that the speed of light is the speed of neutrinos") is also contradictory, since neutrinos have mass and are therefore forbidden by relativity to travel at exactly the speed of light. Since the neutrino energies were 17 GeV, and the current estimate for the neutrino mass is about 0.25 eV, the deviation from the speed of light would be about 1 part in 10²². This means that the neutrinos would arrive at the detector about 0.26×10⁻²⁴ seconds (0.26 yoctoseconds) later than the speed of light itself. This is one quarter of a billionth of a femtosecond, or about 0.26×10⁻¹⁵ of a nanosecond. The accuracy of the GPS units and cesium clocks used in the measurement is greater than a nanosecond, so the discrepancy cannot possibly be detected. It is unfortunate that the claim "the speed of light is the speed of neutrinos" was taken so literally.

A "counter-rebuttal" has been made: The politicized rush to rehabilitate the Theory of Relativity was far from convincing, and the "resolution" was a clear statement that is flatly contrary to the Theory. Just what statement is being referred to as the "resolution" is not clear, but our best guess is that it is the statement by Sandro Centro, quoted by Jason Palmer in the cited BBC article. As noted above, that statement is in error by about 15 orders of magnitude less than the resolution of the measurement. None of the assertions of relativity denies that anyone could ever make a statement that is not precisely correct.

This may be another case of the Pioneer anomaly, or it may be something else. However, it is very unlikely that it shows that relativity is wrong and Newtonian mechanics is correct. Saying, every time someone finds some phenomenon that is puzzling, that it shows that relativity is wrong is not a convincing way to do science. The cited paper is about unexpected behavior of some spacecraft as they use "gravity assists" in near-Earth flybys. The hypothesized causes involve errors in the mathematical models used to calculate such effects as relativistic effects (the detailed calculation of them, not the question of whether they exist), tidal effects, Earth radiation pressure, or atmospheric drag.
The paper suggests that most of those can be ruled out, though there could be round-off and integration errors, or errors in the spherical harmonic representation of Earth's gravity field.

The measured values (in arcseconds per century):

43.11 ± 0.21 (Shapiro et al., 1976)
42.92 ± 0.20 (Anderson et al., 1987)
42.94 ± 0.20 (Anderson et al., 1991)
43.13 ± 0.14 (Anderson et al., 1992)

[Source: Pijpers 2008]

The above discussion notwithstanding, more sophisticated technology has measured a precession of 5599.7 arc-seconds per century, with a margin of error of only 0.01, which disproves the prediction of the Theory of Relativity. Notice how publication of data stopped two decades ago when the observations diverged from the theory. --Andy Schlafly 14:50, 14 April 2012 (EDT)

Measurements of planetary motion are now calculated relative to the "International Celestial Reference Frame" (ICRF), replacing the older, and much less accurate, "equinoctial" frame. The older measurements got a value of about 5600 arcseconds/century for the precession of Mercury, nearly all of which (5025 arcseconds) was because of the rotation of the "equinoctial" frame. The ICRF removes that source of uncertainty, and, with the very accurate radar measurements conducted by NASA between 1987 and 1997[7], gets a value of 574.10±0.65 arcseconds observed, in good agreement with the predicted 574.64±0.69 value. The ICRF is described in this document, dated 2003.

True, the direct searches for gravitational waves have not yet yielded any clear results, though indirect observations have been made (the Hulse/Taylor observations). Before people dismiss indirect observations, recall that no one has ever seen an electron.

The Earth-based LIGO detectors have, so far, not detected any unambiguous gravitational wave signatures from such events as a black-hole merger. LIGO is barely sensitive enough to find such things in the Milky Way or very nearby galaxies. It is being upgraded in a plan that should complete in 2014. It is hoped that, after the upgrade, it will be able to see these events clearly and unambiguously.

The space-based LISA detectors have not been built yet, and the original proposal has been scrapped because of budgetary problems. A new version, called "eLISA", has been proposed, and should be sensitive to events as far away as redshift 15. Whether these experiments are a good use of money is another question, one that has no bearing on whether relativity is correct.

The objection has been raised that the experiments should not have been funded if scientists were going to ignore negative results. The results are only partly negative. The scientists knew all along that a certain amount of luck would be involved in finding a sufficiently strong signal within the time frame of the experiment. The failure so far does not mean that black-hole mergers do not emit gravitational waves; just that they haven't been lucky enough to find them. They will continue to search.

Update: Another, much more commonplace observation of gravitational wave emission has been reported[8]. The article suggests that, since it shows detectable gravitational waves are more common than previously thought, there is optimism that the eLISA detector, when completed, might find one source per week.

This has nothing to do with global warming.

For a massless particle, the speed is always c.

14. The observed lack of curvature in overall space.

Spacetime has very definite curvature near any massive object; this is what makes gravity work. The global curvature of spacetime is an altogether different issue.
Whether the average global curvature is zero has consequences for cosmological theories, but it has essentially no effect on the curvature that keeps the Earth in its orbit. If it did have an effect, the issue would have been settled long ago.

15. The universe shortly after its creation, when quantum effects dominated and contradicted Relativity.

16. The action-at-a-distance of quantum entanglement.

Miracles do not violate logic, and the Theory of Relativity asserts that these action-at-a-distance miracles described in the Gospels are impossible. No plausible medium for these miracles, even if they were non-instantaneous action-at-a-distance, has been proposed.

No one expects to observe gravitons. Calculations show that it is well-nigh impossible with any conceivable detector that we could build. No one is designing, funding, or building any apparatus to search for gravitons. Now it happens that theoretical physicists discuss and speculate on the existence and nature of such things as gravitons as part of their theoretical work. Some of these discussions take place among scientists who receive their salaries from various government agencies that are funded by taxpayers. Whether all of the things that scientists think about, talk about, and write about constitute a good use of money is not for us to say.

When physicists encounter something puzzling, their first reaction is usually not to assume that it shows that relativity is wrong. In fact the cited article never mentions relativity at all. It is about "magnetars", a type of neutron star about which very little is known, other than their extremely strong magnetic fields. Many scientific discoveries have arisen from observations that were puzzling at first. Edwin Hubble's observations of galactic redshift led to the realization that the universe is expanding isotropically. The observations by Jocelyn Bell and Antony Hewish, of periodic pulsation in the radio emissions of a star, were so puzzling at first that they seemed to suggest transmissions from intelligent extraterrestrials. They had actually discovered pulsars. When Carl Anderson saw unexplained particle tracks in a cloud chamber, he had discovered antiparticles. None of these people assumed that the explanation for these observations was that relativity was wrong. They were all astrophysicists and were quite familiar with relativity. In fact, Anderson's antiparticles had been predicted by the Dirac equation, a synthesis of special relativity and the Schrödinger equation of quantum mechanics, from a few years earlier.

It is utterly baffling how anyone could make such an assertion. The insights from relativity are multitudinous. Relativity forms the basis for astronomy, cosmology, electrodynamics, and many other fields. The interconnection between the electric and magnetic forces is now seen to be a straightforward consequence of relativity. Relativity combined with quantum mechanics is the basis of all of contemporary physics. The Dirac equation, which gives rise to antiparticles and the theory of spinors, was an early example of introducing relativity into the Schrödinger equation. More generally, special relativity was combined with quantum theory to produce quantum field theory (QFT). An example of a QFT is quantum electrodynamics, the most precise theory in physics. Furthermore, string theory is Lorentz invariant and produces general relativity in the low energy limit.

22. The change in mass over time of standard kilograms preserved under ideal conditions.
We are baffled that anyone would connect this problem with relativity. The non-relativity principle of conservation of mass has been known, in general, for hundreds of years, going back to the time of the alchemists, and has been a fundamental and accurate principle since the 19th century. The principle of conservation of energy has also been a fundamental and accurate principle since the 19th century. Relativity simply generalizes this to a principle of both together, with even greater precision. So, in principle, it is recognized that the mass of the standard kilogram could change if an energy transfer took place. But no combustion, corrosion, or nuclear decay is suspected of having taken place. In any case, the amount of energy that would have to have been released is 1.25 megawatt-hours, which would certainly have been noticed. Relativity does not promise uniformity of the universe any more strongly than classical physics did. The apparent change in mass of the standard kilogram is simply a mystery.

23. The uniformity in temperature throughout the universe.

The cited article is about a fascinating aspect of contemporary physics. Like item 23, it would have been useful to say what the article is about. The cited article is about speculation that the constant "alpha" (see item 17 above) may not be constant. It might have decreased, by 45 parts per billion, as recently (in cosmic time) as two billion years ago, based on data from the Oklo "natural fission reactor". Other measurements have been made of alpha at earlier times, such as measurement of light from distant quasars. These measurements suggest that alpha has increased by a few parts per 10⁵ in 12 billion years. There is plenty of literature on theories about change in alpha, and some of it indicates that this may be due to change in the speed of light. Specifically, the Oklo data may suggest that the speed of light may have been increasing slightly. (This is the opposite of the direction of change claimed by fundamentalists, but is much smaller in any case.) Speculation on a different speed of light in the past may relate to theories of "cosmic inflation", which touches on the question of why the Cosmic Background Radiation is so nearly isotropic, which indicates a near-uniformity of temperature, which requires inflation or some equivalent mechanism. None of the scientists working in this area seem to doubt the fundamental correctness of relativity.

The cited article is about a very recent and exciting development in fundamental theoretical physics, generally called the "holographic principle". This proposes that our perceived 3-dimensional space actually arises from a "hologram" on a 2-dimensional space. Much has been written about this recently, including the best-selling book The Hidden Reality by Brian Greene. This hologram manifests itself in the "foamlike" ripples of spacetime, at the scale of the Planck length, which is so small that there seemed to be no way to detect it directly. But it seems that some "inexplicable static" in the results of another unrelated experiment, searching for gravitational waves, may be the first hints of the holographic nature of space. If so, this is a lucky and serendipitous result. Serendipitous scientific discoveries have been made many times, as when Henri Becquerel put a photographic plate in a dark drawer because the weather was cloudy, and thereby discovered radioactivity.
That the "foamlike" nature of space at short distances is contrary to the continuous nature presumed by classical mechanics and relativity has been known for some time. This is the problem that quantum gravity seeks to solve.

Quoting things without explaining the context is often a bad idea, and the indicated item is a good example of this. It gives no hint of what the article from which the quote was taken is about. The article is about one scientist's contribution to the problem of unifying relativity and quantum mechanics. The scientist, Petr Hořava of Berkeley, has come up with an approach that he says eliminates the infinities that have plagued other unification attempts. The two quoted sentences are Hořava's statement of the problem. So it comes as no surprise that he says that there is a conflict between relativity and quantum mechanics. The very next paragraph begins: "The solution, Hořava says, is to snip threads that bind time to space at very high energies .... At low energies, general relativity emerges from this underlying framework ..." It is well known that, just as classical mechanics emerges from quantum mechanics at non-microscopic scales, and classical mechanics emerges from relativity at low speeds, both relativity and quantum mechanics should emerge from the Grand Unified Theory (whatever that turns out to be) at the appropriate scales.

The topology of wormholes is an interesting topic, widely discussed among theoretical physicists and mathematicians. There is much speculation about what they would be like (if they could exist at all under quantum gravity), and what kind of "cosmic censorship" or "chronology protection" theorems might make practical time travel impossible. The nature of these theorems seems to be intertwined with theories of quantum gravity and "grand unification", so the exact form of the "cosmic censorship", if it exists, can't be known until quantum mechanics and relativistic geometry are unified. It is an exciting field of research. No one believes that the "cosmic censorship" will take the form of relativity not being a true non-quantum approximation to reality.

Since the 1970s much work has been done on the subject of black hole thermodynamics[10][11], most notably by the Lucasian Professor of Mathematics at Cambridge, Stephen Hawking. When quantum field theory is added to the analysis of black holes, it is found that they do not possess "low entropy" (quite the opposite, in fact) and are consistent with the laws of thermodynamics[12]. The Counterexamples to Relativity article has labelled this work in a footnote as "[c]ontrived explanations", with no explanation for this characterization given.

Please read the cited paper carefully. It is a survey of their observations over a 30 year period. They point out that their data matches general relativity to within 0.2 percent, and is now down in the "noise" of other effects, such as lack of accurate knowledge of just how far away the pulsars are, and lack of accurate knowledge of galactic constants. As they say in the abstract, "tighter bounds will be difficult to obtain." The paper, written in 2004, also notes that, because the pulsar beams are slowly tilting out of the line of sight to Earth, "A core component [of the emission] is quite prominent in the data taken in 1980-81, but it faded very significantly between 1980 and 1998 and was nearly gone by 2003." That is why they are not releasing further data. 30 years is a fairly long time to watch a pair of pulsars.
They're not doing the experiment any more—it did its job, and it's finished. No one drops cannonballs off the Leaning Tower of Pisa any more either. The data are not diverging from the predictions.

The Global Positioning System (GPS) uses general relativity to achieve greater accuracy[13]. In fact, relativity is what makes the magnetic force necessary. The magnetic force is used in, among other things, electric motors and generators. Even if it were the case that no practical applications have come from relativity, that is irrelevant. The validity of a theory is not based on the creation of useful devices. It is based on its ability to accurately predict the results of an experiment. Relativity "has held up under extensive experimental scrutiny"[14]. If scientific theories were judged by their application in useful devices, the following Nobel-worthy theories would be rejected:

• cosmic inflation
• parity violation in the weak force
• the "standard model", with strange/charm/top/bottom/mu/tau
• the Chandrasekhar limit for white dwarf stars

The relativistic relation between energy and momentum is

E^2 = (pc)^2 + (m c^2)^2

where c is the speed of light. With a photon of zero rest mass, this gives:

E = pc = \frac{hc}{\lambda}

where \lambda is the photon's wavelength.[15]

The reference to the twin paradox suggests that the author thought that the passage of time is some kind of scalar field that should be obtainable as the path integral of a conservative vector field. It is not. Passage of time is a property of one's path through spacetime, and is similar to path length. (In fact, under the Lorentz/Minkowski metric, it is exactly path length.) Just as two paths from point A to point B on a sheet of paper can have different lengths, the paths of the twins can have different lengths, and hence different elapsed local times.

The "Ehrenfest paradox" is not an actual paradox. Non-inertial relativistic motion of solid bodies is quite complicated, involving such concepts as "Born rigidity", "Langevin observers", the "Langevin-Landau-Lifschitz metric", and "quotient manifolds". In general, the subject is complicated, and has provided physicists with much food for thought. But it does not disprove relativity.

The claims of that item are preposterous. Read the cited paper (or its abstract) carefully. Einstein's statement that clocks would run slower at the equator, due to time dilation, was correct according to special relativity alone. What Einstein didn't realize, because he wouldn't discover general relativity for another 10 years, was that the gravitational time shift would offset that. The cited paper does not refute relativity.

Time isn't a vector. It is a component of the vector space known as "spacetime". Vectors have negatives; the word "inverse" is not typically used here. While there are thermodynamic and other reasons for not allowing time to go backwards in the real world, the mathematics of spacetime allow vectors with any components, even negative ones. Mathematicians define a 4-dimensional vector space as having two operators: addition and scalar multiplication. There is also the identity vector (0,0,0,0). So, to the extent that one asks what vector can be added to (0,0,0,t) to produce (0,0,0,0), the answer is (0,0,0,−t), but that has nothing to do with relativity.

That a Bible verse "likely" refers to a scientific phenomenon does not make good science. The objection appears to be claiming that the Bible contradicts the Michelson-Morley experiment, so the latter must be wrong. The Michelson-Morley experiment is very well known, has been repeated countless times, and is incontrovertible.
To suggest that the Bible verse in Genesis contradicts such an incontrovertible phenomenon does a disservice to both the Bible and to science. This one is utterly absurd. The calculation of the gravity of the Sun and Moon in producing tides has been known for hundreds of years, explained by Newtonian gravity. The effect of relativity on this is microscopic. The cited article is about the many things affecting the gross behavior of tides. Keep in mind that the factors affecting the tides involve a lot of hard-to-precisely-quantify things such as wind and ocean currents, and the melting of glaciers. As an example of how imprecise things are, the Atlantic and Pacific Oceans differ in mean height by about 8 inches at the Panama Canal, and the tides are about 20 feet on the Pacific side and 1 foot on the Atlantic side. As another example, tides in the Bay of Fundy are as high as 53 feet. Tides are not a simple calculation from Newtonian or Einsteinian mechanics. The difference between Newtonian gravitation and Einsteinian gravitation is utterly insignificant in comparison to these gross effects. This item is about unexpected behavior of the supermassive black holes at the centers of many galaxies. Black holes are a prediction of relativity. The fact that people are still learning new things about them is hardly surprising. The cited article never mentions relativity or the possibility that relativity is wrong. In fact, the entire discussion is in the context of black holes. The second study was about a black hole at the center of a galaxy apparently wandering out of place. The hypothesis is that a merger of two galaxies was involved, and "the theoretical prediction is that when two black holes merge, the newly combined black hole receives a 'kick' due to the emission of gravitational waves ..." Gravitational waves are, of course, a specific prediction of general relativity. 45. Scientists are unable to explain a June 2012 cluster of earthquakes in Ireland. This one is even more absurd than the one above about tides. We cannot fathom why anyone would think earthquakes are a counterexample to relativity. This refers to a recently discovered planetary system in which another star is closer than had previously been seen in such a system. Systems involving two stars and a planet constitute the "three body problem", which has never been solved analytically. But if only one of the objects is massive, such as the Sun, the system can be stable for all practical purposes, which is why the solar system is effectively stable even though it is a many-body problem. But two stars make the system much more problematical. In the past, such systems have had the other star far enough away that it has essentially no effect. The Gamma Cephei system is different, and scientists are eager to analyze it in detail. They are frustrated in this by the extremely scant data about the planet's orbit, since that information has to be gleaned from tiny spectroscopic shifts (in only one dimension!) of the two stars. The effect of Relativity on this is, of course, the precession of the "perihelion" due to the effects of General Relativity. From the information that has been gathered, the precession should be about 1.14 arcseconds per Earth century, or about 1/37th that of Mercury. The precession of Mercury was established with accurate visual observations. Measuring something 1/37th as large, from the Doppler shift in spectroscopic measurements, is utterly beyond current technology. 
Other than that, there is no reason to consider relativity. Classical mechanics will do just fine.

47. General Relativity fails to predict the Allais Effect.

A quick Google search will show that there is no "concerted effort to suppress knowledge about the phenomenon". Tom van Flandern (no darling of those who accept relativity; he has been cited at Conservapedia in the "Lack of evidence for Relativity" section) explains the phenomenon in his 2003 paper Allais gravity and pendulum effects during solar eclipses explained: "Here we show that an unusual phenomenon that occurs only during solar eclipses, rapid air mass movement for the bulk of the atmosphere above normal cloud levels, appears to be a sufficient explanation for both the magnitude and behavior of the anomaly previously reported in these pages." The fact that "attempts to duplicate the result have been reported as failures" is certainly consistent with an effort to suppress knowledge, but it's also consistent with the effect not existing.

48. The Pauli Exclusion Principle states that no two electrons...

This is wrong on many levels. If there were exactly as many quantum-mechanical states (eigenfunctions) as there are particles, then, indeed, a fermion particle could only go to another state if the particle already in that state moved. But there are many more available states than there are particles. The cited article was a lecture by Brian Cox, a "popularizer" of physics, and the comments on the web page take him to task over a great many points, including a confusion between "quantum states" and "energy states", and what quantum-mechanical interconnectedness really means. People would be well advised to read those comments, as well as the analysis by Sean Carroll here, which points out the many flaws in Brian Cox's reasoning. In any case, this is about quantum mechanics, not relativity.

Unless someone can come up with a sensible example, we will not be posting further rebuttals. The recent ones have been hideously pointless, and not worth replying to. Many of them have cited articles that have nothing to do with relativity. Tides, and earthquakes, do not disprove relativity. It's possible that, because of the rather famous nature of the "counterexamples" page (it is cited around the world, and has nearly 2 million web views), people are simply putting in parody, or trolling, or humor, or whatever, in an attempt to see their work on a world-famous page.
Davisson–Germer experiment

The Davisson–Germer experiment was a physics experiment conducted by American physicists Clinton Davisson and Lester Germer in the years 1923–1927,[1] which confirmed the de Broglie hypothesis. This hypothesis, advanced by Louis de Broglie in 1924, says that particles of matter such as electrons have wave-like properties. The experiment not only played a major role in verifying the de Broglie hypothesis and demonstrating wave-particle duality, but was also an important historical development in the establishment of quantum mechanics and of the Schrödinger equation.

History and overview[edit]

According to Maxwell's equations in the late 19th century, light was thought to consist of waves of electromagnetic fields and matter was thought to consist of localized particles. However, this was challenged in Albert Einstein's 1905 paper on the photoelectric effect, which described light as discrete and localized quanta of energy (now called photons), which won him the Nobel Prize in Physics in 1921. In 1924 Louis de Broglie presented his thesis concerning the wave-particle duality theory, which proposed the idea that all matter displays the wave-particle duality of photons.[2] According to de Broglie, for all matter and for radiation alike, the energy E of the particle is related to the frequency \nu of its associated wave by the Planck relation:

E = h\nu

and the momentum p of the particle is related to its wavelength \lambda by what is now known as the de Broglie relation:

p = \frac{h}{\lambda}

where h is Planck's constant.

An important contribution to the Davisson–Germer experiment was made by Walter M. Elsasser in Göttingen in the 1920s, who remarked that the wave-like nature of matter might be investigated by electron scattering experiments on crystalline solids, just as the wave-like nature of X-rays had been confirmed through X-ray scattering experiments on crystalline solids.[2][3] This suggestion of Elsasser was then communicated by his senior colleague (and later Nobel Prize recipient) Max Born to physicists in England. When the Davisson and Germer experiment was performed, the results of the experiment were explained by Elsasser's proposition. However, the initial intention of the Davisson and Germer experiment was not to confirm the de Broglie hypothesis, but rather to study the surface of nickel.

American Physical Society plaque in Manhattan commemorates the experiment

In 1927 at Bell Labs, Clinton Davisson and Lester Germer fired slow-moving electrons at a crystalline nickel target. The angular dependence of the reflected electron intensity was measured and was determined to have the same diffraction pattern as that predicted by Bragg for X-rays. This experiment was independently replicated by George Paget Thomson, and Davisson and Thomson shared the Nobel Prize in Physics in 1937.[2][4] The Davisson–Germer experiment confirmed the de Broglie hypothesis that matter has wave-like behavior. This, in combination with the Compton effect discovered by Arthur Compton (who won the Nobel Prize in Physics in 1927),[5] established the wave–particle duality hypothesis, which was a fundamental step in quantum theory.

Early experiments[edit]

Davisson began work in 1921 to study electron bombardment and secondary electron emissions. A series of experiments continued through 1925.
Experimental setup

Davisson and Germer's actual objective was to study the surface of a piece of nickel by directing a beam of electrons at the surface and observing how many electrons bounced off at various angles. They expected that, because of the small size of electrons, even the smoothest crystal surface would be too rough and thus the electron beam would experience diffuse reflection.[6]

The experiment consisted of firing an electron beam from an electron gun directed at a piece of nickel crystal at normal incidence (i.e. perpendicular to the surface of the crystal). The electron gun consisted of a heated filament that released thermally excited electrons, which were then accelerated through a potential difference, giving them a certain amount of kinetic energy, towards the nickel crystal. To avoid collisions of the electrons with other molecules on their way towards the surface, the experiment was conducted in a vacuum chamber. To measure the number of electrons that were scattered at different angles, a Faraday cup electron detector that could be moved on an arc path about the crystal was used. The detector was designed to accept only elastically scattered electrons.

During the experiment an accident occurred and air entered the chamber, producing an oxide film on the nickel surface. To remove the oxide, Davisson and Germer heated the specimen in a high temperature oven, not knowing that this caused the formerly polycrystalline structure of the nickel to form large single-crystal areas with crystal planes continuous over the width of the electron beam.[6] When they started the experiment again and the electrons hit the surface, they were scattered by atoms which originated from crystal planes inside the nickel crystal. In 1925, they generated a diffraction pattern with unexpected peaks.

A breakthrough[edit]

On a break, Davisson attended the Oxford meeting of the British Association for the Advancement of Science in the summer of 1926. At this meeting, he learned of the recent advances in quantum mechanics. To Davisson's surprise, Max Born gave a lecture that used diffraction curves from Davisson's 1923 research, which he had published in Science that year, using the data as confirmation of the de Broglie hypothesis.[7] He learned that in prior years other scientists (Walter Elsasser, E. G. Dymond, and Patrick Blackett, James Chadwick, and Charles Ellis) had attempted similar diffraction experiments, but were unable to generate low enough vacuums or detect the low-intensity beams needed.[7]

Returning to the United States, Davisson made modifications to the tube design and detector mounting, adding azimuth in addition to colatitude. The following experiments generated a strong signal peak at 65 V and an angle θ = 45°. He published a note in Nature titled "The Scattering of Electrons by a Single Crystal of Nickel". Questions still needed to be answered and experimentation continued through 1927. By varying the applied voltage to the electron gun, the maximum intensity of electrons diffracted by the atomic surface was found at different angles. The highest intensity was observed at an angle θ = 50° with a voltage of 54 V, giving the electrons a kinetic energy of 54 eV.[2]

As Max von Laue proved in 1912, the periodic crystal structure serves as a type of three-dimensional diffraction grating.
The angles of maximum reflection are given by Bragg's condition for constructive interference from an array, Bragg's law:

n\lambda = 2d\sin \left(90^{\circ} - \frac{\theta}{2} \right)

with n = 1, θ = 50°, and the spacing of the crystalline planes of nickel (d = 0.091 nm) obtained from previous X-ray scattering experiments on crystalline nickel.[2]

According to the de Broglie relation, electrons with a kinetic energy of 54 eV have a wavelength of 0.167 nm. The outcome via Bragg's law was 0.165 nm, which closely matched the prediction. Davisson and Germer's accidental discovery of the diffraction of electrons was the first direct evidence confirming de Broglie's hypothesis that particles can have wave properties as well. Davisson's attention to detail, his resources for conducting basic research, the expertise of colleagues, and luck all contributed to the experimental success.

References[edit]

1. ^ Davisson, C. J.; Germer, L. H. (1928-04-01). "Reflection of Electrons by a Crystal of Nickel". Proceedings of the National Academy of Sciences of the United States of America 14 (4): 317–322. ISSN 0027-8424. PMC 1085484. PMID 16587341.
2. ^ a b c d e R. Eisberg, R. Resnick (1985). "Chapter 3 – de Broglie's Postulate—Wavelike Properties of Particles". Quantum Physics: of Atoms, Molecules, Solids, Nuclei, and Particles (2nd ed.). John Wiley & Sons. ISBN 0-471-87373-X.
3. ^ H. Rubin (1995). "Walter M. Elsasser". Biographical Memoirs 68. National Academy Press. ISBN 0-309-05239-4.
4. ^ The Nobel Foundation (1937). "Clinton Joseph Davisson and George Paget Thomson for their experimental discovery of the diffraction of electrons by crystals". The Nobel Foundation 1937.
5. ^ The Nobel Foundation (1927). "Arthur Holly Compton for his discovery of the effect named after him and Charles Thomson Rees Wilson for his method of making the paths of electrically charged particles visible by condensation of vapour". The Nobel Foundation 1927.
6. ^ a b Hugh D. Young, Roger A. Freedman: University Physics, 11th ed. Pearson Education, Addison Wesley, San Francisco 2004, ISBN 0-321-20469-7, pp. 1493–1494.
7. ^ a b Gehrenbeck, Richard K. (1978). "Electron diffraction: fifty years ago". Physics Today (American Institute of Physics): 34–41.
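The close agreement quoted above is easy to reproduce. Here is a minimal sketch in plain Python comparing the de Broglie wavelength of a 54 eV electron with the wavelength inferred from Bragg's law, using the d = 0.091 nm spacing and θ = 50° from the article:

```python
# Reproduce the Davisson-Germer comparison: de Broglie wavelength of a
# 54 eV electron versus the wavelength inferred from Bragg's law (n = 1).
import math

h = 6.626e-34       # Planck constant, J s
m_e = 9.109e-31     # electron mass, kg
eV = 1.602e-19      # joules per electron volt

E = 54 * eV                          # kinetic energy (non-relativistic)
p = math.sqrt(2 * m_e * E)           # momentum from E = p^2 / (2m)
lam_de_broglie = h / p               # de Broglie relation, lambda = h / p

d = 0.091e-9                         # Ni plane spacing from X-ray data, m
theta = math.radians(50.0)           # angle of the intensity maximum
lam_bragg = 2 * d * math.sin(math.radians(90.0) - theta / 2)

print(f"de Broglie: {lam_de_broglie * 1e9:.3f} nm")  # ~0.167 nm
print(f"Bragg:      {lam_bragg * 1e9:.3f} nm")       # ~0.165 nm
```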
Pauli equation

In quantum mechanics, the Pauli equation or Schrödinger–Pauli equation is the formulation of the Schrödinger equation for spin-½ particles, which takes into account the interaction of the particle's spin with an external electromagnetic field. It is the non-relativistic limit of the Dirac equation and can be used where particles are moving at speeds much less than the speed of light, so that relativistic effects can be neglected. It was formulated by Wolfgang Pauli in 1927.[1]

For a particle of mass m and charge q, in an electromagnetic field described by the vector potential A = (Ax, Ay, Az) and scalar electric potential ϕ, the Pauli equation reads:

Pauli equation (general)

\left[ \frac{1}{2m}(\boldsymbol{\sigma}\cdot(\mathbf{p} - q \mathbf{A}))^2 + q \phi \right] |\psi\rangle = i \hbar \frac{\partial}{\partial t} |\psi\rangle

where σ = (σx, σy, σz) are the Pauli matrices collected into a vector for convenience, p = −iħ∇ is the momentum operator wherein ∇ denotes the gradient operator, and

|\psi\rangle = \begin{pmatrix} \psi_+ \\ \psi_- \end{pmatrix}

is the two-component spinor wavefunction, a column vector written in Dirac notation.

The Hamiltonian operator is a 2 × 2 matrix operator, because of the Pauli matrices. Substitution into the Schrödinger equation gives the Pauli equation. This Hamiltonian is similar to the classical Hamiltonian for a charged particle interacting with an electromagnetic field; see Lorentz force for details of this classical case. The kinetic energy term for a free particle in the absence of an electromagnetic field is just p²/2m, where p is the kinetic momentum, while in the presence of an EM field we have the minimal coupling p = P − qA, where P is the canonical momentum.

The Pauli matrices can be removed from the kinetic energy term, using the Pauli vector identity:

(\boldsymbol{\sigma}\cdot \mathbf{a})(\boldsymbol{\sigma}\cdot \mathbf{b}) = \mathbf{a}\cdot\mathbf{b} + i\boldsymbol{\sigma}\cdot \left(\mathbf{a} \times \mathbf{b}\right)

to obtain[2]

Pauli equation (standard form)

\hat{H} |\psi\rangle = \left[\frac{1}{2m}\left[\left(\mathbf{p} - q \mathbf{A}\right)^2 - q \hbar \boldsymbol{\sigma}\cdot \mathbf{B}\right] + q \phi\right]|\psi\rangle = i \hbar \frac{\partial}{\partial t} |\psi\rangle

where B = ∇ × A is the magnetic field.

Relationship with the Schrödinger equation and the Dirac equation[edit]

The Pauli equation is non-relativistic, but it does predict spin. As such, it can be thought of as occupying the middle ground between the familiar Schrödinger equation, which is non-relativistic and does not account for spin, and the Dirac equation, which is fully relativistic and of which the Pauli equation is the non-relativistic limit.

Note that because of the properties of the Pauli matrices, if the magnetic vector potential A is equal to zero, then the equation reduces to the familiar Schrödinger equation for a particle in a purely electric potential ϕ, except that it operates on a two-component spinor:

\left[ \frac{\mathbf{p}^2}{2m} + q \phi \right] \begin{pmatrix} \psi_+ \\ \psi_- \end{pmatrix} = i \hbar \frac{\partial}{\partial t} \begin{pmatrix} \psi_+ \\ \psi_- \end{pmatrix}

Therefore, we can see that the spin of the particle only affects its motion in the presence of a magnetic field.

Relationship with Stern–Gerlach experiment[edit]

Both spinor components satisfy the Schrödinger equation.
For a particle in an externally applied B field, the Pauli equation reads:

Pauli equation (B-field)

i \hbar \frac{\partial}{\partial t} |\psi\rangle = \underbrace{\left( \frac{(\mathbf{p} - q \mathbf{A})^2}{2 m} + q \phi \right) \hat{1} \, |\psi\rangle}_{\mathrm{Schr\ddot{o}dinger~equation}} - \underbrace{\frac{q \hbar}{2m}\boldsymbol{\sigma} \cdot \mathbf{B} \, |\psi\rangle}_{\mathrm{Stern{-}Gerlach~term}}

where

\hat{1} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}

is the 2 × 2 identity matrix, which acts as an identity operator.

The Stern–Gerlach term can pick out the spin orientation of atoms with one valence electron, e.g. silver atoms, which flow through an inhomogeneous magnetic field. Analogously, the term is responsible for the splitting of spectral lines (corresponding to energy levels) in a magnetic field, as seen in the anomalous Zeeman effect.

See also[edit]

References[edit]

1. ^ Wolfgang Pauli (1927). "Zur Quantenmechanik des magnetischen Elektrons". Zeitschrift für Physik 43: 601–623.
2. ^ Bransden, BH; Joachain, CJ (1983). Physics of Atoms and Molecules (1st ed.). Prentice Hall. p. 638. ISBN 0-582-44401-2.
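The Pauli vector identity used above, and the two-level splitting produced by the Stern–Gerlach term, are easy to verify numerically. Below is a minimal sketch with NumPy; the test vectors a, b and the field B are arbitrary illustrative choices:

```python
# Verify (sigma.a)(sigma.b) = (a.b) 1 + i sigma.(a x b), and diagonalize
# the sigma.B term, whose eigenvalues are +/-|B| (so the Stern-Gerlach
# energies are -/+ (q hbar / 2m)|B| for the two spin states).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def dot_sigma(v):
    """The 2x2 matrix sigma . v."""
    return v[0] * sx + v[1] * sy + v[2] * sz

a = np.array([0.3, -1.2, 0.7])   # arbitrary test vectors
b = np.array([1.1, 0.4, -0.5])

lhs = dot_sigma(a) @ dot_sigma(b)
rhs = np.dot(a, b) * np.eye(2) + 1j * dot_sigma(np.cross(a, b))
print(np.allclose(lhs, rhs))                    # True

B = np.array([0.0, 0.0, 2.0])                   # field along z, arbitrary units
print(np.linalg.eigvalsh(dot_sigma(B)))         # [-2.  2.], i.e. +/-|B|
```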
I spend a lot of time promoting Rhett Allain's Dot Physics blog, enough that some people probably wonder if I get a cut of his royalties (I don't). I'm going to take issue with his latest, though, because he's decided to revive his quixotic campaign against photons, or at least against teaching photons early in the physics curriculum. We went through this back in 2008 and 2009 (though Rhett's old posts are linkrotted away, so you only get my side of the story…). I'm no more convinced this time around, even though he drags in Willis Lamb and David Norwood for support.

There are basically two pieces to the anti-photon argument, neither of which I find remotely convincing. In fact, I will happily stipulate that both of the central points are correct, and even after that, I don't find this to be a problem that requires fixing.

The first claim is that photons are redundant, given that you can explain all of the phenomena they're usually invoked to explain without ever referring to particle-like characteristics of light. Which is true– you can construct a semi-classical model of the photoelectric effect in which electrons inside metals occupy quantized states and are excited out of them by classical electromagnetic waves of the appropriate frequency. That reproduces essentially all of the features of the quantum model proposed by Einstein in 1905 and confirmed by Millikan's experiments in 1916. Using that instead of the photon model would be a little ahistorical– the semiclassical model was first published by Mandel and Wolf in the 1960's, and relies on a Fermi Golden Rule calculation assuming the Schrödinger equation which wasn't invented until the late 1920's– but you could do it.

But even allowing that it's true, I don't see what the point is. To bring it back to classical physics, in the same way that it's perfectly true that you can describe the photoelectric effect with a semiclassical model, it's perfectly true that you can describe an elastic collision between two objects by direct integration of Newton's second law (or the Momentum Principle, in the language of the Matter and Interactions textbook that Rhett and I both use). But there's absolutely no reason to do that, other than to make a philosophical point– sane people considering elastic collision problems make use of energy in addition to momentum, because it makes the problem simpler.

And that's the case for photons in talking about the photoelectric effect: it's much, much easier. Actually working out the details of the semi-classical model of the photoelectric effect is really complicated: you need to know about the Schrödinger equation, make a few approximations, and do an integral involving complex numbers. The photon version requires subtraction. We invoke photons for the photoelectric effect because it's much, much simpler. And since a quantized model of light is known to be necessary to explain photon anti-bunching (a point even Rhett concedes), there's no good reason not to employ it there.

(I'll note in passing that Norwood makes repeated references to some sort of experiment that supposedly shows a delay in electron emission for low light intensity. I have absolutely no idea what he's talking about, and he doesn't provide a citation. The only "delay in photoemission" measurements I'm aware of are attosecond scale delays after excitation with an ultrafast laser pulse, which is not remotely the same thing.)
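To make the "requires subtraction" point concrete: in the photon model, the maximum photoelectron kinetic energy is just the photon energy minus the work function, K_max = hν − W. Here's a minimal sketch; the sodium work function (about 2.28 eV) and the wavelengths are illustrative numbers I'm supplying, not anything from Rhett's post:

```python
# Photon-model photoelectric effect: K_max = h*nu - W. One subtraction,
# versus the Fermi-Golden-Rule machinery of the semiclassical treatment.
h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electron volt

def k_max_eV(wavelength_m, work_function_eV):
    """Max photoelectron kinetic energy in eV; negative means no emission."""
    photon_energy_eV = h * c / wavelength_m / eV
    return photon_energy_eV - work_function_eV

W_SODIUM = 2.28                       # illustrative work function, eV
print(k_max_eV(400e-9, W_SODIUM))     # ~ +0.82 eV: electrons come out
print(k_max_eV(600e-9, W_SODIUM))     # negative: below threshold, no emission
```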
The second argument against photons basically amounts to the language used to describe light in terms of particles being imprecise in a way that offends some people’s aesthetic sense. And, again, strictly speaking this is perfectly true. Photons are not perfectly described as particles with all the properties of classical particles. A proper description is that photons are quantized excitations of particular modes of the electromagnetic field. But you know what else isn’t perfectly described as a particle with all the properties of classical particles? An electron. In fact, strictly speaking, electrons also ought to be described as quantized excitations of an “electron field.” There are some differences between the mathematical descriptions of photons and electrons as field excitations, but we’ve known since Dirac’s day that electrons are best described as field quanta. And yet, you don’t find many physicists willing to argue that we shouldn’t teach electrons as particles. But all the same linguistic ambiguities are present with electrons that are present with photons. Neither is truly a particle or a wave in the classical sense– rather, they’re both a third kind of object for which we lack a convenient single word. There’s necessarily a lot of imprecision in the language used to talk about this; Lamb’s article includes a kind of snotty remark blaming Bohr for this, but I don’t see any great alternatives. And even after stipulating that talking about photons as particles is imprecise, I fail to see what the problem is. Or, more specifically, I don’t see where this particular bit of imprecision creates a real problem for anything. If this is a pernicious misconception, what is the physical problem that thinking of light in terms of photons keeps you from solving? Where does it lead students astray in a way that gets a clearly wrong answer, as opposed to getting the right answer by aesthetically unappealing means? I haven’t seen a good example yet, though this is at least the third time I’ve read a bunch of anti-photon rants. So, like the post title says, I see no reason to drop photons. A fully quantized model of light is unquestionably necessary for more advanced experiments, invoking a simplified version of it makes certain classes of intro problems vastly easier to deal with, and the imprecision that the simplified model introduces doesn’t seem to cause any significant problems (particularly for the vast majority of students who will only ever take introductory-level classes). There’s just nothing there that rises to the level of a problem requiring a change in pedagogy. (I suspect the closest analogue in classical physics is the “work done by friction” business, where a lot of physics education folks vehemently object to the notion of frictional forces doing work, for reasons I have never quite understood. I’ve had it explained to me several times, and all I remember about it is that it turns on a poor choice of what you call the system. This has basically eliminated a whole class of problems from the intro classes– you won’t find problems where students find the stopping distance for a sliding object using energy methods any more– in order to avoid a “misconception” that seems to me to be almost entirely an aesthetic issue.) 1. #1 Nick Theodorakis July 12, 2013 I am not a physicist, but can Rhett Allain explain Compton scattering without photons? 2. #2 Jim Deane July 12, 2013 As a reference, Art Hobson at U. 
Arkansas makes an argument for fields over particles: I continue to think that the particle model has useful relevance, and I think that switching to an all-fields model for introductory physics would make physics all that much more unreachable (and irrelevant, and unfundable) to "the common people". Disclaimer– I haven't read Rhett's post yet, so this comment is general with respect to the topic, not specific to Rhett's article.

3. […] liked Chad Orzel's blog post title so much that I'm stealing it, to mention a new study showing that photons don't […]

4. #4 john July 12, 2013 Beginners intuitively connect photons in the photoelectric effect with billiard balls: "photon in, electron out, we're done here". Discussions of wave properties of photons and electrons rarely connect back to discussions of photons and particles, beyond generic wave-particle duality. To me this summarizes the case against photons: they create an intuitive but incorrect model overpowering the much more important wave model.

5. #5 Eric Lund July 12, 2013 A photon doesn't just have energy. It has linear momentum and angular momentum as well. You could presumably construct a field with those properties, but that involves math sufficiently esoteric that it would have to be relegated to a graduate-level course. So while it is technically possible to explain all of the relevant physics without resorting to photons, the attempt would likely introduce more problems than it solves. Conversely, despite never having taken a formal solid state physics course I am aware of the existence of phonons, which are basically quantized sound waves, as a tool for describing certain phenomena. As with photons, this is done because some phenomena are conceptually easier to describe in terms of phonons rather than a field of sound waves. I'll defer to others more knowledgeable than I about the feasibility of eliminating discussion of phonons from solid state courses, but if you think phonons are useful, it is hard to argue that photons are not. I'm also boggling about the objection to friction forces doing work. What's the alternative description that would be intelligible at the freshman level? I get that invoking friction is a highly simplified description of what's going on, and as with photons vs. fields there is a level where that simplification is inappropriate. But it's rare for such situations to arise at the undergraduate physics level.

6. #6 timo July 12, 2013 I don't like the notion of photons because they muddle more than they illuminate. The only sensible definition of a photon is as a click in a photodetector.

7. #7 Alex July 12, 2013 The notion of a photon as a particle has some problems, but you need to think pretty deeply about physics, and study it at a pretty high level, to get there. OTOH, you can get started in the lab from day 1 if photons are particles. Indeed, I suspect that even the people who object most loudly to this probably think in terms of photons as particles with (more-or-less-well-defined) energy, linear momentum, and angular momentum when they are trying to understand experiments. I'm a theorist, but experiments are what science is ultimately all about. As to friction, yeah, I've heard some of those complaints, but the issue never made much sense to me.
I get the impression (perhaps mistakenly) that it isn't the newer wave of education researchers who complain about the "work done by friction" thing, so much as an older wave of people who want to "get it right, damnit!" I guess my answer is that if it depends on how we define the system, what part of the surface we call the system, etc., then how can we talk about "work done by gravity"? What part of the earth is pulling on me when I fall? In fact, different parts of the earth are exerting different forces on me, and with different directions even! If we're going to worry about the fact that different parts of the road are exerting frictional forces on me at different times, then we should worry about the fact that the gravitational force exerted on me by mountains in Canada is mostly northward, not downward. (I'm in the US.)

8. #8 CCPhysicist July 14, 2013 You can also use a geocentric system to do all of physics if you wish, ideally with the origin of the universal coordinate system right THERE at the corner of a lab bench or your telescope mount. The latter is actually convenient, but not simple, the same issue here. If Rhett acknowledges that you need photons to explain and/or predict experimental results like photon anti-bunching, then that is the more fundamental theory. What is wrong with deriving Maxwell's Equations from QED? More fundamentally, what is wrong with telling future engineers and future physicists that you can derive Maxwell's Equations from QED but a first-year course is not the time or place to do that, any more than it makes sense to teach Hamiltonian or Lagrangian mechanics to freshmen? I also really don't get the problem with work by friction. Do they also dispute work done by a gas because of some misconception held by someone? By a hand operated by muscles? By the muscle cells when the hand does not move? If you worry about that, shouldn't you also worry about what "force" means? (The only place where it is clearly defined is in a field theory.) Why does Rhett call gravity a force when the most likely theoretical explanation, necessary for GPS to work, says particles simply follow a geodesic in a model that entirely excludes quantum mechanics? Can you even justify using continuous functions to describe the motion of objects? What experiment tells us that space is continuous? Yeah, you can go nuts worrying about the foundations of classical mechanics even before you start worrying about the semantic misconceptions inherent in referring to an atom or a proton, let alone a ball, as "a" particle. We use what works.

9. #9 Pete Attkins July 14, 2013 I would find it very cumbersome to explain things such as photon shot noise if we do away with photons.

10. #10 OMF July 14, 2013 Physicists have been trying for 400 years to get rid of the concept of there being "particles" of light, but it refuses to go away. There's probably a good reason for that.

11. #11 John Duffield July 14, 2013 Complaining about photons sounds like complaining about solitons because some people think they're some kind of billiard ball. Anyway, I think "electron field" is a bigger problem myself. Take a look at http://en.wikipedia.org/wiki/Two-photon_physics and note that pair production occurs because pair production occurs. That's junk science. It ignores the hard scientific evidence.
You can make an electron (and a positron) out of light in pair production, you can diffract it, it's got a magnetic dipole moment, and the Einstein-de Haas effect "demonstrates that spin angular momentum is indeed of the same nature as the angular momentum of rotating bodies as conceived in classical mechanics". Oh, and in atomic orbitals electrons "exist as standing waves", and after annihilation you've got light again. The photon is a singleton electromagnetic field-variation/wave (or four-potential pulse) propagating linearly at c. The electron is a "Dirac's belt" standing-wave configuration where the field variation is now a standing field. Look to TQFT. It's like an electromagnetic knot.

12. #12 Todd Zimmerman United States July 14, 2013

13. #13 Ulf Lorenz July 15, 2013 @Eric Lund: 1. I think you can simulate most of the photon's properties with a classical wave for everyday situations. The linear momentum translates to the exp(ikr) part of the plane wave, and the angular momentum translates to the polarizability (I guess the latter part, but since it is an additional thing that you need to rotate, it should behave like an angular momentum). 2. The people that do not like photons despise phonons even more, with a similar argument. At least Lamb does so, and it makes some sense for consistency. So you would not convince a hardcore Antiphotoner with this argument. Having said this, I do agree with Chad. This discussion is about as arcane as hidden parameter theories.

14. #14 Dr. Decay July 16, 2013 The Hanbury Brown Twiss experiment is a good example of an effect which is easier to understand using classical electromagnetism than using quantum field theory. Roy Glauber got a Nobel prize for straightening everybody out about how to analyse HBT correctly in QED. Another slippery point about QED is that photons are massless and often not conserved. Indeed, the most commonly encountered states of the electromagnetic field, coherent states and thermal states, do not have a well-defined photon number. If you just think of a light beam as a bunch of particles it's a little bit hard to get your head around this idea, and it can lead to some stupid statements. Fock states, which do have a definite number of photons, are hard to produce and very fragile. And when you start thinking about Fock states and phase measurements, your intuition, if it only goes as far as "photons are particles", can quickly lead you astray. For example I sometimes hear arguments like: "If I send a single photon into this interferometer, there will be no fringes because single photons don't have a phase". Response: Maybe and maybe not, it depends on exactly what you measure. Thus there are many situations in electromagnetism in which classical thinking will serve you much better than superficial "photon theories". I therefore have some sympathy with the antiphoton curmudgeons. Of course, the best thing to do is to THINK CAREFULLY. This often means you should attempt both a classical and a quantum explanation for a given effect and make sure you understand how and why they are different. I'm not saying it's easy.

15. #15 Mark P July 16, 2013 "Neither is truly a particle or a wave in the classical sense…" Exactly. We talk about photons as particles when it's convenient (or, more precisely, say that they behave like particles), and we speak of them as waves (or more precisely, say that they behave like waves) when it's convenient. And appropriate.
Arguing that they aren’t particles is more an exercise in semantics than anything else. Of course they aren’t particles. They are, as you say, a third thing. But so what? 16. […] is usually the case, Chad Orzel’s post titles are so awesome I want to borrow them.  His latest post is […] New comments have been temporarily disabled. Please check back soon.
I have a strong interest in the mathematical structure of quantum mechanics. I'm particularly interested in discrete systems, i.e. systems whose state is in a finite-dimensional Hilbert space. Up to now I've been thinking of the Hamiltonian in such cases as just being some arbitrary Hermitian matrix that governs the system's dynamics. However, it would be really helpful to have some idea of what these Hamiltonian matrices and their elements represent in particular (idealised) physical situations. For example: what is the Hamiltonian for the spin state of an electron in a magnetic field (if that's a meaningful question to ask) and how is it derived? The Hamiltonian for an evolving spin state is a $2\times 2$ Hermitian matrix - do its individual elements have any particular physical significance? What about systems with more than two states? For example, can one write down a Hamiltonian for the spin states of two interacting electrons in some particular situation? It's difficult to search for such examples, because what tends to come up are systems like the quantum harmonic oscillator, whose Hamiltonians have discrete spectra, but which nevertheless live in infinite-dimensional Hilbert spaces.

I might be off or too broad here for you, but by the time evolution of the basis vectors given by the Schrödinger equation "$\frac{\text d}{\text d t}\psi=\frac{1}{T}\psi$", aren't the elements just some eigenfrequencies, describing (relative) rates of change of the physical configurations wandering through state space? – NikolajK Jul 30 '12 at 11:43

@NickKidman if I understand you correctly, a similar thought occurred to me while writing the question. But I've often seen the spin state of an electron referred to as if it occupied a two-dimensional Hilbert space, whereas I've never seen the state of a quantum harmonic oscillator referred to as if it lived in a countably-infinite-dimensional Hilbert space. If the finite-dimensional space is derived from the infinite-dimensional one in that particular way, it would still be good to see a worked example. – Nathaniel Jul 30 '12 at 11:49

I see, so matrix mechanics basically is the representation of QHO-like systems as if they lived in countable-dimensional Hilbert spaces. That's very interesting, thanks. (It'll take some time to digest.) – Nathaniel Jul 30 '12 at 13:03

I don't know why you formulate it using the term "as if". I mean if you can count it, it's countable. You're not dealing with Skolem's-paradox-ish rigour here. – NikolajK Jul 30 '12 at 13:18

2 Answers

This is covered completely in the Feynman Lectures on Physics III: the Hamiltonian for an electron in a magnetic field is, up to a constant,

$$ \mathbf{B}\cdot \boldsymbol\sigma $$

where B is the field, and $\sigma$ is a Pauli spin matrix. Without loss of generality, take the B field to point in the x-z plane, and then the Hamiltonian is a real matrix

$$ \begin{pmatrix} B_z & B_x \\ B_x & -B_z \end{pmatrix} $$

The interpretation of the on-diagonal matrix elements is that they are the energies of the spin states, ignoring transitions, in this case the interaction of the magnetic moment with the z-direction field. The interpretation of the off-diagonal elements is that they tell you the transition rate between up and down spin. In this case, you can just rotate the x and z directions to make B all in the z-direction.
All that happens is that the initial electronic spin wavefunction precesses, meaning that the two-component vector describing the initial electron spin wavefunction is a rotation of the vector (1,0) describing an electron with spin in the z-direction, rotated using the $\sigma$ matrices to point in some other direction, and the direction in which this spin vector points precesses around the B field direction. The general solution to the 2-component quantum system is covered well in Nielsen and Chuang. It is described by Pauli-matrices/quaternions/3-sphere variables (these are all equivalent up to notation), and it is used to build intuition for qubits.

To build intuition for two interacting electrons, consider the two-spin Hamiltonian: $$ \sigma_1 \cdot \sigma_2 $$ where $\sigma_1$ acts on the first electron's spin, while $\sigma_2$ acts on the second electron's spin (you should assume they are attached to separated spinless nuclei, so that they are in different spatial wavefunctions; otherwise Pauli exclusion will require that the wavefunction in spin is antisymmetric). This Hamiltonian describes a dipole-dipole energy for two electrons interacting at a distance. You can use the same Hamiltonian to understand the fine-structure splitting in Hydrogen. This is a 4 by 4 matrix whose eigenstates are the spin singlet and spin triplet, and solving this will help you understand why the theory of quantum angular momentum addition is important.

+1 for 'without loss of generality' – Benjamin Hodgson Aug 11 '12 at 21:19

There are different ways of getting discrete systems. Generally, there is a Hamiltonian that defines the unperturbed state of a system (e.g., an atom trap, or a quantum storage device). If the system cannot move or break up (and is not too large), its spectrum is typically discrete, and the levels of interest (below some maximal excitation energy) can be labelled. These labels are the indices with which the components of your state vectors are labelled. They span a finite-dimensional vector space, and the operators on it are matrices. The diagonal matrix whose entries are the energy levels defines the unperturbed Hamiltonian $H_0$. If now an interaction is switched on, $H_0$ is changed by some Hermitian interaction operator $V$, which usually is a nondiagonal matrix. Depending on one's experimental skill, one can create systems where $V$ has some desired properties. If the interaction is controlled by an external control, it becomes time-dependent. The components $V_{jk}$ of the matrix $V$ are the matrix elements $\langle j|V|k\rangle$, and their absolute squares have a physical meaning in terms of transition rates.

A simple example is a laser, which is typically represented by a 2-level or 3-level system interacting with an external field. Another example is a silver atom in the doubly degenerate ground state (the degeneracy coming from the spin-1/2 of its single valence electron), which responds to an external magnetic field and thus gives rise to the Stern–Gerlach experiment. Note that a moving system that hangs together, when considered in its rest frame, becomes nonmoving, and then the above applies. In experiments such as Stern–Gerlach, or most quantum optics experiments, the motion is described classically, and only the finitely many nonmoving degrees of freedom are described by quantum mechanics. To get more complicated systems, one takes a system consisting of several parts with few levels, and takes their tensor product as the Hilbert space.
The corresponding matrices are now sums of Kronecker products of small matrices acting on the individual parts. This is the playing ground for entanglement and quantum computing.
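Both answers are easy to play with numerically. Here is a minimal sketch with NumPy that builds the σ1·σ2 Hamiltonian from Kronecker products, as described above, and recovers the singlet/triplet structure mentioned in the first answer (units with ħ = 1):

```python
# Diagonalize H = sigma_1 . sigma_2 built from Kronecker products.
# Expected eigenvalues: -3 (spin singlet) and +1 (threefold-degenerate triplet).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H = sum(np.kron(s, s) for s in (sx, sy, sz))   # 4x4 two-spin Hamiltonian
vals, vecs = np.linalg.eigh(H)
print(np.round(vals, 6))        # [-3.  1.  1.  1.]

# The eigenvector at -3 is the singlet (|01> - |10>)/sqrt(2), up to phase,
# in the basis |00>, |01>, |10>, |11>.
print(np.round(vecs[:, 0], 3))
```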
History of chemistry

The 1871 periodic table constructed by Dmitri Mendeleev. The periodic table is one of the most potent icons in science, lying at the core of chemistry and embodying the most fundamental principles of the field.

Ancient history[edit]

Early Metallurgy[edit]

Silver, copper, tin and meteoric iron can also be found native, allowing a limited amount of metalworking in ancient cultures.[3] Egyptian weapons made from meteoric iron in about 3000 BC were highly prized as "Daggers from Heaven".[4]

Arguably the first chemical reaction used in a controlled manner was fire. However, for millennia fire was seen simply as a mystical force that could transform one substance into another (burning wood, or boiling water) while producing heat and light. Fire affected many aspects of early societies. These ranged from the simplest facets of everyday life, such as cooking and habitat lighting, to more advanced technologies, such as pottery, bricks, and the melting of metals to make tools. It was fire that led to the discovery of glass and the purification of metals, which in turn gave way to the rise of metallurgy.[citation needed] During the early stages of metallurgy, methods of purification of metals were sought, and gold, known in ancient Egypt as early as 2900 BC, became a precious metal.

Bronze Age[edit]

Main article: Bronze Age

Certain metals can be recovered from their ores by simply heating the rocks in a fire: notably tin, lead and (at a higher temperature) copper, a process known as smelting. The first evidence of this extractive metallurgy dates from the 5th and 6th millennia BC, and was found in the archaeological sites of Majdanpek, Yarmovac and Plocnik, all three in Serbia. To date, the earliest copper smelting is found at the Belovode site;[5] examples include a copper axe from 5500 BC belonging to the Vinča culture.[6] Other signs of early metals are found from the third millennium BC in places like Palmela (Portugal), Los Millares (Spain), and Stonehenge (United Kingdom). However, as often happens with the study of prehistoric times, the ultimate beginnings cannot be clearly defined and new discoveries are continuous and ongoing.

These first metals were used singly, as found. By combining copper and tin, a superior metal could be made, an alloy called bronze, a major technological shift which began the Bronze Age about 3500 BC. The Bronze Age was a period in human cultural development when the most advanced metalworking (at least in systematic and widespread use) included techniques for smelting copper and tin from naturally occurring outcroppings of copper ores, and then alloying those metals to cast bronze. These naturally occurring ores typically included arsenic as a common impurity. Copper/tin ores are rare, as reflected in the fact that there were no tin bronzes in western Asia before 3000 BC.

After the Bronze Age, the history of metallurgy was marked by armies seeking better weaponry. Countries in Eurasia prospered when they made the superior alloys, which, in turn, made better armor and better weapons.[citation needed] This often determined the outcomes of battles.[citation needed] Significant progress in metallurgy and alchemy was made in ancient India.[7]

Iron Age[edit]

Main article: Iron Age

The extraction of iron from its ore into a workable metal is much more difficult than that of copper or tin. It appears to have been invented by the Hittites in about 1200 BC, beginning the Iron Age.
The secret of extracting and working iron was a key factor in the success of the Philistines.[4][8]

Classical antiquity and atomism[edit]

Main article: Atomism

Democritus, Greek philosopher of the atomistic school.

Philosophical attempts to rationalize why different substances have different properties (color, density, smell), exist in different states (gaseous, liquid, and solid), and react in different ways when exposed to environments, for example to water or fire or temperature changes, led ancient philosophers to postulate the first theories on nature and chemistry. The history of such philosophical theories that relate to chemistry can probably be traced back to every single ancient civilization. The common aspect in all these theories was the attempt to identify a small number of primary classical elements that make up all the various substances in nature. Substances like air, water, and soil/earth, energy forms such as fire and light, and more abstract concepts such as ideas, aether, and heaven, were common in ancient civilizations even in the absence of any cross-fertilization; for example, Greek, Indian, Mayan, and ancient Chinese philosophies all considered air, water, earth and fire as primary elements.[citation needed]

Ancient World[edit]

Around 420 BC, Empedocles stated that all matter is made up of four elemental substances—earth, fire, air and water. The early theory of atomism can be traced back to ancient Greece and ancient India.[11] Greek atomism dates back to the Greek philosopher Democritus, who declared around 380 BC that matter is composed of indivisible and indestructible atoms. Leucippus also declared that atoms were the most indivisible part of matter. This coincided with a similar declaration by the Indian philosopher Kanada in his Vaisheshika sutras around the same time period.[11] In much the same fashion he discussed the existence of gases. What Kanada declared by sutra, Democritus declared by philosophical musing. Both suffered from a lack of empirical data. Without scientific proof, the existence of atoms was easy to deny. Aristotle opposed the existence of atoms in 330 BC. Earlier, in 380 BC, a Greek text attributed to Polybus argued that the human body is composed of four humours. Around 300 BC, Epicurus postulated a universe of indestructible atoms in which man himself is responsible for achieving a balanced life.

With the goal of explaining Epicurean philosophy to a Roman audience, the Roman poet and philosopher Lucretius[12] wrote De Rerum Natura (The Nature of Things)[13] in 50 BC. In the work, Lucretius presents the principles of atomism; the nature of the mind and soul; explanations of sensation and thought; the development of the world and its phenomena; and explains a variety of celestial and terrestrial phenomena.

Much of the early development of purification methods is described by Pliny the Elder in his Naturalis Historia. He made attempts to explain those methods, as well as making acute observations of the state of many minerals.
Medieval alchemy[edit]

See also: Minima naturalia, a medieval Aristotelian concept analogous to atomism

The elemental system used in medieval alchemy was developed primarily by the Arabian alchemist Jābir ibn Hayyān and rooted in the classical elements of Greek tradition.[14] His system consisted of the four Aristotelian elements of air, earth, fire, and water, in addition to two philosophical elements: sulphur, characterizing the principle of combustibility, "the stone which burns", and mercury, characterizing the principle of metallic properties. They were seen by early alchemists as idealized expressions of irreducible components of the universe[15] and are of larger consideration within philosophical alchemy.

The three metallic principles (sulphur corresponding to flammability or combustion, mercury to volatility and stability, and salt to solidity) became the tria prima of the Swiss alchemist Paracelsus. He reasoned that Aristotle's four-element theory appeared in bodies as three principles. Paracelsus saw these principles as fundamental and justified them by recourse to the description of how wood burns in fire. Mercury included the cohesive principle, so that when it left in smoke the wood fell apart. Smoke described the volatility (the mercurial principle), the heat-giving flames described flammability (sulphur), and the remnant ash described solidity (salt).[16]

The philosopher's stone[edit]

Main article: Alchemy

"The Alchemist", by Sir William Douglas, 1855

Alchemy is defined by the Hermetic quest for the philosopher's stone, the study of which is steeped in symbolic mysticism, and differs greatly from modern science. Alchemists toiled to make transformations on an esoteric (spiritual) and/or exoteric (practical) level.[17] It was the protoscientific, exoteric aspects of alchemy that contributed heavily to the evolution of chemistry in Greco-Roman Egypt, the Islamic Golden Age, and then in Europe. Alchemy and chemistry share an interest in the composition and properties of matter, and prior to the eighteenth century were not separated into distinct disciplines. The term chymistry has been used to describe the blend of alchemy and chemistry that existed before this time.[18]

The earliest Western alchemists, who lived in the first centuries of the common era, invented chemical apparatus. The bain-marie, or water bath, is named for Mary the Jewess. Her work also gives the first descriptions of the tribikos and kerotakis.[19] Cleopatra the Alchemist described furnaces and has been credited with the invention of the alembic.[20] Later, the experimental framework established by Jabir ibn Hayyan influenced alchemists as the discipline migrated through the Islamic world, then to Europe in the twelfth century.

During the Renaissance, exoteric alchemy remained popular in the form of Paracelsian iatrochemistry, while spiritual alchemy flourished, realigned to its Platonic, Hermetic, and Gnostic roots. Consequently, the symbolic quest for the philosopher's stone was not superseded by scientific advances, and was still the domain of respected scientists and doctors until the early eighteenth century. Early modern alchemists who are renowned for their scientific contributions include Jan Baptist van Helmont, Robert Boyle, and Isaac Newton.

Problems encountered with alchemy[edit]

There were several problems with alchemy, as seen from today's standpoint.
There was no systematic naming scheme for new compounds, and the language was esoteric and vague to the point that the terminologies meant different things to different people. In fact, according to The Fontana History of Chemistry (Brock, 1992):

The language of alchemy soon developed an arcane and secretive technical vocabulary designed to conceal information from the uninitiated. To a large degree, this language is incomprehensible to us today, though it is apparent that readers of Geoffrey Chaucer's Canon's Yeoman's Tale or audiences of Ben Jonson's The Alchemist were able to construe it sufficiently to laugh at it.[21]

Chaucer's tale exposed the more fraudulent side of alchemy, especially the manufacture of counterfeit gold from cheap substances. Less than a century earlier, Dante Alighieri also demonstrated an awareness of this fraudulence, causing him to consign all alchemists to the Inferno in his writings. Soon after, in 1317, the Avignon Pope John XXII ordered all alchemists to leave France for making counterfeit money. A law was passed in England in 1403 which made the "multiplication of metals" punishable by death. Despite these and other apparently extreme measures, alchemy did not die. Royalty and privileged classes still sought to discover the philosopher's stone and the elixir of life for themselves.[22]

There was also no agreed-upon scientific method for making experiments reproducible. Indeed, many alchemists included in their methods irrelevant information such as the timing of the tides or the phases of the moon. The esoteric nature and codified vocabulary of alchemy appeared to be more useful in concealing the fact that they could not be sure of very much at all. As early as the 14th century, cracks seemed to grow in the facade of alchemy, and people became sceptical.[citation needed] Clearly, there needed to be a scientific method by which experiments could be repeated by other people, and results had to be reported in a clear language that laid out both what was known and what was unknown.

Alchemy in the Islamic World[edit]

Jābir ibn Hayyān (Geber), an Arabian alchemist whose experimental research laid the foundations of chemistry.
In the Islamic World, the Muslims were translating the works of the ancient Greeks and Egyptians into Arabic and were experimenting with scientific ideas.[23] The development of the modern scientific method was slow and arduous, but an early scientific method for chemistry began emerging among early Muslim chemists, beginning with the 9th-century chemist Jābir ibn Hayyān (known as "Geber" in Europe), who is considered "the father of chemistry".[24][25][26][27] He introduced a systematic and experimental approach to scientific research based in the laboratory, in contrast to the ancient Greek and Egyptian alchemists whose works were largely allegorical and often unintelligible.[28] He also invented and named the alembic (al-anbiq), chemically analyzed many chemical substances, composed lapidaries, distinguished between alkalis and acids, and manufactured hundreds of drugs.[29] He also refined the theory of five classical elements into the theory of seven alchemical elements after identifying mercury and sulfur as chemical elements.[30][verification needed]

Among other influential Muslim chemists, Abū al-Rayhān al-Bīrūnī,[31] Avicenna[32] and Al-Kindi refuted the theories of alchemy, particularly the theory of the transmutation of metals; and al-Tusi described a version of the conservation of mass, noting that a body of matter is able to change but is not able to disappear.[33] Rhazes refuted Aristotle's theory of four classical elements for the first time and set up the firm foundations of modern chemistry, using the laboratory in the modern sense, designing and describing more than twenty instruments, many parts of which are still in use today, such as a crucible, a cucurbit or retort for distillation, and the head of a still with a delivery tube (ambiq, Latin alembic), and various types of furnace or stove.[citation needed]

For practitioners in Europe, alchemy became an intellectual pursuit after early Arabic alchemy became available through Latin translation, and over time the discipline improved. Paracelsus (1493–1541), for example, rejected the 4-elemental theory and, with only a vague understanding of his chemicals and medicines, formed a hybrid of alchemy and science in what was to be called iatrochemistry. Paracelsus was not perfect in making his experiments truly scientific. For example, as an extension of his theory that new compounds could be made by combining mercury with sulfur, he once made what he thought was "oil of sulfur". This was actually dimethyl ether, which had neither mercury nor sulfur.[citation needed]

17th and 18th centuries: Early chemistry[edit]

Agricola, author of De re metallica

Practical attempts to improve the refining of ores and their extraction to smelt metals were an important source of information for early chemists in the 16th century, among them Georg Agricola (1494–1555), who published his great work De re metallica in 1556. His work describes the highly developed and complex processes of mining metal ores, metal extraction and the metallurgy of the time. His approach removed the mysticism associated with the subject, creating the practical base upon which others could build. The work describes the many kinds of furnace used to smelt ore, and stimulated interest in minerals and their composition. It is no coincidence that he gives numerous references to the earlier author Pliny the Elder and his Naturalis Historia.
Agricola has been described as the "father of metallurgy".[34]

In 1605, Sir Francis Bacon published The Proficience and Advancement of Learning, which contains a description of what would later be known as the scientific method.[35] In 1605, Michal Sedziwój published the alchemical treatise A New Light of Alchemy, which proposed the existence of the "food of life" within air, much later recognized as oxygen. In 1615 Jean Beguin published the Tyrocinium Chymicum, an early chemistry textbook, and in it drew the first-ever chemical equation.[36] In 1637 René Descartes published Discours de la méthode, which contains an outline of the scientific method.

The Dutch chemist Jan Baptist van Helmont's work Ortus medicinae was published posthumously in 1648; the book is cited by some as a major transitional work between alchemy and chemistry, and as an important influence on Robert Boyle. The book contains the results of numerous experiments and establishes an early version of the law of conservation of mass. Working during the time just after Paracelsus and iatrochemistry, Jan Baptist van Helmont suggested that there are insubstantial substances other than air and coined a name for them: "gas", from the Greek word chaos. In addition to introducing the word "gas" into the vocabulary of scientists, van Helmont conducted several experiments involving gases. Jan Baptist van Helmont is also remembered today largely for his ideas on spontaneous generation and his 5-year tree experiment, as well as being considered the founder of pneumatic chemistry.

Robert Boyle[edit]

Robert Boyle, one of the co-founders of modern chemistry through his use of proper experimentation, which further separated chemistry from alchemy

Title page from The Sceptical Chymist, 1661, Chemical Heritage Foundation

Anglo-Irish chemist Robert Boyle (1627–1691) is considered to have refined the modern scientific method for alchemy and to have separated chemistry further from alchemy.[37] Although his research clearly has its roots in the alchemical tradition, Boyle is largely regarded today as the first modern chemist, and therefore one of the founders of modern chemistry, and one of the pioneers of modern experimental scientific method. Although Boyle was not the original discoverer, he is best known for Boyle's law, which he presented in 1662:[38] the law describes the inversely proportional relationship between the absolute pressure and volume of a gas, if the temperature is kept constant within a closed system.[39][40]

Boyle also tried to purify chemicals to obtain reproducible reactions. He was a vocal proponent of the mechanical philosophy proposed by René Descartes to explain and quantify the physical properties and interactions of material substances. Boyle was an atomist, but favoured the word corpuscle over atom. He commented that the finest division of matter where the properties are retained is at the level of corpuscles. He also performed numerous investigations with an air pump, and noted that the mercury fell as air was pumped out. He also observed that pumping the air out of a container would extinguish a flame and kill small animals placed inside. Boyle helped to lay the foundations for the Chemical Revolution with his mechanical corpuscular philosophy.[41] Boyle repeated the tree experiment of van Helmont, and was the first to use indicators which changed colors with acidity.
Boyle also tried to purify chemicals to obtain reproducible reactions. He was a vocal proponent of the mechanical philosophy proposed by René Descartes to explain and quantify the physical properties and interactions of material substances. Boyle was an atomist, but favoured the word corpuscle over atoms. He commented that the finest division of matter where the properties are retained is at the level of corpuscles. He also performed numerous investigations with an air pump, and noted that the mercury fell as air was pumped out. He also observed that pumping the air out of a container would extinguish a flame and kill small animals placed inside. Boyle helped to lay the foundations for the Chemical Revolution with his mechanical corpuscular philosophy.[41] Boyle repeated the tree experiment of van Helmont, and was the first to use indicators which changed colors with acidity.

Development and dismantling of phlogiston[edit]

Joseph Priestley, co-discoverer of the element oxygen, which he called "dephlogisticated air"

In 1702, German chemist Georg Stahl coined the name "phlogiston" for the substance believed to be released in the process of burning. Around 1735, Swedish chemist Georg Brandt analyzed a dark blue pigment found in copper ore. Brandt demonstrated that the pigment contained a new element, later named cobalt. In 1751, a Swedish chemist and pupil of Stahl's named Axel Fredrik Cronstedt identified an impurity in copper ore as a separate metallic element, which he named nickel. Cronstedt is one of the founders of modern mineralogy.[42] Cronstedt also discovered the mineral scheelite in 1751, which he named tungsten, meaning "heavy stone" in Swedish. In 1754, Scottish chemist Joseph Black isolated carbon dioxide, which he called "fixed air".[43] In 1757, Louis Claude Cadet de Gassicourt, while investigating arsenic compounds, created Cadet's fuming liquid, later discovered to be cacodyl oxide, considered to be the first synthetic organometallic compound.[44] In 1758, Joseph Black formulated the concept of latent heat to explain the thermochemistry of phase changes.[45]

In 1766, English chemist Henry Cavendish isolated hydrogen, which he called "inflammable air". Cavendish discovered hydrogen as a colorless, odorless gas that burns and can form an explosive mixture with air, and published a paper on the production of water by burning inflammable air (that is, hydrogen) in dephlogisticated air (now known to be oxygen), the latter a constituent of atmospheric air (in the language of phlogiston theory). In 1773, Swedish chemist Carl Wilhelm Scheele discovered oxygen, which he called "fire air", but did not immediately publish his achievement.[46] In 1774, English chemist Joseph Priestley independently isolated oxygen in its gaseous state, calling it "dephlogisticated air", and published his work before Scheele.[47][48] During his lifetime, Priestley's considerable scientific reputation rested on his invention of soda water, his writings on electricity, and his discovery of several "airs" (gases), the most famous being what Priestley dubbed "dephlogisticated air" (oxygen). However, Priestley's determination to defend phlogiston theory and to reject what would become the chemical revolution eventually left him isolated within the scientific community.

In 1781, Carl Wilhelm Scheele discovered that a new acid, tungstic acid, could be made from Cronstedt's scheelite (at the time named tungsten). Scheele and Torbern Bergman suggested that it might be possible to obtain a new metal by reducing this acid.[49] In 1783, José and Fausto Elhuyar found an acid made from wolframite that was identical to tungstic acid. Later that year, in Spain, the brothers succeeded in isolating the metal now known as tungsten by reduction of this acid with charcoal, and they are credited with the discovery of the element.[50][51]

Volta and the Voltaic Pile[edit]

A voltaic pile on display in the Tempio Voltiano (the Volta Temple) near Volta's home in Como.

Italian physicist Alessandro Volta constructed a device for accumulating a large charge by a series of inductions and groundings. He investigated Luigi Galvani's 1780s discovery of "animal electricity", and found that the electric current was generated from the contact of dissimilar metals, and that the frog leg was only acting as a detector.
Volta demonstrated in 1794 that when two metals and brine-soaked cloth or cardboard are arranged in a circuit they produce an electric current. In 1800, Volta stacked several pairs of alternating copper (or silver) and zinc discs (electrodes) separated by cloth or cardboard soaked in brine (electrolyte) to increase the electrolyte conductivity.[52] When the top and bottom contacts were connected by a wire, an electric current flowed through the voltaic pile and the connecting wire. Thus, Volta is credited with constructing the first electrical battery to produce electricity. Volta's method of stacking round plates of copper and zinc separated by disks of cardboard moistened with salt solution was termed a voltaic pile. Thus, Volta is considered to be the founder of the discipline of electrochemistry.[53] A Galvanic cell (or voltaic cell) is an electrochemical cell that derives electrical energy from spontaneous redox reactions taking place within the cell. It generally consists of two different metals connected by a salt bridge, or individual half-cells separated by a porous membrane.
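As an illustration of such a spontaneous redox reaction (using the later Daniell cell, a standard textbook example rather than one of Volta's own piles), the overall process driving a galvanic cell can be written

\mathrm{Zn(s) + Cu^{2+}(aq) \rightarrow Zn^{2+}(aq) + Cu(s)}

with zinc being oxidized at one electrode and copper ions being reduced at the other.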
Antoine-Laurent de Lavoisier[edit]

Portrait of Monsieur Lavoisier and his wife, by Jacques-Louis David

Although the archives of chemical research draw upon work from ancient Babylonia, Egypt, and especially the Arabs and Persians after Islam, modern chemistry flourished from the time of Antoine-Laurent de Lavoisier, a French chemist who is celebrated as the "father of modern chemistry". Lavoisier demonstrated with careful measurements that transmutation of water to earth was not possible, but that the sediment observed from boiling water came from the container. He burnt phosphorus and sulfur in air, and proved that the products weighed more than the original samples; the weight gained, in turn, was lost from the air. Thus, in 1789, he established the Law of Conservation of Mass, which is also called "Lavoisier's Law."[54]

The world's first ice-calorimeter, used in the winter of 1782-83 by Antoine Lavoisier and Pierre-Simon Laplace to determine the heat involved in various chemical changes; calculations which were based on Joseph Black's prior discovery of latent heat. These experiments mark the foundation of thermochemistry.

Repeating the experiments of Priestley, he demonstrated that air is composed of two parts, one of which combines with metals to form calxes. In Considérations Générales sur la Nature des Acides (1778), he demonstrated that the "air" responsible for combustion was also the source of acidity. The next year, he named this portion oxygen (Greek for acid-former), and the other azote (Greek for no life). Lavoisier thus has a claim to the discovery of oxygen along with Priestley and Scheele. He also discovered that the "inflammable air" discovered by Cavendish - which he termed hydrogen (Greek for water-former) - combined with oxygen to produce a dew, as Priestley had reported, which appeared to be water. In Reflexions sur le Phlogistique (1783), Lavoisier showed the phlogiston theory of combustion to be inconsistent. Mikhail Lomonosov independently established a tradition of chemistry in Russia in the 18th century. Lomonosov also rejected the phlogiston theory, and anticipated the kinetic theory of gases. Lomonosov regarded heat as a form of motion, and stated the idea of conservation of matter.

Lavoisier worked with Claude Louis Berthollet and others to devise a system of chemical nomenclature which serves as the basis of the modern system of naming chemical compounds. In his Method of Chemical Nomenclature (1787), Lavoisier invented the system of naming and classification still largely in use today, including names such as sulfuric acid, sulfates, and sulfites.

In 1785, Berthollet was the first to introduce the use of chlorine gas as a commercial bleach. In the same year he first determined the elemental composition of the gas ammonia. Berthollet first produced a modern bleaching liquid in 1789 by passing chlorine gas through a solution of sodium carbonate - the result was a weak solution of sodium hypochlorite. He was also the first to investigate and produce another strong chlorine oxidant and bleach, potassium chlorate (KClO3), known as Berthollet's Salt. Berthollet is also known for his scientific contributions to the theory of chemical equilibria via the mechanism of reverse chemical reactions.

Traité élémentaire de chimie

Lavoisier's Traité Élémentaire de Chimie (Elementary Treatise of Chemistry, 1789) was the first modern chemical textbook, and presented a unified view of new theories of chemistry, contained a clear statement of the Law of Conservation of Mass, and denied the existence of phlogiston. In addition, it contained a list of elements, or substances that could not be broken down further, which included oxygen, nitrogen, hydrogen, phosphorus, mercury, zinc, and sulfur. His list, however, also included light and caloric, which he believed to be material substances. In the work, Lavoisier underscored the observational basis of his chemistry, stating "I have tried...to arrive at the truth by linking up facts; to suppress as much as possible the use of reasoning, which is often an unreliable instrument which deceives us, in order to follow as much as possible the torch of observation and of experiment." Nevertheless, he believed that the real existence of atoms was philosophically impossible. Lavoisier demonstrated that organisms disassemble and reconstitute atmospheric air in the same manner as a burning body.

With Pierre-Simon Laplace, Lavoisier used a calorimeter to estimate the heat evolved per unit of carbon dioxide produced. They found the same ratio for a flame and animals, indicating that animals produced energy by a type of combustion. Lavoisier believed in the radical theory, believing that radicals, which function as a single group in a chemical reaction, would combine with oxygen in reactions. He believed all acids contained oxygen. He also discovered that diamond is a crystalline form of carbon.

While many of Lavoisier's partners were influential for the advancement of chemistry as a scientific discipline, his wife Marie-Anne Lavoisier was arguably the most influential of them all. Upon their marriage, Mme. Lavoisier began to study chemistry, English, and drawing in order to help her husband in his work either by translating papers into English, a language which Lavoisier did not know, or by keeping records and drawing the various apparatuses that Lavoisier used in his labs.[55] Through her ability to read and translate articles from Britain for her husband, Lavoisier had access to knowledge of many of the chemical advances happening outside of his lab.[55] Furthermore, Mme. Lavoisier kept records of Lavoisier's work and ensured that his works were published.[55] The first sign of Marie-Anne's true potential as a chemist in Lavoisier's lab came when she was translating a book by the scientist Richard Kirwan. While translating, she stumbled upon and corrected multiple errors.
When she presented her translation, along with her notes, to Lavoisier, her edits and contributions led to Lavoisier's refutation of the theory of phlogiston.[55]

Lavoisier made many fundamental contributions to the science of chemistry. Following Lavoisier's work, chemistry acquired a strict quantitative nature, allowing reliable predictions to be made. The revolution in chemistry which he brought about was a result of a conscious effort to fit all experiments into the framework of a single theory. He established the consistent use of the chemical balance, used oxygen to overthrow the phlogiston theory, and developed a new system of chemical nomenclature. Lavoisier was beheaded during the French Revolution.

19th century[edit]

In 1802, French American chemist and industrialist Éleuthère Irénée du Pont, who learned the manufacture of gunpowder and explosives under Antoine Lavoisier, founded a gunpowder manufacturer in Delaware known as E. I. du Pont de Nemours and Company. The French Revolution forced his family to move to the United States, where du Pont started a gunpowder mill on the Brandywine River in Delaware. Wanting to make the best powder possible, du Pont was vigilant about the quality of the materials he used. For 32 years, du Pont served as president of E. I. du Pont de Nemours and Company, which eventually grew into one of the largest and most successful companies in America.

Throughout the 19th century, chemistry was divided between those who followed the atomic theory of John Dalton and those who did not, such as Wilhelm Ostwald and Ernst Mach.[56] Although such proponents of the atomic theory as Amedeo Avogadro and Ludwig Boltzmann made great advances in explaining the behavior of gases, this dispute was not finally settled until Jean Perrin's experimental investigation of Einstein's atomic explanation of Brownian motion in the first decade of the 20th century.[56]

Well before the dispute had been settled, many had already applied the concept of atomism to chemistry. A major example was the ion theory of Svante Arrhenius, which anticipated ideas about atomic substructure that did not fully develop until the 20th century. Michael Faraday was another early worker, whose major contribution to chemistry was electrochemistry, in which (among other things) a certain quantity of electricity during electrolysis or electrodeposition of metals was shown to be associated with certain quantities of chemical elements, and fixed quantities of the elements therefore with each other, in specific ratios.[citation needed] These findings, like those of Dalton's combining ratios, were early clues to the atomic nature of matter.

John Dalton[edit]

John Dalton is remembered for his work on partial pressures in gases, color blindness, and atomic theory

Main articles: John Dalton and Atomic theory

In 1803, English meteorologist and chemist John Dalton proposed Dalton's law, which describes the relationship between the components in a mixture of gases and the relative pressure each contributes to that of the overall mixture.[57] Discovered in 1801, this concept is also known as Dalton's law of partial pressures.
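In modern notation, Dalton's law of partial pressures states that the total pressure of a mixture of non-reacting gases is the sum of the pressures each component would exert if it occupied the container alone:

P_{total} = P_1 + P_2 + \cdots + P_n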
Dalton also proposed a modern atomic theory in 1803 which stated that all matter was composed of small indivisible particles termed atoms; that atoms of a given element possess unique characteristics and weight; and that three types of atoms exist: simple (elements), compound (simple molecules), and complex (complex molecules).

In 1808, Dalton first published New System of Chemical Philosophy (1808-1827), in which he outlined the first modern scientific description of the atomic theory. This work identified chemical elements as a specific type of atom, therefore rejecting Newton's theory of chemical affinities. Instead, Dalton inferred proportions of elements in compounds by taking ratios of the weights of reactants, setting the atomic weight of hydrogen to be identically one. Following Jeremias Benjamin Richter (known for introducing the term stoichiometry), he proposed that chemical elements combine in integral ratios. This is known as the law of multiple proportions or Dalton's law, and Dalton included a clear description of the law in his New System of Chemical Philosophy (a worked example is given below). The law of multiple proportions is one of the basic laws of stoichiometry used to establish the atomic theory. Despite the importance of the work as the first view of atoms as physically real entities and its introduction of a system of chemical symbols, New System of Chemical Philosophy devoted almost as much space to the caloric theory as to atomism.

French chemist Joseph Proust proposed the law of definite proportions, which states that a given chemical compound always contains its constituent elements in the same fixed proportions by mass, based on several experiments conducted between 1797 and 1804.[58] Along with the law of multiple proportions, the law of definite proportions forms the basis of stoichiometry. The laws of definite proportions and constant composition do not prove that atoms exist, but they are difficult to explain without assuming that chemical compounds are formed when atoms combine in constant proportions.
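As the worked example promised above (a standard modern illustration, not Dalton's own figures): carbon forms two oxides, and against a fixed 12 g of carbon, carbon monoxide contains 16 g of oxygen while carbon dioxide contains 32 g. The oxygen masses therefore stand in the simple integral ratio

\frac{32}{16} = 2

exactly as the law of multiple proportions requires.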
Jöns Jacob Berzelius[edit]

Jöns Jacob Berzelius, the chemist who worked out the modern technique of chemical formula notation and is considered one of the fathers of modern chemistry

Main article: Jöns Jacob Berzelius

A Swedish chemist and disciple of Dalton, Jöns Jacob Berzelius embarked on a systematic program to try to make accurate and precise quantitative measurements and to ensure the purity of chemicals. Along with Lavoisier, Boyle, and Dalton, Berzelius is known as a father of modern chemistry. In 1828 he compiled a table of relative atomic weights, in which oxygen was set to exactly 100 as the standard, and which included all of the elements known at the time; in all, he measured the weights of 43 elements. This work provided evidence in favor of Dalton's atomic theory: that inorganic chemical compounds are composed of atoms combined in whole number amounts. He determined the exact elementary constituents of large numbers of compounds; the results strongly confirmed Proust's Law of Definite Proportions. In discovering that atomic weights are not integer multiples of the weight of hydrogen, Berzelius also disproved Prout's hypothesis that elements are built up from atoms of hydrogen.

Motivated by his extensive atomic weight determinations and in a desire to aid his experiments, he introduced the classical system of chemical symbols and notation with his 1808 publication of Lärbok i Kemien, in which elements are abbreviated by one or two letters to create distinct abbreviations based on their Latin names. This system of chemical notation—in which the elements were given simple written labels, such as O for oxygen, or Fe for iron, with proportions noted by numbers—is the same basic system used today. The only difference is that instead of the subscript numbers used today (e.g., H2O), Berzelius used superscripts (H2O).

Berzelius is credited with identifying the chemical elements silicon, selenium, thorium, and cerium. Students working in Berzelius's laboratory also discovered lithium and vanadium. Berzelius developed the radical theory of chemical combination, which holds that reactions occur as stable groups of atoms called radicals are exchanged between molecules. He believed that salts are compounds of acids and bases, and discovered that the anions in acids would be attracted to a positive electrode (the anode), whereas the cations in a base would be attracted to a negative electrode (the cathode). Berzelius did not believe in the vitalism theory, but instead in a regulative force which produced organization of tissues in an organism. Berzelius is also credited with originating the chemical terms "catalysis", "polymer", "isomer", and "allotrope", although his original definitions differ dramatically from modern usage. For example, he coined the term "polymer" in 1833 to describe organic compounds which shared identical empirical formulas but which differed in overall molecular weight, the larger of the compounds being described as "polymers" of the smallest. By this long-superseded, pre-structural definition, glucose (C6H12O6) was viewed as a polymer of formaldehyde (CH2O).

New elements and gas laws[edit]

Humphry Davy, the discoverer of several alkali and alkaline earth metals, who also contributed to establishing the elemental nature of chlorine and iodine.

Main article: Humphry Davy

English chemist Humphry Davy was a pioneer in the field of electrolysis, using Alessandro Volta's voltaic pile to split up common compounds and thus isolate a series of new elements. He went on to electrolyse molten salts and discovered several new metals, especially sodium and potassium, highly reactive elements known as the alkali metals. Potassium, the first metal that was isolated by electrolysis, was discovered in 1807 by Davy, who derived it from caustic potash (KOH). Before the 19th century, no distinction was made between potassium and sodium. Sodium was first isolated by Davy in the same year by passing an electric current through molten sodium hydroxide (NaOH). When Davy heard that Berzelius and Pontin prepared calcium amalgam by electrolyzing lime in mercury, he tried it himself. Davy was successful, and discovered calcium in 1808 by electrolyzing a mixture of lime and mercuric oxide.[59][60] He worked with electrolysis throughout his life and, in 1808, he isolated magnesium, strontium[61] and barium.[62]

Davy also experimented with gases by inhaling them. This experimental procedure nearly proved fatal on several occasions, but led to the discovery of the unusual effects of nitrous oxide, which came to be known as laughing gas. Chlorine was discovered in 1774 by Swedish chemist Carl Wilhelm Scheele, who called it "dephlogisticated marine acid" (see phlogiston theory) and mistakenly thought it contained oxygen. Scheele observed several properties of chlorine gas, such as its bleaching effect on litmus, its deadly effect on insects, its yellow-green colour, and the similarity of its smell to that of aqua regia. However, Scheele was unable to publish his findings at the time.
In 1810, chlorine was given its current name by Humphry Davy (derived from the Greek word for green), who insisted that chlorine was in fact an element.[63] He also showed that oxygen could not be obtained from the substance known as oxymuriatic acid (as chlorine was then known, being regarded as a compound containing oxygen). This discovery overturned Lavoisier's definition of acids as compounds of oxygen. Davy was a popular lecturer and able experimenter.

Joseph Louis Gay-Lussac, who stated that the ratio between the volumes of the reactant gases and the products can be expressed in simple whole numbers.

French chemist Joseph Louis Gay-Lussac shared the interest of Lavoisier and others in the quantitative study of the properties of gases. From his first major program of research in 1801–1802, he concluded that equal volumes of all gases expand equally with the same increase in temperature: this conclusion is usually called "Charles's law", as Gay-Lussac gave credit to Jacques Charles, who had arrived at nearly the same conclusion in the 1780s but had not published it.[64] The law was independently discovered by British natural philosopher John Dalton by 1801, although Dalton's description was less thorough than Gay-Lussac's.[65][66] In 1804 Gay-Lussac made several daring ascents of over 7,000 meters above sea level in hydrogen-filled balloons—a feat not equaled for another 50 years—that allowed him to investigate other aspects of gases. Not only did he gather magnetic measurements at various altitudes, but he also took pressure, temperature, and humidity measurements and samples of air, which he later analyzed chemically.

In 1808 Gay-Lussac announced what was probably his single greatest achievement: from his own and others' experiments he deduced that gases at constant temperature and pressure combine in simple numerical proportions by volume, and the resulting product or products—if gases—also bear a simple proportion by volume to the volumes of the reactants. In other words, gases under equal conditions of temperature and pressure react with one another in volume ratios of small whole numbers. This conclusion subsequently became known as "Gay-Lussac's law" or the "Law of Combining Volumes". With his fellow professor at the École Polytechnique, Louis Jacques Thénard, Gay-Lussac also participated in early electrochemical research, investigating the elements discovered by its means. Among other achievements, they decomposed boric acid by using fused potassium, thus discovering the element boron. The two also took part in contemporary debates that modified Lavoisier's definition of acids and furthered his program of analyzing organic compounds for their oxygen and hydrogen content.

The element iodine was discovered by French chemist Bernard Courtois in 1811.[67][68] Courtois gave samples to his friends, Charles Bernard Desormes (1777–1862) and Nicolas Clément (1779–1841), to continue research. He also gave some of the substance to Gay-Lussac and to physicist André-Marie Ampère. On December 6, 1813, Gay-Lussac announced that the new substance was either an element or a compound of oxygen.[69][70][71] It was Gay-Lussac who suggested the name "iode", from the Greek word ιώδες (iodes) for violet (because of the color of iodine vapor).[67][69] Ampère had given some of his sample to Humphry Davy.
Davy did some experiments on the substance and noted its similarity to chlorine.[72] Davy sent a letter dated December 10 to the Royal Society of London stating that he had identified a new element.[73] Arguments erupted between Davy and Gay-Lussac over who identified iodine first, but both scientists acknowledged Courtois as the first to isolate the element.

In 1815, Humphry Davy invented the Davy lamp, which allowed miners within coal mines to work safely in the presence of flammable gases. There had been many mining explosions caused by firedamp or methane often ignited by open flames of the lamps then used by miners. Davy conceived of using an iron gauze to enclose a lamp's flame, and so prevent the methane burning inside the lamp from passing out to the general atmosphere. Although the idea of the safety lamp had already been demonstrated by William Reid Clanny and by the then unknown (but later very famous) engineer George Stephenson, Davy's use of wire gauze to prevent the spread of flame was used by many other inventors in their later designs. There was some discussion as to whether Davy had discovered the principles behind his lamp without the help of the work of Smithson Tennant, but it was generally agreed that the work of both men had been independent. Davy refused to patent the lamp, and its invention led to him being awarded the Rumford medal in 1816.[74]

Amedeo Avogadro, who postulated that, under controlled conditions of temperature and pressure, equal volumes of gases contain an equal number of molecules. This is known as Avogadro's law.

Main articles: Amedeo Avogadro and Avogadro's law

After Dalton published his atomic theory in 1808, certain of his central ideas were soon adopted by most chemists. However, uncertainty persisted for half a century about how atomic theory was to be configured and applied to concrete situations; chemists in different countries developed several different incompatible atomistic systems. A paper that suggested a way out of this difficult situation was published as early as 1811 by the Italian physicist Amedeo Avogadro (1776-1856), who hypothesized that equal volumes of gases at the same temperature and pressure contain equal numbers of molecules, from which it followed that the relative molecular weights of any two gases are the same as the ratio of the densities of the two gases under the same conditions of temperature and pressure. Avogadro also reasoned that simple gases were not formed of solitary atoms but were instead compound molecules of two or more atoms. Thus Avogadro was able to overcome the difficulty that Dalton and others had encountered when Gay-Lussac reported that above 100 °C the volume of water vapor was twice the volume of the oxygen used to form it. According to Avogadro, the molecule of oxygen had split into two atoms in the course of forming water vapor.

Avogadro's hypothesis was neglected for half a century after it was first published. Many reasons for this neglect have been cited, including some theoretical problems, such as Jöns Jacob Berzelius's "dualism", which asserted that compounds are held together by the attraction of positive and negative electrical charges, making it inconceivable that a molecule composed of two electrically similar atoms—as in oxygen—could exist. An additional barrier to acceptance was the fact that many chemists were reluctant to adopt physical methods (such as vapour-density determinations) to solve their problems.
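To make Avogadro's resolution of Gay-Lussac's observation concrete (in modern notation, which neither man used): the formation of water vapor is

\mathrm{2\,H_2 + O_2 \rightarrow 2\,H_2O}

so two volumes of hydrogen combine with one volume of oxygen to give two volumes of steam, a 2:1:2 ratio possible only because each oxygen molecule splits into two atoms shared between two water molecules. Avogadro's hypothesis also implies that the ratio of the molecular weights of two gases equals the ratio of their densities at the same temperature and pressure:

\frac{M_1}{M_2} = \frac{\rho_1}{\rho_2}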
By mid-century, however, some leading figures had begun to view the chaotic multiplicity of competing systems of atomic weights and molecular formulas as intolerable. Moreover, purely chemical evidence began to mount that suggested Avogadro's approach might be right after all. During the 1850s, younger chemists, such as Alexander Williamson in England, Charles Gerhardt and Charles-Adolphe Wurtz in France, and August Kekulé in Germany, began to advocate reforming theoretical chemistry to make it consistent with Avogadrian theory.

Wöhler and the vitalism debate[edit]

Structural formula of urea

In 1825, Friedrich Wöhler and Justus von Liebig performed the first confirmed discovery and explanation of isomers, earlier named by Berzelius. Working with cyanic acid and fulminic acid, they correctly deduced that isomerism was caused by differing arrangements of atoms within a molecular structure. In 1827, William Prout classified biomolecules into their modern groupings: carbohydrates, proteins and lipids. After the nature of combustion was settled, another dispute, about vitalism and the essential distinction between organic and inorganic substances, began. The vitalism question was revolutionized in 1828 when Friedrich Wöhler synthesized urea, thereby establishing that organic compounds could be produced from inorganic starting materials and disproving the theory of vitalism. Never before had an organic compound been synthesized from inorganic material.[citation needed]

This opened a new research field in chemistry, and by the end of the 19th century, scientists were able to synthesize hundreds of organic compounds. The most important among them are mauve, magenta, and other synthetic dyes, as well as the widely used drug aspirin. The discovery of the artificial synthesis of urea contributed greatly to the theory of isomerism, as the empirical chemical formulas for urea and ammonium cyanate are identical (see Wöhler synthesis). In 1832, Friedrich Wöhler and Justus von Liebig discovered and explained functional groups and radicals in relation to organic chemistry, and first synthesized benzaldehyde. Liebig, a German chemist, made major contributions to agricultural and biological chemistry, and worked on the organization of organic chemistry. Liebig is considered the "father of the fertilizer industry" for his discovery of nitrogen as an essential plant nutrient, and his formulation of the Law of the Minimum, which described the effect of individual nutrients on crops.

In 1840, Germain Hess proposed Hess's law, an early statement of the law of conservation of energy, which establishes that energy changes in a chemical process depend only on the states of the starting and product materials and not on the specific pathway taken between the two states. In 1847, Hermann Kolbe obtained acetic acid from completely inorganic sources, further disproving vitalism. In 1848, William Thomson, 1st Baron Kelvin (commonly known as Lord Kelvin) established the concept of absolute zero, the temperature at which all molecular motion ceases. In 1849, Louis Pasteur discovered that the racemic form of tartaric acid is a mixture of the levorotatory and dextrorotatory forms, thus clarifying the nature of optical rotation and advancing the field of stereochemistry.[75] In 1852, August Beer proposed Beer's law, which relates the amount of light a solution absorbs to the concentration of the absorbing substance and the length of the light path through it.
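In its modern form (the Beer–Lambert law, stated here in present-day notation), the law gives the absorbance A of a dilute solution as

A = \varepsilon \ell c

where \varepsilon is the molar absorptivity of the absorbing species, \ell the optical path length, and c the concentration.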
Based partly on earlier work by Pierre Bouguer and Johann Heinrich Lambert, Beer's law established the analytical technique known as spectrophotometry.[76] In 1855, Benjamin Silliman, Jr. pioneered methods of petroleum cracking, which made the entire modern petrochemical industry possible.[77]

Formulas of acetic acid given by August Kekulé in 1861.

Avogadro's hypothesis began to gain broad appeal among chemists only after his compatriot and fellow scientist Stanislao Cannizzaro demonstrated its value in 1858, two years after Avogadro's death. Cannizzaro's chemical interests had originally centered on natural products and on reactions of aromatic compounds; in 1853 he discovered that when benzaldehyde is treated with concentrated base, both benzoic acid and benzyl alcohol are produced—a phenomenon known today as the Cannizzaro reaction. In his 1858 pamphlet, Cannizzaro showed that a complete return to the ideas of Avogadro could be used to construct a consistent and robust theoretical structure that fit nearly all of the available empirical evidence. For instance, he pointed to evidence that suggested that not all elementary gases consist of two atoms per molecule—some were monatomic, most were diatomic, and a few were even more complex.

Another point of contention had been the formulas for compounds of the alkali metals (such as sodium) and the alkaline earth metals (such as calcium), which, in view of their striking chemical analogies, most chemists had wanted to assign to the same formula type. Cannizzaro argued that placing these metals in different categories had the beneficial result of eliminating certain anomalies when using their physical properties to deduce atomic weights. Unfortunately, Cannizzaro's pamphlet was published initially only in Italian and had little immediate impact. The real breakthrough came with an international chemical congress held in the German town of Karlsruhe in September 1860, at which most of the leading European chemists were present. The Karlsruhe Congress had been arranged by Kekulé, Wurtz, and a few others who shared Cannizzaro's sense of the direction chemistry should go. Speaking in French (as everyone there did), Cannizzaro's eloquence and logic made an indelible impression on the assembled body. Moreover, his friend Angelo Pavesi distributed Cannizzaro's pamphlet to attendees at the end of the meeting; more than one chemist later wrote of the decisive impression the reading of this document provided. For instance, Lothar Meyer later wrote that on reading Cannizzaro's paper, "The scales seemed to fall from my eyes."[78] Cannizzaro thus played a crucial role in winning the battle for reform. The system advocated by him, and soon thereafter adopted by most leading chemists, is substantially identical to what is still used today.

Perkin, Crookes, and Nobel[edit]

In 1856, Sir William Henry Perkin, age 18, challenged by his professor, August Wilhelm von Hofmann, sought to synthesize quinine, the anti-malarial drug, from coal tar. In one attempt, Perkin oxidized aniline using potassium dichromate, whose toluidine impurities reacted with the aniline and yielded a black solid—suggesting a "failed" organic synthesis. Cleaning the flask with alcohol, Perkin noticed purple portions of the solution: a byproduct of the attempt was the first synthetic dye, known as mauveine or Perkin's mauve. Perkin's discovery is the foundation of the dye synthesis industry, one of the earliest successful chemical industries.
German chemist August Kekulé von Stradonitz's most important single contribution was his structural theory of organic composition, outlined in two articles published in 1857 and 1858 and treated in great detail in the pages of his extraordinarily popular Lehrbuch der organischen Chemie ("Textbook of Organic Chemistry"), the first installment of which appeared in 1859 and gradually extended to four volumes. Kekulé argued that tetravalent carbon atoms - that is, carbon forming exactly four chemical bonds - could link together to form what he called a "carbon chain" or a "carbon skeleton," to which other atoms with other valences (such as hydrogen, oxygen, nitrogen, and chlorine) could join. He was convinced that it was possible for the chemist to specify this detailed molecular architecture for at least the simpler organic compounds known in his day. Kekulé was not the only chemist to make such claims in this era. The Scottish chemist Archibald Scott Couper published a substantially similar theory nearly simultaneously, and the Russian chemist Aleksandr Butlerov did much to clarify and expand structure theory. However, it was predominantly Kekulé's ideas that prevailed in the chemical community.

A Crookes tube (2 views): light and dark. Electrons travel in straight lines from the cathode (left), as evidenced by the shadow cast from the Maltese cross on the fluorescence of the right-hand end. The anode is at the bottom wire.

British chemist and physicist William Crookes is noted for his cathode ray studies, fundamental in the development of atomic physics. His researches on electrical discharges through a rarefied gas led him to observe the dark space around the cathode, now called the Crookes dark space. He demonstrated that cathode rays travel in straight lines and produce phosphorescence and heat when they strike certain materials. A pioneer of vacuum tubes, Crookes invented the Crookes tube - an early experimental discharge tube, with a partial vacuum, with which he studied the behavior of cathode rays. With the introduction of spectrum analysis by Robert Bunsen and Gustav Kirchhoff (1859-1860), Crookes applied the new technique to the study of selenium compounds. Bunsen and Kirchhoff had previously used spectroscopy as a means of chemical analysis to discover caesium and rubidium. In 1861, Crookes used this process to discover thallium in some seleniferous deposits. He continued work on that new element, isolated it, studied its properties, and in 1873 determined its atomic weight. During his studies of thallium, Crookes discovered the principle of the Crookes radiometer, a device that converts light radiation into rotary motion. The principle of this radiometer has found numerous applications in the development of sensitive measuring instruments.

In 1862, Alexander Parkes exhibited Parkesine, one of the earliest synthetic polymers, at the International Exhibition in London. This discovery formed the foundation of the modern plastics industry. In 1864, Cato Maximilian Guldberg and Peter Waage, building on Claude Louis Berthollet's ideas, proposed the law of mass action. In 1865, Johann Josef Loschmidt estimated the number of molecules in a given volume of gas, a determination closely related to the constant later named Avogadro's number. In 1865, August Kekulé, based partially on the work of Loschmidt and others, established the structure of benzene as a six-carbon ring with alternating single and double bonds. Kekulé's novel proposal for benzene's cyclic structure was much contested but was never replaced by a superior theory.
This theory provided the scientific basis for the dramatic expansion of the German chemical industry in the last third of the 19th century. Today, a large proportion of known organic compounds are aromatic, all of them containing at least one hexagonal benzene ring of the sort that Kekulé advocated. Kekulé is also famous for having clarified the nature of aromatic compounds, which are compounds based on the benzene molecule. In 1865, Adolf von Baeyer began work on indigo dye, a milestone in modern industrial organic chemistry which revolutionized the dye industry.

Swedish chemist and inventor Alfred Nobel found that when nitroglycerin was incorporated in an absorbent inert substance like kieselguhr (diatomaceous earth) it became safer and more convenient to handle, and this mixture he patented in 1867 as dynamite. Nobel later combined nitroglycerin with various nitrocellulose compounds, similar to collodion, but settled on a more efficient recipe combining another nitrate explosive, and obtained a transparent, jelly-like substance, which was a more powerful explosive than dynamite. Gelignite, or blasting gelatin, as it was named, was patented in 1876, and was followed by a host of similar combinations, modified by the addition of potassium nitrate and various other substances.

Mendeleev's periodic table[edit]

Dmitri Mendeleev, responsible for organizing the known chemical elements in a periodic table.

An important breakthrough in making sense of the list of known chemical elements (as well as in understanding the internal structure of atoms) was Dmitri Mendeleev's development of the first modern periodic table, or the periodic classification of the elements. Mendeleev, a Russian chemist, felt that there was some type of order to the elements, and he spent more than thirteen years of his life collecting data and assembling the concept, initially with the idea of resolving some of the disorder in the field for his students. Mendeleev found that, when all the known chemical elements were arranged in order of increasing atomic weight, the resulting table displayed a recurring pattern, or periodicity, of properties within groups of elements. Mendeleev's law allowed him to build up a systematic periodic table of all the 66 elements then known based on atomic mass, which he published in Principles of Chemistry in 1869. His first Periodic Table was compiled on the basis of arranging the elements in ascending order of atomic weight and grouping them by similarity of properties.

Mendeleev had such faith in the validity of the periodic law that he proposed changes to the generally accepted values for the atomic weight of a few elements and, in his version of the periodic table of 1871, predicted the locations within the table of unknown elements together with their properties. He even predicted the likely properties of three yet-to-be-discovered elements, which he called ekaboron (Eb), ekaaluminium (Ea), and ekasilicon (Es); these proved to be good predictors of the properties of scandium, gallium, and germanium, respectively, each of which filled the spot in the periodic table assigned by Mendeleev. At first the periodic system did not raise interest among chemists. However, with the discovery of the predicted elements, notably gallium in 1875, scandium in 1879, and germanium in 1886, it began to win wide acceptance. The subsequent proof of many of his predictions within his lifetime brought fame to Mendeleev as the founder of the periodic law.
This organization surpassed earlier attempts at classification by Alexandre-Émile Béguyer de Chancourtois, who published the telluric helix, an early, three-dimensional version of the periodic table of the elements, in 1862; John Newlands, who proposed the law of octaves (a precursor to the periodic law) in 1864; and Lothar Meyer, who developed an early version of the periodic table with 28 elements organized by valence in 1864. Mendeleev's table did not include any of the noble gases, however, which had not yet been discovered. Gradually the periodic law and table became the framework for a great part of chemical theory. By the time Mendeleev died in 1907, he enjoyed international recognition and had received distinctions and awards from many countries.

In 1873, Jacobus Henricus van 't Hoff and Joseph Achille Le Bel, working independently, developed a model of chemical bonding that explained the chirality experiments of Pasteur and provided a physical cause for optical activity in chiral compounds.[79] van 't Hoff's publication, called Voorstel tot Uitbreiding der Tegenwoordige in de Scheikunde gebruikte Structuurformules in de Ruimte, etc. (Proposal for the development of 3-dimensional chemical structural formulae) and consisting of twelve pages of text and one page of diagrams, gave the impetus to the development of stereochemistry. The concept of the "asymmetrical carbon atom", dealt with in this publication, supplied an explanation of the occurrence of numerous isomers, inexplicable by means of the then current structural formulae. At the same time he pointed out the existence of a relationship between optical activity and the presence of an asymmetrical carbon atom.

Josiah Willard Gibbs[edit]

J. Willard Gibbs formulated a concept of thermodynamic equilibrium of a system in terms of energy and entropy. He also did extensive work on chemical equilibrium, and equilibria between phases.

American mathematical physicist J. Willard Gibbs's work on the applications of thermodynamics was instrumental in transforming physical chemistry into a rigorous deductive science. During the years from 1876 to 1878, Gibbs worked on the principles of thermodynamics, applying them to the complex processes involved in chemical reactions. He discovered the concept of chemical potential, or the "fuel" that makes chemical reactions work. In 1876 he published his most famous contribution, "On the Equilibrium of Heterogeneous Substances", a compilation of his work on thermodynamics and physical chemistry which laid out the concept of free energy to explain the physical basis of chemical equilibria.[80] In these essays were the beginnings of Gibbs's theories of phases of matter: he considered each state of matter a phase, and each substance a component. Gibbs took the variables involved in a chemical reaction - temperature, pressure, energy, volume, and entropy - and related them in one simple rule known as the Gibbs phase rule. Within this paper was perhaps his most outstanding contribution, the introduction of the concept of free energy, now universally called Gibbs free energy in his honor. The Gibbs free energy relates the tendency of a physical or chemical system to simultaneously lower its energy and increase its disorder, or entropy, in a spontaneous natural process. Gibbs's approach allows a researcher to calculate the change in free energy for a process, such as a chemical reaction, and thus to predict whether it can occur spontaneously.
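In modern notation (stated here for illustration, not in Gibbs's original symbols), the phase rule relates the number of degrees of freedom F of a system at equilibrium to its number of components C and coexisting phases P,

F = C - P + 2

while the Gibbs free energy combines energy and entropy in a single state function,

G = H - TS

so that a process at constant temperature and pressure can proceed spontaneously only if \Delta G = \Delta H - T\Delta S is negative.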
Since virtually all chemical processes and many physical ones involve such changes, his work has significantly impacted both the theoretical and experimental aspects of these sciences. In 1877, Ludwig Boltzmann established statistical derivations of many important physical and chemical concepts, including entropy and the distribution of molecular velocities in the gas phase.[81] Together with Boltzmann and James Clerk Maxwell, Gibbs created a new branch of theoretical physics called statistical mechanics (a term that he coined), explaining the laws of thermodynamics as consequences of the statistical properties of large ensembles of particles. Gibbs also worked on the application of Maxwell's equations to problems in physical optics. Gibbs's derivation of the phenomenological laws of thermodynamics from the statistical properties of systems with many particles was presented in his highly influential textbook Elementary Principles in Statistical Mechanics, published in 1902, a year before his death. In that work, Gibbs reviewed the relationship between the laws of thermodynamics and the statistical theory of molecular motions. The overshooting of the original function by partial sums of a Fourier series at points of discontinuity is known as the Gibbs phenomenon.

Late 19th century[edit]

German engineer Carl von Linde's invention of a continuous process of liquefying gases in large quantities formed a basis for the modern technology of refrigeration and provided both impetus and means for conducting scientific research at low temperatures and very high vacuums. He developed a methyl ether refrigerator (1874) and an ammonia refrigerator (1876). Though other refrigeration units had been developed earlier, Linde's were the first to be designed with the aim of precise calculations of efficiency. In 1895 he set up a large-scale plant for the production of liquid air. Six years later he developed a method for separating pure liquid oxygen from liquid air that resulted in widespread industrial conversion to processes utilizing oxygen (e.g., in steel manufacture).

In 1883, Svante Arrhenius developed an ion theory to explain conductivity in electrolytes.[82] In 1884, Jacobus Henricus van 't Hoff published Études de Dynamique chimique (Studies in Dynamic Chemistry), a seminal study on chemical kinetics.[83] In this work, van 't Hoff entered for the first time the field of physical chemistry. Of great importance was his development of the general thermodynamic relationship between the heat of conversion and the displacement of the equilibrium as a result of temperature variation. At constant volume, the equilibrium in a system will tend to shift in such a direction as to oppose the temperature change which is imposed upon the system. Thus, lowering the temperature results in heat development, while increasing the temperature results in heat absorption. This principle of mobile equilibrium was subsequently (1885) put in a general form by Henry Louis Le Chatelier, who extended the principle to include compensation, by change of volume, for imposed pressure changes. The van 't Hoff-Le Chatelier principle, or simply Le Chatelier's principle, explains the response of dynamic chemical equilibria to external stresses.[84]
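The relationship van 't Hoff found between the heat of conversion and the shift of equilibrium with temperature is expressed today by the van 't Hoff equation (given here in modern notation):

\frac{d \ln K}{dT} = \frac{\Delta H^\circ}{RT^2}

where K is the equilibrium constant, \Delta H^\circ the standard enthalpy change of the reaction, R the gas constant, and T the absolute temperature. An endothermic reaction (\Delta H^\circ > 0) thus shifts toward products as the temperature rises, just as the principle of mobile equilibrium states.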
In 1884, Hermann Emil Fischer proposed the structure of purine, a key structure in many biomolecules, which he later synthesized in 1898. He also began work on the chemistry of glucose and related sugars.[85] In 1885, Eugene Goldstein named the cathode ray, later discovered to be composed of electrons, and the canal ray, later discovered to be positive hydrogen ions that had been stripped of their electrons in a cathode ray tube; these would later be named protons.[86]

The year 1885 also saw the publishing of J. H. van 't Hoff's L'Équilibre chimique dans les Systèmes gazeux ou dissous à l'État dilué (Chemical equilibria in gaseous systems or strongly diluted solutions), which dealt with his theory of dilute solutions. Here he demonstrated that the "osmotic pressure" in solutions which are sufficiently dilute is proportionate to the concentration and the absolute temperature, so that this pressure can be represented by a formula which only deviates from the formula for gas pressure by a coefficient i. He also determined the value of i by various methods, for example by means of the vapor pressure and François-Marie Raoult's results on the lowering of the freezing point. Thus van 't Hoff was able to prove that thermodynamic laws are not only valid for gases, but also for dilute solutions. His pressure laws, given general validity by the electrolytic dissociation theory of Arrhenius (1884-1887) - who in 1888 became the first foreign chemist to come and work with van 't Hoff in Amsterdam - are considered among the most comprehensive and important in the realm of the natural sciences. In 1893, Alfred Werner discovered the octahedral structure of cobalt complexes, thus establishing the field of coordination chemistry.[87]

Ramsay's discovery of the noble gases[edit]

Main articles: William Ramsay and Noble gas

The most celebrated discoveries of Scottish chemist William Ramsay were made in inorganic chemistry. Ramsay was intrigued by the British physicist John Strutt, 3rd Baron Rayleigh's 1892 discovery that the atomic weight of nitrogen found in chemical compounds was lower than that of nitrogen found in the atmosphere. He ascribed this discrepancy to a light gas included in chemical compounds of nitrogen, while Ramsay suspected a hitherto undiscovered heavy gas in atmospheric nitrogen. Using two different methods to remove all known gases from air, Ramsay and Lord Rayleigh were able to announce in 1894 that they had found a monatomic, chemically inert gaseous element that constituted nearly 1 percent of the atmosphere; they named it argon. The following year, Ramsay liberated another inert gas from a mineral called cleveite; this proved to be helium, previously known only in the solar spectrum. In his book The Gases of the Atmosphere (1896), Ramsay showed that the positions of helium and argon in the periodic table of elements indicated that at least three more noble gases might exist. In 1898 Ramsay and the British chemist Morris W. Travers isolated these elements—called neon, krypton, and xenon—from air brought to a liquid state at low temperature and high pressure. Sir William Ramsay worked with Frederick Soddy to demonstrate, in 1903, that alpha particles (helium nuclei) were continually produced during the radioactive decay of a sample of radium. Ramsay was awarded the 1904 Nobel Prize for Chemistry in recognition of "services in the discovery of the inert gaseous elements in air, and his determination of their place in the periodic system." In 1897, J. J. Thomson discovered the electron using the cathode ray tube.
In 1898, Wilhelm Wien demonstrated that canal rays (streams of positive ions) can be deflected by magnetic fields, and that the amount of deflection is proportional to the mass-to-charge ratio. This discovery would lead to the analytical technique known as mass spectrometry.[88]

Marie and Pierre Curie[edit]

Marie Curie, a pioneer in the field of radioactivity and the first twice-honored Nobel laureate (and still the only one in two different sciences)

Marie Skłodowska-Curie was a Polish-born French physicist and chemist who is famous for her pioneering research on radioactivity. She and her husband are considered to have laid the cornerstone of the nuclear age with their research on radioactivity. Marie was fascinated with the work of Henri Becquerel, a French physicist who discovered in 1896 that uranium casts off rays similar to the X-rays discovered by Wilhelm Röntgen. Marie Curie began studying uranium in late 1897 and theorized, according to a 1904 article she wrote for Century magazine, "that the emission of rays by the compounds of uranium is a property of the metal itself—that it is an atomic property of the element uranium independent of its chemical or physical state." Curie took Becquerel's work a few steps further, conducting her own experiments on uranium rays. She discovered that the rays remained constant, no matter the condition or form of the uranium. The rays, she theorized, came from the element's atomic structure. This revolutionary idea created the field of atomic physics, and the Curies coined the word radioactivity to describe the phenomenon.

Pierre Curie, known for his work on radioactivity as well as on ferromagnetism, paramagnetism, and diamagnetism; notably Curie's law and the Curie point.

Pierre and Marie further explored radioactivity by working to separate the substances in uranium ores and then using the electrometer to make radiation measurements to ‘trace’ the minute amount of unknown radioactive element among the fractions that resulted. Working with the mineral pitchblende, the pair discovered a new radioactive element in 1898. They named the element polonium, after Marie's native country of Poland. On December 21, 1898, the Curies detected the presence of another radioactive material in the pitchblende. They presented this finding to the French Academy of Sciences on December 26, proposing that the new element be called radium. The Curies then went to work isolating polonium and radium from naturally occurring compounds to prove that they were new elements. In 1902, the Curies announced that they had produced a decigram of pure radium, demonstrating its existence as a unique chemical element. While it took three years for them to isolate radium, they were never able to isolate polonium. Along with the discovery of two new elements and finding techniques for isolating radioactive isotopes, Curie oversaw the world's first studies into the treatment of neoplasms using radioactive isotopes. With Henri Becquerel and her husband, Pierre Curie, she was awarded the 1903 Nobel Prize for Physics. She was the sole winner of the 1911 Nobel Prize for Chemistry. She was the first woman to win a Nobel Prize, and she is the only woman to win the award in two different fields.

While working with Marie to extract pure substances from ores, an undertaking that really required industrial resources but that they achieved in relatively primitive conditions, Pierre himself concentrated on the physical study (including luminous and chemical effects) of the new radiations.
Through the action of magnetic fields on the rays given out by radium, he proved the existence of particles electrically positive, negative, and neutral; these Ernest Rutherford was afterward to call alpha, beta, and gamma rays. Pierre then studied these radiations by calorimetry and also observed the physiological effects of radium, thus opening the way to radium therapy. Among Pierre Curie's discoveries was that ferromagnetic substances exhibit a critical temperature transition, above which the substances lose their ferromagnetic behavior - this is known as the "Curie point." He was elected to the Academy of Sciences (1905), having in 1903, jointly with Marie, received the Royal Society's prestigious Davy Medal and, jointly with her and Becquerel, the Nobel Prize for Physics. He was run over by a carriage in the rue Dauphine in Paris in 1906 and died instantly. His complete works were published in 1908.

Ernest Rutherford[edit]

Ernest Rutherford, discoverer of the nucleus and considered the father of nuclear physics

New Zealand-born chemist and physicist Ernest Rutherford is considered to be "the father of nuclear physics." Rutherford is best known for devising the names alpha, beta, and gamma to classify various forms of radioactive "rays" which were poorly understood at his time (alpha and beta rays are particle beams, while gamma rays are a form of high-energy electromagnetic radiation). Rutherford deflected alpha rays with both electric and magnetic fields in 1903. Working with Frederick Soddy, Rutherford explained that radioactivity is due to the transmutation of elements, now known to involve nuclear reactions.

Top: Predicted results based on the then-accepted plum pudding model of the atom. Bottom: Observed results. Rutherford disproved the plum pudding model and concluded that the positive charge of the atom must be concentrated in a small, central nucleus.

Rutherford also observed that the intensity of radioactivity of a radioactive element decreases over a unique and regular amount of time until a point of stability, and he named the halving time the "half-life." In 1901 and 1902 he worked with Frederick Soddy to prove that atoms of one radioactive element would spontaneously turn into another, by expelling a piece of the atom at high velocity. In 1909 at the University of Manchester, Rutherford oversaw an experiment conducted by his students Hans Geiger (known for the Geiger counter) and Ernest Marsden. In the Geiger–Marsden experiment, a beam of alpha particles, generated by the radioactive decay of radon, was directed normally onto a sheet of very thin gold foil in an evacuated chamber. Under the prevailing plum pudding model, the alpha particles should all have passed through the foil and hit the detector screen, or have been deflected by, at most, a few degrees. However, the actual results surprised Rutherford. Although many of the alpha particles did pass through as expected, others were deflected at small angles, and a very small percentage were deflected through angles much larger than 90 degrees, some even being reflected back toward the alpha source. Rutherford realized that, because some of the alpha particles were deflected or reflected, the atom had a concentrated centre of positive charge and of relatively large mass - Rutherford later termed this positive centre the "atomic nucleus".
The alpha particles had either hit the positive center directly or passed by it closely enough to be affected by its positive charge. Since many other particles passed through the gold foil, the positive center would have to be relatively small compared to the rest of the atom, meaning that the atom is mostly open space. From his results, Rutherford developed a model of the atom that was similar to the solar system, known as the Rutherford model. Like planets, electrons orbited a central, sun-like nucleus. For his work with radiation and the atomic nucleus, Rutherford received the 1908 Nobel Prize in Chemistry.

20th century[edit]

The first Solvay Conference was held in Brussels in 1911 and was considered a turning point in the world of physics and chemistry.

In 1903, Mikhail Tsvet invented chromatography, an important analytic technique. In 1904, Hantaro Nagaoka proposed an early nuclear model of the atom, where electrons orbit a dense massive nucleus. In 1905, Fritz Haber and Carl Bosch developed the Haber process for making ammonia, a milestone in industrial chemistry with deep consequences for agriculture. The Haber process, or Haber–Bosch process, combined nitrogen and hydrogen to form ammonia in industrial quantities for production of fertilizer and munitions. Food production for half the world's current population depends on this method for producing fertilizer. Haber, along with Max Born, proposed the Born–Haber cycle as a method for evaluating the lattice energy of an ionic solid. Haber has also been described as the "father of chemical warfare" for his work developing and deploying chlorine and other poisonous gases during World War I.

Robert A. Millikan, who is best known for measuring the charge on the electron, won the Nobel Prize in Physics in 1923.

In 1905, Albert Einstein explained Brownian motion in a way that definitively proved atomic theory. In 1907, Leo Baekeland invented Bakelite, one of the first commercially successful plastics. In 1909, American physicist Robert Andrews Millikan (who had studied in Europe under Walther Nernst and Max Planck) measured the charge of individual electrons with unprecedented accuracy through the oil drop experiment, in which he measured the electric charges on tiny falling water (and later oil) droplets. His study established that any particular droplet's electrical charge is a multiple of a definite, fundamental value (the electron's charge), and thus confirmation that all electrons have the same charge and mass. Beginning in 1912, he spent several years investigating and finally proving Albert Einstein's proposed linear relationship between energy and frequency, providing the first direct photoelectric determination of Planck's constant. In 1923 Millikan was awarded the Nobel Prize for Physics.

In 1909, S. P. L. Sørensen invented the pH concept and developed methods for measuring acidity. In 1911, Antonius Van den Broek proposed the idea that the elements on the periodic table are more properly organized by positive nuclear charge rather than atomic weight. In 1911, the first Solvay Conference was held in Brussels, bringing together many of the most prominent scientists of the day. In 1912, William Henry Bragg and William Lawrence Bragg proposed Bragg's law and established the field of X-ray crystallography, an important tool for elucidating the crystal structure of substances. In 1912, Peter Debye developed the concept of the molecular dipole to describe asymmetric charge distribution in some molecules.
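As a brief aside (our illustration, not part of the original timeline), the dipole concept Debye introduced can be stated compactly. For a set of point charges q_i at positions r_i, the molecular dipole moment is

\mathbf p = \sum_i q_i \mathbf r_i

The customary unit, the debye (1 D ≈ 3.336 × 10^-30 C·m), is named after him; a symmetric molecule such as CO2 has zero dipole moment, while the bent H2O molecule has a dipole moment of about 1.85 D.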
Niels Bohr[edit]

Niels Bohr, the developer of the Bohr model of the atom, and a leading founder of quantum mechanics

Main articles: Niels Bohr and Bohr model

In 1913, Niels Bohr, a Danish physicist, introduced the concepts of quantum mechanics to atomic structure by proposing what is now known as the Bohr model of the atom, where electrons exist only in strictly defined orbits around the nucleus, like rungs on a ladder. The Bohr model is a planetary model in which the negatively charged electrons orbit a small, positively charged nucleus similar to the planets orbiting the Sun (except that the orbits are not planar); the gravitational force of the solar system is mathematically akin to the attractive Coulomb (electrical) force between the positively charged nucleus and the negatively charged electrons. In the Bohr model, however, electrons orbit the nucleus in orbits that have a set size and energy; the energy levels are said to be quantized, which means that only certain orbits with certain radii are allowed, and orbits in between simply don't exist. The energy of an orbit is related to its size: the lowest energy is found in the smallest orbit. For hydrogen, the model gives orbit energies E_n = -13.6 \text{ eV}/n^2 for n = 1, 2, 3, ..., so the n = 1 orbit is the lowest-energy ground state. Bohr also postulated that electromagnetic radiation is absorbed or emitted when an electron moves from one orbit to another. Because only certain electron orbits are permitted, the emission of light accompanying the jump of an electron from an excited energy state to the ground state produces a unique emission spectrum for each element.

Niels Bohr also worked on the principle of complementarity, which states that an electron can be validly interpreted in two mutually exclusive ways: as a wave or as a particle. In his later work on nuclear physics, Bohr hypothesized that an incoming particle striking the nucleus would create an excited compound nucleus; this formed the basis of his liquid drop model and later provided a theoretical basis for the explanation of nuclear fission.

In 1913, Henry Moseley, working from Van den Broek's earlier idea, introduced the concept of atomic number to fix inadequacies of Mendeleev's periodic table, which had been based on atomic weight. The peak of Frederick Soddy's career in radiochemistry came in 1913 with his formulation of the concept of isotopes, which stated that certain elements exist in two or more forms that have different atomic weights but are indistinguishable chemically. He is remembered for proving the existence of isotopes of certain radioactive elements, and is also credited, along with others, with the discovery of the element protactinium in 1917. In 1913, J. J. Thomson expanded on the work of Wien by showing that charged subatomic particles can be separated by their mass-to-charge ratio, a technique known as mass spectrometry.

Gilbert N. Lewis[edit]

Main article: Gilbert N. Lewis

American physical chemist Gilbert N. Lewis laid the foundation of valence bond theory; he was instrumental in developing a bonding theory based on the number of electrons in the outermost "valence" shell of the atom. In 1902, while Lewis was trying to explain valence to his students, he depicted atoms as constructed of a concentric series of cubes with electrons at each corner. This "cubic atom" explained the eight groups in the periodic table and represented his idea that chemical bonds are formed by electron transference to give each atom a complete set of eight outer electrons (an "octet").
Lewis's theory of chemical bonding continued to evolve and, in 1916, he published his seminal article "The Atom and the Molecule", which suggested that a chemical bond is a pair of electrons shared by two atoms. Lewis's model equated the classical chemical bond with the sharing of a pair of electrons between the two bonded atoms. Lewis introduced "electron dot diagrams" in this paper to symbolize the electronic structures of atoms and molecules. Now known as Lewis structures, they are discussed in virtually every introductory chemistry book.

Shortly after publication of his 1916 paper, Lewis became involved with military research. He did not return to the subject of chemical bonding until 1923, when he masterfully summarized his model in a short monograph entitled Valence and the Structure of Atoms and Molecules. His renewal of interest in this subject was largely stimulated by the activities of the American chemist and General Electric researcher Irving Langmuir, who between 1919 and 1921 popularized and elaborated Lewis's model. Langmuir subsequently introduced the term covalent bond. In 1921, Otto Stern and Walther Gerlach established the concept of quantum mechanical spin in subatomic particles.

For cases where no sharing was involved, Lewis in 1923 developed the electron pair theory of acids and bases: Lewis redefined an acid as any atom or molecule with an incomplete octet that was thus capable of accepting electrons from another atom; bases were, of course, electron donors. His theory is known as the concept of Lewis acids and bases. In 1923, G. N. Lewis and Merle Randall published Thermodynamics and the Free Energy of Chemical Substances, the first modern treatise on chemical thermodynamics.

The 1920s saw a rapid adoption and application of Lewis's model of the electron-pair bond in the fields of organic and coordination chemistry. In organic chemistry, this was primarily due to the efforts of the British chemists Arthur Lapworth, Robert Robinson, Thomas Lowry, and Christopher Ingold, while in coordination chemistry Lewis's bonding model was promoted through the efforts of the American chemist Maurice Huggins and the British chemist Nevil Sidgwick.

Quantum mechanics[edit]

Quantum mechanics in the 1920s. From left to right, top row: Louis de Broglie (1892–1987) and Wolfgang Pauli (1900–58); second row: Erwin Schrödinger (1887–1961) and Werner Heisenberg (1901–76)

In 1924, French quantum physicist Louis de Broglie published his thesis, in which he introduced a revolutionary theory of electron waves based on wave–particle duality. In his time, the wave and particle interpretations of light and matter were seen as being at odds with one another, but de Broglie suggested that these seemingly different characteristics were instead the same behavior observed from different perspectives: particles can behave like waves, and waves (radiation) can behave like particles. De Broglie's proposal offered an explanation of the restricted motion of electrons within the atom. The first publications of de Broglie's idea of "matter waves" had drawn little attention from other physicists, but a copy of his doctoral thesis chanced to reach Einstein, whose response was enthusiastic. Einstein stressed the importance of de Broglie's work, both explicitly and by building further on it.
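A short worked example (ours, not from the source) makes the scale of de Broglie's matter waves concrete. A particle of momentum p is assigned the wavelength

\lambda = \frac{h}{p} = \frac{h}{\sqrt{2mE}}

where the second form holds for a nonrelativistic particle of mass m and kinetic energy E. An electron with 1 eV of kinetic energy therefore has λ ≈ 1.23 nm, comparable to atomic dimensions, which is why electron diffraction from crystals (observed by Davisson and Germer in 1927) confirmed the hypothesis; a macroscopic object's wavelength is immeasurably small, which is why matter waves go unnoticed in everyday life.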
In 1925, Austrian-born physicist Wolfgang Pauli developed the Pauli exclusion principle, which states that no two electrons in an atom can occupy the same quantum state simultaneously, as described by four quantum numbers. Pauli made major contributions to quantum mechanics and quantum field theory (he was awarded the 1945 Nobel Prize for Physics for his discovery of the Pauli exclusion principle) as well as to solid-state physics, and he successfully hypothesized the existence of the neutrino. In addition to his original work, he wrote masterful syntheses of several areas of physical theory that are considered classics of scientific literature.

In 1926, at the age of 39, Austrian theoretical physicist Erwin Schrödinger produced the papers that laid the foundations of quantum wave mechanics. In those papers he described his partial differential equation, the basic equation of quantum mechanics, which bears the same relation to the mechanics of the atom as Newton's equations of motion bear to planetary astronomy. Adopting a proposal made by Louis de Broglie in 1924 that particles of matter have a dual nature and in some situations act like waves, Schrödinger introduced a theory describing the behaviour of such a system by a wave equation that is now known as the Schrödinger equation. The solutions to Schrödinger's equation, unlike the solutions to Newton's equations, are wave functions that can only be related to the probable occurrence of physical events. The readily visualized sequence of events of the planetary orbits of Newton is, in quantum mechanics, replaced by the more abstract notion of probability. (This aspect of the quantum theory made Schrödinger and several other physicists profoundly unhappy, and he devoted much of his later life to formulating philosophical objections to the generally accepted interpretation of the theory that he had done so much to create.)

German theoretical physicist Werner Heisenberg was one of the key creators of quantum mechanics. In 1925, Heisenberg discovered a way to formulate quantum mechanics in terms of matrices. For that discovery, he was awarded the Nobel Prize for Physics for 1932. In 1927 he published his uncertainty principle, upon which he built his philosophy and for which he is best known. Heisenberg demonstrated that, for an electron in an atom, one could determine its position or its momentum (and hence its velocity), but it was impossible to determine both simultaneously to arbitrary precision; quantitatively, the product of the two uncertainties always satisfies \Delta x \, \Delta p \ge \hbar/2. He also made important contributions to the theories of the hydrodynamics of turbulent flows, the atomic nucleus, ferromagnetism, cosmic rays, and subatomic particles, and he was instrumental in planning the first West German nuclear reactor at Karlsruhe, together with a research reactor in Munich, in 1957. Considerable controversy surrounds his work on atomic research during World War II.

Quantum chemistry[edit]

Main article: Quantum chemistry

Some view the birth of quantum chemistry in the discovery of the Schrödinger equation and its application to the hydrogen atom in 1926.[citation needed] However, the 1927 article of Walter Heitler and Fritz London[89] is often recognised as the first milestone in the history of quantum chemistry. This is the first application of quantum mechanics to the diatomic hydrogen molecule, and thus to the phenomenon of the chemical bond. In the following years much progress was accomplished by Edward Teller, Robert S. Mulliken, Max Born, J.
Robert Oppenheimer, Linus Pauling, Erich Hückel, Douglas Hartree, and Vladimir Aleksandrovich Fock, to cite a few.[citation needed] Still, skepticism remained as to the general power of quantum mechanics applied to complex chemical systems.[citation needed] The situation around 1930 is described by Paul Dirac:[90]

The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble.

Hence the quantum mechanical methods developed in the 1930s and 1940s are often referred to as theoretical molecular or atomic physics, to underline the fact that they were more the application of quantum mechanics to chemistry and spectroscopy than answers to chemically relevant questions. A milestone in quantum chemistry was the seminal 1951 paper of Clemens C. J. Roothaan on the Roothaan equations.[91] It opened the avenue to the solution of the self-consistent field equations for small molecules like hydrogen or nitrogen. Those computations were performed with the help of tables of integrals which were computed on the most advanced computers of the time.[citation needed] In the 1940s many physicists turned from molecular or atomic physics to nuclear physics (like J. Robert Oppenheimer or Edward Teller).

Glenn T. Seaborg was an American nuclear chemist best known for his work on isolating and identifying transuranium elements (those heavier than uranium). He shared the 1951 Nobel Prize for Chemistry with Edwin Mattison McMillan for their independent discoveries of transuranium elements. Seaborgium was named in his honour, making him the only person, along with Albert Einstein, for whom a chemical element was named during his lifetime.

Molecular biology and biochemistry[edit]

By the mid 20th century, in principle, the integration of physics and chemistry was extensive, with chemical properties explained as the result of the electronic structure of the atom; Linus Pauling's book The Nature of the Chemical Bond used the principles of quantum mechanics to deduce bond angles in ever-more complicated molecules. However, though some principles deduced from quantum mechanics were able to predict qualitatively some chemical features of biologically relevant molecules, they were, till the end of the 20th century, more a collection of rules, observations, and recipes than rigorous ab initio quantitative methods.[citation needed]

Diagrammatic representation of some key structural features of DNA

This heuristic approach triumphed in 1953 when James Watson and Francis Crick deduced the double helical structure of DNA by constructing models constrained by and informed by the knowledge of the chemistry of the constituent parts and the X-ray diffraction patterns obtained by Rosalind Franklin.[92] This discovery led to an explosion of research into the biochemistry of life. In the same year, the Miller–Urey experiment demonstrated that basic constituents of protein, simple amino acids, could themselves be built up from simpler molecules in a simulation of primordial processes on Earth. Though many questions remain about the true nature of the origin of life, this was the first attempt by chemists to study hypothetical processes in the laboratory under controlled conditions.[citation needed]

In 1983 Kary Mullis devised a method for the in-vitro amplification of DNA, known as the polymerase chain reaction (PCR), which revolutionized the chemical processes used in the laboratory to manipulate DNA. PCR could be used to synthesize specific pieces of DNA and made possible the sequencing of the DNA of organisms, which culminated in the huge Human Genome Project.
An important piece in the double helix puzzle was solved by Matthew Meselson, one of Pauling's students, and Frank Stahl; the result of their collaboration (the Meselson–Stahl experiment) has been called "the most beautiful experiment in biology". They used a centrifugation technique that sorted molecules according to differences in weight. Because nitrogen atoms are a component of DNA, DNA grown in a heavy-nitrogen medium could be labelled and its distribution tracked through successive rounds of replication in bacteria.

Late 20th century[edit]

Buckminsterfullerene, C60

In 1970, John Pople developed the Gaussian program, greatly easing computational chemistry calculations.[93] In 1971, Yves Chauvin offered an explanation of the reaction mechanism of olefin metathesis reactions.[94] In 1975, Karl Barry Sharpless and his group discovered stereoselective oxidation reactions, including the Sharpless epoxidation,[95][96] Sharpless asymmetric dihydroxylation,[97][98][99] and Sharpless oxyamination.[100][101][102] In 1985, Harold Kroto, Robert Curl and Richard Smalley discovered fullerenes, a class of large carbon molecules superficially resembling the geodesic dome designed by architect R. Buckminster Fuller.[103] In 1991, Sumio Iijima used electron microscopy to discover a type of cylindrical fullerene known as a carbon nanotube, though earlier work had been done in the field as early as 1951. This material is an important component in the field of nanotechnology.[104] In 1994, Robert A. Holton and his group achieved the first total synthesis of Taxol.[105][106][107] In 1995, Eric Cornell and Carl Wieman produced the first Bose–Einstein condensate, a substance that displays quantum mechanical properties on the macroscopic scale.[108]

Mathematics and chemistry[edit]

Classically, before the 20th century, chemistry was defined as the science of the nature of matter and its transformations. It was therefore clearly distinct from physics, which was not concerned with such dramatic transformation of matter. Moreover, in contrast to physics, chemistry did not make much use of mathematics, and some chemists were particularly reluctant to use mathematics within chemistry. For example, Auguste Comte wrote in 1830:

Every attempt to employ mathematical methods in the study of chemical questions must be considered profoundly irrational and contrary to the spirit of chemistry... if mathematical analysis should ever hold a prominent place in chemistry - an aberration which is happily almost impossible - it would occasion a rapid and widespread degeneration of that science.

However, in the second part of the 19th century, the situation changed, and August Kekulé wrote in 1867:

I rather expect that we shall someday find a mathematico-mechanical explanation for what we now call atoms which will render an account of their properties.

Scope of chemistry[edit]

As understanding of the nature of matter has evolved, so too has the self-understanding of the science of chemistry by its practitioners. This continuing historical process of evaluation includes the categories, terms, aims and scope of chemistry. Additionally, the development of the social institutions and networks which support chemical enquiry are highly significant factors that enable the production, dissemination and application of chemical knowledge. (See Philosophy of chemistry)

Chemical industry[edit]

Main article: Chemical industry

The later part of the nineteenth century saw a huge increase in the exploitation of petroleum extracted from the earth for the production of a host of chemicals, largely replacing the use of whale oil, coal tar and naval stores used previously.
Large-scale production and refinement of petroleum provided feedstocks for liquid fuels such as gasoline and diesel, for solvents, lubricants, asphalt, and waxes, and for the production of many of the common materials of the modern world, such as synthetic fibers, plastics, paints, detergents, pharmaceuticals, adhesives, and ammonia for fertilizer and other uses. Many of these required new catalysts and the utilization of chemical engineering for their cost-effective production.

In the mid-twentieth century, control of the electronic structure of semiconductor materials was made precise by the creation of large ingots of extremely pure single crystals of silicon and germanium. Accurate control of their chemical composition by doping with other elements made possible the production of the solid-state transistor in 1951 and, later, of tiny integrated circuits for use in electronic devices, especially computers.

References[edit]

1. ^ Selected Classic Papers from the History of Chemistry
3. ^ Photos, E., "The Question of Meteoritic versus Smelted Nickel-Rich Iron: Archaeological Evidence and Experimental Results", World Archaeology, Vol. 20, No. 3, Archaeometallurgy (February 1989), pp. 403–421. Online version accessed on 2010-02-08.
6. ^ Neolithic Vinca was a metallurgical culture. Stonepages, from news sources, November 2007.
7. ^ Will Durant wrote in The Story of Civilization I: Our Oriental Heritage:
11. ^ a b Will Durant (1935), Our Oriental Heritage:
14. ^ Norris, John A. (2006). "The Mineral Exhalation Theory of Metallogenesis in Pre-Modern Mineral Science". Ambix. 53: 43. doi:10.1179/174582306X93183.
16. ^ Strathern, 2000. Page 79.
17. ^ Holmyard, E.J. (1957). Alchemy. New York: Dover, 1990. pp. 15, 16.
18. ^ William Royall Newman. Atoms and Alchemy: Chymistry and the Experimental Origins of the Scientific Revolution. University of Chicago Press, 2006. p. xi.
19. ^ Holmyard, E.J. (1957). Alchemy. New York: Dover, 1990. pp. 48, 49.
21. ^ Brock, William H. (1992). The Fontana History of Chemistry. London, England: Fontana Press. pp. 32–33. ISBN 0-00-686173-3.
22. ^ Brock, William H. (1992). The Fontana History of Chemistry. London, England: Fontana Press. ISBN 0-00-686173-3.
23. ^ The History of Ancient Chemistry (cf. Ahmad Y Hassan. "A Critical Reassessment of the Geber Problem: Part Three". Retrieved 2008-08-09.)
36. ^ Crosland, M.P. (1959). "The use of diagrams as chemical 'equations' in the lectures of William Cullen and Joseph Black." Annals of Science, Vol. 15, No. 2, June.
37. ^ Robert Boyle
41. ^ Ursula Klein (July 2007). "Styles of Experimentation and Alchemical Matter Theory in the Scientific Revolution". Metascience. Springer. 16 (2): 247–256 [247]. doi:10.1007/s11016-007-9095-8. ISSN 1467-9981.
42. ^ Nordisk familjebok – Cronstedt: "den moderna mineralogiens och geognosiens grundläggare" ("the founder of modern mineralogy and geognosy")
49. ^ Saunders, Nigel (2004). Tungsten and the Elements of Groups 3 to 7 (The Periodic Table). Chicago: Heinemann Library. ISBN 1-4034-3518-9.
50. ^ "ITIA Newsletter" (PDF). International Tungsten Industry Association. June 2005. Archived from the original (PDF) on July 21, 2011. Retrieved 2008-06-18.
51. ^ "ITIA Newsletter" (PDF). International Tungsten Industry Association. December 2005. Archived from the original (PDF) on July 21, 2011. Retrieved 2008-06-18.
52. ^ Mottelay, Paul Fleury (2008).
Bibliographical History of Electricity and Magnetism (Reprint of 1892 ed.). Read Books. p. 247. ISBN 1-4437-2844-6.
53. ^ "Inventor Alessandro Volta Biography". The Great Idea Finder. 2005. Retrieved 2007-02-23.
54. ^ Lavoisier, Antoine (1743-1794) -- from Eric Weisstein's World of Scientific Biography, ScienceWorld
55. ^ a b c d http://www.humantouchofchemistry.com/marieanne-lavoisier.htm
56. ^ a b Pullman, Bernard (2004). The Atom in the History of Human Thought. Reisinger, Axel. USA: Oxford University Press Inc. ISBN 0-19-511447-7.
57. ^ "John Dalton". Chemical Achievers: The Human Face of Chemical Sciences. Chemical Heritage Foundation. 2005. Retrieved 2007-02-22.
58. ^ "Proust, Joseph Louis (1754-1826)". 100 Distinguished Chemists. European Association for Chemical and Molecular Science. 2005. Retrieved 2007-02-23.
61. ^ Weeks, Mary Elvira (1933). "XII. Other Elements Isolated with the Aid of Potassium and Sodium: Beryllium, Boron, Silicon and Aluminum". The Discovery of the Elements. Easton, Pennsylvania: Journal of Chemical Education. ISBN 0-7661-3872-0.
62. ^ Robert E. Krebs (2006). The History and Use of Our Earth's Chemical Elements: A Reference Guide. Greenwood Publishing Group. p. 80. ISBN 0-313-33438-2.
64. ^ Gay-Lussac, J. L. (L'An X – 1802), "Recherches sur la dilatation des gaz et des vapeurs" [Researches on the expansion of gases and vapors], Annales de chimie, 43: 137–175. English translation (extract).
65. ^ J. Dalton (1802) "Essay IV. On the expansion of elastic fluids by heat," Memoirs of the Literary and Philosophical Society of Manchester, vol. 5, pt. 2, pages 595-602.
66. ^ http://www.chemistryexplained.com/Fe-Ge/Gay-Lussac-Joseph-Louis.html
67. ^ a b Courtois, Bernard (1813). "Découverte d'une substance nouvelle dans le Vareck" [Discovery of a new substance in seaweed]. Annales de chimie. 88: 304. In French, seaweed that had been washed onto the shore was called "varec", "varech", or "vareck", whence the English word "wrack". Later, "varec" also referred to the ashes of such seaweed: the ashes were used as a source of iodine and salts of sodium and potassium.
68. ^ Swain, Patricia A. (2005). "Bernard Courtois (1777–1838) famed for discovering iodine (1811), and his life in Paris from 1798" (PDF). Bulletin for the History of Chemistry. 30 (2): 103.
69. ^ a b Gay-Lussac, J. (1813). "Sur un nouvel acide formé avec la substance découverte par M. Courtois" [On a new acid formed with the substance discovered by M. Courtois]. Annales de chimie. 88: 311.
70. ^ Gay-Lussac, J. (1813). "Sur la combination de l'iode avec l'oxigène" [On the combination of iodine with oxygen]. Annales de chimie. 88: 319.
71. ^ Gay-Lussac, J. (1814). "Mémoire sur l'iode" [Memoir on iodine]. Annales de chimie. 91: 5.
75. ^ "History of Chirality". Stheno Corporation. 2006. Archived from the original on 2007-03-07. Retrieved 2007-03-12.
76. ^ "Lambert-Beer Law". Sigrist-Photometer AG. 2007-03-07. Retrieved 2007-03-12.
77. ^ "Benjamin Silliman, Jr. (1816–1885)". Picture History. Picture History LLC. 2003. Retrieved 2007-03-24.
78. ^ Moore, F. J. (1931). A History of Chemistry. McGraw-Hill. pp. 182–184. ISBN 0-07-148855-3. (2nd edition)
79. ^ "Jacobus Henricus van't Hoff". Chemical Achievers: The Human Face of Chemical Sciences. Chemical Heritage Foundation. 2005. Retrieved 2007-02-22.
80. ^ O'Connor, J. J.; Robertson, E.F. (1997). "Josiah Willard Gibbs". MacTutor. School of Mathematics and Statistics, University of St Andrews, Scotland. Retrieved 2007-03-24.
81. ^ Weisstein, Eric W. (1996). "Boltzmann, Ludwig (1844–1906)". Eric Weisstein's World of Scientific Biography.
Wolfram Research Products. Retrieved 2007-03-24.
82. ^ "Svante August Arrhenius". Chemical Achievers: The Human Face of Chemical Sciences. Chemical Heritage Foundation. 2005. Retrieved 2007-02-22.
83. ^ "Jacobus H. van 't Hoff: The Nobel Prize in Chemistry 1901". Nobel Lectures, Chemistry 1901–1921. Elsevier Publishing Company. 1966. Retrieved 2007-02-28.
84. ^ "Henry Louis Le Châtelier". World of Scientific Discovery. Thomson Gale. 2005. Retrieved 2007-03-24.
85. ^ "Emil Fischer: The Nobel Prize in Chemistry 1902". Nobel Lectures, Chemistry 1901–1921. Elsevier Publishing Company. 1966. Retrieved 2007-02-28.
86. ^ "History of Chemistry". Intensive General Chemistry. Columbia University Department of Chemistry Undergraduate Program. Retrieved 2007-03-24.
87. ^ "Alfred Werner: The Nobel Prize in Chemistry 1913". Nobel Lectures, Chemistry 1901–1921. Elsevier Publishing Company. 1966. Retrieved 2007-03-24.
88. ^ "Wilhelm Wien: The Nobel Prize in Physics 1911". Nobel Lectures, Physics 1901–1921. Elsevier Publishing Company. 1967. Retrieved 2007-03-24.
89. ^ W. Heitler and F. London, "Wechselwirkung neutraler Atome und Homöopolare Bindung nach der Quantenmechanik" [Interaction of neutral atoms and homopolar binding according to quantum mechanics], Z. Physik, 44, 455 (1927).
90. ^ P.A.M. Dirac, "Quantum Mechanics of Many-Electron Systems", Proc. R. Soc. London, A 123, 714 (1929).
91. ^ C.C.J. Roothaan, "A Study of Two-Center Integrals Useful in Calculations on Molecular Structure", J. Chem. Phys., 19, 1445 (1951).
92. ^ Watson, J. and Crick, F., "Molecular Structure of Nucleic Acids", Nature, April 25, 1953, pp. 737–738.
94. ^ Hérisson, Jean-Louis; Chauvin, Yves. "Catalyse de transformation des oléfines par les complexes du tungstène. II. Télomérisation des oléfines cycliques en présence d'oléfines acycliques" [Catalysis of olefin transformations by tungsten complexes. II. Telomerization of cyclic olefins in the presence of acyclic olefins]. Die Makromolekulare Chemie, Volume 141, Issue 1 (9 February 1971), pp. 161–176. doi:10.1002/macp.1971.021410112
97. ^ Jacobsen, E. N.; Marko, I.; Mungall, W. S.; Schroeder, G.; Sharpless, K. B. J. Am. Chem. Soc. 1988, 110, 1968. (doi:10.1021/ja00214a053)
98. ^ Kolb, H. C.; Van Nieuwenhze, M. S.; Sharpless, K. B. Chem. Rev. 1994, 94, 2483–2547. (Review) (doi:10.1021/cr00032a009)
99. ^ Gonzalez, J.; Aurigemma, C.; Truesdale, L. Org. Syn., Coll. Vol. 10, p. 603 (2004); Vol. 79, p. 93 (2002). (Article)
100. ^ Sharpless, K. B.; Patrick, D. W.; Truesdale, L. K.; Biller, S. A. J. Am. Chem. Soc. 1975, 97, 2305. (doi:10.1021/ja00841a071)
101. ^ Herranz, E.; Biller, S. A.; Sharpless, K. B. J. Am. Chem. Soc. 1978, 100, 3596–3598. (doi:10.1021/ja00479a051)
102. ^ Herranz, E.; Sharpless, K. B. Org. Syn., Coll. Vol. 7, p. 375 (1990); Vol. 61, p. 85 (1983). (Article)
103. ^ "The Nobel Prize in Chemistry 1996". Nobelprize.org. The Nobel Foundation. Retrieved 2007-02-28.
104. ^ "Benjamin Franklin Medal awarded to Dr. Sumio Iijima, Director of the Research Center for Advanced Carbon Materials, AIST". National Institute of Advanced Industrial Science and Technology. 2002. Retrieved 2007-03-27.
105. ^ Holton, Robert A.; Somoza, Carmen; Kim, Hyeong Baik; Liang, Feng; Biediger, Ronald J.; Boatman, P. Douglas; Shindo, Mitsuru; Smith, Chase C.; Kim, Soekchan; et al. "First total synthesis of taxol. 1. Functionalization of the B ring". J. Am. Chem. Soc. 1994, 116(4), 1597–1598.
106. ^ Holton, Robert A.; Kim, Hyeong Baik; Somoza, Carmen; Liang, Feng; Biediger, Ronald J.; Boatman, P. Douglas; Shindo, Mitsuru; Smith, Chase C.; Kim, Soekchan; et al. "First total synthesis of taxol. 2. Completion of the C and D rings". J. Am. Chem. Soc. 1994, 116(4), 1599–1600.
107.
^ Holton, Robert A.; Juo, R. R.; Kim, Hyeong B.; Williams, Andrew D.; Harusawa, Shinya; Lowenthal, Richard E.; Yogai, Sadamu. "A synthesis of taxusin". J. Am. Chem. Soc. 1988, 110(19), 6558–6560.
108. ^ "Cornell and Wieman Share 2001 Nobel Prize in Physics". NIST News Release. National Institute of Standards and Technology. 2001. Retrieved 2007-03-27.
Schrödinger’s Cat Explained So You Can Finally Understand It

The thought experiment was actually conceived to prove a popular theory wrong.

You probably have heard of Schrödinger’s Cat. You may have even attempted to discuss the experiment at a fun party to sound smart. But do you truly understand it? The thought experiment can be confusing, as people often misunderstand it as making the opposite point of the one it was meant to make. In this video, we seek to explain Schrödinger’s Cat in a way everyone can finally understand.

But first, who was Schrödinger, the man the experiment was named after? Erwin Schrödinger was awarded the Nobel Prize in Physics in 1933 and was most famous for his work in quantum theory, particularly for the Schrödinger equation, which provides a way to calculate the wave function of a system and how it changes dynamically in time. The physicist also tackled the problems of genetics from the point of view of physics, but he is perhaps best known for his thought experiment: Schrödinger’s cat.

Schrödinger conceived of the cat experiment to try to disprove an idea proposed by physicists in Copenhagen, known as the Copenhagen interpretation of quantum mechanics. What was the idea? How did Schrödinger try to disprove it? And what was the outcome of his thought experiment? We answer all these questions and more in this video.
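For reference (our addition, not part of the video), the equation mentioned above is usually written as

i\hbar \frac{\partial}{\partial t}\Psi = \hat H \Psi

where Ψ is the wave function of the system and Ĥ is the Hamiltonian operator representing its total energy; solving it tells you how the wave function evolves in time.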
Separation of the Nuclear Hamiltonian

The nuclear Schrödinger equation can be approximately factored into translational, rotational, and vibrational parts. McQuarrie [1] explains how to do this for a diatomic in section 10-13. The rotational part can be cast into the form of the rigid rotor model, and the vibrational part can be written as a system of harmonic oscillators. Time does not allow further comment on the nuclear Schrödinger equation, although it is central to molecular spectroscopy.
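As a compact sketch of what this separation yields for a diatomic (a standard result, stated here with the rigid-rotor and harmonic-oscillator forms mentioned above; B is the rotational constant, ω the harmonic vibrational frequency, μ the reduced mass, and R_e the equilibrium bond length):

E \approx E_{trans} + B\,J(J+1) + \hbar\omega\left(v+\tfrac{1}{2}\right), \qquad B = \frac{\hbar^2}{2\mu R_e^2}

with rotational quantum number J = 0, 1, 2, ... and vibrational quantum number v = 0, 1, 2, ...; anharmonicity, centrifugal distortion, and rotation-vibration coupling appear as corrections to this zeroth-order picture.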
PIER Vol. 170 (ISSN: 1070-4698)

2021-04-03 PIER Vol. 170, 129-152, 2021. doi:10.2528/PIER21020702
L-Band Radar Scattering and Soil Moisture Retrieval of Wheat, Canola and Pasture Fields for SMAP Active Algorithms
Huanting Huang, Tien-Hao Liao, Seung Bum Kim, Xiaolan Xu, Leung Tsang, Thomas J. Jackson, and Simon Yueh

Wheat, canola, and pasture are three of the major vegetation types studied during the Soil Moisture Active Passive Validation Experiment 2012 (SMAPVEX12), conducted to support NASA's Soil Moisture Active Passive (SMAP) mission. The utilized model structure is integrated in the SMAP baseline active retrieval algorithm. Forward lookup tables (data-cubes) for VV and HH backscatters at L-band are developed for wheat and canola fields. The data-cubes have three axes: vegetation water content (VWC), root mean square (RMS) height of the rough soil surface, and soil permittivity. The volume scattering and double-bounce scattering of the fields are calculated using the distorted Born approximation and the coherent reflectivity in the double-bounce scattering. The surface scattering is determined by the numerical solutions of Maxwell's equations (NMM3D). The results of the data-cubes are validated with airborne radar measurements collected during SMAPVEX12 for ten wheat fields, five canola fields, and three pasture fields. The results show good agreement between the data-cube simulations and the airborne data: the root mean squared errors (RMSE) were 0.82 dB, 0.78 dB, and 1.62 dB for HH, and 0.97 dB, 1.30 dB, and 1.82 dB for VV of wheat, canola, and pasture fields, respectively. The data-cubes are next used to perform time-series retrieval of the soil moisture. The RMSEs of the soil moisture retrieval are 0.043 cm3/cm3, 0.082 cm3/cm3, and 0.082 cm3/cm3 for wheat, canola, and pasture fields, respectively. The results of this paper expand the scope of the SMAP baseline radar algorithm to wheat, canola, and pasture fields and provide a quantitative validation of its performance. They will also have applications for the upcoming NISAR (NASA-ISRO SAR) mission.

2021-02-05 PIER Vol. 170, 97-128, 2021. doi:10.2528/PIER20121201
A Fine Scale Partially Coherent Patch Model Including Topographical Effects for GNSS-R DDM Simulations
Haokui Xu, Jiyue Zhu, Leung Tsang, and Seung Bum Kim

In this paper, we propose a fine scale partially coherent patch model (FPCP) for GNSS-R land applications such as soil moisture retrieval. The land surface is divided into coherent planar patches on which microwave roughness is superimposed. The scattered waves of the coherent patch are decomposed into the coherent specular reflection and diffuse incoherent scattering. A fine patch size of 2 meters is chosen for the coherent patch to be applicable to complex terrain with large varieties of topographical elevations and with small to large topographical slopes. The summation of scattered fields over patches is carried out using physical optics. The phase term of the scattered wave of each patch is kept so that correlated scattering effects among patches are accounted for. Results are illustrated for the power ratio for areas near the specular point and areas far away from the specular point. Comparisons are made with the radiative transfer geometric optics model. DDM simulations are performed, with good agreement with CYGNSS data.

2021-02-02 PIER Vol. 170, 79-95, 2021.
doi:10.2528/PIER20082001
High Efficiency Multi-Functional All-Optical Logic Gates Based on MIM Plasmonic Waveguide Structure with Kerr-Type Nonlinear Nano-Ring Resonators
Yaw-Dong Wu

In this paper, high efficiency multi-functional all-optical logic gates based on a metal-insulator-metal (MIM) plasmonic waveguide structure with Kerr-type nonlinear nano-ring resonators are proposed. The proposed structure consists of three straight input ports, eight nano-ring resonators filled with the Kerr-type nonlinear medium, and one straight output port. By fixing the input signal power and properly changing the control power, it can be used to design high efficiency multi-functional all-optical logic gates. The numerical results show that the proposed Kerr-type nonlinear plasmonic waveguide structures could really function as all-optical XOR/NXOR, AND/NAND, and OR/NOR logic gates in the optical communication spectral region. The transmission efficiency of the high logic state is higher than 95%, and that of the low logic state is about 0%, at the wavelength 1310 nm. The performance of the proposed logic gates was analyzed and simulated by the finite element method (FEM).

2021-01-21 PIER Vol. 170, 63-78, 2021. doi:10.2528/PIER20122201
Computational Investigation of Nanoscale Semiconductor Devices and Optoelectronic Devices from the Electromagnetics and Quantum Perspectives by the Finite Difference Time Domain Method (Invited Review)
Huali Duan, Wenxiao Fang, Wen-Yan Yin, Erping Li, and Wenchao Chen

In the simulation of high frequency nanoscale semiconductor devices, in which electromagnetic (EM) fields and carrier transport are coupled, and of optoelectronic devices, in which strong interactions between EM fields and charged particles exist, both Maxwell's equations and the time-dependent Schrödinger equation (TDSE) need to be solved to capture the interactions between EM and quantum mechanics (QM). One of the numerical simulation methods for solving these equations is the finite difference time domain (FDTD) method. In this review paper, the development of the FDTD method applied in EM and QM simulation is discussed. Several widely used FDTD techniques for solving the TDSE, i.e., explicit, implicit, explicit staggered-time, and Chebyshev methods, are introduced and compared. The hybrid approaches based on the FDTD method, which are used to solve the Poisson-TDSE and Maxwell-TDSE coupled equations for EM-QM simulation, are also discussed. Furthermore, the applications of these simulation methods to nanoscale semiconductor devices and optoelectronic devices are introduced. Finally, a conclusion is given.

2021-01-15 PIER Vol. 170, 17-62, 2021. doi:10.2528/PIER20122108
Advanced Progress on χ(3) Nonlinearity in Chip-Scale Photonic Platforms (Invited Review)
Zhe Kang, Chao Mei, Luqi Zhang, Zhichao Zhang, Julian Evans, Yunjun Cheng, Kun Zhu, Xianting Zhang, Dongmei Huang, Yuhua Li, Jijun He, Qiang Wu, Binbin Yan, Kuiru Wang, Xian Zhou, Keping Long, Feng Li, Qian Li, Shaokang Wang, Jinhui Yuan, Ping-Kong Alexander Wai, and Sailing He

χ(3) nonlinearity enables ultrafast, femtosecond-scale light-to-light coupling and manipulation of intensity, phase, and frequency. χ(3) nonlinear functionality in micro- and nano-scale photonic waveguides can potentially replace bulky fiber platforms for many applications.
In this review, we summarize and comment on the progress on χ(3) nonlinearity in chip-scale photonic platforms, including several focused hot topics such as broadband and coherent sources in the new bands, nonlinear pulse shaping, and all-optical signal processing. An outlook on the challenges and prospects of this hot research field is given at the end.

2021-01-14 PIER Vol. 170, 1-15, 2021. doi:10.2528/PIER20112306
Designing Nanoinclusions for Quantum Sensing Based on Electromagnetic Scattering Formalism (Invited Paper)
Constantinos Valagiannopoulos

Quantum interactions between a single particle and nanoinclusions of spherical or cylindrical shape are optimized to produce scattering lineshapes of high selectivity with respect to impinging energies, excitation directions, and cavity sizes. The optimization uses a rigorous solution derived via the electromagnetic scattering formalism, while the adopted scheme rejects boundary extrema corresponding to resonances that occur outside of the permissible parametric domains. The reported effects may inspire experimental efforts towards designing quantum sensing systems employed in applications spanning from quantum switching and filtering to single-photon detection and quantum memory.
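To make the FDTD treatment of the TDSE surveyed above concrete, the following is a minimal, self-contained sketch (ours, not taken from any of the listed papers) of the explicit staggered-time scheme for the free one-dimensional Schrödinger equation in atomic units (hbar = m = 1); the grid, time step, and initial wave packet are illustrative assumptions:

import numpy as np

# 1D time-dependent Schrodinger equation in atomic units (hbar = m = 1).
# Explicit staggered-time FDTD: the real and imaginary parts of psi are
# advanced in leapfrog fashion, one of the method classes the review compares.
nx, dx, dt = 1000, 0.1, 0.002      # grid and time step (dt must respect the explicit stability limit)
x = (np.arange(nx) - nx / 2) * dx
V = np.zeros(nx)                   # free particle; replace with any potential profile

def apply_H(psi):
    # Discretized Hamiltonian H = -0.5 d^2/dx^2 + V (second-order central differences)
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx**2
    return -0.5 * lap + V * psi

# Initial Gaussian wave packet with mean momentum k0, normalized on the grid
k0, sigma = 1.0, 2.0
psi0 = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * k0 * x)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)
re, im = psi0.real.copy(), psi0.imag.copy()

for _ in range(5000):              # leapfrog updates: dR/dt = +H I, dI/dt = -H R
    re += dt * apply_H(im)
    im -= dt * apply_H(re)

prob = re**2 + im**2
print("norm =", np.sum(prob) * dx)       # stays close to 1 if the scheme is stable
print("<x>  =", np.sum(x * prob) * dx)   # packet drifts with group velocity ~ k0

Implicit schemes such as Crank–Nicolson remove the explicit stability restriction on the time step at the cost of a linear solve per step, which is exactly the kind of trade-off among the explicit, implicit, staggered-time, and Chebyshev variants that the review compares.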
T. E. Bearden, LTC, U.S. Army (Retired)
Director, Association of Distinguished American Scientists (ADAS)
Fellow Emeritus, Alpha Foundation’s Institute for Advanced Study (AIAS)
Final Draft, June 24, 2000
Web Site: www.cheniere.org

The World Energy Crisis

The world energy crisis now drives the economies of the world's nations. There is an escalating worldwide demand for electrical power and transportation, much of which depends on fossil fuels, particularly oil or oil products. The resulting demand for oil is expected to increase year by year. Recent sharp price rises have already pushed gasoline to more than $2.50 per gallon in some U.S. metropolitan areas. At the same time, it appears that world availability of oil may have peaked in early 2000, if one factors in the suspected Arab inflation of reported oil reserves. From now on, it appears that oil availability will steadily decline, slowly at first but then at an increasing pace. Additives to aid clean burning of gasoline are also required in several U.S. metropolitan areas, increasing costs and refinery storage and handling.

The increasing disparity between demand and supply (steadily increasing demand for electricity derived from oil products versus decreasing world supplies of oil, together with other factors such as required fuel additives) produces a dramatically increasing cost of oil and oil products. Further, newer supplies of oil must be obtained by increasingly expensive production means. Manipulative means of influencing the price of oil include (i) the ability of OPEC to increase or decrease production at will, and (ii) the ability of the large oil companies to reduce or increase the holding storage of the various oil products, types of fuel, etc. Interestingly, several large oil companies are reporting record profits {[1]}.

At the same time, the burgeoning populaces of the major petroleum producers, and their increasing economic needs, press hard for an increasing inflation of oil prices in order to fund the economic benefits. As an example, Saudi moderation of OPEC is vanishing or has already vanished. The increasing demands of the expanding Saudi Royal Family group and the guaranteed benefits to the expanding populace have overtaken and surpassed the present Saudi financial resources unless the price of OPEC oil is raised commensurately {[2]}.

The Federal Reserve contributes directly to the economic problem in the U.S., since it interprets the escalating prices of goods and services (due to escalating energy prices) as evidence of inflation. It will continue to raise interest rates to damp the economy, further damping U.S. business, employment, and trade. The Fed has already increased interest rates six times in one year as of this date.

International Trade Factors

Under NAFTA, GATT {[3]}, and other trade agreements, the transfer of production and manufacturing to the emerging nations is increasing as trade barriers are lowered. Some 160 emerging nations are essentially exempt from environmental pollution controls under the Kyoto accords. In these nations, electrical power needs and transport needs are increasing, and will continue to increase, due to the increasing production and movement of goods and the building of factories and assembly plants. Very limited pollution controls, if any, will be applied to the new electrical plants and transport capabilities to be built in those exempted nations. The transfer of manufacturing and production to many of these nations is a transfer to essentially “slave labor” nations.
Workers have few if any benefits, are paid extremely low wages, work long hours, and have no unions or bargaining rights. In some of these nations, to pay off their debts many parents sell their children into bondage for the manufacture of goods, with 12- to 14-hour workdays being the norm for the children {[4]}. In such regions the local politicians can usually be “bought” very cheaply, so that there are also no effective government controls. Such means have set up a de facto return to the feudalistic capitalism of an earlier era, when enormous profits could be and were extracted from the backs of impoverished workers, and government checks and balances were nil. The personal view of this author is that NAFTA, GATT, and Kyoto were set in place for this very purpose. As the transfer builds over the next 50 years, it involves the extraction of perhaps $2 trillion per year from the backs of these impoverished laborers. It would not appear accidental that Kyoto removed the costly pollution control measures from this giant economic buildup that would otherwise have been required. The result will be increased pollution of the biosphere on a grand scale. Ironically, the Environmental Community itself was deceived into supporting the Kyoto accords and helping achieve them, hoping to put controls on biospheric pollution worldwide. In fact, the Kyoto accords will have exactly the opposite effect.

Resulting World Economic Collapse

Bluntly, we foresee these factors, and others {[5]} {[6]} not covered here, converging to a catastrophic collapse of the world economy in about eight years. As the collapse of the Western economies nears, one may expect catastrophic stress on the 160 developing nations as the developed nations are forced to dramatically curtail orders.

International Strategic Threat Aspects

History bears out that desperate nations take desperate actions. Prior to the final economic collapse, the stress on nations will have increased the intensity and number of their conflicts, to the point where the arsenals of weapons of mass destruction (WMD) now possessed by some 25 nations are almost certain to be released. As an example, suppose a starving North Korea {[7]} launches nuclear weapons upon Japan and South Korea, including U.S. forces there, in a spasmodic suicidal response. Or suppose a desperate China, some of whose long-range nuclear missiles can reach the United States, attacks Taiwan. In addition to immediate responses, the mutual treaties involved in such scenarios will quickly draw other nations into the conflict, escalating it significantly.

Strategic nuclear studies have shown for decades that, under such extreme stress conditions, once a few nukes are launched, adversaries and potential adversaries are then compelled to launch on perception of preparations by one's adversary. The real legacy of the MAD concept is this side of the MAD coin that is almost never discussed. Without effective defense, the only chance a nation has to survive at all is to launch immediate full-bore pre-emptive strikes and try to take out its perceived foes as rapidly and massively as possible. As the studies showed, rapid escalation to full WMD exchange occurs. Today, a great percentage of the WMD arsenals that will be unleashed are already on site within the United States itself {[8]}. The resulting great Armageddon will destroy civilization as we know it, and perhaps most of the biosphere, at least for many decades.
My personal estimate is that, beginning about 2007, on our present energy course we will have reached an 80% probability of this “final destruction of civilization itself” scenario occurring at any time, with the probability slowly increasing as time passes. One may argue about the timing, slide the dates a year or two, etc., but the basic premise and general time frame hold. We face not only a world economic crisis, but also a world destruction crisis. So unless we dramatically and quickly solve the energy crisis, rapidly replacing a substantial part of the “electrical power derived from oil” by “electrical power freely derived from the vacuum”, we are going to incur the final “Great Armageddon” the nations of the world have been fearing for so long. I personally regard this as the greatest strategic threat of all time: to the United States, the Western World, all the rest of the nations of the world, and civilization itself {[9]} {[10]}.

What Is Required to Solve the Problem?

To avoid the impending collapse of the world economy and/or the destruction of civilization and the biosphere, we must quickly replace much of the “electrical energy from oil” heart of the crisis at great speed, and simultaneously replace a significant part of the “transportation using oil products” factor as well. Such replacement by clean, nonpolluting electrical energy from the vacuum would also remove much of the present pollution of the biosphere by the products of hydrocarbon combustion. Not only does it solve the energy crisis, but it also solves much of the environmental pollution problem. The technical basis for that solution, and a part of the prototype technology required, are now at hand. We discuss that solution in this paper.

To finish the task in time, the Government must be galvanized into a new Manhattan Project {[11]} to rapidly complete the new system hardware developments and deploy the technology worldwide at an immense pace. Once the technology hardware solutions are ready for mass production, even with a massive worldwide deployment effort some five years are required to deploy the new systems sufficiently to contain the problem of world economic collapse. This means that, by the end of 2003, those hardware technology solutions must have been completed, and the production replacement power systems must be ready to roll off the assembly lines en masse. The 2003 date appears to be the critical “point of no return” for the survival of civilization as we have known it. Reaching that point, say, in 2005 or 2006 will not solve the crisis in time. The collapse of the world economy, as well as the destruction of civilization and the biosphere, will still almost certainly occur, even with the solutions in hand.

A review of the present scientific and technical energy efforts to blunt these strategic threat curves immediately shows that all the efforts (and indeed the conventional scientific thinking) are far too little and far too late. Even with a massive effort on all of the “wish list” of conventional projects and directions, the results would be insufficient to prevent the coming holocaust. As one example, the entire hot fusion effort has a zero probability of contributing anything of significance to the energy solution in the time frame necessary. Neither will windmills, more dams, oil from tar sands, biofuels, solar cells, fuel cells, methane from the ocean bottom, ocean-wave-powered generators, more efficient hydrocarbon combustion, flywheel energy storage systems, etc.
All of those projects are understandable and “nice”, but they have absolutely zero probability of solving the problem and preventing the coming world economic collapse and Armageddon. Those conventional approaches are all “in the box” thinking, applied to a completely “out of the box” problem unique in world history. The conventional energy efforts and thinking may be characterized as essentially “business as usual, but maybe hurry a little bit.” They divert resources, time, effort, and funding into commendable areas, but areas which will not and cannot solve the problem. In that sense, they also contribute to the final Armageddon that is hurtling toward us {[12]}. If we continue conventionally and with the received scientific view, even with massively increased efforts and a Manhattan Project, we almost certainly guarantee the destruction of civilization as we know it, and much of the biosphere as well.

Bluntly, the only viable option is to rapidly develop systems which extract energy directly from the vacuum and are therefore self-powering, like a windmill in the wind {[13]}. Fortunately, analogous electrical systems, open systems far from thermodynamic equilibrium in their exchange with the active vacuum, are permitted by the laws of physics, electrodynamics {[14]}, and thermodynamics {[15]}. Such electrical systems are also permitted by Maxwell's equations, prior to their arbitrary curtailment by Lorentz symmetrical regauging {[16]} {[17]} {[20]}. The good news was that the little mathematical trick by Lorentz made the resulting equations much easier to solve (for the selected “subset” of the Maxwell-Heaviside systems retained). The bad news is that it also arbitrarily discarded all Maxwellian EM systems far from thermodynamic equilibrium (i.e., asymmetrical and in disequilibrium) with respect to their vacuum energy exchange. That is, Lorentz arbitrarily discarded all the permissible electrical power systems analogous to a windmill in a wind, capable of powering themselves and their loads. All our energy scientists and engineers continue to blindly develop only Lorentz-limited electrical power systems.

The good news is that we now know how to easily initiate continuous and powerful “electromagnetic energy winds” from the vacuum at will. Once initiated, each free EM energy wind flows continuously so long as the simple initiator is not deliberately destroyed. The bad news is that all our present electrical power systems are designed and developed so that they continually kill their “energy winds” from the vacuum faster than they can collect some of the energy from the winds and use it to power their loads. But the good news is that we now know how to go about designing and developing electrical power systems which (i) initiate copious EM energy flow “winds” in the vacuum, (ii) do not destroy these winds but let them continue to flow freely, and (iii) utilize these freely-flowing energy winds to power themselves and their loads.

So we have already solved the first half of the energy crisis problem {[18]} {[19]}: We can produce the necessary “EM energy wind flow” in any amount required, whenever and wherever we wish, for peanuts and with ridiculous ease. We can ensure that, once initiated, the electromagnetic energy wind flows indefinitely or until we wish to shut it off. A tiny part of the far frontier of the scientific community is also now pushing hard into catching and using this available EM energy from the vacuum {[20]}.
However, they are completely unfunded and working under extremely difficult conditions {[21]}. In addition, there are more than a dozen appropriate processes already available (some are well-known in the hard literature) which can be developed to produce the new types of electrical energy systems {[22]}.

What Must Be Done Technically

We have about two and a half years to develop several different types of systems for the several required major applications, particularly the following:

(1) self-powering open electrical power systems extracting their electrical energy directly from the active vacuum and readily scalable in size and output,
(2) burner systems {[23]} to replace the present “heater” elements of conventional power plants, increasing the coefficient of performance (COP) {[24]} of those altered systems to COP > 1.0, and perhaps to COP = 4.0,
(3) specialized self-powering engines to replace small combustion engines {[25]},
(4) self-regenerating, battery-powered systems enabling practical electric automobiles, based on the Bedini {[26]} process,
(5) Kawai COP > 1.0 magnetic motors {[27]} with clamped feedback, powering themselves and their loads,
(6) magnetic Wankel engines {[28]} with small self-powering batteries, which enable a very practical self-powering automotive engine unit for direct replacement in present automobiles,
(7) permanent magnet motors such as the Johnson {[29]} approach, using self-initiated exchange force pulses {[30]} in nonlinear magnetic materials to provide a nonconservative field, hence a self-powering unit,
(8) iterative retroreflective EM energy flow systems which intercept and utilize significant amounts of the enormous Heaviside energy flow {[31]} which surrounds every electrical circuit but is presently ignored,
(9) iterative phase conjugate retroreflective systems which passively recover and reorder the scattered energy dissipated from the load, and reuse the energy again and again {[32]},
(10) Shoulders' charge cluster devices {[33]} which yield COP > 1.0 by actual measurement,
(11) self-exciting systems using intensely scattering optically active media and iterative asymmetrical self-regauging {[34]} {[35]} {[36]} {[67]},
(12) true negative resistors such as the Kron {[37]} and Chung {[38]} negative resistors, the original point-contact transistor {[39]}, which can be made into a negative resistor, and the Fogal negative resistor semiconductor, and
(13) overunity transformers using a negative resistor bypass across the secondary, reducing the back-coupling from secondary to primary and thus lowering the dissipation of energy in the primary {[40]}.

What Must Be Done for Management and Organization

To meet the critical 2003 “point of no return” milestone, the work must be accomplished under a declared National Emergency and a Presidential Decision Directive. The work must be amply funded, with authority, because of the extreme emergency, to utilize any available patented processes and devices capable of being developed and deployed in time, with accounting and compensation of the inventors and owners handled separately. As an example, two of the above-mentioned devices, the Kawai engine and the magnetic Wankel engine, can be quickly developed and produced en masse. However, they have been seized by the Japanese Yakuza {[41]} {[42]} {[43]} and are being held off the world market. The two devices are quite practical and can be developed and manufactured with great rapidity.
As an example, two models of the Kawai engine were tested by Hitachi and exhibited COP = 1.4 and COP = 1.6, respectively. Use of these two inventions, under U.S. Government auspices, will greatly contribute to solving a significant portion of the transportation power problem, at low risk for this part of the solution. Use of them cannot be obtained by normal civil means, due to the involvement of the Yakuza.

The technical part of the project to solve the energy crisis is doable in the required time — but just barely, and only if we move at utmost speed. Thanks to more than 20 years’ work on unconventional solutions to the problem, much of the required solution is already in hand, and the project can go forward at top speed from the outset.

The remaining management and organization problem is to marshal the necessary great new Manhattan Project as a U.S. government project operating under highest national priority and ample funding. The Project must be a separate Agency, operating directly under the appropriate Department Secretary and reporting directly to the President (through the Secretary) and to a designated Joint Committee of the Senate and the House. The selection of the managers and directors must be done with utmost care; else they themselves will become the problem rather than the solution. We strongly stress that here even the most highly qualified managerial scientist may have to be disqualified because of his or her own personal biases and dogmatic beliefs. Leaders and scientists are required who will run with the COP>1.0 ball on a wide front.

The compelling authority to assign individual tasks to the National Laboratories and other government agencies is required, but under no circumstances can the project be placed under the control of the national laboratories themselves. Those laboratories, such as Los Alamos National Laboratory, Lawrence Livermore National Laboratory, and Oak Ridge National Laboratory, are far too committed to their entrenched Big Science projects and the resulting bias against electrical energy from the vacuum. Assigning management of the project to them would be setting the foxes to minding the hen house, and would guarantee failure. Those agencies whose favored approaches are responsible for the present energy crisis cannot be expected to direct an effective solution that is outside their managerial and scientific ansatz and totally against their institutional and professional biases. If they are allowed to direct the project, then implacable scientists, who adamantly oppose electrical energy from the vacuum from the get-go, will hamstring and destroy the project from its inception. Not only will they fiddle while Rome burns, but they will help burn it.

Enormous EM Energy Flow Is Easily Extracted From the Active Vacuum

There is not now, and there never has been, a problem in readily obtaining as much electromagnetic energy flow from the vacuum as we wish. Anywhere. Anytime. For peanuts. Every electrical power system and circuit ever built already does precisely that {[44]} {[45]}. But almost all the vast EM energy flow that the present flawed systems extract from the vacuum is unaccounted and simply wasted. It is wasted by the conventional, seriously flawed circuits and systems designed and built by our power system scientists and engineers in accord with a terribly flawed, 136-year-old set of electrodynamics concepts and foundations. Specifically, it is wasted because Lorentz discarded it a century ago {45}.
Since then, everyone has blindly followed Lorentz’s lead. Our electrical scientists and engineers have not yet even discovered how a circuit is powered! They have no valid concept of where the electrical energy flowing down the power line actually comes from. They do not model the interaction that provides it {[46]} in their theoretical models and equations.

This vast scientific “conspiracy of ignorance” is completely inexplicable, because the actual source of the EM energy powering the external circuits has been known (and rigorously proven) in particle physics for nearly half a century! However, it has not yet even been added into the fundamental electrical theory used in designing and building power systems. We have a scientific mindset problem of epic proportions, and scientific negligence and electromagnetics dogma of epic proportions. I sometimes refer to this as an unwitting “conspiracy of ignorance”, where I use the word “ignorance” technically as meaning “unaware”. We certainly do not intend the phrase to be pejorative. So we do not have an energy problem per se. We have an unwitting conspiracy of scientific ignorance problem.

Because of its bias, our electrical scientific community also strongly resists updating the 136-year-old electrodynamics foundations, even though much of it is known to be seriously flawed and even incorrect {[47]} {[48]}. Indeed, organized science has always fiercely resisted strong innovation. As Max Planck {[49]} so eloquently put it, a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it. Arthur C. Clarke {[50]} expressed it succinctly for our more modern scientific community, as follows:

“If they [quantum fluctuations of vacuum] can be [tapped], the impact upon our civilization will be incalculable. Oil, coal, nuclear, hydropower, would become obsolete — and so would many of our worries about environmental pollution.”

“Don’t sell your oil shares yet — but don’t be surprised if the world again witnesses the four stages of response to any new and revolutionary development:
1. It’s crazy!
2. It may be possible — so what?
3. I said it was a good idea all along.
4. I thought of it first.”

With respect to extracting and using EM energy from the vacuum, our present scientific community is mostly in Clarke’s phase 1. A few scientists are in phase 2, but surmise that “it may perhaps be the science of the next century.” We do not have a century remaining. We have two and a half years.

For nearly half a century, (i) the active vacuum, (ii) the vacuum’s energetic interaction with every dipole, and (iii) the broken symmetry of the dipole {[51]} in that energetic interaction {55} have been known and proven in particle physics. These proven COP>1.0 vacuum energy mechanisms have not been incorporated into the electrodynamic theory used to design and build electrical power and transportation systems {[52]}. We are still waiting for the “old scientific opponents” — adamantly opposed to the very notion of electrical energy from the vacuum — to “die off and get out of the way.” Hence our universities, the National Science Foundation, the National Academy of Sciences, the National Laboratories, etc. have not taken advantage of the enormous EM energy so universally available from the active vacuum, and in fact universally and copiously extracted from the vacuum by every EM system today — and wasted. Indeed, present organized science will not fund and will not tolerate research that would violate the presently decreed view of power systems and their functioning.
Hence, our present organized scientific community will strongly resist funding of a vigorous program to gather all this proven, known physics together and rapidly use it to change and update (modernize) the terribly flawed EM theory and the design of electrical power systems. Most scientists attempting to do this research have had to proceed on their own. They have undergone vicious and continual ad hominem attacks, lost research funds and tenure, been unable to get their papers published, and in fact risked being destroyed by the scientific community itself {21}.

The bottom line is this: left to sweet reason, because of the depth of its present bias, the scientific community is totally incapable of reacting to the problem in time to prevent the destruction of civilization. If we wish to survive, government will have to directly force the scientific community to do the job, over careers and “dead bodies” (so to speak) if necessary. But first the government itself must be motivated to do so. Only the environmental community has the clout, financial resources, and activists to motivate the government in the extremely short time in which it must be accomplished. So it would seem that the most urgent task is to educate and wake up the environmental community. It has been “had”, and it has been “had” since the beginning.

Understanding What Powers Electrical Circuits

Let us cut through the scientific errors in how electrical power systems are presently viewed. Batteries and generators themselves do not power circuits. They never have, and they never will. They dissipate their available internal energy {[53]} to do one thing and one thing only: forcibly separate their own internal charges to form a “source dipole” {[54]}. Once the dipole has been formed, the dipole directly extracts electromagnetic energy from the active vacuum {[55]}, pouring the extracted EM energy out from the terminals of the battery or generator.

Batteries and generators make a dipole, nothing else. All the fuel ever burned, all the nuclear fuel rods ever consumed, and all the chemical energy ever expended by batteries did nothing but make dipoles. None of all that destructive activity, of itself, ever added a single watt to the power line. Once made, the dipole then extracts EM energy from the seething vacuum and pours it out down the circuit and through all surrounding space around the circuit {56}. A little bit of that energy flow strikes the circuit and enters it by being deflected (diverged) into the wires {57}. That tiny bit of intercepted energy flow that is diverged into the circuit then powers the circuit (its loads and losses) {58}. All the rest of that huge energy flow around the circuit just roars on off into deep space and is wasted.

The Dipole Extracts Enormous Energy from the Vacuum

The outflow of EM energy extracted from the vacuum by a small dipole is enormous. It fills all space surrounding the attached external circuit (e.g., surrounding the power lines attached to a power plant generator) {[56]}. In the attached circuits, the electrical charges on the surfaces of the wires are struck by the mere edge of the violent flow of EM energy passing along those surfaces. The resulting tiny “intercepted” part {[57]} of the EM energy flow is deflected into the wires, very much like placing one’s hand outside a moving automobile and diverting some of the wind into the car. The deflected energy that enters the wires is the Poynting component of the energy flow.
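For reference, the Poynting quantity invoked here is the standard energy flow density of classical electrodynamics (this is textbook material, not a result of the author’s):

\mathbf S = \mathbf E \times \mathbf H

where E is the electric field, H is the magnetic field, and S has units of watts per square meter. In conventional circuit analysis, only the part of S that diverges into the conductors and loads is accounted as delivered power; the author’s contention, to be weighed by the reader, is that this diverged part is a small component of a much larger total flow.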
That Poynting component is not the entire EM energy flow by any means, but only a very, very tiny component of it {[58]}. Only that tiny bit of the energy flow that is actually diverged into the wires is used to power the circuit and the loads. All the rest of the enormous energy flow present and available outside the circuit is just ignored and wasted. A nominal 1-watt generator, e.g., is actually one whose external circuit can “catch” only one watt of its output. The generator’s actual total output — in the great flow which fills all space around the external circuit and is not intercepted and used — is something on the order of 10 trillion watts!

Our Scientists and Engineers Design Dipole-Destroying Systems

Here is the most inane thing of all. Precisely half of the small amount of energy that is actually caught by the circuit is used to destroy the dipole! That half of the intercepted energy does not power the load, nor does it power losses in the external circuit. Instead, it is used to directly scatter the dipole charges and destroy the dipole. Our scientists and engineers have given us the ubiquitous closed current loop circuit {[59]}, which destroys the dipole faster than it powers the load. In short, the scientists and engineers design and build only those electrical power systems that “continuously commit suicide” by continuously destroying the source dipole that is extracting the vacuum energy and emitting it out along the circuit to power everything in the first place.

So now we have the real picture. Every electrical load ever powered, and every load powered today, has been and is powered by electromagnetic energy extracted directly from the seething vacuum by the source dipole in the generator or battery. However, our scientists and engineers design and build electrical power systems that intercept and use only a tiny fraction of the vast EM energy flow available. They also only design and build systems that destroy their source dipole faster than they power their loads. If one does not destroy the dipole once it is made, it will continue to freely extract copious EM energy flow from the vacuum, indefinitely, pouring out a stupendous flow of EM energy. As an example, dipoles in the original matter formed in the Big Bang at the beginning of the universe have been steadily extracting EM energy from the vacuum and pouring it out for about 15 billion years.

The energy problem is not due to any inability to produce copious EM energy flows at will — as much as one wishes, anywhere, anytime. Every dipole already does this, including the dipole in every EM power system ever built. The energy problem is due to the complete failure to (i) intercept and utilize more of the vast energy flows made available by the common dipole, and (ii) do so without using the present inanely designed circuits, which use half their collected energy to destroy the dipole that is extracting the energy flow from the vacuum in the first place. This is part of the “conspiracy of scientific ignorance” mentioned earlier.

Ignoring the Vacuum as the Source of Electrical Energy in All Circuits

In their conventional theoretical models, our present electrical power system scientists and engineers do not even include the vacuum interaction or the dipole’s extraction of EM energy from the vacuum. They simply ignore — and do not model — what is really powering every electrical system they build.
Consequently, we reiterate that our electrical scientists have never even discovered how an EM circuit is powered — although it has been discovered and known for nearly 50 years in particle physics. All the hydrocarbons ever burned, all the water ever run over all the dams, and all the nuclear fuel rods ever expended in all the nuclear power plants added not a single watt to the power line. Instead, all that expense, effort, pollution, and destruction of the biosphere was and is necessary only to keep adding internal energy to the generator — so that it can keep continually rebuilding its source dipole, which is continually destroyed by the inane circuits that the power system scientists and engineers keep designing and building for us.

It takes as much energy input to the generator to restore the dipole as it took the circuit to destroy the dipole. Thus all the systems our scientists and engineers design and build require that we continually input more energy to restore the dipole than the circuit dissipates in the load. Our technical folks thus happily design and give us systems which can and will only exhibit COP<1.0 — thus continuing to require that we ourselves steadily provide more energy to the system to continually rebuild its dipole than the inane, masochistic system uses to power its load. In short, we pay the power companies (and their scientists and engineers) to deliberately engage in a giant wrestling match inside their generators and lose. That is not the way to run the railroad!

One is reminded of one of the classic comments by Churchill: “Most men occasionally stumble over the truth, but most pick themselves up and continue on as if nothing had happened.” It seems that not very many energy system scientists and engineers have “stumbled over the truth” as to what really powers their systems, and how inanely they are really designing them.

Electrical Energy Required from Hydrocarbon Burning Drives the Problem

The heart of the present environmental pollution problem is the ever-increasing need for electrical energy obtained from the burning of hydrocarbon fuels and/or nuclear power stations. The increasing production of electrical power to fill the rising needs increasingly pollutes the environment, including the populace itself (lungs, bodies, etc.). Almost every species on earth is affected, and as a result some species become extinct every year. Environmental pollution includes pollution of the soil, fresh and salt water, and the atmosphere by a variety of waste products. Given global warming, it also includes excess heat pollution in addition to chemical and nuclear residues. Under present procedures, the electrical energy problem is exacerbated by decreasing available oil supplies, which are believed to have peaked this year, with a projected decline from now on.

But really, the electrical energy problem is due to the scientific community’s adamant defense and use of electrical power system models and theories that are 136 years old {[60]} in their very foundations. These models and theories are riddled with errors and non sequiturs, and are seriously flawed. The scientific community has not even recognized the problem, much less the solution. In fact, it does not even intend to recognize the problem, even though the basis for it has been known in particle physics for nearly 50 years.
As Bunge {[61]} put it some decades ago: “…it is not usually acknowledged that electrodynamics, both classical and quantal, are in a sad state.” The scientific community has done little to correct that fundamental problem since Bunge made his wry statement.

Let us put it very simply. The most modern theory today is modern gauge field theory. In that theory, freedom of gauge is assumed from the get-go. Applied to electrodynamics, this means — as all electrodynamicists have assumed for the last century or longer — that the potential energy of an EM system can be freely changed at will. In other words, in theory it costs nothing at all to increase the EM energy collected in a system; this is merely “changing the voltage”, which does not require power. In other words, we can “excite” the system with excess energy (actually taken from the vacuum), at will. For free. And the best science of the day agrees with that statement. It also follows that we can freely change the excitation energy again, at will. In short, we can dissipate that excess energy freely and at will, without cost.

Well, this means that we are free — by the laws of nature, physics, thermodynamics, and gauge field theory — to dissipate that free excess potential energy in an external load, thus doing “free work”. Since none of the systems our energy scientists and engineers build for us are doing that, it follows a priori that the fault lies entirely in their own system design and building. It does not lie in any prohibition by nature or the laws of physics. A priori, then, the present COP<1.0 performance of our electrical power systems is a monstrosity and the direct fault of our scientists and engineers. We cannot blame the laws of nature or the laws of physics.

The present energy crisis, then, is due totally to that “conspiracy of ignorance” we referred to. It is maintained by the scientific community today, and it has been maintained for more than 100 years. This is the real situation that the environmentalists must become aware of, if they are to see the correct path into which their energies and efforts should be directed — to solve both the energy crisis and the problem of gigantic pollution of the biosphere.

Outside Intervention Must Forcibly Move Energy Science Forward

Unless outside intervention forcibly occurs, the scientific community’s lock-up of research funds for “in the box” energy research may result in the economic collapse of the Western World in perhaps as little as eight years. Let us examine the gist of the problem facing us. Suppose we launch a crash program to develop, manufacture, deploy, and employ the new “vacuum powered” systems. Once the new self-powering systems are developed and ready to roll off the production lines en masse, it will require a minimum of five years worldwide to sufficiently alter the “electrical energy from oil” demand curve so that economic collapse can be averted. In turn, this means that the new systems must be ready to roll off the manufacturing lines by the end of 2003. While this is a very tight schedule, it can be done if we move rapidly. The necessary scientific corrections along the lines indicated in this paper can be quickly applied to solve the electrical energy problem permanently and economically, given a Manhattan type project under a Presidential Decision Directive together with a Presidential declaration of a National Energy Emergency.
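As a reference point for the “freedom of gauge” argument above: the gauge freedom in question is the standard potential transformation of classical electrodynamics (textbook material; only the author’s energetic interpretation of it is at issue). For an arbitrary scalar function χ,

\mathbf A \rightarrow \mathbf A + \nabla \chi, \qquad \phi \rightarrow \phi - \frac{\partial \chi}{\partial t}

leaves the force fields E and B unchanged. The “Lorentz symmetrical regauging” repeatedly referred to in this paper is the particular choice of χ that enforces the familiar Lorenz/Lorentz condition

\nabla \cdot \mathbf A + \frac{1}{c^2} \frac{\partial \phi}{\partial t} = 0

which decouples and greatly simplifies the potential equations. The author’s argument is that routinely imposing this condition restricts engineering practice to the symmetrical subset of Maxwellian systems.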
In a paper {[62]} to be published in Russia in July 2000, this researcher has proposed some 15 viable methods for developing new “self-powering” systems which power themselves and their loads with energy extracted from the vacuum. Several of these systems can be developed very rapidly, and can be easily mass-produced. A second paper {[63]} will be published in the same proceedings, revealing the Bedini method for invoking a negative resistor inside a storage battery. The negative resistor freely extracts vacuum energy and adds it to both the battery-recharging function and the load-powering function. In Bedini’s negative resistor method, the ion current inside the battery is decoupled (dephased) from the electron current between the outer circuit and the external surfaces of the battery plates. This allows the battery to be charged (with increased charging energy) at the same time as the load is powered with increased current and voltage. At my specific request, both papers were thoroughly reviewed by qualified Russian scientists, and the premises passed review successfully. A third paper {[64]} gives the exact giant negentropy mechanism by which the dipole extracts such enormous energy from the vacuum. We further explain that mechanism below.

Conventional Approaches: Too Little, Too Late

It appears that the Environmental Community itself has finally realized that the present scientific approaches and research are simply too little and too late. Further, the conventional approaches are largely “in the box” thinking applied to an “out of the box” problem. We leave it to others such as Loder {[65]} to succinctly summarize the shortfalls of these present solutions. Loder, e.g., particularly and incisively explains how the problem breaks down for automobiles.

In fact, no single COP>1.0 approach will suffice for everything. Several solutions, each for a different application, must be developed and deployed simultaneously. As an example, it is possible to create certain dipolar phenomena in plasmas produced in special burners, such that the dipoles extract substantial excess EM energy from the vacuum. Output of the excess energy produces ordinary excess heat well beyond what the combustion process alone will yield. Given a Manhattan type project, the inventor of that process (who already has working models and rigorous measurements) could rapidly be augmented with the resources to develop a series of replacement burners (heaters). They could be used in existing electrical power plants to heat the water to make the steam for the steam turbines turning the shafts of the generators. The entire remainder of the power system, grid, etc. could be left intact. Some fuel would still be burned, but far less would be consumed to furnish the same required heat output. In short, a rather dramatic reduction in power plant hydrocarbon combustion could be achieved — in the present electrical power plants with minimum modification, and in the necessary time frame — while maintaining or even increasing the electrical energy output of the power systems. We believe the inventor would fully participate in a government-backed Manhattan type energy program where a National Emergency has been declared, given a U.S. government guarantee that his process, equipment, and inventions will not be confiscated {[66]}.

Another process capable of quick development and enormous application is the development of point-contact transistors as true negative resistors {39}.
Two other processes that can be developed for massive production in less than two years are (i) the Kawai process {27}, and (ii) the magnetic Wankel process {28}. In addition, the Johnson {29} process can be developed and readied for manufacture in the same time frame, given a full-bore sophisticated laboratory team. There are other processes {[67]} {62} {63} which can also be developed rapidly, to provide major contributions in solving their parts of the present “electrical energy from hydrocarbon combustion” problem.

Giant Negentropy and a Great New Symmetry Principle

We now summarize some recent technical discoveries by the present author that bear directly upon the problem of extracting and using copious EM energy flows from the vacuum. Any dipole has a scalar potential between its ends, as is well known. Extending earlier work by Stoney {[68]}, in 1903 Whittaker {[69]} showed that the scalar potential decomposes into — and identically is — a harmonic set of bidirectional longitudinal EM wavepairs. Each wavepair is comprised of a longitudinal EM wave (LEMW) and its phase conjugate LEMW replica. Hence, the formation of the dipole actually initiates the ongoing production of a harmonic set of such biwaves in 4-space {[70]}.

We separate the Whittaker waves into two sets: (i) the convergent phase conjugate set, in the imaginary plane, and (ii) the divergent real wave set, in 3-space. In 4-space, the 4th dimension may be taken as –ict. The only variable in –ict is t. Hence the phase conjugate waveset in the scalar potential’s decomposition is a set of harmonic EM waves converging upon the dipole in the time dimension, as a time-reversed EM energy flow structure inside the structure of time {[71]}. Or, one can just think of the waveset as converging upon the dipole in the imaginary plane {[72]} — a concept similar to the notion of “reactive power” in electrical engineering. The divergent real EM waveset in the scalar potential’s decomposition is then a harmonic set of EM waves radiating out from the dipole in all directions at the speed of light. As can be seen, there is perfect 4-symmetry in the resulting EM energy flows, but there is broken 3-symmetry, since there is no observable 3-flow EM energy input to the dipole.

Our professors have taught us that output energy flow in 3-space from a source or transducer must be accompanied by an input energy flow in 3-space. That is not true. It must be accompanied by an input energy flow, period. That input can be an energy flow in the 4th dimension, time — or we can consider it as an inflow in the imaginary plane. It is the flow of energy that must be conserved, not the dimensions in which the flow exists. There is no requirement by nature that the inflow of EM energy must be in the same dimension as the outflow of EM energy. Indeed, nature prefers to do it the other way! Simply untie nature’s foot from the usually enforced extra condition of 3-space energy flow conservation. Then nature joyfully and immediately sets up a giant 4-flow conservation, ongoing. Enormous EM energy is inflowing from the imaginary plane into the source charge or dipole, and is flowing out of the source charge or dipole in 3-space, at the speed of light, and in all directions. In other words, nature then gladly gives us as much EM energy flow as we need, indefinitely — just for paying a tiny little bit initially to “make the little dipole.” After that, we never have to pay anything again, and nature will happily keep on pouring out that 3-flow of EM energy for us.
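As a schematic illustration only, the harmonic “biwave” structure the author attributes to Whittaker’s 1903 decomposition may be sketched, for the potential at radius r from the dipole, as

\phi(r,t) \sim \sum_{n=1}^{\infty} \frac{a_n}{r} \left[ e^{\,i(k_n r - \omega_n t)} + e^{-i(k_n r - \omega_n t)} \right], \qquad k_n = \frac{\omega_n}{c}

where each bracketed pair couples an outgoing wave with its converging phase conjugate replica, and the amplitudes a_n and harmonic frequencies ω_n are illustrative placeholders rather than quantities derived here. The reader should consult Whittaker’s original paper {[69]} for the actual decomposition; this sketch is offered only to make the wavepair language above concrete.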
This is the giant negentropy mechanism I uncovered, performed in the simplest way imaginable: just make an ordinary little dipole. We may interpret the giant negentropy mechanism in electrical engineering terms {[73]}. The EM energy flow in the imaginary plane is just incoming “pure reactive power”, in the language of electrical engineering. The outgoing EM energy flow in the real plane (3-space) is “real power” in the same language. So the dipole is continuously receiving a steady stream of reactive power, transducing it into real power, and outputting it as a continuous outflow of real EM power.

Further, there is a perfect 1:1 correlation between the convergent waveset in the imaginary plane and the divergent waveset in 3-space. This perfect correlation between the two sets of waves and their dynamics represents a deterministic re-ordering of a fraction of the 4-vacuum energy. This re-ordering, initiated by the formation of the dipole, spreads radially outward at the speed of light, continuously. This clearly shows that (i) we can initiate re-ordering of a usable fraction of the vacuum’s energy at any place, anytime, easily and cheaply (we need only form a simple dipole), and (ii) the process continues indefinitely, so long as the dipole exists, without the operator inputting a single additional watt of power. This is a very great benefit. So long as the dipole exists, this re-ordering continues and a copious flow of observable, usable EM energy pours from the dipole in all directions at the speed of light. This is the full solution to the first half of the energy crisis, once and for all.

Ansatz of the Major Players

To appreciate the difficulty in implementing the solution to the energy crisis, one must be aware of the characteristics of the major communities whose dynamics and interactions determine the outcome. Accordingly, we summarize our personal assessment of the present “status” and “awareness” of the various communities involved. We do that by attempting to express the overall “ansatz” of each specific community.

Scientific Community

For the most part, the organized scientific community varies from highly resistant to openly hostile toward any mention of extracting copious EM energy from the active vacuum. The “Big Nuclear” part of the community is particularly adamant in this respect, as witness its ferocious onslaught on the fledgling and struggling cold fusion researchers — a ferocity of scientific attack seldom seen in the annals of science {[74]} {[75]}. The scientific community also largely suppresses {[76]} or severely badgers scientists attempting to advance electrodynamics to a more modern model, suitable to the needs of the 21st century and the desperate worldwide need for cheap, clean, nonpolluting electrical power {21}.

The community still applies classical equilibrium thermodynamics to the electrical part of all its electrical power systems, even though every EM system is inherently a system far from equilibrium with the active vacuum environment, and a different thermodynamics applies. Only if the system is specifically so designed — e.g., so that during the dissipation of its excitation energy it enforces the Lorentz symmetrical regauging condition — will the system behave as a classical equilibrium system. The thermodynamics of open dissipative systems is well known {[77]}.
Such a system is permitted to:

(1) self-order,
(2) self-oscillate or self-rotate,
(3) output more energy than the operator inputs (the excess energy is freely received from the active environment),
(4) power itself and its load simultaneously (all the energy is taken from the active environment, similar to a windmill’s operation), and
(5) exhibit negentropy.

Our present electrical power systems do none of these five things, even though each is an open system in violent energy exchange with the vacuum. A priori, that reveals it is the scientific model and the engineering design that are at fault. It is not any law of nature or principle of physics that prevents self-powering open electrical power systems. Instead, it is the scientific community and its prevailing mindset against extracting and using EM energy from the vacuum.

Environmental Community

In the past, the environmental community has been overly naïve with respect to physics, and particularly with respect to electrical physics. Its science advisors have come mostly from the conservative “in the box” scientific community. Hence, the community has failed to realize that COP>1.0 electrical power systems are normal and permitted by the laws of nature and the laws of physics. They have no inkling that Heaviside discovered — in the 1880s! — the enormous unaccounted EM energy pouring from the terminals of any battery or generator. They are unaware that Poynting considered only the tiny component of the energy flow that enters the circuit. They are also unaware that, completely unable to explain the astounding enormity of the EM energy flow if the nondiverged (nonintercepted) Heaviside component is accounted, Lorentz {18} just arbitrarily used a little procedure to discard that troublesome Heaviside “dark” (unaccounted) component. Lorentz reasoned that, since the huge dark energy flow component missed the circuit entirely, it “had no physical significance.”

This is like arguing that none of the wind on the ocean has any physical significance, except for that small portion of the wind that strikes the sail of one’s own sailboat. It ignores the obvious fact that whole fleets of additional sailboats can also be powered by that “physically insignificant” wind component that misses one’s own sailboat entirely. Nonetheless, electrodynamicists continue to use Lorentz’s little discard trick, and try to call the feeble Poynting energy flow component caught by the circuit the entire EM energy flow connected with it. This is like arguing that the component of wind hitting the sails of one’s own sailboat is the entire great wind on the ocean.

As a result, the environmental community has failed to grasp the technical reason for the energy crisis and the increasing pollution of the biosphere. They have been deceived and manipulated into thinking that conventional organized science is giving them the very best technical advice possible on electrical power systems. The environmentalists have been and are further deceived into believing that the conventional scientific community is advocating and performing the best possible scientific studies and developments for trying to solve the energy crisis.
Of major importance, the environmental community itself has been deceived as to the exact nature of the energy flow in and around a circuit, the vastness of the unaccounted energy flow (or even that any of the energy flow is deliberately unaccounted), and the fact that this present but unaccounted EM energy flow can be intercepted and captured for use in powering loads and developing self-powering systems. Worst of all, the environmental community has been deceived as to what powers every electrical load and EM circuit. They have been deceived into believing that burning all those hydrocarbons, using those nuclear fuel rods, building those dams and windmills, and putting out solar cell arrays are necessary and the best that can be done. In short, they have been smoothly diverted from solving the very problem — the problem of the increasing pollution and destruction of the biosphere — they are striving to rectify. However, their continued demonstrations in the street show that many environmentalists now suspect that much of the world’s continued policy of “the rich get richer and the poor get poorer” in international trade agreements is deliberately planned and implemented {[78]}. They perceive the implementation to be to the advantage of a favored financial class and to rest on the exploitation of the poorer laboring classes in disadvantaged nations.

Electrical Power Community

The electrical power community:

(1) ubiquitously uses equilibrium thermodynamics, believing that COP>1.0 is perpetual motion nonsense and against the laws of physics,
(2) has no notion that the energy flowing down their power lines and filling all space around them is extracted directly from the active vacuum by the source dipole in the generator,
(3) erroneously believes that the hydrocarbons they burn, the water through the hydroturbines at the dam, or the nuclear fuel rods they consume actually add the power to the transmission lines,
(4) uses half of the tiny component of energy caught by the power lines to destroy the source dipoles in their generators, thus requiring ever more shaft input energy via a steam turbine, hydroturbine, etc.,
(5) believes that energy can be “used” only once, when in fact it can be used and re-used repeatedly, since it cannot be created or destroyed,
(6) allows only a single pass of the EM energy flow down the power lines, so that only one tiny interception of energy occurs from the energy flow and the rest (most) of the energy flow is wasted,
(7) believes that the electrical energy problem translates into more hydrocarbon combustion or nuclear fuel rod consumption, rather than a totally different way of doing business, and
(8) believes that the theory it applies is correct, when in fact it is so seriously flawed as to be inane, and has been for a century.

Industries also acquire their own hidden agendas when serious threats to the industries arise. As an example, a potentially serious problem arose some decades ago when it became apparent that EM radiation from power lines might detrimentally affect people, or at least some people. To put it gently, a great deal of fuss and fury resulted, and a great deal of money was and is spent by the power companies (or through organizations and foundations funded by them) on EM bioeffects research. Not too surprisingly, just about the entire output of this industry-funded research “finds” that there is no problem with powerline radiation {[79]}.
Those scientists, such as Robert Becker {[80]} {[81]}, who advocate or show otherwise usually wind up having all their funds cut off, being hounded from their jobs, and — in the case of Becker — being forced to retire early. It is no different in the electrical energy science field {21}.

Storage Battery Companies

Battery companies are primarily of much the same outlook and ansatz as the power companies. They have gone to pulse charging of batteries and improved battery chemistry and materials {[82]}. They have no notion that batteries do not power circuits, but only make source dipoles — and that it is the source dipole which then extracts EM energy from the vacuum and pours it out into the external circuit. Consequently, they erroneously believe that chemical energy in the battery is expended in order to provide power to the external circuit. Instead, it is expended only to continuously remake the source dipole, which the closed current loop circuit fiendishly keeps destroying faster than the load is powered. They also have not investigated deliberately dephasing and decoupling the major ion current within the battery and between the plates from the electron current between the outside of the plates and the external circuit. Consequently, they have no concept of permissible Maxwellian COP>1.0 battery-powered systems. Instead, battery companies, scientists, and engineers still believe — along with the power companies, most electrodynamicists, and the environmental community — that applying the Lorentz symmetrical regauging to the Heaviside-Maxwell equations retains all the Maxwellian systems. It does not. Instead, it arbitrarily discards all the Maxwellian systems which are permitted by the laws of nature and the laws of physics to produce COP>1.0!

University Community

The university community mostly supports the prevailing EM view. It also suffers from the rise of common “greed” in the universities themselves. The professor now must attract external funding (for his research and for his graduate students — and especially for the lucrative “overhead” part of the funding, which goes to the university itself). The research funds available for “bidding” via submitted proposals are already cut into “packages”, where the type of research to be accomplished in each package is rigorously specified and controlled. Research on COP>1.0 systems is strictly excluded. Dramatic revision of electrodynamics is excluded. Unless the professor successfully bids for and obtains packages and their accompanying funding, he is essentially ostracized and soon discharged or just “parked” by the wayside. Also, if he tries to “go out of the box” in his papers submitted for publication, his peer reviewers will annihilate him and his papers will not be published. Shortly he will effectively be blacklisted, and it will be very difficult for him to have his submitted papers honestly reviewed, much less published. Again, that means no tenure, no security, and eventual release or “dead-end parking” by the university.

When one looks at the “innovative” packages so highly touted, they either (1) are research focused upon some approved thing such as hot fusion — which has spent billions, has yet to produce a single watt on the power line, and cannot do so in any reasonable time before the collapse of the Western economy — or (2) use clever buzzwords for things which are actually “more of the same” and “in the box” thinking, with just some new words or twists thrown in for spin control.
Meanwhile, all this makes for a self-policing system which rewards conservatism — conservative publications, conservative research, conservative thinking, conservative teaching, etc. In short, it selects and approves electrical power system research that is “too little, too late” to solve the world energy crisis in time, and ruthlessly rejects all the rest. It also makes for a self-policing system which roots out and destroys (or parks on the sidelines) those professors, graduate students, and post-docs who — given a chance to be highly innovative and “out of the box” researchers — might upset the status quo. In short, the scientific community is itself the greatest arch foe of high innovation, just as Planck indicated. The university generally typifies and reflects that overall attitude, because its outside research funds are controlled and managed by the upper echelons of the organized Big Science community and the governmental community.

Government Community — Technical

The technical part of the U.S. government research community is drawn from the universities, private industry, etc. It mostly reflects an even more conservative group than the universities. Again, papers published and funding are the major requirements, within given and largely accepted scientific constraints. Further, the managerial government scientists must compete for funding, annual budgets, etc., and have their own “channel” constraints from on high. At the top levels (such as NSF and NAS), cross-fertilization by the aims and perceptions of the conservative scientific community leaders is achieved.

A case in point is the early treatment of ultrawideband (UWB) radar research, which was for years violently attacked by the orthodox community. The real reasons for the violent attacks were the prestige and power of the Stealth community at the time — and because UWB radar had the implication of readily tracking Stealth vehicles. Interestingly, the arch foes of UWB at the time would today have us believe they are “staunch experts” in the UWB field. To understand their remarkable metamorphosis, one need only recall Arthur C. Clarke’s words, quoted earlier.

In the COP>1.0 EM energy field, we are still rather much at the stage where the UWB researchers started. We are still in the “violent attack, personal insults, character assassination, slander, libel, etc.” stage. Sadly, such ad hominem savagery is by scientists who themselves have no notion of how electromagnetic circuits are actually powered, and who — like ostriches — still have their heads buried in the sand back there in the 1880s, when Lorentz discarded the enormous Heaviside energy flow component.

Government Community — Non-Technical

Here we have a rather mixed situation. The non-technical person — e.g., a Senator or a Congressperson — is operating under a distinct disadvantage. In taking the stance that much better electrical power systems can readily be achieved, he or she is in fact opposing almost the entire set of the University, Government Technical, Power Company, Battery Company, and Organized Science communities. Further, in most cases his or her technical advisors are themselves from one or another of those communities, and likely to go back into those communities when the Senator or Congressperson leaves office, or even before. So the Congress and the non-technical government community at large operate at a great disadvantage. As an example, admittedly there are some very misguided unorthodox energy system inventors and scientists out there who, in the guise of furthering COP>1.0 systems, actually contribute to the problem rather than to the solution.
A few do not even realize that they cannot properly measure a “spiky” output with an RMS meter! (A typical meter calibrated to read RMS for sine waves can grossly misread the average power of a spiky, high-crest-factor waveform.) Some are also more interested in selling “dealerships” and “stock” than in furthering the science of COP>1.0 systems. Few have submitted their purported COP>1.0 devices to rigorous testing by an independent, government-certified test laboratory {[83]}. This “noise” seriously dilutes the unconventional scientific community’s legitimate efforts in COP>1.0 systems. By playing up such “dilution” and accenting “the crazies”, the orthodox scientific community often convinces government non-technical managers and personnel that the unorthodox scientific COP>1.0 community is comprised only of lunatics, charlatans, stock-scam artists, and misguided crank inventors. Such of course is not the case. A goodly number of reputable, skilled scientists are seriously struggling with the problems of developing COP>1.0 EM power systems and devices. A few are also struggling to develop an adequate theory of such systems. Progress has been and is slowly being made, in spite of the harassment {[84]}.

The independent assessments that Congress once enjoyed with the OTA are no more, because the OTA was abolished. Now the committees, subcommittees, and individual Congresspersons and Senators are largely on their own, with their own staffs and their own technical advisors. That said, it can nonetheless be seen by savvy Senators and Congresspersons that the U.S. Ship of State is headed for a great economic bust, and probably the greatest one of all time. The Government Non-Technical community (the Senate and the Congress, in particular) is in far better shape than the Government Technical community to appreciate the world implications of the pending economic disaster. I am hopeful that both the environmentalists and the Government Non-Technical community will rapidly unite in a common goal to get this vacuum energy program launched, under a National Emergency declaration. If so, then they can solve the energy crisis and the pending economic crisis in fairly short order, and permanently.

In Conclusion

There is an even more ominous specter looming behind the shadow of the coming great economic collapse. When national economies get strained to the breaking point — with some of them failing worldwide as the price of oil escalates — the conflicts among nations will increase in number and grow in intensity. About a year or so ahead of the “Great Collapse” of the world economies, the intensity and desperation of the resulting national conflicts will have increased to the breaking point. Some 25 nations already have weapons of mass destruction (WMD) — including nuclear warheads; missile, aircraft, boat, and terrorist delivery systems; biological warfare weaponry; and other advanced weapons {9} {10}, etc. {[85]} {[86]}. Any knowledgeable person knows that hostile terrorist agents are already on site here in the U.S. {[87]}, and some will have smuggled in their WMDs. It is not too difficult to surmise that some of those missing Russian “suitcase nukes” probably wound up right here in the U.S., hidden in our population centers {[88]}. Or that some of Saddam Hussein’s large stock of anthrax has been spirited into the U.S. as well. As is well known, the threat from weapons of mass destruction is now officially recognized as the greatest strategic threat facing the U.S. It is not a matter of if the WMD weapons will be unleashed, but when.
If one transposes that recognized escalating WMD threat onto the escalating economic pressures worldwide, then another factor comes into play — the dark side of the Mutual Assured Destruction (MAD) concept. We have opted (at least to date) not to defend our populace. The U.S. government has deliberately placed U.S. population centers in a defenseless situation, so that their destruction is “assured” once the WMD balloon really goes up. The insanity of the MAD concept is revealed when war preparations by many nations start to be perceived — as they will be, when the conflicts intensify sufficiently and the looming economic collapse tightens the cinch on the nations of the world. Without any protection of its populace, a defending nation has to fire on perception of nuclear preparations by its adversaries, if that nation is to have even the slightest chance of surviving. At about that 2007 date, when a nation sees its adversaries preparing WMD and nuclear assets for launch or use in ongoing intense conflicts, at some point that nation must pre-empt and fire massively, or accept its own “assured destruction”. The only question in MAD is whether the assured destruction shall be mutual or solitary. So one or more nations will fire, immediately moving all the rest into the “fire on perception” mode. Very rapidly, the situation then escalates to the all-out worldwide exchange so long dreaded. This massive exchange means the destruction of civilization itself, and probably much of the entire biosphere, for decades or centuries. Such escalation from one or more initial nuclear firings has been shown for decades by all the old strategic nuclear studies. It is common knowledge to strategic analysts, unless one engages in wishful thinking.

Eerily, this very threat now looms in our not too distant future, due in large part to the increasing and unbearable stresses that escalating oil prices will elicit. So about seven years or so from now, we will enter the period of the threat of the Final Armageddon, unless we do something very, very quickly now to totally and permanently solve the present “electrical energy from oil” crisis. This is really why we must have a National Emergency proclamation and a Manhattan Project. Mass manufacturing, deployment, and employment of replacement electrical power systems must begin in earnest in early 2004. In my estimate, the point of no return for developing the self-powering replacement systems is about the end of 2003. If by early 2004 we do not have multiple types of vacuum-energy powered systems rolling off the assembly lines en masse, then we shall overshoot the point of no return. In that case, it matters not whether the systems then become available or not. They will be too late to prevent the great Armageddon and the destruction of civilization.

Personally, the present author regards the increasing energy crisis as the greatest strategic threat to the United States in its entire history. I will do anything within my power to help prevent what I perceive to be the looming economic collapse of the Western world, preceded or accompanied by a sudden, explosive, all-out and continuing exchange of the WMD arsenals of most of the world. We can still meet this early 2004 production deadline. It is difficult, but it is definitely doable at this time. We must do it, and we must do it now.
Else the technology for electrical energy from the vacuum will also be “too little, too late.” In that case, not only the world economy but civilization itself will likely be destroyed — not 100 years from now, not 50 years from now, but in less than one decade from now. In the name of all humanity, let us begin! Else, by the time this first decade of the new millennium ends, much of humanity may not remain to see the second decade.

References and Notes

[1] Tesla, Nikola, “The Problem of Increasing Human Energy,” Century, June 1900.
[2] “The World Bank and the G-7: Changing the Earth’s Climate for Business,” Ver. 1.1, Aug. 1997, IPS.
[3] Keeling et al., “Seasonal and interannual variation in atmospheric oxygen and implication for the global carbon cycle,” Nature, Vol. 358, Aug. 27, 1992, p. 354.
[4] Vinnikov, Science, Dec. 3, 1999, p. 1934.
[5] Linden, Eugene, “The Big Meltdown,” TIME, Sept. 4, 2000, p. 53.
[6] Brown, Lester, et al., State of the World, Worldwatch Institute, 1999, p. 25, citing U.N. 1997 report.
[7] Epstein, Paul, “Is Global Warming Harmful to Health?” Scientific American, August 2000, p. 50.
[8] ibid., p. 57.
[9] Brown, p. 26.
[10] ibid., p. 25.
[11] Annual Energy Outlook, DOE Energy Information Administration, EIA-X035.
[12] Brown, p. 25.
[13] Valone, Thomas, “Future Energy Technologies,” Proceedings of the Annual Conference of the World Future Society, 2000.
[14] US DOE Energy Information Administration, Energy INFOcard, 1999.
[15] Future Energy: Proceedings of the First International Conference on Future Energy, Integrity Research Institute, 1999, CD-ROM.

[1]. And of course it is said to be accidental that all the manipulative measures and profit-taking happen to coincide with the large increase in demand in the U.S. during the summer vacation and tourist months.

[2]. E.g., see F. Gregory Gause III, “Saudi Arabia Over a Barrel,” Foreign Affairs, 79(3), May/June 2000, p. 80-94. Quoting, p. 82: “Saudi oil policy is now driven primarily by the immediate revenue needs of a government struggling to maintain a welfare state designed in the 1970s — when money seemed limitless and the population was small — for a society with one of the world’s fastest-growing populations.” Our comment is that the financial disarray of the Saudis is seen by Gause as a need to get Saudi Arabia into the World Trade Organization — in other words, into the clutches of globalization. For a resounding exposé of the WTO, see Lori Wallach and Michelle Sforza, Whose Trade Organization? Corporate Globalization and the Erosion of Democracy, published by Public Citizen Foundation and available by order from the web at http://www.globaltradewatch.org. Wallach and Sforza reveal and document the machinations of the World Trade Organization as an instrument of globalization and usurpation of national rights. The WTO is only one of many organizations prepared by the High Cabal (Winston Churchill’s term) to establish the return, for much of the world, to a version of the old feudal capitalism, where national governments posed no checks and balances and workers had no rights or benefits.

[3]. NAFTA stands for North American Free Trade Agreement, passed by Congress in 1993, creating a trade and investment region consisting of Canada, the United States, and Mexico. GATT stands for General Agreement on Tariffs and Trade (Uruguay Round) in 1994, which created the World Trade Organization (WTO).
Other such agreements set in place to initiate world globalization financial control over nations include or have included the MAI (Multilateral Agreement on Investment) and the OECD (Organization for Economic Co-operation and Development), in which many of the “secret” agreements are prepared and then scurried through passage by “fast track” means, where the Congress allows the President to negotiate trade agreements that are then voted on by the Congress without amendment. Quoting Moisés Naím, “Lori’s War,” Foreign Policy, Vol. 118, Spring 2000, p. 35: “…’fast track’ is the legislative legerdemain under which Congress allows the president to negotiate trade agreements that are then voted on without amendments. Without it, the White House has no guarantee that lawmakers will not seek to change the terms of trade agreements reached after lengthy trade talks.” Our comment is that there should be no such guarantee to the White House, since the Congress consists of our duly elected representatives — elected precisely for the purpose of representing the U.S. public rather than the administration. The “fast track” ploy is one way of bypassing full Congressional discussion, examination, etc., so that the desired globalization control measures can be “sneaked through” without a rigorous examination of their provisions. In this way, national authority and constitutional provisions can gradually be undermined by a continuing series of such sneak actions.

[4]. According to the International Labour Organization, some 250 million boys and girls between the ages of five and 14 are exploited in hazardous work conditions. Most of these children live in the developing world — although in industrialized countries such as the United States, hundreds of thousands of underage boys and girls are at work in sweatshops, farm fields, brothels, and on the street. E.g., see Sandy Hobbs, Michael Lavalette, and Jim McKechnie, Child Labor, ABC-CLIO, Inc., 1999. For a poignant visual and verbal tour through the problem, see Russell Freedman and Lewis Hine, Kids at Work: Lewis Hine and the Crusade Against Child Labor, Houghton Mifflin, Aug. 1994. The United Nations also has several publications on the problem and its extent.

[5]. As one example, the Russian mafia, together with the GRU and the KGB under its new name, are the dominant factors in Russia, Russian business, and the Russian side of relations between the U.S. and Russia. See particularly Stanislav Lunev and Ira Winkler, Through the Eyes of the Enemy: Russia’s Highest Ranking Military Defector Reveals Why Russia Is More Dangerous Than Ever, Regnery, Washington, D.C., 1998. Quoting p. 12: “When the Soviet Union collapsed and its industries were privatized, there was only one group within Russia with the money to buy the new industries, and that was the Russian mafia. But the mafia did more than buy the industries — it bought the government.” Quoting p. 13: “The Cold War is not over; the new Cold War is between the Russian mafia and the United States.” Quoting p. 14: “The Soviet Union did not collapse because of ‘reform minded leaders’ or because of the Reagan administration’s brilliantly aggressive strategy (though that strategy played a part). The truth is that the Russian mafia caused the collapse. Soviet ‘reform’ was nothing more than a criminal revolution.”

[6]. As another example, the Japanese Yakuza has penetrated most large Japanese corporations, including Japanese banking, to include the national Japanese bank.
[6]. As another example, the Japanese Yakuza has penetrated most large Japanese corporations, including Japanese banking up to and including the national Japanese bank. E.g., see Michael Hirsh and Hideko Takayama, “Big Bang or Bust?”, Newsweek, Sept. 1, 1997, p. 44-45. Some $300 billion or more were extracted by the Yakuza from the Japanese taxpayers in a great land scandal. Japan’s banks loaned billions to Yakuza-affiliated real-estate speculators, and the Yakuza would not repay the funds. The banks were literally too terrified to collect on the $300-600 billion in bad debt that ensnared the banking system. E.g., when Sumitomo Bank got a little aggressive in collecting loans in Nagoya, its branch manager was killed. For a summary of this scandal, see Brian Bremner, “How the Mob Burned the Banks: The Yakuza is at the center of the $350 billion bad-loan scandal,” Business Week, Jan. 29, 1996, p. 42-43, 46-47. The Japanese government — i.e., the taxpayers — had to absorb this enormous loss. The Yakuza have achieved the power and status of a hostile nation, operating within U.S.-Japanese corporate relations, within other nations’ relations with Japan, and within the oriental communities of foreign states. Great influence upon the ability or inability of the U.S. government to continue its deficit financing now rests in the hands of the Yakuza. Effectively, the Yakuza can trigger a U.S. stock market crash at will, by simply shutting off all further Japanese purchase of U.S. government deficit financing bonds. The Yakuza regard themselves as the last Samurai, still follow the old Bushido concept, and are intensely hostile to the United States for the humiliating defeat of Japan in WW II and for the dropping of the atomic bomb on Japan. At the critical time in the coming economic crisis, cessation of Japanese purchase of U.S. Government bonds can and will deliver the financial coup de grâce which generates the final and sudden collapse of the U.S. economy, dragging down other economies with it. It appears that the Yakuza tested the response of the U.S. stock market to this tactic on two occasions, by simply slowing the rate of Japanese purchases of U.S. government bonds. The immediate drops in the stock market on both occasions showed the efficacy of this financial weapon, whenever the Yakuza wish to employ it. In the U.S., the Yakuza constitute an important and growing hostile terrorist group, an intense subculture increasing in numbers, and a group biding its time prior to engaging in mass terrorism strikes. Together with the Aum Shinrikyo, in 1990 the Yakuza leased the operational use of clandestine strategic longitudinal EM wave interferometer weapons in Russia. They now possess some of the most powerful strategic weapons on earth (see notes 9 and 10, below).
[7]. The recent historic meetings of North and South Korean leaders, with proclamations of cooperation, etc., are a healthy sign for the better. With the former implacable North Korean dictator now dead, the new and younger leader may have a less hostile outlook. However, progress can be made only very slowly, since the Communist apparatus is still in power in the armed forces and the nation. Only as more of the old die-hard Communist leaders die off will real progress start to be made in materially lessening the threat posed by North Korea. That is a process requiring a generation, but at least a start has been made. For our thesis, the point is that such progress is likely to be slow enough that, while it damps the stress curves a little, it has no appreciable effect on the overall thesis of the eruption, within the decade, of a great conflagration involving weapons of mass destruction.
[8]. Particularly see Lunev and Winkler, ibid., 1998 for the fact that Spetsnaz assassination and terror teams are already deployed on site in the United States, as are their WMD weapon caches, to include nuclear weapons. A number of nations of the world have secretly deployed nuclear and biological weapons throughout the interior of their perceived enemy nations, often using diplomatic pouch privilege to bring them directly into the targeted nation. It is called “dead man fuzing”. The notion was an extension of the MAD concept: with weapons and teams secreted throughout a targeted nation, the potent threat that, even if one’s own nation is destroyed, one can still destroy the foe who did it supposedly acts as a deterrent.
[9]. Also involved are clandestine weapons of far greater power than nuclear weapons, but most of that subject is beyond the scope of this presentation. For some time we have informed the U.S. government of these developments, the evidence, the events, etc. An example — current at its time of preparation — is T. E. Bearden, Energetics: Extensions to Physics and Advanced Technology for Medical and Military Applications, CTEC Proprietary, May 1, 1998, 200+ page inclosure to CTEC Letter, “Saving the Lives of mass BW Casualties from Terrorist BW Strikes on U.S. Population Centers,” to Major General Thomas H. Neary, Director of Nuclear and Counterproliferation, Office of the Deputy Chief of Staff, Air and Space Operations, HQ USAF, May 4, 1998. Copies of a similar presentation were furnished to the DoD, Senator Shelby as head of the Senate’s Intelligence subcommittee, and Congressman Weldon as head of the House’s Intelligence subcommittee efforts, as well as to other U.S. government agencies and high-ranking officials.
[10]. The earlier clandestine asymmetrical strategic weapons were developed by the former USSR under rigid KGB and GRU control. The first of these weapons were longitudinal EM wave interferometers; see Lunev and Winkler, ibid., 1998, p. 30: “Other instruments of destruction the Russians have had success with are seismic weapons. Spitac and other small towns in the Transcaucasus Mountains were almost destroyed during a seismic weapons test that set off an earthquake. This would have obvious applications on America’s west coast and other areas of the world prone to earthquakes.” These are also the weapons obliquely referred to by Defense Secretary Cohen in this statement: “Others [terrorists] are engaging even in an eco-type of terrorism whereby they can alter the climate, set off earthquakes, volcanoes remotely through the use of electromagnetic waves… So there are plenty of ingenious minds out there that are at work finding ways in which they can wreak terror upon other nations…It’s real, and that’s the reason why we have to intensify our [counterterrorism] efforts.” Secretary of Defense William Cohen at an April 1997 counterterrorism conference sponsored by former Senator Sam Nunn; quoted from DoD News Briefing, Secretary of Defense William S. Cohen, Q&A at the Conference on Terrorism, Weapons of Mass Destruction, and U.S. Strategy, University of Georgia, Athens, Apr. 28, 1997. The present author has been briefing these weapons to the DoD and other government agencies for many years. Most major weapons laboratories in various nations — including China — have now discovered longitudinal EM waves and either have such weapons or are furiously developing them. As an example of a test by a giant strategic longitudinal EM wave interferometer, see Daniel A. Walker, Charles S. McCreery, and Fermin J. Oliveira, “Kaitoku Seamount and the Mystery Cloud of 9 April 1984,” Science, Vol. 227, Feb. 8, 1985, p. 607-611; Daniel L. McKenna and Daniel Walker, “Mystery Cloud: Additional Observations,” Science, Vol. 234, Oct. 24, 1986, p. 412-413. This was a test in two modes: (a) a cold explosion mode above the surface of the sea, creating a sudden low pressure zone above the water and accounting for the suction of water from the ocean to form the cloud, and (b) formation of a glowing spherical shell of light in the top of the cloud, and expansion of that shell to some 400 miles diameter. The cold explosion can destroy a naval task force at sea or an armored element on the ground, for example, or take out the personnel in fixed installations and fortified positions. The intense shell of EM energy duds the electronics of any vehicle (aircraft, missile, satellite) passing through it, by inducing an extremely sharp pulse of electromagnetic energy arising inside the electronics, from local spacetime itself. Hundreds of tests of these weapons have been observed. The great advantage of using longitudinal EM waves is that they readily pass right through intervening mass such as the ocean or the earth, with little attenuation. Hence an underwater nuclear submarine can be destroyed deep beneath the ocean — as witnessed by precisely such a test of the first deployed Russian LW weapon to kill the U.S.S. Thresher in April 1963 off the East Coast of the United States. The totally anomalous jamming signatures on the Thresher’s surface companion, the U.S.S. Skylark, positively reveal the nature of the weapon employed. The kill of the Arrow DC-8 in Gander, Newfoundland was by one of these weapons, with abundant decisive signatures. The present author published a photograph of the strike of the weapon two weeks earlier, offset from a night shuttle launch at Cape Canaveral, Florida. This was the same weapon, being used for crew training, which destroyed the Arrow some two weeks later. The TWA-800 crash off the East Coast of the U.S. was also such a shoot-down, as have been numerous others over the years, documented by the present author. At least seven nations now possess such longitudinal EM wave interferometer weapons. Others are working furiously to develop them. Also, even more powerful weapons of novel kind have been developed and deployed by three nations — none of which is the United States.
[11]. Proceeding conventionally, it will be 50 years before the organized scientific community will permit these emerging solutions to actually be developed and produced. This is senseless; as the Manhattan Project in WW II showed, a newly emerging technology can go to production in four years. Given only that neutron fission of the proper uranium isotope produced more neutrons than were input, the Manhattan Project developed operational atomic bombs of two major types in four years. An appreciable number of other such “waiting areas for development” exist in the scientific literature. However, they are not usually pushed forward into development for decades, due to the continuing resistance of the scientific community to all innovations which threaten the favored projects (such as hot fusion) and favored theories. Any “scientist in the trenches” is well aware that the progress of science is by means of a continuing massive cat and dog fight, not at all by sweet scientific reason and logic.
[12]. A perhaps excessively harsh characterization of these “in the box” efforts is that they represent “psychological displacement activities” for the scientific community, the government decision makers, and perhaps even a part of the environmental community. At best these programs say, “Look at all the good things we are doing!” They must further be assessed with the view: “Look at what they will not do, and what the results of expending all our efforts on them will be: catastrophic economic collapse in a decade or less.”
[13]. We strongly point out that Maxwell’s equations are purely hydrodynamic equations. There is thus a 100% correspondence between hydrodynamics and electromagnetic power systems. Anything that can be done mechanically, or hydrodynamically with fluid flow, can be done with electromagnetic field energy flow, a priori. It is thus a serious fault of the scientific community to proclaim that electrical power systems with COP>1.0 are prohibited because closed systems cannot exhibit COP>1.0. All such arguments are evanescent, since all they establish is that an open EM system far from thermodynamic equilibrium with the active vacuum is what is required. But the classical electrodynamics (136 years old) used to design and build electrical power systems does not even model the energy exchange between the active vacuum and the system. To put it mildly, this is a completely inexplicable aberration of the scientific mindset, and it has been such for over a century.
[14]. Open EM systems far from thermodynamic equilibrium with their electrically active vacuum environment are indeed permitted by the Maxwell-Heaviside equations, prior to the arbitrary symmetrical regauging of the equations to yield simpler, more mathematically amenable equations (done by Lorenz in 1867 and later by H.A. Lorentz). The Lorentz condition requires that the system be symmetrical in its discharge of its free excitation energy. The present closed current loop circuit ubiquitously used in power systems is designed specifically so that the system itself enforces the Lorentz symmetrical discharge of its excitation energy. Thus one-half of the energy is discharged in the external losses and load, while one-half is discharged to destroy the source dipole actually extracting the EM energy from the active vacuum. Such design guarantees a system which destroys its intake of free electrical energy from the vacuum faster than it can use part of that energy to power the load. I.e., it guarantees suicidal systems which can only exhibit COP<1.0. Every electrical system ever built has been and is powered by electrical energy extracted directly from the seething vacuum, as we explain in the present paper.
[15]. Such open systems far from thermodynamic equilibrium in the active vacuum exchange are rigorously permitted to exhibit COP>1.0 and to power themselves and their loads simultaneously. By building only that subset of Maxwellian systems that forces Lorentz symmetrical regauging during discharge of the system’s excitation energy, our scientists and engineers have in fact simply discarded all those Maxwellian systems not in equilibrium with the vacuum during their excitation discharge. In short, they simply do not build any such systems, or even design them. The scientific and engineering communities themselves have directly produced and maintained the present horrible energy crisis and pollution of the biosphere.
[16]. Ludvig Valentin Lorenz, “On the identity of the vibrations of light with electrical currents,” Philosophical Magazine, Vol. 34, 1867, p. 287-301. In this paper Lorenz gave essentially what today is called the “Lorentz symmetrical regauging”. Not much attention was paid to the earlier Lorenz work. Later, H.A. Lorentz introduced the symmetrical regauging of the Maxwell-Heaviside equations in its present modern form. Lorentz’s influence was so great that symmetrical regauging — which reduced the theory to a subset and discarded all Maxwell-Heaviside systems of COP>1.0 capable of powering themselves and a load simultaneously — was adopted and utilized. It is still utilized ubiquitously; e.g., see [17].
[17]. Lorentz symmetrical regauging is still utilized ubiquitously, so that no self-powering systems are designed and developed by our energy scientists and engineers. E.g., see J. D. Jackson, Classical Electrodynamics, Second Edition, Wiley, New York, 1975, p. 219-221; 811-812. In symmetrically regauging the Heaviside-Maxwell equations, electrodynamicists assume that the potential energy of a system can be freely changed at will (i.e., that the system can be asymmetrically regauged at will). They do it twice in succession, but carefully select two such “paired simultaneous asymmetrical regaugings” that the two new free force fields which emerge are equal and opposite, so that there is no net force which can be used to dissipate the free excess system energy from regauging and perform work in a load. In short, they retain only those Maxwellian systems that foolishly oppose and strangle their own ability to freely discharge and use the free energy they first acquire (from the vacuum, by the first asymmetrical regauging). Thereby the energy scientists arbitrarily discard all those Maxwellian systems which net asymmetrically regauge, changing their own potential energy and also producing a net nonzero force that can be used to discharge the excess free energy in a load without reservation. Net asymmetrically regauged systems are open dissipative EM systems, freely receiving energy from their active external environment and thus permitted to dissipate the excess regauging energy in loads, because they do not strangle that latter ability. Hence the performance of the arbitrarily excluded Maxwellian systems is not confined to classical thermodynamics, but is described by the thermodynamics of an open dissipative system. Such systems can (i) self-organize, (ii) self-oscillate, (iii) output more energy than the operator himself inputs (the excess is freely received from the external active environment), (iv) “power” their own losses and an external load simultaneously (all the energy to operate the system and the load is received freely from the external active environment), and (v) exhibit negentropy.
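For the reader’s reference, the “symmetrical regauging” discussed in notes [16] and [17] is, in the standard textbook notation (e.g., Jackson, cited in note [17]), the simultaneous change of potentials by an arbitrary gauge function Λ,
\mathbf A \rightarrow \mathbf A + \nabla \Lambda, \qquad \Phi \rightarrow \Phi - \frac{\partial \Lambda}{\partial t}
which leaves the force fields
\mathbf E = -\nabla \Phi - \frac{\partial \mathbf A}{\partial t}, \qquad \mathbf B = \nabla \times \mathbf A
unchanged, together with imposition of the Lorenz condition
\nabla \cdot \mathbf A + \frac{1}{c^2} \frac{\partial \Phi}{\partial t} = 0
under which the potentials satisfy decoupled wave equations. These are the standard equations the notes refer to; the interpretation placed upon the discarded non-Lorenz-condition systems is the thesis of the present paper.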
[18]. We can now show that enormous EM energy flow can be easily and cheaply initiated from the active vacuum, anywhere, at any time. The basis for this was in fact discovered by Heaviside in the 1880s. Lorentz knew of this huge energy flow component but discarded it arbitrarily, apparently to avoid being attacked and accused of being a perpetual motion advocate. See H.A. Lorentz, Vorlesungen über Theoretische Physik an der Universität Leiden, Vol. V, Die Maxwellsche Theorie (1900-1902), Akademische Verlagsgesellschaft M.B.H., Leipzig, 1931, “Die Energie im elektromagnetischen Feld,” p. 179-186. Figure 25 on p. 185 shows the Lorentz concept of integrating the Poynting vector around a closed cylindrical surface surrounding a volumetric element. This is the procedure which arbitrarily selects only a small component of the energy flow associated with a circuit — specifically, the small Poynting component striking the surface charges and being diverged into the circuit to power it — and then treats that tiny component as the “entire” Poynting energy flow.
[19]. The mathematical “trick” used by Lorentz to get rid of this easily and universally evoked giant negentropy is still employed by electrical scientists and engineers without realizing what is actually being discarded. For a full explanation, see T.E. Bearden, “Giant Negentropy from the Common Dipole,” Proc. IC-2000, St. Petersburg, Russia, July 2000 (in press). A series of excellent papers by the Alpha Foundation’s Institute for Advanced Study (AIAS) have also been published, approved for publication, or submitted for consideration in leading journals. An example is M.W. Evans, T.E. Bearden et al., “Classical Electrodynamics without the Lorentz Condition: Extracting Energy from the Vacuum,” Physica Scripta, Vol. 61, 2000, p. 513-517. A most formidable new AIAS paper, “Electromagnetic Energy from Curved Spacetime,” has been submitted to Optik and is in the referee process. Two related papers giving a very solid basis for vacuum energy are M.W. Evans et al., “The Most General Form of Electrodynamics,” and “Energy Inherent in the Pure Gauge Vacuum,” both submitted to Physica Scripta and in the referee process. The theoretical basis for extracting copious EM energy from the vacuum is now unequivocal and either has been published or is rapidly being published in leading journals.
[20]. For example, see Myron W. Evans et al., AIAS group paper by 15 authors, “Classical Electrodynamics Without the Lorentz Condition: Extracting Energy from the Vacuum,” 2000, ibid.; “Runaway Solutions of the Lehnert Equations: The Possibility of Extracting Energy from the Vacuum,” Optik, 2000 (in press); “Vacuum Energy Flow and Poynting Theorem from Topology and Gauge Theory,” submitted to Physica Scripta; “Energy Inherent in the Pure Gauge Vacuum,” submitted to Physica Scripta; “The Most General Form of Electrodynamics,” submitted to Physica Scripta; “The Aharonov-Bohm Effect as the Basis of Electromagnetic Energy Inherent in the Vacuum,” submitted to Optik; “Electromagnetic Energy from Curved Spacetime,” submitted to Optik.
[21]. As an example: the most critical scientist in the Western world working on the “energy from the vacuum” approach is Dr. Myron Evans, Founder and Director of the Alpha Foundation’s Institute for Advanced Study (AIAS). Dr. Evans was hounded from his professorial position, has had his life threatened, has been without salary for several years, and fled to the United States for his very life. He has some 600 papers in the hard literature, and is presently producing — in accord with Dr. Mendel Sachs’ epochal union of general relativity and electrodynamics — the world’s first engineerable unified field theory, and an advanced electrodynamics fully capable of dealing with and modeling EM energy from the vacuum. Yet Dr. Evans lives in the United States (where he recently became a naturalized citizen) at the poverty level. He can afford only one meal a day, has no automobile, no air conditioning, and continues epochal work under a medical condition that would stop any ordinary person less scientifically dedicated. He continues to be vilified and viciously attacked by elements of the scientific community, even though other elements are of much assistance in publishing and reviewing his papers, etc. It is a remarkable commentary upon the sad state of our scientific community that such a scientist and such epochal work, of tremendous importance to both the United States and all humanity, must continue in such circumstances. Meanwhile, the scientific community spends billions on vast projects of little significance in general, and of no significance at all in avoiding the coming world economic collapse and the destruction of civilization. If this paper should fall into sympathetic hands which can obtain funding for Dr. Evans, then this author most fervently urges that such be accomplished at all speed. The fate of most of the civilized world may well hinge upon such a simple thing, and upon such an insignificant expenditure.
[22]. These are listed in M.W. Evans et al., “Classical Electrodynamics Without the Lorentz Condition: Extracting Energy from the Vacuum,” 2000, ibid.
[23]. This system exists in small working prototype already, but I am under a nondisclosure agreement and cannot reveal the details of the process or the identity and location of the inventor. The system is capable of being rapidly scaled up to meet the 2003 critical milestone of “ready for mass production”. One can expect up to a COP = 4 from this process.
[24]. In an electrical power system, Coefficient of Performance (COP) may be taken as the average energy dissipated in the load divided by the average energy furnished to the system by the operator. Or, it may be taken as the average power dissipated in the load divided by the average power dissipated in the input process. COP can be taken across any component, several components, or the entire system. The COP of a normal generator itself may be 0.9, for example, while when the entire system including the heater, etc. is taken into account, the system COP may be only 0.3. For COP>1.0, excess energy must be furnished to the system by the external environment, while only part of the energy (or none of it) is input by the operator. (A small worked sketch of this arithmetic follows note [29] below.)
[25]. The Kawai process, the Johnson process, and the magnetic Wankel engine are ideal for this purpose.
[26]. T.E. Bearden, “Bedini’s Method For Forming Negative Resistors In Batteries,” Proceedings of the IC-2000, St. Petersburg, Russia, July 2000 (in press).
[27]. Teruo Kawai, “Motive Power Generating Device,” U.S. Patent No. 5,436,518, Jul. 25, 1995. Applying the Kawai process to a magnetic motor essentially doubles the motor’s efficiency. If one starts with high efficiency magnetic motors of, say, COP = 0.7 or 0.8, then the new COPs will be 1.4 and 1.6. Two Kawai-modified high efficiency Hitachi motors were in fact independently tested by Hitachi and yielded COP = 1.4 and 1.6 respectively.
[28]. See T.E. Bearden, “The Master Principle of EM Overunity and the Japanese Overunity Engines,” Infinite Energy, 1(5&6), Nov. 1995-Feb. 1996, p. 38-55; “The Master Principle of Overunity and the Japanese Overunity Engines: A New Pearl Harbor?”, The Virtual Times, Internet Node www.hsv.com, Jan. 1996. The principle of the magnetic Wankel engine is self-evident from the drawings alone.
[29]. Johnson, Howard R., “Permanent Magnet Motor,” U.S. Patent No. 4,151,431, Apr. 24, 1979; “Magnetic Force Generating Method and Apparatus,” U.S. Patent No. 4,877,983, Oct. 31, 1989; “Magnetic Propulsion System,” U.S. Patent No. 5,402,021, Mar. 28, 1995.
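To make the bookkeeping of notes [24] and [27] concrete, the following minimal sketch (in Python) computes COP as defined in note [24]. The numeric values are only the illustrative figures quoted in those notes; nothing here is a measurement or a model of any device.

    # COP bookkeeping per note [24]: load energy out divided by the
    # energy the operator furnishes. Figures below are the notes' own
    # illustrative numbers, not data.

    def cop(energy_to_load, energy_from_operator):
        """Coefficient of Performance as defined in note [24]."""
        return energy_to_load / energy_from_operator

    print(cop(90.0, 100.0))   # 0.9  -- note [24]'s generator by itself
    print(cop(30.0, 100.0))   # 0.3  -- note [24]'s entire system

    # Note [27]'s claim: the Kawai modification is said to double COP.
    for base in (0.7, 0.8):
        print(base, "->", 2 * base)   # 0.7 -> 1.4, 0.8 -> 1.6

The sketch simply restates the definition: a COP above 1.0 requires, as note [24] says, that the excess energy be furnished by the external environment rather than by the operator.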
[30]. In magnetic materials, the presence of two electrons near each other with parallel spins results in a very strong force tending to flip one spin so that they are antiparallel. The forces between the electrons due to spin geometry are exchange forces of quantum mechanical nature. In complex assemblies of different magnetic materials comprising a single stator or rotor magnet, the shapes and structures can be produced so that, as the rotor moves past the attracting stator and enters the usual back mmf zone, the powerful spin force is suddenly unleashed by the geometry, relative field strengths, and movement. This triggers the release of a violent pulse of magnetic field that greatly overrides the back mmf and strongly repels the rotor on out of this “gate” region where the exchange force is triggered. Exchange force pulses may momentarily be 1,000 times as strong as the magnetic field H, or in some cases even stronger. Evoking these responses automatically by the materials themselves, at controlled times and directions, produces an open system freely adding rotary energy from its vacuum exchange inside the nonlinear materials. Johnson has been able to achieve this effect consistently, opening the way for a legitimate self-powering permanent magnet motor. We accent that the electrons involved are in direct energy exchange with the vacuum, and the exchange force energy comes from the violently broken symmetry in that vacuum exchange. Multivalued magnetic potentials, and hence nonconservative magnetic fields, arise naturally in magnetic theory anyway. However, conventional scientists exert enormous effort to eliminate or minimize such effects — when in fact what is needed is to deliberately evoke and use them to produce systems with COP>1.0.
[31]. Surrounding every dipolar EM circuit there exists a vast flow of nondiverged EM energy which misses the circuit entirely and is not presently accounted (thus “dark”) in electrical power systems and circuit theory. Heaviside discovered it, Poynting never realized it, and Lorentz discarded it. He discarded it because (a) he reasoned it was physically insignificant since it did nothing in the circuit, and (b) no one had the foggiest notion where such an enormous flow of EM energy — pouring from the terminals of every battery and generator — could possibly be coming from. The trick Lorentz used to arbitrarily discard it is still used by electrodynamicists ubiquitously. For a full background, see T.E. Bearden, “Giant Negentropy from the Common Dipole,” Proc. IC-2000 (ibid.); “On Extracting Electromagnetic Energy from the Vacuum,” Proceedings of the IC-2000, St. Petersburg, Russia, July 2000 (in press); “Dark Matter or ?”, Journal of New Energy, 2000 (in press).
[32]. Energy cannot be created or destroyed, but only changed in form. Changing the form of energy is called “work”. When one joule of collected energy is “dissipated” to perform one joule of work, one still has one joule of energy remaining after that joule of work has been done. The energy is now just in a different form. Scattering of energy in a resistor, e.g., is perhaps the simplest way of performing work, and is known as “joule heating”. However, for a thought experiment: if the resistor is surrounded by a phase conjugate reflective mirror surface, much of the scattered energy will be precisely returned back to the resistor as re-ordered energy. It can indeed be “reused” by again being scattered in the resistor to do work. There is no conservation of work law in physics or thermodynamics! If there is no re-ordering at all, then one can get only one joule of work from one joule of energy changed in form. The remaining joule of energy in different form (as in heat) is just “wasted” from the system. But if we deliberately use re-ordering (such as simple passive retroreflection), we can reuse the same joule of energy to do joule after joule of work, changing the form of the energy in each interaction. (A small arithmetic sketch of this reuse follows below.) Eerily, most of our scientists and engineers are aware that energy can be changed in form indefinitely without loss, but will then argue that energy cannot be recycled and reused. The scientific prejudice against “COP>1.0” processes and systems is so deep that many scientists are incapable of dealing with the real law of conservation of energy — which is simply that you can never get rid of any energy at all, but can only change its form. Every joule of energy in the universe, e.g., was present not long after the Big Bang. Since then, most of those joules of energy have each been doing joule after joule of work, for some 15 billion years.
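Note [32]’s reuse argument reduces to simple series arithmetic. If one assumes, purely hypothetically, that a fraction r of each dissipated joule is re-ordered and returned for another pass (the note’s mirror thought experiment), the total work obtainable from one joule is the geometric series 1 + r + r² + … = 1/(1−r). A minimal sketch, assuming only that hypothetical return fraction:

    # Arithmetic sketch of note [32]'s thought experiment. The return
    # fraction r is a hypothetical parameter of the thought experiment,
    # not a measured property of any device.

    def total_work_per_joule(r, cycles=1000):
        work, energy = 0.0, 1.0
        for _ in range(cycles):
            work += energy    # this pass performs 'energy' joules of work
            energy *= r       # assumed fraction returned for reuse
        return work

    for r in (0.0, 0.5, 0.9):
        print(r, total_work_per_joule(r), 1.0 / (1.0 - r))
    # r = 0.0 reproduces the familiar one joule of work per joule;
    # larger assumed r yields more total work from the same joule.

The sketch shows only the series arithmetic of the note’s own reasoning; whether any physical retroreflection achieves a given r is exactly the claim at issue.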
[33]. Kenneth R. Shoulders, “Energy Conversion Using High Charge Density,” U.S. Patent No. 5,018,180, May 21, 1991. See also Shoulders’ patents 5,054,046 (1991); 5,054,047 (1991); 5,123,039 (1992); and 5,148,461 (1992). See also Ken Shoulders and Steve Shoulders, “Observations on the Role of Charge Clusters in Nuclear Cluster Reactions,” Journal of New Energy, 1(3), Fall 1996, p. 111-121.
[34]. For a summary of this rapidly developing field, see Diederik Wiersma and Ad Lagendijk, “Laser Action in Very White Paint,” Physics World, Jan. 1997, p. 33-37.
[35]. For the early discovery, see V.S. Letokhov, “Generation of light by a scattering medium with negative resonance absorption,” Zh. Eksp. Teor. Fiz., Vol. 53, 1967, p. 1442; Soviet Physics JETP, Vol. 26, 1968, p. 835-839; “Laser Maxwell’s Demon,” Contemp. Phys., 36(4), 1995, p. 235-243. For initiating experiments, although with external excitation of the medium, see N.M. Lawandy et al., “Laser action in strongly scattering media,” Nature, 368(6470), Mar. 31, 1994, p. 436-438. See also D.S. Wiersma, M.P. van Albada, and A. Lagendijk, Nature, Vol. 373, 1995, p. 103.
[36]. For new effects, see D.S. Wiersma and Ad Lagendijk, “Light diffusion with gain and random lasers,” Phys. Rev. E, 54(4), 1996, p. 4256-4265; D.S. Wiersma, Meint P. van Albada, Bart A. van Tiggelen, and Ad Lagendijk, “Experimental Evidence for Recurring Multiple Scattering Events of Light in Disordered Media,” Phys. Rev. Lett., 74(21), 1995, p. 4193-4196; D.S. Wiersma, M.P. van Albada, and A. Lagendijk, Phys. Rev. Lett., Vol. 75, 1995, p. 1739; D.S. Wiersma et al., Nature, Vol. 390, 1997, p. 671-673; F. Sheffold et al., Nature, Vol. 398, 1999, p. 206; J. Gomez Rivas et al., Europhys. Lett., 48(1), 1999, p. 22-28; Gijs van Soest, Makoto Tomita, and Ad Lagendijk, “Amplifying volume in scattering media,” Opt. Lett., 24(5), 1999, p. 306-308; A. Kirchner, K. Busch, and C. M. Soukoulis, Phys. Rev. B, Vol. 57, 1998, p. 277.
[37]. A true negative resistor appears to have been developed by the renowned Gabriel Kron, who was never permitted to reveal its construction or specifically reveal its development. For an oblique statement of his negative resistor success, see Gabriel Kron, “Numerical solution of ordinary and partial differential equations by means of equivalent circuits,” J. Appl. Phys., Vol. 16, Mar. 1945, p. 173. Quoting: “When only positive and negative real numbers exist, it is customary to replace a positive resistance by an inductance and a negative resistance by a capacitor (since none or only a few negative resistances exist on practical network analyzers).” Apparently Kron was required to insert the words “none or” in that statement. See also Gabriel Kron, “Electric circuit models of the Schrödinger equation,” Phys. Rev., 67(1-2), Jan. 1 and 15, 1945, p. 39. We quote: “Although negative resistances are available for use with a network analyzer,…”. Here the introductory clause states in rather certain terms that negative resistors were available for use on the network analyzer, and Kron slipped this one through the censors. It may be of interest that Kron was a mentor of Floyd Sweet, who was his protégé. Sweet worked for the same company, but not on the Network Analyzer project. However, he almost certainly knew the secret of Kron’s “open path” discovery and his negative resistor. The present author worked for several years with Sweet, who produced a solid state device (the magnetic Vacuum Triode Amplifier) with no moving parts which produced 500 watts of output power for some 33 microwatts of input power. See Floyd Sweet and T.E. Bearden, “Utilizing Scalar Electromagnetics to Tap Vacuum Energy,” Proc. 26th Intersociety Energy Conversion Engineering Conference (IECEC ’91), Boston, Massachusetts, p. 370-375.
[38]. Shoukai Wang and D.D.L. Chung, “Apparent negative electrical resistance in carbon fiber composites,” Composites, Part B, Vol. 30, 1999, p. 579-590. Negative electrical resistance was observed, quantified, and controlled through composite engineering by Chung and her team. Electrons were caused to flow backwards against the voltage, with backflow across a composite interface. The team was able to control the manufacturing process to produce either positive or negative resistance as desired. The University at Buffalo filed a patent application. It first placed a solicitation to industry for developments, and offered a technical package to interested companies signing nondisclosure, then suddenly withdrew the offer. It appears to this author that a “fix” may be in place on the development.
[39]. It is common knowledge that the point-contact transistor could be manufactured to produce a true negative resistor, where the output current moved against the voltage. E.g., see William B. Burford III and H. Grey Verner, Semiconductor Junctions and Devices: Theory to Practice, McGraw-Hill, New York, 1965, Chapter 18: Point-Contact Devices. Quoting from p. 281: “First, the theory underlying their function is imperfectly understood even after almost a century…, and second, they involve active metal-semiconductor contacts of a highly specialized nature. …The manufacturing process is deceptively simple, but since much of it involves the empirical know-how of the fabricator, the true variables are almost impossible to isolate or study. …although the very nature of these units limits them to small power capabilities, the concept of small-signal behavior, in the sense of the term when applied to junction devices, is meaningless, since there is no region of operation wherein equilibrium or theoretical performance is observed. Point-contact devices may therefore be described as sharply nonlinear under all operating conditions.” We point out that the power limitation can be overcome by arrays of multiple point contacts placed closely together.
[40]. It is the back coupling of the magnetic field from the secondary to the primary windings that forces the dissipation of as much energy in the primary of the transformer as is dissipated in the secondary. If part of the return current in the secondary circuit bypasses the secondary of the transformer, the back field coupling to the primary is reduced accordingly. Using a negative resistor as the bypass, the bypass of the current is “for free” (powered by the vacuum and a negentropic process). Hence the result is a transformer/bypass system with COP>1.0. In that case, such a system can have a positive clamped feedback from the output of the secondary circuit into the primary to power it, while still having energy remaining to power a load. No laws of physics or thermodynamics are violated, once one understands how an EM circuit is actually powered. E.g., see Bearden, “On Extracting Electromagnetic Energy from the Vacuum,” 2000 (ibid.).
[41]. The Kawai process was seized in the personal presence of the present author and his CTEC, Inc. Board of Directors. We had reached a full agreement with Kawai to manufacture and sell his units worldwide, at great speed. Control of his company, his invention, and Kawai himself was taken over in our presence the next morning, and the Japanese contingent was in fear and trembling.
[42]. The magnetic Wankel engine was developed and actually placed in a Mazda automobile. The back mmf of the rotary permanent magnet motor is confined to a very small angle of the rotation. As the rotor enters that region, a sudden cutoff of a small trickle current in a coil generates a momentary large Lenz law effect which overrides the back mmf and produces a forward mmf in that region. The result is that one furnishes a small bit of energy to convert the engine to a rotary permanent magnet motor with no back mmf, but with a nonconservative net magnetic field. For details, see T.E. Bearden, “The Master Principle of EM Overunity and the Japanese Overunity Engines,” Infinite Energy, 1(5&6), Nov. 1995-Feb. 1996, p. 38-55; “The Master Principle of Overunity and the Japanese Overunity Engines: A New Pearl Harbor?”, The Virtual Times, Internet Node www.hsv.com, Jan. 1996.
[43]. For a history and present status of Japanese organized crime, see Adam Johnston, “Yakuza: Past and Present,” Committee for a Safe Society, Organized Crime Page: Japan (available on the Internet); Michael Hirsh and Hideko Takayama, “Big Bang or Bust?”, Newsweek, Sept. 1, 1997, p. 44-45.
[44]. As a ball-park figure for illustration, a nominal electrical circuit or power system actually extracts from the vacuum, and pours out into space, some 10 trillion times as much energy flow as the poorly designed “single pass” circuits intercept and utilize.
[45]. However, the orthodox scientists do not know it, because they blindly follow the method introduced by Lorentz a century ago. Lorentz arbitrarily discarded all that astounding energy flow that pours from the source dipole and misses the circuit, and retained only the tiny, tiny bit of it that strikes the circuit and enters it to power it. Nothing at all has been done since then to capture more of that huge available energy and use it. As a result of the ubiquitous Lorentz procedure, most electrical power system scientists and engineers are no longer aware that the huge unaccounted energy flow not striking the circuit even exists.
[46]. The active vacuum interacts profusely with every electrodynamic system, but this is not modeled at all by the scientists and engineers designing and building electrical power systems. They unwittingly design every system to enforce Lorentz symmetrical regauging during excitation energy discharge, which in effect forces equilibrium in the vacuum-system energy exchange during that dissipation. Hence classical equilibrium thermodynamics rigorously applies during use of the collected energy. Such systems are limited to COP<1.0 a priori.
[47]. In Nobelist Feynman’s words: “We…wish to emphasize … the following points: (1) the electromagnetic theory predicts the existence of an electromagnetic mass, but it also falls on its face in doing so, because it does not produce a consistent theory – and the same is true with the quantum modifications; (2) there is experimental evidence for the existence of electromagnetic mass, and (3) all these masses are roughly the same as the mass of an electron. So we come back again to the original idea of Lorentz – maybe all the mass of an electron is purely electromagnetic, maybe the whole 0.511 MeV is due to electrodynamics. Is it or isn’t it? We haven’t got a theory, so we cannot say.” Richard P. Feynman, Robert B. Leighton, and Matthew Sands, The Feynman Lectures on Physics, Vol. 2, 1964, p. 28-12. Also: “We do not know how to make a consistent theory – including the quantum mechanics – which does not produce an infinity for the self-energy of an electron, or any point charge. And at the same time, there is no satisfactory theory that describes a non-point charge. It’s an unsolved problem.” Ibid., Vol. 2, 1964, p. 28-10. In fact, “energy” itself is actually a very nebulous and inexact concept. Again quoting: “It is important to realize that in physics today, we have no knowledge of what energy is.” Ibid., Vol. 1, 1964, p. 4-2.
[48]. E.g., a very recent AIAS paper, M.W. Evans et al., “The Most General Form of Electrodynamics,” submitted to Physica Scripta, rigorously shows just how wrong the present limited EM theory is. Quoting: “…there can be no electro-magnetic field [as such] in the vacuum. In other words there can be no electromagnetic field propagating in a source-free region as in the Maxwell-Heaviside theory, which is written in flat space-time using ordinary derivatives instead of covariant derivatives.” The reason is quite simple: spacetime is active and curved. The great John Wheeler and Nobelist Feynman, e.g., realized that EM force fields cannot exist in space. They pointed out that only the potential for such fields exists in space, should some charges be made available so that the fields could be developed on them. See Richard P. Feynman, Robert B. Leighton, and Matthew Sands, The Feynman Lectures on Physics, Addison-Wesley, New York, Vol. I, 1963, p. 2-4.
[49]. Max Planck, as quoted in G. Holton, Thematic Origins of Scientific Thought, Harvard University Press, Cambridge, MA, 1973.
[50]. Arthur C. Clarke, in “Space Drive: A Fantasy That Could Become Reality,” NSS … AD ASTRA, Nov/Dec 1994, p. 38.
[51]. E.g., quoting Nobelist Lee: “…the discoveries made in 1957 established not only right-left asymmetry, but also the asymmetry between the positive and negative signs of electric charge. … Since non-observables imply symmetry, these discoveries of asymmetry must imply observables.” [T. D. Lee, Particle Physics and Introduction to Field Theory, Harwood, New York, 1981, p. 184.] On p. 383, Lee points out that the microstructure of the scalar vacuum field (i.e., of vacuum charge) is not utilized. Particularly see Lee’s own attempt to indicate the possibility of using vacuum engineering, in his “Chapter 25: Outlook: Possibility of Vacuum Engineering,” p. 824-828. Unfortunately Lee was unaware of Whittaker’s profound 1903 decomposition of the scalar potential, as between the ends of a dipole, which gives a much more practical and easily evoked method for re-ordering some of the vacuum’s energy, extracting copious EM energy flows from it, and setting the stage for self-powering electrical power systems worldwide.
[52]. The present author has taken the necessary first major step, by using Whittaker decomposition of the scalar potential between the poles of a dipole to reveal a simple, direct, cheap method for extracting and sustaining enormous EM energy flows from the dipole’s asymmetry in its energetic exchange with the active vacuum.
[53]. The internal energy available to a generator is the shaft energy we input to it. In large power plants this is usually furnished by a steam turbine, and heat (from a nuclear reactor, burning hydrocarbons, etc.) is used merely to heat the water in the boiler to make steam to run the steam turbine. Every bit of all that is just so the generator will have some internal energy made available with which it can then forcibly make the dipole. That is all that generators (and batteries) do: use their available internal energy to continually make the source dipole — which our engineers design the circuit to keep destroying faster than the load is powered.
[54]. By “dipole” we mean that the positive charges are forced to one side, and the negative charges to the other. This internal “source dipole” formed by the generator or battery is electrically connected to the terminals.
[55]. This has been known in particle physics for nearly 50 years. It stems from the discovery of broken symmetry by C.S. Wu et al. in 1957. A dipole is known to be a broken symmetry in its violent energy exchange with the active vacuum. Rigorously, this means that some of the “disordered” EM energy received by the dipole from the vacuum is re-ordered and re-radiated as usable, observable EM energy. Conventional electrodynamics and power system engineering do not model the vacuum’s interaction, much less the broken symmetry of the generator or battery dipole in that continuous energy exchange.
[56]. A pictorial illustration of the enormity of the energy flow through the surrounding space, missing the external circuit entirely, is given by John D. Kraus, Electromagnetics, Fourth Edition, McGraw-Hill, New York, 1992 — a standard university text. Figure 12-60, a and b, p. 578 shows a good drawing of the huge energy flow filling all space around the conductors, with almost all of that energy flow not intercepted by the circuit at all, and thus not diverged into the circuit to power it, but just “wasted” by passing on out into space.
[57]. That is, the interception of the little “boundary layer” or “sheath” of the flow, right on the surface of the wires.
[58]. Poynting never considered anything but this small “intercepted” component of the energy flow that actually entered the circuit. E.g., see J.H. Poynting, “On the connexion between electric current and the electric and magnetic inductions in the surrounding field,” Proc. Roy. Soc. Lond., Vol. 38, 1885, p. 168.
[59]. In technical terms, the closed current loop circuit forces the Lorentz symmetrical regauging condition during the discharge of the excitation energy collected by the circuit. By definition, half the energy is thus used to oppose the system function (i.e., to destroy the source dipole) while the other half of the excitation energy is used to power the external losses and the load. With half the collected energy used to destroy the free extraction of energy from the vacuum, and less than half used to power the load, these ubiquitous circuits destroy their source of free vacuum energy faster than they power their loads. Hence we ourselves have to steadily input shaft energy to the generators so that they can continue to re-form the dipole. In the vernacular, that is not the way to run the railroad!
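The bookkeeping in note [59] can be written out explicitly. Introducing symbols purely for illustration (they are not the paper’s notation): let W_c be the excitation energy collected by the circuit, W_load and W_loss the portions dissipated in the load and the external losses, and W_input the energy the operator must re-supply to re-form the dipole. Under the note’s half-and-half assumption,
W_{load} + W_{loss} = \tfrac{1}{2} W_c, \qquad W_{dipole} = \tfrac{1}{2} W_c
and since the operator must restore the dipole, W_{input} \geq \tfrac{1}{2} W_c, so that
COP = \frac{W_{load}}{W_{input}} \leq \frac{\tfrac{1}{2} W_c - W_{loss}}{\tfrac{1}{2} W_c} < 1.
This is only a transcription of the note’s own accounting: under that accounting, the closed current loop can never exceed COP = 1.0, which is precisely the point the note makes.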
[60]. Maxwell’s seminal paper was published in 1864, as a purely material fluid flow (hydrodynamic) theory. At the time, the electron and the atom had not been discovered; hence the reaction of the two opposite charges in the wire (positive nuclei, negative Drude electrons) was not modeled — only one was modeled. Maxwell omitted half the EM wave in the vacuum and half the energy, resulting in the omission from electrodynamics of the EM cause and generatrix of Newton’s third law reaction. This omission is still present in electrodynamics, where the third law reaction appears as a mystical effect without a known cause. The cause and mechanism is the omitted reaction of the observed effect back upon the non-observed cause. General relativity, e.g., does include this reaction mechanism from the effect back upon the cause. However, electrodynamicists still omit half the electromagnetics, half the wave, and half the energy, as is easily shown. E.g., it is demonstrated in every EM signal reception in a simple wire antenna, when the resulting perturbations of both the positive nuclei and the Drude electrons are correctly attributed to their interactions with the incoming EM fields (waves) from the vacuum.
[61]. Mario Bunge, Foundations of Physics, Springer-Verlag, New York, 1967, p. 176.
[62]. T.E. Bearden, “On Extracting Electromagnetic Energy from the Vacuum,” Proc. IC-2000, St. Petersburg, Russia, July 2000 (in press).
[63]. T.E. Bearden, “Bedini’s Method For Forming Negative Resistors In Batteries,” Proc. IC-2000, St. Petersburg, Russia, July 2000 (in press).
[65]. E.g., a good short summary is given by Dr. Theodore Loder, Institute for the Study of Earth, Oceans, and Space (EOS), University of New Hampshire, Durham, NH, in his short paper, “‘Comparative Risk Issues’ Regarding Present and Future Environmental Trends: Why We Need to be Looking Ahead Now!”, prepared for the Senate Committee on the Environment and Public Works, June 1, 2000. Certainly Dr. Loder and EOS can fully expound the details of the biospheric pollution from the various contributing factors and processes.
[66]. One need only regard the vehement attacks by the scientific community (and much of the government, including national laboratories) upon cold fusion researchers to understand why many inventors and scientists in the COP>1.0 open dissipative energy field are openly distrustful of the government and government scientists. Further, the U.S. Patent Office is known to be under rather explicit instructions not to issue patents on COP>1.0 electrical processes and systems.
[67]. E.g., the well-known Bohren experiment produces 18 times as much energy output as the operator must input. The excess energy is extracted directly from the vacuum. There has been no program, to my knowledge, seeking to exploit this well-proven COP>1.0 mechanism that has been in the hard science literature for some time. See Craig F. Bohren, “How can a particle absorb more than the light incident on it?” Am. J. Phys., 51(4), Apr. 1983, p. 323-327. Under nonlinear conditions, a particle can absorb more energy than is in the light incident on it. Metallic particles at ultraviolet frequencies are one class of such particles, and insulating particles at infrared frequencies are another. For independent validation of the Bohren phenomenon, see H. Paul and R. Fischer, “Comment on ‘How can a particle absorb more than the light incident on it?’,” Am. J. Phys., 51(4), Apr. 1983, p. 327.
[68]. G. Johnstone Stoney, “Microscopic Vision,” Phil. Mag., Vol. 42, Oct. 1896, p. 332; “On the Generality of a New Theorem,” Phil. Mag., Vol. 43, 1897, p. 139-142; “Discussion of a New Theorem in Wave Propagation,” Phil. Mag., Vol. 43, 1897, p. 273-280; “On a Supposed Proof of a Theorem in Wave-motion,” Phil. Mag., Vol. 43, 1897, p. 368-373.
[69]. E. T. Whittaker, “On the Partial Differential Equations of Mathematical Physics,” Math. Ann., Vol. 57, 1903, p. 333-355.
[70]. Evans, in a private communication, has pointed out that Whittaker’s method depends upon the Lorentz gauge being assumed. If the latter is not used, the Whittaker method is inadequate, because the scalar potential becomes even more richly structured. My restudy of the problem with this in mind concluded that, for the negentropic vacuum-reordering mechanism involving only the dipole and the charge as a composite dipole, the Whittaker method can apparently be applied without problem, at least to generate the minimum negentropic process itself. However, this still leaves open the possibility of additional structuring. The actual negentropic reordering of the vacuum energy (and the structure of the outpouring of the EM energy 3-flow from the charge or dipole) may permissibly be much richer than given by the simple Whittaker structure alone. In other words, the Whittaker structure used in this paper should be regarded as the simplest structuring of the negentropic process that can be produced, and hence as a lower boundary condition on the process.
[71]. Time-like currents and flows do appear in the vacuum energy, if extended electrodynamic theory is utilized. E.g., in the received view the Gupta-Bleuler method removes time-like photons and longitudinal photons. For disproof of the Gupta-Bleuler method, proof of the independent existence of such photons, and a short description of their characteristics, see Myron W. Evans et al., AIAS group paper, “On Whittaker’s F and G Fluxes, Part III: The Existence of Physical Longitudinal and Time-Like Photons,” J. New Energy, 4(3), Winter 1999, p. 68-71; “On Whittaker’s Analysis of the Electromagnetic Entity, Part IV: Longitudinal Magnetic Flux and Time-Like Potential without Vector Potential and without Electric and Magnetic Fields,” ibid., p. 72-75. To see how such entities produce ordinary EM fields and energy in vacuo, see Myron W. Evans et al., AIAS group paper, “On Whittaker’s Representation of the Electromagnetic Entity in Vacuo, Part V: The Production of Transverse Fields and Energy by Scalar Interferometry,” ibid., p. 76-78. See also Myron W. Evans et al., AIAS group paper, “Representation of the Vacuum Electromagnetic Field in Terms of Longitudinal and Time-like Potentials: Canonical Quantization,” ibid., p. 82-88.
[72]. For a short treatise on the complex Poynting vector, see D.S. Jones, The Theory of Electromagnetism, Pergamon Press, Oxford, 1964, p. 57-58. In a sense our present use is similar to the complex Poynting energy flow vector, but in our usage the absolute value of the imaginary energy flow is equal to the absolute value of the real energy flow, and there is a transformation process in between. This usage is possible because the imaginary flow is into a transducer, which takes care of transforming the received imaginary EM energy into the output real EM energy. We stress that the word “imaginary” is not at all synonymous with fictitious, but merely refers to what “dimension” or state the EM energy exists in.
[73]. Unfortunately, electrical engineers use the term “power” to also mean the rate of energy flow, when rigorously the term “power” means the rate at which work is done. We accent that we fully understand the difference, but are using the terminology common to the profession.
[74]. Nobelist Prigogine experienced something very similar when he proposed his open dissipative systems, where the system operations did not lead to the conventional increasing disorder. To say that he was subjected to the Inquisition is not an exaggeration. Other scientists have repeatedly been subjected to intense scientific attack and suppression — including Mayer (conservation of energy), Einstein (relativity), Wegener (drifting continental plates), and Ovshinsky (amorphous semiconductors), to name just a few of the hundreds who have been attacked in similar fashion. Science does not proceed by sweet reason, but by a vicious dogfight with no holds barred. It delights in “wolf pack” attacks upon the scientist with a new idea or discovery.
[75]. And the scientific community is certainly not prepared for the notion of using time as energy, freely and anywhere. In a sense, one can “burn time as fuel”. Consider this: in physics, the choice of fundamental units in one’s physics model is completely arbitrary. E.g., one can make a quite legitimate physics model having only a single fundamental unit (such is already done in certain areas of physics). Suppose we make the joule (energy) the only fundamental unit. It follows that everything else — including the second, and therefore time — is a function of energy. One can utilize the second as c² joules of energy. Hence the flow of time would have the same energy density as mass. After Einstein, the atom bomb, and the nuclear reactor, of course, we are all comfortable with the fact that mass is just spatial energy compressed by the factor c². So we really should not be too uncomfortable at the notion that time itself is energy compressed by the factor c². In this case, if in every second of the passage of time we were to convert one microsecond into ordinary EM spatial energy, we would produce some 9×10^10 joules of EM energy. Since that is done each second, this would give us the equivalent of the output of ninety 1000-megawatt power plants. Even at only 1.11% efficiency, the conversion process would yield the equivalent of one 1000-megawatt power plant. In fact, it is in theory possible to do such a conversion, and we have previously indicated the various mechanisms involved. There are also some rough experimental results that are at least consistent with the thesis. The interested reader is referred to T.E. Bearden, “EM Corrections Enabling a Practical Unified Field Theory with Emphasis on Time-Charging Interactions of Longitudinal EM Waves,” J. New Energy, 3(2/3), 1998, p. 12-28. See also the author’s similar paper with the same title in Explore, 8(6), 1998, p. 7-16. We believe that the real energy technology for the second half of this century is based on the use of time as fuel. The fundamental reactions and principles also enable a totally new form of high energy physics reactions, where very low spatial-energy photons are the carriers (their time components carry canonical time-energy, so that, given time-energy conversion, the highest energy photons of all are low frequency photons). These new reactions (given in the references cited) are indeed consistent with the startling nuclear transformation reactions met at low (spatial) photon energies in hundreds of successful cold fusion experiments worldwide.
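The arithmetic in note [75] can be checked directly. A minimal sketch, assuming only the note’s own premise that one second of time corresponds to c² joules of energy:

    # Check of note [75]'s arithmetic, under the note's premise that
    # one second of time corresponds to c^2 joules. The premise itself
    # is the note's thesis, not established physics.
    c = 3.0e8                                   # speed of light, m/s (rounded)
    joules_per_second_of_time = c ** 2          # the premise: 9e16 J per second
    converted = 1.0e-6 * joules_per_second_of_time   # one microsecond's worth
    print(converted)                  # 9e10 J released each second, i.e. 9e10 W
    print(converted / 1.0e9)          # = 90 plants of 1000 MW (1e9 W) each
    print(0.0111 * converted / 1.0e9) # at 1.11% efficiency: about 1 such plant

The numbers reproduce the note’s figures exactly: 10^-6 s × 9×10^16 J/s = 9×10^10 J each second, i.e. 90 gigawatts, and 1.11% of that is one 1000-megawatt plant.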
[76]. A classic example is given by Paul Nahin in his Oliver Heaviside: Sage in Solitude, IEEE Press, New York, 1988, p. 225. Quoting: “J.J. Waterston’s paper on the kinetic theory of gases, in 1845, was rejected by the Royal Society of London. One of the referees declared it to be ‘nothing but nonsense, unfit even for reading before the Society.’ … Waterston’s dusty manuscript was finally exhumed from its archival tomb forty years later, because of the efforts of Lord Rayleigh…” Our comment is that the same scientific attitude and resistance to innovative change prevails today. As the French say, “Plus ça change, plus c’est la même chose!” (“The more things change, the more they stay the same!”)
[77]. E.g., see G. Nicolis and I. Prigogine, Exploring Complexity, Piper, Munich, 1987 (an English version is Exploring Complexity: An Introduction, Freeman, New York, 1989); Ilya Prigogine, From Being to Becoming: Time and Complexity in the Physical Sciences, W.H. Freeman and Company, San Francisco, 1980. In 1977, Prigogine received the Nobel Prize in chemistry for his contributions to nonequilibrium thermodynamics, especially the theory of dissipative structures.
[78]. E.g., see Moisés Naím, “Lori’s War,” Foreign Policy, Vol. 118, Spring 2000, p. 28-55. See particularly Lori Wallach and Michelle Sforza, Whose Trade Organization? Corporate Globalization and the Erosion of Democracy, published by Public Citizen Foundation and available by order from http://www.globaltradewatch.org. Perusal of the leading environmental activist web sites now shows a significant and rising awareness that globalization is merely the surface façade of an older, imperial, feudalistic capitalism in which the checks and balances established by national states are being slowly and methodically bypassed.
As an example, quoting from Chapter 6: "Neither scientists nor the public can rely on power-industry research or analysis to help decide whether powerline electromagnetic fields affect human health because power-industry research and analysis are radically misleading." There are many other reports in the literature which also show effects of nonionizing EM radiation on cells, including detrimental effects.

[80]. Becker studied not just the immune system — which "heals" nothing at all, not even its own damaged cells — but also the cellular regenerative system. He and others found, e.g., that tiny trickle currents and potentials — either steady or pulsed — placed across otherwise intractable bone fractures would result in a rather astounding set of cellular changes which led to healing of the fracture by deposit of new bone. Eerily, Becker showed that the red blood cells coming into the area and under the EM influence would shuck their hemoglobin and grow cellular nuclei (i.e., dedifferentiate back to an earlier cellular state). These cells would then redifferentiate into the type of cells that make cartilage, and those in turn into the type of cells that make bone, and be deposited in the fracture to "grow bone" and heal the fracture. Incredibly, this is the only true "healing" modality in all Western medical science — which is otherwise built upon the theory of intervention rather than healing. After the intervention (which may be quite necessary!), the body's cellular regenerative system — or what is left of it after damage by such interventions as chemotherapy, etc. — is left entirely on its own to restore the damage (heal the damaged cells and tissues). Becker was twice nominated for a Nobel Prize. However, because he also testified in court against power companies, giving testimony as an expert witness that EM radiation from power lines could indeed induce harmful conditions in some exposed people, he was suppressed and eventually forced to retire.

[81]. See Robert O. Becker and Andrew A. Marino, Electromagnetism and Life, State University of New York Press, Albany, 1982. This reference gives a nice summary of EM bioeffects from the orthodox view, current as of the publication date. For Becker's work with the cellular regenerative system, see particularly R.O. Becker, "The neural semiconduction control system and its interaction with applied electrical current and magnetic fields," Proc. XI Internat. Congr. Radiol., Vol. 105, 1966, p. 1753-1759, Excerpta Medica Foundation, Amsterdam. See also Becker, "The direct current field: A primitive control and communication system related to growth processes," Proc. XVI Internat. Congr. Zool., Washington, D.C., Vol. 3, 1963, p. 179-183.

[82]. For an overview of the ansatz of present battery technology, see David Linden, Editor in Chief, Handbook of Batteries, Second Edition, McGraw Hill, New York, 1995; Colin A. Vincent and Bruno Scrosati, Modern Batteries: An Introduction to Electrochemical Power Sources, Second Edition, Wiley, New York, 1997. For a process to make a battery include a negative resistor and exhibit COP>1.0, see Bearden, "Bedini's Method For Forming Negative Resistors In Batteries," Proc. IC-2000, St. Petersburg, Russia (in press).

[83]. Such laboratories are private and professional testing companies, where the U.S. government has certified their expertise and qualifications, their testing to NIST, IEEE, and U.S.
government standards, their use of calibrated instruments, and the experience and ability of their professional test engineers and scientists. Such labs are routinely and widely used by aerospace firms. A Test Certificate from such a lab is acceptable to the courts, the U.S. Patent and Trademark Office, the U.S. government (which requires it on many contracts), and the U.S. scientific community. A goodly number of these laboratories are available throughout the U.S.

[84]. A few struggling publications in the "new energy" field are crucial to continued progress. The major ones are Journal of New Energy (Dr. Hal Fox, publisher), Infinite Energy (Dr. Eugene Mallove, publisher), and Explore (Chrystyne Jackson, publisher). Independent sustaining funding for these publications is urgently needed. We also highly commend the Department of Energy's Transportation group for maintaining a DOE website carrying the advanced electrodynamics papers of the Alpha Foundation's Institute for Advanced Study (AIAS). Funding for the AIAS is also urgently needed, to continue this absolutely essential theoretical work that is placing a solid physics foundation under the program of extracting and using EM energy from the vacuum.

[85]. Some recommended publications of interest are: Joshua Lederberg, Editor, Biological Weapons: Limiting the Threat, MIT Press, Cambridge, MA, 1999, with a foreword by Defense Secretary William S. Cohen; Richard A. Falkenrath, Robert D. Newman, and Bradley A. Thayer, America's Achilles Heel: Nuclear, Biological, and Chemical Terrorism and Covert Attack, MIT Press, 1998; Wendy Barnaby, The Plague Makers: The Secret World of Biological Warfare, Vision Paperbacks, Satin Publications Ltd., London, 1999 (a most readable and educational book for the nonspecialist); U.S. Congress, Office of Technology Assessment, Proliferation of Weapons of Mass Destruction: Assessing the Risks, Government Printing Office, Washington, D.C., 1993 (a major study on WMD and the risks to the U.S., including to the U.S. civilian population); Global Proliferation of Weapons of Mass Destruction, Part I, Senate Hearing 104-422, Hearings Before the Permanent Subcommittee on Investigations of the Committee on Governmental Affairs, U.S. Senate, Oct. 31 and Nov. 1, 1995.

[86]. Unfortunately, the extant unclassified references on longitudinal EM and more advanced EM weapons seem to be the publications by the present author, e.g., T.E. Bearden, "Mind Control and EM Wave Polarization Transductions, Part I," Explore, 9(2), 1999, p. 59; Part II, Explore, 9(3), 1999, p. 61; Part III, Explore, 9(4/5), 1999, p. 100-108; "EM Corrections Enabling a Practical Unified Field Theory with Emphasis on Time-Charging Interactions of Longitudinal EM Waves," Journal of New Energy, 3(2/3), 1998, p. 12-28; Energetics of Free Energy Systems and Vacuum Engine Therapies, Tara Publishing, Internet node www.tarapublishing.com/books, July 1997; Gravitobiology: A New Biophysics, Tesla Book Co., P.O. Box 121873, Chula Vista, CA 91912, 1991; Fer-de-Lance, Tesla Book Co., 1986; AIDS: Biological Warfare, Tesla Book Co., 1988; Soviet Weather Engineering Over North America, 1-hour videotape, 1985; Energetics: Extensions to Physics and Advanced Technology for Medical and Military Applications, CTEC Proprietary, May 1, 1998, 200+ page enclosure to CTEC Letter, "Saving the Lives of Mass BW Casualties from Terrorist BW Strikes on U.S. Population Centers," to Major General Thomas H.
Neary, Director of Nuclear and Counterproliferation, Office of the Deputy Chief of Staff, Air and Space Operations, HQ USAF, May 4, 1998; "Overview and Background of KGB Energetics Weapons Threat to the U.S.," updated Jan. 3, 1999, furnished to selected Senators and Congresspersons.

[87]. As an example, for decades Castro ran guerrilla and agent training camps in Southern Mexico. Many of the graduates of those camps — trained terrorists all — have been infiltrated across the U.S. border and into the U.S., to bide their time and wait for instructions. Some estimates are that several thousand such Castro agents alone are already on site and positioned for sabotage, poisoning of water supplies, destruction of transmission line towers, destruction of key bridges, etc. Several other nations hostile to the U.S. are also known to have agent teams already on site within the U.S. The new form of warfare/terrorism is to introduce the "troops" into the adversary's nation and populace in advance, as well as weapons caches, etc. So such preparations have definitely been accomplished within the United States, and undoubtedly some are still in progress and ongoing.

[88]. E.g., see Stanislav Lunev and Ira Winkler, 1998, ibid. Quoting, p. 22: "Though most Americans don't realize it, America is already penetrated by Russian military intelligence to the extent that arms caches lie in wait for use by Russian special forces — or Spetznatz." Quoting, p. 26: "It is surprisingly easy to smuggle nuclear weapons into the United States. A commonly used method is for a Russian airplane to fly across the ocean on a typical reconnaissance flight. The planes will be tracked by U.S. radar, but that's not a problem. When there are no other aircraft in visual range, the Russian airplane will launch a small, high-tech, stealth transport missile that can slip undetected into remote areas of the country. The missiles are retrieved by GRU operatives. Another way to get a weapon into the country is to have an 'oceanographic research' submarine deliver the device — accompanied by GRU specialists — to a remote section of coastline. Nuclear devices can also be slipped across the Mexican or Canadian borders. It is easy to get a bomb to Cuba and from there transport it to Mexico. Usually the devices are carried by a Russian intelligence officer or a trusted agent."

The Author

Dr. Thomas Bearden (Lieutenant Colonel, U.S. Army, Retired) is presently the President and Chief Executive Officer of CTEC, Inc., a Fellow Emeritus of the Alpha Foundation's Institute for Advanced Study (AIAS), and a Director of the Association of Distinguished American Scientists (ADAS). He holds a Science PhD, an MS in Nuclear Engineering, and a BS in Mathematics with a minor in Electronic Engineering; he is a graduate of the C&GSC, U.S. Army, and of the U.S. Army Guided Missile Staff Officer's Course (equivalent to an MS in Aerospace Engineering). He has also taken graduate courses in statistics and electromagnetics, and numerous missile, radar, electronic warfare, and counter-countermeasures courses. He had twenty years of active service in the U.S. Army. His Field Artillery, Patriot, Hawk, Hercules, Nike Ajax, and technical research experience was followed by nineteen years of technical research in re-entry vehicles and heat shielding, computer systems, C4I, wargame analysis, simulation and analysis, EW, ARM countermeasures, and strategy and tactics.
He has spent more than 20 years in personal research on the foundations of electrodynamics and on open EM systems far from thermodynamic equilibrium with their active environment, as well as on novel effects of longitudinal EM waves on living systems, and he founded the beginnings of a legitimate theory of permissible COP>1.0 electrical power systems. He is the author or co-author of approximately 200 papers and books and has been connected with four successful COP>1.0 laboratory prototype EM power systems. He is one of the world's leading theorists dealing with the hard physics of over-unity energy systems and scalar weapons technology. Web site: www.cheniere.org.
How simplifications help us to better understand the world

[Image: a woman on a swing holding onto an atom. Source: © Pepe Serra/Ikon Images]

Idealisations carry us through the world, increasing our understanding of what we see.

Whether scientists describe the world with a mathematical equation, a pictorial representation or a diagram, one thing is almost always the case: there is some degree of idealisation involved. In recent years, the use of idealisations has drawn the attention of philosophers. In fact, their work has illuminated the fascinating role of idealisations in understanding not only chemical practice and education, but also the chemical world itself.

What are idealisations?

By idealisations, philosophers wish to capture the deliberate act of dismissing, simplifying or disregarding some feature of the phenomenon that scientists aim to describe. Disregarding the effects of friction when mathematically describing the movement of a body is a classic example of an idealisation. In chemistry, too, there are many examples. Perhaps the most notable example of an idealisation in chemistry is the pictorial representation of atoms and molecules, depicted with static structures and without taking into account the effects of the environment (among other things). Another example is the mathematical description of atoms and molecules given by solving the Schrödinger equation. This equation describes the properties of an atom or molecule without considering relativistic effects, and often by disregarding some of the interactions of the electrons and nuclei that make it up.

Philosophers have identified different types of idealisations based on their form and function in science.1 For example, scientists often simplify things by distorting some of the features of the examined phenomenon; such idealisations are called Galilean (inspired by Galileo, who discussed their value in his work). This type of idealisation is especially prevalent with mathematical equations that cannot be solved easily. In such cases, scientists distort specific features of the examined system (specific properties, like its mass or friction) so as to solve the relevant equation and obtain a quantitative result.

Another reason why scientists idealise is that it helps them understand and explain phenomena. For example, consider the behaviour of a molecule in a particular container under specific conditions. If one took into account all the details of this situation, it would be very difficult to explain which factors play the most decisive role in how the molecule reacts. So chemists idealise by disregarding those features or conditions that they believe play no role in the molecule's behaviour. These are called Aristotelian idealisations.2

Based on this, it is not difficult to see that the use of idealisations has important implications for how science represents and understands the world.3 First, the results that scientists obtain from solving idealised equations are not, strictly speaking, correct. So a major issue in scientific practice is whether the descriptions scientists employ produce sufficiently accurate results. If those results are not acceptable, then one must examine what sort of idealisations are employed, to what degree these idealisations affect the accuracy of the relevant description, and whether it is possible to 'de-idealise' the description by adding some of the details that were initially omitted.
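To make this trade-off concrete, here is a minimal sketch (our illustration, not the article's; the pendulum, the 30-degree amplitude and the SI values are assumptions) comparing the classic small-angle idealisation of a pendulum with the exact, amplitude-dependent period:

```python
import numpy as np
from scipy.special import ellipk

# Idealised (small-angle) period: T0 = 2*pi*sqrt(L/g), independent of amplitude.
# Exact period: T = (2/pi) * T0 * K(sin^2(theta0/2)), where K is the complete
# elliptic integral of the first kind (scipy's ellipk takes the parameter m = k^2).
g, L = 9.81, 1.0               # illustrative values, SI units
theta0 = np.radians(30)        # release amplitude of 30 degrees

T_ideal = 2 * np.pi * np.sqrt(L / g)
T_exact = (2 / np.pi) * T_ideal * ellipk(np.sin(theta0 / 2) ** 2)

print(f"idealised period: {T_ideal:.4f} s")
print(f"exact period:     {T_exact:.4f} s")
print(f"relative error:   {(T_exact - T_ideal) / T_exact:.2%}")  # ~1.7% at 30 degrees
```

At small amplitudes the omitted detail is causally negligible and the idealisation is harmless; at larger amplitudes the error grows, and restoring the elliptic-integral correction is precisely the kind of 'de-idealisation' described above.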
Secondly, the use of idealisations has significantly revised how we understand science. The image of science as the practice of searching for the one true theory is far from how science is actually done.4 Instead, a range of different models are employed, each of which makes different idealisations and serves different purposes. Some models are used because they accurately predict certain properties; others are used because they explain well; still others are preferred because they are general enough to capture a group of different systems. So the choice of a particular idealisation is determined by the aims that the relevant description is supposed to serve.

Does this mean that chemistry is false?

An interesting conclusion that philosophers often draw from this is that science is not only about the search for truth. In fact, the use of idealisations has affected how philosophers evaluate scientific theories with respect to how realistically they describe the world. Some argue that the use of idealisations should at least make us sceptical about the degree to which we believe our theories to be true. On this view, the use of idealisations undermines the grounds we have to believe that our theories correctly represent the world. Instead, the value of our scientific descriptions lies mainly in how useful they are in explaining, predicting and manipulating things around us.

Others accept that idealisations render our descriptions strictly speaking false, but nevertheless believe that these descriptions correctly inform us about the world. Despite the use of idealisations, our theories are evaluated with respect to the best empirical means we have and must meet certain standards in order to be deemed acceptable. Therefore, one should not rule out scientific theories being faithful representations of the world.

Lastly, some philosophers believe that it is by idealising that we can grasp how the world actually is. By omitting details that play a minimal role in how a system behaves, idealisations lead to descriptions that successfully identify only those features that are causally relevant to the system's behaviour.5 In this context, idealisations do not merely help us describe a complex world; they are the means that enable us to accurately understand how specific parts of the world work.
ZMF Course: Fundamentals of Modern Physics

Department/Abbreviation: KEF/ZMF

Year: 2020

Guarantor: Mgr. Lukáš Richterek, Ph.D.

Annotation: The course Fundamentals of Modern Physics consists of lectures and numerical exercises. Students will be acquainted with the foundations of quantum and statistical physics, i.e. the disciplines that are necessary for understanding and explaining the phenomena treated by contemporary physics. The principles of quantum physics and examples of its applications will be presented. Attention will be paid to the relation between classical and quantum physics and to the contribution of quantum physics to the description of physical phenomena.

Course review:
1. Historical introduction. Old quantum theory of light, wave-particle dualism, radiation. Planck's law, the photoelectric effect, the Compton effect. Bohr's theory of the structure of atoms. De Broglie waves, diffraction of electrons.
2. The concept of the wave function and its physical meaning. Properties of wave functions. Representation of physical quantities, linear Hermitian operators, operator equations. Mean values of physical quantities. Operators of specific physical variables, commutation relations, the uncertainty principle.
3. The Schrödinger equation, stationary and non-stationary states. Green's function. Limit transitions to classical mechanics. Rate of change of physical quantities and the time-derivative operator. The Ehrenfest theorem. Parity condition.
4. Applications. Solutions for rectangular potentials, one-dimensional and three-dimensional potential wells, the method of separation of variables (see the sketch after this list). A potential barrier, the tunnel effect, cold emission, radioactive decay. One-dimensional and three-dimensional quantum linear harmonic oscillator (LHO). Particles in a spherically symmetric potential field. A model of the hydrogen atom, orbitals. Mechanical and magnetic orbital angular momentum of the electron.
5. Approximate methods of solving problems in quantum physics. Perturbation theory, variational methods. Stationary perturbation theory for non-degenerate and degenerate states, non-stationary perturbation theory, Fermi's golden rule. Direct and general variational methods.
6. Free particles, Green's function of a free particle.
7. Representation theory. Wave functions and operators as vectors and matrices in a Hilbert space. Dirac notation. Coordinate, momentum and energy representations. The Schrödinger and Heisenberg pictures. The density matrix, pure and mixed states.
8. The spin of particles and its experimental discovery. Pauli spin matrices, the Hamiltonian of a particle in an electromagnetic field. The Pauli equation, basics of relativistic quantum mechanics: the Klein-Gordon-Fock equation, the Dirac equation.
9. Basic concepts of statistical physics. Phase space, the Liouville theorem. Microcanonical, canonical and grand canonical ensembles. The statistical definition of entropy.
10. The statistical sum and the statistical integral, calculations of thermodynamic quantities, applications to some systems (Maxwell's distribution of velocities, monoatomic and diatomic ideal gases, the concept of paramagnetism).
11. Statistical distributions: fermions and bosons, Bose–Einstein condensation.
12. A model of a photon gas and blackbody radiation.
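As a concrete illustration of topic 4 above, a minimal sketch (not part of the syllabus; the choice of an electron in a 1 nm well is an assumption made for illustration) of the energy levels of a particle in a one-dimensional infinite potential well, E_n = n^2 pi^2 hbar^2 / (2 m a^2):

```python
import numpy as np

# Energy levels of a particle in a 1-D infinite potential well:
#   E_n = n^2 * pi^2 * hbar^2 / (2 * m * a^2),  n = 1, 2, 3, ...
hbar = 1.054571817e-34    # reduced Planck constant, J*s
m_e = 9.1093837015e-31    # electron mass, kg (illustrative choice of particle)
a = 1.0e-9                # well width: 1 nm (illustrative)
eV = 1.602176634e-19      # joules per electronvolt

for n in range(1, 5):
    E_n = (n**2 * np.pi**2 * hbar**2) / (2 * m_e * a**2)
    print(f"n = {n}: E = {E_n / eV:.3f} eV")   # E_1 is about 0.376 eV
```

Note the n^2 spacing of the levels; the same separation-of-variables machinery listed in the syllabus extends this result to the three-dimensional well.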