path integral
under construction
The notion of path integral originates in and is mainly used in the context of quantum mechanics and quantum field theory, where it is a certain operation supposed to model the notion of quantization.
The idea is that the quantum propagator – in FQFT the value of the functor $U : Cob \to Vect$ on a certain cobordism – is given by an integral kernel $U : \psi \mapsto \int K(-,y) \psi(y)\, d\mu$, where $K(x,y)$ is something like the integral of the exponentiated action functional $S$ over all field configurations $\phi$ with prescribed boundary data $x$ and $y$. Formally one writes
$$ K(x,y) = \int \exp(i S(\phi))\; D\phi $$
and calls this the path integral. Here the expression $D\phi$ is supposed to allude to a measure on the space of all $\phi$. The main problem with the path integral idea is that it is typically unclear what this measure should be – or, worse, it is typically clear that no suitable such measure exists.
The name path integral originates from the special case where the system is the sigma model describing a particle on a target space manifold $X$. In this case a field configuration $\phi$ is a path $\phi : [0,1] \to X$ in $X$, hence the integral over all field configurations is an integral over all paths.
The idea of the path integral famously goes back to Richard Feynman, who motivated the idea in quantum mechanics. In that context the notion can typically be made precise and shown to be equivalent to various other quantization prescriptions.
The central impact of the idea of the path integral, however, is in its application to quantum field theory, where it is often taken in the physics literature as the definition of the quantum field theory encoded by an action functional – disregarding the fact that in these contexts it is typically quite unclear what the path integral actually means, precisely.
Notably the Feynman perturbation series summing over Feynman graphs is motivated as one way to make sense of the path integral in quantum field theory and in practice usually serves as a definition of the perturbative path integral.
We start by stating the elementary description via the Feynman-Kac formula as traditional in physics textbooks, in the section Elementary description in quantum mechanics.

Then we indicate the more abstract formulation of this in terms of integration against the Wiener measure on the space of paths (for the Euclidean path integral), in the section As an integral against the Wiener measure.

Then we indicate a formulation in perturbation theory and BV-formalism, in the section Perturbatively for free field theory in BV-formalism.
Elementary description in quantum mechanics
A simple form of the path integral is realized in quantum mechanics, where it was originally dreamed up by Richard Feynman and then made precise using the Feynman-Kac formula. (Most calculations in practice are still done using perturbation theory; see the section Perturbatively for free field theory in BV-formalism below.)
The Schrödinger equation says that the rate at which the phase of an energy eigenvector rotates is proportional to its energy:
$$ i \hbar \frac{d}{dt} \psi = H \psi \,. $$
Therefore, the probability amplitude that the system evolves to the final state $\psi_F$ after evolving for time $t$ from the initial state $\psi_I$ is
$$ \langle \psi_F|e^{-iHt}|\psi_I\rangle \,. $$
Chop this up into time steps $\Delta t = t/N$ and use the fact that
$$ \int_{-\infty}^{\infty} |q\rangle\langle q|\, dq = 1 $$
to get
$$ \langle \psi_F| e^{-iH\Delta t} \left(\int_{-\infty}^{\infty} |q_{N-1} \rangle \langle q_{N-1}| dq_{N-1}\right) e^{-iH\Delta t} \left(\int_{-\infty}^{\infty} |q_{N-2} \rangle \langle q_{N-2}| dq_{N-2}\right) e^{-iH\Delta t} \cdots e^{-iH\Delta t} \left(\int_{-\infty}^{\infty} |q_1 \rangle \langle q_1| dq_1\right) e^{-iH\Delta t} |\psi_I\rangle $$
$$ = \int_{q_1} \cdots \int_{q_{N-2}} \int_{q_{N-1}} \langle \psi_F| e^{-iH\Delta t} |q_{N-1} \rangle \langle q_{N-1}| e^{-iH\Delta t} |q_{N-2} \rangle \langle q_{N-2}| e^{-iH\Delta t} \cdots e^{-iH\Delta t} |q_1 \rangle \langle q_1| e^{-iH\Delta t} |\psi_I\rangle\, dq_{N-1}\, dq_{N-2} \cdots dq_1 $$
Assume we have the free Hamiltonian $H = p^2/2m$. Looking at an individual term $\langle q_{n+1}| e^{-iH\Delta t} |q_{n} \rangle$, we can insert a factor of 1 and solve to get
$$ \begin{aligned} \langle q_{n+1}| e^{-iH\Delta t} \left(\int_{-\infty}^{\infty} \frac{dp}{2\pi}|p\rangle \langle p|\right)|q_{n} \rangle &= \int_{-\infty}^{\infty} \frac{dp}{2\pi}\, e^{-ip^2\Delta t/2m}\, \langle q_{n+1}|p\rangle \langle p|q_{n} \rangle \\ &= \int_{-\infty}^{\infty} \frac{dp}{2\pi}\, e^{-ip^2\Delta t/2m}\, e^{ip(q_{n+1}-q_n)} \\ &= \left(\frac{m}{2\pi i\, \Delta t}\right)^{\frac{1}{2}} e^{i \Delta t\, (m/2)[(q_{n+1}-q_n)/\Delta t]^2} \,. \end{aligned} $$

Collecting the Gaussian normalization factors into the formal measure
$$ \int Dq = \lim_{N \to \infty} \left(\frac{m}{2\pi i\, \Delta t}\right)^{\frac{N}{2}} \prod_{n=0}^{N-1} \int dq_n \,, $$
and letting $\Delta t \to 0$, $N \to \infty$, we get
$$ \langle \psi_F|e^{-iHt}|\psi_I\rangle = \int Dq\; e^{i \int_0^t dt\, \frac{1}{2} m \dot{q}^2} \,. $$
For arbitrary Hamiltonians $H = \frac{p^2}{2m} + V(q)$, we get
$$ \begin{aligned} \langle \psi_F|e^{-iHt}|\psi_I\rangle &= \int Dq\; e^{i \int_0^t dt\, \left(\frac{1}{2} m \dot{q}^2 - V(q)\right)} \\ &= \int Dq\; e^{i \int_0^t \mathcal{L}(\dot{q},q)\, dt} \\ &= \int Dq\; e^{i S(q)} \,, \end{aligned} $$
where S(q)S(q) is the action functional.
Is there an easy way to see how the Hamiltonian transforms into the Lagrangian in the exponent? One way: the Gaussian integral over $p$ above implements a Legendre transform. The exponent $p \dot q - H(p,q)$, with $\dot q = (q_{n+1}-q_n)/\Delta t$, is stationary at $p = m \dot q$, and evaluating it there yields precisely the Lagrangian $\mathcal{L}(\dot q, q) = \frac{1}{2} m \dot q^2 - V(q)$.
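As a numerical sanity check on the time-slicing argument (an illustration added here, not part of the original derivation): in the Euclidean (Wick-rotated) regime the oscillatory kernel becomes the Gaussian heat kernel, and composing $N$ short-time transfer matrices on a position grid must reproduce the exact finite-time kernel. Grid and step sizes below are arbitrary choices.

```python
import numpy as np

# Euclidean (Wick-rotated) illustration of the time slicing above: the
# oscillatory kernel exp(i m (Delta q)^2 / (2 Delta t)) becomes the
# Gaussian heat kernel, and composing N short-time transfer matrices
# reproduces the exact finite-time kernel.  Units: m = hbar = 1.

def heat_kernel(x, y, tau):
    """Exact Euclidean free-particle kernel K(x, y; tau)."""
    return np.sqrt(1.0 / (2 * np.pi * tau)) * np.exp(-(x - y) ** 2 / (2 * tau))

q = np.linspace(-10.0, 10.0, 801)   # position grid
dq = q[1] - q[0]
N, tau = 20, 1.0
dt = tau / N

# One time slice as a transfer matrix (integral operator on the grid).
T = heat_kernel(q[:, None], q[None, :], dt) * dq

# Compose N slices; dividing by dq turns the matrix back into a kernel.
K = np.linalg.matrix_power(T, N) / dq

mid = len(q) // 2
exact = heat_kernel(0.0, 0.0, tau)
print(K[mid, mid], exact)  # agree to several digits
```

The matrix product implements exactly the nested $\int dq_n$ integrals of the sliced propagator, so the agreement is the semigroup (sewing) property of the kernel made concrete.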
As an integral against the Wiener measure
More abstractly, the Euclidean path integral for the quantum mechanics of a charged particle may be defined by integrating the gauge-coupling action against the Wiener measure on the space of paths.
Consider a Riemannian manifold $(X,g)$ – hence a background field of gravity – and a connection $\nabla : X \to \mathbf{B}U(1)_{conn}$ – hence an electromagnetic background gauge field.
The gauge-coupling interaction term is given by the parallel transport of this connection
$$ \exp(i S) \coloneqq \exp\left(2\pi i \int_{(-)} [(-),\nabla] \right) \colon [I, X]_{x_0,x_1} \to Hom(E_{x_0}, E_{x_1}) \,, $$
where $E \to X$ is the complex line bundle which is associated to $\nabla$.
The Wiener measure $d\mu_W$ on the space of stochastic paths in $X$ we may suggestively write as
$$ d\mu_W = [\exp(-S_{kin}) D\gamma] $$
for it combines what in the physics literature is the kinetic action and a canonical measure on paths.
(This is a general phenomenon in formalizations of the process of quantization: the kinetic action (the free field theory-part of the action functional) is absorbed as part of the integration measure, against which the remaining interaction terms are integrated.)
Then one has (e.g. Norris 92, theorem 34; Charles 99, theorem 6.1):
the integral kernel for the time evolution propagator is
$$ U(x_0,x_1) = \int_{\gamma} tra(\nabla)(\gamma) \, [\exp(-S_{kin}(\gamma)) D\gamma] \,, $$
hence the integration of the parallel transport/holonomy against the Wiener measure.
(To make sense of this one first needs to extend the parallel transport from smooth paths to stochastic paths, see the references below.)
This “holonomy integrated against the Wiener measure” is the path integral in the form in which it notably appears in the worldline formalism for computing scattering amplitudes in quantum field theory. See (Strassler 92, (2.9), (2.10)). Notice in particular that by the discussion there this is the correct Wick rotated form: the kinetic action is not a complex phase but a real exponential $\exp(- S_{kin})$ while the gauge interaction term (the holonomy) is a complex phase (locally $\exp(i \int_\gamma A)$).
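The phrase "integration against the Wiener measure" can be illustrated by Monte Carlo (a sketch added here, with illustrative parameter values): for the harmonic potential $V(x) = x^2/2$ the Feynman-Kac expectation over Brownian paths started at the origin has the closed Cameron-Martin value $(\cosh t)^{-1/2}$, which sampled paths reproduce.

```python
import numpy as np

# Monte Carlo "integration against the Wiener measure": for the harmonic
# potential V(x) = x^2 / 2 the Feynman-Kac expectation over Brownian
# paths B started at 0,
#   E[ exp( - int_0^t V(B_s) ds ) ],
# has the closed Cameron-Martin value (cosh t)^(-1/2).

rng = np.random.default_rng(0)
t, n_steps, n_paths = 1.0, 400, 20000
dt = t / n_steps

# Brownian paths: cumulative sums of N(0, dt) increments.
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(dB, axis=1)

# Weight each path by exp(-S), the action S = int V(B_s) ds being
# discretized as a Riemann sum -- the "exp(-S_kin) Dgamma" of the text
# is supplied by the path sampling itself.
weights = np.exp(-0.5 * np.sum(B ** 2, axis=1) * dt)
estimate = float(weights.mean())

exact = float(np.cosh(t) ** -0.5)
print(estimate, exact)  # close, up to Monte Carlo and time-step error
```

Note how the kinetic action never appears explicitly: it is absorbed into the sampling of the Brownian paths, exactly as described above.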
From the point of view of higher prequantum field theory this means that the path integral sends a correspondence in the slice (infinity,1)-topos of smooth infinity-groupoids over the delooping groupoid BU(1)\mathbf{B}U(1)
[I,X] ()| 0 ()| 1 X exp(iS) X χ() χ() BU(1) \array{ && [I,X] \\ & {}^{(-)|_0}\swarrow && \searrow^{(-)|_1} \\ X && \swArrow_{\exp(i S)} && X \\ & {}_{\mathllap{\chi(\nabla)}}\searrow && \swarrow_{\mathrlap{\chi(\nabla)}} \\ && \mathbf{B}U(1) }
(essentially a prequantized Lagrangian correspondence) to another correspondence, now in the slice over the stack (now an actual 2-sheaf) Mod\mathbb{C}\mathbf{Mod} of modules over the complex numbers, hence of complex vector bundles:
X×X p 1 p 2 X γexp(iS(γ))[exp(S kin(γ))Dγ] X ρ(χ()) ρ(χ()) Mod. \array{ && X \times X \\ & {}^{p_1}\swarrow && \searrow^{p_2} \\ X && \swArrow_{\int_{\gamma}\exp(i S(\gamma)) [\exp(-S_{kin}(\gamma))D\gamma]} && X \\ & {}_{\mathllap{\rho(\chi(\nabla))}}\searrow && \swarrow_{\mathrlap{\rho(\chi(\nabla))}} \\ && \mathbb{C}\mathbf{Mod} \,. }
For more discussion along these lines see at motivic quantization.
Perturbatively for free field theory in BV-formalism
BV-BRST formalism is a means to formalize the path integral in perturbation theory as the passage to cochain cohomology in a quantum BV-complex. See at The BV-complex and homological integration for more details.
| action functional | kinetic action | interaction | path integral measure |
|---|---|---|---|
| BV differential | elliptic complex + | antibracket with interaction + | BV-Laplacian |
The path integral in the bigger picture
Ours is the age whose central fundamental theoretical physics question is:
What is quantum field theory?
A closely related question is:
What is the path integral ?
After its conception by Richard Feynman in the middle of the 20th century, it was notably Edward Witten's achievement in the late 20th century to make clear the vast potential for fundamental physics and pure mathematics underlying the concept of the quantum field theoretic path integral.
And yet, among all the aspects of QFT, the notion of the path integral is the one that has resisted attempts at formalization the most.
While functorial quantum field theory formalizes the properties – locality and the sewing law – that the path integral is demanded to have (whatever the path integral is, it is a process that in the end yields a functor on an (infinity,n)-category of cobordisms), by itself this sheds no light on what that procedure called "path integration" or "path integral quantization" actually is.
The single major insight into the right higher categorical formalization of the path integral is probably the idea indicated by Daniel Freed, which says that
• it is wrong to think of the action functional that the path integral integrates over as just a function: it is a higher categorical object;
• accordingly, the path integral is not something that just controls the numbers or linear maps assigned by a $d$-dimensional quantum field theory in dimension $d$: also the assignment to higher codimensions is to be regarded as part of the path integral;
• notably: the fact that quantum mechanics assigns a (Hilbert) space of sections of a vector bundle to codimension 1 is to be regarded as due to a summing operation in the sense of the path integral, too: the space of sections of a vector bundle is the continuum equivalent of the direct sum of its fibers.
More recently, one sees attempts to formalize this observation of Freed’s, notably in the context of the cobordism hypothesis:
based on material (on categories of “families”) in On the Classification of Topological Field Theories.
The original textbook reference is
• Richard Feynman, A. R. Hibbs, Quantum Mechanics and Path Integrals, New York: McGraw-Hill (1965)
Lecture notes include
Textbook accounts include
• G. Johnson, M. Lapidus, The Feynman integral and Feynman’s operational calculus, Oxford University Press, Oxford, 2000.
• Barry Simon, Functional integration and quantum physics AMS Chelsea Publ., Providence, 2005
• Joseph Polchinski, String theory, part I, appendix A
The worldline path integral as a way to compute scattering amplitudes in QFT was understood in (Strassler 92).
Stochastic integration theory
The following articles use the integration over Wiener measures on stochastic processes for formalizing the path integral.
• James Norris, A complete differential formalism for stochastic calculus in manifolds, Séminaire de probabilités de Strasbourg, 26 (1992), p. 189-209 (NUMDAM)
• Vassili Kolokoltsov, Path integration: connecting pure jump and Wiener processes (pdf)
• Bruce Driver, Anton Thalmaier, Heat equation derivative formulas for vector bundles, Journal of Functional Analysis 183, 42-108 (2001) (pdf)
For charged particle/path integral of holonomy functional
The following articles discuss (aspects of) the path integral for the charged particle coupled to a background gauge field, in which case the path integral is essentially the integration of the holonomy/parallel transport functional against the Wiener measure.
• Marc Arnaudon and Anton Thalmaier, Yang–Mills fields and random holonomy along Brownian bridges, Ann. Probab. Volume 31, Number 2 (2003), 769-790. (Euclid)
• Mikhail Kapranov, Noncommutative geometry and path integrals, in Algebra, Arithmetic and Geometry, Birkhäuser Progress in Mathematics 27 (2009) (arXiv:math/0612411)
• Christian Bär, Frank Pfäffle, Path integrals on manifolds by finite dimensional approximation, J. reine angew. Math., (2008), 625: 29-57. (arXiv:math.AP/0703272)
• Dana Fine, Stephen Sawin, A Rigorous Path Integral for Supersymmetric Quantum Mechanics and the Heat Kernel (arXiv:0705.0638)
A discussion for phase spaces equipped with a Kähler polarization and a prequantum line bundle is in
• Laurent Charles, Feynman path integral and Toeplitz Quantization, Helv. Phys. Acta 72 (1999) 341 (pdf)
following Norris 92, theorem (34).
Other references on mathematical aspects of path integrals include
Detailed rigorous discussion for quadratic Hamiltonians and for phase space paths is in
Discussion of quantization of Chern-Simons theory via a Wiener measure is in
• Adrian P. C. Lim, Chern-Simons Path Integral on 3\mathbb{R}^3 using Abstract Wiener Measure (pdf)
Lecture notes on quantum field theory, emphasizing mathematics of the Euclidean path integrals and the relation to statistical physics are at
MathOverflow questions: mathematics-of-path-integral-state-of-the-art, path-integrals-outside-qft, doing-geometry-using-feynman-path-integral, path-integrals-localisation, finite-dimensional-feynman-integrals, the-mathematical-theory-of-feynman-integrals
• Theo Johnson-Freyd, The formal path integral and quantum mechanics, J. Math. Phys. 51, 122103 (2010) arxiv/1004.4305, doi; On the coordinate (in)dependence of the formal path integral, arxiv/1003.5730
Revised on January 2, 2015 17:20:44 by Urs Schreiber
The meaning of the word “superposition”
This is from the Wikipedia article on Hilbert's 13th Problem as it was on 31 March 2012:
In their paper A relation between multidimensional data compression and Hilbert’s 13th problem, Masahiro Yamada and Shigeo Akashi describe an example of Arnold's theorem this way:
Let $f(\cdot, \cdot, \cdot)$ be the function of three variables defined as $f(x, y, z) = xy + yz + zx$, $x, y, z \in \mathbb{C}$. Then, we can easily prove that there do not exist functions of two variables $g(\cdot, \cdot)$, $u(\cdot, \cdot)$ and $v(\cdot, \cdot)$ satisfying the following equality: $f(x, y, z) = g(u(x, y), v(x, z))$, $x, y, z \in \mathbb{C}$. This result shows us that $f$ cannot be represented by any 1-time nested superposition constructed from three complex-valued functions of two variables. But it is clear that the following equality holds: $f(x, y, z) = x(y + z) + (yz)$, $x, y, z \in \mathbb{C}$. This result shows us that $f$ can be represented as a 2-time nested superposition.
The article about superposition in All about circuits says:
Superposition Theorem in Wikipedia:
The superposition theorem for electrical circuits states that for a linear system the response (Voltage or Current) in any branch of a bilateral linear circuit having more than one independent source equals the algebraic sum of the responses caused by each independent source acting alone, while all other independent sources are replaced by their internal impedances.
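The quoted theorem is easy to verify numerically. Here is a check on a hypothetical one-node circuit (component values are made up): a voltage source behind a resistor and a current source both feed a node tied to ground through a second resistor, and the node voltage with both sources on equals the sum of the two single-source solutions.

```python
# Numerical check of the superposition theorem on a hypothetical one-node
# circuit: voltage source V1 behind R1, current source I2 injected at the
# node, R3 from the node to ground.  Nodal analysis gives
#   (v - V1)/R1 + v/R3 = I2,  which is linear in (V1, I2).

def node_voltage(V1, I2, R1=100.0, R3=220.0):
    G = 1.0 / R1 + 1.0 / R3        # total conductance seen at the node
    return (V1 / R1 + I2) / G

both_sources = node_voltage(V1=10.0, I2=0.05)

# Each independent source acting alone (a zeroed voltage source becomes a
# short, a zeroed current source becomes an open), then summed:
sum_of_parts = node_voltage(10.0, 0.0) + node_voltage(0.0, 0.05)

print(both_sources, sum_of_parts)  # identical, as the theorem states
```

The theorem is nothing but linearity of the nodal equations in the source terms, which the function above makes explicit.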
Quantum superposition in Wikipedia:
Quantum superposition is a fundamental principle of quantum mechanics. It holds that a physical system — such as an electron — exists partly in all its particular, theoretically possible states (or, configuration of its properties) simultaneously; but, when measured, it gives a result corresponding to only one of the possible configurations (as described in interpretation of quantum mechanics).
Mathematically, it refers to a property of solutions to the Schrödinger equation; since the Schrödinger equation is linear, any linear combination of solutions to a particular equation will also be a solution of it. Such solutions are often made to be orthogonal (i.e. the vectors are at right-angles to each other), such as the energy levels of an electron. By doing so the overlap energy of the states is nullified, and the expectation value of an operator (any superposition state) is the expectation value of the operator in the individual states, multiplied by the fraction of the superposition state that is "in" that state.
The CIO midmarket site says much the same thing as the first paragraph of the Wikipedia Quantum Superposition entry but does not mention the stuff in the second paragraph.
In particular, the Yamada & Akashi article describes the way the functions of two variables are put together as "superposition", whereas the Wikipedia article on Hilbert's 13th calls it composition. Of course, superposition in the sense of the Superposition Principle is a composition of multivalued functions with the top function being addition. Both of Yamada & Akashi's examples have addition at the top. But the Arnold theorem allows any continuous function at the top (and anywhere else in the composite).
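The 2-time nested superposition from the quoted example can be checked mechanically (the function names below are ours, not Yamada & Akashi's):

```python
# f(x, y, z) = xy + yz + zx as a "superposition": a composition with
# addition as the top function, nested two levels deep.

def f(x, y, z):
    return x * y + y * z + z * x

def u(a, b):               # an inner function of two variables
    return a * b

def nested(x, y, z):
    # top function: addition; its arguments are 2-variable functions
    return u(x, y + z) + u(y, z)

for (x, y, z) in [(1, 2, 3), (-1.5, 0.25, 4.0)]:
    assert f(x, y, z) == nested(x, y, z)
print("f agrees with its 2-time nested superposition")
```

Note that addition sits at the top here, as in the Superposition Principle, but Arnold's theorem would allow any continuous function in that position.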
So one question is: is the word "superposition" ever used for general composition of multivariable functions? This requires the kind of research I proposed in the introduction of The Handbook of Mathematical Discourse, which I am not about to do myself.
The first Wikipedia article above uses "composition" where I would use "composite". This is part of a general phenomenon of using the operation name for the result of the operation; for example, students, even college students, sometimes refer to the "plus of 2 and 3" instead of the "sum of 2 and 3". (See "name and value" in abstractmath.org.) Using "composition" for "composite" is analogous to this, although the analogy is not perfect. This may be a change in progress in the language which simplifies things without doing much harm. Even so, I am irritated when "composition" is used for "composite".
Quantum superposition seems to be a separate idea. The second paragraph of the Wikipedia article on quantum superposition probably explains the use of the word in quantum mechanics.
Composites of functions
In my post on automatic spelling reform, I mentioned the various attempts at spelling reform that have resulted in both the old and new systems being used, which only makes things worse. This happens in Christian denominations, too. Someone (Martin Luther, John Wesley) tries to reform things; result: two denominations. But a lot of the time the reform effort simply disappears. The Chicago Tribune tried for years to get us to write “thru” and “tho” — and failed. Nynorsk (really a language reform rather than a spelling reform) is down to 18% of the population and the results of allowing Nynorsk forms to be used in the standard language have mostly been nil. (See Note 1.)
In my early years as a mathematician I wrote a bunch of papers writing functions on the right (including the one mentioned in the last post). I was inspired by some algebraists and particularly by Beck’s Thesis (available online via TAC), which I thought was exceptionally well-written. This makes function composition read left to right and makes the pronunciation of commutative diagrams get along with notation, so when you see the diagram below you naturally write h = fg instead of h = gf.

[Figure: Composite]
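The two reading orders can be made concrete in code (the helper names `then` and `after` are ours):

```python
# The two reading orders for function composition, as code.

def then(f, g):
    """Diagrammatic order: (f ; g)(x) = g(f(x)) -- read along the arrows."""
    return lambda x: g(f(x))

def after(g, f):
    """Classical order: (g o f)(x) = g(f(x))."""
    return lambda x: g(f(x))

double = lambda x: 2 * x
inc = lambda x: x + 1

# For a diagram X --f--> Y --g--> Z the left-to-right reader writes
# h = fg, the right-to-left reader writes h = g o f; same function:
h1 = then(double, inc)
h2 = after(inc, double)
print(h1(10), h2(10))  # both 21
```

Either convention defines the same composite; the whole dispute is about which argument order matches the picture.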
Sadly, I gave all that up before 1980 (I just looked at some of my old papers to check). People kept complaining. I even completely rewrote one long paper (Reference [3]) changing from right hand to left hand (just like Samoa). I did this in Zürich when I had the gout, and I was happy to do it because it was very complicated and I had a chance to check for errors.
Well, I adapted. I have learned to read the arrows backward (g then f in the diagram above). Some French category theorists write the diagram backward, thus:
But I was co-authoring books on category theory in those days and didn’t think people would accept it. Not to mention Mike Barr (not that he is not a people, oh, never mind).
Nevertheless, we should have gone the other way. We should have adopted the Dvorak keyboard and Betamax, too.
Note 1: A lifelong Norwegian friend of ours said that when her children say “boka” instead of “boken” it sounds like hillbilly talk does to Americans. I kind of regretted this, since I grew up in north Georgia and have been a kind of hillbilly-wannabe (mostly because of the music); I don’t share that negative reaction to hillbillies. On the other hand, you can fageddabout “ho” for “hun”.
[1] Charles Wells, Automorphisms of group extensions, Trans. Amer. Math. Soc, 155 (1970), 189-194.
[2] John Martino and Stewart Priddy, Group extensions and automorphism group rings. Homology, Homotopy and Applications 5 (2003), 53-70.
[3] Charles Wells, Wreath product decomposition of categories 1, Acta Sci. Math. Szeged 52 (1988), 307 – 319.
Soliton
Solitary wave in a laboratory wave channel
The soliton phenomenon was first described in 1834 by John Scott Russell (1808–1882) who observed a solitary wave in the Union Canal in Scotland. He reproduced the phenomenon in a wave tank and named it the "Wave of Translation".
A single, consensus definition of a soliton is difficult to find. Drazin & Johnson (1989, p. 15) ascribe three properties to solitons:
1. They are of permanent form;
2. They are localized within a region;
3. They can interact with other solitons, and emerge from the collision unchanged, except for a phase shift.
More formal definitions exist, but they require substantial mathematics. Moreover, some scientists use the term soliton for phenomena that do not quite have these three properties (for instance, the 'light bullets' of nonlinear optics are often called solitons despite losing energy during interaction).[1]
A hyperbolic secant (sech) envelope soliton for water waves. The blue line is the carrier wave, while the red line is the envelope soliton.
Dispersion and non-linearity can interact to produce permanent and localized wave forms. Consider a pulse of light traveling in glass. This pulse can be thought of as consisting of light of several different frequencies. Since glass shows dispersion, these different frequencies will travel at different speeds and the shape of the pulse will therefore change over time. However, there is also the non-linear Kerr effect: the refractive index of a material at a given frequency depends on the light's amplitude or strength. If the pulse has just the right shape, the Kerr effect will exactly cancel the dispersion effect, and the pulse's shape won't change over time: a soliton. See soliton (optics) for a more detailed description.
Many exactly solvable models have soliton solutions, including the Korteweg–de Vries equation, the nonlinear Schrödinger equation, the coupled nonlinear Schrödinger equation, and the sine-Gordon equation. The soliton solutions are typically obtained by means of the inverse scattering transform and owe their stability to the integrability of the field equations. The mathematical theory of these equations is a broad and very active field of mathematical research.
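The shape-preserving behaviour is easy to see numerically (an added sketch, not from the article): the focusing nonlinear Schrödinger equation $i u_t + \tfrac{1}{2} u_{xx} + |u|^2 u = 0$ has the fundamental soliton $u(x,t) = \operatorname{sech}(x)\, e^{i t/2}$, and propagating a sech pulse with a standard split-step Fourier scheme leaves $|u|$ unchanged.

```python
import numpy as np

# Split-step Fourier propagation of the focusing nonlinear Schroedinger
# equation  i u_t + (1/2) u_xx + |u|^2 u = 0,  whose fundamental soliton
# is u(x, t) = sech(x) exp(i t / 2).  A soliton keeps its shape, so |u|
# should match the initial sech profile after propagation.

L, n, T, n_steps = 40.0, 512, 5.0, 2000
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
dt = T / n_steps

u = 1.0 / np.cosh(x)                        # sech initial profile
half_linear = np.exp(-0.25j * k ** 2 * dt)  # half of the linear step

for _ in range(n_steps):                    # Strang splitting
    u = np.fft.ifft(half_linear * np.fft.fft(u))
    u = u * np.exp(1j * np.abs(u) ** 2 * dt)   # full nonlinear step
    u = np.fft.ifft(half_linear * np.fft.fft(u))

drift = float(np.max(np.abs(np.abs(u) - 1.0 / np.cosh(x))))
print(drift)  # stays small: the pulse has not dispersed
```

With the nonlinear term switched off the same pulse spreads out, which is exactly the dispersion-versus-Kerr balance described above.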
Some types of tidal bore, a wave phenomenon of a few rivers including the River Severn, are 'undular': a wavefront followed by a train of solitons. Other solitons occur as the undersea internal waves, initiated by seabed topography, that propagate on the oceanic pycnocline. Atmospheric solitons also exist, such as the Morning Glory Cloud of the Gulf of Carpentaria, where pressure solitons traveling in a temperature inversion layer produce vast linear roll clouds. The recent and not widely accepted soliton model in neuroscience proposes to explain the signal conduction within neurons as pressure solitons.
A topological soliton, also called a topological defect, is any solution of a set of partial differential equations that is stable against decay to the "trivial solution." Soliton stability is due to topological constraints, rather than integrability of the field equations. The constraints arise almost always because the differential equations must obey a set of boundary conditions, and the boundary has a non-trivial homotopy group, preserved by the differential equations. Thus, the differential equation solutions can be classified into homotopy classes.
There is no continuous transformation that will map a solution in one homotopy class to another. The solutions are truly distinct, and maintain their integrity, even in the face of extremely powerful forces. Examples of topological solitons include the screw dislocation in a crystalline lattice, the Dirac string and the magnetic monopole in electromagnetism, the Skyrmion and the Wess–Zumino–Witten model in quantum field theory, and cosmic strings and domain walls in cosmology.
In 1834, John Scott Russell describes his wave of translation.[nb 1] The discovery is described here in Scott Russell's own words:[nb 2]
Scott Russell spent some time making practical and theoretical investigations of these waves. He built wave tanks at his home and noticed some key properties:
• The waves are stable, and can travel over very large distances (normal waves would tend to either flatten out, or steepen and topple over)
• The speed depends on the size of the wave, and its width on the depth of water.
• Unlike normal waves they will never merge – so a small wave is overtaken by a large one, rather than the two combining.
Scott Russell's experimental work seemed at odds with Isaac Newton's and Daniel Bernoulli's theories of hydrodynamics. George Biddell Airy and George Gabriel Stokes had difficulty accepting Scott Russell's experimental observations because they could not be explained by the existing water wave theories. Their contemporaries spent some time attempting to extend the theory but it would take until the 1870s before Joseph Boussinesq and Lord Rayleigh published a theoretical treatment and solutions.[nb 3] In 1895 Diederik Korteweg and Gustav de Vries provided what is now known as the Korteweg–de Vries equation, including solitary wave and periodic cnoidal wave solutions.[3][nb 4]
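The Korteweg–de Vries solitary wave can be verified directly (an added check, with our normalization of the equation): $u(x,t) = \tfrac{c}{2}\operatorname{sech}^2\!\big(\tfrac{\sqrt{c}}{2}(x - ct)\big)$ solves $u_t + 6 u u_x + u_{xxx} = 0$, so its residual vanishes under spectral differentiation.

```python
import numpy as np

# The KdV equation  u_t + 6 u u_x + u_xxx = 0  has the solitary-wave
# solution  u(x, t) = (c/2) sech^2( (sqrt(c)/2) (x - c t) ).
# For a travelling wave u_t = -c u_x, so the residual
#   -c u_x + 6 u u_x + u_xxx
# must vanish; we check this with spectral derivatives at t = 0.

c, L, n = 1.0, 60.0, 1024
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

u = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * x) ** 2

def deriv(f, order):
    """Spectral derivative on the periodic grid."""
    return np.real(np.fft.ifft((1j * k) ** order * np.fft.fft(f)))

residual = -c * deriv(u, 1) + 6 * u * deriv(u, 1) + deriv(u, 3)
print(float(np.max(np.abs(residual))))  # tiny (spectral accuracy)
```

Note the speed–amplitude link visible in the formula: the wave height is $c/2$ and the width scales as $1/\sqrt{c}$, matching Scott Russell's observation that taller solitary waves travel faster.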
An animation of the overtaking of two solitary waves according to the Benjamin–Bona–Mahony equation – or BBM equation, a model equation for (among others) long surface gravity waves. The wave heights of the solitary waves are 1.2 and 0.6, respectively, and their velocities are 1.4 and 1.2.
The upper graph is for a frame of reference moving with the average velocity of the solitary waves.
The lower graph (with a different vertical scale and in a stationary frame of reference) shows the oscillatory tail produced by the interaction.[4] Thus, the solitary wave solutions of the BBM equation are not solitons.
In 1965 Norman Zabusky of Bell Labs and Martin Kruskal of Princeton University first demonstrated soliton behavior in media subject to the Korteweg–de Vries equation (KdV equation) in a computational investigation using a finite difference approach. They also showed how this behavior explained the puzzling earlier work of Fermi, Pasta and Ulam.[5]
In 1967, Gardner, Greene, Kruskal and Miura discovered an inverse scattering transform enabling analytical solution of the KdV equation.[6] The work of Peter Lax on Lax pairs and the Lax equation has since extended this to solution of many related soliton-generating systems.
Note that solitons are, by definition, unaltered in shape and speed by a collision with other solitons.[7] So solitary waves on a water surface are near-solitons, but not exactly – after the interaction of two (colliding or overtaking) solitary waves, they have changed a bit in amplitude and an oscillatory residual is left behind.[8]
Solitons in fiber optics
Much experimentation has been done using solitons in fiber optics applications. Solitons in a fiber optic system are described by the Manakov equations. Solitons' inherent stability makes long-distance transmission possible without the use of repeaters, and could potentially double transmission capacity as well.[9]
Year Discovery
1973 Akira Hasegawa of AT&T Bell Labs was the first to suggest that solitons could exist in optical fibers, due to a balance between self-phase modulation and anomalous dispersion.[10] Also in 1973 Robin Bullough made the first mathematical report of the existence of optical solitons. He also proposed the idea of a soliton-based transmission system to increase performance of optical telecommunications.
1987 Emplit et al. (1987) – from the Universities of Brussels and Limoges – made the first experimental observation of the propagation of a dark soliton, in an optical fiber.
1988 Linn Mollenauer and his team transmitted soliton pulses over 4,000 kilometers using a phenomenon called the Raman effect, named after Sir C. V. Raman who first described it in the 1920s, to provide optical gain in the fiber.
1991 A Bell Labs research team transmitted solitons error-free at 2.5 gigabits per second over more than 14,000 kilometers, using erbium optical fiber amplifiers (spliced-in segments of optical fiber containing the rare earth element erbium). Pump lasers, coupled to the optical amplifiers, activate the erbium, which energizes the light pulses.
1998 Thierry Georges and his team at France Telecom R&D Center, combining optical solitons of different wavelengths (wavelength-division multiplexing), demonstrated a composite data transmission of 1 terabit per second (1,000,000,000,000 units of information per second), not to be confused with Terabit-Ethernet.
The above impressive experiments have not translated to actual commercial soliton system deployments, however, in either terrestrial or submarine systems, chiefly due to the Gordon–Haus (GH) jitter. The GH jitter requires sophisticated, expensive compensatory solutions that ultimately make dense wavelength-division multiplexing (DWDM) soliton transmission in the field unattractive, compared to the conventional non-return-to-zero/return-to-zero paradigm. Further, the likely future adoption of the more spectrally efficient phase-shift-keyed/QAM formats makes soliton transmission even less viable, due to the Gordon–Mollenauer effect. Consequently, the long-haul fiberoptic transmission soliton has remained a laboratory curiosity.
2000 Cundiff predicted the existence of a vector soliton in a birefringence fiber cavity passively mode locking through SESAM. The polarization state of such a vector soliton could either be rotating or locked depending on the cavity parameters.[11]
2008 D. Y. Tang et al. observed a novel form of higher-order vector soliton in experiments and numerical simulations. Different types of vector solitons and the polarization state of vector solitons have been investigated by his group.[12]
Solitons in biology
Solitons may occur in proteins[13] and DNA.[14] Solitons are related to the low-frequency collective motion in proteins and DNA.[15] A recently developed model in neuroscience proposes that signals are conducted within neurons in the form of solitons.[16][17][18]
Solitons in magnets
In magnets, there also exist different types of solitons and other nonlinear waves.[19] These magnetic solitons are an exact solution of classical nonlinear differential equations — magnetic equations, e.g. the Landau–Lifshitz equation, continuum Heisenberg model, Ishimori equation, nonlinear Schrödinger equation and others.
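As a concrete check on the last of the equations named above, the focusing one-dimensional nonlinear Schrödinger equation — taken here in one standard normalization, i ψ_t + ½ ψ_xx + |ψ|² ψ = 0, which is an assumption, since conventions vary — has the bright soliton ψ(x, t) = sech(x) e^{it/2}. A minimal sketch verifying this numerically with finite differences (the grid, test time, and step size are arbitrary choices):

```python
import cmath
import math

# Candidate soliton of i psi_t + (1/2) psi_xx + |psi|^2 psi = 0
def psi(x, t):
    return (1.0 / math.cosh(x)) * cmath.exp(0.5j * t)

t = 0.7      # arbitrary time at which to test
h = 1e-4     # step for central finite differences

max_res = 0.0
for i in range(-50, 51):          # x from -5 to 5
    x = i / 10.0
    psi_t = (psi(x, t + h) - psi(x, t - h)) / (2 * h)
    psi_xx = (psi(x + h, t) - 2 * psi(x, t) + psi(x - h, t)) / h ** 2
    res = abs(1j * psi_t + 0.5 * psi_xx + abs(psi(x, t)) ** 2 * psi(x, t))
    max_res = max(max_res, res)

print(max_res)  # small (finite-difference error only): the PDE is satisfied
```

The residual is limited only by the finite-difference truncation and rounding error, confirming the sech profile is an exact solution of this normalization.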
The bound state of two solitons is known as a bion, or in systems where the bound state periodically oscillates, a "breather."
In field theory, bion usually refers to the solution of the Born–Infeld model. The name appears to have been coined by G. W. Gibbons in order to distinguish this solution from the conventional soliton, understood as a regular, finite-energy (and usually stable) solution of a differential equation describing some physical system.[20] The word regular means a smooth solution carrying no sources at all. However, the solution of the Born–Infeld model still carries a source in the form of a Dirac delta function at the origin. As a consequence it displays a singularity at this point (although the electric field is everywhere regular). In some physical contexts (for instance string theory) this feature can be important, which motivated the introduction of a special name for this class of solitons.
On the other hand, when gravity is added (i.e. when considering the coupling of the Born–Infeld model to general relativity) the corresponding solution is called EBIon, where "E" stands for Einstein.
See also
1. ^ "Translation" here means that there is real mass transport, although it is not the same water which is transported from one end of the canal to the other end by this "Wave of Translation". Rather, a fluid parcel acquires momentum during the passage of the solitary wave, and comes to rest again after the passage of the wave. But the fluid parcel has been displaced substantially forward during the process – by Stokes drift in the wave propagation direction. And a net mass transport is the result. Usually there is little mass transport from one side to another side for ordinary waves.
2. ^ This passage has been repeated in many papers and books on soliton theory.
3. ^ Lord Rayleigh published a paper in Philosophical Magazine in 1876 to support John Scott Russell's experimental observation with his mathematical theory. In his 1876 paper, Lord Rayleigh mentioned Scott Russell's name and also admitted that the first theoretical treatment was by Joseph Valentin Boussinesq in 1871. Joseph Boussinesq mentioned Russell's name in his 1871 paper. Thus Scott Russell's observations on solitons were accepted as true by some prominent scientists within his own lifetime of 1808–1882.
4. ^ Korteweg and de Vries did not mention John Scott Russell's name at all in their 1895 paper but they did quote Boussinesq's paper of 1871 and Lord Rayleigh's paper of 1876. The paper by Korteweg and de Vries in 1895 was not the first theoretical treatment of this subject but it was a very important milestone in the history of the development of soliton theory.
1. ^ "Light bullets".
2. ^ Scott Russell, J. (1844). "Report on waves". Fourteenth meeting of the British Association for the Advancement of Science.
3. ^ Korteweg, D. J.; de Vries, G. (1895). "On the Change of Form of Long Waves advancing in a Rectangular Canal and on a New Type of Long Stationary Waves". Philosophical Magazine 39: 422–443. doi:10.1080/14786449508620739.
4. ^ Bona, J. L.; Pritchard, W. G.; Scott, L. R. (1980). "Solitary‐wave interaction". Physics of Fluids 23 (3): 438–441. Bibcode:1980PhFl...23..438B. doi:10.1063/1.863011.
5. ^ Zabusky & Kruskal (1965)
6. ^ Gardner, Clifford S.; Greene, John M.; Kruskal, Martin D.; Miura, Robert M. (1967). "Method for Solving the Korteweg–deVries Equation". Physical Review Letters 19 (19): 1095–1097. Bibcode:1967PhRvL..19.1095G. doi:10.1103/PhysRevLett.19.1095.
7. ^ Remoissenet, M. (1999). Waves called solitons: Concepts and experiments. Springer. p. 11. ISBN 9783540659198.
8. ^ See e.g.:
Maxworthy, T. (1976). "Experiments on collisions between solitary waves". Journal of Fluid Mechanics 76 (1): 177–186. Bibcode:1976JFM....76..177M. doi:10.1017/S0022112076003194.
Fenton, J.D.; Rienecker, M.M. (1982). "A Fourier method for solving nonlinear water-wave problems: application to solitary-wave interactions". Journal of Fluid Mechanics 118: 411–443. Bibcode:1982JFM...118..411F. doi:10.1017/S0022112082001141.
Craig, W.; Guyenne, P.; Hammack, J.; Henderson, D.; Sulem, C. (2006). "Solitary water wave interactions". Physics of Fluids 18 (057106): 25 pp. Bibcode:2006PhFl...18e7106C. doi:10.1063/1.2205916.
9. ^ "Photons advance on two fronts". October 24, 2005. Retrieved 2011-02-15.
10. ^ Fred Tappert (January 29, 1998). "Reminiscences on Optical Soliton Research with Akira Hasegawa".
11. ^ Cundiff, S. T.; Collings, B. C.; Akhmediev, N. N.; Soto-Crespo, J. M.; Bergman, K.; Knox, W. H. (1999). "Observation of Polarization-Locked Vector Solitons in an Optical Fiber". Physical Review Letters 82 (20): 3988. Bibcode:1999PhRvL..82.3988C. doi:10.1103/PhysRevLett.82.3988.
12. ^ Tang, D. Y.; Zhang, H.; Zhao, L. M.; Wu, X. (2008). "Observation of high-order polarization-locked vector solitons in a fiber laser". Physical Review Letters 101 (15): 153904. Bibcode:2008PhRvL.101o3904T. doi:10.1103/PhysRevLett.101.153904. PMID 18999601.
13. ^ Davydov, Aleksandr S. (1991). Solitons in molecular systems. Mathematics and its applications (Soviet Series) 61 (2nd ed.). Kluwer Academic Publishers. ISBN 0-7923-1029-2.
14. ^ Yakushevich, Ludmila V. (2004). Nonlinear physics of DNA (2nd revised ed.). Wiley-VCH. ISBN 3-527-40417-1.
15. ^ Sinkala, Z. (August 2006). "Soliton/exciton transport in proteins". J. Theor. Biol. 241 (4): 919–27. doi:10.1016/j.jtbi.2006.01.028. PMID 16516929.
16. ^ Heimburg, T., Jackson, A.D. (12 July 2005). "On soliton propagation in biomembranes and nerves". Proc. Natl. Acad. Sci. U.S.A. 102 (2): 9790. Bibcode:2005PNAS..102.9790H. doi:10.1073/pnas.0503823102.
17. ^ Heimburg, T., Jackson, A.D. (2007). "On the action potential as a propagating density pulse and the role of anesthetics". Biophys. Rev. Lett. 2: 57–78. arXiv:physics/0610117. Bibcode:2006physics..10117H. doi:10.1142/S179304800700043X.
18. ^ Andersen, S.S.L., Jackson, A.D., Heimburg, T. (2009). "Towards a thermodynamic theory of nerve pulse propagation". Progr. Neurobiol. 88 (2): 104–113. doi:10.1016/j.pneurobio.2009.03.002.
19. ^ Kosevich, A. M.; Gann, V. V.; Zhukov, A. I.; Voronov, V. P. (1998). "Magnetic soliton motion in a nonuniform magnetic field". Journal of Experimental and Theoretical Physics 87 (2): 401–407. Bibcode:1998JETP...87..401K. doi:10.1134/1.558674.
20. ^ Gibbons, G. W. (1998). "Born–Infeld particles and Dirichlet p-branes". Nuclear Physics B 514 (3): 603–639. arXiv:hep-th/9709027. Bibcode:1998NuPhB.514..603G. doi:10.1016/S0550-3213(97)00795-5.
21. ^ Powell, Devin (20 May 2011). "Rogue Waves Captured". Science News. Retrieved 24 May 2011.
• Zabusky, N. J.; Kruskal, M. D. (1965). "Interaction of 'solitons' in a collisionless plasma and the recurrence of initial states". Phys. Rev. Lett. 15 (6): 240–243. Bibcode:1965PhRvL..15..240Z. doi:10.1103/PhysRevLett.15.240.
• Hasegawa, A.; Tappert, F. (1973). "Transmission of stationary nonlinear optical pulses in dispersive dielectric fibers. I. Anomalous dispersion". Appl. Phys. Lett. 23 (3): 142–144. Bibcode:1973ApPhL..23..142H. doi:10.1063/1.1654836.
• Emplit, P.; Hamaide, J. P.; Reynaud, F.; Froehly, C.; Barthelemy, A. (1987). "Picosecond steps and dark pulses through nonlinear single mode fibers". Optics Comm. 62 (6): 374–379. Bibcode:1987OptCo..62..374E. doi:10.1016/0030-4018(87)90003-4.
• Drazin, P. G.; Johnson, R. S. (1989). Solitons: an introduction (2nd ed.). Cambridge University Press. ISBN 0-521-33655-4.
• Dunajski, M. (2009). Solitons, Instantons and Twistors. Oxford University Press. ISBN 978-0-19-857063-9.
• Jaffe, A.; Taubes, C. H. (1980). Vortices and monopoles. Birkhauser. ISBN 0-8176-3025-2.
• Manton, N.; Sutcliffe, P. (2004). Topological solitons. Cambridge University Press. ISBN 0-521-83836-3.
• Mollenauer, Linn F.; Gordon, James P. (2006). Solitons in optical fibers. Elsevier Academic Press. ISBN 0-12-504190-X.
• Rajaraman, R. (1982). Solitons and instantons. North-Holland. ISBN 0-444-86229-3.
• Yang, Y. (2001). Solitons in field theory and nonlinear analysis. Springer-Verlag. ISBN 0-387-95242-X.
External links
Related to John Scott Russell
Hartree–Fock method
From Wikipedia, the free encyclopedia
In computational physics and chemistry, the Hartree–Fock (HF) method is a method of approximation for the determination of the wave function and the energy of a quantum many-body system in a stationary state.
The Hartree–Fock method often assumes that the exact, N-body wave function of the system can be approximated by a single Slater determinant (in the case where the particles are fermions) or by a single permanent (in the case of bosons) of N spin-orbitals. By invoking the variational method, one can derive a set of N-coupled equations for the N spin orbitals. A solution of these equations yields the Hartree–Fock wave function and energy of the system.
Especially in the older literature, the Hartree–Fock method is also called the self-consistent field method (SCF). In deriving what is now called the Hartree equation as an approximate solution of the Schrödinger equation, Hartree required the final field as computed from the charge distribution to be "self-consistent" with the assumed initial field. Thus, self-consistency was a requirement of the solution. The solutions to the non-linear Hartree–Fock equations also behave as if each particle is subjected to the mean field created by all other particles (see the Fock operator below) and hence the terminology continued. The equations are almost universally solved by means of an iterative method, although the fixed-point iteration algorithm does not always converge.[1] This solution scheme is not the only one possible and is not an essential feature of the Hartree–Fock method.
The Hartree–Fock method finds its typical application in the solution of the Schrödinger equation for atoms, molecules, nanostructures[2] and solids but it has also found widespread use in nuclear physics. (See Hartree–Fock–Bogoliubov method for a discussion of its application in nuclear structure theory). In atomic structure theory, calculations may be for a spectrum with many excited energy levels and consequently the Hartree–Fock method for atoms assumes the wave function is a single configuration state function with well-defined quantum numbers and that the energy level is not necessarily the ground state.
For both atoms and molecules, the Hartree–Fock solution is the central starting point for most methods that describe the many-electron system more accurately.
The rest of this article will focus on applications in electronic structure theory suitable for molecules with the atom as a special case. The discussion here is only for the Restricted Hartree–Fock method, where the atom or molecule is a closed-shell system with all orbitals (atomic or molecular) doubly occupied. Open-shell systems, where some of the electrons are not paired, can be dealt with by one of two Hartree–Fock methods: restricted open-shell Hartree–Fock (ROHF) or unrestricted Hartree–Fock (UHF).
Brief history
The origin of the Hartree–Fock method dates back to the end of the 1920s, soon after the discovery of the Schrödinger equation in 1926. In 1927 D. R. Hartree introduced a procedure, which he called the self-consistent field method, to calculate approximate wave functions and energies for atoms and ions. Hartree was guided by some earlier, semi-empirical methods of the early 1920s (by E. Fues, R. B. Lindsay, and himself) set in the old quantum theory of Bohr.
In the Bohr model of the atom, the energy of a state with principal quantum number n is given in atomic units as E = -1 / n^2. It was observed from atomic spectra that the energy levels of many-electron atoms are well described by applying a modified version of Bohr's formula. By introducing the quantum defect d as an empirical parameter, the energy levels of a generic atom were well approximated by the formula E = -1/(n+d)^2, in the sense that one could reproduce fairly well the transition levels observed in the X-ray region (for example, see the empirical discussion and derivation in Moseley's law). The existence of a non-zero quantum defect was attributed to electron-electron repulsion, which clearly does not exist in the isolated hydrogen atom. This repulsion resulted in partial screening of the bare nuclear charge. These early researchers later introduced other potentials containing additional empirical parameters with the hope of better reproducing the experimental data.
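The two formulas in this paragraph are simple enough to tabulate side by side; in the sketch below the defect value d = 0.4 is purely illustrative (real quantum defects are fitted per atom and per series):

```python
def bohr_energy(n, d=0.0):
    # Modified Bohr formula with quantum defect d, in the units used above:
    # E = -1 / (n + d)^2; d = 0 recovers the unmodified hydrogenic levels.
    return -1.0 / (n + d) ** 2

# d = 0.4 is a made-up illustrative value, not fitted to any real atom.
for n in range(1, 5):
    print(n, bohr_energy(n), bohr_energy(n, d=0.4))
```

A non-zero d lowers every level relative to the hydrogenic value, mimicking the incomplete screening described in the text.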
Hartree sought to do away with empirical parameters and solve the many-body time-independent Schrödinger equation from fundamental physical principles, i.e., ab initio. His first proposed method of solution became known as the Hartree method. However, many of Hartree's contemporaries did not understand the physical reasoning behind the Hartree method: it appeared to many people to contain empirical elements, and its connection to the solution of the many-body Schrödinger equation was unclear. However, in 1928 J. C. Slater and J. A. Gaunt independently showed that the Hartree method could be couched on a sounder theoretical basis by applying the variational principle to an ansatz (trial wave function) as a product of single-particle functions.
In 1930 Slater and V. A. Fock independently pointed out that the Hartree method did not respect the principle of antisymmetry of the wave function. The Hartree method used the Pauli exclusion principle in its older formulation, forbidding the presence of two electrons in the same quantum state. However, this was shown to be fundamentally incomplete in its neglect of quantum statistics.
It was then shown that a Slater determinant, a determinant of one-particle orbitals first used by Heisenberg and Dirac in 1926, trivially satisfies the antisymmetric property of the exact solution and hence is a suitable ansatz for applying the variational principle. The original Hartree method can then be viewed as an approximation to the Hartree–Fock method by neglecting exchange. Fock's original method relied heavily on group theory and was too abstract for contemporary physicists to understand and implement. In 1935 Hartree reformulated the method more suitably for the purposes of calculation.
The Hartree–Fock method, despite its physically more accurate picture, was little used until the advent of electronic computers in the 1950s due to the much greater computational demands over the early Hartree method and empirical models. Initially, both the Hartree method and the Hartree–Fock method were applied exclusively to atoms, where the spherical symmetry of the system allowed one to greatly simplify the problem. These approximate methods were (and are) often used together with the central field approximation, to impose that electrons in the same shell have the same radial part, and to restrict the variational solution to be a spin eigenfunction. Even so, solution by hand of the Hartree–Fock equations for a medium-sized atom was laborious; small molecules required computational resources far beyond what was available before 1950.
Hartree–Fock algorithm
The Hartree–Fock method is typically used to solve the time-independent Schrödinger equation for a multi-electron atom or molecule as described in the Born–Oppenheimer approximation. Since there are no known solutions for many-electron systems (hydrogenic atoms and the diatomic hydrogen cation being notable one-electron exceptions), the problem is solved numerically. Due to the nonlinearities introduced by the Hartree–Fock approximation, the equations are solved using a nonlinear method such as iteration, which gives rise to the name "self-consistent field method."
The Hartree–Fock method makes five major simplifications in order to deal with this task:
• The Born–Oppenheimer approximation is inherently assumed. The full molecular wave function is actually a function of the coordinates of each of the nuclei, in addition to those of the electrons.
• Typically, relativistic effects are completely neglected. The momentum operator is assumed to be completely non-relativistic.
• The variational solution is assumed to be a linear combination of a finite number of basis functions, which are usually (but not always) chosen to be orthogonal. The finite basis set is assumed to be approximately complete.
• Each energy eigenfunction is assumed to be describable by a single Slater determinant, an antisymmetrized product of one-electron wave functions (i.e., orbitals).
• The mean field approximation is implied. Effects arising from deviations from this assumption, known as electron correlation, are completely neglected for the electrons of opposite spin, but are taken into account for electrons of parallel spin.[3][4] (Electron correlation should not be confused with electron exchange, which is fully accounted for in the Hartree–Fock method.)[4]
Relaxation of the last two approximations give rise to many so-called post-Hartree–Fock methods.
[Figure: greatly simplified algorithmic flowchart illustrating the Hartree–Fock method]
Variational optimization of orbitals
The variational theorem states that for a time-independent Hamiltonian operator, any trial wave function will have an energy expectation value that is greater than or equal to the energy of the true ground state wave function corresponding to the given Hamiltonian. Because of this, the Hartree–Fock energy is an upper bound to the true ground state energy of a given molecule. In the context of the Hartree–Fock method, the best possible solution is at the Hartree–Fock limit; i.e., the limit of the Hartree–Fock energy as the basis set approaches completeness. (The other is the full-CI limit, where the last two approximations of the Hartree–Fock theory as described above are completely undone. It is only when both limits are attained that the exact solution, up to the Born–Oppenheimer approximation, is obtained.) The Hartree–Fock energy is the minimal energy for a single Slater determinant.
The starting point for the Hartree–Fock method is a set of approximate one-electron wave functions known as spin-orbitals. For an atomic orbital calculation, these are typically the orbitals for a hydrogenic atom (an atom with only one electron, but the appropriate nuclear charge). For a molecular orbital or crystalline calculation, the initial approximate one-electron wave functions are typically a linear combination of atomic orbitals (LCAO).
The orbitals above only account for the presence of other electrons in an average manner. In the Hartree–Fock method, the effect of other electrons is accounted for in a mean-field theory context. The orbitals are optimized by requiring them to minimize the energy of the respective Slater determinant. The resultant variational conditions on the orbitals lead to a new one-electron operator, the Fock operator. At the minimum, the occupied orbitals can be chosen, via a unitary transformation among themselves, to be eigensolutions of the Fock operator. The Fock operator is an effective one-electron Hamiltonian operator that is the sum of two terms. The first is a sum of kinetic energy operators for each electron, the internuclear repulsion energy, and a sum of nuclear-electronic Coulombic attraction terms. The second is a sum of Coulombic repulsion terms between electrons in a mean-field theory description: a net repulsion energy for each electron in the system, which is calculated by treating all of the other electrons within the molecule as a smooth distribution of negative charge. This is the major simplification inherent in the Hartree–Fock method, and is equivalent to the fifth simplification in the above list.
Since the Fock operator depends on the orbitals used to construct the corresponding Fock matrix, the eigenfunctions of the Fock operator are in turn new orbitals which can be used to construct a new Fock operator. In this way, the Hartree–Fock orbitals are optimized iteratively until the change in total electronic energy falls below a predefined threshold. In this way, a set of self-consistent one-electron orbitals are calculated. The Hartree–Fock electronic wave function is then the Slater determinant constructed out of these orbitals. Following the basic postulates of quantum mechanics, the Hartree–Fock wave function can then be used to compute any desired chemical or physical property within the framework of the Hartree–Fock method and the approximations employed.
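The iterative scheme just described can be sketched on a deliberately tiny toy model: a 2×2 symmetric "Fock" matrix whose diagonal depends on the current density, diagonalized and rebuilt until the lowest eigenvalue stops changing. The matrix elements and the coupling strength g are invented for illustration; a real code would construct the Fock matrix from one- and two-electron integrals.

```python
import math

# Schematic SCF loop on a made-up 2x2 model (NOT a real Hartree-Fock code).
H = [[-1.0, -0.2], [-0.2, -0.5]]   # fixed one-electron ("core") part, invented
g = 0.3                            # strength of the density-dependent term, invented

def fock(p):
    # "Fock" matrix: core part plus a term depending on the current density p
    return [[H[0][0] + g * p[0], H[0][1]],
            [H[1][0], H[1][1] + g * p[1]]]

def lowest_eig(F):
    # lowest eigenvalue and normalized eigenvector of a symmetric 2x2 matrix
    a, b, c = F[0][0], F[0][1], F[1][1]
    lam = 0.5 * (a + c) - math.sqrt(0.25 * (a - c) ** 2 + b * b)
    v = (b, lam - a) if abs(b) > 1e-15 else (1.0, 0.0)
    norm = math.hypot(*v)
    return lam, (v[0] / norm, v[1] / norm)

p = [1.0, 0.0]                     # initial guess for the (diagonal) density
e_old = float("inf")
for it in range(200):
    e, v = lowest_eig(fock(p))
    p = [v[0] ** 2, v[1] ** 2]     # new density from the occupied orbital
    if abs(e - e_old) < 1e-10:     # self-consistency reached
        break
    e_old = e

print(it, e)  # converges in a handful of iterations for these numbers
```

The structure — build the operator from the density, diagonalize, rebuild, repeat until the energy is stationary — is the same self-consistency loop the text describes, just stripped to a scalar-density caricature.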
Mathematical formulation
The Fock operator
Main article: Fock matrix
Because the electron-electron repulsion term of the electronic molecular Hamiltonian involves the coordinates of two different electrons, it is necessary to reformulate it in an approximate way. Under this approximation (outlined under Hartree–Fock algorithm), all of the terms of the exact Hamiltonian except the nuclear-nuclear repulsion term are re-expressed as the sum of one-electron operators outlined below, for closed-shell atoms or molecules (with two electrons in each spatial orbital).[5] The "(1)" following each operator symbol simply indicates that the operator is 1-electron in nature.
$$\hat F[\{\phi_j\}](1) = \hat H^{\text{core}}(1)+\sum_{j=1}^{N/2}\left[2\hat J_j(1)-\hat K_j(1)\right]$$
is the one-electron Fock operator generated by the orbitals $\phi_j$, and
$$\hat H^{\text{core}}(1)=-\frac{1}{2}\nabla^2_1 - \sum_{\alpha} \frac{Z_\alpha}{r_{1\alpha}}$$
is the one-electron core Hamiltonian. Also, $\hat J_j(1)$ is the Coulomb operator, defining the electron-electron repulsion energy due to each of the two electrons in the jth orbital.[5] Finally, $\hat K_j(1)$ is the exchange operator, defining the electron exchange energy due to the antisymmetry of the total N-electron wave function.[5] This so-called "exchange energy" operator $\hat K$ is simply an artifact of the Slater determinant. Finding the Hartree–Fock one-electron wave functions is now equivalent to solving the eigenfunction equation
$$\hat F(1)\,\phi_i(1)=\epsilon_i\,\phi_i(1),$$
where the $\phi_i(1)$ are a set of one-electron wave functions, called the Hartree–Fock molecular orbitals.
Linear combination of atomic orbitals
Main articles: basis set (chemistry) and basis set
Typically, in modern Hartree–Fock calculations, the one-electron wave functions are approximated by a linear combination of atomic orbitals. These atomic orbitals are called Slater-type orbitals. Furthermore, it is very common for the "atomic orbitals" in use to actually be composed of a linear combination of one or more Gaussian-type orbitals, rather than Slater-type orbitals, in the interests of saving large amounts of computation time.
Various basis sets are used in practice, most of which are composed of Gaussian functions. In some applications, an orthogonalization method such as the Gram–Schmidt process is performed in order to produce a set of orthogonal basis functions. This can in principle save computational time when the computer is solving the Roothaan–Hall equations by converting the overlap matrix effectively to an identity matrix. However, in most modern computer programs for molecular Hartree–Fock calculations this procedure is not followed due to the high numerical cost of orthogonalization and the advent of more efficient, often sparse, algorithms for solving the generalized eigenvalue problem, of which the Roothaan–Hall equations are an example.
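The orthogonalization idea can be sketched on a toy 2×2 problem: the generalized eigenvalue problem F C = S C ε is reduced to an ordinary symmetric one via the symmetric (Löwdin) orthogonalizer X = S^(-1/2). All numbers below are invented, and the special overlap form S = [[1, s], [s, 1]] is chosen only because its inverse square root can be written down by hand; only the structure of the transformation is the point.

```python
import math

# Reduce F C = S C eps to an ordinary eigenproblem via X = S^(-1/2).
s = 0.4                            # invented overlap between the two basis functions
F = [[-1.0, -0.3], [-0.3, -0.6]]   # invented symmetric "Fock" matrix

# For S = [[1, s], [s, 1]] the eigenvectors are (1, 1)/sqrt(2) and (1, -1)/sqrt(2)
# with eigenvalues 1 + s and 1 - s, so S^(-1/2) has a closed form:
a = 0.5 * (1 / math.sqrt(1 + s) + 1 / math.sqrt(1 - s))
b = 0.5 * (1 / math.sqrt(1 + s) - 1 / math.sqrt(1 - s))
X = [[a, b], [b, a]]               # X = S^(-1/2)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Fp = matmul(matmul(X, F), X)       # F' = X F X: ordinary symmetric problem

# lowest eigenpair of the symmetric 2x2 matrix F'
p, q, r = Fp[0][0], Fp[0][1], Fp[1][1]
eps = 0.5 * (p + r) - math.sqrt(0.25 * (p - r) ** 2 + q * q)
v = [q, eps - p]
n = math.hypot(*v)
Cp = [v[0] / n, v[1] / n]

# back-transform: C = X C' solves the original generalized problem
C = [X[0][0] * Cp[0] + X[0][1] * Cp[1], X[1][0] * Cp[0] + X[1][1] * Cp[1]]
S = [[1.0, s], [s, 1.0]]
lhs = [F[i][0] * C[0] + F[i][1] * C[1] for i in range(2)]
rhs = [eps * (S[i][0] * C[0] + S[i][1] * C[1]) for i in range(2)]
print(max(abs(lhs[i] - rhs[i]) for i in range(2)))  # ~ 0: F C = eps S C holds
```

In an n-dimensional Roothaan–Hall calculation X would be obtained by diagonalizing S numerically, but the transform-diagonalize-back-transform pattern is the same.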
Numerical stability
Numerical stability can be a problem with this procedure and there are various ways of combating this instability. One of the most basic and generally applicable is called F-mixing or damping. With F-mixing, once a single electron wave function is calculated it is not used directly. Instead, some combination of that calculated wave function and the previous wave functions for that electron is used—the most common being a simple linear combination of the calculated and immediately preceding wave function. A clever dodge, employed by Hartree, for atomic calculations was to increase the nuclear charge, thus pulling all the electrons closer together. As the system stabilised, this was gradually reduced to the correct charge. In molecular calculations a similar approach is sometimes used by first calculating the wave function for a positive ion and then to use these orbitals as the starting point for the neutral molecule. Modern molecular Hartree–Fock computer programs use a variety of methods to ensure convergence of the Roothaan–Hall equations.
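The damping idea can be illustrated on a scalar fixed-point problem. This is only a schematic stand-in: g below is an arbitrary function chosen so that |g'| > 1 at its fixed point (nothing from an actual SCF code, where the mixed object would be the Fock matrix or density, not a scalar).

```python
import math

# Fixed-point iteration x -> g(x) that fails on its own but converges when
# each new iterate is mixed with the previous one ("damping" / "F-mixing").
def g(x):
    return 5.0 * math.exp(-x)   # fixed point near x = 1.33, where |g'(x)| > 1

def iterate(alpha, x=1.0, steps=200):
    for _ in range(steps):
        x = (1 - alpha) * x + alpha * g(x)   # alpha = 1 is the undamped iteration
    return x

undamped = iterate(1.0)        # settles into an oscillation, not the fixed point
damped = iterate(0.4)          # converges to the fixed point
print(undamped, damped, abs(damped - g(damped)))
```

The undamped iteration bounces between two values, while a modest mixing weight makes the same map a contraction, which is exactly the role damping plays in stabilizing SCF cycles.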
Weaknesses, extensions, and alternatives
Of the five simplifications outlined in the section "Hartree–Fock algorithm", the fifth is typically the most important. Neglecting electron correlation can lead to large deviations from experimental results. A number of approaches to this weakness, collectively called post-Hartree–Fock methods, have been devised to include electron correlation to the multi-electron wave function. One of these approaches, Møller–Plesset perturbation theory, treats correlation as a perturbation of the Fock operator. Others expand the true multi-electron wave function in terms of a linear combination of Slater determinants—such as multi-configurational self-consistent field, configuration interaction, quadratic configuration interaction, and complete active space SCF (CASSCF). Still others (such as variational quantum Monte Carlo) modify the Hartree–Fock wave function by multiplying it by a correlation function ("Jastrow" factor), a term which is explicitly a function of multiple electrons that cannot be decomposed into independent single-particle functions.
An alternative to Hartree–Fock calculations used in some cases is density functional theory, which treats both exchange and correlation energies, albeit approximately. Indeed, it is common to use calculations that are a hybrid of the two methods—the popular B3LYP scheme is one such hybrid functional method. Another option is to use modern valence bond methods.
Software packages
For a list of software packages known to handle Hartree–Fock calculations, particularly for molecules and solids, see the list of quantum chemistry and solid state physics software.
See also
1. ^ Froese Fischer, Charlotte (1987). "General Hartree-Fock program". Computer Physics Communications 43 (3): 355–365. Bibcode:1987CoPhC..43..355F. doi:10.1016/0010-4655(87)90053-1.
2. ^ Abdulsattar, Mudar A. (2012). "SiGe superlattice nanocrystal infrared and Raman spectra: A density functional theory study". J. Appl. Phys. 111 (4): 044306. Bibcode:2012JAP...111d4306A. doi:10.1063/1.3686610.
3. ^ Hinchliffe, Alan (2000). Modelling Molecular Structures (2nd ed.). Baffins Lane, Chichester, West Sussex PO19 1UD, England: John Wiley & Sons Ltd. p. 186. ISBN 0-471-48993-X.
4. ^ a b Szabo, A.; Ostlund, N. S. (1996). Modern Quantum Chemistry. Mineola, New York: Dover Publishing. ISBN 0-486-69186-1.
5. ^ a b c Levine, Ira N. (1991). Quantum Chemistry (4th ed.). Englewood Cliffs, New Jersey: Prentice Hall. p. 403. ISBN 0-205-12770-3.
• Levine, Ira N. (1991). Quantum Chemistry (4th ed.). Englewood Cliffs, New Jersey: Prentice Hall. pp. 455–544. ISBN 0-205-12770-3.
0807a9f7fc70011c | Take the 2-minute tour ×
Sometimes (often?) a structure depending on several parameters turns out to be symmetric w.r.t. interchanging two of the parameters, even though the definition gives a priori no clue of that symmetry.
As an example, I'm thinking of the Littlewood–Richardson coefficients: If defined by the skew Schur function $s_{\lambda/\mu}=\sum_\nu c^\lambda_{\mu\nu}s_\nu$, where the sum is over all partitions $\nu$ such that $|\mu|+|\nu|=|\lambda|$ and $s_{\lambda/\mu}$ itself is defined e.g. by $s_{\lambda/\mu}=\det(h_{\lambda_i-\mu_j-i+j})_{1\le i,j\le n}$, it is not at all straightforward to see from that definition that $c^\lambda_{\mu\nu} =c^\lambda_{\nu\mu}$.
Granted, this way of looking at it may seem a bit artificial, since I guess that in many such cases it is possible to come up with a "higher level" definition that shows the symmetry right away (e.g. in the above example, the usual (?) definition of $c_{\lambda\mu}^\nu$ via $s_\lambda s_\mu =\sum c_{\lambda\mu}^\nu s_\nu$), though showing the equivalence of both definitions may be more or less involved. So I am aware that it might just be a matter of "choosing the right definition". Therefore, maybe it would be better to think of the question as asking especially for cases where, historically, the symmetry of a certain structure was only established later, after the structure had first been defined or obtained in a different way.
Another example that would fit here: the Perfect graph theorem, featuring a 'conceptual' symmetry between a graph and its complement.
What are other examples of "unexpected" or at least surprising symmetries?
(NB. The 'combinatorics' tag seemed the most obvious to me, but I won't be surprised if there are upcoming examples far away from combinatorics.)
Quadratic reciprocity. – Terry Tao Dec 13 '13 at 22:55
The relation between $\zeta(1-x)$ and $\zeta(x)$ for the Riemann $\zeta$ function. – Lev Borisov Dec 14 '13 at 2:26
Number of partitions of $n$ into no more than $k$ terms that are each no larger than $l$. The symmetry between $l$ and $k$ might not be immediately obvious to novices. – Yoav Kallus Dec 14 '13 at 2:46
The Peano definition of addition, even. – Joe Z. Dec 14 '13 at 2:56
I saw the title and my first thought was "Littlewood-Richardson coefficients". :) – darij grinberg Dec 14 '13 at 20:55
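Aside: the box-partition symmetry mentioned in one of the comments above — partitions of $n$ into at most $k$ parts each at most $l$, symmetric in $k$ and $l$ by conjugating the Young diagram — is easy to verify by machine. A quick sketch (the search bounds 12/6/6 are arbitrary):

```python
from functools import lru_cache

# Count partitions of n into at most k parts, each part at most l.
@lru_cache(maxsize=None)
def p(n, k, l):
    if n == 0:
        return 1
    if k == 0 or l == 0 or n < 0:
        return 0
    # split on whether the largest part is exactly l, or at most l - 1
    return p(n - l, k - 1, l) + p(n, k, l - 1)

# check the k <-> l symmetry over a small range
assert all(p(n, k, l) == p(n, l, k)
           for n in range(12) for k in range(6) for l in range(6))

print(p(6, 3, 4))  # → 5: partitions of 6 fitting in a 3-by-4 box
```

These counts are the coefficients of the Gaussian binomial coefficient $\binom{k+l}{k}_q$, which is where the symmetry becomes manifest.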
33 Answers
If $a$ and $b$ are positive integers, and you make the definition $$ a \cdot b = \underbrace{a + \cdots + a}_{b \text{ times} }$$ then it's a slightly surprising fact that $a \cdot b$ is actually equal to $b \cdot a$.
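For the skeptical, this can be brute-force checked with multiplication defined literally as iterated addition (the range bound is arbitrary):

```python
# Multiplication defined as repeated addition, using only +.
def times(a, b):
    # a . b = a + a + ... + a  (b times)
    total = 0
    for _ in range(b):
        total += a
    return total

# commutativity is not built into the definition, yet it holds
assert all(times(a, b) == times(b, a) for a in range(30) for b in range(30))
print(times(7, 5), times(5, 7))  # → 35 35
```

Of course a finite check is not a proof; the actual proof is by double induction from the definition (or by pairing up the cells of an a-by-b grid).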
Indeed, this fails in general when $a,b$ are ordinals. – Terry Tao Dec 15 '13 at 4:51
It's even more surprising if you start with the inductive definitions of plus and times. The proof that $ab=ba$ comes as Proposition 72 in the first development of this theory, by Grassmann in 1861. – John Stillwell Jan 13 at 9:12
A nice example from classical mechanics is this: there is a hidden $SO(4)$ symmetry in the elliptical orbits of a particle in an inverse square potential, i.e. the Kepler problem.
The system has an obvious $SO(3)$ symmetry because the inverse square law is invariant under rotations. But there's no a priori clue that an $SO(4)$ symmetry exists in this system.
You can read about it here: http://math.ucr.edu/home/baez/classical/runge_pro.pdf
This carries over to the quantum mechanical case when you solve the Schrödinger equation for an inverse square potential.
You can read about that here: http://hep.uchicago.edu/~rosner/p342/projs/weinberg.pdf
The result is that the hidden $SO(4)$ symmetry explains the "coincidence" that many hydrogen atom states have the same energy.
1. I think that if you put yourself back in the position of someone discovering this for the first time, the equality (under suitable hypotheses) $${\partial^2f\over\partial x\partial y}={\partial^2 f\over\partial y\partial x}\quad (1)$$ should count.
2. Here's a surprising application of that surprising equality. Suppose you're a profit-maximizing competitive firm, hiring both labor ($L$) (at a wage rate of $W$) and capital ($K$) (at a rental rate of $R$). Then an increase in $W$ will, in general, lead you to reduce your output and so employ less capital, but at the same time lead you to substitute capital for labor and so employ more capital. On balance, the derivative $dK/dW$ could be either positive or negative. Likewise for the derivative $dL/dR$. It does not seem to me to be at all intuitively obvious that these derivatives even have the same sign, much less that they are equal. But if one takes $f$ in (1) to be profit as a function of $x$ (labor) and $y$ (capital) then one discovers that in fact
$${dK\over dW}={dL\over dR}$$
(Of course this looks more symmetric if you write $X_1$ and $X_2$ for labor and capital, and $P_1$ and $P_2$ for the wage rate and the rental rate.)
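A numeric illustration (my own setup with an assumed Cobb-Douglas technology; none of this is from the answer): solve the firm's first-order conditions by fixed-point iteration and compare the two cross-derivatives by central finite differences.

```python
A, B = 0.3, 0.4   # output elasticities; A + B < 1 keeps the problem concave

def optimal_inputs(W, R, iters=200):
    # solve the first-order conditions A*L^(A-1)*K^B = W, B*L^A*K^(B-1) = R
    # by fixed-point iteration (a contraction since A + B < 1)
    L = K = 1.0
    for _ in range(iters):
        L = (A * K**B / W) ** (1.0 / (1.0 - A))
        K = (B * L**A / R) ** (1.0 / (1.0 - B))
    return L, K

def cross_derivatives(W, R, h=1e-6):
    dK_dW = (optimal_inputs(W + h, R)[1] - optimal_inputs(W - h, R)[1]) / (2 * h)
    dL_dR = (optimal_inputs(W, R + h)[0] - optimal_inputs(W, R - h)[0]) / (2 * h)
    return dK_dW, dL_dR

dK_dW, dL_dR = cross_derivatives(1.0, 1.5)
assert abs(dK_dW - dL_dR) < 1e-6 + 1e-4 * abs(dK_dW)
```

Both derivatives come out equal up to discretization error, exactly as the equality of mixed partials predicts.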
Higher homotopy groups $\pi_n(X)$ are abelian. This is quite surprising if you see the definition for the first time, having previously encountered the classical fundamental group, which is not abelian in general.
In fact, when they were introduced, higher homotopy groups were meant to generalize the fundamental group, in contrast to the abelian homology groups; but once it was recognized that they too are abelian, they no longer seemed such a good generalization.
Rolling one surface on another without slipping binds the velocity of the rolling surface and its angular velocity, giving a rank 2 subbundle in the tangent bundle of the 5-dimensional space of tangential positionings of the 2 surfaces in space. This subbundle, when you roll one sphere on another, has an 8 dimensional symmetry group, unless one sphere has exactly one third the radius of the other sphere, in which case the subbundle is preserved by a 14 dimensional group of diffeomorphisms of the 5-dimensional manifold: the split real form of the simple Lie group $G_2$.
This subbundle is my favorite example of a non-integrable distribution (if the surfaces are "generic", at least) - you can physically see that rolling a sphere in an "infinitesimal square" on a plane makes the sphere rotate. – Peter Samuelson Dec 14 '13 at 15:29
Consider the Desargues configuration. It consists of (1) two triangles, say $ABC$ and $A'B'C'$ such that the lines $AA'$, $BB'$, and $CC'$ all meet at a point $P$, and (2) the three points of intersection of corresponding sides $X=(BC)\cap(B'C')$, $Y=(AC)\cap(A'C')$, and $Z=(AB)\cap(A'B')$. Desargues's theorem says that then $X$, $Y$, and $Z$ are collinear. The Desargues configuration consists of the 10 points mentioned above ($A,B,C,A',B',C',P,X,Y,Z$) and the 10 lines mentioned (the three sides of both triangles, the three lines through $P$, and the line $XYZ$). The surprising (to me) symmetry is an action of the cyclic group of order 5. In fact, the graph whose vertices are the 10 points of the Desargues configuration and whose edges join any two points that are not together on any of the configuration's 10 lines is the Petersen graph, which is usually drawn in a way that makes the cyclic 5-fold symmetry visible.
Have used Desargues for easily a hundred times in my schooldays and never realized this. I actually wasn't aware that the Petersen graph had any deeper meaning than that of a counterexample to some conjectures of days gone by. Nice!! – darij grinberg Dec 14 '13 at 21:01
Hermite's reciprocity: as representations of $GL_2$, we have $$ S^k(S^l\mathbb{C}^2)\simeq S^l(S^k\mathbb{C}^2). $$
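A cheap consistency check (mine, and it only verifies that the dimensions of the two sides agree, not the $GL_2$-equivariant isomorphism itself): use $\dim S^k(V) = \binom{k + \dim V - 1}{k}$ and $\dim S^l\mathbb{C}^2 = l+1$.

```python
from math import comb

def dim_sym_power(k, dim_V):
    # dimension of the k-th symmetric power of a dim_V-dimensional space
    return comb(k + dim_V - 1, k)

# both sides of Hermite reciprocity have dimension binom(k + l, k)
for k in range(8):
    for l in range(8):
        assert dim_sym_power(k, l + 1) == dim_sym_power(l, k + 1)
```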
The joint distribution of IID normal random variables is spherically symmetric.
Although invariance under permutations of the coordinates is obvious for any IID variables, spherical symmetry is rare. In fact, this characterizes the normal distribution.
In fact, the "correct" definition of Littlewood-Richardson coefficients shows a surprising $S_3$-symmetry among all the indices $\lambda,\mu,\nu$. See http://arxiv.org/abs/0704.0817.
A further example related to symmetric functions is the symmetry between the area and bounce statistics of Dyck paths. See for instance Chapter 3 of http://www.math.upenn.edu/~jhaglund/books/qtcat.pdf. No combinatorial proof of symmetry is known.
There are many enumeration problems with "hidden symmetry." For instance, what is the probability that 1 and 2 are in the same cycle of a (uniform) random permutation of $1,2,\dots,n$? More interesting, suppose that I shuffle an ordinary deck of 26 red cards and 26 black cards. I turn the cards face up one at a time. At any point before the last card is dealt, you can guess that the next card is red. What strategy maximizes the probability of guessing correctly? The surprising answer is that all strategies have a probability of 1/2 of success! There is a very elegant way to see this.
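The card-guessing claim can be confirmed exhaustively for a small deck (my code; the three strategies below are arbitrary examples): every deterministic strategy wins with probability exactly 1/2.

```python
from itertools import permutations
from fractions import Fraction

def stop_index(strategy, order):
    # deal cards one at a time; the strategy sees the prefix dealt so far
    # and may stop, guessing that the NEXT card is red; if it never stops,
    # it is forced to guess on the last card
    for i in range(len(order) - 1):
        if strategy(order[:i]):
            return i
    return len(order) - 1

def success_probability(strategy, reds=3, blacks=3):
    orders = set(permutations('R' * reds + 'B' * blacks))  # all equally likely
    wins = sum(order[stop_index(strategy, order)] == 'R' for order in orders)
    return Fraction(wins, len(orders))

guess_now   = lambda seen: True                               # guess the first card
after_black = lambda seen: bool(seen) and seen[-1] == 'B'     # guess right after a black
majority    = lambda seen: seen.count('B') > seen.count('R')  # wait for a red surplus

assert all(success_probability(s) == Fraction(1, 2)
           for s in (guess_now, after_black, majority))
```

The elegant proof alluded to above is visible in Sam Hopkins's comment below: guessing the next top card is equivalent to guessing the bottom card, whose color is uniform no matter what you do.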
@StevenLandsburg: imagine the dealer turns over the bottom card of the deck when you guess, instead of the top one. Clearly this situation is symmetric to the one described above, but also clearly every strategy gives 50/50 odds as the outcome is determined before the game even starts. – Sam Hopkins Dec 14 '13 at 1:00
Can you fix the first link to point to the abstract rather than directly to the PDF? Thank you! – Harry Altman Dec 14 '13 at 18:11
From school days... Take positive reals $x,y,z,w$. The following statement is actually symmetric in $x,y,z,w$:
"there exists an equilateral triangle of side length $w$, and a point whose distances from the three vertices are $x,y,z$"
A quick proof: Let $ABC$ be equilateral and $P$ arbitrary. Construct $BPQ$ equilateral. Let $AB=AC=BC=w$, $AP=x$, $BP=y$ and $CP=z$. Then $BP=PQ=BQ=y$ by construction, $CP=z$ and $CB=w$ obviously, so it remains to check that $CQ=x$. Now note that triangle $CBQ$ is the $60^\circ$ rotation of $ABP$ around $B$.
A pedestrian definition of the rank of a matrix, as the maximum number of linearly independent columns, equals the maximum number of linearly independent rows.
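A brute-force check (my code): count the pivots of exact Gaussian elimination on a matrix and on its transpose, and confirm the two counts agree on random small integer matrices.

```python
import random
from fractions import Fraction

def pivot_rank(M):
    # number of pivots found by exact Gauss-Jordan elimination
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                factor = M[i][col] / M[r][col]
                M[i] = [x - factor * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

def transpose(M):
    return [list(col) for col in zip(*M)]

random.seed(0)
for _ in range(50):
    M = [[random.randint(-2, 2) for _ in range(4)] for _ in range(3)]
    assert pivot_rank(M) == pivot_rank(transpose(M))
```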
The combinatorial definition of the Schur functions is $$ s_\lambda(x) = \sum_{T \in SSYT(\lambda)} x^{cont(T)} $$ where $SSYT(\lambda)$ is the set of semi-standard Young tableaux of shape $\lambda$ and $x^{cont(T)}$ is the product over all $i$ of $x_i^{\# i\text{'s in }T}$. This is not manifestly a symmetric function. The Bender-Knuth involution proves that $s_\lambda(x)$ is invariant after swapping $x_i$ with $x_{i+1}$, and thus $s_\lambda(x)$ is, indeed, symmetric.
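A tiny brute-force confirmation (my code) for the single shape $\lambda = (2,1)$ in three variables: enumerate the semistandard fillings directly and check that the resulting polynomial is invariant under all permutations of the variables, not just adjacent swaps.

```python
from itertools import product, permutations
from collections import Counter

def schur_21_coeffs(n_vars=3):
    # cells of shape (2,1): entries a, b in the first row, c directly below a
    coeffs = Counter()
    for a, b, c in product(range(1, n_vars + 1), repeat=3):
        if a <= b and a < c:   # rows weakly increase, columns strictly increase
            content = tuple(Counter((a, b, c)).get(i, 0)
                            for i in range(1, n_vars + 1))
            coeffs[content] += 1
    return coeffs

coeffs = schur_21_coeffs()
assert sum(coeffs.values()) == 8        # the 8 SSYT of shape (2,1), entries <= 3
for sigma in permutations(range(3)):
    permuted = Counter({tuple(e[i] for i in sigma): v for e, v in coeffs.items()})
    assert permuted == coeffs           # full S_3 symmetry of the polynomial
```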
And more startlingly (or at least far less obviously), the Stanley symmetric functions and their generalizations. – darij grinberg Jan 22 at 17:43
The outer automorphism of $S_6$.
This is a rather specialized example, but dear to my heart.
Consider the set of "Richardson subvarieties" of the flag manifold $GL_n/B$, intersections of Schubert and opposite Schubert varieties. The only part of the Weyl group that preserves this set is $\{1,w_0\}$ where the $w_0$ exchanges Schubert and opposite Schubert varieties.
Now project these varieties to a $k$-Grassmannian, obtaining "positroid varieties". This includes the Richardson varieties in the Grassmannian, and many new varieties.
Now the part of the Weyl group that preserves this collection is the dihedral group $D_n$! The symmetry has gotten bigger by a factor of $n$.
I always found $\mathrm{Tor}_R\left(M,N\right) \cong \mathrm{Tor}_R\left(N,M\right)$ for a commutative ring $R$ and two $R$-modules $M$ and $N$ to be mysterious. Then again I have no idea about homology and thus wouldn't be surprised if this is a triviality from an appropriate viewpoint.
Volker Strehl's generalized cyclotomic identity (Corollary 6 in Volker Strehl, Cycle counting for isomorphism types of endofunctions) states that $\prod\limits_{k\geq 1} \left(\dfrac{1}{1-az^k}\right)^{M_k\left(b\right)} = \prod\limits_{k\geq 1}\left(\dfrac{1}{1-bz^k}\right)^{M_k\left(a\right)}$ in the formal power series ring $\mathbb Q\left[\left[z,a,b\right]\right]$, where $M_k\left(t\right)$ denotes the $k$-th necklace polynomial $\dfrac{1}{k}\sum\limits_{d\mid k} \mu\left(d\right) t^{k/d}$. I recall this being not particularly difficult, but quite useful.
Every nontrivial commutativity of some family of operators probably qualifies as an unexpected symmetry. Here are three examples:
1. Consider the group ring $\mathbb Z\left[S_n\right]$ of the symmetric group $S_n$. For every $i\in \left\{1,2,...,n\right\}$, define an element $Y_i \in \mathbb Z\left[S_n\right]$ by $Y_i = \left(1,i\right) + \left(2,i\right) + ... + \left(i-1,i\right)$ (a sum of $i-1$ transpositions). Then, $Y_i Y_j = Y_j Y_i$ for all $i$ and $j$ in $ \left\{1,2,...,n\right\}$. This is a simple exercise, and the $Y_i$ are called the Young-Jucys-Murphy elements.
2. Consider the group ring $\mathbb Z\left[S_n\right]$ of the symmetric group $S_n$. For every $i\in \left\{0,1,...,n\right\}$, define an element $\mathrm{Sch}_i \in \mathbb Z\left[S_n\right]$ as the sum of all permutations $\sigma \in S_n$ satisfying $\sigma\left(1\right) < \sigma\left(2\right) < ... < \sigma\left(i\right)$. (Note that $\mathrm{Sch}_0 = \mathrm{Sch}_1$ when $n\geq 1$.) Then, $\mathrm{Sch}_i \mathrm{Sch}_j = \mathrm{Sch}_j \mathrm{Sch}_i$ for all $i$ and $j$ in $ \left\{0,1,...,n\right\}$. In fact, $\mathrm{Sch}_i \mathrm{Sch}_j = \sum\limits_{k=0}^{\min\left\{n,i+j-n\right\}} \dbinom{n-j}{i-k} \dbinom{n-i}{j-k} \left(n+k-i-j\right)! \mathrm{Sch}_k$, which makes the symmetry maybe not that surprising (no similar equalities hold in cases 1 and 3!). See Manfred Schocker, Idempotents for derangement numbers, Discrete Mathematics, vol. 269 (2003), pp. 239-248 for a proof.
3. Consider the group ring $\mathbb Z\left[S_n\right]$ of the symmetric group $S_n$. For every $i\in \left\{1,2,...,n\right\}$, define an element $\mathrm{RSW}_i \in \mathbb Z\left[S_n\right]$ as
$\sum\limits_{1\leq u_1 < u_2 < ... < u_i\leq n} \sum\limits_{\substack{\sigma\in S_n, \\ \sigma\left(u_1\right) < \sigma\left(u_2\right) < ... < \sigma\left(u_i\right)}} \sigma$.
Then, $\mathrm{RSW}_i \mathrm{RSW}_j = \mathrm{RSW}_j \mathrm{RSW}_i$ for all $i$ and $j$ in $ \left\{1,2,...,n\right\}$. This is Theorem 1.1 in Victor Reiner, Franco Saliola, Volkmar Welker, Spectra of Symmetrized Shuffling Operators, arXiv:1102.2460v2, and a nice proof remains to be found.
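Case 1 is easy to verify directly for small $n$ (my code): represent elements of $\mathbb Z[S_4]$ as coefficient dictionaries and check that $Y_2, Y_3, Y_4$ pairwise commute.

```python
from collections import Counter

N = 4

def transposition(i, j):
    # the permutation of {0,...,N-1} swapping i and j, as a tuple
    p = list(range(N))
    p[i], p[j] = p[j], p[i]
    return tuple(p)

def multiply(x, y):
    # convolution product in Z[S_N]; permutations compose as (p*q)(k) = p(q(k))
    result = Counter()
    for p, cp in x.items():
        for q, cq in y.items():
            result[tuple(p[q[k]] for k in range(N))] += cp * cq
    return result

def Y(i):
    # Y_{i+1} in the 1-indexed notation of the answer: sum of (j, i) for j < i
    return Counter({transposition(j, i): 1 for j in range(i)})

Ys = [Y(i) for i in range(1, N)]   # Y_2, Y_3, Y_4
for a in Ys:
    for b in Ys:
        assert multiply(a, b) == multiply(b, a)
```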
The Tor symmetry is basically just that $M \otimes N \cong N \otimes M$, and you take the derived functors of both sides. Generalizing, any and all nice properties of (co)homology groups would seem to be mysterious symmetries if you consider the definition to be messing around with projective or injective modules, and not something more intrinsic like derived functors. – Ryan Reich Dec 15 '13 at 5:14
Morley's trisector theorem allows you to build a triangle which is maximally symmetric out of one which has no symmetry at all.
Let $G$ be a finite group with order $n$. For each $d$ dividing $n$, the number of subgroups of $G$ of order $d$ equals the number of subgroups of order $n/d$ if $G$ is abelian. More broadly, the lattice of subgroups of a finite abelian group looks the same if you flip it around by 180 degrees.
This is not at all obvious at the level at which the statement can first be understood, essentially because there is no natural way to construct subgroups of index $d$ from subgroups of order $d$ in a general finite abelian group with order divisible by $d$. It is not clear at a beginning level how the commutativity of the group leads to such conclusions.
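A brute-force illustration (my code) on the abelian group $G = \mathbb{Z}/2 \times \mathbb{Z}/4$: enumerate all subgroups by checking closure of subsets, and compare the counts in orders $d$ and $|G|/d$.

```python
from itertools import product, combinations
from collections import Counter

G = list(product(range(2), range(4)))   # Z/2 x Z/4, |G| = 8

def add(x, y):
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 4)

def is_subgroup(S):
    # a nonempty finite subset closed under the operation is a subgroup
    return all(add(x, y) in S for x in S for y in S)

subgroup_orders = Counter()
for r in range(1, len(G) + 1):
    for subset in combinations(G, r):
        S = set(subset)
        if (0, 0) in S and is_subgroup(S):
            subgroup_orders[r] += 1

for d in (1, 2, 4, 8):
    assert subgroup_orders[d] == subgroup_orders[8 // d]
```

Here the counts come out as 1, 3, 3, 1 in orders 1, 2, 4, 8, a palindrome as the duality predicts.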
A couple very disparate answers that spring to mind (fortunately, this is community wiki, and actual experts should feel very free to improve my exposition of either):
The negative gradient flow for the Chern-Simons functional on a 3-manifold $M$ naturally satisfies a four-dimensional symmetry. Namely, if one has a principal $G$-bundle on $M$ and some connection $A$ on this $G$-bundle (which I'll carelessly think of as a $\mathfrak{g}$-valued $1$-form on $M$), the Chern-Simons functional $CS(A) = \int_M \Big( dA + \frac{2}{3} A \wedge A \Big) \wedge A$ is a perfectly well-defined function on the space of connections, and one can attempt to perform the negative gradient flow with respect to a natural metric on this space of connections (this being a very natural thing to do from the point of view of Morse theory, for example). If you want, you can interpret the solution to this flow as a connection on the bundle pulled back to $M \times \mathbb{R}$, and while this connection clearly transforms nicely under $Diff(M)$, there's no particular reason to think it's a well-behaved object under the diffeomorphism group of the four-manifold $M \times \mathbb{R}$. However, this negative gradient flow equation turns out to be exactly the anti-self dual equation $F^+ = 0$, where the curvature $F = dA + A \wedge A$ and its self-dual part is $F^+ = \frac{1}{2}(F + *F)$. This equation manifestly respects the symmetries of the entire four-manifold, and this point of view is a very effective one for proving even basic things, like gauge invariance, of the Chern-Simons functional. Witten is very fond of making this point and my understanding is that this insight allowed him to extend his QFT description of the Jones polynomial to a QFT description of its categorification, Khovanov homology.
And now for something completely different: associativity of the quantum cup product. A familiar object to many people is the cohomology ring $H^*(X)$ of a space $X$, which is associative, (graded) commutative, and just generally great. If $X$ is a symplectic manifold, there's an interesting way to deform the multiplication on this ring using counts of $J$-holomorphic curves passing through various cycles. In effect, one picks a compatible almost-complex structure on the symplectic manifold, and then if one writes $\alpha * \beta = \sum_{\gamma} c_{\alpha \beta \gamma} \gamma$, where we think of $\alpha, \beta, \gamma$ as cycles in $X$ (using Poincare duality), the coefficient $c_{\alpha \beta \gamma}$ is a generating function in some formal variables, the coefficients of which are counts of holomorphic curves of fixed genus and homology class intersecting our three cycles $\alpha, \beta, \gamma$. Using this deformed multiplication gives the quantum cohomology ring $QH^*(X)$. Now, some properties of this ring, like graded commutativity, are fairly easy to see from the definition, but associativity is really quite tricky! (I realise this isn't exactly what you asked in your question as it's not just a symmetry of some coefficient, but you can phrase associativity as a symmetry of something or other -- if you want to be technical, a four-point Gromov-Witten invariant -- so I think it qualifies.) The associativity is somehow not so bad to see in the algebro-geometric case (or perhaps this is just my bias as an algebraic geometer), but in symplectic geometry you really need some nontrivial analytic estimates at some point in the proof. And you get a lot out of it! Associativity of this quantum cohomology ring encapsulates a wealth of information on enumerative geometry counts associated to $M$; indeed, it was basically this idea that allowed Kontsevich to find his recursion for the number of degree $d$ curves through $3d + 1$ general points in $\mathbb{P}^2$.
Finally, I kind of want to mention strange duality, even though that now really isn't an answer to the question, as you have to modify one side or the other; I'll just copy a very quick summary from the abstract to arxiv.org/abs/math/0602018: ``For X a compact Riemann surface of positive genus, the strange duality conjecture predicts that the space of sections of certain theta bundle on moduli of bundles of rank r and level k is naturally dual to a similar space of sections of rank k and level r.'' The paper itself is a great place to learn more about it if you're interested!
In number theory, Terry Tao already mentioned Quadratic Reciprocity in his first comment, but there's also the reciprocity formula $$ s(b,c) + s(c,b) = \frac1{12}\left( \frac{b}{c} + \frac1{bc} + \frac{c}{b} \right) - \frac14 $$ for Dedekind sums, symmetrized further in Rademacher's formula $$ D(a,b;c) + D(b,c;a) + D(c,a;b) = \frac1{12} \frac{a^2+b^2+c^2}{abc} - \frac14. $$ [Here $D(a,b;c) = \sum_{n\,\bmod\,c} ((an/c)) ((bn/c))$, where $((\cdot))$ is the sawtooth function taking $x$ to $0$ if $x \in {\bf Z}$ and to $x - \lfloor x \rfloor - 1/2$ otherwise; and the Dedekind sum is the special case $s(b,c) = D(1,b;c)$.]
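The reciprocity formula is easy to check in exact rational arithmetic (my code, using the definition of $s(b,c)$ above):

```python
from fractions import Fraction
from math import gcd

def sawtooth(x):
    # ((x)) = 0 if x is an integer, else x - floor(x) - 1/2
    if x.denominator == 1:
        return Fraction(0)
    return x - (x.numerator // x.denominator) - Fraction(1, 2)

def dedekind_sum(b, c):
    # s(b, c) = sum over n mod c of ((n/c)) ((bn/c))
    return sum(sawtooth(Fraction(n, c)) * sawtooth(Fraction(b * n, c))
               for n in range(c))

for b, c in [(3, 5), (5, 7), (4, 9), (7, 12)]:
    assert gcd(b, c) == 1
    lhs = dedekind_sum(b, c) + dedekind_sum(c, b)
    rhs = (Fraction(1, 12) * (Fraction(b, c) + Fraction(1, b * c) + Fraction(c, b))
           - Fraction(1, 4))
    assert lhs == rhs
```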
But I don't understand what is so special about this, at least in terms of symmetry: for about any function $s(\cdot,\cdot)$, including the Legendre symbol, $s(b,c)+s(c,b)$ or $s(b,c)s(c,b)$ is symmetric in $b$ and $c$. Where is the surprise? – Wolfgang Dec 18 '13 at 18:01
@Wolfgang asks a fair question. To add to Matt Young's answer, we can define $s'(b,c) = s(b,c) + 1/8 - b/12c - 1/24bc$, and then the reciprocity formula says that $s'(b,c)$ is antisymmetric: $s'(b,c) = -s'(c,b)$. – Noam D. Elkies Dec 18 '13 at 20:25
@NoamD.Elkies Granted. That reminds me of the relation between $\zeta(1-s)$ and $\zeta(s)$, cast as $\Xi(1-s)=\Xi(s)$ with appropriate $\Xi$. – Wolfgang Dec 19 '13 at 7:56
Here is an example from potential theory where symmetry is a not-so-obvious property: the Green function of a bounded open subset $\Omega \subset \mathbb{C}$. More precisely, having specified a point $a \in \Omega$, one defines the classical Green function for $\Omega$ with pole at $a$, $G_\Omega(\cdot\,; a)$, as a function on $\mathbb{C}$ with the following properties: (i) $G_\Omega(\cdot; a)$ is harmonic in $\Omega \setminus \{a\}$; (ii) $z \mapsto G_\Omega(z;a) + \log |z-a|$ extends to a harmonic function on $\Omega$; (iii) for each $w \in \partial \Omega$, $\lim_{z \to w} G_\Omega(z;a)=0$.
The symmetry property says that $G_\Omega(z;w)=G_\Omega(w;z)$ for any $z,w \in \Omega$ such that $z \ne w$. Note that the functions on either side of the equation are different: one has a pole at $w$ and the other at $z$. It is not very hard to prove the symmetry property, but it is not obvious either.
The existence of such a function is related to the solution of a Dirichlet problem for the Laplace equation in $\Omega$. Analogous functions can be considered for domains in $\mathbb{R}^n, \ n>2$ or in $\mathbb{C}^n, n > 1$, and they also enjoy the symmetry property.
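For the unit disk the Green function has a classical closed form, $G(z;a) = \log|1 - \bar a z| - \log|z - a|$, so the symmetry can be seen concretely (my code; the sample points are arbitrary):

```python
import cmath
import math

def green_disk(z, a):
    # classical Green function of the unit disk with pole at a
    return math.log(abs(1 - a.conjugate() * z)) - math.log(abs(z - a))

z, a = 0.3 + 0.4j, -0.5 + 0.2j
assert abs(green_disk(z, a) - green_disk(a, z)) < 1e-12   # symmetry

w = cmath.exp(0.7j)                    # a boundary point, |w| = 1
assert abs(green_disk(w, a)) < 1e-12   # boundary values vanish
```

The symmetry here boils down to $|1 - \bar a z| = |1 - \bar z a|$, but for a general domain no such formula is available and the symmetry is a genuine theorem.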
Characters of affine Kac-Moody Lie algebras and of the Virasoro Lie algebra are modular forms. These modular symmetries are far from evident from the definitions.
Consider a functional inequality, like the Hardy-Littlewood-Sobolev inequality $$\left|\int\int_{{\mathbb R}^N\times{\mathbb R^N}}\frac{\overline{f(x)}g(y)}{|x-y|^\lambda}dxdy\right|\leq C\|f\|_r\|g\|_s.$$ Even if you put the sharp constant $C$ in this inequality, for most functions the inequality is strict. Now look for maximizers, i.e., functions for which the LHS is equal to the RHS: they are highly symmetric functions, actually spherically symmetric and very smooth. This is a general phenomenon, connected with monotonicity of $L^p$ and Sobolev norms with respect to symmetrization procedures.
Maxwell's equations were originally formulated within Newtonian physics. However, special relativity revealed that these equations have a surprising symmetry under Lorentz transformations: they remain true in a moving reference frame. The transformation of the values is such that (loosely speaking) what looks like pure electric charge in one reference frame can be electric current and charge in another reference frame; and what looks like a pure electric field in one reference frame can be a mixture of magnetic and electric fields in another.
See https://en.wikipedia.org/wiki/Covariant_formulation_of_classical_electromagnetism for a precise formulation.
Betti numbers: the symmetry $\dim(H^k(M^n))=\dim(H^{n-k}(M^n))$ does not immediately follow from the definition.
@DanielLitt, I know, I just don't want to deal with torsion, and for the purpose of this question Betti numbers' symmetry is sufficient. – Michael Dec 13 '13 at 21:23
My point is that the symmetry does not come from the Betti numbers, but from the space $M$; I don't think this is an example of what the question asks for. – Daniel Litt Dec 13 '13 at 23:40
There is a philosophy that the functional equation of a zeta function should be a consequence of Poincare duality on some exotic space. For zeta functions of varieties over finite fields, this was made rigorous in the 1960s, but over number fields it's still just a philosophy. So we have two non-obvious symmetries that are the same, but not obviously the same. In other words, we have a non-obvious symmetry between non-obvious symmetries. – JBorger Jan 12 at 19:01
Two (unrelated) examples from combinatorics:
The first is Proposition 7.19.9 of volume 2 of Stanley's "Enumerative Combinatorics." Define a descent of a (skew) Standard Young Tableau $T$ of shape $\lambda/\mu$ to be an index $i$ such that $i+1$ is in a lower row than $i$. Let $D(T)$ denote the set of descents of $T$. Then for any $|\lambda/\mu|=n$ and for any $1 \leq i \leq n-1$, the number of SYTs $T$ of shape $\lambda/\mu$ such that $i \in D(T)$ is independent of $i$.
The second follows from a bijection of De Médicis and Viennot (1994, Adv. Appl. Math.) Let $\mathcal{M}_n$ denote the set of perfect matchings of $[2n]$, i.e. the set of partitions of $[2n] := \{1,2,\ldots,2n\}$ into pairs. Let $M \in \mathcal{M}_n$. For $p = \{a,b\}, q = \{c,d\} \in M$ with $a<b$, $c<d$, and $a<c$, we say that $p$ and $q$ cross if $a < c < b< d$ and we say they nest if $a<c<d<b$. Finally, we say they are aligned if they neither cross nor nest, i.e., $a<b<c<d$. Define:
$\mathrm{ne}(M):= |\{\{p,q\}\subset M\colon \textrm{$p$ and $q$ nest}\}|;$
$\mathrm{cr}(M):= |\{\{p,q\}\subset M\colon \textrm{$p$ and $q$ cross}\}|;$
$\mathrm{al}(M):= |\{\{p,q\}\subset M\colon \textrm{$p$ and $q$ are aligned}\}|.$
Then $\sum_{M \in \mathcal{M}_n}x^{\mathrm{ne}(M)}y^{\mathrm{cr}(M)}=\sum_{M \in \mathcal{M}_n}x^{\mathrm{cr}(M)}y^{\mathrm{ne}(M)}$. However, crossings and alignments (or nestings and alignments) are not equidistributed: $\sum_{M \in \mathcal{M}_n}x^{\mathrm{al}(M)}y^{\mathrm{cr}(M)} \neq \sum_{M \in \mathcal{M}_n}x^{\mathrm{cr}(M)}y^{\mathrm{al}(M)}$.
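Both claims can be checked by brute force for small $n$ (my code): crossings and nestings are jointly equidistributed, while crossings and alignments already fail to be for $n = 3$.

```python
from collections import Counter
from itertools import combinations

def matchings(points):
    # all perfect matchings of an even-sized list of points
    if not points:
        yield []
        return
    first, rest = points[0], points[1:]
    for i in range(len(rest)):
        for m in matchings(rest[:i] + rest[i + 1:]):
            yield [(first, rest[i])] + m

def stats(m):
    # (crossings, nestings, alignments) of a matching
    cr = ne = al = 0
    for p, q in combinations(m, 2):
        (a, b), (c, d) = sorted([p, q])   # ensures a < c; also a < b and c < d
        if a < c < b < d:
            cr += 1
        elif a < c < d < b:
            ne += 1
        else:
            al += 1
    return cr, ne, al

for n in (2, 3, 4):
    dist = [stats(m) for m in matchings(list(range(1, 2 * n + 1)))]
    # crossings and nestings are jointly equidistributed...
    assert (Counter((cr, ne) for cr, ne, al in dist)
            == Counter((ne, cr) for cr, ne, al in dist))

# ...but crossings and alignments are not, already for n = 3
dist3 = [stats(m) for m in matchings(list(range(1, 7)))]
assert (Counter((cr, al) for cr, ne, al in dist3)
        != Counter((al, cr) for cr, ne, al in dist3))
```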
The Jacobson radical of a ring $R$ is defined to be the intersection of all maximal left ideals in $R$. It turns out that the Jacobson radical is the intersection of all maximal right ideals in $R$ as well, so the Jacobson radical does not depend on whether one considers left or right ideals. In particular, the Jacobson radical of a ring is a two-sided ideal. In fact, there are several characterizations of the Jacobson radical that do not appear to be symmetric with respect to "leftness" and "rightness" including the following.
1. The intersection of all maximal left ideals.
2. $\bigcap\{\textrm{Ann}(M)|M\,\textrm{is a simple left}\,R-\textrm{module}\}$
3. $\{x\in R|1-rx\,\textrm{has a left inverse for each}\,r\in R\}$
4. $\{x\in R|1-rx\,\textrm{has a two-sided inverse for each}\,r\in R\}$
Let $r_4(n)$ be the number of $4$-tuples $a,b,c,d\in \bf Z$ satisfying $a^2+b^2+c^2+d^2=n$. Then $\sum_{n\geq 0}r_4(n)e^{2\pi i\, nz}\,dz$ is a holomorphic differential form on the upper half-plane that is invariant under a subgroup of finite index in ${\rm SL}_2(\bf Z)$ (acting by $z \mapsto \frac{az+b}{cz+d}$).
The same is true if you replace $r_4(n)$ by $a_n(E)$ where:
-- $E$ is an elliptic curve defined over $\bf Q$,
-- if $p$ is a prime number, $a_p(E)=p+1-N_p(E)$ and $N_p(E)$ is the number of points of $E$ in ${\bf Z}/p{\bf Z}$,
-- $a_n(E)$, for $n\in\bf N$, is defined by $\sum_n a_n(E)n^{-s}=\prod_p(1-a_p(E)p^{-s}+p^{1-2s})^{-1}$ (the product has to be taken over the prime numbers $p$ such that $E$ remains an elliptic curve modulo $p$, which excludes finitely many of them).
I would like to add an example coming from the area of additive theory known as Freiman's structure theory. If I am not (too) blind, this has not been mentioned yet, and hopefully it qualifies as an appropriate answer.
Assume that $\mathbb{A} = (A, +)$ is a (possibly non-commutative) semigroup, and let $X$ be a non-empty subset of $A$. Given an integer $n \ge 1$, we write $nX$ for $\{x_1+\cdots + x_n: x_1, \ldots, x_n \in X\}$. In principle, we have $1 \le |nX| \le |X|^n$, and for all $k \in \mathbb{N}^+$ and $i \in \{1, \ldots, k\}$ we can actually find a pair $(\mathbb{A}, X)$ such that $|X| = k$ and $|nX| = i$, with the result that, in general, not much can be concluded about the "structure" of $X$. However, if $|nX|$ is sufficiently small with respect to $|X|$ and $\mathbb{A}$ has suitable properties, then "surprising" things start happening, and for instance we have the following:
Theorem. If $\mathbb{A}$ is a linearly orderable semigroup (i.e., there exists a total order $\preceq$ on $A$ such that $x + z \prec y + z$ and $z + x \prec z + y$ for all $x,y,z \in A$ with $x \prec y$) and $|2X| \le 3|X|-3$, then the smallest subsemigroup of $\mathbb{A}$ containing $X$ is abelian.
This implies at once an analogous result by Freiman and coauthors which is valid for linearly ordered groups; see Theorem 1.2 in [F] (a preprint can be found here). I don't know of any similar result for larger values of $n$.
[F] G. Freiman, M. Herzog, P. Longobardi, and M. Maj, Small doubling in ordered groups, to appear in J. Austr. Math. Soc.
In the definition of "Latin square" there is complete symmetry between the roles of "row", "column" and "symbol", so that any of the 6 permutations of that role produces another Latin square.
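A quick sanity check (my code): viewing a Latin square as its set of (row, column, symbol) triples, each of the 6 role permutations again yields a Latin square.

```python
from itertools import permutations

L = [[0, 1, 2], [1, 2, 0], [2, 0, 1]]   # a 3x3 Latin square (addition mod 3)
triples = {(r, c, L[r][c]) for r in range(3) for c in range(3)}

def is_latin(triples, n=3):
    # Latin <=> every pair of coordinates determines the third uniquely,
    # i.e. the n*n triples project to n*n distinct pairs in every coordinate pair
    return all(len({(t[i], t[j]) for t in triples}) == n * n
               for i in range(3) for j in range(3) if i != j)

assert is_latin(triples)
for role_perm in permutations(range(3)):
    conjugate = {tuple(t[i] for i in role_perm) for t in triples}
    assert is_latin(conjugate)
```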
Let $a(m,n)$ be the number of partitions with no more than $m$ parts, each part (strictly) less than $n$, and the sum a multiple of $n$. Then $a(m,n)=a(n,m)$.
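The symmetry is easy to confirm for small parameters by brute force (my code):

```python
from functools import lru_cache

def a(m, n):
    # partitions with at most m parts, each part < n, and sum divisible by n
    @lru_cache(maxsize=None)
    def count(parts_left, max_part, residue):
        if parts_left == 0 or max_part == 0:
            return 1 if residue == 0 else 0
        # either no part equals max_part, or at least one part does
        return (count(parts_left, max_part - 1, residue)
                + count(parts_left - 1, max_part, (residue + max_part) % n))
    return count(m, n - 1, 0)

for m in range(1, 8):
    for n in range(1, 8):
        assert a(m, n) == a(n, m)
```

For instance $a(3,4) = a(4,3) = 5$, even though the two counting problems look quite different.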
Electrons in an atom have quantized energies. Can the uncertainty principle be applied in this case, then?
How does this work?
As the energy is fixed, this seems to violate $\Delta E \Delta t \geq \hbar/2$...
You can only measure the energy of an electron in an atom perfectly if you measure it for an infinitely long time. If you do your measurement for a finite time there will be a finite uncertainty due to the uncertainty principle.
This is not some esoteric piece of mathematics; it's a real and measurable effect. For example, if you measure the emission spectrum from an atom you are measuring the difference in energy between some excited state and a lower energy state. Because the lifetime of the excited state is short, its energy is uncertain, and consequently the energy of the emitted photon is uncertain. The result is that the lines in the emission spectrum are not infinitely sharp. They have a finite width due to the uncertainty principle.
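To put rough numbers on this (my own figures, not the answerer's; the 1.6 ns lifetime is the textbook value for the hydrogen 2p state): a finite lifetime $\tau$ translates into an energy width of order $\hbar/2\tau$.

```python
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
EV = 1.602176634e-19     # one electronvolt in J

tau = 1.6e-9             # s, approximate lifetime of the hydrogen 2p state
delta_E = HBAR / (2 * tau)   # energy uncertainty, J

# about 2e-7 eV: tiny compared to the 10.2 eV transition energy,
# but a real, measurable linewidth
print(delta_E / EV)
```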
Indeed, line width is how we measure the lifetime of very fast transitions. – dmckee Aug 8 '12 at 11:31
Then how do we get quantized values for energy level? – Mark Lucas Aug 8 '12 at 11:48
By solving the time independent Schrödinger equation. The energy levels we calculate are the limits of infinite time. – John Rennie Aug 8 '12 at 12:23
Wavepackets [k-space to z-space]
Jan 10, 2007 #1
Hi, this type of question has been confusing me slightly as of late, and a pointer in the right direction would be greatly appreciated.
The wavefunction associated with a Gaussian wavepacket propagating in free space can be shown to be [included as attachment - it's too complicated for here], where delta k is the width of the wavepacket in k space and v is the velocity of the wavepacket.
Deduce an expression for the width of the wavepacket in real space (z-space) as a function of time.
2. Relevant equations
again, as attached
3. The attempt at a solution
I suspect it has something to do with Fourier transforms, but I'm really stumped. It's probably straightforward, but I'm a bit blind to it at the moment.
Thanks in advance
Jan 11, 2007 #2
I have a hunch that [itex] \Delta z\Delta p =\frac{\hbar}{2} [/itex], since a gaussian wavepacket is minimizing the uncertainty relations.
Jan 19, 2007 #3
To find the width of the wave packet you should consider the form of
[tex] |\psi|^2 [/tex] .
This will have the form
[tex] |\psi|^2 \propto \exp \left\{- \frac{(z - vt)^2}{A(t)} \right\} [/tex]
This has the form of a Gaussian curve. The maximum occurs where [tex] z = vt [/tex], where the exponential takes on the value 1.
The width is given by the length between the points where the exponential equals [tex] 1/2 [/tex]. So the expression used to find the width is
[tex] \exp \left\{ - \frac{(z-vt)^2}{A(t)} \right\} = \frac{1}{2} [/tex].
Solving this gives two solutions [tex] z_1(t) [/tex] and [tex] z_2 (t) [/tex], and the difference between these is the width of the wave packet.
You can expect that the width is increasing with time, since the Schrödinger equation has a dispersive term (a term that causes different Fourier components of the wave to propagate with different velocities).
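Carrying out that last step symbolically (my code; A(t) is kept as a generic positive width function): solving exp{-(z - vt)^2 / A(t)} = 1/2 gives z = vt -/+ sqrt(A(t) ln 2), so the width is 2 sqrt(A(t) ln 2) and grows with time exactly as fast as sqrt(A(t)).

```python
import math

def half_width_points(A_t, v, t):
    # solutions of exp(-(z - v t)^2 / A(t)) = 1/2
    d = math.sqrt(A_t * math.log(2))
    return v * t - d, v * t + d

def width(A_t):
    # distance between the two solutions
    return 2 * math.sqrt(A_t * math.log(2))

# consistency: the exponential really equals 1/2 at both points
A_t, v, t = 2.5, 1.0, 3.0
for z in half_width_points(A_t, v, t):
    assert abs(math.exp(-(z - v * t) ** 2 / A_t) - 0.5) < 1e-12
```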
Last edited: Jan 19, 2007
Course Description
From the 15th Internet Seminar 2011/12, Operator Semigroups for Numerical Analysis
The course concentrates on the numerical solution of initial value problems of the type
$$u'(t) = Au(t) + f(t), \quad t \ge 0, \qquad u(0) = u_0 \in D(A),$$
where $A$ is a linear operator with dense domain of definition $D(A)$ in a Banach space $X$, and $u_0$ is the initial value. A model example is the Laplace operator $A = \Delta$ with appropriate domain in the Hilbert space $L^2(\Omega)$. In this case the above partial differential equation describes heat conduction inside $\Omega$. One way of finding a solution to this initial value problem is to imitate the way in which one solves linear ordinary differential equations with constant coefficients: First define the exponential $\mathrm{e}^{tA}$ in a suitable way. Then the solution of the homogeneous problem is given by this fundamental operator applied to the initial value $u_0$, i.e., $u(t) = \mathrm{e}^{tA}u_0$. This is where operator semigroup theory enters the game: the fundamental operators $T(t) := \mathrm{e}^{tA}$ form a so-called strongly continuous semigroup of bounded linear operators on the Banach space $X$. That is to say, the functional equation $T(t + s) = T(t)T(s)$, $T(0) = I$ holds together with the continuity of the orbits $t \mapsto T(t)u_0$. If such a semigroup exists, we say that the initial value problem is well-posed. Once existence and uniqueness of solutions are guaranteed, the following numerical aspects appear.
• In most cases the operator $A$ is complicated and numerically impossible to work with, so one approximates it via a sequence of (simple) operators $A_m$ hoping that the corresponding solutions $\mathrm{e}^{tA_m}$ (expected to be easily computable) converge to the solution of the original problem $\mathrm{e}^{tA}$ in some sense. This procedure is called space discretisation. This discretisation may indeed come from a spatial mesh (e.g., for a finite difference method) or from some not so space-related discretisations, e.g., from Fourier-Galerkin methods.
• Equally hard is the computation of the exponential of an operator A. One idea is to approximate the exponential function z ↦ e^z by functions r that are easier to handle. A typical example, known also from basic calculus courses, is the backward Euler scheme r(z) = (1 − z)^{−1}. In this case the approximation means r(0) = 1 and r′(0) = 1, i.e., the first two Taylor coefficients of r and of the exponential function coincide. This leads to the following idea: if r(tA) is approximately the same as e^{tA} for small values of t (up to an error of magnitude t²), we may take the n-th power of it. To compensate for the growing error, we take decreasing time steps as n grows and obtain
[r((t/n)A)]^n ≈ [e^{(t/n)A}]^n = e^{tA}
by the semigroup property. This procedure is called temporal discretisation.
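As an illustration (not part of the course materials), the backward Euler powers [r((t/n)A)]^n can be compared with e^{tA} for a small matrix A; the matrix, step count, and tolerance below are assumptions chosen for the demo:

```python
import math

def mat_mul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(X, n):
    """n-th power of a 2x2 matrix by repeated multiplication."""
    R = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):
        R = mat_mul(R, X)
    return R

def backward_euler_power(A, t, n):
    """Approximate e^{tA} by [r((t/n)A)]^n with r(z) = (1 - z)^{-1}."""
    h = t / n
    # step = (I - hA)^{-1}, computed by the 2x2 inversion formula
    a, b = 1.0 - h * A[0][0], -h * A[0][1]
    c, d = -h * A[1][0], 1.0 - h * A[1][1]
    det = a * d - b * c
    step = [[d / det, -b / det], [-c / det, a / det]]
    return mat_pow(step, n)

# A generates rotations: e^{tA} = [[cos t, sin t], [-sin t, cos t]]
A = [[0.0, 1.0], [-1.0, 0.0]]
approx = backward_euler_power(A, 1.0, 1000)
err = abs(approx[0][0] - math.cos(1.0)) + abs(approx[0][1] - math.sin(1.0))
```

Here the error shrinks like t²/n, in line with the first-order accuracy of the backward Euler scheme.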
• Due to numerical reasons, one is usually forced to combine the above two methods and add further spice to the stew: operator splitting. This is usually done when the operator A has a complicated structure, but decomposes into a finite number of parts that are easier to handle.
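A minimal sketch of such a splitting, the first-order Lie–Trotter product (e^{(t/n)A} e^{(t/n)B})^n ≈ e^{t(A+B)}, for two assumed 2×2 matrices (again illustrative only):

```python
import math

def mat_mul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(X, n):
    """n-th power of a 2x2 matrix by repeated multiplication."""
    R = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):
        R = mat_mul(R, X)
    return R

def lie_trotter(t, n):
    """Approximate e^{t(A+B)} by (e^{hA} e^{hB})^n with h = t/n.

    A = [[0,1],[0,0]] and B = [[0,0],[1,0]] are nilpotent, so their
    exponentials are exactly I + hA and I + hB."""
    h = t / n
    step = mat_mul([[1.0, h], [0.0, 1.0]], [[1.0, 0.0], [h, 1.0]])
    return mat_pow(step, n)

# A + B = [[0,1],[1,0]], whose exponential is [[cosh 1, sinh 1], [sinh 1, cosh 1]] at t = 1
approx = lie_trotter(1.0, 1000)
err = abs(approx[0][0] - math.cosh(1.0)) + abs(approx[0][1] - math.sinh(1.0))
```

The splitting error is governed by the commutator [A, B] and decays like t²/n, which is exactly the kind of estimate the Lax and Chernoff theorems formalise.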
In semigroup theory the above methods culminate in the famous Lax Equivalence Theorem and Chernoff’s Theorem, describing precisely the situation when these methods work. In this course we shall develop the basic tools from operator semigroup theory needed for such an abstract treatment of discretisation procedures.
Topics to be covered include:
• initial value problems and operator semigroups,
• spatial discretisations, Trotter–Kato theorems, finite element and finite difference approximations,
• fractional powers, interpolation spaces, analytic semigroups,
• the Lax Equivalence Theorem and Chernoff’s Theorem, error estimates, order of convergence, stability issues,
• temporal discretisations, rational approximations, Runge–Kutta methods, operator splitting procedures,
• applications to various differential equations, like inhomogeneous problems, non-autonomous equations, semilinear equations, Schrödinger equations, delay differential equations, Volterra equations,
• exponential integrators.
Some of these topics will be elaborated on in Phase 2, where the students will have the possibility to work on projects which are related to active research.
From Wikipedia, the free encyclopedia
Quantum mechanics
Fig. 1: Probability densities corresponding to the wavefunctions of an electron in a hydrogen atom possessing definite energy levels (increasing from the top of the image to the bottom: n = 1, 2, 3, ...) and angular momentum (increasing across from left to right: s, p, d, ...). Brighter areas correspond to higher probability density in a position measurement. Wavefunctions like these are directly comparable to Chladni's figures of acoustic modes of vibration in classical physics and are indeed modes of oscillation as well: they possess a sharp energy and thus a definite frequency. The angular momentum and energy are quantized, and only take on discrete values like those shown (as is the case for resonant frequencies in acoustics).
Quantum mechanics (QM) is a set of scientific principles describing the known behavior of energy and matter that predominates at the atomic and subatomic scales. QM gets its name from the notion of a quantum, and the fundamental quantum value is the Planck constant. The wave–particle duality of energy and matter at the atomic scale provides a unified view of the behavior of particles such as photons and electrons. The notion of the photon as a quantum of light energy is commonly understood as a particle of light whose energy value is governed by the Planck constant; what is quantized for an electron is the angular momentum it can have as it is bound in an atomic orbital. When not bound to an atom, an electron's energy is no longer quantized, but it displays, like any other massive particle, a Compton wavelength. While a photon does not have mass, it does have linear momentum. The full significance of the Planck constant is expressed in physics through the abstract mathematical notion of action.
The mathematical formulation of quantum mechanics is abstract and its implications are often non-intuitive. The centerpiece of this mathematical system is the wavefunction. The wavefunction is a mathematical function of time and space that can provide information about the position and momentum of a particle, but only as probabilities, as dictated by the constraints imposed by the uncertainty principle. Mathematical manipulations of the wavefunction usually involve the bra-ket notation, which requires an understanding of complex numbers and linear functionals. Many of the results of QM can only be expressed mathematically and do not have models that are as easy to visualize as those of classical mechanics. For instance, the ground state in the quantum mechanical model is a non-zero energy state that is the lowest permitted energy state of a system, rather than a more traditional state of simply being at rest with zero kinetic energy.
The word quantum is Latin for "how great" or "how much".[1] In quantum mechanics, it refers to a discrete unit that quantum theory assigns to certain physical quantities, such as the energy of an atom at rest (see Figure 1). The discovery that particles are discrete packets of energy with wave-like properties led to the branch of physics that deals with atomic and subatomic systems which is today called quantum mechanics. It is the underlying mathematical framework of many fields of physics and chemistry, including condensed matter physics, solid-state physics, atomic physics, molecular physics, computational physics, computational chemistry, quantum chemistry, particle physics, nuclear chemistry, and nuclear physics. The foundations of quantum mechanics were established during the first half of the twentieth century by Werner Heisenberg, Max Planck, Louis de Broglie, Albert Einstein, Niels Bohr, Erwin Schrödinger, Max Born, John von Neumann, Paul Dirac, Wolfgang Pauli, David Hilbert, and others.[2] Some fundamental aspects of the theory are still actively studied.[3]
Quantum mechanics is essential to understand the behavior of systems at atomic length scales and smaller. For example, if classical mechanics governed the workings of an atom, electrons would rapidly travel towards and collide with the nucleus, making stable atoms impossible. However, in the natural world the electrons normally remain in an uncertain, non-deterministic "smeared" (wave–particle wave function) orbital path around or through the nucleus, defying classical electromagnetism.[4]
Broadly speaking, quantum mechanics incorporates four classes of phenomena that classical physics cannot account for: (I) the quantization (discretization) of certain physical quantities, (II) wave–particle duality, (III) the uncertainty principle, and (IV) quantum entanglement. Each of these phenomena is described in detail in subsequent sections.
The history of quantum mechanics began with the 1838 discovery of cathode rays by Michael Faraday, the 1859 statement of the black body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system could be discrete, and the 1900 quantum hypothesis by Max Planck.[7] Planck's hypothesis stated that any energy is radiated and absorbed in quantities divisible by discrete "energy elements", such that each energy element E is proportional to its frequency ν:
E = hν
where h is Planck's action constant. Planck insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself.[8] However, at that time, this appeared not to explain the photoelectric effect (1839), i.e. that shining light on certain materials can eject electrons from the material. In 1905, basing his work on Planck's quantum hypothesis, Albert Einstein postulated that light itself consists of individual quanta.[9]
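A back-of-the-envelope check of E = hν (the wavelength chosen here is an arbitrary illustrative value, not from the article):

```python
# Energy of a single photon of green light via E = h * nu.
h = 6.62607015e-34           # Planck constant, J*s (exact SI value)
c = 299792458.0              # speed of light in vacuum, m/s
wavelength = 530e-9          # assumed green-light wavelength, m
nu = c / wavelength          # frequency, Hz
E = h * nu                   # photon energy, J
E_eV = E / 1.602176634e-19   # the same energy in electronvolts
```

A visible-light photon carries a couple of electronvolts, which is why individual photons can eject electrons in the photoelectric effect while still being far too small to notice in everyday optics.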
In the mid-1920s, developments in quantum mechanics quickly led to it becoming the standard formulation for atomic physics. In the summer of 1925, Bohr and Heisenberg published results that closed the "Old Quantum Theory". Light quanta came to be called photons (1926). From Einstein's simple postulation was born a flurry of debating, theorizing and testing, and thus, the entire field of quantum physics, leading to its wider acceptance at the Fifth Solvay Conference in 1927.
Quantum mechanics and classical physics
The main differences between classical and quantum theories have already been mentioned above in the remarks on the Einstein-Podolsky-Rosen paradox. Essentially the difference boils down to the statement that quantum mechanics is coherent (addition of amplitudes), whereas classical theories are incoherent (addition of intensities). Thus, such quantities as coherence lengths and coherence times come into play. For microscopic bodies the extension of the system is certainly much smaller than the coherence length; for macroscopic bodies one expects that it should be the other way round.[11] An exception to this rule can occur at extremely low temperatures, when quantum behavior can manifest itself on more macroscopic scales (see Bose-Einstein condensate).
This is in accordance with the following observations:
Many macroscopic properties of classical systems are direct consequences of quantum behavior of its parts. For example, the stability of bulk matter (which consists of atoms and molecules which would quickly collapse under electric forces alone), the rigidity of solids, and the mechanical, thermal, chemical, optical and magnetic properties of matter are all results of interaction of electric charges under the rules of quantum mechanics.[12]
While the seemingly exotic behavior of matter posited by quantum mechanics and relativity theory become more apparent when dealing with extremely fast-moving or extremely tiny particles, the laws of classical Newtonian physics remain accurate in predicting the behavior of large objects—of the order of the size of large molecules and bigger—at velocities much smaller than the velocity of light.[13]
There are numerous mathematically equivalent formulations of quantum mechanics. One of the oldest and most commonly used formulations is the transformation theory proposed by Cambridge theoretical physicist Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics, matrix mechanics (invented by Werner Heisenberg)[14][15] and wave mechanics (invented by Erwin Schrödinger).[16]
The time evolution of wave functions is deterministic in the sense that, given a wavefunction at an initial time, it makes a definite prediction of what the wavefunction will be at any later time.[29] During a measurement, the change of the wavefunction into another one is not deterministic, but rather unpredictable, i.e., random. A time-evolution simulation can be seen here.[1]
Mathematical formulation
Interactions with other scientific theories
Unsolved problems in physics
In the correspondence limit of quantum mechanics: Is there a preferred interpretation of quantum mechanics? How does the quantum description of reality, which includes elements such as the "superposition of states" and "wavefunction collapse", give rise to the reality we perceive?
In the 21st century classical mechanics has been extended into the complex domain and complex classical mechanics exhibits behaviours very similar to quantum mechanics.[34]
The particle in a 1-dimensional potential energy box is the simplest example where restraints lead to the quantization of energy levels. The box is defined as zero potential energy inside a certain interval and infinite everywhere outside that interval. For the 1-dimensional case in the x direction, the time-independent Schrödinger equation can be written as:[35]

−(ℏ²/2m) d²ψ/dx² = Eψ
The general solutions are:

ψ(x) = A e^{ikx} + B e^{−ikx},  with k = √(2mE)/ℏ,

or, from Euler's formula,

ψ(x) = C sin(kx) + D cos(kx).

The presence of the walls of the box determines the values of C, D, and k. At each wall (x = 0 and x = L), ψ = 0. Thus when x = 0,

ψ(0) = D = 0,

and so D = 0. When x = L,

ψ(L) = C sin(kL) = 0.

C cannot be zero, since this would conflict with the Born interpretation. Therefore sin(kL) = 0, and so it must be that kL is an integer multiple of π. Therefore,

k = nπ/L,  n = 1, 2, 3, ...
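With k = nπ/L, the allowed energies are E_n = n²π²ℏ²/(2mL²); a quick numerical sketch (the box width and the choice of an electron as the particle are assumptions for illustration):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e = 9.1093837015e-31   # electron mass, kg

def box_energy(n, L, m=m_e):
    """Energy level E_n = n^2 pi^2 hbar^2 / (2 m L^2) of a particle in a box."""
    return (n * math.pi * hbar) ** 2 / (2 * m * L ** 2)

L = 1e-9                 # assumed box width: 1 nanometre
E1 = box_energy(1, L)    # ground-state energy, J
E2 = box_energy(2, L)
ratio = E2 / E1          # levels scale as n^2, so this ratio is 4
```

The n² scaling is the signature of the quantization: the levels are discrete and spread apart as n grows, unlike the continuum of classical kinetic energies.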
Attempts at a unified field theory
As of 2010 the quest for unifying the fundamental forces through quantum mechanics is still ongoing. Quantum electrodynamics (or "quantum electromagnetism"), which is currently the most accurately tested physical theory,[36] has been successfully merged with the weak nuclear force into the electroweak force, and work is currently being done to merge the electroweak and strong force into the electrostrong force. Current predictions state that at around 10^14 GeV the three aforementioned forces are fused into a single unified field.[37] Beyond this "grand unification", it is speculated that it may be possible to merge gravity with the other three gauge symmetries, expected to occur at roughly 10^19 GeV. However, while special relativity is parsimoniously incorporated into quantum electrodynamics, the expanded general relativity, currently the best theory describing the gravitational force, has not been fully incorporated into quantum theory.
Relativity and quantum mechanics
Main articles: Quantum gravity and Theory of everything
Quantum mechanics is important for understanding how individual atoms combine covalently to form chemicals or molecules. The application of quantum mechanics to chemistry is known as quantum chemistry. (Relativistic) quantum mechanics can in principle mathematically describe most of chemistry. Quantum mechanics can provide quantitative insight into ionic and covalent bonding processes by explicitly showing which molecules are energetically favorable to which others, and by approximately how much.[39] Most of the calculations performed in computational chemistry rely on quantum mechanics.[40]
A working mechanism of a Resonant Tunneling Diode device, based on the phenomenon of quantum tunneling through the potential barriers.
Quantum tunneling is vital in many devices, even in the simple light switch, as otherwise the electrons in the electric current could not penetrate the potential barrier made up of a layer of oxide. Flash memory chips found in USB drives use quantum tunneling to erase their memory cells.
QM primarily applies to the atomic regimes of matter and energy, but some systems exhibit quantum mechanical effects on a large scale; superfluidity (the frictionless flow of a liquid at temperatures near absolute zero) is one well-known example. Quantum theory also provides accurate descriptions for many previously unexplained phenomena such as black body radiation and the stability of electron orbitals. It has also given insight into the workings of many different biological systems, including smell receptors and protein structures.[41] Even so, classical physics often can be a good approximation to results otherwise obtained by quantum physics, typically in circumstances with large numbers of particles or large quantum numbers. (However, some open questions remain in the field of quantum chaos.)
Philosophical consequences
Albert Einstein, himself one of the founders of quantum theory, disliked this loss of determinism in measurement (this dislike is the source of his famous quote, "God does not play dice with the universe."). Einstein held that there should be a local hidden variable theory underlying quantum mechanics and that, consequently, the present theory was incomplete. He produced a series of objections to the theory, the most famous of which has become known as the Einstein-Podolsky-Rosen paradox. John Bell showed that the EPR paradox led to experimentally testable differences between quantum mechanics and local realistic theories. Experiments have been performed confirming the accuracy of quantum mechanics, thus demonstrating that the physical world cannot be described by local realistic theories.[42] The Bohr-Einstein debates provide a vibrant critique of the Copenhagen Interpretation from an epistemological point of view.
See also
1. ^ Merriam-Webster.com
2. ^ FCCJ.org
3. ^ Compare the list of conferences presented here.
4. ^ Oocities.com
5. ^ Greiner, Walter; Müller, Berndt (1994), Quantum Mechanics Symmetries, Second edition, Springer-Verlag, p. 52, ISBN 3-540-58080-8, http://books.google.com/books?id=gCfvWx6vuzUC&pg=PA52 , Chapter 1, p. 52
6. ^ AIP.org
7. ^ J. Mehra and H. Rechenberg, The historical development of quantum theory, Springer-Verlag, 1982.
8. ^ T.S. Kuhn, Black-body theory and the quantum discontinuity 1894-1912, Clarendon Press, Oxford, 1978.
10. ^ Scribd.com
11. ^ Philsci-archive.pitt.edu
12. ^ Academic.brooklyn.cuny.edu
13. ^ Cambridge.org
14. ^ Spaceandmotion.com
16. ^ IF.uj.edu.pl
17. ^ OCW.ssu.edu
19. ^ Actapress.com
20. ^ Hirshleifer, Jack (2001), The Dark Side of the Force: Economic Foundations of Conflict Theory, Cambridge University Press, p. 265, ISBN 0-521-80412-4, http://books.google.com/books?id=W2J2IXgiZVgC&pg=PA265
21. ^ Dict.cc
22. ^ Davies, P. C. W.; Betts, David S. (1984), Quantum Mechanics, Second edition, Chapman and Hall, p. 79, ISBN 0-7487-4446-0, http://books.google.com/books?id=XRyHCrGNstoC&pg=PA79 , Chapter 6, p. 79
23. ^ Books.Google.com
24. ^ PHY.olemiss.edu
25. ^ Greenstein, George; Zajonc, Arthur (2006), The Quantum Challenge: Modern Research on the Foundations of Quantum Mechanics, Second edition, Jones and Bartlett Publishers, Inc, p. 215, ISBN 0-7637-2470-X, http://books.google.com/books?id=5t0tm0FB1CsC&pg=PA215 , Chapter 8, p. 215
26. ^ Farside.ph.utexas.edu
27. ^ Mathews, Piravonu Mathews; Venkatesan, K. (1976), A Textbook of Quantum Mechanics, Tata McGraw-Hill, p. 36, ISBN 0-07-096510-2, http://books.google.com/books?id=_qzs1DD3TcsC&pg=PA36 , Chapter 2, p. 36
28. ^ Physics.ukzn.ac.za
29. ^ Reddit.com
33. ^ "The Nobel Prize in Physics 1979". Nobel Foundation. http://nobelprize.org/nobel_prizes/physics/laureates/1979/index.html. Retrieved 2010-02-16.
35. ^ Derivation of particle in a box, chemistry.tidalswan.com
36. ^ Life on the lattice: The most accurate theory we have.
39. ^ Books.google.com
40. ^ en.wikibooks.org
41. ^ Discovermagazine.com
42. ^ Plato.stanford.edu
43. ^ Plato.stanford.edu
44. ^ www-physics.lbl.gov
More technical:
• Dirac, P. A. M. (1930). The Principles of Quantum Mechanics. The beginning chapters make up a very clear and comprehensible introduction.
• Feynman, Richard P.; Leighton, Robert B.; Sands, Matthew (1965). The Feynman Lectures on Physics. 1-3. Addison-Wesley.
• Griffiths, David J. (2004). Introduction to Quantum Mechanics (2nd ed.). Prentice Hall. ISBN 0-13-111892-7. OCLC 40251748. A standard undergraduate text.
• Omnès, Roland (1999). Understanding Quantum Mechanics. Princeton University Press. ISBN 0-691-00435-8. OCLC 39849482.
• Transnational College of Lex (1996). What is Quantum Mechanics? A Physics Adventure. Language Research Foundation, Boston. ISBN 0-9643504-1-6. OCLC 34661512.
Further reading
• Bohm, David (1989). Quantum Theory. Dover Publications. ISBN 0-486-65969-0.
• Eisberg, Robert; Resnick, Robert (1985). Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles (2nd ed.). Wiley. ISBN 0-471-87373-X.
• Sakurai, J. J. (1994). Modern Quantum Mechanics. Addison Wesley. ISBN 0-201-53929-2.
• Shankar, R. (1994). Principles of Quantum Mechanics. Springer. ISBN 0-306-44790-8.
External links
Course material
Up to date as of January 15, 2010
Definition from Wiktionary, a free dictionary
quantum mechanics
quantum mechanics (uncountable)
1. (physics) The branch of physics which studies matter and energy at the level of atoms and other elementary particles, and substitutes probabilistic mechanisms for classical Newtonian ones.
2. (uncountable) (idiomatic) Something overly complicated or detailed.
See also
Exponentiation
From Wikipedia, the free encyclopedia
Exponentiation is a mathematical operation, written as b^n, involving two numbers, the base b and the exponent n. When n is a positive integer, exponentiation corresponds to repeated multiplication of the base: b^n = b × ⋯ × b (n factors). In that case, b^n is called the n-th power of b, or b raised to the power n.
The definition of exponentiation can be extended to allow any real or complex exponent. Exponentiation by integer exponents can also be defined for a wide variety of algebraic structures, including matrices.
History of the notation[edit]
The term power was used by the Greek mathematician Euclid for the square of a line.[1] Archimedes discovered and proved the law of exponents, 10^a ⋅ 10^b = 10^(a+b), necessary to manipulate powers of 10.[2] In the 9th century, the Persian mathematician Muhammad ibn Mūsā al-Khwārizmī used the terms mal for a square and kahb for a cube, which later Islamic mathematicians represented in mathematical notation as m and k, respectively, by the 15th century, as seen in the work of Abū al-Hasan ibn Alī al-Qalasādī.[3]
In the late 16th century, Jost Bürgi used Roman numerals for exponents.[4]
Nicolas Chuquet used a form of exponential notation in the 15th century, which was later used by Henricus Grammateus and Michael Stifel in the 16th century. The word "exponent" was coined in 1544 by Michael Stifel.[6] Samuel Jeake introduced the term indices in 1696.[1] In the 16th century Robert Recorde used the terms square, cube, zenzizenzic (fourth power), sursolid (fifth), zenzicube (sixth), second sursolid (seventh), and zenzizenzizenzic (eighth).[7] Biquadrate has been used to refer to the fourth power as well.
Another historical synonym, involution,[8] is now rare and should not be confused with its more common meaning.
In 1748 Leonhard Euler wrote "consider exponentials or powers in which the exponent itself is a variable. It is clear that quantities of this kind are not algebraic functions, since in those the exponents must be constant."[9] With this introduction of transcendental functions, Euler laid the foundation for the modern introduction of natural logarithm as the inverse function for y = ex.
The expression b^2 = b⋅b is called the square of b because the area of a square with side-length b is b^2. It is pronounced "b squared".
The expression b^3 = b⋅b⋅b is called the cube of b because the volume of a cube with side-length b is b^3. It is pronounced "b cubed".
The exponent indicates how many copies of the base are multiplied together. For example, 3^5 = 3 ⋅ 3 ⋅ 3 ⋅ 3 ⋅ 3 = 243. The base 3 appears 5 times in the repeated multiplication, because the exponent is 5. Here, 3 is the base, 5 is the exponent, and 243 is the power or, more specifically, the fifth power of 3, 3 raised to the fifth power, or 3 to the power of 5.
The word "raised" is usually omitted, and very often "power" as well, so 3^5 is typically pronounced "three to the fifth" or "three to the five". Therefore, the exponentiation b^n can be read as b raised to the n-th power, or b raised to the power of n, or b raised by the exponent of n, or most briefly as b to the n.
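The repeated-multiplication reading of b^n can be sketched directly (an illustrative helper, not a library function):

```python
def power(base, exponent):
    """Raise base to a nonnegative integer exponent by repeated multiplication."""
    result = 1            # the empty product, which also handles exponent 0
    for _ in range(exponent):
        result *= base    # multiply in one more copy of the base
    return result

fifth_power_of_three = power(3, 5)  # 3 * 3 * 3 * 3 * 3
```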
Integer exponents[edit]
The exponentiation operation with integer exponents requires only elementary algebra.
Positive integer exponents[edit]
Formally, powers with positive integer exponents may be defined by the initial condition[10]

b^1 = b

and the recurrence relation

b^(n+1) = b^n ⋅ b.
Zero exponent[edit]
Any nonzero number raised by the exponent 0 is 1;[11] one interpretation of such a power is as an empty product. The case of 0^0 is discussed below.
Negative exponents[edit]
A nonzero number raised to a negative exponent is the reciprocal of the corresponding positive power: b^(−n) = 1/b^n. Raising 0 by a negative exponent is left undefined.
Combinatorial interpretation[edit]
For nonnegative integers n and m, the power n^m is the number of functions from a set of m elements to a set of n elements (see cardinal exponentiation). Such functions can be represented as m-tuples from an n-element set (or as m-letter words from an n-letter alphabet). For example:
1^4 = |{ (1,1,1,1) }| = 1. There is one 4-tuple from a one-element set.
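This counting interpretation can be checked by enumerating tuples (an illustrative sketch; the helper name is invented):

```python
from itertools import product

def count_tuples(n, m):
    """Count the m-tuples over an n-element set, i.e. the functions m -> n."""
    return sum(1 for _ in product(range(n), repeat=m))

one_element = count_tuples(1, 4)   # only (0,0,0,0): one tuple
two_element = count_tuples(2, 3)   # all 3-bit strings: 2^3 of them
```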
Identities and properties[edit]
Exponentiation is not commutative. This contrasts with addition and multiplication, which are. For example, 2 + 3 = 3 + 2 = 5 and 2 ⋅ 3 = 3 ⋅ 2 = 6, but 2^3 = 8, whereas 3^2 = 9.
Exponentiation is not associative either. Addition and multiplication are. For example, (2 + 3) + 4 = 2 + (3 + 4) = 9 and (2 ⋅ 3) ⋅ 4 = 2 ⋅ (3 ⋅ 4) = 24, but (2^3)^4 is 8^4 or 4096, whereas 2^(3^4) is 2^81 or 2417851639229258349412352. Without parentheses to modify the order of calculation, by convention the order is top-down, not bottom-up:
a^b^c means a^(b^c), not (a^b)^c.
While Google and WolframAlpha follow the above convention, note that some computer programs such as Microsoft Office Excel or Matlab associate to the left instead, i.e. a^b^c is evaluated as (a^b)^c.
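A quick way to see both conventions side by side (Python's `**` operator happens to follow the top-down, right-associative convention):

```python
top_down = 2 ** 3 ** 2      # Python's ** is right-associative: 2 ** (3 ** 2)
explicit = 2 ** (3 ** 2)    # 2 ** 9 = 512
left_assoc = (2 ** 3) ** 2  # 8 ** 2 = 64, what left-associating tools compute
```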
Particular bases[edit]
Powers of ten[edit]
In the base ten (decimal) number system, integer powers of 10 are written as the digit 1 followed or preceded by a number of zeroes determined by the sign and magnitude of the exponent. For example, 10^3 = 1000 and 10^(−4) = 0.0001.
Exponentiation with base 10 is used in scientific notation to denote large or small numbers. For instance, 299792458 m/s (the speed of light in vacuum, in metres per second) can be written as 2.99792458 × 10^8 m/s and then approximated as 2.998 × 10^8 m/s.
SI prefixes based on powers of 10 are also used to describe small or large quantities. For example, the prefix kilo means 10^3 = 1000, so a kilometre is 1000 m.
Powers of two[edit]
Powers of one[edit]
The powers of one are all one: 1^n = 1.
Powers of zero[edit]
If the exponent n is positive, the n-th power of zero is zero: 0^n = 0 for n > 0. If the exponent is zero, some authors define 0^0 = 1, whereas others leave it undefined, as discussed below under § Zero to the power of zero.
Powers of minus one[edit]
If n is an even integer, then (−1)^n = 1; if n is odd, then (−1)^n = −1. Because of this, powers of −1 are useful for expressing alternating sequences. For a similar discussion of powers of the complex number i, see § Powers of complex numbers.
Large exponents[edit]
b^n → ∞ as n → ∞ when b > 1
Any power of one is always one:
b^n = 1 for all n if b = 1
(1 + 1/n)^n → e as n → ∞
See § The exponential function below.
Other limits, in particular of those that take on an indeterminate form, are described in § Limits of powers below.
Rational exponents[edit]
Main article: nth root
An nth root of a number b is a number x such that xn = b.
If b is a positive real number and n is a positive integer, then there is exactly one positive real solution to x^n = b. This solution is called the principal n-th root of b. It is denoted ⁿ√b, where √ is the radical symbol; alternatively, the principal root may be written b^(1/n). For example: 4^(1/2) = 2, 8^(1/3) = 2.
The fact that x = b^(1/n) solves x^n = b follows from noting that (b^(1/n))^n = b^((1/n)⋅n) = b^1 = b.
If n is even, then x^n = b has two real solutions if b is positive, which are the positive and negative n-th roots (the positive one being denoted ⁿ√b). If b is negative, the equation has no solution in real numbers for even n.
The principal root of a positive real number b with a rational exponent u/v in lowest terms satisfies

b^(u/v) = (b^u)^(1/v) = (b^(1/v))^u,

where u is an integer and v is a positive integer.
For a negative base b, rational powers b^(u/v) with u/v in lowest terms are positive if u is even (and hence v is odd), because then b^u is positive, and negative if u and v are both odd, because then b^u is negative. There are two roots, one of each sign, if b is positive and v is even (as exemplified by the case in which u = 1 and v = 2, whereby a positive b has two square roots); in this case the principal root is defined to be the positive one.
Thus we have (−27)^(1/3) = −3 and (−27)^(2/3) = 9. The number 4 has two 3/2-th roots, namely 8 and −8; however, by convention 4^(3/2) denotes the principal root, which is 8. Since there is no real number x such that x^2 = −1, the definition of b^(u/v) when b is negative and v is even must use the imaginary unit i, as described more fully in the section § Powers of complex numbers.
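The principal-root computation for positive b can be sketched as follows (the helper name is invented for illustration; floating-point roots are approximate):

```python
def principal_root(b, n):
    """Principal n-th root of a positive real b, computed as b ** (1/n)."""
    if b <= 0:
        raise ValueError("defined here only for positive b")
    return b ** (1.0 / n)

sqrt_of_4 = principal_root(4, 2)    # the positive square root, 2
cbrt_of_8 = principal_root(8, 3)    # the real cube root, 2
```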
Care needs to be taken when applying the power identities with negative n-th roots (i.e. negative bases). For instance, the chain

−1 = (−1)^1 = (−1)^(2 ⋅ (1/2)) = ((−1)^2)^(1/2) = 1^(1/2) = 1

is clearly wrong. The problem occurs in the step that takes the positive square root rather than the negative one: choosing the negative root, ((−1)^2)^(1/2) = −1, restores consistency.
In general the same sorts of problems occur as described for complex numbers in the section § Failure of power and logarithm identities.
Real exponents[edit]
The identity (b^r)^s = b^(rs) cannot be extended consistently to cases where b is a negative real number (see § Real exponents with negative bases). The failure of this identity is the basis for the problems with complex number powers detailed under § Failure of power and logarithm identities.
Exponentiation to real powers of positive real numbers can be defined either by extending the rational powers to reals by continuity, or more usually as given in § Powers via logarithms below.
Limits of rational exponents[edit]
Because the exponential function is continuous, we find b^(x_n) → b^x for convergent sequences (x_n); this is illustrated by x_n = 1/n.
For irrational x, one defines

b^x = lim_(r → x) b^r,

where the limit as r gets close to x is taken only over rational values of r. This limit only exists for positive b. The (ε, δ)-definition of limit is used; this involves showing that for any desired accuracy of the result b^x one can choose a sufficiently small interval around x so that all the rational powers in the interval are within the desired accuracy.
For example, if x = π, the nonterminating decimal representation π = 3.14159... can be used (based on strict monotonicity of the rational power) to obtain the intervals bounded by rational powers

[b^3, b^4], [b^3.1, b^3.2], [b^3.14, b^3.15], [b^3.141, b^3.142], ...

The bounded intervals converge to a unique real number, denoted by b^π. This technique can be used to obtain the power of a positive real number b for any irrational exponent. The function f_b(x) = b^x is thus defined for any real number x.
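The truncation procedure can be mimicked numerically (b = 2 and eight decimal truncations of π are assumed for illustration; the lower interval endpoints form an increasing sequence approaching b^π):

```python
import math

b, x = 2.0, math.pi
# Truncating pi to k decimal digits gives rational exponents 3, 3.1, 3.14, ...
approximations = [b ** (math.floor(x * 10 ** k) / 10 ** k) for k in range(8)]
errors = [abs(a - b ** x) for a in approximations]  # distance from b ** pi
```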
The exponential function[edit]
Main article: Exponential function
Among other properties, exp satisfies the exponential identity

exp(x + y) = exp(x) ⋅ exp(y).
The exponential function is defined for all integer, fractional, real, and complex values of x. In fact, the matrix exponential is well-defined for square matrices (in which case this exponential identity only holds when x and y commute), and is useful for solving systems of linear differential equations.
Since exp(1) is equal to e and exp(x) satisfies this exponential identity, it immediately follows that exp(x) coincides with the repeated-multiplication definition of e^x for integer x, and it also follows that rational powers denote (positive) roots as usual, so exp(x) coincides with the e^x definitions in the previous section for all real x by continuity.
Powers via logarithms[edit]
The power of a positive real number b can be defined as

b^x = e^(x ⋅ ln b)

for each real number x.
Real exponents with negative bases[edit]
The rational exponent method cannot be used for negative values of b because it relies on continuity. The function f(r) = b^r has a unique continuous extension[12] from the rational numbers to the real numbers for each b > 0. But when b < 0, the function f is not even continuous on the set of rational numbers r for which it is defined.
Irrational exponents[edit]
If a is a positive algebraic number, and b is a rational number, it has been shown above that a^b is algebraic. This remains true even if one accepts any algebraic number for a, with the only difference that a^b may take several values (see below), all algebraic. The Gelfond–Schneider theorem provides some information on the nature of a^b when b is irrational (that is, not rational). It states:
If a is an algebraic number different from 0 and 1, and b an irrational algebraic number, then all the values of a^b are transcendental numbers (that is, not algebraic).
Complex exponents with positive real bases[edit]
Imaginary exponents with base e[edit]
Main article: Exponential function
The exponential function e^z can be defined as the limit of (1 + z/N)^N as N approaches infinity, and thus e^(iπ) is the limit of (1 + iπ/N)^N. In this animation N takes values increasing from 1 to 100. The computation of (1 + iπ/N)^N is displayed as the combined effect of N repeated multiplications in the complex plane, so that (1 + iπ/N)^k, k = 0 ... N, are the vertices of a polygonal path whose final, leftmost endpoint is the actual value of (1 + iπ/N)^N. It can be seen that as N gets larger, (1 + iπ/N)^N approaches a limit of −1. Therefore, e^(iπ) = −1, which is known as Euler's identity.
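The limit described in the caption can be checked numerically (the values of N below are illustrative):

```python
import cmath

def euler_limit(N):
    """Compute (1 + i*pi/N)**N, which approaches e^(i*pi) = -1 as N grows."""
    return (1 + 1j * cmath.pi / N) ** N

coarse = euler_limit(100)     # already close to -1
fine = euler_limit(100000)    # much closer: the error shrinks like 1/N
```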
A complex number is an expression of the form x + iy, where x and y are real numbers and i is the so-called imaginary unit, a number that satisfies the rule i^2 = −1. A complex number can be visualized as a point in the (x, y) plane. The polar coordinates of a point in the (x, y) plane consist of a non-negative real number r and an angle θ such that x = r cos θ and y = r sin θ. So
The product of two complex numbers z1 = x1 + iy1 and z2 = x2 + iy2 is obtained by expanding out the product of the binomials and simplifying using the rule i^2 = −1:
As a consequence of the angle sum formulas of trigonometry, if z1 and z2 have polar coordinates (r1, θ1), (r2, θ2), then their product z1z2 has polar coordinates equal to (r1r2, θ1 + θ2).
Trigonometric functions[edit]
Main article: Euler's formula
Before the invention of complex numbers, cosine and sine were defined geometrically. The above formula reduces the complicated formulas for trigonometric functions of a sum into the simple exponentiation formula
Complex exponents with base e[edit]
The power z = e^{x + iy} can be computed as e^x e^{iy}. The real factor e^x is the absolute value of z, and the complex factor e^{iy} identifies the direction of z.
Complex exponents with positive real bases[edit]
If b is a positive real number and z is any complex number, the power b^z is defined as e^{z ⋅ ln(b)}, where x = ln(b) is the unique real solution to the equation e^x = b. So the same method that works for real exponents also works for complex exponents.
For example:
The identity (b^z)^u = b^{zu} is not generally valid for complex powers. The power b^z is a complex number, and any power of it has to follow the rules for powers of complex numbers below. A simple counterexample is given by:
The identity (b^z)^u = b^{zu} is, however, valid for arbitrary complex z when u is an integer.
Powers of complex numbers[edit]
Complex powers of positive reals are defined via e^x as in the section Complex exponents with positive real bases above. These are continuous functions.
Exponentiating a real number to a complex power is formally a different operation from that for the corresponding complex number. However, in the common case of a positive real number the principal value is the same.
Complex exponents with complex bases[edit]
If z is an integer, then the value of w^z is independent of the choice of log w, and it agrees with the earlier definition of exponentiation with an integer exponent.
If z is a rational number m/n in lowest terms with n > 0, then the countably infinitely many choices of log w yield only n different values for w^z; these values are the n complex solutions s of the equation s^n = w^m.
If z is an irrational number, then the countably infinitely many choices of log w lead to infinitely many distinct values for w^z.
A similar construction is employed in quaternions.
Complex roots of unity[edit]
Main article: Root of unity
The three 3rd roots of 1
If wn = 1 but wk ≠ 1 for all natural numbers k such that 0 < k < n, then w is called a primitive nth root of unity. The negative unit −1 is the only primitive square root of unity. The imaginary unit i is one of the two primitive 4th roots of unity; the other one is −i.
The number e^{2πi/n} is the primitive nth root of unity with the smallest positive argument. (It is sometimes called the principal nth root of unity, although this terminology is not universal and should not be confused with the principal value of the nth root of 1, which is 1.[13])
The other nth roots of unity are given by
for 2 ≤ k ≤ n.
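As a quick numerical illustration (a sketch in Python; the helper name is ours), the n complex solutions of z^n = 1 can be generated directly from the formula e^{2πik/n}:

```python
import cmath

def roots_of_unity(n):
    """Return the n complex solutions of z**n = 1, namely e^(2*pi*i*k/n) for k = 0..n-1."""
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

# The three 3rd roots of 1.
cube_roots = roots_of_unity(3)
```

Each returned value satisfies z^3 = 1 up to floating-point rounding, and k = 0 always gives the root 1.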
Roots of arbitrary complex numbers[edit]
Computing complex powers[edit]
and thus
and use the formula above to compute
The value of a complex power depends on the branch used. For example, if the polar form i = 1 ⋅ e^{5πi/2} is used to compute i^i, the power is found to be e^{−5π/2}; the principal value of i^i, computed above, is e^{−π/2}. The set of all possible values for i^i is given by:[14]
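A short sketch in Python of this branch dependence (the helper name is ours): cmath.log returns the principal branch, so exponentiating it gives the principal value e^{−π/2}, while other branches of log i give the other real values of i^i.

```python
import cmath
import math

# Principal value: i^i = exp(i * Log i) with Log i = i*pi/2, so i^i = e^(-pi/2).
principal = cmath.exp(1j * cmath.log(1j))

def i_power_i(n):
    """Value of i^i on the branch log i = i*(pi/2 + 2*pi*n), for integer n."""
    return math.exp(-math.pi / 2 - 2 * math.pi * n)
```

Here i_power_i(0) reproduces the principal value, and i_power_i(1) reproduces the e^{−5π/2} obtained from the polar form 1 ⋅ e^{5πi/2}.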
Failure of power and logarithm identities[edit]
• The identity log(b^x) = x ⋅ log b holds whenever b is a positive real number and x is a real number. But for the principal branch of the complex logarithm one has
This identity does not hold even when considering log as a multivalued function. The possible values of log(w^z) contain those of z ⋅ log w as a subset. Using Log(w) for the principal value of log(w) and m, n as any integers, the possible values of both sides are:
For any integer n, we have:
but this is false when the integer n is nonzero.
There are a number of problems in the reasoning:
Exponentiation can be defined in any monoid.[16] A monoid is an algebraic structure consisting of a set X together with a rule for composition ("multiplication") satisfying an associative law and a multiplicative identity, denoted by 1. Exponentiation is defined inductively by:
• x^0 = 1 for all x in X
• x^{n+1} = x^n x for all x in X and non-negative integers n
• If n is a negative integer then x^n is only defined[17] if x has an inverse in X.
Monoids include many structures of importance in mathematics, including groups and rings (under multiplication), with more specific examples of the latter being matrix rings and fields.
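The inductive definition above translates directly into code. A minimal sketch (the function name is ours), assuming only an associative operation written as * with a given identity element:

```python
def monoid_pow(x, n, identity=1):
    """Compute x**n by the inductive rule x^0 = identity, x^(n+1) = x^n * x.

    Works in any monoid whose operation is written as * and whose identity
    is passed in; n must be a non-negative integer.
    """
    if n < 0:
        raise ValueError("negative exponents require x to have an inverse")
    result = identity
    for _ in range(n):
        result = result * x
    return result
```

The same function applies unchanged to any Python type that overloads *, e.g. Fraction or a matrix class.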
Matrices and linear operators[edit]
If A is a square matrix, then the product of A with itself n times is called the matrix power A^n. Also A^0 is defined to be the identity matrix,[18] and if A is invertible, then A^{−n} = (A^{−1})^n.
Matrix powers appear often in the context of discrete dynamical systems, where the matrix A expresses a transition from a state vector x of some system to the next state Ax of the system.[19] This is the standard interpretation of a Markov chain, for example. Then A^2 x is the state of the system after two time steps, and so forth: A^n x is the state of the system after n time steps. The matrix power A^n is the transition matrix between the state now and the state at a time n steps in the future. So computing matrix powers is equivalent to solving the evolution of the dynamical system. In many cases, matrix powers can be expediently computed by using eigenvalues and eigenvectors.
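A small illustration in plain Python (the two-state transition matrix is made up for this sketch; in practice one would use a linear-algebra library): the distribution after n steps is the n-th matrix power applied to the initial state.

```python
def mat_mul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, n):
    """A**n for a square matrix and n >= 1, by repeated multiplication."""
    result = A
    for _ in range(n - 1):
        result = mat_mul(result, A)
    return result

# Hypothetical column-stochastic transition matrix of a two-state Markov chain.
A = [[0.9, 0.5],
     [0.1, 0.5]]

# Starting surely in state 0, the distribution after 3 steps is the
# first column of A^3.
A3 = mat_pow(A, 3)
after_three = [A3[0][0], A3[1][0]]
```

Because each column of A sums to 1, every column of A^n does too, so after_three remains a probability distribution.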
Apart from matrices, more general linear operators can also be exponentiated. An example is the derivative operator of calculus, d/dx, which is a linear operator acting on a function f to give a new function df/dx. The n-th power of the differentiation operator is the n-th derivative:
These examples are for discrete exponents of linear operators, but in many circumstances it is also desirable to define powers of such operators with continuous exponents. This is the starting point of the mathematical theory of semigroups.[20] Just as computing matrix powers with discrete exponents solves discrete dynamical systems, so does computing matrix powers with continuous exponents solve systems with continuous dynamics. Examples include approaches to solving the heat equation, Schrödinger equation, wave equation, and other partial differential equations including a time evolution. The special case of exponentiating the derivative operator to a non-integer power is called the fractional derivative which, together with the fractional integral, is one of the basic operations of the fractional calculus.
Finite fields[edit]
A field is an algebraic structure in which multiplication, addition, subtraction, and division are all well-defined and satisfy their familiar properties. The real numbers, for example, form a field, as do the complex numbers and rational numbers. Unlike these familiar examples of fields, which are all infinite sets, some fields have only finitely many elements. The simplest example is the field F₂ with two elements 0 and 1, with addition defined by 0 + 1 = 1 + 0 = 1 and 0 + 0 = 1 + 1 = 0, and multiplication by 0 ⋅ 0 = 0 ⋅ 1 = 1 ⋅ 0 = 0 and 1 ⋅ 1 = 1.
Exponentiation in finite fields has applications in public key cryptography. For example, the Diffie–Hellman key exchange uses the fact that exponentiation is computationally inexpensive in finite fields, whereas the discrete logarithm (the inverse of exponentiation) is computationally expensive.
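A toy sketch of this asymmetry in Python (the parameters are illustrative only; real deployments use large, standardized primes): modular exponentiation with the built-in pow(base, exp, mod) is fast, while recovering the exponent from the result is the hard discrete-logarithm problem.

```python
# Toy Diffie-Hellman over the multiplicative group modulo a small prime.
p, g = 23, 5          # illustrative public parameters (far too small for real use)

a, b = 6, 15          # the two parties' secret exponents
A = pow(g, a, p)      # fast modular exponentiation (square-and-multiply)
B = pow(g, b, p)

# Each party raises the other's public value to its own secret exponent;
# both obtain the shared key g^(a*b) mod p.
shared_a = pow(B, a, p)
shared_b = pow(A, b, p)
```

Only A and B travel over the wire; an eavesdropper who cannot compute discrete logarithms learns neither a, b, nor the shared key.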
Any finite field F has the property that there is a unique prime number p such that px = 0 for all x in F; that is, x added to itself p times is zero. For example, in F₂ the prime number p = 2 has this property. This prime number is called the characteristic of the field. Suppose that F is a field of characteristic p, and consider the function x ↦ x^p that raises each element of F to the power p. This is called the Frobenius automorphism of F. It is an automorphism of the field because of the freshman's dream identity (x + y)^p = x^p + y^p. The Frobenius automorphism is important in number theory because it generates the Galois group of F over its prime subfield.
In abstract algebra[edit]
One has the following properties
If the operation has a two-sided identity element 1, then x^0 is defined to be equal to 1 for any x.
If the operation also has two-sided inverses and is associative, then the magma is a group. The inverse of x can be denoted by x−1 and follows all the usual rules for exponents.
When there are several power-associative binary operations defined on a set, any of which might be iterated, it is common to indicate which operation is being repeated by placing its symbol in the superscript. Thus, xn is x ∗ ... ∗ x, while x#n is x # ... # x, whatever the operations ∗ and # might be.
Over sets[edit]
Main article: Cartesian product
If n is a natural number and A is an arbitrary set, the expression A^n is often used to denote the set of ordered n-tuples of elements of A. This is equivalent to letting A^n denote the set of functions from the set {0, 1, 2, ..., n−1} to the set A; the n-tuple (a0, a1, a2, ..., an−1) represents the function that sends i to ai.
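A quick check of this set-theoretic reading in Python (itertools.product computes exactly these tuples): the cardinality of A^n is |A|^n, and A^0 contains exactly the one empty tuple.

```python
from itertools import product

S = {'a', 'b'}

# S^3 as the set of ordered 3-tuples of elements of S: |S^3| = |S|**3.
S_cubed = set(product(S, repeat=3))

# There is exactly one empty tuple, so |S^0| = 1 -- even when S is empty.
S_zero = set(product(S, repeat=0))
empty_zero = set(product(set(), repeat=0))
```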
where each Vi is a vector space.
In category theory[edit]
Of cardinal and ordinal numbers[edit]
Repeated exponentiation[edit]
Just as exponentiation of natural numbers is motivated by repeated multiplication, it is possible to define an operation based on repeated exponentiation; this operation is sometimes called hyper-4 or tetration. Iterating tetration leads to another operation, and so on, a concept named hyperoperation. This sequence of operations is expressed by the Ackermann function and Knuth's up-arrow notation. Just as exponentiation grows faster than multiplication, which is faster-growing than addition, tetration is faster-growing than exponentiation. Evaluated at (3, 3), the functions addition, multiplication, exponentiation, and tetration yield 6, 9, 27, and 7,625,597,484,987 (= 3^27 = 3^(3^3) = ³3) respectively.
Zero to the power of zero[edit]
Discrete exponents[edit]
There are many widely used formulas having terms involving natural-number exponents that require 0^0 to be evaluated to 1. For example, regarding b^0 as an empty product assigns it the value 1, even when b = 0. Alternatively, the combinatorial interpretation of b^0 is the number of empty tuples of elements from a set with b elements; there is exactly one empty tuple, even if b = 0. Equivalently, the set-theoretic interpretation of 0^0 is the number of functions from the empty set to the empty set; there is exactly one such function, the empty function.[21]
Polynomials and power series[edit]
Likewise, when working with polynomials, it is often necessary to assign 0^0 the value 1. A polynomial is an expression of the form a_0 + a_1 x + ... + a_n x^n, where x is an indeterminate and the coefficients a_i are real numbers (or, more generally, elements of some ring). The set of all real polynomials in x is denoted by R[x]. Polynomials are added termwise and multiplied by applying the usual rules for exponents in the indeterminate x (see Cauchy product). With these algebraic rules for manipulation, polynomials form a polynomial ring. The constant polynomial 1 is the identity element of the polynomial ring, meaning that it is the (unique) element such that its product with any polynomial p is just p.[22] Polynomials can be evaluated by specializing the indeterminate x to be a real number. More precisely, for any given real number r there is a unique unital ring homomorphism from R[x] to R sending x to r.[23] This is called the evaluation homomorphism. Because it is a unital homomorphism, it sends x^0 to 1. That is, x^0 = 1 for all specializations of x to a real number (including zero).
This perspective is significant for many polynomial identities appearing in combinatorics. For example, the binomial theorem is not valid for x = 0 unless 0^0 = 1.[24] Similarly, rings of power series require x^0 = 1 to be true for all specializations of x. Thus identities such as the geometric series 1/(1 − x) = Σ x^n and the exponential series e^x = Σ x^n/n! are only true as functional identities (including at x = 0) if 0^0 = 1.
In differential calculus, the power rule d/dx x^n = n x^{n−1} is not valid for n = 1 at x = 0 unless 0^0 = 1.
Continuous exponents[edit]
Plot of z = xy. The red curves (with z constant) yield different limits as (x, y) approaches (0, 0). The green curves (of finite constant slope, y = ax) all yield a limit of 1.
Thus, the two-variable function x^y, though continuous on the set {(x, y) : x > 0}, cannot be extended to a continuous function on any set containing (0, 0), no matter how one chooses to define 0^0.[26] However, under certain conditions, such as when f and g are both analytic functions and f is positive on the open interval (0, b) for some positive b, the limit approaching from the right is always 1.[27][28][29]
Complex exponents[edit]
In the complex domain, the function z^w may be defined for nonzero z by choosing a branch of log z and defining z^w as e^{w log z}. This does not define 0^w, since there is no branch of log z defined at z = 0, let alone in a neighborhood of 0.[30][31][32]
History of differing points of view[edit]
The debate over the definition of 0^0 has been going on at least since the early 19th century. At that time, most mathematicians agreed that 0^0 = 1, until in 1821 Cauchy[33] listed 0^0 along with expressions like 0/0 in a table of indeterminate forms. In the 1830s Libri[34][35] published an unconvincing argument for 0^0 = 1, and Möbius[36] sided with him, erroneously claiming that lim f(t)^{g(t)} = 1 whenever f(t) and g(t) both approach 0. A commentator who signed his name simply as "S" provided the counterexample of lim_{t→0+} (e^{−1/t})^t = e^{−1}, and this quieted the debate for some time. More historical details can be found in Knuth (1992).[37]
More recent authors interpret the situation above in different ways:
• Some argue that the best value for 0^0 depends on context, and hence that defining it once and for all is problematic.[38] According to Benson (1999), "The choice whether to define 0^0 is based on convenience, not on correctness. If we refrain from defining 0^0, then certain assertions become unnecessarily awkward. The consensus is to use the definition 0^0 = 1, although there are textbooks that refrain from defining 0^0."[39]
• Others argue that 0^0 should be defined as 1. Knuth (1992) contends strongly that 0^0 "has to be 1", drawing a distinction between the value 0^0, which should equal 1 as advocated by Libri, and the limiting form 0^0 (an abbreviation for a limit of f(t)^{g(t)} where f(t), g(t) → 0), which is necessarily an indeterminate form as listed by Cauchy: "Both Cauchy and Libri were right, but Libri and his defenders did not understand why truth was on their side."[37]
Treatment on computers[edit]
IEEE floating point standard[edit]
The IEEE 754-2008 floating point standard is used in the design of most floating point libraries. It recommends a number of functions for computing a power:[40]
Programming languages[edit]
Most programming languages with a power function implement it using the IEEE pow function and therefore evaluate 0^0 as 1. The later C[41] and C++ standards describe this as the normative behaviour. The Java standard[42] mandates this behavior. The .NET Framework method System.Math.Pow also treats 0^0 as 1.[43]
Mathematics software[edit]
• SageMath simplifies b^0 to 1, even if no constraints are placed on b.[44] It takes 0^0 to be 1, but does not simplify 0^x for other x.
• Maple distinguishes between integers 0, 1, ... and the corresponding floats 0.0, 1.0, ... (usually denoted 0., 1., ...). If x does not evaluate to a number, then x^0 and x^0.0 are respectively evaluated to 1 (integer) and 1.0 (float); on the other hand, 0^x is evaluated to the integer 0, while 0.0^x is evaluated to the float 0.0. If both the base and the exponent are zero (or are evaluated to zero), the result is Float(undefined) if the exponent is the float 0.0; with an integer as exponent, the evaluation of 0^0 results in the integer 1, while that of 0.0^0 results in the float 1.0.
• Mathematica and Wolfram Alpha simplify b^0 to 1, even if no constraints are placed on b.[45] While Mathematica does not simplify 0^x, Wolfram Alpha returns two results, 0 for x > 0 and "indeterminate" for real x.[46] Both Mathematica and Wolfram Alpha take 0^0 to be "indeterminate".[47]
• Matlab, Python, Magma, GAP, Singular, PARI/GP and the Google and iPhone calculators evaluate 0^0 as 1.
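For instance, in Python (a sketch one can run directly) both the integer ** operator and the IEEE-style math.pow agree on this convention:

```python
import math

int_result = 0 ** 0                # integer exponentiation: empty product, so 1
float_result = math.pow(0.0, 0.0)  # IEEE pow-style behaviour: also 1.0
```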
Limits of powers[edit]
The section § Zero to the power of zero gives a number of examples of limits that are of the indeterminate form 0^0. The limits in these examples exist, but have different values, showing that the two-variable function x^y has no limit at the point (0, 0). One may consider at what points this function does have a limit.
In fact, f has a limit at all accumulation points of D, except for (0, 0), (+∞, 0), (1, +∞) and (1, −∞).[48] Accordingly, this allows one to define the powers x^y by continuity whenever 0 ≤ x ≤ +∞, −∞ ≤ y ≤ +∞, except for 0^0, (+∞)^0, 1^{+∞} and 1^{−∞}, which remain indeterminate forms.
Under this definition by continuity, we obtain:
Efficient computation with integer exponents[edit]
1. 2^2 = 4
2. (2^2)^2 = 2^4 = 16
3. (2^4)^2 = 2^8 = 256
4. (2^8)^2 = 2^16 = 65,536
5. (2^16)^2 = 2^32 = 4,294,967,296
6. (2^32)^2 = 2^64 = 18,446,744,073,709,551,616
7. 2^64 ⋅ 2^32 ⋅ 2^4 = 2^100 = 1,267,650,600,228,229,401,496,703,205,376
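The steps above, which compute 2^100 with only eight multiplications instead of 99, are an instance of exponentiation by squaring. A minimal sketch in Python (the function name is ours):

```python
def fast_pow(base, exp):
    """Exponentiation by squaring: O(log exp) multiplications for exp >= 0.

    Scans the binary digits of exp; squares at each digit and multiplies
    into the result whenever the digit is 1.
    """
    result = 1
    while exp > 0:
        if exp & 1:          # current binary digit is 1
            result *= base
        base *= base         # square for the next digit
        exp >>= 1
    return result
```

The binary expansion 100 = 64 + 32 + 4 is exactly what selects the factors 2^64 ⋅ 2^32 ⋅ 2^4 in step 7.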
Exponential notation for function names[edit]
For historical reasons, this notation applied to the trigonometric and hyperbolic functions has a specific and diverse interpretation: a positive exponent applied to the function's abbreviation means that the result is raised to that power, while an exponent of −1 denotes the inverse function. That is, sin^2 x is just a shorthand way to write (sin x)^2 without using parentheses, whereas sin^{−1} x refers to the inverse function of the sine, also called arcsin x. Each trigonometric and hyperbolic function has its own name and abbreviation both for the reciprocal, for example 1/(sin x) = (sin x)^{−1} = csc x, and for its inverse, for example cosh^{−1} x = arcosh x. A similar convention applies to logarithms, where log^2 x usually means (log x)^2, not log log x.
In programming languages[edit]
In POSIX Shell arithmetic expansion, AWK, C, C++, C#, D, Go, Java, JavaScript, Perl, PHP, Python, Ruby and Tcl, the symbol ^ represents bitwise XOR. In Pascal, it represents indirection. In OCaml and Standard ML, it represents string concatenation.
List of whole-number powers[edit]
n n^2 n^3 n^4 n^5 n^6 n^7 n^8 n^9 n^10
2 4 8 16 32 64 128 256 512 1,024
3 9 27 81 243 729 2,187 6,561 19,683 59,049
4 16 64 256 1,024 4,096 16,384 65,536 262,144 1,048,576
5 25 125 625 3,125 15,625 78,125 390,625 1,953,125 9,765,625
6 36 216 1,296 7,776 46,656 279,936 1,679,616 10,077,696 60,466,176
7 49 343 2,401 16,807 117,649 823,543 5,764,801 40,353,607 282,475,249
8 64 512 4,096 32,768 262,144 2,097,152 16,777,216 134,217,728 1,073,741,824
9 81 729 6,561 59,049 531,441 4,782,969 43,046,721 387,420,489 3,486,784,401
10 100 1,000 10,000 100,000 1,000,000 10,000,000 100,000,000 1,000,000,000 10,000,000,000
See also[edit]
2. ^ For further analysis see The Sand Reckoner.
5. ^ René Descartes, Discours de la Méthode ... (Leiden, (Netherlands): Jan Maire, 1637), appended book: La Géométrie, book one, page 299. From page 299: " ... Et aa, ou a2, pour multiplier a par soy mesme; Et a3, pour le multiplier encore une fois par a, & ainsi a l'infini ; ... " ( ... and aa, or a^2, in order to multiply a by itself; and a^3, in order to multiply it once more by a, and thus to infinity ; ... )
6. ^ See:
• Earliest Known Uses of Some of the Words of Mathematics
9. ^ Leonard Euler (1748) Introduction to the Analysis of the Infinite, English version, page 75
13. ^ This definition of a principal root of unity can be found in:
16. ^ Nicolas Bourbaki (1970). Algèbre. Springer. , I.2
17. ^ David M. Bloom (1979). Linear Algebra and Geometry. p. 45. ISBN 0521293243.
18. ^ Chapter 1, Elementary Linear Algebra, 8E, Howard Anton
19. ^ Strang, Gilbert (1988), Linear algebra and its applications (3rd ed.), Brooks-Cole , Chapter 5.
20. ^ E Hille, R S Phillips: Functional Analysis and Semi-Groups. American Mathematical Society, 1975.
22. ^ Nicolas Bourbaki (1970). Algèbre. Springer. , §III.2 No. 9: "L'unique monôme de degré 0 est l'élément unité de ; on l'identifie souvent à l'élément unité 1 de ".
23. ^ Nicolas Bourbaki (1970). Algèbre. Springer. , §IV.1 No. 3.
26. ^ L. J. Paige (March 1954). "A note on indeterminate forms". American Mathematical Monthly. 61 (3): 189–190. doi:10.2307/2307224. JSTOR 2307224.
27. ^ sci.math FAQ: What is 0^0?
28. ^ Rotando, Louis M.; Korn, Henry (1977). "The Indeterminate Form 00". Mathematics Magazine. Mathematical Association of America. 50 (1): 41–42. doi:10.2307/2689754. JSTOR 2689754.
29. ^ Lipkin, Leonard J. (2003). "On the Indeterminate Form 00". The College Mathematics Journal. Mathematical Association of America. 34 (1): 55–56. doi:10.2307/3595845. JSTOR 3595845.
40. ^ Muller, Jean-Michel; Brisebarre, Nicolas; de Dinechin, Florent; Jeannerod, Claude-Pierre; Lefèvre, Vincent; Melquiond, Guillaume; Revol, Nathalie; Stehlé, Damien; Torres, Serge (2010). Handbook of Floating-Point Arithmetic (1 ed.). Birkhäuser. p. 216. doi:10.1007/978-0-8176-4705-6. ISBN 978-0-8176-4704-9. LCCN 2009939668.
41. ^ John Benito (April 2003). "Rationale for International Standard—Programming Languages—C" (PDF). Revision 5.10: 182.
42. ^ "Math (Java Platform SE 8) pow". Oracle.
43. ^ ".NET Framework Class Library Math.Pow Method". Microsoft.
44. ^ "Sage worksheet calculating x^0". Jason Grout.
45. ^ "Wolfram Alpha calculates b^0". Wolfram Alpha LLC, accessed April 25, 2015.
46. ^ "Wolfram Alpha calculates 0^x". Wolfram Alpha LLC, accessed April 25, 2015.
47. ^ "Wolfram Alpha calculates 0^0". Wolfram Alpha LLC, accessed April 25, 2015.
48. ^ N. Bourbaki, Topologie générale, V.4.2.
49. ^ Gordon, D. M. (1998). "A Survey of Fast Exponentiation Methods". Journal of Algorithms. 27: 129–146. doi:10.1006/jagm.1997.0913.
External links[edit]
Ginzburg–Landau theory
From Wikipedia, the free encyclopedia
In physics, Ginzburg–Landau theory, named after Vitaly Lazarevich Ginzburg and Lev Landau, is a mathematical physical theory used to describe superconductivity. In its initial form, it was postulated as a phenomenological model which could describe type-I superconductors without examining their microscopic properties. Later, a version of Ginzburg–Landau theory was derived from the Bardeen–Cooper–Schrieffer microscopic theory by Lev Gor'kov, thus showing that it also appears in some limit of the microscopic theory and giving a microscopic interpretation of all its parameters.
Based on Landau's previously-established theory of second-order phase transitions, Ginzburg and Landau argued that the free energy, F, of a superconductor near the superconducting transition can be expressed in terms of a complex order parameter field, ψ, which is nonzero below a phase transition into a superconducting state and is related to the density of the superconducting component, although no direct interpretation of this parameter was given in the original paper. Assuming smallness of |ψ| and smallness of its gradients, the free energy has the form of a field theory.
F = F_n + \alpha |\psi|^2 + \frac{\beta}{2} |\psi|^4 + \frac{1}{2m} \left| \left(-i\hbar\nabla - 2e\mathbf{A} \right) \psi \right|^2 + \frac{|\mathbf{B}|^2}{2\mu_0}
where Fn is the free energy in the normal phase, α and β in the initial argument were treated as phenomenological parameters, m is an effective mass, e is the charge of an electron, A is the magnetic vector potential, and \mathbf{B}=\nabla \times \mathbf{A} is the magnetic field. By minimizing the free energy with respect to fluctuations in the order parameter and the vector potential, one arrives at the Ginzburg–Landau equations
\alpha \psi + \beta |\psi|^2 \psi + \frac{1}{2m} \left(-i\hbar\nabla - 2e\mathbf{A} \right)^2 \psi = 0
\nabla \times \mathbf{B} = \mu_{0}\mathbf{j} \;\; ; \;\; \mathbf{j} = \frac{2e}{m} \mathrm{Re} \left\{ \psi^* \left(-i\hbar\nabla - 2e \mathbf{A} \right) \psi \right\}
where j denotes the dissipation-less electrical current density and Re the real part. The first equation — which bears some similarities to the time-independent Schrödinger equation, but is principally different due to a nonlinear term — determines the order parameter, ψ. The second equation then provides the superconducting current.
Simple interpretation[edit]
Consider a homogeneous superconductor where there is no superconducting current and the equation for ψ simplifies to:
\alpha \psi + \beta |\psi|^2 \psi = 0. \,
This equation has a trivial solution: ψ = 0. This corresponds to the normal state of the superconductor, that is for temperatures above the superconducting transition temperature, T>Tc.
Below the superconducting transition temperature, the above equation is expected to have a non-trivial solution (that is ψ ≠ 0). Under this assumption the equation above can be rearranged into:
|\psi|^2 = - \frac{\alpha} {\beta}.
When the right hand side of this equation is positive, there is a nonzero solution for ψ (remember that the magnitude of a complex number can be positive or zero). This can be achieved by assuming the following temperature dependence of α: α(T) = α0 (T - Tc) with α0 / β > 0:
• Above the superconducting transition temperature, T > Tc, the expression α(T) / β is positive and the right hand side of the equation above is negative. The magnitude of a complex number must be a non-negative number, so only ψ = 0 solves the Ginzburg–Landau equation.
• Below the superconducting transition temperature, T < Tc, the right hand side of the equation above is positive and there is a non-trivial solution for ψ. Furthermore
|\psi|^2 = - \frac{\alpha_{0} (T - T_{c})} {\beta},
that is ψ approaches zero as T gets closer to Tc from below. Such a behaviour is typical for a second order phase transition.
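A minimal numerical sanity check (a Python sketch with illustrative values α < 0, β > 0; not part of the original presentation): the uniform free-energy density f(ψ) = α|ψ|^2 + (β/2)|ψ|^4 is indeed minimized at |ψ|^2 = −α/β.

```python
alpha, beta = -2.0, 1.0   # illustrative values with alpha < 0 (below T_c)

def f(psi):
    """Uniform free-energy density relative to the normal state."""
    return alpha * psi**2 + (beta / 2) * psi**4

psi_eq = (-alpha / beta) ** 0.5   # predicted equilibrium |psi|

# Scan a grid of |psi| values and locate the numerical minimum.
grid = [i / 1000 for i in range(3001)]
psi_best = min(grid, key=f)
```

The grid minimum lands at |ψ| = √(−α/β) to within the grid spacing, and f there is below f(0), confirming that the nontrivial solution is the stable one.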
In Ginzburg–Landau theory the electrons that contribute to superconductivity were proposed to form a superfluid.[1] In this interpretation, |ψ|2 indicates the fraction of electrons that have condensed into a superfluid.[1]
Coherence length and penetration depth[edit]
The Ginzburg–Landau equations predicted two new characteristic lengths in a superconductor. The first is the coherence length, ξ. For T > Tc (normal phase), it is given by
\xi = \sqrt{\frac{\hbar^2}{2 m |\alpha|}}.
while for T < Tc (superconducting phase), where it is more relevant, it is given by
\xi = \sqrt{\frac{\hbar^2}{4 m |\alpha|}}.
It sets the exponential law according to which small perturbations of the density of superconducting electrons recover their equilibrium value ψ0. Thus the theory characterizes all superconductors by two length scales. The second is the penetration depth, λ. It was previously introduced by the London brothers in their London theory. Expressed in terms of the parameters of the Ginzburg–Landau model, it is
\lambda = \sqrt{\frac{m}{4 \mu_0 e^2 \psi_0^2}},
where ψ0 is the equilibrium value of the order parameter in the absence of an electromagnetic field. The penetration depth sets the exponential law according to which an external magnetic field decays inside the superconductor.
The original idea on the parameter κ belongs to Landau. The ratio κ = λ/ξ is presently known as the Ginzburg–Landau parameter. Landau proposed that Type I superconductors are those with 0 < κ < 1/√2, and Type II superconductors those with κ > 1/√2.
The exponential decay of the magnetic field is equivalent to the Higgs mechanism in high-energy physics.
Fluctuations in the Ginzburg-Landau model[edit]
The results above neglect fluctuations. Taking fluctuations into account, the phase transition from the normal state is of second order for Type II superconductors, as demonstrated by Dasgupta and Halperin, while for Type I superconductors it is of first order, as demonstrated by Halperin, Lubensky and Ma.
Classification of superconductors based on Ginzburg-Landau theory[edit]
In the original paper Ginzburg and Landau observed the existence of two types of superconductors depending on the energy of the interface between the normal and superconducting states. The most important finding from Ginzburg–Landau theory was made by Alexei Abrikosov in 1957. He used Ginzburg–Landau theory to explain experiments on superconducting alloys and thin films. He found that in a type-II superconductor in a high magnetic field, the field penetrates in the form of a hexagonal lattice of quantized tubes of flux.
Landau–Ginzburg theories in string theory[edit]
In particle physics, any quantum field theory with a unique classical vacuum state and a potential energy with a degenerate critical point is called a Landau–Ginzburg theory. The generalization to N=(2,2) supersymmetric theories in 2 spacetime dimensions was proposed by Cumrun Vafa and Nicholas Warner in the November 1988 article Catastrophes and the Classification of Conformal Theories; in this generalization one imposes that the superpotential possess a degenerate critical point. The same month, together with Brian Greene, they argued that these theories are related by a renormalization group flow to sigma models on Calabi–Yau manifolds in the paper Calabi–Yau Manifolds and Renormalization Group Flows. In his 1993 paper Phases of N=2 theories in two-dimensions, Edward Witten argued that Landau–Ginzburg theories and sigma models on Calabi–Yau manifolds are different phases of the same theory. These models were later used to describe the low energy dynamics of 4-dimensional gauge theories with monopoles as well as brane constructions (Gaiotto, Gukov & Seiberg 2013).
See also[edit]
1. ^ a b Ginzburg VL (July 2004). "On superconductivity and superfluidity (what I have and have not managed to do), as well as on the 'physical minimum' at the beginning of the 21st century". Chemphyschem. 5 (7): 930–945. doi:10.1002/cphc.200400182. PMID 15298379.
About Rationally Speaking
Wednesday, May 22, 2013
Rationally Speaking podcast: Sean Carroll on philosophical naturalism
References: Moving Naturalism Forward workshop.
1. I was somewhat surprised Massimo didn't blog about that Philosopher survey. It did seem to indicate that philosophers don't converge as often as Massimo sometimes makes out.
2. I listened to this a couple of days ago and left a comment there (about Feynman paths). I agree mostly with what "Daniel" wrote. If I had to state my own ontology, it would be "Code is all that exists."
If I got it right, there was an interesting remark on the podcast about there being more non-naturalists among chemists than either physicists or biologists. Maybe it's the drugs.
3. Philosophically: Naturalism is freedom, and sadly the only laws that confine Nature (us), unnaturally are the laws we ourselves create.
How many laws are there?
Can they even be counted?
And where and how does self-evident freedom, Nature fit in?
4. Philosophically: Reduction-ism is the process of simplifying a problem or equation to a single simple solution, answer or truth.
When it comes to the infinite questions or problems of the Universe, problems again that we ourselves create, the mathematical solution for infinitely everything or Nature is = and the empirical answer is also equal, One or the same.
When All is equal All is truly One.
Truth is much more simple than thought.
5. Quantum Mechanics: Einstein was right again, God does not play dice.
And beyond probability is
6. The link to Sean's pick seems to be broken.
7. I like Carroll's book and liked this RS a lot. But I was a little sad to hear how pleased he is to be friends with Chalmers. I think Chalmers is committed to undermining naturalism. Thought experiments that hinge on conceivability implying possibility are silly. If you aren't swayed by the Ontological Argument, you should also not be swayed by pZombies! If existence isn't a predicate, neither is "possible" existence.
I would have liked to hear Carroll comment on Quantum Bayesianism, too. QBism seems like the correct ontological interpretation of wave functions to me. Rather than having minds play magical roles in the world of stuff, they play a more mundane one, and the wave function simply describes this relationship. This, however, would take some reconciling with Carroll's assertion that the entire universe is a wave function.
1. Chalmers often swims in vague concepts no less than Carroll, so they have something in common. But Chalmers' key point is that thought & feeling (consciousness) is a subjective experience that is inaccessible to objective enquiry. This leaves the door open for all sorts of speculation. It is a good point that he necessarily pushes in this YouTube talk with Kahneman and others http://www.youtube.com/watch?v=F_MTuVozQzw Don't throw the baby out with the bathwater, and build from solid concepts rather than vague over-reaching.
2. Well, Carroll is a physicist, so his thought experiments need to potentially lead to real experiments. Chalmers has no such concerns.
3. "I think Chalmers is committed to undermining naturalism."
Chalmers is a naturalist.
"Thought experiments that hinge on conceivability implying possibility are silly."
Conceivability doesn't imply possibility, but it gives evidence of possibility. I can't coherently conceive of a round triangle, which provides at least some evidence that it's impossible.
"If you aren't swayed by the Ontological Argument, you should also not be swayed by pZombies!"
Anselm's ontological argument is guilty of a fallacy of equivocation. Chalmers' argument isn't. 'I don't like some philosophical arguments, therefore Chalmers is wrong' is not very persuasive; you need to establish some parity.
"If existence isn't a predicate, neither is 'possible' existence."
This is astoundingly confused. Chalmers' arguments trade in modal logics; nowhere does he assert that we can reduce modality to predication. And what's wrong with the ontological argument, for that matter, is not at all that it treats existence as a predicate. (Besides, linguistically, it is a predicate. If you have something special and metaphysical in mind when you say 'predicate', you'll have to clarify what that is.)
4. "Chalmers often swims in vague concepts no less than Carroll, so they have something in common."
Chalmers is one of the most catastrophically precise human beings in existence.
5. I don't see why Chalmers is catastrophic. The division between the subjective experience and the objective account of it is absolute rather than precise. In fact we must try to get as close as possible to an explanation of the subjective using objectivity. Maybe you are referring to zombies and so on (exploiting the lack of penetration of another's subjective experience to compare to our own) - which is more banal than catastrophic.
6. Phaen,
I can't defend "existence is not a predicate" any better than Kant did. So you'll have to consult him. But I believe it does establish exactly the parity you want me to establish.
Chalmers may claim to be a naturalist. But he is willing to truck in a class of thought experiments that I find to be fundamentally non-naturalistic and frankly silly in their overreach and overconfidence. Perhaps Chalmers has a convenient definition of naturalism that somehow allows qualia and consciousness to be not describable by natural processes. That's not my idea of naturalism. And let's not get into an argument about whether or not science can explain things like love and poetry. I believe science can explain those things well enough, and consciousness and qualia just as well. Mary's Room is just as annoyingly obtuse. You should be swayed by Mary's Room only if you would expect to find a description of an ice cream cone to be as fun as eating an ice cream cone. And then, after this absurd claim is established, you are moved to find ontological import in the supposedly surprising notion that someone would rather eat one than understand one.
Yes, of course conceivability has some value. I'm not saying that it is of no value. But I would argue that a PZombie is actually as inconceivable as your round triangle. To use your example, can you conceive of a triangle that has a subjective experience of being round? Or can you conceive of a triangle that lacks a subjective sense of being a triangle? If so, then we can conclude, as Chalmers does, that there is a Hard Problem of Triangleness. Wait, you might respond, we know that triangles don't have subjective experience, therefore your triangle argument involves a smoke screen of subjectivity to hide your lack of logical coherence. And I would agree, Chalmers and I are both being silly.
I listened again. Carroll, while proud to be buddies with Chalmers, does not buy the hard problem of consciousness, much to my relief. Talking about Chalmers and "strong notion of emergence" he says, "I don't believe in that. I think that is just unnecessary to understand the world." 19:25
8. Carroll is no different from the fantasists if he thinks you can get 'something from nothing' in physics - and I include 'expanding space-time' as something from nothing. The notion that a volume is created from nothing, as an expanding universe with space-time created between particles by stretching, is as way out as it comes.
No doubt we can measure the space & time capacities of masses, as they are delineated by those capacities. Einstein thought it lacked parsimony to have a void of space & time within which masses exhibit their capacities - leaving only masses and no void around them. That's parsimony taken to a ridiculous level. I would rather have a volume around particles enabling them to move (expand) in the usual way we all understand on earth. A void is undetectable and immeasurable, but it provides a setting for motion and expanding volumes without a need to literally create volume between particles as they thin out upon expansion.
9. Massimo broadly defines naturalism as “anything that is not supernatural”. Since he claims the concept of supernaturalism to be “vague and fuzzy”, that would make the concept of naturalism vague and fuzzy as well. What saves the concept (or more precisely, belief) from vagueness is the addition of “and the universe runs on scientific laws”. While well-defined, however, this proposition is philosophically tenuous. If the “scientific laws” are seen to have ontological reality, and especially if they are seen as more basic than time and space, it would seem to imply a strong mathematical-physical Platonism that is almost as sublime as the concept of a divine mover.
Lee Smolin addresses this problem by proposing time to be more fundamental than the “laws”, which he sees rather as “habits” evolving in time. While the theory obviates the need for an ontology of timeless mathematical laws, it is based on the premise that nature remembers. If Smolin is correct, naturalism would not have to be abandoned, but the definition above would need to be amended from “the universe runs on scientific laws” to “the universe learns”. So even before dealing with the bugbears of consciousness and morality, naturalism is burdened either with uncaused timeless laws or with uncaused eternal time and universal memory.
10. I noticed in philosophy news another proponent of wave function realism:
"Jill North, associate professor of philosophy, is a proponent of wave function realism, which posits that quantum mechanics' wave function is real and fundamental, but occupies a space very different from the one we seem to live in ..."
Maybe this is a trend in philosophy of physics. (An alternative to wave function realism is Feynman paths realism.)
On mathematical Platonism: There is an actual mathematical theorem that proves infinite entities can be dispensed with:
"If φ is a sentence in the language of T and φ' is a regular relativization of φ, then φ is a theorem of T if and only if φ' is a theorem of Fin(T)."
11. If any of you can explain the so-called wave-functions clearly and intelligibly, I would like to read it. A lot of physics today is speculative and inaccessible to general human understanding. You can say 'wow' and acquiesce without understanding, which is what I may be reading here, or you can attempt to understand and explain it.
What is needed is a new nomenclature in physics, or a new paradigm, that does not dispense with human experience. Reasoning and experience diverge in fantasies like expanding space-time, and mysterious wave-functions. You can read how this might be done in my free book at thehumandesign.net. I provide a sensible explanation of existence without reverting to mysteries.
1. A direct answer is that a quantum wave function is a solution to a Schrödinger equation:
For what it means to 'ontologize' the wave function, the reference to Jill North I mentioned is a place to start:
(see "The Structure of a Quantum World" [pdf])
Now Feynman's brilliant idea was his sum over paths theory, a way of calculating what the wave function produces. I think if one can ontologize the wave function, one can ontologize Feynman paths as an alternative. They are easier to understand for me, anyway.
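For what it's worth, the sum-over-paths idea can be illustrated numerically. In Feynman's picture every path between fixed endpoints contributes a phase exp(iS/ħ), and paths near the stationary-action (classical) path reinforce each other. The sketch below is my own toy illustration, not anything from the podcast: it discretizes a free particle's action (mass and units set to 1 by assumption) and shows that the straight-line path has a smaller action than a wiggled one.

```python
import numpy as np

def action(path, dt, m=1.0):
    """Discretized free-particle action: S = sum of (m/2) * velocity^2 * dt."""
    v = np.diff(path) / dt
    return np.sum(0.5 * m * v**2 * dt)

# A path from x(0) = 0 to x(T) = 1, sliced into N time steps
N, T = 100, 1.0
dt = T / N
t = np.linspace(0.0, T, N + 1)

classical = t / T                                  # straight line: the classical path
wiggly = classical + 0.1 * np.sin(np.pi * t / T)   # perturbed path, same endpoints

S_cl = action(classical, dt)   # exactly 0.5 here: constant velocity 1
S_w = action(wiggly, dt)       # strictly larger than S_cl
print(S_cl, S_w)
```

In the full path integral, paths like `wiggly` pick up phases that largely cancel against their neighbors, which is one way of seeing why the classical trajectory dominates as ħ → 0.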
2. I had a quick look, but it's not in plain language. Perhaps you can state it plainly, as you seem to understand it. Otherwise, I will wait to see if someone pops up who understands it sufficiently to have a try, and to put their interpretation to the test here. It will need to stay in the "what on earth?" basket until that can be done.
3. I should add that what is interesting about wave functions is how they enable space-time to expand and create volume, rather than thinning out in a void. It's common sense that a compacted state might expand - described by a 'wave function' - using its contained energy potential, but how that becomes a volume from 'nowhere' using General Relativity via Lemaitre is bizarre. It's a pity Q.M. doesn't directly confront the absence of a void in G.R.
4. Perhaps you could ask exactly what it is you want to know.
The solution to a Schrödinger equation Ψ(x,y,z, t) is interpreted in the following way:
the square of the absolute value of Ψ gives you the probability density for finding a particle at position (x,y,z) at time t.
The video lecture on Feynman's sum-over-paths explanation that I linked to above (did you watch it?) is very basic and should be very helpful.
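To put that interpretation in code: here is a minimal sketch (my own, using the standard particle-in-a-box states rather than anything from the lecture) that builds a stationary state, squares it per the Born rule, and checks that the probability density integrates to 1 over the box.

```python
import numpy as np

L = 1.0  # box width, arbitrary units

def psi(x, n=1):
    """Stationary state n of a particle in a 1-D box of width L."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

x = np.linspace(0.0, L, 10_001)
dx = x[1] - x[0]
density = np.abs(psi(x, n=2)) ** 2   # Born rule: |psi|^2 is a probability density

# Trapezoid rule: total probability over the box should be 1
total = np.sum((density[:-1] + density[1:]) / 2) * dx
print(round(total, 6))   # → 1.0
```

The same squaring step is what turns any solution Ψ(x, y, z, t) of the Schrödinger equation into the probability density described above.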
5. Note also Hawking and Hartle linked the wave function of the universe (via Feynman's paths theory) to the Big Bang:
"The Hartle–Hawking state is the wave function of the Universe–a notion meant to figure out how the Universe started–that is calculated from Feynman's path integral [sum-over-paths]."
6. I thought I had, but never mind. Thanks anyway.
12. This was one of my all-time favorite RS podcasts.
One thing that really surprised me was that in talking about naturalism vs. physicalism - and in discussing the "wavefunction" view - you seemed to be groping for but not quite finding the word you wanted: "process". The reason why "things" and "relations" don't feel right as a way of grounding definitions of naturalism/physicalism is that the description that we're looking for (whether reductive or not) is the description of a process, not just a description of things with properties or things with relations to other things.
If the fundamental description is a process description, physicalism becomes a "nothing spooky" position that is really hard to distinguish from naturalism.
This also plays into what Zal is saying above, because the "time" and "memory" and "runs" that Smolin talks about are the ingredients (roughly speaking) on top of "things" and "relations" that give us processes.
1. "If the fundamental description is a process description, physicalism becomes a 'nothing spooky' position that is really hard to distinguish from naturalism."
I disagree. Physicalism is endangered by irreducibly mental 'processes' just as much as it's endangered by irreducibly mental 'objects'. And the process view doesn't seem helpful in the debate over e.g. mathematical platonism.
2. If "irreducibly mental" means "not explicable in a way that is consistent with (and interleaves consistently as necessary with) our other theories", then sure. Otherwise, it doesn't matter if you call the process "mental" or "funkadelic" -- if it's non-spooky, it's physical. My point is that mereological and object/property descriptions make physicalism and naturalism seem farther apart than they are.
Mathematical platonism is probably a case-in-point. "Numbers are objects that exist" is a different proposition than "numbers are a process that takes place in brains". Understanding the existence of numbers in terms of processes can help to dissolve some of the weirdness of a platonic view, since the issue of immateriality is not sitting in the road. Getting there is going to take widespread understanding of how brains process information, including how "information" is inseparable from "process". But when we get there, I have a feeling that mathematical platonism will eventually seem quaint.
I hope you'll forgive the very inexact way I'm speaking above -- a proper discussion of this is probably not possible in the comment thread of a blog.
3. It sounds (with "numbers are a process that takes place in brains") like this is close to mathematical intentionalism: "Intentionalism says that pure mathematics is a description of finite structures consisting of finitely many imagined objects."
4. Math is just something we do with language, so as we understand language at the level of brain processing, we'll understand math at that level. It's misguided to look for math in the brain outside the context of language in the brain.
My view is that many philosophical perplexities about the nature of math arise from naïve views about the nature of existential assertions: it is thought that things are pre-bounded and words just name them; thus we must find out what pre-bounded things number names name. In reality, naming is the bounding of things. While with rabbits there is something material that is bounded, with numbers there is pure bounding. We stipulated that the number zero exists and that it has such-and-such properties. With such immaterial things, stipulating that they exist means that they exist, so long as their existence is consistent with prior stipulations and definitions.
Numbers are like fictional characters, though they are not fictional, i.e., they are not representations of things that do not exist. They are like fictional characters qua fictional characters, which exist by stipulation. Sherlock Holmes does not exist, but the fictional character Sherlock Holmes does, and this fictional character as such is not a fictional entity. There cannot be a fictional fictional entity.
In sum, mathematical entities are like fictional entities except that they do not represent what does not exist. It may be added that they (e.g. numbers) are governed by more minimal and strict language.
5. Philip - Mycielski's idea is fascinating. It seems like, at least in a sense, domains of quantification do the work of moving from "object" to "process". In the example you provided (Peano axioms[?]) the standard formulation gives you something "existential" -- implying infinite sets, which "exist" in a static sort of way. In Mycielski's formulation, it's something less like an object and more like a "performance" -- the instantiation of whatever finite set is needed.
Am I understanding it anywhere near correctly?
6. Paul - This sounds like a conversation I'm always having with someone I call my "inner Wittgenstein".
I wonder if things like, say, cardinal numbers are the result of a process of abstraction from "things being counted". It's a process in which "property of thing" becomes "thing" and can be manipulated conceptually just as if it were a plain old thing.
What's interesting (to me) about this is that the abstraction process is intrinsic to the way neural networks behave, and it's a process that's present - in a sort of "proto-" form - in brains that haven't acquired the trick of symbolic cognition.
7. Asher - Lavine [ Understanding the Infinite: http://books.google.com/books/about/Understanding_the_Infinite.html?id=GvGqRYifGpMC ] imagines this model in his two chapters on Mycielski's theorems: The domain of a quantifier (the set of values a variable could take) is like an indefinitely large bag of beans (where you add or subtract beans as needed). So, doing math is like being a bean counter, I guess!
8. Asher - I'm glad to have put you in touch with your inner Wittgenstein ;) It's odd that Wittgenstein is regarded by philosophers as one of the most important philosophers of the 20th-century - if not the most important - yet contemporary philosophy is still awash in just the sorts of mistakes he so conscientiously warned us about, viz., not taking language as a central issue in philosophical perplexities. Cheers.
13. Smolin adds the notion of evolving universal “habits” of nature to Whitehead’s process philosophy. Both Whitehead’s and Smolin’s ideas are grounded in quantum mechanics. Whether one prefers to interpret the wave-function as an expression of potential outcomes, or like Jill North, to believe that all possible outcomes of the wave function are real, what is always observed is a reduction of the wave function to a particular outcome, and so the empirical world is best described by “becoming” rather than “being”. Smolin can go farther than Whitehead and propose a teleological universal evolution because the inherent non-locality of an underlying reality, whatever it may turn out to be, is by now firmly established.
1. Non-local can be replaced with stochastic+contextual.*
* papers and lectures of Huw Price, e.g. prce.hu/w/teaching/PhilPhys/Lecture6.pdf. As Huw Price says at the end, "Philosophy needed."
2. Stochastic yes, contextual only, as Price points out, if quantum mechanics is incomplete, and even then the contextuality itself must be non-local. And the stochasticity of entangled particles is quite different from that of a single particle. We can prepare a single electron with a known ‘direction’ of spin. If placed in a magnetic field, say, not aligned with the spin, the electron will either emit a photon or not. Whether it does so or not in an individual case is completely undetermined, but there is an exact probability of emission determined by the angle between the direction of the spin and that of the magnetic field. So over many measurements of the spin of an identically prepared electron, we painstakingly arrive at an empirical confirmation of what we knew to begin with.
With an entangled pair of electrons, the situation is different. Here the direction of spin of either electron is completely undetermined to begin with, and if placed in a magnetic field in any direction will emit a photon exactly half the time. What we can say, with certainty, is that IF the first electron emits a photon in a particularly aligned magnetic field, THEN electron B will not emit in an identically aligned field, instantaneously and no matter how far away, and vice versa. The emissions are stochastic, individually random, but corresponding one to another precisely, non-stochastically. So in the case of a single particle and a single measurement there is absolute knowledge completely lost, whereas with entangled particles, there is absolute knowledge gained by one measurement of the spin of A, i.e. the spin of B, where there was none to begin with. Philosophy needed, indeed.
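The one-to-one correspondence described above falls straight out of the formalism. As a sketch (a generic textbook calculation, not tied to any particular experiment), the code below builds the two-electron singlet state and computes the spin correlation for detectors at angles θ_a and θ_b in the x–z plane; the result is −cos(θ_a − θ_b), i.e. perfect anticorrelation for aligned detectors and no correlation at right angles.

```python
import numpy as np

# Pauli matrices; a spin measurement along angle theta in the x-z plane
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    return np.cos(theta) * sz + np.sin(theta) * sx

# The singlet state (|01> - |10>) / sqrt(2) of the entangled pair
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def correlation(theta_a, theta_b):
    """Expectation value <A (x) B> for joint spin measurements on the singlet."""
    op = np.kron(spin(theta_a), spin(theta_b))
    return (singlet.conj() @ op @ singlet).real

print(correlation(0.0, 0.0))        # -1: aligned detectors, perfect anticorrelation
print(correlation(0.0, np.pi / 2))  # 0: perpendicular detectors, no correlation
```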
3. Am I correct in saying the whole idea about 'something from nothing' springs from the gap in measurement due to being unable to measure position and momentum simultaneously? We can get within a wave length, and an entire universe can pop into existence within that gap - including expanding space & time? If so, that's ridiculous - like camels through eyes of needles.
I read about how the greats of physics tried to close the gap, including Einstein & Bohr, when the fact is that there are some things that cannot be done simultaneously. Measuring the position and momentum of the same object requires separate measurements - position & momentum (dependent upon motion) are separate frames of measurement (position 1 mile or 1 hour from target; motion 100 miles per hour - separate formalisms). The closest you can get to simultaneity is a wavelength, and they want a universe to pop into existence thanks to that restriction?
No doubt there are useful things about that gap, and the relationship between position & momentum presents other difficulties consistent with them having separate frames. But it's just a matter of accepting the limitations to measurement itself, and not pretending to be omnipresent. Amazing they couldn't work that out - how can you freeze frame a position and measure motion at the same time? You must unify two perspectives that are entirely limited to their own frames and cannot therefore, by definition, be unified (simultaneous).
4. I have tried to understand your account of entanglement, but there is no way you could randomly select two electrons that are not causally connected, and find that they always have oppositely aligned spins.
5. Creation ex-nihilo due to the uncertainty principle is absurd prima facie. An aspect of uncertainty is that the vacuum is inherently unstable: particles come in and out of existence. But the vacuum is not “nihilum”. It is only space empty of matter.
As for entanglement, I’m not sure I understand what you mean by coordination between non-causally connected electrons. It would be incorrect to speak of a causal connection between entangled particles, since coordination of measured results is instantaneous and causality implies temporal order. It is best to think of entangled particles as a single system.
6. I prefer your interpretation of uncertainty applied to empty space rather than 'nothing', but theoretical physics seems to rely on something from nothing, Krauss etc. Anyway, I can't see how it's possible to exclude a void of space within which the universe expands.
That said, I can't pretend to understand how empty space can have activity rapidly appearing and disappearing from that empty space. There is, as I have explained above, an absolute restriction on measuring position & motion simultaneously, and whilst it might be possible for things to happen outside our capacity for observation for that reason (and others), there is no creation and destruction from 'no matter' (empty space). It's no more satisfying than creation & destruction from 'nothing' (no matter or empty space).
You may need to accept the limitations to physicists' capacity to measure, the limits of measurement itself, and a lack of good logic binding observations into a rationally satisfying explanation. Most likely you have not detected where the matter comes from and goes to when it is 'borrowed' and appears in empty space - more work required, rather than 'non-causal' magic.
Causality is also the issue with entangled particles. If they can be measured simultaneously, they may be causally connected by that fact alone. Your answer seems to be that entanglement is a special state between some and not all particles, in some circumstances where we can simultaneously measure them. It seems to be another limitation to measurement and possibly an inability to exclude the measurer's effect upon the measurement. More and better measurements required - that's what science does best, rather than coming up with magical explanations for what they have been able to measure so far.
7. Measurement affects the correlations in entangled pairs only insofar as it creates a particular correlation depending on the particular measurement, but measurement (or uncertainty) does not explain the one-to-one correspondence itself. The correspondence is predicted by the mathematics, and does not depend on whether the second particle is measured; experiments serve only to confirm Bell’s theorem. Nature is non-local whether we measure it or not, and non-locality forces us to abandon our classical intuition of causality whether we like it or not.
Another example of why measurement itself does not explain the strangeness of quantum mechanics is the two-slit experiment, beginning with the interference pattern created in the absence of measurement at the slits, even when particles are sent through one or the other slit one at a time, and ending with the ‘collapse of the wave function’ when a measuring device can determine which slit the particle passed through. Disturbance by measurement (one of the ‘explanations’ for uncertainty) is insufficient to explain collapse, since collapse happens even if there is a measuring device at only one slit, say A, and the particle goes through B. Though the particle has not been observed to go through B, and so in the classical sense has not been disturbed, it behaves exactly as if it had been observed – and “disturbed” – directly. There are various physical interpretations for this, but none fits our classical notions of physicality, save, perhaps, the many worlds interpretation, which says that you see a particular result classically in a classical universe because there are separate universes, mutually-inaccessible, for all possible results. A less extravagant interpretation is that the particle, the two slits, the measuring device, the screen, and the observer are all in an entangled state, and the system must be seen holistically, i.e. what we think of as a particle is really an element in a larger information space where how we observe the particle to behave depends on how much information is made available to the system. If the particle is known not to go through A, then the entropy (lack of knowledge) of the system is reduced, and the observed behavior of the particle reflects this by the reduction (“collapse”) of the wave of possibilities to one outcome. The word “interpretation” is used advisedly rather than “explanation”. Nobody as yet understands how this can be.
So entanglement is not just a special case for specially prepared particles, but is at the very heart of quantum mechanics. My advice to anyone interested in the subject is to study it. There are free courses on the internet for non-physicists (though not math-free) where one can get at least a working understanding.
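The sense in which entanglement is "at the very heart" of quantum mechanics can be made quantitative via the CHSH form of Bell's theorem. The sketch below (standard textbook angles, my own illustration) computes the singlet-state correlations at the four CHSH settings; any local hidden-variable model is bounded by 2, while quantum mechanics reaches 2√2.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Singlet-state correlation for measurement angles a and b (x-z plane)."""
    spin = lambda t: np.cos(t) * sz + np.sin(t) * sx
    op = np.kron(spin(a), spin(b))
    return (singlet.conj() @ op @ singlet).real

# Standard CHSH settings: a = 0, a' = pi/2, b = pi/4, b' = 3*pi/4
a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(S)   # 2*sqrt(2) ~ 2.828, above the local-hidden-variable bound of 2
```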
8. I agree that measurement requires analysis of many factors to eliminate the measurer, but the problem might be deeper than ensuring your instruments perform to expectations that can be reasonably interpreted. I suspect that our expectations are currently too low and our interpretations too incomplete. Unreasonable conclusions such as events with non-causal connections that are somehow connected might require ideas that have not been thought of, due to our observational limitations. It is an interim position that might require more humility in admitting limitations rather than concluding that magic is at work.
The subject can be dealt with conceptually without math, although advancing current theory might require advanced math. Your explanation is clear enough about what we currently know about photons through slits, but it's what we don't know that might be more important to create a reasonable framework - not a task I would welcome.
However, I have a problem with the conceptual framework used by science anyway. As explained in my earlier post, there is an absolute restriction on measuring position and motion simultaneously (the duration and length of a wave function is as close to pinning them together as we can get). The fact that science expects to be able to do so, dents my faith considerably about what science thinks it's doing. It can measure, but it can't seem to understand the limitations of measurement itself. Fluctuations involving energy from existing sources might easily arise - indeed all particle interactions would be affected by a lack of simultaneity in measurement - so many causal fluctuations could 'appear' to involve creation ex nihilo.
I wouldn't be too keen on accepting conclusions such as 'non-causally entangled particles' as currently explained, given that physicists haven't even come to grips with the absolute frame limitation of measuring position & motion at the exact same time (and not as a smear). Then I read about physicists like Krauss taking uncertainty and extending it to a creation ex nihilo in nothing (not even a void) and I wonder at the investment of time involved. I pass, but I will continue to pass on my comments until physics is as sensible as the regular world we see around us.
14. As I explained, non-locality is not the result of observation or its limits, nor is it “magic”. It is, rather, a fundamental property of nature. If there exists a reality independent of our perception of it, then it is non-local. One may deny the predicate of the previous sentence, and posit instead some variety of idealism or even solipsism. In that case “magic” might indeed be an appropriate description.
The problem of proving Bell’s theorem because of relativistic frames of reference is known, and much ingenuity is going into finding ways to overcome the problem, with real progress.
You’re right about humility. Very little is ever final in science. But non-locality is well-established and universally accepted by physicists. There are metaphysical claims made by physicists that go too far, Krauss and Hawking come to mind, but much superb theoretical and empirical work is being done, despite the enormously difficult challenges. Notwithstanding the occasional glitch, physics remains a model of how science should be conducted.
Psychology Wiki
Quantum chemistry
This article is a historical introduction to the theoretical concepts of quantum chemistry. For information on computational methods in chemistry and more recent and/or technical aspects of quantum chemistry, see computational chemistry. For theoretical concepts related to chemistry see theoretical chemistry.
Main article: History of quantum mechanics
The history of quantum chemistry began essentially with the 1838 discovery of cathode rays by Michael Faraday, the 1859 statement of the black body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system could be discrete, and the 1900 quantum hypothesis by Max Planck that any energy radiating atomic system can theoretically be divided into a number of discrete ‘energy elements’ ε such that each of these energy elements is proportional to the frequency ν with which they each individually radiate energy, as defined by the following formula:
ε = hν
where h is a numerical value called Planck’s Constant. Then, in 1905, to explain the photoelectric effect (1839), i.e. that shining light on certain materials can function to eject electrons from the material, Albert Einstein postulated, as based on Planck’s quantum hypothesis, that light itself consists of individual quantum particles, which later came to be called photons (1926). In the years to follow, this theoretical basis slowly began to be applied to chemical structure, reactivity, and bonding.
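To get a feel for the scale of Planck's "energy elements", here is a quick sketch applying ε = hν (the frequency is an assumed example value, roughly green light):

```python
# Planck's relation E = h*nu for a green photon (~540 THz), illustrating the
# scale of a single 'energy element'
h = 6.62607015e-34           # Planck constant in J*s (exact by the 2019 SI definition)
nu = 5.4e14                  # frequency in Hz (assumed example value)

E = h * nu                   # energy of one quantum, in joules
E_eV = E / 1.602176634e-19   # the same energy in electron-volts
print(E, E_eV)               # ≈ 3.58e-19 J, ≈ 2.23 eV
```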
Electronic structure
Main article: Computational chemistry#Electronic structure
Wave model
The foundation of quantum mechanics and quantum chemistry is the wave model, in which the atom is a small, dense, positively charged nucleus surrounded by electrons. Unlike the earlier Bohr model of the atom, however, the wave model describes electrons as "clouds" moving in orbitals, and their positions are represented by probability distributions rather than discrete points. The strength of this model lies in its predictive power. Specifically, it predicts the pattern of chemically similar elements found in the periodic table. The wave model is so named because electrons exhibit properties (such as interference) traditionally associated with waves. See wave-particle duality.
Valence bond
Main article: Valence bond theory
Although the mathematical basis of quantum chemistry had been laid by Schrödinger in 1926, it is generally accepted that the first true calculation in quantum chemistry was that of the German physicists Walter Heitler and Fritz London on the hydrogen (H2) molecule in 1927. Heitler and London's method was extended by the American theoretical physicist John C. Slater and the American theoretical chemist Linus Pauling to become the Valence-Bond (VB) [or Heitler-London-Slater-Pauling (HLSP)] method. In this method, attention is primarily devoted to the pairwise interactions between atoms, and this method therefore correlates closely with classical chemists' drawings of bonds.
Molecular orbital
Main article: Molecular orbital theory
An alternative approach was developed in 1929 by Friedrich Hund and Robert S. Mulliken, in which electrons are described by mathematical functions delocalized over an entire molecule. The Hund-Mulliken approach or molecular orbital (MO) method is less intuitive to chemists, but has turned out capable of predicting spectroscopic properties better than the VB method. This approach is the conceptual basis of the Hartree-Fock method and further post-Hartree-Fock methods.
Density functional theory
Main article: Density functional theory
The Thomas-Fermi model was developed independently by Thomas and Fermi in 1927. This was the first attempt to describe many-electron systems on the basis of electronic density instead of wave functions, although it was not very successful in the treatment of entire molecules. The method did provide the basis for what is now known as density functional theory. Though this method is less developed than post Hartree-Fock methods, its lower computational requirements allow it to tackle larger polyatomic molecules and even macromolecules, which has made it the most used method in computational chemistry at present.
Chemical dynamics
A further step can consist of solving the Schrödinger equation with the total molecular Hamiltonian in order to study the motion of molecules. Direct solution of the Schrödinger equation is called quantum molecular dynamics, within the semiclassical approximation semiclassical molecular dynamics, and within the classical mechanics framework molecular dynamics (MD). Statistical approaches, using for example Monte Carlo methods, are also possible.
Adiabatic chemical dynamics
Main article: Adiabatic formalism or Born-Oppenheimer approximation
In adiabatic dynamics, interatomic interactions are represented by single scalar potentials called potential energy surfaces. This is the Born-Oppenheimer approximation, introduced by Born and Oppenheimer in 1927. Pioneering applications of this in chemistry were performed by Rice and Ramsperger in 1927 and Kassel in 1928, and generalized into the RRKM theory in 1952 by Marcus, who took into account the transition state theory developed by Eyring in 1935. These methods enable simple estimates of unimolecular reaction rates from a few characteristics of the potential surface.
Non-adiabatic chemical dynamics
Main article: Vibronic coupling
Non-adiabatic dynamics consists of taking into account the interaction between several coupled potential energy surfaces (corresponding to different electronic quantum states of the molecule). The coupling terms are called vibronic couplings. The pioneering work in this field was done by Stueckelberg, Landau, and Zener in the 1930s, in their work on what is now known as the Landau-Zener transition. Their formula allows the transition probability between two diabatic potential curves in the neighborhood of an avoided crossing to be calculated.
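The Landau-Zener transition probability has a closed form that is easy to evaluate. A minimal sketch follows; the formula is the standard one, but the coupling and sweep-rate values are made up for illustration and do not come from this article.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def landau_zener_prob(coupling: float, sweep_rate: float) -> float:
    """Probability of a diabatic transition (jumping between adiabatic
    surfaces) when the diabatic energy gap sweeps linearly through an
    avoided crossing:

        P = exp(-2*pi*|H12|^2 / (hbar * |d(E1 - E2)/dt|))

    `coupling` is the off-diagonal element H12 in joules; `sweep_rate`
    is the rate of change of the diabatic energy difference in J/s.
    """
    return math.exp(-2.0 * math.pi * coupling ** 2 / (HBAR * sweep_rate))

# A fast sweep or weak coupling gives P -> 1 (the system jumps between
# adiabatic curves); a slow sweep or strong coupling gives P -> 0
# (adiabatic following).
```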
Quantum chemistry and quantum field theory
The application of quantum field theory (QFT) to chemical systems and theories has become increasingly common in the modern physical sciences. One of the first and most fundamentally explicit appearances of this is seen in the theory of the photomagneton. In this system, plasmas, which are ubiquitous in both physics and chemistry, are studied in order to determine the basic quantization of the underlying bosonic field. However, quantum field theory is of interest in many fields of chemistry, including: nuclear chemistry, astrochemistry, sonochemistry, and quantum hydrodynamics. Field theoretic methods have also been critical in developing the ab initio Effective Hamiltonian theory of semi-empirical pi-electron methods.
See also
Further reading
• Pauling, L. (1954). General Chemistry. Dover Publications. ISBN 0-486-65622-5.
• Landau, L.D. and Lifshitz, E.M. Quantum Mechanics: Non-relativistic Theory (Course of Theoretical Physics, vol. 3). Pergamon Press.
External links
Nobel lectures by quantum chemists
ee603c32ca865266 | Take the 2-minute tour ×
As far as I can check, the adiabatic theorem in quantum mechanics can be proven exactly when there is no crossing between (pseudo-)time-evolved energy levels. To be a little bit more explicit, one describes a system using the Hamiltonian $H\left(s\right)$ verifying $H\left(s=0\right)=H_{0}$ and $H\left(s=1\right)=H_{1}$, with $s=\left(t-t_{0}\right)/T$ and $T=t_{1}-t_{0}$, $t_{0,1}$ being the initial (final) time of the interaction switching. Then, at the time $t_{0}$, one has
$$H_{0}=\sum_{i}\varepsilon_{i}\left(0\right)P_{i}\left(0\right)$$
with the $P_{i}$'s being the projectors onto the eigenstates associated with the eigenvalue $\varepsilon_{i}\left(0\right)$, which we suppose known, i.e. $H_{0}$ can be exactly diagonalised. Then, the time evolution of the eigenstates is supposed to be given by
$$H\left(s\right)=\sum_{i}\varepsilon_{i}\left(s\right)P_{i}\left(s\right)$$
which is fairly good because it just requires that we are able to diagonalise the Hamiltonian at any time, which we can always do by the Hermiticity criterion. The adiabatic theorem (see Messiah's book for instance),
$$\lim_{T\rightarrow\infty}U_{T}\left(s\right)P_{i}\left(0\right)=P_{i}\left(s\right)\lim_{T\rightarrow\infty}U_{T}\left(s\right),$$
with the operator $U_{T}\left(s\right)$ verifying the Schrödinger equation
$$\mathbf{i}\hslash\dfrac{\partial U_{T}}{\partial s}=TH\left(s\right)U_{T}\left(s\right)$$
can be proven exactly if $\varepsilon_{i}\left(s\right)\neq\varepsilon_{j}\left(s\right)$ at any time (see e.g. Messiah or Kato).
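To make the statement concrete, here is a toy numerical check of the theorem (my own sketch, with $\hslash=1$ and an arbitrary gapped two-level $H(s)$, nothing taken from Messiah): integrating $\mathbf{i}\,\partial_{s}\psi=TH\left(s\right)\psi$ for large $T$ keeps the state in the instantaneous ground state, while small $T$ does not.

```python
import numpy as np

# Toy check of the adiabatic theorem (hbar = 1; the two-level H(s) below
# is an arbitrary gapped example).  We integrate i dpsi/ds = T H(s) psi
# over s in [0, 1] and compare the final state with the instantaneous
# ground state of H(1).

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def hamiltonian(s: float, gap: float = 1.0) -> np.ndarray:
    theta = np.pi * s / 2              # field direction rotates by 90 degrees
    return -0.5 * gap * (np.cos(theta) * SZ + np.sin(theta) * SX)

def evolve(T: float, steps: int = 4000) -> np.ndarray:
    """Fourth-order Runge-Kutta integration of i dpsi/ds = T H(s) psi."""
    ds = 1.0 / steps
    psi = np.array([1.0, 0.0], dtype=complex)      # ground state of H(0)
    f = lambda s, y: -1j * T * (hamiltonian(s) @ y)
    for n in range(steps):
        s = n * ds
        k1 = f(s, psi)
        k2 = f(s + ds / 2, psi + ds / 2 * k1)
        k3 = f(s + ds / 2, psi + ds / 2 * k2)
        k4 = f(s + ds, psi + ds * k3)
        psi = psi + ds / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return psi

def ground_state_overlap(T: float) -> float:
    """|<ground state of H(1) | final state>|^2."""
    psi = evolve(T)
    evals, evecs = np.linalg.eigh(hamiltonian(1.0))
    ground = evecs[:, np.argmin(evals)]
    return abs(np.vdot(ground, psi)) ** 2

# Slow switching (large T) stays in the instantaneous ground state;
# sudden switching (small T) leaves the state behind.
```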
Now, the Berry phase is supposed to be non-vanishing when we have a parametric curve winding close to a degeneracy, i.e. precisely when $\varepsilon_{i}\left(s\right) \approx \varepsilon_{j}\left(s\right)$. For more details, Berry defines the geometric phase as
$$\gamma_{n}\left(C\right)=-\iint_{C}d\mathbf{S}\cdot\mathbf{V}_{n}\left(\mathbf{R}\right)$$
with (I adapted Berry's notation to mine)
$$\mathbf{V}_{n}\left(\mathbf{R}\right)=\Im\left\{ \sum_{m\neq n}\dfrac{\left\langle n\left(\mathbf{R}\right)\right|\nabla_{\mathbf{R}}H\left(\mathbf{R}\right)\left|m\left(\mathbf{R}\right)\right\rangle \times\left\langle m\left(\mathbf{R}\right)\right|\nabla_{\mathbf{R}}H\left(\mathbf{R}\right)\left|n\left(\mathbf{R}\right)\right\rangle }{\left(\varepsilon_{m}\left(\mathbf{R}\right)-\varepsilon_{n}\left(\mathbf{R}\right)\right)^{2}}\right\} $$
for a trajectory along the curve $C$ in the parameter space $\mathbf{R}\left(s\right)$. In particular, Berry defines the adiabatic evolution as following from the Hamiltonian $H\left(\mathbf{R}\left(s\right)\right)$, i.e. a parametric evolution with respect to the time $s$. These are eqs. (9) and (10) in Berry's paper.
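For what it's worth, $\mathbf{V}_{n}$ can be evaluated numerically for the textbook two-level case $H\left(\mathbf{R}\right)=\tfrac{1}{2}\,\mathbf{R}\cdot\boldsymbol{\sigma}$ (my own example, not one from Berry's paper): one finds $\left|\mathbf{V}_{n}\right|=1/\left(2\left|\mathbf{R}\right|^{2}\right)$, with $\mathbf{V}_{n}$ (anti)parallel to $\mathbf{R}$, diverging at the degeneracy $\mathbf{R}=0$.

```python
import numpy as np

# Berry's V_n(R) for the two-level Hamiltonian H(R) = (1/2) R . sigma,
# evaluated directly from the sum-over-states formula above.
# (Assumed textbook example; for either band |V_n| = 1/(2 |R|^2).)

SIG = [np.array([[0, 1], [1, 0]], dtype=complex),       # sigma_x
       np.array([[0, -1j], [1j, 0]], dtype=complex),    # sigma_y
       np.array([[1, 0], [0, -1]], dtype=complex)]      # sigma_z

def berry_V(R: np.ndarray, band: int = 0) -> np.ndarray:
    H = 0.5 * sum(R[k] * SIG[k] for k in range(3))
    evals, evecs = np.linalg.eigh(H)          # band 0 = lower level
    n, m = evecs[:, band], evecs[:, 1 - band]
    gradH = [0.5 * SIG[k] for k in range(3)]  # dH/dR_k is constant here
    a = np.array([np.vdot(n, gradH[k] @ m) for k in range(3)])
    b = np.array([np.vdot(m, gradH[k] @ n) for k in range(3)])
    return np.imag(np.cross(a, b)) / (evals[1 - band] - evals[band]) ** 2

R = np.array([0.3, -1.2, 0.7])     # arbitrary point away from R = 0
V = berry_V(R)
# |V| equals 1/(2 |R|^2) and V lies along the R axis, so the curvature
# blows up as the circuit C approaches the degeneracy at R = 0.
```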
Later on (section 3), Berry argues that
The energy denominators in [the equation for $\mathbf{V}_{n}\left(\mathbf{R}\right)$ given above] show that if the circuit $C$ lies close to a point $\mathbf{R}^{\ast}$ in parameter space at which the state $n$ is involved in a degeneracy, then $\mathbf{V}_{n}\left(\mathbf{R}\right)$ and hence $\gamma_{n}\left(C\right)$, is dominated by the terms $m$ corresponding to the other states involved.
What annoys me is that the Berry phase argument uses explicitly the adiabatic theorem. So my question is desperately simple: what the hell is going on there? Can we reconcile the adiabatic theorem with the Berry phase elaboration? Is the Berry phase a kind of correction (in the sense of a perturbative expansion) to the adiabatic theorem? Is there some criterion of proximity to the degeneracy that must be satisfied in order to find the Berry phase?
Could you please ask a more specific question than "what's going on"? What does it mean "What's going on"? You described what's going on. There are other things that are going on, too, but it is not clear which of them you find interesting or confusing. There's surely no contradiction in the text you wrote. Some theorems hold when the epsilons are safely separated, some effects appear when they're not, and so on. – Luboš Motl Jul 11 '13 at 12:35
@LubošMotl Thanks for your comment, my question was written in a rush. I've tried to add more focused questions, and more details. In short I want to know if the Berry phase and the adiabatic theorem are compatible, and to what extent they are (if they are). Please tell me if what I added is still insufficient to make any sense. Thanks again. – FraSchelle Jul 11 '13 at 14:02
Berry phase and the adiabatic theorem are compatible. Some statements of the adiabatic theorem omit to mention the phase of the final wavefunction. Berry phase is an elaboration in that it says explicitly what that phase factor is. That's all there is to it. – Dan Piponi Jul 11 '13 at 23:54
@DanPiponi Thanks for your comment. Would you then say that the proximity to a degeneracy point is not a problem at all, and that the adiabatic theorem can be expanded to include the degeneracy point(s) ? If yes, would you please elaborate a bit more about that. Thanks in advance. – FraSchelle Jul 12 '13 at 7:52
@Oaoa: I wonder if part of the issue is the age of the references you are using? They were written before Berry's paper. They could be (implicitly) redefining the states to remove the Berry phase, without considering the effect of closed loops in parameter space. Perhaps a more modern reference like Nakahara would clear this up. Also, proximity to degeneracy points is not an issue if you go slowly enough; it's going through degeneracy points that breaks the adiabatic theorem and hence Berry's phase. – BebopButUnsteady Jul 12 '13 at 14:19
1 Answer (accepted)
The adiabatic theorem is required to derive the Berry phase equation in quantum mechanics. Therefore the adiabatic theorem and the Berry phase must be compatible with one another. (Though geometric derivations are possible, they usually don't employ quantum mechanics. And while illuminating what is going on mathematically, they obscure what is going on physically.)
The question of degeneracy points is a little more subtle, but let me make one thing clear: if one crosses a degeneracy point, the adiabatic theorem is no longer valid and one cannot use the Berry phase equation that you have written in the question (the denominator will become zero at the degeneracy point).
Now, let us take the spin in a magnetic field as an illustration of the Berry phase. Suppose we have a spin-1/2 particle in a magnetic field. The spin will align itself with the magnetic field and sit in the low-energy state $E=E_-$. Now, we decide to adiabatically change the direction of the magnetic field, keeping the magnitude fixed. Adiabatically means that the probability of the spin-1/2 particle transitioning to the $E=E_+$ state is vanishingly small, i.e. $\hbar/\Delta t\ll E_+-E_-$. Suppose now that the magnetic field traces out the loop below, starting and ending at the red point:
[Figure: closed loop traced by the tip of the magnetic field vector on a sphere of constant $|B|$, subtending a solid angle at the origin]
In this case, one will pick up a Berry phase equal to:
\begin{equation} \textbf{Berry Phase}=\gamma = -\frac{1}{2}\Omega \end{equation}
where $\Omega$ is the solid angle subtended. This formula is proven in Griffiths QM section 10.2. However, it is not that important to understand the overall picture.
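For a concrete circuit: if the field traces a circle at fixed polar angle $\theta$ on the sphere, the subtended solid angle is $\Omega=2\pi(1-\cos\theta)$, so $\gamma=-\pi(1-\cos\theta)$. A quick sketch of this (the circular circuit is my own choice; the formula $\gamma=-\Omega/2$ is the one quoted above):

```python
import math

# Berry phase for a spin-1/2 whose magnetic field direction traces a
# circle at fixed polar angle theta on the sphere (assumed circular
# circuit; gamma = -Omega/2 as quoted above).

def solid_angle_circle(theta: float) -> float:
    """Solid angle subtended by a circular loop at polar angle theta."""
    return 2.0 * math.pi * (1.0 - math.cos(theta))

def berry_phase_circle(theta: float) -> float:
    return -0.5 * solid_angle_circle(theta)

# An equatorial loop (theta = pi/2) subtends a hemisphere, Omega = 2*pi,
# giving gamma = -pi: the spinor comes back with a sign flip, the
# hallmark of the degeneracy sitting at B = 0 inside the sphere.
```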
I chose this example because there are a couple of things to note that make it relevant to your question:
1) The adiabatic theorem is critical in this problem for defining the Berry phase. Since the Berry phase depends on the solid angle, any transition to the $E=E_+$ state would have destroyed the meaning of tracing out the solid angle.
2) The degeneracy point lies at the center of the sphere where $B=0$, where $B$ is the magnetic field. Though the spin may traverse any loop on the sphere, it cannot go through this degeneracy point for the Berry phase to have any meaning. This degeneracy point is ultimately responsible for the acquisition of the Berry phase, however. We must in some sense "go around the degeneracy point without going through it" for one to obtain a Berry phase.
Thanks a lot for your really enlightening answer. So if I understand correctly, the adiabatic theorem always comes with a Berry phase, but usually this phase factor is just 1 (or the phase $= 0 \equiv 2\pi$) and there is no effect associated with it. When there is a degeneracy point somewhere, on the contrary, the phase can be $\neq 0 \equiv 2\pi$ and there are effects associated with it. The question which remains is: what does somewhere mean? Is the distance from the degeneracy a clear notion? What defines the space? Perhaps I should ask another question about that. Thanks again. – FraSchelle Jul 30 at 9:06
Well, I posted another question regarding the space in which the degeneracy occurs: physics.stackexchange.com/questions/128754 – FraSchelle Jul 30 at 9:35
Well, the adiabatic theorem only gives a Berry phase for $\textit{closed}$ loops. This is a very stringent condition meaning that you must end up where you started in the parameter space. If there is no closed loop, the phase factor is the usual "just a phase" that can be gauged away. Sorry it took me so long to get back to you. – Xcheckr Aug 10 at 22:08
No problem for the delay, I was actually thinking I'd been too enthusiastic a few days ago. You're perfectly right, the Berry phase only exists as a topological obstruction. Thanks again for your answer, which put me back on the right track. – FraSchelle Aug 12 at 4:00
46e8cff929988f73 | Cover image for Q is for quantum : an encyclopedia of particle physics
Q is for quantum : an encyclopedia of particle physics
Personal Author: Gribbin, John, 1946-
Publication Information: New York, NY : Free Press, [1998]
Physical Description: 545 pages : illustrations ; 25 cm
Call Number: QC793.2 .G747 1998
Material Type: Adult Non-Fiction
Home Location: Central Closed Stacks (non-circulating)
Here in one volume John Gribbin, the award-winning science writer and physicist, has collected the answer to everything you need to know about the quantum world -- the place where most of the greatest scientific advances of the twentieth century have been made. This exceptional reference begins with a thorough introduction setting out the current state of knowledge in particle physics. Throughout, Gribbin blends articles on the structure of particles and their interactions, accounts of the theoretical breakthroughs in quantum mechanics and their practical applications, and entertaining biographies of the scientists who have blazed the trail of discovery. In a special section, "Timelines," key dates in our quest to understand the quantum world are mapped out alongside landmarks in world history and the history of science. Q is for Quantum is an essential companion for anyone interested in particle physics. Historical highlights include: Isaac Newton's work on particles in the seventeenth century; the eighteenth- and nineteenth-century transformation of alchemy into chemistry, culminating in Dmitri Mendeleyev's publication of the periodic table of the elements in 1869; James Clerk Maxwell's investigation of electromagnetism and waves around the same time; and the brilliant research of Christiaan Huygens, Thomas Young, and Augustin Fresnel. Among the longer biographies in the book number those of such twentieth-century scientific giants as Erwin Schroedinger, Albert Einstein, Richard Feynman, Linus Pauling, Robert Oppenheimer, and Andrei Sakharov. Quantum physics today is directly and continuously relevant to life. Fundamental life processes such as the workings of DNA depend on the quantum behavior of atoms. As entries in the encyclopedia note, the human conquest of this micro-realm has already led to the development of computer chips and the valuable technique of carbon dating, and we are on the verge of still greater practical advances.
Gribbin shows that real quantum computer technology is gradually realizing the dreams of science fiction, and identifies the amazing possibilities of energy from nuclear fusion. No one can doubt that the fruits of quantum electrodynamics, the most accurate scientific theory ever developed, have yet to be fully gathered. And no one will dispute that this is the only reference to this weird and wonderful world. The curious, the imaginative, and the bold will require this encyclopedia of the fundamental science of the future.
Author Notes
John R. Gribbin (born 19 March 1946) is a British science writer, an astrophysicist, and a visiting fellow in astronomy at the University of Sussex. The topical range of his prolific writings include quantum physics, human evolution, climate change, global warming, the origins of the universe, and biographies of famous scientists. He also writes science fiction.
In 1984, Gribbin published In Search of Schrödinger's Cat: Quantum Physics and Reality, the book that he is best known for, which continues to sell well even after years of publication. At the 2009 World Conference of Science Journalists, the Association of British Science Writers presented Gribbin with their Lifetime Achievement award.
(Bowker Author Biography) John Gribbin, visiting fellow in astronomy at the University of Sussex. He is married to Mary Gribbin, also a science writer.
(Publisher Provided)
Reviews (3)
Booklist Review
This dictionary by a well-known science popularizer provides good A-Z coverage of the field of quantum mechanics. Unlike many other science dictionaries, it covers more than concepts and terms. There are entries for people (Feynman, Richard Phillips; Huygens, Christiaan; Oppenheimer, Robert), places (Brookhaven National Laboratory, Fermilab), and historical highlights (Manhattan Project). Most entries are a few sentences, although biographies are generally longer, and some entries (relativity, string theory, time travel) cover several pages. There are ample cross-references. Some entries include suggested further readings; several of these are other books by Gribbin. Following the entries is a bibliography that lists the books referred to in the text, together with others; the more technical titles are indicated with an asterisk. The volume concludes with time lines of birth dates of famous scientists, key dates in physical sciences, and key dates in history. The book's audience ranges from the interested layperson to undergraduate physics majors to professional physicists. It is narrower in scope than Macmillan Encyclopedia of Physics [RBB Ap 15 97] or McGraw-Hill Dictionary of Physics (McGraw-Hill, 1997). However, Gribbin's approach makes a difficult topic accessible even for those who don't have a science background. Recommended for large public and academic libraries.
Library Journal Review
Written for the lay reader, this work on the complex world of particle physics by British astronomer Gribbin, the renowned author of such popular science books as Schrödinger's Kittens and the Search for Reality (LJ 5/1/95), is well written, informative, and highly accessible. Features include an introductory essay that puts the subject in historical perspective, clearly written entries, a brief bibliography, and time lines showing the birth dates of scientists and key dates in science and history. While physicists and other scientists will find it too basic for their purposes, this affordable one-volume book meets the needs of the general reader, filling a gap in the popular science literature. Recommended for general reference collections in public and academic libraries.Paul G. Haschak, Southeastern Louisiana Univ., Hammond (c) Copyright 2010. Library Journals LLC, a wholly owned subsidiary of Media Source, Inc. No redistribution permitted.
Choice Review
When the title says "Quantum" and the subtitle says "Particle," one expects an encyclopedia of modern fundamental physics. Gribbin's book is considerably more than that, since it includes historical background and sketches of principal characters, starting with Democritus of Abdera in the fifth century BCE. Readers should be warned (or encouraged) that there is no mathematics; there are not even any formulas; nevertheless, the author has tried to explain the principal contemporary ideas concerning relativity, quantum theory, and particles, and to tell, in limited space, how they arose. This is not the place to list errors, but readers should be warned that even skimming through a few dozen articles reveals more of them than there should be. There is also opinion, which this reviewer supposes is inevitable in a one-man encyclopedia. Gribbin is more partial to fringe ideas than most physicists in this country; on the other hand, some of these ideas deserve to be noted and discussed. This book will be useful to the general reader trying to understand new developments and to undergraduate students trying to orient themselves in the huge culture of modern science. D. Park emeritus, Williams College
Introduction: The quest for the quantum This quick overview of a hundred years of scientific investigation of the microworld is intended to put the detail of the main section of this book in an historical perspective. All the technical terms are fully explained in the alphabetical section. The quantum world is the world of the very small -- the microworld. Although, as we shall see, quantum effects can be important for objects as large as molecules, the real quantum domain is in the subatomic world of particle physics. The first subatomic particle, the electron, was only identified, by J. J. Thomson, in 1897, exactly 100 years before this book, summing up our present understanding of the microworld, was completed. But it isn't just the neatness of this anniversary that makes this a good time to take stock of the quantum world; particle physicists have now developed an understanding of what things are made of, and how those things interact with one another, that is more complete and satisfying than at any time since Thomson's discovery changed the way people thought about the microworld. The standard model of particle physics, based upon the rules of quantum mechanics, tells us how the world is built up from the fundamental building blocks of quarks and leptons, held together by the exchange of particles called gluons and vector bosons. But don't imagine that even the physicists believe that the standard model is the last word. After all, it doesn't include gravity. The structure of theoretical physics in the twentieth century was built on two great theories, the general theory of relativity (which describes gravity and the Universe at large) and quantum mechanics (which describes the microworld). Unifying those two great theories into one package, a theory of everything, is the Holy Grail that physicists seek as we enter the 21st century. 
Experiments that probe the accuracy of the standard model to greater and greater precision are being carried out using particle accelerators like those at CERN, in Geneva, and Fermilab, in Chicago. From time to time, hints that the standard theory is not the whole story emerge. This gives the opportunity for newspapers to run sensational headlines proclaiming that physics is in turmoil; in fact, these hints of something beyond the standard model are welcomed by the physicists, who are only too aware that their theory, beautiful though it is, is not the last word. Unfortunately, as yet none of those hints of what may lie beyond the standard model has stood up to further investigation. As of the spring of 1997, the standard model is still the best game in town. But whatever lies beyond the standard model, it will still be based upon the rules of quantum physics. Just as the general theory of relativity includes the Newtonian version of gravity within itself as a special case, so that Newton's theory is still a useful and accurate description of how things work in many applications (such as calculating the trajectory of a space probe being sent to Jupiter), so any improved theory of the microworld must include the quantum theory within itself. Apples didn't start falling upwards when Albert Einstein came up with an improved theory of gravity; and no improved theory of physics will ever take away the weirdness of the quantum world. By the standards of everyday common sense, the quantum world is very weird indeed. One of the key examples is the phenomenon of wave-particle duality. J. J. Thomson opened up the microworld to investigation when he found that the electron is a particle; three decades later, his son George proved that electrons are waves. Both of them were right (and they each won a Nobel Prize for their work). An electron is a particle, and it is a wave. 
Or rather, it is neither a particle nor a wave, but a quantum entity that will respond to one sort of experiment by behaving like a particle, and to another set of experiments by behaving like a wave. The same is true of light -- it can behave either like a stream of particles (photons) or like a wave, depending on the circumstances. Indeed, it is, in principle, true of everything, although the duality does not show up with noticeable strength in the everyday world (which, of course, is why we do not regard the consequences of wave-particle duality as common sense). All of this is related to the phenomenon of quantum uncertainty. A quantum entity, such as an electron or a photon, does not have a well-determined set of properties, in the way that a billiard ball rolling across the table has a precisely determined velocity and a precisely determined position at any instant. The photon and the electron (and other denizens of the microworld) do not know, and cannot know, both precisely where they are and precisely where they are going. It may seem an esoteric and bizarre idea, of no great practical consequence in the everyday world. But it is this quantum uncertainty that allows hydrogen nuclei to fuse together and generate heat inside the Sun, so without it we would not be here to wonder at such things (quantum uncertainty is also important in the process of radioactive decay, for substances such as uranium-235). This highlights an important point about quantum physics. It is not just some exotic theory that academics in their ivory towers study as a kind of intellectual exercise, of no relevance to everyday life. You need quantum physics in order to calculate how to make an atom bomb, or a nuclear power station, that works properly -- which is certainly relevant to the modern world. And you also need quantum physics in order to design much more domestic items of equipment, such as lasers. 
Not everybody immediately thinks of a laser as a piece of domestic equipment; but remember that a laser is at the heart of any CD player, reading the information stored on the disc itself; and the laser's close cousin, the maser, is used in amplifying faint signals, including those from communications satellites that feed TV into your home. Where does the quantum physics come in? Because lasers operate on a principle called stimulated emission, a purely quantum process, whose statistical principles were first spelled out by Albert Einstein as long ago as 1916. If an atom has absorbed energy in some way, so that it is in what is called an excited state, it can be triggered into releasing a pulse of electromagnetic energy (a photon) at a precisely determined wavelength (a wavelength that is determined by the quantum rules) by giving it a suitable nudge. A suitable nudge happens when a photon with exactly the right wavelength (the same wavelength as the photon that the excited atom is primed to emit) passes by. So, in a process rather like the chain reaction of atomic fission that goes on in a nuclear bomb, if a whole array of atoms has been excited in the right way, a single photon passing through the array (perhaps in a ruby crystal) can trigger all of them to emit electromagnetic radiation (light) in a pulse in which all of the waves are marching precisely in step with one another. Because all of the waves go up together and go down together, this produces a powerful beam of very pure electromagnetic radiation (that is, a very pure colour). Quantum physics is also important in the design and operation of anything which contains a semiconductor, including computer chips -- not just the computer chips in your home computer, but the ones in your TV, hi-fi, washing machine and car. 
Semiconductors are materials with conducting properties that are intermediate between those of insulators (in which the electrons are tightly bound to their respective atomic nuclei) and conductors (in which some electrons are able to roam more or less freely through the material). In a semiconductor, some electrons are only just attached to their atoms, and can be made to hop from one atom to the next under the right circumstances. The way the hopping takes place, and the behaviour of electrons in general, depends on a certain set of quantum rules known as Fermi-Dirac statistics (the behaviour of photons, in lasers and elsewhere, depends on another set of quantum rules, Bose-Einstein statistics). After semiconductors, it is logical to mention superconductors -- materials in which electricity flows without any resistance at all. Superconductors are beginning to have practical applications (including in computing), and once again the reason why they conduct electricity the way they do is explained in terms of quantum physics -- in this case, because under the right circumstances in some materials electrons stop obeying Fermi-Dirac statistics, and start obeying Bose-Einstein statistics, behaving like photons. Electrons, of course, are found in the outer parts of atoms, and form the interface between different atoms in molecules. The behaviour of electrons in atoms and molecules is entirely described by quantum physics; and since the interactions between atoms and molecules are the raw material of chemistry, this means that chemistry is described by quantum physics. And not just the kind of schoolboy chemistry used to make impressive smells and explosive interactions. Life itself is based upon complex chemical interactions, most notably involving the archetypal molecule of life, DNA. 
At the very heart of the process of life lies the ability of a DNA molecule, the famous double-stranded helix, to 'unzip' itself and make two copies of the original double helix by building up a new partner for each strand of the original molecules, using each unzipped single molecule as a template. The links that are used in this process to hold the strands together most of the time, but allow them to unzip in this way when it is appropriate, are a kind of chemical bond, known as the hydrogen bond. In a hydrogen bond, a single proton (the nucleus of a hydrogen atom) is shared between two atoms (or between two molecules), forming a link between them. The way fundamental life processes operate can only be explained if allowance is made for quantum processes at work in hydrogen-bonded systems. As well as the importance of quantum physics in providing an understanding of the chemistry of life, an understanding of quantum chemistry is an integral part of the recent successes that have been achieved in the field of genetic engineering. In order to make progress in taking genes apart, adding bits of new genetic material and putting them back together again, you have to understand how and why atoms join together in certain sequences but not in others, why certain chemical bonds have a certain strength, and why those bonds hold atoms and molecules a certain distance apart from one another. You might make some progress by trial and error, without understanding the quantum physics involved; but it would take an awful long time before you got anywhere (evolution, of course, does operate by a process of trial and error, and has got somewhere because it has been going on for an awful long time). 
In fact, although there are other forces which operate deep within the atom (and which form the subject of much of this book), if you understand the behaviour of electrons and the behaviour of photons (light) then you understand everything that matters in the everyday world, except gravity and nuclear power stations. Apart from gravity, everything that is important in the home (including the electricity generated in nuclear power stations) can be described in terms of the way electrons interact with one another, which determines the way that atoms interact with one another, and the way they interact with electromagnetic radiation, including light. We don't just mean that all of this can be described in general terms, in a qualitative, hand-waving fashion. It can be described quantitatively, to a staggering accuracy. The greatest triumph of theoretical quantum physics (indeed, of all physics) is the theory that describes light and matter in this way. It is called quantum electrodynamics (QED), and it was developed in its finished form in the 1940s, most notably by Richard Feynman. QED tells you about every possible interaction between light and matter (to a physicist, 'light' is used as shorthand for all electromagnetic radiation), and it does so to an accuracy of four parts in a hundred billion. It is the most accurate scientific theory ever developed, judged by the criterion of how closely the predictions of the theory agree with the results of experiments carried out in laboratories here on Earth. Following the triumph of QED, it was used as the template for the construction of a similar theory of what goes on inside the protons and neutrons that make up the nuclei of atoms -- a theory known as quantum chromodynamics, or QCD. Both QED and QCD are components of the standard model. J. J. Thomson could never have imagined what his discovery of the electron would lead to. 
But the first steps towards a complete theory of quantum physics, and the first hint of the existence of the entities known as quanta, appeared within three years of Thomson's discovery, in 1900. That first step towards quantum physics came, though, not from the investigation of electrons, but from the investigation of the other key component of QED, photons. At the end of the 19th century, nobody thought of light in terms of photons. Many observations -- including the famous double-slit experiment carried out by Thomas Young -- had shown that light is a form of wave. The equations of electromagnetism, discovered by James Clerk Maxwell, also described light as a wave. But Max Planck discovered that certain features of the way in which light is emitted and absorbed could be explained only if the radiation was being parcelled out in lumps of certain sizes, called quanta. Planck's discovery was announced at a meeting of the Berlin Physical Society, in October 1900. But at that time nobody thought that what he had described implied that light only existed (or ever existed!) in the form of quanta; the assumption was that there was some property of atoms which meant that light could be emitted or absorbed only in lumps of a certain size, but that 'really' the light was a wave. The first (and for a long time the only) person to take the idea of light quanta seriously was Einstein. But he was a junior patent office clerk at the time, with no formal academic connections, and hadn't yet even finished his PhD. In 1905 he published a paper in which he used the idea of quanta to explain another puzzling feature of the way light is absorbed, the photoelectric effect. In order to explain this phenomenon (the way electrons are knocked out of a metal surface by light), Einstein used the idea that light actually travels as a stream of little particles, what we would now call photons. 
The idea was anathema to most physicists, and even Einstein was cautious about promoting the idea -- it was not until 1909 that he made the first reference in print to light as being made up of 'point-like quanta'. In spite of his caution, one physicist, Robert Millikan, was so annoyed by the suggestion that he spent the best part of ten years carrying out a series of superb experiments aimed at proving that Einstein's idea was wrong. He succeeded only in proving -- as he graciously acknowledged -- that Einstein had been right. It was after Millikan's experiments had established beyond doubt the reality of photons (which were not actually given that name until later) that Einstein received his Nobel Prize for this work (the 1921 prize, but actually awarded in 1922). Millikan received the Nobel Prize, partly for this work, in 1923. While all this was going on, other physicists, led by Niels Bohr, had been making great strides by applying quantum ideas to an understanding of the structure of the atom. It was Bohr who came up with the image of an atom that is still basically the one we learn about when we first encounter the idea of atoms in school -- a tiny central nucleus, around which electrons circle in a manner reminiscent of the way planets orbit around the Sun. Bohr's model, in the form in which it was developed by 1913, had one spectacular success: it could explain the way in which atoms produce bright and dark lines at precisely defined wavelengths in the rainbow spectrum of light. The difference in energy between any two electron orbits was precisely defined by the model, and an electron jumping from one orbit to the other would emit or absorb light at a very precise wavelength, corresponding to that energy difference. But Bohr's model introduced the bizarre idea that the electron did indeed 'jump', instantaneously, from one orbit to the other, without crossing the intervening space (this has become known as a 'quantum leap'). 
First it was in one orbit, then it was in the other, without ever crossing the gap. Bohr's model of the atom also still used the idea of electrons as particles, like little billiard balls, and light as a wave. But by the time Einstein and Millikan received their Nobel Prizes, it was clear that there was more to light than this simple picture accounted for. As Einstein put it in 1924, 'there are therefore now two theories of light, both indispensable...without any logical connection'. The next big step, which led to the first full quantum theory, came when Louis de Broglie pointed out that there was also more to electrons than the simple picture encapsulated in the Bohr model accounted for. De Broglie made the leap of imagination (obvious with hindsight, but a breakthrough at the time) of suggesting that if something that had traditionally been regarded as a wave (light) could also be treated as a particle, then maybe something that had traditionally been regarded as a particle (the electron) could also be treated as a wave. Of course, he did more than just speculate along these lines. He took the same kind of quantum calculations that had been pioneered by Planck and Einstein in their description of light and turned the equations around, plugging in the numbers appropriate for electrons. And he suggested that what actually 'travelled round' an electron orbit in an atom was not a little particle, but a standing wave, like the wave corresponding to a pure note on a plucked violin string. De Broglie's idea was published in 1925. Although the idea of electrons behaving as waves was puzzling, this business of standing waves looked very attractive because it seemed to get rid of the strange quantum jumping. Now, it looked as if the transition of an electron from one energy level to another could be explained in terms of the vibration of the wave, changing from one harmonic (one note) to another. 
It was the way in which this idea seemed to restore a sense of normality to the quantum world that attracted Erwin Schrödinger, who worked out a complete mathematical description of the behaviour of electrons in atoms, based on the wave idea, by the end of 1926. He thought that his wave equation for the electron had done away with the need for what he called 'damned quantum jumping'. But he was wrong. Also by 1926, using a completely different approach based entirely on the idea of electrons as particles, Werner Heisenberg and his colleagues had found another way to describe the behaviour of electrons in atoms, and elsewhere -- another complete mathematical quantum theory. And as if that weren't enough, Paul Dirac had found yet another mathematical description of the quantum world. It soon turned out that all of these mathematical approaches were formally equivalent to one another, different views of the same quantum world (a bit like the choice between giving a date in Roman numerals or Arabic notation). It really didn't matter which set of equations you used, since they all described the same thing and gave the same answers. To Schrödinger's disgust, the 'damned quantum jumping' had not been eliminated after all; but, ironically, because most physicists are very familiar with how to manipulate wave equations, it was Schrödinger's variation on the theme, based on his equation for the wave function of an electron, that soon became the conventional way to do calculations in quantum mechanics. This tradition was reinforced by the mounting evidence (including the experiments carried out by George Thomson in 1927) that electrons did indeed behave like waves (the ultimate proof of this came when electrons were persuaded to participate in a version of the double-slit experiment, and produced the classic diffraction effects seen with light under the equivalent circumstances). 
But none of this stopped electrons behaving like particles in all the experiments where they had always behaved like particles. By the end of the 1920s, physicists had a choice of different mathematical descriptions of the microworld, all of which worked perfectly and gave the right answers (in terms of predicting the outcome of experiments), but all of which included bizarre features such as quantum jumping, wave-particle duality and uncertainty. Niels Bohr developed a way of picturing what was going on that was taught as the standard version of quantum physics for half a century (and is still taught in far too many places), but which if anything made the situation even more confusing. This 'Copenhagen interpretation' says that entities such as electrons do not exist when they are not being observed or measured in some way, but spread out as a cloud of probability, with a definite probability of being found in one place, another probability of being detected somewhere else, and so on. When you decide to measure the position of the electron, there is a 'collapse of the wave function', and it chooses (at random, in accordance with the rules of probability, the same rules that operate in a casino) one position to be in. But as soon as you stop looking at it, it dissolves into a new cloud of probability, described by a wave function spreading out from the site where you last saw it. It was their disgust with this image of the world that led Einstein and Schrödinger, in particular, to fight a rearguard battle against the Copenhagen interpretation over the next twenty years, each of them independently (but with moral support from each other) attempting to prove its logical absurdity with the aid of thought experiments, notably the famous example of Schrödinger's hypothetical cat, a creature which, according to the strict rules of the Copenhagen interpretation, can be both dead and alive at the same time. 
While this debate (between Einstein and Schrödinger on one side, and Bohr on the other) was going on, most physicists ignored the weird philosophical implications of the Copenhagen interpretation, and just used the Schrödinger equation as a tool to do a job, working out how things like electrons behaved in the quantum world. Just as a car driver doesn't need to understand what goes on beneath the bonnet of the car in order to get from A to B, as long as quantum mechanics worked, you didn't have to understand it, even (as Linus Pauling showed) to get to grips with quantum chemistry. The last thing most quantum physicists wanted was yet another mathematical description of the quantum world, and when Richard Feynman provided just that, in his PhD thesis in 1942, hardly anybody even noticed (most physicists at the time were, in any case, distracted by the Second World War). This has proved a great shame for subsequent generations of students, since Feynman's approach, using path integrals, is actually simpler conceptually than any of the other approaches, and certainly no more difficult to handle mathematically. It also has the great merit of dealing with classical physics (the old ideas of Newton) and quantum physics in one package; it is literally true that if physics were taught Feynman's way from the beginning, students would only ever have to learn the one approach to handle everything. As it is, although over the years the experts have come to accept that Feynman's approach is the best one to use in tackling real problems at the research level, the way almost all students get to path integrals is by learning classical physics first (in school), then quantum physics the hard way (usually in the form of Schrödinger's wave function, at undergraduate level) then, after completing at least one degree, being introduced to the simple way to do the job.
Don't just take our word for this being the simplest way to tackle physics -- John Wheeler, Feynman's thesis supervisor, has said that the thesis marks the moment in the history of physics 'when quantum theory became simpler than classical theory'. Feynman's approach is not the standard way to teach quantum physics at undergraduate level (or classical physics in schools) for the same reason that the Betamax system is not the standard format for home video -- because an inferior system got established in the market place first, and maintains its position as much through inertia as anything else. Indeed, there is a deep flaw in the whole way in which science is taught, by recapitulating the work of the great scientists from Galileo to the present day, and it is no wonder that this approach bores the pants off kids in school. The right way to teach science is to start out with the exciting new ideas, things like quantum physics and black holes, building on the physical principles and not worrying too much too soon about the mathematical subtleties. Those children who don't want a career in science will at least go away with some idea of what the excitement is all about, and those who do want a career in science will be strongly motivated to learn the maths when it becomes necessary. We speak from experience -- one of us (JG) got turned on to science in just this way, by reading books that were allegedly too advanced for him and went way beyond the school curriculum, but which gave a feel for the mystery and excitement of quantum physics and cosmology even where the equations were at that time unintelligible to him. In Feynman's case, the path integral approach led him to quantum electrodynamics, and to the Feynman diagrams which have become an essential tool of all research in theoretical particle physics. 
But while these applications of quantum theory were providing the key to unlock an understanding of the microworld, even after the Second World War there were still a few theorists who worried about the fundamental philosophy of quantum mechanics, and what it was telling us about the nature of the Universe we live in. For those who took the trouble to worry in this way, there was no getting away from the weirdness of the quantum world. Building from another thought experiment intended to prove the non-logical nature of quantum theory (the EPR experiment, dreamed up by Einstein and two of his colleagues), the work of David Bohm in the 1950s and John Bell in the 1960s led to the realization that it would actually be possible to carry out an experiment which would test the non-commonsensical aspects of quantum theory in a definitive manner. What Einstein had correctly appreciated was that every version of quantum theory has built into it a breakdown of what is called 'local reality'. 'Local', in this sense, means that no communication of any kind travels faster than light. 'Reality' means that the world exists when you are not looking at it, and that electrons, for example, do not dissolve into clouds of probability, wave functions waiting to collapse, when you stop looking at them. Quantum physics (any and every formulation of quantum physics) says that you can't have both. It doesn't say which one you have to do without, but one of them you must do without. What became known as the Bell test provided a way to see whether local reality applies in the (for want of a better word) real world -- specifically, in the microworld. The appropriate experiments were carried out by several teams in the 1980s, most definitively by Alain Aspect and his colleagues in Paris, using photons. They found that the predictions of quantum theory are indeed borne out by experiment -- the quantum world is not both local and real. 
So today you have no choice of options, if you want to think of the world as being made up of real entities which exist all the time, even when you are not looking at them; there is no escape from the conclusion that the world is non-local, meaning that there are communications between quantum entities that operate not just faster than light, but actually instantaneously. Einstein called this 'spooky action at a distance'. The other option is to abandon both locality and reality, but most physicists prefer to cling on to one of the familiar features of the commonsense world, as long as that is allowed by the quantum rules. Our own preference is for reality, even at the expense of locality; but that is just a personal preference, and you are quite free to choose the other option, the traditional Copenhagen interpretation involving both collapsing wave functions and spooky action at a distance, if that makes you happier. What you are not free to do, no matter how unhappy you are as a result, is to think that the microworld is both local and real. The bottom line is that the microworld does not conform to the rules of common sense determined by our everyday experience. Why should it? We do not live in the microworld, and our everyday experience is severely limited to a middle range of scales (of both space and time) intermediate between the microworld and the cosmos. The important thing is not to worry about this. The greatest of all the quantum mechanics, Richard Feynman, gave a series of lectures at Cornell University on the theme The Character of Physical Law (published in book form by BBC Publications in 1965). In one of those lectures, he discussed the quantum mechanical view of nature, and in the introduction to that lecture he gave his audience a warning about the weirdness they were about to encounter. What he said then, more than 30 years ago, applies with equal force today: I think I can safely say that nobody understands quantum mechanics. 
So do not take the lecture too seriously, feeling that you really have to understand in terms of some model what I am going to describe, but just relax and enjoy it. I am going to tell you what nature behaves like. If you will simply admit that maybe she does behave like this, you will find her a delightful, entrancing thing. Do not keep saying to yourself, if you can possibly avoid it, 'But how can it be like that?' because you will go 'down the drain' into a blind alley from which nobody has yet escaped. Nobody knows how it can be like that. That is the spirit in which we offer you our guide to the quantum world; take the advice of the master -- relax and enjoy it. Copyright © 1998 by John & Mary Gribbin. Excerpted from Q Is for Quantum: An Encyclopedia of Particle Physics by John Gribbin.
WikiJournal of Science/A card game for Bell's theorem and its loopholes
DOI: 10.15347/wjs/2018.005
Article information
Authors: Guy Vandegrift[i], Joshua Stomel
In 1964 John Stewart Bell made an observation about the behavior of particles separated by macroscopic distances that had puzzled physicists for at least 29 years, ever since Einstein, Podolsky and Rosen put forth the famous EPR paradox. Bell made certain assumptions leading to an inequality that entangled particles are routinely observed to violate in what are now called Bell test experiments. As an alternative to showing students a "proof" of Bell's inequality, we introduce a card game that is impossible to win. The solitaire version is so simple it can be used to introduce binomial statistics without mentioning physics or Bell's theorem. Things get interesting in the partners' version of the game because Alice and Bob can win, but only if they cheat. We have identified three cheats, and each corresponds to a Bell's theorem "loophole". This gives the instructor an excuse to discuss detector error, causality, and why there is a maximum speed at which information can travel.
The conundrum
Although this can be called a theorem, it might be better viewed as something "spooky" that has been routinely observed, and is consistent with quantum mechanics. But this puzzling behavior violates what might be called common notions about what is and is not possible.[1][2] Students typically encounter a mathematical theorem as an incomprehensible statement that cannot be digested until it is first proven and then applied in practice. It is not uncommon for novices to refer to some version of Bell's inequality as Bell's theorem because the inequality can be mathematically "proven".[3] The problem is that what is proven turns out to be untrue.
David Mermin described an imaginary device not unlike that shown in Fig. 1, referred to the fact that such a device actually exists as a conundrum, and then pointed out that many physicists deny that it is a conundrum.[4]
A simple Bell's theorem experiment
It is customary to name the particles[5] in a Bell's theorem experiment "Alice" and "Bob", an anthropomorphism that serves to emphasize the fact that a pair of humans cannot win the card game ... unless they cheat. To some experts, a "loophole" is a constraint on any theory that might replace quantum mechanics.[6] It is also possible to view a loophole as a physical mechanism by which the outcome of a Bell's theorem experiment might seem less "spooky". In this paper, we associate loopholes with ways to cheat at the partners' version of the card game. It should be noted that the three loophole mechanisms introduced in this paper raise questions that are even spookier than quantum mechanics: Are the photons "communicating" with each other? Do they "know" the future? Do they "persuade" the measuring devices to fail when the "cards are unfavorable"?[7]
Since entanglement is so successfully modeled by quantum mechanics, one can argue that there is no need for a mechanism that "explains" it. Nevertheless, there are reasons for investigating loopholes. At the most fundamental level, history shows that a successful physical theory can be later shown to be an approximation to a deeper theory, and the need for this new theory is typically associated with a failure of the old paradigm. It is plausible that a breakdown of quantum mechanics might be discovered using a Bell's theorem experiment designed to investigate a loophole. But the vast majority of us (including most working physicists) need other reasons to care about loopholes: Many find it interesting that we seem to live in a universe governed by fundamental laws, and Bell's theorem yields insights into the bizarre nature of those laws. Also, those who teach can use these card games to motivate introductory discussions about statistical inference, polarization, and modern physics.
Figure 1 | The outside casing of each device remains stationary while the circle with parallel lines rotates with the center arrow pointing in one of three directions (♥, ♣, ♠.) If Jacks are used to represent these directions, Alice will see J♥ as her question card. She will respond with an "odd"-numbered answer card (3♥) to indicate that she is blocked by the filter. If Bob passes through a filter with the "spade" orientation, he sees J♠ as the question card, and answers with the "even"-numbered 2♠. This wins one point for the team because they gave different answers to different questions.
Figure 1 shows a hypothetical and idealized experiment involving two entangled photons simultaneously emitted by a single (parent) atom. After the photons have been separated by some distance, each is exposed to a measurement that determines whether the photon would pass or be blocked by the polarizing filter.[8] To ensure that the results seem "spooky" it should be possible to rotate the filter while the photons are en route so that the filter's angle of orientation is not "known" to either photon until it encounters the filter. If the filters are rotated between only three polarization angles, we may use card suits (hearts ♥, clubs ♣, spades ♠) to represent these angles. These three polarization angles are associated with "question" cards, because the measurement essentially asks the photon a question:
"Will you pass through a filter oriented at this angle?"
For simplicity we restrict our discussion to symmetric angles (0°, 120°, 240°.) The filter's axis of polarization is shown in the figure as parallel lines, with the center line pointing to the heart, club, or spade. Any face card can be used to "ask" the question, and the four face cards (jack, queen, king, ace) are equivalent. If the detectors are flawless, each measurement is binary: The photon either passes or is blocked by the filter (subsequent measurements on a photon would yield nothing interesting.) The measurement's outcome is represented by an even or odd numbered "answer" card (of the same suit). The numerical value of an answer card is not important: all even numbers (2,4,6,8) are equivalent and represent a photon passing through the filter, while the odd cards (3,5,7,9) represent a photon being blocked.
Although Bell's inequality is easy to prove[9], we avoid it here because the card game reverses roles regarding probability: Instead of the investigators attempting to ascertain the photons' so-called hidden variables, the players are acting as particles attempting to win the game by guessing the measurement angles. Another complication is that the original form of Bell's inequality does not adequately model the partners' version of the game because humans have the freedom to exhibit a behavior not observed by entangled particles (under ideal experimental conditions). This behavior involves a 100% correlation (or anti-correlation) whenever the polarization measurement angles are either parallel or perpendicular to each other. [10] In the partners' version of the card game, this behavior must be enforced by deducting a penalty from the partners' score whenever they are caught using a forbidden strategy (which we shall later call the β-strategy). The minimum required penalty is calculated in Supplementary file:The car and the goats. Fortunately students need not master this calculation because the actual penalty should often be whatever it takes to encourage a strategy that mimics this aspect of entanglement (which we shall call the α-strategy.)
A theoretical understanding of how one can model entanglement using the Schrödinger equation can be found in Supplementary file:Tube entanglement.
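For reference, the standard quantum-mechanical prediction for polarization-entangled photons with perfect correlation at equal angles (not derived in this article) is that the probability of identical pass/block outcomes is cos² of the angle between the two filter orientations. A minimal sketch for the three symmetric angles used here (the suit-to-angle assignment is an illustrative assumption):

```python
import math

# Assumed quantum rule for this entangled-photon pair: the probability that
# the two pass/block outcomes agree is cos^2(delta), where delta is the
# angle between the two filter orientations.
angles = {"hearts": 0.0, "clubs": 120.0, "spades": 240.0}

def p_same(suit_a, suit_b):
    """Probability that Alice's and Bob's outcomes agree."""
    delta = math.radians(angles[suit_a] - angles[suit_b])
    return math.cos(delta) ** 2

# Same question: outcomes always agree (the 100% correlation the text notes).
print(round(p_same("hearts", "hearts"), 6))  # 1.0

# Different questions: outcomes differ with probability 3/4 -- the win rate
# entangled particles achieve when asked different questions.
p_win = 1 - p_same("hearts", "spades")
print(round(p_win, 6))  # 0.75

# Average score under the +1 / -3 scoring when questions differ:
print(round(p_win * 1 + (1 - p_win) * -3, 6))  # 0.0
```

The zero average is the benchmark the card game's scoring is built around: particles break even, while (as shown below in the solitaire game) humans cannot.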
The solitaire card game
Figure 2 | Solitaire version of game. Cases 1, 2, and 3 represent the three possible outcomes if the player chooses the best strategy (later called the "α-strategy"): one answer (here, "odd" for ♠) differs from that given for the other two questions (here, "even" for ♥ & ♣.)
Figure 2 shows the three possible outcomes associated with one hand of the solitaire version of the game. The solitaire version requires nine cards. The figure uses a set with three "jacks" (♥ ♣ ♠) for the questions, and (2,3) for the six (even/odd) answer cards. To play one round of the game, the player first shuffles the three question cards and places them face down so their identity is not known. Next, for each of the three suits, the player selects an even or odd answer card. The figure shows the player choosing the heart and club to be even, while the spade is odd: 2♥ 2♣ 3♠. This is the only viable strategy, since the alternative is to always lose by selecting three answers that are all even or all odd. In the partner's version we shall introduce a second, β-strategy, which is not possible in the solitaire game.
After three answer cards are selected and turned face up, two of the three question cards are randomly selected and also turned face up. Figure 2 depicts all three equally probable outcomes, or ways to select two out of three cards (3 choose 2.)[11] The round is scored by adding or subtracting points, as shown in Table 1: First the suit of each of the two upturned question cards is matched to the corresponding answer card. In case 1 (shown in the figure), the player wins one point because the answers are different: the ♥ answer is an even number, while the ♠ answer is odd. The player loses three points in case 2 because the ♥ and ♣ answers are the same (even). Case 3 wins one point for the player because the answers are different. It is evident that the player has a 2/3 probability of winning a round. The conundrum of Bell's theorem is that entangled particles in an actual experiment manage to win with a probability of 3/4. Table 1 shows that this scoring system causes humans to average a loss of at least 1/3 of a point per round, while entangled particles maintain an average score of zero.[12] How do particles succeed where humans fail?
Table 1: Solitaire Scoring
Points Answers are: Example[13]
+1 different 2♥ and 3♠
+1 different 2♥ and 3♣
−3 same 2♥ and 2♣
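The solitaire odds can be verified by brute-force enumeration. The sketch below (suit and parity names are arbitrary stand-ins for the cards) scores every possible answer assignment against all three equally likely question pairs:

```python
from itertools import combinations, product

SUITS = ("hearts", "clubs", "spades")

def round_scores(parities):
    """All three equally likely outcomes for one answer assignment.

    parities maps suit -> 'even'/'odd'; score is +1 if the two revealed
    answers differ in parity, -3 if they are the same.
    """
    scores = []
    for pair in combinations(SUITS, 2):
        a, b = (parities[s] for s in pair)
        scores.append(1 if a != b else -3)
    return scores

# (win probability, average score) for each of the 2^3 answer assignments
results = {}
for p in product(("even", "odd"), repeat=3):
    scores = round_scores(dict(zip(SUITS, p)))
    results[p] = (sum(s == 1 for s in scores) / 3, sum(scores) / 3)

for p, (win_prob, avg) in results.items():
    print(p, f"win prob {win_prob:.3f}, average {avg:+.3f}")
# Mixed-parity assignments win 2/3 of the time and average -1/3 per round;
# all-even or all-odd assignments always lose 3 points.
```

Every strategy with mixed parity gives the 2/3 win probability and the −1/3 average loss quoted in the text, confirming that no choice of answers lets the solitaire player break even.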
The game for entangled partners
In the partners' version, Alice and Bob each play one (even/odd) answer card in response to the suit of a question card. Every round is played in two distinctly different phases. Alice and Bob are allowed to discuss strategy during phase 1 because it simulates the fact that the particles are (effectively) "inside" the parent atom before it emits photons. Then, all communication between the partners must cease during phase 2, which simulates the arrival of the photons at the detectors for measurement under conditions where communication is impossible. In this phase each player silently plays an (even/odd) answer that matches the question's suit. The player cannot know the other's question or answer during phase 2.
In the solitaire version, the player held a deck of six numbered cards and pre-selected (even/odd) answers for each of the three (question) suits. This simulated the parent atom "deciding" the responses that each photon will give to all possible polarization measurements.[14] In an "ideal" Bell's theorem experiment, the two photons' responses to identical polarization measurement angles are either perfectly correlated or perfectly anticorrelated.[8][15] This freedom to independently choose different answers when Alice and Bob are faced with the same question creates a dilemma for the designers of the partners' version of the card game. Adherence to any rule forbidding different answers to the same question cannot always be verified. To enforce this rule, we deduct points whenever they give different answers to the same question. No points are awarded for giving the same answer to the same question. Note how this complexity is relevant to actual experiments because detectors can register false events. The minimum penalty that should be imposed depends on how often the partners are given question cards of the same suit, and is derived at Supplementary file:The car and the goats:
Q ≥ 4(1 − p)/(3p),     (1)
where p is the probability that Alice and Bob are asked the same question. The equality holds if p = 1/4 and Q = 4, which can be accomplished by randomly selecting two question cards from nine (K♠, K♥, K♣, Q♠, Q♥, Q♣, J♠, J♥, J♣), as shown in Fig. 3. If the equality in (1) holds, the partners are "neutral" with respect to the selection of two different strategies, one of which risks the 4-point penalty. Both strategies lose, but the loss rate is reduced to −1/4 points per round, because the referee must dilute the number of times different questions are asked.
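The neutral-scoring condition can be checked with a short exact calculation. The sketch below is ours (the function names are not from the paper); it computes the expected score per round for each strategy under the Table 2 scoring, writing p for the probability of identical questions and Q for the same-question penalty, and assuming the three distinct-suit question pairs are equally likely:

```python
from fractions import Fraction

def expected_scores(p, Q):
    """Expected points per round, given p = P(same question) and penalty Q.

    alpha: pre-agreed answers with one suit differing from the other two.
           Same question -> same answer (0 points); different questions ->
           the three suit pairs are equally likely, scoring +1, +1, and -3.
    beta:  one partner always "even", the other always "odd":
           +1 on different questions, -Q on the same question.
    """
    e_alpha = (1 - p) * Fraction(1 + 1 - 3, 3)   # the p * 0 term vanishes
    e_beta = (1 - p) * 1 + p * (-Q)
    return e_alpha, e_beta

def neutral_p(Q):
    """Solve e_alpha == e_beta for p: the 'neutral' shuffle for penalty Q."""
    return Fraction(4, 3 * Q + 4)

# Drawing 2 of 9 face cards gives p = 9/36 = 1/4; with a penalty of 4,
# both strategies lose 1/4 point per round, as stated in the text.
print(expected_scores(Fraction(1, 4), 4))   # both strategies: -1/4 per round
print(neutral_p(6))                         # 2/11
```

The second print reproduces the 2/11 threshold quoted for the six-point penalty in Table 2.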
A sample round begins in the top part of Fig. 3 as phase 1, where the pipe-smoking referee has selected different questions (hearts and spades). In a classroom setting, consider allowing Alice and Bob to sit side-by-side, facing slightly away from each other during phase 2. Arrange for the audience to sit close enough to listen and watch for evidence of surreptitious communication between Alice and Bob. The prospect of cheating not only makes the game more fun, but also allows us to introduce "loopholes". The "thought-bubbles" above the partners show a tentative agreement by the partners to play the same α-strategy introduced in the solitaire version (both say "even" to ♥ and ♣, and "odd" to ♠.) It is important to allow both players to hold all the answer cards in phase 2 so that each can change his or her mind upon seeing the actual question. The figure shows them following their original plan and winning because the referee selected a heart for Alice and a spade for Bob.
Bell's card game entangled.svg
Figure 3 | One round of the partners' version with Alice and Bob employing the same strategy (α) introduced in the solitaire game. Here, a version of "neutral" scoring is used in which the referee randomly selects from the nine question cards, with a penalty of 4 points assessed if different answers are given to the same question. Instructors might wish to override this "neutral" scoring by asking the same question more often than called for in the random selection.
But the partners have another strategy that might win: Suppose Alice agrees to answer "even" to any question, while Bob's answer is always "odd". This wins (+1) if different questions are asked, and loses (−Q) if the same question is asked. This is called the β-strategy. The Supplementary file:The car and the goats establishes that no other strategy is superior to the α and/or β strategies:
α-strategy: Alice and Bob select their answers in advance, in such a way that both give the same answer if asked the same question. For example, they might both agree that ♥ and ♣ are even, while ♠ is odd. This strategy was ensured in the solitaire version because only three cards are played: If the heart is chosen to be "even", the solitaire version models a situation where both Alice and Bob would answer "even" to "heart". This α-strategy requires that one answer differ from the other two (i.e., all "even" or all "odd" is never a good strategy). The expected loss is 1/3 of a point for each round in which different questions are asked.
β-strategy: One partner always answers "even" while the other always answers "odd". This strategy gains one point if different questions are asked, and loses Q points if the same question is asked.
For pedagogical reasons, the instructor may wish to discourage the β-strategy. If Alice and Bob are not asked the same question often, they might choose to risk large losses for the possibility of winning just a few rounds using the β-strategy, perhaps terminating the game prematurely with a claim that they lost "quantum entanglement". To counter this, the referee can raise the penalty to six points and randomly shuffle only the six question cards that result from the merging of two solitaire decks. We refer to any scoring that favors the players' use of the α-strategy as "biased scoring". To further inhibit use of the β-strategy, the referee should routinely override the shuffle and deliberately select question cards of the same suit. The distinction between biased and neutral scoring lies in whether the equality or the inequality holds in (1). Table 2 shows examples of each scoring system. Both were selected to match an integer value for Q. The shuffle of 9 face cards exactly matches the equality in (1) if Q = 4, while the more convenient collection of 6 face cards will bias the players towards the α-strategy if Q = 6.[16]
Table 2: Examples of neutral and biased scoring
Neutral scoring
Shuffle 9 face cards to ask the same question exactly 25% of the time.
Biased scoring
Shuffle 6 face cards and/or ask the same question with a probability higher than 2/11.
Points (neutral) | Alice and Bob give... | Example | Points (biased)
+1 | different answers to different questions | "even" to hearts and "odd" to spades | +1
−3 | the same answer to different questions | "even" to clubs and "even" to hearts | −3
−4 | different answers to the same question | "even" to clubs and "odd" to clubs | −6
0 | the same answer to the same question | "even" to clubs (for both players) | 0
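The same-suit probabilities behind the two shuffles in Table 2 can be verified by counting pairs. The sketch below is ours (the card labels are arbitrary); it draws two cards without replacement from each shuffle:

```python
from fractions import Fraction
from itertools import combinations

def p_same_suit(cards):
    """Probability that two cards drawn without replacement share a suit.

    Each card is a string whose last character is its suit.
    """
    pairs = list(combinations(cards, 2))
    same = sum(1 for a, b in pairs if a[-1] == b[-1])
    return Fraction(same, len(pairs))

nine = [r + s for r in "KQJ" for s in "HSC"]   # nine face cards, three suits
six = [r + s for r in "AB" for s in "HSC"]     # two merged solitaire decks

print(p_same_suit(nine))   # 1/4
print(p_same_suit(six))    # 1/5
```

The nine-card shuffle hits the 25% neutral point for a four-point penalty, while the six-card shuffle's 1/5 exceeds the 2/11 threshold, biasing the players toward the α-strategy under the six-point penalty.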
Cheating at cards and Bell's theorem "loopholes"
In the card game, Alice and Bob could either win by surreptitiously communicating after they see their question cards, or by colluding with the referee to learn the questions in advance. Which seems more plausible: information travelling faster than light, or atoms acting as if they "know" the future? A small poll of undergraduate math and science college students suggests that they favor faster-than-light communication as more plausible. We shall use a space-time diagram to illustrate how faster-than-light communication violates causality by allowing people to send signals to their own past. And we shall argue that decisions made today by humans regarding how and where to perform a Bell's theorem experiment next week might be mysteriously connected to the behavior of an obscure atom in a distant galaxy billions of years ago.[17]
The third loophole was a surprise for us. In an early trial of the partners' game, a student[18] stopped playing and attempted to construct a modified version of the α-strategy that uses the new information a player gains upon seeing his or her question card. After convincing ourselves that no superior strategy exists, we realized that a player could cheat by terminating the game after seeing his or her own question card, but before playing the answer card. This is related to an important detector efficiency loophole.[19] The student's discovery also alerted us to the fact that our original calculation of (1) was just a lucky guess based on flawed logic.
Magic phones: Communications loophole
Alice and Bob could win every round of the partners' version if they cheat by communicating with each other after seeing their question cards in phase 2. In an actual experiment, this loophole is closed by making the measurements far apart in space and nearly simultaneous, which in effect requires that these communications travel faster than the speed of light.[20] While any faster-than-light communication is inconsistent with special relativity, we shall limit our discussion to information that travels at nearly infinite speed.[21]
Instantaneous communication Minkowskilike.svg
Figure 4 | "Magic phone#1" is situated on a moving train and can be used by Alice to send a message to Bob's past, which Bob relays back to Alice's past using the land-based "Magic phone #2". These magic phones transmit information with near infinite speed.
Figure 4 shows Alice and Bob slightly more than one light-year apart. The dotted world lines for each are vertical, indicating that they remain at rest for over a year. The slopes of the world lines of the train's front and rear are roughly 3 years per light-year, corresponding to about 1/3 the speed of light. Both train images are a bit confusing because it is difficult to represent a moving train on a space-time diagram: A moving train can be defined by the location of each end at any given instant in time. This requires the concept of simultaneity, which is perceived differently in another reference frame. The horizontal image of the train at the bottom represents the location of each car on the train on the first day of January, as time and simultaneity are perceived by Alice and Bob. To complicate matters, the horizontal train image is not what they would actually see, due to the finite transit time required for light to reach their eyes. It helps to imagine a distant observer situated on a perpendicular to some point on the train. The transit time for light to reach this distant observer will be nearly the same for every car on the train. Many years later, this distant observer will see the horizontal train as depicted at the bottom of the figure. It will be instructive to return to the perspective of this distant observer after the paradox has been constructed.
The slanted image of the train depicts the location of each car on the day that the (moving) passengers perceive the front to be adjacent to Alice, at the same time that the train's rear is perceived to be adjacent to Bob. It should be noted that Alice and Bob do not perceive these two events as simultaneous. The figure shows that the rear passes Bob several months before the front passes Alice (in the partners' reference frame.)
Now we establish that the passengers perceive the front of the train to reach Alice at the same time that the rear reaches Bob. The light-emitting diode (LED) shown at the bottom of Fig. 4 emits two pulses from the center of the train in January. It is irrelevant whether the LED is stationary or moving because all observers will see the pulses travelling in opposite directions at the speed of light (±1 ly/yr.) Note how the backward-moving pulse reaches the rear of the train in May, five months before the other pulse reaches the train's front in October. But the passengers see two light pulses created at the center of the train, directed at each end of the train, and will therefore perceive the two pulses as striking simultaneously.
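The five-month gap and the on-board simultaneity can be checked with exact arithmetic. In the sketch below the numbers are our illustrative assumptions (c = 1, train speed 1/3 ly/yr, and a ground-frame train length of 10/9 ly); since the Lorentz transformation gives t' = γ(t − vx), two events are simultaneous in the train frame exactly when they share the same value of t − vx:

```python
from fractions import Fraction as F

v = F(1, 3)     # train speed, in units where c = 1 (ly/yr)
L = F(10, 9)    # assumed ground-frame length of the train, ly

# Both pulses leave the train's center at t = 0, x = 0 (ground frame).
t_rear = L / (2 * (1 + v))    # light meeting the approaching rear end
t_front = L / (2 * (1 - v))   # light chasing the receding front end

print(t_front - t_rear)       # 5/12 yr: five months apart on the ground

# Arrival events (t, x); equal t - v*x means simultaneous on the train.
events = [(t_rear, -t_rear), (t_front, t_front)]
print({t - v * x for t, x in events})   # one value: simultaneous on board
```

Both arrivals share t − vx = L/2, so the passengers see the pulses strike the two ends together, while on the ground the rear is struck five months earlier.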
To create the causality paradox, we require two "magic-phones" capable of sending messages with nearly infinite speed. Unicorn icons use arrows to depict the information's direction of travel: magic phone #1 transmits from Alice to Bob, while #2 transmits from Bob to Alice. Magic phone #1 is situated on the moving train. When Alice shows her message through the front window as the train passes her in October, a passenger inside relays the message via magic phone #1 to the train's rear, where Bob can see it through a window. Bob immediately relays the message back to Alice via the land-based magic phone #2 in May, five months before she sent it.
Our distant observer will likely take a skeptical view of all this. The slope of the slanted train's image indicates that the distant observer will see magic phone #1 sending information from Bob to Alice, opposite to what the passengers perceive. The distant observer will first see the message inside the rear of the train (when it was adjacent to Bob in May). That message will immediately begin to travel towards Alice, faster than the speed of light, but slowly enough that Alice will not receive it until October. Meanwhile, Bob sends the same message via land-based phone #2 to Alice, who receives it in May. Alice waits for almost five months, until she prepares to send the same message, showing it through the front window just before the message also arrives at the front via the train-based magic phone #1. It would appear to the distant observer that the events depicted in Fig. 4 had been artificially staged.
This communications loophole in an actual Bell test experiment was closed by arranging for the measurements to be so nearly simultaneous, and so far apart, that any successful effort to communicate would require faster-than-light signals, which would suggest that humans could change their own past.
Referee collusion: Determinism loophole
Bell's theorem superdetermism cards.svg
Figure 5 | Cosmic photons from two distant spiral galaxies arrive on Earth with properties that trigger the filters to ask the ♥ & ♠ questions of photons just prior to their arrival with a winning combination of (even/odd) answers.
This "determinism", or "freedom-of-choice" loophole involves the ability of the quantum system to predict the future. Curiously, the strategy would not be called "cheating" in the card game if Alice or Bob relied on intuition to guess which cards the referee will play in the upcoming round. But what makes this loophole bizarre when applied to a Bell test experiment is that it would have been necessary to predict the circumstances under which the experiment was designed and constructed by human beings who evolved on a planet that was formed almost five billion years ago. On the other hand, viewing the parent atom, the two photons, and the detectors as one integrated quantum entity is consistent with the proper modeling of a quantum-mechanical system. The paradoxical violation of Bell's inequality arises from the need to model two remote particles as one system, so it is not unreasonable to assume that the conundrum can be resolved by including the devices that make the measurements into that model.
Figure 5 is inspired by a comment made by Bell during a 1985 radio interview that mentioned something he called "superdeterminism". [22][23] It is a timeline that depicts the big bang, beginning at a time when space and time were too confusing for us to graph. At this beginning, "instructions" were established that would dictate the entire future of the universe, from every action taken by every human being, to the energy, path, and polarization of every photon that will ever exist. Long ago, obscure atoms in two distant galaxies (Sb and Sc) were instructed to each emit what will become "cosmic photons" that strike Earth. Meanwhile, "instructions" will call for humans to evolve on Earth and create a Bell's theorem experiment that uses the frequency and/or polarization of cosmic photons to set the polarization measurement angles while the entangled photons Alice and Bob are still en route to the detectors. Alice and Bob will arrive at their destinations already "knowing" how to respond because the cosmic photons were "instructed" to have properties that cause the questions to be "heart" and "spade".
Viewed this way, the events depicted in Fig. 5 are just the way things happen to turn out. Efforts to enact the scenario with an actual experiment using cosmic photons in this way are being carried out. The most recent experiment looks back at photons created 600 years ago.[24][25] Note also how this experiment does not "close" the loophole, but instead greatly expands the scale of any "collusion" between the parent atom and detectors.
It is claimed that the results of Bell test experiments do not contradict special relativity, despite what may appear to some as faster-than-light "communication" between Alice and Bob.[26] Figure 5 can help us visualize this if the "instructions" represent the time evolution of an exotic version of Schrödinger's equation for the entire universe. If this wave equation is deterministic, future evolution of all probability amplitudes is predetermined. One flaw in this argument is that it relies on an equation that governs the entire universe, and for that reason is not likely to be solved or written down. Perhaps this is why the paradox seems to have no satisfactory resolution.
The Rimstock cheat: Detector error loophole
Bell's rimstock cards transposed.svg
Figure 6 | The Rimstock cheat: Bob flips a coin to determine whether to play the cheat on this round. Alice will play "even" to hearts and "odd" to spades or clubs.
Rimstock spaghetti Bell's theorem 50.svg
Figure 7 | Four teams of players engaging in the detector error cheat. Each connected dot represents a hand in which different questions were asked, and the horizontal dots simulate a detector error that coincided with a player receiving an unfavorable question.
The following variation of the α-strategy allows the team to match the performance of entangled particles by achieving an average score of zero: Alice preselects three answers and informs Bob of her decision. Bob will either answer in the same fashion, or he might abruptly stop the hand upon seeing his question card, perhaps requesting that the team take a brief break while another pair of students play the role of Alice and Bob. In a card game, this request to stop and replay a hand would require the cooperation of a gullible scorekeeper. But no detector in an actual Bell's theorem experiment is 100% efficient, and this complicates the analysis of a Bell's theorem experiment in a way that requires both careful calibration of the detector's efficiency, as well as detailed mathematical analysis.
Since this strategy never calls for Alice and Bob to give different (even/odd) answers to the same question, we may consider only rounds where the players get different questions. To understand why Bob might refuse to play a card, suppose Alice plans to answer "even" to hearts and "odd" to clubs and spades. As indicated at the top of Fig. 6, for Bob the heart is the "desired" suit because he knows they will win if he sees that question. But their chances of winning are reduced to only 50% if Bob sees the "undesired" club or spade. To avoid raising suspicion, Bob does not stop the game each time he sees an unfavorable question. Instead, he stops with a 50% probability upon seeing an unfavorable card. To calculate the average score, we construct a probability space consisting of equally probable outcomes, beginning with the three possible suits that Bob might see. We quadruple the size of this probability space (from 3 to 12) by treating the following two pairs of events as independent, and occurring with equal probability:
1. Bob will either stop the hand, or play the round (Do stop or Don't stop.)
2. After seeing his question, Bob knows that Alice might receive one of only two possible questions (ignoring rounds with the same question asked of both.)
Figure 6 can be used to show that Bob will stop the game with a probability of 1/3.[27] But if Bob and Alice randomly share this role of stopping the game, each player will stop a given round with a probability of only 1/6, yielding an apparent detector efficiency of 5/6 = 83.3%.[19] Typical results for a team playing this ruse are illustrated in Fig. 7. Ten rounds are played on four different occasions. The vertical axis represents the team's net score, with upward steps corresponding to winning one point, and downward steps corresponding to losing three points. The horizontal lines showing no change in score indicate occasions where Bob or Alice refused to play an answer card (it was never necessary to ask both partners the same question in this simulation.)
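The 12-outcome probability space described above can be enumerated directly. The sketch below is ours (names and suit strings are illustrative); it confirms that the cheat yields an average score of zero and a stop probability of 1/3:

```python
from fractions import Fraction as F
from itertools import product

# Alice pre-plays "even" to hearts, "odd" to clubs and spades; Bob plans
# the same answers but stops half the time upon seeing an undesired suit.
answer = {"heart": "even", "club": "odd", "spade": "odd"}

score, stops, outcomes = 0, 0, 0
for bob_suit, stop, coin in product(answer, [True, False], [0, 1]):
    outcomes += 1
    # Only rounds with different questions are considered; the coin picks
    # which of the two remaining suits Alice receives.
    alice_suit = [s for s in answer if s != bob_suit][coin]
    if bob_suit != "heart" and stop:   # Bob halts on an undesired suit
        stops += 1
        continue
    if answer[bob_suit] != answer[alice_suit]:
        score += 1    # different answers to different questions
    else:
        score -= 3    # same answer to different questions

print(F(score, outcomes), F(stops, outcomes))   # average 0, stops 1/3
```

The heart rounds always win (+1), while the club and spade rounds split evenly between +1 and −3 when played, so the played losses exactly cancel the wins.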
Figure 7 was generated using an Excel spreadsheet using the rand() function, which caused the graphs to change at every ctrl+= keystroke. It took several strokes to get a graph where the lines did not cross and all the event counts were this close to expected values. As discussed in a supplement, an Excel verification lab is an appropriate activity in a wide variety of STEM courses.
Pedagogical issues
To make sixteen solitaire decks, purchase three identical standard 52-card decks. Remove one suit from each deck, keeping only hearts, clubs, and spades, to create four solitaire sets. Each group should contain 3-5 people and two solitaire decks (for "biased" scoring with Q=6.) To avoid confusing an ace (question card) with an (even/odd) answer card, reserve the ace for groups with large even/odd number cards. For example, one group might have solitaire sets with (ace, 8, 9) and (king, 6, 7). In a small classroom, the entire audience can observe or even give advice to one pair playing the partners' version at the front of the room. Placing the question cards adjacent to the players at the start will permit the instructor and entire class to join the partners' discussion regarding strategy during phase 1. For "neutral" scoring with Q=4, the instructor can either borrow question cards from the class, or convert unused "10" cards into questions. Since cheating will come so naturally, this game is not suitable for gambling (even for pennies).
Bell's theorem can lead to topics ranging from baseless pseudoscience to legitimate (but pedagogically unnecessary) speculation regarding alternatives to the theory of quantum mechanics. While few physicists are experts in such topics, all teachers will eventually face such issues in the classroom. The authors of this paper claim no expertise in any of this, and the intent is to illustrate the "spookiness" of Bell's theorem, show how one can use simple logic to prove that faster-than-light communication violates special relativity,[21] and introduce students to the concept of a "deterministic" theory or model.[26]
Additional information
Supplementary material
• Supplementary file 3 | Tube entanglement (Download) Describes a simple analog to entanglement with polarized photons. It relies on Malus's law, and also introduces Dirac notation as a shorthand representation for the wavefunction of two non-interacting massive particles confined to a narrow tube.
Versions of this manuscript have received five referee reports (it was first submitted to the American Journal of Physics.) It is obvious that each referee was highly qualified, and that each exerted a considerable effort to improve the quality of this paper.
Competing interests
Guy Vandegrift is a member of the WikiJournal of Science editorial board.
1. Bell, John S. (1964). "On the Einstein Podolsky Rosen Paradox". Physics 1 (3): 195–200. doi:10.1103/physicsphysiquefizika.1.195.
2. Vandegrift, Guy (1995). "Bell's theorem and psychic phenomena". The Philosophical Quarterly 45 (181): 471–476. doi:10.2307/2220310.
3. See for example this discussion on the Wikipedia article's talk page, or Wikipedia's effort to clarify this with w:No-go theorem
4. Mermin, N. David (1981). "Bringing home the atomic world: Quantum mysteries for anybody". American Journal of Physics 49 (10): 940–943. doi:10.1119/1.12594. "(referring to those who do not consider this a conundrum) In one sense they are obviously right. Compare Tovey's remark that (Beethoven's) Waldstein Sonata has no more business than sunsets and sunrises to be paradoxical."
5. or detectors
6. These (hypothetical) theories are called "hidden variable" theories Larsson, Jan-Åke (2014). "Loopholes in Bell inequality tests of local realism". Journal of Physics A: Mathematical and Theoretical 47 (42): 424003. doi:10.1088/1751-8113/47/42/424003.
7. In w:special:permalink/829073568 these questions are associated with the "communication (locality)", the "free choice" and a "fair sampling" loophole, respectively.
8. 8.0 8.1 In most experiments electro-optical modulators are used instead of polarizing filters, and often it is necessary to rotate one set of orientations by 90°. Giustina, Marissa; Versteegh, Marijn A. M.; Wengerowsky, Sören; Handsteiner, Johannes; Hochrainer, Armin; Phelan, Kevin; Steinlechner, Fabian; Kofler, Johannes et al. (2015). "Significant-Loophole-Free Test of Bell’s Theorem with Entangled Photons". Physical Review Letters 115 (25): 250401. doi:10.1103/physrevlett.115.250401.
9. Maccone, Lorenzo (2013). "A simple proof of Bell's inequality". American Journal of Physics 81 (11): 854–859. doi:10.1119/1.4823600.
10. See equation 29 in Aspect, Alain (2002). "Bell's theorem: the naive view of an experimentalist". In Bertlmann, Reinhold A.; Zeilinger, Anton. Quantum [un] speakables (PDF). Berlin: Springer. p. 119-153. doi:10.1007/978-3-662-05032-3_9.
11. or "n choose k" is defined in w:Binomial coefficient
12. The player can lose more than 1/3 of a point per round by adopting the obviously bad strategy of making all three answers the same (all even or all odd.) This is closely related to the fact that Bell's "inequality" is not Bell's "equation".
13. Since 3-choose-2 equals 6, three other cases exist; all involve 3.
14. Keep in mind that it seems artificial for the parent atom to "know" that these photons are part of an experiment involving just three possible polarization measurements. This need to somehow orchestrate all possible fates for each emitted photon created the EPR conundrum long before Bell's inequality was discovered. See w:EPR paradox.
15. It is best not to assume that this correlation implies that the "decision" regarding polarization was actually made as the two photons are created by the parent atom. In physics, mathematical models should be judged by whether they yield predictions that can be verified by experiment, not whether these models make any sense.
16. Equation (1) shows that the Q = 6 case is neutral at p = 2/11.
17. One can also make the case that it is not the role of physics (or science) to speculate in such matters.
18. User:Rimstock
19. 19.0 19.1 Garg, Anupam; Mermin, N. David (1987). "Detector inefficiencies in the Einstein-Podolsky-Rosen experiment". Physical Review D 35 (12): 3831–3835. doi:10.1103/physrevd.35.3831.
20. Aspect, Alain; Dalibard, Jean; Roger, Gérard (1982). "Experimental Test of Bell's Inequalities Using Time- Varying Analyzers". Physical Review Letters 49 (25): 1804–1807. doi:10.1103/physrevlett.49.1804.
21. 21.0 21.1 Liberati, Stefano; Sonego, Sebastiano; Visser, Matt (2002). "Faster-than-c Signals, Special Relativity, and Causality". Annals of Physics 298 (1): 167–185. doi:10.1006/aphy.2002.6233.
22. Bell, John S. (2004). "Introduction to hidden-variable question". Speakable and unspeakable in quantum mechanics: Collected papers on quantum philosophy. Cambridge University Press. pp. 29–39. doi:10.1017/cbo9780511815676.006.
23. Kleppe, A. (2011). "Fundamental Nonlocality: What Comes Beyond the Standard Models". Bled Workshops in Physics. 12. pp. 103–111. In that interview, Bell was apparently speculating about a deterministic "hidden variable" theory where all outcomes are highly dependent on initial conditions.
24. Gallicchio, Jason; Friedman, Andrew S.; Kaiser, David I. (2014). "Testing Bell’s Inequality with Cosmic Photons: Closing the Setting-Independence Loophole". Physical Review Letters 112 (11): 110405. doi:10.1103/physrevlett.112.110405.
25. Handsteiner, Johannes; Friedman, Andrew S.; Rauch, Dominik; Gallicchio, Jason; Liu, Bo; Hosp, Hannes; Kofler, Johannes; Bricher, David et al. (2017). "Cosmic Bell Test: Measurement Settings from Milky Way Stars". Physical Review Letters 118 (6): 060401. doi:10.1103/PhysRevLett.118.060401.
26. 26.0 26.1 See also Ballentine, Leslie E.; Jarrett, Jon P. (1987). "Bell's theorem: Does quantum mechanics contradict relativity?". American Journal of Physics 55 (8): 696–701. doi:10.1119/1.15059.
27. The stop probability is the product of 2/3, the probability of receiving an unfavorable card, and 1/2, the probability of stopping upon seeing one; hence (2/3)(1/2) = 1/3.
Consciousness Studies/Measurement In Quantum Physics And The Preferred Basis Problem
From Wikibooks, open books for an open world
The Measurement Problem
In quantum physics the probability of an event is deduced by taking the square of the amplitude for the event to happen. The term "amplitude for an event" arises from the way that the Schrödinger equation is derived using the mathematics of ordinary, classical waves, where the amplitude of a wave over a small area is related to the number of photons hitting that area. In the case of light, the probability of a photon hitting an area is the ratio of the number of photons hitting the area to the total number of photons released. The number of photons hitting an area per second is the intensity of the light on the area, which is determined by the wave's amplitude; hence the probability of finding a photon is related to "amplitude".
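The amplitude-squared rule can be illustrated with a toy two-slit calculation. The sketch below is ours, and every number in it (slit separation, wavenumber, screen distance, sampled positions) is an illustrative assumption: the amplitude at each screen point is the sum of a complex wave from each slit, and the detection probabilities are the normalised squared magnitudes.

```python
import cmath

def amplitude(x, d=1.0, k=10.0, L=20.0):
    """Sum of unit-amplitude waves from two slits a distance d apart."""
    r1 = ((x - d / 2) ** 2 + L ** 2) ** 0.5   # path length from slit 1
    r2 = ((x + d / 2) ** 2 + L ** 2) ** 0.5   # path length from slit 2
    return cmath.exp(1j * k * r1) + cmath.exp(1j * k * r2)

xs = [i / 100.0 - 5.0 for i in range(1001)]   # sampled screen positions
raw = [abs(amplitude(x)) ** 2 for x in xs]    # squared amplitudes
total = sum(raw)
probs = [p / total for p in raw]              # normalise over the screen

print(sum(probs))   # essentially 1: the photon lands somewhere on the screen
```

Even for a single photon the same probabilities apply: the interference pattern in `probs` describes where one photon is likely to be detected, which is exactly the puzzle discussed below.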
However, the Schrödinger equation is not a classical wave equation. It does not determine events, it simply tells us the probability of an event. In fact the Schrödinger equation in itself does not tell us that an event occurs at all, it is only when a measurement is made that an event occurs. The measurement is said to cause state vector reduction. This role of measurement in quantum theory is known as the measurement problem. The measurement problem asks how a definite event can arise out of a theory that only predicts a continuous probability for events.
Two broad classes of theory have been advanced to explain the measurement problem. In the first it is proposed that observation produces a sudden change in the quantum system so that a particle becomes localised or has a definite momentum. This type of explanation is known as collapse of the wavefunction. In the second it is proposed that the probabilistic Schrödinger equation is always correct and that, for some reason, the observer only observes one particular outcome for an event. This type of explanation is known as the relative state interpretation. In the past thirty years relative state interpretations, especially Everett's relative state interpretation have become favoured amongst quantum physicists.
The quantum probability problem
The measurement problem is particularly problematical when a single particle is considered. Quantum theory differs from classical theory because it is found that a single photon seems to be able to interfere with itself. If there are many photons then probabilities can be expressed in terms of the ratio of the number hitting a particular place to the total number released but if there is only one photon then this does not make sense. When only one photon is released from a light source quantum theory still gives us a probability for a photon to hit a particular area but what does this mean at any instant if there is indeed only one photon?
If the Everettian interpretation of quantum mechanics is invoked then it might seem that the probability of the photon hitting an area in your particular universe is related to the occurrences of the photon in all the other universes. But in the Everettian interpretation even the improbable universes occur. This leads to a problem known as the quantum probability problem:
If the universe splits after a measurement, with every possible
measurement outcome realised in some branch, then how can it make
sense to talk about the probabilities of each outcome? Each
outcome occurs.
This means that if our phenomenal consciousness is a set of events then there would be endless copies of these sets of events, almost all of which are almost entirely improbable to an observer outside the brain, but all of which exist according to an Everettian interpretation. Which set is you? Why should 'you' conform to what happens in the environment around you?
The preferred basis problem
It could be held that you assess probabilities in terms of the branch of the universe in which you find yourself but then why do you find yourself in a particular branch? Decoherence Theory is one approach to these questions. In decoherence theory the environment is a complex form that can only interact with particles in particular ways. As a result quantum phenomena are rapidly smoothed out in a series of micro-measurements so that the macro-scale universe appears quasi-classical. The form of the environment is known as the preferred basis for quantum decoherence. This then leads to the preferred basis problem in which it is asked how the environment occurs or whether the state of the environment depends on any other system.
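Decoherence can be sketched with a single qubit: monitoring by the environment in the preferred basis suppresses the off-diagonal (coherence) terms of the density matrix while leaving the populations untouched, which is how superpositions come to look like classical mixtures. This is our minimal sketch, and the exponential decay with an assumed decoherence time t_dec is a standard idealisation, not derived from any specific environment model:

```python
import math

def rho(t, t_dec=1.0):
    """Density matrix of (|0> + |1>)/sqrt(2) after decoherence time t.

    The environment monitors the {|0>, |1>} (preferred) basis, so the
    off-diagonal terms decay as exp(-t/t_dec); populations stay at 1/2.
    """
    coh = 0.5 * math.exp(-t / t_dec)
    return [[0.5, coh],
            [coh, 0.5]]

print(rho(0.0))    # coherent superposition: off-diagonals equal 0.5
print(rho(10.0))   # near-classical mixture: off-diagonals nearly zero
```

At late times the state is indistinguishable from a coin flip between |0> and |1>, which is why the macro-scale world appears quasi-classical in the environment's preferred basis.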
According to most forms of decoherence theory 'you' are a part of the environment and hence determined by the preferred basis. From the viewpoint of phenomenal consciousness this does not seem unreasonable because it has always been understood that the conscious observer does not observe things as quantum superpositions. The conscious observation is a classical observation.
However, the arguments that are used to derive this idea of the classical, conscious observer contain dubious assumptions that may be hindering the progress of quantum physics. The assumption that the conscious observer is simply an information system is particularly dubious:
"Here we are using aware in a down - to - earth sense: Quite simply, observers know what they know. Their information processing machinery (that must underlie higher functions of the mind such as "consciousness") can readily consult the content of their memory. (Zurek 2003).
This assumption is the same as assuming that the conscious observer is a set of measurements rather than an observation. It makes the rest of Zurek's argument about decoherence and the observer into a tautology - given that observations are measurements then observations will be like measurements. However, conscious observation is not simply a change of state in a neuron, a "measurement", it is the entire manifold of conscious experience.
In his 2003 review of this topic Zurek makes clear an important feature of information theory when he states that:
There is no information without representation.
So the contents of conscious observation are states that correspond to states of the environment in the brain (i.e. measurements). But how do these states in the brain arise? The issue that arises here is whether the representation, the contents of consciousness, is entirely due to the environment or due to some degree to the form of conscious observation. Suppose we make the reasonable assumption that conscious observation is due to some physical field in the dendrites of neurons rather than in the action potentials that transmit the state of the neurons from place to place. This field would not necessarily be constrained by decoherence; there are many possibilities for the field: it could be a radio frequency field due to impulses, some other electromagnetic field (cf. Anglin & Zurek 1996), or some quantum state of macromolecules, etc. Such a field might contain many superposed possibilities for the state of the underlying neurons, and although these would not affect sensations, they could affect the firing patterns of neurons and create actions in the world that are not determined by the environmental "preferred basis".
Zeh (2000) provides a mature review of the problem of conscious observation. For example he realises that memory is not the same as consciousness:
"The genuine carriers of consciousness ... must not in general be expected to represent memory states, as there do not seem to be permanent contents of consciousness."
and notes of memory states that they must enter some other system to become part of observation:
"To most of these states, however, the true physical carrier of consciousness somewhere in the brain may still represent an external observer system, with whom they have to interact in order to be perceived. Regardless of whether the ultimate observer systems are quasi-classical or possess essential quantum aspects, consciousness can only be related to factor states (of systems assumed to be localized in the brain) that appear in branches (robust components) of the global wave function — provided the Schrodinger equation is exact. Environmental decoherence represents entanglement (but not any “distortion” — of the brain, in this case), while ensembles of wave functions, representing various potential (unpredictable) outcomes, would require a dynamical collapse (that has never been observed)."
However, Zeh (2003) points out that events may be irreversibly determined by decoherence before information from them reaches the observer. This might give rise to a mixture of multiple worlds and multiple minds for the universe, the multiple minds being superposed states of the part of the world that constitutes the mind. Such an interpretation would be consistent with the apparently epiphenomenal nature of mind. A mind that interacts only weakly with the consensus physical world, perhaps only approving or rejecting passing actions, would be an ideal candidate for a QM multiple-minds hypothesis.
c301eb55d264c719 | Whirlpool, a maelstrom of emotion disambiguation et Y'becca
Posted by yanis la chouette on Thu 27 Apr at 18:49
A maelstrom is a powerful whirlpool.
Definition of maelstrom:
1: a powerful whirlpool — "tried to shoot the canoe across a stretch of treacherous maelstrom" (Harper's)
2: something resembling a maelstrom in turbulence — "the maelstrom enveloping the country"; "a maelstrom of emotions"
A whirlpool is a body of swirling water produced by the meeting of opposing currents. The vast majority of whirlpools are not very powerful and very small whirlpools can easily be seen when a bath or a sink is draining. More powerful ones in seas or oceans may be termed maelstroms. Vortex is the proper term for any whirlpool that has a downdraft.
In oceans, in narrow straits, with fast flowing water, whirlpools are normally caused by tides; there are few stories of large ships ever being sucked into such a maelstrom, although smaller craft are in danger.[1][2] Smaller whirlpools also appear at the base of many waterfalls[3] and can also be observed downstream from manmade structures such as weirs and dams. In the case of powerful waterfalls, like Niagara Falls, these whirlpools can be quite strong.
Notable whirlpools
The maelstrom off Norway, as illustrated by Olaus Magnus on the Carta Marina, 1539
Main article: Moskstraumen
Moskstraumen is an unusual system of whirlpools in the open seas in the Lofoten Islands off the Norwegian coast.[4] It is the second strongest whirlpool in the world with flow currents reaching speeds as high as 32 km/h (20 mph). It finds mention in several books and movies.[5]
The Moskstraumen is formed by the combination of powerful semi-diurnal tides and the unusual shape of the seabed, with a shallow ridge between the Moskenesøya and Værøy islands which amplifies and whirls the tidal currents.[6]
The fictional depictions of the Maelstrom by Edgar Allan Poe and Jules Verne describe it as a gigantic circular vortex that reaches the bottom of the ocean, when in fact it is a set of currents and crosscurrents with a rate of 18 km/h (11 mph).[7] Poe described this phenomenon in his short story A Descent into the Maelstrom, which in 1841 was the first to use the word "maelstrom" in the English language;[6] in this story related to the Lofoten Maelstrom, two fishermen are swallowed by the maelstrom while one survives miraculously.[8]
Main article: Saltstraumen
The maelstrom of Saltstraumen is the Earth's strongest maelstrom. It is located close to the Arctic Circle,[5] 33 km (20 mi) round the bay on Highway 17, south-east of the city of Bodø, Norway. The strait at its narrowest is 150 m (490 ft) wide, and water "funnels" through the channel four times a day.[9] An estimated 400 million cubic metres of water passes through the narrow strait during each event.[10] The water is creamy in colour and most turbulent at high tide, which is witnessed by thousands of tourists.[9] It reaches speeds of 40 km/h (25 mph),[6] with a mean speed of about 13 km/h (8.1 mph). As navigation in this strait is dangerous, only a small slot of time is available for large ships to pass through.[5] Its impressive strength is caused by the world's strongest tide occurring in the same location during the new and full moon. A narrow channel of 3 km (2 mi) length connects the outer Saltfjord with its extension, the large Skjerstadfjord, causing a colossal tide which in turn produces the Saltstraumen maelstrom.[11]
Main article: Corryvreckan
Corryvreckan whirlpool
The Corryvreckan is a narrow strait between the islands of Jura and Scarba, in Argyll and Bute, on the northern side of the Gulf of Corryvreckan, Scotland. It is the third-largest whirlpool in the world.[5] Flood tides and inflow from the Firth of Lorne to the west can drive the waters of Corryvreckan to waves of over 9 metres (30 ft), and the roar of the resulting maelstrom, which reaches speeds of 18 km/h (11 mph), can be heard 16 kilometres (9.9 mi) away. Though it was initially classified as non-navigable by the British navy it was later categorized as "extremely dangerous".[5]
A documentary team from Scottish independent producers Northlight Productions once threw a mannequin into the Corryvreckan ("the Hag") with a life jacket and depth gauge. The mannequin was swallowed and spat up far down current with a depth gauge reading of 262 metres (860 ft) with evidence of being dragged along the bottom for a great distance.[12]
Other notable maelstroms and whirlpools
Naruto whirlpools
Old Sow whirlpool is located between Deer Island, New Brunswick, Canada, and Moose Island, Eastport, Maine, USA. It is given the epithet "pig-like" because it makes a screeching noise when the vortex is at its full fury. The smaller whirlpools around the Old Sow are known as "Piglets".[5] It reaches speeds of up to 27.6 km/h (17.1 mph).[6]
The Naruto whirlpools are located in the Naruto Strait near Awaji Island in Japan, which have speeds of 26 km/h (16 mph).[6]
Skookumchuck Narrows is a tidal rapids that develops whirlpools, on the Sunshine Coast, Canada with current speeds exceeding 30 km/h (19 mph).[6]
French Pass (Te Aumiti) is a narrow and treacherous stretch of water that separates D'Urville Island from the north end of the South Island of New Zealand. In 2000 a whirlpool there caught student divers, resulting in multiple fatalities.[13]
A short-lived whirlpool sucked in part of the 1,300-acre Lake Peigneur in Louisiana, United States, after a drilling mishap in November 1980. This was not a naturally occurring whirlpool, but a man-made disaster caused by breaking through the roof of a salt mine. The mishap destroyed five houses and led to the loss of nineteen barges, eight tug boats, oil rigs, a mobile home, most of the botanical garden and 10 percent of the area of nearby Jefferson Island. A crater half a mile across was created. The lake then drained until the mine filled and the water levels equalized, but the formerly ten-foot-deep lake was now 1,300 feet deep. Nine of the barges which had sunk floated back.[14][15][16]
A more recent example of a man-made whirlpool that received significant media coverage was in early June 2015, when an intake vortex formed in Lake Texoma, on the Oklahoma–Texas border, near the floodgates of the dam that forms the lake. At the time of the whirlpool's formation, the lake was being drained after reaching its highest level ever. The Army Corps of Engineers, which operates the dam and lake, expected that the whirlpool would last until the lake reached normal seasonal levels by late July.[17]
An illustration from Jules Verne's essay "Edgard Poë et ses oeuvres" (Edgar Poe and his Works, 1862) drawn by Frederic Lix or Yan' Dargent
Powerful whirlpools have killed unlucky seafarers, but their power tends to be exaggerated by laymen.[18] There are virtually no stories of large ships ever being sucked into a whirlpool. Tales like those by Paul the Deacon, Edgar Allan Poe, and Jules Verne are entirely fictional.[19]
In literature and popular culture
Apart from Poe and Verne, another literary source dates from the 1500s: Olaus Magnus, a Swedish bishop, stated that the maelstrom, more powerful than the one in The Odyssey, destroyed ships, which sank to the bottom of the sea, and that even whales were sucked in. Pytheas, the Greek historian, also mentioned that maelstroms swallowed ships and threw them up again.[1]
Charybdis in Greek mythology was later rationalized as a whirlpool, which sucked entire ships into its fold off the narrow coast of Sicily, a disaster faced by navigators.[20]
— Paul the Deacon, History of the Lombards, i.6
Three of the most notable literary references to the Lofoten Maelstrom date from the nineteenth century. The first is the Edgar Allan Poe short story "A Descent into the Maelström" (1841). The second is 20,000 Leagues Under the Sea (1870), the famous novel by Jules Verne. At the end of this novel, Captain Nemo seems to commit suicide, sending his Nautilus submarine into the Maelstrom (although in Verne's sequel Nemo and Nautilus survived). The "Norway maelstrom" is also mentioned in Herman Melville's Moby-Dick.[22]
In the Life of St Columba, the author, Adomnán of Iona, attributes to the saint miraculous knowledge of a particular bishop who ran into a whirlpool off the coast of Ireland. In Adomnán's narrative, he quotes Columba saying[23]
One of the earliest uses in English of the Scandinavian word (malström or malstrøm) was by Edgar Allan Poe in his short story "A Descent into the Maelström" (1841). In turn, the Nordic word is derived from the Dutch maelstrom, modern spelling maalstroom, from malen (to grind) and stroom (stream), to form the meaning grinding current or literally "mill-stream", in the sense of milling (grinding) grain.[24]
See also
Coriolis effect
Eddy (fluid dynamics)
Rip current
In English, the word originally referred to the Moskstraumen.
Maelstrom may also refer to:
Amusement rides
Maelstrom (ride), a dark ride that ran from 1988 to 2014 at one of 4 Walt Disney World Resort theme parks, Epcot, in Orlando, Florida, US
Maelstrom, a gyro swing ride at Drayton Manor Theme Park, Staffordshire, UK
Characters
Maelstrom (comics), fictional character that appears in comic books published by Marvel Comics
Maelstrom (Ice Age), a Pliosaurus
Mael Strom, aka Yuki Yoshida, from the anime/manga Kore wa Zombie Desu ka?
Film and television
Maelström (film), a 2000 Canadian film
Maelstrom (TV series), a 1985 BBC television drama serial
"Maelstrom" (Battlestar Galactica)
Games
Maelstrom (1992 video game), an Apple Macintosh game
Maelstrom (role playing game), a role-playing game by Alexander Scott
Maelstrom (video game), a 2007 PC game
VOR: The Maelstrom, a miniature wargame
Maelstrom (Live Action Roleplaying), a live action roleplaying game by Profound Decisions
Maelstrom, a lightning-based hammer item in the video game "Dota 2"
Literature
A Descent into the Maelström, an 1841 short story by Edgar Allan Poe
Maelstrom (Timms novel), a novel by E.V. Timms
Maelstrom, a 2001 novel by Peter Watts
Maelstrom, a 2006 novel in the Petaybee Series by Anne McCaffrey and Elizabeth Ann Scarborough
Maelstrom (Destroyermen novel), the third book of the Destroyermen series
Music
Maelstrom, an album by JR Ewing
"Maelstrom," a song by The Steve Miller Band on the album Living in the 20th Century
"Maelstrom," a composition by Julian Cochran
"The Maelstrom," a composition by Robert W. Smith
"Maelstrom 2010," a song by Kataklysm on the album Temple of Knowledge
Malström Graintable Synthesizer, a component of the musical software Reason
Maelstrom, a Boston-based thrash metal band active from the late 1980s to the early 1990s that released an LP on Taang! Records
"Ocean - II. Maelstrom" song by Canadian instrumental progressive metal band "Pomegranate Tiger" released on their 2013 album "Entities"
Other
Maelstrom, a Chromium-based browser by San Francisco-based BitTorrent Inc. with an integrated BitTorrent engine
Whirlpool in Amiens: an industrial case under high tension
In fluid dynamics, a vortex is a region in a fluid in which the flow rotates around an axis line, which may be straight or curved.[1][2] The plural of vortex is either vortices or vortexes.[3][4] Vortices form in stirred fluids, and may be observed in phenomena such as smoke rings, whirlpools in the wake of a boat, or the winds surrounding a tornado or dust devil.
Vortices are a major component of turbulent flow. The distribution of velocity, vorticity (the curl of the flow velocity), as well as the concept of circulation are used to characterize vortices. In most vortices, the fluid flow velocity is greatest next to its axis and decreases in inverse proportion to the distance from the axis.
A key concept in the dynamics of vortices is the vorticity, a vector that describes the local rotary motion at a point in the fluid, as would be perceived by an observer that moves along with it. Conceptually, the vorticity could be observed by placing a tiny rough ball at the point in question, free to move with the fluid, and observing how it rotates about its center. The direction of the vorticity vector is defined to be the direction of the axis of rotation of this imaginary ball (according to the right-hand rule) while its length is twice the ball's angular velocity. Mathematically, the vorticity is defined as the curl (or rotational) of the velocity field of the fluid, usually denoted by $\vec{\omega}$ and expressed by the vector analysis formula $\nabla \times \vec{u}$, where $\nabla$ is the nabla operator and $\vec{u}$ is the local flow velocity.[5]
The local rotation measured by the vorticity $\vec{\omega}$ must not be confused with the angular velocity vector of that portion of the fluid with respect to the external environment or to any fixed axis. In a vortex, in particular, $\vec{\omega}$ may be opposite to the mean angular velocity vector of the fluid relative to the vortex's axis.
Vortex types
A rigid-body vortex
If the fluid rotates like a rigid body (uniform angular velocity $\vec{\Omega}$), then the velocity and vorticity fields are

$$\vec{\Omega} = (0, 0, \Omega), \quad \vec{r} = (x, y, 0),$$
$$\vec{u} = \vec{\Omega} \times \vec{r} = (-\Omega y, \Omega x, 0),$$
$$\vec{\omega} = \nabla \times \vec{u} = (0, 0, 2\Omega) = 2\vec{\Omega}.$$
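The rigid-body result above, $\vec{\omega} = 2\vec{\Omega}$, can be checked numerically. The following minimal sketch (Python, with illustrative values for $\Omega$ and the sample point) evaluates the curl by central finite differences:

```python
import math

# Hypothetical numerical check of the rigid-body result ω_z = 2Ω,
# using central finite differences at an arbitrary point.
Omega = 1.5

def u(x, y):
    # velocity field u = Ω × r = (−Ωy, Ωx, 0)
    return (-Omega * y, Omega * x)

def curl_z(x, y, h=1e-5):
    # ω_z = ∂u_y/∂x − ∂u_x/∂y, by central differences
    duy_dx = (u(x + h, y)[1] - u(x - h, y)[1]) / (2 * h)
    dux_dy = (u(x, y + h)[0] - u(x, y - h)[0]) / (2 * h)
    return duy_dx - dux_dy

print(abs(curl_z(0.7, -0.3) - 2 * Omega) < 1e-8)  # True: vorticity is uniform, 2Ω
```

Because the velocity field is linear in x and y, the finite-difference curl is exact up to rounding, and the same value is obtained at every point.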
An irrotational vortex
If the particle speed u is inversely proportional to the distance r from the axis, then the imaginary test ball would not rotate over itself; it would maintain the same orientation while moving in a circle around the vortex axis. In this case the vorticity $\vec{\omega}$ is zero at any point not on that axis, and the flow is said to be irrotational.
$$\vec{\Omega} = (0, 0, \alpha r^{-2}), \quad \vec{r} = (x, y, 0),$$
$$\vec{u} = \vec{\Omega} \times \vec{r} = (-\alpha y r^{-2}, \alpha x r^{-2}, 0),$$
$$\vec{\omega} = \nabla \times \vec{u} = 0.$$
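Likewise, a quick numerical sketch (illustrative values, same finite-difference idea) confirms that the curl of this velocity field vanishes at points off the axis:

```python
import math

# Sketch: the free vortex u = (−α y/r², α x/r², 0) has zero vorticity
# away from the axis; checked by central differences at an off-axis point.
alpha = 0.8

def u(x, y):
    r2 = x * x + y * y
    return (-alpha * y / r2, alpha * x / r2)

def curl_z(x, y, h=1e-5):
    duy_dx = (u(x + h, y)[1] - u(x - h, y)[1]) / (2 * h)
    dux_dy = (u(x, y + h)[0] - u(x, y - h)[0]) / (2 * h)
    return duy_dx - dux_dy

print(abs(curl_z(0.6, 0.4)) < 1e-6)  # True off the axis: the flow is irrotational
```

The check is only meaningful away from r = 0, where the field is singular and the vorticity is concentrated.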
Irrotational vortices
Pathlines of fluid particles around the axis (dashed line) of an ideal irrotational vortex. (See animation)
In the absence of external forces, a vortex usually evolves fairly quickly toward the irrotational flow pattern, where the flow velocity u is inversely proportional to the distance r. Irrotational vortices are also called free vortices.
For an irrotational vortex, the circulation is zero along any closed contour that does not enclose the vortex axis, and has a fixed value, Γ, for any contour that does enclose the axis once.[6] The tangential component of the particle velocity is then $u_{\theta} = \frac{\Gamma}{2\pi r}$. The angular momentum per unit mass relative to the vortex axis is therefore constant, $r u_{\theta} = \frac{\Gamma}{2\pi}$.
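A small numerical sketch (the contour and the value of Γ are arbitrary illustrative choices, not from the text) can confirm that the line integral of this velocity field around a contour enclosing the axis returns Γ:

```python
import math

# Numerical sketch: the circulation ∮ u·dl of the free vortex u_θ = Γ/(2πr)
# around a contour that encloses the axis once comes out equal to Γ.
Gamma = 2.0

def velocity(x, y):
    r2 = x * x + y * y
    return (-Gamma * y / (2 * math.pi * r2), Gamma * x / (2 * math.pi * r2))

# Midpoint-rule line integral around an off-centre circle enclosing the origin
n = 20000
cx, cy, R = 0.3, -0.2, 1.0   # arbitrary contour centre and radius
circ = 0.0
for k in range(n):
    t0 = 2 * math.pi * k / n
    t1 = 2 * math.pi * (k + 1) / n
    x0, y0 = cx + R * math.cos(t0), cy + R * math.sin(t0)
    x1, y1 = cx + R * math.cos(t1), cy + R * math.sin(t1)
    u, v = velocity((x0 + x1) / 2, (y0 + y1) / 2)
    circ += u * (x1 - x0) + v * (y1 - y0)

print(abs(circ - Gamma) < 1e-4)  # True: circulation equals Γ
```

Shifting the same contour so that it no longer encloses the origin would drive the sum to zero, the other half of the statement above.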
In a real (viscous) fluid, the flow near the axis instead smooths out over time; the Lamb–Oseen vortex model gives the tangential velocity

$$u_{\theta} = \left(1 - e^{-r^{2}/(4\nu t)}\right)\frac{\Gamma}{2\pi r}.$$
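A short sketch (Γ, ν and t are illustrative values) shows the two limits of this profile: it matches the free vortex Γ/(2πr) far from the core and vanishes at the axis instead of diverging:

```python
import math

# Sketch of the tangential velocity u_θ(r) quoted above
# (Γ, ν, t are arbitrary illustrative choices).
def u_theta(r, Gamma=1.0, nu=1e-3, t=10.0):
    return (1.0 - math.exp(-r**2 / (4.0 * nu * t))) * Gamma / (2.0 * math.pi * r)

# Far from the viscous core the profile matches the free vortex Γ/(2πr)...
print(abs(u_theta(2.0) - 1.0 / (2.0 * math.pi * 2.0)) < 1e-9)  # True
# ...while near the axis the velocity goes to zero.
print(u_theta(1e-6) < 1e-3)  # True
```

The crossover radius between the two regimes grows like the viscous length scale, roughly the square root of 4νt.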
Pressure in a vortex
A Plughole vortex
In an irrotational vortex flow with constant fluid density and cylindrical symmetry, the dynamic pressure varies as $P_\infty - K/r^2$, where $P_\infty$ is the limiting pressure infinitely far from the axis. This formula provides another constraint for the extent of the core, since the pressure cannot be negative. The free surface (if present) dips sharply near the axis line, with depth inversely proportional to $r^2$. The shape formed by the free surface is called a hyperboloid, or "Gabriel's Horn" (by Evangelista Torricelli).
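As an illustration of the 1/r² depth law, combining $u_\theta = \Gamma/(2\pi r)$ with Bernoulli's relation gives a surface depression $z(r) = -\Gamma^2/(8\pi^2 g r^2)$; this closed form is a standard derivation not spelled out in the text, and the parameter values below are illustrative:

```python
import math

# Minimal sketch of the 1/r² free-surface dip of an irrotational vortex.
# z(r) = −Γ²/(8π²·g·r²) is an assumed Bernoulli result; Γ is illustrative.
def surface_depression(r, Gamma=0.5, g=9.81):
    return -Gamma**2 / (8.0 * math.pi**2 * g * r**2)

# Halving the radius makes the dip four times deeper:
ratio = surface_depression(0.05) / surface_depression(0.10)
print(abs(ratio - 4.0) < 1e-9)  # True
```

The unbounded growth of the dip as r shrinks is the same non-physical core behaviour that the pressure constraint above cuts off.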
Two-dimensional modeling
When the particle velocities are constrained to be parallel to a fixed plane, one can ignore the space dimension perpendicular to that plane, and model the flow as a two-dimensional flow velocity field on that plane. Then the vorticity vector $\vec{\omega}$ is always perpendicular to that plane, and can be treated as a scalar. This assumption is sometimes made in meteorology, when studying large-scale phenomena like hurricanes.
Further examples
The visible core of a vortex formed when a C-17 uses high engine power (reverse-thrust) at slow speed on a wet runway.
In the hydrodynamic interpretation of the behaviour of electromagnetic fields, the acceleration of electric fluid in a particular direction creates a positive vortex of magnetic fluid. This in turn creates around itself a corresponding negative vortex of electric fluid. Exact solutions to classical nonlinear magnetic equations include the Landau-Lifshitz equation, the continuum Heisenberg model, the Ishimori equation, and the nonlinear Schrödinger equation.
Bubble rings are underwater vortex rings whose core traps a ring of bubbles, or a single donut-shaped bubble. They are sometimes created by dolphins and whales.
The lifting force of aircraft wings, propeller blades, sails, and other airfoils can be explained by the creation of a vortex superimposed on the flow of air past the wing.
Aerodynamic drag can be explained in large part by the formation of vortices in the surrounding fluid that carry away energy from the moving body.
Large whirlpools can be produced by ocean tides in certain straits or bays. Examples are Charybdis of classical mythology in the Straits of Messina, Italy; the Naruto whirlpools of Nankaido, Japan; and the Maelstrom at Lofoten, Norway.
Vortices in the Earth's atmosphere are important phenomena for meteorology. They include mesocyclones on the scale of a few miles, tornados, waterspouts, and hurricanes. These vortices are often driven by temperature and humidity variations with altitude. The sense of rotation of hurricanes is influenced by the Earth's rotation. Another example is the Polar vortex, a persistent, large-scale cyclone centered near the Earth's poles, in the middle and upper troposphere and the stratosphere.
Vortices are prominent features of the atmospheres of other planets. They include the permanent Great Red Spot on Jupiter, the intermittent Great Dark Spot on Neptune, the polar vortices of Venus, the Martian dust devils and the North Polar Hexagon of Saturn.
Sunspots are dark regions on the Sun's visible surface (photosphere) marked by a lower temperature than its surroundings, and intense magnetic activity.
The accretion disks of black holes and other massive gravitational sources.
Taylor-Couette flow occurs in a fluid between two nested cylinders, one rotating, the other fixed.
See also
Artificial gravity
Batchelor vortex
Biot–Savart law
Coordinate rotation
Cyclonic separation
Helmholtz's theorems
History of fluid mechanics
Horseshoe vortex
Kelvin–Helmholtz instability
Quantum vortex
Rankine vortex
Shower-curtain effect
Strouhal number
Vile Vortices
Von Kármán vortex street
Vortex engine
Vortex tube
Vortex cooler
VORTEX projects
Vortex shedding
Vortex stretching
Vortex induced vibration
References
Ting, L. (1991). Viscous Vortical Flows. Lecture notes in physics. Springer-Verlag. ISBN 3-540-53713-9.
Vallis, Geoffrey (1999). Geostrophic Turbulence: The Macroturbulence of the Atmosphere and Ocean Lecture Notes (PDF). Lecture notes. Princeton University. p. 1. Retrieved 2012-09-26.
Clancy 1975, sub-section 7.5
Batchelor, G.K. (1967). An Introduction to Fluid Dynamics. Cambridge Univ. Press. Ch. 7 et seq. ISBN 9780521098175.
De La Fuente Marcos, C.; Barge, P. (2001). "The effect of long-lived vortical circulation on the dynamics of dust particles in the mid-plane of a protoplanetary disc". Monthly Notices of the Royal Astronomical Society. 323 (3): 601–614. Bibcode:2001MNRAS.323..601D. doi:10.1046/j.1365-8711.2001.04228.x.
Whirlpool in Amiens
The Whirlpool Amiens case is industrially complex and has become the symbol of the race for competitiveness among the countries of the European Union...
Re: Whirlpool, a maelstrom of emotion disambiguation et Y'becca
Posted by yanis la chouette on Thu 27 Apr at 18:51
Kármán vortex street
In fluid dynamics, a Kármán vortex street (or a von Kármán vortex street) is a repeating pattern of swirling vortices caused by the unsteady separation of flow of a fluid around blunt bodies. It is named after the engineer and fluid dynamicist Theodore von Kármán,[1] and is responsible for such phenomena as the "singing" of suspended telephone or power lines and the vibration of a car antenna at certain speeds.
The home-appliance group has decided to relocate production to Poland and to close the tumble-dryer plant in June 2018. 40 million euros of investment have been made at the site.
A surprise Macron-Le Pen duel this Wednesday at the Whirlpool site. As he had pledged, Emmanuel Macron went to Amiens to meet the workers of the appliance factory, whose tumble-dryer production will be relocated to Poland in June 2018. But while the En Marche candidate was meeting in town with a delegation of about ten workers' representatives at the Chamber of Commerce and Industry, Marine Le Pen, his FN opponent, paid a surprise visit to the factory site itself, where the workers had been holding a picket line for two days.
As a result, the En Marche candidate, who had justified his choice not to go to the factory by explaining that he did not want to exploit the workers' distress for his own benefit, also went to meet the striking workers. "He is making fun of us. We wrote to him three months ago. Today he comes between the two rounds. In any case, most of the workers voted for Marine," commented Frédéric Chantrelle, the factory's CFDT delegate.
Relocation to Poland
The Whirlpool Amiens case is industrially complex and has become the symbol of the race for competitiveness among the countries of the European Union. It is to Lodz, in Poland, that the American appliance giant wants to relocate, by June 2018, the tumble-dryer production carried out at its Picardy factory. The closure will eliminate 290 direct jobs, plus dozens of temporary workers and some sixty positions at the on-site subcontractor Prima. In January, the group explained that the closure was necessary to maintain its long-term competitiveness. It said it would do everything possible to find a buyer.
A modern factory
The factory is in very good condition. Over the past five years, Whirlpool has invested no less than 40 million euros in its modernization. At one point it envisioned making it a European center of excellence for tumble-dryer production. In return for this promise, the employees had agreed to renegotiate their working hours. The project ultimately ended in failure, as the innovative model launched by Whirlpool did not meet the required energy-quality standards. In 2012 the site employed more than 1,300 people.
On top of this industrial flop came, in 2015, the American group's acquisition of the Italian firm Indesit and its roughly fifteen factories spread across the Old Continent: in Italy, but also in Turkey, Russia and Poland. In Lodz, precisely, Whirlpool now employs 1,200 people making cookers and ovens (1.2 million units per year). "Whirlpool means 800 million dollars of profit, 21 billion in revenue (...) but they manufacture unemployed people in France," explained François Gorlia, a CGT member, in recent weeks.
Above-statutory severance pay
After the closure of Goodyear at Amiens Nord (1,143 employees in 2014) and the loss of its status as regional capital, the closure of Whirlpool was another heavy blow for the city. The local employment area, where unemployment exceeds 13%, has also lost the factory of mattress maker Sapsa-Bedding in Saleux (143 employees in 2015).
In this context, negotiations between the employees and management promise to be tough, particularly on the question of above-statutory severance pay. Failing to prevent the closure, the employees hope for a rapid takeover of the site. Some fifteen projects have reportedly already been submitted to state services, two of them considered very serious. It remains to be seen how many jobs, and of what kind, they will preserve.
The complexity of the case is also explained by its geographic situation. The factory is located in a city that has only just recovered from the closure of Goodyear. The unions accuse the American group of relocating purely for profit.
Animation of vortex street created by a cylindrical object; the flow on opposite sides of the object is given different colors, showing that the vortices are shed from alternating sides of the object
A look at the Kármán vortex street effect from ground level, as air flows quickly from the Pacific ocean eastward over Mojave desert mountains.
A vortex street will only form at a certain range of flow velocities, specified by a range of Reynolds numbers (Re), typically above a limiting Re value of about 90. The (global) Reynolds number for a flow is a measure of the ratio of inertial to viscous forces in the flow of a fluid around a body or in a channel, and may be defined as a nondimensional parameter of the global speed of the whole fluid flow:
$$\mathrm{Re}_L = \frac{U L}{\nu_0}$$

where:
$U$ = the free stream flow speed (i.e. the flow speed far from the fluid boundaries, $U_\infty$, like the body speed relative to the fluid at rest, or an inviscid flow speed computed through the Bernoulli equation), which is the original global flow parameter, i.e. the target to be nondimensionalised;
$L$ = a characteristic length parameter of the body or channel;
$\nu_0$ = the free stream kinematic viscosity of the fluid, which in turn is the ratio $\nu = \mu/\rho$ of:
$\mu_0$ = the free stream dynamic viscosity of the fluid, and
$\rho_0$ = the reference fluid density.
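A minimal sketch of this definition (the flow speed, cylinder size and air viscosity below are illustrative textbook values, not from the text):

```python
# Sketch of the global Reynolds number Re_L = U·L/ν₀ for flow past a cylinder.
def reynolds(U, L, nu):
    """Ratio of inertial to viscous forces for flow speed U and length L."""
    return U * L / nu

nu_air = 1.5e-5                            # kinematic viscosity of air at ~20 °C, m²/s
Re = reynolds(U=10.0, L=0.02, nu=nu_air)   # 10 m/s past a 2 cm diameter cylinder
print(round(Re))                           # about 13333, inside the shedding range
```

With the diameter as the reference length, this value sits comfortably inside the 47 < Re_d < 10⁵ shedding range quoted below for circular cylinders.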
For common flows (the ones which can usually be considered incompressible or isothermal), the kinematic viscosity is uniform over the whole flow field and constant in time, so there is no choice in the viscosity parameter, which naturally becomes the kinematic viscosity of the fluid being considered at the temperature being considered. On the other hand, the reference length is always an arbitrary parameter, so particular attention should be paid when comparing flows around different obstacles or in channels of different shapes: the global Reynolds numbers should be referred to the same reference length. This is actually the reason why the most precise sources for airfoil and channel flow data specify the reference length as a subscript to the Reynolds number. The reference length can vary depending on the analysis to be performed: for bodies with circular cross-sections such as circular cylinders or spheres, one usually chooses the diameter; for an airfoil, a generic noncircular cylinder, a bluff body or a body of revolution like a fuselage or a submarine, it is usually the profile chord or the profile thickness, or some other given width that is in fact a stable design input; for flow channels it is usually the hydraulic diameter.
Interestingly, for an aerodynamic profile the reference length depends on the analysis. The profile chord is usually chosen as the reference length also for the aerodynamic coefficients of wing sections and thin profiles, for which the primary target is to maximize the lift coefficient or the lift/drag ratio (i.e., as usual in thin airfoil theory, one would employ the chord Reynolds number as the flow speed parameter for comparing different profiles). On the other hand, for fairings and struts the given parameter is usually the dimension of the internal structure to be streamlined (say, for simplicity, a beam with a circular section), and the main target is to minimize the drag coefficient or the drag/lift ratio. The main design parameter, which naturally also becomes the reference length, is therefore the profile thickness (the profile dimension or area perpendicular to the flow direction), rather than the profile chord.
The range of Re values varies with the size and shape of the body from which the eddies are shed, as well as with the kinematic viscosity of the fluid. Over a large Re_d range (47 < Re_d < 10⁵ for circular cylinders, where the reference length d is the cylinder diameter) eddies are shed continuously from each side of the body's boundary, forming rows of vortices in its wake. The alternation leads to the core of a vortex in one row being opposite the point midway between two vortex cores in the other row, giving rise to the distinctive pattern shown in the picture. Ultimately, the energy of the vortices is consumed by viscosity as they move further downstream, and the regular pattern disappears.
When a single vortex is shed, an asymmetrical flow pattern forms around the body and changes the pressure distribution. This means that the alternate shedding of vortices can create periodic lateral (sideways) forces on the body in question, causing it to vibrate. If the vortex shedding frequency is similar to the natural frequency of a body or structure, it causes resonance. It is this forced vibration that, at the correct frequency, causes suspended telephone or power lines to "sing" and the antenna on a car to vibrate more strongly at certain speeds.
In meteorology
Kármán vortex street caused by wind flowing around the Juan Fernández Islands off the Chilean coast
The flow of atmospheric air over obstacles such as islands or isolated mountains sometimes gives birth to von Kármán vortex streets. When a cloud layer is present at the relevant altitude, the streets become visible. Such cloud layer vortex streets have been photographed from satellites.[2]
Engineering problems
Simulated vortex street around a no-slip cylindrical obstruction
The same cylinder, now with a fin, suppressing the vortex street by reducing the region in which the side eddies can interact
Chimneys with spirals outside to break up vortices
In low turbulence, tall buildings can produce a Kármán street so long as the structure is uniform along its height. In urban areas where there are many other tall structures nearby, the turbulence produced by these prevents the formation of coherent vortices.[3] Periodic crosswind forces set up by vortices along an object's sides can be highly undesirable, and hence it is important for engineers to account for the possible effects of vortex shedding when designing a wide range of structures, from submarine periscopes to industrial chimneys and skyscrapers.
In order to prevent the unwanted vibration of such cylindrical bodies, a longitudinal fin can be fitted on the downstream side, which, provided it is longer than the diameter of the cylinder, will prevent the eddies from interacting, so that they remain attached. Obviously, for a tall building or mast, the relative wind could come from any direction. For this reason, helical projections that look like large screw threads are sometimes placed at the top, which effectively create an asymmetric three-dimensional flow, thereby discouraging the alternate shedding of vortices; this is also found in some car antennas. Another countermeasure for tall buildings is varying the diameter with height, such as tapering, which prevents the entire building from being driven at the same frequency.
Even more serious instability can be created in concrete cooling towers, for example, especially when built together in clusters. Vortex shedding caused the collapse of three towers at Ferrybridge Power Station C in 1965 during high winds.
The failure of the original Tacoma Narrows Bridge was originally attributed to excessive vibration due to vortex shedding, but was actually caused by aeroelastic flutter.
Kármán turbulence is also a problem for airplanes, especially at landing.[4][5]
This formula generally holds for the range 250 < Re_d < 2 × 10⁵:

St = 0.198 (1 − 19.7 / Re_d)

where the Strouhal number St is defined as

St = f d / U

with:
f = vortex shedding frequency,
d = diameter of the cylinder,
U = flow velocity.
This dimensionless parameter St is known as the Strouhal number and is named after the Czech physicist Vincenc Strouhal (1850–1922), who first investigated the steady humming or singing of telegraph wires in 1878.
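Within the stated validity range, the two relations above can be combined to estimate the shedding frequency of a cylinder. A minimal sketch; the numeric values are illustrative assumptions (a thin wire in wind), not from the text:

```python
def strouhal_cylinder(Re_d):
    """Empirical Strouhal number for a circular cylinder,
    St = 0.198 * (1 - 19.7 / Re_d), valid roughly for 250 < Re_d < 2e5."""
    if not 250 < Re_d < 2e5:
        raise ValueError("correlation only valid for 250 < Re_d < 2e5")
    return 0.198 * (1.0 - 19.7 / Re_d)

def shedding_frequency(U, d, nu):
    """Vortex shedding frequency from St = f * d / U, i.e. f = St * U / d."""
    Re_d = U * d / nu              # Reynolds number based on the diameter
    return strouhal_cylinder(Re_d) * U / d

# Illustrative case: a 2 cm wire in a 10 m/s wind (air, nu ~ 1.5e-5 m^2/s).
f = shedding_frequency(U=10.0, d=0.02, nu=1.5e-5)
print(round(f, 1))  # roughly 99 Hz -- audible, hence the "singing" of wires
```

When such a frequency approaches a structure's natural frequency, the resonance discussed above sets in.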
Although named after Theodore von Kármán,[6][7] he acknowledged[8] that the vortex street had been studied earlier by Mallock[9] and Bénard.[10]
See also
Eddy (fluid dynamics)
Kelvin–Helmholtz instability
Reynolds number
Vortex shedding
Vortex-induced vibration
Coandă effect
Wake turbulence
References
"Rapid Response - LANCE - Terra/MODIS 2010/226 14:55 UTC". Rapidfire.sci.gsfc.nasa.gov. Retrieved 2013-12-20.
Irwin, Peter A. (September 2010). "Vortices and tall buildings: A recipe for resonance". Physics Today. American Institute of Physics. 63 (9): 68–69. Bibcode:2010PhT....63i..68I. doi:10.1063/1.3490510. ISSN 0031-9228.
T. von Kármán: Nachr. Ges. Wissenschaft. Göttingen Math. Phys. Klasse pp. 509–517 (1911) and pp. 547–556 (1912).
T. von Kármán and H. Rubach, 1912: Phys. Z., vol. 13, pp. 49–59.
T. von Kármán, 1954. Aerodynamics: Selected Topics in the Light of Their Historical Development (Cornell University Press, Ithaca), pp. 68–69.
A. Mallock, 1907: On the resistance of air. Proc. Royal Soc., A79, pp. 262–265.
H. Bénard, 1908: Comptes rendus de l'Académie des Sciences (Paris), vol. 147, pp. 839–842, 970–972.
External links
Encyclopedia of Mathematics article on von Karman vortex shedding
Kármán vortex street formula calculator
3D animation of the Vortex Flow Measuring Principle
Vortex streets and Strouhal instability
How Insects Fly (which can produce von Kármán vortices)
YouTube — Flow visualisation of the vortex shedding mechanism on circular cylinder using hydrogen bubbles illuminated by a laser sheet in a water channel
Various Views of von Karman Vortices, NASA page
Re: Whirlpool, a maelstrom of emotion disambiguation and Y'becca
Post by yanis la chouette, Thu 27 Apr - 18:52
Aperiodic tilings were discovered by mathematicians in the early 1960s, and, some twenty years later, they were found to apply to the study of quasicrystals. The discovery of these aperiodic forms in nature has produced a paradigm shift in the field of crystallography. Quasicrystals had been investigated and observed earlier,[2] but, until the 1980s, they were disregarded in favor of the prevailing views about the atomic structure of matter. In 2009, after a dedicated search, a mineralogical finding, icosahedrite, offered evidence for the existence of natural quasicrystals.[3]
Roughly, an ordering is non-periodic if it lacks translational symmetry, which means that a shifted copy will never match exactly with its original. The more precise mathematical definition is that there is never translational symmetry in more than n – 1 linearly independent directions, where n is the dimension of the space filled, e.g., the three-dimensional tiling displayed in a quasicrystal may have translational symmetry in two dimensions. Symmetrical diffraction patterns result from the existence of an indefinitely large number of elements with a regular spacing, a property loosely described as long-range order. Experimentally, the aperiodicity is revealed in the unusual symmetry of the diffraction pattern, that is, symmetry of orders other than two, three, four, or six. In 1982 materials scientist Dan Shechtman observed that certain aluminium-manganese alloys produced the unusual diffractograms which today are seen as revelatory of quasicrystal structures. Due to fear of the scientific community's reaction, it took him two years to publish the results[4][5] for which he was awarded the Nobel Prize in Chemistry in 2011.[6]
A Penrose tiling
In 1961, Hao Wang asked whether determining if a set of tiles admits a tiling of the plane is an algorithmically unsolvable problem or not. He conjectured that it is solvable, relying on the hypothesis that every set of tiles that can tile the plane can do it periodically (hence, it would suffice to try to tile bigger and bigger patterns until obtaining one that tiles periodically). Nevertheless, two years later, his student Robert Berger constructed a set of some 20,000 square tiles (now called Wang tiles) that can tile the plane but not in a periodic fashion. As further aperiodic sets of tiles were discovered, sets with fewer and fewer shapes were found. In 1976 Roger Penrose discovered a set of just two tiles, now referred to as Penrose tiles, that produced only non-periodic tilings of the plane. These tilings displayed instances of fivefold symmetry. One year later Alan Mackay showed experimentally that the diffraction pattern from the Penrose tiling had a two-dimensional Fourier transform consisting of sharp 'delta' peaks arranged in a fivefold symmetric pattern.[7] Around the same time Robert Ammann created a set of aperiodic tiles that produced eightfold symmetry.
Mathematically, quasicrystals have been shown to be derivable from a general method that treats them as projections of a higher-dimensional lattice. Just as circles, ellipses, and hyperbolic curves in the plane can be obtained as sections from a three-dimensional double cone, so too various (aperiodic or periodic) arrangements in two and three dimensions can be obtained from postulated hyperlattices with four or more dimensions. Icosahedral quasicrystals in three dimensions were projected from a six-dimensional hypercubic lattice by Peter Kramer and Roberto Neri in 1984.[8] The tiling is formed by two tiles with rhombohedral shape.
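The cut-and-project idea is easiest to see one dimension down: selecting the points of the square lattice Z² inside a strip around a line of irrational slope 1/φ and projecting them onto that line yields the aperiodic Fibonacci chain, with two interval lengths. The sketch below uses the standard closed-form (Sturmian word) equivalent of that strip selection; it is an illustrative construction, not code from any source:

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio; 1/PHI is the irrational slope

def fibonacci_chain(n):
    """First n intervals of the 1D quasicrystal obtained by cut-and-project
    from Z^2 onto a line of slope 1/PHI: 'L' marks a long interval, 'S' a
    short one.  floor((k+1)/PHI) - floor(k/PHI) records whether the strip
    picks up an extra lattice row between steps k and k+1."""
    return "".join(
        "L" if math.floor((k + 1) / PHI) - math.floor(k / PHI) == 1 else "S"
        for k in range(n)
    )

word = fibonacci_chain(13)
print(word)              # SLSLLSLSLLSLL -- no block repeats periodically
print(word.count("L"), word.count("S"))  # the L/S ratio tends to PHI
```

Because the ratio of the two interval types tends to the irrational φ, no translation maps the chain onto itself, yet its Fourier transform still consists of sharp peaks.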
Shechtman first observed ten-fold electron diffraction patterns in 1982, as described in his notebook. The observation was made during a routine investigation, by electron microscopy, of a rapidly cooled alloy of aluminium and manganese prepared at the US National Bureau of Standards (later NIST).
In the summer of the same year Shechtman visited Ilan Blech and related his observation to him. Blech responded that such diffractions had been seen before.[9][10] Around that time, Shechtman also related his finding to John Cahn of NIST who did not offer any explanation and challenged him to solve the observation. Shechtman quoted Cahn as saying: "Danny, this material is telling us something and I challenge you to find out what it is".
The observation of the ten-fold diffraction pattern lay unexplained for two years until the spring of 1984, when Blech asked Shechtman to show him his results again. A quick study of Shechtman's results showed that the common explanation for a ten-fold symmetrical diffraction pattern, the existence of twins, was ruled out by his experiments. Since periodicity and twins were ruled out, Blech, unaware of the two-dimensional tiling work, was looking for another possibility: a completely new structure containing cells connected to each other by defined angles and distances but without translational periodicity. Blech decided to use a computer simulation to calculate the diffraction intensity from a cluster of such a material without long-range translational order but still not random. He termed this new structure multiple polyhedral.
The idea of a new structure was the necessary paradigm shift to break the impasse. The “Eureka moment” came when the computer simulation showed sharp ten-fold diffraction patterns, similar to the observed ones, emanating from the three-dimensional structure devoid of periodicity. The multiple polyhedral structure was later termed icosahedral glass by many researchers, but in effect it embraces any arrangement of polyhedra connected with definite angles and distances (this general definition includes tiling, for example).
Shechtman accepted Blech's discovery of a new type of material and it gave him the courage to publish his experimental observation. Shechtman and Blech jointly wrote a paper entitled "The Microstructure of Rapidly Solidified Al6Mn"[11] and sent it for publication around June 1984 to the Journal of Applied Physics (JAP). The JAP editor promptly rejected the paper as better suited to a metallurgical readership. As a result, the same paper was re-submitted to Metallurgical Transactions A, where it was accepted. Although not noted in the body of the published text, the paper was slightly revised prior to publication.
Meanwhile, on seeing the draft of the Shechtman–Blech paper in the summer of 1984, John Cahn suggested that Shechtman's experimental results merited fast publication in a more appropriate scientific journal. Shechtman agreed and, in hindsight, called this fast publication "a winning move". This paper, published in Physical Review Letters (PRL),[5] repeated Shechtman's observation and used the same illustrations as the original Shechtman–Blech paper in Metallurgical Transactions A. The PRL paper, the first to appear in print, caused considerable excitement in the scientific community.
The next year, Ishimasa et al. reported twelvefold symmetry in Ni-Cr particles.[12] Soon, eightfold diffraction patterns were recorded in V-Ni-Si and Cr-Ni-Si alloys.[13] Over the years, hundreds of quasicrystals with various compositions and different symmetries have been discovered. The first quasicrystalline materials were thermodynamically unstable—when heated, they formed regular crystals. However, in 1987, the first of many stable quasicrystals were discovered, making it possible to produce large samples for study and opening the door to potential applications. In 2009, following a 10-year systematic search, scientists reported the first natural quasicrystal, a mineral found in the Khatyrka River in eastern Russia.[3] This natural quasicrystal exhibits high crystalline quality, equalling the best artificial examples.[14] The natural quasicrystal phase, with a composition of Al63Cu24Fe13, was named icosahedrite and it was approved by the International Mineralogical Association in 2010. Furthermore, analysis indicates it may be meteoritic in origin, possibly delivered from a carbonaceous chondrite asteroid.[15]
Atomic image of a micron-sized grain of the natural Al71Ni24Fe5 quasicrystal (shown in the inset) from a Khatyrka meteorite. The corresponding diffraction patterns reveal a ten-fold symmetry.[16]
A further study of Khatyrka meteorites revealed micron-sized grains of another natural quasicrystal, which has a ten-fold symmetry and a chemical formula of Al71Ni24Fe5. This quasicrystal is stable in a narrow temperature range, from 1120 to 1200 K at ambient pressure, which suggests that natural quasicrystals are formed by rapid quenching of a meteorite heated during an impact-induced shock.[16]
Electron diffraction pattern of an icosahedral Ho-Mg-Zn quasicrystal
In 1972 de Wolf and van Aalst[17] reported that the diffraction pattern produced by a crystal of sodium carbonate cannot be labeled with three indices but needed one more, which implied that the underlying structure had four dimensions in reciprocal space. Other puzzling cases have been reported,[18] but until the concept of quasicrystal came to be established, they were explained away or denied.[19][20] However, at the end of the 1980s the idea became acceptable, and in 1992 the International Union of Crystallography altered its definition of a crystal, broadening it as a result of Shechtman’s findings, reducing it to the ability to produce a clear-cut diffraction pattern and acknowledging the possibility of the ordering to be either periodic or aperiodic.[4][notes 1] Now, the symmetries compatible with translations are defined as "crystallographic", leaving room for other "non-crystallographic" symmetries. Therefore, aperiodic or quasiperiodic structures can be divided into two main classes: those with crystallographic point-group symmetry, to which the incommensurately modulated structures and composite structures belong, and those with non-crystallographic point-group symmetry, to which quasicrystal structures belong.
Originally, the new form of matter was dubbed "Shechtmanite".[21] The term "quasicrystal" was first used in print by Steinhardt and Levine[22] shortly after Shechtman's paper was published. The adjective quasicrystalline had already been in use, but now it came to be applied to any pattern with unusual symmetry.[notes 2] Quasiperiodic structures were claimed to be observed in some decorative tilings devised by medieval Islamic architects.[23][24] For example, Girih tiles in a medieval Islamic mosque in Isfahan, Iran, are arranged in a two-dimensional quasicrystalline pattern.[25] These claims have, however, been under some debate.[26]
Shechtman was awarded the Nobel Prize in Chemistry in 2011 for his work on quasicrystals. "His discovery of quasicrystals revealed a new principle for packing of atoms and molecules," stated the Nobel Committee, pointing out that "this led to a paradigm shift within chemistry."[4][27]
A penteract (5-cube) pattern using 5D orthographic projection to 2D using Petrie polygon basis vectors overlaid on the diffractogram from an Icosahedral Ho-Mg-Zn quasicrystal
There are several ways to mathematically define quasicrystalline patterns. One definition, the "cut and project" construction, is based on the work of Harald Bohr (mathematician brother of Niels Bohr). The concept of an almost periodic function (also called a quasiperiodic function) was studied by Bohr, including the work of Bohl and Esclangon.[28] He introduced the notion of a superspace. Bohr showed that quasiperiodic functions arise as restrictions of high-dimensional periodic functions to an irrational slice (an intersection with one or more hyperplanes), and discussed their Fourier point spectrum. These functions are not exactly periodic, but they are arbitrarily close in some sense, as well as being a projection of an exactly periodic function.
In order that the quasicrystal itself be aperiodic, this slice must avoid any lattice plane of the higher-dimensional lattice. De Bruijn showed that Penrose tilings can be viewed as two-dimensional slices of five-dimensional hypercubic structures.[29] Equivalently, the Fourier transform of such a quasicrystal is nonzero only at a dense set of points spanned by integer multiples of a finite set of basis vectors (the projections of the primitive reciprocal lattice vectors of the higher-dimensional lattice).[30] The intuitive considerations obtained from simple model aperiodic tilings are formally expressed in the concepts of Meyer and Delone sets. The mathematical counterpart of physical diffraction is the Fourier transform and the qualitative description of a diffraction picture as 'clear cut' or 'sharp' means that singularities are present in the Fourier spectrum. There are different methods to construct model quasicrystals. These are the same methods that produce aperiodic tilings with the additional constraint for the diffractive property. Thus, for a substitution tiling the eigenvalues of the substitution matrix should be Pisot numbers. The aperiodic structures obtained by the cut-and-project method are made diffractive by choosing a suitable orientation for the construction; this is a geometric approach that has also a great appeal for physicists.
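The Pisot condition mentioned above can be checked directly for the simplest substitution tiling, the Fibonacci substitution L → LS, S → L. A minimal illustrative sketch using only the 2×2 characteristic-polynomial formula:

```python
import math

# Fibonacci substitution: L -> LS, S -> L.
# Substitution matrix M[i][j] = how many letters i occur in the image of letter j.
M = [[1, 1],   # L appears once in LS and once in L
     [1, 0]]   # S appears once in LS and never in L

# Eigenvalues of a 2x2 matrix from its characteristic polynomial
# x^2 - tr(M) x + det(M) = 0.
tr = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2

print(lam1)           # (1 + sqrt(5))/2 ~ 1.618..., the golden ratio
print(abs(lam2) < 1)  # True: the algebraic conjugate lies inside the unit
                      # circle, so lam1 is a Pisot number and the resulting
                      # tiling has a diffractive (pure point) spectrum
```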
Classical theory of crystals reduces crystals to point lattices where each point is the center of mass of one of the identical units of the crystal. The structure of crystals can be analyzed by defining an associated group. Quasicrystals, on the other hand, are composed of more than one type of unit, so, instead of lattices, quasilattices must be used. Instead of groups, groupoids, the mathematical generalization of groups in category theory, are the appropriate tool for studying quasicrystals.[31]
Using mathematics for construction and analysis of quasicrystal structures is a difficult task for most experimentalists. Computer modeling, based on the existing theories of quasicrystals, however, greatly facilitated this task. Advanced programs have been developed[32] allowing one to construct, visualize and analyze quasicrystal structures and their diffraction patterns.
Interacting spins have also been analyzed in quasicrystals: the AKLT model and the 8-vertex model have been solved on quasicrystals analytically.[33]
Study of quasicrystals may shed light on the most basic notions related to the quantum critical point observed in heavy-fermion metals. Experimental measurements on the gold-aluminium-ytterbium quasicrystal have revealed a quantum critical point defining the divergence of the magnetic susceptibility as temperature tends to zero.[34] It has been suggested that the electronic system of some quasicrystals is located at a quantum critical point without tuning, while quasicrystals exhibit the typical scaling behaviour of their thermodynamic properties and belong to the well-known family of heavy-fermion metals.
Materials science
Tiling of a plane by regular pentagons is impossible but can be realized on a sphere in the form of a pentagonal dodecahedron.
A Ho-Mg-Zn icosahedral quasicrystal formed as a pentagonal dodecahedron, the dual of the icosahedron. Unlike the similar pyritohedron shape of some cubic-system crystals such as pyrite, the quasicrystal has faces that are true regular pentagons
TiMn quasicrystal approximant lattice.
Since the original discovery by Dan Shechtman, hundreds of quasicrystals have been reported and confirmed. Quasicrystals are no longer considered a unique form of solid; they exist universally in many metallic alloys and some polymers. Quasicrystals are found most often in aluminium alloys (Al-Li-Cu, Al-Mn-Si, Al-Ni-Co, Al-Pd-Mn, Al-Cu-Fe, Al-Cu-V, etc.), but numerous other compositions are also known (Cd-Yb, Ti-Zr-Ni, Zn-Mg-Ho, Zn-Mg-Sc, In-Ag-Yb, Pd-U-Si, etc.).[35]
Two types of quasicrystals are known.[32] The first type, polygonal (dihedral) quasicrystals, have an axis of 8, 10, or 12-fold local symmetry (octagonal, decagonal, or dodecagonal quasicrystals, respectively). They are periodic along this axis and quasiperiodic in planes normal to it. The second type, icosahedral quasicrystals, are aperiodic in all directions.
Quasicrystals fall into three groups of different thermal stability:[36]
Stable quasicrystals grown by slow cooling or casting with subsequent annealing,
Metastable quasicrystals prepared by melt spinning, and
Metastable quasicrystals formed by the crystallization of the amorphous phase.
Except for the Al–Li–Cu system, all the stable quasicrystals are almost free of defects and disorder, as evidenced by X-ray and electron diffraction revealing peak widths as sharp as those of perfect crystals such as Si. Diffraction patterns exhibit fivefold, threefold, and twofold symmetries, and reflections are arranged quasiperiodically in three dimensions.
The origin of the stabilization mechanism is different for the stable and metastable quasicrystals. Nevertheless, there is a common feature observed in most quasicrystal-forming liquid alloys or their undercooled liquids: a local icosahedral order. The icosahedral order is in equilibrium in the liquid state for the stable quasicrystals, whereas the icosahedral order prevails in the undercooled liquid state for the metastable quasicrystals.
A nanoscale icosahedral phase was formed in Zr-, Cu- and Hf-based bulk metallic glasses alloyed with noble metals.[37]
Most quasicrystals have ceramic-like properties including high thermal and electrical resistance, hardness and brittleness, resistance to corrosion, and non-stick properties.[38] Many metallic quasicrystalline substances are impractical for most applications due to their thermal instability; the Al-Cu-Fe ternary system and the Al-Cu-Fe-Cr and Al-Co-Fe-Cr quaternary systems, thermally stable up to 700 °C, are notable exceptions.
Quasicrystalline substances have potential applications in several forms.
Metallic quasicrystalline coatings can be applied by plasma-coating or magnetron sputtering. A problem that must be resolved is the tendency for cracking due to the materials' extreme brittleness.[38] The cracking could be suppressed by reducing sample dimensions or coating thickness.[39] Recent studies show typically brittle quasicrystals can exhibit remarkable ductility of over 50% strains at room temperature and sub-micrometer scales (<500 nm).[39]
An application was the use of low-friction Al-Cu-Fe-Cr quasicrystals[40] as a coating for frying pans. Food did not stick to it as much as to stainless steel, making the pan moderately non-stick and easy to clean; heat transfer and durability were better than for PTFE non-stick cookware, and the pan was free from perfluorooctanoic acid (PFOA); the surface was very hard, claimed to be ten times harder than stainless steel, and was not harmed by metal utensils or cleaning in a dishwasher; and the pan could withstand temperatures of 1,000 °C (1,800 °F) without harm. However, cooking with a lot of salt would etch the quasicrystalline coating, and the pans were eventually withdrawn from production. Shechtman had one of these pans.[41]
The Nobel citation said that quasicrystals, while brittle, could reinforce steel "like armor". When Shechtman was asked about potential applications of quasicrystals, he said that a precipitation-hardened stainless steel is produced that is strengthened by small quasicrystalline particles. It does not corrode and is extremely strong, suitable for razor blades and surgical instruments. The small quasicrystalline particles impede the motion of dislocations in the material.[41]
Quasicrystals were also being used to develop heat insulation, LEDs, diesel engines, and new materials that convert heat to electricity. Shechtman suggested new applications taking advantage of the low coefficient of friction and the hardness of some quasicrystalline materials, for example embedding particles in plastic to make strong, hard-wearing, low-friction plastic gears. The low heat conductivity of some quasicrystals makes them good for heat insulating coatings.[41]
Other potential applications include selective solar absorbers for power conversion, broad-wavelength reflectors, and bone repair and prostheses applications where biocompatibility, low friction and corrosion resistance are required. Magnetron sputtering can be readily applied to other stable quasicrystalline alloys such as Al-Pd-Mn.[38]
While saying that the discovery of icosahedrite, the first quasicrystal found in nature, was important, Shechtman saw no practical applications.
See also
Archimedean solid
Disordered hyperuniformity
Fibonacci quasicrystal
Icosahedral twins
The use of the adjective 'quasicrystalline' for qualifying a structure can be traced back to the mid-1940s to 1950s, e.g. in Kratky, O; Porod, G (1949). "Diffuse small-angle scattering of x-rays in colloid systems". Journal of Colloid Science. 4 (1): 35–70. doi:10.1016/0095-8522(49)90032-X. PMID 18110601.; Gunn, R (1955). "The statistical electrification of aerosols by ionic diffusion". Journal of Colloid Science. 10: 107–119. doi:10.1016/0095-8522(55)90081-7.
Ünal, B; V. Fournée; K.J. Schnitzenbaumer; C. Ghosh; C.J. Jenks; A.R. Ross; T.A. Lograsso; J.W. Evans; P.A. Thiel (2007). "Nucleation and growth of Ag islands on fivefold Al-Pd-Mn quasicrystal surfaces: Dependence of island density on temperature and flux". Physical Review B. 75 (6): 064205. Bibcode:2007PhRvB..75f4205U. doi:10.1103/PhysRevB.75.064205.
Bindi, L.; Steinhardt, P. J.; Yao, N.; Lu, P. J. (2009). "Natural Quasicrystals". Science. 324 (5932): 1306–9. Bibcode:2009Sci...324.1306B. doi:10.1126/science.1170827. PMID 19498165.
Andrea Gerlin (5 October 2011). "Technion's Shechtman Wins Nobel in Chemistry for Quasicrystals Discovery". Bloomberg.
Shechtman, D.; Blech, I.; Gratias, D.; Cahn, J. (1984). "Metallic Phase with Long-Range Orientational Order and No Translational Symmetry". Physical Review Letters. 53 (20): 1951–1953. Bibcode:1984PhRvL..53.1951S. doi:10.1103/PhysRevLett.53.1951.
"The Nobel Prize in Chemistry 2011". Nobelprize.org. Retrieved 2011-10-06.
Mackay, A.L. (1982). "Crystallography and the Penrose Pattern". Physica A. 114: 609–613. Bibcode:1982PhyA..114..609M. doi:10.1016/0378-4371(82)90359-4.
Kramer, P.; Neri, R. (1984). "On periodic and non-periodic space fillings of E^m obtained by projection". Acta Crystallographica A. 40 (5): 580–587. doi:10.1107/S0108767384001203.
Yang, C. Y. (1979). "Crystallography of decahedral and icosahedral particles". J. Cryst. Growth. 47 (2): 274–282. Bibcode:1979JCrGr..47..274Y. doi:10.1016/0022-0248(79)90252-5.
Yang, C. Y.; Yacaman, M. J.; Heinemann, K. (1979). "Crystallography of decahedral and icosahedral particles". J. Cryst. Growth. 47 (2): 283–290. Bibcode:1979JCrGr..47..283Y. doi:10.1016/0022-0248(79)90253-7.
Shechtman, Dan; I. A. Blech (1985). "The Microstructure of Rapidly Solidified Al6Mn". Met. Trans. A. 16A (6): 1005–1012. Bibcode:1985MTA....16.1005S. doi:10.1007/BF02811670.
Ishimasa, T.; Nissen, H.-U.; Fukano, Y. (1985). "New ordered state between crystalline and amorphous in Ni-Cr particles". Physical Review Letters. 55 (5): 511–513. Bibcode:1985PhRvL..55..511I. doi:10.1103/PhysRevLett.55.511. PMID 10032372.
Wang, N.; Chen, H.; Kuo, K. (1987). "Two-dimensional quasicrystal with eightfold rotational symmetry". Physical Review Letters. 59 (9): 1010–1013. Bibcode:1987PhRvL..59.1010W. doi:10.1103/PhysRevLett.59.1010. PMID 10035936.
Steinhardt, Paul; Bindi, Luca (2010). "Once upon a time in Kamchatka: the search for natural quasicrystals". Philosophical Magazine. 91 (19–21): 1. Bibcode:2011PMag...91.2421S. doi:10.1080/14786435.2010.510457.
Bindi, Luca; John M. Eiler; Yunbin Guan; Lincoln S. Hollister; Glenn MacPherson; Paul J. Steinhardt; Nan Yao (2012-01-03). "Evidence for the extraterrestrial origin of a natural quasicrystal". Proceedings of the National Academy of Sciences. 109 (5): 1396–1401. Bibcode:2012PNAS..109.1396B. doi:10.1073/pnas.1111115109.
Bindi, L.; Yao, N.; Lin, C.; Hollister, L. S.; Andronicos, C. L.; Distler, V. V.; Eddy, M. P.; Kostin, A.; Kryachko, V.; MacPherson, G. J.; Steinhardt, W. M.; Yudovskaya, M.; Steinhardt, P. J. (2015). "Natural quasicrystal with decagonal symmetry". Scientific Reports. 5: 9111. Bibcode:2015NatSR...5E9111B. doi:10.1038/srep09111. PMC 4357871Freely accessible. PMID 25765857.
de Wolf, R.M. & van Aalst, W. (1972). "The four dimensional group of γ-Na2CO3". Acta Crystallogr. A. 28: S111.
Pauling, L (1987-01-26). "So-called icosahedral and decagonal quasicrystals are twins of an 820-atom cubic crystal.". Physical Review Letters. 58 (4): 365–368. Bibcode:1987PhRvL..58..365P. doi:10.1103/PhysRevLett.58.365. PMID 10034915.
Kenneth Chang (October 5, 2011). "Israeli Scientist Wins Nobel Prize for Chemistry". NY Times.
Browne, Malcolm W. (1989-09-05). "Impossible' Form of Matter Takes Spotlight In Study of Solids". New York Times.
Levine, Dov; Steinhardt, Paul (1984). "Quasicrystals: A New Class of Ordered Structures". Physical Review Letters. 53 (26): 2477–2480. Bibcode:1984PhRvL..53.2477L. doi:10.1103/PhysRevLett.53.2477.
Makovicky, E. (1992), 800-year-old pentagonal tiling from Maragha, Iran, and the new varieties of aperiodic tiling it inspired. In: I. Hargittai, editor: Fivefold Symmetry, pp. 67–86. World Scientific, Singapore-London
Lu, P. J.; Steinhardt, P. J. (2007). "Decagonal and Quasi-Crystalline Tilings in Medieval Islamic Architecture". Science. 315 (5815): 1106–1110. Bibcode:2007Sci...315.1106L. doi:10.1126/science.1135491. PMID 17322056.
Makovicky, Emil (2007). "Comment on "Decagonal and Quasi-Crystalline Tilings in Medieval Islamic Architecture"". Science. 318 (5855): 1383–1383. Bibcode:2007Sci...318.1383M. doi:10.1126/science.1146262. PMID 18048668.
"Nobel win for crystal discovery". BBC News. 2011-10-05. Retrieved 2011-10-05.
Bohr, H. (1925). "Zur Theorie fastperiodischer Funktionen I". Acta Mathematicae. 45: 580. doi:10.1007/BF02395468.
de Bruijn, N. (1981). "Algebraic theory of Penrose's non-periodic tilings of the plane". Nederl. Akad. Wetensch. Proc. A84: 39.
Suck, Jens-Boie; Schreiber, M.; Häussler, Peter (2002). Quasicrystals: An Introduction to Structure, Physical Properties and Applications. Springer Science & Business Media. pp. 1–. ISBN 978-3-540-64224-4.
Paterson, Alan L. T. (1999). Groupoids, inverse semigroups, and their operator algebras. Springer. p. 164. ISBN 0-8176-4051-7.
Yamamoto, Akiji (2008). "Software package for structure analysis of quasicrystals". Science and Technology of Advanced Materials. 9 (1): 013001. Bibcode:2008STAdM...9a3001Y. doi:10.1088/1468-6996/9/3/013001. PMC 5099788Freely accessible. PMID 27877919.
Korepin, V.E. Completely integrable models in quasicrystals. Comm. Math. Phys. Volume 110, Number 1 (1987), 157–171.
Deguchi, Kazuhiko; Matsukawa, Shuya; Sato, Noriaki K.; Hattori, Taisuke; Ishida, Kenji; Takakura, Hiroyuki; Ishimasa, Tsutomu (2012). "Quantum critical state in a magnetic quasicrystal". Nature Materials. 11: 1013–6. doi:10.1038/nmat3432. PMID 23042414.
MacIá, Enrique (2006). "The role of aperiodic order in science and technology". Reports on Progress in Physics. 69 (2): 397–441. Bibcode:2006RPPh...69..397M. doi:10.1088/0034-4885/69/2/R03.
Tsai, An Pang (2008). "Icosahedral clusters, icosaheral order and stability of quasicrystals—a view of metallurgy". Science and Technology of Advanced Materials. 9 (1): 013008. Bibcode:2008STAdM...9a3008T. doi:10.1088/1468-6996/9/1/013008. PMC 5099795Freely accessible. PMID 27877926.
Louzguine-Luzgin, D. V.; Inoue, A. (2008). "Formation and Properties of Quasicrystals". Annual Review of Materials Research. 38: 403–423. Bibcode:2008AnRMS..38..403L. doi:10.1146/annurev.matsci.38.060407.130318.
"Sputtering technique forms versatile quasicrystalline coatings". MRS Bulletin. 36 (Cool: 581. 2011. doi:10.1557/mrs.2011.190.
Zou, Yu; Kuczera, Pawel; Sologubenko, Alla; Sumigawa, Takashi; Kitamura, Takayuki; Steurer, Walter; Spolenak, Ralph (2016). "Superior room-temperature ductility of typically brittle quasicrystals at small sizes". Nature Communications. 7: 12261. doi:10.1038/ncomms12261. PMC 4990631Freely accessible. PMID 27515779.
Fikar, Jan (2003). Al-Cu-Fe quasicrystalline coatings and composites studied by mechanical spectroscopy. École polytechnique fédérale de Lausanne EPFL, Thesis n° 2707 (2002). doi:10.5075/epfl-thesis-2707.
Kalman, Matthew (12 October 2011). "The Quasicrystal Laureate". MIT Technology Review. Retrieved 12 February 2016.
From Wikipedia, the free encyclopedia
Zellige terracotta tiles in Marrakech, forming edge-to-edge, regular and other tessellations
A periodic tiling has a repeating pattern. Some special kinds include regular tilings with regular polygonal tiles all of the same shape, and semiregular tilings with regular tiles of more than one shape and with every corner identically arranged. The patterns formed by periodic tilings can be categorized into 17 wallpaper groups. A tiling that lacks a repeating pattern is called "non-periodic". An aperiodic tiling uses a small set of tile shapes that cannot form a repeating pattern. In the geometry of higher dimensions, a space-filling or honeycomb is also called a tessellation of space.
A real physical tessellation is a tiling made of materials such as cemented ceramic squares or hexagons. Such tilings may be decorative patterns, or may have functions such as providing durable and water-resistant pavement, floor or wall coverings. Historically, tessellations were used in Ancient Rome and in Islamic art such as in the decorative geometric tiling of the Alhambra palace. In the twentieth century, the work of M. C. Escher often made use of tessellations, both in ordinary Euclidean geometry and in hyperbolic geometry, for artistic effect. Tessellations are sometimes employed for decorative effect in quilting. Tessellations form a class of patterns in nature, for example in the arrays of hexagonal cells found in honeycombs.
Decorative mosaic tilings made of small squared blocks called tesserae were widely employed in classical antiquity,[2] sometimes displaying geometric patterns.[3][4]
In 1619 Johannes Kepler made an early documented study of tessellations. He wrote about regular and semiregular tessellations in his Harmonices Mundi; he was possibly the first to explore and to explain the hexagonal structures of honeycomb and snowflakes.[5][6][7]
Roman geometric mosaic
Some two hundred years later in 1891, the Russian crystallographer Yevgraf Fyodorov proved that every periodic tiling of the plane features one of seventeen different groups of isometries.[8][9] Fyodorov's work marked the unofficial beginning of the mathematical study of tessellations. Other prominent contributors include Shubnikov and Belov (1964),[10] and Heinrich Heesch and Otto Kienzle (1963).[11]
In Latin, tessella is a small cubical piece of clay, stone or glass used to make mosaics.[12] The word "tessella" means "small square" (from tessera, square, which in turn is from the Greek word τέσσερα for four). It corresponds to the everyday term tiling, which refers to applications of tessellations, often made of glazed clay.
A rhombitrihexagonal tiling: tiled floor of a church in Seville, Spain, using square, triangle and hexagon prototiles
Tessellation or tiling in two dimensions is a topic in geometry that studies how shapes, known as tiles, can be arranged to fill a plane without any gaps, according to a given set of rules. These rules can be varied. Common ones are that there must be no gaps between tiles, and that no corner of one tile can lie along the edge of another.[13] The tessellations created by bonded brickwork do not obey this rule. Among those that do, a regular tessellation has both identical[a] regular tiles and identical regular corners or vertices, having the same angle between adjacent edges for every tile.[14] There are only three shapes that can form such regular tessellations: the equilateral triangle, square, and regular hexagon. Any one of these three shapes can be duplicated infinitely to fill a plane with no gaps.[6]
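As a quick check of the claim that only the triangle, square and hexagon tile regularly, here is a minimal Python sketch (an illustration, not part of the article): a regular p-gon tiles edge-to-edge exactly when some whole number of copies fits around a vertex.

```python
# A small sketch checking which regular p-gons can tile the plane:
# k copies meet exactly at a vertex iff k * interior_angle == 360 degrees.
from fractions import Fraction

def tiles_plane(p):
    """Return k if k regular p-gons fit exactly around a vertex, else None."""
    angle = Fraction((p - 2) * 180, p)   # interior angle in degrees
    k = Fraction(360) / angle
    return int(k) if k.denominator == 1 and k >= 3 else None

results = {p: tiles_plane(p) for p in range(3, 13)}
# Only the triangle (6 at a vertex), square (4) and hexagon (3) succeed.
print({p: k for p, k in results.items() if k})  # {3: 6, 4: 4, 6: 3}
```

Using exact fractions avoids floating-point comparisons; the pentagon fails because its interior angle, 108°, does not divide 360°.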
Many other types of tessellation are possible under different constraints. For example, there are eight types of semi-regular tessellation, made with more than one kind of regular polygon but still having the same arrangement of polygons at every corner.[15] Irregular tessellations can also be made from other shapes such as pentagons, polyominoes and in fact almost any kind of geometric shape. The artist M. C. Escher is famous for making tessellations with irregular interlocking tiles, shaped like animals and other natural objects.[16] If suitable contrasting colours are chosen for the tiles of differing shape, striking patterns are formed, and these can be used to decorate physical surfaces such as church floors.[17]
Elaborate and colourful zellige tessellations of glazed tiles at the Alhambra in Spain
More formally, a tessellation or tiling is a cover of the Euclidean plane by a countable number of closed sets, called tiles, such that the tiles intersect only on their boundaries. These tiles may be polygons or any other shapes.[b] Many tessellations are formed from a finite number of prototiles in which all tiles in the tessellation are congruent to the given prototiles. If a geometric shape can be used as a prototile to create a tessellation, the shape is said to tessellate or to tile the plane. The Conway criterion is a sufficient but not necessary set of rules for deciding if a given shape tiles the plane periodically without reflections: some tiles fail the criterion but still tile the plane.[19] No general rule has been found for determining if a given shape can tile the plane or not, which means there are many unsolved problems concerning tessellations.[18] For example, determining which types of convex pentagon can tile the plane remains an unsolved problem.[20]
Mathematically, tessellations can be extended to spaces other than the Euclidean plane.[6] The Swiss geometer Ludwig Schläfli pioneered this by defining polyschemes, which mathematicians nowadays call polytopes. These are the analogues to polygons and polyhedra in spaces with more dimensions. He further defined the Schläfli symbol notation to make it easy to describe polytopes. For example, the Schläfli symbol for an equilateral triangle is {3}, while that for a square is {4}.[21] The Schläfli notation makes it possible to describe tilings compactly. For example, a tiling of regular hexagons has three six-sided polygons at each vertex, so its Schläfli symbol is {6,3}.[22]
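The Schläfli condition for a regular Euclidean tiling {p,q} follows from the vertex angle sum: q copies of a regular p-gon meet at each vertex, so q(p-2)180/p = 360, which simplifies to (p-2)(q-2) = 4. A brief sketch (our illustration, not from the article) enumerating the solutions:

```python
# Sketch: a regular tiling {p, q} (q regular p-gons at each vertex) is
# Euclidean exactly when the vertex angles sum to 360 degrees, i.e.
# q * (p - 2) * 180 / p == 360, equivalently (p - 2) * (q - 2) == 4.
euclidean = [(p, q) for p in range(3, 20) for q in range(3, 20)
             if (p - 2) * (q - 2) == 4]
print(euclidean)  # [(3, 6), (4, 4), (6, 3)]
```

The three solutions correspond to the triangular {3,6}, square {4,4} and hexagonal {6,3} tilings mentioned in the text.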
In mathematics
Introduction to tessellations
Further information: Euclidean tilings of regular polygons, Uniform tiling, and List of convex uniform tilings
Mathematicians use some technical terms when discussing tilings. An edge is the intersection between two bordering tiles; it is often a straight line. A vertex is the point of intersection of three or more bordering tiles. Using these terms, an isogonal or vertex-transitive tiling is a tiling where every vertex point is identical; that is, the arrangement of polygons about each vertex is the same.[18] The fundamental region is a shape such as a rectangle that is repeated to form the tessellation.[23] For example, a regular tessellation of the plane with squares has a meeting of four squares at every vertex.[18]
The sides of the polygons are not necessarily identical to the edges of the tiles. An edge-to-edge tiling is any polygonal tessellation where adjacent tiles only share one full side, i.e., no tile shares a partial side or more than one side with any other tile. In an edge-to-edge tiling, the sides of the polygons and the edges of the tiles are the same. The familiar "brick wall" tiling is not edge-to-edge because the long side of each rectangular brick is shared with two bordering bricks.[18]
A normal tiling is a tessellation for which every tile is topologically equivalent to a disk, the intersection of any two tiles is a single connected set or the empty set, and all tiles are uniformly bounded. This means that a single circumscribing radius and a single inscribing radius can be used for all the tiles in the whole tiling; the condition disallows tiles that are pathologically long or thin.[24]
The 15th convex monohedral pentagonal tiling, discovered in 2015
A monohedral tiling is a tessellation in which all tiles are congruent; it has only one prototile. A particularly interesting type of monohedral tessellation is the spiral monohedral tiling. The first spiral monohedral tiling was discovered by Heinz Voderberg in 1936; the Voderberg tiling has a unit tile that is a nonconvex enneagon.[1] The Hirschhorn tiling, published by Michael D. Hirschhorn and D. C. Hunt in 1985, is a pentagon tiling using irregular pentagons: regular pentagons cannot tile the Euclidean plane as the internal angle of a regular pentagon, 3π/5, is not a divisor of 2π.[25][26][27]
A Pythagorean tiling
An isohedral tiling is a special variation of a monohedral tiling in which all tiles belong to the same transitivity class, that is, all tiles are transforms of the same prototile under the symmetry group of the tiling.[24] If a prototile admits a tiling, but no such tiling is isohedral, then the prototile is called anisohedral and forms anisohedral tilings.
A semi-regular (or Archimedean) tessellation uses more than one type of regular polygon in an isogonal arrangement. There are eight semi-regular tilings (or nine if the mirror-image pair of tilings counts as two).[29] These can be described by their vertex configuration; for example, a semi-regular tiling using squares and regular octagons has the vertex configuration 4.8.8 (each vertex has one square and two octagons).[30] Many non-edge-to-edge tilings of the Euclidean plane are possible, including the family of Pythagorean tilings, tessellations that use two (parameterised) sizes of square, each square touching four squares of the other size.[31] An edge tessellation is one in which each tile can be reflected over an edge to take up the position of a neighbouring tile, such as in an array of equilateral or isosceles triangles.[32]
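A vertex configuration is only feasible if the listed interior angles sum to exactly 360°; a hedged sketch (the function names are our own, not a standard API) checking a few configurations:

```python
# Sketch: validate a vertex configuration by summing interior angles.
# A feasible configuration, e.g. 4.8.8 (one square and two octagons),
# must total exactly 360 degrees.
from fractions import Fraction

def interior(p):
    """Interior angle of a regular p-gon, in degrees, as an exact fraction."""
    return Fraction((p - 2) * 180, p)

def valid_vertex(*polygons):
    return sum(interior(p) for p in polygons) == 360

print(valid_vertex(4, 8, 8))           # True: the truncated square tiling
print(valid_vertex(3, 3, 3, 3, 3, 3))  # True: six triangles at a vertex
print(valid_vertex(5, 5, 5))           # False: 3 * 108 = 324 degrees
```

Note this angle test is necessary but not sufficient: some configurations that pass it still cannot be extended to a tiling of the whole plane.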
This tessellated, monohedral street pavement uses curved shapes instead of polygons. It belongs to wallpaper group p3.
Wallpaper groups
Main article: Wallpaper group
Tilings with translational symmetry in two independent directions can be categorized by wallpaper groups, of which 17 exist.[33] It has been claimed that all seventeen of these groups are represented in the Alhambra palace in Granada, Spain. Though this is disputed,[34] the variety and sophistication of the Alhambra tilings have surprised modern researchers.[35] Of the three regular tilings two are in the p6m wallpaper group and one is in p4m. Tilings in 2D with translational symmetry in just one direction can be categorized by the seven frieze groups describing the possible frieze patterns.[36] Orbifold notation can be used to describe wallpaper groups of the Euclidean plane.[37]
Aperiodic tilings
Main articles: Aperiodic tiling and List of aperiodic sets of tiles
A Penrose tiling, with several symmetries but no periodic repetitions
Penrose tilings, which use two different quadrilaterals, are the best known example of tiles that forcibly create non-periodic patterns. They belong to a general class of aperiodic tilings, which use tiles that cannot tessellate periodically. The recursive process of substitution tiling is a method of generating aperiodic tilings. One class that can be generated in this way is the rep-tiles; these tilings have surprising self-replicating properties.[38] Pinwheel tilings are non-periodic, using a rep-tile construction; the tiles appear in infinitely many orientations.[39] It might be thought that a non-periodic pattern would be entirely without symmetry, but this is not so. Aperiodic tilings, while lacking in translational symmetry, do have symmetries of other types, by infinite repetition of any bounded patch of the tiling and in certain finite groups of rotations or reflections of those patches.[40] A substitution rule, such as can be used to generate some Penrose patterns using assemblies of tiles called rhombs, illustrates scaling symmetry.[41] A Fibonacci word can be used to build an aperiodic tiling, and to study quasicrystals, which are structures with aperiodic order.[42]
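The Fibonacci word mentioned above can be generated by repeatedly applying the substitution 1 → 10, 0 → 1; a brief sketch (our illustration):

```python
# Sketch: the Fibonacci word, built by the substitution 1 -> 10, 0 -> 1,
# is a standard aperiodic sequence used to model one-dimensional
# quasicrystal-like order.
def fibonacci_word(n):
    """Return the n-th iterate of the substitution, starting from '1'."""
    s = "1"
    for _ in range(n):
        s = "".join("10" if c == "1" else "1" for c in s)
    return s

print(fibonacci_word(5))  # '1011010110110' (length 13, a Fibonacci number)
```

Each iterate is the concatenation of the two previous ones, mirroring the Fibonacci recurrence, which is why the word never settles into a periodic pattern.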
A set of 13 Wang tiles that tile the plane only aperiodically
Wang tiles are squares coloured on each edge, and placed so that abutting edges of adjacent tiles have the same colour; hence they are sometimes called Wang dominoes. A suitable set of Wang dominoes can tile the plane, but only aperiodically. This is known because any Turing machine can be represented as a set of Wang dominoes that tile the plane if and only if the Turing machine does not halt. Since the halting problem is undecidable, the problem of deciding whether a Wang domino set can tile the plane is also undecidable.[43][44][45][46][47]
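The matching rule for Wang tiles is easy to state in code; the sketch below (the tile encoding as (top, right, bottom, left) colour tuples is our own choice) checks that all abutting edges of a candidate arrangement agree:

```python
# Sketch: check that a grid of Wang tiles, each given as a
# (top, right, bottom, left) tuple of edge colours, has matching
# colours on every pair of abutting edges.
def valid_wang(grid):
    for r, row in enumerate(grid):
        for c, (top, right, bottom, left) in enumerate(row):
            # right edge must match the left edge of the tile to the right
            if c + 1 < len(row) and right != row[c + 1][3]:
                return False
            # bottom edge must match the top edge of the tile below
            if r + 1 < len(grid) and bottom != grid[r + 1][c][0]:
                return False
    return True

a = ("red", "green", "red", "green")
b = ("red", "green", "red", "green")
print(valid_wang([[a, b], [a, b]]))  # True: all shared edges match
```

Verifying a finite arrangement is trivial; the undecidability discussed above concerns whether a given tile set admits any tiling of the whole infinite plane.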
Random Truchet tiling
Truchet tiles are square tiles decorated with patterns so they do not have rotational symmetry; in 1704, Sébastien Truchet used a square tile split into two triangles of contrasting colours. These can tile the plane either periodically or randomly.[48][49]
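A minimal sketch of a random Truchet-style tiling, rendering each square tile as one of its two diagonals in ASCII (the rendering choice and seeding are ours):

```python
# Sketch: a random Truchet-style tiling in ASCII. Each square tile is
# split into two triangles by one of its two diagonals, drawn '/' or '\'.
import random

def truchet(rows, cols, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    return "\n".join(
        "".join(rng.choice("/\\") for _ in range(cols)) for _ in range(rows)
    )

print(truchet(4, 8))
```

With a fixed seed the output is deterministic; varying the seed gives the random tilings the text describes, while forcing an alternating choice would give a periodic one.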
Tessellations and colour
Further information: four colour theorem
If the colours of this tiling are to form a pattern by repeating this rectangle as the fundamental domain, at least seven colours are required; more generally, at least four colours are needed.
Sometimes the colour of a tile is understood as part of the tiling; at other times arbitrary colours may be applied later. When discussing a tiling that is displayed in colours, to avoid ambiguity one needs to specify whether the colours are part of the tiling or just part of its illustration. This affects whether tiles with the same shape but different colours are considered identical, which in turn affects questions of symmetry. The four colour theorem states that for every tessellation of a normal Euclidean plane, with a set of four available colours, each tile can be coloured in one colour such that no tiles of equal colour meet at a curve of positive length. The colouring guaranteed by the four-colour theorem does not generally respect the symmetries of the tessellation. To produce a colouring which does, it is necessary to treat the colours as part of the tessellation. Here, as many as seven colours may be needed, as in the picture at right.[50]
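To make the colouring question concrete, the greedy strategy below colours a tile-adjacency graph (the example graph is hypothetical); the four colour theorem guarantees that four colours suffice for any planar adjacency, though greedy colouring may use more on unlucky orderings:

```python
# Sketch: greedy colouring of a tile-adjacency graph. Each tile gets the
# smallest colour index not used by an already-coloured neighbour.
def greedy_colour(adj):
    colours = {}
    for v in adj:
        used = {colours[u] for u in adj[v] if u in colours}
        colours[v] = next(c for c in range(len(adj)) if c not in used)
    return colours

# Hypothetical small adjacency: tiles A, B, C mutually touch; D touches C.
adj = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}
print(greedy_colour(adj))  # {'A': 0, 'B': 1, 'C': 2, 'D': 0}
```

Note this colours the adjacency graph only; it does not attempt to respect the tiling's symmetries, which, as the text says, may force up to seven colours.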
Tessellations with polygons
A Voronoi tiling, in which the cells are always convex polygons
Next to the various tilings by regular polygons, tilings by other polygons have also been studied.
Any triangle or quadrilateral (even non-convex) can be used as a prototile to form a monohedral tessellation, often in more than one way. Copies of an arbitrary quadrilateral can form a tessellation with translational symmetry and 2-fold rotational symmetry with centres at the midpoints of all sides. For an asymmetric quadrilateral this tiling belongs to wallpaper group p2. As fundamental domain we have the quadrilateral. Equivalently, we can construct a parallelogram subtended by a minimal set of translation vectors, starting from a rotational centre. We can divide this by one diagonal, and take one half (a triangle) as fundamental domain. Such a triangle has the same area as the quadrilateral and can be constructed from it by cutting and pasting.[51]
If only one shape of tile is allowed, tilings exist with convex N-gons for N equal to 3, 4, 5 and 6. For N = 5, see Pentagonal tiling and for N = 6, see Hexagonal tiling.
For results on tiling the plane with polyominoes, see Polyomino § Uses of polyominoes.
Voronoi tilings
Voronoi or Dirichlet tilings are tessellations where each tile is defined as the set of points closest to one of the points in a discrete set of defining points. (Think of geographical regions where each region is defined as all the points closest to a given city or post office.)[52][53] The Voronoi cell for each defining point is a convex polygon. The Delaunay triangulation is a tessellation that is the dual graph of a Voronoi tessellation. Delaunay triangulations are useful in numerical simulation, in part because among all possible triangulations of the defining points, Delaunay triangulations maximize the minimum of the angles formed by the edges.[54] Voronoi tilings with randomly placed points can be used to construct random tilings of the plane.[55]
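The defining rule, that each tile is the set of points nearest one site, translates directly into code; a small sketch with arbitrary example sites on a coarse grid:

```python
# Sketch: label each grid point with the index of its nearest site, the
# defining rule of a Voronoi tessellation. Sites and grid size here are
# arbitrary illustrative choices.
def voronoi_labels(sites, width, height):
    def nearest(x, y):
        return min(range(len(sites)),
                   key=lambda i: (sites[i][0] - x) ** 2 + (sites[i][1] - y) ** 2)
    return [[nearest(x, y) for x in range(width)] for y in range(height)]

sites = [(1, 1), (6, 2), (3, 6)]
for row in voronoi_labels(sites, 8, 8):
    print("".join(str(c) for c in row))
```

Squared distances suffice since only the ordering matters; printed on a grid, the label regions approximate the convex polygonal cells the text describes.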
Tessellations in higher dimensions
Main article: Honeycomb (geometry)
Tessellation can be extended to three dimensions. Certain polyhedra can be stacked in a regular crystal pattern to fill (or tile) three-dimensional space, including the cube (the only Platonic polyhedron to do so), the rhombic dodecahedron, the truncated octahedron, and triangular, quadrilateral, and hexagonal prisms, among others.[56] Any polyhedron that fits this criterion is known as a plesiohedron, and may possess between 4 and 38 faces.[57] Naturally occurring rhombic dodecahedra are found as crystals of andradite (a kind of garnet) and fluorite.[58][59]
A Schwarz triangle is a spherical triangle that can be used to tile a sphere.[60]
Tessellations in three or more dimensions are called honeycombs. In three dimensions there is just one regular honeycomb, which has eight cubes at each polyhedron vertex. Similarly, in three dimensions there is just one quasiregular[c] honeycomb, which has eight tetrahedra and six octahedra at each polyhedron vertex. However, there are many possible semiregular honeycombs in three dimensions.[61] Uniform polyhedra can be constructed using the Wythoff construction.[62]
The Schmitt-Conway biprism is a convex polyhedron with the property of tiling space only aperiodically.[63]
Tessellations in non-Euclidean geometries
Rhombitriheptagonal tiling in hyperbolic plane, seen in Poincaré disk model projection
The regular {3,5,3} icosahedral honeycomb, one of four regular compact honeycombs in hyperbolic 3-space
It is possible to tessellate in non-Euclidean geometries such as hyperbolic geometry. A uniform tiling in the hyperbolic plane (which may be regular, quasiregular or semiregular) is an edge-to-edge filling of the hyperbolic plane, with regular polygons as faces; these are vertex-transitive (transitive on its vertices), and isogonal (there is an isometry mapping any vertex onto any other).[64][65]
A uniform honeycomb in hyperbolic space is a uniform tessellation of uniform polyhedral cells. In 3-dimensional hyperbolic space there are nine Coxeter group families of compact convex uniform honeycombs, generated as Wythoff constructions, and represented by permutations of rings of the Coxeter diagrams for each family.[66]
In art
Further information: Mathematics and art
A quilt showing a regular tessellation pattern.
In architecture, tessellations have been used to create decorative motifs since ancient times. Mosaic tilings often had geometric patterns.[4] Later civilisations also used larger tiles, either plain or individually decorated. Some of the most decorative were the Moorish wall tilings of Islamic architecture, using Girih and Zellige tiles in buildings such as the Alhambra[67] and La Mezquita.[68]
Tessellations frequently appeared in the graphic art of M. C. Escher; he was inspired by the Moorish use of symmetry in places such as the Alhambra when he visited Spain in 1936.[69] Escher made four "Circle Limit" drawings of tilings that use hyperbolic geometry.[70][71] For his woodcut "Circle Limit IV" (1960), Escher prepared a pencil and ink study showing the required geometry.[72] Escher explained that "No single component of all the series, which from infinitely far away rise like rockets perpendicularly from the limit and are at last lost in it, ever reaches the boundary line."[73]
Tessellated designs often appear on textiles, whether woven, stitched in or printed. Tessellation patterns have been used to design interlocking motifs of patch shapes in quilts.[74][75]
Tessellations are also a main genre in origami (paper folding), where pleats are used to connect molecules such as twist folds together in a repeating fashion.[76]
In manufacturing
Tessellation is used in manufacturing industry to reduce the wastage of material (yield losses) such as sheet metal when cutting out shapes for objects like car doors or drinks cans.[77]
In nature
Tessellate pattern in a Colchicum flower
Main article: Patterns in nature
The honeycomb provides a well-known example of tessellation in nature with its hexagonal cells.[78]
In botany, the term "tessellate" describes a checkered pattern, for example on a flower petal, tree bark, or fruit. Flowers including the Fritillary[79] and some species of Colchicum are characteristically tessellate.[80]
Many patterns in nature are formed by cracks in sheets of materials. These patterns can be described by Gilbert tessellations,[81] also known as random crack networks.[82] The Gilbert tessellation is a mathematical model for the formation of mudcracks, needle-like crystals, and similar structures. The model, named after Edgar Gilbert, allows cracks to form starting from points scattered at random over the plane; each crack propagates in two opposite directions along a line through the initiation point, its slope chosen at random, creating a tessellation of irregular convex polygons.[83] Basaltic lava flows often display columnar jointing as a result of contraction forces causing cracks as the lava cools. The extensive crack networks that develop often produce hexagonal columns of lava. One example of such an array of columns is the Giant's Causeway in Northern Ireland.[84] Tessellated pavement, a characteristic example of which is found at Eaglehawk Neck on the Tasman Peninsula of Tasmania, is a rare sedimentary rock formation where the rock has fractured into rectangular blocks.[85]
Other natural patterns occur in foams; these are packed according to Plateau's laws, which require minimal surfaces. Such foams present a problem in how to pack cells as tightly as possible: in 1887, Lord Kelvin proposed a packing using only one solid, the bitruncated cubic honeycomb with very slightly curved faces. In 1993, Denis Weaire and Robert Phelan proposed the Weaire–Phelan structure, which uses less surface area to separate cells of equal volume than Kelvin's foam.[86]
In puzzles and recreational mathematics
Traditional tangram dissection puzzle
Main articles: Tiling puzzle and recreational mathematics
Tessellations have given rise to many types of tiling puzzle, from traditional jigsaw puzzles (with irregular pieces of wood or cardboard)[87] and the tangram[88] to more modern puzzles which often have a mathematical basis. For example, polyiamonds and polyominoes are figures of regular triangles and squares, often used in tiling puzzles.[89][90] Authors such as Henry Dudeney and Martin Gardner have made many uses of tessellation in recreational mathematics. For example, Dudeney invented the hinged dissection,[91] while Gardner wrote about the rep-tile, a shape that can be dissected into smaller copies of the same shape.[92][93] Inspired by Gardner's articles in Scientific American, the amateur mathematician Marjorie Rice found four new tessellations with pentagons.[94][95] Squaring the square is the problem of tiling an integral square (one whose sides have integer length) using only other integral squares.[96][97] An extension is squaring the plane, tiling it by squares whose sizes are all natural numbers without repetitions; James and Frederick Henle proved that this was possible.[98]
Escher, M. C. (1974). J. L. Locher, ed. The World of M. C. Escher (New Concise NAL ed.). Abrams. ISBN 0-451-79961-5.
Gardner, Martin (1989). Penrose Tiles to Trapdoor Ciphers. Cambridge University Press. ISBN 978-0-88385-521-8.
Grünbaum, Branko; Shephard, G. C. (1987). Tilings and Patterns. W. H. Freeman. ISBN 0-7167-1193-1.
Gullberg, Jan (1997). Mathematics From the Birth of Numbers. Norton. ISBN 0-393-04002-X.
Magnus, Wilhelm (1974). Noneuclidean Tesselations and Their Groups. Academic Press. ISBN 978-0-12-465450-1.
Stewart, Ian (2001). What Shape is a Snowflake?. Weidenfeld and Nicolson. ISBN 0-297-60723-5.
and across the whole territory...
yanis la chouette
Messages: 597
Registration date: 24/02/2017
Re: Whirlpool, a maelstrom of emotion disambiguation and Y'becca
Post by yanis la chouette on Tue 2 May - 16:28
The swan and the barn owl, TAY...
"They chatter... Poor little one... The farmer is truly cruel to go on feeding them
GMOs and Scarcity..." sings the Swan.
"Yet, just like the old rooster, the farmer has his grains of barley and wheat.
The autonomy allowances, inspired by the relationship between Jesus and Mary Magdalene,
between Mohamed and women, between Zarathustra and Drunkenness, between Jacob and Esau,
to let him search, discover and propose; I had consented to them
while showing the ethnicities and the ethics of life to which these various figures
gave themselves over, just as Jean Marais and Jean Cocteau founded their own consciences
on respect for their bonds... More than a Progressive, I have been Universal..."
"And for me..." sings the swan. The epidemic reigned over the people of the ducks. And the culls,
like so many Saint Bartholomew's massacres, grew great under the reigns of Jospin, Le Foll and Macron.
"As a trumpet of Jericho, I answer you that famine stands at the gates of the worlds...
So they stockpile; here are the remnants of Conscience and Humanism that remain to us:
stem cells, sperm, ovaries, seeds, DNA, we stockpile...
Like a scrapyard that takes things in and strips them down, such is our conscience and our humanisms before
Famine, Homicide and Knowledge. We stockpile and resell it, blaming Nature
to relieve the Damned and his conscience: Emmanuel and Marion...
"Who?" trumpets Claude the Swan.
"No one and Everyone! Better than everyone: Marion and Emmanuel," trumpets TAY.
"Poor Capon... You hoot at the very moment they lead you to the butcher's without the pleasure
of the wing and the thigh," replies Claude the Swan.
"It is not true... They hear!" sings TAY the barn owl...
"AND WHO and WHAT...?" says Claude the Swan.
"Well, nothing, precisely: those they call the calves, the lambs, the goats, the bitches,
the spinsters, the orphans and Benjamin the Donkey... You cannot forget that capons
have a bravery of their own, and it is called fatherhood... In the eyes of the History of Conscience,
the revolution is but a pale figure beside the adversity and temperament of the farming people and
the fishermen, first pillars of Evolution and of conscience."
"The back room, the cash and the payment," replies a young girl of the Saint Cyprien Market under the
gaze of her she-dragons and her shepherd.
"OF WHOM DO YOU SPEAK, YOUNG GIRL?" reply the two Grognards, TAY and Claude.
"Without a Legion of Honour, I speak to you of my pregnancy and my motherhood..." says the young girl
in the Phrygian cap, accompanied by her she-dragons and her shepherd...
Lewis Theory
Most chemistry is taught in terms of Lewis theory
Most chemistry is learned in terms of Lewis theory
Most chemistry is understood in terms of Lewis theory
Most chemists think in terms of Lewis theory most of the time
So, what is Lewis theory?
Electrons dance to a subtle & beautiful quantum mechanical tune,
and the resulting patterns are complicated & exquisite.
As chemists we try to understand the dance.
Our physicist friends attempt to understand the music.
What Is Lewis Theory?
Lewis theory is the study of the patterns that atoms display when they bond and react with each other.
The Lewis approach to understanding chemical structure and bonding is to look at many chemical systems, to study the patterns, count the electrons in the various patterns and to devise simple rules associated with stable/unstable atomic, molecular and ionic electronic configurations.
Lewis theory makes no attempt to explain how or why these empirically derived numbers of electrons – these magic numbers – arise. It is striking, though, that the magic numbers are generally (but not exclusively) non-negative even integers: 0, 2, 4, 6, 8...
For example:
• Atoms and atomic ions show particular stability when they have a full outer or valence shell of electrons and are isoelectronic with He, Ne, Ar, Kr & Xe: Magic numbers 2, 10, 18, 36, 54.
• Atoms have a shell electronic structure: Magic numbers 2, 8, 8, 18, 18.
• Sodium metal reacts to give the sodium ion, Na+, a species that has a full octet of electrons in its valence shell. Magic number 8.
• A covalent bond consists of a shared pair of electrons: Magic number 2.
• Atoms have valency, the number of chemical bonds formed by an element, which is the number of electrons in the valence shell divided by 2: Magic numbers 0 to 8.
• Ammonia, H3N:, has a lone pair of electrons in its valence shell: Magic number 2.
• Ethene, H2C=CH2, has a double covalent bond: Magic numbers (2 + 2)/2 = 2.
• Nitrogen, N2, N≡N, has a triple covalent bond: Magic numbers (2 + 2 + 2)/2 = 3.
• The methyl radical, H3C•, has a single unpaired electron in its valence shell: Magic number 1.
• Lewis bases (proton abstractors & nucleophiles) react via an electron pair: Magic number 2.
• Electrophiles, Lewis acids, accept a pair of electrons in order to fill their octet: Magic numbers 2 + 6 = 8.
• Oxidation involves loss of electrons, reduction involves gain of electrons. Every redox reaction involves concurrent oxidation and reduction: Magic number 0 (overall).
• Curly arrows represent the movement of an electron pair: Magic number 2.
• Ammonia, NH3, and phosphine, PH3, are isoelectronic in that they have the same Lewis structure. Both have three covalent bonds and a lone pair of electrons: Magic numbers 2 & 8.
• Aromaticity in benzene is associated with the species having 4n+2 π-electrons. Magic number 6. Naphthalene is also aromatic: Magic number 10.
• Etc.
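The electron bookkeeping above is easy to mechanise. As an illustrative sketch (ours, not from the original text), the noble-gas magic numbers 2, 10, 18, 36, 54 are just the running totals of the shell/period lengths 2, 8, 8, 18, 18 quoted above:

```python
# Illustrative sketch: the noble-gas "magic numbers" are cumulative
# sums of the period lengths 2, 8, 8, 18, 18 (He, Ne, Ar, Kr, Xe).
period_lengths = [2, 8, 8, 18, 18]

magic_numbers = []
total = 0
for length in period_lengths:
    total += length
    magic_numbers.append(total)

print(magic_numbers)  # [2, 10, 18, 36, 54]
```

Pure electron accountancy: the code explains nothing about *why* these totals are special, which is exactly the point being made about Lewis theory.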
Lewis theory is numerology.
Lewis theory is electron accountancy: look for the patterns and count the electrons.
Lewis theory is also highly eclectic in that it greedily begs/borrows/steals/assimilates numbers from deeper, predictive theories and incorporates them into itself, as we shall see.
Physics ••• Stamp Collecting ••• Lewis Theory
Ernest Rutherford famously said: "Physics is the only real science. The rest are just stamp collecting."
Imagine an alien culture trying to understand planet Earth using only a large collection of postage stamps. The aliens would see all sorts of patterns and would be able to deduce the existence of: countries, national currencies, pricing strategies, differential exchange rates, inflation, the existence of heads of state, what stamps are used for, etc., and – importantly – they would be able to make predictions about missing stamps.
But the aliens would be able to infer little about the biology of life on our planet by only studying stamps, although there would be hints in the data: various creatures & plants, males & females, etc.
So it is with atoms, ions, molecules, molecular ions, materials, etc.
As chemists we see many patterns in chemical structure and reactivity, and we try to draw conclusions and make predictions using these patterns: This is Lewis theory.
But this Lewis approach is not complete and it only gives hints about the underlying quantum mechanics, a world observed through spectroscopy and mathematics.
Consider the pattern shown in Diagram-1:
Now expand the view slightly and look at Diagram-2:
You may feel that the right hand side "does not fit the pattern" of Diagram-1 and so is an anomaly.
So, is it an anomaly?
Zoom out a bit and look at the pattern in Diagram-3, the anomaly disappears:
But then look at Diagram-4. The purple patch on the upper right hand side does not seem to fit the pattern and so it may represent an anomaly:
But zooming right out to Diagram-5 we see that everything is part of a larger regular pattern:
Digital Flowers at DryIcons
When viewing the larger scale the overall pattern emerges and everything becomes clear. Of course, the Digital Flowers pattern is trivial, whereas the interactions of electrons and positive nuclei are astonishingly subtle.
This situation is exactly like learning about chemical structure and reactivity using Lewis theory. First we learn about the 'Lewis octet', and we come to believe that the pattern of chemistry can be explained in terms of the very useful Lewis octet model. Then we encounter phosphorus pentachloride, PCl5, and discover that it has 10 electrons in its valence shell. Is PCl5 an anomaly? No! The fact is that the pattern generated through the Lewis octet model is just too simple. As we zoom out and look at more chemical structure and reactivity examples we see that the pattern is more complicated than indicated by the Lewis octet magic number 8.
Our problem is that although the patterns of electrons in chemical systems are in principle predictable, new patterns always come as a surprise when they are first discovered:
• The periodicity of the chemical elements
• The 4n + 2 rule of aromaticity
• The observation that sulfur exists in S8 rings
• The discovery of neodymium magnets in the 1980s
• The serendipitous discovery of how to make the fullerene C60 in large amounts
While these observations can be explained after the fact, they were not predicted beforehand. We do not have the mathematical tools to predict the nature of the quantum patterns with absolute precision.
The chemist's approach to understanding structure and reactivity is to count the electrons and take note of the patterns. This is Lewis theory.
Some Chemistry Patterns
The following diagrams show chemistry patterns. Do not think of these as chemical systems yet, but just as patterns.
Shell Structure of Atoms:
Atomic Orbitals:
Filling of Atomic Orbitals (Pauli Exclusion Principle, Aufbau Principle, Hund's Rule & Madelung's Rule):
The Janet or Left-Step Periodic Table:
The conventional representation of the periodic table can be regarded as a mapping applied to the Janet formulation. The pattern is clearer in the Janet periodic table.
VSEPR Geometries:
Congeneric Series, Planars & Volumes:
Homologous Series of Linear Alkanes:
Aromatic Hydrocarbon π-systems:
As chemists we attempt to 'explain' many of these patterns in terms of electron accountancy and magic numbers.
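One of the patterns listed above, the filling order of atomic orbitals, can be generated in a few lines. This is a sketch under the usual statement of Madelung's rule (orbitals fill in order of increasing n + l, ties broken by smaller n); the function name is ours:

```python
# Sketch of Madelung's rule: sort orbitals by (n + l), breaking ties
# with the smaller principal quantum number n.
def madelung_order(max_n=4):
    letters = "spdf"
    orbitals = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    orbitals.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{letters[l]}" for n, l in orbitals]

print(madelung_order())
# ['1s', '2s', '2p', '3s', '3p', '4s', '3d', '4p', '4d', '4f']
```

Note the characteristic 4s-before-3d inversion falls out of the sort key, which is why the pattern is clearer in the Janet left-step periodic table.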
Caught In The Act: Theoretical Theft & Magic Number Creation
The crucial period for our understanding of chemical structure & bonding occurred in the busy chemistry laboratories at UC Berkeley under the leadership of G. N. Lewis in the early years of the 20th century.
Lewis and colleagues were actively debating the new ideas about atomic structure, particularly the Rutherford & Bohr atoms and postulated how they might give rise to models of chemical structure, bonding & reactivity.
Indeed, the Lewis model uses ideas directly from the Bohr atom. The Rutherford atom shows electrons whizzing about the nucleus, but to the trained eye there is no structure to the whizzing. Introduced by Niels Bohr in 1913, the Bohr model is a quantum physics modification of the Rutherford model and is sometimes referred to as the Rutherford–Bohr model. (Bohr was working under Rutherford at the time.) The model's key success lay in explaining (correlating with) the Rydberg formula for the spectral emission lines of atomic hydrogen.
[Greatly simplifying both the history & the science:] In 1916 atomic theory forked or bifurcated into physics and chemistry streams:
• The physics fork was initiated and developed by Bohr, Pauli, Sommerfeld and others. Research involved studying atomic spectroscopy, and this led to the discovery of the four quantum numbers – principal, azimuthal, magnetic & spin – and their selection rules. More advanced models of chemical structure, bonding & reactivity are based upon the Schrödinger equation, in which the electron is treated as a resonant standing wave. This has developed into molecular orbital theory and the discipline of computational chemistry.
Note: quantum numbers and their selection rules are not 'magic' numbers. The quantum numbers represent deep symmetries that are entirely self consistent across all quantum mechanics.
• The chemistry fork started when Lewis published his first ideas about the patterns he saw in chemical bonding and reactivity in 1916, and later in a more advanced form in 1923. Lewis realised that electrons could be counted and that there were patterns associated with structure, bonding and reactivity behaviour.
These early ideas have been extensively developed and are now taught to chemistry students the world over. This is Lewis theory.
Theories, Models, Ideas...
A word should be said about the philosophical nature of theory as it is possible to take two extreme positions: realist or anti-realist.
• The realist believes that theories are a true, real and actual representation of reality in that theories describe what the physical world is actually like.
• To the anti-realist theories are simply a representation or description of reality.
• An instrumentalist is an anti-realist who simply uses the toolbox of conceptual and mathematical techniques to describe and understand the physical world.
Even though theories are all we have, they should be used but not believed.
Do not inhale.
This is crucial, because when tested to the extreme all chemical theories are found wanting, a situation that always confuses the realist. The cynical anti-realist knows model breakdown is inevitable because theories are not real.
Now, chemistry teachers and textbook authors [yes, we are all guilty] are prone to present arguments about the nature of chemical structure and bonding without adding any anti-realist provisos... which confuses students. (But hey, we were confused when we were learning this stuff.)
For example, most of the structure and bonding ideas in textbooks, and this web book is no exception, are expressed in terms of Lewis/VSEPR ideas.
Lewis theory is a fantastic model and it works nearly all the time. In fact, Lewis theory is so good that most chemists can get away with assuming that it is true, that it is real.
However, Karl Popper introduced the notion that for a theory to be deemed scientific it must be possible to devise experiments that test the theory to destruction: can the theory be falsified? (Note: religions all fail this test.)
Popper Falsification
Black Crow Theory
Theory: All crows are black
If just one white crow is found, for whatever reason, then the black crow theory cannot be a true and full description of the world. It cannot be real.
Black crow theory (BCT) may remain a useful model that works most of the time, but it cannot be true in a philosophical sense.
There is one very common molecule known to everybody that is not explained using Lewis logic:
Diatomic oxygen, O2
Oxygen, O2, "should" – according to Lewis logic – have the structure O=O and be like F2 and N2
F–F O=O N≡N
Indeed, this is the representation commonly used in beginning and high school level textbooks.
But, oxygen, O2, presents to the experimentalist as a blue, paramagnetic, diradical species, •O-O•, able to exist in singlet and triplet forms.
The physical and chemical properties of O2 are NOT explained by Lewis logic.
Diatomic oxygen is a white crow.
The empirical (experiential) observations associated with O2 can be explained in terms of molecular orbital theory, see elsewhere in this webbook.
This does not totally invalidate Lewis theory, but it does warn us that the Lewis model is fallible and so the model cannot be "a true and real representation of the physical world", in the realist sense.
Question: Is O2 an anomaly?
No, O2 is not an anomaly. The diradical structure is inevitable with respect to the patterns of the molecular orbitals, as discussed here. The quantum mechanical patterns of molecular structure are more subtle than the magic numbers of Lewis theory.
Question: Is molecular orbital theory a chemical theory of everything? Is MO theory real?
No, molecular orbital theory is not real. While MO theory gives a more accurate and capable description of diatomic oxygen O2 than Lewis theory, MO theory cannot explain why chiral (optically active) molecules like glucose rotate plane polarised light. This phenomenon requires explanation in terms of quantum electrodynamics, QED.
Modern Lewis Theory
At the top of this page it was stated that "Lewis theory is highly eclectic in that it greedily begs/borrows/steals/assimilates numbers from deeper, predictive theories and incorporates them into itself". Indeed, modern Lewis theory is the set of assimilated magic numbers & electron accountancy rules used by most chemists to explain chemical structure and reactivity.
• Electrons in Shells & The Lewis Octet Rule:
The idea that electrons are point like negative charges that exist in atomic shells is the quintessential Lewis approach.
The full shell numbers: 2, 8, 8, 18, 18 are determined by experiment.
There is no reason within Lewis theory as to why the numbers should be as they are, other than the pattern itself, the dance:
The first magic number of Lewis theory is 8, the number associated with Lewis octet. The octet rule is taught to beginning chemistry students the world over. It is a wonderful and useful rule because so much main group and organic chemistry – s- and p-block chemistry – exhibits patterns of structure, bonding & reactivity that can be explained in terms of the Lewis octet rule of 8 electrons... and 2 electrons...
Students must soon realise that 8 is not the only magic number because helium, He, and the lithium cation, Li+, have two electrons in their full shell, magic number 2. So, there are two Lewis magic numbers 2 and 8. This is the first hint that matters are a little more involved than first indicated.
Question: Is the magic number 2 an anomaly?
No, it tells us that the Lewis octet rule, although useful, is not subtle enough to describe the entire pattern.
The patterns we see are an echo of the underlying quantum mechanical patterns.
• Covalent Bonding:
A covalent bond is a form of chemical bonding characterised by the sharing of pairs of electrons between atoms.
The Lewis octet rule – with its magic numbers of 2 & 8 – can be used to explain much main group and organic chemistry. In the diagram of methane, CH4, above, the carbon atom has 8 electrons in its valence shell and each hydrogen has 2 electrons in its valence shell.
However, at university entrance level students will have come across two phosphorus chlorides, phosphorus trichloride, PCl3, and phosphorus pentachloride, PCl5.
There is no issue with PCl3 as it has a full octet, magic number 8, and is isoelectronic with ammonia, NH3.
Phosphorus pentachloride, PCl5, has 10 electrons in its valence shell, and this represents a NEW Lewis magic number, 10.
The structure and geometry/shape of phosphorus pentachloride, PCl5, is usually covered with reference to valence shell electron pair repulsion (VSEPR), as discussed on the next page of this webbook.
• Ionic Bonding:
First introduced by Walther Kossel, the ionic bond can be understood within the Lewis model. The reaction between lithium and fluorine gives the ionic salt lithium fluoride, LiF, where the Li+ ion is isoelectronic with He and the F− ion is isoelectronic with Ne; both ions have filled valence shells, magic numbers 2 & 8:
• Isoelectronicity:
From Wikipedia: "Two or more molecular entities (atoms, molecules, ions) are described as being isoelectronic with each other if they have the same number of valence electrons and the same structure (number and connectivity of atoms), regardless of the nature of the elements involved."
• The cations K+, Ca2+, and Sc3+, and the anions Cl−, S2−, and P3− are all isoelectronic with the Ar atom.
• The diatomics CO, N2 & NO+ are isoelectronic because each has 2 nuclei and 10 valence electrons (4 + 6, 5 + 5, and 5 + 6 − 1, respectively).
Isoelectronic structures represent islands of stability in "chemistry structure space", where chemistry structure space represents the set of all conceivable structures, possible and impossible.
Lewis theory does not explain how or why the various sets of isoelectronic structures are stable, but it takes note of the patterns of stability.
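Isoelectronicity, in the valence-electron-counting sense used above, is pure accountancy and can be sketched in code. The helper below is illustrative and ours; the valence-electron counts for C, N and O are the standard group values:

```python
# Sketch: count valence electrons to test isoelectronicity.
valence = {"C": 4, "N": 5, "O": 6}  # standard group valence counts

def valence_electrons(atoms, charge=0):
    """Total valence electrons of a species, corrected for its charge."""
    return sum(valence[a] for a in atoms) - charge

species = {
    "CO":  (["C", "O"], 0),
    "N2":  (["N", "N"], 0),
    "NO+": (["N", "O"], +1),
}
counts = {name: valence_electrons(*args) for name, args in species.items()}
print(counts)  # {'CO': 10, 'N2': 10, 'NO+': 10} -- all isoelectronic
```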
• Valence Shell Electron Pair Repulsion:
From Wikipedia: "Valence shell electron pair repulsion (VSEPR) is a model used to predict the shape of individual molecules based upon the extent of electron-pair electrostatic repulsion."
The VSEPR model states that the electron pairs in a species' valence shell will repel each other so as to give the most spherically symmetric geometry. For example:
Phosphorus pentachloride, PCl5, has 10 ÷ 2 = 5 electron pairs in its valence shell. These repel to give an AX5 structure with a trigonal bipyramidal geometry:
There is a beautiful pattern to the various VSEPR structures and geometries, or read more on the next page of this webbook, here:
The VSEPR technique is pure Lewis theory. Like Lewis theory it employs point-like electrons, but in pairs.
VSEPR predicts that the electron pairs will repel each other, and that non-bonded lone pairs will repel slightly more than bonded pairs. The net effect is to maximise the distance between electron pairs and so generate the most spherically symmetric geometry about the atomic centre.
VSEPR is an astonishingly good "back of an envelope" method for making predictions about the shapes of small molecules and molecular ions.
VSEPR introduces to Lewis theory the idea that molecular systems, atoms-with-ligands, pair the electrons in the atom's valence shell and maximise the spherical symmetry about the atomic centre.
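The VSEPR recipe described above — count the electron pairs, then look up the most spherically symmetric arrangement — can be sketched as a simple lookup table. The table entries are the standard idealised AXn geometries; the function itself is our own illustration:

```python
# Back-of-envelope VSEPR sketch: steric number (electron pairs on the
# central atom) -> idealised geometry, per the standard AXn shapes.
GEOMETRY = {
    2: "linear",
    3: "trigonal planar",
    4: "tetrahedral",
    5: "trigonal bipyramidal",
    6: "octahedral",
}

def vsepr_geometry(valence_electron_count):
    """Pair up the valence electrons and look up the geometry."""
    pairs = valence_electron_count // 2
    return GEOMETRY[pairs]

# PCl5: 10 electrons in the phosphorus valence shell -> 5 pairs -> AX5
print(vsepr_geometry(10))  # trigonal bipyramidal
print(vsepr_geometry(8))   # tetrahedral (e.g. methane's carbon)
```

Like VSEPR itself, this is a "neat trick" rather than a theory: it ignores lone-pair versus bond-pair differences and simply maps a magic number to a shape.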
• Molecular Models:
Lewis theory and the VSEPR technique are so successful that it is possible to build physical models of molecular structures.
It is astonishing how well these physical 'balls & sticks' models achieve their objective of modelling molecular and network covalent materials.
• Lewis Acids & Lewis Bases:
A central theme of the Chemogenesis webbook is the idea that Lewis acids and Lewis bases such as borane, BH3, and ammonia, NH3, react together:
• Borane, BH3, has only six electrons in its valence shell, but it wants eight "to fill its octet" and it is an electron pair acceptor.
• Ammonia, NH3, has a full octet, but two of the electrons are present as a reactive lone-pair.
• Borane reacts with ammonia to form a Lewis acid/base complex in which both the boron and the nitrogen atoms now have full octets.
No explanation is given within Lewis theory as to why the magic number 8 should be so important.
• Aromatic π-Systems and The 4n + 2 Rule:
Some unsaturated organic ring systems, such as benzene, C6H6, are unexpectedly stable and are said to be aromatic. A quantum mechanical basis for aromaticity, the Hückel method, was first worked out by physical chemist Erich Hückel in 1931.
In 1951 von Doering succinctly reduced the Hückel analysis to the "4n + 2 rule".
• Aromaticity is associated with a cyclic array of adjacent p-orbitals containing 4n+2 π-electrons, where n is zero or any positive integer.
• Aromaticity is associated with cationic, anionic and heterocyclic π-systems, as well as neutral hydrocarbon structures like benzene and naphthalene.
• Aromaticity can be identified by a ring current and associated down-field chemical shift in the proton NMR spectrum. (Aromatic compounds have a chemical shift of 7–8 ppm in the proton spectrum.)
von Doering's 4n+2 rule – as it should be called – gives the set of magic numbers 2, 6, 10, 14, 18...
The 4n + 2 rule is applicable in many situations, but not all:
Pyrene contains 16 conjugated electrons (8 π-bonds, n = 3.5), and coronene contains 24 conjugated electrons (12 π-bonds, n = 5.5):
Pyrene and coronene are both aromatic by NMR ring current AND by the Hückel method, but they both fail von Doering's 4n + 2 rule.
This tells us that although it is a useful method, the 4n + 2 rule does not have the subtlety of quantum mechanics. The 4n + 2 rule is pure Lewis theory.
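The 4n + 2 count can be written as a one-line predicate, which makes the pyrene and coronene failures explicit. A minimal sketch (the function name is ours):

```python
# Sketch of von Doering's 4n + 2 electron count as a predicate.
def satisfies_4n_plus_2(pi_electrons):
    """True if pi_electrons = 4n + 2 for some integer n >= 0."""
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

for name, e in [("benzene", 6), ("naphthalene", 10),
                ("pyrene", 16), ("coronene", 24)]:
    print(name, e, satisfies_4n_plus_2(e))
# benzene 6 True / naphthalene 10 True / pyrene 16 False / coronene 24 False
```

Pyrene and coronene are experimentally aromatic yet return False: the rule is a useful magic-number filter, not the underlying quantum mechanics.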
• Resonance Structures & Curly Arrow Pushing:
Lewis theory is used to explain most types of reaction mechanisms, including Lewis acid/base, redox reactions, radical, diradical and photochemical reactions.
Whenever a curly arrow is used in a reaction mechanism, Lewis theory is being evoked. Do not be fooled by the sparse structural representations employed by organic chemists, curly arrows and interconverting resonance structures are pure Lewis theory:
• Reaction Mechanisms:
Lewis theory is very accommodating and is able to 'add-on' those bits of chemical structure and reactivity that it is not very good at explaining itself. Consider the mechanism of electrophilic aromatic substitution, SEAr:
The diagram above is pure Lewis theory:
• The toluene is assumed to have a sigma-skeleton that can be described with VSEPR.
• The benzene π-system is added to the sigma-skeleton.
• The curly arrows are showing the movement of pairs of electrons, pure Lewis.
• The Wheland intermediate is deemed to be non-aromatic because it does not possess the magic number six π-electrons.
Lewis Theory and Quantum Mechanics
Quantum mechanics and Lewis theory are both concerned with patterns. However, quantum mechanics actively causes the patterns whereas Lewis theory is passive and it only reports on patterns that are observed through experiment.
We observe patterns of structure & reactivity behaviour through experiment.
Lewis theory looks down on the empirical evidence, identifies patterns in behaviour and classifies the patterns in terms of electron accountancy & magic numbers. Lewis theory gives no explanation for the patterns.
In large part, chemistry is about the behaviour of electrons, and electrons are quantum mechanical entities. Quantum mechanics causes chemistry to be the way it is. The quantum mechanical patterns can be:
• Observed using spectroscopy.
• Echoes of the underlying quantum mechanics can be seen in the chemical structure & reactivity behaviour patterns.
• The patterns can be calculated, although the mathematics is not trivial.
Another way of thinking about these things:
Atoms and their electrons behave according to the rules of quantum mechanics. This quantum world projects onto our physical world which we observe as being constructed from matter. As chemical scientists we observe matter and look for patterns in structure and reaction behaviour. Even though quantum mechanics is all about patterns, when we observe matter we see 'daughter patterns' that show only echoes of the 'parent' quantum mechanical patterns.
To see the quantum mechanics directly we need to study spectroscopy:
Falsification Of The Lewis-VSEPR Approach
At one level Lewis theory is utter tosh (complete rubbish). Electrons are not point charges. The covalent bond does not have a shared pair of electrons as explained by Lewis theory. Ammonia, H3N:, does not have a 'lone pair of electrons'. VSEPR is not a theory, but just a neat trick... a very, very, very useful neat trick!
Yet, Lewis theory and the associated VSEPR method work so well that it is actually rather difficult to falsify the approach. It is hard to think of counter examples where the model breaks down.
As discussed above, oxygen, O2, is a paramagnetic diradical. But O2 is only a diatomic molecule and does not have an ABC bond angle and so it is not appropriate to use VSEPR analysis. In hydrogen peroxide, H-O-O-H, the two oxygen atoms behave as typical Lewis/VSEPR atomic centres.
Carbon monoxide, another diatomic, 'looks all wrong' using the Lewis approach unless it is constructed as −C≡O+, with formal charges on carbon and oxygen.
Carbenes, such as methylene, CH2, do not fit into the Lewis scheme very well. Carbenes are diradicals.
The bond angle in hydrogen sulfide, H2S, is 92.2°, much smaller than the bond angle in water, H2O, which is 104.5°. This is not explained by VSEPR.
There are many examples like this: the nitrogen in an amide is planar and not trigonal pyramidal; SF4 consists of an equilibrium of two interconverting see-saw forms; ammonia and phosphine, NH3 and PH3 have rather different bond angles; etc. But these examples are explainable perturbations rather than counter examples that disprove the approach.
The Lewis structure of nitrogen dioxide, NO2, is not easy to construct in an unambiguous way, as discussed by Dan Berger. Nitrogen dioxide is a radical species with an unshared electron. In a related way, it is difficult to draw out the Lewis structure of a nitro function in nitrobenzene in an unambiguous manner.
Copper(II) ions, Cu2+, such as [Cu(H2O)6]2+ exhibit Jahn-Teller distortion, a subtle quantum mechanical effect.
Lewis theory, in the form of the Drude approach, is not good at modelling metals.
Crucially, the Lewis model does NOT predict the aromatic stabilisation of benzene. However, the Lewis approach happily assimilates von Doering's – useful but not perfect – 4n + 2 rule. Interestingly, once aromaticity is incorporated into the Lewis methodology, VSEPR can be used to predict benzene's 120° bond angles.
There are Jahn-Teller distortions in organic chemistry, for example cyclobutadiene, but they are rare and end up giving the same result as predicted by VSEPR!
The cyclobutadienyl dication is aromatic, both by experiment and by the 4n+2 rule. It has a cyclic array of 4 p-orbitals containing 2 π-electrons, so n = 0.
Cyclobutadiene, according to the Hückel method, "should" be a perfectly square diradical, but this is a high-energy state.
As shown by Jahn and Teller, the actual molecule will not have an electronically degenerate ground state but will instead end up with a distorted geometry in which the degeneracy is removed and one molecular orbital (the lower energy one) becomes the unique HOMO. For cyclobutadiene that means distorting to a rectangle with two short bonds, where the π-bonds are found, and two long bonds. The net result is to give the structure as predicted by Lewis/VSEPR.
Many thanks to members of the ChemEd list for examples, discussions & clarifications.
Not Lewis Theory
By way of counter example, consider Spectroscopy:
The diagram below is a spectrum pulled from the internet using Google image search, chosen simply because it shows a clear pattern. The only aim of this image is to show a spectrum that is clearly a regular pattern. (The regularity of quantum patterns is not always quite so obvious due to overlapping signals):
A Rydberg series of profiles. Fig 3, Gabriel, AH, Connerade, JP, Thiery, S, et al , Application of Fano profiles to asymmetric resonances in helioseismology, ASTRON ASTROPHYS, 2001, Vol: 380, Pages: 745 - 749, ISSN: 0004-6361, here
There is no point in using any type of Lewis theory to help explain the atomic spectra. These patterns are explained with exquisite precision using quantum mechanics.
Timeline of Structural Theory
Valence Shell Electron Pair Repulsion
© Mark R. Leach 1999-
Shattered Symmetry
In 2013 the first author defended his PhD at the KU Leuven with the title Symmetry and Symmetry Breaking in the Periodic Table - Towards a Group-Theoretical Classification of the Chemical Elements. The structure of the Mendeleev table is explained with an elementary-particle approach: elementary particles can be classified on the basis of a group-theoretical structure and the corresponding Lie algebras, which are the working tools of quantum mechanics.
In this book the exposition of this idea is built from the ground up (starting with the duel that tragically ended the life of Évariste Galois, the father of group theory, followed by some elementary geometrical symmetries, the definition of a group, etc.), as it would be in a mathematical introduction to group theory. The book is, however, mainly written for chemistry students, spending more time on the elementary mathematics than on the elementary physics or chemistry. The ultimate goal is to explain the structure of the Mendeleev table from first principles. For mathematics students, the organization of all the elementary particles may not be very clear. At best, they know that the classification of all these particles is based on symmetry, and since symmetry is group theory, the group structure should be the best explanation for the classification. But it is not only groups and symmetry: some physics and quantum physics are also needed. The atomic structure also involves angular momentum, energy levels, conservation laws, and the Schrödinger equation, which defines the state of the system as a wave function that solves an eigenvalue problem for the Hamiltonian operator. It is the interplay between all these elements, and how a symmetry property or a group structure is translated into properties of the spectrum of a differential operator, that has to be clarified to remove the confusion. This book gives an excellent introduction to mathematical chemistry that can perfectly be used in a course on the subject. So it is certainly recommended for chemistry students. But if math students want to learn about the connection between the mathematics and the physics defining the chemistry, then this is the book they need. And if you are not a student, but a professional mathematician who wants to be introduced to the basics of mathematical chemistry, you will be interested as well.
Let me try to explain some of the basics of all these connections starting at an elementary level of plane rotations. Let $\mathbf{a}$ be a plane vector, choose an orthogonal basis, and let $x$ and $y$ be functions mapping the vector to its coordinates $x(\mathbf{a})=a_x$ and $y(\mathbf{a})=a_y$. If $R(\omega)$ represents an operator that rotates the vector counter-clockwise over an angle $\omega\in[0,2\pi)$, then with respect to the orthogonal basis we have \[ R(\omega)\mathbf{a}=\mathbf{a}' ~~\Leftrightarrow~~ \mathbb{R}(\omega)\left[\begin{array}{c}a_x\\a_y\end{array}\right]=\left[\begin{array}{c}a'_x\\a'_y\end{array}\right],~~~ \mathbb{R}(\omega)=\left[\begin{array}{cc}\cos\omega & -\sin\omega\\ \sin\omega & \cos\omega\end{array}\right]. \] The set $\{\mathbb{R}(\omega):\omega\in[0,2\pi)\}$ with multiplication forms the special orthogonal group SO(2) of plane rotations. It is obviously isomorphic to the group of the rotation operators $\{R(\omega):\omega\in[0,2\pi)\}$ with composition.
If instead we keep the vector but rotate the basis vectors clockwise, then this affects the coordinate functions as follows \[ \hat{R}(\omega)\mathbf{x}= \mathbf{x}'~~\Leftrightarrow~~ \hat{\mathbb{R}}(\omega)\left[\begin{array}{c}x\\y\end{array}\right]=\left[\begin{array}{c}x'\\y'\end{array}\right],~~~ \hat{\mathbb{R}}(\omega)=\left[\begin{array}{cc}\cos\omega & \sin\omega\\ -\sin\omega & \cos\omega\end{array}\right] = \mathbb{R}(-\omega)=[\mathbb{R}(\omega)]^T=[\mathbb{R}(\omega)]^{-1} \] or $\hat{R}(\omega)[x~y]=[x'~y']=[x~y]\mathbb{R}(\omega)$.
A Taylor series expansion of $\mathbb{R}(\omega)$ defines its generator matrix $\mathbb{X}$ \[ \mathbb{R}(\omega)=\mathbb{I}+\sum_{k=1}^\infty \frac{1}{k!}(\mathbb{X}\omega)^k=\exp(\mathbb{X}\omega), \] \[ \mathbb{R}(0)=\mathbb{I}=\left[\begin{array}{cc} 1&0\\0&1\end{array}\right]=\left[\begin{array}{c}\partial_x\\\partial_y\end{array}\right][x~y],~~ \left.\frac{d^k}{d\omega^k} \mathbb{R}(\omega)\right|_{\omega=0}= \mathbb{X}^k,~~\mathbb{X}=~\left[\begin{array}{cc} 0&-1\\1&0\end{array}\right]. \] Thus if $\hat{X}=\left.\frac{d\hat{R}(\omega)}{d\omega}\right|_{\omega=0}$, then $\hat{X}[x~y]=[x~y]\mathbb{X} =[x~y]\mathbb{XI}=[x~y]\mathbb{X}\left[\begin{array}{c}\partial_x\\\partial_y\end{array}\right][x~y]$, so that $\hat{X}$ is the operator $\hat{X}=[x~y]\mathbb{X}\left[\begin{array}{c}\partial_x\\\partial_y\end{array}\right]=y\partial_x-x\partial_y$, where $\partial_x$ and $\partial_y$ represent partial derivatives. This links group elements to matrices operating on vectors and to differential operators operating on (vectors or tuples of) functions.
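The exponential relation $\mathbb{R}(\omega)=\exp(\mathbb{X}\omega)$ can be verified with the truncated Taylor series from the displayed formula (a sketch of mine, assuming numpy):

```python
import numpy as np

X = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # generator matrix of so(2) from the text

def exp_series(M, terms=30):
    """exp(M) via the truncated Taylor series I + sum_k (M^k / k!)."""
    out = np.eye(2)
    term = np.eye(2)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def R(w):
    return np.array([[np.cos(w), -np.sin(w)],
                     [np.sin(w),  np.cos(w)]])

assert np.allclose(exp_series(X * 0.5), R(0.5))   # exp(X*w) reproduces R(w)
```

Thirty terms of the series are far more than enough for machine precision at moderate angles, since $\mathbb{X}^2=-\mathbb{I}$ makes the series reduce to the cosine and sine series.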
The differential operators allow us to introduce the physical side. Consider for a moment three-dimensional space. A rotation has the form $R(\omega\mathbf{n})$ with $\mathbf{n}$ a unit vector defining the axis of rotation (the Euler vector). Hence a rotation in 3D is defined not by 1 but by 3 parameters: 2 for the axis vector and 1 for the angle $\omega$, which can be restricted to the interval $[0,\pi)$ because $\mathbf{n}$ defines not only a direction, but also an orientation. The angular momentum vector is $\mathbf{L}=\mathbf{r}\times\mathbf{p}$ where $\mathbf{r}=[x~y~z]$ is the position vector and $\mathbf{p}=[p_x,~p_y,~p_z]$ is the linear momentum vector. The cross product gives the components of $\mathbf{L}$ as $[L_x,~L_y,~L_z]=[yp_z-zp_y,~zp_x-xp_z,~xp_y-yp_x]$. If we stay in the $(x,y)$-plane then $z=0$ and $\mathbf{L}$ reduces to its $z$-component. Translating this to operators we get a quantum mechanical equivalent: $\hat{{L}}_z=\hat{x}\hat{p}_y-\hat{y}\hat{p}_x=i\hslash\hat{X}$ with $\hslash=h/2\pi$ and $h$ the Planck constant. Thus we are now using the quantum mechanical position operators $[\hat{x},\hat{y},\hat{z}]=[x,y,z]$ and momentum operators $[\hat{p}_x,\hat{p}_y,\hat{p}_z]=-i\hslash[\partial_x,\partial_y,\partial_z]$. Since we can also have rotations around the $x$ or $y$ axis, we have not one but three matrices $\mathbb{X}_k$, $k=x,y,z$. The rotations in three-dimensional space form the special orthogonal group SO(3). Moreover the three matrices $\mathbb{X}_k$, $k=x,y,z$, or equivalently the three operators $\hat{X}_k$, $k=x,y,z$, generate a Lie algebra ${\frak so}(3)$. That means that it has a composition defined by Lie brackets $[\hat{X}_i,\hat{X}_j]=\hat{X}_i\hat{X}_j-\hat{X}_j\hat{X}_i=\epsilon_{ijk} \hat{X}_k$ with $\epsilon_{ijk}\in\{0,+1,-1\}$ the structure constants of the Lie algebra. ($\epsilon_{ijk}$ is $+1$ or $-1$ if $ijk$ is an even or an odd permutation of $xyz$, and it is zero when two of the three indices are equal.)
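The ${\frak so}(3)$ commutation relations can be verified directly with the standard $3\times 3$ generator matrices (a sketch of mine, numpy assumed):

```python
import numpy as np

# Standard generators of so(3): X_k generates rotation about the k-axis
Xx = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], float)
Xy = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], float)
Xz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], float)

def comm(A, B):
    """Lie bracket [A, B] = AB - BA."""
    return A @ B - B @ A

# [X_x, X_y] = X_z and cyclic permutations: structure constants epsilon_ijk
assert np.allclose(comm(Xx, Xy), Xz)
assert np.allclose(comm(Xy, Xz), Xx)
assert np.allclose(comm(Xz, Xx), Xy)
```

Note that Xz here is exactly the 2D generator $\mathbb{X}$ of the text, padded with a zero row and column for the fixed $z$-axis.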
This gives the link with Lie algebras.
Next we need a link between symmetries and conservation laws, and between the wave function and eigenvalue problems. Invariance of the physics under spatial translation results in conservation of linear momentum, invariance under time translation gives conservation of energy, and rotational symmetry gives conservation of angular momentum.
The invariant for linear momentum is $\hat{p}{}^2=\hat{\mathbf{p}} \cdot\hat{\mathbf{p}}=-\hslash^2[\partial_x^2+\partial_y^2+\partial_z^2]$, the wave operator which defines the kinetic part of the Hamiltonian $\hat{\mathcal{H}}=\hat{p}^2/2m$ ($m$ is the mass of the particle). If we consider a rotation around the $z$-axis, we have the eigenvalue problem $\hat{L}_z|\Psi\rangle=\lambda|\Psi\rangle$. This $\Psi$ is the wave function defining the state, written here as a "ket", which is to be understood as a column vector (the corresponding row is called a "bra" and denoted $\langle\Psi|$, so that the inner product is a "braket" $\|\Psi\|^2=\langle\Psi|\Psi\rangle$). The eigenvalues are $\lambda=\hslash m_l$ with $m_l$ taking all integer values (magnetic quantum numbers). These integers reflect the periodicity in $\omega$. This gives a model of an electron confined to a circular ring with only one degree of freedom: the rotation angle.
The invariant for angular momentum is $\hat{L}{}^2=\hat{\mathbf{L}}\cdot\hat{\mathbf{L}}$. The eigenvalues of this operator are again discrete: $l(l+1)\hslash^2$ with $l\in\mathbb{N}/2$. In 3D the corresponding eigenstate $|\Psi\rangle$ will depend on two parameters if it is confined to a spherical shell. We denote the eigenstate as $|l,m_l\rangle$ with $m_l$ as in the circular case. Thus $\hat{L}{}^2|l,m_l\rangle=l(l+1)\hslash^2|l,m_l\rangle$. However, combining both eigenvalue problems restricts $m_l$ to $\{-l,\ldots,l\}$. The operator $\hat{L}{}^2$ commutes with all generators of the Lie algebra ${\frak so}(3)$ and is therefore called a Casimir operator, the only one in the case of ${\frak so}(3)$.
The time-independent Schrödinger equation (here without a potential energy term) introduces the energy $E$ as an eigenvalue of the problem $\hat{\mathcal{H}}|\Psi\rangle=E|\Psi\rangle$, where $\hat{\mathcal{H}}$ is the Hamiltonian operator, given by $\hat{\mathcal{H}}=\hat{L}{}^2/(2mr^2)$ if the particle with mass $m$ is restricted to the spherical shell with radius $r$. Hence the eigenvalues are $E_l=l(l+1)\hslash^2/(2mr^2)$ where $l$ refers to the successive shells, which are traditionally indicated by $s,p,d,f,...$ for $l=0,1,2,3,...$.
If $\hat{Y}$ is an operator commuting with $\hat{\mathcal{H}}$, then its expected value $\langle\hat{Y}\rangle=\langle\Psi|\hat{Y}|\Psi\rangle$ will not vary in time: $\partial_t\langle\hat{Y}\rangle=0$. Since the Hamiltonian commutes with itself, the energy is conserved. Solving the Hamiltonian eigenvalue problem gives discrete values for the energy $E$ and the corresponding eigenspace can be one dimensional or of higher dimension, corresponding to a nondegenerate or degenerate case respectively.
The operators $\hat{X}_i$ that commute with the Hamiltonian, and thus satisfy $\hat{\mathcal{H}}\hat{X}_i|\Psi\rangle=E\hat{X}_i|\Psi\rangle$, and that map the eigenvector $|\Psi\rangle$ onto a multiple of itself (as in the nondegenerate case) are called Cartan generators (denoted $\hat{H}_i$). In the general case they generate a maximal abelian subalgebra. In our case $\hat{L}_z$ is such a Cartan generator. The remaining generators can be recombined into a set of Weyl generators. In our example there are two: $\hat{L}_\pm=\hat{L}_x\pm i\hat{L}_y$, which are also called ladder operators. They shift the eigenvalues. Suppose $|l,m_l\rangle$ denotes the common eigenvector $|\Psi\rangle$ of $\hat{L}{}^2$ and $\hat{L}_z$ with eigenvalues $l(l+1)\hslash^2$ and $\hslash m_l$ respectively; then for example $\hat{L}_z\hat{L}{}_\pm^k|l,m_l\rangle=(m_l\pm k)\hslash\hat{L}{}_\pm^k|l,m_l\rangle$. Thus $\hat{L}{}_\pm$ shift the eigenvalue of $\hat{L}_z$ up or down the ladder by one unit. If the Casimir operator has eigenvalues $c_\mu$: $\hat{C}_\mu|c_\mu;h_i\rangle = c_\mu|c_\mu;h_i\rangle$, and the Cartan operator has eigenvalues $h_i$: $\hat{H}_i|c_\mu;h_i\rangle=h_i|c_\mu;h_i\rangle$, then a Weyl operator $\hat{E}_\alpha$ satisfies $\hat{H}_i\hat{E}_\alpha|c_\mu;h_i\rangle=(h_i+\alpha_i)\hat{E}_\alpha|c_\mu;h_i\rangle$ and thus $\hat{E}_\alpha|c_\mu;h_i\rangle\sim|c_\mu;h_i+\alpha_i\rangle$.
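The ladder action can be made concrete with the standard $l=1$ angular momentum matrices, taking $\hslash=1$ and the usual convention $\hat{L}_\pm=\hat{L}_x\pm i\hat{L}_y$ (a small sketch of mine, not from the book):

```python
import numpy as np

# l = 1 angular momentum matrices, hbar = 1, basis ordered |1,1>, |1,0>, |1,-1>
s = np.sqrt(2.0)
Lz = np.diag([1.0, 0.0, -1.0])
Lp = np.array([[0, s, 0],
               [0, 0, s],
               [0, 0, 0]])          # raising ladder operator L+
Lm = Lp.T                           # lowering ladder operator L-

v = np.array([0.0, 0.0, 1.0])       # eigenstate |1,-1> of Lz, eigenvalue -1
w = Lp @ v                          # apply the raising operator once
assert np.allclose(Lz @ w, 0.0 * w)         # w is now an m = 0 eigenstate
assert np.allclose(Lz @ (Lp @ w), Lp @ w)   # one more step up: m = +1

# Casimir operator L^2 = Lz^2 + (L+ L- + L- L+)/2 equals l(l+1) I with l = 1
L2 = Lz @ Lz + 0.5 * (Lp @ Lm + Lm @ Lp)
assert np.allclose(L2, 2.0 * np.eye(3))
```

The last assertion is the Casimir property in miniature: $\hat{L}{}^2$ acts as the scalar $l(l+1)\hslash^2$ on the whole $l=1$ multiplet, so it commutes with every generator.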
All this is just a brief summary of the introductory part I of the book, building up the elementary quantum mechanical models in two and three dimensions. It ends with a scholium chapter giving a taste of the $n$-dimensional case. To arrive at the structure of the Mendeleev table, much more is needed. Part II introduces the dynamics and the "zoo of elementary particles" (pions, kaons, mesons,...) which requires the introduction of charge, spin, strangeness,... The Hamiltonian now involves a part for the potential energy. The group structure is extended to the unitary matrices and operators of U(3) (with 9 generators $\mathbb{X}_i$) and the special unitary subgroup SU(3) generated by three of them. One can again define a Lie algebra ${\frak su}(3)$ from these generators with corresponding Cartan and Weyl operators and Casimir invariants. This algebra is now much richer and different subalgebras can be defined. For example the reduction SU(3)$\to$SO(3) is called a symmetry breaking. One can think of it as a projection from the complex plane onto the real axis. More involved spinor operators $\hat{S}_i$ can be defined, associated with the famous Pauli matrices \[ \sigma_x=\left[\begin{array}{cc}0&1\\1&0\end{array}\right],~~ \sigma_y=\left[\begin{array}{cc}0&-i\\i&0\end{array}\right],~~ \sigma_z=\left[\begin{array}{cc}1&0\\0&-1\end{array}\right]. \] A spinor is like a rotation taking place on a Möbius band: after a rotation over $2\pi$ one ends up with the negative vector. This explains, or is explained by, the introduction of complex numbers: $i^2=-1$. One has to rotate over $4\pi$ to arrive at the original position. This explains doubling effects. The spinors generate SU(2), which is the double covering group of SO(3) because every rotation in 3D is the image of two elements in SU(2).
Newtonian mechanics and Kepler's laws are applied to derive the orbit of an electron around the nucleus in classical mechanics. The quantum mechanical analog of the so-called LRL (Laplace-Runge-Lenz) vector $\mathbf{M}$ (the vector along the major axis of the elliptic trajectory) consists of three operators $\hat{M}_k$. Together with the $\hat{L}_k$ operators they generate the Lie algebra ${\frak so}(4)$.
Furthermore this part II has historical notes about the Kepler problem, the LRL vector, and the attempts to bring order to the particle zoo by e.g. Gell-Mann in the 1960s. It also introduces root diagrams, which are 2D grids representing quantum numbers. Projections onto certain lines represent degeneracies, that is, reductions to subalgebras. On such lines some parameter (like a quantum number or spin) is constant. A root diagram captures the essentials of a Lie algebra. The root diagram of ${\frak su}(3)$, for example, places the Cartan generators at the center of a regular hexagon whose six vertices describe the actions of the six Weyl generators.
Part III is about spectrum generating symmetries and thus arrives at the ultimate goal of explaining the Mendeleev table. We can characterize an eigenstate by four quantum numbers: the principal quantum number $n$ referring to energy $E_n$, the orbital quantum number $l\in\{0,\ldots,n-1\}$ referring to the shell and its angular momentum, the magnetic quantum number $m_l\in\{-l,\ldots,l\}$, and $m_s\in\{\pm1/2\}$, the spin (up or down). For a true spectrum generating symmetry one has to consider transformations from eigenstate $|nl\rangle$ to eigenstate $|n'l\rangle$ with $n'\ne n$. Since $n$ refers to the energy levels, the energy changes, the Hamiltonian will not be invariant under these transformations, and one has to consider the radial wave equation. As a consequence the previous group SO(3) has to be replaced by the pseudo-orthogonal group SO(2,1). Note that SO(2,1) is not compact and the Lie algebra ${\frak so}(2,1)$ has infinite-dimensional unitary representations, corresponding to principal quantum numbers $n=l+1,l+2,\ldots$. While the SO(3) transformations leave the sphere invariant, the SO(2,1) transformations leave a hyperboloid invariant. The 2,1 refers to the signature $(++-)$ of the hyperboloid. SO(2,1) has three generators $\hat{Q}_k$. Choosing the third one as an analog of $\hat{L}_z$, it has the eigenvalues $n=l+1+s$, $s=0,1,2,...$, corresponding to energy levels $E_n=-mZ^2e^4/(8h^2\epsilon_0^2 n^2)$, $n=1,2,\ldots$, with $Z$ the atomic number (which is 1 for hydrogen, which has only one electron), $e$ the electron charge, and $\epsilon_0$ the vacuum permittivity. To describe the eigenstates $|nlm\rangle$, root diagrams for the Lie algebra ${\frak so}(4,2)$ are needed, which are now 3D instead of 2D grids, and one has ladder operators that increase or decrease each of the $n,l$ or $m$ parameters separately.
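The energy-level formula can be evaluated directly. A sketch (my insertion, using CODATA constant values) recovering the familiar $-13.6$ eV ground state of hydrogen:

```python
# Bohr energy levels E_n = -m Z^2 e^4 / (8 h^2 eps0^2 n^2), in SI units
m_e  = 9.1093837015e-31   # electron mass, kg
e    = 1.602176634e-19    # elementary charge, C
h    = 6.62607015e-34     # Planck constant, J s
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

def E_n(n, Z=1):
    """Energy of level n, converted from joules to electron volts."""
    return -m_e * Z**2 * e**4 / (8 * h**2 * eps0**2 * n**2) / e

assert abs(E_n(1) + 13.6) < 0.01       # hydrogen ground state: about -13.6 eV
assert abs(E_n(2) - E_n(1) / 4) < 1e-9 # levels scale as 1/n^2
```

The $1/n^2$ scaling is the origin of the Rydberg series of spectral lines, with emitted frequencies proportional to differences $E_{n'}-E_n$.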
In the penultimate chapter the structure of the Mendeleev table is finally analysed. The group structure proposed by the authors is captured in the following symmetry breakings: $SO(4,2)\otimes SU(2) \supset SO(3,2)\otimes SU(2) \supset SO'(4)\otimes SU(2)$. The first is the overall symmetry group, with SU(2) corresponding to the spin. All possible $(n,l)$ couples represent all the chemical elements, corresponding to a basis for the infinite-dimensional unitary representation (unirep) of the group. The reduction to SO(3,2) corresponds to the period doubling, because the elements split into two sets where $n+l$ is either even or odd. The last chapter explains SO'(4), using the metaphor of a triangular chessboard with squares $\{(n,l): n=1,2,3\ldots; l=0,1,\ldots,n-1\}$. The moves on the chessboard correspond to operators. For example the rook can move vertically (operators in SO(4)) and horizontally (operators in SO(2,1)), so that this corresponds to SO(4)$\otimes$SO(2,1). Analogously one can have operators corresponding to the king, queen, knight, bishop, and pawn pieces, each belonging to specific groups. The diagonal moves of the bishop correspond to the so-called Madelung $n\pm l$ rules, and the corresponding Madelung operators are in SO(3,2). However, since there is an upward and a downward sloping diagonal, this should correspond to a further symmetry breaking. This is not obtained by a standard reduction of SO(3,2) to SO(3,1). Instead one has to combine left-right reflections with up-down reflection operators to get the separate diagonals. This required the introduction of a new Lie algebra ${\frak so}'(4,2)$ and one is forced to give up linearity.
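The Madelung rule behind the "bishop diagonals" is easy to state computationally: fill subshells in order of increasing $n+l$, breaking ties by smaller $n$. A small sketch (my illustration) reproducing the observed filling order of the periodic table:

```python
# Madelung (n+l) ordering of subshells (n, l), with l < n
subshells = [(n, l) for n in range(1, 8) for l in range(n)]
subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))  # by n+l, then n

# spectroscopic letters for l = 0, 1, 2, ...
order = ["%d%s" % (n, "spdfghi"[l]) for n, l in subshells]
print(" ".join(order[:10]))   # 1s 2s 2p 3s 3p 4s 3d 4p 5s 4d
```

This is why 4s fills before 3d ($4+0 < 3+2$ is false, but at $n+l=4$ the tie $4s$ vs $3p$ goes to smaller $n$, and $3d$ with $n+l=5$ comes after $4s$ with $n+l=4$): the anomalies of the periodic table sit exactly on these diagonals.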
Clearly the previous survey just lifts a corner of the veil that covers the magical world of elementary particles and atomic structures. All the details of this story can be found in the book, explained with much care. Do realise that it only scratches the surface: orbitals of particles moving around a nucleus. It stays away from relativity theory, supersymmetry, strings, and membranes. Moreover, it is not just dry mathematics: it is brought with much imagination. I mentioned already some of the historical excursions, and there is the chessboard in the last chapter. With this chessboard, the authors refer to Lewis Carroll's Alice in Wonderland, as they do throughout the book, borrowing the wonderful illustrations by John Tenniel. Although in principle no previous knowledge is required (73 pages of appendices recall the necessary background or work out some of the longer computations), it is still hard work for someone who is not already a bit familiar with the subject or has a background in only one of mathematics or chemistry. Nevertheless, it is a most fascinating story marrying mathematics, physics and chemistry that is a joy to read about, or to work in. It is abstract mathematics and yet it describes some basic elements of physical reality.
Adhemar Bultheel
Book details
The authors explain the structure of the Mendeleev table from first principles. They derive the necessary symmetry groups and Lie algebras that appear in quantum mechanical description of the atomic structure. Both the physics and the mathematics are built up from scratch.
Author: Publisher:
978-0-190-61139-2 (hbk)
£ 60.00 (hbk)
Castles and quantum mechanics
How are castles and quantum mechanics related? One connection is rook polynomials.
The rook is the chess piece that looks like a castle, and used to be called a castle. It can move vertically or horizontally, any number of spaces.
A rook polynomial is a polynomial whose coefficients give the number of ways rooks can be arranged on a chess board without attacking each other. The coefficient of x^k in the polynomial R_{m,n}(x) is the number of ways you can arrange k rooks on an m by n chessboard such that no two rooks are in the same row or column.
The rook polynomials are related to the Laguerre polynomials by
R_{m,n}(x) = n! x^n L_n^{m-n}(-1/x)
where L_n^k(x) is an “associated Laguerre polynomial.” These polynomials satisfy Laguerre’s differential equation
x y′′ + (k + 1 − x) y′ + n y = 0,
an equation that comes up in numerous contexts in physics. In quantum mechanics, these polynomials arise in the solution of the Schrödinger equation for the hydrogen atom.
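The identity above can be checked numerically; here is a short sketch (my code, not from the post) with a combinatorial rook count and a `laguerre` helper of mine implementing the finite series for L_n^a:

```python
from math import comb, factorial

def rook_coeffs(m, n):
    """r_k = C(m,k)*C(n,k)*k!: choose k rows, k columns, then match them."""
    return [comb(m, k) * comb(n, k) * factorial(k) for k in range(min(m, n) + 1)]

def R(m, n, x):
    """Rook polynomial R_{m,n} evaluated at x."""
    return sum(c * x**k for k, c in enumerate(rook_coeffs(m, n)))

def laguerre(n, a, x):
    """Associated Laguerre polynomial L_n^a(x) via its finite series."""
    return sum((-1)**i * comb(n + a, n - i) * x**i / factorial(i)
               for i in range(n + 1))

# Check R_{m,n}(x) = n! x^n L_n^{m-n}(-1/x) for m >= n at an arbitrary point
m, n, x = 5, 3, 0.7
assert abs(R(m, n, x) - factorial(n) * x**n * laguerre(n, m - n, -1 / x)) < 1e-9

# Sanity check: 8 non-attacking rooks on an 8x8 board in 8! = 40320 ways
assert rook_coeffs(8, 8)[-1] == factorial(8)
```

The coefficient formula encodes the counting argument directly: pick which k rows and which k columns hold rooks, then pick one of the k! bijections between them.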
Related: Relations between special functions
For daily posts on analysis, follow @AnalysisFact on Twitter.
One thought on “Castles and quantum mechanics”
1. Nice, I did not know about rook polynomials! A different connection between castles and quantum mechanics: during the last Ghent Light Festival, a result of a quantum mechanical computation (of nanowire structures) was projected on the Medieval castle “Gravensteen”. Picture here.
Tuesday 16 May 2017
realQM Excited States
I have updated realQM with a section on
Classical vs Quantum Physics According to Lubos
Saturday 6 May 2017
Schrödinger: Do Electrons Think?
What do you think?
Friday 5 May 2017
New Web Site: Real Quantum Mechanics
Wednesday 3 May 2017
Programming in the Mathematics Curriculum: As Little As Possible?
Tuesday 2 May 2017
CO2 Global Warming Alarmism: Hour of Reckoning
• We should ‘renegotiate’ the Paris Climate Change Agreement,
Sunday 16 April 2017
Yes, anti-matter does anti-gravitate!
Sabine Hossenfelder asks in a recent post at Backreaction:
• Why doesn’t anti-matter anti-gravitate?
• $\Delta\phi = \rho$
This model is explored under the following categories on this blog
Monday 20 March 2017
Climate Change Programmes: Waste of Money
The Independent and The Guardian reports:
• Donald Trump's budget director calls efforts to combat climate change "waste of money".
• The budget proposal calls for deep cuts across various federal agencies responsible for different climate change actions.
This means a historic shift from inhuman irrational political ideological extremism of CO2 climate change hysteria to science, rationality and humanity.
All the people of the world can now celebrate that there is more than enough fossil energy on this planet, which can safely be harvested and utilised under controllable environmental side effects, to allow virtually everybody to reach a good standard of living (under the right politics).
The industrial revolution was driven by coal and the boost of the standard of living during the 20th century in the West was made possible by the abundance of oil and gas. Without CO2 hysteria this development can now be allowed to continue and bring more prosperity to the people, as is now happening on large scale in China and India.
Wasting money on actions without meaning and effect is about the most stupid thing a government can do, and that will now be put to a stop in the US as concerns energy production (if not on military...)
It remains for the EU to come to the same conclusion...and that will come even if the awakening will take some time...
PS Note the shift of terminology from "global warming by CO2" to the more neutral "climate change", motivated by the lack of warming in the "hiatus" of global temperatures during now 20 years. If "stopping climate change" was the issue, the prime concern would be to stop the upcoming ice age. But that is not on the agenda, maybe because nobody believes that this is within the range of climate politics...the only thing that could have an effect would be massive burning of fossil fuel under the belief that it can cause some warming...
Sunday 19 March 2017
The World as Analog Computation?!
Augmented reality by digital simulation of analog reality.
Sabine Hossenfelder expresses on Backreaction:
• No, we probably don’t live in a computer simulation!
as a reaction to the Simulation Hypothesis:
Sabine starts her discussion with
And she gets support from Lubos Motl stating:
• Hossenfelder sensibly critical of our "simulated" world.
Thursday 9 March 2017
The Government Decides on Programming in the Mathematics Curriculum
The Government has today decided on clarifications and strengthenings of, among other things, the curricula, course plans and subject plans for compulsory school and upper secondary school:
• The purpose is to clarify the school's mission to strengthen pupils' digital competence.
• Programming is introduced as an explicit element in several different subjects in compulsory school, above all in technology and mathematics.
• The changes are to be applied no later than 1 July 2018. The school authorities will be able to choose when to start applying the changes, within a one-year span starting 1 July 2017.
It now remains to fill this with concrete content. If it is to become anything more than an empty gesture, massive further training of teachers, especially mathematics teachers, is required.
My contribution to this end exists in the form of Matematik-IT.
There are strong conservative forces within mathematics education, from compulsory school to university, that do not want to help broaden the mathematics subject with programming.
There are strong forces within computer science that want to take charge of programming in schools according to a principle of "computational thinking".
The mathematics subject thus faces the crossroads that has marked my entire academic life:
1. Renew/extend traditional analytical mathematics with programming = Matematik-IT.
2. Preserve traditional mathematics education and do not let programming disturb the picture.
The Government has decided that option 1 shall apply, while academia leans toward option 2. What is best for Sweden's pupils? Digital competence with or without mathematics? Mathematics with or without programming? The struggle continues...
Tuesday 28 February 2017
Update of realQM
Friday 24 February 2017
Skeptics Letter Reaches the White House
The Washington Examiner reports:
Also Washington Times reports on this historic letter:
Saturday 18 February 2017
Scott Pruitt New Director of EPA
Saturday 11 February 2017
QM: Waves vs Particles: Schrödinger vs Born
Summing up:
Born ends with:
Friday 10 February 2017
2500 Years of Quantum Mechanics
Tuesday 7 February 2017
Towards a New EPA Without CO2 Alarmism
• Humans are largely responsible for recent climate change.
Sunday 5 February 2017
From Meaningless Towards Meaningful QM?
There are two approaches to mathematical modelling of the physical world:
PS Note how Weinberg describes the foundation of quantum mechanics:
Friday 3 February 2017
Unphysical Basis of CO2 Alarmism = Hoax
Tuesday 31 January 2017
The End of CO2 Alarmism
• Climate sensitivity to CO2 emission vastly exaggerated.
• Climate industrial complex a very dangerous special interest.
And read about this historic press conference:
Radiation as Superposition or Jumping?
Which is more convincing: Superposition or jumping?
• The second paper refers to radiation only in passing.
Monday 30 January 2017
Towards a Model of Atoms
In my search for a realistic atom model I have found the following pieces:
1. Atom in ground state as harmonic oscillator: 3d free boundary Schrödinger equation: realQM.
2. Radiating atom as harmonic oscillator with small Abraham-Lorentz damping: previous post and Mathematical Physics of Black Body Radiation.
3. Radiating atoms in collective resonance with exterior electromagnetic field with acoustic analog: Piano Secret
which I hope to assemble into a model which can describe:
• ground states and excited states as solutions of a 3d free boundary Schrödinger equation
• emission and absorption of light by collections of atoms in collective in-phase resonance with an exterior electromagnetic field generated by oscillating atomic electric charge and associated Abraham-Lorentz damping.
The key concepts entering into such a model describing in particular matter-light interaction, are:
• physical deterministic computable 3d continuum model of atom as kernel + electrons
• electrons as clouds of charge subject to Coulomb and compression forces
• no conceptual difference between micro and macro
• no probability, no multi-d
• generalised harmonic oscillator
• small damping from the Abraham-Lorentz force from oscillating electric charge
• near resonant forcing with half period phase shift
• collective phase coordination by resonance between many atoms and one exterior field.
Note that matter-light interaction is the scope of Quantum Electro Dynamics or Quantum Field Theory, which are very difficult to understand and use.
What I seek is something which can be understood and which is useful. A model in the spirit of Schrödinger as a deterministic 3d multi-species continuum mechanical wave model of microscopic atoms interacting with macroscopic electromagnetics. I don't see that anything like that is available in the literature within the Copenhagen Interpretation of Bohr or any of its clones...
Schrödinger passed away in 1961 after a life in opposition to Bohr since 1926 when his equation was hijacked, but his spirit lives...
...compare with the following trivial text book picture of atomic radiation in the spirit of Bohr:
Sunday 29 January 2017
The Radiating Atom
In the analysis on Computational Blackbody Radiation I used the following model of a harmonic oscillator of frequency $\omega$ with small damping $\gamma >0$ subject to near resonant forcing $f(t)$:
• $\ddot u+\omega^2u-\gamma\dddot u=f(t)$
with the following characteristic energy balance between outgoing and incoming energy:
• $\gamma\int\ddot u^2dt =\int f^2dt$
with integration over a time period and the dot signifying differentiation with respect to time $t$.
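The work balance behind this can be checked numerically. A minimal sketch (my code, not from the post), which uses the standard near-resonance approximation $\dddot u \approx -\omega^2\dot u$ for the small Abraham-Lorentz term and verifies that, in steady state, the work done by resonant forcing equals the radiated energy $\gamma\int\ddot u^2 dt$:

```python
from math import cos

# Damped oscillator u'' + omega^2 u + gamma*omega^2 u' = f(t), resonant forcing
omega, gamma, F = 2.0, 0.01, 1.0
dt, T = 1e-3, 400.0
u, v, t = 0.0, 0.0, 0.0
work = diss = 0.0
for _ in range(int(T / dt)):
    f = F * cos(omega * t)
    a = f - omega**2 * u - gamma * omega**2 * v   # acceleration u''
    if t > T / 2:                                 # skip the initial transient
        work += f * v * dt                        # work done by the forcing
        diss += gamma * a * a * dt                # radiated energy gamma*u''^2
    v += a * dt                                   # semi-implicit Euler step
    u += v * dt
    t += dt
assert work > 0 and abs(work - diss) / work < 0.1  # balance to within ~10%
```

At exact resonance the velocity is in phase with the forcing, so $\int f\dot u\,dt$ and $\gamma\int\ddot u^2 dt$ agree over whole periods; the small residual here comes from the finite window and time step.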
An extension to Schrödinger's equation written as a system of real-valued wave functions $\phi$ and $\psi$ may take the form
• $\dot\phi +H\psi -\gamma\dddot \psi = f(t)$ (1)
• $-\dot\psi +H\phi -\gamma\dddot \phi = g(t)$ (2)
where $H$ is a Hamiltonian, $f(t)$ and $g(t)$ represent near-resonant forcing, and $\gamma =\gamma (\dot \rho )\ge 0$ with $\gamma (0)=0$ and $\rho =\phi^2 +\psi^2$ is charge density.
This model carries the characteristics displayed of the model $\ddot\phi+H^2\phi =0$ as the 2nd order in time model obtained after eliminating $\psi$ in the case $\gamma =0$ as displayed in a previous post.
In particular, multiplication of (1) by $\phi$ and (2) by $-\psi$ and addition gives conservation of charge if $f(t)\phi -g(t)\psi =0$ as a natural phase shift condition.
Further, multiplication of (1) by $\dot\psi$ and (2) by $\dot\phi$ and addition gives a balance of total energy as inner energy plus radiated energy
• $\int (\phi H\phi +\psi H\psi)dt +\gamma\int (\ddot\phi^2 +\ddot\psi^2)dt$
in terms of work of forcing.
lördag 28 januari 2017
Physical Interpretation of Quantum Mechanics Needed
The standard text book Copenhagen Interpretation of quantum mechanics formed by Bohr is not a realist physical theory about "what is", but instead an idealist/positivist non-physical probabilistic theory of "what we can know".
This has led modern physics into a black hole of endless fruitless speculations with the Many Worlds Interpretation by Everett as the absurd result of anyway seeking to give a physical meaning to the non-physical Copenhagen Interpretation.
Now, it is a fact that the microscopic world of atoms interacts with the macroscopic world we perceive as being real physical. If the microscopic world is declared to be non-real non-physical, then the interaction becomes a mystery. That real physics can interact with real physics is obvious, but to think of interaction between non-real and real physics makes you dizzy as expressed so well by Bohr:
• Anyone who can contemplate quantum mechanics without getting dizzy, hasn't understood it.
The emission spectrum of an atom shows that atom microscopics does interact with electromagnetic macroscopics. Physicists are paid to describe this interaction, but following Bohr this was and still is impossible, and the question is if the pay should continue...
In realQM atoms are real as composed of clouds of electric charge around a kernel and the emission spectrum is explained as the result of charge oscillation within atoms in resonance with exterior electromagnetic waves.
To keep being paid a physicist would say: Look, after all an atom is real as being composed of electron "particles orbiting" a kernel, and the non-real aspect is just that the physics is hidden to inspection and that we cannot know the whereabouts of these particles over time. So atoms are real but the nature of the reality is beyond human perception because you get dizzy when seeking to understand.
In particular it is to Bohr inexplicable that electron particles orbiting the kernel of an atom in ground state do not radiate, which allows the ground state to be stable.
In realQM the charge distribution of an atom in ground state does not change in time and thus is not source of radiation and the atom can remain stable. On the other hand the charge distribution of a superposition of ground and excited states does vary with time and thus may radiate at the beat frequency as the difference between excited and ground frequency.
To Bohr contact with the inner microscopic world of an atom from the macroscopic world would take place at a moment of observation, but that leaves out the constant interaction between micro and macroscopics taking place in radiation.
An atom in ground state is not radiating and the inner mechanics of the atom is closed to inspection.
For this case one could argue that Bohr's view could be upheld, since one would be free to describe the inner mechanics in many different ways, for example in terms of probabilities of electron particle configurations, all impossible to experimentally verify.
The relevant problem is then the radiating atom in interaction with an outer macroscopic world, and here Bohr has little to say because he believes that interaction between micro and macro takes place only at observation in the form of a "collapse of the wave function".
A real actuality of the inner mechanics of an atom may interact with an actual real outer world, with or without probability, but a probability of an inner particle mechanics of an atom cannot interact with an outer reality, and Bohr discards the first option...actualities can interact but not potentialities...
Let me sum up: The inner microscopics of a radiating atom interacts with outer macroscopics, and the interaction requires the microscopics to share physics with the macroscopics. This is not the case in the Copenhagen Interpretation, which thus must be false.
Thursday 26 January 2017
Why Atomic Emission at Beat Frequencies Only?
An atom can emit radiation of frequency $\nu =E_2-E_1$ (with Planck's constant $h$ normalized to unity, allowing energy to be replaced by frequency), where $E_2>E_1$ are two frequencies arising as eigenvalues $E$ of a Hamiltonian $H$ with corresponding eigenfunctions $\psi (x)$, depending on a space coordinate $x$, satisfying $H\psi =E\psi$; the corresponding wave functions $\Psi (x,t)=\exp(iEt)\psi (x)$ satisfy Schrödinger's wave equation, with $t$ a time variable.
Why is the emission spectrum generated by differences $E_2-E_1$ of frequencies of the Hamiltonian as "beat frequencies" and not the frequencies $E_2$ and $E_1$ themselves? Why does an atom interact/resonate with an electromagnetic field of beat frequency $E_2-E_1$, but not $E_2$ or $E_1$?
In particular, why is the ground state of smallest frequency stable by refusing electromagnetic resonance?
This was the question confronting Bohr in 1913 when trying to build a model of the atom in classical mechanical terms. Bohr's answer was that "for some reason" only certain "electron orbits" with certain frequencies "are allowed" and that "for some reason" these electron orbits cannot resonate with an electromagnetic field, and he then suggested that observed resonances at beat frequencies came from "electrons jumping between energy levels". This was not convincing and prepared the revolution into quantum mechanics in 1926.
Real Quantum Mechanics realQM gives the following answer: The charge density $\vert\Psi (x,t)\vert^2=\psi^2(x)$ of a wave function $\Psi (x,t)=\exp(iEt)\psi (x)$ with $\psi (x)$ satisfying $H\psi =E\psi$, does not vary with time and as such does not radiate.
On the other hand the difference $\Psi =\Psi_2-\Psi_1$ between two wave functions $\Psi_1(x,t)=\exp(iE_1t)\psi_1(x)$ and $\Psi_2(x,t)=\exp(iE_2t)\psi_2(x)$ with $H\psi_1=E_1\psi_1$ and $H\psi_2=E_2\psi_2$, is a solution to Schrödinger's equation and can be written
• $\Psi (x,t)=\exp(iE_1t)(\exp(i(E_2-E_1)t)\psi_2(x)-\psi_1(x))$
with corresponding charge density
• $\vert\Psi (x,t)\vert^2 = \vert\exp(i(E_2-E_1)t)\psi_2(x)-\psi_1(x)\vert^2$
with a visible time variation in space scaling with $(E_2-E_1)$ and associated radiation of frequency $E_2-E_1$ as a beat frequency.
A superposition of two eigenstates thus may radiate, because the corresponding charge density varies in space with time, while pure eigenstates have charge densities which do not vary with time and thus do not radiate.
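As a numerical sanity check of this claim, here is a small Python sketch. It uses a toy particle-in-a-box model standing in for the atomic eigenfunctions (not the realQM equations themselves): a pure eigenstate's density is constant in time, while the superposition's density oscillates with the beat period $2\pi/(E_2-E_1)$.

```python
import numpy as np

# Toy model: particle in a box on [0, 1] with hbar = m = 1, standing in
# for the eigenfunctions psi_1, psi_2 of the Hamiltonian H in the text.
# Eigenfunctions psi_n(x) = sqrt(2) sin(n pi x), energies E_n = (n pi)^2 / 2.
x = np.linspace(0.0, 1.0, 201)
psi1 = np.sqrt(2.0) * np.sin(np.pi * x)
psi2 = np.sqrt(2.0) * np.sin(2.0 * np.pi * x)
E1, E2 = np.pi**2 / 2.0, (2.0 * np.pi)**2 / 2.0

def density(t):
    """Charge density |Psi|^2 of the superposition Psi_2 - Psi_1 at time t."""
    Psi = np.exp(1j * E2 * t) * psi2 - np.exp(1j * E1 * t) * psi1
    return np.abs(Psi)**2

# A pure eigenstate's density |exp(iEt) psi(x)|^2 = psi(x)^2 is time-independent:
assert np.allclose(np.abs(np.exp(1j * E1 * 0.7) * psi1)**2, psi1**2)

# The superposition's density oscillates at the beat frequency E2 - E1:
# it is periodic with period T = 2 pi / (E2 - E1), but not constant in time.
T = 2.0 * np.pi / (E2 - E1)
assert np.allclose(density(0.3), density(0.3 + T))
assert not np.allclose(density(0.0), density(0.5 * T))
```

Expanding the modulus gives $\vert\Psi\vert^2 = \psi_1^2 + \psi_2^2 - 2\psi_1\psi_2\cos((E_2-E_1)t)$, which makes the beat period explicit.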
In realQM electrons are thought of as "clouds of charge" of density $\vert\Psi\vert^2$ with physical presence, which is not changing with time in pure eigenstates and thus does not radiate, while superpositions of eigenstates do vary with time and thus may radiate, because a charge oscillating at a certain frequency generates an electric field oscillating at the same frequency.
In standard quantum mechanics stdQM, $\vert\Psi\vert^2$ is instead interpreted as the probability of a configuration of electrons as particles, which lacks physical meaning and as such does not appear to allow an explanation of the non-radiation/resonance of pure eigenstates and radiation/resonance at beat frequencies. In stdQM electrons are nowhere and everywhere at the same time, it is declared that speaking of electron (or charge) motion is nonsensical, and atom radiation then remains as inexplicable as it was to Bohr in 1913.
So the revolution of classical mechanics into quantum mechanics driven by Bohr's question and unsuccessful answer, does not seem to present any real answer. Or does it?
PS I have already written about The Radiating Atom in a sequence of posts 1-11 with in particular 3: Resolution of Schrödinger's Enigma connecting to this post.
Wednesday, 25 January 2017
New Curriculum with Programming on the Government's Table
SVT Nyheter in Gävleborg reports that the new curriculum, with programming as a new subject of study, now lies on the Government's table for decision, and that several schools in Gävle and Sandviken have already made a flying start and introduced the subject.
Soon the remaining schools must follow. My contribution to meeting the need for new teaching materials is Matematik-IT, ready to be tried!
Tuesday, 24 January 2017
Is the Quantum World Really Inexplicable in Classical Terms?
Peter Holland describes in the opening statement of The Quantum Theory of Motion the state of the art of modern physics in the form of quantum mechanics, as follows:
• The quantum world is inexplicable in classical terms.
• The predictions pertaining to the interaction of matter and light embodied in Newton's laws of motion and Maxwell's equations governing the propagation of electromagnetic fields, are in flat contradiction with the experimental facts at the microscopic scale.
• A key feature of quantum effects is their apparent indeterminism, that individual atomic events are unpredictable, uncontrollable and literally seem to have no cause.
• Regularities emerge only when one considers a large ensemble of such events.
• This indeed is generally considered to constitute the heart of the conceptual problems posed by quantum phenomena, necessitating a fundamental revision of the deterministic classical world view.
No doubt this describes the predicament of modern physics and it is a sad story: It is nothing but a total collapse of rationality, and as far as I can understand, there are no compelling reasons to give up the core principles of classical continuum physics so well expressed in Maxwell's equations.
If classical continuum physics is modified just a little by adding a new element of finite precision computation, then the apparent contradiction of the ultra-violet catastrophe of black-body radiation as the root of "quantization" can be circumvented and rationality maintained. You can find my arguments by browsing the labels to this post and the web sites Computational Black Body Radiation and The World as Computation, with further development in the book Real Quantum Mechanics.
And so no, it may not be necessary to give up the deterministic classical world view when doing atom physics, the view which gave us Maxwell's equations and opened a new world of electro-magnetics connecting to atoms. It may suffice to modify the deterministic classical view just a little bit, without losing anything, to make it work also for atom physics.
After all, what can be more deterministic than the ground state of a Hydrogen atom?
Of course, this is not a message that is welcomed by physicists, who have been locked for 90 years into finding evidence that quantum mechanics is inexplicable, by inventing contradictions of concepts without physical reality. The root of such contradictions (like wave-particle duality) is the linear multi-d Schrödinger equation, which is picked from the air as a formality without physics content, and just because of that is inexplicable. To advance, it seems that a new Schrödinger equation with physical meaning should be derived...
The question is how to generalise Schrödinger's equation for the Hydrogen atom with one electron, which works fine and can be understood, to Helium with two electrons and so on... The question is then how the two electrons of Helium find co-existence around the nucleus. In Real Quantum Mechanics they split 3d space without overlap... like East and West of global politics or Germany...
Quantum Mechanics as Retreat to (German) Romantic Irrational Ideal
Quantum theory is widely held to resist any realist interpretation and to mark the advent of a ‘postmodern’ science characterised by paradox, uncertainty, and the limits of precise measurement. Keeping his own realist position in check, Christopher Norris provides a remarkably detailed and incisive account of the positions adopted by parties on both sides of this complex debate.
James Cushing gives in Bohmian Mechanics and Quantum Theory (1996): An Appraisal, an account of the rise to domination of the Born-Heisenberg-Bohr Copenhagen Interpretation of quantum mechanics:
• Today it is generally assumed that the success of quantum mechanics demands that we accept a world view in which physical processes at the most fundamental level are seen as being irreducibly and ineliminably indeterministic.
• That is, one of the great watersheds in twentieth-century scientific thought is the "Copenhagen" insight that empirical evidence and logic are seen as necessarily implying an indeterministic picture of nature.
• This is in marked contrast to any classical representation of a clockwork universe.
• A causal program would have been a far less radical departure from the then-accepted framework of classical physics than was the so-called Copenhagen version of quantum mechanics that rapidly gained ascendancy by the late 1920s and has been all-but universally accepted ever since.
• How could this happen?
• It has been over twenty years now since the dramatic and controversial "Forman thesis" was advanced that acausality was embraced by German quantum physicists in the Weimar era as a reaction to the hostile intellectual and cultural environment that existed there prior to and during the formulation of modern quantum mechanics.
• The goal was to establish a causal connection between this social intellectual milieu and the content of science, in this case quantum mechanics.
• The general structure of this argument is the following. Causality for physicists in the early twentieth century "meant complete lawfulness of Nature, determinism [(i.e., event-by-event causality)]".
• Such lawfulness was seen by scientists as absolutely essential for science to be a coherent enterprise. A scientific approach was also taken to be necessarily a rational one.
• When, in the aftermath of the German defeat in World War I, science was held responsible (not only by its failure, but even more because of its spirit) for the sorry state of society, there was a reaction against rationalism and a return to a romantic, "irrational" ideal.
Yes, quantum mechanics (in its Copenhagen Interpretation forcefully advocated by Bohr under influence from the anti-realist positivist philosopher Høffding) was a product of German physics in the Weimar republic of the 1920s, by Heisenberg and Born.
It seems reasonable to think that if the defeat of Germany in World War I was blamed on a failure of "rationality" and "realism", then a resort to "irrationality" and "anti-realism" would be rational in particular in Germany...and so quantum mechanics in its anti-realist form took over the scene as Germany rebuilt its power...
But maybe today Germany is less idealistic and anti-realistic (although the Energiewende is romantic anti-realism) and so maybe also a more realistic quantum mechanics can be allowed to develop...without the standard "shut-up and calculate" suppression of discussion...
Monday, 23 January 2017
Quantum Mechanics as Classical Continuum Physics and Not Particle Mechanics
Planck (with eyes shut) presents Einstein with the Max Planck medal of the German Physical Society, 28 June 1929, in Berlin, as the highest award of the Deutsche Physikalische Gesellschaft, for Einstein's idea of light as particles, which Planck did not believe in (and did not want to see).
Modern physics in the form of quantum mechanics was born in 1900, when Planck in a desperate act introduced the idea of smallest packets of energy or quanta to explain black-body radiation, followed up in 1905 by Einstein's equally desperate attempt to explain photo-electricity by viewing light as a stream of light particles of energy quanta $h\nu$, where $\nu$ is frequency and $h$ Planck's constant.
Yes, Einstein was desperate, because he was stuck as patent clerk in Bern and his academic career was going nowhere. Yes, Planck was also desperate because his role at the University of Berlin as the successor of the great Kirchhoff, was to explain blackbody radiation as the most urgent unsolved problem of physics and thereby demonstrate the scientific leadership of an emerging German Empire.
The "quantisation" into discrete smallest packets of energy and light was against the wisdom of the continuum physics of the 19th century, crowned by Maxwell's wave equations describing all of electro-magnetics as a system of partial differential equations over 3d space as a continuum of real numbers, the ultimate triumph of the infinitesimal Calculus of Leibniz and Newton.
The "quantisation" of energy and light thus meant a partial retreat to the view of the early Greek atomists with the world ultimately built from indivisible particles or quanta and not waves, also named particle physics.
The wave nature was kept in Schrödinger's linear multi-d equation as the basis of quantum mechanics, though not in physical form as in Maxwell's equations, but as probability waves supposedly describing probabilities of particle configurations. The mixture was named wave-particle duality, which has been the subject of endless discussion since its introduction by Bohr.
Schrödinger never accepted a particle description and stuck to his original idea that waves are enough to explain atom physics. The trouble with this view was the multi-d aspect of Schrödinger's equation, which could not be given a meaning/interpretation in terms of physical waves, like Maxwell's equations. This made Schrödinger's waves-are-enough idea impossible to defend, and Schrödinger's equation was hijacked by Bohr/Born/Heisenberg and twisted into a physical-particle/probabilistic-wave Copenhagen Interpretation as the textbook truth.
But blackbody radiation and the photoelectric effect can be explained by wave mechanics without any form of particles in the form of Computational Blackbody Radiation with the new element being finite precision computation.
The idea of a particle is contradictory, as something with physical presence without physical dimension. Atom physics can make sense as wave mechanics but not as particle mechanics. It is important to remember that this was the view of Schrödinger when he formulated his wave equation in 1925 for the Hydrogen atom. What is needed is an extension of Schrödinger's equation to atoms with several electrons which has a physical meaning, maybe as Real Quantum Mechanics, and this is not the standard linear multi-d Schrödinger equation with solutions interpreted as probability distributions of particle configurations in the spirit of Born-Bohr-Heisenberg but not Schrödinger.
Recall that particle motion is also a contradictory concept, as shown in Zeno's paradox: At each instant of time the particle (Zeno's arrow) is still at a point in space, and thus cannot move to another point. On the other hand, wave motion, such as the translatory motion of a wave across a water surface, is possible to explain as the result of (circular) transversal water oscillation without translation. Electro-magnetic waves propagate by transversal oscillation of electric-magnetic fields.
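The distinction between a traveling wave form and non-traveling material points can be illustrated numerically. This is a minimal sketch with made-up wave parameters: the profile $y(x,t)=A\cos(kx-\omega t)$ translates with speed $c=\omega/k$, while any fixed point merely oscillates in place.

```python
import numpy as np

# Minimal sketch with made-up parameters: a transversal wave profile
# y(x, t) = A cos(k x - w t) translates with speed c = w / k, while each
# fixed point x only oscillates up and down; nothing material travels.
A, k, w = 1.0, 2.0, 3.0
c = w / k
x = np.linspace(0.0, 10.0, 1001)

def y(t):
    return A * np.cos(k * x - w * t)

# The profile a moment dt later equals the initial profile shifted by c*dt:
dt = 0.01
assert np.allclose(y(dt), A * np.cos(k * (x - c * dt)))

# Meanwhile a fixed point x0 just oscillates, staying bounded by amplitude A:
x0 = 5.0
heights = A * np.cos(k * x0 - w * np.linspace(0.0, 20.0, 500))
assert np.max(np.abs(heights)) <= A + 1e-12
```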
And do not believe that Zeno's paradox was ever solved. It expresses the truly contradictory nature of the concept of particle, which cannot be resolved. Ponder the following "explanation" on Stanford Encyclopedia of Philosophy:
• Think about it this way: time, as we said, is composed only of instants. No distance is traveled during any instant. So when does the arrow actually move? How does it get from one place to another at a later moment?
• In Bergson's memorable words—which he thought expressed an absurdity—‘movement is composed of immobilities’ (1911, 308): getting from X to Y is a matter of occupying exactly one place in between at each instant (in the right order of course).
As you understand, this is just nonsense:
Particles don't exist, and if they anyway are claimed to exist, they cannot move.
Waves do exist and can move. It is not so difficult to understand!
Saturday, 21 January 2017
Deconstruction of CO2 Alarmism Started
Directly after inauguration the White House web site changes to a new Energy Plan, where all of Obama's CO2 alarmism has been completely eliminated:
Nothing about dangerous CO2! No limits on emission! Trump has listened to science! CO2 alarmism will be defunded and why not then also other forms of fake physics...
This is the first step to the Fall of IPCC and the Paris agreement and liberation of resources for the benefit of humanity, see phys.org.
The defunding of CO2 alarmism will now start, and then why not other forms of fake science?
PS1 Skepticism to CO2 alarmism expressed by Klimatrealisterna is now getting published in media in Norway, while in Sweden it is fully censored. I have recently accepted an invitation to become a member of the scientific committee of this organisation (not yet visible on the web site).
PS2 Read Roy Spencer's analysis of the Trump Dump:
Bottom line: With plenty of energy, poverty can be eliminated. Unstopped CO2 alarmism will massively increase poverty with no gain whatsoever. Trump is the first state leader to understand that the Emperor of CO2 Alarmism is naked, and other leaders will now open their eyes to see the same thing... and skeptics may soon say mission complete...
See also The Beginning of the End of EPA.
The Feynman Lectures on Physics
From Wikipedia, the free encyclopedia
The Feynman Lectures on Physics including Feynman's Tips on Physics: The Definitive and Extended Edition (2nd edition, 2005)
Author: Richard P. Feynman, Robert B. Leighton and Matthew Sands
Country: United States
Language: English
Subject: Physics
Publisher: Addison–Wesley
OCLC: 19455482
The Feynman Lectures on Physics is a physics textbook based on some lectures by Richard P. Feynman, a Nobel laureate who has sometimes been called “The Great Explainer”.[1] The lectures were given to undergraduate students at the California Institute of Technology (Caltech), during 1961–1963. The book's authors are Feynman, Robert B. Leighton, and Matthew Sands.
The book comprises three volumes. The first volume focuses on mechanics, radiation, and heat, including relativistic effects. The second volume is mainly on electromagnetism and matter. The third volume is on quantum mechanics; it shows, for example, how the double-slit experiment contains the essential features of quantum mechanics. The book also includes chapters on mathematics and the relation of physics to other sciences.
The Feynman Lectures on Physics is perhaps the most popular physics book ever written. It has been printed in a dozen languages.[2] More than 1.5 million copies have been sold in English, and probably even more in foreign-language editions.[2] A 2013 review in Nature described the book as having "simplicity, beauty, unity … presented with enthusiasm and insight".[3]
In 2013, Caltech made the book freely available on the web site
Feynman the “Great Explainer”: The Feynman Lectures on Physics found an appreciative audience beyond the undergraduate community.
By 1960, Richard Feynman’s research and discoveries in physics had resolved a number of troubling inconsistencies in several fundamental theories. In particular, it was his work in quantum electrodynamics which would lead to the awarding in 1965 of the Nobel Prize in physics. At the same time that Feynman was at the pinnacle of his fame, the faculty of the California Institute of Technology was concerned about the quality of the introductory courses being offered to the undergraduate students. It was felt that these were burdened by an old-fashioned syllabus and that the exciting discoveries of recent years, many of which had occurred at Caltech, were not being conveyed to the students.
Thus, it was decided to reconfigure the first physics course offered to students at Caltech, with the goal being to generate more excitement in the students. Feynman readily agreed to give the course, though only once. Aware of the fact that this would be a historic event, Caltech recorded each lecture and took photographs of each drawing made on the blackboard by Feynman.
Based on the lectures and the tape recordings, a team of physicists and graduate students put together a manuscript that would become The Feynman Lectures on Physics. Although Feynman's most valuable technical contribution to the field of physics may have been in the field of quantum electrodynamics, the Feynman Lectures were destined to become his most widely read work.
The Feynman Lectures are considered to be one of the best and most sophisticated college-level introductions to physics.[4] Feynman himself, however, stated in his original preface that he was “pessimistic” with regard to the success with which he reached all of his students. The Feynman lectures were written “to maintain the interest of very enthusiastic and rather smart students coming out of high schools and into Caltech.” Feynman was targeting the lectures to students who, “at the end of two years of our previous course, [were] very discouraged because there were really very few grand, new, modern ideas presented to them.” As a result, some physics students find the lectures more valuable after they obtain a good grasp of physics by studying more traditional texts. Many professional physicists refer to the lectures at various points in their careers to refresh their minds with regard to basic principles.
As the two-year course (1961–1963) was still being completed, rumor of it spread throughout the physics community. In a special preface to the 1989 edition, David Goodstein and Gerry Neugebauer claim that as time went on, the attendance of registered students dropped sharply but was matched by a compensating increase in the number of faculty and graduate students. Sands, in his memoir accompanying the 2005 edition, contests this claim. Goodstein and Neugebauer also state that, “it was [Feynman’s] peers — scientists, physicists, and professors — who would be the main beneficiaries of his magnificent achievement, which was nothing less than to see physics through the fresh and dynamic perspective of Richard Feynman,” and that his "gift was that he was an extraordinary teacher of teachers".
Addison–Wesley published a collection of problems to accompany The Feynman Lectures on Physics. The problem sets were first used in the 1962-1963 academic year and organized by Robert B. Leighton. Some of the problems are sophisticated enough to require understanding of topics as advanced as Kolmogorov's zero-one law, for example.
Addison–Wesley also released in CD format all the audio tapes of the lectures, over 103 hours of recordings of Richard Feynman, after remastering the sound and clearing the recordings. For the CD release, the order of the lectures was rearranged from that of the original texts. (The publisher has released a table showing the correspondence between the books and the CDs.)
In March 1964, Feynman appeared before the freshman physics class as a guest lecturer, but the notes for this lecture were lost for a number of years. They were finally located, restored, and made available as Feynman's Lost Lecture: The Motion of Planets Around the Sun.
In 2005, Michael A. Gottlieb and Ralph Leighton co-authored Feynman's Tips on Physics, which includes four of Feynman's freshman lectures not included in the main text (three on problem solving, one on inertial guidance), a memoir by Matt Sands about the origins of the Feynman Lectures on Physics, and exercises (with answers) that were assigned to students by Robert B. Leighton and Rochus Vogt in recitation sections of the Feynman Lectures course at Caltech. Also released in 2005, was a "Definitive Edition" of the lectures which includes corrections to the original text.
An account on the history of these famous volumes is given by Sands in his memoir article “Capturing the Wisdom of Feynman”, Physics Today, Apr 2005, p. 49.[5]
On September 13, 2013, in an email to members of the Feynman Lectures online forum, Gottlieb announced the launch of a new website by Caltech, The Feynman Lectures Website, which offers "[a] free high-quality online edition" of the lecture text. Volume I of the lectures was initially posted on this website, with other volumes expected to be available in the near future.[6] To provide a device-independent reading experience, the website takes advantage of modern web technologies like HTML5, SVG, and MathJax to present text, figures, and equations at any size while maintaining display quality.[7]
Volume I. Mainly mechanics, radiation, and heat
Preface: “When new ideas came in, I would try either to deduce them if they were deducible or to explain that it was a new idea … and which was not supposed to be provable.”
Volume II. Mainly electromagnetism and matter
1. Electromagnetism
2. Differential calculus of vector fields
3. Vector integral calculus
4. Electrostatics
5. Application of Gauss' law
6. The electric field in various circumstances
7. The electric field in various circumstances (continued)
8. Electrostatic energy
9. Electricity in the atmosphere
10. Dielectrics
11. Inside dielectrics
12. Electrostatic analogs
13. Magnetostatics
14. The magnetic field in various situations
15. The vector potential
16. Induced currents
17. The laws of induction
18. The Maxwell equations
19. Principle of least action
20. Solutions of Maxwell's equations in free space
21. Solutions of Maxwell's equations with currents and charges
22. AC circuits
23. Cavity resonators
24. Waveguides
25. Electrodynamics in relativistic notation
26. Lorentz transformations of the fields
27. Field energy and field momentum
28. Electromagnetic mass (ref. to Wheeler–Feynman absorber theory)
29. The motion of charges in electric and magnetic fields
30. The internal geometry of crystals
31. Tensors
32. Refractive index of dense materials
33. Reflection from surfaces
34. The magnetism of matter
35. Paramagnetism and magnetic resonance
36. Ferromagnetism
37. Magnetic materials
38. Elasticity
39. Elastic materials
40. The flow of dry water
41. The flow of wet water
42. Curved space
Volume III. Quantum mechanics
1. Quantum behavior
2. The relation of wave and particle viewpoints
3. Probability amplitudes
4. Identical particles
5. Spin one
6. Spin one-half
7. The dependence of amplitudes on time
8. The Hamiltonian matrix
9. The ammonia maser
10. Other two-state systems
11. More two-state systems
12. The hyperfine splitting in hydrogen
13. Propagation in a crystal lattice
14. Semiconductors
15. The independent particle approximation
16. The dependence of amplitudes on position
17. Symmetry and conservation laws
18. Angular momentum
19. The hydrogen atom and the periodic table
20. Operators
21. The Schrödinger equation in a classical context: a seminar on superconductivity
Abbreviated editions
Six readily-accessible chapters were later compiled into a book entitled Six Easy Pieces: Essentials of Physics Explained by Its Most Brilliant Teacher. Six more chapters are in the book Six Not So Easy Pieces: Einstein's Relativity, Symmetry and Space-Time.
“Six Easy Pieces grew out of the need to bring to as wide an audience as possible, a substantial yet nontechnical physics primer based on the science of Richard Feynman…. General readers are fortunate that Feynman chose to present certain key topics in largely qualitative terms without formal mathematics….”[8]
Six Easy Pieces (1994)
1. Atoms in motion
2. Basic Physics
3. The relation of physics to other sciences
4. Conservation of energy
5. The theory of gravitation
6. Quantum behavior
Six Not-So-Easy Pieces (1998)
1. Vectors
2. Symmetry in physical laws
3. The special theory of relativity
4. Relativistic energy and momentum
5. Space-time
6. Curved space
The Very Best of The Feynman Lectures (Audio, 2005)
1. The Theory of Gravitation (Vol. I, Chapter 7)
2. Curved Space (Vol. II, Chapter 42)
3. Electromagnetism (Vol. II, Chapter 1)
4. Probability (Vol. I, Chapter 6)
5. The Relation of Wave and Particle Viewpoints (Vol. III, Chapter 2)
6. Superconductivity (Vol. III, Chapter 21)
Publishing information
See also
1. ^ LeVine, Harry (2009). The Great Explainer: The Story of Richard Feynman. Greensboro, North Carolina: Morgan Reynolds. ISBN 978-1-59935-113-1.
2. ^ a b [1]
3. ^ Phillips R. (5 December 2013), "The Feynman lectures on physics", Nature, 504: 30-31.
4. ^ Rohrlich, Fritz (1989), From paradox to reality: our basic concepts of the physical world, Cambridge University Press, p. 157, ISBN 0-521-37605-X , Extract of page 157
5. ^ See also: Welton, T.A., “Memory of Feynman”, Physics Today, Feb 2007, p.46.
6. ^ Text of the email to Feynman Lectures Forum Members on Hacker News.
7. ^ Footnote on homepage of website The Feynman Lectures on Physics.
8. ^ Feynman, Richard Phillips; Leighton, Robert B.; Sands, Matthew (2011). Six Easy Pieces: Essentials of Physics Explained by Its Most Brilliant Teacher. Basic Books. p. vii. ISBN 0-465-02529-3. , Extract of page vii
External links
Imagine you're teaching a first course on quantum mechanics in which your students are well-versed in classical mechanics, but have never seen any quantum before. How would you motivate the subject and convince your students that in fact classical mechanics cannot explain the real world and that quantum mechanics, given your knowledge of classical mechanics, is the most obvious alternative to try?
If you sit down and think about it, the idea that the state of a system, instead of being specified by the finitely many particles' position and momentum, is now described by an element of some abstract (rigged) Hilbert space and that the observables correspond to self-adjoint operators on the space of states is not at all obvious. Why should this be the case, or at least, why might we expect this to be the case?
Then there is the issue of measurement which is even more difficult to motivate. In the usual formulation of quantum mechanics, we assume that, given a state $|\psi \rangle$ and an observable $A$, the probability of measuring a value between $a$ and $a+da$ is given by $|\langle a|\psi \rangle |^2da$ (and furthermore, if $a$ is not an eigenvalue of $A$, then the probability of measuring a value in this interval is $0$). How would you convince your students that this had to be the case?
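For concreteness, the Born-rule prescription just stated can be sketched in the finite-dimensional case, where the spectral integral becomes a sum over eigenvalues. The $2\times 2$ observable and state below are chosen arbitrarily for illustration:

```python
import numpy as np

# Toy 2x2 example (observable and state chosen arbitrarily for illustration):
# for a Hermitian observable A with orthonormal eigenvectors |a>, the Born
# rule assigns probability |<a|psi>|^2 to measuring eigenvalue a in state |psi>.
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])           # Hermitian observable
eigvals, eigvecs = np.linalg.eigh(A)   # columns of eigvecs are the |a>

psi = np.array([1.0, 0.0])             # normalized state |psi>
probs = np.abs(eigvecs.conj().T @ psi)**2

assert np.isclose(probs.sum(), 1.0)    # the probabilities sum to 1
# Consistency check: sum_a a |<a|psi>|^2 equals the expectation <psi|A|psi>.
assert np.isclose(probs @ eigvals, psi.conj() @ A @ psi)
```

The closing assertion is the standard consistency between the Born probabilities and the expectation value, which holds for any Hermitian $A$ and normalized $\psi$.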
I have thought about this question of motivation for a couple of years now, and so far, the only answers I've come up with are incomplete, not entirely satisfactory, and seem to be much more non-trivial than I feel they should be. So, what do you guys think? Can you motivate the usual formulation of quantum mechanics using only classical mechanics and minimal appeal to experimental results?
Note that, at some point, you will have to make reference to experiment. After all, this is the reason why we needed to develop quantum mechanics. In principle, we could just say "The Born Rule is true because it's experimentally verified.", but I find this particularly unsatisfying. I think we can do better. Thus, I would ask that when you do invoke the results of an experiment, you do so to only justify fundamental truths, by which I mean something that can not itself just be explained in terms of more theory. You might say that my conjecture is that the Born Rule is not a fundamental truth in this sense, but can instead be explained by more fundamental theory, which itself is justified via experiment.
Edit: To clarify, I will try to make use of a much simpler example. In an ideal gas, if you fix the volume, then the temperature is proportional to pressure. So we may ask "Why?". You could say "Well, because experiment.", or alternatively you could say "It is a trivial corollary of the ideal gas law.". If you choose the latter, you can then ask why that is true. Once again, you can just say "Because experiment." or you could try to prove it using more fundamental physical truths (using the kinetic theory of gases, for example). The objective, then, is to come up with the most fundamental physical truths, prove everything else we know in terms of those, and then verify the fundamental physical truths via experiment. And in this particular case, the objective is to do this with quantum mechanics.
"making as little reference to experiment as possible" !!! The only reason we have developed quantum mechanics is because the experimental evidence demanded it and demands it. – anna v Dec 5 '12 at 17:59
Are you looking for a derivation from simple physical principles a la Einstein's derivation of relativity from his two postulates? That is the basic open question in quantum foundations, isn't it? – Emilio Pisanty Dec 5 '12 at 18:58
right, then the definitive answer is that such an argument does not exist. plenty of people devote their academic careers to answering your question, as Emilio implied, and no one is in agreement as to the correct answer, yet. if you are interested in this, then you should look up the work of Rob Spekkens. also, Chris Fuchs, Lucien Hardy, Jonathan Barrett, and probably a bunch of other people too. – Mark Mitchison Dec 5 '12 at 20:16
um...not necessarily. It's just that I think that I now understand the intent of the OP's question, and if I do - it cannot be put better than Emilio did - it is simply 'the standard open question of quantum foundations'. I know enough people working in this field to know that the experts do not consider this question at all resolved. – Mark Mitchison Dec 5 '12 at 20:26
Hey Johnny! Hope all is well. As to your question, I really feel that you cannot talk about quantum mechanics in the way you envision while giving your students as solid an understanding as when you approach from an experimental viewpoint. I think the closest you could get would be to talk about the problems encountered before quantum mechanics and how quantum mechanics was realized and built after that; this would merely omit information on the experiments, which is equally unsatisfying. It is a tough question! – Dylan Sabulsky Dec 6 '12 at 1:17
14 Answers
I am late to this party here, but I can maybe advertise something pretty close to a derivation of quantum mechanics from pairing classical mechanics with its natural mathematical context, namely with Lie theory. I haven't had a chance yet to try the following on first-year students, but I am pretty confident that with just a tad more pedagogical guidance thrown in as need be, the following should make for a rather satisfactory motivation for any student with a little bit of mathematical/theoretical physics inclination.
For more along the following lines see at nLab:quantization.
Quantization of course was and is motivated by experiment, hence by observation of the observable universe: it just so happens that quantum mechanics and quantum field theory correctly account for experimental observations, where classical mechanics and classical field theory gives no answer or incorrect answers. A historically important example is the phenomenon called the “ultraviolet catastrophe”, a paradox predicted by classical statistical mechanics which is not observed in nature, and which is corrected by quantum mechanics.
But one may also ask, independently of experimental input, if there are good formal mathematical reasons and motivations to pass from classical mechanics to quantum mechanics. Could one have been led to quantum mechanics by just pondering the mathematical formalism of classical mechanics? (Hence more precisely: is there a natural Synthetic Quantum Field Theory?)
The following spells out an argument to this extent. It will work for readers with a background in modern mathematics, notably in Lie theory, and with an understanding of the formalization of classical/prequantum mechanics in terms of symplectic geometry.
So to briefly recall, a system of classical mechanics/prequantum mechanics is a phase space, formalized as a symplectic manifold $(X,\omega)$. A symplectic manifold is in particular a Poisson manifold, which means that the algebra of functions on phase space $X$, hence the algebra of classical observables, is canonically equipped with a compatible Lie bracket: the Poisson bracket. This Lie bracket is what controls dynamics in classical mechanics. For instance if $H \in C^{\infty}(X)$ is the function on phase space which is interpreted as assigning to each configuration of the system its energy – the Hamiltonian function – then the Poisson bracket with $H$ yields the infinitesimal time evolution of the system: the differential equations famous as Hamilton's equations.
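To make the role of the Poisson bracket concrete, here is a small symbolic check (a sketch using sympy, with a harmonic-oscillator Hamiltonian chosen purely for illustration) that the bracket with $H$ does reproduce Hamilton's equations:

```python
import sympy as sp

q, p, m, k = sp.symbols('q p m k', positive=True)
H = p**2/(2*m) + k*q**2/2  # illustrative choice: harmonic oscillator

def poisson(f, g):
    # Poisson bracket on the 2-dimensional phase space with coordinates q, p
    return sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)

# Hamilton's equations: df/dt = {f, H}
print(poisson(q, H))  # p/m, i.e. dq/dt
print(poisson(p, H))  # -k*q, i.e. dp/dt
```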
To take notice of here is the infinitesimal nature of the Poisson bracket. Generally, whenever one has a Lie algebra $\mathfrak{g}$, then it is to be regarded as the infinitesimal approximation to a globally defined object, the corresponding Lie group (or generally smooth group) $G$. One also says that $G$ is a Lie integration of $\mathfrak{g}$ and that $\mathfrak{g}$ is the Lie differentiation of $G$.
Therefore a natural question to ask is: Since the observables in classical mechanics form a Lie algebra under Poisson bracket, what then is the corresponding Lie group?
The answer to this is of course “well known” in the literature, in the sense that there are relevant monographs which state the answer. But, maybe surprisingly, the answer to this question is not (at the time of this writing) a widely advertised fact that would have found its way into the basic educational textbooks. The answer is that this Lie group which integrates the Poisson bracket is the “quantomorphism group”, an object that seamlessly leads over to the quantum mechanics of the system.
Before we say this in more detail, we need a brief technical aside: of course Lie integration is not quite unique. There may be different global Lie group objects with the same Lie algebra.
The simplest example of this is already the one of central importance for the issue of quantization, namely the Lie integration of the abelian line Lie algebra $\mathbb{R}$. This has essentially two different Lie groups associated with it: the simply connected translation group, which is just $\mathbb{R}$ itself again, equipped with its canonical additive abelian group structure, and the discrete quotient of this by the group of integers, which is the circle group
$$ U(1) = \mathbb{R}/\mathbb{Z} \,. $$
Notice that it is the discrete and hence “quantized” nature of the integers that makes the real line become a circle here. This is not entirely a coincidence of terminology, but can be traced back to be at the heart of what is “quantized” about quantum mechanics.
Namely one finds that the Poisson bracket Lie algebra $\mathfrak{poiss}(X,ω)$ of the classical observables on phase space is (for X a connected manifold) a Lie algebra extension of the Lie algebra $\mathfrak{ham}(X)$ of Hamiltonian vector fields on $X$ by the line Lie algebra:
$$ \mathbb{R} \longrightarrow \mathfrak{poiss}(X,\omega) \longrightarrow \mathfrak{ham}(X) \,. $$
This means that under Lie integration the Poisson bracket turns into a central extension of the group of Hamiltonian symplectomorphisms of $(X,ω)$. And either it is the fairly trivial non-compact extension by $\mathbb{R}$, or it is the interesting central extension by the circle group $U(1)$. For this non-trivial Lie integration to exist, $(X,ω)$ needs to satisfy a quantization condition which says that it admits a prequantum line bundle. If so, then this $U(1)$-central extension of the group $Ham(X,\omega)$ of Hamiltonian symplectomorphisms exists and is called… the quantomorphism group $QuantMorph(X,\omega)$:
$$ U(1) \longrightarrow QuantMorph(X,\omega) \longrightarrow Ham(X,\omega) \,. $$
While important, for some reason this group is not very well known. Which is striking, because there is a small subgroup of it which is famous in quantum mechanics: the Heisenberg group.
More exactly, whenever $(X,\omega)$ itself has a compatible group structure, notably if $(X,\omega)$ is just a symplectic vector space (regarded as a group under addition of vectors), then we may ask for the subgroup of the quantomorphism group which covers the (left) action of phase space $(X,\omega)$ on itself. This is the corresponding Heisenberg group $Heis(X,\omega)$, which in turn is a $U(1)$-central extension of the group $X$ itself:
$$ U(1) \longrightarrow Heis(X,\omega) \longrightarrow X \,. $$
At this point it is worthwhile to pause for a second and note how the hallmark of quantum mechanics has appeared as if out of nowhere from just applying Lie integration to the Lie algebraic structures in classical mechanics:
if we think of Lie integrating $\mathbb{R}$ to the interesting circle group $U(1)$ instead of to the uninteresting translation group $\mathbb{R}$, then the name of its canonical basis element $1 \in \mathbb{R}$ is canonically “$i$”, the imaginary unit. Therefore one often writes the above central extension instead as follows:
$$ i \mathbb{R} \longrightarrow \mathfrak{poiss}(X,\omega) \longrightarrow \mathfrak{ham}(X,\omega) $$
in order to amplify this. But now consider the simple special case where $(X,\omega) = (\mathbb{R}^{2}, d p \wedge d q)$ is the 2-dimensional symplectic vector space which is for instance the phase space of the particle propagating on the line. Then a canonical set of generators for the corresponding Poisson bracket Lie algebra consists of the linear functions $p$ and $q$ of classical mechanics textbook fame, together with the constant function. Under the above Lie theoretic identification, this constant function is the canonical basis element of $i\mathbb{R}$, hence purely Lie theoretically it is to be called “$i$”.
With this notation then the Poisson bracket, written in the form that makes its Lie integration manifest, indeed reads
$$ [q,p] = i \,. $$
Since the choice of basis element of $i\mathbb{R}$ is arbitrary, we may rescale the $i$ here by any non-vanishing real number without changing this statement. If we write “$\hbar$” for this element, then the Poisson bracket instead reads
$$ [q,p] = i \hbar \,. $$
This is of course the hallmark equation for quantum physics, if we interpret $\hbar$ here indeed as Planck's constant. We see it arise here by nothing but considering the non-trivial (the interesting, the non-simply connected) Lie integration of the Poisson bracket.
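For readers who want to see the familiar operator side of this relation, here is a minimal sympy sketch (not part of the Lie-theoretic argument, just a cross-check) verifying $[q,p] = i\hbar$ in the Schrödinger representation, where $q$ acts by multiplication and $p$ as $-i\hbar\, d/dx$:

```python
import sympy as sp

x, hbar = sp.symbols('x hbar', positive=True)
psi = sp.Function('psi')(x)

# Schrödinger representation: q multiplies by x, p differentiates
q_op = lambda f: x * f
p_op = lambda f: -sp.I * hbar * sp.diff(f, x)

commutator = sp.simplify(q_op(p_op(psi)) - p_op(q_op(psi)))
print(commutator)  # I*hbar*psi(x)
```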
This is only the beginning of the story of quantization, naturally understood and indeed “derived” from applying Lie theory to classical mechanics. From here the story continues. It is called the story of geometric quantization. We close this motivation section here by some brief outlook.
The quantomorphism group which is the non-trivial Lie integration of the Poisson bracket is naturally constructed as follows: given the symplectic form $\omega$, it is natural to ask if it is the curvature 2-form of a $U(1)$-principal connection $\nabla$ on a complex line bundle $L$ over $X$ (this is directly analogous to Dirac charge quantization when instead of a symplectic form on phase space we consider the field strength 2-form of electromagnetism on spacetime). If so, such a connection $(L,\nabla)$ is called a prequantum line bundle of the phase space $(X,\omega)$. The quantomorphism group is simply the automorphism group of the prequantum line bundle, covering diffeomorphisms of the phase space (the Hamiltonian symplectomorphisms mentioned above).
As such, the quantomorphism group naturally acts on the space of sections of $L$. Such a section is like a wavefunction, except that it depends on all of phase space, instead of just on the “canonical coordinates”. For purely abstract mathematical reasons (which we won’t discuss here, but see at motivic quantization for more) it is indeed natural to choose a “polarization” of phase space into canonical coordinates and canonical momenta and consider only those sections of the prequantum line bundle which depend on just the former. These are the actual wavefunctions of quantum mechanics, hence the quantum states. And the subgroup of the quantomorphism group which preserves these polarized sections is the group of exponentiated quantum observables. For instance in the simple case mentioned before where $(X,\omega)$ is the 2-dimensional symplectic vector space, this is the Heisenberg group with its famous action by multiplication and differentiation operators on the space of complex-valued functions on the real line.
For more along these lines see at nLab:quantization.
Dear Urs: fantastic answer. I fixed up some of the fraktur and other symbols which hadn't seemed to come across from your nLab article so well: you might like to check correctness. – WetSavannaAnimal aka Rod Vance Oct 20 '14 at 11:29
Can you, for a concrete simple example in quantum mechanics, follow this procedure (take classical geometry, choose circle group bundle with connection, write down the expression which amounts to the integration "in the $i$-direction") and express the observables $\langle H\rangle, \langle P\rangle, \dots$ in terms of this. Is there the harmonic oscillator, starting from the classical Hamiltonian, worked out with emphasis on exactly those bundles? – NikolajK Oct 27 '14 at 11:12
Yes, this is called "geometric quantization". It's standard. (I was just offering a motivation for existing theory, not a new theory.) The geometric quantization of the standard examples (e.g. harmonic oscillator) is in all the standard textbooks and lecture notes, take your pick here:… – Urs Schreiber Oct 27 '14 at 11:39
Why would you ever try to motivate a physical theory without appealing to experimental results??? The motivation of quantum mechanics is that it explains experimental results. It is obvious that you would choose a simpler, more intuitive picture than quantum mechanics if you weren't interested in predicting anything.
If you are willing to permit some minimal physical input, then how about this: take the uncertainty principle as a postulate. Then you know that the effect on a system of doing measurement $A$ first, then measurement $B$, is different from doing $B$ first then $A$. That can be written down symbolically as $AB \neq BA$ or even $[A,B] \neq 0$. What kind of objects don't obey commutative multiplication? Linear operators acting on vectors! It follows that observables are operators and "systems" are somehow vectors. The notion of "state" is a bit more sophisticated and doesn't really follow without reference to measurement outcomes (which ultimately needs the Born rule). You could also argue that this effect must vanish in the classical limit, so then you must have $[A,B] \sim \hbar $, where $\hbar$ is some as-yet (and never-to-be, if you refuse to do experiments) undetermined number that must be small compared to everyday units. I believe this is similar to the original reasoning behind Heisenberg's matrix formulation of QM.
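The non-commutativity invoked here is easy to exhibit with the smallest possible example; the following numpy sketch uses the Pauli matrices (a standard, hypothetical stand-in for the measurements $A$ and $B$):

```python
import numpy as np

# sigma_x and sigma_y: Hermitian operators that do not commute
A = np.array([[0, 1], [1, 0]], dtype=complex)
B = np.array([[0, -1j], [1j, 0]], dtype=complex)

comm = A @ B - B @ A
print(comm)  # equals 2i*sigma_z, nonzero: the order of A and B matters
```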
The problem is that this isn't physics, you don't know how to predict anything without the Born rule. And as far as I know there is no theoretical derivation of the Born rule, it is justified experimentally!
If you want a foundations viewpoint on why QM rather than something else, try looking into generalised probabilistic theories, e.g. this paper. But I warn you, these provide neither a complete, simple nor trivial justification for the QM postulates.
See edit to question. Obviously, you're going to have to appeal to experiment somewhere, but I feel as if the less we have to reference experiment, the more elegant the answer would be. – Jonathan Gleason Dec 5 '12 at 18:15
i disagree entirely on that point, but obviously that's personal aesthetics. surely if you don't find experiments a beautiful proof, it would be better to argue that quantum mechanics is the most mathematically elegant physical theory possible, and thereby remove the pesky notion of those dirty, messy experiments completely! – Mark Mitchison Dec 5 '12 at 19:35
This seems to be a good starting point, but the problem is that measurements are not linear operators acting on vectors... But perhaps the example can be adapted. – Bzazz Feb 2 '13 at 11:35
@Bzazz Huh? The outcome of a (von Neumann) measurement is given by the projection of the initial state vector onto one of the eigenspaces of the operator describing the observable. That projection certainly is a linear, Hermitian operator. If the observables don't commute, they don't share the same eigenvectors, and therefore the order of projections matters. – Mark Mitchison Feb 4 '13 at 20:40
(contd.) In the more general case, a measurement is described by a CP map, which is a linear operator over the (vector) space of density matrices. The CP map can always be described by a von Neumann projection in a higher-dimensional space, and the same argument holds. – Mark Mitchison Feb 4 '13 at 20:41
If I would be designing an introduction to quantum physics course for physics undergrads, I would seriously consider starting from the observed Bell-GHZ violations. Something along the lines of David Mermin's approach. If there is one thing that makes clear that no form of classical physics can provide the deepest law of nature, this is it. (This does make reference to experimental facts, albeit more of a gedanken nature. As others have commented, some link to experiments is, and should be, unavoidable.)
Excellent answer. What would be really fascinating would be to show Einstein the Bell-GHZ violations. I can't help but wonder what he would make of it. To me, these experiments confirm his deepest concern -- spooky action at a distance! – user7348 Dec 7 '12 at 16:33
Some time ago I contemplated Einstein's reaction (…): "Einstein would probably have felt his famous physics intuition had lost contact with reality, and he would certainly happily have admitted that Feynman's claim "nobody understands quantum physics" makes no exception for him. I would love to hear the words that the most quotable physicist would have uttered at the occasion. Probably something along the lines "Magical is the Lord, magical in subtle and deceitful ways bordering on maliciousness"." – Johannes Dec 7 '12 at 19:14
Near the end of Dirac's career, he wrote “And, I think it might turn out that ultimately Einstein will prove to be right, ... that it is quite likely that at some future time we may get an improved quantum mechanics in which there will be a return to determinism and which will, therefore, justify the Einstein point of view. But such a return to determinism could only be made at the expense of giving up some other basic idea which we now assume without question. We would have to pay for it in some way which we cannot yet guess at, if we are to re-introduce determinism.” Directions in Physics p.10 – joseph f. johnson Feb 11 '13 at 9:43
Wonderful article. Even if Einstein had done nothing else than criticize QM, he would still be one of the greatest scientists ever. How long would it have taken us to go looking for these now experimental facts without the EPR paradox? – WetSavannaAnimal aka Rod Vance Aug 31 '13 at 0:33
You should use the history of physics to ask them questions where classical physics fails. For example, you can tell them the result of Rutherford's experiment and ask: if an electron orbits the nucleus, then a charge is accelerating, so it should radiate electromagnetic energy. In that case the electron would lose its energy and collapse onto the nucleus, ending the existence of the atom within a fraction of a second (you can tell them to calculate this). But, as we know, atoms have survived for billions of years. How? Where's the catch?
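The calculation the students are invited to do can be sketched in a few lines; the standard Larmor-power estimate gives a collapse time $t = a_0^3/(4 r_0^2 c)$ (the constants below are textbook values, inserted here for illustration):

```python
# Classical lifetime of a Rutherford atom: an electron radiating by the
# Larmor formula spirals into the nucleus in roughly t = a0^3 / (4 r0^2 c)
a0 = 5.29e-11   # Bohr radius, m
r0 = 2.82e-15   # classical electron radius, m
c = 3.0e8       # speed of light, m/s

t_collapse = a0**3 / (4 * r0**2 * c)
print(t_collapse)  # on the order of 1e-11 seconds
```

So classically the atom should vanish in tens of picoseconds, in flat contradiction with its observed stability.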
+1 I also think using the history of physics is an excellent strategy, and it has the added value of learning the history of physics! The conundrum of the electron not collapsing into the nucleus is a wonderful example, I also suggested the UV catastrophe, which doesn't appeal to any experimental results. – Joe Feb 2 '13 at 9:22
Though there are many good answers here, I believe I can still contribute something which answers a small part of your question.
There is one reason to look for a theory beyond classical physics which is purely theoretical and this is the UV catastrophe. According to the classical theory of light, an ideal black body at thermal equilibrium will emit radiation with infinite power. This is a fundamental theoretical problem, and there is no need to appeal to any experimental results to understand it, a theory which predicts infinite emitted power is wrong.
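The contrast can be made quantitative with a short numerical sketch (the units with $h = k_B T = 1$ are an assumption made here for simplicity): the Rayleigh-Jeans density $\nu^2$ integrates to something that grows without bound as the frequency cutoff is raised, while Planck's $\nu^3/(e^{\nu}-1)$ integrates to the finite value $\pi^4/15$:

```python
import numpy as np

nu = np.linspace(1e-3, 60.0, 200_000)  # frequency grid up to a cutoff
dnu = nu[1] - nu[0]

rayleigh_jeans = nu**2           # classical spectral density: diverges
planck = nu**3 / np.expm1(nu)    # quantized version: integrable

total_rj = rayleigh_jeans.sum() * dnu  # grows like cutoff^3
total_planck = planck.sum() * dnu      # converges to pi^4/15 ~ 6.49

print(total_rj, total_planck)
```

Raising the cutoff from 60 to 600 multiplies the first number by roughly a thousand while leaving the second unchanged, which is the UV catastrophe in miniature.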
The quantization of light solves the problem, and historically this played a role in the development of quantum mechanics.
Of course this doesn't point to any of the modern postulates of quantum mechanics you're looking to justify, but I think it's still good to use the UV catastrophe as one of the motivations to look for a theory beyond classical physics in the first place, especially if you want to appeal as little as necessary to experimental results.
It is a shame that statistical mechanics is not more widely taught. But, hey, we live in an age when Physics depts don't even teach Optics at the undergrad level anymore... Now the O.P. postulated a context where the students understood advanced Mechanics. So I fear the UV catastrophe, although historically and conceptually most important, will not ring a bell with that audience. – joseph f. johnson Feb 11 '13 at 10:01
All the key parts of quantum mechanics may be found in classical physics.
1) In statistical mechanics the system is also described by a distribution function. No definite coordinates, no definite momenta.
2) Hamilton made his formalism for classical mechanics. His ideas were pretty much in line with ideas which were put into modern quantum mechanics long before any experiments: he tried to make physics as geometrical as possible.
3) From Lie algebras people knew that the translation operator has something to do with the derivative. From momentum conservation people knew that translations have something to do with momentum. It was not that strange to associate momentum with the derivative.
Now you should just mix everything: merge statistical mechanics with the Hamiltonian formalism and add the key ingredient which was obvious to radio-physicists: that you cannot have a short (i.e., localized) signal with a narrow spectrum.
Voila, you have quantum mechanics.
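The radio-physics ingredient above, that a signal cannot be both short and narrow-band, is a pure Fourier fact and can be checked numerically; the sketch below (a Gaussian pulse, which happens to saturate the bound) computes the RMS widths in time and angular frequency:

```python
import numpy as np

t = np.linspace(-60.0, 60.0, 8192)
dt = t[1] - t[0]
f = np.exp(-t**2 / (2 * 2.0**2))  # Gaussian pulse with sigma = 2

# RMS width of |f|^2 in time
w = np.abs(f)**2
w /= w.sum()
width_t = np.sqrt((w * t**2).sum())

# RMS width of the power spectrum in angular frequency
spec = np.abs(np.fft.fft(f))**2
omega = 2*np.pi*np.fft.fftfreq(len(t), d=dt)
spec /= spec.sum()
width_w = np.sqrt((spec * omega**2).sum())

print(width_t * width_w)  # ~0.5: squeezing one width inflates the other
```

Changing sigma narrows one factor and widens the other, but the product stays pinned at the lower bound of 1/2.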
In principle, for your purposes, Feynman's approach to quantum mechanics may be more "clear". It was found long after the other two approaches, and is much less productive for the simple problems people usually consider while studying. That's why it is not that popular for starters. However, it might be simpler from the philosophical point of view. And we all know that it is equivalent to the other approaches.
As an initial aside, there is nothing uniquely ‘quantum’ about non commuting operators or formulating mechanics in a Hilbert space as demonstrated by Koopman–von Neumann mechanics, and there is nothing uniquely ‘classical’ about a phase space coordinate representation of mechanics as shown by Groenewold and Moyal’s formulation of Quantum theory.
There does of course however exist a fundamental difference between quantum and classical theories. There are many ways of attempting to distil this difference, whether it is seen as non-locality, uncertainty or the measurement problem, the best way of isolating what distinguishes them that I have heard is this:
Quantum mechanics is about how probability phase and probability amplitude interact. This is what is fundamentally lacking in Hilbert space formulations of classical mechanics, where the phase and amplitude evolution equations are fully decoupled. It is this phase-amplitude interaction that gives us the wave-particle behaviour, electron diffraction in the two slits experiment, and hence an easy motivation for (and probably the most common entry route into) quantum mechanics. This phase-amplitude interaction is also fundamental to understanding canonically conjugate variables and the uncertainty problem.
I think that if this approach were to be taken, the necessity of a different physical theory can be most easily initially justified by single-particle interference, which then leads on to the previously mentioned points.
So far as I understand, you are asking for a minimalist approach to quantum mechanics which would motivate its study with little reference to experiments.
The bad. So far as I know, there is not a single experiment or theoretical concept that can motivate your students about the need to introduce Dirac kets $|\Psi\rangle$, operators, Hilbert spaces, the Schrödinger equation... all at once. There are two reasons for this and both are related. First, the ordinary wavefunction or Dirac formulation of quantum mechanics is too different from classical mechanics. Second, the ordinary formulation was developed in pieces by many different authors who tried to explain the results of different experiments --many authors won a Nobel prize for the development of quantum mechanics--. This explains why "for a couple of years now", the only answers you have come up with are "incomplete, not entirely satisfactory".
The good. I believe that one can mostly satisfy your requirements by using the modern Wigner & Moyal formulation of quantum mechanics, because this formulation avoids kets, operators, Hilbert spaces, the Schrödinger equation... In this modern formulation, the relation between the classical (left) and the quantum (right) mechanics axioms are
$$A(p,x) \rho(p,x) = A \rho(p,x) ~~\Longleftrightarrow~~ A(p,x) \star \rho^\mathrm{W}(p,x) = A \rho^\mathrm{W}(p,x)$$
$$\frac{\partial \rho}{\partial t} = \{H, \rho\} ~~\Longleftrightarrow~~ \frac{\partial \rho^\mathrm{W}}{\partial t} = \{H, \rho^\mathrm{W}\}_\mathrm{MB}$$
$$\langle A \rangle = \int \mathrm{d}p \mathrm{d}x A(p,x) \rho(p,x) ~~\Longleftrightarrow~~ \langle A \rangle = \int \mathrm{d}p \mathrm{d}x A(p,x) \rho^\mathrm{W}(p,x)$$
where $\star$ is the Moyal star product, $\rho^\mathrm{W}$ the Wigner distribution and $\{ , \}_\mathrm{MB}$ the Moyal bracket. The functions $A(p,x)$ are the same as in classical mechanics. An example of the first quantum equation is $H \star \rho_E^\mathrm{W} = E \rho_E^\mathrm{W}$ which gives the energy eigenvalues.
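The non-commutativity that $\star$ encodes can be checked symbolically; the following sympy sketch truncates the Moyal star product at first order in $\hbar$ (which is already exact for the linear observables used here) and recovers the canonical commutation relation:

```python
import sympy as sp

x, p, hbar = sp.symbols('x p hbar')

def star(f, g):
    # Moyal star product to first order in hbar (exact for linear f, g)
    return sp.expand(f*g + (sp.I*hbar/2) * (sp.diff(f, x)*sp.diff(g, p)
                                            - sp.diff(f, p)*sp.diff(g, x)))

print(star(x, p) - star(p, x))  # I*hbar: the phase space is non-commutative
```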
Now the second part of your question. What is the minimal motivation for the introduction of the quantum expressions at the right? I think that it could be as follows. There are a number of experiments that suggest an uncertainty relation $\Delta p \Delta x \geq \hbar/2$, which cannot be explained by classical mechanics. This experimental fact can be used as motivation for the substitution of the commutative phase space of classical mechanics by a non-commutative phase space. Mathematical analysis of the non-commutative geometry reveals that ordinary products in phase space have to be substituted by star products, that the classical phase space state has to be substituted by one, $\rho^\mathrm{W}$, which is spread over phase space regions whose size is set by Planck's constant, and that Poisson brackets have to be substituted by Moyal brackets.
Although this minimalist approach cannot be obtained by using the ordinary wavefunction or Dirac formalism, there are three disadvantages with the Wigner & Moyal approach however. (i) The mathematical analysis is very far from trivial. The first quantum equation of above is easily derived by substituting the ordinary product by a star product and $\rho \rightarrow \rho^\mathrm{W}$ in the classical expression. The third quantum equation can be also obtained in this way, because it can be shown that
$$ \int \mathrm{d}p \mathrm{d}x A(p,x) \star \rho^\mathrm{W}(p,x) = \int \mathrm{d}p \mathrm{d}x A(p,x) \rho^\mathrm{W}(p,x)$$
A priori one could believe that the second quantum equation is obtained in the same way. This does not work and gives an incorrect equation. The correct quantum equation of motion requires the substitution of the whole Poisson bracket by a Moyal bracket. Of course, the Moyal bracket accounts for the non-commutativity of the phase space, but there is no justification for its presence in the equation of motion from non-commutativity alone. In fact, this quantum equation of motion was originally obtained from the Liouville-von Neumann equation via the formal correspondence between the phase space and the Hilbert space, and any modern presentation of the Wigner & Moyal formulation that I know justifies the form of the quantum equation of motion via this formal correspondence. (ii) The theory is backward incompatible with classical mechanics, because the commutative geometry is entirely replaced by a non-commutative one. As a consequence, no $\rho^\mathrm{W}$ can represent a pure classical state --a point in phase space--. Notice that this incompatibility is also present in the ordinary formulations of quantum mechanics --for instance no wavefunction can describe a pure classical state completely--. (iii) The introduction of spin in the Wigner & Moyal formalism is somewhat artificial and still under active development.
The best? The above three disadvantages can be eliminated in a new phase space formalism which provides a 'minimalistic' approach to quantum mechanics by an improvement over geometrical quantisation. This is my own work and details and links will be disclosed in the comments or in a separated answer only if they are required by the community.
+1 for the detailed and interesting answer. But the statement that motivations of QM from a small number of physical principles 'do not work' is quite pessimistic. Much of the mathematical formulation (e.g. the Lorentz transformations) underpinning Einstein's work was already in place when he discovered relativity, precisely because people needed some equations that explained experiments. This situation may be analogous to the current state of affairs with QM, irrespective of quantisation scheme. Then Einstein came along and explained what it all means. Who's to say that won't happen again? – Mark Mitchison Dec 8 '12 at 13:28
@MarkMitchison: Thank you! I eliminated the remarks about Einstein and Weinberg (I did mean something close to what you and Emilio Pisanty wrote above) but my own explanation was a complete mess. I agree with you on that what Einstein did could happen again! Precisely I wrote in a paper dealing with foundations of QM: "From a conceptual point of view, the elimination of the wavefunctions from quantum theory is in line with the procedure inaugurated by Einstein with the elimination of the ether in the theory of electromagnetism." – juanrga Dec 8 '12 at 17:51
-1 I suppose the O.P. would like something that has some physical content or intuition linked to it, and that is lacking in what you suggest, so I do not think that pedagogically it would be very useful for this particular purpose. This is not meant as a criticism of its worth as a contribution to scholarship. – joseph f. johnson Feb 11 '13 at 9:34
@josephf.johnson Disagree. The phase space approach has more physical content and is more intuitive than the old wavefunction approach. – juanrga Feb 14 '13 at 19:32
I think you have misunderstood the whole point of what the O.P. was asking for, although your contribution might have been valuable as an answer to a different question. What your answer lacks is any compelling immanent critique of Classical Mechanics, an explanation of why it cannot possibly be true. And it wasn't experiments that suggested the Heisenberg uncertainty relations since the experiments then weren't good enough to get anywhere near the theoretical limits. Only recently have such fine measurements been attained. – joseph f. johnson Feb 14 '13 at 19:40
I always like to read "BERTLMANN'S SOCKS AND THE NATURE OF REALITY" * by J. Bell to remind myself when and why a classical description must fail.
He basically refers to the EPR-correlations. You could motivate his reasoning by comparing common set theory (e.g. try three different sets: A,B,C and try to merge them somehow) with the same concept of "sets" in Hilbert spaces and you will see that they are not equal (Bell's theorem).
It seems to me your question is essentially asking for a Platonic mathematical model of physics, underlying principles from which the quantum formalism could be justified and in effect derived. If so, that puts you in the minority (but growing) realist physicist camp as opposed to the vast majority of traditional instrumentalists.
The snag is that the best, if not only, chance of developing a model like that requires either God-like knowledge or at least a correct guess, made with almost superhuman intuition, at the underlying phenomena; and obviously nobody has yet achieved either sufficiently to unify all of physics under a single rubric along those lines.
In other words, ironically, to get at the most abstract explanation requires the most practical approach, rather as seeing at the smallest scales needs the largest microscope, such as the LHC, or Sherlock Holmes can arrive at the most unexpected conclusion only with sufficient data (Facts, Watson, I need more facts!)
So, despite being a fellow realist, I do see that instrumentalism (being content to model effects without seeking root causes, what might be compared with "black box testing") has been and remains indispensable.
-1 This is most unfair to the O.P. Why use labels like «Platonist»? The O.P. only asks for two things: an obvious and fundamental problem with Classical Mechanics, & a motivation for trying QM as the most obvious alternative. Asking for motivation is not asking for Platonist derivations nor is asking why should we give QM a chance asking for a derivation. The only physics in your answer is the Fourier transform version of the uncertainty principle, when you remark about fine resolution needing a large microscope. But the OP asks you to motivate that principle, and you merely assert it. – joseph f. johnson Feb 11 '13 at 10:14
I highly recommend this introductory lecture on Quantum Mechanics:
Thomas's Calculus has an instructive Newtonian Mechanics exercise which everyone ought to ponder: the gravitational field strength inside the Earth is proportional to the distance from the centre, and so is zero at the centre. And, of course, there is the rigorous proof that if the matter is uniformly distributed in a sphere, then outside the sphere it exerts a gravitational force identical to what would have been exerted if all the mass had been concentrated at the centre.
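The two facts quoted from that exercise can be checked with a minimal numerical sketch (mine, not part of the exercise), in units normalized so that G = M = R = 1:

```python
# Shell-theorem field of a uniform-density sphere, in units with
# G = M_total = R = 1: inside, only the mass interior to radius r pulls
# (so g is proportional to r and vanishes at the centre); outside, the
# sphere acts as a point mass concentrated at the centre.
def gravity(r, R=1.0):
    if r < R:
        return r / R**3          # enclosed mass (r/R)^3 divided by r^2
    return 1.0 / r**2            # point-mass field outside

assert gravity(0.0) == 0.0                   # zero at the centre
assert abs(gravity(0.5) - 0.5) < 1e-12       # proportional to r inside
assert abs(gravity(2.0) - 0.25) < 1e-12      # inverse-square outside
```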
Now if one ponders this from a physical point of view, «what is matter», one ends up with logical and physical difficulties that were only answered by de Broglie and Schroedinger's theory of matter waves.
This also grows out of pondering Dirac's wise remark: if «big» and «small» are merely relative terms, there is no use in explaining the big in terms of the small... there must be an absolute meaning to size.
Is matter a powder or fluid that is evenly and continuously distributed and can take on any density (short of infinity)? Then that sphere of uniformly distributed matter must shrink to a point of infinite density in a finite amount of time.... Why should matter be rigid and incompressible? Really, this is inexplicable without the wave theory of matter. Schroedinger's equation shows that if, for some reason, a matter wave starts to compress, then it experiences a restoring force to oppose the compression, so that it can not proceed past a certain point (without pouring more energy into it).
See the related question. Only this can explain why the concept of «particle» can have some validity and not need something smaller still to explain it.
A few millennia ago, an interesting article was published in The Annals of Mathematics about Newtonian particle mechanics: a system of seven particles and specific initial conditions were discovered whereby one of the particles is whipped up to infinite velocity in finite time without any collisions. But this is not quite decisive enough for your purposes. And Earnshaw's theorem is a little too advanced, although it is often mentioned in your context (e.g., by Feynman and my own college teacher, Prof. Davidon). – joseph f. johnson Feb 11 '13 at 9:56
This is a late but relevant comment on the teaching problem you have (not an answer - I tried commenting, but it was getting too big).
Something you might mention in your class is modern control systems theory as taught to engineering students. I came to QM after I had studied control systems and practiced it in my job for a number of years, and there is a natural feel to QM after this. Now I wonder whether QM might not have influenced the formulation of control systems theory. Basically one has a state space - the linear space of the minimum data one needs to uniquely define the system's future - a Schrödinger-like evolution equation, and observables that operate on the state and thus gather data for the feedback controller. The interpretation of the observables is radically different from how it's done in QM, however. But "evolving state + measurements" is the summary, and even so, uncertainties in the observables lead to whole nontrivial fields of stochastic control systems and robust control systems (those that work even notwithstanding uncertainties in the mathematical models used).

The engineering viewpoint is also very experimental: you seek to model your system accurately, but you very deliberately don't give a fig how that model arises unless the physics can help you tune it. Often the problems are so drenched with uncertainty that it's just no help at all to probe the physics deeply; indeed, control systems theory is about dealing with uncertainty, reacting to it, and steering your system on a safe course even though uncertain, uncontrollable outside forces buffet it endlessly. There are even shades of the uncertainty principle here: if your state model is uncertain and being estimated (e.g. by a Kalman filter), what your controller does will disturb the system you are trying to measure. Although of course this is the observer effect and not the Heisenberg principle, one does indeed find oneself trying to minimise the product of two uncertainties, wrestling with the tradeoff between the need to act and the need to measure.
This story won't fully motivate the subject in the way you want but it would still be interesting to show that there are a whole group of engineers and mathematicians who think this way and indeed find it very natural and unmysterious even when they first learn it. I think a crucial point here is that no-one frightens students of control theory before they begin with talk of catastrophic failure of theory, the need to wholly reinvent a field of knowledge and intellectual struggles that floored the world's best minds for decades. Of course in physics you have to teach why people went this way, but it's also important to stress that these same great minds who were floored by the subject have smoothed the way for us, so that now we stand on their shoulders and really can see better even though we may be far from their intellectual equals.
Classical mechanics is, on the one hand, not a final theory, and on the other, not further decomposable. So you cannot improve it; it is given as is.
For example, you cannot explain why a moving body, disappearing from one point of its trajectory, must reappear at an infinitesimally close point rather than a meter ahead (teleporting). What constrains the trajectory points to a continuous line? There is no answer; it is an axiom. You cannot build a MECHANISM for that constraint.
Another example: you cannot stop decomposing bodies into parts. You can never reach final elements (particles), and if you did, you could not explain why those particles are indivisible. Matter should be continuous in classical mechanics, yet you cannot imagine how material points exist.
Also, you cannot explain how the entire infinite universe can exist simultaneously in its whole information content. What is happening in an absolutely closed box, or in absolutely unreachable regions of spacetime? Classical mechanics leads us to think that reality is real there too. But how can that be, if it is absolutely undetectable? The scientific approach says that only what is measurable exists. So how can there be reality in an absolutely closed box (with a cat in it)?
In classical mechanics you cannot have absolute identity of building blocks. For example, if all atoms are built of protons, neutrons, and electrons, these particles are similar but not the same. Two electrons in two different atoms are not the same in classical mechanics; they are two copies of one prototype, not the prototype itself. So you cannot define truly basic building blocks of reality classically.
You cannot define indeterminism in classical mechanics, nor unrealized possibilities: you cannot say what happened to a possibility that was possible but not realized.
You cannot define nonlocality in classical mechanics. There are only two possibilities classically: one event affects another (cause and effect), or the two events are independent. Two events that correlate without affecting each other are unimaginable classically, yet they are possible!
Sunday, November 08, 2009
Google can render your equations for you!
In my last post I mentioned that Knol and Google Docs now have equation editing. What I didn't mention is that this is an undocumented feature of the public Google Chart API, and it's easy to use. For instance, if I wanted to include the Schrödinger equation on this blog, I would construct a URL like this:,s,FFFFFF00&chco=AACCFF&chl=i\hbar\frac{\partial}{\partial t} \Psi(\mathbf{r},\,t) = \hat H \Psi(\mathbf{r},t)
Put your code in the chl parameter. The chf parameter lets you specify a background color in RGBA, and chco lets you set the foreground color in RGB. When you drop it inside an image tag you get this:
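A minimal sketch of how the query string for such a request could be assembled programmatically, using only the parameters the post names (chf, chco, chl); the endpoint itself is truncated above, so only the query string is built here:

```python
from urllib.parse import urlencode, quote

# Assemble the query string for an equation-chart request from the three
# parameters described in the post: chf (background color, RGBA),
# chco (foreground color, RGB), and chl (the LaTeX source).
# quote_via=quote percent-encodes commas, braces and backslashes so the
# LaTeX survives inside a URL.
def equation_query(latex, bg="FFFFFF00", fg="AACCFF"):
    return urlencode(
        {"chf": "bg,s," + bg, "chco": fg, "chl": latex},
        quote_via=quote,
    )

q = equation_query(r"i\hbar\frac{\partial}{\partial t}\Psi = \hat H \Psi")
assert "chf=bg%2Cs%2CFFFFFF00" in q
assert "chco=AACCFF" in q
assert "%5Chbar" in q   # "\hbar" with the backslash percent-encoded
```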
If you anticipate making 250,000 calls to the chart server a day, contact Google first. There's no limit to how much you can use it, but they reserve the right to turn you off.
Future of solitons
The move to higher transmission speeds has opened the door to solitons. Here's an overview of the technology.
Solitons are light pulses that can propagate in a non-linear dispersive medium with no broadening over very long distances. As a result, they have attracted great interest in the field of communications.
Solitons work because two effects that occur in optical fibers can offset each other when properly managed. One is chromatic dispersion, which causes pulses containing a spectrum of wavelengths to spread out as they travel along an optical fiber. The other is self-phase modulation (SPM), which spreads the pulse spectrum over a broader range of wavelengths. Depending on the mutual relationship between dispersion and SPM, the two can balance each other out, so that once the pulse comes to equilibrium in the fiber, it retains its shape; alternatively, dispersion and SPM can make the pulse compress, or broaden even more severely (see Lightwave, "Handling Special Effects: Nonlinearity, Chromatic Dispersion, and Soliton Waves," July 2000, page 84).
However, attenuation weakens the pulse and consequently fights against maintaining the shape of the pulse along the fiber span. Optical amplifiers have been developed to balance attenuation and maintain pulse shapes. However, optical amplifiers, as is the case with any radiation source, add noise, namely amplified spontaneous emission (ASE), which makes the pulse jitter in the time domain. The fundamental soliton shape (such as sech) has a long tail; if a pulse jitters, its long tail can make it overlap the adjacent pulse, destroying both. This effect is called the Gordon-Haus effect. Erbium-doped fiber amplifiers (EDFAs) typically have large ASE compared to other amplifiers such as Raman amplifiers. So stimulated Raman scattering from the fiber itself can be a good amplifier candidate for restoring soliton pulse strength.
Figure 1. The fundamental soliton is one of the solutions of the non-linear Schrödinger equation.
The self-restoring nature of solitons makes them attractive for high-speed, long-haul transmissions. With transmission speeds now approaching 40 Gbits/sec over increasingly longer distances, the stress on systems is rapidly approaching a critical threshold. Solitons could provide interesting future solutions.
Soliton systems have demonstrated very high transmission speeds in laboratory settings. There was even a time at the very beginning of singlemode fiber-optic transmission when system designers considered the use of solitons for long-haul transmissions. The solitons would be created along the fiber from a pulse train based on return-to-zero (RZ) bit code. But the development of the EDFA, with its high noise, modified this strategy. Network system designers quickly adopted the nonreturn-to-zero bit code as much easier to handle-and that was the end of the soliton business.
Now that we are considering 40 Gbits/sec, interest in RZ is back again. And with the use of the higher power required to provide a higher optical signal-to-noise ratio (OSNR) at that bit rate, nonlinear effects such as SPM and stimulated Raman scattering are being generated. They can consequently be used smartly, together with dispersion management, instead of being avoided. Thus, the potential for new amplifiers based on the Raman effect is attracting interest. And Raman amplifiers have lower noise than EDFAs.
The door is wide open again for soliton transmissions. We are back at the drawing board, looking at fundamental theory to delineate the challenges and stimulate further research. To accomplish that, we must understand solitons as precisely as possible.
Solitons are well represented by the elegant solution of the nonlinear Schrödinger equation of propagation of an electromagnetic wave in a nonlinear dispersive medium, where the index of refraction of the medium has the form n = n(I, λ). The Schrödinger equation can be represented as follows:

∂ψ/∂z + i(β2/2) ∂²ψ/∂t² − (β3/6) ∂³ψ/∂t³ − i(β4/24) ∂⁴ψ/∂t⁴ = iγ|ψ|²ψ
where ψ(z,t) is the pulse signal envelope propagating over the distance z in the time variable t, in a frame of reference traveling at the group velocity of the carrier wave. We need to consider the concept of group velocity because we are dealing with a dispersive medium. β2, β3, and β4 are the second, third, and fourth order group velocity dispersion (GVD) parameters, or, in other words, the second, third, and fourth derivatives of the wavenumber β (β = nk, with k = ω/c) as a function of the optical frequency ω or wavelength. In principle, there is no limit to the expansion of β to higher orders, except that the effect of these orders quickly becomes negligible.
The Schrödinger equation has the great advantage of producing analytical solutions. One of them is:

ψ(0,t) = N sech(t/T0), which for N = 1 propagates unchanged in shape as ψ(z,t) = sech(t/T0) exp(iz/(2LD))
That solution is called a soliton, and N corresponds to the soliton order. N is defined from the following equation:

N² = γ P0 T0² / |β2|
where T0 is the pulse width, γ the nonlinear coefficient, and P0 the pulse power. So we can have soliton transmission when certain conditions of nonlinearity, power, pulse width, and dispersion are met.
When N=1, the soliton is called the fundamental soliton. Higher-order solitons are possible.
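Taking the soliton order to be defined by N² = γ P0 T0² / |β2| (the standard textbook relation, with γ the nonlinear coefficient), the condition N = 1 fixes the peak power once the fiber is chosen. A quick numerical sketch, using illustrative parameter values typical of standard single-mode fiber at 1,550 nm (not taken from the article):

```python
import math

def soliton_order(gamma, P0, T0, beta2):
    """Soliton order N from N^2 = gamma * P0 * T0^2 / |beta2|.

    gamma : nonlinear coefficient in 1/(W km)
    P0    : peak power in W
    T0    : pulse width in ps
    beta2 : GVD parameter in ps^2/km
    """
    return math.sqrt(gamma * P0 * T0**2 / abs(beta2))

def fundamental_power(gamma, T0, beta2):
    """Peak power (W) required for the fundamental soliton (N = 1)."""
    return abs(beta2) / (gamma * T0**2)

# Illustrative standard-fiber values at 1,550 nm (assumed, for demonstration)
gamma = 1.3     # 1/(W km)
beta2 = -21.7   # ps^2/km (anomalous dispersion)
T0 = 10.0       # ps

P1 = fundamental_power(gamma, T0, beta2)
print(f"N=1 peak power: {P1 * 1e3:.0f} mW")  # about 167 mW
print(f"N at that power: {soliton_order(gamma, P1, T0, beta2):.2f}")
```

With these sample numbers the fundamental soliton needs a peak power of roughly 170 mW; shorter pulses (smaller T0) push the required power up quadratically.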
Another solution, sech²(x), has a narrower spectrum than the sech pulse. Figure 1 illustrates this sech pulse shape.
Figure 2. Equilibrium between non-linear effects and dispersion.
The pulse has a typically narrow width but a long tail that can be disadvantageous if the pulse suffers jitter (time domain wobbling or oscillation) and overlaps an adjacent pulse in the time domain. Solitons tend to destroy one another when they overlap.
Another factor to consider is that solitons are an equilibrium product of chromatic dispersion and nonlinear effects. The chirp induced by nonlinear effects cancels the one induced by chromatic dispersion (see Figure 2). This cancellation depends on the relationship between the variation of the pulse phase in the time domain (this is the definition of a chirp) induced by the dispersion and the one induced by SPM, a nonlinear effect present when the power input to the fiber link is sufficiently high, in a sufficiently small-core fiber, over a sufficiently long length of fiber. The product of β2 (the GVD parameter) and C (the SPM-induced chirp) is the key issue here (see Figure 3).
Figure 3. Pulse-broadening factor for different relations between dispersion and self-phase modulation (β2C). (Source: G.P. Agrawal, Fiber-Optic Communication Systems)
As figure 3 illustrates, when the distance of propagation in the fiber is equal to the dispersion length (LD, the distance at which the pulse starts to severely broaden because of the dispersion), the chirp due to SPM cancels the temporal phase variation due to dispersion, and the pulse does not broaden. This case is illustrated in Figure 2.
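This equilibrium between dispersion and SPM can be demonstrated with a small numerical experiment. The sketch below uses the usual soliton normalization of the nonlinear Schrödinger equation (distance in units of LD, time in units of T0, so the equation reads i u_z + (1/2) u_tt + |u|² u = 0) and propagates a sech pulse with the standard split-step Fourier method. With the nonlinear term on, the pulse keeps its shape; with it off, pure dispersion broadens the pulse and lowers its peak. Grid sizes and step counts are arbitrary choices for illustration:

```python
import numpy as np

def propagate(u0, dt, z_total, dz, nonlinear=True):
    """Split-step Fourier integration of i u_z + 0.5 u_tt + |u|^2 u = 0."""
    u = np.asarray(u0, dtype=complex).copy()
    w = 2 * np.pi * np.fft.fftfreq(u.size, d=dt)   # angular frequency grid
    lin = np.exp(-0.5j * w**2 * dz)                # dispersion step (Fourier domain)
    for _ in range(int(round(z_total / dz))):
        if nonlinear:
            u *= np.exp(1j * np.abs(u)**2 * dz)    # SPM (nonlinear) step
        u = np.fft.ifft(np.fft.fft(u) * lin)       # GVD (linear) step
    return u

n = 1024
t = np.linspace(-20, 20, n, endpoint=False)        # time in units of T0
dt = t[1] - t[0]
u0 = 1 / np.cosh(t)                                # fundamental soliton, N = 1

soliton = propagate(u0, dt, z_total=5.0, dz=0.005, nonlinear=True)
dispersed = propagate(u0, dt, z_total=5.0, dz=0.005, nonlinear=False)

print("peak with SPM on :", np.abs(soliton).max())    # stays near 1.0
print("peak with SPM off:", np.abs(dispersed).max())  # drops: pulse has broadened
```

After five dispersion lengths the soliton's peak amplitude is essentially unchanged, while the dispersion-only pulse has broadened and its peak has collapsed, which is exactly the balance Figure 2 depicts.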
Now consider dispersion-unshifted fiber in the 1,550-nm wavelength range, where the dispersion is positive (for instance, D = 17 psec/(nm-km) at 1,550 nm). There the GVD parameter β2 is negative, and the dispersion is said to be anomalous. In that case, as the chirp C is always positive, the product Cβ2 is always negative, and the pulse compresses if the distance of propagation is smaller than the dispersion length LD. This case is illustrated in Figure 4 by Cβ2 < 0 with z/LD < 1.
Figure 4. The Gordon-Haus effect is also called the timing jitter. To maintain power, there is a need not only to amplify, but also to take into account accumulated dispersion and soliton-soliton interference.
When the distance of propagation is longer than LD, the pulse starts to broaden again. Consequently, by careful management of the dispersion along the fiber span and of the nonlinear effects (for instance, the chirp from SPM), the soliton can be maintained in the time domain without having its tail or its shape unacceptably changed. Dispersion and cross-phase modulation would play the same role in a DWDM system.
Figure 5. In an anomalous dispersion regime, pulse distortion and timing jitter are reduced and peak power is enhanced. That constitutes a form of dispersion management.
This phenomenon of dispersion and SPM management can also be found with other transmission codes, such as RZ code, where the bit pulse is short enough. It is also understood that a soliton can generate itself from an RZ pulse, meaning that through this careful management of dispersion and nonlinear effects, the RZ pulse can transform itself into a soliton within the range z < LD.
The development of successful soliton technologies will depend on the following factors:
• The distances in the long-haul networks will continue to increase, especially considering the demand for longer distances between repeaters (ultra-long-haul network).
• The bit rate will continue to increase to levels of 40 Gbits/sec, with shorter pulses, higher OSNR requirements, and consequently higher transmitter output power, bringing concern for nonlinear effects and dispersion.
• New optical amplifiers will be considered with lower noise, higher output power, and longer spans.
• Technical development will bring serious consideration for all-optical networks.
The consequences associated with the technical requirements for successful soliton transmissions will be:
• The soliton pulse must meet certain conditions, such as the special sech shape (this shape has a long tail), and the pulse width and peak power must satisfy N = 1 for a fundamental soliton, i.e., γ P0 T0² / |β2| = 1.
• During soliton-soliton interaction, there must be sufficient separation between neighboring solitons, measured by the ratio TB/T0. In other words, solitons must be isolated from one another: TB/T0 must be large.
The attraction or separation between neighboring solitons depends on several factors such as relative phase, amplitude, and the soliton separation itself.
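The sensitivity to soliton spacing can be made quantitative. A commonly quoted approximation (found, for example, in Agrawal's treatment of soliton interaction) says that two in-phase, equal-amplitude solitons with normalized half-separation q0 = TB/(2T0) collide after a distance of roughly z_p = (π/2) exp(q0) LD. The sketch below evaluates this approximation; the numeric values are illustrative, not from the article:

```python
import math

def interaction_distance(q0, L_D):
    """Approximate collision distance (same units as L_D) of two
    in-phase, equal-amplitude solitons.

    q0  : half the soliton separation in units of T0 (q0 = TB / (2 * T0))
    L_D : dispersion length
    """
    return 0.5 * math.pi * math.exp(q0) * L_D

L_D = 100.0  # km, illustrative
for q0 in (4, 6, 8):
    print(f"q0 = {q0}: collision after ~{interaction_distance(q0, L_D):.0f} km")
```

The exponential factor is the whole story: increasing the normalized separation q0 by 2 multiplies the safe transmission distance by about e² ≈ 7.4, which is why even modest pulse-width margins buy enormous system length.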
• To maintain the power in a lossy fiber, solitons need to be amplified, taking into account the accumulated dispersion and ASE.
Solitons are destroyed if LA > LD, where LA is the amplifier spacing and LD = T0²/|β2| is the dispersion length. If LD > 200 km for |β2| > 1 psec²/km, then LA = 30-50 km is acceptable. Raman amplification can also be considered as a possible solution to extend LA.
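The constraint above is easy to evaluate directly. With T0 in ps and β2 in ps²/km, LD = T0²/|β2| comes out in km; the helper below flags whether a given amplifier spacing respects LA < LD. The sample numbers are illustrative, chosen to match the ranges quoted in the text:

```python
def dispersion_length(T0_ps, beta2_ps2_per_km):
    """Dispersion length L_D = T0^2 / |beta2|, returned in km."""
    return T0_ps**2 / abs(beta2_ps2_per_km)

def soliton_safe(L_A_km, T0_ps, beta2_ps2_per_km):
    """True if the amplifier spacing stays below the dispersion length."""
    return L_A_km < dispersion_length(T0_ps, beta2_ps2_per_km)

L_D = dispersion_length(T0_ps=15, beta2_ps2_per_km=1.0)
print(f"L_D = {L_D:.0f} km")                          # 225 km, above the 200-km target
print("LA = 40 km OK? ", soliton_safe(40, 15, 1.0))   # True
print("LA = 300 km OK?", soliton_safe(300, 15, 1.0))  # False
```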
We need to develop a technology that allows efficient amplification and simultaneously takes into account not only the accumulated dispersion, but also the amplifier ASE so as to avoid the Gordon-Haus effect (timing jitter-see Figure 4):
• A solution to soliton destruction is an axial variation of dispersion in the dispersion-compensated fiber (DCF): |β2(z)| = |β2(0)| exp(−αz). DCF generates a frequency chirp through SPM; DCF is not to be applied after soliton amplification, but just before the amplifier.
• Dispersion management should envision increasing fiber segments as beneficial, as this permits longer amplifier spacing. Dispersion management involves an anomalous regime (β2 < 0) that reduces pulse distortion and lowers timing jitter (sometimes by a factor of three). That could induce peak power (or energy) enhancement (see Figure 5).
Several challenges remain in the near term that must be addressed before solitons can be applied effectively. For example, practical application of soliton technologies within the telecommunications industry must develop refined control techniques capable of separating solitons by several times their pulse width. We must also master and regulate slight differences in soliton amplitudes. To harness solitons' exceptional transmission speeds, we must be able to create synchronous amplitudes as well as phase modulation with optical filtering.
As soon as these specific issues are resolved, we may indeed look forward to a new generation of transmissions based on soliton waves.
Dr. André Girard is a senior member of the technical staff at EXFO (Quebec City).
Monday, February 28, 2005
Dark Matter and Living Matter
Dark matter and living matter represent two deep mysteries of the current world view. There exists, however, an amazing possibility that there might be a close connection between these mysteries.
Do Bohr rules apply to astrophysical systems?
D. Da Rocha and Laurent Nottale have proposed that a Schrödinger equation applies with the Planck constant hbar replaced by what might be called a gravitational Planck constant hbar_gr = GmM/v_0 (hbar = c = 1). Here v_0 is a velocity parameter with the value v_0 ≈ 145 km/s, giving v_0/c = 4.6×10^{-4}. This is rather near to the peak orbital velocity of stars in galactic halos. Also subharmonics and harmonics of v_0 seem to appear. The support for the hypothesis coming from empirical data is impressive. It is surprising that findings of this caliber have not received any attention in popular journals while the latest revolutions in M-theory gain all possible publicity: I heard about the article only by accident, from Victor Christianto, to whom I am deeply grateful.
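The Bohr-rule hypothesis is easy to test at the back-of-the-envelope level. With hbar replaced by hbar_gr = GmM/v_0, the hydrogenic formula gives orbital radii r_n = n² GM/v_0², independent of the planetary mass m. The quick check below is my own illustration; the quantum-number assignments n = 3, 4, 5 for Mercury, Venus, Earth are the conventional ones in this literature, and the solar GM and semi-major axes are standard values:

```python
G_M_SUN = 1.327e20   # gravitational parameter GM of the Sun, m^3/s^2
V0 = 1.45e5          # Nottale's velocity parameter v_0, m/s

def bohr_radius(n):
    """Predicted orbital radius r_n = n^2 * GM / v_0^2, in meters."""
    return n**2 * G_M_SUN / V0**2

# (planet, assumed quantum number n, actual semi-major axis in meters)
planets = [("Mercury", 3, 5.79e10),
           ("Venus",   4, 1.082e11),
           ("Earth",   5, 1.496e11)]

for name, n, actual in planets:
    predicted = bohr_radius(n)
    print(f"{name}: predicted {predicted:.2e} m, actual {actual:.2e} m, "
          f"ratio {predicted / actual:.2f}")
```

All three ratios come out within roughly 10% of unity, which is the kind of agreement that makes the empirical case discussed above so striking.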
Is dark matter in an astrophysical quantum state?
Nottale and Da Rocha believe that their Schrödinger equation results from a fractal hydrodynamics. Many-sheeted space-time however suggests that astrophysical systems are not only quantum systems at larger space-time sheets but correspond to a gigantic value of the gravitational Planck constant. The gravitational (ordinary) Schrödinger equation would provide a solution to the black hole collapse (IR catastrophe) problem encountered at the classical level.

The basic objection is that astrophysical systems are extremely classical, whereas TGD predicts macrotemporal quantum coherence on the scale of the lifetime of gravitationally bound states. The resolution of the problem, inspired by the TGD based theory of living matter, is that it is the dark matter at larger space-time sheets which is quantum coherent on the required time scale.

I have already proposed earlier the possibility that the Planck constant is quantized and the spectrum is given in terms of logarithms of Beraha numbers B_n = 4cos^2(pi/n): the lowest Beraha number B_3 = 1 is completely exceptional in that it predicts an infinite value of the Planck constant. The inverse of the gravitational Planck constant could correspond to a gravitational perturbation of this, as 1/hbar_gr = v_0/GMm. The general philosophy would be that when the quantum system becomes non-perturbative, a phase transition increasing the value of hbar occurs to preserve the perturbative character, and in the transition n=4 --> 3 only the small perturbative correction to 1/hbar(3) = 0 remains. This would apply to QCD and to atoms with Z > 137 as well.

TGD predicts correctly the value of the parameter v_0 assuming that cosmic strings and their decay remnants are responsible for the dark matter. The harmonics of v_0 can be understood as corresponding to perturbations replacing cosmic strings with their n-branched coverings, so that the string tension becomes n^2-fold: much like the replacement of a closed orbit with an orbit closing only after n turns.
A 1/n-subharmonic would result when a magnetic flux tube splits into n disjoint magnetic flux tubes.
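The Beraha numbers invoked above are simple to tabulate. The snippet below (my own illustration) just evaluates B_n = 4cos²(π/n) and confirms the two facts used in the text: B_3 = 1, and B_n approaches 4 from below as n grows:

```python
import math

def beraha(n):
    """Beraha number B_n = 4 * cos(pi/n)^2."""
    return 4 * math.cos(math.pi / n)**2

for n in range(3, 11):
    print(f"B_{n} = {beraha(n):.4f}")
print("large-n limit approaches:", round(beraha(10**6), 6))
```

Since log(B_3) = log(1) = 0, a spectrum of the form 1/hbar proportional to log(B_n) indeed makes n = 3 the exceptional case with a formally infinite Planck constant.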
Planetary system as a testing ground
The study of inclinations (tilt angles with respect to the Earth's orbital plane) leads to a concrete model for the quantum evolution of the planetary system. Only a stepwise breaking of the rotational symmetry and angular momentum Bohr rules plus Newton's equation (or geodesic equation) are needed, and the gravitational Schrödinger equation holds true only inside flux quanta for the dark matter.
• During pre-planetary period dark matter formed a quantum coherent state on the (Z^0) magnetic flux quanta (spherical shells or flux tubes). This made the flux quantum effectively a single rigid body with rotational degrees of freedom corresponding to a sphere or circle (full SO(3) or SO(2) symmetry).
• In the case of the spherical shells associated with the inner planets, the SO(3)--> SO(2) symmetry breaking led to the generation of a flux tube with the inclination determined by m and j, and a further symmetry breaking, a kind of astral traffic jam inside the flux tube, generated a planet moving inside the flux tube. The semiclassical interpretation of the angular momentum algebra predicts the inclinations of the inner planets. The predicted (real) inclinations are 6 (7) degrees for Mercury and 2.6 (3.4) degrees for Venus. The predicted (real) inclination of the Earth's spin axis is 24 (23.5) degrees.
• The v_0--> v_0/5 transition necessary to understand the radii of the outer planets can be understood as resulting from the splitting of a (Z^0) magnetic flux tube into five flux tubes representing Earth and the outer planets except Pluto, whose orbital parameters indeed differ dramatically from those of the other planets. The flux tube has the shape of a disk with a hole, glued to the Earth's spherical flux shell.
• A remnant of the dark matter is still in a macroscopic quantum state at the flux quanta. It couples to photons as a quantum coherent state, but the coupling is extremely small due to the gigantic value of hbar_gr scaling alpha by hbar/hbar_gr: hence the darkness. Note however that it is the entire condensate that couples to electromagnetism with this coupling; individual charged particles couple normally.
Living matter and dark matter
The most interesting predictions from the point of view of living matter are the following.
• The dark matter is still there and forms quantum coherent structures of astrophysical size. In particular, the (Z^0) magnetic flux tubes associated with the planetary orbits define this kind of structure. The enormous value of hbar_gr makes the characteristic time scales of these quantum coherent states extremely long and implies macro-temporal quantum coherence on human and even longer time scales.
• The rather amazing coincidences between basic bio-rhythms and the periods associated with the states of orbits in the solar system suggest that the frequencies defined by the energy levels of the gravitational Schrödinger equation might entrain with various biological frequencies, such as the cyclotron frequencies associated with the magnetic flux tubes. For instance, the period associated with the n=1 orbit in the case of the Sun is 24 hours within the experimental accuracy of v_0: the duration of the day on Earth and, in a good approximation, also on Mars! A second example is the mysterious 5 second time scale associated with the Comorosan effect.
Indeed, the basic assumption of TGD inspired quantum biology, namely that the "electromagnetic bodies" associated with living systems are the intentional agents, conforms with the idea that it is dark matter that makes ordinary matter living by acting as a quantum controlling agent. Already now there exists a rather detailed theory about how these electromagnetic (or, more generally, field) bodies use the biological body as a motor instrument and sensory receptor. For instance, the basic mechanisms of metabolism would involve a flow of matter between space-time sheets liberating energy quanta, defining universal metabolic energy currencies which are the same everywhere in the Universe and have nothing to do with the details of living systems. The strange time delays of consciousness observed first by Libet suggest that the size of the field body is at least of the order of the Earth's size, as the frequency scale of EEG also suggests (EEG would be involved with communications between the magnetic body and the biological body). For more details see the chapter "TGD and Astrophysics". For the notion of electromagnetic body see the relevant chapters of the book Genes, Memes, Qualia, and Semitrance.
Unknown said...
Assuming that one has the education of a bachelor or an engineer, what topics should one study to be able to begin to understand TGD in general? The theory seems to utilize such a vast variety of topics!

said...
Well! Difficult to answer. TGD is a unified theory, with the difference that the usual length scale reductionism (meaning in this case reduction to the Planck scale) is replaced by fractality. This explains why applications appear in all scales, from early cosmology to biology and elementary particle physics. I would guess that the tools necessary for understanding the motivations for the phenomenology are the basics of Riemannian geometry and sub-manifold geometry, plus the basic skills needed by a practicing theoretical physicist: differential and partial differential equations and an understanding of the basic ideas of physics as we know it now.
The philosophy behind quantum consciousness theory and biology requires only some general ideas and understanding of problems of the existing theories. This is the least technical challenge.
The ideal communication channel would be some students and the possibility to give a lecture series about TGD. Unfortunately, the academic environment does not allow me to realize this: this is their revenge. This is a big loss for the community itself, but who could tell this to the decision makers.
Thursday, May 10, 2012
A Universe from Nothing
The book A Universe from Nothing: Why There Is Something Rather than Nothing by Lawrence Krauss has stimulated a lot of aggressive debate between Krauss and some philosophers and has of course helped in gaining media attention.
Peter Woit wrote about the debate, not so much about the contents of the book, and regarded the book as boring and dull. He sees this book as an end to the multiverse mania: bad philosophy and bad physics. I tried to get an idea about what Krauss really says but failed: Woit's posting concentrates on the emotional side (the more negative the better;-)), as a blog posting must do to maximize the number of readers.
Peter Woit also wrote a second posting about the same theme. It was about Jim Holt's book Why Does the World Exist?: An Existential Detective Story. Peter Woit found the book brilliant, but again it remained unclear to me what Jim Holt really said!
Sean Carroll has a posting about the book talking more about the contents of the book. This posting was much more informative: not just anecdotes and names but an attempt to analyze what is involved.
In the following I will not consider the question "Why There Is Something Rather than Nothing", since I regard it as a pseudo-question. The very fact that the question is posed implies that something, namely the person who poses the question, exists. One could of course define "nothing" as the vacuum state, as physicists might do, but with this definition the meaning of the question changes completely from what it is for philosophers. Instead, I will consider the notion of existence from the physics point of view and try to show what non-trivial implications the attempt to define this notion more precisely has.
What do we mean with "existence"?
The first challenge is to give meaning to the question "Why There Is Something Rather than Nothing". This process of giving meaning is of course highly subjective, and I will discuss only my own approach. In my opinion the first step is to ask "What is existence?". Is there only a single kind of existence, or does existence come in several flavors? Indeed, several variants of existence seem to be possible: material objects, mathematical structures, theories, conscious experiences, etc. It is difficult to see them as members of the same category of existence.
This question was not asked by Sean Carroll, who equated all kinds of existence with material existence, irrespective of whether they become manifest as a reading on a scale, as mathematical formulas, or via emotional expressions. Carroll did not notice that already this assumption might lead one astray. Carroll did the same as most mainstream physicists would do, and I am afraid that Krauss makes the same error. I dare hope that the philosophers criticizing Krauss have avoided this mistake: at least they made clear what they thought about the depth of philosophical thinking of the physicists of this century.
Why might Carroll have done something very stupid?
1. The first point is that this vision, known as materialism in philosophy, suffers from serious difficulties. The basic implication is that consciousness is reduced to physical existence. Free will is only an illusion, all our intentions are illusions, ethics is an illusion, and moral rules rely on illusion. Everything was dictated in the Big Bang, at least in the statistical sense. Perhaps we should think twice before accepting this view.
2. The second point is that one ends up with heavy difficulties in physics itself: quantum measurement theory is the black sheep of physics, and it is not tactful to talk about quantum measurement theory at the coffee table of physicists. The problem is simply that the non-determinism of state function reduction, necessary for the interpretation of experiments in the Copenhagen interpretation, is in conflict with the determinism of the Schrödinger equation. The basic problem does not disappear in other interpretations. How is it possible that the world is both deterministic and non-deterministic at the same time? There seem to be two causalities: could they relate to two different notions of time? Could the times for the Schrödinger equation and state function reduction be different?
I have just demonstrated that when one speaks about ontology, one sooner or later begins to talk about time. This is unavoidable. As inhabitants of the everyday world we of course know that experienced time is not the same as the geometric time of physicists. But as professional physicists we have been painfully conditioned to identify these two times. Also Carroll as a physics professor makes this identification, without even realizing what he is doing, and starts to speak about time evolution as Hamiltonian unitary evolution without a single word about the problems of quantum measurement theory.
With this background I am ready to state what the permanent readers of this blog could state themselves. In the TGD Universe the notion of existence becomes a much more many-faceted thing than in the usual ultra-naive approach of the materialistic physicist. There are many levels of ontology.
1. The basic division is into "physical"/"objective" existence and conscious existence. Physical states identified as their mathematical representations ("identified" is important!: I will discuss this later) correspond to the "objective" existence. Physical states generalize the solutions of Schrödinger equations: they are not counterparts of time=constant snapshots of time evolutions but counterparts of entire time evolutions. Quantum jumps take place between these, so that state function reduction does not imply a failure of determinism, and one avoids the basic paradox. This however implies that one must assign subjective time to the quantum jumps and geometric time to the counterparts of the evolution of the Schrödinger equation. There are two times.
In this framework the talk about the beginning of the Universe and what was before the Big Bang becomes nonsense. One can speak about boundaries of space-time surfaces, but they have little to do with the beginning and end, which are notions natural in the case of experienced time.
2. One can divide the objective existence into two sub-categories: quantum existence (quantum states as mathematical objects) and classical existence having space-time surfaces as its mathematical representation. Classical determinism fails in its standard form but generalizes, and classical physics ceases to be an approximation and becomes an exact part of quantum theory, as Bohr orbitology implied by General Coordinate Invariance alone. We have ended up with tripartism instead of monistic materialism.
3. One can divide the geometric existence on sub-existences based on ordinary physics obeying real topology and various p-adic physics obeying p-adic topology. p-Adic space-time sheets serve as space-time correlates for cognition and intentionality whereas real space-time sheets are correlates for what we call matter.
4. Zero energy ontology (ZEO) also represents a new element. Physical states are replaced with zero energy states formed by pairs of positive and negative energy states at the boundaries of a causal diamond (CD); they correspond in the standard ontology to physical events formed by pairs of initial and final states. Conservation laws hold true only in the scale characterizing a given CD. Inside a given CD classical conservation laws are exact. This allows one to understand why the failure of classical conservation in cosmic scales is consistent with Poincaré invariance.
In this framework the Schrödinger equation is only a starting point from which one generalizes. The notion of Hamiltonian evolution, seen by Carroll as something very deep, is not natural in the relativistic context and becomes nonsensical in the p-adic context. Only the initial and final states of the evolution defining the zero energy state are relevant, in accordance with the strong form of holography. The U-matrix, M-matrix and S-matrix become the key notions in ZEO.
5. A very important point is that there is no need to distinguish between physical objects and their mathematical description (as quantum states in a Hilbert space of some sort). A physical object is its mathematical description. This allows one to circumvent the question "But what about theories: do theories also exist physically or in some other sense?". A quantum state is a theory about the physical state, and the physicist and mathematician exist in quantum jumps between them. Physical worlds define the Platonia of the mathematician, and conscious existence is hopping around in this Platonia: from one zero energy state to a new one. And ZEO allows all possible jumps! Could a physicist or mathematician wish for anything better;-)!
This list of items shows how dramatically the situation changes when one realizes that the materialistic dogma is just an assumption, and in conflict with what we have known experimentally for almost a century.
Could physical existence be unique?
The identification of physical (or "objective") existence as mathematical existence raises the question whether physics could be unique, following from the requirement that the mathematical description with which it is identical exists. In the finite-dimensional case this is certainly not so: a given finite-D manifold allows an infinite number of different geometries. In the infinite-dimensional case the situation changes dramatically. One possible additional condition is that the physics in question is maximally rich in structure besides existing mathematically! Quantum criticality has been my own phrasing for this principle, and the motivation comes from the fact that at criticality long range fluctuations set in and the system has a fractal structure and is indeed extremely richly structured.
This does not yet say much about what the basic objects of this possibly existing infinite-dimensional space are. One can however generalize Einstein's "classical physics as space-time geometry" program to a "quantum physics as infinite-dimensional geometry of the world of classical worlds (WCW)" program. Classical worlds are identified as space-time surfaces, since the finite-dimensional classical version of the program must also be realized. What is new is "surface": Einstein did not consider space-time as a surface but as an abstract 4-manifold, and this led to the failure of the geometrization program. Sub-manifold geometry is however much richer than manifold geometry and gives excellent hopes for the geometrization of electro-weak and color interactions besides gravitation.
If one assumes that the basic objects are surfaces of some dimension in some higher-dimensional space, one can ask whether it is possible for WCW to have a geometry. If one requires the geometrization of quantum physics, this geometry must be Kähler. This is a highly non-trivial condition. The simplest spaces of this kind are loop spaces, relating closely to string models: their Kähler geometry is unique from the existence of the Riemann connection. This geometry also has the maximal possible symmetries, defined by a Kac-Moody algebra, which looks very physical. Mere mathematical existence implies maximal symmetries and a maximally beautiful world!
Loops are 1-dimensional, but for higher-dimensional objects the mathematical constraints are much more stringent, as the divergence difficulties of QFTs have painfully taught us. General Coordinate Invariance emerges as an additional powerful constraint, and symmetries related to conformal symmetry, generalizing from the 2-D case to symmetries of 3-D light-like surfaces, turn out to be the key to the construction. The requirement of maximal symmetry realized by conformal invariance leads to the correct space-time dimension and also dictates that the imbedding space has an M4× S decomposition; the light-cone boundaries also possess huge conformal symmetries giving rise to additional infinite-D symmetries.
There are excellent reasons to believe that the WCW geometry is unique. Its existence would be guaranteed by a reduction to generalized number theory: M4× CP2, forced by standard model symmetries, becomes the unique choice if one requires that the classical number fields are an essential part of the theory. "Physics as infinite-D geometry" and "Physics as Generalized Number Theory" would be the basic principles and would imply consistency with standard model symmetries.
Orwin O'Dowd said...
This is about as close as experiment gets to gamma-ray bursts, as we now see from the heart of the universe, and it's a ZEO scenario with magnetic dynamics:
Anything stringy would then resemble a "surface tension" in the "skin depth", which is a more than plausible prospect.
I suspect that atoms are intricately patterned in this way, which guises them as alchemical boids that go "quark" in the dead of night, but that's a surreal imagining. And I'm just passing through here.
Ulla said...
Fractality said...
If DNA is a topological quantum computer, do all actions proceed through it?

said...
Quantum computing like activities are possible whenever two molecules or even larger objects are connected by flux tubes. This information processing is universal. The DNA-lipid layer system would however be especially suitable for this purpose. A minimal function would be the realization of memory as braiding patterns updated by flows of molecules.
hamed said...
Dear Matti,
Thanks for the posting, which is controversial for me.
Sentences like "Physical worlds define the Platonia of the mathematician" lead thinking in a beautiful direction, because in this view, if I study geometry or algebra in a very abstract manner, I can think that I am really studying the physical world! But we know that mathematics is very wide, contains very abstract theorems in its branches, and is progressing year by year. Then, for understanding the physical world in a precise manner, should one learn all of mathematics!?
For example, it would be very interesting for me if some mathematical spaces like Lp spaces existed physically! (They are spaces in which one deals with the p-norm instead of the 2-norm.)
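As a tiny aside on the Lp spaces mentioned here: the p-norm of a finite vector is ||x||_p = (Σ|x_i|^p)^{1/p}. This minimal snippet (my own illustration, not from the comment) shows how the norm of one vector varies with p and approaches the max-norm as p grows:

```python
def p_norm(xs, p):
    """The p-norm (sum |x|^p)^(1/p) of a finite vector."""
    return sum(abs(x)**p for x in xs) ** (1.0 / p)

v = [3.0, 4.0]
print(p_norm(v, 1))    # 7.0  (taxicab norm)
print(p_norm(v, 2))    # 5.0  (Euclidean norm)
print(p_norm(v, 100))  # close to max(|x|) = 4
```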
Or, in number theory, I think this view leads to some very deep understanding of physics if one thinks about the physical meaning of all 10 Musean hypernumbers: seditions, w, p, q, m, …, also the nu number as a unifying concept that allows transitions between all the hypernumber types, sigma as the creator of axes, and also antinumbers. The relations between the ten levels of hypernumbers are very interesting for me:

said...
Dear Hamed,
Interesting question. This idea about Platonia as physical world looks controversial. At first it seems to be in conflict with the vision that the laws of physics are unique, that standard model symmetries are somehow very special, etc..
The point is however that the standard model symmetries would be symmetries of mathematics itself! Octonions have SU(3) as the automorphism group leaving a preferred imaginary unit invariant, for instance, and CP_2 is the coset space SU(3)/U(2), having an interpretation as the space of quaternionic planes of octonionic space at a given point.
A second point is that physics should be like a Turing machine: it should be able to emulate all possible internally consistent physics. Finite measurement resolution, if representable quite generally as an effective gauge symmetry, would allow one to emulate extremely general gauge symmetric theories.
One can also worry about higher-dimensional spaces, given that the imbedding space is 8-D. The world of classical worlds is however infinite-D and allows as sub-manifolds finite-D spaces of arbitrary dimension. Also unions of N disjoint n-dimensional sub-manifolds are effectively N*n-dimensional locally: the standard wave-mechanical description of an N-particle system indeed uses a 3N-dimensional configuration space.
If the number-theoretical Brahman=Atman, based on a generalization of the real numbers introducing an infinite number of real units as ratios of infinite integers, is accepted, then the space-time point becomes infinitely richly structured, and WCW might allow a realization as M^4xCP_2 with this more general definition of the space-time point.
There is also the proposal about a fractal hierarchy in which arithmetic with + and * is replaced with direct sum and tensor product for Hilbert spaces. Replacing the points of Hilbert spaces with Hilbert spaces, one obtains a hierarchy very similar to infinite primes, and also now an interpretation in terms of endless second quantization might make sense.
Infinite-dimensionality poses very strong constraints on mathematical structures: the Kahler metric of loop spaces is unique. Infinite-dimensionality would bring in the laws of physics! One might hope that this poses strong enough conditions on the allowed mathematics: for instance, all finite-D structures would be such that they can be induced from infinite-D structures. Mathematicians talk about classifying spaces: probably this is the same basic idea.
It is certainly frustrating to realize how little individual can learn from mathematics during lifetime. I believe that the correct guideline is that mathematics that one learns or perhaps even creates must naturally emerge from applications to real world problems - in my case physics. When I was younger I used to make visits to math library and walk between book shelves with the idea that I might find some miraculous cure to my mathematical problems with TGD. I left the library in a rather depressed mood!;-).
L^p spaces with p=2 are the most natural from the point of view of physics. Only for p=2 does the norm come from a bilinear inner product; quantum superposition would be lost for p different from 2. In the infinite-D context p=2 is natural.
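[Editorial note: the claim that only p=2 is compatible with the quantum formalism can be made concrete. A norm arises from an inner product exactly when it satisfies the parallelogram law, and among the p-norms only p=2 does. A minimal Python check; the function names are illustrative, not from the discussion.]

```python
def p_norm(v, p):
    """p-norm of a vector: (sum |v_i|^p)^(1/p)."""
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

def parallelogram_defect(u, v, p):
    """||u+v||^2 + ||u-v||^2 - 2||u||^2 - 2||v||^2 in the p-norm.
    This vanishes exactly when the norm comes from an inner product."""
    s = [a + b for a, b in zip(u, v)]
    d = [a - b for a, b in zip(u, v)]
    return (p_norm(s, p) ** 2 + p_norm(d, p) ** 2
            - 2 * p_norm(u, p) ** 2 - 2 * p_norm(v, p) ** 2)

u, v = [1.0, 0.0], [0.0, 1.0]
print(abs(parallelogram_defect(u, v, 2)) < 1e-12)  # True: p=2 is an inner-product norm
print(abs(parallelogram_defect(u, v, 1)) > 0.1)    # True: p=1 is not
```

Since superposition amplitudes live in the inner product, this is one precise sense in which p different from 2 breaks the quantum formalism.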
hamed said...
Thanks. I want to summarize your argument for the claims that "the theory is the same as the physical world" and "mathematical structures are unique" as follows; if I have misunderstood it, please guide me:
1- Mathematical structures fall into two subcategories. Some of them, like the infinite-dimensional ones, are essential for the physical world, and it is not possible to have a world without these structures. These mathematical structures are very rich.
2- These mathematical structures pose very strong constraints on the other mathematical structures, so that because the former are unique, the other mathematical structures are unique too.
3- The existence of the other mathematical structures is deeply entangled with these structures.
4- Therefore the essentiality of these structures for the physical world leads to the essentiality of the other mathematical structures, under the constraints imposed on them.
Orwin said...
Hence Husserl's Essences as idealizations at infinity. And continuum mechanics tries to follow, but is now wanting a Natural Philosophy.
A fresh lead on the Kahler extremals problem:
To me, tori which parse as Peirce ternary relations allow the mind to grasp physical dynamics, and here to construct (cognitively) Einsteinian 4-realism. This is not phenomenology of consciousness.
Also: Einstein-Maxwell conformals; Calabi energies; necessary conditions!!
said...
To Hamed:
More or less like this. Note however that finite-D induced structures are very rich: one can probably imbed any finite-D geometry into an infinite-D symmetric space as a surface! Only infinite-D structures are highly unique. Infinite-D mathematical existence is an extremely tricky notion, as perturbative quantum field theorists have demonstrated with a huge amount of sweat and tears.
Fundamental structures are infinite-D and highly unique: the Kahler metric is a fundamental concept, and its existence relies on maximal symmetries realized as superconformal symmetries characteristic of 3-D light-like objects, classical number fields, and real and p-adic number fields.
They induce the remaining structures, in particular finite-D structures in the sense of "emulation". Mathematics does not construct n-dimensional spaces for us but only emulates them using formulas.
This is of course only a physicist's dream. Today physicists do not spend enough time on day dreaming;-).
◘Fractality◘ said...
Does ZEO imply that the Universe won't extinguish itself (heat death) at some point?
Living beings, civilizations, gods, are all dissipative systems - islands of negentropy in a sea of chaos.
The more complex a phenomenon, the more energy it must consume to maintain its identity, and thus it creates more disorder?
Does ZEO modify any of that?
said...
Dear Fractality:
Thank you for a good question.
The Universe suffering a heat death is an outcome of theoretical thinking taken to the extreme, without taking into account the possibility that the basic assumptions behind the second law might not hold true at the limit of vanishing temperature. To me it is amusing that so many physicists take heat death so seriously.
The essential assumption is that quantum coherence in the scales considered does not play any role. At ultralow temperatures, however, quantum coherence becomes important even in standard physics: consider only superfluidity and superconductivity.
You mentioned metabolic energy. This is a good point. The amount of metabolic energy needed depends on the external temperature. The metabolic energy quantum in living matter is of the same order of magnitude as the physiological temperature. At very low temperatures the needed metabolic energy quantum would be very small.
TGD predicts a hierarchy of universal metabolic energy quanta identifiable as increments of zero point kinetic energies in the transfer between space-time sheets corresponding to different values of the p-adic prime p ≈ 2^k. There is evidence for this kind of quanta in the visible, UV, and IR as unidentified spectral lines usually believed to be molecular spectral lines. The ATP-ADP machinery would have the same mechanism as a core element.
The new physics elements are also present and bring something new into the picture.
*ZEO predicts an infinite hierarchy of CDs (serving as correlates of selves!). The larger the CD associated with mental images, the longer the time scale of memory recall and planned action for that subself. Electron corresponds to .1 seconds, assignable to sensory mental images.
*Hierarchy of Planck constants allowing macroscopic quantum phases. Even at higher temperatures macroscopic quantum phases become possible.
* Number theoretic entropy allowing generation of the islands of negentropy. This modifies the view about second law dramatically.
In standard physics there are only islands of small entropy. In the TGD Universe (according to the pessimist) living matter can actively pollute the environment in order to become more negentropic itself, as we indeed seem to do;-)!
Santeri Satama said...
To my limited understanding, and to give credit where credit is due, Bohm had a deep comprehension of this problem, and this basic problem indeed disappears in Bohm's interpretation, which rewrites Schrödinger's equation in terms of a quantum potential:
This implies another kind of causality, which depends only on shape and not on strength and size, and which Bohm calls 'active information'; you define it as negentropic entanglement.
said...
To Santeri:
Here I must disagree. We really observe state function reductions and stationary states. Reductions are inconsistent with the determinism of the Schrodinger equation in the standard ontology, and we must find an interpretation for the situation. Bohm's theory tries to keep the quantum world deterministic.
Occam's razor does not favor Bohm's theory (BT).
*BT is a hidden variable theory.
*Both classical orbits and evolutions of wave functions are assumed.
*BT brings in a hypothetical hydrodynamic flow from which some points are selected.
*BT also makes the ad hoc assumption of quantum non-equilibrium, stating that the Born rule does not hold true in quantum non-equilibrium. What is the probability density then? This remains unclear to me! Here presumably the unidentified hidden variables enter the game. It is argued that this assumption allows one to obtain wave function collapse from a theory which is deterministic. I cannot swallow this.
Bohm's theory also has serious mathematical problems.
*It is argued that the classical orbits are given by the guiding equation, so that they depend on the wave function. I do not see how this description could give rise to classical mechanics, where orbits do not depend on the wave function.
*The addition of particles does not affect the guiding wave - a very strange feature which I find very difficult to accept.
*A further problem is that the theory makes mathematical sense only in the wave-mechanics context. In QFT - in particular for fermions - the equations defining the hydrodynamical flow do not make sense. Already for bosonic QFT the analogs of the Schrodinger equation make sense only formally, due to the extreme nonlinearity.
*There are also serious problems with relativity.
Bohm's notion of active information has a counterpart in TGD as negentropic entanglement, but to me Bohm's wave mechanics looks like a very ugly attempt to do quantum theory without giving up the ontology of classical mechanics.
Ulla said...
Comments of Sarfatti: #1 Our past and future cosmic horizons are computers. (He uses the Wheeler picture.) If A is the area of a horizon, it has A/4Lp^2 QUBITS, where Lp^2 = hG/c^3 ~ 10^-66 cm^2. The area of our future horizon is about 10^56 cm^2.
#2 The world of 3D matter sandwiched between our two observer-dependent 2D cosmic horizons consists of EMANATIONS so to speak, i.e. hologram images. The horizons are the hardware of the anima mundi and the software is Hawking's "Mind of God" - see the last pages of "A Brief History of Time."
This sounds quite TGDish?
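[Editorial note: the figures quoted from Sarfatti are at least internally consistent as an order-of-magnitude estimate. A one-line Python check, using only the quoted numbers (Sarfatti's estimates, not independently verified values):]

```python
# Order-of-magnitude check of the figures quoted in the comment above.
Lp2 = 1e-66        # quoted Planck area ~ hG/c^3, in cm^2
A_future = 1e56    # quoted area of our future cosmic horizon, in cm^2

qubits = A_future / (4 * Lp2)    # Bekenstein-style count A / (4 Lp^2)
print(f"{qubits:.1e}")           # 2.5e+121
```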
Santeri Satama said...
Matti, I believe everyone agrees that Bohm's interpretation is incomplete and that your theory - building also on Bohm's work and philosophical ideas, whether consciously or unconsciously - is more advanced. But calling BT deterministic does not do it justice.
"When several particles are treated by the causal interpretation then, in addition to the conventional classical potential that acts between them, there is a quantum potential which now depends on all the particles. Most important, this potential does not fall off with the distance between particles, so that even distant particles can be strongly connected." (SOC p. 99)
So in BT particles do affect the (universal) guiding wave, but holistically and non-locally, or in other words, if I understand correctly, at the level of an infinite-dimensional Hilbert space.
Bohm's main motive for preserving the ontology of classical mechanics was continuity and dialogue between theories and interpretations (especially Einstein and Bohr), in order to avoid fragmentation and communication breakdowns which hinder scientific creativity. In that same spirit I have the following question: Bohm's notion of quantum potential and active information seems related not only to your negentropic entanglement (and negentropy maximization) but also deeply connected to the quantum mathematics of Hilbert spaces. This may be an extremely naive miscomprehension or a deep question, you decide, but isn't the whole notion and structure of quantum math, as Hilbert spaces inside points of Hilbert spaces, dependent on, or a manifestation of, negentropic entanglement?
Or in less abstract language, isn't the ultimate foundation of all abstract mathematical structures love?
said...
To Santeri:
TGD does not build on Bohm's work, neither consciously nor sub-consciously. TGD's starting points and philosophy are very different.
1) Bohm tries to keep physics deterministic: this is the basic idea of the whole approach, and it was more natural at the time when the theory was proposed. I admit that I simply do not understand how state function reduction is thought to result from the theory: the notions of quantum non-equilibrium and hidden variables are thought to make this possible. But these notions are hopelessly misty.
In TGD, deterministic classical physics in the sense of generalized Bohr orbits becomes an exact part of quantum theory: this gives rise to the strong form of holography too. This does not however mean that classical time evolutions would become real as in Bohm's theory: one has quantum superpositions of classical Bohr orbits instead of a single classical orbit. Only quantum ontology, but with quantum classical correspondence.
In Bohm's theory the feedback from classical to quantum is lacking, and this leads to nonsensical predictions. Sarfatti tried to get over this problem but did not get anywhere.
2) Bohm indeed tried to resolve the Einstein-Bohr debate by trying to keep both classical physics and quantum physics, but his attempt was a failure and led to a garden of branching paths so familiar to any working theoretician: a situation analogous to the landscape in string models.
*In TGD, general coordinate invariance together with the symmetries of special relativity are key symmetries of the theory; in Bohm's theory one starts from the Newtonian framework: wave mechanics. The difficulties are predictable.
3) Bohm hoped to understand state function reduction as a derived notion and tried to solve the EB debate using a single time and keeping the deterministic world view. Bohm would have proposed something different had theories of consciousness been in fashion in his time;-).
*In TGD quantum jump, free will, and non-determinism are taken as facts, with no attempt to reduce them. Two times and two causalities: this is the solution of the Einstein-Bohr debate.
Amusingly, all this reflects the evolution of the time concept: Newtonian time, the time of special relativity, the time of general relativity, and finally the realization that there are two times and two causalities. Plus a huge number of other more or less weird proposals, such as no time at all!
4) The notion of active information is an attractive concept if one does not drown it in the mathematics of wave mechanics. In a similar manner Orch OR was drowned in ad hoc formulas. Theoreticians should avoid formulas as long as possible;-). But we have the illusion that formulas make things more scientific.
*In TGD, NMP + negentropic entanglement realize the analog of active information. There is also an analogy with Orch OR. I have talked about conscious information, attention, the experience of understanding, a rule as a quantum superposition of its instances, the realization of sensory qualia, also love, etc... Many interpretations.
said...
To Santeri:
About love. I must say that as an inhabitant of extremely cruel world of science (see some of the latest postings of Lubos or visit the comment section of Tommaso's blog to understand what I mean!;-) I find it very difficult to say the word "love", it seems to belong to some another spiritual plane;-).
I would not reduce love to a lego piece;-). Sounds too engineerish;-) One cannot give a formula for love; mathematics cannot catch it: the core of K's teachings (and of all mystic teachings) is just this.
Quantum Math as such does not need negentropic entanglement, but it seems possible to realize this notion in terms of QM.
hamed said...
This comment has been removed by the author.
◘Fractality◘ said...
Are computers part of this negentropic pollution?
hamed said...
Dear Matti,
Thanks for clarifications about Bohm theory.
some questions:
In M4*CP2 you sometimes speak about the dynamics of 3-surfaces and sometimes about the dynamics of space-time. I think that when you speak about the dynamics of 3-surfaces you are dealing with geometric time, and when you speak about the dynamics of space-time you are speaking about subjective time. Isn't that so?
NMP defines the dynamics of space-time, but the minimization of Kahler action (something like minimal surfaces in string theory) defines the dynamics of the 3-surfaces. Isn't that so?
But you wrote "Kahler action would define the fundamental dynamics for space-time surfaces". This contradicts my understanding?
If NMP defines the dynamics of space-time, is it essential to talk about the dynamics of space-time surfaces at the level of classical TGD? It leads to confusion for the listener.
The basic ideas of TGD are very controversial from the viewpoint of current physics, so I should be very attentive when I explain them. Therefore I should explain TGD step by step to others, and when I speak about one basic idea I should, if possible, try not to speak about the other basic ideas.
When I explain that space-time is a sub-manifold of M4*CP2, is it possible to continue with the geometrization of forces without explaining space-time sheets first? (I think it is not possible!) Space-time sheets seem like pure fiction from the viewpoint of a physicist. To avoid this, I think that I should explain why many space-time sheets are essential.
I think that I can speak about the geometrization of forces and space-time sheets without speaking about "TGD as a Poincare invariant theory of gravitation"; I can explain it after them. Isn't that so? Or is it essential to explain it first, alongside them?
Really, I am thinking about what the best strategy is to explain TGD step by step without confusing the listener. That's hard ;)
Ulla said...
Ye, Hamed, that's hard :) I stopped at that place myself, but now I think I can proceed with the three-body problem (of Poincare) bringing in unsolvable infinity (Life?). After that the different spacetimes. Or...?
TGD is a knot^10 :D
Love as math seems very odd to me :) Love is about entanglement and continuum, math about discreteness, I think (except this quantum math?).
Maybe some drug would overbridge the gap (bring in more continuous coherence)? Love as White Light? The tiny little space inside, like a ZEO center (wormhole?) connecting to infinity? It is not dependent on any other as Lubos or Tommaso, it is about Self and choises.
Santeri Satama said...
Matti, in regard to the relation of negentropic entanglement and QM, to quote your own words:
"Maybe it would be useful to talk about consciousness only when one has negentropic entanglement: positive information, knowledge. Otherwise awareness.
I think that emotions are very high level consciousness, unlike often thought. They provide summaries about the whole, and it would be natural to assign them to negentropic fusions of a large number of mental images giving rise to stereo consciousness."
No doubt the creative conscious experience of fusing various mathematical ideas and forms into QM involved deep intellectual and emotional pleasure too. So to say that "QM as such does not need NE" or active information sounds like removing your self and the universal self-consciousness of creative gnothi seauton from the process.
With the Brahman-Atman identity as basis it should be quite obvious that QM is the number-theoretical realization of the very old and well known metaphor of Indra's net, and the path you took to your realization, to overcome the limitations of set theory, involved the conscious and emotional aspects of the negentropic entanglement you describe in the quotes above. QM as an n:th degree of order that allows also a more detailed description of NE does not mean that NE - the very process of becoming conscious of QM - could and should be reduced to QM alone. Rather, there is negentropic entanglement between the pair of QM and NE itself. That is, if we are supposed to take you and your work seriously (enough) ;).
said...
To Santeri:
I want just to emphasize that there are several levels of existence, and most problems result from erroneously identifying these levels.
The level giving rise to conscious experience does not reduce to mathematics. The evolution of the state of consciousness does not correspond to a solution of field equations. This is the whole point, and I want to make it absolutely clear, since it provides the solution to so many paradoxes.
In hidden variable theories one can argue that there are physical variables and those related to consciousness, and that the non-determinism of volition is apparent, since the dynamics of the physical variables is that of a shadow and only looks non-deterministic. This was the dream of Sarfatti.
Probably also the vision of Bohm was that non-deterministic state function reduction could be understood as the dynamics of a shadow. The selection rules of state function reduction make the fulfillment of this dream highly implausible.
said...
To Fractality: If we take the pessimistic option seriously computer could be seen as tools of this pollution.
Santeri Satama said...
Dynamics of "Shadow" in the Jungian sense?
So etymologically the word refers to the process of actualizing (state function reduction) instead of the potential or dynamis to actualize-exist; in Bohm's language, explicate and implicate orders. Western metaphysics has been plagued by the idea of 'substance'/'hypokeimenon', whether defined as particles, quantum fields or vacuums, and has considered these substance-stuffs "True-Existence" because they are considered non-mutable and time-invariant. Substance that can be defined, controlled and manipulated.
Then there is the mystery of Platonia-substance, the substantive form of possible forms, and its dialectical relation with the "no-thingness/vacuum" of ZEO. And despite your denial of philosophical connections to BT, I'm still under the impression that, as in BT, the philosophical starting point of your theory is what Whitehead calls organic realism instead of the substance metaphysics of materialism.
It's very easy to get drawn into the overly analytic and fragmenting metaphysics of the English language and scholastic philosophy, to analytically define more and more levels of existence and substance, and to get entangled in the fighting mode of the science vs. philosophy, theory vs. theory etc. debates that emanate and radiate e.g. from the Krauss controversy.
But we can both also speak Finnish and share and comprehend the unity of non-analytical/synthetic and etymologically more sound phenomenological existence in expressions like: havainnoidutaan, ilmennytään, todellistutaan, ollaan. Tunnetaan, nähdään ja kuullaan. (Roughly: one perceives, manifests, becomes real, is. One feels, sees and hears.)
said...
To Hamed:
I thought I had already answered, but noticed that I was wrong. I attach my answers between the lines.
[Hamed] In M4*CP2 you sometimes speak about the dynamics of 3-surfaces and sometimes about the dynamics of space-time. I think that when you speak about the dynamics of 3-surfaces you are dealing with geometric time, and when you speak about the dynamics of space-time you are speaking about subjective time. Isn't that so? NMP defines the dynamics of space-time, but the minimization of Kahler action (something like minimal surfaces in string theory) defines the dynamics of the 3-surfaces. Isn't that so? But you wrote "Kahler action would define the fundamental dynamics for space-time surfaces". This contradicts my understanding?
[MP] Sorry. This is just loose language on my side. The strictly correct manner of speaking is to assign the dynamics to 3-surfaces. Space-time surfaces are "orbits" of 3-surfaces. I also often talk about space-time sheets when I should actually speak about 3-surfaces.
[MP] NMP *does not define* the dynamics of space-time!;-). It defines the dynamics of consciousness and tells that the information gain in quantum jumps is maximized. NMP is mathematically analogous to the second law (and implies it for ensembles) in that it tells only the overall direction of the dynamics but does not fix the time evolution completely as action principles do. Kahler action is the variational principle at the space-time level: preferred extremals.
To be continued....
said...
To Hamed:
[MP] Space-time sheets are just sub-manifolds which are *representable as graphs of maps from M^4 to CP_2*: the QFT-like limit of TGD! There are also other kinds of sub-manifolds: string-like objects with a 2-D M^4 projection, and CP_2 type extremals with a 1-D light-like projection. These are not called space-time sheets.
You can take a large number of space-time sheets representing asymptotic regions of various subsystems. They are small deformations of a canonically imbedded M^4, extremely near to each other. They touch each other here and there. This is just the many-sheeted space-time. The replacement of the superposition of classical fields with the superposition of their effects forces many-sheeted space-time in TGD. A particle touches several sheets and experiences the corresponding forces. Nothing ad hoc!! Sorry for repeating this idea: it is so beautiful!;-)
*The multi-sheeted covering of the imbedding space associated with the hierarchy of Planck constants is something different from many-sheeted space-time. I have tried to make this explicit as often as possible. Here one has an effective covering of the imbedding space inducing a multi-sheeted structure for the space-time surface. There is a good argument that also this notion reduces to the basic dynamics of Kahler action: the normal derivatives of imbedding space coordinates as many-valued functions of canonical momentum densities lead to an effective covering. This is a basic implication of the extreme non-linearity of Kahler action, which in turn forced the geometrization of quantum physics in terms of WCW geometry.
*These notions are not anything new and ad hoc but follow naturally from the basic assumptions. Even the (effective) hierarchy of Planck constants, if I am correct. The only really new and therefore controversial element is sub-manifold geometry as a manner to realize Einstein's original program.
said...
To Hamed:
[MP] You cannot!;-). The fundamental idea of the TGD approach is to solve the energy problem of general relativity in terms of sub-manifold gravity. This also leads to the geometrization of standard model quantum numbers. ZEO allows consistency with the fact that energy is apparently not conserved in cosmology: conservation laws become a length scale dependent notion, which is not actually anything new for the pragmatic physicists who have talked about renormalization of coupling constants since the times of Dirac.
[MP] Good luck! You need it;-)!
said...
Dynamics of a shadow in the geometric sense. A shadow behaves apparently non-deterministically, since the variables in the orthogonal directions serve as hidden variables. Consider mechanics in an n-dimensional space and restrict the consideration to a k<n-dimensional sub-space: the shadow. The dynamics of the n-k hidden degrees of freedom affects also the k-dimensional dynamics, but since they are hidden variables, you see this as non-determinism.
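[Editorial note: this projection argument is easy to demonstrate numerically. The toy model below (my own sketch, not anything from TGD or Bohm) evolves two coupled oscillators fully deterministically but reports only the x coordinate: identical starts in the visible "shadow" coordinate with different hidden y lead to different visible outcomes.]

```python
def evolve(x, y, vx, vy, steps=1000, dt=0.01, k=1.0, c=0.5):
    """Semi-implicit Euler for two coupled oscillators; returns only
    the visible coordinate x, treating y as a hidden variable."""
    for _ in range(steps):
        ax = -k * x - c * (x - y)   # force on x depends on hidden y
        ay = -k * y - c * (y - x)
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x

x_a = evolve(x=1.0, y=0.0, vx=0.0, vy=0.0)
x_b = evolve(x=1.0, y=2.0, vx=0.0, vy=0.0)  # same visible start, different hidden y
print(abs(x_a - x_b) > 1e-3)  # True: the shadow alone looks non-deterministic
```

An observer who sees only x and always prepares x = 1.0 at rest would conclude the dynamics is non-deterministic, although the full (x, y) system is strictly deterministic.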
Existence in the sense you refer to would be subjective existence. Existence as a mathematical object is a different kind of existence and would be existence according to the materialist.
I call my philosophy tripartistics, as opposed to the materialistic view of standard physics and the dualistic view of Bohm (non-hidden and hidden variables, or classical particles + guiding waves). The completely new element is subjective existence - quantum jumps. About Whitehead I cannot say.
I would speak about the vacuum state, not nothingness, which to an academic philosopher means something different. In positive energy ontology the vacuum state is the ground state in which energy, momentum and other quantum numbers vanish. It would be the fermionic Fock vacuum. In ZEO all states satisfy this condition. One could define non-vacuum states as those for which the positive energy part (and thus also the negative energy part) has non-vanishing quantum numbers.
Physics needs philosophy, but physicists must build it themselves. Going to the philosophy library does not help: the statues of academic philosophy did not know anything about modern physics. The Finnish language indeed expresses more naturally what mystics also talk about. The intellectual, linguistic manner of seeing the world is painting pictures using words, and a picture is never the reality. One should take it as art rather than warfare.
"Nothingness" is also a problem in set theory: one constructs the natural numbers by starting from the empty set. But it is obvious that the empty set has no operational meaning. One also ends up with the well-known problems with infinite sets, the Russell antinomy for instance. Could it be that a more natural definition of the natural numbers would be as products of primes, just as elementary particles are building blocks of physical states? In this approach the notion of infinity would be number-theoretical (a divisibility concept) and based on infinite primes.
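[Editorial note: the multiplicative alternative mentioned above can be sketched in a few lines: represent each n > 1 by the multiset of its prime factors, so that multiplication becomes multiset union and divisibility becomes multiset inclusion, with no appeal to the empty set. A toy Python illustration, my own sketch of the idea rather than a worked-out foundation:]

```python
from collections import Counter

def factorize(n):
    """Prime factorization of n > 1 as a Counter {prime: exponent}."""
    f, p = Counter(), 2
    while p * p <= n:
        while n % p == 0:
            f[p] += 1
            n //= p
        p += 1
    if n > 1:
        f[n] += 1
    return f

def multiply(a, b):
    """Multiplication is just multiset union: exponents add."""
    return a + b

def divides(a, b):
    """a | b iff the factor multiset of a is contained in that of b."""
    return all(b[p] >= e for p, e in a.items())

print(multiply(factorize(6), factorize(10)) == factorize(60))  # True
print(divides(factorize(12), factorize(60)))                   # True
print(divides(factorize(8), factorize(60)))                    # False
```

On this picture a divisibility-based notion of infinity would live naturally in the multiset of factors, which seems to be the direction the comment gestures at.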
Ulla said...
I would not want to draw the Jungian Shadow (M-matrix?) into this, and I must confess I don't know Whitehead's organic realism. I read:
Whitehead firmly believed that the sharp division between nature and mind, established by Descartes, had "poisoned all subsequent philosophy", and held that in reality "we cannot determine with what molecules the brain begins and the rest of the body ends". He deemed human experience to be "an act of self-origination including the whole of nature, limited to the perspective of a focal region, located within the body, but not necessarily persisting in any fixed coordination within a definite part of the brain". Upon this concept of human experience, Whitehead founded his new metaphysical "philosophy of the organism", his cosmology, his defense of speculative reason, his ideas on the process of nature and his rational approach to God.
In his Philosophy of Organism or Organic Realism, now usually known as Process Philosophy, he posited subjective forms to complement Plato's eternal objects (or Forms). The theory identified metaphysical reality with change and dynamism, and held that change is not illusory or purely accidental to the substance, but rather the very cornerstone of reality or Being. His view of God, as the source of the universe, was therefore as growing and changing, just as the entire universe is in constant flow and change (essentially a kind of Theism, although his God differs essentially from the revealed God of Abrahamic religion). Later process philosophers, including Charles Hartshorne (1897 - 2000), John B. Cobb Jr. (1925 - ) and David Ray Griffin (1939 - ), developed the theory further into a full-blown Process Theology. Whitehead's rejection of mind-body Dualism was similar to elements in Buddhism, although many Christians and Jews have found Process Theology a fruitful way of understanding God and the universe.
Whitehead believed that "there are no whole truths; all truths are half-truths". His political views sometimes appear to be very close to Libertarianism, although he never used the label, and many Whitehead scholars have read his work as providing a philosophical foundation for the Social Liberalism of the New Liberals of the first half of the 20th Century.
But there are also names as Kant, Popper, Hegel etc, philosophers mentioned, so...
Santeri Satama said...
From a philosopher's answer to Krauss:
The sets of basic assumptions on which theories are built is often called metaphysics, which is a part of philosophy just like logic etc., but more importantly philosophy is about a philosophical attitude, not about names and a set of books in a library. The philosophical attitude towards scientific theory building - at least in BT and PT - is to consider it a form of playful and creative art, and they share the criticism of the anti-philosophical attitude of many materialists who turn the set of materialistic metaphysics into an authoritarian dogma to be followed religiously and defended by political bullying and warfare.
When theory building runs into trouble, e.g. the search for an axiomatic mathematical foundation of QFT, a dialogue with philosophers and/or a visit to the library to pick up a copy of Gödel's proof could and would help to prevent further banging of the head against the wall. ZEO or not, the TGD process is not happening in an intellectual and social vacuum where the narratives of theoretical physics reinvent philosophy or metaphysics from nothing. Notions of intellectual property and patent rights and the tribalism of academic fields are very poor philosophy of the Ego in a world where novel ideas appear synchronistically. And again, there are archetypal ideas such as the Brahman-Atman identity and Indra's Net being constantly reinvented and reformulated in various languages, now including QM. Those are ancient philosophical ideas that TGD is based upon, but in what sense must "physicists build it themselves"?
said...
To Ulla:
Thank you for the Whitehead;-). I cannot invent immediate disagreement with Whitehead. Accepting subjective existence as as a genuine level of existence and giving up dualism is common to us. said...
To Santeri:
Philosophical attitude includes the historical view: there is no point in reinventing the wheel. Philosophical assumptions are important, but only when the theory is relatively well formulated. One cannot start by saying: OK, I will construct dualistic physics. The most important aspect of the philosophical attitude is genuine passion to answer the difficult age-old questions related to time, free will, what "mind stuff" could be,.. There are also more concrete questions: what is energy, what is mass, what is the origin of quantum numbers, what does the mysterious state function reduction mean, ... Most of these questions are taboos nowadays: standard model--> GUT-->SUSY-->String models-->M-theory and the conclusion that those still asking are idiots, as one particular besserwisser whom you certainly know would formulate it! ;-)
The trouble with physics is that most people doing physics have become mere pragmatic appliers of methods. Taken to the extreme, this leads to the attitude that the basic goal of particle physics is to determine experimentally which point of the SUSY parameter space physics corresponds to. This is insanity, but a natural outcome of the attitude of "science as a mere methodology".
Maybe physicists must rediscover the ancient ideas themselves from their starting points. Here open mind is enough.
Santeri Satama said...
The Krauss debate brought up also Aharonov-Bohm effect (
David Albert: "Professor Krauss' argument for the 'reality' of virtual particles, and for the instability of the quantum-mechanical vacuum, and for the larger and more imposing proposition that 'nothing is something', hinges on the claim that "the uncertainty in the measured energy of a system is inversely proportional to the length of time over which you observe it". And it happens that we have known, for more than half a century now, from a beautiful and seminal and widely cited and justly famous paper by Yakir Aharonov and David Bohm, that this claim is false."
What's the TGD interpretation of the Aharonov-Bohm effect and its metaphysical implications? E.g. what kind of causality is in question? said...
Krauss' argument "nothing is something" is to me game with badly chosen words. I would not call quantum physical vacuum state "nothing". In ZEO any zero energy state can be obtained from vacuum so that it has infinite potential of producing different structures.
The Aharonov-Bohm effect is a purely topological effect which does not depend on the theory. A particle going around a closed loop in a vanishing magnetic field can experience a non-trivial effect resulting from a so-called non-integrable phase factor. Any theory involving gauge fields predicts this effect. I would not call it causality. The effect represents topological physics, and topological QFTs made this branch of physics an industry.
Santeri Satama said...
Due to my mathematical handicap, I'm forced to question the meanings of these words: if a "particle" is supposed to be a special case of a "field", how can a vector potential affect or inform a particle in the case of a vanishing field?
The jargon of theoretical physics (shorthand for deep and difficult mathematical concepts) often brings to mind the times when priests preached in Latin to congregations that understood no Latin. A game of badly chosen words, about which philosophers of language, politics, hermeneutics, and science have various and often critical views. said...
To Santeri:
The language is a real problem. I try to explain why it is a problem knowing that also here I encounter the language problem.
The field-particle correspondence is conceptually difficult also for physicists and involves a lot of misunderstandings. Basically one has two different abstraction levels and their correspondence.
Particle as a point of 3-space, with dynamical evolution as the particle orbit, defines classical Newtonian ontology. This is simple. Quantum states of a particle as wave functions in 3-D space E^3 for the particle's position define quantum ontology in the Newtonian framework.
This brings in first quantization as abstraction (statements about statements in logic): one can have only a quantum-classical correspondence as a many-to-one map. The space of wave functions is infinite-D; the configuration space is 3-D. Particular quantal particle states (say momentum eigenstates) have direct classical counterparts. The choice of this correspondence is of course not unique.
Indeed, one could call completely localized wave functions particles (well-defined position at one particular moment). One could also call particles the momentum eigenstates, which are completely delocalized wave functions. Wave-particle duality relates these two alternative quantum-classical correspondences.
Second quantization brings in further abstraction level and a layer of confusion unless one is fully aware of mathematics involved: basically hierarchy of abstractions.
Consider photons. The space of wave functions for photons in 3-D space E^3 is replaced with the space of wave functions in the space of classical gauge field potentials in E^3, which is already infinite-D. A Fock state - a state with a single photon, the analog of a harmonic oscillator wave function in the infinite-D space of gauge potentials - would be the counterpart of the photon as a classical free particle.
If there are non-contractible loops (non-trivial first homotopy) or if the scalar is many-valued, the Aharonov-Bohm effect is possible. For instance, for a scalar depending on the angle around the z-axis such that it changes by a non-integer multiple of 2π as one goes over a full circle, one has the Aharonov-Bohm effect. In TGD this kind of 3-surfaces can be considered. said...
To Santeri:
Concerning your question about vector potential.
By gauge invariance the gauge field is representable as a "curl" of the vector potential. This symmetry means that only the two polarizations orthogonal to the photon's momentum remain in the spectrum. The photon is massless. The vector potential can be non-vanishing even if the field vanishes: it is enough that it is a gradient of a scalar.
If the space has no non-contractible loops and the scalar is a single-valued function, no Aharonov-Bohm effect results.
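The condition just stated can be made explicit with the standard holonomy formula (textbook gauge-theory notation, not anything specific to TGD):

```latex
\psi \;\longrightarrow\;
\exp\!\left(\frac{ie}{\hbar}\oint_{\gamma}\mathbf{A}\cdot d\mathbf{l}\right)\psi
\;=\; \exp\!\left(\frac{ie\,\Phi}{\hbar}\right)\psi ,
```

where Φ is the magnetic flux through any surface spanning the closed loop γ. If **A** = ∇χ with χ single-valued on a simply connected region, the loop integral vanishes and the phase factor is trivial; a non-contractible loop or a many-valued χ is exactly what allows a non-trivial Aharonov-Bohm phase.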
Bohm, in his mixture of ontologies corresponding to two different abstraction levels (particle state as a position in E^3 and particle state as a wave function in E^3), would perhaps say that the vector potential informs or affects the particle. One cannot say this in the standard quantum ontology. It does not make sense.
After all this explaining I must confess that the role of the vector potential in particle description is a QFT-based notion, and it is now strongly challenged in the twistor approach, in which the notion is given up;-)!
Also in TGD one describes particle states as wave functionals in the space of 3-surfaces (by holography effectively partonic 2-surfaces at boundaries of CD) and there is a strong connection to twistor approach.
Things become conceptually clear once one accepts the infinite-D mathematics of the "world of classical worlds". In TGD they are 3-surfaces; in the original string model (rather than the horrible conceptual fuss of M-theory) they are 1-surfaces, strings.
Orwin said...
Here's a crisp view of Krishnamurti as the philosopher against all dogmatism:
Would it help to explain tripartistics? Modern philosophy has no language for pluralism, and folk culture now reclaims "body, mind and spirit."
Ulla said...
Listen or read it.
Orwin said...
The herb feverfew (Tanacetum parthenium) (migraine, headaches and fevers) was planted outside a house to clean the air, and I find even as a modern supplement it works best in just that way - or on the drying plate to "earth" another herb. This is like external lights acting against EM pollution by instantiating boundary conditions. These are preferred (chosen, selected) extremals! But preferred for what by what?
Orwin said...
Air pollution and heart disease: Evidence: hard; link: unknown.
But it doesn't seem you can translate the language of tradition into metrics: e.g. Mind = Liouville/Toda. It's more a matter of interfaces or mediations.
Ulla said...
The point is that the brain is just a tool for decoherence (creating illusions). Waking - sleep - death diminish the use of the brain (creation of subsystems) to zero, but there are other ways to measure things than with the body. It seems the body is just an end of a wormhole (BEC?) and it is tripartistic? Note that emotions are also divided. (This is quite ironic: when Matti and I started the conversations I talked of the importance of emotions; now he has realized that, and I start to talk of the importance of emotions as constraints, as pain :) A human can be totally distorted by pain, incapable of sleeping normally, and when she dies all strain is gone. A psychiatric patient can be totally insensitive to pain. No feelings whatsoever. Why? They use the body differently?).
Herbs? Do you do herbs? That I would want to discuss, but not here, I guess? Flavonoids are interesting. They are called good, but are they really? Why not? Why do we have an antioxidant paradox? etc.
There is no theory of biology today, just guesses. Due to wrong basis?
Santeri Satama said...
Matti, thanks for the clarification, which again shows the importance of philosophy of communication - and the importance of admitting confusion and emotional frustration with linguistic communication breakdowns.
It's helpful to recollect that comprehension is literally a collective enterprise ('grasping together') and to keep in mind the simile about the blind men and the elephant ( and that theoretical physics too is one limited point of view on the whole of being, and to be meaningful it needs to be able to communicate and share its point of view with other points of view. said...
To Santeri:
Communication is difficult even when the communicators are willing to communicate. Sad to say that too often this is not the case.
To Ulla:
A neuroscientist wakes up from what is regarded as a completely unconscious state, tells that she has been fully conscious, and writes a book about it. She is not taken seriously at all! Does not fit the dogma! This is only one example of what kinds of idiots scientists believing in dogma become.
Douglas Adams has in one of his books a hilarious piece of satire about skeptic scientists. Some communications (or attempts to communicate) with Finnish skeptics have taught me that they produce the hilarious satire themselves.
Ulla said...
Yes, I have read some 'highs'. It always makes my stomach want to turn inside out.
I think you're barking up the wrong tree. Matti is one of the very few scientists who really tries to communicate, but unfortunately his theory is so hard to grasp because it is complex (knot^10). As Hamed has also noticed, the different parts are so entangled with each other that there is no beginning and no end. It resembles traditional Chinese medicine very much in that respect. And notice that not even experts know enough to fully understand it, because they are experts only in some area. Today there are very few humans who know so much that they could grasp it. And to make it simpler is not an easy task either. The math is a hard nut.
I have many times thought that I understood, and then noticed I did not. I have checked and cross-tested with my limited knowledge in physics (which makes me grin at myself a little), and seldom have there been errors, only things not yet known, which isn't Matti's fault.
I share Matti's doubts about philosophy. It cannot guide, but it can be used as a model afterwards. Philosophy has the same fault as math: everything is possible. Sorry to say, if you are a philosopher.
This video was of a male neuroscientist, and maybe then it is more easily accepted. Jill Bolte Taylor was a female.
hamed said...
Dear Matti,
so Thanks,
At your answers:
“The strictly correct manner to speak is to assign dynamics to 3-surfaces” and
I regarded dynamics and evolution as having the same meaning. TGD has two kinds of evolution: one is informational evolution related to consciousness, and the other is geometric time evolution.
Before this I thought that the first one is an evolution of space-time and the other one an evolution of the 3-surface, but now I learned from the answers that each of them is an evolution of the 3-surface. Is it correct? In the first one, at each quantum jump the 3-surface is replaced with another 3-surface. A transition from a p-adic 3-surface to a real 3-surface occurs, rather than from p-adic space-time to real space-time? Although there is no difference practically!
I deduced that in the sequence of quantum jumps from one 3-surface to another, the direction of evolution is not unique by NMP, and there are several 3-surfaces at the end that satisfy the condition of NMP (something like degeneracy). Then what causes only one of these 3-surfaces to occur? Is it only pure chance?! But it is obvious that when a person wills to do something, if all external factors are appropriate, he can do it, and exactly that work, without any chance governing his behavior. The illusion of "I" doesn't help you with the answer;) said...
Dear Hamed,
As with any 4-D classical action principle, one can see Kähler action as defining a dynamics for some 3-D configuration: usually these are field configurations, in TGD they are geometric objects. The preferred extremal property selects preferred orbits as analogs of Bohr orbits in the TGD Universe: this is what distinguishes TGD from field theories based on path integrals (all orbits are allowed and "classical" ones correspond to stationary phase and thus extremals of the action). One can assign to a collection of 3-surfaces at the second end of CD a space-time surface as a preferred extremal - this is holography. The holography has motivated my somewhat fuzzy use of space-time sheets and 3-surfaces: if holography were globally true, then the 3-surface at the end of CD would dictate the space-time sheet uniquely. This is not the case, by the failure of strict determinism due to the vacuum degeneracy of Kähler action. (A good exercise would be to look through the Kähler action and what its vacuum extremals are!) I apologize!
[Hamed] At “NMP is mathematically analogous to second law (and implies it for ensembles) in that it tells only overall direction of dynamics but does not fix time evolution completely as action principles”
a) An important point: quantum states are quantum superpositions of 3-surfaces!!!! To each of these 3-surfaces in the superposition one can assign a space-time sheet satisfying field equations, modulo the non-uniqueness due to the failure of strict determinism. In this sense, and only in this sense, classical physics is an exact part of quantum theory!! Bohm believed differently: he would have said that it is possible to speak both about quantum superpositions of 3-surfaces/associated space-time surfaces and about a single space-time surface. You have got the impression that I share this belief of Bohm! I definitely do not!!
One can speak about a single space-time sheet only in the stationary phase approximation for the vacuum functional, which is the exponent of the Kähler function (Kähler action from Euclidian regions) and of an imaginary analog of a Morse function (Kähler action from Minkowskian regions). In this sense TGD and quantum field theories are analogous. The path integral is however replaced by a functional integral with a phase factor (a hybrid of path integral and functional integral), and one can hope that it is therefore mathematically well-defined.
b) NMP says what happens in the state function reduction cascade for a given subsystem-complement pair. It is formulated solely in terms of entanglement entropy for quantum jumps. It defines the dynamics of subjective existence. Kähler action defines the dynamics of geometric existence.
c) Quantum classical correspondence suggests that there could however be some correlate for NMP and its outcome, the second law, at the space-time level. NMP and the second law could correspond to the non-invertibility of the dynamics of Kähler action and to the arrow of time for zero energy states, meaning that they are state function reduced at either end of CD. The breakdown of strict determinism at some points or sub-manifolds of the space-time sheet is analogous to what happens in hydrodynamics in a flow which becomes supersonic. The hydrodynamical equations bifurcate, and the second law is used to select the bifurcating branch uniquely. Something like this might occur now.
◘Fractality◘ said...
You've spoken about God of the Old Testament as the manifestation of collective consciousness.
“Monotheistic dualism that separates God from everything else presents an almost whimsical picture of a God who is a supreme egoist creating the universe for the express purpose of being worshipped – rewarding those who do it properly (according to his rules) and punishing those who don’t. That this is reminiscent of all authoritarian power is no accident, for authoritarian secular power uses an authoritarian religion with its sacred symbolisms and its morality based on duty and sacrifice to justify itself. The question of whether God created the authoritarian form (as fundamentalists believe), or whether the form projected a God to justify itself, is not trivial.”
Is God our mistake, or are we God's mistake?
Ulla said...
I see the answer to that as: we have created a picture of God as something outside the creation, when he is inherent in everything, maybe as the omnipotent vacuum problem?
What God is in reality is very different from our limited view of matters. For instance in NDEs they travel through a 'tunnel' (wormhole?) out of this void (GR) into another kind of existence (mirror world?), and this world we cannot describe by words. So we have created the metaphor God? Kind of a talisman. But WE as humans direct our own evolution, and we and only we have the responsibility for it. There are good and less good choices we can make. In this way we ARE God (his tools).
There is a beautiful phrase: to live in the hands of God. Think about what it means in reality!
Ulla said...
I should have been silent :)
hamed said...
Dear Matti,
so thanks.
I think some background is needed to understand your answer about non-determinism correctly, and I must wait :(!
I listed some questions, and when I think about them I deduce that each of them relates in some manner to understanding M^4 x CP_2! It is a very basic building block of TGD that is not avoidable :).
I found an article on "STATUS OF SUPERSTRING AND M-THEORY" and started to read it, because I think I need to learn the basics of string theories at an introductory level. I'd like to read it from the viewpoint of TGD;)
In your answer to Santeri, you wrote that first quantization as abstraction is like statements about statements in logic. Why? And also second quantization.
“basically hierarchy of abstractions”!!! Then what is third quantization?! said...
Dear Hamed,
TGD can also be seen as a generalization of the superstring model, so getting some background in superstrings certainly helps. Also an article about the old-fashioned hadronic string model would help.
By the way, the string model started from a purely geometric formulation with string world sheets identified as minimal surfaces. Then the Polyakov formulation emerged, and one introduced the metric on string world sheets as an independent dynamical variable, which for extremals equals the induced metric. This allowed the development of a calculational formalism but led astray. Eventually one made also the geometry of 10-D space dynamical, and one had double gravity instead of a reduction of gravity to the geometrodynamics of string world sheets. Pragmatism is not always good in theoretical physics!
First quantization means replacing the configuration space of a particle (Euclidian 3-space) with wave functions in this space. From space to function space. In the case of Boolean algebra this means a transition to the Boolean algebra of Boolean statements about Boolean statements. The reflective level of Boolean consciousness.
Second quantization means that space of wave functions is replaced with space of functions in the space of wave functions. Another abstraction.
The hierarchy of infinite primes and many-sheeted space-time leads to the proposal that this hierarchy of quantizations continues. Hadrons, atoms, etc., even galaxies, are in a well-defined mathematical sense elementary particles at some level of this hierarchy.
Ulla said...
The theory of time reversal and duality of Markov processes was applied to non-relativistic quantum particles in Chapter III. In this chapter we apply the stochastic theory to relativistic quantum particles. We will consider the relativistic Schrödinger equation of a spinless particle in an electromagnetic field. It will be shown that the relativistic quantum particles no longer have continuous paths but move only through pure jumps in contrast to the continuous movement of non-relativistic quantum particles.
Krauss treated only relativistic aspects?
Ulla said...
Schrödinger Equations and Diffusion Theory
By Masao Nagasawa
Dov Henis said...
2012: Restructure Science Plans, Policies, Budgets
Eppur Si Muove, Higgs Particle YOK
Regardless Of Whatever Whoever
Regardless Of Whatever Is Said By Whoever Says It -
Higgs Particle YOK.
S Hawking is simply wrong in accepting it. Obviously wrong.
Everyone who accepts the story of the Higgs particle is simply wrong.
Plain commonsense.
Universe expansion and re-contraction proceed simultaneously..
Dov Henis (comments from 22nd century)
Refresh Present SCIENCE Comprehensions And Restructure Science Plans, Policies And Budgets
Who Suppresses Science Creativity? Does Academia Suppress Creativity?
Again and again, ad absurdum:
USA Science? Re-Comprehend Origins And Essence
* Evolution Is The Quantum Mechanics Of Natural Selection.
* Life’s Evolution is the quantum mechanics of biology.
Update Concepts-Comprehension…
Earth life genesis from aromaticity-H bonding
Universe-Energy-Mass-Life Compilation
Seed of human-chimp genome diversity
New Era For Science Including Genomics
Dov Henis (comments from 22nd century)
Universe Inflation And Expansion
Inflation on Trial
Astrophysicists interrogate one of their most successful theories
Inflation and expansion are per Newton.
Common sense.
Dov Henis (comments from 22nd century)
Origin of the blueshift of photoluminescence in a type-II heterostructure
Blueshifts of luminescence observed in type-II heterostructures are quantitatively examined in terms of a self-consistent approach including excitonic effects. This analysis shows that the main contribution to the blueshift originates from the well region rather than from the variation of the triangular potentials formed in the barrier region. A power law for the blueshift, ΔE_PL ∝ P_laser^m, varying from m = 1/2 for lower excitation power P_laser to m = 1/4 for higher excitation, is obtained from the calculated results combined with a rate equation analysis; it also covers the previously believed m = 1/3 power law within a limited excitation range. The present power law is consistent with the blueshift observed in a GaAsSb/GaAs quantum well.
Interest has recently been increasing in type-II heterostructures, in which electrons and holes are separated in adjacent materials, thereby forming spatially indirect excitons [1–9]. The wavefunction of an indirect exciton is significantly extended in space compared with that of a direct exciton in a type-I system, where both electrons and holes are confined in the same layer, which allows large controllability of the wavefunction distribution. In addition, the long radiative lifetime originating from spatially indirect recombination is attractive for applications such as optical memories [10, 11].
The separation of charge carriers in a type-II system also induces an electrostatic (Hartree) potential, which causes band bending and a resulting significant change in the exciton wavefunction distribution. Experimentally, this band-bending effect has been observed in power-dependent photoluminescence (PL) measurements as a blueshift of the PL peaks with increasing excitation power [1, 2, 5, 6, 12]. The mechanism of this effect has been discussed in terms of a triangular potential model in which photogenerated electrons and holes form a dipole layer, creating a triangular-like potential at the interface [1]. With increasing excitation power, the potential becomes steeper and the quantization energy increases, giving rise to a blueshift of the recombination energy. According to this model, the blueshift is proportional to the cube root of the excitation power, which has been generally accepted for the characterization and distinction of type-II heterostructures.
However, detailed examination of the observed power dependence sometimes shows deviations from the cube-root power law. This is especially noticeable when the excitation power dependence is examined over a wide range. Here, we reexamine the characteristic blueshift in a type-II system using a GaAsSb/GaAs quantum well (QW). We observe that the blueshift does not obey a single-exponent power law, but instead tends to saturate with increasing excitation power. This is analyzed on the basis of a self-consistent band calculation. The dominant contribution to the blueshift originates from the variation of the QW energy level rather than from the variation of the triangular potentials formed in the barrier layers, which modifies the cube-root power law.
The sample containing a 6-nm GaAsSb QW was grown on a GaAs(001) substrate by metalorganic molecular-beam epitaxy (MOMBE). The Sb composition of GaAsSb was set at 8%, which was confirmed by X-ray diffraction (XRD). At this Sb concentration, the band lineup between GaAs and GaAsSb becomes a type-II alignment with holes confined in the GaAsSb well [13, 14]. The excitation power dependence of the PL was measured at 23 K using the 633-nm line of a He-Ne laser with an intensity range of 1 to 100 W cm^-2. The incident beam was chopped using an optical chopper to avoid heating.
Results and discussion
Figure 1 shows the normalized PL spectra of the sample as a function of excitation power. The spectra show a typical blueshift with increasing excitation power. The shift of the PL peak energy is summarized in the inset, which clearly shows that the cube-root power law only holds within a limited range. The power exponent is greater than 1/3 at low excitation, then decreases and becomes smaller than 1/3 at high excitation.
Figure 1
Low-temperature PL spectra of a 6-nm GaAsSb QW at different excitation densities. The inset plots the PL peak energy shift as a function of the excitation power density fitted with the conventional cube-root power law.
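The claim that the cube-root law holds only within a limited range can be made concrete numerically. The sketch below uses a synthetic crossover function (an assumed form that reproduces the abstract's m = 1/2 → 1/4 trend, not the measured data) and extracts the local exponent m = d ln ΔE_PL / d ln P_laser from the log-log slope:

```python
import numpy as np

# Synthetic blueshift with a smooth crossover between power laws:
# dE ~ P^(1/2) for P << P0 and dE ~ P^(1/4) for P >> P0 (illustrative form).
P = np.logspace(-2, 4, 200)          # excitation power, arbitrary units
P0 = 1.0                             # assumed crossover power
dE = P**0.5 / (1.0 + P / P0)**0.25   # blueshift, arbitrary units

# Local power exponent m(P) = d ln(dE) / d ln(P), i.e. the log-log slope.
m = np.gradient(np.log(dE), np.log(P))

print(f"m at low power: {m[0]:.3f}, at high power: {m[-1]:.3f}")
```

The exponent sweeps continuously through 1/3 at intermediate power, which is why a single cube-root fit can appear to work over a limited excitation window.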
To elucidate the origin of the characteristic blueshift, let us start with a semi-quantitative analysis of the band bending in a type-II system. We will deal with a single QW structure, and study the band bending effect numerically using a simple one-band model for both the electron and the heavy hole. The excitonic effect is not taken into account at this stage. The one-particle effective mass Schrödinger equations are given by
$$\left[-\frac{\hbar^2}{2 m_{iz}}\frac{d^2}{dz^2} + V_i(z) + \phi(z)\right]\psi_i(z) = E_i\,\psi_i(z), \qquad (1)$$
where i = e (electron) or h (hole), m_iz is the carrier effective mass in the growth direction z, V_i(z) is the heterostructure potential, and φ(z) is the self-consistent Hartree potential induced by the spatial separation of the charge carriers. The Hartree potential is obtained from Poisson's equation,
$$\frac{d^2}{dz^2}\phi(z) = \frac{e^2}{\varepsilon\varepsilon_0}\left[n_h(z) - n_e(z)\right], \qquad (2)$$
in which ε is the dielectric constant, ε_0 is the permittivity of vacuum, and n_i is the carrier density determined by the normalized wavefunctions ψ_i(z):
$$n_i(z) = n_s\,\left|\psi_i(z)\right|^2. \qquad (3)$$
The sheet charge density n_s is a parameter which is an increasing function of the excitation power. Equations 1, 2 and 3 are solved iteratively until they converge.
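The iteration of Equations 1, 2 and 3 can be sketched in a few dozen lines of finite-difference code. This is a minimal illustration, not the authors' implementation: the band offsets, effective masses, dielectric constant, and the sign convention coupling φ to each carrier are all assumed placeholder values, chosen only to mimic a type-II lineup with the hole confined in the well.

```python
import numpy as np

HB2_2M0 = 0.0381                     # hbar^2 / (2 m0) in eV nm^2
E2_EPS = 4.0 * np.pi * 1.44 / 13.0   # e^2 / (eps eps0) in eV nm (eps_r = 13 assumed)

z = np.linspace(-20.0, 20.0, 401)    # growth axis, nm
dz = z[1] - z[0]
well = np.abs(z) <= 3.0              # 6-nm well centered at z = 0
V_e = np.where(well, 0.05, 0.0)      # assumed 50 meV electron barrier (type-II lineup)
V_h = np.where(well, 0.0, 0.30)      # assumed 0.30 eV hole confinement
n_s = 1e-3                           # sheet density in nm^-2 (= 1e11 cm^-2)

def ground_state(U, m_eff):
    """Lowest eigenpair of -hbar^2/(2m) d^2/dz^2 + U(z) (Eq. 1) by finite differences."""
    t = HB2_2M0 / (m_eff * dz**2)
    H = np.diag(2.0 * t + U) - t * (np.eye(z.size, k=1) + np.eye(z.size, k=-1))
    E, psi = np.linalg.eigh(H)
    p = psi[:, 0] / np.sqrt(np.sum(psi[:, 0]**2) * dz)  # normalize: integral |psi|^2 dz = 1
    return E[0], p

def hartree(n_h, n_e):
    """Double integration of phi'' = (e^2/eps eps0)(n_h - n_e) (Eq. 2), phi = 0 at both ends."""
    phi = np.cumsum(np.cumsum(E2_EPS * (n_h - n_e))) * dz**2
    return phi - phi[0] - (phi[-1] - phi[0]) * (z - z[0]) / (z[-1] - z[0])

E_e0, _ = ground_state(V_e, 0.067)   # flat-band references (phi = 0)
E_h0, _ = ground_state(V_h, 0.35)

phi = np.zeros_like(z)
for it in range(200):
    E_e, p_e = ground_state(V_e + phi, 0.067)      # electron feels +phi (assumed convention)
    E_h, p_h = ground_state(V_h - phi, 0.35)       # hole feels -phi
    phi_new = hartree(n_s * p_h**2, n_s * p_e**2)  # n_i = n_s |psi_i|^2 (Eq. 3)
    if np.max(np.abs(phi_new - phi)) < 1e-6:
        break
    phi = 0.5 * phi + 0.5 * phi_new                # linear mixing stabilizes the iteration

shift = (E_e + E_h) - (E_e0 + E_h0)                # net transition-energy shift
print(f"converged after {it} iterations; band-bending shift = {1e3 * shift:+.2f} meV")
```

Sweeping the same loop over n_s yields curves analogous to Figure 2b: with these placeholder parameters the confined-hole level is pushed up by the Hartree potential while the electron level is pulled down, and the competition between the two shifts can be read off directly.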
Figure 2a shows the calculated band diagram of the GaAsSb/GaAs QW for a sheet charge density n_s = 1 × 10^11 cm^-2. The self-consistent potential is shown by the solid lines, and the flat-band potential by the dashed lines. Electron and heavy-hole wavefunctions with their eigenenergies under the bending band are also plotted in Figure 2a. The parameters used in the calculation are summarized in Table 1. In this heterostructure, holes are confined in the GaAsSb well, whereas electrons are loosely bound to the triangular potential wells formed at the GaAsSb/GaAs interfaces. The ground state energy of the electron under the bending band is lower than that under the flat band because of the attractive Hartree potential, which results in a redshift of the transition energy. However, the hole ground state is also pushed down by the band bending, which leads to a blueshift. The total transition energy is thus dependent on two competing shifts.
Figure 2
One-band model calculation for a type-II QW. (a) Calculated band diagram of a 6-nm GaAsSb/GaAs QW for a sheet charge density of 1 × 10^11 cm^-2: self-consistent potential (solid lines) and flat-band potential (dashed lines). Electron and heavy-hole wavefunctions with their eigenenergies are also plotted. (b) Calculated energy shift of the ground state for the electron (ΔE_e) and the heavy hole (ΔE_hh) with respect to the flat-band condition. The transition energy shift (ΔE_PL) is given by the difference between the two energy shifts.
Table 1 Material parameters used for the calculation of a GaAsSb/GaAs QW
To see how the transition energy shifts with the excitation, we calculated the energy shifts of the electron and heavy hole as a function of the sheet charge density (Figure 2b). The energy shift of the optical transition, ΔE_PL, is given by the difference between the two energy shifts: ΔE_PL = ΔE_e − ΔE_hh. As the sheet charge density increases, both the electron and heavy-hole energy levels monotonically decrease due to the increasing Hartree potential. Furthermore, the heavy-hole energy shift is always larger than the electron energy shift. As a result, the transition energy shift ΔE_PL shows a blueshift with increasing excitation. Indeed, this trend is generally true for a type-II structure; the confined carrier (here the hole) is more susceptible to the Hartree potential. This is partly because the potential well for the electrons is formed at the skirt of the Hartree potential, while the holes are affected by the peak height of the Hartree potential. In addition, an increase in the steepness of the triangular well for the electrons raises the quantization energy, compensating for the energy decrease due to the increased well depth.
Having confirmed that the blueshift is mainly caused by the energy shift of the hole in the well, we consider the power dependence of the peak shift. To zeroth order, the hole energy change is proportional to the depth of the Hartree potential, which is, in turn, proportional to the sheet charge density if the holes and electrons are completely separated. The calculated energy shift in Figure 2b shows a sublinear dependence on the sheet charge density, indicating that the distribution of electrons and holes under the bending band plays an important role. For a more quantitative evaluation, especially in the low excitation regime, it is necessary to include excitonic effects in the calculation. We performed a calculation of the exciton energy under the bending band following [17]. The Schrödinger equation for the exciton is
$$\left[-\frac{\hbar^2}{2 m_\rho}\frac{1}{\rho}\frac{\partial}{\partial\rho}\left(\rho\frac{\partial}{\partial\rho}\right) -\frac{\hbar^2}{2 m_{ez}}\frac{\partial^2}{\partial z_e^2} -\frac{\hbar^2}{2 m_{hz}}\frac{\partial^2}{\partial z_h^2} + V_e'(z_e) + V_h'(z_h) -\frac{e^2}{4\pi\varepsilon\varepsilon_0}\frac{1}{\sqrt{\rho^2+(z_e-z_h)^2}}\right]\psi(\rho,z_e,z_h) = E\,\psi(\rho,z_e,z_h). \qquad (4)$$
Here, 1/m_ρ = 1/m_eρ + 1/m_hρ is the in-plane reduced mass, and ρ is the in-plane electron–hole distance. The Hartree potential is included in the calculation through the modified heterostructure potential V_i'(z_i) = V_i(z_i) ∓ φ(z_i), where i = e (electron) corresponds to the upper sign, and i = h (hole) to the lower sign. For simplicity, we ignore the spatially dependent dielectric screening in Equation 4. To solve Poisson's equation, the carrier density n_i is obtained by
$$n_{e,h}(z_{e,h}) = n_s \int_0^{\infty} d\rho \int_{-\infty}^{\infty} dz_{h,e}\, 2\pi\rho\, |\psi(\rho, z_e, z_h)|^2.$$
Again, the sheet charge density ns is a parameter.
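The Poisson step of this self-consistent cycle is simple to prototype. Below is a minimal finite-difference sketch; the uniform grid, Dirichlet boundary conditions, uniform test charge, and dielectric constant are all illustrative assumptions, not the actual parameters of the paper's calculation.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def hartree_potential(z, rho, eps_r):
    """Solve d^2 phi / dz^2 = -rho/(eps_r*EPS0) on a uniform grid with
    phi = 0 at both boundaries (second-order finite differences)."""
    n = len(z)
    h = z[1] - z[0]
    A = np.zeros((n - 2, n - 2))
    np.fill_diagonal(A, -2.0)
    np.fill_diagonal(A[1:], 1.0)      # sub-diagonal
    np.fill_diagonal(A[:, 1:], 1.0)   # super-diagonal
    b = -h**2 * rho[1:-1] / (eps_r * EPS0)
    phi = np.zeros(n)
    phi[1:-1] = np.linalg.solve(A, b)
    return phi

# Sanity check against the analytic solution for a uniform charge slab:
# phi(z) = rho/(2*eps_r*EPS0) * z*(L - z) on [0, L] with phi(0) = phi(L) = 0.
z = np.linspace(0.0, 1.0, 201)
rho = np.full_like(z, 1e-3)
eps_r = 12.9  # GaAs-like relative permittivity (assumed value)
phi = hartree_potential(z, rho, eps_r)
exact = rho / (2 * eps_r * EPS0) * z * (1.0 - z)
assert np.allclose(phi, exact)
```

In a full self-consistent loop, the density from the wavefunction would replace the uniform slab, and the resulting potential would be fed back into the Schrödinger step until convergence.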
Figure 3a plots the probability density of the electron under the flat band (n_s = 0) and the bending band (n_s = 5 × 10^11 cm^-2). The electron probability density is calculated from the wavefunction $\psi(\rho, z_e, z_h)$ by $p_e(z_e) = \int_0^{\infty} d\rho \int_{-\infty}^{\infty} dz_h\, 2\pi\rho\,|\psi(\rho, z_e, z_h)|^2$. It is clearly seen that the electron is attracted to the well under the flat band due to the Coulomb interaction. A binding energy of 3.7 meV is obtained for the exciton by comparing with the energies of the single-particle calculation (Equation 1). Figure 3b shows the exciton energy shift in the GaAsSb/GaAs QW as a function of the sheet charge density. The energy shift increases linearly with the sheet charge density at low density, and subsequently shows sublinear dependence in the higher density regime. The power exponent in the high density regime is found to be approximately 0.5. Although we do not have a clear explanation of the origin of the 1/2 exponent at the high power regime, the sublinear increase in the energy shift can be qualitatively understood in terms of the spatial distribution of the electron wavefunction. At a low charge density where the Hartree potential is small, most of the electrons remain in the GaAs barrier, and the electron probability density inside the GaAsSb well is negligibly small. Thus, the spatially separated charges increase in proportion to the sheet charge density, which results in a linear increase in the Hartree potential. In contrast, a significant portion of the electron wavefunction penetrates into the well at high charge density (n_s = 5 × 10^11 cm^-2). This penetration decreases the net charge that forms the Hartree potential, leading to the sublinear increase with increasing sheet charge density.
Figure 3
Calculation considering excitonic effects, in comparison with the experimental results. (a) Plot of the probability density for the electron under the flat band and bending band. (b) Double logarithmic plot of the exciton energy shift versus sheet charge density for a 6-nm GaAsSb QW. (c) The same data as in the inset of Figure 1, fitted with another power law.
Finally, the calculated energy shift can be connected with the experimental excitation power density through the following rate equation:
$$G = B\,\Delta n^2.$$
Here, G is the generation rate of the photocarriers and is proportional to the excitation power, Δn is the photogenerated excess carrier density, and B is the bimolecular radiative recombination coefficient. We ignore nonradiative recombination, since the linearity of the PL intensity with the excitation power ensures the radiative-dominant regime [18]. Combining Equation 6 with the numerically calculated carrier-density dependence of the PL energy shift shown in Figure 3b, the following power law for the blueshift is derived:
$$\Delta E_{\mathrm{PL}} \propto \Delta n^{m} \propto G^{m'}, \qquad m' = m/2 = 1/2 \sim 1/4,$$
with the power factor m′ depending on the excitation power. We show again the experimental PL peak shift in the inset of Figure 3b, along with the new power law. The transition from the low-excitation regime (m′ = 1/2) to the high-excitation regime (m′ = 1/4) is evident. Between the two extremes, we can see the conventionally applied m′ = 1/3 power-law regime.
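The exponent bookkeeping behind this power law (Δn ∝ G^(1/2) from the bimolecular rate equation, hence m′ = m/2) can be verified with a toy numerical model. The pure power-law dependence assumed below is illustrative, not the self-consistent result of Figure 3b; B and the density grid are arbitrary.

```python
import numpy as np

# Toy check: if dE_PL ∝ dn^m and G = B*dn^2, then dE_PL ∝ G^(m/2).
B = 2.0
G = np.logspace(-3, 3, 13)        # generation rate (arb. units)
dn = np.sqrt(G / B)               # from the bimolecular rate equation

def loglog_slope(x, y):
    """Local slope d(log y)/d(log x)."""
    return np.gradient(np.log(y), np.log(x))

for m in (1.0, 0.5):              # linear and sublinear regimes of Fig. 3b
    dE = dn**m
    slope = loglog_slope(G, dE)
    assert np.allclose(slope, m / 2.0)   # m' = m/2, i.e. 1/2 and 1/4
```

The same log-log slope extraction is how the exponents m′ = 1/2 and 1/4 would be read off a measured peak-shift-versus-power curve.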
We have analyzed the blueshift of the PL peak in a type-II QW. A one-band calculation shows that the blueshift is mainly caused by the energy shift of the confined carrier in the well. A more quantitative analysis based on a self-consistent calculation including excitonic effects illustrated the transition from a linear to a sublinear increase in the blueshift with increasing sheet charge density. Combining the calculated result with the carrier rate equation, the blueshift was found to follow a power law in the excitation power density with exponent m′ = 1/2 ~ 1/4, dependent on the excitation power. The more comprehensive theory presented here predicts the 1/3-power law of the literature only over a limited range of carrier density. The above power law is consistent with the experimental results obtained from a type-II GaAsSb/GaAs QW.
Metal-organic molecular beam epitaxy
Quantum well
X-ray diffraction.
1. Ledentsov NN, Böhrer J, Heinrichsdorff F, Grundmann M, Bimberg D, Ivanov SV, Meltser BY, Shaposhnikov SV, Yassievich IN, Faleev NN, Kop'ev PS, Alferov ZI: Radiative states in type-II GaSb/GaAs quantum wells. Phys. Rev. B 1995, 52: 14058. 10.1103/PhysRevB.52.14058
2. Hatami F, Grundmann M, Ledentsov NN, Heinrichsdorff F, Heitz R, Böhrer J, Bimberg D, Ruvimov SS, Werner P, Ustinov VM, Kop'ev PS, Alferov ZI: Carrier dynamics in type-II GaSb/GaAs quantum dots. Phys. Rev. B 1998, 57: 4635. 10.1103/PhysRevB.57.4635
3. Ribeiro E, Govorov AO, Carvalho W, Medeiros-Ribeiro G: Aharonov-Bohm signature for neutral polarized excitons in type-II quantum dot ensembles. Phys. Rev. Lett. 2004, 92: 126402.
4. Madureira JR, de Godoy MPF, Brasil MJSP, Iikawa F: Spatially indirect excitons in type-II quantum dots. Appl. Phys. Lett. 2007, 90: 212105. 10.1063/1.2741601
5. Alonso-Álvarez D, Alén B, García JM, Ripalda JM: Optical investigation of type II GaSb/GaAs self-assembled quantum dots. Appl. Phys. Lett. 2007, 91: 263103. 10.1063/1.2827582
6. Kawazu T, Mano T, Noda T, Sakaki H: Optical properties of GaSb/GaAs type-II quantum dots grown by droplet epitaxy. Appl. Phys. Lett. 2009, 94: 081911. 10.1063/1.3090033
7. Tatebayashi J, Khoshakhlagh A, Huang SH, Dawson LR, Balakrishnan G, Huffaker DL: Formation and optical characteristics of strain-relieved and densely stacked GaSb/GaAs quantum dots. Appl. Phys. Lett. 2006, 89: 203116. 10.1063/1.2390654
8. Dheeraj DL, Patriarche G, Zhou H, Hoang TB, Moses AF, Grønsberg S, Helvoort AT, Fimland BO, Weman H: Growth and characterization of wurtzite GaAs nanowires with defect-free zinc blende GaAsSb inserts. Nano Lett. 2008, 8: 4459. 10.1021/nl802406d
9. Akopian N, Patriarche G, Liu L, Harmand JC, Zwiller V: Crystal phase quantum dots. Nano Lett. 2010, 10: 1198. 10.1021/nl903534n
10. Muto S: On a possibility of wavelength-domain-multiplication memory using quantum boxes. Jpn. J. Appl. Phys. 1995, 34: L210. 10.1143/JJAP.34.L210
11. Geller M, Kapteyn C, Muller-Kirsch L, Heitz R, Bimberg D: Hole storage in GaSb/GaAs quantum dots for memory devices. Phys. Stat. Sol. (b) 2003, 238: 258. 10.1002/pssb.200303023
12. Suzuki K, Hogg RA, Arakawa Y: Structural and optical properties of type II GaSb/GaAs self-assembled quantum dots grown by molecular beam epitaxy. J. Appl. Phys. 1999, 85: 8349. 10.1063/1.370622
13. Ichii A, Tsou Y, Garmire E: An empirical rule for band offsets between III-V alloy compounds. J. Appl. Phys. 1993, 74: 2112. 10.1063/1.354734
14. Noh MS, Ryou JH, Dupuis RD, Chang YL, Weissman RH: Band lineup of pseudomorphic GaAs1-xSbx quantum-well structures with GaAs, GaAsP, and InGaP barriers grown by metal organic chemical vapor deposition. J. Appl. Phys. 2006, 100: 093703. 10.1063/1.2363237
15. Yu PY, Cardona M: Fundamentals of Semiconductors. 3rd edition. Berlin: Springer; 2005.
16. Vurgaftman I, Meyer JR, Ram-Mohan LR: Band parameters for III-V compound semiconductors and their alloys. J. Appl. Phys. 2001, 89: 5815. 10.1063/1.1368156
17. Penn C, Schaffler F, Bauer G, Glutsch S: Application of numerical exciton-wave-function calculations to the question of band alignment in Si/SiGe quantum wells. Phys. Rev. B 1999, 59: 13314. 10.1103/PhysRevB.59.13314
18. Fukatsu S, Usami N, Shiraki Y: Luminescence from Si1-xGex/Si quantum wells grown by Si molecular-beam epitaxy. J. Vac. Sci. Technol. B 1993, 11: 895. 10.1116/1.586732
This work was supported in part by Hokkaido University and Hokkaido Innovation through Nano Technology Support (HINTS).
Author information
Correspondence to Masafumi Jo.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
MJ and MS conceived and designed the experiments. MS and SM performed the sample growth. MS conducted the optical measurements. MJ carried out the numerical calculation and drafted the manuscript. HS and HK participated in the coordination of the study. IS supervised the project. All authors discussed the results and commented on the manuscript.
About this article
Cite this article
Jo, M., Sato, M., Miyamura, S. et al. Origin of the blueshift of photoluminescence in a type-II heterostructure. Nanoscale Res Lett 7, 654 (2012).
• Quantum well
• type-II
• Blueshift
• Excitons
• GaSb
• GaAs
• Photoluminescence
• 71.35.-y: Excitons
• 78.55.Cr: Photoluminescence of III-V semiconductor
• 81.15.Hi: Molecular beam epitaxy
Polymer physics
From Wikipedia, the free encyclopedia
Polymer physics is the field of physics that studies polymers, their fluctuations, mechanical properties, as well as the kinetics of reactions involving degradation and polymerisation of polymers and monomers respectively.[1][2][3][4]
While it focuses on the perspective of condensed matter physics, polymer physics originated as a branch of statistical physics. Polymer physics and polymer chemistry are both related to the field of polymer science, which is considered the applied part of the study of polymers.
Polymers are large molecules and thus are very difficult to treat by deterministic methods. Yet statistical approaches can yield results and are often pertinent, since large polymers (i.e., polymers with many monomers) are described efficiently in the thermodynamic limit of infinitely many monomers (although the actual size is clearly finite).
Thermal fluctuations continuously affect the shape of polymers in liquid solutions, and modeling their effect requires using principles from statistical mechanics and dynamics. As a corollary, temperature strongly affects the physical behavior of polymers in solution, causing phase transitions, melts, and so on.
The statistical approach to polymer physics is based on an analogy between a polymer and either Brownian motion or another type of random walk, the self-avoiding walk. The simplest possible polymer model is the ideal chain, corresponding to a simple random walk. Experimental approaches for characterizing polymers are also common, using polymer characterization methods such as size-exclusion chromatography, viscometry, dynamic light scattering, and automatic continuous online monitoring of polymerization reactions (ACOMP)[5][6] for determining the chemical, physical, and material properties of polymers. These experimental methods have also aided the mathematical modeling of polymers and a better understanding of their properties.
• Flory is considered the first scientist to have established the field of polymer physics.[1]
• French scientists have contributed much since the 1970s (e.g., de Gennes, J. des Cloizeaux).
• Doi and Edwards wrote a famous book on polymer physics.[3]
• Russian and Soviet schools of physics (I. M. Lifshitz, A. Yu. Grosberg, A. R. Khokhlov) have been very active in the development of polymer physics.[7][8]
Models of polymer chains are split into two types: "ideal" models, and "real" models. Ideal chain models assume that there are no interactions between chain monomers. This assumption is valid for certain polymeric systems, where the positive and negative interactions between the monomer effectively cancel out. Ideal chain models provide a good starting point for investigation of more complex systems and are better suited for equations with more parameters.
Ideal Chains
• The freely-jointed chain is the simplest model of a polymer. In this model, fixed length polymer segments are linearly connected, and all bond and torsion angles are equiprobable.[9] The polymer can therefore be described by a simple random walk and ideal chain.
• The freely-rotating chain improves the freely-jointed chain model by taking into account that polymer segments make a fixed bond angle to neighbouring units because of specific chemical bonding. Under this fixed angle, the segments are still free to rotate and all torsion angles are equally likely.
• The hindered rotation model assumes that the torsion angle is hindered by a potential energy. This makes the probability of each torsion angle proportional to a Boltzmann factor:
P(\theta) \propto \exp\left(-U(\theta)/kT\right)
• In the rotational isomeric state model, the allowed torsion angles are determined by the positions of the minima in the rotational potential energy. Bond lengths and bond angles are constant.
• The worm-like chain is a more complex model. It takes the persistence length into account. Polymers are not completely flexible; bending them requires energy. At length scales below the persistence length, the polymer behaves more or less like a rigid rod.
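For the hindered rotation model above, the Boltzmann-weighted torsion distribution is easy to tabulate numerically. The following sketch assumes an illustrative threefold cosine potential (not the potential of any specific polymer), with barrier height and temperature chosen arbitrarily.

```python
import numpy as np

# Hindered-rotation sketch: P(theta) ∝ exp(-U(theta)/kT) for an assumed
# threefold cosine torsion potential (illustrative; not a specific polymer).
kT = 1.0
U0 = 3.0                                    # barrier height in units of kT
theta = np.linspace(-np.pi, np.pi, 1001)
U = 0.5 * U0 * (1.0 - np.cos(3.0 * theta))  # minima at 0 and near ±2π/3
w = np.exp(-U / kT)                         # Boltzmann factor
dtheta = theta[1] - theta[0]
P = w / (w.sum() * dtheta)                  # normalized distribution

assert abs(P.sum() * dtheta - 1.0) < 1e-12
assert np.argmax(P) == np.argmin(U)         # most probable angle = minimum of U
```

Replacing the cosine potential with a tabulated rotational potential gives the torsion-angle statistics used in rotational-isomeric-state calculations.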
Real Chains
Interactions between chain monomers can be modelled as excluded volume. This causes a reduction in the conformational possibilities of the chain, and leads to a self-avoiding random walk. Self-avoiding random walks have different statistics to simple random walks.
Solvent and temperature effect
The statistics of a single polymer chain depends on the solvent. For a good solvent the chain is more expanded while for a bad solvent the chain segments stay close to each other. In the limit of a very bad solvent the polymer chain merely collapses to form a hard sphere, while in good solvent the chain swells in order to maximize the number of polymer-fluid contacts. For this case the radius of gyration is approximated using Flory's mean field approach which yields a scaling for the radius of gyration of:
R_g \sim N^\nu,
where R_g is the radius of gyration of the polymer, N is the number of bond segments (equal to the degree of polymerization) of the chain.
For a good solvent, \nu=3/5; for a bad solvent, \nu=1/3. Therefore, a polymer in good solvent has a larger size and behaves like a fractal object. In bad solvent, it behaves like a solid sphere.
In the so-called \theta solvent, \nu=1/2, which is the result of simple random walk. The chain behaves as if it were an ideal chain.
The quality of a solvent also depends on temperature. For a flexible polymer, low temperature may correspond to poor quality while at high temperature the same solvent can be good. At a particular temperature called the theta (θ) temperature, the chain behaves as if it were an ideal chain.
Excluded volume interaction
The ideal chain model assumes that polymer segments can overlap with each other as if the chain were a phantom chain. In reality, two segments cannot occupy the same space at the same time. This interaction between segments is called the excluded volume interaction.
The simplest formulation of excluded volume is the self-avoiding random walk, a random walk that cannot repeat its previous path. A path of this walk of N steps in three dimensions represents a conformation of a polymer with excluded volume interaction. Because of the self-avoiding nature of this model, the number of possible conformations is significantly reduced. The radius of gyration is generally larger than that of the ideal chain.
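The pruning of conformations by self-avoidance can be seen by direct enumeration on a lattice. Below is a brute-force sketch for the square lattice; since the count grows exponentially, only small step numbers are feasible this way.

```python
# Brute-force enumeration of self-avoiding walks (SAWs) on the square
# lattice, illustrating how excluded volume prunes conformations relative
# to the 4**n unrestricted walks. Exponential cost; fine for small n.
def count_saws(n):
    def rec(x, y, visited, left):
        if left == 0:
            return 1
        total = 0
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (nx, ny) not in visited:
                visited.add((nx, ny))
                total += rec(nx, ny, visited, left - 1)
                visited.discard((nx, ny))
        return total
    return rec(0, 0, {(0, 0)}, n)

counts = [count_saws(n) for n in range(1, 6)]
assert counts == [4, 12, 36, 100, 284]          # OEIS A001411
assert all(c < 4**n for n, c in zip(range(2, 6), counts[1:]))
```

Already at five steps, self-avoidance removes roughly three quarters of the 4^5 = 1024 unrestricted walks.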
Whether a polymer is flexible or not depends on the scale of interest. For example, the persistence length of double-stranded DNA is about 50 nm. At length scales smaller than 50 nm (known as the McGuinness limit), it behaves more or less like a rigid rod.[10] At length scales much larger than 50 nm, it behaves like a flexible chain.
Example model (simple random-walk, freely jointed)
The study of long-chain polymers has been a source of problems within the realm of statistical mechanics since about the 1950s. One of the reasons scientists were interested in their study, however, is that the equations governing the behavior of a polymer chain are independent of the chain chemistry. What is more, the governing equation turns out to describe a random walk, or diffusive walk, in space. Indeed, the Schrödinger equation is itself a diffusion equation in imaginary time, t' = it.
Random walks in time
The first example of a random walk is one in space, whereby a particle undergoes a random motion due to external forces in its surrounding medium. A typical example would be a pollen grain in a beaker of water. If one could somehow "dye" the path the pollen grain has taken, the path observed is defined as a random walk.
Consider a toy problem: a train moving along a 1D track in the x-direction. Suppose that the train moves either a distance of +b or −b (b is the same for each step), depending on whether a coin lands heads or tails when flipped. Let's start by considering the statistics of the steps the toy train takes (where Si is the ith step taken):
\langle S_{i} \rangle = 0 ; due to a priori equal probabilities
\langle S_{i} S_{j} \rangle = b^2 \delta_{ij}.
The second quantity is known as the correlation function. The delta is the Kronecker delta, which tells us that if the indices i and j are different, then the result is 0, but if i = j, then the Kronecker delta is 1, so the correlation function returns a value of b^2. This makes sense, because if i = j then we are considering the same step. Rather trivially, it can then be shown that the average displacement of the train on the x-axis is 0:
x = \sum_{i=1}^{N} S_i
\langle x \rangle = \left\langle \sum_{i=1}^N S_i \right\rangle
\langle x \rangle = \sum_{i=1}^N \langle S_i \rangle.
As stated, \langle S_i \rangle = 0, so the sum is still 0. Using the same method demonstrated above, the root mean square value of the problem can also be calculated. The result of this calculation is given below:
x_\mathrm{rms} = \sqrt {\langle x^2 \rangle} = b \sqrt N.
From the diffusion equation it can be shown that the distance a diffusing particle moves in a medium is proportional to the root of the time the system has been diffusing for, where the proportionality constant is the root of the diffusion constant. The above relation, although cosmetically different, reveals similar physics, where N is simply the number of steps moved (loosely connected with time) and b is the characteristic step length. As a consequence, we can consider diffusion as a random walk process.
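These step statistics are straightforward to confirm by direct simulation. The following Monte Carlo sketch uses arbitrarily chosen step length, step count, and walker count.

```python
import numpy as np

# Monte Carlo check of <x> = 0 and x_rms = b*sqrt(N) for the coin-flip
# train (step length b, N steps, and walker count are arbitrary choices).
rng = np.random.default_rng(0)
b, N, walkers = 1.0, 400, 20000
steps = rng.choice([-b, b], size=(walkers, N))
x = steps.sum(axis=1)

x_rms = np.sqrt(np.mean(x**2))
assert abs(np.mean(x)) < 5 * b * np.sqrt(N / walkers)   # <x> consistent with 0
assert abs(x_rms - b * np.sqrt(N)) < 0.05 * b * np.sqrt(N)
```

With N = 400 the expected root-mean-square displacement is b√N = 20, and the sample estimate agrees to within a few percent.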
Random walks in space
Random walks in space can be thought of as snapshots of the path taken by a random walker in time. One such example is the spatial configuration of long chain polymers.
There are two types of random walk in space: self-avoiding random walks, where the links of the polymer chain interact and do not overlap in space, and pure random walks, where the links of the polymer chain are non-interacting and links are free to lie on top of one another. The former type is most applicable to physical systems, but their solutions are harder to get at from first principles.
By considering a freely jointed, non-interacting polymer chain, the end-to-end vector is
\mathbf{R} = \sum_{i=1}^{N} \mathbf r_i
where ri is the vector position of the i-th link in the chain. As a result of the central limit theorem, if N ≫ 1 then we expect a Gaussian distribution for the end-to-end vector. We can also make statements of the statistics of the links themselves;
• \langle \mathbf{r}_{i} \rangle = 0 ; by the isotropy of space
• \langle \mathbf{r}_{i} \cdot \mathbf{r}_{j} \rangle = b^2 \delta_{ij} ; all the links in the chain are uncorrelated with one another
Using the statistics of the individual links, it is easily shown that
\langle \mathbf R \rangle = 0
\langle \mathbf R \cdot \mathbf R \rangle = Nb^2.
Notice this last result is the same as that found for random walks in time.
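This end-to-end statistic can be checked by Monte Carlo sampling of random link orientations. The sketch below uses links of fixed length b (so ⟨r_i · r_i⟩ = b² and ⟨R · R⟩ = Nb² under this convention); chain length and sample size are arbitrary.

```python
import numpy as np

# Monte Carlo check of the freely jointed chain: with links of fixed
# length b, the mean-square end-to-end distance is N*b**2.
rng = np.random.default_rng(1)
N, chains, b = 200, 20000, 1.0

v = rng.normal(size=(chains, N, 3))                # random directions
v *= b / np.linalg.norm(v, axis=2, keepdims=True)  # links of fixed length b
R = v.sum(axis=1)                                  # end-to-end vectors

mean_R2 = np.mean((R**2).sum(axis=1))
assert abs(np.mean(R, axis=0)).max() < 0.5         # <R> ≈ 0 by isotropy
assert abs(mean_R2 - N * b**2) < 0.05 * N * b**2
```

Normalizing Gaussian vectors is a standard way to draw uniformly distributed directions on the sphere, so each link is isotropic as the model requires.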
Assuming, as stated, that the distribution of end-to-end vectors for a very large number of identical polymer chains is Gaussian, the probability distribution has the following form:
P = \frac{1}{\left (\frac{2 \pi N b^2}{3} \right )^{3/2}} \exp \left(\frac {- 3\mathbf R \cdot \mathbf R}{2Nb^2}\right).
What use is this to us? Recall that according to the principle of equally likely a priori probabilities, the number of microstates, Ω, at some physical value is directly proportional to the probability distribution at that physical value, viz;
\Omega \left ( \mathbf{R} \right ) = c P\left ( \mathbf{R} \right )
where c is an arbitrary proportionality constant. Given our distribution function, there is a maximum corresponding to R = 0. Physically, this amounts to there being more microstates with an end-to-end vector of 0 than for any other value. Now, by considering
S \left ( \mathbf {R} \right ) = k_B \ln \Omega {\left ( \mathbf R \right) }
\Delta S \left( \mathbf {R} \right ) = S \left( \mathbf {R} \right ) - S \left (0 \right )
\Delta F = - T \Delta S \left ( \mathbf {R} \right )
where F is the Helmholtz free energy, and it can be shown that
\Delta F = k_B T \frac {3R^2}{2Nb^2} = \frac {1}{2} K R^2 \quad ; K = \frac {3 k_B T}{Nb^2}.
which has the same form as the potential energy of a spring, obeying Hooke's law.
This result is known as the entropic spring result and amounts to saying that upon stretching a polymer chain you are doing work on the system to drag it away from its (preferred) equilibrium state. An example of this is a common elastic band, composed of long chain (rubber) polymers. By stretching the elastic band you are doing work on the system and the band behaves like a conventional spring, except that unlike the case with a metal spring, all of the work done appears immediately as thermal energy, much as in the thermodynamically similar case of compressing an ideal gas in a piston.
It might at first be astonishing that the work done in stretching the polymer chain can be related entirely to the change in entropy of the system as a result of the stretching. However, this is typical of systems that do not store any energy as potential energy, such as ideal gases. That such systems are entirely driven by entropy changes at a given temperature can be seen whenever they are allowed to do work on the surroundings (such as when an elastic band does work on the environment by contracting, or an ideal gas does work on the environment by expanding). Because the free energy change in such cases derives entirely from entropy change rather than internal (potential) energy conversion, in both cases the work done can be drawn entirely from thermal energy in the polymer, with 100% efficiency of conversion of thermal energy to work. In both the ideal gas and the polymer, this is made possible by a material entropy increase from contraction that makes up for the loss of entropy from absorption of the thermal energy, and cooling of the material.
References
1. ^ a b P. Flory, Principles of Polymer Chemistry, Cornell University Press, 1953. ISBN 0-8014-0134-8.
2. ^ Pierre-Gilles de Gennes, Scaling Concepts in Polymer Physics, Cornell University Press, Ithaca and London, 1979.
3. ^ a b M. Doi and S. F. Edwards, The Theory of Polymer Dynamics, Oxford University Press, New York, 1986.
4. ^ Michael Rubinstein and Ralph H. Colby, Polymer Physics, Oxford University Press, 2003.
5. ^ US Patent 6052184 and US Patent 6653150, other patents pending.
6. ^ F. H. Florenzano; R. Strelitzki; W. F. Reed, "Absolute, Online Monitoring of Polymerization Reactions", Macromolecules 1998, 31(21), 7226-7238.
7. ^ Vladimir Pokrovski, The Mesoscopic Theory of Polymer Dynamics, Springer, 2010.
8. ^ A. Yu. Grosberg and A. R. Khokhlov, Statistical Physics of Macromolecules, American Institute of Physics, 1994.
9. ^ H. Yamakawa, Helical Wormlike Chains in Polymer Solution, Springer Verlag, Berlin, 1997.
10. ^ G. McGuinness, Polymer Physics, Oxford University Press, p. 347.
ISRN Optics
Volume 2013 (2013), Article ID 783865, 51 pages
Review Article
Universal Dynamical Control of Open Quantum Systems
Weizmann Institute of Science, 76100 Rehovot, Israel
Received 25 March 2013; Accepted 24 April 2013
Academic Editors: M. D. Hoogerland, D. Kouznetsov, A. Miroshnichenko, and S. R. Restaino
Due to increasing demands on speed and security of data processing, along with requirements on measurement precision in fundamental research, quantum phenomena are expected to play an increasing role in future technologies. Special attention must hence be paid to omnipresent decoherence effects, which hamper quantumness. Their consequence is always a deviation of the quantum state evolution (error) with respect to the expected unitary evolution if these effects are absent. In operational tasks such as the preparation, transformation, transmission, and detection of quantum states, these effects are detrimental and must be suppressed by strategies known as dynamical decoupling, or the more general dynamical control by modulation developed by us. The underlying dynamics must be Zeno-like, yielding suppressed coupling to the bath. There are, however, tasks which cannot be implemented by unitary evolution, in particular those involving a change of the system's state entropy. Such tasks necessitate efficient coupling to a bath for their implementation. Examples include the use of measurements to cool (purify) a system, to equilibrate it, or to harvest and convert energy from the environment. If the underlying dynamics is anti-Zeno-like, enhancement of this coupling to the bath will occur and thereby facilitate the task, as discovered by us. A general task may also require state and energy transfer, or entanglement of noninteracting parties via shared modes of the bath, which calls for maximizing the shared (two-partite) couplings with the bath while suppressing the single-partite couplings. For such tasks, a more subtle interplay of Zeno and anti-Zeno dynamics may be optimal. We have therefore constructed a general framework for optimizing the way a system interacts with its environment to achieve a desired task.
This optimization consists in adjusting a given “score” that quantifies the success of the task, such as the targeted fidelity, purity, entropy, entanglement, or energy by dynamical modification of the system-bath coupling spectrum on demand.
1. Introduction
Due to the ongoing trends of device miniaturization, increasing demands on speed and security of data processing, along with requirements on measurement precision in fundamental research, quantum phenomena are expected to play an increasing role in future technologies. Special attention must hence be paid to omnipresent decoherence effects, which hamper quantumness [1–70]. These may have different physical origins, such as coupling of the system to an external environment (bath), noise in the classical fields controlling the system, or population leakage out of a relevant system subspace. Their consequence is always a deviation of the quantum state evolution (error) with respect to the expected unitary evolution if these effects are absent. In operational tasks such as the preparation, transformation, transmission, and detection of quantum states, these effects are detrimental and must be suppressed by dynamical control. The underlying dynamics must be Zeno-like, yielding suppressed coupling to the bath.
Environmental effects generally hamper or completely destroy the “quantumness” of any complex device. Particularly fragile against environment effects is quantum entanglement (QE) in multipartite systems. This fragility may disable quantum information processing and other forthcoming quantum technologies: interferometry, metrology, and lithography. Commonly, the fragility of QE rapidly mounts with the number of entangled particles and the temperature of the environment (thermal “bath”). This QE fragility has been the standard resolution of the Schrödinger-cat paradox: the environment has been assumed to preclude macrosystem entanglement.
In-depth study of the mechanisms of decoherence and their prevention is therefore an essential prerequisite for applications involving quantum information processing or communications [3]. The present paper aimed at furthering our understanding of these formidable issues. It is based on progress by our group, as well as others, towards a unified approach to the dynamical control of decoherence and disentanglement. This unified approach culminates in universal formulae allowing design of the required control fields.
Most theoretical and experimental methods that aimed at assessing and controlling (suppressing) decoherence of qubits (two-level systems that are the quantum mechanical counterparts of classical bits) have focused on one of two particular situations: (a) single qubits decohering independently, or (b) many qubits collectively perturbed by the same environment. Thus, quantum communication protocols based on entangled two-photon states have been studied under collective depolarization conditions, namely, identical random fluctuations of the polarization for both photons [71, 72]. Entangled qubits that reside at the same site or at equivalent sites of the system, for example, atoms in optical lattices, have likewise been assumed to undergo identical decoherence.
By contrast, more general problems of decay of nonlocal mutual entanglement of two or more small systems are less well understood. This decoherence process may occur on a time scale much shorter than the time for either body to undergo local decoherence, but much longer than the time each takes to become disentangled from its environment. The disentanglement of individual particles from their environment is dynamically controlled by interactions on non-Markovian time-scales, as discussed below. Their disentanglement from each other, however, may be purely Markovian [73–75], in which case the present non-Markovian approach to dynamical control/prevention is insufficient.
1.1. Dynamical Control of Single-Particle Decay and Decoherence on Non-Markovian Time Scales
Quantum-state decay to a continuum or changes in its population via coupling to a thermal bath is known as amplitude noise (AN). It characterizes decoherence processes in many quantum systems, for example, spontaneous emission of photons by excited atoms [76], vibrational and collisional relaxation of trapped ions [1], and the relaxation of current-biased Josephson junctions [77]. Another source of decoherence in the same systems is proper dephasing or phase noise (PN) [78], which does not affect the populations of quantum states but randomizes their energies or phases.
For independently decohering qubits, a powerful approach for the suppression of decoherence appears to be the “dynamical decoupling” (DD) of the system from the bath [79–92]. The standard “bang-bang” DD, that is, π-phase flips of the coupling via strong and sufficiently frequent resonant pulses driving the qubit [82–84], has been proposed for the suppression of proper dephasing [93].
This approach is based on the assumption that during these strong and short pulses there is no free evolution; that is, the coupling to the bath is interrupted by the control fields. These π-pulses hence serve as a complete phase reversal, meaning that the evolution after the pulse negates the deleterious effects of dephasing prior to the pulse, similar to the spin-echo technique [94]. However, some residual decoherence remains and increases with the interpulse time interval; thus, in order to combat decoherence effectively, the pulses should be very frequent. While standard DD has been developed for combating first-order dephasing, several extensions have been suggested to further optimize DD under proper dephasing, such as multipulse control [89], continuous DD [88], concatenated DD [90], and optimal DD [95, 96]. DD has also been adapted to suppress other types of decoherence couplings, such as internal state coupling [91] and heating [84].
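The echo mechanism behind such π-pulse sequences can be illustrated with a toy simulation of a qubit dephased by a static random detuning. This is a cartoon of ideal, instantaneous "bang-bang" control, not the general formalism of this review; the noise strength, evolution time, and run count are arbitrary.

```python
import numpy as np

# Toy model of pulsed dynamical decoupling: each run has a random static
# detuning delta; n ideal, instantaneous pi-pulses split the evolution into
# n+1 equal segments whose phase contributions alternate in sign.
rng = np.random.default_rng(7)

def mean_coherence(n_pulses, runs=2000, total_time=1.0, sigma=20.0):
    delta = rng.normal(0.0, sigma, size=runs)
    n_seg = n_pulses + 1
    signs = (-1.0) ** np.arange(n_seg)       # +, -, +, ... after each flip
    phases = (delta[:, None] * signs * (total_time / n_seg)).sum(axis=1)
    return abs(np.mean(np.exp(1j * phases)))

free = mean_coherence(0)   # no pulses: Gaussian dephasing, coherence ≈ 0
echo = mean_coherence(1)   # one pi-pulse: the static detuning echoes away
assert free < 0.1
assert echo > 0.999
```

For noise that fluctuates during the evolution the cancellation is only partial, which is why the interpulse interval must be short compared to the bath correlation time.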
Our group has proposed a universal strategy of approximate DD [97–103] for both decay and proper dephasing, by either pulsed or continuous wave (CW) modulation of the system-bath coupling. This strategy allows us to optimally tailor the strength and rate of the modulating pulses to the spectrum of the bath (or continuum) by means of a simple universal formula. In many cases, the standard π-phase “bang-bang” (BB) is then found to be inadequate or nonoptimal compared to dynamic control based on the optimization of the universal formula [104].
Our group has set out to substantially expand the arsenal of decay and decoherence control. We have presented a universal form of the decay rate of unstable states into any reservoir (continuum), dynamically modified by perturbations with arbitrary time dependence, focusing on non-Markovian time-scales [97, 99, 100, 102, 105]. An analogous form has been obtained by us for the dynamically modified rate of proper dephasing [100, 101, 105]. Our unified, optimized approach reduces to the BB method in the particular case of proper dephasing or decay via coupling to spectrally symmetric (e.g., Lorentzian or Gaussian) noise baths with limited spectral width (see below). The type of phase modulation advocated for the suppression of coupling to phonon or photon baths with frequency cutoff [103] is, however, drastically different from the BB method. Other situations to which our approach applies, but not the BB method, include amplitude modulation of the coupling to the continuum, as in the case of decay from quasibound states of a periodically tilted washboard potential [99]: such modulation has been experimentally shown [106] to give rise to either slowdown of the decay (Zeno-like behavior) or its speedup (anti-Zeno-like behavior), depending on the modulation rate.
The theory has been generalized by us to finite temperatures and to qubits driven by an arbitrary time-dependent field, which may cause the failure of the rotating-wave approximation [100]. It has also been extended to the analysis of multilevel systems, where quantum interference between the levels may either inhibit or accelerate the decay [107].
Our general approach [99] to dynamical control of states coupled to an arbitrary “bath” or continuum has reaffirmed the intuitive anticipation that, in order to suppress their decay, we must modulate the system-bath coupling at a rate exceeding the spectral interval over which the coupling is significant. Yet our analysis can serve as a general recipe for optimized design of the modulation aimed at an effective use of the fields for decay and decoherence suppression or enhancement.
1.2. Control of Symmetry-Breaking Multipartite Decoherence
Control of multiqubit or, more generally, multipartite decoherence is of even greater interest, because it can help protect the entanglement of such systems, which is the cornerstone of many quantum information processing applications. However, it is very susceptible to decoherence, decays faster than single-qubit coherence, and can even completely disappear in finite time, an effect dubbed entanglement sudden death (ESD) [73, 74, 108–113]. Entanglement is effectively protected in the collective decoherence situation, by singling out decoherence-free subspaces (DFS) [114], wherein symmetrically degenerate many-qubit states, also known as “dark” or “trapping” states [78], are decoupled from the bath [87, 115–117].
Symmetry is a powerful means of protecting entangled quantum states against decoherence, since it allows the existence of a decoherence-free subspace or a decoherence-free subsystem [77, 78, 80–87, 102, 114–120]. In multipartite systems, this requires that all particles be perturbed by the same environment. In keeping with this requirement, quantum communication protocols based on entangled two-photon states have been studied under collective depolarization conditions, namely, identical random fluctuations of the polarization for both photons [71].
Entangled states of two or more particles, wherein each particle travels along a different channel or is stored at a different site in the system, may present more challenging problems insofar as combating and controlling decoherence effects are concerned: if their channels or sites are differently coupled to the environment, their entanglement is expected to be more fragile and harder to protect.
To address these fundamental challenges, we have developed a very general treatment. Our treatment does not assume the perturbations to be stroboscopic, that is, strong or fast enough, but rather to act concurrently with the particle-bath interactions. This treatment extends our earlier single-qubit universal strategy [97, 99, 100, 104, 121, 122] to multiple entangled systems (particles) which are either coupled to partly correlated (or uncorrelated) finite-temperature baths or undergo locally varying random dephasing [107, 123–126]. Furthermore, it applies to any difference between the couplings of individual particles to the environment. This difference may range from the large-difference limit of completely independent couplings, which can be treated by the single-particle dynamical control of decoherence via modulation of the system-bath coupling, to the opposite zero-difference limit of completely identical couplings, allowing for multiparticle collective behavior and decoherence-free variables [86, 87, 115–117, 127–130]. The general treatment presented here is valid anywhere between these two limits and allows us to pose and answer the key question: under what conditions, if any, is local control by modulation, addressing each particle individually, preferable to global control, which does not discriminate between the particles?
We show that in the realistic scenario, where the particles are differently coupled to the bath, it is advantageous to locally control each particle by individual modulation, even if such modulation is suboptimal for suppressing the decoherence of a single particle. This local modulation allows synchronizing the phase-relation between the different modulations and eliminates the cross coupling between the different systems. As a result, it allows us to preserve the multipartite entanglement and reduces the multipartite decoherence problem to the single particle decoherence problem. We show the advantages of local modulation, over global modulation (i.e., identical modulation for all systems and levels), as regards the preservation of arbitrary initial states, preservation of entanglement, and the intriguing possibility of entanglement increase compared to its initial value.
The experimental realization of a universal quantum computer is widely recognized to be difficult due to decoherence effects, particularly dephasing [1, 131133], whose deleterious effects on entanglement of qubits via two-qubit gates [134136] are crucial. To help overcome this problem, we put forth a universal dynamical control approach to the dephasing problem during all the stages of quantum computations [125, 137], namely, (i) storage, wherein the quantum information is preserved in between gate operations, (ii) single-qubit gates, wherein individual qubits are manipulated, without changing their mutual entanglement, and (iii) two-qubit gates, that introduce controlled entanglement. We show that in terms of reducing the effects of dephasing, it is advantageous to concurrently and specifically control all the qubits of the system, whether they undergo quantum gate operations or not. Our approach consists in specifically tailoring each dynamical quantum gate, with the aim of suppressing the dephasing, thereby greatly increasing the gate fidelity. In the course of two-qubit entangling gates, we show that cross dephasing can be completely eliminated by introducing additional control fields. Most significantly, we show that one can increase the gate duration, while simultaneously reducing the effects of dephasing, resulting in a total increase in gate fidelity. This is at odds with the conventional approaches, whereby one tries to either reduce the gate duration, or increase the coherence time.
A general task may also require state and energy transfer [138], or entanglement [139] of noninteracting parties via shared modes of the bath [123, 140] which call for maximizing the shared (two-partite) couplings with the bath, but suppressing the single-partite couplings.
It is therefore desirable to have a general framework for optimizing the way a system interacts with its environment to achieve a desired task. This optimization consists in adjusting a given “score” that quantifies the success of the task, such as the targeted fidelity, purity, entropy, entanglement, or energy by dynamical modification of the system-bath coupling spectrum on demand. The goal of this work is to develop such a framework.
1.3. Dynamical Protection from Spontaneous Emission
Schemes of quantum information processing that are based on optically manipulated atoms face the challenge of protecting the quantum states of the system from decoherence, or fidelity loss, due to atomic spontaneous emission (SE) [1, 141, 142]. SE becomes the dominant source of decoherence at low temperatures, as nonradiative (phonon) relaxation becomes weak [4, 5]. SE suppression cannot be achieved by frequent modulations or perturbations of the decaying state, because of the extremely broad spectrum of the radiative continuum (“bath”) [76, 97]. A promising means of protection from SE is to embed the atoms in photonic crystals (three-dimensionally periodic dielectrics) that possess spectrally wide, omnidirectional photonic bandgaps (PBGs) [6]: atomic SE would then be blocked at frequencies within the PBG [6–8]. Thus far, studies of coherent optical processes in a PBG have assumed fixed values of the atomic transition frequency [9]. However, in order to operate quantum logic gates, based on pairwise entanglement of atoms by field-induced dipole-dipole interactions [10, 143, 144], one should be able to switch the interaction on and off, most conveniently by AC Stark-shifts of the transition frequency of one atom relative to the other, thereby changing its detuning from the PBG edge. The question then arises: should such frequency shifts be performed adiabatically, in order to minimize the decoherence and maximize the quantum-gate fidelity? The answer is expected to be affirmative, based on the existing treatments of adiabatic entanglement and protection from decoherence [11, 12, 129] and on the tendency of nonadiabatic evolution to spoil fidelity and promote transitions to the continuum [13]. Surprisingly, our analysis (Section 6) demonstrates that only an appropriately phased sequence of “sudden” (strongly nonadiabatic) changes of the detuning from the PBG edge may yield higher fidelity of qubit and quantum gate operations than their adiabatic counterparts.
This unconventional nonadiabatic protection from decoherence is valid for qubits that are strongly coupled to the continuum edge [14, 145], as opposed to the weak coupling approach in Sections 2–5.
1.4. Outline
In this paper we develop, step by step, the framework for universal dynamical control by modulating fields of multilevel systems or qubits, aimed at suppressing or preventing their noise, decoherence, or relaxation in the presence of a thermal bath. Its crux is the general master equation (ME) of a multilevel, multipartite system, weakly coupled to an arbitrary bath and subject to arbitrary temporal driving or modulation. The present ME, derived by the technique of [146, 147], is more general than the ones obtained previously in that it does not invoke the rotating wave approximation and therefore applies at arbitrarily short times or for arbitrarily fast modulations.
Remarkably, when our general ME is applied to either AN or PN, the resulting dynamically controlled relaxation or decoherence rates obey analogous formulae provided that the corresponding density-matrix (generalized Bloch) equations are written in the appropriate basis. This underscores the universality of our treatment. It allows us to present a PN treatment that does not describe noise phenomenologically, but rather dynamically starting from the ubiquitous spin-boson Hamiltonian.
In Sections 2 and 3, we present a universal formula for the control of single-qubit zero-temperature relaxation and discuss several limits of this formula. In Sections 4 and 5, we extend this formula to multipartite or multilevel systems. In Section 6, dynamical control in the strong coupling regime is considered. In Section 7, the treatment is extended to the control of finite-temperature relaxation and decoherence and culminates in single-particle Bloch equations with dynamically modified decoherence rates that essentially obey the universal formula of Section 3. We then discuss in Section 7.4 the possible modulation arsenal for either AN or PN control. In Section 8, we discuss the extensions of the universal control formula to entangled multipartite systems. The formalism is applicable in a natural and straightforward manner to such systems [123]. It allows us to focus on the ability of symmetries to overcome multipartite decoherence [87, 114–117], and we also discuss there the implementations of the universal formula in multipartite quantum computation. Section 9 discusses some general aspects of multipartite dynamical control. We develop a general optimization strategy for performing a chosen unitary or nonunitary task on an open quantum system. The goal is to design a controlled time-dependent system Hamiltonian by variationally minimizing or maximizing a chosen function of the system state, which quantifies the task success (score), such as fidelity, purity, or entanglement. If the time dependence of the system Hamiltonian is fast enough to be comparable to or shorter than the response time of the bath, then the resulting non-Markovian dynamics is shown to optimize the chosen task score to second order in the coupling to the bath. This strategy can not only protect a desired unitary system evolution from bath-induced decoherence but also take advantage of the system-bath coupling so as to realize a desired nonunitary effect on the system.
Section 10 summarizes our conclusions whereby this universal control can effectively protect complex systems from a variety of decoherence sources.
2. Modulation-Affected Control of Decay into Continua and Zero-Temperature Baths: Weak-Coupling Theory
2.1. Framework
Consider the decay of a state |e⟩ via its coupling to a bath, described by the orthonormal basis {|j⟩}, which forms either a discrete or a continuous spectrum (or a mixture thereof). The total Hamiltonian is H(t) = H_S(t) + H_B(t) + H_I(t). Here H_S(t) = ω_a(t)|e⟩⟨e| is the dynamically modulated Hamiltonian of the system (ħ = 1), ω_a(t) being the energy of |e⟩. The time-dependent frequency ω_a(t) can be attributed to a controllable, dynamically imposed Stark shift, or to proper dephasing (uncontrolled, random fluctuation). The term H_B(t) = Σ_j ω_j(t)|j⟩⟨j| is the time-dependent Hamiltonian of the bath, ω_j(t) being the energies of the states |j⟩. The time-dependent frequencies ω_j(t), like ω_a(t), may arise from proper dephasing or dynamical Stark shifts. Finally, H_I(t) = ε(t) Σ_j (μ_j |e⟩⟨j| + H.c.) denotes the off-diagonal coupling of |e⟩ with the continuum/bath, ε(t) being the dynamical modulation function and μ_j the system-bath coupling matrix elements.
We write the wave function of the system as a superposition of |e⟩ and the bath states |j⟩, with the initial condition that only |e⟩ is populated. A one-level system which can exchange its population with the bath states represents the case of autoionization or photoionization. However, the above Hamiltonian also describes a qubit, which can undergo transitions between the excited and ground states |e⟩ and |g⟩, respectively, due to its off-diagonal coupling to the bath. The bath may consist of quantum oscillators (modes) or two-level systems (spins) with different eigenfrequencies. Typical examples are spontaneous emission into photon or phonon continua. In the rotating-wave approximation (RWA), which is lifted in Section 7, the present formalism applies to a relaxing qubit, under the corresponding substitutions.
3. Single-Qubit Zero-Temperature Relaxation
To gain insight into the requirements of decoherence control, consider first the simplest case of a qubit with states |e⟩, |g⟩ and energy separation ω_a relaxing into a zero-temperature bath via off-diagonal coupling, Figure 1(a). The Hamiltonian is given by H = ω_a|e⟩⟨e| + Σ_k ω_k a_k†a_k + Σ_k (κ_k a_k|e⟩⟨g| + H.c.), the sum extending over all bath modes, where a_k, a_k† are the annihilation and creation operators of mode k, respectively, |0⟩ and |1_k⟩ denote the bath vacuum and the kth-mode single excitation, respectively, κ_k is the corresponding transition matrix element, and H.c. is the Hermitian conjugate. We have also taken the rotating wave approximation (RWA). The general time-dependent state can be written as |ψ(t)⟩ = α(t)e^{−iω_a t}|e⟩|0⟩ + Σ_k β_k(t)e^{−iω_k t}|g⟩|1_k⟩. The Schrödinger equation results in coupled equations for α(t) and β_k(t) [83]. One can go to the rotating frame, eliminate the bath amplitudes β_k(t), and get the integro-differential equation α̇(t) = −∫_0^t dt′ Φ(t − t′) e^{iω_a(t−t′)} α(t′), where Φ(t) = Σ_k |κ_k|² e^{−iω_k t} is the bath response/correlation function, expressible in terms of a sum over all transition matrix elements squared oscillating at the respective mode frequencies ω_k.
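The resulting integro-differential equation is easy to integrate numerically. The sketch below assumes a Lorentzian bath, for which the rotating-frame memory kernel reduces to a decaying exponential; the parameter values are illustrative. It exhibits the two regimes discussed next: quadratic (reversible) decay at short times and exponential Golden Rule decay at long times.

```python
import numpy as np

kappa, Gamma = 10.0, 1.0      # bath memory rate and Golden Rule rate (illustrative)
dt, n = 1e-3, 6000            # time step and number of steps
t = np.arange(n) * dt
Phi = 0.5 * Gamma * kappa * np.exp(-kappa * t)   # rotating-frame memory kernel

alpha = np.zeros(n)
alpha[0] = 1.0
for i in range(1, n):
    # d(alpha)/dt = -integral_0^t Phi(t - t') alpha(t') dt'  (explicit Euler step)
    mem = np.sum(Phi[i-1::-1] * alpha[:i]) * dt
    alpha[i] = alpha[i-1] - dt * mem

pop = alpha**2
# pop ~ 1 - O(t^2) at short times (reversible), ~ exp(-Gamma * t) at long times
```

The short-time quadratic regime is precisely the non-Markovian window that the modulation schemes below exploit.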
Figure 1: (a) Schematic drawing of a two-level system with off-diagonal coupling to a continuum or a bath. (b) Schematic drawing of a bath comprised of many harmonic oscillators with different frequencies, whose temporal dephasing after the bath correlation time renders the system-bath interaction practically irreversible.
It is the spread of oscillation frequencies that causes the environment response to decohere after a (typically short) correlation time (Figure 1(b)). Hence, the Markovian assumption that the correlation function decays to zero instantaneously, Φ(t) ∝ δ(t), is widely used: it is in particular the basis for the venerable Lindblad master equation describing decoherence [148]. It leads to exponential decay of the excited-state population at the Golden Rule (GR) rate R_GR = 2πG(ω_a) [76, 78], G(ω) being the bath coupling spectrum defined below.
We, however, are interested in the extremely non-Markovian time scales, much shorter than the bath correlation time, on which all bath-mode excitations oscillate in unison and the system-bath exchange is fully reversible. How does one probe, or, better still, maintain the system in a state corresponding to such time scales?
To this end, we assume modulations of the system and of its coupling to the bath that result in a time-dependent modulation function ε(t), which has two components, namely, an amplitude modulation and a phase modulation. The modulation function ε(t) multiplies the system-bath coupling in the Hamiltonian of Section 2.1. This modulation may pertain to any intervention in the system-bath dynamics: (i) measurements that effectively interrupt and completely dephase the evolution, describable by a stochastic ε(t) [149]; (ii) coherent perturbations that describe phase modulations of the system-bath interactions [99, 124].
For any ε(t), the exact equation (12) is then rewritten with the memory kernel multiplied by ε*(t)ε(t′).
We now resort to the crucial approximation that α(t) varies more slowly than either ε(t) or Φ(t). This approximation is justifiable in the weak-coupling regime (to second order in the coupling), as discussed below. Under this approximation, (18) is transformed into a differential equation describing relaxation at a time-dependent rate, α̇(t) = −[R(t)/2 + iΔ(t)]α(t), where R(t) is the instantaneous time-dependent relaxation rate and Δ(t) is the Lamb shift due to the coupling to the bath. One can separate the spectral representation of the bath response into real and imaginary parts, which satisfy the Kramers-Kronig relations, with P denoting the principal value. Henceforth, we shall concentrate on the relaxation rate, as it determines the excited-state population, |α(t)|² = exp[−∫_0^t R(t′) dt′] = exp[−R̄(t) t], where R̄(t) is the average relaxation rate.
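The Kramers-Kronig connection between the shift and the relaxation spectrum can be checked numerically. The sketch below (grid and parameters illustrative) takes an exponentially decaying correlation function, whose half-Fourier transform has the Lorentzian real part and dispersive imaginary part written out explicitly, and recovers the imaginary part from the real one by a principal-value (Hilbert) transform:

```python
import numpy as np

kappa = 1.0
w = np.linspace(-200.0, 200.0, 400_001)    # frequency grid (illustrative)
re_g = kappa / (kappa**2 + w**2)           # real (relaxation) part
im_g = w / (kappa**2 + w**2)               # imaginary (shift) part, exact

def kk(f, w, i0):
    """-(1/pi) * P-integral of f(w')/(w' - w[i0]) dw',
    evaluated with the subtracted-singularity trick."""
    num = f - f[i0]
    den = w - w[i0]
    den[i0] = 1.0                 # removable point: integrand -> f'(w0) there
    integrand = num / den
    integrand[i0] = 0.0           # single-point contribution is negligible
    return -np.sum(integrand) * (w[1] - w[0]) / np.pi

i0 = 201_000                      # grid point at w ~ 1.0
im_kk = kk(re_g, w, i0)
# im_kk ~ im_g[i0] = 0.5: the shift follows from the relaxation spectrum
```

The small residual discrepancy comes from truncating the integration range, not from the relation itself.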
It is advantageous to consider the frequency domain, as it gives more insight into the mechanisms of decoherence. For this purpose, we define the finite-time Fourier transform of the modulation function as ε_t(ω) = ∫_0^t ε(t′) e^{i(ω−ω_a)t′} dt′.
The average time-dependent relaxation rate can be rewritten, by using the Fourier transforms of Φ(t) and ε(t), in the following form: R̄(t) = 2π ∫_{−∞}^{∞} dω G(ω) F_t(ω), where G(ω) = (1/2π) ∫_{−∞}^{∞} dt Φ(t) e^{iωt} is the spectral-response function of the bath, and F_t(ω) = |ε_t(ω)|²/(2πt) is the finite-time spectral intensity of the (random or coherent) intervention/modulation function, where the factor 1/t comes about from the definition of the decoherence rate averaged over the interval.
The relaxation rate described by (25)–(27) embodies our universal recipe for dynamically controlled relaxation [99, 124], which has the following merits: (a) it holds for any bath and any type of interventions, that is, coherent modulations and incoherent interruptions/measurements alike; (b) it shows that in order to suppress relaxation we need to minimize the spectral overlap of G(ω), given to us by nature, and F_t(ω), which we may design to some extent; (c) most importantly, it shows that in the short-time domain, only broad (coarse-grained) spectral features of G(ω) and F_t(ω) are important. The latter implies that, in contrast to the claim that correlations of the system with each individual bath mode must be accounted for if we are to preserve coherence in the system, we actually only need to characterize and suppress (by means of F_t(ω)) the broad spectral features of G(ω), the bath response function. The universality of (25)–(27) will be elucidated in what follows, by focusing on several limits.
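With the normalization conventions assumed here, the recipe reads R̄(t) = 2π ∫ G(ω) F_t(ω) dω. A minimal numerical sketch, assuming a Lorentzian bath centered on resonance and free (unmodulated) evolution, with illustrative parameters, shows the filter F_t(ω) narrowing with time and the rate evolving from Zeno-suppressed values toward the Golden Rule limit:

```python
import numpy as np

def avg_decay_rate(t, Gamma=1.0, kappa=10.0):
    """R(t) = 2*pi * sum G(w) F_t(w) dw for a Lorentzian bath of width kappa
    centered on resonance, Golden Rule rate Gamma, and no modulation (eps = 1)."""
    w = np.linspace(-400.0, 400.0, 800_001)        # detuning from resonance
    dw = w[1] - w[0]
    G = (Gamma / (2*np.pi)) * kappa**2 / (w**2 + kappa**2)   # bath spectrum
    F = (t / (2*np.pi)) * np.sinc(w * t / (2*np.pi))**2      # filter, unit area
    return 2*np.pi * np.sum(G * F) * dw

r_short = avg_decay_rate(0.05)    # << Gamma: short-time (Zeno) suppression
r_long = avg_decay_rate(20.0)     # ~ Gamma: Golden Rule recovered
```

For this particular bath the average rate is also known in closed form, R̄(t) = Γ[1 − (1 − e^{−κt})/(κt)], against which the sums above can be checked.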
3.1. The Limit of Slow Modulation Rate
If ε(t) corresponds to sufficiently slow rates of interruption/modulation, the spectrum of F_t(ω) is much narrower than the interval of change of G(ω) around ω_a, the resonance frequency of the system. Then G(ω) can be replaced by G(ω_a), so that the spectral width of G(ω) plays no role in determining R̄, and we may as well replace the bath by a spectrally finite, flat (white-noise) reservoir; that is, we may take the Markovian limit. The result is that (25) coincides with the Golden Rule (GR) rate, (14) (Figure 2(a)), R̄ ≈ 2πG(ω_a). Namely, slow interventions do not affect the onset and rate of exponential decay.
Figure 2: Frequency-domain representation of the dynamically controlled decoherence rate in various limits (Section 7). (a) Golden Rule limit. (b) Quantum Zeno effect (QZE) limit. (c) Anti-Zeno effect (AZE) limit. Here, F_t(ω) and G(ω) are the modulation and bath spectra, respectively, whose overlap is compared with the interval of change and the width of G(ω) and with the interruption rate.
3.2. The Limit of Frequent Modulation
Frequent interruptions, intermittent with free evolution, are represented by a repetition of the free-evolution modulation spectrum, τ being the time interval between consecutive interruptions. If ε(t) describes extremely frequent interruptions or measurements, F_t(ω) is much broader than G(ω). We may then pull F_t(ω_a) out of the integral, whereupon (25) yields R̄ ≈ 2πF_t(ω_a) ∫ G(ω) dω ∝ τ. This limit is that of the quantum Zeno effect (QZE), namely, the suppression of relaxation as the interval between interruptions decreases [150–152]. In this limit, the system-bath exchange is reversible and the system coherence is fully maintained (Figure 2(b)). Namely, the essence of the QZE is that sufficiently rapid interventions prevent the excitation escape to the continuum, by reversing the exchange with the bath.
3.3. Intermediate Modulation Rate
In the intermediate time-scale of interventions, where the width of F_t(ω) is broader than the interval of change of G(ω) (so that the Golden Rule is violated) but narrower than the width of G(ω) (so that the QZE does not hold), the overlap of F_t(ω) and G(ω) grows as the rate of interruptions, or modulations, increases. This brings about the increase of relaxation rates with the rate of interruptions, marking the anti-Zeno effect (AZE) [85, 102, 153] (Figure 2(c)). On such time-scales, more frequent interventions (in particular, interrupting measurements) enhance the departure of the evolution from reversibility. Namely, the essence of the AZE is that if one does not intervene in time to prevent the excitation escape to the continuum, then any intervention only drives the system further from its initial state.
We note that the AZE can only come about when the peaks of F_t(ω) and G(ω) do not overlap, that is, when the resonance ω_a is shifted from the maximum of G(ω). If, by contrast, the peaks of F_t(ω) and G(ω) do coincide, any rate of interruptions would result in the QZE (Figure 2(b)). This can be understood by viewing F_t(ω) as an averaging kernel of G(ω) around ω_a. If ω_a is at the maximum of the spectrum, any averaging can only yield less than this maximum, which sets the Golden Rule decay rate. Hence, any rate of interruptions can only decrease the decay rate with respect to the Golden Rule rate, that is, cause the QZE.
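When both the bath spectrum and the measurement-broadened filter are taken as Lorentzians (an assumption we make here for illustration; the convolution of two Lorentzians is again a Lorentzian), the overlap integral has a closed form that makes the Zeno/anti-Zeno crossover explicit:

```python
import numpy as np

def avg_rate(nu, Gamma=1.0, kappa=1.0, delta0=20.0):
    """Overlap of a Lorentzian filter of width nu (the interruption rate),
    centered on resonance, with a Lorentzian bath of width kappa peaked at
    detuning delta0; normalized so that nu -> 0 recovers the Golden Rule rate."""
    return Gamma * kappa * (kappa + nu) / (delta0**2 + (kappa + nu)**2)

nu = np.linspace(0.0, 100.0, 10_001)
R_detuned = avg_rate(nu)                  # bath peak shifted from resonance
R_resonant = avg_rate(nu, delta0=0.0)     # bath peak at resonance

k_max = np.argmax(R_detuned)
# R_detuned rises (anti-Zeno) up to nu ~ delta0 - kappa, then falls (Zeno);
# R_resonant only falls: on-resonance interruptions always give the QZE.
```

The maximum of the detuned curve at nu ≈ delta0 − kappa is exactly the point where the broadened filter first reaches the bath peak.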
3.4. Quasiperiodic Amplitude and Phase Modulation (APM)
The modulation function ε(t) can be either random or regular (coherent) in time, as detailed below. Consider first the most general coherent amplitude and phase modulation (APM) of the quasiperiodic form ε(t) = Σ_k ε_k e^{iν_k t}. Here the ν_k (k = 0, ±1, ±2, …) are arbitrary discrete frequencies with a minimum spectral distance between them. If ε(t) is periodic with period T, then the ε_k and ν_k = 2πk/T become the Fourier components of ε(t). For a general quasiperiodic ε(t), one obtains F_t(ω) as a sum of peaks at ω_a + ν_k with weights |ε_k|², each peak described by a bell-like function of width ~1/t normalized to 1.
For a sufficiently long time, each such peak becomes narrower than the respective characteristic width of G(ω) around ω_a + ν_k, and one can replace it by a delta function.
Thus, when t greatly exceeds the effective correlation (memory) time of the reservoir, (25) is reduced to R̄ = 2π Σ_k |ε_k|² G(ω_a + ν_k). For the validity of (37), it is also necessary that the harmonics be well resolved over the observation time. This condition is well satisfied in the regime of interest, that is, weak coupling to essentially any reservoir, unless ω_a + ν_k (for some harmonic k) is extremely close to a sharp feature in G(ω), for example, a band edge [145], a case covered by Section 6. Otherwise, the long-time limit of the general decay rate (25) under the APM is a sum of the GR rates, corresponding to the resonant frequencies shifted by ν_k, with the weights |ε_k|².
Formula (37) provides a simple general recipe for manipulating the decay rate by APM. Its powerful generality allows for the optimized control of decay, not only for a single level but also for a band characterized by a spectral distribution (e.g., an inhomogeneous or vibrational spectrum). We can then choose ν_k and ε_k in (37) so as to minimize the decay rate convoluted with that spectral distribution. In what follows, various limits of (37) will be analyzed.
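The weighted-sum recipe is straightforward to apply once the harmonic weights are known. A hypothetical three-harmonic modulation against a Lorentzian bath (all numbers illustrative) shows how shifting spectral weight to detuned, weakly coupled frequencies lowers the net rate:

```python
import numpy as np

def G(w, Gamma=1.0, kappa=5.0):
    """Illustrative Lorentzian bath coupling spectrum, peaked at w = 0 (resonance)."""
    return (Gamma / (2*np.pi)) * kappa**2 / (w**2 + kappa**2)

nu = np.array([-10.0, 0.0, 10.0])        # harmonic shifts of the modulation
weights = np.array([0.25, 0.5, 0.25])    # |eps_k|^2, summing to 1

R_mod = np.sum(weights * 2*np.pi * G(nu))   # weighted sum of Golden Rule rates
R_free = 2*np.pi * G(0.0)                   # unmodulated Golden Rule rate
# R_mod = 0.5*1.0 + 0.5*0.2 = 0.6 < R_free = 1.0
```

Half the weight sits at detunings where the coupling spectrum has dropped to a fifth of its peak, so the rate is reduced accordingly.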
3.5. Coherent Phase Modulation (PM)
3.5.1. Monochromatic Perturbation
Let ε(t) = e^{iδt}. Then R̄ = 2πG(ω_a + δ), where δ is a frequency shift, induced by the ac Stark effect (in the case, e.g., of atoms) or by the Zeeman effect (in the case of spins). In principle, such a shift may drastically enhance or suppress R̄ relative to the Golden Rule rate 2πG(ω_a). It provides the maximal variation of R̄ achievable by an external perturbation, since it does not involve any averaging (smoothing) of G(ω) incurred by the width of F_t(ω): the modified R̄ can even vanish, if the shifted frequency ω_a + δ is beyond the cut-off frequency of the coupling, where G(ω_a + δ) = 0. Conversely, the increase of R̄ due to a shift can be much greater than that achievable by repeated measurements, that is, the anti-Zeno effect [97, 98, 101, 102]. In practice, however, ac Stark shifts are usually small for (cw) monochromatic perturbations, whence pulsed perturbations should often be used.
3.5.2. Impulsive Phase Modulation
Let the phase of the modulation function periodically jump by an amount φ at times τ, 2τ, 3τ, …. Such modulation can be achieved by a train of identical, equidistant, narrow pulses of nonresonant radiation, which produce pulsed frequency shifts. Now ε(t) = e^{iφ⌊t/τ⌋}, where ⌊…⌋ is the integer part. One then obtains harmonics at the shifted frequencies δ_k = (2πk + φ)/τ with the weights |ε_k|² = 4 sin²(φ/2)/(2πk + φ)². The decay, according to (22), then has the exponential form at long times, with the rate R̄ defined by (25).
For sufficiently long times, the decay rate thus approaches the weighted sum (37) over these shifted harmonics.
For small phase shifts, φ ≪ 1, the k = 0 peak dominates, whereas the other peaks carry little weight. In this case, one can retain only the k = 0 term in (37) (unless G(ω) is changing very fast). Then the modulation acts as a constant shift, δ_0 = φ/τ.
With the increase of φ, the difference between the k = 0 and k = −1 peak heights diminishes, vanishing for φ = π. Then |ε_0|² = |ε_{−1}|² = 4/π² ≈ 0.405; that is, for φ = π, F_t(ω) contains two identical peaks symmetrically shifted in opposite directions by ±π/τ (the other peaks decrease with k as k^{−2}, totaling 0.19).
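Taking the harmonic weights in the form |ε_k|² = 4 sin²(φ/2)/(2πk + φ)² (the convention assumed in this sketch, with harmonic k shifted by (2πk + φ)/τ), a few lines verify the limiting cases quoted above: two equal main peaks for a π flip with the remainder totaling about 0.19, and a single dominant shifted peak for small φ:

```python
import numpy as np

def pm_weights(phi, kmax=1000):
    """Harmonic intensities |eps_k|^2 for periodic phase jumps by phi at
    intervals tau; harmonic k sits at the shifted frequency (2*pi*k + phi)/tau."""
    k = np.arange(-kmax, kmax + 1)
    return k, 4.0 * np.sin(phi / 2)**2 / (2*np.pi*k + phi)**2

k, w_pi = pm_weights(np.pi)
w0 = w_pi[k == 0][0]            # = 4/pi^2 ~ 0.405
wm1 = w_pi[k == -1][0]          # = 4/pi^2 ~ 0.405 (symmetric partner)
rest = w_pi.sum() - w0 - wm1    # ~ 0.19: all remaining peaks together

k2, w_small = pm_weights(0.01)
# w_small[k2 == 0] ~ 1: for small phi a single peak, shifted by phi/tau, dominates
```

The weights sum to 1 (Parseval), so whatever is removed from the resonant peak must reappear at the shifted harmonics.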
The above features allow one to adjust the modulation parameters for a given scenario to obtain an optimal decrease or increase of R̄. The phase-modulation (PM) scheme with a small φ is preferable near the continuum edge, since it yields a spectral shift in the required direction (positive or negative). The adverse effect of the k ≠ 0 peaks in F_t(ω) then scales as φ² and hence can be significantly reduced by decreasing φ. On the other hand, if ω_a is near a symmetric peak of G(ω), R̄ is reduced more effectively for φ = π, as in [80, 81], since the two main peaks of F_t(ω), at k = 0 and k = −1, then shift more strongly with 1/τ than the single k = 0 peak does for small φ.
3.6. Amplitude Modulation (AM)
Amplitude modulation (AM) of the coupling arises, for example, for radiative-decay modulation due to atomic motion through a high-Q cavity or a photonic crystal [154, 155] or for atomic tunneling in optical lattices with time-varying lattice acceleration [106, 156]. Let the coupling be turned on and off periodically, for times τ_on and τ_off, respectively. The modulation spectrum is then that of a square wave [157], so that (see (43)) the decay rate is again given by (25) and (50).
This case is also covered by (37) and (38), where the harmonic shifts are now ν_k = 2πk/(τ_on + τ_off) and the weights are the squared Fourier coefficients of the on-off (square-wave) modulation.
It is instructive to consider the limit wherein the on-interval τ_on is much greater than the correlation time of the continuum; that is, G(ω) does not change significantly over the spectral interval ~1/τ_on. In this case, one can approximate the sum (37) by the integral (25) with F_t(ω) characterized by a spectral broadening ~1/τ_on. Then (25) reduces to the rate obtained when ideal projective measurements are performed at the corresponding intervals [97]. Thus the AM scheme can imitate measurement-induced (dephasing) effects on quantum dynamics, if the interruption intervals exceed the correlation time of the continuum.
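The square-wave harmonic weights can be sketched in a few lines; the τ_on, τ_off notation and the sinc-squared form below are the standard Fourier coefficients of a periodic on-off envelope, stated here as an assumption rather than taken from the cited works. Parseval's theorem fixes their sum to the duty cycle:

```python
import numpy as np

def am_weights(tau_on, tau_off, kmax=2000):
    """Harmonic intensities of a periodic on-off coupling envelope:
    eps(t) = 1 for tau_on, then 0 for tau_off, repeated with period T."""
    T = tau_on + tau_off
    k = np.arange(-kmax, kmax + 1)
    shifts = 2*np.pi * k / T
    # |c_k|^2 = (tau_on/T)^2 * sinc(k*tau_on/T)^2, with numpy's sinc convention
    w = (tau_on / T)**2 * np.sinc(k * tau_on / T)**2
    return shifts, w

shifts, w = am_weights(1.0, 1.0)
# w.sum() ~ tau_on/T = 0.5: Parseval, the time-averaged |eps(t)|^2 (duty cycle)
```

Feeding these shifts and weights into the weighted sum (37) then gives the AM-controlled decay rate.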
The decay probability, calculated for parameters similar to [106], completely coincides with that obtained for ideal impulsive measurements at the corresponding intervals [97, 98, 101] and demonstrates either quantum Zeno effect (QZE) or anti-Zeno effect (AZE) behavior, depending on the rate of modulation.
Since the Hamiltonian for atoms in accelerated optical lattices is similar to the Leggett Hamiltonian for current-biased Josephson junctions [77], the present theory has been extended to describe effects of current modulations on the rate of macroscopic quantum tunneling in Josephson junctions in [100].
Projective measurements at an effective rate ν, whether impulsive or continuous, usually result in a modulation function F_t(ω) broadened to a width ν, without a shift of its center of gravity [97, 98, 101, 158, 159]. This feature was shown in [97] to be responsible for either the standard quantum Zeno effect, whereby R̄ decreases as ν grows, or the anti-Zeno effect, whereby R̄ grows with ν. In contrast, a weak and broadband chaotic field, characterized by its mean intensity, its bandwidth, and the effective polarizability of the system (electric or magnetic, depending on the system), would give rise to a Lorentzian dephasing function with a substantial shift of its center. Such a shift can have a much stronger effect on R̄ than the QZE or AZE broadening associated with the rate ν.
4. Multipartite Decay Control
4.1. Multipartite PN Control by Resonant Modulation
One can describe phase noise, or proper dephasing, by a stochastic fluctuation of the excited-state energy, ω_a(t) = ω_a + δ(t), where δ(t) is a stochastic variable with zero mean, ⟨δ(t)⟩ = 0, and ⟨δ(t)δ(t′)⟩ is the second moment. For multipartite systems, where each qubit j can undergo different proper dephasing δ_j(t), one has an additional second moment for the cross dephasing, ⟨δ_i(t)δ_j(t′)⟩ with i ≠ j. A general treatment of multipartite systems undergoing this type of proper dephasing is given in [107]. Here we give the main results for the case of two qubits.
Let us take two TLS, or qubits, which are initially prepared in a Bell state. We wish to obtain the conditions that will preserve it. In order to do that, we change to the Bell basis, |Φ_±⟩ = (|ee⟩ ± |gg⟩)/√2, |Ψ_±⟩ = (|eg⟩ ± |ge⟩)/√2. For an initial Bell state, one can then obtain the fidelity in terms of the dynamically modified dephasing and cross-dephasing rates, which depend on the amplitudes of the resonant modulating fields applied on each qubit. Expressions (61)–(67) provide our recipe for minimizing the Bell-state fidelity losses. They hold for any dephasing time-correlations and arbitrary modulation.
One can choose between two modulation schemes, depending on one's goals. When one wishes to preserve an initial quantum state, one can equate the modified dephasing and cross-dephasing rates of all qubits. This results in complete preservation of the singlet only, that is, unit fidelity at all times, but reduces the fidelity of the triplet state. On the other hand, if one wishes to equalize the fidelity for all initial states, one can eliminate the cross-dephasing terms by applying different modulations to each qubit (Figure 3), causing the cross-dephasing rates to vanish at all times. This requirement can be important for quantum communication schemes.
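A toy ensemble average (a minimal model we add here for illustration, in which the excited state of each qubit acquires a random phase, not the full treatment of [107]) shows the two extremes discussed above: identically coupled qubits keep a single-excitation Bell state perfectly, while uncorrelated dephasing degrades it toward the classical value:

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma = 100_000, 2.0        # ensemble size and dephasing strength (illustrative)

def bell_fidelity(phi1, phi2):
    """Ensemble-averaged fidelity of (|eg> + |ge>)/sqrt(2) when the excited
    state of qubit i acquires the random phase phi_i."""
    overlap = 0.5 * (np.exp(1j * phi1) + np.exp(1j * phi2))
    return np.mean(np.abs(overlap)**2)

phi = rng.normal(0.0, sigma, N)
F_collective = bell_fidelity(phi, phi)                         # identical couplings
F_independent = bell_fidelity(phi, rng.normal(0.0, sigma, N))  # uncorrelated baths

# F_collective = 1.0 exactly: only the phase difference phi1 - phi2 matters,
# so the one-excitation subspace is decoherence-free under collective dephasing.
# F_independent ~ 0.5 + 0.5*exp(-sigma**2) ~ 0.51 for Gaussian phases.
```

Local modulation that suppresses the phase difference between the qubits pushes the system from the second situation toward the first.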
Figure 3: Cross decoherence as a function of local modulation. Here two qubits are modulated by continuous resonant fields, with amplitudes . The cross decoherence decays as the two qubits’ modulations become increasingly different. The bath parameters are , where is the correlation time, and .
5. Dynamical Control of Zero-Temperature Decay in Multilevel Systems
5.1. General Formalism
Here we discuss in detail a model for dynamical decay modifications in a multilevel system. The system, with energies , , is coupled to a zero-temperature bath of harmonic oscillators with frequencies . Using the factorized coupling defined in Section 2.1, the corresponding Hamiltonian is found to be as in (1), where now each level has a different modulation and a different coupling to the bath, and denotes a gate operation.
The system evolution is divided into two phases: storage, without gate operations, and a gate operation of finite duration.
The full wave function is given by Similarly to what was said in Section 2.1, one can consider two types of situations. The above equations (68)–(72) were written for an -level system which can exchange its population with the reservoir. In addition, one can consider an -level system, where transitions are possible between any level and a lower level , the reservoir consisting of quantum systems, as described in Section 2.1. The theory in Section 5 holds for both situations, with the minor difference that one should substitute as in (70) and (72) and perform a similar substitution in (76) below.
In order to find the solution, one has to diagonalize the system Hamiltonian by introducing a matrix that rotates the amplitudes as such that, by defining , one gets where are the eigenvalues of the new rotated system. Thus the transformed wave function becomes Using these rotated state amplitudes and a procedure similar to that used for a single level, one finds that they obey the following integrodifferential equations, assuming slowly varying , as Here, the and matrices are given by with and being the modulation and reservoir-response matrices, respectively, given by where During the storage phase, one has , and , and during the gate-operation phase, , , and .
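The rotation to the decoupled basis can be sketched numerically: diagonalize a system Hamiltonian (a hypothetical 3-level matrix with illustrative numbers) and rotate the amplitudes with the resulting matrix.

```python
import numpy as np

# Hypothetical 3-level system Hamiltonian (illustrative numbers): bare level
# energies on the diagonal, modulation-induced couplings off it.
H = np.array([[1.0, 0.2, 0.0],
              [0.2, 2.0, 0.3],
              [0.0, 0.3, 3.5]])

eigvals, V = np.linalg.eigh(H)   # columns of V span the rotated basis
H_rot = V.conj().T @ H @ V       # diagonal: the rotated levels are decoupled

c = np.array([1.0, 0.0, 0.0])    # bare-basis amplitudes
c_rot = V.conj().T @ c           # rotated amplitudes, which evolve independently
```

In the rotated basis each amplitude evolves with its own eigenvalue, which is what makes the integrodifferential equations above tractable.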
The solution to (77) is of the form
To simplify the analysis, one can define the fluence and the modulation spectral matrices as The relevant imaginary parts of the spectral response of the reservoir can be expressed, analogously to (20) and (21), by the Kramers-Kronig relations
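The Kramers-Kronig relations can be checked numerically with an FFT-based Hilbert transform. The Lorentzian response below is an illustrative stand-in for the reservoir response; `analytic_signal` mirrors the standard FFT construction of the analytic signal.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal x + i*H[x], H being the Hilbert transform."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

# Hypothetical causal (Lorentzian) response: chi(w) = 1 / (gamma + i(w0 - w)).
# Causality ties its real and imaginary parts together via Kramers-Kronig.
w = np.linspace(-60.0, 60.0, 8001)
w0, gamma = 3.0, 1.0
chi = 1.0 / (gamma + 1j * (w0 - w))

# Reconstruct the imaginary part from the real part alone
im_kk = np.imag(analytic_signal(np.real(chi)))
```

Away from the window edges the reconstructed imaginary part matches the exact dispersive lineshape, which is the content of the Kramers-Kronig relations used above.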
Defining we shall now represent in different regimes (phases).
(i) As a reference, it is important to consider the decoherence effects with no modulations at all, that is, . In this case, one obtains a diagonal decoherence matrix This means that interference of decaying levels and cancels out in the long time limit, and the decoherence is without cross relaxation.
(ii) During the storage phase, (84) results in One can easily see that for the off-diagonal terms, a simple separation into decay rates and energy shifts is inapplicable in this formulation.
(iii) During gate operations, (84) assumes the form In a more compact and enlightening form, one can rewrite this equation as , where is given in (86).
6. The Strong-Coupling Regime: Decay Control Near Continuum Edge by Nonadiabatic Interference
The analysis expounded thus far has been based on a perturbative treatment of the system-bath coupling. Here, we address the regime of strong system-bath coupling, as in the case of a resonance frequency very near the continuum edge, a situation that may be encountered for atomic excitation near the ionization energy, vibrational excitation in a solid near the Debye cutoff, or atomic excitation in a photonic crystal near a photonic bandgap. In the strong-coupling regime, it is advantageous to work in the combined basis of the system (qubit) and field (bath) states that incorporates the system-bath interaction. Dynamical control of the decay can then be analysed by exact solution of the Schrödinger equation in this basis. Analytical expressions are obtainable for alternating static evolutions with different parameters (e.g., resonant frequency), the dynamical control resulting from their interference. Specifically, we shall consider optical manipulations of atoms embedded in photonic crystals with atomic transition frequencies near a photonic bandgap (PBG), that is, near the edge of the photonic mode continuum. There the qubit is strongly coupled to the continuum, and spontaneous emission (SE) is only partially blocked, because an initially excited atom then evolves into a superposition of decaying and stable states, the stable state representing photon-atom binding [14, 145]. In what follows we shall demonstrate the ability of appropriately alternating sudden changes of the detuning to augment the interference of the emitted and back-scattered photon amplitudes, thereby increasing the probability amplitude of the stable (photon-atom bound) state. As a result, phase-gate operations effected by dipole-dipole interactions can be performed with higher fidelity than in the case of adiabatic frequency change.
6.1. Hamiltonian and Equations of Motion
We consider a two-level atom with excited and ground states and coupled to the field of a discrete (or defect) mode and to the photonic band structure (PBS) in a photonic crystal. The Hamiltonian of the system in the rotating-wave approximation assumes the form [145] Here, is the atomic transition frequency, and are, respectively, the creation and annihilation operators of the field mode at frequency , is the mode density of the PBS, and and are the coupling rates to the atomic dipole of a mode from the continuum and the discrete mode, respectively.
Let us first consider the initial state obtained by absorbing a photon from the discrete mode as where is the vacuum state of the field. Then the evolution of the wavefunction has the general form where we have denoted by and the single-photon state of the relevant modes. The Schrödinger equation then leads to the set of coupled differential equations This evolution reflects the interplay between the off-resonant Rabi oscillations of and , at the driving rate , and the partly inhibited oscillatory decay from to via coupling to the continuum . This decay depends on the detuning of from the continuum edge at (the upper cutoff of the PBG). For a spectrally steep edge (see below), we are in the regime of strong coupling to the mode continuum (as in a high-Q cavity [8]) which allows for the existence of an oscillatory, nondecaying, component of , associated with a photon-atom bound state [7, 145].
6.2. Periodic Sudden Changes of the Detuning
Let us now introduce abrupt changes of , that is, of the detuning from the upper cutoff, , of the PBG (by fast AC-Stark modulations as discussed below), at intervals . In the sudden-change approximation for , the amplitudes of the excited state, the discrete mode and the continuum still evolve according to (91), except that from to the atomic transition frequency is , that is, the detuning , while for , we have , that is, . This dynamics leads to the relation Here, and are solutions of (91) with a static (fixed) atomic transition frequency, or . However, the initial condition at the instant of the frequency change from to is no longer the excited state (89) but the superposition In other words, the dynamics is equivalent to two successive static evolutions, the second one starting from initial conditions .
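The propagator bookkeeping behind the sudden-change relation can be sketched with a toy model: a hypothetical two-level Hamiltonian (the mode continuum is omitted, so this only illustrates the composition of static evolutions, not the photon-atom bound state), evolved with one detuning up to the jump time and with the other afterwards. All numbers are illustrative.

```python
import numpy as np

def U(H, t):
    """Propagator exp(-i H t) of a static Hermitian Hamiltonian (hbar = 1)."""
    vals, vecs = np.linalg.eigh(H)
    return (vecs * np.exp(-1j * vals * t)) @ vecs.conj().T

# Toy two-level stand-in for the atom + defect mode (continuum omitted);
# the coupling g and the detunings are illustrative numbers.
g = 1.0
H_large = np.array([[4.0, g], [g, 0.0]])  # large detuning
H_small = np.array([[0.5, g], [g, 0.0]])  # small detuning after the sudden jump

psi0 = np.array([1.0, 0.0], dtype=complex)  # initially excited
t1, t2 = 0.7, 1.3

# Sudden change: the full evolution is two static evolutions composed,
# the second starting from the state reached at the jump time t1.
psi = U(H_small, t2) @ (U(H_large, t1) @ psi0)
```

The composed evolution is unitary but differs from the single static evolution over the same total time, which is exactly the interference resource exploited in this section.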
Using the Laplace transform of the system (91) with the initial condition (93), it is possible to express the dynamic amplitude of the excited state after the sudden change as where we have used the initial conditions and the solution of (91) for the initial condition (89).
There is an advantageous feature to the sudden change: since the time dependence of in (92) arises from the static amplitudes , , and at the shifted time , a consequence of the sudden change is to revive the excited-state population oscillations, which tend to disappear at long times in the static case. Hence, by applying several successive sudden changes, we should be able to maintain large-amplitude oscillations of the coherence between and . The scenario leading to the largest amplitude consists in periodic shifts of the energy detuning from to . When the initial detuning is large and we first reduce it to before it increases to , the dynamic population and the coherence, thanks to the revival of oscillations, are periodically larger than the static ones. This remarkable result occurs unexpectedly: it implies that successive abrupt changes can reverse the decay to the continuum, even though they cannot be associated with the Zeno effect: they occur at intervals much longer than the correlation (Zeno) time of the radiative continuum, which is utterly negligible ( s) [97], or even longer than the static-oscillation half period. The fact that this happens only for the rather “counter-intuitive” ordering of detuning values (from large to small then back again) is a manifestation of interference between successive static evolutions: their relative phases determine the beating between the emitted and reabsorbed (back-scattered) photon amplitudes and thereby the oscillation of .
Let us now consider the initial superposition and a nonnegligible coupling constant . In this case, the periodic dynamic population of the excited state also strongly exceeds the static one. Most importantly, the instantaneous dynamic fidelity is periodically enhanced as compared to the static one, as demonstrated numerically.
In order to use these results for quantum logic gates, let us consider the example of the dipole-dipole induced control-phase gate, which consists in shifting the phase of the target-qubit excited state by via interaction with the control qubit [10, 143, 144]. The phase shift must be accumulated gradually, to preserve the coherence of the system. We have found that ten or twenty sudden shifts of or , respectively, alternating with appropriate detuning changes, can keep the fidelity high, with little decoherence. The system begins to evolve following the “counter-intuitive” detuning sequence discussed above (not to be confused with the adiabatic STIRAP method [11, 12, 129]). As soon as two sudden changes of the detuning have been performed, the conditional phase shift of or takes place and the process is further repeated. The total gate operation is completed within the time interval of maximum fidelity. The fidelity of the system relative to its initial state during the realization of a control phase gate, with alternating detunings, is perhaps our most impressive finding. We find that the fidelity is increased using the “counterintuitive” sequence of detunings (solid line) as compared to the static (fixed) choice of maximal detuning (long-dashed line), or compared to the dynamically enhanced fidelity obtained without gate operations (dot-dashed line).
6.3. Comparison with the Weak-Coupling Regime
We have compared the results of this method, which allows for possibly strong coupling of with the continuum edge, with those of the universal formula of Section 2 (25), which expresses the decay rate of by the convolution of the modulation spectrum and the PBS coupling spectrum. We find good agreement with this formula only in the regime of weak coupling to the PBG edge, when the dimensionless detuning parameter , as expected from the limitations of the theory in Section 2.
6.4. Experimental Scenario
The following experimental scenario may be envisioned for demonstrating the proposed effect: pairs of qubits are realizable by two species of active rare-earth dopants [17, 18] or quantum dots in a photonic crystal. The transition frequency of one species is initially detuned by from the PBG edge with coupling constant and by ~3 MHz from the resonance of the other species. This is abruptly modulated by nonresonant laser pulses which exert ~3 MHz AC Stark shifts. Between successive shifts, the qubits are near resonant with their neighbours and therefore become dipole-dipole coupled, thus effecting the high-fidelity phase-control gate operation [10, 143, 144]. The required pulse rate is , much lower than the pulse rate stipulated under similar conditions by previously proposed strategies [81, 99, 118].
7. Finite-Temperature Relaxation and Decoherence Control
So far we have treated the case of an empty (zero-temperature) bath. In order to account for finite-temperature situations, where the bath state is close to a thermal (Gibbs) state, we resort to a master equation (ME) for any dynamically controlled reduced density matrix of the system [100, 124] that we have derived using the Nakajima-Zwanzig formalism [70, 146, 147, 160]. This ME becomes manageable and transparent under the following assumptions. (i) The weak-coupling limit of the system-bath interaction prevails, corresponding to the neglect of terms. This is equivalent to the Born approximation, whereby the back effect of the system on the bath and their resulting entanglement are ignored. (ii) The system and the bath states are initially factorisable. (iii) The initial mean value of vanishes.
We present the general form of the Nakajima-Zwanzig formalism and resort to the aforementioned assumptions only when necessary. Hence, the formalism may seem cumbersome, yet it can be simplified greatly if the assumptions are made from the outset (see [70]).
7.1. Explicit Equations for Factorisable Interaction Hamiltonians
We now wish to write the ME explicitly for time-dependent Hamiltonians of the following form [100]: where and are the system and bath Hamiltonians, respectively, and , the interaction Hamiltonian, is the product of operators and which act on the system and bath, respectively.
Finally, defining the correlation function for the bath, we obtain the ME for in the Born approximation as
We focus on two regimes: a two-level system coupled to either an amplitude- or phase-noise (AN or PN) thermal bath. The bath Hamiltonian (in either regime) will be explicitly taken to consist of harmonic oscillators and be linearly coupled to the system Here are the annihilation and creation operators of mode , respectively, and is the coupling amplitude to mode .
7.1.1. Amplitude-Noise Regime
We first consider the AN regime of a two-level system coupled to a thermal bath. We will use off-resonant dynamic modulations, resulting in AC-Stark shifts. The Hamiltonians then assume the following form: where is the dynamical AC-Stark shift, is the time-dependent modulation of the interaction strength, and the Pauli matrix .
7.1.2. Phase-Noise Regime
Next, we consider the PN regime of a two-level system coupled to a thermal bath via operator. To combat it, we will use near-resonant fields with time-varying amplitude as our control. The Hamiltonians then assume the following forms: where is the time-dependent resonant field, with real envelope , is the time-dependent modulation of the interaction strength, and .
Since we are interested in dephasing, phases due to the (unperturbed) energy difference between the levels are immaterial.
7.2. Universal Master Equation
To derive a universal ME for both amplitude- and phase-noise scenarios, we move to the interaction picture and rotate to the appropriate diagonalizing basis, where the appropriate basis for the AN case of (100) is while for the PN case of (102) the basis is
In this rotated and tilted frame, where is the phase-modulation due to the time-dependent control in the system Hamiltonian.
Allowance for arbitrary time-dependent intervention in the system and interaction dynamics , , respectively, yields the following universal ME for a dynamically controlled decohering system [100, 124]: Here is the modulated interaction operator, where denotes the rotated and tilted frame, and . The modulation function is given by for both AN and PN. It is important to note that is a function of (not of ): this convolutionless form of the ME is fully non-Markovian to second order in , as proven exactly in [124].
7.3. Universal Modified Bloch Equations
The resulting modified Bloch equations, in the appropriate diagonalizing basis (see (104) for AN and (105) for PN), are given by The time-dependent relaxation rates are real, and the only difference between them is the complex conjugate of the combined modulation function, . They can be very different for a complex correlation function.
One can derive the corresponding time-averaged relaxation rates of the upper and lower states as
For both AN (see (100)) and PN (see (102)), where is the zero-temperature bath spectrum, and are the frequency-dependent density of bath modes and the transition matrix element, respectively, is the temperature-dependent bath mode population, and is the inverse temperature. Also, is the Heaviside function, that is, the zero-temperature bath spectrum is defined only for positive frequencies . Hence, the first term on the right-hand side of (114) is nonzero for positive frequencies and the second term is nonzero for negative frequencies.
For either AN or PN, we may control the decoherence by either off-resonant or near-resonant modulations, respectively. The modulation spectrum has the same form for both (see Section 7.4) as where the modulation function is given in (107) and (109). The time-dependent modulation phase factor is obtained for AN in the form of an AC-Stark shift, time-integrated over where is the Rabi frequency of the control field and is the detuning. The corresponding phase factor for PN is the integral of the Rabi frequency , that is, the pulse area of the resonant control field, (107) (Figure 4).
Figure 4: Schematic drawing of system and bath. (a) Amplitude noise (AN) (red) combatted by AC-Stark shift modulation (green). (b) Phase noise (PN) (red) combatted by resonant-field modulation (green).
Hence, upon making the appropriate substitutions, the Bloch equations (110) have the same universal form for either AN or PN. An arbitrary combination of AN and PN requires a more detailed treatment, yet the universal form is maintained.
7.3.1. Dynamically Modified Decay Rates
Since we are interested here in dynamical control of relaxation, we shall concentrate on the transition rates rather than the level shifts. The average rate of the transition and its counterpart are given by Here the upper (lower) sign corresponds to the subscript , and can be shown [161] to be nonnegative, with , and vanishes for at : . For the oscillator bath, one finds that where and is the average number of quanta in the oscillator (bath mode) with frequency .
We apply (118) to the case of coherent modulation of quasiperiodic form (see (31)). Without loss of generality, we can assume that . We then find, using (118), that the rates tend to the long-time limits where or Equation (121) shows that is given by the overlap of the modulation spectrum with the bath-CF spectrum . The limits (123) are approached when and . Here is the bath memory (correlation) time, defined as the inverse of , the spectral interval over which changes around the relevant frequencies.
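As a numerical sketch of this overlap formula, one can take a hypothetical Lorentzian bath spectrum and compare a modulation that samples the bath at its peak with one whose frequency has been shifted away from the line. All numbers (bath center, width, shift) are illustrative assumptions.

```python
import numpy as np

T, nt = 20.0, 1001
t = np.linspace(0.0, T, nt)
dt = t[1] - t[0]
w = np.linspace(0.0, 20.0, 2001)
dw = w[1] - w[0]

def G(omega, w_b=10.0, width=1.0):
    """Hypothetical Lorentzian bath-coupling spectrum (illustrative numbers)."""
    return (width / np.pi) / ((omega - w_b) ** 2 + width ** 2)

def decay_rate(eps):
    """Overlap of the modulation spectrum F_T(w) with the bath spectrum G(w):
    R = 2*pi * sum_w F_T(w) G(w) dw, F_T = |finite-time FT of eps|^2 / (2*pi*T)."""
    eps_w = (np.exp(1j * np.outer(w, t)) * eps).sum(axis=1) * dt
    F = np.abs(eps_w) ** 2 / (2.0 * np.pi * T)
    return 2.0 * np.pi * float((F * G(w)).sum() * dw)

R_resonant = decay_rate(np.exp(-1j * 10.0 * t))  # samples the bath at its peak
R_shifted = decay_rate(np.exp(-1j * 14.0 * t))   # shifted off the bath line
```

The shifted modulation yields a strongly suppressed rate, which is the mechanism exploited throughout this section; the unshifted rate approaches the Golden Rule value set by the bath spectrum at the sampled frequency.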
Had we used the standard dipolar RWA Hamiltonian in the case of an oscillator bath, dropping the antiresonant terms in , we would have arrived at the transition rates wherein the integration is performed from 0 to , rather than from to , as in (121). This means that the RWA transition rates hold for a slow modulation, when at , being peaked near . However, whenever the suppression of requires modulation at a rate comparable to , the RWA is inadequate. For instance, (120) and (124) imply that, at , the rate vanishes identically, irrespective of , in contrast to the true upward-transition rate in (121), which may be comparable to for ultrafast modulation. The difference between the RWA and non-RWA decay rates stems from the fact that the RWA implies that a downward (upward) transition is accompanied by emission (absorption) of a bath quantum, whereas the non-RWA (negative-frequency) contribution to in (121) allows for just the opposite: downward (upward) transitions that are accompanied by absorption (emission). The latter processes are possible since the modulation may cause level to be shifted below .
The validity of the (decohering) qubit model in the presence of modulation at a rate is now elucidated: it requires that , being the effective transition rate from level to any other level , and, in particular, . If are strongly suppressed by the modulation, the TLS model holds for long times.
7.3.2. Dynamically Modified Proper Dephasing
We turn now to proper dephasing when it dominates over decay. The random frequency fluctuations are typically characterized by a (single) correlation time , with ensemble mean . When the field is used only for gate operations, we assume that it does not affect proper dephasing. The ensemble average over results in with the dephasing rate The dephasing CF is the counterpart of the bath CF .
At , the decoherence rate and shift approach their asymptotic values For the validity of (127), it is necessary that We assume the secular approximation, which holds if
By analogy with (118), one can obtain that where is given by (117) with As follows from (131), is a symmetric function,
The proper dephasing rate associated with is In the presence of a constant [cw ], it is modified into For a sufficiently strong field, the dephasing rate can be suppressed by the factor . This suppression reflects the ability of strong, near-resonant Rabi splitting to shift the system out of the randomly fluctuating bandwidth, or average its effects. Quantum gate operations may be performed by slight modulations of the control field, which can flip the qubit without affecting proper dephasing. By comparison, the “bang-bang” (BB) method involving -periodic -pulses [2, 82, 84] is an analog of the above “parity kicks.” Using the analog of (121), such pulses can be shown to suppress approximately according to (135) with . This BB method requires pulsed fields with Rabi frequencies , that is, much stronger fields than the cw field in (135). Using s, cw Rabi frequencies exceeding 1 MHz achieve a significant dephasing suppression.
7.4. Modulation Arsenal
Any modulation with quasi-discrete, finite spectrum is deemed quasiperiodic, implying that it can be expanded as where are arbitrary discrete frequencies such that where is the minimal spectral interval.
One can define the long-time limit of the quasi-periodic modulation, when where is the bath-memory (correlation) time, defined as the inverse of the largest spectral interval over which and change appreciably near the relevant frequencies . In this limit, the average decay rate is given by (Figure 5(a)) as
Figure 5: Spectral representation of the bath coupling, , and the modulation, . (a) General quasi-periodic modulation, with peaks at . (b) On-off modulation, with repetition rate for . (c) Impulsive phase modulation, (-pulses), . (d) Monochromatic modulation, or impulsive phase modulation, with small phase shifts, , and repetition rate.
7.4.1. Phase Modulation (PM) of the Coupling
Monochromatic Perturbation. Let Then where is a frequency shift, induced by the AC Stark effect (in the case of atoms) or by the Zeeman effect (in the case of spins). In principle, such a shift may drastically enhance or suppress relative to the Golden Rule decay rate, that is, the decay rate without any perturbation as
Equation (40) provides the maximal change of achievable by an external perturbation, since it does not involve any averaging (smoothing) of incurred by the width of : the modified can even vanish, if the shifted frequency is beyond the cutoff frequency of the coupling, where (Figure 5(d)). This would accomplish the goal of dynamical decoupling [81–87, 118, 162]. Conversely, the increase of due to a shift can be much greater than that achievable by repeated measurements, that is, the anti-Zeno effect [97, 98, 101, 102]. In practice, however, AC Stark shifts are usually small for (cw) monochromatic perturbations, whence pulsed perturbations should often be used, resulting in multiple shifts, as per (139).
Dynamical Decoupling. Dynamical decoupling (DD) is one of the best known approaches to combat decoherence, especially dephasing [79–92, 95, 96]. A full description of this approach is beyond the scope of this work, but we present its most essential aspects and how it can be incorporated into the general framework described above.
7.4.2. Standard DD
DD is based on the notion that the phase-modulation control fields are short and strong enough such that the free evolution can be neglected during these pulses. Hence, the propagator can be decomposed into the free propagator, followed by the control-field propagator, free propagator, and so forth. The control fields used result in the periodic accumulation of -phases; that is, each pulse has a total area of , whose effects are similar to time-reversal or the spin-echo technique [94]. Thus, the free evolution propagator after the control -pulse negates the effects of the free evolution propagator prior to the control fields, up to first order of the noise in the Magnus expansion.
While the formalism of dynamical decoupling is quite different from the formalism presented here, it can be easily incorporated into the general framework of universal dynamical decoherence control by introducing impulsive phase modulation. Let the phase of the modulation function periodically jump by an amount at times . Such modulation can be achieved by a train of identical, equidistant, narrow pulses of nonresonant radiation, which produce pulsed AC Stark shifts of . When , this modulation corresponds to dynamical-decoupling (DD) pulses.
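The effect of such impulsive phase modulation on the modulation spectrum can be sketched numerically (with illustrative parameters): for pi jumps every interval tau, the spectral weight moves from zero frequency out to the odd harmonics of pi/tau, away from a low-frequency bath.

```python
import numpy as np

# Impulsive phase modulation: the phase jumps by pi every tau (illustrative
# parameters), so eps(t) alternates between +1 and -1 like a DD pulse train.
T, nt, tau = 40.0, 2001, 1.0
t = np.linspace(0.0, T, nt)
dt = t[1] - t[0]
phase = np.pi * np.floor(t / tau)
eps = np.exp(1j * phase)

# Modulation spectrum F_T(w): weight is pushed from w = 0 out to the
# odd harmonics of pi/tau, away from a low-frequency bath.
w = np.linspace(-8.0, 8.0, 801)
eps_w = (np.exp(1j * np.outer(w, t)) * eps).sum(axis=1) * dt
F = np.abs(eps_w) ** 2 / (2.0 * np.pi * T)
```

The spectrum is nearly zero at w = 0 and peaked at w = ±pi/tau, so a bath concentrated at low frequencies is sampled very weakly, which is the spectral picture of DD suppression.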
For sufficiently long times (see (138)), one can use (139), with
For small phase shifts, , the peak dominates, whereas In this case, one can retain only the term in (139), unless is changing very fast with frequency. Then the modulation acts as a constant shift (Figure 5(d)) as
As increases, the difference between the and peak heights diminishes, vanishing for .
The Physics Behind Schrödinger's Cat Paradox
Google honors the physicist today with a Doodle. We explain the science behind his famous paradox.
Erwin Schrödinger, one of the fathers of quantum mechanics, is famed for a number of important contributions to physics, especially the Schrödinger equation, for which he received the Nobel Prize in Physics in 1933.
His feline paradox thought experiment has become a pop culture staple, but it was Erwin Schrödinger's work in quantum mechanics that cemented his status within the world of physics.
The Nobel prize-winning physicist would have turned 126 years old on Monday and to celebrate, Google honored his birth with a cat-themed Doodle, which pays tribute to the paradox Schrödinger proposed in 1935 in the following theoretical experiment.
A cat is placed in a steel box along with a Geiger counter, a vial of poison, a hammer, and a radioactive substance. When the radioactive substance decays, the Geiger counter detects it and triggers the hammer to release the poison, which subsequently kills the cat. The radioactive decay is a random process, and there is no way to predict when it will happen. Physicists say the atom exists in a state known as a superposition—both decayed and not decayed at the same time.
Until the box is opened, an observer doesn't know whether the cat is alive or dead—because the cat's fate is intrinsically tied to whether or not the atom has decayed and the cat would, as Schrödinger put it, be "living and dead ... in equal parts" until it is observed.
In other words, until the box is opened, the cat's state is completely unknown, and therefore the cat is considered to be both alive and dead at the same time until it is observed.
"If you put the cat in the box, and if there's no way of saying what the cat is doing, you have to treat it as if it's doing all of the possible things—being living and dead—at the same time," explains Eric Martell, an associate professor of physics and astronomy at Millikin University. "If you try to make predictions and you assume you know the status of the cat, you're [probably] going to be wrong. If, on the other hand, you assume it's in a combination of all of the possible states that it can be, you'll be correct."
Upon looking at the cat, an observer would immediately know whether the cat was alive or dead and the "superposition" of the cat—the idea that it was in both states—would collapse into either the knowledge that "the cat is alive" or "the cat is dead," but not both.
Schrödinger developed the paradox, says Martell, to illustrate a point in quantum mechanics about the nature of wave particles.
"What we discovered in the late 1800s and early 1900s is that really, really tiny things didn't obey Newton's Laws," he says. "So the rules that we used to govern the motion of a ball or person or car couldn't be used to explain how an electron or atom works."
At the very heart of quantum theory—which is used to describe how subatomic particles like electrons and protons behave—is the idea of a wave function. A wave function describes all of the possible states that such particles can have, including properties like energy, momentum, and position.
"The wave function is a combination of all of the possible wave functions that exist," says Martell. "A wave function for a particle says there's some probability that it can be in any allowed position. But you can't necessarily say you know that it's in a particular position without observing it. If you put an electron around the nucleus, it can have any of the allowed states or positions, unless we look at it and know where it is."
That's what Schrödinger was illustrating with the cat paradox, he says.
"In any physical system, without observation, you cannot say what something is doing," says Martell. "You have to say it can be any of these things it can be doing—even if the probability is small."
Comment on This Story |
f71e33e490bf5ced | Dismiss Notice
Join Physics Forums Today!
Very basic questions - Hamiltonian
1. Jun 26, 2008 #1
After several failures in the past (why does the universe have to be so complicated?!), I'm once again trying to learn to understand the basics of QM, out of sheer frustration with not knowing what the heck physicists are talking about all the time. I know, I still have a long way to go.
Anyway, I just started again and I'm already confused about something.
I'm led to understand that the rules governing the time evolution of the quantum state, together with the definitions of the observable quantities (in the form of their associated operators) take the place of Newton's laws of motion and the classical definitions of the quantities. Is that about right?
If so, why can't I find a list of the definitions of the operators of all the usual physics quantities written somewhere? I've found the operators for momentum, position, and spin, and they seem to make sense to me (with what little I know), but I can't find the definition of the Hamiltonian, which is needed for me to know how energy is defined in quantumland. More importantly, since the entire rules for how stuff happens are encoded in the Schrödinger equation, which relies on the undefined Hamiltonian, I can't imagine (or compute) how anything happens at all.
Surely it has a definition somewhere? I mean, it can't just be that I make one up... thereby making up any laws of motion I want for my universe...
Thanks, and apologies for my utter n00bity.
3. Jun 26, 2008 #2
The energy of a particle in quantum mechanics is the same as in classical mechanics:
[tex] H = \frac{p^2}{2m} + V(x) [/tex].
Just replace the classical quantities with the respective operators, and you get the hamiltonian operator.
4. Jun 26, 2008 #3
All classical dynamical variables may be defined in terms of position and momentum. Therefore you have already found what you need.
Regarding the Hamiltonian.....
dx has shown you the general form of the Hamiltonian, where the first term is simply the kinetic energy (again, defined in terms of momentum as mentioned above) and the second term is the potential. In a sense, you do get to "make up" your Hamiltonian. The potential is determined by you, God, and/or may be inherent to your system.
For example, in the case of atomic hydrogen (one proton, one electron) the potential is simply the Coulomb potential. If you would like to set up your own crazy experiment with a charged particle placed off-center in an electric octupole, then you will have a more involved potential making for a more involved Hamiltonian.
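As a quick numerical sketch (my addition, not part of the original reply; assumes NumPy is available): the recipe "kinetic term plus a potential you choose" can be turned into a matrix on a grid, whose eigenvalues are the allowed energies. The harmonic potential here is just an illustrative choice.

```python
import numpy as np

# Sketch: discretize H = p^2/2m + V(x) on a grid, with the kinetic term
# -(hbar^2/2m) d^2/dx^2 as a finite-difference second derivative.
# Units chosen so hbar = m = 1; V is a harmonic potential with omega = 1.
hbar = m = omega = 1.0
N, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

off = np.full(N - 1, 1.0)
T = -(hbar**2 / (2 * m * dx**2)) * (
    np.diag(off, -1) - 2 * np.eye(N) + np.diag(off, 1)
)
V = np.diag(0.5 * m * omega**2 * x**2)  # the potential you "make up"
H = T + V

E = np.linalg.eigvalsh(H)[:3]
print(E)  # close to the exact oscillator levels 0.5, 1.5, 2.5
```

Swapping in the Coulomb potential (with appropriate care near the origin) would give the hydrogen-like case mentioned above.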
5. Jun 26, 2008 #4
Thanks. But how/why can I replace quantities with operators? As I understood it, the relationship between a quantity and the operator is complicated and indirect: the quantity lives in the eigenvalues of the operator. I wouldn't expect to be able to add two operators in general and get the operator corresponding to a quantity that is the sum of the quantities represented by those two operators... would I? *brain smokes*
And I guess the operator for that potential V(x) is just the operator that multiplies V(x) by phi(x) for all x... that appears to work out right.
6. Jun 26, 2008 #5
Not exactly sure what you mean by "quantities." Physical observables are represented by operators. All of your classical observables may be defined by position and momentum, so that's all you need (spin is another story, but that doesn't have a classical counterpart).
What you might be getting tripped up on is the fact that operating the Hamiltonian on the wavefunction yields an eigenvalue equation. This is only true when you take out time dependence, hence the time-independent Schrödinger equation, which is actually DERIVED from the more general time-dependent Schrödinger equation. From this derivation we see that the time-independent wavefunctions are actually energy eigenfunctions.
The same does not hold true for either momentum or position. The reason for this is that you may not have a determinate state of either momentum or position (a la Heisenberg). So your eigenvalue equations will not exactly behave in the simple manner we have discussed above.
7. Jun 26, 2008 #6
I believe that this is not something that can be proved through an argument or derivation. I think that this is embedded in the postulates of quantum mechanics.
Only if there is a corresponding classical equivalent.
The x in V(x) is replaced with the operator X. This operator is defined such that X|[itex]\Psi[/itex]> = x|[itex]\Psi[/itex]>. V(X) is defined by its series expansion in the operator X.
8. Jun 26, 2008 #7
I was trying to put together another response, but I think my understanding of math is just not up to the task, which I think points to the fundamental problem. Maybe I still don't even understand the framework of the basic formulation of the theory. Can anyone recommend maybe what kinds of mathematics I should study to learn enough to approach basic QM? Any recommendations for URLs or, less desirably, books that might be a good first step?
9. Jun 26, 2008 #8
User Avatar
Homework Helper
Gold Member
You can study elementary QM using just the Schrodinger equation if you know some basic calculus and differential equations. If you want to study the full theory of QM you need to know at least advanced calculus and linear algebra (and also a good amount of physics to appreciate it fully).
10. Jun 26, 2008 #9
OK, well I think my vector cal is pretty good and my linear algebra is decent and improving, so I would think I should be able to understand this. But I guess I'm not getting the concept of an observable or an operator or something because so far it isn't making sense and I'm not even effectively communicating what the problem is.
11. Jun 26, 2008 #10
User Avatar
Homework Helper
Gold Member
If you're comfortable with vector calculus and linear algebra, you should be able to start learning quantum mechanics.
In classical mechanics, systems have states which are points in a set. For example, a particle on a line has a position which is an element of R. The observables in classical mechanics are basically just the states, or numbers representing the states. In quantum mechanics, these points in sets are replaced by vectors in a vector space. The observables of quantum mechanics are not the states, but operators on this vector space. It's normal to feel a little uncomfortable with this at the beginning, but I suggest you not worry about this and go ahead and learn quantum mechanics. You can worry about what exactly it means and how you interpret it after you learn it.
12. Jun 26, 2008 #11
Wait, you're saying the observable somehow *is* the operator? I don't even get what that means... I need a better book.
13. Jun 26, 2008 #12
You know what, never mind. I'm chalking this up as another abortive attempt. I'll wait a few years and try again. Sorry about this.
14. Jun 26, 2008 #13
User Avatar
Homework Helper
Gold Member
No I'm sorry, I wasn't clear. Let's consider a particle moving on a line. In classical mechanics, it is assumed that at every instant of time t, the particle has a certain position on the line x. That is its state at time t. In quantum mechanics, the thing that is known at every instant of time is not its position, but something else. This is the state vector. This is a complex function defined on the line: [tex] |\psi(x)> [/tex]. That is the analog of the classical state.
Now I will give a certain interpretation, which is not accepted by everyone. You can think of this state as somehow representing your knowledge about the particle. This is loosely just a probability density on the line. Now there are certain states which correspond to a certain knowledge that the particle is at a particular position. These are the eigenstates of the position operator [tex] \hat{X} [/tex]. In the case of a particle moving on the line, they are the Dirac delta functions. There are also momentum eigenstates, which are "states of knowledge" where the momentum is exactly known. These are the eigenvectors of the momentum operator which are
[tex] e^{i \frac{p}{h}x}[/tex].
If your "state of knowledge" is one of the momentum eigenstates, then it means that you know certainly what the momentum will be when you measure it.
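A quick numerical check of the claim above (my addition, not from the thread; assumes NumPy): the plane wave e^{ikx} really is an eigenfunction of the momentum operator p = -iħ d/dx, with eigenvalue ħk.

```python
import numpy as np

# Apply p = -i hbar d/dx to e^{ikx} on a grid and compare with hbar*k*psi.
# Units chosen so hbar = 1.
hbar, k = 1.0, 2.5
x = np.linspace(0.0, 10.0, 100001)
psi = np.exp(1j * k * x)

p_psi = -1j * hbar * np.gradient(psi, x)  # numerical momentum operator
# Away from the grid edges, p_psi should equal hbar*k * psi pointwise.
print(np.allclose(p_psi[1:-1], hbar * k * psi[1:-1], atol=1e-4))  # True
```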
15. Jun 26, 2008 #14
Given a classical observable f(q,p) there is a self-adjoint operator F(Q,P) acting on Hilbert space that represents the expected outcomes.
For simple cases
[tex] H = \frac{p^2}{2m} + V(q) [/tex].
this is easily understood. Unfortunately what to do for the
[tex]q^np^m [/tex]
terms is a little more complicated and not fully understood. A common prescription is Weyl (symmetric) ordering, which averages over all orderings of the noncommuting operators.
16. Jun 26, 2008 #15
User Avatar
Homework Helper
Gold Member
No! don't give up. Quantum mechanics is fascinating. Maybe you're just not reading the right books or something. I suggest the first few chapters of Feynman Lectures vol. 3 for an introduction. Also, Leonard Susskind (one of the founders of string theory) has an excellent set of video lectures on quantum theory you can find on YouTube. Then try J. J. Sakurai's book if you are confident about your maths. Otherwise, try Shankar or Griffiths.
17. Jun 27, 2008 #16
User Avatar
Staff Emeritus
Science Advisor
Gold Member
One of the postulates of QM is that states are represented by vectors in some vector space. A vector is often written using the "ket" notation: [itex]|\alpha\rangle[/itex]. If you describe the system as being in state [itex]|\alpha\rangle[/itex], then an observer who's translated in time relative to you would describe the system as being in another state [itex]|\alpha'\rangle[/itex]. There must be an operator U(t) that takes [itex]|\alpha\rangle[/itex] to [itex]|\alpha'\rangle[/itex], and it must be an exponential because it has to satisfy U(t+t')=U(t)U(t'), so we can write it as [itex]U(t)=e^{At}[/itex], where A is some operator. (Its existence is guaranteed by the condition U(t+t')=U(t)U(t')). It's convenient to choose U(t) to be unitary so that it doesn't change the norm of the state vectors it acts on. The requirement that U(t) be unitary implies that A must be anti-hermitian (i.e. [itex]A^\dagger=-A[/itex]). We prefer to work with hermitian operators because their eigenvalues are real, so let's define H=iA. H is now hermitian, and [itex]U(t)=e^{-iHt}[/itex].
That's the definition of the Hamiltonian. It's the generator of translations in time. The momentum operators can be defined the same way as generators of translations in space, and the spin operators can be defined as generators of rotations.
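The two defining properties of U(t) above are easy to verify numerically (my addition, not part of the post; assumes NumPy, and uses a small made-up Hermitian matrix as the "Hamiltonian"):

```python
import numpy as np

# For any Hermitian H, U(t) = exp(-iHt) is unitary and obeys the
# group law U(t+t') = U(t) U(t').
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2            # made-up Hermitian "Hamiltonian"
w, v = np.linalg.eigh(H)

def U(t):
    # exp(-iHt) built from the eigendecomposition H = v diag(w) v^dagger
    return v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T

t1, t2 = 0.7, 1.3
print(np.allclose(U(t1 + t2), U(t1) @ U(t2)))          # True: group law
print(np.allclose(U(t1).conj().T @ U(t1), np.eye(4)))  # True: unitarity
```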
18. Jun 27, 2008 #17
User Avatar
Staff Emeritus
Science Advisor
Gold Member
Linear algebra is by far the most important kind, but you can probably just study those parts you need when you run into difficulties in your QM book. You obviously have to understand complex numbers too.
Yes, an observable is an operator.
19. Jun 27, 2008 #18
OK, let's keep going I guess. Most of what's being said here are things I already understand, or at least think I understand. It's the details that are confusing me.
Let me try stating the path I've followed and see if there are any objections. Maybe this will let me debug my brain glitch.
1. The state of a particle is described by a set of complex numbers called a wavefunction. From one point of view, there can be said to be N complex numbers for each point in space, where N is the number of legal spin states. In this representation (the position/spin eigenstate basis) the probability of observing the particle with a particular combination of spin and position equals the squared magnitude of the complex number for that particular combination. Spin and position are therefore two quantities that the wavefunction tells us something about.
2. The procedure for extracting information about other quantities (that is, quantities other than position and spin, which were very convenient in this basis) is more involved. To find out what the probability is of measuring the system to have a value p for a quantity (say, momentum), I have to solve the equation P Phi = p Phi, where Phi is an unknown vector of zillions of complex numbers and P is the operator corresponding to the quantity in question (momentum in this case). The result will be a wavefunction Phi that is the eigenstate of the system for the momentum value p. So now I take the inner product of Phi with the actual wavefunction, and the squared magnitude of the result will be the probability of measuring the particle to have a momentum of p.
*catches breath*
3. Now that our wavefunction has some defined connection to measurable quantities, we have to find the rules for how wavefunctions behave. These rules are defined by saying that every wavefunction is a superposition of energy eigenstates (obviously). In each of those eigenstates, the components all keep the same magnitude with time, but their phases all spin around (counterclockwise!) at a frequency proportional to the value of energy for this eigenstate (that is, the eigenvalue).
4. But wait! This doesn't help us yet! In order for that to have any bearing on what happens to the observable quantities in time, we need to know what the energy eigenstates look like in terms of the basis we were using at the beginning (the position/spin eigenstate basis), and that can only be done if we know the energy operator so we can solve an equation as in #2. So apparently the total energy operator is related to the momentum operator the same way that the energy quantity/eigenvalue/<insert jargon here> is related to the momentum <whatever>. And that is true because...?
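The recipe in step 2 can be checked in miniature (my addition, not part of the post; the observable and state here are made up, and NumPy is assumed): solve the eigenvalue problem, project the state onto each eigenvector, and square the amplitudes.

```python
import numpy as np

# Finite-dimensional sketch of step 2: Born-rule probabilities from
# the eigenvectors of a Hermitian "observable".
rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
P = (M + M.conj().T) / 2                  # Hermitian stand-in observable

psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)                # normalized state vector

eigvals, eigvecs = np.linalg.eigh(P)      # solve P phi = p phi
amps = eigvecs.conj().T @ psi             # inner products <phi_p|psi>: amplitudes
probs = np.abs(amps)**2                   # squared magnitudes = probabilities
print(probs.sum())                        # ~1.0: probabilities sum to one
```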
20. Jun 30, 2008 #19
I have vague memories but...
1) express the quantity/observable in terms of coordinates x, y, z and momentum [tex]p_x[/tex], [tex]p_y[/tex], [tex]p_z[/tex]
2) the operator corresponding to a simple coordinate is the multiplication by such coordinate
3) the operator corresponding to a momentum is [tex]-i \frac{h}{2\pi} \frac{\partial}{\partial x}[/tex]
Therefore, for example, if you need the operator for kinetic energy,
then you substitute each p with the corresponding operator,
square it to get
[tex]-\frac{h^2}{4\pi^2} \frac{\partial^2}{\partial x^2}[/tex]
and the final operator will be
[tex]-\frac{h^2}{8m\pi^2} \nabla^2[/tex]
21. Jun 30, 2008 #20
Interesting. That derivation appears to implicitly assume that an operator applied twice corresponds to a quantity that is the square of the quantity represented by the original operator. Add that to the list of fascinating and unintuitive properties of quantum operators that have yet to be explained to me in any form I can understand. The other thing on the list right now is that adding two operators produces the operator corresponding to the quantity that is the sum of the quantities represented by the original two operators.
If there is a trend here, it sounds like it would be that whatever you do to the operators will always "look the same" notationally as whatever happens to the quantities. I wonder how this mysterious connection is achieved, and how far it goes. I'm still thinking about it. Maybe this new information will help me come up with the answer.
I'm trying to gather enough information that I can infer a full list of the basic postulates of QM. That way, I will be able to understand everything else based on how it is derived from those postulates. I don't know yet whether this particular question is related to some more postulates that I don't know, but it sounds interesting enough that it might be.
Normal mode
A flash photo of a cup of black coffee vibrating in normal modes
Mode numbers
A mode of vibration is characterized by a modal frequency and a mode shape. It is numbered according to the number of half waves in the vibration. For example, if a vibrating beam with both ends pinned displayed a mode shape of half of a sine wave (one peak on the vibrating beam) it would be vibrating in mode 1. If it had a full sine wave (one peak and one valley) it would be vibrating in mode 2.
In a system with two or more dimensions, such as the pictured disk, each dimension is given a mode number. Using polar coordinates, we have a radial coordinate and an angular coordinate. If you measured from the center outward along the radial coordinate you would encounter a full wave, so the mode number in the radial direction is 2. The other direction is trickier, because only half of the disk is considered due to the antisymmetric (also called skew-symmetry) nature of a disk's vibration in the angular direction. Thus, measuring 180° along the angular direction you would encounter a half wave, so the mode number in the angular direction is 1. So the mode number of the system is 2-1 or 1-2, depending on which coordinate is considered the "first" and which is considered the "second" coordinate (so it is important to always indicate which mode number matches with each coordinate direction).
Each mode is entirely independent of all other modes. In general, different modes have different frequencies (with lower modes having lower frequencies) and different mode shapes.
A mode shape of a drum membrane, with nodal lines shown in pale green.
In a one-dimensional system at a given mode the vibration will have nodes, or places where the displacement is always zero. These nodes correspond to points in the mode shape where the mode shape is zero. Since the vibration of a system is given by the mode shape multiplied by a time function, the displacement of the node points remain zero at all times.
When expanded to a two dimensional system, these nodes become lines where the displacement is always zero. If you watch the animation above you will see two circles (one about half way between the edge and center, and the other on the edge itself) and a straight line bisecting the disk, where the displacement is close to zero. In an idealized system these lines equal zero exactly, as shown to the right.
Coupled oscillators
Consider two equal bodies (not affected by gravity), each of mass, m, attached to three springs, each with spring constant, k. They are attached in the following manner:
Coupled Harmonic Oscillator.svg
where the edge points are fixed and cannot move. We'll use x1(t) to denote the horizontal displacement of the left mass, and x2(t) to denote the displacement of the right mass.
If we denote acceleration (the second derivative of x(t) with respect to time) as \scriptstyle \ddot x, the equations of motion are:
m \ddot x_1 = - k x_1 + k (x_2 - x_1) = - 2 k x_1 + k x_2 \,\!
m \ddot x_2 = - k x_2 + k (x_1 - x_2) = - 2 k x_2 + k x_1 \,\!
Since we expect oscillatory motion of a normal mode (where ω is the same for both masses), we try:
x_1(t) = A_1 e^{i \omega t} \,\!
x_2(t) = A_2 e^{i \omega t} \,\!
Substituting these into the equations of motion gives us:
-\omega^2 m A_1 e^{i \omega t} = - 2 k A_1 e^{i \omega t} + k A_2 e^{i \omega t} \,\!
-\omega^2 m A_2 e^{i \omega t} = k A_1 e^{i \omega t} - 2 k A_2 e^{i \omega t} \,\!
Since the exponential factor is common to all terms, we omit it and simplify:
(\omega^2 m - 2 k) A_1 + k A_2 = 0 \,\!
k A_1 + (\omega^2 m - 2 k) A_2 = 0 \,\!
And in matrix representation:
\begin{bmatrix}
\omega^2 m - 2 k & k \\
k & \omega^2 m - 2 k
\end{bmatrix} \begin{pmatrix} A_1 \\ A_2 \end{pmatrix} = 0
For this to have a nontrivial solution, the matrix on the left must be singular, i.e. not invertible; otherwise one could multiply both sides of the equation by the inverse and conclude that the amplitude vector is zero. It follows that the determinant of the matrix must be equal to 0, so:
(\omega^2 m - 2 k)^2 - k^2 = 0 \,\!
Solving for \omega, we have two positive solutions:
\omega_1 = \sqrt{\frac{k}{m}},
\omega_2 = \sqrt{\frac{3 k}{m}}.
If we substitute ω1 into the matrix and solve for (A1, A2), we get (1, 1). If we substitute ω2, we get (1, −1). (These vectors are eigenvectors, and the frequencies are eigenvalues.)
The first normal mode is:
\vec \eta_1 = \begin{pmatrix} x^1_1(t) \\ x^1_2(t) \end{pmatrix} = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} \cos{(\omega_1 t + \varphi_1)}
Which corresponds to both masses moving in the same direction at the same time.
The second normal mode is:
\vec \eta_2 = \begin{pmatrix} x^2_1(t) \\ x^2_2(t) \end{pmatrix} = c_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix} \cos{(\omega_2 t + \varphi_2)}
This corresponds to the masses moving in the opposite directions, while the center of mass remains stationary.
The general solution is a superposition of the normal modes where c1, c2, φ1, and φ2, are determined by the initial conditions of the problem.
The process demonstrated here can be generalized and formulated using the formalism of Lagrangian mechanics or Hamiltonian mechanics.
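The derivation above can be sketched numerically (assuming NumPy; this is an illustrative check, not part of the article): the system reduces to the eigenproblem (K/m)a = ω²a with the stiffness matrix K read off from the equations of motion.

```python
import numpy as np

# Two equal masses, three equal springs: recover the normal modes.
m, k = 1.0, 1.0
K = np.array([[2 * k, -k],
              [-k, 2 * k]])             # stiffness matrix from the EOM

omega2, modes = np.linalg.eigh(K / m)   # eigenvalues sorted ascending
print(np.sqrt(omega2))                  # [1.0, 1.732...]: omega_1 = sqrt(k/m), omega_2 = sqrt(3k/m)
print(modes)                            # columns ~ (1,1)/sqrt(2) and (1,-1)/sqrt(2), up to sign
```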
Standing waves
A standing wave is a continuous form of normal mode. In a standing wave, all the space elements (i.e. (x, y, z) coordinates) are oscillating at the same frequency and in phase (reaching the equilibrium point together), but each has a different amplitude.
The general form of a standing wave is:
\Psi(t) = f(x,y,z) (A\cos(\omega t) + B\sin(\omega t))
where ƒ(x, y, z) represents the dependence of amplitude on location and the cosine/sine are the oscillations in time.
Physically, standing waves are formed by the interference (superposition) of waves and their reflections (although one may also say the opposite: that a moving wave is a superposition of standing waves). The geometric shape of the medium determines what the interference pattern will be, and thus determines the ƒ(x, y, z) form of the standing wave. This space-dependence is called a normal mode.
Usually, for problems with continuous dependence on (x, y, z) there is no single or finite number of normal modes, but there are infinitely many. If the problem is bounded (i.e. defined on a finite section of space) there are countably many normal modes (a discrete infinity, usually numbered n = 1, 2, 3, ...). If the problem is not bounded, there is a continuous spectrum of normal modes.
Elastic solids
See: Einstein solid and Debye model
In any solid at any temperature, the primary particles (e.g. atoms or molecules) are not stationary, but rather vibrate about mean positions. In insulators the capacity of the solid to store thermal energy is due almost entirely to these vibrations. Many physical properties of the solid (e.g. modulus of elasticity) can be predicted given knowledge of the frequencies with which the particles vibrate. The simplest assumption (by Einstein) is that all the particles oscillate about their mean positions with the same natural frequency ν. This is equivalent to the assumption that all atoms vibrate independently with a frequency ν. Einstein also assumed that the allowed energy states of these oscillations are harmonics, or integral multiples of hν. The spectrum of waveforms can be described mathematically using a Fourier series of sinusoidal density fluctuations (or thermal phonons).
The fundamental and the first six overtones of a vibrating string. The mathematics of wave propagation in crystalline solids consists of treating the harmonics as an ideal Fourier series of sinusoidal density fluctuations (or atomic displacement waves).
Debye subsequently recognized that each oscillator is intimately coupled to its neighboring oscillators at all times. Thus, by replacing Einstein's identical uncoupled oscillators with the same number of coupled oscillators, Debye correlated the elastic vibrations of a one-dimensional solid with the number of mathematically special modes of vibration of a stretched string (see figure). The pure tone of lowest pitch or frequency is referred to as the fundamental and the multiples of that frequency are called its harmonic overtones. He assigned to one of the oscillators the frequency of the fundamental vibration of the whole block of solid. He assigned to the remaining oscillators the frequencies of the harmonics of that fundamental, with the highest of all these frequencies being limited by the motion of the smallest primary unit.
The normal modes of vibration of a crystal are in general superpositions of many overtones, each with an appropriate amplitude and phase. Longer wavelength (low frequency) phonons are exactly those acoustical vibrations which are considered in the theory of sound. Both longitudinal and transverse waves can be propagated through a solid, while, in general, only longitudinal waves are supported by fluids.
In the longitudinal mode, the displacement of particles from their positions of equilibrium coincides with the propagation direction of the wave. Mechanical longitudinal waves have been also referred to as compression waves. For transverse modes, individual particles move perpendicular to the propagation of the wave.
According to quantum theory, the mean energy of a normal vibrational mode of a crystalline solid with characteristic frequency ν is:
E(v)=\frac{hv}{2}+\frac{hv}{e^{\frac{hv}{kT}}-1}
The term hν/2 represents the "zero-point energy", or the energy which an oscillator will have at absolute zero. E(ν) tends to the classic value kT at high temperatures.
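Both limits are easy to confirm numerically (my addition, not part of the article; assumes NumPy, and takes the standard Planck-oscillator form E(ν) = hν/2 + hν/(e^{hν/kT} − 1), in units where hν = 1):

```python
import numpy as np

# Mean energy of one vibrational mode vs. temperature, in units of h*nu.
def mean_energy(kT, hnu=1.0):
    # zero-point term + Planck (Bose-Einstein) thermal term
    return hnu / 2 + hnu / np.expm1(hnu / kT)

for kT in (0.1, 1.0, 100.0):
    print(kT, mean_energy(kT))
# At kT = 100 the result is ~100 (the classical kT); at kT = 0.1 it is
# near the zero-point value 0.5.
```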
Knowing the thermodynamic formula
S=-\frac{\partial F}{\partial T},
the entropy per normal mode is:
S(v)=\frac{hv}{T}\,\frac{1}{e^{\frac{hv}{kT}}-1}-k\log \left(1-e^{-\frac{hv}{kT}}\right)
The free energy is:
F(v)=E-TS=kT\log \left(1-e^{-\frac{hv}{kT}}\right)
which, for kT >> hv, tends to:
F(v)=kT\log \left(\frac{hv}{kT}\right)
In order to calculate the internal energy and the specific heat, we must know the number of normal vibrational modes with a frequency between the values ν and ν + dν. Let this number be f (ν)dν. Since the total number of normal modes is 3N, the function f (ν) is given by:
\int f(v)\,dv=3N
The integration is performed over all frequencies of the crystal. Then the internal energy U will be given by:
U=\int f(v)E(v)\,dv
Quantum mechanics
In quantum mechanics, a state |\psi\rangle of a system is described by a wavefunction \psi(x, t) which solves the Schrödinger equation. The square of the absolute value of \psi, i.e.
P(x,t) = |\psi (x,t)|^2
is the probability density of measuring the particle at position x at time t.
Usually, when some sort of potential is involved, the wavefunction is decomposed into a superposition of energy eigenstates, each oscillating with frequency \omega_n = E_n / \hbar. Thus, we may write
|\psi (t) \rangle = \sum_n |n\rangle \left\langle n | \psi ( t=0) \right\rangle e^{-iE_nt/\hbar}
The eigenstates have a physical meaning beyond being an orthonormal basis. When the energy of the system is measured, the wavefunction collapses into one of its eigenstates, and so the particle's wavefunction is described by the pure eigenstate corresponding to the measured energy.
Normal modes are generated in the earth from long wavelength seismic waves from large earthquakes interfering to form standing waves.
For an elastic, isotropic, homogeneous sphere, spheroidal, toroidal and radial (or breathing) modes arise. Spheroidal modes only involve P and SV waves (like Rayleigh waves) and depend on overtone number n and angular order l, but are degenerate in azimuthal order m. Increasing l concentrates the fundamental branch closer to the surface, and at large l this tends to Rayleigh waves. Toroidal modes only involve SH waves (like Love waves) and do not exist in the fluid outer core. Radial modes are just a subset of spheroidal modes with l = 0. The degeneracy doesn't exist on Earth, as it is broken by rotation, ellipticity and 3D heterogeneous velocity and density structure.
We either assume that each mode can be isolated (the self-coupling approximation) or that many modes close in frequency resonate (the cross-coupling approximation). Self-coupling changes just the phase velocity and not the number of waves around a great circle, resulting in a stretching or shrinking of the standing wave pattern. Cross-coupling can be caused by the rotation of the Earth, leading to mixing of fundamental spheroidal and toroidal modes, or by aspherical mantle structure or the Earth's ellipticity.
The free particle: probability current
Required math: calculus
Required physics: Schrödinger equation
Reference: Griffiths, David J. (2005), Introduction to Quantum Mechanics, 2nd Edition; Pearson Education – Problem 2.19.
The rate of change of probability of a particle in a given range of {x} can be written as the difference in probability current at the two ends. The current is defined as
\displaystyle J(x,t)\equiv\frac{i\hbar}{2m}\left(\frac{\partial\Psi^*}{\partial x}\Psi-\frac{\partial\Psi}{\partial x}\Psi^*\right) \ \ \ \ \ (1)
For the free particle, a stationary state is given by
\displaystyle \Psi(x,t)=Ae^{ikx}e^{-i\hbar k^{2}t/2m} \ \ \ \ \ (2)
The probability current for this state is found by working out the derivative:
\displaystyle \frac{\partial\Psi}{\partial x} = ikAe^{ikx}e^{-i\hbar k^{2}t/2m} \ \ \ \ \ (3)
\displaystyle = ik\Psi \ \ \ \ \ (4)
So we get
\displaystyle J(x,t) = \frac{i\hbar k}{2m}\left(-i\left|\Psi\right|^{2}-i\left|\Psi\right|^{2}\right) \ \ \ \ \ (5)
\displaystyle = \frac{\hbar k}{m}\left|A\right|^{2} \ \ \ \ \ (6)
(The complex exponentials cancel out in {\left|\Psi\right|^{2}}.) Since the current is positive, it ‘flows’ in the positive {x} direction. Note that the current is independent of {x}, so the probability of a particle being found in any given range of {x} is constant. (Actually, as we’ve seen, a free particle can’t exist in a single stationary state since such a state cannot be normalized.)
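The result can be double-checked numerically (my addition, not part of the post; assumes NumPy, with units chosen so ħ = m = 1 and arbitrary illustrative values for k, A and t):

```python
import numpy as np

# Evaluate J(x,t) from its definition for the plane wave
# Psi = A e^{ikx} e^{-i hbar k^2 t/2m}, and compare with hbar*k*|A|^2/m.
hbar, m, k, A, t = 1.0, 1.0, 2.0, 0.7, 0.3
x = np.linspace(0.0, 10.0, 200001)
psi = A * np.exp(1j * k * x) * np.exp(-1j * hbar * k**2 * t / (2 * m))

dpsi = np.gradient(psi, x)  # numerical d(Psi)/dx
J = (1j * hbar / (2 * m)) * (np.conj(dpsi) * psi - dpsi * np.conj(psi))
print(J[1:-1].real.max(), hbar * k * abs(A)**2 / m)  # both ~0.98
```

As expected, J is constant in x and equal to ħk|A|²/m.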
2 thoughts on “The free particle: probability current
1. Pingback: Finite step potential – scattering « Physics tutorials
2. George
If I understood the theory well, a free particle cannot exist in a stationary state, so shouldn't we use the integral-over-k form of the wavefunction? Are we allowed to use stationary states as a mathematically useful trick that has no direct physical meaning, and still get the right results? Wouldn't the integral form have given the same result?
Paper using analogy between atomic and planetary spin-orbit coupling
Posted: August 22, 2016 by tallbloke in Astrophysics, solar system dynamics
On researchgate, I found this interesting paper by I.A. Arbab which proposes an explanation of planetary spin rates by leveraging an analogy with the electron spin-orbit coupling in the Hydrogen atom.
Food for thought. Here’s a taster:
I hope people with greater ability than I have will take a look.
1. oldbrew says:
Massive planets do spin faster in the solar system, as the paper says. However Jupiter is a lot more massive than Saturn but only spins slightly faster, so it’s not straightforward.
Other papers by Arbab:
2. tchannon says:
Saturn, Jupiter and Neptune have no known surface. Internal conditions are crazy. Neptune iirc has a spin axis on its side.
Fun maths.
3. oldbrew says:
Uranus has the side-on spin axis.
NB they talk of impacts, but how does an impact on a ball of gas work?
4. oldmanK says:
Spin is a vector, has direction. So what about Venus?
5. tallbloke says:
Given the near 2:1 resonance of Uranus-Neptune and the near 2:5 resonance of Saturn-Jupiter, they are probably best considered as pairs
Venus is in some sort of coupling which forces it to present the same face to Earth at every Ea-Ve conjunction and nearly the same face to Jupiter at every Ju-Ve conjunction. The only way Venus can fulfill both these conditions is to spin slowly backwards.
Also worth noting that Mercury spins around 4 times faster than venus, and orbits around 4 times faster than Earth.
6. tchannon says:
Uranus not Neptune then oldbrew.
Their supposition is based upon
“The researchers began by model[l]ing the single-impact scenario.” Kind of.
“To account for the discrepancy, the researchers tweaked their simulations’ paramaters a bit.” [s/a/e]
How doth one smite a gurt blog [sic] of gas?
Much the same as diving into the sea from 1000 feet, it is after all only water. Mass times speed, slugs, newtons and all that.
Hidden in this is the strange world of scale. What happens varies by octave or decade, the order of things. What might gently meld under benign rates is awful at fraction speed of light.
If these gas giants are just gas it will nevertheless be in a novel state of matter under the intense pressures deep inside. Maybe behaves as a solid.
7. oldbrew says:
TB: Venus length of day (LOD) is in a 3:2 ratio with Mercury LOD*.
Also Mercury’s own spin rate is about 1:2 with that of the Sun (depending on how it’s measured).
*This is only possible because Venus spin is retrograde:
184 spins = 199 orbits = 383 LOD (184 + 199)
As TB notes, the planet pairings are relevant.
The Earth:Mars, Jupiter:Saturn and Uranus:Neptune pairs are each close to matching in their spin rates (within about 2.5%-7%).
The gas giants are further away from the Sun so get less tidal drag force from it. Also being much more massive than the rocky planets they should have more resistance to such drag forces, allowing a faster spin rate.
TB: ‘Venus is in some sort of coupling which forces it to present the same face to Earth at every Ea-Ve conjunction’
Five Venus LOD = 1 Venus-Earth conjunction, accurate within about 5 hours per conjunction.
25 Venus LOD = ~13 Venus orbits = almost 8 sidereal years.
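The bookkeeping above can be roughly checked (my addition, not part of the comment; the figures are nominal modern values, taken as assumptions: Venus sidereal spin ~243.0 days retrograde, orbital period ~224.7 days):

```python
# With retrograde spin, solar days elapsed = spins + orbits over the same
# interval, so 184 spins and 199 orbits spanning the same stretch of time
# would give 184 + 199 = 383 Venus solar days, as stated above.
spin, orbit = 243.0, 224.7  # days (nominal values, not from the comment)

print(184 * spin)   # ~44712 days
print(199 * orbit)  # ~44715 days: the two intervals match to within a few days
```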
8. A C Osborn says:
This was the bit I really liked: “When applied to stars…without the need of Dark Matter.”
9. Tenuk says:
Yes. Dark matter was always a fudge factor to make the numbers fit, just as was Einstein’s gravitational constant. Before that they had the ether, but I think the missing mass is simply photons, which are constantly being recycled by matter.
10. E.M.Smith says:
I’d made the analogy of planet spin orbit coupling to atomic spin orbit coupling some years ago, but got stuck on the math (never did like angular momentum problems…). Maybe I’ll take a look… after a run to Starbucks😉
Intuitively, it ought to work at all scales, I just can’t see how…
11. E.M.Smith says:
OK… read it. Once through quickly…
It is all based on an analogy of electromagnetism to some kind of gravitomagnetics (whatever that is) and makes a lot of “leaps” in the math. Maybe valid, but each one needs unrolling and vetting…
They derive a formula with a “constant” plug number in it, and solve for that number. Planets yield one value, Moon-Earth a very different value… so how does a constant have multiple values?
It may well be brilliance beyond my ken, but a quick look-over has far more loose ends and questions than I like.
I’ll revisit again with Starbucks aboard and see if more becomes clear. As it stands now, I like the ideas, but not the proof…
The big idea is gravitomagnetics, but that is just assumed… though an interesting idea… unify gravity and magnetics (then go for grand unified theory…).
12. E.M.Smith says:
OK, he didn’t make gravitomagnetics up, but incorporates it by reference
looks like some digging to do for me to catch up on that axis… so OK, no fault in not explaining it to his audience who can be assumed familiar with it…
13. dai davies says:
As above, so below. It’s resonance all the way down.
The best way to model the solar system is using Hamiltonian mechanics, but it’s deep physics and tricky maths. It’s also the basis of the Schrödinger equation in quantum mechanics, so the parallel is clear. I’ve been thinking of writing a review but still don’t know very much about it – where to start, even.
Parameters expressed as integer ratios are ubiquitous in QM, as in solar system in Table 1 at the end of the Gkolias article.
A few references:
Resonance In the Solar System, Steve Bache, 2012,
A simple PPT style overview with historical background going back to Anaximander.
Ioannis Gkolias, The theory of secondary resonances in the spin-orbit problem,
We study the resonant dynamics in a simple one degree of freedom, time dependent Hamiltonian model describing spin-orbit interactions. The equations of motion admit periodic solutions associated with resonant motions, the most important being the synchronous one in which most evolved satellites of the Solar system, including the Moon, are observed. Such primary resonances can be surrounded by a chain of smaller islands which one refers to as secondary resonances. …
Alessandra Celletti, Quasi–Periodic Attractors And Spin/Orbit Resonances, November 2007,[…]
Mechanical systems, in real life, are typically dissipative, and perfectly conservative systems arise as mathematical abstractions. In this lecture, we shall consider nearly–conservative mechanical systems having in mind applications to celestial mechanics. In particular we are interested in the spin–orbit model for an oblate planet (satellite) whose center of mass revolves around a “fixed” star; the planet is not completely rigid and averaged effects of tides, which bring in dissipation, are taken into account. We shall see that a mathematical theory of such systems is consistent with the strange case of Mercury, which is the only planet or satellite in the Solar system being stuck in a 3:2 spin/orbit resonance (i.e., it turns three times around its rotational spin axis, while it makes two revolutions around the Sun).
Eric B. Ford, Architectures of planetary systems and implications for their formation, PNAS vol. 111 no. 35
… With the advent of long-term, nearly continuous monitoring by Kepler, the method of transit timing variations (TTVs) has blossomed as a new technique for characterizing the gravitational effects of mutual planetary perturbations for hundreds of planets. TTVs can provide precise, but complex, constraints on planetary masses, densities, and orbits, even for planetary systems with faint host stars. …
Luis Acedo, 2014, Quantum Mechanics of the Solar System,
According to the correspondence principle, as formulated by Bohr, both in the old and the modern quantum theory, the classical limit should be recovered for large values of the quantum numbers in any quantum system. … We also consider the perturbed Kepler problem with a central perturbation force proportional to the inverse of the cube of the distance to the central body. …
14. oldbrew says:
The Moon is an interesting case. Its rotation period is very similar to that of the Sun.
‘Solar rotation is arbitrarily taken to be 27.2753 days for the purpose of Carrington rotations. Each rotation of the Sun under this scheme is given a unique number called the Carrington Rotation Number, starting from November 9, 1853.’
‘It is customary to specify positions of celestial bodies with respect to the vernal equinox. Because of Earth’s precession of the equinoxes, this point moves back slowly along the ecliptic. Therefore, it takes the Moon less time to return to an ecliptic longitude of zero than to the same point amidst the fixed stars: 27.321582 days (27 d 7 h 43 min 4.7 s). This slightly shorter period is known as tropical month; cf. the analogous tropical year of the Sun.’
NB the Moon’s orbit is synchronous i.e. 1 orbit of the Sun = 1 rotation of the Moon.
The Sun-Moon time difference is about 1.1 hours using the Carrington period.
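The ~1.1 hour figure follows directly from the two periods quoted above:

```python
# Check of the ~1.1 hour Sun-Moon difference: tropical month minus Carrington
# rotation period, both in days (values quoted in the comment), in hours.
carrington = 27.2753        # Carrington solar rotation period, days
tropical_month = 27.321582  # lunar tropical month, days

diff_hours = (tropical_month - carrington) * 24
print(round(diff_hours, 2))  # ~1.11 hours
```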
The quantum-mechanical problem of an electron moving in the electric field of a nucleus is well known. The quantum-mechanical description of electron motion in a magnetic field is also not difficult, since one only needs to solve a Schrödinger equation of the form: $$\frac{(\hat p + eA)^2} {2m} \psi = E \psi $$ But if we want to consider the motion of an electron in the field of a magnetic monopole, a difficulty arises from the definition of the vector potential over the whole space. Has this problem been solved? What interesting consequences follow from it (for energy levels, angular momentum, etc.)?
This gives you coherent state representation for angular momentum. – Slaviks Mar 16 '12 at 21:54
The difficulty is not in the vector potential--- you can just use a Dirac string. The difficulty is getting the actual solution. – Ron Maimon Apr 15 '12 at 7:42
The classical version of this problem was solved by Henri Poincaré way back in 1896. This is also problem 5.43 in Electrodynamics by Griffiths. The classical trajectories are geodesics on the surface of a cone. A recent treatment of the classical version of this problem is here.
The quantum mechanical version was also solved long back by Igor Tamm in 1931. This is discussed in section 2.3 of the book Magnetic monopoles by Y M Shnir, who follows the treatment in Charge quantization and nonintegrable Lie algebras by Hurst.
The quantum mechanical version of the problem turns out to be separable in spherical polar coordinates. The angular part has the generalized (monopole) spherical harmonics as its eigenfunctions, while the radial solution is the same as the radial wave function of the standard Schrödinger equation. The centrifugal potential in the radial equation turns out to be always repulsive, which implies that there are no bound states for an electron in a pure magnetic monopole field. However, a dyon field does have bound-state solutions.
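The "always repulsive" claim can be made concrete with the standard monopole-harmonics spectrum (an assumed textbook result consistent with the Tamm solution cited above, not something derived in this answer): for monopole charge q = eg/(ħc), with 2q an integer by Dirac quantization, the angular eigenvalues are l(l+1) − q² with l = |q|, |q|+1, …

```python
# Sketch: the effective centrifugal term in the radial equation is
# [l(l+1) - q^2] / (2 m r^2); its coefficient is positive for every
# allowed (q, l), so the potential is repulsive and no bound states form.
from fractions import Fraction

def centrifugal_coefficient(l: Fraction, q: Fraction) -> Fraction:
    """Coefficient l(l+1) - q^2 of the 1/(2 m r^2) centrifugal term."""
    if l < abs(q):
        raise ValueError("l must satisfy l >= |q|")
    return l * (l + 1) - q * q

# Sample the lowest few l for several quantized monopole charges.
for two_q in range(1, 6):          # 2q = 1, 2, ... (Dirac quantization)
    q = Fraction(two_q, 2)
    for k in range(3):
        l = q + k
        assert centrifugal_coefficient(l, q) > 0
print("centrifugal term repulsive for all sampled (q, l)")
```

At the bottom of each tower, l = |q|, the coefficient is q(q+1) − q² = q > 0, so even the least repulsive channel carries no attractive well.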
Open Access Nano Express
Ab initio calculation of valley splitting in monolayer δ-doped phosphorus in silicon
Daniel W Drumm12*, Akin Budi12, Manolo C Per23, Salvy P Russo2 and Lloyd C L Hollenberg1
Author Affiliations
1 School of Physics, The University of Melbourne, Parkville, Victoria 3010, Australia
2 School of Applied Sciences, RMIT University, Melbourne, Victoria 3001, Australia
3 Virtual Nanoscience Laboratory, CSIRO Materials Science and Engineering, Parkville, Victoria 3052, Australia
Nanoscale Research Letters 2013, 8:111 doi:10.1186/1556-276X-8-111
Received: 16 October 2012
Accepted: 26 January 2013
Published: 27 February 2013
© 2013 Drumm et al.; licensee Springer.
The differences in energy between electronic bands due to valley splitting are of paramount importance in interpreting transport spectroscopy experiments on state-of-the-art quantum devices defined by scanning tunnelling microscope lithography. Using VASP, we develop a plane-wave density functional theory description of systems which is size limited due to computational tractability. Nonetheless, we provide valuable data for the benchmarking of empirical modelling techniques more capable of extending this discussion to confined disordered systems or actual devices. We then develop a less resource-intensive alternative via localised basis functions in SIESTA, retaining the physics of the plane-wave description, and extend this model beyond the capability of plane-wave methods to determine the ab initio valley splitting of well-isolated δ-layers. In obtaining an agreement between plane-wave and localised methods, we show that valley splitting has been overestimated in previous ab initio calculations by more than 50%.
Keywords: Density functional theory; Valley splitting; δ-doped layers; Phosphorus in silicon; Basis sets. PACS: 73.22.-f; 71.15.Mb
The study of the quantum properties of low-dimensional and doped structures is central to many nanotechnology applications [1-15]. Quantum devices in silicon have been the subject of concentrated recent interest, both experimental and theoretical, including the recent discussion of Ohm’s law at the nanoscale [16]. Efforts to make such devices have led to atomically precise fabrication methods which incorporate phosphorus atoms in a single monolayer of a silicon crystal [17-20]. These dopant atoms can be arranged into arrays [21] or geometric patterns for wires [16,22] and associated tunnel junctions [23], gates, and quantum dots [24,25] - all of which are necessary components of a functioning device [26]. The patterns themselves define atomically abrupt regions of doped and undoped silicon. While silicon, bulk-doped silicon, and the physics of the phosphorus incorporation [27] are well understood, models of this quasi-two-dimensional phosphorus sheet are still in their initial stages. In particular, it is critical in many applications to understand the effect of this confinement on the conduction band valley degeneracy, inherent in the band structure of silicon. For example, the degeneracy of the valleys has the potential to cause decoherence in a spin-based quantum computer [28,29], and the degree of valley degeneracy lifting (valley splitting) defines the conduction properties of highly confined planar quantum dots [26].
The importance of understanding valley splitting in monolayer δ-doped Si:P structures has led to a number of theoretical works in recent years, spanning several techniques, from pseudo-potential theories via planar Wannier orbital bases [30], density functional theory (DFT) via linear combination of atomic orbital (LCAO) bases [31,32], to tight-binding models [33-37] and effective mass theories (EMT) [38-40]. We note that several of these papers are based upon the assumption that the effective masses of δ-doped P in Si remain unchanged from bulk-doped values [38,39], an assumption which has been challenged [30,33]. Others assume doping over a multi-atomic plane band [33,38] which no longer represents the state of the art in fabrication. There is currently little agreement between the valley splitting values obtained using these methods, with predictions ranging between 5 to 270 meV, depending on the calculational approach and the arrangement of dopant atoms within the δ-layer. Density functional theory has been shown to be a useful tool in predicting how quantum confinement or doping perturbs the bulk electronic structure in silicon- and diamond-like structures [41-45]. The work of Carter et al. [31] represents the first attempt using DFT to model these devices by considering explicitly doped δ-layers, using a localised basis set and the assumption that a basis set sufficient to describe bulk silicon will also adequately describe P-doped Si. It might be expected, therefore, that the removal of the basis set assumption will lead to the best ab initio estimate of the valley splitting available, for a given arrangement of atoms. In the context of describing experimental devices, it is important to separate the effects of methodological choices, such as this, from more complicated effects due to physical realities, including disorder.
In this paper, we determine a consistent value of the valley splitting in explicitly δ-doped structures by obtaining convergence between distinct DFT approaches in terms of basis set and system sizes. We perform a comparison of DFT techniques, involving localised numerical atomic orbitals and delocalised plane-wave (PW) basis sets. Convergence of results with regard to the amount of Si ‘cladding’ about the δ-doped plane is studied. This corresponds to the normal criterion of supercell size, where periodic boundary conditions may introduce artificial interactions between replicated dopants in neighbouring cells. A benchmark is set via the delocalised basis for DFT models of δ-doped Si:P against which the localised basis techniques are assessed. Implications for the type of modelling being undertaken are discussed, and the models extended beyond those tractable with plane-wave techniques. Using these calculations, we obtain converged values for properties such as band structures, energy levels, valley splitting, electronic densities of state and charge densities near the δ-doped layer.
The paper is organised as follows: the ‘Methods’ section outlines the parameters used in our particular calculations; we present the results of our calculations in the ‘Results and discussion’ section and draw conclusions in the ‘Conclusions’ section. An elucidation of effects modifying the bulk band structure follows in Appendices 1 and 2 to provide a clear contrast to the properties deriving from the δ-doping of the silicon discussed in the paper. The origin of valley splitting is discussed in Appendix 3.
Methods
Density functional theory calculations have been carried out using both plane-wave and LCAO basis sets. For the PW basis set, the Vienna ab initio simulation package (VASP) [46] software was used with projector augmented wave [46,47] pseudo-potentials for Si and P. Due to the nature of the PW basis set, there exists a simple relationship between the cut-off energy and basis set completeness. For the structures considered in this work, the calculations were found to be converged for PW cut-offs of 450 eV.
Localised basis set calculations were performed using the Spanish Initiative for Electronic Simulations with Thousands of Atoms (SIESTA) [48] software. In this case, the P and Si ionic cores were represented by norm-conserving Troullier-Martins pseudo-potentials [49]. The Kohn-Sham orbitals were expanded in the default single-ζ polarized (SZP) or double-ζ polarized (DZP) basis sets, which consist of 9 and 13 basis functions per atom, respectively. Both the SZP and DZP sets contain s-, p-, and d-type functions. These calculations were found to be converged for a mesh grid energy cut-off of 300 Ry. In all cases, the generalized gradient approximation PBE [50] exchange-correlation functional was used.
The lattice parameter for bulk Si was calculated using an eight-atom cell and found to be converged for all methods with a 12 × 12 × 12 Monkhorst-Pack (MP) k-point mesh [51]. The resulting values are presented in Table 1 and were used in all subsequent calculations.
Table 1. Eight-atom cubic unit cell equilibrium lattice parameters for different methods used in this work
In modelling δ-doped Si:P, as used in another work [26], we adopted a tetragonal supercell description of the system, akin to those of other works [30,31]. In accordance with the experiment, we inserted the P layer in a monatomic (001) plane as one atom in four to achieve 25% doping. This will henceforth be referred to as 1/4 monolayer (ML) doping. In this case, the smallest repeating in-plane unit had 4 atoms/ML (to achieve one-in-four doping) and was a square with the sides parallel to the [110] and [−110] directions. The square had a side length √2 a (see Figure 1), where a is the simple cubic lattice constant of bulk silicon. The phosphorus layers had to be separated by a considerable amount of silicon due to the large Bohr radius of the hydrogen-like orbital introduced by P in Si (approximately 2.5 nm). Carter et al. [31] showed that this far exceeded the sub-nanometre cell side length. If desired, cells with a lower in-plane density of dopants may be constructed by lengthening the cell in the x and y directions, such that more Si atoms occupy the doped monolayer in the cell - though this would significantly increase the computational cost of such a calculation.
Figure 1. (001) Planar slice of the c(2×2) structure at the 1/4 ML doped monolayer. One of the Si sites has been replaced by a P atom (shown in dark gray). The periodic boundaries are shown in black.
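A minimal sketch of the doping-plane geometry just described, with an assumed bulk lattice constant (the paper's own converged values appear in Table 1): four (001) lattice sites in a square in-plane cell of side √2 a, one site replaced by P.

```python
# Sketch (assumed geometry) of the 1/4 ML c(2x2) doping plane: a square
# in-plane cell of side sqrt(2)*a with four monolayer sites, one replaced by P.
import math

a = 5.43                 # bulk Si cubic lattice constant, Å (assumed value)
side = math.sqrt(2) * a  # in-plane cell side along the [110] / [-110] axes

# Four (001)-monolayer sites in the in-plane cell, fractional coordinates.
sites = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]
species = ["P", "Si", "Si", "Si"]  # one site in four -> 25% doping

for sym, (fx, fy) in zip(species, sites):
    print(f"{sym:2s} {fx * side:8.3f} {fy * side:8.3f}")  # Cartesian Å
```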
A collection of tetragonal cells comprising 4, 8, 16, 32, 40, 60, 80, 120, 160 and 200 monolayers was constructed, having four atomic sites per monolayer and oriented with faces in the [110], [−110], and [001] directions (see Figure 2). Cells used in PW calculations began at 4 layers and ran to 80 layers; larger cells were not computationally tractable with this method. SZP and DZP models began at 40 layers to overlap with PW for the converging region and were then extended to their tractable limit (200 and 160 layers, respectively) to study convergence past the capability of PW.
Figure 2. Ball and stick model of a δ-doped Si:P layer viewed along the [110] direction. Thirty-two layers in the [001] direction are shown. Si atoms (small gray spheres), P atoms (large dark gray spheres), covalent bonds (gray sticks), repeating cell boundary (solid line).
For tetragonal cells, the k-point sampling was set as a 9 × 9 × N Γ-centred MP mesh as we have found that failing to include Γ in the mesh can lead to the anomalous placement of the Fermi level on band structure diagrams. N varied from 12 to 1 as the cells became more elongated (see Appendix 1). We note that, as mentioned in the work of Carter et al. [32], the large supercells involved required very gradual (<0.1%) mixing of the new density matrix with the prior step, leading to many hundreds of self-consistent cycles before convergence was achieved.
Although it has been previously found that relaxing the positions of the nuclei gave negligible differences (<0.005 Å) to the geometry [31], this was for a 12-layer cell and may not have included enough space between periodic repetitions of the doping plane for the full effect to be seen. Whilst a 40-layer model was optimised in the work of Carter et al. [32], this made use of a mixed atom pseudo-potential and is not explicitly comparable to the models presented here. We have performed a test relaxation on a 40-layer cell using the PW basis (VASP). The maximum subsequent ionic displacement was 0.05 Å, with most being an order of magnitude smaller. The energy gained in relaxing the cell was less than 37 meV (or 230 μeV/atom). We therefore regarded any changes to the structure as negligibly small, confirming the results of Carter et al. [31,32], and proceeded without ionic relaxation.
Single-point energy calculations were carried out with both software programs; for VASP, the electronic energy convergence criterion was set to 10−6eV, and the tetrahedron method with Blöchl correction [52] was used. For SIESTA, a two-stage process was carried out: Fermi-Dirac electronic smearing of 300 K was applied in order to converge the density matrix within a tolerance of one part in 10−4; the calculation was then restarted with the smearing of 0 K, and a new electronic energy tolerance criterion of 10−6 eV was applied (except for the 120- and 160-layer DZP models for which this was intractable; a tolerance of 10−4 eV was used in these cases). This two-stage process aided convergence as well as ensuring that the energy levels obtained were comparably accurate across methods. In addition, for each doped cell thus developed and studied, an undoped bulk Si cell of the same dimensions was constructed to aid in isolating those features primarily due to the doping.
Results and discussion
Analysis of band structure
Once converged charge densities were obtained, band structures were calculated along the M–Γ–X high-symmetry pathway (as shown in Appendix 1), using at least 20 k-points between high-symmetry points. For comparative purposes, the band structures have all been aligned at the valence band maximum (VBM).
Figure 3 contrasts the bulk and doped band structures for the 40-layer PW calculation. DZP and SZP results are qualitatively similar on this scale, albeit with different band energies in the SZP model, and are omitted in the interest of clarity in the diagram. As discussed in Appendix 2, it is evident from the bulk values that the elongated cells have led to the folding of two conduction band minimum valleys towards the Γ point. Also visible is the difference that the doping potential makes to the system; what was the lowest unoccupied orbital (Γ1 band) in the bulk is now dragged down in energy by the extra ionic potential. It is of note that the region near Γ, corresponding to the kz valleys which can be modelled as having different effective masses to the kx,y valleys, [30] is brought lower than the region corresponding to the kx,y valleys and is non-degenerate. The second (Γ2) band behaves in a similar fashion. The third (δ) band appears to maintain a minimum away from the Γ point in the ΣTET direction (which is equivalent to the ΔFCC direction; see Appendix 1) but in a less parabolic fashion than the lower two; its minimum is similar to the value at Γ. This band is non-degenerate along this particular direction in k-space, but due to the supercell symmetry, it is actually fourfold degenerate, in contrast to the other bands.
Figure 3. Full band structure (colour online) of the 40-layer tetragonal system calculated using PW (VASP). Bulk band structure (shaded gray background), doped band structure (solid black) and Fermi level (labelled solid red).
The Fermi level for the doped system is also shown, clearly being crossed by all three of these bands which are therefore able to act as open channels for conduction.
As mentioned above, the band structures are similar across all methods, but upon detailed inspection, important differences come to light. A closer look at the δ band shows a qualitative difference between the predictions using SZP (Figure 4c) and the PW and DZP results (Figure 4a,b): the models with a more complete basis predict the band minimum to occur in the ΣTETFCC) direction, below the value at Γ, while the SZP band structure shows the reverse: the minimum at Γ, a similar amount below a secondary minimum in the ΣTET direction.
Figure 4. Band structure (colour online) of the 40-layer tetragonal system zoomed in on the δ band. (a) PW (VASP), (b) DZP (SIESTA) and (c) SZP basis sets were used. Fermi level is shown by a solid horizontal red line.
The difference between the energies of the first two band minima (Γ1 − Γ2, illustrated in Figure 5), or the valley splitting, agrees between the PW and DZP calculations to within ∼6 meV. Significantly, the value obtained using our SZP basis set differs by 52 meV, some 55% larger than the value obtained using the PW basis set. The importance of this discrepancy cannot be overstated; valley splitting is directly relatable to experimentally observable resonances in transport spectroscopy of devices made with this δ-doping technology (see [26]).
Figure 5. Minimum band energies for tetragonal systems with 1/4 ML doping. (a) PW (VASP), (b) DZP (SIESTA) and (c) SZP (SIESTA) basis sets were used. Fermi level also shown where appropriate. Bold numbers indicate energy differences between band minima.
In the smallest cells (<16 layers), less than three bands are observed. This is likely due to the lack of cladding in the z direction, leading to a significant interaction between the dopant layers, raising the energy of each band. Whilst the absolute energy of each level still varies somewhat, even with over 100 layers incorporated, we find that the Γ1–Γ2 values are well converged with 80 layers of cladding for all methods (see Figure 5). Indeed, they may be considered reasonably converged even at the 40-layer level (0.5 meV or less difference to the largest models considered). The differences between the energies of the second and third band minima (Γ2δ splittings) are also shown in Figure 5 and show good convergence (within 1 meV) for cells of 80 layers or larger.
The Fermi level follows a similar pattern to the Γ- and δ-levels. In particular, the gap between the Fermi level and Γ1 level does not change by more than 1 meV from 60 to 160 layers.
Given that the properties of interest are the differences between the energy levels, rather than their absolute values (or position relative to the valence band), in the interest of computational efficiency, we observe that using the DZP basis with 80 layers of cladding is sufficient to achieve consistent, converged results.
Valley splitting
Table 2 summarises the valley splitting values of 1/4 ML P-doped silicon obtained using different techniques, showing a large variation in the actual values. In order to make sense of these results, it is important to note two major factors that affect valley splitting: the doping method and the arrangement of phosphorus atoms in the δ-layer. As the results from the work of Carter et al. [32] show, the use of implicit doping causes the valley splitting value to be much smaller than in an explicit case (∼7 meV vs. 120 meV). It is also shown that the use of random P coverage on the δ-layer reduces the valley splitting value by only 40 to 50 meV compared with the fully ordered placement, leaving a large discrepancy between the valley splitting results from implicit and explicit doping. This large decrease in valley splitting due to implicit doping can be explained by the smearing of the doping layer in the direction normal to the δ-layer, thereby decreasing the quantum confinement effect responsible for breaking the degeneracy in the system. Carter et al. [32] also shows that the arrangement of the phosphorus atoms in the δ-layer strongly influences the valley splitting value. In particular, they showed that there is a difference of up to 220 meV between P doping along the [110] direction and along the [100] direction. It should be noted, however, that deterministic nearest-neighbour donor placements are not yet physically realisable due to the P incorporation mechanism currently employed [27,53]. Similarly, the perfectly ordered arrangement discussed here is highly improbable, given the experimental limitations, but represents the ideal case from which effects such as disorder can be studied.
Table 2. Valley splitting values of 1/4 ML P-doped silicon obtained using different techniques
Our results show that valley splitting is highly sensitive to the choice of basis set. Due to the nature of the PW basis set, it is straightforward to improve its completeness by increasing the plane-wave cut-off energy. In this way, we establish the most accurate valley splitting value within the context of density functional theory. Using this benchmark value, we can then establish the validity and accuracy of other basis sets, which can be used to extend the system sizes beyond what is practical using a PW basis set. As seen in Table 2, the valley splitting value converges to 93 meV using 80-layer cladding. The DZP localised basis set gives an excellent agreement at 99.5 meV using 80-layer cladding (representing a 7% difference). On the other hand, our SZP localised basis set gave a value of 145 meV using the same amount of cladding. This represents a significant difference of 55% over the value obtained using the PW basis set and demonstrates that SZP basis sets are unsuitable for accurate determination of valley splitting in these systems.
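The quoted percentage differences follow directly from the three valley-splitting values in the text:

```python
# Cross-check of the percentages quoted above, using the 80-layer-cladding
# valley splittings from the text (meV): PW benchmark 93, DZP 99.5, SZP 145.
pw, dzp, szp = 93.0, 99.5, 145.0

dzp_diff = (dzp - pw) / pw * 100  # ~7% above the PW benchmark
szp_diff = (szp - pw) / pw * 100  # ~56% above the PW benchmark
print(round(dzp_diff, 1), round(szp_diff, 1))
```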
Density of states
The electronic density of states (eDOS) was calculated for each cell. Figure 6 compares the unscaled eDOS for bulk 80-layer cells to that of doped cells varying from 40 to 80 layers. The bulk bandgap is visible, with the conduction band rising sharply to the right of the figure. The doped eDOS exhibits density in the bulk bandgap, although the features of the spectra differ slightly according to the basis set used.
Figure 6. Electronic densities of states for tetragonal systems with 0 and 1/4 ML doping. The DZP (SIESTA) basis set was used. The Fermi level is indicated by a solid vertical line with label, and 50-meV smearing was applied for visualization purposes.
The Fermi energy exhibits convergence with respect to the amount of cladding, as reported above. It is also notable that the eDOS within the bandgap are nearly identical regardless of the cell length (in z). This indicates that layer-layer interactions are negligibly affecting the occupied states and, therefore, that the applied ‘cladding’ is sufficient to insulate against these effects.
Electronic width of the plane
In order to quantify the extent of the donor-electron distribution, we have integrated the local density of states between the VBM and Fermi level and have taken the planar average with respect to the z-position. Figure 7 shows the planar average of the donor electrons (a sum of both spin-up and spin-down channels) for the 80-layer cell calculated using the DZP basis set. After removing the small oscillations related to the crystal lattice to focus on the physics of the δ-layer, by Fourier transforming, a Lorentzian function was fitted to the distribution profile. (Initially, a three-parameter Gaussian fit similar to that used in [40] was tested, but the Lorentzian gave a better fit to the curve.)
Figure 7. Planar average of donor-electron density as a function of z-position for 1/4 ML-doped 80-layer cell. The DZP basis set was used. The fitted Lorentzian function is also shown.
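The FWHM-extraction step described above can be sketched as follows; the profile here is a synthetic Lorentzian stand-in (assumed parameters), whereas in the paper it comes from the planar-averaged, Fourier-smoothed local density of states.

```python
# Sketch of extracting the peak height and FWHM from a planar-averaged
# donor-density profile.  Synthetic Lorentzian stand-in for the real data.
def lorentzian(z, amplitude, z0, gamma):
    # FWHM of this form is 2 * gamma
    return amplitude * gamma ** 2 / ((z - z0) ** 2 + gamma ** 2)

zs = [-40.0 + 0.05 * i for i in range(1601)]            # z grid, Å
profile = [lorentzian(z, 3.9, 0.0, 3.3) for z in zs]    # assumed parameters

peak = max(profile)
half = peak / 2.0
above = [z for z, p in zip(zs, profile) if p >= half]   # region above half max
fwhm = max(above) - min(above)
print(round(peak, 2), round(fwhm, 2))                   # FWHM ~ 2 * 3.3 = 6.6 Å
```

A grid-based half-maximum crossing like this is adequate for a smooth profile; a least-squares Lorentzian fit, as used in the paper, is preferable for noisy data.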
Table 3 summarises the maximum donor-electron density and the full width at half maximum (FWHM) for the 1/4 ML-doped cells, each calculated from the Lorentzian fit. Both of these properties are remarkably consistent with respect to the number of layers, indicating that they have converged sufficiently even at 40 layers.
Table 3. Calculated maximum donor-electron density, ρmax, and FWHM
Our results differ from a previous DFT calculation [32] which cited an FWHM of 5.62 Å for a 1/4 ML-doped, 80-layer cell calculated using the SZP basis set (and 10 × 10 × 1 k-points). We note that those values were taken from the unfitted, untransformed donor-electron distribution and represent an approximately 15% underestimation in comparison with the DZP result. The peak height is not shown in the work of Carter et al. [32], but the value from another work [31] (1.7 × 1021 e/cm3) is a factor of 0.44 smaller than the peak we observe here. This may be due, to some extent, to the larger width of the SZP model leading to an associated lowering of the peak density.
Conclusions
In this article, we have studied the valley splitting of monolayer δ-doped Si:P, using a density functional theory model with a plane-wave basis to establish firm grounds for comparison with less computationally intensive localised-basis ab initio methods. We found that the description of these systems (by density functional theory, using SZP basis functions) overestimates the valley splitting by over 50%. We show that DZP basis sets are complete enough to deliver values within 10% of the plane-wave values and, due to their localised nature, are capable of calculating the properties of models twice as large as is tractable with plane-wave methods. These DZP models are converged with respect to size well before their tractable limit, which approaches that of SZP models.
Valley splittings are important in interpreting transport spectroscopy experiment data, where they relate to families of resonances, and in benchmarking other theoretical techniques more capable of actual device modelling. It is therefore pleasing to have an ab initio description of this effect which is fully converged with respect to basis completeness as well as the usual size effects and k-point mesh density.
We have also studied the band structures with all three methods, finding that the DZP correctly determines the δ-band minima away from the Γ point, where the SZP method does not. We show that these minima occur in the Σ direction for the type of cell considered, not the Δ direction as has been previously reported. Having established the DZP methodology as sufficient to describe the physics of these systems, we then calculated the electronic density of states and the electronic width of the δ-layer. We found that previous SZP descriptions of these layers underestimate the width of the layers by almost 15%.
We have shown that the properties of interest of δ-doped Si:P are well converged for 40-layer supercells using a DZP description of the electronic density. We recommend the use of this amount of surrounding silicon, and technique, in any future DFT studies of these and similar systems - especially if inter-layer interactions are to be minimised.
Appendix 1
Subtleties of bandstructure
Regardless of the type of calculation being undertaken, a band structure diagram is inherently linked to the type (shape and size) of cell being used to represent the system under consideration. For each of the 14 Bravais lattices available for three-dimensional supercells, a particular Brillouin zone (BZ) with its own set of high-symmetry points exists in reciprocal space [54]. Similarly, each BZ has its own set of high-symmetry directions. Some of these BZs share a few high-symmetry point labels (or directions), such as X or L (Δ or Σ), and they all contain Γ, but these points are not always located in the same place in reciprocal space.
A simple effect of this can be seen by increasing the size of a supercell. This has the result of shrinking the BZ, and the coordinates of high-symmetry points on its boundary, by a corresponding factor. Consider the conduction band minimum (CBM) found at the Δ valley in the Si conduction band. This is commonly located at k ≈ 0.85 × (2π/a) in the Δ direction towards X (also Y, Z and their opposite directions). Should we increase the cell by a factor of 2, the BZ will shrink (BZ→BZ’), placing the valley outside the new BZ boundary (past X’); however, a valid solution in any BZ must be a solution in all BZs. This results in the phenomenon of band folding, whereby a band continuing past a BZ boundary re-enters the BZ on the opposite side. Since the X direction in a face-centred cubic (FCC) BZ is sixfold symmetric, a solution near the opposite BZ boundary is also a solution near the one we are focussing on. This results in the appearance that the band continuing past the BZ boundary is ‘reflected’, or folded, back on itself into the first BZ. Since the new BZ boundary in this direction is now at 0.5 × (2π/a), the location of the valley will be at k ≈ 0.15 × (2π/a), as mentioned in the work of Carter et al. [31]. Each further increase in the size of the supercell will result in more folding (and a denser band structure). Care is therefore required when interpreting band structure to distinguish between a new band and one which has been folded due to this effect.
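In reduced units (taking the original Γ–X distance as 1), the folding described above is a one-line calculation; the valley position k0 ≈ 0.85 is the standard Si value assumed here:

```python
# Reduced units: the original BZ boundary along Delta (at X) is k = 1,
# and the Si conduction-band valley sits at k0 = 0.85. Doubling the cell
# halves the BZ, so a state past the new boundary k_b = 0.5 reflects
# ("folds") back to 2*k_b - k0.
k0 = 0.85
k_b = 0.5
folded = 2 * k_b - k0 if k0 > k_b else k0
print(folded)   # the valley reappears at ~0.15 of the original Gamma-X distance
```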
Continuing with our example of silicon, whilst the classic band structure [55] is derived from the bulk Si primitive FCC cell (containing two atoms), it is often more convenient to use a simple cubic (SC) supercell (eight atoms) aligned with the 〈100〉 crystallographic directions. In this case, we experience some of the common labelling; the Δ direction is defined in the same manner for both BZs, although we see band folding (in a similar manner to that discussed previously) due to the size difference of the reciprocal cells (see Figure 8). We also see a difference in that, although the Σ direction is consistent, the points at the BZ boundaries have different symmetries and, therefore, labels (KFCC, MSC). (The LFCC point and ΛFCC direction have no equivalent for tetragonal cells, and hence, we do not consider band structure in that direction here.)
Consider now the δ-doping case discussed in the ‘Methods’ section, where we wish to align our cell with the [110] and [1̄10] directions (by rotating the cell 45° anticlockwise about z; this also requires a resizing of the cell in the plane to maintain periodicity - see Figure 9), to allow us to include precisely four atoms per monolayer (as required for the minimal representation of 1/4 ML doping). We now have a situation where the XTET point in the new tetragonal BZ (see Figure 10) is no longer in the direction of the XSC point in the simple cubic BZ, despite both X points being in the centre of a face of their BZ. Due to the rotation, what used to be the ΔSC direction in the simple cubic BZ is now the ΣTET direction (pointing towards M at the corner of the BZ in the kz = 0 plane) in the tetragonal BZ. The tetragonal CBM, while physically still the same as the CBM in the FCC or simple cubic BZ, is not represented in the same fashion (see Figure 11).
Figure 9. Geometrical difference between the simple cubic and tetragonal cells. A (001) planar cut through an atomic monolayer is shown.
Figure 10. The Brillouin zone for a tetragonal cell. The M–Γ–X path used in this work is shown.
Figure 11. Band structure (colour online) diagram for tetragonal bulk Si structures with increasing number of layers. The VASP plane wave method was used (see ‘Methods’ section).
Appendix 2
Band folding in the z direction
Increasing the z dimension of the cell leads to successive folding points being introduced as the BZ shrinks along kz (see Appendix 1). This has the effect of shifting the conduction band minima in the ± kz directions closer and closer to the Γ point (see Figure 8a) and making the band structure extremely dense when plotted along kz. The value of the lowest unoccupied eigenstate at Γ is thereby lowered as what were originally other sections of the band are successively mapped onto Γ, and after a sufficient number of folds, the value at Γ is indistinct from the original CBM value. The effects of this can be seen in Table 4, which describes increasingly elongated tetragonal cells of bulk Si. When we then plot the band structure in a different direction, e.g. along kx, the minima translated from ± kz onto the Γ point appear as a new band with twofold degeneracy. The degeneracy of the original band seems to drop from six- to fourfold, in line with the reduced symmetry (we only explicitly calculate one minimum, and the other three occur due to symmetry considerations). This is half of the origin of the ‘Γ bands’ (more details are presented in Appendix 3). Once the kz valleys are sited at Γ, parabolic dispersion corresponding to the transverse kinetic energy terms is observed along kx and ky, at least close to the band minimum (see Figure 11) - in contrast to the four ‘Δ bands’, whose dispersion (again parabolic) is governed by the longitudinal kinetic energy terms. The different curvatures are related to the different effective masses (transverse, longitudinal) of the silicon CBM. It should be noted that the bands are still degenerate in energy at this stage: their minima occur at the same energy, and they span the same energy range, even though their projections onto the kx axis are different.
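The successive folding can be sketched with exact rational arithmetic (a toy calculation; the valley position 0.85 × (2π/a) and the monolayer spacing of a/4 are the assumptions):

```python
from fractions import Fraction

def fold_k(k, k_boundary):
    """Fold a wavevector k into [0, k_boundary] by repeated reflection
    at the zone boundary (band folding along kz)."""
    period = 2 * k_boundary
    k = k % period
    return period - k if k > k_boundary else k

k0 = Fraction(17, 20)   # Si valley at 0.85 x (2pi/a) along kz
for layers in (8, 16, 40, 80):
    # A cell of N monolayers (spacing a/4) is N*a/4 tall, so its BZ
    # boundary along kz sits at 2/N in units of 2pi/a.
    boundary = Fraction(2, layers)
    print(layers, fold_k(k0, boundary))
```

In this idealised picture, the folded valley lands exactly on Γ at 80 layers, consistent with the LUMO at Γ approaching the CBM value in Table 4.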
Table 4. Energy levels of tetragonal bulk Si structures
All methods considered in Table 4 show the LUMO at Γ (folded in along ± kz) approaching the CBM value as the amount of cladding increases; at 80 layers, the LUMO at Γ is within 1 meV of the CBM value. It is also of note that the PW indirect bandgap agrees well with the DZP value and less so with the SZP model. This is an indication that, although the behaviour of the LUMO with respect to the cell shape is well replicated, the SZP basis set is demonstrably incomplete. Conversely, pairwise comparisons between the PW and DZP results show agreement to within 5 meV.
It is important to distinguish effects indicating convergence with respect to cladding for doped cells (i.e. elimination of layer-layer interactions) from those mentioned previously derived from the shape and size of the supercell. Strictly, the convergence (with respect to the amount of encapsulating Si) of those results we wish to study in detail, such as the differences in energy between occupied levels in what was the bulk bandgap, provides the most appropriate measure of whether sufficient cladding has been applied.
Appendix 3
Valley splitting
Here, we discuss the origins of valley splitting, in the context of phosphorus donors in silicon. Following on from the discussion of Si band minima in Appendices 1 and 2, we have, via elongation of the supercell and consequent band folding, a situation where, instead of the sixfold degeneracy (due to the underlying symmetries of the Si crystal lattice), we see an apparent splitting of these states into two groups (6 → 2 + 4, or 2 Γ + 4 Δ minima).
We now consider what happens in perfectly ordered δ-doped monolayers, as per the main text. Here, we break the underlying Si crystal lattice symmetries by including foreign elements in the lattice. By placing the donors regularly (according to the original Si lattice pattern) in one [001] monolayer, we reduce the symmetry of the system to tetragonal, with the odd dimension being transverse to the plane of donors. This dimension can be periodic (as in the supercells described earlier), infinite (as in the EMT model of Drumm et al. [40]) or extremely long on the atomic scale (as the experiments are).
Immediately, therefore, we expect the same apparent 2 + 4 breaking of the original sixfold degenerate conduction band minima. Of course, as we have introduced phosphorus (which has one more electron and one more proton than silicon), this next band (still actually sixfold degenerate in bulk silicon) will be occupied and will now be influenced by the new potential. The sub-bands interact differently with the potential, thanks to the different curvatures in their dispersion relations and drop by different amounts into the bandgap. As discussed in detail in Drumm et al. [40], the filling of these sub-bands is partial rather than complete (or absent) and is governed by both the energy of their minima and their respective effective masses. We now have an actual breaking of the sixfold degeneracy into a true 2 + 4 system.
If we look still closer, we might expect these lower degeneracies to spontaneously break - nature, after all, is said to abhor degeneracy. Indeed, this does occur, but for this special case of δ-doped Si:P, the effect is enhanced by the strong V-shaped potential about the monolayer due to the extra charge on the donor nuclei [40]. Consideration of odd and even solutions to the effective mass Schrödinger equation for this sub-band leads to their superposition(s) and a subsequent energy difference. This is enhanced further in the Kohn-Sham formalism, as evidenced in previous sections. (The four Δ minima also split, but on a far-reduced scale not visible using current DFT techniques.) We thus expect, in the DFT picture, to see a 6 → 2 + 4 → 1 + 1 + 4 sub-band structure, namely the Γ1, Γ2 and Δ bands. The valley splitting which is the main focus of this paper is the energy difference between the Γ1 and Γ2 band minima due to the superposition of solutions.
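The even/odd mechanism can be caricatured numerically. The following is a toy sketch only (arbitrary units, hbar = m = 1; it is not the EMT model of [40] and does not reproduce real splittings): the two lowest bound states of a V-shaped well come out symmetric and antisymmetric, with distinct energies.

```python
import numpy as np

# Toy 1D effective-mass Schrodinger equation, -(1/2) psi'' + A|z| psi = E psi,
# discretised by central finite differences on a symmetric grid. The two
# lowest states of the V-shaped well are even and odd about z = 0, with an
# energy difference -- a cartoon of how symmetric and antisymmetric
# solutions acquire different energies.
n, L, A = 801, 20.0, 1.0
z = np.linspace(-L / 2, L / 2, n)
h = z[1] - z[0]

diag = 1.0 / h**2 + A * np.abs(z)        # kinetic + potential, on-diagonal
off = np.full(n - 1, -0.5 / h**2)        # kinetic, off-diagonal
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(H)
# Overlap of each state with its mirror image: +1 for even, -1 for odd.
parity = [float(psi[::-1, i] @ psi[:, i]) for i in (0, 1)]
print(f"E0 = {E[0]:.4f}, E1 = {E[1]:.4f}, splitting = {E[1] - E[0]:.4f}")
print(f"parities: {parity[0]:+.2f}, {parity[1]:+.2f}")
```

For V(z) = |z| these energies are fixed by the zeros of the Airy function and its derivative, so the discretisation can be checked against exact values.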
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
DWD, SPR, and LCLH conceived the study. Density functional theory calculations were carried out by DWD, AB, and MCP. All authors contributed to the discussion of results and drafting of the final manuscript. All authors read and approved the final manuscript.
Acknowledgements
The authors acknowledge funding by the ARC Discovery grant DP0986635. This research was undertaken on the NCI National Facility in Canberra, Australia, which is supported by the Australian Commonwealth Government. We thank Oliver Warschkow, Damien Carter and Nigel Marks for their feedback on our manuscript.
1. Shen G, Chen D: One-dimensional nanostructures and devices of II-V group semiconductors. Nanoscale Res Lett 2009, 4(8):779-788.
2. Dresselhaus MS, Chen G, Tang MY, Yang R, Lee H, Wang D, Ren Z, Fleurial J-P, Gogna P: New directions for low-dimensional thermoelectric materials. Adv Mater 2007, 19:1043-1053.
3. Lu YH, Hong ZX, Feng YP, Russo SP: Roles of carbon in light emission of ZnO. Appl Phys Lett 2010, 96(9):091914.
4. Zhao YS, Fu H, Peng A, Ma Y, Xiao D, Yao J: Low-dimensional nanomaterials based on small organic molecules: preparation and optoelectronic properties. Adv Mater 2008, 20:2859-2876.
5. Drumm DW, Per MC, Russo SP, Hollenberg LCL: Thermodynamic stability of neutral Xe defects in diamond. Phys Rev B 2010, 82:054102.
6. Tsu R: Superlattices: problems and new opportunities, nanosolids. Nanoscale Res Lett 2011, 6:127.
7. Luo D-S, Lin L-H, Su Y-C, Wang Y-T, Peng ZF, Lo S-T, Chen KY, Chang Y-H, Wu J-Y, Lin Y, Lin S-D, Chen J-C, Huang C-F, Liang C-T: A delta-doped quantum well system with additional modulation doping. Nanoscale Res Lett 2011, 6:139.
8. Webber BT, Per MC, Drumm DW, Hollenberg LCL, Russo SP: Ab initio thermodynamics calculation of the relative concentration of NV- and NV0 defects in diamond. Phys Rev B 2012, 85:014102.
9. Conibeer G, Perez-Wurfl I, Hao X, Di D, Lin D: Si solid-state quantum dot-based materials for tandem solar cells. Nanoscale Res Lett 2012, 7:193.
10. Dick R: Inter-dimensional effects in nano-structures. Nanoscale Res Lett 2012, 7(1):581.
11. Budi A, Drumm DW, Per MC, Tregonning A, Russo SP, Hollenberg LCL: Electronic properties of multiple adjacent δ-doped Si:P layers: the approach to monolayer confinement. Phys Rev B 2012, 86:165123.
12. Sun HH, Guo FY, Li DY, Wang L, Wang DB, Zhao LC: Intersubband absorption properties of high Al content Al(x)Ga(1−x)N/GaN multiple quantum wells grown with different interlayers by metal organic chemical vapor deposition. Nanoscale Res Lett 2012, 7:649.
13. De Padova P, Ottaviani C, Ronci F, Colonna S, Olivieri B, Quaresima C, Cricenti A, Dávila ME, Hennies F, Pietzsch A, Shariati N, Le Lay G: Mn-silicide nanostructures aligned on massively parallel silicon nano-ribbons. J Phys: Condens Matter 2013, 25:014009.
14. Barnard AS, Russo SP, Snook IK: Ab initio modelling of B and N in C29 and C29H24 nanodiamond. J Chem Phys 2003, 118:10725-10728.
15. Erogbogbo F, Liu X, May JL, Narain A, Gladding P, Swihart MT, Prasad PN: Plasmonic gold and luminescent silicon nanoplatforms for multimode imaging of cancer cells. Integr Biol 2013, 5:144.
16. Weber B, Mahapatra S, Ryu H, Lee S, Fuhrer A, Reusch TCG, Thompson DL, Lee WCT, Klimeck G, Hollenberg LCL, Simmons MY: Ohm’s law survives to the atomic scale. Science 2012, 335:64.
17. Tucker JR, Shen T-C: Prospects for atomically ordered device structures based on STM lithography. Solid-State Electron 1998, 42:1061.
18. O’Brien JL, Schofield SR, Simmons MY, Clark RG, Dzurak AS, Curson NJ, Kane BE, McAlpine NS, Hawley ME, Brown GW: Towards the fabrication of phosphorus qubits for a silicon quantum computer. Phys Rev B 2001, 64:161401(R).
19. Shen T-C, Ji J-Y, Zudov MA, Du R-R, Kline JS, Tucker JR: Ultradense phosphorus delta layers grown into silicon from PH3 molecular precursors. Appl Phys Lett 2002, 80:1580.
20. Fuechsle M, Ruess FJ, Reusch TCG, Mitic M, Simmons MY: Surface gate and contact alignment for buried, atomically precise scanning tunneling microscopy-patterned devices. J Vac Sci Technol B 2007, 25:2562.
21. Pok W, Reusch TCG, Scappucci G, Ruess FJ, Hamilton AR, Simmons MY: Electrical characterization of ordered Si:P dopant arrays. IEEE Trans Nanotechnol 2007, 6:213.
22. Ruess FJ, Goh KEJ, Butcher MJ, Reusch TCG, Oberbeck L, Weber B, Hamilton AR, Simmons MY: Narrow, highly P-doped, planar wires in silicon created by scanning probe microscopy. Nanotechnology 2007, 18:044023.
23. Ruess FJ, Pok W, Goh KEJ, Hamilton AR, Simmons MY: Electronic properties of atomically abrupt tunnel junctions in silicon. Phys Rev B 2007, 75:121303(R).
24. Ruess FJ, Pok W, Reusch TCG, Butcher MJ, Goh KEJ, Oberbeck L, Scappucci G, Hamilton AR, Simmons MY: Realization of atomically controlled dopant devices in silicon. Small 2007, 3:563.
25. Fuhrer A, Füchsle M, Reusch TCG, Weber B, Simmons MY: Atomic-scale, all epitaxial in-plane gated donor quantum dot in silicon. Nano Lett 2009, 9:707.
26. Fuechsle M, Mahapatra S, Zwanenburg FA, Friesen M, Eriksson MA, Simmons MY: Spectroscopy of few-electron single-crystal silicon quantum dots. Nature Nanotechnology 2010, 5:502.
27. Wilson HF, Warschkow O, Marks NA, Schofield SR, Curson NJ, Smith PV, Radny MW, McKenzie DR, Simmons MY: Phosphine dissociation on the Si(001) surface. Phys Rev Lett 2004, 93:226102.
28. Koiller B, Hu X, Das Sarma S: Exchange in silicon-based quantum computer architecture. Phys Rev Lett 2002, 88:27903.
29. Boykin TB, Klimeck G, Friesen M, Coppersmith SN, von Allmen P, Oyafuso F, Lee S: Valley splitting in low-density quantum-confined heterostructures studied using tight-binding models. Phys Rev B 2004, 70:165325.
30. Qian G, Chang Y-C, Tucker JR: Theoretical study of phosphorus δ-doped silicon for quantum computing. Phys Rev B 2005, 71:045309.
31. Carter DJ, Warschkow O, Marks NA, McKenzie DR: Electronic structure models of phosphorus δ-doped silicon. Phys Rev B 2009, 79:033204.
32. Carter DJ, Marks NA, Warschkow O, McKenzie DR: Phosphorus δ-doped silicon: mixed-atom pseudopotentials and dopant disorder effects. Nanotechnology 2011, 22:065701.
33. Cartoixa X, Chang Y-C: Fermi-level oscillation in n-type δ-doped Si: a self-consistent tight-binding approach. Phys Rev B 2005, 72:125330.
34. Lee S, Ryu H, Klimeck G: Million atom electronic structure and device calculations on peta-scale computers. In Proc. of the 13th Int. Workshop on Computational Electronics, Tsinghua University, Beijing. Piscataway: IEEE; 2009.
35. Ryu H, Lee S, Weber B, Mahapatra S, Simmons MY, Hollenberg LCL, Klimeck G: Quantum transport in ultra-scaled phosphorus-doped silicon nanowires. In Proceedings of the 2010 IEEE Silicon Nanoelectronics Workshop, Honolulu, USA, 13-14 June 2010. Piscataway: IEEE; 2010.
36. Ryu H, Lee S, Klimeck G: A study of temperature-dependent properties of N-type δ-doped Si band-structures in equilibrium. In Proc. of the 13th Int. Workshop on Computational Electronics, Tsinghua University, Beijing. Piscataway: IEEE; 2009.
37. Lee S, Ryu H, Campbell H, Hollenberg LCL, Simmons MY, Klimeck G: Electronic structure of realistically extended atomistically resolved disordered Si:P δ-doped layers. Phys Rev B 2011, 84:205309.
38. Scolfaro LMR, Beliaev D, Enderlein R, Leite JR: Electronic structure of n-type δ-doping multiple layers and superlattices in silicon. Phys Rev B 1994, 50:8699.
39. Rodriguez-Vargas I, Gaggero-Sager LM: Sub-band and transport calculations in double n-type δ-doped quantum wells in Si. J Appl Phys 2006, 99:033702.
40. Drumm DW, Hollenberg LCL, Simmons MY, Friesen M: Effective mass theory of monolayer δ doping in the high-density limit. Phys Rev B 2012, 85:155419.
41. Delley B, Steigmeier EF: Quantum confinement in Si nanocrystals. Phys Rev B 1993, 47:1397.
42. Delley B, Steigmeier EF: Size dependence of band gaps in silicon nanostructures. Appl Phys Lett 1995, 67:2370.
43. Ramos LE, Teles LK, Scolfaro LMR, Castineira JLP, Rosa AL, Leite JR: Structural, electronic, and effective-mass properties of silicon and zinc-blende group-III nitride semiconductor compounds. Phys Rev B 2001, 63:165210.
44. Zhou ZY, Brus L, Friesner R: Electronic structure and luminescence of 1.1- and 1.4-nm silicon nanocrystals: oxide shell versus hydrogen passivation. Nano Lett 2003, 3:163.
45. Barnard AS, Russo SP, Snook IK: Ab initio modelling of band states in doped diamond. Philos Mag 2003, 83:1163.
46. Kresse G, Joubert D: From ultrasoft pseudopotentials to the projector augmented-wave method. Phys Rev B 1999, 59:1758.
47. Blöchl PE: Projector augmented-wave method. Phys Rev B 1994, 50:17953.
48. Artacho E, Anglada E, Dieguez O, Gale JD, Garcia A, Junquera J, Martin RM, Ordejon P, Pruneda JM, Sanchez-Portal D, Soler JM: The SIESTA method; developments and applicability. J Phys Condens Matter 2008, 20:064208.
49. Troullier N, Martins JL: Efficient pseudopotentials for plane-wave calculations. Phys Rev B 1991, 43:1993.
50. Perdew JP, Burke K, Ernzerhof M: Generalized gradient approximation made simple. Phys Rev Lett 1996, 77:3865.
51. Monkhorst HJ, Pack JD: Special points for Brillouin-zone integrations. Phys Rev B 1976, 13:5188.
52. Blöchl PE, Jepsen O, Andersen OK: Improved tetrahedron method for Brillouin-zone integrations. Phys Rev B 1994, 49:16223.
53. Wilson HF, Warschkow O, Marks NA, Curson NJ, Schofield SR, Reusch TCG, Radny MW, Smith PV, McKenzie DR, Simmons MY: Thermal dissociation and desorption of PH3 on Si(001): a reinterpretation of spectroscopic data. Phys Rev B 2006, 74:195310.
54. Bradley CJ, Cracknell AP: The Mathematical Theory of Symmetry in Solids: Representation Theory for Point Groups and Space Groups. Oxford: Clarendon Press; 1972.
55. Chelikowsky JR, Cohen ML: Electronic structure of silicon. Phys Rev B 1974, 10:5095.
adiabatic approximation
Quick Reference
An approximation used in quantum mechanics when the time dependence of parameters, such as the internuclear distance between atoms in a molecule, is slowly varying. This approximation means that the solution of the Schrödinger equation at one time goes continuously over to the solution at a later time. It was formulated by Max Born and the Soviet physicist Vladimir Alexandrovich Fock (1898–1974) in 1928. The Born-Oppenheimer approximation is an example of the adiabatic approximation.
Subjects: Chemistry — Physics.
Quantum chemistry/physics
1. Oct 21, 2008 #1
Hi all.
This post is about quantum chemistry, but my question arises when looking at the problem from a physical point of view.
The Schrödinger equation gives us the stationary states of a system. Let's say that we are looking at a system with two stationary states (Dirac notation - but the LaTeX does not work, so bear with me), |1> and |2>, each with an associated eigenenergy. These two orthonormal states span the Hilbert space we are working in.
Now here's my question: I am looking at a figure of a molecule with six orbitals, and each orbital is represented by an orthonormal basis vector |1>, |2>, |3>, |4>, ..., |6>. An eigenstate is then a linear combination of these basis vectors (orbitals) with an associated energy.
Question: How am I to interpret these basis vectors |1>, |2>, |3>, |4>, ..., |6>? They surely cannot represent stationary states (i.e. solutions to the time-independent Schrödinger equation), because then a linear combination of them would not have an eigenenergy.
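To make this concrete, here is a toy numerical check (a Hückel-style ring of six orbitals is assumed purely for illustration): the basis kets are site orbitals, and only the eigenvectors of the Hamiltonian are stationary states with definite eigenenergies; a single basis ket is not.

```python
import numpy as np

# Hypothetical Huckel-style Hamiltonian for six orbitals on a ring
# (benzene-like), written in the orbital basis |1>..|6>: on-site
# energy alpha on the diagonal, hopping beta between neighbours.
alpha, beta = 0.0, -1.0
H = np.zeros((6, 6))
for i in range(6):
    H[i, i] = alpha
    H[i, (i + 1) % 6] = beta
    H[(i + 1) % 6, i] = beta

E, C = np.linalg.eigh(H)    # columns of C are the stationary states

# Each eigenstate is a linear combination of the basis orbitals,
# and it does have a definite eigenenergy:
ground = C[:, 0]
assert np.allclose(H @ ground, E[0] * ground)

# A single basis ket |1> is NOT an eigenstate: H|1> is not
# proportional to |1>, so |1> has no definite eigenenergy.
ket1 = np.eye(6)[:, 0]
print(E)                                                 # ring energies: -2, -1, -1, 1, 1, 2
print(np.allclose(H @ ket1, (ket1 @ H @ ket1) * ket1))   # prints False
```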
Thanks in advance. Any help will be greatly appreciated, since I cannot get help from anywhere else at the moment.
Best regards,
3. Oct 22, 2008 #2
Can I get a moderator to move this thread to the "Advanced Physics Homework Help"? I think it belongs there more than in this section.
Thanks in advance.
ac0ee11d0bfe1088 | First published Tue Apr 5, 2016
[Editor's Note: The following new entry by John Manchak and Bryan W. Roberts replaces the former entry on this topic by the previous author.]
A supertask is a task that consists in infinitely many component steps, but which in some sense is completed in a finite amount of time. Supertasks were studied by the pre-Socratics and continue to be objects of interest to modern philosophers, logicians and physicists. The term “super-task” itself was coined by J.F. Thomson (1954).
Here we begin with an overview of the analysis of supertasks and their mechanics. We then discuss the possibility of supertasks from the perspective of general relativity.
1. Mechanical properties
Strange things can happen when one carries out an infinite task.
For example, consider a hotel with a countably infinite number of rooms. One night when the hotel is completely occupied, a traveler shows up and asks for a room. “No problem,” the receptionist replies, “there’s plenty of space!” The first occupant then moves to the second room, the second to the third room, the third to the fourth room, and so on all the way up. The result is a hotel that has gone from being completely occupied to having one room free, and the traveler can stay the night after all. This supertask was described in a 1924 lecture by David Hilbert, as reported by Gamow (1947).
One might take such unusual results as evidence against the possibility of supertasks. Alternatively, we might take them to seem strange because our intuitions are based on experience with finite tasks, and those intuitions break down in the analysis of supertasks. For now, let us simply try to come to grips with some of the unusual mechanical properties that supertasks can have.
1.1 Missing final and initial steps: The Zeno walk
Supertasks often lack a final or initial step. A famous example is the first of Zeno’s Paradoxes, the Paradox of the Dichotomy. The runner Achilles begins at the starting line of a track and runs ½ of the distance to the finish line. He then runs half of the remaining distance, or ¼ of the total. He then runs half the remaining distance again, or ⅛ of the total. And he continues in this way ad infinitum, getting ever-closer to the finish line (Figure 1.1.1). But there is no final step in this task.
Fig 1.1.1. The Zeno Dichotomy supertask.
There is also a “regressive” version of the Dichotomy supertask that has no initial step. Suppose that Achilles does reach the finish line. Then he would have had to travel the last ½ of the track, and before that ¼ of the track, and before that ⅛ of the track, and so on. In this description of the Achilles race, we imagine winding time backwards and viewing Achilles getting ever-closer to the starting line (Figure 1.1.2). But now there is no initial step in the task.
Fig 1.1.2. Regressive version of the Zeno Dichotomy supertask.
Zeno, at least as portrayed in Aristotle’s Physics, argued that as a consequence, motion does not exist. Since an infinite number of steps cannot be completed, Achilles will never reach the finish line (or never have started in the regressive version). However, modern mathematics provides ways of explaining how Achilles can complete this supertask. As Salmon (1998) has pointed out, much of the mystery of Zeno’s walk is dissolved given the modern definition of a limit. This provides a precise sense in which the following sum converges:

\[ \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \cdots = 1. \]
Although it has infinitely many terms, this sum is a geometric series that converges to 1 in the standard topology of the real numbers. A discussion of the philosophy underpinning this fact can be found in Salmon (1998), and the mathematics of convergence in any real analysis textbook that deals with infinite series. From this perspective, Achilles actually does complete all of the supertask steps in the limit as the number of steps goes to infinity. One might only doubt whether or not the standard topology of the real numbers provides the appropriate notion of convergence in this supertask. A discussion of the subtleties of the choice of topology has been given by McLaughlin (1998).
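Numerically, the partial sums of this series behave exactly as the limit definition says (a quick sketch):

```python
# Partial sums of Zeno's geometric series 1/2 + 1/4 + 1/8 + ...:
# after n steps Achilles has covered 1 - (1/2)**n of the track,
# so the partial sums converge to 1 (the full track).
def distance_covered(n_steps):
    return sum(0.5 ** k for k in range(1, n_steps + 1))

for n in (1, 2, 3, 10, 50):
    print(n, distance_covered(n))

# The remaining distance after n steps is exactly (1/2)**n.
assert distance_covered(50) == 1 - 0.5 ** 50
```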
Max Black (1950) argued that it is nevertheless impossible to complete the Zeno task, since there is no final step in the infinite sequence. The existence of a final step was similarly demanded on a priori terms by Gwiazda (2012). But as Thomson (1954) and Earman and Norton (1996) have pointed out, there is a sense in which this objection equivocates on two different meanings of the word “complete.” On the one hand “complete” can refer to the execution of a final action. This sense of completion does not occur in Zeno’s Dichotomy, since for every step in the task there is another step that happens later. On the other hand, “complete” can refer to carrying out every step in the task, which certainly does occur in Zeno’s Dichotomy. From Black’s argument one can see that the Zeno Dichotomy cannot be completed in the first sense. But it can be completed in the second. The two meanings for the word “complete” happen to be equivalent for finite tasks, where most of our intuitions about tasks are developed. But they are not equivalent when it comes to supertasks.
Hermann Weyl (1949, §2.7) suggested that if one admits that the Zeno race is possible, then one should equally admit that it is possible for a machine to carry out an infinite number of tasks in finite time. However, one difference between the Zeno run and a machine is that the Zeno run is continuous, while the tasks carried out by a machine are typically discrete. This led Grünbaum (1969) to consider the “staccato” version of the Zeno run, in which Achilles pauses for successively shorter times at each interval.
1.2 Missing limits: Thomson’s Lamp
Supertasks are often described by sequences that do not converge. J. F. Thomson (1954) introduced one such example now known as Thomson’s Lamp, which he thought illustrated a sense in which supertasks truly are paradoxical.
Suppose we switch off a lamp. After 1 minute we switch it on. After ½ a minute more we switch it off again, ¼ on, ⅛ off, and so on. Summing each of these times gives rise to an infinite geometric series that converges to 2 minutes, after which time the entire supertask has been completed. But when 2 minutes is up, is the lamp on or off?
Thomson's lamp
Fig 1.2.1. Thomson’s lamp.
It may seem absurd to claim that it is on: for each moment that the lamp was turned on, there is a later moment at which it was turned off. But it would seem equally absurd to claim that it is off: for each moment that the lamp is turned off, there is a later moment that it was turned on. This paradox, according to Thomson, suggests that the supertask associated with the lamp is impossible.
To analyze the paradox, Thomson suggested we represent the “on” state of the lamp with the number 1 and the “off” state with 0. The supertask then consists in the sequence of states,
\[ 0, 1, 0, 1, 0, 1, \ldots . \]
This sequence does not converge to any real number in the standard real topology. However, one might redefine what it means for a sequence to converge in response to this. For example, we could define convergence in terms of the arithmetic mean. Given a sequence \(x_n\), the Cesàro mean is the sequence \(C_1 = x_1\), \(C_2 = (x_1 + x_2)/2\), \(C_3 = (x_1 + x_2 + x_3)/3\), and so on. These numbers describe the average value of the sequence up to a given term. One says that a sequence \(x_n\) Cesàro converges to a number \(C\) if and only if \(C_n\) converges (in the ordinary sense) to \(C\). It is then well-known that the sequence \(0, 1, 0, 1, \ldots\) Cesàro converges to ½ (see e.g. Bashirov 2014).
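The Cesàro means just described are straightforward to compute; the following sketch (a simple illustration of ours, not drawn from the cited texts) shows the means of the lamp sequence settling at ½:

```python
def cesaro_means(xs):
    """Running arithmetic means C_n = (x_1 + ... + x_n)/n of a sequence."""
    total = 0.0
    means = []
    for n, x in enumerate(xs, start=1):
        total += x
        means.append(total / n)
    return means

lamp = [n % 2 for n in range(1000)]   # 0, 1, 0, 1, ... ("off", "on", ...)
print(cesaro_means(lamp)[-1])         # 0.5: the Cesàro limit of the lamp states
```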
Thomson pointed out that this argument is not very helpful without an interpretation of what lamp-state is represented by ½. We want to know if the lamp is on or off; saying that its end state is associated with a convergent arithmetic mean of ½ does little to answer the question. However, this approach to resolving the paradox has still been pursued, for example by Pérez Laraudogoita, Bridger and Alper (2002) and by Dolev (2007).
Are there other consistent ways to describe the final state of Thomson’s lamp in spite of the missing limit?
Benacerraf (1962) pointed out a sense in which the answer is yes. The description of the Thomson lamp only actually specifies what the lamp is doing at each finite stage before 2 minutes. It says nothing about what happens at 2 minutes, especially given the lack of a converging limit. It may still be possible to “complete” the description of Thomson’s lamp in a way that leads it to be either on after 2 minutes or off after 2 minutes. The price is that the final state will not be reached from the previous states by a convergent sequence. But this by itself does not amount to a logical inconsistency.
Such a completion of Thomson’s description was explicitly constructed by Earman and Norton (1996) using the following example of a bouncing ball.
Suppose a metal ball bounces on a conductive plate, bouncing a little lower each time until it comes to a rest on the plate. Suppose the bounces follow the same geometric pattern as before. Namely, the ball is in the air for 1 minute after the first bounce, ½ minute after the second bounce, ¼ minute after the third, ⅛ minute after the fourth, and so on. Then the entire infinite sequence of bounces is a supertask.
Now suppose that the ball completes a circuit when it strikes the metal plate, thereby switching on a lamp. This is a physical system that implements Thomson’s lamp. In particular, the lamp is switched on and off infinitely many times over the course of a finite duration of 2 minutes.
Thomson's lamp implemented as a physical circuit
Fig 1.2.2. Thomson’s lamp implemented by a bouncing ball: contact of the bouncing ball with the plate switches the Thomson lamp on. The supertask ends with the lamp on.
What is the state of this lamp after 2 minutes? The ball will have come to rest on the plate, and so the lamp will be on. There is no mystery in this description of Thomson’s lamp.
Alternatively, we could arrange the ball so as to break the circuit when it makes contact with the plate. This gives rise to another implementation of Thomson’s lamp, but one that is off after 2 minutes when the ball comes to its final resting state.
Thomson's lamp implemented as a physical circuit
Fig 1.2.3. Another implementation of Thomson’s lamp: contact of the bouncing ball with the plate switches the Thomson lamp off. The supertask ends with the lamp off.
These examples show that it is possible to fill in the details of Thomson’s lamp in a way that either renders it definitely on after the supertask, or definitely off. For this reason, Earman and Norton conclude with Benacerraf that the Thomson lamp is not a matter of paradox but of an incomplete description.
As with the Zeno Dichotomy, there is a regressive version of the Thomson lamp supertask. Such a lamp has been studied by Uzquiano (2012), although as a set of instructions rather than a set of tasks. Consider a lamp that has been switched on at 2 seconds past the hour, off at 1 second past, on at ½ a second past, off at ¼ a second past, and so on. What is the state of the lamp on the hour, just before the supertask has begun? This supertask can be viewed as incomplete in the same way as the original Thomson lamp. Insofar as the mechanics of bouncing balls and electric circuits described in Earman and Norton’s lamp are time reversal invariant, it follows that the time-reversed system is a possibility as well, which is spontaneously excited to begin bouncing, providing a physical implementation of the regressive Thomson lamp. However, whether the reversed Thomson lamp is a physical possibility depends on whether or not the system is time reversible. A difficulty is that its initial state will not determine the subsequent history of an infinity of alternations.
1.3 Discontinuous quantities: The Littlewood-Ross Paradox
Sometimes supertasks require a physical quantity to be discontinuous in time. One example of this, known as Ross’ paradox, was described by John Littlewood (1953) as an “infinity paradox” and expanded upon by Sheldon Ross (1988) in his well-known textbook on probability. It goes as follows.
Suppose we have a jar—a very large jar—with the capacity to hold infinitely many balls. We also have a countably infinite pile of balls, numbered 1, 2, 3, 4, …. First we drop balls 1–10 into the jar, then remove ball 1. (This adds a total of nine balls to the jar.) Then we drop balls 11–20 in the jar, and remove ball 2. (This brings the total up to eighteen.) Suppose that we continue in this way ad infinitum, and that we do so with ever-increasing speed, so that we will have used up our entire infinite pile of balls in finite time (Figure 1.3.1). How many balls will be in the jar when this supertask is over?
The Littlewood-Ross Paradox
Fig 1.3.1. The Littlewood-Ross procedure.
Both Littlewood (1953) and Ross (1988) responded that the answer is zero. Their reasoning went as follows.
Ball 1 was removed at the first stage. Ball 2 was removed at the second stage. Ball n was removed at the nth stage, and so on ad infinitum. Since each ball bears some label n, and since ball n was removed at the nth stage of the supertask, there can only be zero balls left in the jar after every stage has been completed. One can even identify the moment at which each of them was removed.
Some may be tempted to object that, on the contrary, the number of balls in the jar should be infinite when the supertask is complete. After the first stage there are 9 balls in the jar. After the second stage there are 18. After the third stage there are 27. In the limit as the number of stages approaches infinity, the total number of balls in the jar diverges to infinity. If the final state of the jar is determined by what the finite-stage states are converging to, then the supertask should conclude with infinitely many balls in the jar.
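The two competing limits can be exhibited side by side in a short simulation (a sketch of ours; the function name `jar_after` is hypothetical):

```python
def jar_after(stages):
    """Set of balls in the jar after `stages` steps of the procedure."""
    jar = set()
    for n in range(1, stages + 1):
        jar.update(range(10 * n - 9, 10 * n + 1))  # drop balls 10n-9 .. 10n in
        jar.remove(n)                              # remove ball n
    return jar

for stages in (1, 2, 100):
    print(stages, len(jar_after(stages)))  # 9, 18, 900: the count diverges
# Yet ball k is removed at stage k and never returns, so no individual ball
# survives every stage -- the set-theoretic limit of the contents is empty.
```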
If both of these responses are equally reasonable, then we have a contradiction. The jar cannot contain both zero balls and infinitely many. It is in this sense that the Littlewood-Ross example might be a paradox.
Allis and Koetsier (1991) argued that only the first response is justified because of a reasonable “principle of continuity”: that the positions of the balls in space are a continuous function of time. Without such a principle, the positions of the balls outside the jar could be allowed to teleport discontinuously back into the jar as soon as the supertask is complete. But with such a principle in place, one can conclude that the jar must be empty at the end of the supertask. This principle has been challenged by Van Bendegem (1994), with a clarifying rejoinder by Allis and Koetsier (1996).
Earman and Norton (1996) follow Allis and Koetsier (and Littlewood and Ross) in demanding that the worldlines of the balls in the jar be continuous, but point out that there is a different sense of discontinuity that develops as a consequence. (A ‘worldline’ is used here to describe the trajectory of a particle through space and time; it is discussed more below in the section on Time in Relativistic Spacetime.) Namely, if one views the number of balls in the jar as approximated by a function \(N(t)\) of time, then this “number function” is discontinuous in the Littlewood-Ross supertask, blowing up to an arbitrarily large value over the course of the supertask before dropping discontinuously to 0 once it is over. In this sense, the Littlewood-Ross paradox presents us with a choice, to either,
1. Take the worldline of each ball in the jar to be continuous in time; or
2. Take the number \(N(t)\) of balls in the jar to be approximated by a continuous function of time;
but not both. The example thus seems to require a physical quantity to be discontinuous in time: either in the worldlines of the balls, or in the number of balls in the jar.
A variation of the Littlewood-Ross example has been posed as a puzzle for decision theory by Barrett and Arntzenius (1999, 2002). They propose a game involving an infinite number of $1 bills, each numbered by a serial number 1, 2, 3, …, and in which a person begins with $0. The person must then choose between the following two options.
• Option A: accept $1; or
• Option B: first accept \(\$2^{n+1}\), where n is the number of times the offer has been made, and then return whatever bill the player holds with the smallest serial number.
At each finite stage of the game it appears to be rational to choose Option B. For example, at stage n=1 Option B returns $3, while Option A returns $1. At stage n=2 Option B returns $7 while Option A returns $1. And so on.
However, suppose that one plays this game as a supertask, so that the entire infinite number of offers is played in finite time. Then how much money will the player have? Following exactly the same reasoning as in the Littlewood-Ross paradox, we find that the answer is $0. For each bill’s serial number, there is a stage at which that bill was returned. So, if we presume the worldlines of the bills must be continuous, then the infinite game ends with the player winning nothing at all. This is a game in which the rational strategy at each finite stage does not provide a winning strategy for the infinite game.
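The game can be simulated at finite stages. The following is a sketch under the assumption that Option B pays out \(2^{n+1}\) one-dollar bills carrying the next unused serial numbers (the function name `play_option_b` is ours):

```python
def play_option_b(stages):
    """Serial numbers of the $1 bills held after `stages` rounds of Option B."""
    held = set()
    next_serial = 1
    for n in range(1, stages + 1):
        payout = 2 ** (n + 1)                     # accept 2**(n+1) new bills
        held.update(range(next_serial, next_serial + payout))
        next_serial += payout
        held.remove(min(held))                    # return the lowest-serial bill
    return held

for n in (1, 2, 3):
    print(n, len(play_option_b(n)))  # 3, 10, 25: finite-stage wealth grows
# But the bills are returned in serial order, one per stage, so every bill
# is eventually given back: played as a supertask, the game ends with $0.
```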
There are variations on this example that have a more positive yield for the players. For example, Earman and Norton (1996) propose the following pyramid marketing scheme. Suppose that an agent sells two shares of a business for $1,000 each to a pair of agents. Each agent splits their share in two and sells it for $2,000 to two more agents, thus netting $1,000 while four new agents go into debt for $1,000 each. Each of the four new agents then do the same, and so on ad infinitum. How does this game end?
If the pool of agents is only finitely large, then the last agents will get saddled with the debt while all the previous agents make a profit. But if the pool is infinitely large, and the pyramid marketing scheme becomes a supertask, then all of the agents will have profited when it is completed. At each stage in which a given agent is in debt, there is a later stage in which the agent sells two shares and makes $1,000. This is thus a game that starts with equal total amounts of profit and debt, but concludes having converted the debt into pure profit.
1.4 Classical mechanical supertasks
The discussions of supertasks so far suggest that the possibility of supertasks is not so much a matter of logical possibility as it is “physical possibility.” But what does “physical possibility” mean? One natural interpretation is that it means, “possible according to some laws of physics.” Thus, we can make the question of whether supertasks are possible more precise by asking, for example, whether supertasks are compatible with the laws of classical particle mechanics.
Earman and Norton’s (1996) bouncing ball provides one indication that the answer is yes. Another particularly simple example was introduced by Pérez Laraudogoita (1996, 1998), which goes as follows.
Suppose an infinite lattice of particles of the same mass are arranged so that there is a distance of ½ between the first and the second, a distance of ¼ between the second and the third, a distance of ⅛ between the third and the fourth, and so on. Now imagine that a new particle of the same mass collides with the first particle in the lattice, as in Figure 1.4.1. If it is a perfectly elastic collision, then the incoming particle will come to rest and the velocity will be transferred to the struck particle. Suppose it takes ½ of a second for the second collision to occur. Then it will take ¼ of a second for the third to occur, ⅛ of a second for the fourth, and so on. The entire infinite process will thus be completed after 1 second.
Jon Pérez Laraudogoita's 'Beautiful Supertask'
Fig 1.4.1. Jon Pérez Laraudogoita’s ‘Beautiful Supertask’
Earman and Norton (1998) observed several curious facts about this system. First, unlike Thomson’s lamp, this supertask does not require unbounded speeds. The total velocity of the system is never any more than the velocity of the original moving particle. Second, this supertask takes place in a bounded region of space. So, there are no boundary conditions “at infinity” that can rule out the supertask. Third, although energy is conserved in each local collision, the global energy of this system is not conserved, since after finite time it becomes a lattice of infinitely many particles all at rest. Finally, the supertask depends crucially on there being an infinite number of particles, and the width of these particles must shrink without bound while keeping the mass fixed. This means the mass density of the particles must grow without bound. The failure of global energy conservation and other curious features of this system have been studied by Atkinson (2007, 2008), Atkinson and Johnson (2009, 2010) and by Peijnenburg and Atkinson (2008) and Atkinson and Peijnenburg (2014).
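The timing and energy bookkeeping of this supertask can be sketched numerically (assuming unit masses and a unit incoming speed, so that each elastic collision simply hands the velocity on to the next particle; `collision_time` is our name):

```python
def collision_time(n):
    """Time of the nth collision, with gaps of 1/2, 1/4, ... between collisions."""
    return sum(2.0 ** -k for k in range(1, n))   # t_1 = 0, t_n = 1 - 2**-(n-1)

print(collision_time(2), collision_time(20))     # 0.5, just under 1
# Every one of the infinitely many collisions happens before t = 1.  At any
# t < 1 exactly one particle carries the unit kinetic energy; the limit state,
# with all particles at rest, has none -- global energy conservation fails.
assert all(collision_time(n) < 1 for n in range(1, 50))
```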
Another kind of classical mechanical supertask was described by Pérez Laraudogoita (1997). Consider again the infinite lattice of particles of the same mass, but this time suppose that the first particle is motionless, that the second particle is headed towards the first with some velocity, and that the velocity of each successive particle doubles (Figure 1.4.2). The first collision sets the first particle in motion. But a later collision then sets it moving faster, and a later collision even faster, and so on.
A supertask that relies on unbounded speed
Fig 1.4.2. A supertask that relies on unbounded speed.
It is not hard to arrange this situation so that the first collision happens after ½ of a second, the second collision after ¼ of a second, the third after ⅛ of a second, and so on (Pérez Laraudogoita 1997). So again we have a supertask that is completed after one second.
What is the result of this supertask? The answer is that none of the particles remain in space. They cannot be anywhere in space, since for each horizontal position that a given particle could occupy, there is a time before 1 second at which it is pushed out of that position by a collision. The worldline of any one of the particles from this supertask can be illustrated using Figure 1.4.3. This is what Malament (2008, 2009) has referred to as a “space evader” trajectory. The time-reversed “space invader” trajectory is one in which the vacuum is spontaneously populated with particles after some fixed time.
Worldline of the supertask particle
Fig 1.4.3. Worldline of the supertask particle.
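A rough lower bound makes the “space evader” behaviour concrete. The following is a sketch under the stated timings, assuming the nth collision occurs at \(t = 1 - 2^{-n}\) and the front particle's speed doubles at each collision (the function name `distance_by` is ours):

```python
def distance_by(n_collisions, v=1.0):
    """Lower bound on the distance the front particle covers before t = 1,
    assuming speed 2**(n-1) * v between the nth and (n+1)th collisions."""
    d = 0.0
    for n in range(1, n_collisions + 1):
        speed = 2.0 ** (n - 1) * v
        interval = 2.0 ** -(n + 1)    # time remaining until the next collision
        d += speed * interval
    return d

print(distance_by(10), distance_by(40))  # 2.5, 10.0: each collision adds v/4
# The total grows without bound, so by t = 1 the particle has left every
# bounded region of space.
```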
Earman and Norton (1998) gave some variations on this supertask, including one which occurs in a bounded region in space. Unlike the example of Pérez Laraudogoita (1996), this supertask also essentially requires particles to be accelerated to arbitrarily high speeds, and in this sense is essentially non-relativistic. See Pérez Laraudogoita (1999) for a rejoinder.
This supertask is modeled on an example of Benardete (1964), who considered a space ship that successively doubles its speed until it escapes to spatial infinity. Supertasks of this kind were also studied by physicists like Lanford (1975, §4), who identified a system of particles colliding elastically that can undergo an infinite number of collisions in finite time. Mather and McGehee (1975) pointed out a similar example. Earman (1986) discussed the curious behavior of Lanford’s example as well, pointing out that such supertasks provide examples of classical indeterminism, but can be eliminated by restricting to finitely many particles or by imposing appropriate boundary conditions.
1.5 Quantum mechanical supertasks
It is possible to carry some of the above considerations of supertasks over from classical to quantum mechanics. The examples of quantum mechanical supertasks that have been given so far are somewhat less straightforward than the classical supertasks above. However, they also bear a more interesting possible relationship to physical experiments.
Example 1: Norton’s Lattice
Norton (1999) investigated whether there exists a direct quantum mechanical analogue of the kinds of supertasks discussed above. He began by considering the classical scenario shown in Figure 1.5.1 of an infinite lattice of interacting harmonic oscillators. Assuming the springs all have the same tension and solving the equation of motion for this system, Norton found that it can spontaneously excite, producing an infinite succession of oscillations in the lattice in a finite amount of time.
Norton's harmonic oscillator supertask
Fig 1.5.1. Norton’s infinite harmonic oscillator system.
Using this example as a model, Norton produced a similar supertask for a quantum lattice of harmonic oscillators. Begin with an infinite lattice of 2-dimensional quantum systems, each with a ground state \(\ket{\phi}\) and an excited state \(\ket{\chi}\). Consider the collection of vectors,
\[\begin{align} \ket{0} &= \ket{\phi} \otimes \ket{\phi} \otimes \ket{\phi} \otimes \ket{\phi} \otimes \cdots \\ \ket{1} &= \ket{\chi} \otimes \ket{\phi} \otimes \ket{\phi} \otimes \ket{\phi} \otimes \cdots \\ \ket{2} &= \ket{\phi} \otimes \ket{\chi} \otimes \ket{\phi} \otimes \ket{\phi} \otimes \cdots \\ \ket{3} &= \ket{\phi} \otimes \ket{\phi} \otimes \ket{\chi} \otimes \ket{\phi} \otimes \cdots \\ \ket{4} &= \ket{\phi} \otimes \ket{\phi} \otimes \ket{\phi} \otimes \ket{\chi} \otimes \cdots \\ \;\vdots& \end{align}\]
For simplicity, we restrict attention to the possible states of the system that are spanned by this set. We posit a Hamiltonian that has the effect of leaving |0⟩ invariant; of creating |1⟩ and destroying |2⟩; of creating |2⟩ and destroying |3⟩; and so on. Norton then solved the differential form of the Schrödinger equation for this interaction and argued that it admits solutions in which all of the nodes in the infinite lattice start in their ground state, but all become spontaneously excited in finite time.
Norton’s quantum supertask requires a non-standard quantum system because the dynamical evolution he proposes is not unitary, even though it obeys a differential equation in wavefunction space that takes the form of the Schrödinger equation (Norton 1999, §5). Nevertheless, Norton’s quantum supertask has fruitfully appeared in physical applications, having been found to arise naturally in a framework for perturbative quantum field theory proposed by Duncan and Niedermaier (2013, Appendix B).
Example 2: Hepp Measurement
Although quantum systems may sometimes be in a pure superposition of measurable states, we never observe our measurement devices to be in such states when they interact with quantum systems. On the contrary, our measurement devices always seem to display definite values. Why? Hepp (1972) proposed to explain this by modeling the measurement process using a quantum supertask. This example was popularized by Bell (1987, §6) and proposed as a solution to the measurement problem by Wan (1980) and Bub (1988).
Here is a toy example illustrating the idea. Suppose we model an idealised measuring device as consisting of an infinite number of fermions. We imagine that the fermions do not interact with each other, but that a finite number of them will couple to our target system whenever we make a measurement. Then an observable characterising the possible outcomes of a given measurement will be a product corresponding to some finite number n of observables,
\[ A = A_1 \otimes A_2 \otimes \cdots \otimes A_n \otimes I \otimes I \otimes I \otimes \cdots \]
Restricting to a finite number of fermions at a time has the effect of splitting the Hilbert space of states into special subspaces called superselection sectors, which have the property that when \(\ket{\psi}\) and \(\ket{\phi}\) come from different sectors, any superposition \(a\ket{\psi} + b\ket{\phi}\) with \(|a|^2 + |b|^2 = 1\) will be a mixed state. It turns out in particular that the space describing the state in which all the fermions are \(z\)-spin-up is in a different superselection sector than the space in which they are all spin down. Although this may be puzzling for the newcomer, it can be found in any textbook that deals with superselection. And it allows us to construct an interesting supertask describing the measurement process. The following simplified version of it was given by Bell (1987).
Suppose we wish to measure a single fermion. We model this as a wavefunction that zips by the locations of each fermion in our measurement device, interacting locally with the individual fermions in the device as it goes (Figure 1.5.2). The interaction is set up in such a way that every fermion is passed in finite time, and such that after the process is completed, the measurement device indicates what the original state of the fermion being measured was. In particular, suppose the single fermion begins in a \(z\)-spin-up state. Then, after it has zipped by each of the infinite fermions, they will all be found in the \(z\)-spin-up state. If the single fermion begins in a \(z\)-spin-down state, then the infinite collection of fermions would all be \(z\)-spin-down. What if the single fermion was in a superposition? Then the infinite collection of fermions would contain some mixture of \(z\)-spin up and \(z\)-spin down states.
Bell's implementation of the Hepp measurement supertask
Fig 1.5.2. Bell’s implementation of the Hepp measurement supertask.
Hepp found that, because of the superselection structure of this system, this measurement device admits mixed states that can indicate the original state of the single fermion, even when the latter begins in a pure superposition. Suppose we denote the \(z\)-spin observable for the \(n\)th fermion in the measurement device as \(s_n = I \otimes \cdots \otimes I \otimes \sigma_z \otimes I \otimes \cdots\), where \(\sigma_z\) occupies the \(n\)th factor. We now construct a new observable, given by,
\[ S = \lim_{n\rightarrow\infty} \tfrac{1}{n}(s_1 + s_2 + \cdots + s_n). \]
This observable has the property that \(\langle \psi, S\phi\rangle = 1\) if \(\ket{\psi}\) and \(\ket{\phi}\) both lie in the same superselection sector as the state in which all the fermions in the measurement device are \(z\)-spin-up. It also has the property that \(\langle\psi,S\phi\rangle = -1\) if they lie in the same superselection sector as the all-down state. But more interestingly, suppose the target fermion that we want to measure is in a pure superposition of \(z\)-spin-up and \(z\)-spin-down states. Then, after it zips by all the fermions in the measurement device, that measurement device will be left in a superposition of the form \(a\ket{\uparrow} + b\ket{\downarrow}\), where \(\ket{\uparrow}\) is the state in which all the fermions in the device are spin-up and \(\ket{\downarrow}\) is the state in which they are all spin down. Since \(\ket{\uparrow}\) and \(\ket{\downarrow}\) are in different superselection sectors, it follows that their superposition must be a mixed state. In other words, this model allows the measurement device to indicate the pure state of the target fermion, even when that state is a pure superposition, without the device itself being in a pure superposition.
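A toy numerical illustration of the macroscopic average behind \(S\) (ordinary arithmetic on spin values \(\pm 1\), not the operator itself; the helper name `mean_spin` is ours):

```python
def mean_spin(spins):
    """Finite-n average (s_1 + ... + s_n)/n of z-spin values (+1 or -1)."""
    return sum(spins) / len(spins)

n = 1000
print(mean_spin([+1] * n))                    # 1.0 for the all-up configuration
print(mean_spin([-1] * n))                    # -1.0 for the all-down configuration
# Flipping finitely many spins never changes the n -> infinity limit, which
# is why the limiting average labels whole superselection sectors:
print(mean_spin([-1] * 5 + [+1] * (n - 5)))   # 0.99, still tending to 1
```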
The supertask underpinning this model requires an infinite number of interactions. As Hepp and Bell described it, the model was unrealistic because it required an infinite amount of time. However, a similar system was shown by Wan (1980) and Bub (1988) to take place in finite time. Their approach appears at first glance to be a promising model of measurement. However, Landsman (1991) pointed out that it is inadequate on one of two levels: either the dynamics is not automorphic (which is the analogue of unitarity for such systems), or the task is not completed in finite time. Landsman (1995) has argued that neither of these two outcomes is plausible for a realistic local description of a quantum system.
Example 3: Continuous Measurement
Another quantum supertask is found in the so-called Quantum Zeno Effect. This literature begins with a question: what would happen if we were to continually monitor a quantum system, like an unstable atom? The predicted effect is that the system would not change, even if it is an unstable atom that would otherwise quickly decay.
Misra and Sudarshan (1977) proposed to make the concept of “continual monitoring” precise using a Zeno-like supertask. Imagine that an unstable atom is evolving according to some law of unitary evolution \(U_t\). Suppose we measure whether or not the atom has decayed by following that regressive form of Zeno’s Dichotomy above. Namely, we measure it at time \(t\), but also at time \(t/2\), and before that at time \(t/4\), and at time \(t/8\), and so on. Let \(E\) be a projection corresponding to the initial undecayed state of the particle. Finding the atom undecayed at each stage in the supertask then corresponds to the sequence,
\[ EU_tE,\; EU_{t/2}E,\; EU_{t/4}E,\; EU_{t/8}E,\ldots. \]
Misra and Sudarshan use this sequence as a model for continuous measurement, by supposing that the sequence above converges to an operator \(T(t)=E\), and that it does so for all times \(t\) greater than or equal to zero. The aim is for this to capture the claim that the atom is continually monitored beginning at a fixed time \(t=0\). They prove from this assumption that, for most reasonable quantum systems, if the initial state is undecayed in the sense that \(\mathrm{Tr}(\rho E)=1\), then the probability that the atom will decay in any given time interval \([0,t]\) is equal to zero. That is, continual monitoring implies that the atom will never decay.
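The effect can be seen in a standard textbook toy model. The following sketch assumes a two-level system whose survival amplitude over time \(t\) is \(\cos(\omega t)\), so that \(n\) equally spaced measurements give survival probability \(\cos^{2n}(\omega t/n)\) (the function name is ours):

```python
import math

def survival_probability(n, omega=1.0, t=1.0):
    """Probability of finding the system undecayed in all n equally spaced
    measurements over [0, t], for a two-level system with Rabi frequency omega."""
    return math.cos(omega * t / n) ** (2 * n)

for n in (1, 10, 100, 10000):
    print(n, survival_probability(n))
# The probability tends to 1 as n grows: ever more frequent measurement
# freezes the evolution.
```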
These ideas have given rise to a large literature of responses. To give a sampling: Ghirardi et al. (1979) and Pati (1996) have objected that this Zeno-like model of a quantum measurement runs afoul of other properties of quantum theory, such as the time-energy uncertainty relations, which they argue should prevent the measurements in the supertask sequence above from being made with arbitrarily high frequency. Bokulich (2003) has responded that, nevertheless, such a supertask can still be carried out when the measurement commutes with the unitary evolution, such as when \(E\) is a projection onto an energy eigenstate.
2. Supertasks in Relativistic Spacetime
In Newtonian physics, time passes at the same rate for all observers. If Alice and Bob are both present at Alice’s 20th and 21st birthday parties, both people will experience an elapsed time of one year between the two events. (This is true no matter what Alice or Bob do or where Alice and Bob go in between the two events.) Things aren’t so simple in relativistic physics. Elapsed time between events is relative to the path through spacetime a person takes between them. It turns out that this fact opens up the possibility of a new type of supertask. Let’s investigate this possibility in a bit more detail.
2.1 Time in Relativistic Spacetime
A model of general relativity, a spacetime, is a pair \((M,g)\). It represents a possible universe compatible with the theory. Here, \(M\) is a manifold of events. It gives the shape of the universe. (Lots of two-dimensional manifolds are familiar to us: the plane, the sphere, the torus, etc.) Each point on \(M\) represents a localized event in space and time. A supernova explosion (properly idealized) is an event. A first kiss (properly idealized) is also an event. So is the moon landing. But July 20, 1969 is not an event. And the moon is not an event.
Manifolds are great for representing events. But the metric \(g\) dictates how these events are related. Is it possible for a person to travel from this event to that one? If so, how much elapsed time does a person record between them? The metric \(g\) tells us. At each event, \(g\) assigns a double cone structure. The cone structures can change from event to event; we only require that they do so smoothly. Usually, one works with models of general relativity in which one can label the two lobes of each double cone as “past” and “future” in a way which involves no discontinuities. We will do so in what follows. (See figure 2.1.1.)
Events in spacetime and the associated double cones
Fig 2.1.1. Events in spacetime and the associated double cones.
Intuitively, the double cone structure at an event demarcates the speed of light. Trajectories through spacetime which thread the inside of the future lobes of these “light cones” are possible routes in which travel stays below the speed of light. Such a trajectory is a worldline and, in principle, can be traversed by a person. Now, some events cannot be connected by a worldline. But if two events can be connected by a worldline, there is an infinite number of worldlines which connect them.
Each worldline has a “length” as measured by the metric \(g\); this length is the elapsed time along the worldline. Take two events on a manifold \(M\) which can be connected by a worldline. The elapsed time between the events might be large along one worldline and small along another. Intuitively, if a worldline is such that it stays close to the boundaries of the cone structures (i.e. if the trajectory stays “close to the speed of light”), then the elapsed time is relatively small. (See Figure 2.1.2.) In fact, it turns out that if two events can be connected by a worldline, then for any number \(t>0\), there is a worldline connecting the events with an elapsed time less than \(t\)!
Elapsed time is worldline dependent
Fig 2.1.2. Elapsed time is worldline dependent.
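For the flat (Minkowski) special case this worldline-dependence is a one-line computation. The following is a sketch in units with \(c = 1\), for a traveller moving at constant speed \(v\) between two events one unit of coordinate time apart (the function name is ours):

```python
import math

def elapsed_time(v, coordinate_time=1.0):
    """Proper time along a constant-speed worldline in Minkowski spacetime,
    in units with c = 1: tau = T * sqrt(1 - v**2)."""
    return coordinate_time * math.sqrt(1.0 - v ** 2)

for v in (0.0, 0.9, 0.999999):
    print(v, elapsed_time(v))
# As v -> 1 the worldline hugs the light cone and the elapsed time -> 0,
# so any bound t > 0 can be undercut by a fast enough worldline.
```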
2.2 Malament-Hogarth Spacetimes
The fact that, in relativistic physics, elapsed time is relative to worldlines suggests a new type of bifurcated supertask. The idea is simple. (A version of the following idea is given in Pitowsky 1990.) Two people, Alice and Bob, meet at an event \(p\) (the start of the supertask). Alice then follows a worldline with a finite elapsed time which ends at a given event \(q\) (the end of the supertask). On the other hand, Bob goes another way; he follows a worldline with an infinite elapsed time. Bob can use this infinite elapsed time to carry out a computation which need not halt after finitely many steps. Bob might check all possible counterexamples to Goldbach’s conjecture, for example. (Goldbach’s conjecture is the statement that every even integer n which is greater than 2 can be expressed as the sum of two primes. It is presently unknown whether the conjecture is true. One could settle it by sequentially checking to see if each instantiated statement is true for \(n=4\), \(n=6\), \(n=8\), \(n=10\), and so on.) If the computation halts, then Bob sends a signal to Alice at \(q\) saying as much. If the computation fails to halt, no such signal is sent. The upshot is that Alice, after a finite amount of elapsed time, knows the result of the potentially infinite computation at \(q\).
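Bob's search can be sketched as an ordinary program whose halting behaviour is exactly what is unknown (the helper names `is_prime` and `goldbach_holds` are ours):

```python
def is_prime(n):
    """Trial-division primality test."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_holds(n):
    """Is the even number n > 2 a sum of two primes?"""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

# Bob checks n = 4, 6, 8, ... forever; here the search is truncated:
print(all(goldbach_holds(n) for n in range(4, 1000, 2)))  # True so far
```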
Let’s work a bit more to make the idea precise. We say that a half-curve is a worldline which starts at some event and is extended as far as possible in the future direction. Next, the observational past of an event \(q\), \(OP(q)\), is the collection of all events \(x\) such that there is a worldline which starts at \(x\) and ends at \(q\). Intuitively, a (slower than light) signal may be sent from an event \(x\) to an event \(q\) if and only if \(x\) is in the set \(OP(q)\). (See Figure 2.2.1.)
Fig 2.2.1. The observational past of an event and a half-curve. A signal can be sent to \(q\) from every point in \(OP(q)\). No signal can be sent to \(q\) from any point on the half-curve \(\gamma\).
We are now ready to define the class of models of general relativity which allow for the type of bifurcated supertask mentioned above (Hogarth 1992, 1994).
Definition. A spacetime \((M,g)\) is Malament-Hogarth if there is an event \(q\) in \(M\) and a half-curve \(\gamma\) in \(M\) with infinite elapsed time such that \(\gamma\) is contained in \(OP(q)\).
One can see how the definition corresponds to the story above. Bob travels along the half-curve \(\gamma\) and records an infinite elapsed time. Moreover, at any event on Bob’s worldline, Bob can send a signal to the event \(q\) where Alice finds the result of the computation; this follows from the fact that \(\gamma\) is contained in \(OP(q)\). Note that Alice’s worldline and the starting point \(p\) mentioned in the story did not make it into the definition; they simply weren’t needed. The half-curve \(\gamma\) must start at some event – this event is our starting point \(p\). Since \(p\) is in \(OP(q)\), there is a worldline from \(p\) to \(q\). Take this to be Alice’s worldline. One can show that this worldline must have a finite elapsed time.
Is there a spacetime which satisfies the definition? Yes. Let \(M\) be the two-dimensional plane in standard \(t,x\) coordinates. Let the metric \(g\) be such that the light cones are oriented in the \(t\) direction and open up as the absolute value of \(x\) approaches infinity. The resulting spacetime (Anti-de Sitter spacetime) is Malament-Hogarth (see Figure 2.2.2).
Fig 2.2.2. Anti-de Sitter Spacetime is Malament-Hogarth. A signal can be sent to \(q\) from every point on the half-curve \(\gamma\).
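To make “the light cones open up” concrete: in one standard coordinate presentation of two-dimensional anti-de Sitter spacetime (our choice of chart, for illustration), the metric can be written \(g = -\cosh^2\!x\, dt^2 + dx^2\). A null curve then satisfies \(\cosh^2\!x\, (dt/dx)^2 = 1\), so the coordinate time for a light signal to travel in from arbitrarily large \(x\) to the axis \(x = 0\) is

\[
\Delta t \;=\; \int_0^\infty \frac{dx}{\cosh x} \;=\; \frac{\pi}{2} \;<\; \infty .
\]

Signals emitted from arbitrarily far out thus reach the axis within a fixed, finite coordinate time, which is how an event \(q\) on the axis can contain an entire half-curve \(\gamma\) of infinite elapsed time in its observational past \(OP(q)\).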
2.3 How Reasonable Are Malament-Hogarth Spacetimes?
In the previous section, we showed the existence of models of general relativity which seem to allow for a type of bifurcated supertask. Here, we ask: Are these models “physically reasonable”? Earman and Norton (1993, 1996) and Etesi and Németi (2002) have articulated a number of potential physical problems concerning Malament-Hogarth spacetimes. First of all, we would like Bob’s worldline to be reasonably traversable. In the Anti-de Sitter model above, the half-curve \(\gamma\) has an infinite total acceleration. Bob would need an infinite amount of fuel to traverse it! (Malament 1985)
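The notion of infinite total acceleration can be made precise. Writing \(a\) for the acceleration vector along \(\gamma\) and \(\tau\) for proper time, the total acceleration of the curve is

\[
TA(\gamma) \;=\; \int_\gamma \lVert a \rVert \, d\tau .
\]

For a rocket propelled by ejecting radiation, a standard estimate puts the required fuel-to-payload mass ratio at roughly \(e^{TA(\gamma)} - 1\) (in units with \(c = 1\)); so a curve with infinite total acceleration cannot be traversed with any finite amount of fuel.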
Another problem for the Anti-de Sitter spacetime is that a “divergent blueshift” phenomenon occurs. Intuitively, the frequency of any signal Bob sends to Alice is amplified more and more as he goes along. Eventually, even the slightest thermal noise will be amplified to such an extent that communication is all but impossible. So, if the counterexample to Goldbach’s conjecture comes late in the game (or not at all), it is not clear that Alice can ever know this.
One can find Malament-Hogarth spacetimes which can escape both of the problems mentioned above. Let \(M\) be a two-dimensional plane in standard \(t, x\) coordinates which is then “rolled up” along the \(t\) axis. Let the metric \(g\) be such that the light cones are oriented in the \(t\) direction and do not change from point to point. (See Figure 2.3.1.)
Fig 2.3.1. An acausal Malament-Hogarth spacetime.
Because worldlines can wrap around and around the cylinder, \(OP(q)=M\) for any event \(q\). This allows for great freedom in choosing Bob’s worldline \(\gamma\). In fact, we can choose it so that the total acceleration is zero – no fuel is needed to traverse it. Moreover, we can choose it so that there is also no divergent blueshift phenomenon (see Earman and Norton 1993). But, alas, we have a new problem: the spacetime is acausal. A worldline can start and end at the same event, allowing for a type of “time travel”. It is unclear if spacetimes allowing for time travel are physically reasonable (see Smeenk and Wüthrich 2011). It turns out that more complicated examples can be constructed which avoid all the potential problems mentioned so far and more (Manchak 2010). But such examples contain spacetime “holes” which may not be physically reasonable (see Manchak 2009). More work is needed to see if such problems can also be overcome.
We conclude with one final potential problem which threatens to render all Malament-Hogarth spacetimes physically unreasonable. Penrose (1979) has conjectured that all physically reasonable spacetimes are free of a certain type of “naked singularities” and the breakdown of determinism they bring. Whether Penrose’s conjecture is true or not is the subject of much debate (Earman 1995). But it turns out that every Malament-Hogarth spacetime harbors these naked singularities (Hogarth 1992). In sum, it is still an open question whether Malament-Hogarth spacetimes are simply artifacts of the formalism of general relativity or if the kind of bifurcated supertask they suggest can be implemented in our own universe.
• Atkinson, D., 2007, “Losing energy in classical, relativistic and quantum mechanics”, Studies in History and Philosophy of Modern Physics, 38(1): 170–180.
• –––, 2008, “A relativistic Zeno effect”, Synthese, 160(1): 5–12.
• Atkinson, D. and Johnson, P., 2009, “Nonconservation of energy and loss of determinism I. Infinitely many colliding balls”, Foundations of Physics, 39(8): 937–957.
• –––, 2010, “Nonconservation of energy and loss of determinism II. Colliding with an open set”, Foundations of Physics, 40(2): 179–189.
• Atkinson, D. and Peijnenburg, J., 2014, “How some infinities cause problems in classical physical theories”, In Allo, P. and Kerkhoeve, B. V., editors, Modestly radical or radically modest: Festschrift for Jean Paul Van Bendegem on the Occasion of his 60th Birthday, pages 1–10. London: College Publications.
• Allis, V. and T. Koetsier, 1991, “On Some Paradoxes of the Infinite”, The British Journal for the Philosophy of Science, 42: 187–194.
• –––, 1995, “On Some Paradoxes of the Infinite II”, The British Journal for the Philosophy of Science, 46: 235–247.
• Barrett, J. A. and F. Arntzenius, 1999, “An infinite decision puzzle”, Theory and Decision, 46(1), 101–103.
• –––, 2002, “Why the infinite decision puzzle is puzzling”, Theory and decision 52(2): 139–147.
• Bashirov, A.E., 2014, Mathematical Analysis Fundamentals Waltham, MA: Elsevier.
• Bell, J. S., 2004, Speakable and unspeakable in quantum mechanics: Collected papers on quantum philosophy. Cambridge: Cambridge University Press.
• Benacerraf, P., 1962, “Tasks, Super-Tasks, and the Modern Eleatics”, The Journal of Philosophy, 59(24): 765–784.
• Benardete, J. A., 1964, Infinity: An essay in metaphysics., Oxford: Oxford University Press.
• Black, M., 1951, “Achilles and the Tortoise”, Analysis, 11(5): 91–101.
• Blum, L., Cucker, F., Shub, M. and Smale, S., 1998, Complexity and real computation, New York: Springer-Verlag.
• Bokulich, A., 2003, “Quantum measurements and supertasks”, International Studies in the Philosophy of Science, 17(2): 127–136.
• Bub, J., 1988, “How to solve the measurement problem of quantum mechanics”, Foundations of Physics 18(7): 701–722.
• Carl, M., Fischbach, T., Koepke, P., Miller, R., Nasfi, M. and Weckbecker, G., 2010, “The basic theory of infinite time register machines”, Archive for Mathematical Logic 49(2): 249–273.
• Copeland, B. J., 2015, “The Church-Turing Thesis”, The Stanford Encyclopedia of Philosophy (Summer 2015 Edition), Edward N. Zalta (ed.), URL = <>.
• Deolalikar, V., Hamkins, J.D. and Schindler, R., 2005, “P ≠ NP ∩ co-NP for infinite time Turing machines”, Journal of Logic and Computation, 15(5): 577–592.
• Dolev, Y., 2007, “Super-tasks and Temporal Continuity”, Iyyun: The Jerusalem Philosophical Quarterly, 56: 313–329.
• Duncan, A. and M. Niedermaier, 2013, “Temporal breakdown and Borel resummation in the complex Langevin method”, Annals of Physics, 329: 93–124.
• Earman, J., 1986, A Primer On Determinism, Dordrecht, Holland: D. Reidel Publishing Company.
• –––, 1995, Bangs, Crunches, Whimpers, and Shrieks, Oxford: Oxford University Press.
• Earman, J. and J. Norton, 1993, “Forever is a Day: Supertasks in Pitowsky and Malament-Hogarth Spacetimes”, Philosophy of Science, 60: 22–42.
• –––, 1996, “Infinite Pains: the Trouble with Supertasks”, in A. Morton and S. Stich (eds), Benacerraf and His Critics, Oxford: Blackwell, 231–261.
• Etesi, G. and I. Németi, 2002, “Non-Turing Computations Via Malament-Hogarth Space-Times”, International Journal of Theoretical Physics, 41: 341–370.
• Gamow, G., 1947, One Two Three… Infinity: Facts and Speculations of Science. Dover.
• Ghirardi, G.C., C. Omero, T. Weber and A. Rimini, 1979, “Small-time behaviour of quantum nondecay probability and Zeno’s paradox in quantum mechanics”, Nuovo Cimento, 52(A): 421.
• Grünbaum, A., 1969, “Can an Infinitude of Operations be performed in Finite Time?” British Journal for the Philosophy of Science, 20: 203–218.
• Gwiazda, J., 2012, “A proof of the impossibility of completing infinitely many tasks”, Pacific Philosophical Quarterly, 93: 1–7.
• Hamkins, J. D. and Lewis, A., 2000, “Infinite time Turing machines”, Journal of Symbolic Logic, 65(2): 567–604.
• Hamkins, J. D. and Miller, R. G., 2009, “Post’s problem for ordinal register machines: an explicit approach”, Annals of Pure and Applied Logic, 160(3): 302–309.
• Hamkins, J.D., Miller, R., Seabold, D., and Warner, S., 2008, “Infinite time computable model theory”, in New Computational Paradigms: Changing Conceptions of What is Computable, S. B. Cooper, B. Löwe, and A. Sorbi (Eds.), New York: Springer, pgs. 521–557.
• Hamkins, J.D. and Welch, P.D., 2003, “\(P^f \ne NP^f\) for almost all \(f\)”, Mathematical Logic Quarterly, 49(5): 536–540.
• Hepp, K., 1972, “Quantum theory of measurement and macroscopic observables”, Helvetica Physica Acta 45(2): 237–248.
• Hogarth, M., 1992, “Does General Relativity Allow an Observer to View an Eternity in a Finite Time?” Foundations of Physics Letters, 5: 173–181.
• Hogarth, M., 1994, “Non-Turing Computers and Non-Turing Computability”, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, 1: 126–138.
• Immerman, N., 2016, “Computability and Complexity”, The Stanford Encyclopedia of Philosophy (Spring 2016 Edition), Edward N. Zalta (ed.), forthcoming URL = <>.
• Koepke, P., 2005, “Turing computations on ordinals” Bulletin of Symbolic Logic, 11(3): 377–397.
• –––, 2006, “Infinite Time Register Machines”, In Logical Approaches to Computational Barriers, Second Conference on Computability in Europe, CiE 2006, Swansea, UK, July 2006, Proceedings, A. Beckmann, U. Berger, B. Löwe, J. V. Tucker (Eds.), Berlin: Springer-Verlag, pgs. 257-266, Lecture Notes in Computer Science 3988.
• –––, 2009, “Ordinal Computability”, In Mathematical Theory and Computational Practice. 5th Conference on Computability in Europe, CiE 2009, Heidelberg, Germany, July 19–24, 2009. Proceedings, K. Ambos-Spies, B. Löwe, W. Merkle (Eds.), Heidelberg: Springer-Verlag, pgs. 280–289, Lecture Notes in Computer Science 5635.
• Koepke, P. and Miller, R., 2008, “An enhanced theory of infinite time register machines”. In Logic and Theory of Algorithms. 4th Conference on Computability in Europe, CiE 2008 Athens, Greece, June 15–20, 2008, Proceedings, Beckmann, A., Dimitracopoulos, C. and Löwe, B. (Eds.), Berlin: Springer, pgs. 306–315, Lecture Notes in Computer Science 5028.
• Koepke, P. and Seyfferth, B., 2009, “Ordinal machines and admissible recursion theory”, Annals of Pure and Applied Logic, 160(3): 310–318.
• Kremer, P., 2015, “The Revision Theory of Truth”, The Stanford Encyclopedia of Philosophy (Summer 2015 Edition), Edward N. Zalta (ed.), URL = <>.
• Kühnberger, K.-U., Löwe, B., Möllerfeld, M. and Welch, P.D., 2005, “Comparing inductive and circular definitions: parameters, complexities and games”, Studia Logica, 81: 79–98.
• Landsman, N. P., 1991, “Algebraic theory of superselection sectors and the measurement problem in quantum mechanics”, International Journal of Modern Physics A 6(30): 5349–5371.
• –––, 1995, “Observation and superselection in quantum mechanics”, Studies in History and Philosophy of Modern Physics, 26(1): 45–73.
• Lanford, O. E., 1975, “Time Evolution of Large Classical Systems”, in Dynamical systems, theory and applications, J. Moser (Ed.), Springer Berlin Heidelberg, pgs. 1–111.
• Littlewood, J. E., 1953, A Mathematician’s Miscellany, London: Methuen & Co. Ltd.
• Löwe, B., 2001, “Revision sequences and computers with an infinite amount of time”, Journal of Logic and Computation, 11: 25–40.
• –––, 2006, “Space bounds for infinitary computation”, in Logical Approaches to Computational Barriers, Second Conference on Computability in Europe, CiE 2006, Swansea, UK, July 2006, Proceedings, A. Beckmann, U. Berger, B. Löwe, J. V. Tucker (eds.), Berlin: Springer-Verlag, pgs. 319–329, Lecture Notes in Computer Science 3988.
• Löwe, B. and Welch, P. D., 2001, “Set-Theoretic Absoluteness and the Revision Theory of Truth”, Studia Logica, 68: 21–41.
• Mather, J. N. and R. McGehee, 1975, “Solutions of the Collinear Four-Body Problem Which Become Unbounded in a Finite Time”, in Dynamical systems, theory and applications, J. Moser (Ed.), Springer Berlin Heidelberg, pgs. 573–597.
• Misra, B., and E. C. G. Sudarshan, 1977, “The Zeno’s paradox in quantum theory”, Journal of Mathematical Physics, 18(4): 756–763.
• Malament, D., 1985, “Minimal Acceleration Requirements for ‘Time Travel’ in Gödel Space-Time”, Journal of Mathematical Physics, 26: 774–777.
• –––, 2008, “Norton’s Slippery Slope”, Philosophy of Science, 75: 799–816.
• Manchak, J., 2009, “Is Spacetime Hole-Free?” General Relativity and Gravitation, 41: 1639–1643.
• –––, 2010, “On the Possibility of Supertasks in General Relativity”, Foundations of Physics, 40: 276–288.
• McLaughlin, W. I., 1998, “Thomson’s lamp is dysfunctional”, Synthese, 116: 281–301.
• Norton, J. D., 1999, “A Quantum Mechanical Supertask”, Foundations of Physics, 29(8): 1265–1302.
• Penrose, R., 1979, “Singularities and Time-Asymmetry”, in S. Hawking and W. Israel (eds.), General Relativity: And Einstein Centenary Survey, Cambridge: Cambridge University Press, 581–638.
• Pati, A. K., 1996, “Limit on the frequency of measurements in the quantum Zeno effect”, Physics Letters A 215(1–2): 7–13.
• Peijnenburg, J. and Atkinson, D., 2008, “Achilles, the tortoise, and colliding balls”, History of Philosophy Quarterly, 25(3): 187–201.
• Pérez Laraudogoita, J., 1996, “A beautiful supertask”, Mind, 105(417): 81–83.
• –––, 1997, “Classical particle dynamics, indeterminism and a supertask”, The British Journal for the Philosophy of Science, 48(1): 49–54.
• –––, 1998, “Infinity Machines and Creation Ex Nihilo”, Synthese, 115(2): 259–265.
• –––, 1999, “Earman and Norton on Supertasks that Generate Indeterminism”, The British Journal for the Philosophy of Science, 50: 137–141.
• Pérez Laraudogoita, J., M. Bridger and J. S. Alper, 2002, “Two Ways Of Looking At A Newtonian Supertask”, Synthese, 131(2): 173–189
• Pitowsky, I., 1990, “The Physical Church Thesis and Physical Computational Complexity”, Iyyun, 39: 81–99.
• Rin, B., 2014, “The Computational Strengths of α-tape Infinite Time Turing Machines”, Annals of Pure and Applied Logic, 165(9): 1501–1511.
• Ross, S. E., 1976, A First Course in Probability, Macmillan Publishing Co. Inc.
• Sacks, G. E., 1990, Higher recursion theory, Heidelberg: Springer-Verlag, Perspectives in mathematical logic.
• Salmon, W., 1998, “A Contemporary Look at Zeno’s Paradoxes: An Excerpt from Space, Time and Motion”, in Metaphysics: The Big Questions, van Inwagen and Zimmerman (Eds.), Malden, MA: Blackwell Publishers Ltd.
• Schindler, R., 2003, “P ≠ NP for infinite time Turing machines”, Monatshefte der Mathematik 139(4): 335–340.
• Smeenk, C. and C. Wüthrich, 2011, “Time Travel and Time Machines”, in C. Callender (ed.), The Oxford Handbook of Philosophy of Time, Oxford: Oxford University Press, 577–630.
• Thomson, J. F., 1954, “Tasks and super-tasks”, Analysis, 15(1): 1–13.
• Uzquiano, G., 2011, “Before-Effect without Zeno-Causality”, Noûs, 46(2): 259–264.
• Van Bendegem, J. P., 1994, “Ross’ Paradox Is an Impossible Super-Task”, The British Journal for the Philosophy of Science, 45(2): 743–748.
• Wan, K-K., 1980, “Superselection rules, quantum measurement, and the Schrödinger’s cat”, Canadian Journal of Physics, 58(7): 976–982.
• Welch, P. D. 2001, “On Gupta-Belnap revision theories of truth, Kripkean fixed points, and the Next stable set”, Bulletin for Symbolic Logic, 7: 345–360.
• –––, 2008, “The extent of computation in Malament-Hogarth spacetimes”, British Journal for the Philosophy of Science, 59(4): 659–674.
• Weyl, H., 1949, Philosophy of Mathematics and Natural Science, Princeton: Princeton University Press.
• Winter, J., 2009, “Is \(\mathbf{P} = \mathbf{PSPACE}\) for Infinite Time Turing Machines?”, in Infinity in logic and computation. Revised selected papers from the International Conference (ILC 2007) held at the University of Cape Town (UCT), Cape Town, November 3–5, 2007. M. Archibald, V. Brattka, V. Goranko and B. Löwe (Eds.), Berlin: Springer-Verlag, pgs. 126–137. Lecture Notes in Computer Science 5489.
Copyright © 2016 by John Manchak and Bryan W. Roberts
Strange thought
1. Jun 29, 2006 #1
Ok, so we all know the following...
Elementary quantum mechanics is constructed by postulating the canonical commutation relation (CCR) between the coordinate of a particle and its conjugate momentum.
Quantum Field Theory, on the other hand, is constructed by postulating the existence of a field and by imposing the same sort of CCR on the field at every space-time point (as though each point on the field were an elementary quantum mechanical particle).
Doesn't it seem strange that the particles that emerge as a result of applying the CCR to fields themselves obey the CCR imposed by ordinary quantum mechanics (in the low-energy limit)? I think it's kind of spooky.
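One way to see the low-energy-limit claim at work numerically: build \(X\) and \(P\) from ladder operators in a truncated harmonic-oscillator basis and check that \([X, P] = i\hbar\) holds away from the truncation cutoff. This is a standard textbook construction, not something from this thread (units with \(\hbar = m = \omega = 1\)):

```python
import numpy as np

N = 40                                      # truncation dimension (arbitrary)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator a|n> = sqrt(n)|n-1>
adag = a.conj().T                           # creation operator

X = (a + adag) / np.sqrt(2)                 # position operator
P = 1j * (adag - a) / np.sqrt(2)            # momentum operator

comm = X @ P - P @ X                        # equals i*I except at the cutoff corner
print(np.allclose(comm[:N-1, :N-1], 1j * np.eye(N - 1)))  # → True
```

Only the last diagonal entry of the commutator deviates from \(i\): the truncation breaks \([a, a^\dagger] = 1\) at the top of the basis, which is why the check excludes the final row and column.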
3. Jun 29, 2006 #2
I think the best introduction to the concept of the difference between QM and QFT is Quantum Field Theory in a Nutshell by A. Zee. Just reading the first chapter will likely reduce some of the spookiness.
4. Jul 1, 2006 #3
This is an excellent question, TriTertButoxy.
I have always been annoyed by the standard introduction to quantum field theory in which one imposes those commutation relations on classical fields. This always seemed very strange to me. What the heck are those classical fields that one quantizes? (no one has ever seen a classical approximation of a meson field or of a fermion field!)
As far as I know, only Weinberg presents things in a way that really makes sense, in which the starting point is not the quantization of classical fields but the need to allow the number of particles to vary, and so the need to introduce annihilation and creation operators. Then, imposing Lorentz invariance, one is led to quantum fields obeying the usual commutation relations (postulated from the start in standard presentations).
Going back to your question, and as far as I understand, the commutation relations (CRs) on the quantum fields have nothing to do with the commutation relations on X and P imposed in nonrelativistic quantum mechanics! In the sense that there is no way to start from the CR of quantum fields and do some approximation to get back to the QM CRs (one way to see this clearly is that there is no analogue of the position operator X in QFT).
I have to warn you that my views on QFT are non-standard (a couple of years ago I launched a thread that led to a very long discussion on this and it was clear that I think differently than almost everyone else!). The only book that seemed to address things the way I needed to see them addressed is Weinberg's book. Before reading his introduction to quantum fields, I had always been very confused and annoyed with the standard presentation of QFT. It just did not make sense to me. It seemed to involve such a huge leap of faith with no justification at all (we start with classical fields and we quantize them... as if this should be obvious as the thing to do!!!). And book after book after book keeps starting from exactly the same point, without justifying why this must be the way to go. Until finally Weinberg did it the "right" way (in my point of view, of course). The only problem with Weinberg is that it is so dense that it is difficult to follow for a beginner. I wish that there were an introductory book on QFT that would present things this way!!!
Anyway, just my two cents.
Last edited: Jul 1, 2006
5. Jul 2, 2006 #4
My understanding of Quantum Field Theory is rather rudimentary. Nevertheless, I feel I may have some useful thoughts to offer in this thread.
... And so was born the MISNOMER "Second Quantization".
By way of analogy to the photon, those classical fields would just be the fields which one is to envision in the (hypothetical) case where a COHERENT source produces an extremely LARGE number of the particles in question.
Of course, with regard to photons it is possible (and, moreover, quite reasonable) to expect to be able to DISCOVER the relevant field equations on the basis of classical principles alone. After all ... historically, this is precisely how it happened.
But on the contrary -- say for example, with regard to the electron -- it seems completely unreasonable to think that one could come up with the Dirac Equation by way of only classical physics principles. ... How the heck can you do that?!
This difference, however, is only a technical one. On the conceptual level,
"the Dirac Equation is to the electron" as "the free-field Maxwell Equations are to the photon".
Therefore, it makes just as much sense to take the Dirac Field and quantize it, as it does to take the Maxwell Field and quantize it. ... Wouldn't you say?
EDIT: This post is only part 1. The next part will appear below, some time later ... I hope.
Last edited: Jul 3, 2006
6. Jul 3, 2006 #5
I hear what you are saying. And I agree that this is the standard point of view.
Two comments.
First, this is not the way it is presented in any textbook on QFT I know (if it is done in some textbook, I would love to hear about it). If one were serious about presenting things this way, one would have to show in depth the connection between a coherent quantum EM field and the classical field. I am still amazed (and a bit annoyed) by the fact that this is never done in detail in textbooks. I could not even find the words "coherent states" in the index of Peskin and Schroeder!!
(and I must admit that I still don't feel I really understand very well this concept and the connection with definite phase and amplitude and so on)
If this is the best way to introduce quantum fields, then one should start by showing very clearly the connection between a coherent state of an EM quantum field and classical fields.
On the other hand, I find that it is much more natural to start with the observation that any particle is observed with a discrete nature (as "packets" of energy and momentum), and that it is natural to build a formalism of varying number of particles (because of special relativity) and to see the concept of field as emerging from allowing the number of particles to vary *and* imposing Lorentz invariance (this is what Weinberg does). This starts from the *observation* that particles are observed and *leads* to quantum fields as a "bookkeeping" device. I find this much more intuitive than to start from postulated classical fields (which do not exist, and which have never been observed even as an approximation of a quantum field!!).
That's just my point of view, which I seem to share only with Weinberg!!:wink:
7. Jul 3, 2006 #6
Hi Pat,
I agree with most of what you write here; especially the rather obscure voodoo at the start of most QFT books. But this is usually how many subjects are pedagogically introduced: "listen, I'm not gonna justify everything, but take my word for it now, and let's get going..."; because otherwise 3/4 of the course time is devoted to the first chapter, with still some questions remaining, and no elementary application can be done.
I have to say Weinberg's approach was an eye-opener to me, but I didn't feel lousy about the "field" approach either. In fact, Weinberg is a bit of a party-pooper because with the equivalence of both viewpoints (variable particle count + relativity <==> quantized relativistic fields) it's now impossible to see one approach as more fundamental over the other.
The way I saw QFT before was: the quantum "prescription" applied to a specific classical model, namely the model of relativistic fields, in the same way as the quantum prescription was applied to newtonian/hamiltonian point particle mechanics, and resulted in non-relativistic quantum mechanics.
You set up a configuration space, and derive from it a hilbert space.
QFT was then just a model, where classical fields were taken as "source of inspiration", in the same way as NRQM was just a model, where newtonian mechanics was taken as a source of inspiration.
In what way this was "justified" was only to be determined by empirical success or failure.
The "miracle" for me was that this model of QFT generated the appearance of particle-like things, all by itself. Moreover, these particle-like things, in the right limits, behave as particles in NRQM. Is this a coincidence, or something deep?
In another limit of QFT, of course, we find back our "source of inspiration", namely the classical field (in the same way as we find back something that looks like newtonian mechanics in NRQM). The only field for which this really works out are massless boson fields (EM). But this is much less of a surprise, because we put it in from the start. The true "miracle" is that particles come out of the "quantized classical field" model.
Now, Weinberg turns things around, and asks: what kind of model can spit out particle-like stuff and be in agreement with relativity ? And he finds that such a model must look like a quantized classical field. So the miracle here is that the field comes out. We put in "particles" by hand, plus relativity, and we get out "fields".
So what came first ? The field, or the particles ? Weinberg shows that the notions are equivalent.
Last edited: Jul 3, 2006
8. Jul 3, 2006 #7
A book that does this much better than any "high energy physics" QFT book is "optical coherence and quantum optics" by Mandel and Wolf.
9. Jul 3, 2006 #8
hi Patrick... I have enjoyed very much all the exchanges we have had over this issue:smile:
I see what you are saying but what I find annoying is the following. Ok, we have learned about this mysterious theory of QM where we start from classical hamiltonian and replace X and P by operators and so on.
And now QFT is introduced. The "field theory" way of approaching things is to again impose strange commutation relations (which are *different* than the already strange CR of QM and are just justified by analogy) now defined on even weirder things than classical momenta and positions: they are defined on classical fields which are unobservable to start with (except for the EM case)!!
So it's a different type of quantization, applied to new and strange unobservable "classical" fields. :yuck:
So there are two leaps of faith here: accepting those new CRs and then accepting those weird classical fields as a starting point.
However, the "Weinberg approach" in my view is *so* much more physical and intuitive. The only thing that is needed here is to accept special relativity. That because of E=mc^2, the number of particles may vary and that equations must be Lorentz covariant, etc. *This* is much more intuitive to me than the field approach!!
There are additional bonuses from this approach, too. First, the field operator Phi (for a scalar case, let's say) is clearly secondary in Weinberg's approach. It is just a way to combine creation and annihilation operators with a certain Lorentz symmetry. It is never thought of as an observable. In the standard field theory approach, because one *starts* with that field, it feels like it is a key physical operator or something. It's annoying that it is the central quantity in the field approach and yet nobody says at the get-go that this is not an observable, that we will never care about its eigenvalues and so on. In the Weinberg approach, it clearly has a secondary role. For example, Weinberg requires that the *Hamiltonian densities* at spacelike separated points commute and derives conditions from there. The standard approach is to impose this on the field Phi even though it's not an observable (and it is left unsaid that this is a sufficient condition because physical observables will be built out of the fields Phi).
So to me it is annoying and confusing to put the field Phi at the forefront when in fact it is not even an observable.
Another point is that the usual field approach gives the impression that the CRs of the fields are similar to the CRs of X and P in QM. But actually, the CRs of the fields have nothing whatsoever to do with position vs momentum. They arise because of the varying number of particles and the CRs of the annihilation/creation operators. I do know that one can turn the tables around and see the annihilation/creation operators arising from the CRs of the field (and the conjugate momentum), but these CRs are totally unrelated to a Heisenberg-like uncertainty in position vs momentum of a quantum particle. So it seems to me that the analogy used to get the CRs of quantum fields from the CRs of QM obfuscates their meaning.
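For the free scalar field this "turning the tables" is explicit. In standard conventions (textbook material, e.g. Peskin and Schroeder, quoted here as a reminder rather than from the thread), the field and its conjugate momentum are built from ladder operators as

\[
\phi(\mathbf{x}) = \int \frac{d^3p}{(2\pi)^3} \frac{1}{\sqrt{2E_{\mathbf p}}}
\bigl(a_{\mathbf p}\, e^{i\mathbf p\cdot\mathbf x} + a^\dagger_{\mathbf p}\, e^{-i\mathbf p\cdot\mathbf x}\bigr),
\qquad
\pi(\mathbf{x}) = \int \frac{d^3p}{(2\pi)^3}\,(-i)\sqrt{\frac{E_{\mathbf p}}{2}}\,
\bigl(a_{\mathbf p}\, e^{i\mathbf p\cdot\mathbf x} - a^\dagger_{\mathbf p}\, e^{-i\mathbf p\cdot\mathbf x}\bigr),
\]

and \([a_{\mathbf p}, a^\dagger_{\mathbf p'}] = (2\pi)^3\,\delta^3(\mathbf p - \mathbf p')\) holds if and only if \([\phi(\mathbf x), \pi(\mathbf y)] = i\,\delta^3(\mathbf x - \mathbf y)\). Neither side of the equivalence involves a position operator for the particle, which supports the point that these relations are not the Heisenberg X–P relations in disguise.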
I have to admit that I, on the other hand, *did* feel lousy about the field approach :frown: .
I can see that they are equivalent, but the field approach to me has much more the feel of a "trick" to organize things in a very efficient way, whereas the Weinberg approach makes much more sense physically. There might be something deeper, physically, about the field approach but I haven't seen it.
The only situation in which I *would* start with the field approach would be in Condensed Matter type of situations, to explain phonons, say. Then it makes sense to me to do it that way.
For me it was an annoyance to wonder about this question (miracle or not)!!! It always felt like a miracle to me (before I read Weinberg), and I was wondering what was new in the CRs of the classical fields as opposed to the CRs of QM. What annoyed me was that nobody was stating clearly: "This is a miracle, there is something deep here"!! OR "There is nothing deep here, and here's the reason why". I have been so annoyed for years because I wanted to find out which it was!!
Very true. But pedagogically speaking, I find that it makes much more sense to accept that there are quantized packets of energy called mesons and electrons, etc and that because of relativity their number may vary and to work from that than to start from inexistent classical fields and to quantize them using new CRs that are inspired by analogy with QM.
I guess that if I were teaching QFT now, I would never be able to get myself to try to get the students to swallow the field theory approach (I mean, I am pretty sure that most students *would* be willing to accept it, and by the time we would be doing applications they would have forgotten about their initial misgivings, which is I think what happens in almost every QFT class). But as the instructor I would feel very uneasy about that approach.
But hey, you know that this is a pet peeve of mine:tongue2:
I appreciate your comments, Patrick (as always).
Last edited: Jul 3, 2006
10. Jul 3, 2006 #9
WOW !!!
Initially, I felt I may have some insights to share in this thread. But now it seems that any of my current thoughts on these matters can at best be not much more than trivial to the two of you Pats.
I do have some questions, though.
First of all, I would like to verify whether or not I have sufficiently comprehended Pat's (that is, nrqed's) grievances against the "field" approach and also whether or not we see eye to eye on some of the basic physics and math. Please bear with me.
Consider a world which respects Galilean invariance and in which the Schrödinger equation is exact. Then, Pat (nrqed), am I correct in the following assertions?
a) You would have no complaints against construing the Schrödinger field as a 'classical' field, since it does have physical meaning in terms of probability amplitudes.
b) Nevertheless, you would harbor complaints against its direct quantization as a field, because a quantized probability-amplitude makes no physical sense.
c) You would not assert that the CRs of the fields have nothing to do with CRs of X and P, because X and P can be 'fashioned' from the field operator.
11. Jul 3, 2006 #10
Just a few thoughts that crossed my mind while driving today. I am basically repeating myself so everyone is welcome to ignore this post but it will help *me* clarify my point of view.
If I were to compare side by side the "field" approach vs Weinberg's approach, I would say the following:
In the field approach, the starting point is the introduction of mysterious classical fields that have never been observed (except the EM field but again, I am talking about the usual presentation where the EM field is discussed much later because of all its complications) and the introduction of mysterious CRs which are only justified by analogy with QM and are applied to fields which are not observables in the first place!
(And again, the CRs of the fields and corresponding conjugate momenta have nothing to do, in the end, with commutation relations with any sort of position and momentum operators; they arise from the CRs of the annihilation and creation operators! So the fact that they are "justified" using an analogy with the CRs of QM is a huge cheat, IMNSHO:tongue2: )
So in the field approach, in the foreground are those mysterious fields and those mysterious CRs.
*Then*, one obtains as a side result that there are quantized packets of energy and momentum. And that their number is not conserved.
In Weinberg's approach, however, what is placed in the foreground is that a) there are things which act like discrete bundles of energy and momentum (like electrons:biggrin: ) and that b) from E=mc^2 one expects their number not to be conserved.
Now, by analogy with the harmonic oscillator formalism one introduces annihilation and creation operators, one builds operators, one imposes causality, Lorentz covariance, etc etc...And everything comes out working nicely, with quantum fields now a *byproduct* of the whole thing. Two huge bonuses (IMHO): first, no need to even *ever* introduce the weird classical fields that are the starting point of the field approach! They are gone! Second bonus: no need to postulate the weird CRs between those weird quantum fields and their conjugate momentum!
They *follow* from the CRs of the annihilation/creation operators and *their* CRs are obvious to justify since they must raise or lower the energy by an amount equal to the energy of a particle.
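This point can be checked numerically in a truncated Fock space. A minimal sketch (an illustration I am adding, assuming a single mode with ħ = ω = 1; the truncation size is arbitrary): the field/momentum CR and the "raising energy by one quantum" property both follow from [a, a†] = 1 alone.

```python
import numpy as np

N = 30                                   # Fock-space truncation (arbitrary)
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)         # annihilation operator: a|n> = sqrt(n)|n-1>
adag = a.conj().T                        # creation operator

# Single-mode "field" and conjugate momentum (hbar = omega = 1)
phi = (a + adag) / np.sqrt(2)
pi = 1j * (adag - a) / np.sqrt(2)

# [phi, pi] = i follows from [a, a†] = 1 (exact below the truncation edge)
comm = phi @ pi - pi @ phi
print(np.allclose(comm[:-1, :-1], 1j * np.eye(N - 1)))   # True

# a† raises the excitation number by exactly one quantum: [a†a, a†] = a†
num = adag @ a
print(np.allclose(num @ adag - adag @ num, adag))        # True
```

The field CR here is a consequence, not an input: only the ladder-operator algebra was imposed.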
Now, I do understand that the field approach is an efficient way to organize things and to build in Lorentz covariance, etc. But as a way to *teach* the subject, I find it incredibly obscure and confusing. Well, maybe it's because I am not too smart but it sure confused me for years and years until I finally came to peace with it because of Weinberg (whom I could never thank enough for writing this book!)
Anyway, sorry for the babbling.... Had to get this off my chest:wink:
12. Jul 3, 2006 #11
Don't feel that way!!!
I highly appreciate your comments!
It is very useful to me to hear comments from others (especially if they are starting to learn QFT) because it gives me much needed perspective. I like to know how other people think, what they find natural, etc. I learn a lot from that.
I think I would have a problem already at this level. Because a probability wave is already in itself completely different from any classical wave (for example the E or B field in classical E&M).
But I see your point so let's say that we are ok with treating this as a "classical" wave.
Yes, that would definitely bother the heck out of me!
But not only because of the quantization of a *probability* amplitude. The quantization of a classical field in itself bothers me because it is an additional "leap of faith", on top of the usual ones we have to make when doing QM (the existence and role of the wavefunction, the replacement of classical quantities by operators, measurements, etc). Now we have to accept a new type of commutation relations, on top of those from NRQM.
( ASIDE: I know that everybody says that "second quantization" is a misleading term, but I personally think that it's a perfectly appropriate expression in the context of the usual presentations on QFT.
(in one type of presentation, people say that one is rewriting the wavefunction as an operator... then it *is* literally a second quantization (i.e. a quantization on *top* of another quantization). On the other hand, when one quantizes classical fields, the expression "second quantization" is still appropriate because the quantization used there is totally different in nature from the QM quantization, so it's a second quantization in the sense now of "a different type of quantization").)
I am not sure what you mean. I think we would have to be more explicit to discuss this. I don't know what you have in mind by saying that they can be "fashioned" from the field operator. In NRQM, one cannot write X or P in terms of [itex] \psi [/itex].
In NRQM, one promotes P_x and X to operators. So in the case of this "wave", one would expect to impose commutation relations on [itex] p_x \psi [/itex] and [itex] x \psi [/itex]. That's the only way in which I could see a "natural" generalization on which to impose commutation relations. However, if you write the Lagrangian leading to the Schrödinger equation, the quantities on which one imposes the QFT commutation relations are obviously not the above expressions! Far from it!
Thanks for the interesting discussion btw!!:smile: :smile:
Last edited: Jul 3, 2006
13. Jul 3, 2006 #12
From your response to item a), I see that for you a necessary feature of a "classical" field is:
The field amplitude can, under appropriate conditions, be measured.
This condition is too strong for my tastes. I prefer the much weaker condition:
The field amplitude exists.
"What? There are things which exist but cannot be measured?"
Well, not exactly ... but maybe.
It could be that I have never been able to create the appropriate conditions. Or it could be that I have not yet found the right device which couples to it. ... Or maybe it just cannot be measured. Or maybe it exist only as a representation within space of the influence of something which resides outside of space.
Pardon my voodoo ... but I'm serious!
Perhaps it exists as an agent which contributes to phenomena that I can measure, yet it itself cannot be measured directly.
Next ... my item b), it turns out, wasn't quite the right probe I was looking for. I'll have to think about that one some more and see if I can hone it down.
The larger part of your reply, though, concerning that additional "leap of faith" over and above a heap of such leaps (and again in the ASIDE that that additional leap renders appropriate an expression which by your estimation was already fine, but not by mine) ... all of that, I would like to put aside until after we part with Galileo for something relativity more intricate.
Regarding c), now I'll be more explicit. Quantization of the Schrödinger field leads to a field operator Ψ(x,t) such that the object
[itex] \Psi^\dagger(x,0)|0\rangle [/itex]
is in fact the ket |x> on the single-particle subspace, so that
[itex] \int dx \; x \, \Psi^\dagger(x)\Psi(x) [/itex]
is the single-particle position operator X. Since we have |x>, we can get |p> by means of a Fourier-type transformation, and with that construct P, conjugate to X.
This is what I meant when I wrote X and P can be 'fashioned' from the field operator.
So, item c) amounts to: "Do you accept the above to be true? And if so, would this be an instance in which you are inclined to retract your statement that the CRs of the fields have nothing to do with the CRs of X and P?"
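The construction described above can be made concrete on a toy one-dimensional lattice, keeping only the vacuum and the one-particle sector (the lattice model and its size are illustrative assumptions, not part of the original argument):

```python
import numpy as np

L = 8                                    # number of lattice sites (illustrative)
dim = L + 1                              # index 0 = vacuum, 1..L = particle at site x
vac = np.zeros(dim)
vac[0] = 1.0

def psi(x):
    """Field (annihilation) operator at site x, restricted to the 0/1-particle sectors."""
    op = np.zeros((dim, dim))
    op[0, x + 1] = 1.0                   # |0><x|
    return op

# Psi†(x)|0> is the position ket |x>
kets = [psi(x).T @ vac for x in range(L)]

# Position operator fashioned from the field operator: X = sum_x x Psi†(x) Psi(x)
X = sum(x * psi(x).T @ psi(x) for x in range(L))

# X|x> = x|x>: the single-particle position operator, with no field CRs invoked
print(all(np.allclose(X @ kets[x], x * kets[x]) for x in range(L)))  # True
```

Note that only the existence of the field operator was used here, not any commutation relation, which is exactly the point being made.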
14. Jul 4, 2006 #13
um, I'm like a 2nd year undergraduate, so I probably don't have a clue what I'm going on about. But here I go anyway
nrqed said that the CRs of NRQM were strange enough, let alone the CRs of QFT. But aren't they just the standard "quantization" procedure (which is admittedly mysterious, but one can accept) applied to the Poisson brackets of Hamiltonian mechanics? And couldn't one show the same Poisson brackets for fields and conjugate momenta of classical fields? And then apply the same (admittedly mysterious) quantization procedure?
My point is that probably the CCRs for NRQM are just as mysterious as those for QFT, not any more mysterious or any less.
15. Jul 4, 2006 #14
Hi Masudr. Thanks for your input!
I will try to clarify my position (of course this is all an issue of personal taste and is very subjective. I can say that I don't find something natural or pedagogically sound and if someone else disagrees, that's the end of the discussion. So I am not trying to prove anything or to convince anyone of anything, I am just expressing my opinions. But I am learning a lot by seeing how others think of those things and that's why I like to have those discussions).
Two points.
A) In QM we apply the quantization rules to positions and momenta of point particles, quantities which have a clear classical meaning and which we are used to working with classically. However in QFT, we apply quantization to "classical fields" that we have to postulate from the start and which have never been observed at all (like the Dirac field and so on). The only exception is the EM field, but there is no such thing as an observation of classical scalar or spin-1/2 fields. So we have to "invent" those "classical" fields before quantizing. And that's one of the two things that "bothers" me. In the "Weinberg approach", the quantum fields are a *byproduct* of the derivation and there is no need to even introduce classical fields, which is great from my point of view since they have never been observed (again, except for the EM field).
B) I agree with you that one may take the position that the key idea about quantization is to replace the classical Poisson brackets by quantum CRs. However, I still see this step applied to fields as being distinct from this step applied to point particles. One can argue by "analogy" to "justify" the CRs of fields from the CRs of point particles, but in the end, there is *still* a leap involved there. The CRs for the fields do not mathematically follow from the CRs for point particles. They are *analogous* but one cannot *mathematically* go from the CRs of point particles to the CRs of the fields (the only case where this can be done is if we seriously think of the field as being made of a large number of particles, i.e. a physical string. But the waves that we quantize in particle physics are not of that nature). I agree that the analogy is strong but it is still a *new* quantization procedure that must be taken as an extra axiom on top of the axiom for the CRs of point particles. So in that sense, this is *really* a "second quantization". It's the quantization applied to fields as opposed to applied to the point particles of NRQM.
There is nothing wrong with adding a new axiom to QM, this is not what I am saying. But my point is first that it should be recognized as such: a new axiom. Despite analogies with point particles, it is still a new axiom.
My second point is that in Weinberg's approach, those CRs on the *fields* are not postulated at all! They are a byproduct! Where do they come from? The starting point is the assumption that the number of particles is not constant (because of E=mc^2). Then it is natural to introduce annihilation and creation operators which then have an obvious set of CRs. So, in the field approach, there is an axiom giving the CRs of "classical fields" (which again, are not even observed) whereas in Weinberg's approach this axiom is replaced by the "axiom" that particle number is not conserved. I personally much prefer the second approach!:tongue2:
Does this make sense to you? I would appreciate your feedback!
Thanks again for your comment.
16. Jul 4, 2006 #15
I must disagree. What is the classical notion of spin? You could tell me that infinitesimal rotations don't commute, but that's just as bad (for me, at least) as saying that we will quantize a spin-1/2 field.
Interesting... the reason it doesn't bother me is because I see it like this. We have, the rules of QM. Then we have different Hamiltonians. So, we have a classical field we know: EM. We apply QM to it, and we get all these results. Then we can have different Lagrangians (e.g. the Dirac field Lagrangian etc.)
In all fairness, I haven't seen the Weinberg approach, and it does sound quite cool. I shall remember to give his text a look when I study QFT in my 4th year.
There's no analogy involved here. We have, say, the simple harmonic oscillator. We apply the rules of QM to it, i.e. represent states by those that satisfy the TDSE, observables by linear operators that obey the CRs etc. We now have a field with Lagrangian given by some expression. We apply QFT to it, so we have these position valued operators, etc. etc.
So we have Lagrangians and Hamiltonians for fields, and we quantize the fields just like we quantize particles. Your objection is similar to objecting that the rules that apply for classical particles shouldn't apply to classical fields, unless I have missed something.
I would agree with this.
But even the classical EM field isn't particles oscillating on a string. The EM field is something quite mysterious as it is. We can get the field equations for a string by treating classical field theory as the continuum limit of infinitely many classical particles. But we can't really say that for the EM field. Again, what we do is set up the formalism, and then apply the rules to a brand new Lagrangian, and find that it works.
Is it really applied on top of the rest of the axioms? We impose the CCRs for the field operators, but don't the ones for the particles of the field emerge from that? Or do we separately impose that too?
As you have said, it is all a matter of taste and opinion, and of course, that is fair enough. I don't happen to have that much of a problem with the "standard" approach, although I must say, I haven't looked at the standard approach in much detail, nor the Weinberg approach, so I'm not really in a position to comment.
No worries.
17. Jul 5, 2006 #16
Whoops !!!
The answers are: yes, no.
Nowhere in the above have I invoked the CRs of the fields to say what I say about the particle. According to what is written there, the single-particle X operator will mathematically follow from no more than the existence of Ψ(x,0). And then, constructing P from X, well again that too has nothing to do with the CRs of the fields.
Besides, the CRs of the fields are connected to spin, whereas those of particle have no such connection.
Yes, yes. I see it now: an ADDITIONAL LEAP OF FAITH, of course(!), is involved when one invokes the CRs of the fields.
In NRQM, invoking the CRs (or anti-CRs) of the fields is the same as POSTULATING symmetric (or antisymmetric) subspaces for the many-Boson (or many-Fermion) system.
In hindsight, this is painfully obvious. :eek:
... Whoops !!! :rolleyes:
18. Jul 5, 2006 #17
Hi Masudr.
Again, this is all quite subjective and a matter of taste, to a large extent. My arguments are more about pedagogy in teaching the subject (and writing textbooks). What is more "logical" as a way to introduce the subject. As a student I was bewildered by the standard presentation and did not know what was "profound", what was supposed to be analogies vs derivations, what were new axioms vs derived results, etc. And I was bewildered by the starting point of it all. To be honest, I could never get myself to teach the subject the standard way if I were to teach this class tomorrow.
Again, here's briefly my objection:
In the standard field approach, one starts with those unobserved classical field theories (like the KG field or Dirac field). One then quantizes them by postulating the CRs for fields. The presence of particle-like excitations comes out as a byproduct. (By the way, I might feel much better about this approach if they would also apply this approach to *noncovariant* field theories and show what happens then. And show clearly what is the difference and why there is a difference with applying this technique to invariant (scalar) vs non-invariant Lagrangian densities.)
(Of course, the EM field case is different because we do observe E and B fields classically. But then textbooks should start with EM fields, quantize them, show clearly the connection with classical fields through coherent states, the connection with the classical amplitude and phase and on and on. Then, *after* a solid intuition has been built on the connection between classical field theories, quantum fields and particle excitations, books should discuss why coherent states of massive particles are not observed, and so on. And *then* it would make sense to go on to quantizing KG and Dirac fields. But things are never presented this way:mad: On the other hand, Weinberg's approach bypasses all the pedagogical difficulties of the field approach, IMHO)
In Weinberg's approach, one starts with the idea that particles can be created or annihilated. Then one has many-body theories. If in addition one imposes Lorentz invariance, one is forced to introduce quantum fields.
The fields and their CRs are a *byproduct*. No need to postulate strange classical fields to quantize. No need to postulate CRs on the fields. The difference between Lorentz invariant and non-invariant theories is clear.
I find this second approach much more satisfying. Seems to be so much more logical than the field approach. And a *much* better way to *teach* the subject. Of course, after all is absorbed and the field connection is made, then the field approach should be shown too.
To be honest, I am still not totally sure whether there is something "deep" about the field approach. I still have to understand exactly how the CR between the fields and their conjugate momenta arise out of Weinberg's approach vs how they arise from the usual Lagrange/Hamiltonian approach.
I did not mean to say that everything we quantize in NRQM has a clear classical analogue. I agree with you that it's not the case. I was focusing on the fundamental CR, between X and P. My point is that we impose CRs on these quantities. And then when we get to a KG field or a Dirac field, we impose a *new* set of CRs on the field and conjugate field momenta, which are themselves quantities unobservable classically (and this is a key difference even with the spin case. Spin has no classical analogue, but you get a Stern-Gerlach apparatus and you can measure it. If books want to start with a classical KG or Dirac field, they should explain how one would go about measuring their phase or amplitude :devil: )
On the other hand, Weinberg's approach does not require introducing those unobservable classical fields and does not require postulating their CRs. What *is* postulated is the need for a formalism with a varying number of particles. The CRs are the CRs of annihilation/creation operators, which are quite natural from the example of the harmonic oscillator where the energy levels differ by integer multiples of an energy quantum.
I understand. But QFT books never show clearly the correspondence between the quantum EM fields and the classical fields we all know and love (not just showing that photon quanta arise... but showing what's the connection with classical phase and amplitude, talking about measurements of E and B fields, etc). If the field approach is to be taken as a starting point, it seems to me that the first chapter of any QFT book should be an in-depth discussion of EM fields. But QFT books sometimes don't even mention coherent states:surprised Or why we don't observe classical limits of the KG or Dirac field, etc.
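The coherent-state connection mentioned here can be sketched numerically: a coherent state |α⟩ satisfies ⟨a⟩ = α, so the quantum field's expectation value carries exactly the classical amplitude and phase. A minimal check in a truncated Fock space (the value of α and the truncation size are arbitrary choices added for illustration):

```python
import numpy as np
from math import factorial

N = 60                                   # Fock truncation, comfortably above <n> = |alpha|^2
alpha = 2.0 + 1.0j                       # "classical" complex amplitude (arbitrary)

n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)         # annihilation operator

# |alpha> = e^{-|alpha|^2/2} sum_n alpha^n / sqrt(n!) |n>
coh = np.array([alpha**k / np.sqrt(float(factorial(k))) for k in range(N)])
coh *= np.exp(-abs(alpha)**2 / 2)

# <alpha| a |alpha> = alpha: the expectation tracks the classical amplitude and phase
print(np.allclose(np.vdot(coh, a @ coh), alpha))   # True
```

This is exactly the quantum-to-classical bridge for the EM field that the post is asking textbooks to spell out.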
I highly recommend doing so. Unfortunately, it is very dense so it takes a lot of work to see the basic ideas through all the notation. If I were to teach QFT tomorrow, I would unfortunately not feel that I could use that as a textbook. I wish there was a lower-level book on QFT that would introduce the ideas this way.
You agreed that the CRs of the quantum fields cannot be derived from the CRs of point particles. My point is that the CRs of fields must be seen as a new axiom. Sure, the basic principle is similar to what we do for point particles so it's easy to accept. But it is still a new axiom. And on top of that it is applied to fields which are not even observed classically.
So the field approach says: look, we have all those particles around (electrons, mesons, etc). To do calculations, we will introduce those "classical fields" (that we have never observed) and we will quantize them, based on an analogy with point-particle NRQM and assuming that it makes sense to take the continuum limit for these unobservable fields. To *me* this seems like voodoo. And my definition of voodoo is this: I personally would never have thought about doing this if someone had not told me that this was the way to go.
Sure, you will say that it's a natural way to go given EM fields. But then why not work out the EM fields in detail and the quantum field/classical field correspondence limit in great depth before jumping to those abstract KG and Dirac fields?
On the other hand, Weinberg's approach says: look, we have those particles around. Because of relativity, we expect particle number to change. Let's go from there. That's it:approve:
I agree!!! The point that I had made is that strictly speaking, the correspondence between the CRs of point particles and of QFTs can only be made explicit (i.e. one can derive one from the other) in the case of a vibrating string. I agree completely that even in the EM case, there is a "leap of faith" involved in defining the CRs. In the sense that it's a new axiom. And the correspondence quantum field/classical field is far from obvious and trivial in the EM case.
So my point is that if one is serious about presenting the field approach, one should devote some time discussing the EM field in depth.
On the other hand, Weinberg's approach does not require postulating those CRs for quantum fields as a starting point. One only needs to introduce creation/annihilation operators. And one can build any field theory (KG, Dirac, EM) from this principle. I personally find this much more satisfying.
As far as I can tell, they are completely separate (although another poster suggested an idea that I still have to look at).
If you (or anyone) could show me how to start from the field CRs and recover the position-momentum CRs for a single particle, I would be very interested. The CRs of the field arise from the annihilation/creation operators' CRs, which don't say anything about the X,P CRs of a single-particle state.
This is one of my main arguments. One talks about the field and its conjugate momentum and uses an analogy with the position and momentum of a single particle in order to "justify" the CRs for the fields. But it seems to me to be very deceiving, because in the end, the CRs of the field and conjugate field momentum have nothing to do with position/momentum CRs. They are connected instead to the varying number of particles. So it feels like a cheat to me. It uses an analogy to get the right CR but later one realizes that the analogy has no basis. This is one of the main things that bothers me about the usual approach!
(again, in the case of an actual string, then there really is a connection between the CRs of the position and momenta of each particle and the CRs of the field, but as you pointed out, even EM is not a quantized string!)
Sometimes I feel like an alien:tongue2:
Because from the moment I first read about QFT I was bothered about what appeared to me to be huge leaps of faith with no logical basis. I do know that some leaps of faith are required in QM, but at least I could see how experiments and observations led to those leaps of faith. Then I would read about QFT and the very first thing they would say is "relativistic wavefunction equations have problems. So "obviously" what we will do is to consider classical fields and quantize them":surprised :surprised It's the biggest non-sequitur that I have seen in physics. I mean, special relativity involves a weird leap of faith, but at least I could see why it made sense to make this leap. I surely would never have been able to do it myself, but after I learned the theory, it did make sense to me. Same thing for GR. But the way textbooks present QFT, it did not make any sense to me.
I understand that historically, quantization of the EM field played a major role. But then why don't textbooks work this out in detail first?
But again, now that I have read Weinberg, it makes even more sense to me to start this way (with the idea of varying number of particles) and to build from there. No non-sequitur involved there.
Again, just a question of taste.
Last edited: Jul 5, 2006
19. Jul 5, 2006 #18
well, I see what you mean. Whether we have CRs or anti-CRs depends on the spin, yes. But I would say that the CRs of the fields are connected to the *varying number of particles*. In Weinberg's approach, one *starts* from the need for annihilation and creation operators and then one *derives* the CRs for the fields. I find it so much more satisfying to start from the need to have a varying number of particles (because of relativity) than to have to postulate those weird unobserved classical fields and to postulate field CRs. But apparently, I am alone feeling this way (with Weinberg) :wink:
Right! In Weinberg's approach, there *is* a leap of faith, but that's the need for varying number of particles. I would have much rather learned QFT this way than to have to accept as a starting point those weird classical fields and then to accept the field CRs as a new axiom.
Right. But again, I think you are focusing on the distinction between CRs vs anti-CRs. But even if we consider only bosonic excitations, the question can still be asked: what is the principle behind the need to invoke CRs of the fields and their conjugate momenta? The answer is the need to have a *many-body* system. It has nothing to do with position/momentum CR of a point particle.
And this is one of my pet peeves. One invokes the analogy with the position/momentum CR of NRQM to justify the axiom of the CRs of quantum fields. But later one realizes that the CRs of the quantum fields have nothing to do with position/momentum CRs. They have to do with varying number of particles. So it feels like the analogy used to postulate the CRs in the first place is a cheat.
(to be honest, I am still wondering if there is something deep going on here that I am missing. On one hand the CRs can be derived simply by considering varying number of particles. On the other hand, they can be derived by assuming a continuum analogue of position/momentum CRs on fields. Is the fact that the two approaches lead to the same result "deep" or not? I am still unclear about this. And I really want to understand this because I feel that once I can answer this to my satisfaction, no matter what the answer is, I will finally understand QFT at more than a very shallow level)
:smile: I hope this helps make my point of view sound a bit less crazy :tongue2:
20. Jul 7, 2006 #19
Thank you for the suggestion, Patrick. I will try to get my hands on a copy. I always found it so strange that QFT books don't cover the crucial topic of the correspondence between the quantized EM fields and classical EM. This seems to me to be one way to make the "quantization of classical fields" less of a huge non sequitur.
Thanks again.
21. Jul 7, 2006 #20
Hello again (after a bit of a break)
Well, we don't impose a new set of CCRs: it's really the old set. But this time we are applying them to a new system.
That is:
• In NRQM we quantize hypothetical (or not) classical particle systems with any Hamiltonian. So, we upgrade observables to operators, and upgrade the Poisson brackets of position and conjugate momenta (from Hamiltonian particle mechanics) to commutators on those operators.
• In QFT, we quantize hypothetical (or not) classical fields with any Hamiltonian. So, we upgrade observables to operators, and upgrade the Poisson brackets of position and conjugate momenta (from Hamiltonian field mechanics) to commutators on those operators.
So it's not an additional imposition: it's just the standard quantization procedure (which I assume you are happy with for particles). Luckily for us, the Hamiltonian for some fields ends up looking like the Hamiltonian for an oscillator...
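The last remark — that the Hamiltonian for some fields ends up looking like the Hamiltonian for oscillators — can be made concrete: discretizing a free scalar field on a ring gives a chain of coupled oscillators whose normal-mode frequencies follow the lattice dispersion relation ω_k² = m² + 4 sin²(πk/N). A small numerical check (the ring size and mass are illustrative parameters of mine):

```python
import numpy as np

Nsites, m = 16, 0.5                      # ring size and mass parameter (illustrative)

# H = sum_j p_j^2/2 + (1/2) phi^T K phi for a scalar field discretized on a ring
K = np.zeros((Nsites, Nsites))
for j in range(Nsites):
    K[j, j] = 2.0 + m**2                 # gradient term + mass term
    K[j, (j + 1) % Nsites] = -1.0        # nearest-neighbor coupling
    K[j, (j - 1) % Nsites] = -1.0

# Normal-mode frequencies: the eigenvalues of K are omega_k^2
omega_numeric = np.sort(np.sqrt(np.linalg.eigvalsh(K)))

k = np.arange(Nsites)
omega_analytic = np.sort(np.sqrt(m**2 + 4 * np.sin(np.pi * k / Nsites) ** 2))

print(np.allclose(omega_numeric, omega_analytic))  # True
```

Each normal mode is then an independent harmonic oscillator, which is where the ladder operators and particle quanta come from in the field approach.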
Entanglement (physics)
From Citizendium, the Citizens' Compendium
Revision as of 18:37, 9 November 2010 by Boris Tsirelson (Talk | contribs) (more)
(CC) Photo: Mike Seyfang. Photonics is widely used when creating entanglement.
There are three interrelated meanings of the word entanglement in physics. They are listed below and then discussed, both separately and in relation to each other.
• A combination of empirical facts, observed or only hypothetical, incompatible with the conjunction of three fundamental assumptions about nature, called "counterfactual definiteness", "relativistic local causality" and "no-conspiracy" (see below), but compatible with the conjunction of the last two of them ("relativistic local causality" and "no-conspiracy"). Such a combination will be called "empirical entanglement" (which is not a standard terminology[1]).
• A prediction of the quantum theory stating that the empirical entanglement must occur in appropriate physical experiments (called "quantum entanglement").
• In quantum theory there is a technical notion of "entangled state".
Entanglement cannot be reduced to shared randomness, and does not imply faster-than-light communication.
Due to quantum entanglement, quantum information is different from classical information, which leads to quantum communication, quantum games, quantum cryptography and quantum computation.
Empirical entanglement
Some people understand it easily, others find it difficult and confusing.
It is easy, since no physical or mathematical prerequisites are needed: nothing like Newton's laws, the Schrödinger equation, or conservation laws, nor even particles or waves; nothing like differentiation or integration, nor even linear equations.
It is difficult and confusing for the very same reason! It is highly abstract. Many people feel uncomfortable in such a vacuum of concepts and rush to return to the particles and waves.
The framework, and local causality
The following concepts are essential here.
• A physical apparatus that has a switch and several lights. The switch can be set to one of several possible positions. A little after that the apparatus flashes one of its lights.
• "Local causality": widely separated apparata are incapable of signaling to each other.
Otherwise the apparata are not restricted; they may use all kinds of physical phenomena. In particular, they may receive any kind of information that reaches them. We treat each apparatus as a black box: the switch position is its input, the light flashed is its output; we need not ask about its internal structure.
However, not knowing what is inside the black boxes, can we know that they do not signal to each other? There are two approaches, non-relativistic ("loose") and relativistic ("strict").
The loose approach: we open the black boxes, look, see nothing like mobile phones and rely on our knowledge and intuition.
The strict approach: we do not open the black boxes. Rather, we place them, say, 360,000 km apart (the least Earth-Moon distance) and restrict the experiment to a time interval of, say, 1 sec. Relativity theory states that they cannot signal to each other, for a good reason: a faster-than-light communication in one inertial reference frame would be a backwards-in-time communication in another inertial reference frame!
Below, the strict approach is used (unless explicitly stated otherwise). Thus, the apparata are not restricted. They may contain mobile phones or whatever. They may interact with any external equipment, be it cell sites or whatever.
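The arithmetic behind the strict approach is worth making explicit: at the least Earth-Moon distance, light itself needs more than the one-second trial window to travel between the apparata, so relativity guarantees the separation. A two-line check:

```python
# At the least Earth-Moon distance, even light cannot cross between the
# apparata within the 1-second trial window.
distance_m = 360_000e3        # least Earth-Moon distance, in metres
c = 299_792_458.0             # speed of light, m/s
light_time = distance_m / c   # about 1.2 s
print(light_time > 1.0)       # True: the trials are safely separated
```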
Falsifiability, and the no-conspiracy assumption
A claim is called falsifiable (or refutable) if it has observable implications. If some of these implications contradict some observed facts then the claim is falsified (refuted). Otherwise it is corroborated.
The relativistic local causality was never falsified; that is, a faster-than-light signaling was never observed. Does it mean that local causality is corroborated? This question is more intricate than it may seem.
Let A, B be two widely separated apparata, xA the input (the switch position) of A, and yB the output (the light flashed) of B. (For now we do not need yA and xB.) Local causality claims that xA has no influence on yB.
An experiment consisting of n trials is described by xA(i), yB(i) for i = 1,2,...,n. Imagine that n = 4 and
xA(1) = 1, xA(2) = 2, xA(3) = 1, xA(4) = 2,
yB(1) = 1, yB(2) = 2, yB(3) = 1, yB(4) = 2.
The data suggest that xA influences yB, but do not prove it. Two alternative explanations are possible:
• the apparatus B chooses yB at random (say, tossing a coin); the four observed equalities yB(i) = xA(i) are just a coincidence (of probability 1/16);
• the apparatus B alternates 1 and 2, that is, yB(i) = 1 for all odd i but yB(i) = 2 for all even i.
Consider a more thorough experiment: n = 1000, and xA(i) are chosen at random, say, tossing a coin. Imagine that yB(i) = xA(i) for all i = 1,2,...,n. The influence of xA on yB is shown very convincingly! But still, an alternative explanation is possible.
For choosing xA, the coin must be tossed within the time interval scheduled for the trial, since otherwise a slower-than-light signal can transmit the result to the apparatus B before the end of the trial. However, is the result really unpredictable in principle (not just in practice)? Not necessarily so. Moreover, according to classical mechanics, the future is uniquely determined by the past! In particular, the result of the coin tossing exists in the past as a complicated function of a huge number of coordinates and momenta of micro particles.
It is logically possible, but quite unbelievable that the future result of coin tossing is somehow spontaneously singled out in the microscopic chaos and transmitted to the apparatus B in order to influence yB. The no-conspiracy assumption claims that such exotic scenarios may be safely neglected.
The conjunction of the two assumptions, relativistic local causality and no-conspiracy, is falsifiable, but was never falsified; thus, both assumptions are corroborated.
Below, no-conspiracy is always assumed (unless explicitly stated otherwise).
Counterfactual definiteness
In this section a single apparatus is considered.
A trial is described by a pair (x,y) where x is the input (the switch position) and y is the output (the light flashed). Is y a function of x? We may repeat the trial with the same x and get a different y (especially if the apparatus tosses a coin). We can set the switch to x again, but we cannot set all molecules to the same microstate. Still, we may try to imagine the past changed, asking a counterfactual question:[2]
• What outcome would the experimenter have received (in the same trial) if he/she had set the switch to another position?
It is meant that only the input x is changed in the past, nothing else. The question may seem futile, since an answer cannot be verified empirically. Strangely enough, the question will appear to be very useful in the next section.
Classical physics can interpret the question as a change of external forces acting on a mechanical system of a large number of microscopic particles. It is unfeasible to calculate the answer, but anyway, the question makes sense, and the answer exists in principle:
y = f(x)
for some function f : X → Y, where X is the finite set of all possible inputs, and Y is the finite set of all possible outputs. The existence of this function f is called "counterfactual definiteness".
Repeating the experiment we get
y(i) = fi(x(i))
for i = 1,2,... Each time a new function fi appears; thus x(i)=x(j) does not imply y(i)=y(j). In the case of a single apparatus, counterfactual definiteness is not falsifiable, that is, has no observable implications. Surprisingly, for two (and more) apparata the situation changes dramatically.
Local causality and counterfactual definiteness
For two apparata, A and B, an experiment is described by two pairs, (xA,yA) and (xB,yB) or, equivalently, by a combined pair ((xA,xB), (yA,yB)). Counterfactual definiteness alone (without local causality) takes the form
(yA, yB) = f(xA, xB)
or, equivalently,
yA = f1(xA, xB), yB = f2(xA, xB).
Assume in addition that A and B are widely separated and the local causality applies. Then xA cannot influence yB, and xB cannot influence yA, therefore
yA = fA(xA), yB = fB(xB).
These fA, fB are one-time functions; another trial may involve different functions.
An alternative language is logically equivalent, but makes the presentation more vivid. Imagine an experimenter, Alice, near the apparatus A, and another experimenter, Bob, near the apparatus B. Alice is given some input xA and must provide an output yA. The same holds for Bob, xB and yB. Once the inputs are received, no communication is permitted between Alice and Bob until the outputs are provided. The input xA is an element of a prescribed finite set XA (not necessarily a number); the same holds for yA and YA, xB and XB, yB and YB.
It may seem that the apparata A, B are of no use for Alice and Bob. Surprisingly, this is an illusion.
Example
The simplest example of empirical entanglement is presented here. First, its idea is explained informally.
Alice and Bob pretend that they know a 2×2 matrix
( a  b )
( c  d )
consisting of numbers 0 and 1 only, satisfying four conditions:
a = b, c = d, a = c, but b ≠ d.
Surely they lie; these four conditions are evidently incompatible. Nevertheless Alice commits herself to show on request any row of the matrix, and Bob commits himself to show on request any column. We expect the lie to manifest itself on the intersection of the row and the column (not always but sometimes). However, Alice and Bob promise to always agree on the intersection!
More formally, xA=1 requests from Alice the first row, xA=2 the second; in every case yA must be either (0,0) or (1,1). From Bob, xB=1 requests the first column, in which case yB must be (0,0) or (1,1); and xB=2 requests the second column, in which case yB must be (0,1) or (1,0). The agreement on the intersection means that, for example, if xA=2 and xB=1 then the first element of the row yA must be equal to the second element of the column yB.
Without special apparata (A and B), Alice and Bob surely cannot fulfill their promise. Can the apparata help? This crucial question is postponed to the section "Quantum entanglement". Here we consider a different question: is it logically possible, under given assumptions, that Alice and Bob fulfill their promise?
Under all three assumptions (counterfactual definiteness, local causality and no-conspiracy) we have yA = fA(xA) and yB = fB(xB) for some functions fA, fB. (These functions may change from one trial to another.) Specifically, fA(1) and fA(2), being two rows, form a 2×2 matrix satisfying the conditions a=b, c=d. Also fB(1) and fB(2), being two columns, form a 2×2 matrix satisfying the conditions a=c, b≠d. These two matrices necessarily differ in at least one of the four elements (since the four conditions are incompatible). Therefore it can happen that Alice and Bob disagree on the intersection; moreover, this happens with probability at least 0.25. In the long run, Alice and Bob cannot fulfill their promise.
Waiving the counterfactual definiteness (but retaining local causality and no-conspiracy) we get the opposite result: Alice and Bob can fulfill their promise. Here is how.
Given xA and xB, there are two allowed yA and two allowed yB, thus, 4 allowed combinations (yA, yB). Two of them agree on the intersection of the row and the column; the other two disagree. Imagine that the apparata A, B choose at random (with equal probabilities 0.5, 0.5) one of the two combinations (yA, yB) that agree on the intersection. For example, given xA=2 and xB=1, we get either yA = (0,0) and yB = (0,0), or yA = (1,1) and yB = (1,1).
This situation is compatible with local causality, since yB gives no information about xA; also yA gives no information about xB. For example, given xA=2 and xB=1, we get either yB = (0,0) or yB = (1,1), with probabilities 0.5, 0.5; exactly the same holds given xA=1 and xB=1.
Thus, empirical entanglement is logically possible. The question of its existence in nature is addressed in the section "Quantum entanglement".
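The counting argument above can also be checked mechanically. A minimal sketch (not from the original article) that enumerates every deterministic strategy pair allowed by the three assumptions and confirms that no pair wins more than 3 of the 4 equally likely input combinations:

```python
from itertools import product

# Alice's rows must satisfy a=b and c=d; Bob's first column a=c, second b!=d.
rows = [(0, 0), (1, 1)]
cols_1 = [(0, 0), (1, 1)]
cols_2 = [(0, 1), (1, 0)]

best = 0.0
for r1, r2, c1, c2 in product(rows, rows, cols_1, cols_2):
    alice = {1: r1, 2: r2}       # input -> row shown
    bob = {1: c1, 2: c2}         # input -> column shown
    # Win: the matrices agree at the intersection of row xA and column xB.
    wins = sum(alice[xa][xb - 1] == bob[xb][xa - 1]
               for xa in (1, 2) for xb in (1, 2))
    best = max(best, wins / 4)

print(best)  # 0.75: the classical bound
```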
Entanglement is not just shared randomness
Widely separated apparata, unable to signal to each other, can be correlated. Correlations are established routinely by communication. For example, Alice and Bob, reading their copies of a newspaper, learn the result of yesterday's lottery drawing. This is called shared randomness. Likewise, the apparata A, B can share randomness by receiving signals from some external common source. However, shared randomness obeys the three assumptions (counterfactual definiteness, local causality and no-conspiracy) and therefore cannot produce entanglement. In other words, entanglement as a resource is substantially stronger than shared randomness.
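The claim that shared randomness cannot beat the classical bound can be illustrated numerically (a sketch with hypothetical strategies, not from the article): a shared random seed merely selects one deterministic strategy pair per trial, so the winning probability is a convex mixture of deterministic winning probabilities, each at most 3/4.

```python
import random
from itertools import product

rows = [(0, 0), (1, 1)]          # Alice's allowed rows (a=b, c=d)
cols_1 = [(0, 0), (1, 1)]        # Bob's allowed first columns (a=c)
cols_2 = [(0, 1), (1, 0)]        # Bob's allowed second columns (b!=d)

def win_probability(r1, r2, c1, c2):
    """Winning probability of one deterministic strategy pair."""
    alice = {1: r1, 2: r2}
    bob = {1: c1, 2: c2}
    return sum(alice[xa][xb - 1] == bob[xb][xa - 1]
               for xa in (1, 2) for xb in (1, 2)) / 4

random.seed(0)
# Each trial, the shared randomness picks some deterministic strategy pair.
mixture = [win_probability(random.choice(rows), random.choice(rows),
                           random.choice(cols_1), random.choice(cols_2))
           for _ in range(10000)]
avg = sum(mixture) / len(mixture)
print(avg <= 0.75)  # True: a mixture of strategies each bounded by 3/4
```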
Quantum entanglement
Classical bounds and quantum bounds
Classical physics obeys the counterfactual definiteness and therefore negates entanglement. Classical apparata A, B cannot help Alice and Bob to always win (that is, agree on the intersection). What about quantum apparata? The answer is quite unexpected.
First, quantum apparata cannot ensure that Alice and Bob win always. Moreover, the winning probability does not exceed
cos²(π/8) = (2+√2)/4 ≈ 0.854,
no matter which quantum apparata are used.
Second, there exist quantum apparata that ensure a winning probability higher than 3/4 = 0.75. This is a manifestation of entanglement, since under the three classical assumptions (counterfactual definiteness, local causality and no-conspiracy) the winning probability cannot exceed 3/4 (the classical bound). But moreover, ideal quantum apparata can reach the winning probability cos²(π/8) ≈ 0.854 (the quantum bound), and non-ideal quantum apparata can get arbitrarily close to this bound.
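The game is equivalent to the well-known CHSH game: writing Alice's (constant-valued) row as a single bit α and Bob's column as a single bit β (its top element), the promise reduces to α ⊕ β = (xA−1)(xB−1). A sketch of the optimal quantum strategy, using the standard CHSH measurement angles (the angles and encoding are mine, not from the article), reproduces the quantum bound cos²(π/8):

```python
import numpy as np

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)   # shared entangled state

def basis(theta):
    """Projectors of a measurement basis rotated by angle theta."""
    v0 = np.array([np.cos(theta), np.sin(theta)])
    v1 = np.array([-np.sin(theta), np.cos(theta)])
    return [np.outer(v, v) for v in (v0, v1)]

alice_angle = {0: 0.0, 1: np.pi / 4}             # x -> measurement angle
bob_angle = {0: np.pi / 8, 1: -np.pi / 8}        # y -> measurement angle

total = 0.0
for x in (0, 1):
    for y in (0, 1):
        for a, Pa in enumerate(basis(alice_angle[x])):
            for b, Pb in enumerate(basis(bob_angle[y])):
                p = phi_plus @ np.kron(Pa, Pb) @ phi_plus
                if (a ^ b) == (x & y):           # the CHSH win condition
                    total += p
win = total / 4                                  # inputs uniformly distributed
print(round(win, 4))  # 0.8536 = cos^2(pi/8), the quantum bound
```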
Third, a modification of the game, called "magic square game", makes it possible to win always. To this end we replace 2×2 matrices with 3×3 matrices, still of numbers 0 and 1 only, with the following conditions:
• the parity of each row is even,
• the parity of each column is odd.
The classical bound is equal to 8/9; the quantum bound is equal to 1.
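The 8/9 classical bound of the magic square game can be verified by brute force (my own sketch): enumerate all of Alice's deterministic strategies (an even-parity row for each row index) against all of Bob's (an odd-parity column for each column index) and take the best average agreement on the intersection.

```python
from itertools import product

# All 3-bit rows of even parity, and 3-bit columns of odd parity.
even = [r for r in product((0, 1), repeat=3) if sum(r) % 2 == 0]
odd = [c for c in product((0, 1), repeat=3) if sum(c) % 2 == 1]

best = 0.0
for rows in product(even, repeat=3):          # Alice: row index -> row
    for cols in product(odd, repeat=3):       # Bob: column index -> column
        wins = sum(rows[i][j] == cols[j][i]   # agreement at intersection (i, j)
                   for i in range(3) for j in range(3))
        best = max(best, wins / 9)

print(best)  # the classical bound 8/9
```

No strategy wins all 9 cases: the total parity of the matrix would have to be both even (by rows) and odd (by columns).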
Experimental status
Many amazing entanglement-related predictions of the quantum theory have been tested in ingenious experiments using high-tech equipment. All tested predictions are confirmed. Still, each of these experiments has a "loophole", that is, admits alternative, entanglement-free explanations. Such explanations are highly contrived; they would be rejected as unbelievable in the routine development of science. However, the entanglement problem is exceptional: fundamental properties of nature are at stake! Entanglement, too, is unbelievable for many people. Thus, the problem is still open; finer experiments will follow until an unambiguous result is achieved.
Communication channels
According to the quantum theory, quantum objects manifest themselves via their influence on classical objects (more exactly, on classically described degrees of freedom). Every object admits a quantum description, but some objects may be described classically for all practical purposes, since their thermal fluctuations hide their quantal properties. These are called classical objects. Macroscopic bodies (more exactly, their coordinates) under usual conditions are classical. Digital information in computers is also classical.
A communication channel may be thought of as a chain of physical objects and physical interactions between adjacent objects. If all objects in the chain are quantal, the channel is called quantal. If at least one object in the chain is classical, the channel is called classical.
For example, newspapers, television, mobile phones and the Internet implement only classical channels. Quantum channels are usually implemented by sending a particle (photon, electron) or another microscopic object (ion) from a nonclassical source to a nonclassical detector through a low-noise medium.
Classical communication (that is, communication through a classical channel) can create shared randomness, but cannot create entanglement. Moreover, entanglement creation is impossible when Alice's apparatus A is connected to a source S by a quantum channel but Bob's apparatus B is connected to S by a classical channel. Here is an explanation.
The classical channel S-B is a chain containing a classical object C. By assumption, no chain of interactions connects A and B (via S, or otherwise) bypassing C. Therefore A and B are conditionally independent given a possible state c of C. The response yA of A to xA given c need not be a function gA(c,xA) of c and xA (uniqueness is not guaranteed), but still, we may choose one of possible responses yA and let gA(c,xA) = yA (so-called uniformization). Similarly, gB(c,xB) = yB. Now, given c, the two one-time functions fA(xA) = gA(c,xA) and fB(xB) = gB(c,xB) lead to a possible disagreement of Alice and Bob (on the intersection of the row and the column) by the argument used before (in the section "Example"). A more thorough analysis shows that the classical bound on the winning probability, deduced before from the counterfactual definiteness, holds also in the case treated here.
Entangled quantum states
A bipartite or multipartite quantum state, pure or mixed, is called entangled, if it cannot be prepared by means of shared randomness and local quantum operations. A quantum state that can be used for violating classical bounds, that is, for producing empirical entanglement, is necessarily entangled. It is unclear whether the converse implication holds, or not. Some entangled mixed states, so-called Werner states, obey classical bounds for all one-stage experiments. But multi-stage experiments in general are still far from being well understood.
Nonlocality and entanglement
In general
The words "nonlocal" and "nonlocality" occur frequently in the literature on entanglement, which creates a lot of confusion: it seems that entanglement means nonlocality! This situation has two causes, pragmatical and philosophical.
Here is the pragmatical cause. The word "nonlocal" sounds good. The phrase "non-CFD" (where CFD denotes counterfactual definiteness) sounds much worse, but is also incorrect; the correct phrase, involving both CFD and locality (and no-conspiracy, see the lead) is prohibitively cumbersome. Thus, "nonlocal" is often used as a conventional substitute for "able to produce empirical entanglement".[3]
The philosophical cause. Many people feel that CFD is more trustworthy than RLC (relativistic local causality), and NC (no-conspiracy) is even more trustworthy. Being forced to abandon one of them, these people are inclined to retain NC and CFD at the expense of abandoning RLC.
However, the quantum theory is compatible with RLC+NC. A violation of RLC+NC is called faster-than-light communication (rather than entanglement); it was never observed, and never predicted by the quantum theory. Thus RLC and NC are corroborated, while CFD is not. In this sense CFD is less trustworthy than RLC and NC.
For quantum states
Quantitative measures for entanglement are scantily explored in general. However, for pure bipartite quantum states the amount of entanglement is usually measured by the so-called entropy of entanglement. On the other hand, several natural measures of nonlocality are invented (see above about the meaning of "nonlocality"). Strangely enough, non-maximally entangled states appear to be more nonlocal than maximally entangled states, which is known as "anomaly of nonlocality"; nonlocality and entanglement are not only different concepts, but are really quantitatively different resources.[4] According to the asymptotic theory of Bell inequalities, even though entanglement is necessary to obtain violation of Bell inequalities, the entropy of entanglement is essentially irrelevant in obtaining large violation.[5]
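The entropy of entanglement mentioned above is computed from the Schmidt decomposition of a pure bipartite state, which in practice is the singular value decomposition of its coefficient matrix. A sketch (the helper name is mine):

```python
import numpy as np

def entanglement_entropy(psi, dim_a, dim_b):
    """Entropy of entanglement (in bits) of a pure bipartite state vector."""
    coeffs = psi.reshape(dim_a, dim_b)
    schmidt = np.linalg.svd(coeffs, compute_uv=False)
    p = schmidt ** 2                      # Schmidt probabilities
    p = p[p > 1e-12]                      # drop numerical zeros
    return -np.sum(p * np.log2(p))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # maximally entangled state
product_state = np.array([1, 0, 0, 0], float)    # |00>, unentangled
print(entanglement_entropy(bell, 2, 2))          # ~1.0 bit
print(entanglement_entropy(product_state, 2, 2)) # ~0
```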
1. Experts often call it "nonlocality", thus confusing non-experts; see Sect. 4.1.
2. "Die Geschichte kennt kein Wenn" (Karl Hampe). Whether physics has subjunctive mood or not, this is the question of counterfactual definiteness.
3. Physical terminology can mislead non-experts. Some examples: "quantum telepathy"; "quantum teleportation"; "Schrödinger cat state"; "charmed particle".
4. A.A. Methot and V. Scarani, "An anomaly of non-locality" (2007), Quantum Information and Computation, 7:1/2, 157-170; also arXiv:quant-ph/0601210.
5. M. Junge and C. Palazuelos, "Large violation of Bell inequalities with low entanglement" (2010), arXiv:1007.3043.
Open Access
• Maxim S Zholudev1,
• Anton V Ikonnikov1Email author,
• Frederic Teppe2,
• Milan Orlita3,
• Kirill V Maremyanin1,
• Kirill E Spirin1,
• Vladimir I Gavrilenko1,
• Wojciech Knap2,
• Sergey A Dvoretskiy4 and
• Nikolay N Mihailov4
Nanoscale Research Letters20127:534
DOI: 10.1186/1556-276X-7-534
Received: 17 July 2012
Accepted: 15 September 2012
Published: 26 September 2012
A cyclotron resonance study of HgTe/CdTe-based quantum wells with both inverted and normal band structures in quantizing magnetic fields was performed. In semimetallic HgTe quantum wells with inverted band structure, a hole cyclotron resonance line was observed for the first time. In the samples with normal band structure, interband transitions were observed with wide line widths due to quantum well width fluctuations. In all samples, impurity-related magnetoabsorption lines were revealed. The obtained results were interpreted within the Kane 8×8 model, the valence band offset between CdTe and HgTe and the Kane parameter E_P being adjusted.
Cyclotron resonance HgTe/CdTe heterostructures HgTe quantum wells Far-IR magnetospectroscopy
HgTe/CdTe-based quantum wells (QWs) exhibit a number of remarkable properties. At the critical HgTe QW thickness (6.3 to 7 nm, depending on the Cd content in the barrier), the forbidden gap is absent and both electrons and holes are characterized by the linear energy-momentum law of massless Dirac fermions [1, 2]. When the HgTe QW width exceeds this critical value, the energy band structure is inverted (the conduction band states are formed by p-type wavefunctions while s-type wavefunctions form the valence band states; see, e.g., [1, 3] and references therein). In the inverted band structure regime, HgTe QWs are shown to be two-dimensional (2D) topological insulators that have attracted great fundamental interest [1, 2, 4, 5]. It was demonstrated [4] that a quantum spin Hall insulator state exists in such systems that can be destroyed by a magnetic field due to the crossing of Landau levels of different bands [6]. Actually, these two levels have recently been shown to display the effect of avoided crossing [7, 8]. The hole-like symmetry of conduction-band Bloch functions enhances spin-dependent effects like the Rashba splitting, which has been shown to reach 30 meV [3, 9]. Wide HgTe/CdTe QWs have an indirect band structure [10]. If the well is wide (above 12.5 nm), the side maxima of the valence band overlap with the conduction band. Then, the Fermi level can cross both valence and conduction bands and a semimetallic state can be implemented, which has been revealed by magnetotransport measurements [11, 12]. On the other hand, narrow HgTe QWs have been proposed as a material for detectors of THz radiation since they possess certain advantages over bulk HgCdTe solid solutions that are widely used for mid-infrared (IR) photodetectors. An alternative way to tune the QW structure is to admix Cd into a wide HgTe QW. In [13, 14], 30-nm-wide Hg1-xCdxTe QWs with a Cd content x > 0.13 are shown to have a normal band structure.
However, properties of such wells are not identical to those of normal-band-structure HgTe QWs with the same bandgap; namely, wide HgCdTe QWs demonstrate an indirect band structure, i.e., the side maximum in the valence band exceeds that in the center of the Brillouin zone. An informative method to probe the energy band structure, both in bulk semiconductors and in QWs, is the cyclotron resonance (CR) technique. However, at the moment, there have been no systematic studies of CR in HgTe/CdTe QWs with different band structures (cf. [6-9, 13-19]). In this work, we present the first results of CR measurements on a semimetallic sample with a wide HgTe QW (inverted band structure) as well as on two samples with normal band structures: a narrow HgTe QW (for the first time) and a wide HgCdTe QW (about 15% cadmium).
The structures under investigation were grown by molecular beam epitaxy on semi-insulating GaAs(013) substrates [20]. ZnTe and thick relaxed CdTe buffer layers, a 100-nm CdyHg1-yTe lower barrier layer, a Hg1-xCdxTe QW, and a 100-nm CdyHg1-yTe upper barrier layer were grown one by one, followed by a 50-nm CdTe cap layer (see Table 1). The structures were not intentionally doped. In all samples, there were 2D carriers in the QWs at liquid helium temperature under dark conditions. Sample 1 was semimetallic, which was confirmed by transport measurements. In sample 2, the dark electron concentration was about 4×10^10 cm^-2 and could be raised up to 10^11 cm^-2 by visible (or near-IR) light illumination (positive persistent photoconductivity (PPC)). In sample 3, the dark electron concentration was about 10^11 cm^-2, but, in contrast to the previous sample, visible (or near-IR) light illumination resulted in a concentration decrease down to complete freezing out of free carriers (negative PPC).
Table 1
Sample parameters
Sample number | Growth number | QW width (nm) | Band structure
^a Determined from PC measurements.
CR studies were carried out at T = 4.2 K on 5×5 mm samples placed in liquid helium. We used two superconducting coils having maximum magnetic fields of 3 and 11 T. CR spectra were measured in the Faraday configuration in two ways: by sweeping the magnetic field up to 3 T at a constant frequency of the terahertz radiation, and in a static magnetic field up to 11 T. In the first case, the radiation was generated using quantum cascade lasers (QCLs) operating at 2.6, 3.2, and 4.3 THz (pulse length 10 μs, repetition rate 5 to 10 kHz). The radiation transmitted through the sample was detected using a Ge:Ga impurity photodetector. In the second case, a BRUKER 113V Fourier transform (FT) spectrometer (Bruker Optik GmbH, Ettlingen, Germany) was used with a globar radiation source. The spectral resolution was 4 cm^-1. The transmitted radiation was detected by a composite bolometer. The measured spectra presented here were normalized by the sample transmission at B = 0 and then divided by the ratio of reference signals (signal without the sample) at nonzero and zero magnetic fields. The latter enables us to eliminate the influence of the magnetic field on the bolometer sensitivity.
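A minimal sketch of that normalization step (array names and values are hypothetical, not measured data): the transmitted signal at field B is divided by the zero-field transmission, and then by the ratio of reference (no-sample) signals at B and 0, which cancels the bolometer's own field dependence.

```python
import numpy as np

def normalize(sample_B, sample_0, ref_B, ref_0):
    """Field-normalized transmission with detector sensitivity removed."""
    return (sample_B / sample_0) / (ref_B / ref_0)

# With no absorption and a field-dependent detector, normalization gives 1:
ref_0 = np.array([1.0, 1.0, 1.0])
ref_B = np.array([0.9, 0.8, 0.7])     # bolometer sensitivity drifts with B
sample_0 = ref_0.copy()
sample_B = ref_B.copy()               # sample transmits everything
print(normalize(sample_B, sample_0, ref_B, ref_0))  # [1. 1. 1.]
```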
CR measurements in static magnetic fields up to 11 T were carried out in the Laboratoire National des Champs Magnétiques Intenses in Grenoble (LNCMI-G). All other measurements were performed at the Institute for Physics of Microstructures in Nizhny Novgorod.
Theoretical calculations
The band structure in the absence of the magnetic field and the Landau levels (LLs) in the QWs under study were calculated in the axial approximation in the same way as described in [19, 21] in the four-band model. The calculation is based on the envelope function method proposed by Burt [22]. The envelope functions were found as the solutions of the time-independent Schrödinger equation with the 8×8 Hamiltonian taking into account the built-in strain. To calculate the envelope functions and the corresponding values of the electron energy, the structure was approximated by a superlattice of weakly interacting QWs. The lattice period was chosen such that the interaction between the wells would not significantly affect the energy spectrum. The calculation was performed by expanding the envelope functions in plane waves. The expression for the Hamiltonian of the heterostructure grown on the (013) plane was derived by the method described in [23]. The components of the built-in strain tensor were calculated using the formulas from [21]. The band parameters of the materials used in the calculation are taken from [21]. Two parameters of the model were adjusted to get better agreement between calculated and measured transition energies. The first parameter is the valence band offset of CdTe and HgTe. This parameter is not well known and, according to [23], is 570 ± 60 meV. We have used the value 620 meV. The second parameter is the Kane parameter E_P, which is the same for both materials according to the model used. We assumed E_P to be 20.8 eV (instead of 18.8 eV [23]). The difference between the results of our calculations with 'traditional' and 'adjusted' parameters is shown in the fan charts for all three samples under study. The dependences of all parameters, except the bandgap, on the content of the solid solution Hg1-xCdxTe were assumed to be linear in x. The concentration dependence of the bandgap was described by the formula from [23].
It should be noted that the axial approximation we used is quite good for the conduction band but can give a small error for the valence band (see, for example, Figure one in [2]). According to our estimations, using the axial approximation could result in an error in the LL energies in the valence band of up to 2 meV (16 cm^-1).
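The meV/wavenumber conversion used throughout the text can be checked directly; a sketch with CODATA constants (the effective mass in the second helper is a made-up illustrative number, not a fitted value from this work):

```python
# SI constants (CODATA values)
HBAR = 1.054571817e-34      # J*s
Q_E = 1.602176634e-19       # elementary charge, C
H_PLANCK = 6.62607015e-34   # J*s
C_CM = 2.99792458e10        # speed of light, cm/s
M_E = 9.1093837015e-31      # electron mass, kg

def mev_to_cm1(e_mev):
    """Convert an energy in meV to wavenumbers (cm^-1): E = h*c*nu."""
    return e_mev * 1e-3 * Q_E / (H_PLANCK * C_CM)

print(round(mev_to_cm1(2.0), 1))  # ~16.1, matching "2 meV (16 cm^-1)"

def cyclotron_energy_mev(b_tesla, m_eff):
    """Classical cyclotron energy hbar*e*B/m* in meV, m_eff in units of m_e."""
    e_joule = HBAR * Q_E * b_tesla / (m_eff * M_E)
    return e_joule / Q_E * 1e3

# Illustrative only: B = 3 T with a hypothetical effective mass of 0.03 m_e.
print(round(cyclotron_energy_mev(3.0, 0.03), 1))
```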
Figure 1 exemplifies the fan chart of calculated LLs in sample 1. According to the calculated energy band spectra in the absence of the magnetic field, in this sample there is a strong overlap between the conduction band and the side maxima of the valence band. As can be seen from Figure 1, in this sample, in addition to the crossing of the lowest LL n = −2 in the conduction band and the 'top' LL n = 0 in the valence band (typical for narrow QWs with inverted band structure [6, 7, 19]), LLs with high numbers in the valence band indeed overlap with those in the conduction band. Therefore, for a Fermi level position between 40 and 80 cm^-1, this structure would be semimetallic, with 2D electrons and holes coexisting in the HgTe QW at thermal equilibrium.
Figure 1
Calculated Landau levels for sample 1 using the adjusted parameters. Integers denote the number n of the Landau levels. Vertical arrows indicate the experimentally observed transitions.
Results and discussion
Figure 2 presents typical CR spectra in sample 1 with inverted band structure obtained with the FT spectrometer. The positions of all observed absorption peaks versus the magnetic field are plotted in Figure 3. The symbol size characterizes the line intensity: bigger points correspond to more intense lines. The calculated energies of allowed transitions between LLs (Δn = 1) are also plotted in Figure 3. There are two stronger lines in the spectra: line β and line Π. In high magnetic fields, line β definitely arises from the transition between the n = −2 and n = −1 LLs (cf. [6, 7, 19]). In this case, LL n = −2 is fully occupied, level n = −1 is empty (see Figure 1), and the −2 → −1 transition is allowed, so we have a strong line in the spectra. In moderate magnetic fields, line β can also be attributed to the −1 → 0 transition in the conduction band: at B < 4 T, the energies of the −2 → −1 and −1 → 0 transitions are close to each other; as the magnetic field decreases, the occupancy of LL n = −1 in the conduction band in the semimetallic sample 1 increases, so the intensity of the −2 → −1 transition goes down while that of the −1 → 0 one increases. The weak line βi, observed in high magnetic fields below line β, can, in our opinion, be attributed to electron transitions between LL n = −2 and residual donor states pertaining to LL n = −1.
Figure 2
Typical CR spectra for sample 1. The numbers against the CR lines are the magnetic field values in Tesla. Gray stripes are Reststrahlen bands.
Figure 3
Energies of cyclotron transitions versus the magnetic field for sample 1. Solid lines correspond to the calculated transitions with adjusted parameters; thin dotted lines, with traditional parameters. Symbols are experimental data. The size of symbols indicates the intensity of CR lines: the smallest symbols correspond to weak lines.
The second strong line Π is apparently a hole CR. It crosses the X-axis at a nonzero magnetic field (≈5 T), which means that the transition takes place between LLs crossing approximately at this field. The only allowed transition satisfying this condition is the 1 → 0 one in the valence band. Some discrepancy between the measured and calculated energies (see Figure 3) is due to the violation of the axial approximation. Thus, line Π is the first observed hole CR in HgTe QWs in quantizing magnetic fields. A weaker line Πi can, by analogy, be attributed to transitions between the filled LL n = 1 in the valence band and an impurity state pertaining to the empty LL n = 0.
In the magnetic field range 3.5 to 5 T in the CR spectra of sample 1, we have observed a weaker line α that is known to result from the interband transition 0 → 1 [6, 7, 19]. At B < 3.5 T, LL n = 1 seems to be occupied and the absorption decreases, while at B > 5 T, the 'initial' level n = 0 seems to rise above the Fermi level.
The weak high-frequency lines I1 to I3 probably result from some interband transitions (cyclotron or impurity). At the moment, it is difficult to identify them, simply because of the great number of allowed transitions between valence and conduction band LLs in this frequency range. Finally, line U, whose spectral position does not depend on the magnetic field, most probably results from transitions between impurity states pertaining to the LLs n = −2 in the valence and conduction bands (since direct transitions between these two LLs are forbidden in the Faraday configuration).
Investigations of CR absorption in sample 2 also revealed numerous spectral features. In this sample, in addition to the magnetoabsorption study with an FT spectrometer, we also measured CR with QCLs at different 2D electron concentrations varied using the positive PPC effect. As seen from Figure 4, raising the electron concentration increases the CR line intensity only, while its position remains unchanged. This indicates that the observed CR line results from transitions from one and the same LL (namely n = 1 in the conduction band; see Figure 5), because in classical magnetic fields a gradual shift of the CR line to higher magnetic fields with increasing concentration is observed [19].
Figure 4
Typical CR spectra for sample 2 measured using a 3.2-THz QCL in the absence of visible light illumination (1) and at various levels of illumination (2 to 5). The carrier density in units of 10¹⁰ cm⁻² is 3.5 (1), 5.4 (2), 7.2 (3), 9.3 (4), and 10.3 (5).
Figure 5
Energies of cyclotron transitions versus the magnetic field for sample 2. Solid lines correspond to the calculated transitions with adjusted parameters; thin dotted lines, with traditional parameters. Symbols are experimental data. The size of symbols indicates the intensity of CR lines: the smallest symbols correspond to weak lines.
Experimental data obtained in sample 2 with both the FT spectrometer and the QCLs, as well as the calculated energies of allowed transitions between conduction band LLs versus the magnetic field, are presented in Figure 5. It is clearly seen that the data obtained with the different techniques agree fairly well (see the lower left corner in Figure 5). Besides, using the QCL operating at 4.3 THz made it possible to measure CR in the phonon absorption band around 150 cm⁻¹ (see Figure 2) owing to the high stability of the QCL radiation intensity.
The main lines in the absorption spectra of sample 2 are α, γ, and δ. This sample has a normal band structure; therefore, all the transitions take place within the conduction band. The LL structure is analogous to that of sample 100708 studied earlier (see Figure one in [14]). Line α corresponds to the transition 0 → 1 from the lowest LL in the conduction band. In high magnetic fields over 4 T, the LL filling factor is less than unity and all the electrons in the QW occupy LL n = 0; therefore, only CR line α is observed. However, in lower magnetic fields, the electrons populate the next LL n = − 1 (see Figure one in [14]) and the transitions involving LLs − 1 and 0 (line γ) are observed. At still smaller magnetic fields, the third LL in the conduction band becomes occupied, which leads to a decrease in the intensity of transition 0 → 1 (line α) and to the appearance of line δ (transition 1 → 2).
The observed intense absorption line γ should be considered separately. Its position corresponds fairly well to the transition between the two lowest LLs, 0 → −1. In magnetic fields over 5.5 T, where this line is observed, LL n = 0 is filled while LL n = − 1 is empty. However, according to our calculations within the axial model, the square of the electrodipole matrix element for this transition is four orders of magnitude smaller than that for transition 0 → 1 (line α). Actually, the 0 → −1 transition corresponds to an electron spin resonance, which should not be observed in the Faraday configuration. Nevertheless, line γ is clearly seen in the absorption spectra. Probably, this line results from transitions between shallow-donor impurity states pertaining to LLs 0 and − 1. It is also possible that, because axial symmetry is actually absent, the square of the matrix element for this transition is significantly higher. In any case, the origin of line γ (which has been observed in a number of samples with normal band structure) requires further investigation.
Weak line αi seems to result from the transition between the 1s-like state of residual shallow donors pertaining to LL n = 0 and the excited 2p+-like state pertaining to LL n = 1. In contrast to the impurity lines βi and Πi observed in sample 1 with inverted band structure (Figure 3), the energy of the transitions corresponding to line αi exceeds that of line α, since the binding energy of the 1s-like ground state is greater than that of the excited 2p+-like state. The origin of other weak lines observed in the absorption spectra of sample 2 requires further studies.
The last sample under study, sample 3, contains a narrow HgTe QW with a nominal width of 6.3 nm, which should correspond to a zero bandgap [1, 2]. In contrast to the previous one, this sample demonstrated negative persistent photoconductivity under illumination by visible light, down to electron freeze-out. The latter enabled us to measure the spectrum of interband photoconductivity (Figure 6; cf. [13]). One can see a distinct low-frequency edge of the conductivity at 380 cm⁻¹ (47 meV). According to the theoretical model used, the gap value of 47 meV corresponds to a significantly narrower QW with normal band structure. Therefore, the general features of the LL fan chart in this sample are the same as those in sample 2 (see Figure one in [14]).
Figure 6
Typical CR spectra and photoconductivity spectrum for sample 3. The numbers against the CR lines are the magnetic field values in Tesla. Arrows indicate the observed cyclotron peaks. Gray stripes are Reststrahlen bands. The ‘bandgap’ mark indicates the value of the bandgap for 4.8-nm HgTe QW.
Typical CR spectra of sample 3 are plotted in Figure 6, and the overall data are presented in Figure 7 together with the calculated energies of CR transitions versus the magnetic field. There are four main lines in the spectra to be considered: α, β, U1, and U2. The nature of lines α and β is well known. As discussed above, line α corresponds to the transition from the lowest LL in the conduction band, 0 → 1. Line β corresponds to the interband transition −2 → −1 from the top LL in the valence band to the conduction band. The intensity of this line decreases in magnetic fields below 3 T (see Figure 7) because of partial filling of the ‘final’ LL for this transition, n = − 1. To the best of our knowledge, this is the first observation of an interband transition in a HgTe QW with normal band structure; until now, such transitions had been observed in HgTe QWs with inverted band structure only (see, e.g., [6, 7, 19]). It should be mentioned that the extrapolation of the spectral position of line β to B = 0 gives a slightly smaller bandgap (340 cm⁻¹) than that obtained from the photoconductivity spectrum (measured on another sample cut from the same wafer). Therefore, in our calculations, we used a compromise QW width (between the CR and photoconductivity data) of 4.8 nm. Let us note that in sample 3, with the narrowest QW, the linewidth of the interband transition (β) significantly exceeds that of the intraband one (α), while in broad QWs they are approximately the same (see, e.g., Figure 2; [7, 19]). In our opinion, the broadening of the interband line β in sample 3 with its narrow QW results from the enhanced role of one-monolayer fluctuations (about 0.5 nm) of the narrow QW width, which in turn lead to bandgap fluctuations.
Figure 7
Energies of cyclotron transitions versus the magnetic field for sample 3. Solid lines correspond to the calculated transitions with adjusted parameters; thin dotted lines, with traditional parameters. Symbols are experimental data. The size of symbols indicates the intensity of CR lines: the smallest symbols correspond to weak lines.
The nature of the most intense low-frequency line in the CR spectra, U1, is not quite clear. It persists up to the maximal magnetic field used (11 T), where the LL filling factor is much less than unity, so it cannot be attributed to transitions between higher LLs in the conduction band. On the other hand, the transition energies are much less than the bandgap. Therefore, the only reasonable explanation is to attribute this absorption line to the intracenter excitation of residual donors. In wide QWs (such as in sample 2), the shallow-donor binding energies are small compared to the CR energies because of the small electron effective masses (of the order of 10⁻² m0, where m0 is the free electron mass). However, in narrow QWs, the donor binding energies increase significantly, since the QW potential pushes the donor wavefunction toward the impurity ion. A weaker absorption line, U2, seems to result from some impurity interband transition, since this line is as broad as line β. On the whole, the agreement between measured and calculated data in sample 3, with the narrowest QW (Figure 7), is worse than in samples 1 and 2 with wider QWs. This means that the theoretical model for the description of such narrow QWs remains to be elaborated.
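As a rough, illustrative cross-check (not taken from the paper), the classical cyclotron energy ħωc = ħeB/m* mentioned in the discussion above can be evaluated for an effective mass of the order of 10⁻² m0; the field value and mass ratio below are arbitrary assumptions, not fitted values from this work:

```python
# Back-of-the-envelope sketch: classical cyclotron resonance energy
# hbar*omega_c = hbar*e*B/m* for an assumed effective mass m* ~ 1e-2 m0.
HBAR = 1.054571817e-34       # J*s
E_CHARGE = 1.602176634e-19   # C
M0 = 9.1093837015e-31        # kg, free electron mass

def cyclotron_energy_meV(B_tesla, m_eff_ratio):
    """hbar*omega_c in meV for field B and effective mass m* = m_eff_ratio * m0."""
    omega_c = E_CHARGE * B_tesla / (m_eff_ratio * M0)   # rad/s
    return HBAR * omega_c / E_CHARGE * 1e3              # J -> meV

def meV_to_inverse_cm(e_meV):
    """Convert energy from meV to wavenumbers (1 meV ~ 8.0655 cm^-1)."""
    return e_meV * 8.06554

e = cyclotron_energy_meV(5.0, 0.01)   # assumed: 5 T, m* = 0.01 m0
print(e, "meV ~", meV_to_inverse_cm(e), "cm^-1")   # roughly 58 meV (~467 cm^-1)
```

Such an estimate only shows why shallow-donor binding energies (tens of meV in narrow QWs) can become comparable to the CR energies in this field range.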
In conclusion, we have measured CR in a set of nominally undoped HgCdTe QWs with different band structures in quantizing magnetic fields. The results obtained are interpreted on the basis of Landau level calculations within the Kane 8×8 model. In wide semimetallic HgTe QWs with inverted band structure, both intra- and interband transitions between Landau levels are identified, the CR lines being accompanied by impurity satellites. A hole CR line has been observed for the first time. In two samples with normal band structure, a wide (30 nm) HgCdTe QW and a narrow (4.8 nm) HgTe QW, interband CR transitions have been revealed in the spectra, the interband absorption linewidth in the narrow QW being broadened due to QW width fluctuations. From the comparison of the experimental and calculated data, adjusted material parameters are proposed: a valence band offset between CdTe and HgTe of 620 meV (instead of 570 meV) and a Kane parameter EP = 20.8 eV (instead of 18.8 eV).
cyclotron resonance
Landau level
persistent photoconductivity
quantum cascade laser
quantum well
We are grateful to Yu.G. Sadof’ev and Trion Technology Inc., USA, for providing us with quantum cascade lasers. This work was supported by the Russian Foundation for Basic Research (grants 11-02-00958, 11-02-97061, and 11-02-93111), the Ministry of Education and Science of the Russian Federation (state contract nos. 16.740.11.0321 and 16.518.11.7018), the Council of the President of the Russian Federation for Support of Young Scientists and Leading Scientific Schools (project nos. MK-1114.2011.2 and NSh-4756.2012.2), and the Russian Academy of Sciences. The Montpellier team would also like to acknowledge the CNRS via GDR-I project ‘Semiconductor sources and detectors of THz frequencies’ and the GIS-Teralab.
Authors’ Affiliations
Institute for Physics of Microstructures of the Russian Academy of Sciences
Laboratoire Charles Coulomb (LCC), Universite Montpellier II
Laboratoire National des Champs Magnetiques Intenses (LNCMI-G)
1. Bernevig A, Hughes T, Zhang SC: Quantum spin Hall effect and topological phase transition in HgTe quantum wells. Science 2006, 314(5806):1757–1761. 10.1126/science.1133734
2. Büttner B, Liu CX, Tkachov G, Novik EG, Brüne C, Buhmann H, Hankiewicz EM, Recher P, Trauzettel B, Zhang SC, Molenkamp LW: Single valley Dirac fermions in zero-gap HgTe quantum wells. Nat Phys 2011, 7:418–422. 10.1038/nphys1914
3. Gui YS, Becker CR, Dai N, Liu J, Qiu ZJ, Novik EG, Schäfer M, Shu XZ, Chu JH, Buhmann H, Molenkamp LW: Giant spin-orbit splitting in a HgTe quantum well. Phys Rev B 2004, 70:115328.
4. König M, Wiedmann S, Brüne C, Roth A, Buhmann H, Molenkamp LW, Qi XL, Zhang SC: Quantum spin Hall insulator state in HgTe quantum wells. Science 2007, 318(5851):766–770. 10.1126/science.1148047
5. König M, Buhmann H, Molenkamp LW, Hughes T, Liu CX, Qi XL, Zhang SC: The quantum spin Hall effect: theory and experiment. J Phys Soc Jpn 2008, 77(3):031007. 10.1143/JPSJ.77.031007
6. Schultz M, Merkt U, Sonntag A, Rössler U, Winkler R, Colin T, Helgesen P, Skauli T, Løvold S: Crossing of conduction- and valence-subband Landau levels in an inverted HgTe/CdTe quantum well. Phys Rev B 1998, 57:14772–14775. 10.1103/PhysRevB.57.14772
7. Orlita M, Masztalerz K, Faugeras C, Potemski M, Novik EG, Brüne C, Buhmann H, Molenkamp LW: Fine structure of zero-mode Landau levels in HgTe/HgxCd1-xTe quantum wells. Phys Rev B 2011, 83:115307.
8. Zholudev M, Teppe F, Orlita M, Consejo C, Torres J, Dyakonova N, Wróbel J, Grabecki G, Mikhailov N, Dvoretskii S, Ikonnikov A, Spirin K, Aleshkin V, Gavrilenko V, Knap W: Magnetospectroscopy of 2D HgTe-based topological insulators around the critical thickness. Phys Rev B, unpublished.
9. Spirin KE, Ikonnikov AV, Lastovkin AA, Gavrilenko VI, Dvoretskii SA, Mikhailov NN: Spin splitting in HgTe/CdHgTe (013) quantum well heterostructures. JETP Lett 2010, 92:63–66. 10.1134/S0021364010130126
10. Ortner K, Zhang XC, Pfeuffer-Jeschke A, Becker CR, Landwehr G, Molenkamp LW: Valence band structure of HgTe/Hg1-xCdxTe single quantum wells. Phys Rev B 2002, 66(7):075322.
11. Kvon ZD, Olshanetsky EB, Kozlov DA, Mikhailov NN, Dvoretskii SA: Two-dimensional electron-hole system in a HgTe-based quantum well. JETP Lett 2008, 87(9):502–505. 10.1134/S0021364008090117
12. Gusev GM, Olshanetsky EB, Kvon ZD, Mikhailov NN, Dvoretsky SA, Portal JC: Quantum Hall effect near the charge neutrality point in a two-dimensional electron-hole system. Phys Rev Lett 2010, 104(16):166401.
13. Ikonnikov AV, Lastovkin AA, Spirin KE, Zholudev MS, Rumyantsev VV, Maremyanin KV, Antonov AV, Aleshkin VY, Gavrilenko VI, Dvoretskii SA: Terahertz spectroscopy of quantum-well narrow-bandgap HgTe/CdTe-based heterostructures. JETP Lett 2010, 92(11):756–761. 10.1134/S0021364010230086
14. Ikonnikov AV, Zholudev MS, Maremyanin KV, Spirin KE, Lastovkin AA, Gavrilenko VI, Dvoretskii SA, Mikhailov NN: Cyclotron resonance in HgTe/CdTe(013) narrowband heterostructures in quantized magnetic fields. JETP Lett 2012, 95(8):406–410. 10.1134/S002136401208005X
15. Schultz M, Heinrichs F, Merkt U, Colin T, Skauli T, Løvold S: Rashba spin splitting in a gated HgTe quantum well. Semicond Sci Technol 1996, 11(8):1168. 10.1088/0268-1242/11/8/009
16. Kvon ZD, Danilov SN, Mikhailov NN, Dvoretsky SA, Prettl W, Ganichev SD: Cyclotron resonance photoconductivity of a two-dimensional electron gas in HgTe quantum wells. Phys E 2008, 40(6):1885–1887. 10.1016/j.physe.2007.08.115
17. Kozlov DA, Kvon ZD, Mikhailov NN, Dvoretskii SA, Portal JC: Cyclotron resonance in a two-dimensional semimetal based on a HgTe quantum well. JETP Lett 2011, 93(3):170–173. 10.1134/S0021364011030088
18. Kvon ZD, Danilov SN, Kozlov DA, Zoth C, Mikhailov NN, Dvoretskii SA, Ganichev SD: Cyclotron resonance of Dirac fermions in HgTe quantum wells. JETP Lett 2012, 94(11):816–819. 10.1134/S002136401123007X
19. Ikonnikov AV, Zholudev MS, Spirin KE, Lastovkin AA, Maremyanin KV, Aleshkin VY, Gavrilenko VI, Drachenko O, Helm M, Wosnitza J, Goiran M, Mikhailov NN, Dvoretskii SA, Teppe F, Diakonova N, Consejo C, Chenaud B, Knap W: Cyclotron resonance and interband optical transitions in HgTe/CdTe(013) quantum well heterostructures. Semicond Sci Technol 2011, 26(12):125011. 10.1088/0268-1242/26/12/125011
20. Dvoretsky S, Mikhailov N, Sidorov Y, Shvets V, Danilov S, Wittman B, Ganichev S: Growth of HgTe quantum wells for IR to THz detectors. J Electron Mater 2010, 39(7):918–923. 10.1007/s11664-010-1191-7
21. Novik EG, Pfeuffer-Jeschke A, Jungwirth T, Latussek V, Becker CR, Landwehr G, Buhmann H, Molenkamp LW: Band structure of semimagnetic Hg1-yMnyTe quantum wells. Phys Rev B 2005, 72(3):035321.
22. Burt MG: The justification for applying the effective-mass approximation to microstructures. J Phys: Condens Matter 1992, 4(32):6651. 10.1088/0953-8984/4/32/003
23. Becker CR, Latussek V, Pfeuffer-Jeschke A, Landwehr G, Molenkamp LW: Band structure and its temperature dependence for type-III HgTe/Hg1-xCdxTe superlattices and their semimetal constituent. Phys Rev B 2000, 62(15):10353–10363. 10.1103/PhysRevB.62.10353
© Zholudev et al.; licensee Springer. 2012
Psychology Wiki
Quantum logic
Revision as of 17:36, March 6, 2007 by Dr Joe Kiff
In mathematical physics and quantum mechanics, quantum logic is an operator algebraic system for constructing and manipulating logical combinations of quantum mechanical events. It can be regarded as a kind of propositional logic suitable for understanding the apparent anomalies regarding quantum measurement, most notably those concerning composition of measurement operations of complementary variables. This research area and its name originated in the 1936 paper by Garrett Birkhoff and John von Neumann, who attempted to reconcile some of the apparent inconsistencies of classical boolean logic with the facts related to measurement and observation in quantum mechanics.
Quantum logic has been proposed as the correct logic for propositional inference generally, most notably by the philosopher Hilary Putnam, at least at one point in his career. This thesis was an important ingredient in Putnam's paper Is Logic Empirical?, in which he analysed the epistemological status of the rules of propositional logic. Putnam attributes to the physicist David Finkelstein the idea that the anomalies associated with quantum measurements originate with anomalies in the logic of physics itself. It should be noted, however, that this idea had been around for some time and had been revived several years earlier by George Mackey's work on group representations and symmetry.
This logic has some unusual properties; for instance, the distributive law of propositional logic,
p and (q or r) = (p and q) or (p and r),
fails in this logic.
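This failure can be checked numerically in the smallest nontrivial case. The following sketch (illustrative, not part of the original article) models propositions as orthogonal projections on C², with the join being the projection onto the combined span and the meet defined through orthocomplements:

```python
# Distributivity fails for the lattice of subspaces of C^2.
# Take p = span{|0>}, q = span{|+>}, r = span{|->}.
import numpy as np

def proj(v):
    """Orthogonal projection onto span{v}."""
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

def join(P, Q):
    """Projection onto the closed span of ran(P) and ran(Q)."""
    U, s, _ = np.linalg.svd(np.hstack([P, Q]))
    B = U[:, : int(np.sum(s > 1e-10))]
    return B @ B.conj().T

def meet(P, Q):
    """Lattice meet via orthocomplements: P ^ Q = (P' v Q')'."""
    I = np.eye(P.shape[0])
    return I - join(I - P, I - Q)

p = proj(np.array([1.0, 0.0]))
q = proj(np.array([1.0, 1.0]))
r = proj(np.array([1.0, -1.0]))

lhs = meet(p, join(q, r))           # p ^ (q v r) = p, since q v r = I
rhs = join(meet(p, q), meet(p, r))  # (p ^ q) v (p ^ r) = 0
print(np.allclose(lhs, p), np.allclose(rhs, 0))  # True True
```

Since q ∨ r is the whole space while p ∧ q and p ∧ r are both zero, the two sides of the distributive law differ by the whole proposition p.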
The more common view regarding quantum logic, however, is that it provides a formalism for relating observables, system preparation filters and states. In this view, the quantum logic approach resembles more closely the C*-algebraic approach to quantum mechanics; in fact with some minor technical assumptions it can be subsumed by it. The similarities of the quantum logic formalism to a system of deductive logic is regarded more as a curiosity than as a fact of fundamental philosophical importance.
Introduction
In his classic treatise Mathematical Foundations of Quantum Mechanics, von Neumann noted that projections on a Hilbert space can be viewed as propositions about physical observables. The set of principles for manipulating these quantum propositions was called quantum logic by von Neumann and Birkhoff. In his book (also called Mathematical Foundations of Quantum Mechanics) Mackey attempted to provide a set of axioms for this propositional system as an orthocomplemented lattice. Mackey viewed elements of this set as potential yes or no questions an observer might ask about the state of a physical system, questions that would be settled by some measurement. Moreover Mackey defined a physical observable in terms of these basic questions. Mackey's axiom system is somewhat unsatisfactory though, since it assumes that the partially ordered set is actually given as the orthocomplemented closed subspace lattice of a separable Hilbert space. Piron, Ludwig and others have attempted to give axiomatizations which do not require such explicit relations to the lattice of subspaces.
The remainder of the following article assumes the reader is familiar with the spectral theory of self-adjoint operators on a Hilbert space. However, the main ideas can be understood using the finite-dimensional spectral theorem.
Projections as propositions
The so-called Hamiltonian formulations of classical mechanics have three ingredients: states, observables and dynamics. In the simplest case of a single particle moving in R3, the state space is the position-momentum space R6. We will merely note here that an observable is some real-valued function f on the state space. Examples of observables are position, momentum or energy of a particle. For classical systems, the value f(x), that is the value of f for some particular system state x, is obtained by a process of measurement of f. The propositions concerning a classical system are generated from basic statements of the form
• Measurement of f yields a value in the interval [a, b] for some real numbers a, b.
It follows easily from this characterization of propositions in classical systems that the corresponding logic is identical to that of some boolean algebra of subsets of the state space. By logic in this context we mean the rules that relate set operations and ordering relations, such as de Morgan's laws. These are analogous to the rules relating boolean conjunctives and material implication in classical propositional logic. For technical reasons, we will also assume that the algebra of subsets of the state space is that of all Borel sets. The set of propositions is ordered by the natural ordering of sets and has a complementation operation. In terms of observables, the complement of the proposition {f ≥ a} is {f < a}.
We summarize these remarks as follows:
• The proposition system of a classical system is a lattice with a distinguished orthocomplementation operation: The lattice operations of meet and join are respectively set intersection and set union. The orthocomplementation operation is set complement. Moreover this lattice is sequentially complete, in the sense that any sequence {Ei}i of elements of the lattice has a least upper bound, specifically the set-theoretic union:
\operatorname{LUB}(\{E_i\}) = \bigcup_{i=1}^\infty E_i
In the Hilbert space formulation of quantum mechanics as presented by von Neumann, a physical observable is represented by some (possibly unbounded) densely-defined self-adjoint operator A on a Hilbert space H. A has a spectral decomposition, which is a projection-valued measure E defined on the Borel subsets of R. In particular, for any bounded Borel function f, the following equation holds:
f(A) = \int_{\mathbb{R}} f(\lambda) d \operatorname{E}(\lambda)
In case f is the indicator function of an interval [a, b], the operator f(A) is a self-adjoint projection, and can be interpreted as the quantum analogue of the classical proposition
• Measurement of A yields a value in the interval [a, b].
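In finite dimensions this quantum proposition can be computed directly; the following sketch (an illustration under the finite-dimensional spectral theorem, not from the original text) builds the spectral projection of a Hermitian matrix onto an interval:

```python
# For a Hermitian matrix A, the spectral projection E([a, b]) is the sum of
# eigenprojections with eigenvalue in [a, b] -- the quantum analogue of the
# proposition "measurement of A yields a value in [a, b]".
import numpy as np

def spectral_projection(A, a, b):
    """Projection onto the span of eigenvectors of A with eigenvalue in [a, b]."""
    w, V = np.linalg.eigh(A)          # eigendecomposition of Hermitian A
    B = V[:, (w >= a) & (w <= b)]     # keep eigenvectors with eigenvalue in [a, b]
    return B @ B.conj().T

A = np.diag([0.0, 1.0, 3.0])
E = spectral_projection(A, 0.5, 2.0)              # selects only the eigenvalue 1
print(np.allclose(E, np.diag([0.0, 1.0, 0.0])))   # True
print(np.allclose(E @ E, E), np.allclose(E, E.conj().T))  # idempotent, self-adjoint
```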
The propositional lattice of a quantum mechanical system
This suggests the following quantum mechanical replacement for the orthocomplemented lattice of propositions in classical mechanics. This is essentially Mackey's Axiom VII:
• The orthocomplemented lattice Q of propositions of a quantum mechanical system is the lattice of closed subspaces of a complex Hilbert space H where orthocomplementation of V is the orthogonal complement V⊥.
Q is also sequentially complete: any pairwise disjoint sequence {Vi}i of elements of Q has a least upper bound. Here disjointness of W1 and W2 means that W2 is a subspace of the orthogonal complement W1⊥. The least upper bound of {Vi}i is the closed internal direct sum.
Henceforth we identify elements of Q with self-adjoint projections on the Hilbert space H.
The structure of Q immediately points to a difference with the partial order structure of a classical proposition system. In the classical case, given a proposition p, the equations
I = p \vee q
0 = p\wedge q
have exactly one solution, namely the set-theoretic complement of p. In these equations I refers to the atomic proposition which is identically true and 0 the atomic proposition which is identically false. In the case of the lattice of projections there are infinitely many solutions to the above equations.
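A small numerical illustration (not in the original article) makes this concrete in R²: every 1-dimensional subspace distinct from p is a lattice complement of p, so p has infinitely many complements, unlike the unique set-theoretic complement of the classical case:

```python
# In the subspace lattice of R^2, any line q distinct from the line p satisfies
# p v q = I and p ^ q = 0, so p has infinitely many lattice complements.
import numpy as np

def line(theta):
    """Unit vector spanning the line at angle theta in R^2."""
    return np.array([np.cos(theta), np.sin(theta)])

p = line(0.0)
results = []
for theta in (0.3, 1.0, 2.0):       # three distinct candidate complements q
    q = line(theta)
    dim_join = np.linalg.matrix_rank(np.column_stack([p, q]))
    dim_meet = 1 + 1 - dim_join     # dim(p ^ q) = dim p + dim q - dim(p v q)
    results.append((int(dim_join), int(dim_meet)))
print(results)   # each pair is (2, 0): join is the whole space, meet is zero
```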
Having made these preliminary remarks, we turn everything around and attempt to define observables within the projection lattice framework, and using this definition establish the correspondence between self-adjoint operators and observables: a Mackey observable is a countably additive homomorphism from the orthocomplemented lattice of the Borel subsets of R to Q. To say that the mapping φ is a countably additive homomorphism means that for any sequence {Si}i of pairwise disjoint Borel subsets of R, {φ(Si)}i are pairwise orthogonal projections and
\phi(\bigcup_{i=1}^\infty S_i) = \sum_{i=1}^\infty \phi(S_i)
Theorem. There is a bijective correspondence between Mackey observables and densely-defined self-adjoint operators on H.
This is the content of the spectral theorem as stated in terms of spectral measures.
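In finite dimensions the correspondence can be demonstrated directly; the sketch below (illustrative, with an arbitrary toy matrix) builds the projection-valued spectral measure of a Hermitian matrix and checks the homomorphism property on two disjoint sets:

```python
# Finite-dimensional sketch of a Mackey observable: the spectral measure of a
# Hermitian matrix sends disjoint Borel sets to orthogonal projections that add up.
import numpy as np

def pvm(A, indicator):
    """phi(S): projection onto eigenvectors of A with eigenvalue in S (given by its indicator)."""
    w, V = np.linalg.eigh(A)
    mask = np.array([indicator(x) for x in w])
    B = V[:, mask]
    return B @ B.conj().T

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # toy observable, eigenvalues 1 and 3
P1 = pvm(A, lambda x: x < 2)             # phi(S1), S1 = (-inf, 2)
P2 = pvm(A, lambda x: x >= 2)            # phi(S2), S2 = [2, inf), disjoint from S1
print(np.allclose(P1 @ P2, 0))           # orthogonality: True
print(np.allclose(P1 + P2, np.eye(2)))   # phi(S1 u S2) = phi(R) = I: True
```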
Statistical structure
Imagine a forensics lab which has some apparatus to measure the speed of a bullet fired from a gun. Under carefully controlled conditions of temperature, humidity, pressure and so on the same gun is fired repeatedly and speed measurements taken. This produces some distribution of speeds. Though we will not get exactly the same value for each individual measurement, for each cluster of measurements, we would expect the experiment to lead to the same distribution of speeds. In particular, we can expect to assign probability distributions to propositions such as {a ≤ speed ≤ b}. This leads naturally to propose that under controlled conditions of preparation, the measurement of a classical system can be described by a probability measure on the state space. This same statistical structure is also present in quantum mechanics.
A quantum probability measure is a function P defined on Q with values in [0,1] such that P(0)=0, P(I)=1 and if {Ei}i is a sequence of pairwise orthogonal elements of Q then
\operatorname{P}\!\left(\sum_{i=1}^\infty E_i\right) = \sum_{i=1}^\infty \operatorname{P}(E_i).
The following highly non-trivial theorem is due to A. Gleason:
Theorem. Suppose H is a separable Hilbert space of complex dimension at least 3. Then for any quantum probability measure on Q there exists a unique trace class operator S such that
\operatorname{P}(E) = \operatorname{Tr}(S E)
for any self-adjoint projection E.
The operator S is necessarily non-negative (that is all eigenvalues are non-negative) and of trace 1. Such an operator is often called a density operator.
Physicists commonly regard a density operator as being represented by a (possibly infinite) density matrix relative to some orthonormal basis.
For more information on statistics of quantum systems, see quantum statistical mechanics.
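The easy direction of Gleason's theorem can be verified numerically: any density operator defines a quantum probability measure via P(E) = Tr(S E). The following sketch (with arbitrary randomly generated data, purely for illustration) checks the axioms on a resolution of the identity in C³:

```python
# Easy direction of Gleason's theorem: a density operator S (non-negative,
# trace 1) defines a quantum probability measure P(E) = Tr(S E).
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
S = M @ M.conj().T
S = S / np.trace(S).real                 # random density operator on C^3

# Pairwise orthogonal rank-1 projections from a random orthonormal basis
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
E = [np.outer(Q[:, i], Q[:, i].conj()) for i in range(3)]

probs = [np.trace(S @ Ei).real for Ei in E]
print(all(0 <= x <= 1 for x in probs))   # each P(Ei) lies in [0, 1]: True
print(np.isclose(sum(probs), 1.0))       # additivity: sum over the basis = P(I) = 1
```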
Automorphisms
An automorphism of Q is a bijective mapping α : Q → Q which preserves the orthocomplemented structure of Q, that is:
\alpha\!\left(\sum_{i=1}^\infty E_i\right) = \sum_{i=1}^\infty \alpha(E_i)
for any sequence {Ei}i of pairwise orthogonal self-adjoint projections. Note that this property implies monotonicity of α. If P is a quantum probability measure on Q, then E → α(E) is also a quantum probability measure on Q. By the Gleason theorem characterizing quantum probability measures quoted above, any automorphism α induces a mapping α* on the density operators by the following formula:
\operatorname{Tr}(\alpha^*(S) E) = \operatorname{Tr}(S \alpha(E))
The mapping α* is bijective and preserves convex combinations of density operators. This means
\alpha^*(r_1 S_1 + r_2 S_2) = r_1\alpha^*(S_1) + r_2 \alpha^*(S_2) \quad
whenever 1 = r1 + r2 and r1, r2 are non-negative real numbers. Now we use a theorem of Richard Kadison:
Theorem. Suppose β is a bijective map from density operators to density operators which is convexity preserving. Then there is an operator U on the Hilbert space which is either linear or conjugate-linear, preserves the inner product and is such that
\beta(S) = U S U^*
for every density operator S. In the first case we say U is unitary, in the second case U is anti-unitary.
Remark. This note is included for technical accuracy only, and should not concern most readers. The result quoted above is not directly stated in Kadison's paper, but can be reduced to it by noting first that β extends to a positive trace preserving map on the trace class operators, then applying duality and finally applying a result of Kadison's paper.
The operator U is not quite unique; if r is a complex scalar of modulus 1, then r U will be unitary or anti-unitary if U is and will implement the same automorphism. In fact, this is the only ambiguity possible.
It follows that automorphisms of Q are in bijective correspondence to unitary or anti-unitary operators modulo multiplication by scalars of modulus 1. Moreover, we can regard automorphisms in two equivalent ways: as operating on states (represented as density operators) or as operating on Q.
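The duality formula Tr(α*(S) E) = Tr(S α(E)) can be checked directly for a unitarily implemented automorphism; the sketch below (random toy data, for illustration only) takes α(E) = U E U* and the induced α*(S) = U* S U:

```python
# Check the duality between an automorphism of Q and its action on states:
# with alpha(E) = U E U*, the dual is alpha*(S) = U* S U, and
# Tr(alpha*(S) E) = Tr(S alpha(E)) by cyclicity of the trace.
import numpy as np

rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))  # random unitary

M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
S = M @ M.conj().T
S = S / np.trace(S).real                  # random density operator

v = rng.normal(size=3) + 1j * rng.normal(size=3)
v = v / np.linalg.norm(v)
E = np.outer(v, v.conj())                 # a rank-1 projection in Q

alpha_E = U @ E @ U.conj().T              # alpha(E) = U E U*
alpha_star_S = U.conj().T @ S @ U         # induced dual action on density operators
print(np.isclose(np.trace(alpha_star_S @ E), np.trace(S @ alpha_E)))  # True
```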
Non-relativistic dynamics
In non-relativistic physical systems, there is no ambiguity in referring to time evolution since there is a global time parameter. Moreover, an isolated quantum system evolves in a deterministic way: if the system is in a state S at time t, then at time s > t it is in the state Fs,t(S). Moreover, we assume
• The dependence is reversible: The operators Fs,t are bijective.
• The dependence is homogeneous: Fs,t = Fs-t,0.
• The dependence is convexity preserving: That is, each mapping S → Fs,t(S) preserves convex combinations of density operators.
• The dependence is weakly continuous: The mapping R → R given by t → Tr(Fs,t(S) E) is continuous for every E in Q.
By Kadison's theorem, there is a 1-parameter family of unitary or anti-unitary operators {Ut}t such that
\operatorname{F}_{s,t}(S) = U_{s-t} S U_{s-t}^*
In fact,
Theorem. Under the above assumptions, there is a strongly continuous 1-parameter group of unitary operators {Ut}t such that the above equation holds.
Note that it follows easily from the uniqueness in Kadison's theorem that
U_{t+s} = \sigma(t,s) U_t U_s
where σ(t,s) has modulus 1. Now the square of an anti-unitary is a unitary, so that all the Ut are unitary. The remainder of the argument shows that σ(t,s) can be chosen to be 1 (by modifying each Ut by a scalar of modulus 1.)
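The resulting picture is the familiar one-parameter unitary group U_t = exp(−iHt); a minimal numerical illustration (with an arbitrary toy Hamiltonian, not from the original text) verifies the group law and unitarity:

```python
# One-parameter unitary group U_t = exp(-i H t) for a Hermitian H:
# U_{t+s} = U_t U_s with the phase sigma(t, s) = 1.
import numpy as np

def evolution(H, t):
    """U_t = exp(-i H t), via the eigendecomposition of the Hermitian matrix H."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

H = np.array([[1.0, 0.5], [0.5, -1.0]])   # assumed toy Hamiltonian
t, s = 0.7, 1.3
U_ts = evolution(H, t) @ evolution(H, s)
print(np.allclose(U_ts, evolution(H, t + s)))   # group law: True
print(np.allclose(evolution(H, t) @ evolution(H, t).conj().T, np.eye(2)))  # unitary: True
```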
Pure states
A convex combination of statistical states S1 and S2 is a state of the form S = p1 S1 + p2 S2 where p1, p2 are non-negative and p1 + p2 = 1. Considering the statistical state of a system as specified by the lab conditions used for its preparation, the convex combination S can be regarded as the state formed in the following way: toss a biased coin with outcome probabilities p1, p2 and, depending on the outcome, choose the system prepared in S1 or S2.
Density operators form a convex set. The convex set of density operators has extreme points; these are the density operators given by a projection onto a one-dimensional space. To see that any extreme point is such a projection, note that by the spectral theorem S can be represented by a diagonal matrix; since S is non-negative all the entries are non-negative and since S has trace 1, the diagonal entries must add up to 1. Now if it happens that the diagonal matrix has more than one non-zero entry it is clear that we can express it as a convex combination of other density operators.
The extreme points of the set of density operators are called pure states. If S is the projection on the 1-dimensional space generated by a vector ψ of norm 1 then
\operatorname{Tr}(S E) = \langle E \psi | \psi \rangle
for any E in Q. In physics jargon, if
S = | \psi \rangle \langle \psi | ,
where ψ has norm 1, then
\operatorname{Tr}(S E) = \langle \psi | E | \psi \rangle .
Thus pure states can be identified with rays in the Hilbert space H.
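A quick numerical check of the trace formula above, as a sketch in a hypothetical 2-dimensional Hilbert space (the state vector psi and the projection E below are made-up example values, not anything from the text):

```python
def outer(u, v):
    """Matrix |u><v| as a list of rows: entries u_i * conj(v_j)."""
    return [[ui * vj.conjugate() for vj in v] for ui in u]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def trace(a):
    return sum(a[i][i] for i in range(len(a)))

# Normalized state vector (hypothetical example values).
psi = [complex(3, 0) / 5, complex(0, 4) / 5]
S = outer(psi, psi)              # density operator of the pure state |psi><psi|

# E: projection onto the first basis vector (a yes-no "question" in Q).
E = [[1, 0], [0, 0]]

lhs = trace(matmul(S, E))        # Tr(S E)
rhs = sum(psi[i].conjugate() * E[i][j] * psi[j]
          for i in range(2) for j in range(2))   # <psi| E |psi>
# Both equal the probability |<e1|psi>|^2 = 9/25.
```

Since E is self-adjoint, ⟨E ψ | ψ⟩ and ⟨ψ | E | ψ⟩ coincide, so this checks both forms given above.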
The measurement process
Consider a quantum mechanical system with lattice Q which is in some statistical state given by a density operator S. This essentially means an ensemble of systems specified by a repeatable lab preparation process. The result of a cluster of measurements intended to determine the truth value of proposition E is, just as in the classical case, a probability distribution of truth values T and F. Say the probabilities are p for T and q = 1 - p for F. By the previous section p = Tr(S E) and q = Tr(S (I-E)).
Perhaps the most fundamental difference between classical and quantum systems is the following: regardless of what process is used to determine E, immediately after the measurement the system will be in one of two statistical states:
• If the result of the measurement is T
\frac{1}{\operatorname{Tr}(E S)} E S E.
• If the result of the measurement is F
\frac{1}{\operatorname{Tr}((I-E) S)}(I- E) S (I- E).
(We leave to the reader the handling of the degenerate cases in which the denominators may be 0.) We now form the convex combination of these two ensembles using the relative frequencies p and q. We thus obtain the result that the measurement process applied to a statistical ensemble in state S yields another ensemble in statistical state:
\operatorname{M}_E(S) = E S E + (I - E) S (I - E)
We see that a pure ensemble becomes a mixed ensemble after measurement. Measurement, as described above, is a special case of quantum operations.
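The pure-to-mixed transition is easy to exercise numerically. In a toy 2-dimensional sketch (psi and E below are hypothetical example values), applying M_E to a pure state drives the purity Tr(S²) below 1:

```python
def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def madd(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def trace(a):
    return sum(a[i][i] for i in range(len(a)))

# Pure state S = |psi><psi| (hypothetical example vector).
psi = [complex(3, 0) / 5, complex(4, 0) / 5]
S = [[psi[i] * psi[j].conjugate() for j in range(2)] for i in range(2)]

E = [[1, 0], [0, 0]]          # the proposition being tested
I_minus_E = [[0, 0], [0, 1]]

# M_E(S) = E S E + (I - E) S (I - E)
M = madd(matmul(matmul(E, S), E), matmul(matmul(I_minus_E, S), I_minus_E))

purity_before = trace(matmul(S, S)).real   # 1.0 for a pure state
purity_after = trace(matmul(M, M)).real    # < 1: the ensemble is now mixed
```

The purity drops below 1 whenever psi is not an eigenvector of E, which is exactly the statement that measurement turns a pure ensemble into a mixed one.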
Limitations of quantum logic
Quantum logic provides a satisfactory foundation for a theory of reversible quantum processes. Examples of such processes are the covariance transformations relating two frames of reference, such as change of time parameter or the transformations of special relativity. Quantum logic also provides a satisfactory understanding of density matrices. Quantum logic can be stretched to account for some kinds of measurement processes corresponding to answering yes-no questions about the state of a quantum system. However, for more general kinds of measurement operations (that is quantum operations), a more complete theory of filtering processes is necessary. Such an approach is provided by the consistent histories formalism.
In any case, these quantum logic formalisms must be generalized in order to deal with supergeometry (which is needed to handle Fermi fields) and non-commutative geometry (which is needed in string theory and quantum gravity theory). Both of these theories use a partial algebra with an "integral" or "trace". The elements of the partial algebra are not observables; instead the "trace" yields "Green's functions", which generate scattering amplitudes. One thus obtains a local S-matrix theory (see D. Edwards).
Since around 1978 the Flato school (see F. Bayen) has been developing an alternative to the quantum logics approach called deformation quantization (see Weyl quantization).
References
• S. Auyang, How is Quantum Field Theory Possible?, Oxford University Press, 1995.
• F. Bayen, M. Flato, C. Fronsdal, A. Lichnerowicz and D. Sternheimer, Deformation theory and quantization I, II, Ann. Phys. (N.Y.), 111 (1978) pp. 61-110, 111-151.
• G. Birkhoff and J. von Neumann, The Logic of Quantum Mechanics, Annals of Mathematics, vol. 37, 1936.
• D. Cohen, An Introduction to Hilbert Space and Quantum Logic, Springer-Verlag, 1989. This is a thorough but elementary and well-illustrated introduction, suitable for advanced undergraduates.
• D. Edwards, The Mathematical Foundations of Quantum Field Theory: Fermions, Gauge Fields, and Super-symmetry, Part I: Lattice Field Theories, International J. of Theor. Phys., Vol. 20, No. 7 (1981).
• D. Finkelstein, Matter, Space and Logic, Boston Studies in the Philosophy of Science vol V, 1969
• A. Gleason, Measures on the Closed Subspaces of a Hilbert Space, Journal of Mathematics and Mechanics, 1957.
• R. Kadison, Isometries of Operator Algebras, Annals of Mathematics, vol 54 pp 325-338, 1951
• G. Ludwig, Foundations of Quantum Mechanics, Springer-Verlag, 1983.
• R. Omnès, Understanding Quantum Mechanics, Princeton University Press, 1999. An extraordinarily lucid discussion of some logical and philosophical issues of quantum mechanics, with careful attention to the history of the subject. Also discusses consistent histories.
• N. Papanikolaou, Reasoning Formally About Quantum Systems: An Overview, ACM SIGACT News, 36(3), pp. 51-66, 2005.
• C. Piron, Foundations of Quantum Physics, W. A. Benjamin, 1976.
• H. Putnam, Is Logic Empirical?, Boston Studies in the Philosophy of Science vol. V, 1969
• H. Weyl, The Theory of Groups and Quantum Mechanics, Dover Publications, 1950.
Conical Quantum Dot
Model ID: 723
Quantum dots are nano- or microscale devices created by confining free electrons in a 3D semiconducting matrix. Those tiny islands or droplets of confined “free electrons” (those with no potential energy) present many interesting electronic properties. They are of potential importance for applications in quantum computing, biological labeling, or lasers, to name only a few.
Quantum dots can have many geometries including cylindrical, conical, or pyramidal. This model studies the electronic states of a conical InAs quantum dot grown on a GaAs substrate. To compute the electronic states taken on by the quantum dot/wetting layer assembly embedded in the GaAs surrounding matrix, the 1-band Schrödinger equation is solved. The four lowest electronic energy levels and the corresponding eigenwave functions are computed using the Coefficient form in COMSOL Multiphysics.
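COMSOL solves the full 3D one-band problem on the conical geometry; none of that is reproduced here. Purely as a sketch of the underlying eigenvalue problem, the following solves a 1D infinite-well analogue (dimensionless units with hbar = m = 1 and a well of width 1, all assumed for illustration) by a shooting method, recovering the confined levels E_n = n²π²/2:

```python
def psi_at_wall(E, steps=2000):
    """Integrate psi'' = -2 E psi across the well (psi(0) = 0) and return psi(1)."""
    h = 1.0 / steps
    psi, dpsi = 0.0, 1.0                     # psi(0) = 0, arbitrary initial slope
    for _ in range(steps):
        # midpoint (RK2) step for the first-order system (psi, dpsi)
        psi_m = psi + 0.5 * h * dpsi
        dpsi_m = dpsi + 0.5 * h * (-2.0 * E * psi)
        psi += h * dpsi_m
        dpsi += h * (-2.0 * E * psi_m)
    return psi

def eigenvalue(lo, hi, tol=1e-10):
    """Bisect on E until the boundary condition psi(1) = 0 is met."""
    flo = psi_at_wall(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        fmid = psi_at_wall(mid)
        if flo * fmid <= 0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

E1 = eigenvalue(1.0, 8.0)     # ground state, exact value pi^2/2 ~ 4.935
E2 = eigenvalue(15.0, 25.0)   # first excited state, exact value 2 pi^2 ~ 19.739
```

The real model replaces the hard walls with the InAs/GaAs band offset and the 1D operator with a 3D one, but the structure of the problem (an eigenvalue equation with decay conditions at the boundary) is the same.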
The model was based upon the paper: R. Melnik and M. Willatzen, “Band structure of conical quantum dots with wetting layers,” Nanotechnology, vol. 15, 2004, pp. 1-8.
This model was built using the following:
COMSOL Multiphysics
Friday, 9 December 2016
17 Equations That Changed the World
[The above is what mathematician Ian Stewart said when asked why he came out with a book titled "In Pursuit of the Unknown: 17 Equations That Changed the World", which "takes a look at the most pivotal equations of all time, and puts them in a human, rather than technical context".]
Number 17 on the list, for example, is the Black–Scholes derivative-pricing equation, which contributed to the financial crisis. People took the theoretical equation too seriously, pushed it beyond its assumptions, used it to justify poor decisions, and built a trillion-dollar house of cards on it, making the crisis inevitable. From an email exchange with Professor Stewart:
The 17 equations that changed the world are outlined below.
1. The Pythagorean Theorem
The Pythagorean Theorem
What does it mean: The square of the hypotenuse of a triangle is equal to the sum of the squares of its legs.
History: Though attributed to Pythagoras, it is not certain that he was the first person to prove it. The first clear proof came from Euclid, and it is possible the concept was known 1000 years before Pythagoras by the Babylonians.
Importance: The equation is at the core of much of geometry, links it with algebra, and is the foundation of trigonometry. Without it, accurate surveying, mapmaking, and navigation would be impossible.
Modern use: To pinpoint relative location for GPS navigation.
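In code, the theorem is the one-liner behind every distance computation (the coordinates below are arbitrary example values):

```python
import math

# c^2 = a^2 + b^2, in the form used for positioning: the straight-line
# distance between two points from their coordinate differences.
a, b = 3.0, 4.0
c = math.hypot(a, b)   # equivalent to math.sqrt(a * a + b * b)
```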
2. The logarithm and its identities
The logarithm and its identities
What does it mean: You can multiply numbers by adding related numbers.
History: The initial concept was discovered by the Scottish Laird John Napier of Merchiston in an effort to make the multiplication of large numbers, then incredibly tedious and time consuming, easier and faster. It was later refined by Henry Briggs to make reference tables easier to calculate and more useful.
Importance: Logarithms were revolutionary, making calculation faster and more accurate for engineers and astronomers. That's less important with the advent of computers, but they're still essential to scientists.
Modern use: To inform our understanding of radioactive decay.
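Both the original use (turning multiplication into addition) and the modern one (radioactive decay) fit in a few lines; the half-life below is a made-up example value:

```python
import math

# Napier's identity: log(x * y) = log(x) + log(y).
x, y = 123.0, 456.0
lhs = math.log(x * y)
rhs = math.log(x) + math.log(y)

# Radioactive decay: the decay constant comes from the half-life via log 2.
half_life = 10.0                               # hypothetical, in arbitrary time units
decay_constant = math.log(2) / half_life
remaining = math.exp(-decay_constant * 20.0)   # two half-lives: 1/4 remains
```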
3. The fundamental theorem of calculus
The fundamental theorem of calculus
What does it mean?: Allows the calculation of an instantaneous rate of change.
History: Calculus as we currently know it was described around the same time in the late 17th century by Isaac Newton and Gottfried Leibniz. There was a lengthy debate over plagiarism and priority which may never be resolved. We use the leaps of logic and parts of the notation of both men today.
Importance: According to Stewart, “More than any other mathematical technique, it has created the modern world.” Calculus is essential in our understanding of how to measure solids, curves, and areas. It is the foundation of many natural laws, and the source of differential equations.
Modern use: To provide optimal solutions to mathematical problems associated with medicine, economics, and computer science.
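The theorem itself can be illustrated numerically: summing the instantaneous rate of change f'(x) over an interval recovers the total change f(b) - f(a). Here f(x) = x³ on [0, 2], an arbitrary example:

```python
def fprime(x):
    """Derivative of f(x) = x**3."""
    return 3 * x * x

a, b = 0.0, 2.0
n = 10000
h = (b - a) / n
# midpoint rule approximation of the integral of f'
integral = sum(fprime(a + (i + 0.5) * h) for i in range(n)) * h
total_change = b ** 3 - a ** 3   # f(b) - f(a) = 8
```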
4. Newton’s universal law of gravitation
Newton's universal law of gravitation
What does it mean?: Calculates the force of gravity between two objects.
History: Isaac Newton derived his laws with help from earlier work by Johannes Kepler. He also used, and possibly plagiarized, the work of Robert Hooke.
Importance: Used techniques of calculus to describe how the world works. Even though it was later supplanted by Einstein’s theory of relativity, it is still essential for practical description of how objects interact with each other. We use it to this day to design orbits for satellites and probes.
Modern use: To find optimal gravitational “tubes” or pathways for space mission launches so they can be as energy efficient as possible and also to make satellite TV possible.
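Evaluating the law is a one-line computation; the figures below are approximate textbook values for the Earth and Moon, used only for illustration:

```python
# Newton's law: F = G * m1 * m2 / r^2.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
m_earth = 5.972e24   # kg
m_moon = 7.342e22    # kg
r = 3.844e8          # mean Earth-Moon distance, m
F = G * m_earth * m_moon / r ** 2   # roughly 2e20 newtons
```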
5. The origin of complex numbers
The origin of complex numbers
What does it mean?: The square of an imaginary number is negative.
History: Imaginary numbers were originally posited by famed gambler/mathematician Girolamo Cardano, then expanded by Rafael Bombelli and John Wallis. They still existed as a peculiar, but essential, problem in math until William Hamilton described this definition.
Importance: According to Stewart, "…most modern technology, from electric lighting to digital cameras, could not have been invented without them." Imaginary numbers allow for complex analysis, which allows engineers to solve practical problems working in the plane.
Modern use: To allow engineers to solve practical problems working in the plane.
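The defining property, and one small plane calculation of the kind engineers use complex numbers for (the two signals are hypothetical example values):

```python
import cmath

# The defining property: i squared is -1.
i = complex(0, 1)
square = i * i                              # (-1+0j)

# Adding two unit-amplitude signals 90 degrees out of phase, treated as
# complex amplitudes in the plane.
z = cmath.exp(0j) + cmath.exp(1j * cmath.pi / 2)
amplitude = abs(z)                          # sqrt(2): partial reinforcement
```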
6. Euler’s formula for polyhedra
Euler's formula for polyhedra
What does it mean?: Describes a space’s shape or structure regardless of alignment.
History: The relationship was first described by Descartes, then refined, proved, and published by Leonhard Euler in 1750.
Importance: Fundamental to the development of topology, which extends geometry to any continuous surface. An essential tool for engineers and biologists.
Modern use: To understand the behavior and function of DNA.
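For convex polyhedra the relation reads V - E + F = 2, which is easy to check on two familiar solids:

```python
# (vertices, edges, faces) for a cube and a tetrahedron
cube = (8, 12, 6)
tetrahedron = (4, 6, 4)

euler_cube = cube[0] - cube[1] + cube[2]                      # V - E + F
euler_tet = tetrahedron[0] - tetrahedron[1] + tetrahedron[2]
```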
7. The normal distribution
The normal distribution
What does it mean?: Defines the standard normal distribution, a bell shaped curve in which the probability of observing a point is greatest near the average, and declines rapidly as one moves away.
History: The initial work was by Blaise Pascal, but the distribution came into its own with Bernoulli. The bell curve we currently use comes from Belgian mathematician Adolphe Quetelet.
Importance: The equation is the foundation of modern statistics. Science and social science would not exist in their current form without it.
Modern use: To determine whether drugs are sufficiently effective relative to negative side effects in clinical trials.
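The familiar "about 68% within one standard deviation" figure falls out of integrating the density numerically (a small sketch, not a statistics library):

```python
import math

def pdf(x, mu=0.0, sigma=1.0):
    """Standard normal density."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# midpoint-rule integral of the density over [-1, 1]
n = 10000
h = 2.0 / n
prob_within_1sigma = sum(pdf(-1 + (i + 0.5) * h) for i in range(n)) * h  # ~0.6827
```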
8. The wave equation
The wave equation
What does it mean?: A differential equation that describes the behavior of waves, originally the behavior of a vibrating violin string.
History: The mathematicians Johann Bernoulli and Jean le Rond d'Alembert were the first to describe this relationship in the 18th century, albeit in slightly different ways.
Importance: The behavior of waves generalizes to the way sound works, how earthquakes happen, and the behavior of the ocean.
Modern use: To predict geological formations from the ensuing sound waves generated from setting off explosives.
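The content of the equation u_tt = c² u_xx is that any profile travelling at speed c solves it. A finite-difference check on sin(x - ct), with c and the sample point chosen arbitrarily:

```python
import math

c = 2.0                               # hypothetical wave speed
u = lambda x, t: math.sin(x - c * t)  # a right-moving travelling wave

h = 1e-4
x0, t0 = 0.3, 0.7                     # arbitrary sample point
# central second differences in t and in x
u_tt = (u(x0, t0 + h) - 2 * u(x0, t0) + u(x0, t0 - h)) / h ** 2
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h ** 2
residual = u_tt - c ** 2 * u_xx       # ~0: the wave equation is satisfied
```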
9. The Fourier transform
The Fourier transform
What does it mean?: Describes patterns in time as a function of frequency.
History: Joseph Fourier discovered the equation, which extended from his famous heat flow equation, and the previously described wave equation.
Importance: The equation allows for complex patterns to be broken up, cleaned up, and analyzed. This is essential in many types of signal analysis.
Modern use: To compress information for the JPEG image format and discover the structure of molecules.
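A toy discrete Fourier transform shows the principle: a sampled tone with 5 cycles over the window appears as a single dominant frequency bin. (The window size and frequency are arbitrary example values; JPEG actually uses a related cosine transform on 8x8 blocks.)

```python
import math
import cmath

N = 64
k_true = 5   # cycles over the sample window
signal = [math.cos(2 * math.pi * k_true * n / N) for n in range(N)]

def dft(x):
    """Naive O(N^2) discrete Fourier transform."""
    n_pts = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / n_pts)
                for n in range(n_pts))
            for k in range(n_pts)]

spectrum = dft(signal)
# largest-magnitude bin among the non-negative frequencies
peak_bin = max(range(N // 2), key=lambda k: abs(spectrum[k]))   # 5
```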
10. The Navier-Stokes equations
The Navier-Stokes equations
What does it mean?: The left side is the acceleration of a small amount of fluid, the right indicates the forces that act upon it.
History: Leonhard Euler made the first attempt at modeling fluid movement; French engineer Claude-Louis Navier and Irish mathematician George Stokes made the leap to the model still used today.
Importance: Once computers became powerful enough to solve this equation, it opened up a complex and very useful field of physics allowing for, among other things, the development of modern passenger jets.
Modern use: To make vehicles more aerodynamic.
11. Maxwell’s equations
Maxwell's equations
What does it mean?: Maps out the relationship between electric and magnetic fields.
History: Michael Faraday did pioneering work on the connection between electricity and magnetism; James Clerk Maxwell translated it into equations, fundamentally altering physics.
Importance: Helped predict and aid the understanding of electromagnetic waves, helping to create many technologies we use today.
Modern use: Radar, television, and modern communications.
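One famous consequence of the equations is that electromagnetic waves travel at c = 1/√(μ₀ε₀). Plugging in the SI constants recovers the speed of light:

```python
import math

mu0 = 4e-7 * math.pi          # vacuum permeability, H/m (pre-2019 defined value)
eps0 = 8.8541878128e-12       # vacuum permittivity, F/m
c = 1.0 / math.sqrt(mu0 * eps0)   # ~2.998e8 m/s
```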
12. Second law of thermodynamics
Second law of thermodynamics
What does it mean?: Energy and heat dissipate over time.
History: Sadi Carnot first posited that nature does not have reversible processes. Mathematician Ludwig Boltzmann extended the law, and William Thomson formally stated it.
Importance: Essential to our understanding of energy and the universe via the concept of entropy. It helps us realize the limits on extracting work from heat, and helped lead to a better steam engine.
Modern use: Helped prove that matter is made of atoms, which has been somewhat useful.
13. Einstein’s theory of relativity
Einstein's theory of relativity
What does it mean?: Energy equals mass times the speed of light squared.
History: The lesser-known (among non-physicists) genesis of Einstein's equation was an experiment by Albert Michelson and Edward Morley that proved light did not move in a Newtonian manner in comparison to changing frames of reference. Einstein followed up on this insight with his famous papers on special relativity (1905) and general relativity (1915).
Importance: Probably the most famous equation in history. Completely changed our view of matter and reality.
Modern use: Helped lead to nuclear weapons, and if GPS didn’t account for it, your directions would be off thousands of yards.
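The equation's famous punch is how much energy a little mass represents. For a single gram of matter:

```python
# E = m c^2 for one gram of matter, in SI units.
c = 2.998e8          # speed of light, m/s
m = 0.001            # 1 gram, in kg
E = m * c ** 2       # ~9e13 joules, roughly a large nuclear weapon's yield
```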
14. The Schrödinger equation
The Schrödinger equation
What does it mean?: Models matter as a wave, rather than a particle.
History: Louis-Victor de Broglie pinpointed the dual nature of matter in 1924. The equation you see was derived by Erwin Schrödinger in 1926, building off of the work of physicists like Werner Heisenberg.
Importance: Revolutionized the view of physics at small scales. The insight that particles at that level exist at a range of probable states was revolutionary.
Modern use: Essential to the use of the semiconductor and transistor, and thus, most modern computer technology.
15. Shannon’s information theory
Shannon's information theory
What does it mean?: Estimates the amount of data in a piece of code by the probabilities of its component symbols.
History: Developed by Bell Labs engineer Claude Shannon in the years after World War II.
Importance: According to Stewart, “It is the equation that ushered in the information age.” By stopping engineers from seeking codes that were too efficient, it established the boundaries that made everything from CDs to digital communication possible.
Modern use: Pretty much anything that involves error detection in coding. Anybody use the internet lately?
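The entropy H = -Σ pᵢ log₂ pᵢ gives the average number of bits per symbol that a perfect code could achieve; the symbol frequencies below are hypothetical example values:

```python
import math

# Probabilities of a 4-symbol alphabet (made-up example distribution).
probs = [0.5, 0.25, 0.125, 0.125]
H = -sum(p * math.log2(p) for p in probs)   # 1.75 bits per symbol
```

A fixed-length code would spend 2 bits per symbol here; Shannon's bound says 1.75 is the best any code can do, and no code can beat it.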
16. The logistic model for population growth
The logistic model for population growth
What does it mean?: Estimates the change in a population of creatures across generations with limited resources.
History: Robert May was the first to point out that this model of population growth could produce chaos in 1975. Important work by mathematicians Vladimir Arnold and Stephen Smale helped with the realization that chaos is a consequence of differential equations.
Importance: Helped in the development of chaos theory, which has completely changed our understanding of the way that natural systems work.
Modern use: To model earthquakes and forecast the weather.
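Iterating x → r x (1 - x) shows both behaviors: a steady state at moderate r, and sensitive dependence on initial conditions at r = 4 (the starting values are arbitrary):

```python
def iterate(r, x, n):
    """Apply the logistic map n times starting from x."""
    for _ in range(n):
        x = r * x * (1 - x)
    return x

steady = iterate(2.5, 0.2, 1000)    # settles to the fixed point 1 - 1/r = 0.6

a = iterate(4.0, 0.200000, 50)
b = iterate(4.0, 0.200001, 50)      # a change of 1e-6 gives a very different result
```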
17. The Black–Scholes model
The Black–Scholes model
The Community for Technology Leaders
Issue No. 8, August 2004 (vol. 26), pp. 1020-1036
Guy Gilboa, IEEE
Abstract—The linear and nonlinear scale spaces, generated by the inherently real-valued diffusion equation, are generalized to complex diffusion processes by incorporating the free Schrödinger equation. A fundamental solution for the linear case of the complex diffusion equation is developed. Analysis of its behavior shows that the generalized diffusion process combines properties of both forward and inverse diffusion. We prove that the imaginary part is a smoothed second derivative, scaled by time, when the complex diffusion coefficient approaches the real axis. Based on this observation, we develop two examples of nonlinear complex processes, useful in image processing: a regularized shock filter for image enhancement and a ramp-preserving denoising process.
Keywords: Scale-space, image filtering, image denoising, image enhancement, nonlinear diffusion, complex diffusion, edge detection, shock filters.
Guy Gilboa, Nir Sochen, Yehoshua Y. Zeevi, "Image Enhancement and Denoising by Complex Diffusion Processes", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol.26, no. 8, pp. 1020-1036, August 2004, doi:10.1109/TPAMI.2004.47
Handbook of Dynamical Systems, 1st Edition
Handbook of Dynamical Systems, 1st Edition, B. Fiedler, ISBN 9780444501684
B Fiedler
North Holland
just a few, are ubiquitous dynamical concepts throughout the articles.
B. Fiedler
Affiliations and Expertise
Freie Universität Berlin, Institut für Mathematik I, Berlin, Germany
Handbook of Dynamical Systems, 1st Edition
A. Finite-Dimensional Methods
1. Mechanisms of phase-locking and frequency control in pairs of coupled neural oscillators (N. Kopell, G.B. Ermentrout).
2. Invariant manifolds and Lagrangian dynamics in the ocean and atmosphere (C. Jones, S. Winkler).
3. Geometric singular perturbation analysis of neuronal dynamics (J.E. Rubin, D. Terman).
B. Numerics
4. Numerical continuation, and computation of normal forms (W.-J. Beyn, A. Champneys, E. Doedel, W. Govaerts,Y.A. Kuznetsov, B. Sandstede).
5. Set oriented numerical methods for dynamical systems (M. Dellnitz, O. Junge).
6. Numerics and exponential smallness (V. Gelfreich).
7. Shadowability of chaotic dynamical systems (C. Grebogi, L. Poon, T. Sauer, J.A. Yorke, D. Auerbach).
8. Numerical analysis of dynamical systems (J. Guckenheimer).
C. Topological Methods
9. Conley index (K. Mischaikow, M. Mrozek).
10. Functional differential equations (R.D. Nussbaum).
D. Partial Differential Equations
11. Navier--Stokes equations and dynamical systems (C. Bardos, B. Nicolaenko).
12. The nonlinear Schrödinger equation as both a PDE and a dynamical system (D. Cai, D.W. McLaughlin, K.T.R. McLaughlin).
13. Pattern formation in gradient systems (P.C. Fife).
14. Blow-up in nonlinear heat equations from the dynamical systems point of view (M. Fila, H. Matano).
15. The Ginzburg--Landau equation in its role as a modulation equation (A. Mielke).
16. Parabolic equations: asymptotic behavior and dynamics on invariant manifolds (P. Poláčik).
17. Global attractors in partial differential equations (G. Raugel).
18. Stability of travelling waves (B. Sandstede).
Viewpoint: Searching high and low for bottomonium
• Stephen Godfrey, Ottawa-Carleton Institute for Physics, Department of Physics, Carleton University, Ottawa, Canada K1S 5B6
Physics 1, 11
The BABAR collaboration at SLAC has observed the radiative decay of an excited state of bottomonium (the bound state of a bottom quark and its antiparticle) to its ground state ηb. Observing this long-sought ground state should enable better tests of quantum chromodynamic calculations of quark interactions and the computational approach called lattice quantum chromodynamics.
Illustration: Alan Stonebraker; bottom panel courtesy of P. Grenier and the BABAR collaboration
Figure 1: (Top) The bb¯ spectrum showing electromagnetic transitions between levels. The states that have been observed are labeled with masses taken from the Particle Data Book [16] and unobserved states are shown unlabelled with masses given by Godfrey and Isgur [9]. The unlabeled arrows show electric dipole transitions and the labeled transition (green arrow) is the M1 transition observed by the BABAR collaboration in their discovery of the ηb. The dashed line indicates the threshold above which bb¯ will have a large decay rate to BB¯ final states. (Bottom) The inclusive photon spectrum observed by BABAR is shown, including the background components (green, blue) that must be subtracted to obtain the ηb signal (red) [3].
Just over thirty years ago, a new generation of quarks was discovered when Fermilab announced they had found the bottom quark [1], adding to the known up, down, strange, and charm quarks. The discovery was indirect—the actual detection involved finding bottom-antibottom quark pairs (bb¯) that form bound states via strong interactions and have a rich spectroscopy analogous to that of the hydrogen atom [2]. These composite particles are called bottomonium, an analogy to the well-known electron-positron pairs called positronium. The first two bb¯ states that were discovered are named upsilon particles (Υ and Υ′) and were found in 1977 during experiments with collisions of 400-GeV protons on nuclear targets at Fermilab [1]. Subsequently, a variety of other excited states (all spin triplets) have been observed.
However, no spin-singlet state had been seen until the observation of the ground state called ηb, now reported in Physical Review Letters [3] by the BABAR collaboration. The difference in mass between the Υ and the ηb is important in understanding quark-antiquark states (generally called quarkonia) by testing existing models, the applicability of perturbative quantum chromodynamics to the bb¯ system, and the results of the numerical approach, known as lattice quantum chromodynamics (lattice QCD), to calculate hadron properties [4]. More importantly, having a measured value will challenge theorists to perform more precise calculations that can be compared to experiment.
Heavy quarkonia, which are bound states of a heavy quark and antiquark, are well described by nonrelativistic potential models originally derived to describe charm-anticharm (cc¯) states [5, 6]. The potentials incorporate general features of quantum chromodynamics (QCD)—the theory of quarks and gluons describing the strong interactions. At short distances, these QCD-motivated potentials take the form of a one-gluon exchange potential, analogous to the photon exchange that is responsible for the Coulomb interaction in quantum electrodynamics (QED). Added to this are relativistic corrections, such as spin-spin and spin-orbit terms, all with “color” factors reflecting the more complicated group structure of QCD compared to QED. The spin-spin term, for example, is analogous to the hyperfine interaction that gives rise to the 21-cm line in hydrogen. At large separation the potential is described by a linearly rising interaction that confines the quarks. The QCD-motivated phenomenological potential is in good agreement with results obtained using numerical lattice-QCD methods [4]. Lattice QCD is a nonperturbative approach that deals with the nonlinear nature of the strong interaction by dividing space and time into discrete grid points and then integrating over quark and gluon configurations.
In these potential models, quarkonium energy levels are found by solving a nonrelativistic Schrödinger equation, although more sophisticated calculations take into account relativistic corrections [7]. The calculations yield energy levels that are characterized by the radial quantum number n, which is equal to one plus the number of nodes of the radial wave function, and L, the relative orbital angular momentum between the quark and antiquark. In fact, much of the nomenclature is familiar from atomic physics. The orbital levels are labeled by S, P, D (corresponding to L=0,1,2). The spins of the quark and antiquark couple to give total spin S=0 (spin-singlet) or S=1 (spin-triplet) states. S and L couple to give the total angular momentum of the state J, which can take on values J=L-1, L, or L+1. Thus the L=0 states are 1S0 and 3S1; the L=1 states are 1P1 and 3P0, 3P1, 3P2, etc.
In addition to the spin-independent potential, there are spin-dependent interactions that give rise to splittings within multiplets [7]. With these, we can predict Υ-ηb splittings in bottomonium and similar splittings in charmonium that are analogous to the hyperfine splittings in hydrogen. Splittings within P-wave and higher L-state multiplets are due to spin-orbit and spin-spin interactions arising from one-gluon exchange and a relativistic spin-orbit precession term. The contact spin-spin splitting between the singlet and triplet P-wave states is predicted to be small due to its short range and because the wavefunction at the origin for P-wave states is zero.
The observations of the ηb by BABAR [3] and the charmonium state hc by CLEO [8] are important validations of this picture. In the experiment reported by Aubert et al., electrons and positrons from the PEP-II storage ring at SLAC collide with a center-of-mass energy of 10.355 GeV. This energy is selected so that the collisions create Υ(3S) particles, some of which then decay radiatively to the ηb(1S) state. Figure 1 shows the bb¯ spectrum of observed states along with predictions for missing states [9] by Isgur and myself. The commonly used names of observed levels are shown. Note that bb¯ states with mass greater than two times the mass of a B meson (the ground state of a meson made up of a bottom-quark and a light up or down quark), will have a large decay rate into B-B¯ pairs so the branching ratio for radiative decays will be small.
Electromagnetic transitions between the levels can be calculated in the quark model and provide an important tool in understanding the quarkonium internal structure [10]. The theory and terminology of electromagnetic transitions between quarkonium states closely follows the treatment given for transitions in the hydrogen atom in undergraduate quantum mechanics textbooks, with the electric charge of the electron replaced by the quark charge, and one must include both the quark and antiquark transition amplitudes. The leading-order transition amplitudes are due to electric dipole transitions (E1) between states with the same total spin and magnetic dipole transitions (M1), which flip the quark or antiquark spin and are inversely proportional to the constituent quark mass. The predictions for E1 transitions, 3PJ → 3S1, in the bottomonium system are in good agreement with experimental data [10]. Recently, the CLEO experiment observed a 1D bottomonium state in a cascade of E1 transitions with a mass of 10161.1±0.6±1.6 MeV/c2 [11], which is in good agreement with theoretical predictions [7].
In the nonrelativistic limit the spatial overlap integrals for M1 transitions equal one between S-wave states within the same multiplet (that is, they are favored transitions) and zero for transitions between states with different radial quantum numbers (that is, these transitions are hindered). Relativistic corrections lead to small overlaps for these hindered transitions, which can be compensated by large phase-space factors [12]. Until the observation of the hindered Υ(3S) to ηb transition, no M1 transitions had been observed in the bottomonium system.
Until now, all of the observed states in the bottomonium system were spin-triplet states, but quark models predict the existence of spin-singlet partners, including the ground state. As mentioned above, while the decay amplitudes for hindered transitions are much smaller than those for favored transitions, this can be compensated by the larger available phase space in transitions such as Υ(3S) → ηb(1S). BABAR collected a large data set by tuning the e+e- energy to the mass of the Υ(3S) and observed a signal in the photon energy spectrum at Eγ = 921.2 +2.1/-2.8 (stat) ± 2.4 (syst) MeV, which they interpreted as an M1 transition to the ηb(1S) [3]. This corresponds to an ηb(1S) mass of 9388.9 +3.1/-2.3 (stat) ± 2.7 (syst) MeV/c2, with a corresponding Υ(1S)-ηb hyperfine mass splitting of 71.4 +2.3/-3.1 (stat) ± 2.7 (syst) MeV/c2.
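As a rough consistency check (this is back-of-the-envelope kinematics, not the collaboration's analysis, and the input masses are approximate PDG values): in a two-body decay Υ(3S) → γ ηb, the photon energy in the Υ(3S) rest frame fixes the ηb mass through E_γ = (M² - m²)/(2M).

```python
import math

M = 10355.2          # Y(3S) mass, MeV/c^2 (approximate PDG value)
E_gamma = 921.2      # measured photon energy, MeV

# Invert E_gamma = (M^2 - m^2) / (2 M) for the daughter mass m.
m_etab = math.sqrt(M * M - 2.0 * M * E_gamma)   # ~9389 MeV/c^2

# Hyperfine splitting relative to the Y(1S) mass (approximate PDG value).
splitting = 9460.3 - m_etab                     # ~71 MeV/c^2
```

The numbers land within a fraction of an MeV of the quoted mass and splitting, which is just the statement that the mass determination follows directly from the measured photon line.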
The measured Υ(3S)-ηb splitting is consistent with potential model predictions although a significant subset of predictions lie outside experimental one-sigma error bounds [12]. A recent lattice-QCD calculation predicts a value of 61±14MeV/c2, which is consistent within the large errors [4]. Two recent calculations using a perturbative-QCD approach predict splittings of 39±14MeV/c2 [13] and 44±11MeV/c2 [14], both being over two standard deviations away from the BABAR measurement. One can see that the recent BABAR result poses a serious challenge to theorists, which should spur renewed effort to improve calculations. More precise measurement of the ηb mass would allow precision tests of lattice-QCD and perturbative-QCD calculations of the Υ-ηb splitting.
The large amount of data that BABAR has accumulated on the Υ(3S) state should allow searches for other missing bb¯ states. In particular, it may be possible to observe the ηb(2S) state via M1 transitions. Many models predict the branching ratio to the ηb(2S) to be only a factor of 2 or 3 smaller than that to the ηb(1S) and therefore possibly observable. Other interesting possibilities consist of searching for the hb(1 1P1) in the processes Υ(3S) → π0 hb(1 1P1) → π0 γ ηb and the sequential process Υ(3S) → π+ π- hb(1 1P1) → π+ π- γ ηb [15]. The discovery of these states would represent an important step in completing the bottomonium spectrum and provide an important test of QCD-based models and calculations. Measurement of the hyperfine mass splittings between the triplet and singlet quarkonium states is crucial to understanding the role of spin-spin interactions in quarkonium models and in testing QCD calculations [7].
The author gratefully acknowledges Nathan Isgur and Jon Rosner for teaching him much of what he knows about this subject. This research was supported in part by the Natural Sciences and Engineering Research Council of Canada.
1. S. W. Herb et al., Phys. Rev. Lett. 39, 252 (1977)
2. W. Kwong, J. L. Rosner, and C. Quigg, Annu. Rev. Nucl. Part. Sci. 37, 325 (1987)
3. B. Aubert et al. (BABAR), Phys. Rev. Lett. 101, 071801 (2008)
4. A. Gray, I. Allison, C. T. H. Davies, E. Gulez, G. P. Lepage, J. Shigemitsu, and M. Wingate (HPQCD and UKQCD Collaborations), Phys. Rev. D 72, 094507 (2005)
5. E. Eichten and K. Gottfried, Physics Letters B 66, 286 (1977)
6. W. Celmaster, H. Georgi, and M. Machacek, Phys. Rev. D 17, 886 (1978)
7. N. Brambilla et al. (Quarkonium Working Group), arXiv:hep-ph/0412158
8. J. L. Rosner et al. (CLEO Collaboration), Phys. Rev. Lett. 95, 102003 (2005)
9. S. Godfrey and N. Isgur, Phys. Rev. D 32, 189 (1985)
10. E. Eichten, S. Godfrey, H. Mahlke, and J. L. Rosner, arXiv:hep-ph/0701208
11. G. Bonvicini et al. (CLEO Collaboration), Phys. Rev. D 70, 032001 (2004)
12. S. Godfrey and J. L. Rosner, Phys. Rev. D 64, 074011 (2001); 65, 039901(E) (2002)
13. B. A. Kniehl, A. A. Penin, A. Pineda, V. A. Smirnov, and M. Steinhauser, Phys. Rev. Lett. 92, 242001 (2004)
14. S. Recksiegel and Y. Sumino, Phys. Lett. B 578, 369 (2004)
15. S. Godfrey and J. L. Rosner, Phys. Rev. D 66, 014012 (2002)
16. W.-M. Yao et al., J. Phys. G: Nucl. Part. Phys. 33, 1 (2006)
About the Author
Stephen Godfrey received his B.A.Sc. from the University of Toronto in 1976, his M.Sc. from the Weizmann Institute in 1978, and his Ph.D. from the University of Toronto in 1983. He was a postdoctoral fellow at TRIUMF in Vancouver and at Brookhaven National Laboratory before becoming an Assistant Professor at the University of Guelph in 1987. He moved to Carleton University in Ottawa, Canada, in 1990 where he is currently a Professor of Physics. His research is in particle physics phenomenology, ranging from hadron spectroscopy to physics beyond the standard model at the LHC.
The Reference Frame - Latest Comments
The most important events in our and your superstringy Universe as seen from a conservative physicist's viewpoint. (Feed updated Mon, 28 Jul 2014 08:01:22 -0000)

Re: The Reference Frame: Brainwashed sheep's obsession with "villain" Vladimir Putin
NATO is an attractive club for the nervous countries on the Russian perimeter. And there would be nothing wrong if every former Eastern bloc country were in it. I do not recall any "faux outrage"; if anything, the butchering of separatist Chechens together with their civilians would have deserved even more attention than it got at that time.
(maznak, Mon, 28 Jul 2014 08:01:22 -0000)

Re: The Reference Frame: Brainwashed sheep's obsession with "villain" Vladimir Putin
Well, there are many schools of thought. I subscribe to the one that believes that nervous countries on the perimeter of Russia, "the country that does not know where it ends" in the words of Václav Havel, actively seek any kind of protection available. Without the comfort of geographical separation that we Czechs now enjoy, I can imagine why they run under the NATO umbrella. And given the NATO history and doctrine, a peaceful and non-expansionist Russia has nothing to worry about.
(maznak, Mon, 28 Jul 2014 07:56:17 -0000)

Re: The Reference Frame: Universe is maths, but only some maths is relevant or true
Max, re modus ponendo ponens ("P implies Q; P is asserted to be true; therefore Q must be true"): I think it is a mistake to place such confidence in modus ponendo ponens. For example, B. Russell remarks in an essay (Chap. 16, "Non-demonstrative inference") in My Philosophical Development (Routledge, London): "I came to the conclusion that inductive arguments, unless they are confined within the limits of common sense, will lead to false conclusions much more often than to true ones." I think that this is too little appreciated by all kinds of scientists today. Perhaps you should consider Russell's arguments in that chapter.
(anonymous, Mon, 28 Jul 2014 07:19:24 -0000)

Re: The Reference Frame: What Born's rule can't be derived from
Hi Gene, I am still not seeing why relativity would not be regarded as fundamental. Historically, QM was initially non-relativistic, and it worked well enough to explain the spectrum of hydrogen at the accuracy level available at that time. Once the measurements became more accurate, physicists had to sit down again and modify QM to include relativistic corrections. This procedure was then repeated a few times until they could finally reproduce effects like the Lamb shift. On the other hand, Einstein did not derive relativity from QM as a limiting case. It was the opposite: relativity was there first, and the founders of QM were forced to incorporate its mechanisms into their equations.
Imagine spectroscopy had not been invented by the 1920s: would the Schrödinger equation have been set up nonetheless? Or QED, without the availability of precision measurements like the Lamb shift or the gyromagnetic ratio of the electron? I don't think so, because the formalisms first had to be tweaked for quite some time to bring theory into close agreement with those measurements (see also Schweber's book about "QED and the men who made it").
Einstein, however, derived his equations essentially through thought experiments, based on very basic principles. They were available prior to those precision experiments, which subsequently verified his ideas to an incredible level of accuracy (an obvious exception was the invariance of the speed of light, which was known at that time and taken by Einstein as a building block of his theory). Nobody had ever been thinking about gravity waves before Einstein presented his equations. Now, their indirect measurement through the Hulse-Taylor binary pulsar represents one of the most precise agreements between experiment and theory ever observed in physics, without any prior fine-tuning of the theory. Einstein himself could have done the corresponding calculations using his original formalism.
That is why I would regard relativity as truly fundamental.
(Holger, Mon, 28 Jul 2014 02:06:40 -0000)

Re: The Reference Frame: What Born's rule can't be derived from
Thanks very much for this articulate rebuttal. At the end of Carroll's latest post, I notice he sounds like some fervent devotee, a regular True Believer. He's all filled with parallel-universes evangelical fervor. Sigh.
(M Mahin, Sun, 27 Jul 2014 21:26:31 -0000)

Re: The Reference Frame: What Born's rule can't be derived from
ESP... sounds like apartheid to me :) Also it sounds like a good sound bite for Alan Sokal to use in his next paper for Social Context mag.
(Gordon, Sun, 27 Jul 2014 19:46:12 -0000)

Re: The Reference Frame: What Born's rule can't be derived from
Sort of similar to Dirac's "derivation" of his equation. He derived it by playing around with equations and matrices.
(Gordon, Sun, 27 Jul 2014 19:41:27 -0000)

Re: The Reference Frame: What Born's rule can't be derived from
Thanks Kashyap.
(Tom, Sun, 27 Jul 2014 17:04:00 -0000)

Re: The Reference Frame: What Born's rule can't be derived from
I agree completely, even if I am only a dinosaur physicist. It has been fifty years since I learned any new math, but it seems to me that there are only two fundamental things in physics: quantum mechanics and statistical mechanics. Both have to be inviolate, else reason itself has no meaning. I'm sure these two must be connected at some very deep level as well.
(Gene Day, Sun, 27 Jul 2014 16:18:56 -0000)

Re: The Reference Frame: What Born's rule can't be derived from
Thanks for that paper.
(Gary Ehlenberger, Sun, 27 Jul 2014 13:06:44 -0000)

Re: The Reference Frame: What Born's rule can't be derived from
Dear Holger, I am convinced it's right to say that the postulates of quantum mechanics deserve the label "fundamental principles" much more than Einstein's equations, and in this sense they are analogous to the "equivalence principle" from which the equations of GR are deduced.
But the particular simple form of Einstein's equations, just with the Einstein tensor and a stress-energy tensor, isn't fundamental or exact in any way. General equations obeying all the good principles also contain arbitrary higher-derivative terms (higher order in the Riemann tensor and its derivatives, with proper contractions), and may be coupled to many forms of matter, including extended objects and things not described by fields at all.
So the simplicity of Einstein's equations, the fact that only the Einstein tensor appears on the left-hand side, is nothing fundamental at all. It's really a consequence of approximations. At long enough distances, all the more complicated terms that are *exactly* equally justified, symmetric, and beautiful become negligible.
On the other hand, the form of Schrödinger's equation and the other universal laws of quantum mechanics is *exact* and *undeformable*, so it's much more fundamental.
Schrödinger's equation itself is just one among numerous ways, not exactly the deepest one, to describe dynamics in quantum mechanics: the equation behind the Schrödinger picture (there is also the Heisenberg picture and the Feynman approach to QM, not to mention the Dirac interaction picture and other pictures).
The wisdom inside Schrödinger's equation may perhaps be divided into several more "elementary" principles and insights. The wave function, when its evolution carries the dynamical information, evolves unitarily with time. And the generator of the unitary transformations is the Hamiltonian. These two pieces combine into Schrödinger's equation.
The unitarity of all transformations as represented in QM is a very general principle that could again be called a universal postulate, or it's derivable from other closely related principles that are the postulates. It holds for all transformations, including rotations etc., not just for the time translations generated by the Hamiltonian.
The map between the evolution in time and the Hamiltonian is really due to Emmy Noether, so the Hamiltonian's appearance in this equation in QM is due to the quantum-mechanical reincarnation of Noether's theorem. The theorem is very deep by itself, even in classical physics.
Again, I am not saying that the principles behind GR aren't deep. But Einstein's equations *are not* these principles. They're just a random product obeying some principles, and their simplicity is only due to people's laziness, not because this simplified form would be fundamentally exact. It's not. The postulates of quantum mechanics, however, *are* and have to be exact. I feel that you get these things upside down.
(Luboš Motl, Sun, 27 Jul 2014 12:38:11 -0000)

Re: The Reference Frame: Brainwashed sheep's obsession with "villain" Vladimir Putin
The US, from about 1991 until about 2008, was unarguably the world's dominant power. After the banker-created financial crash of 2008, and the $16 trillion bank bailout, the economic pieces on BigZ's grand chessboard of power began to crumble and fall, and by 2013 China had become the dominant world economy. So the US had its generation of unchallenged supremacy, but the US neoliberalcon kleptocrats drove the US ship of state aground with ill-considered, expensive wars; incessant bank bailouts, which continue to the tune of some $50 billion per month; trade policies which exported US jobs to India and China; and economic policies that funneled 95% of income gains into the pockets of the 1%. What is happening now in Ukraine (and Africa as well) is a last-ditch attempt to finish looting the world before the whole crumbling US edifice collapses. Putin is demonized because he has failed to cooperate with that grand plan.
(cynholt, Sun, 27 Jul 2014 12:23:43 -0000)

Re: The Reference Frame: What Born's rule can't be derived from
This is a little bit off topic for this blog post, but the mention of quaternions, by Dirac and you, reminded me of a paper by my colleague. He (Horia Petrache) is professionally an experimental biophysicist, but he is a very good mathematician. He likes to do such things in his spare time after he is done with biophysics and physics teaching! The paper is pedagogical. It discusses how hypercomplex numbers can be arrived at from simple group theory. He thinks that this may be known to mathematicians but may be possibly new and interesting to physicists. He would like to get comments from anyone who is interested in such stuff. His e-mail address is given in the paper.
(kashyap vasavada, Sun, 27 Jul 2014 12:01:47 -0000)

Re: The Reference Frame: Brainwashed sheep's obsession with "villain" Vladimir Putin
I'm sure you know that Winston Churchill said that the best argument against democracy is a five-minute conversation with the average voter, but that all other forms of governance are worse. It is not an ideal world.
(Gene Day, Sun, 27 Jul 2014 11:59:37 -0000)

Re: The Reference Frame: Brainwashed sheep's obsession with "villain" Vladimir Putin
The "sphere of influence" serves the vital need to increase Russia's internal cohesiveness and stability. Russia does not enjoy the huge geographical advantages of my own country, the US. Demonizing Putin can bring no good. We ought to remain neutral in Ukraine, in my view.
(Gene Day, Sun, 27 Jul 2014 11:53:14 -0000)

Re: The Reference Frame: Brainwashed sheep's obsession with "villain" Vladimir Putin
You totally miss the real reason for Putin's actions in accepting the dogma of his megalomania. I urge you to try to look at things from his point of view, on the assumption that he is a reasonable and perhaps even a kind man. Even if you do not believe this, give it a try. Putin's world is vastly different from yours in that he has to worry about internal stability in mother Russia. It has been true for centuries that Russia has needed secure, stable and friendly neighbors in order to preserve its own internal national integrity. He is not "the bad guy" but just a leader doing the best in the position he occupies. I do not envy him. Europe certainly does not need a good bloodletting; they have had enough of that, and western interference can only serve to increase that likelihood.
(Gene Day, Sun, 27 Jul 2014 11:49:12 -0000)

Re: The Reference Frame: Brainwashed sheep's obsession with "villain" Vladimir Putin
Accepting that America supported Maidan, and accepting that shooting down MH17 was a mistake, there is still only one warmonger here. Russians are taking over key positions in the "separatist" movement, as Putin moves to take by force that which he could not win any other way. No matter how much justification he may feel, there is only one way to see what is happening: Russia is taking territory from neighboring countries by use of force. The fig leaf of a native separatist movement has been pushed aside in recent days. Maybe Europe needs a good bloodletting, a new war to focus their attention, but I fear that any attempt to soft-pedal what Mr. Putin is doing will be seen as apology for a coming mass murder, and, no matter how lofty or justified his goals, he is the bad guy now.
(Michael Gersh, Sun, 27 Jul 2014 10:30:26 -0000)

Re: The Reference Frame: What Born's rule can't be derived from
A historical aside: some time ago I had a look at Born's original paper "Quantenmechanik der Stoßvorgänge", which is in German. I realized that, interestingly, he first got "his" rule wrong: he considered Φ instead of |Φ|^2. But then he corrected it by adding a footnote: "Anmerkung bei der Korrektur: Genauere Überlegung zeigt, daß die Wahrscheinlichkeit dem Quadrat der Größe Φ proportional ist." [Note added in proof: more careful consideration shows that the probability is proportional to the square of the quantity Φ.]
(MarkusM, Sun, 27 Jul 2014 09:05:46 -0000)

Re: The Reference Frame: What Born's rule can't be derived from
Not sure if I am on the right track here, but is it not possible to derive Einstein's field equations from a small set of fundamental assumptions (like the equivalence principle and the invariance of the speed of light in vacuum)? Obviously not its source term (the energy-momentum tensor), which needs input from the physical setup of the system, but pretty much all the rest of the equations is derived from something more fundamental. Einstein set up his equations, then he extracted predictions like gravity waves and light deflection, which were subsequently found in corresponding measurements.
That would be different from the Schrödinger equation, which had been designed to yield numbers in agreement with existing spectra. In this sense: is not the Schrödinger equation a rather empirical product, while Einstein's equations are from first principles?
(Holger, Sun, 27 Jul 2014 08:59:38 -0000)

Re: The Reference Frame: Brainwashed sheep's obsession with "villain" Vladimir Putin
Luboš M., I'm fully with you. But what I learned from the commentators is that all logic is not enough if somebody grew up intolerant.
(HolgerF, Sun, 27 Jul 2014 08:08:06 -0000)

Re: The Reference Frame: Brainwashed sheep's obsession with "villain" Vladimir Putin
You don't have to be the Russian President, because he's clearly following your line of thinking: he's bringing Russia down with the idea of preserving its "greatness", a task worthy of a true ayatollah. If he were truly peaceful, reasonable and even smart, there would be no rockets in the separatists' hands, only trade agreements. But things are not simple, right? One has to consider the mass stupidity and ignorance of the population, a.k.a. national interests, and use it to some advantage. There are also clever people dreaming of war glory, invasions and empires (which all fell due to economic reasons; the last colonial country in Europe, Portugal, is one of the poorest states now). The good leader will try to find the balance, while the bad one will run for popularity. Russia would have invaded Ukraine if only it were capable of doing so, and obviously it is not (otherwise it would be a fact), contrary to the sport-level euphoria in some people.
Well, anyone is free to choose a side to defend, and I guess someone has to take care of the Third World's interests. As the proud son of Mother Russia Sergey Brin put it: Russia is Nigeria with snow. Of course, he was not correct. In the Index of Economic Freedom (a very informative indicator) Nigeria is higher than Russia, which shares its place with Burundi. So if we hear from a source that the world has arrived at a time when a backward state is loaded with the mission of saving the world from the leading, developed nations, maybe we have to check the credibility of the source. Putin makes all efforts to keep Russia at the level of its well-deserved fame. He is doing exactly the opposite of what he actually tries to achieve: he is bringing NATO to his borders. That's the only result of his actions, besides the increasing financial isolation, another side effect of his cop-turned-thug level of thinking.
Nobody would benefit from severed relationships with Russia, it must be noted. But this statement alone serves the Russian madness. Russia is the one which will lose the most from any form of isolation. Let's look at some numbers from the period 2011-2013. The Russian export to the EU is twice the size of the EU export, about 230 billion. Fifty-five percent of all Russian exports go to the EU. The European investments in Russia constitute about 80% of all investments (almost 190 billion), while Russian investments are 76 billion and represent a negligible part of all investments in the EU. Europe imports 45% of its gas and 33% of its oil from Russia, while Russia exports 88% of its oil and 70% of its gas to Europe, and so on. Let's have the data in mind when we praise the wild lands to the East, lest they enlighten our non-brainwashed minds with their darkness.
(mr. critic, Sun, 27 Jul 2014 07:24:37 -0000)

Re: The Reference Frame: Brainwashed sheep's obsession with "villain" Vladimir Putin
As a Canadian I'm a bit sensitive to U.S. extraterritoriality. I would have thought the Europeans would have had bigger ballz though. With a little foresight I think the French could have owned a U.S. bank or two. Simply hedge against the Euro big time and then destroy its value (i.e. trade war, piss the Germans off). From this position of strength they could have sued the U.S. for economic peace. Perhaps have the dollar cleared in a DMZ. Too clever?
(AJ, Sun, 27 Jul 2014 06:47:45 -0000)

Re: The Reference Frame: Brainwashed sheep's obsession with "villain" Vladimir Putin
Who is brainwashed? Go to Donbass, open your eyes. I have much family there. My father was a teacher of mathematical physics all his life, then retired and had a shop in his home village. Now the shop has been robbed by separatists and he is called a Jew, even though he is not a Jew. All Jewish properties will be nationalised, they say. Poroshenko is a Jew, they write it on the internet, on posters, they say it at gatherings. American Jews, European Jews, banker Jews, Ukrainian Jews want to destroy Ukraine: the separatists live in paranoia. Jews everywhere, but there are really almost no Jews in Ukraine (really very few), only in the minds of the separatists. Jews pay for Nazis, they say; they scare people with the Azov Battalion, but if I were a Jew I would be more scared of the separatists who want to take everything from businessmen; they fight capitalism, not the Azov Battalion that nobody ever saw. They live in a fantasy world: all enemies are controlled by US Jews, Poland has military bases in Odessa, Sweden has sent special soldiers, the CIA controls the Ukrainian army, Ukraine shot down the Malaysian aeroplane to kill Putin, the capitalists fight against Russia, the last bastion of freedom from Jewish bankers; they pin hammers and sickles to Saint George ribbons. This is all insane. Everyone is fighting against them: Americans, Poles, Lithuanians, Latvians, Estonians, Swedes, Canadians. But for them it's OK that they have Serbians, Bulgarians or even Chechens. They are supported by Nazi skinheads from Russia, Poland, Hungary. Nazis proud to fight Nazis. Deep, deep, deep paranoia. Come to Donetsk. I was there already. See with your own eyes and then respond how you can support all of this mindless amok with no sense. Go, buy a ticket; I want to see you there. I want to know if, after talking with these people, your mind and intelligence won't feel offended by their bullshit and propaganda. Did you see Russian national TV? Are you not offended by this? They treat people like morons and you applaud them? It is so easy to write all those things if you were not there, so buy a ticket and please go! Or come to Russia if you are afraid of coming to Donetsk. Come to Moscow. See what people say, like that Poland, Hungary and Romania are annexing west Ukraine; so many people believe this sick bullshit. Come see this, and then say if you are not offended.
(whitemoskito, Sun, 27 Jul 2014 06:39:21 -0000)

Re: The Reference Frame: Brainwashed sheep's obsession with "villain" Vladimir Putin
Yes, Obama was elected, Gene, and so were Putin and most others.
Accidentally, I just got this video [link] where Obama says that individuals are too-small insects who have to surrender all their rights in the name of the New World Order. Is that video genuine? I can't believe it. I've been listening to it 10 times, trying to see some discontinuity proving it's fake, but so far I have failed!
Thank God, it was cut and taken out of context. The full version is here [link], and the "ordinary men and women" comment is just an "alternative vision" that Obama doesn't necessarily endorse.
(Luboš Motl, Sun, 27 Jul 2014 05:44:19 -0000)

Re: The Reference Frame: What Born's rule can't be derived from
LOL, right. Now people, often without such affiliations, are bombarded by letters, preprints, media articles, and blog posts by cranks who often sit on faculties.
(Luboš Motl, Sun, 27 Jul 2014 02:29:40 -0000)
Quantum probability
From Wikipedia, the free encyclopedia
Quantum probability was developed in the 1980s as a noncommutative analog of the Kolmogorovian theory of stochastic processes.[1][2][3][4][5] One of its aims is to clarify the mathematical foundations of quantum theory and its statistical interpretation.[6][7]
A significant recent application to physics is the dynamical solution of the quantum measurement problem,[8][9] by giving constructive models of quantum observation processes which resolve many famous paradoxes of quantum mechanics.
Some recent advances are based on quantum stochastic filtering[10] and feedback control theory as applications of quantum stochastic calculus.
Orthodox quantum mechanics
Orthodox quantum mechanics has two seemingly contradictory mathematical descriptions:
1. deterministic unitary time evolution (governed by the Schrödinger equation) and
2. stochastic (random) wavefunction collapse.
Most physicists are not concerned with this apparent problem. Physical intuition usually provides the answer, and only in unphysical systems (e.g., Schrödinger's cat, an isolated atom) do paradoxes seem to occur.
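The two seemingly contradictory descriptions can be placed side by side in a few lines of linear algebra. A toy single-qubit sketch, not from the source: the Hamiltonian H = σx, the evolution time, and the computational measurement basis are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Description 1: deterministic unitary evolution |psi> -> U|psi>,
# with U = exp(-iHt) built from the spectral decomposition of H.
H = np.array([[0.0, 1.0], [1.0, 0.0]])          # illustrative Hamiltonian (sigma_x)
t = np.pi / 4
eigvals, eigvecs = np.linalg.eigh(H)
U = eigvecs @ np.diag(np.exp(-1j * eigvals * t)) @ eigvecs.conj().T

psi = np.array([1.0 + 0j, 0.0])                  # start in |0>
psi = U @ psi                                    # Schroedinger evolution

# Description 2: stochastic collapse on measurement in the {|0>, |1>} basis.
probs = np.abs(psi) ** 2                         # Born rule probabilities
outcome = rng.choice([0, 1], p=probs)            # random outcome
psi_collapsed = np.zeros(2, dtype=complex)
psi_collapsed[outcome] = 1.0                     # post-measurement state
print("P(0), P(1) =", probs, "observed outcome:", outcome)
```

For this choice of H and t the state evolves to an equal superposition, so both outcomes occur with probability 1/2; the collapse step is the only non-deterministic ingredient.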
Orthodox quantum mechanics can be reformulated in a quantum-probabilistic framework, where quantum filtering (see Bouten et al.[11][12] for an introduction, or Belavkin's work from the 1970s[13][14][15]) gives the natural description of the measurement process. This new framework encapsulates the standard postulates of quantum mechanics, and thus all of the science involved in the orthodox postulates.
In classical probability theory, information is summarized by the sigma-algebra F of events in a classical probability space (Ω, F, P). For example, F could be the σ-algebra σ(X) generated by a random variable X, which contains all the information on the values taken by X. We wish to describe quantum information in similar algebraic terms, in such a way as to capture the non-commutative features and the information made available in an experiment. The appropriate algebraic structure for observables, or more generally operators, is a *-algebra. A (unital) *-algebra is a complex vector space A of operators on a Hilbert space H that
• contains the identity I and
• is closed under composition (a multiplication) and adjoint (an involution *): a ∈ A implies a* ∈ A.
A state P on A is a linear functional P : A → C (where C is the field of complex numbers) such that 0 ≤ P(a*a) for all a ∈ A (positivity) and P(I) = 1 (normalization). A projection is an element p ∈ A such that p² = p = p*.
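In finite dimensions these axioms are easy to check concretely. A sketch, not from the source: take A to be all 2×2 complex matrices acting on H = C², and induce a state from an example density matrix ρ via P(a) = tr(ρa); both ρ and the sample elements are illustrative choices.

```python
import numpy as np

# A = all 2x2 complex matrices; the state P(a) = tr(rho a) for a density matrix rho.
rho = np.array([[0.75, 0.0], [0.0, 0.25]])      # example density matrix

def P(a):
    """State on the *-algebra of 2x2 matrices induced by rho."""
    return np.trace(rho @ a)

I = np.eye(2)
assert np.isclose(P(I), 1.0)                    # normalization: P(I) = 1

a = np.array([[1.0, 2.0], [0.0, 1.0j]])         # an arbitrary element of A
assert P(a.conj().T @ a).real >= 0              # positivity: P(a* a) >= 0

p = np.array([[1.0, 0.0], [0.0, 0.0]])          # a projection: p^2 = p = p*
assert np.allclose(p @ p, p) and np.allclose(p, p.conj().T)
print("probability of the event p:", P(p).real)
```

With this ρ the event p ("spin up along z", say) has probability 0.75, which anticipates the interpretation of projections as events given below.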
Mathematical definition
The basic definition in quantum probability is that of a quantum probability space, sometimes also referred to as an algebraic or noncommutative probability space.
Definition : Quantum probability space.
A pair (A, P), where A is a *-algebra and P is a state, is called a quantum probability space.
This definition is a generalization of the definition of a probability space in Kolmogorovian probability theory, in the sense that every (classical) probability space gives rise to a quantum probability space if A is chosen as the *-algebra of bounded complex-valued measurable functions on it.
The projections p ∈ A are the events in A, and P(p) gives the probability of the event p.
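The classical-to-quantum embedding mentioned above can be illustrated with diagonal matrices standing in for bounded functions on a finite sample space. A sketch under illustrative assumptions (the three-point sample space and the weights are arbitrary examples):

```python
import numpy as np

# Classical case as a commutative quantum probability space: functions on a
# finite Omega = {0, 1, 2} become diagonal operators, and the classical
# probability vector becomes a diagonal density matrix.
weights = np.array([0.2, 0.5, 0.3])             # classical probabilities on Omega
rho = np.diag(weights)

def P(a):
    return np.trace(rho @ a).real

f = np.diag([1.0, 0.0, 1.0])                    # indicator (projection) of the event {0, 2}
g = np.diag([2.0, -1.0, 0.5])                   # an arbitrary bounded function on Omega

# Diagonal operators commute, which is exactly the Kolmogorovian situation:
assert np.allclose(f @ g, g @ f)
print("P(event {0, 2}) =", P(f))
```

The probability of the event {0, 2} comes out as 0.2 + 0.3 = 0.5, matching the classical measure; non-commutativity only appears once off-diagonal operators are admitted.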
1. ^ L. Accardi, A. Frigerio, and J.T. Lewis (1982). "Quantum stochastic processes". Publ. Res. Inst. Math. Sci. 18 (1): 97–133. doi:10.2977/prims/1195184017.
2. ^ R.L. Hudson and K.R. Parthasarathy (1984). "Quantum Ito's formula and stochastic evolutions". Comm. Math. Phys. 93 (3): 301–323. Bibcode:1984CMaPh..93..301H. doi:10.1007/BF01258530.
3. ^ K.R. Parthasarathy (1992). An introduction to quantum stochastic calculus. Monographs in Mathematics 85. Basel: Birkhäuser Verlag.
4. ^ D. Voiculescu, K. Dykema, A. Nica (1992). Free random variables. A noncommutative probability approach to free products with applications to random matrices, operator algebras and harmonic analysis on free groups. CRM Monograph Series 1. Providence, RI: American Mathematical Society.
5. ^ P.-A. Meyer (1993). "Quantum probability for probabilists". Lecture Notes in Mathematics (Berlin: Springer-Verlag) 1538.
6. ^ John von Neumann (1929). "Allgemeine Eigenwerttheorie Hermitescher Funktionaloperatoren". Mathematische Annalen 102: 49–131. doi:10.1007/BF01782338.
7. ^ John von Neumann (1932). Mathematische Grundlagen der Quantenmechanik. Die Grundlehren der Mathematischen Wissenschaften, Band 38. Berlin: Springer.
8. ^ V. P. Belavkin (1995). "A Dynamical Theory of Quantum Measurement and Spontaneous Localization". Russian Journal of Mathematical Physics 3 (1): 3–24. arXiv:math-ph/0512069. Bibcode:2005math.ph..12069B.
9. ^ V. P. Belavkin (2000). "Dynamical Solution to the Quantum Measurement Problem, Causality, and Paradoxes of the Quantum Century". Open Systems and Information Dynamics 7 (2): 101–129. arXiv:quant-ph/0512187. doi:10.1023/A:1009663822827.
10. ^ V. P. Belavkin (1999). "Measurement, filtering and control in quantum open dynamical systems". Reports on Mathematical Physics 43 (3): A405–A425. arXiv:quant-ph/0208108. Bibcode:1999RpMP...43A.405B. doi:10.1016/S0034-4877(00)86386-7.
11. ^ Luc Bouten, Ramon van Handel, Matthew James (2007). "An introduction to quantum filtering". SIAM J. Control Optim. 46 (6): 2199–2241. arXiv:math/0601741v1. doi:10.1137/060651239.
12. ^ Luc Bouten, Ramon van Handel, Matthew R. James (2009). "A discrete invitation to quantum filtering and feedback control". SIAM Review 51 (2): 239–316. arXiv:math/0606118v4. Bibcode:2009SIAMR..51..239B. doi:10.1137/060671504.
13. ^ V. P. Belavkin (1972/1974). "Optimal linear randomized filtration of quantum boson signals". Problems of Control and Information Theory 3 (1): 47–62.
14. ^ V. P. Belavkin (1975). "Optimal multiple quantum statistical hypothesis testing". Stochastics (Gordon & Breach Sci. Pub) 1: 315–345. doi:10.1080/17442507508833114.
15. ^ V. P. Belavkin (1978). "Optimal quantum filtration of Markovian signals [In Russian]". Problems of Control and Information Theory 7 (5): 345–360.
36e8d082a11c6635 | Psychology Wiki
Quantum brain dynamics
In neuroscience, quantum brain dynamics (QBD) is a hypothesis to explain the function of the brain within the framework of quantum field theory. Although there are many gaps in our understanding of brain dynamics, and especially of how the brain gives rise to conscious experience, quantum mechanics is considered by only some to be capable of explaining the enigma of consciousness. There is currently no experimental verification of this hypothesis; QBD is thus classified as protoscience.
Mari Jibu and Kunio Yasue (1995) were the first researchers to try to popularize the quantum field theory of Nambu-Goldstone bosons as the one and only reliable quantum theory of fundamental macroscopic dynamics realized in the brain, with which a deeper understanding of consciousness could be obtained. The hypothesis originated with Ricciardi and Umezawa (1967) in the general framework of the spontaneous symmetry breaking formalism, and has since been developed into a quantum field theoretical framework of brain functioning called quantum brain dynamics (Jibu and Yasue, 1995) and one of general biological cell functioning called quantum biodynamics (Del Giudice et al., 1986; 1988). There, Umezawa proposed a general theory of quanta of long-range coherent waves within and between brain cells, and showed a possible mechanism of memory storage and retrieval in terms of the Nambu-Goldstone bosons characteristic of the spontaneous symmetry breaking formalism.
References
• Conte E, Todarello O, Federici A, Vitiello F, Lopane M, Khrennikov A, Zbilut JP (2007). Some remarks on an experiment suggesting quantum-like behavior of cognitive entities and formulation of an abstract quantum mechanical formalism to describe cognitive entity and its dynamics. Chaos, Solitons and Fractals 31: 1076-1088.
• Del Giudice E, Doglia S, Milani M, Vitiello G (1986). Electromagnetic field and spontaneous symmetry breaking in biological matter. Nucl. Phys. B 275: 185-199.
• Del Giudice E, Preparata G, Vitiello G (1988). Water as a free electric dipole laser. Physical Review Letters 61: 1085-1088.
• Georgiev DD, Glazebrook JF (2006). Dissipationless waves for information transfer in neurobiology - some implications. Informatica 30: 221-232.
• Jibu M, Yasue K (1995). Quantum Brain Dynamics: An Introduction. John Benjamins, Amsterdam.
• Jibu M, Yasue K (1997). What is mind? Quantum field theory of evanescent photons in brain as quantum theory of consciousness. Informatica 21: 471-490.
• Ricciardi LM, Umezawa H (1967). Brain and physics of many-body problems. Kybernetik 4: 44-48.
• Weiss V, Weiss H (2003). The golden mean as clock cycle of brain waves. Chaos, Solitons and Fractals 18: 643-652.
Sunday, November 23, 2014
Skepticism and Science
Framing a context for the value of content.
Being a skeptic is, for a scientist, a core state. The value of skepticism is rooted in science's need to ask questions, and in keeping in mind that whatever model we now have to explain a phenomenon is only temporary: it can, and most likely will, change in the future. The interconnectedness between a phenomenon and its surroundings does not allow the invention of models to be separated from the anthropomorphic view of the person creating the model. It is therefore necessary to look at the context of the people developing these ideas. Culture in general, and language in particular, restrict and guide the construction of hypotheses and theories.
Science education is more than teaching a set of rules given by theories, or the transmission of content boxed in a set of models. Science education has to develop connections with previous experiences in our society. These connections allow students to see how ideas, hypotheses, and theories were developed and how they apply to our lives. As an example, when teaching how the periodic table of the elements works, I made a connection with my previous research on the rare earths (the lanthanides) and the noble gases (the inert gases). Teaching not only the names of these elements but the story behind their nomenclature and behavior allowed students to get a feeling of discovery and a sense of awe at God's creation. Knowing becomes an integral part of an individual's relationship with his or her own history and environment.
What is necessary to know about the students when teaching science?
These students have gone through the traumatic experience of 'directed' education, where 'educators' have induced indoctrinated thinking void of 'critical thinking', which in the context of this writing means scientific skepticism. This scientific skepticism is much needed in today's society.
In his book "Think: Why You Should Question Everything", Guy P. Harrison warns about the lack of critical thinking in our society and teaches us that thinking like a scientist is the only way to avoid being swindled by crooks, kooks, and demagogues selling all sorts of silly and wrong ideas, including commercial products that are harmful to us and to our environment. Being critical thinkers is a matter of personal security and wellbeing.
The need to develop critical thinking, i.e. skepticism, in my students is what drives me to be critical and skeptical, and to teach with a sense of awe and a feeling of discovery at every step, even when the topic at hand seems old and fully developed, like the periodic table. We know that the periodic table as it is normally presented is not at all perfect; even though it is highly useful, it needs some explanation and adaptation. At the same time, students need to know that new ways of presenting the 'periodicity' of the elements (in some cases still by means of a 'table') are currently being developed.
The question now becomes: how can the context of an idea be used to reflect on the value and accuracy of the model it proposes?
Sunday, November 9, 2014
Difficult Concepts in Science
Learning scientific concepts has an inherent difficulty that arises from the fact that they are expressed in common-language terminology but with a specific meaning. For example, the word 'difference', which a dictionary would define as "not equal", refers in mathematics specifically to a quantitative value A - B, "the result of arithmetic subtraction" (Mac's dictionary). Chemistry in particular uses symbolism to express these differences: a capital Greek letter Δ (delta) for major differences, like the difference in temperature between two physical states, and a lowercase δ (delta) for minor or slight differences, like those encountered in electromagnetic polarities within the atom. These major differences are of extreme importance when looking at energy changes during physical and chemical reactions, and they can be expressed as differences in enthalpy, entropy, volume, or any other variable of state that depends only on the values at the end and beginning of the process, not on the path the change followed from initial to final state. Of course, we can also apply the idea of a big difference to non-conservative phenomena that do depend on the path followed, such as friction-generated loss of energy during a process.
In discussing these phenomena it becomes critical to keep in mind the definitions of all variables and parameters in the process, and this is what makes these concepts difficult to understand.
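The path independence of a state variable can be made concrete in a few lines of Python. This is a minimal sketch; the temperatures are illustrative values, not numbers from the text:

```python
import math

# A state variable's change depends only on the endpoints, not the path.
t_initial, t_final = 298.15, 373.15  # kelvin; illustrative values only

# Path A: heat directly from the initial to the final state.
delta_direct = t_final - t_initial

# Path B: overshoot to 400 K, then cool back down to the same final state.
delta_detour = (400.0 - t_initial) + (t_final - 400.0)

print(math.isclose(delta_direct, delta_detour))  # True: the big-Δ is path-independent
```

A path-dependent quantity like frictional heat would not pass this check, which is exactly the distinction the big-Δ notation relies on.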
So, I think, I have to start with the definition of definition!
From my Mac's Dictionary:
"definition |ˌdefəˈniSHən| noun
1. A statement of the exact meaning of a word, esp. in a dictionary.
• An exact statement or description of the nature, scope, or meaning of something: our definition of what constitutes poetry.
• The action or process of defining something.
2. The degree of distinctness in outline of an object, image, or sound, esp. of an image in a photograph or on a screen.
• The capacity of an instrument or device for making images distinct in outline: [in combination] high-definition television.
PHRASES: by definition: by its very nature; intrinsically: underachievement, by definition, is not due to lack of talent."
A definition is a statement of the meaning of a term (a word, phrase, or other set of symbols). The term to be defined is the definiendum. The term may have many different senses and multiple meanings. For each meaning, a definiens is a cluster of words that defines that term (and clarifies the speaker's intention).
A definition will vary in aspects like precision or popularity. There are also different types of definitions with different purposes and focuses (e.g. intensional, extensional, descriptive, stipulative, and so on).
A chief difficulty in the management of definitions is the necessity of using other terms that are already understood or whose definitions are easily obtainable or demonstrable (e.g. a need, sometimes, for ostensive definitions).
A dictionary definition typically contains additional details about a word, such as an etymology, the language or languages of its origin, or obsolete meanings.
As a noun, definition is a statement of the exact meaning of a word: exact in the sense of providing a meaning that is not only accurate but precise, so one can use it repeatedly within different contexts. But sense 2 above provides a degree of distinctness characterized by its relationship to the topic. Within a metaphor, the words "atomic view" and "microscopic view" can be interchanged without changing their intent, while in the description of an item, an atom and a microscope are completely different.
With this in mind, let's return to the idea of the 'atom' for an initial analysis of what constitutes a difficult concept in science. The last sentence in our definition of definition states that additional details about etymology should be given. Atom means "without parts" in Greek, so we might infer it is the smallest part of the world; but we now know that the atom has parts (protons, neutrons, electrons), which are themselves made of smaller subatomic components such as muons, mesons, quarks, and bosons, with a variable set of colors and flavors, as you can find out in Wikipedia.
So the question of understanding what an atom is becomes inherently complicated, and a simple explanation of what an atom is becomes elusive. One can of course simplify with models or analogies, but it must be understood that the simplification will undoubtedly produce inaccuracies and misinterpretations that, if magnified, can lead to critical errors of understanding. One example is the lack of understanding many people have regarding the significance of an 'orbital' as a mathematical description of the probable localization of the electron around the nucleus within the atom: an electron that is modeled as a small particle (a dot in drawings) but mathematically is represented by a wave, or probability function, as stated by the Schrödinger equation.
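The 'orbital as probability' point can be illustrated with a standard textbook result (not anything from the post): in atomic units the hydrogen 1s radial probability density is proportional to r²e^(−2r), and a brute-force search finds its peak at one Bohr radius, the "most probable" distance of the electron from the nucleus.

```python
import math

def radial_probability(r):
    # Hydrogen 1s: psi is proportional to e^(-r) in atomic units, so the
    # radial probability density P(r) is proportional to r^2 * e^(-2r).
    return r**2 * math.exp(-2.0 * r)

# Brute-force search for the most probable radius on a fine grid.
grid = [i * 0.001 for i in range(1, 10001)]  # 0.001 .. 10 Bohr radii
r_peak = max(grid, key=radial_probability)
print(round(r_peak, 3))  # 1.0: the peak sits at one Bohr radius
```

The electron is not "at" that radius; the function only says where a measurement is most likely to find it, which is the distinction students tend to miss.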
As an educator I have to make sure that students understand the complexities of nature, as well as the difficulties of the concepts describing the behavior and properties of natural phenomena, while at the same time providing them with mechanisms, formulas, and procedures that will permit them to apply their skills to the solution of basic problems, even without a full understanding of the deep meaning of the phenomena.
This is the art of making difficult concepts easy to understand.
Sunday, October 12, 2014
Online Content Education
As I think about the title of this post, "Online Content Education", I become aware of the apparent contradiction, or tension, between the words content and education. Transmitting information (bits of facts and data) could be considered "content education", but is it education in the sense of a formative process? What about the need to think critically, or the ability to communicate complex ideas?
These require added context and have to be developed during the learning process.
Science teaching appears to be one of the areas where content is well defined, and measurable outcomes can be designed for specific subjects. For instance, in chemistry one can teach the periodic table and assess learning outcomes by writing questions that directly reflect whether the student understands the periodic table.
It seems like a simple task: understanding the periodic table seems like a topic that can be boxed into a simple set of questions, each with one 'right' answer, stated as multiple-choice items where all options but one are wrong. We can do that easily today in an 'online' format, expanding access and reaching students who otherwise wouldn't be able to learn.
On the other hand, if content is not the only thing, how might online instruction be detrimental to learning? In today's Oregonian I read a guest column by Ramin Farahmandpur (professor in the Department of Educational Leadership and Policy in Portland State University's Graduate School of Education) that clearly articulates how students in online classes lose the opportunities offered by classroom discussion and interaction. Prof. Farahmandpur uses the word 'shortchange' to describe the loss of learning opportunities during online instruction, and mentions that Western Governors University (a well-known private online not-for-profit institution) had the lowest graduation rates in 2012, according to a CBS MoneyWatch report.
Friday, October 10, 2014
Content and Context in Higher Ed
Science is supposed to be about content. Concepts, hypotheses, and theories are used to understand how the world works and to develop technology that is fundamental to the betterment of our society. Many would say that this last point is why science is so important, and why we as a society should support its progress. Who could be against the advances of modern medicine and engineering?
This view of science leads to the assumption that teaching science should be simply the transmission of ideas, the teaching of content, and that we can always test whether that is happening with a simple question: can the student solve such and such a problem? Questions like "what is the temperature if ...?" are the standard questions in any assessment of student knowledge.
In a way this is OK; it will allow the student to be a "problem solver". But will s/he be a "critical thinker"? I think this is not enough: if we are not critical thinkers, our ability to solve problems will also be impaired.
This week I'm teaching gas behavior in my general chemistry class. The mathematical expression that relates volume, pressure, amount, and temperature is known as the ideal gas law, PV = nRT. Working with this formula amounts to simple algebra and should not give much trouble. It looks like there is no context. So why should I talk about Robert Boyle, a fellow of the Royal Society who in the 17th century developed what is now known as Boyle's law, relating the volume and pressure of a gas; or Jacques Charles, a French aristocrat and member of the Paris Academy of Sciences who lived through the French Revolution and was probably the first to fly an unmanned balloon full of hydrogen, in 1783? Charles's law relates the temperature and volume of a gas, and even though it was Gay-Lussac who published it in 1802, Charles was given credit for his unpublished work.
It seems to me that this honesty in the scientific world has become less of a norm, I'm sad to say.
Then we have Avogadro (always concerned with the amounts of substances), who lived in the last part of the 18th century and the first half of the 19th. He, of course, saw the relationship between the amount of a gas and its volume; we now know this relationship as Avogadro's law.
In the late 1800s these laws were condensed into one, the ideal gas law PV = nRT, and water-vapor engineering was born. "Steam" energy became the driver of the second industrial revolution (1840-1870), as steam engines in trains and boats transformed transportation.
Now the question I have is: why should students learn all this history when learning how to solve problems with PV = nRT? Is the ideal gas law going to change if circumstances change? What can I learn from the fact that many minds were involved in the development of the "law"?
Are the answers to these questions self evident?
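Whatever the answers, the "simple algebra" of PV = nRT itself is quick to demonstrate. A minimal sketch; the numbers are standard illustrative values (R in L·atm units), not figures from the post:

```python
R = 0.08206  # gas constant in L*atm/(mol*K)

def ideal_gas_temperature(p_atm, v_liters, n_moles):
    """Solve PV = nRT for T."""
    return (p_atm * v_liters) / (n_moles * R)

# One mole of gas at 1 atm filling 22.4 L sits near 273 K (0 degrees C).
t = ideal_gas_temperature(1.0, 22.414, 1.0)
print(round(t))  # 273
```

Rearranging for any of the other three variables is the same one-line algebra, which is exactly why the formula alone carries so little of the story behind it.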
Wednesday, October 8, 2014
Opening Opportunities - Freedom to Flourish - A Counter System
It is a fact!
More and more students are coming out of high school ill prepared. In my previous post here I talked about an article in the Oregonian (10/7/14) that mentions the low average SAT scores of Oregon high school graduates. This is, as with any problem, an opportunity. And Warner Pacific College is stepping up to the challenge!
This is what WPC's president Dr. Andrea Cook has to say about it: "At Warner Pacific, we develop significant relationships with our students, and believe it’s an essential means of educating, challenging and serving students who might otherwise not finish their education. The reality is our educational system has been designed for advantaged people. In order to make education more fully accessible, we need to create a “counter system” that grants access to a wider population—that’s what we’re about." (Quote from the president's message on the WPC website.)
We are proud of the approach we are taking: helping students who come from underserved cultures and backgrounds succeed and flourish. WPC is opening opportunities by recognizing the need for change in higher education, embracing these challenges and turning them into opportunities.
The world is in dire need of STEM graduates in particular, and of higher education in general, and this is how we can be part of the solution. Bringing the opportunity to study science to a population that is not normally served is of great importance. It will of course create problems, as these students are not well prepared for the rigor of science curricula. But there are many things in favor of their success: their eagerness to succeed, their gumption for life, their capacity for adventure, and their freedom to flourish!
Tuesday, October 7, 2014
Unprepared Students
The Oregonian today has a front-page article about Oregon's students not having good SAT scores, and therefore not being ready for college. So what does this mean when they actually go to college? Are colleges prepared for unprepared students?
What are colleges doing to bridge the gap between what is supposed to be the preparation of these students and what in reality is?
It seems that not much, at least not much in the way of structural change. No doubt there have been many isolated attempts to address the issue, such as having 90-level classes as prerequisites for unprepared students. But these isolated, non-structural attempts to help students are not part of a widely recognized view of the need for change.
This is what I have been thinking can be done:
1. Accept that they come to college unprepared. Closing our eyes to the problem is of course not going to help, and blaming teachers for the students' poor performance will not help the students either.
2. Redefine the purpose of the first year. One objective of the redesign is to group students so that they can get the benefits of peer support and tutoring.
3. Train professors teaching the freshman class on technologies and didactics relevant to the needs of these students.
4. Redesign curricula for the college years so that some majors may finish in less than the standard four years and some in more.
5. Make college more affordable by redesigning the relationship of classroom time to the credit hour, which has been in place for decades.
Without a doubt there are other things we should do, even though right now I can't think what they are!
In science we see developments happening at a vertiginous speed, and science education cannot afford to continue without change. We have to recognize that many new teaching techniques have been developed around the ideas of "active" learning and Process Oriented Guided Inquiry Learning (POGIL), but these have stayed within the so-called 'traditional' curricula, within the traditional 'credit hour' scheme. We have to change that.
OK, we have to change that, but where do we start? How do we start? Who should start?
Sunday, September 21, 2014
Diversity and Leadership in Science
Mariette DiChristina, editor in chief of Scientific American, wrote a very insightful editorial titled 'You're Invited' in the latest issue (October 2014). In it she discusses the need for collaboration in any successful endeavor and mentions the changes in communication that she has led, including inviting bloggers and participating in international forums like the World Economic Forum in Davos, Switzerland. In the same issue, another editorial, 'Preferential Treatment', exposes the fact that 'good intentions are not enough to end racial and gender bias': the situation within science mirrors what is commonly perceived in other fields.
Then, on page 42, an article by Katherine W. Phillips (Paul Calello Professor of Leadership and Ethics and senior vice dean at Columbia Business School), "How Diversity Works", articulates how "being around people who are different from us makes us more creative, more diligent and hard-working".
The same will apply to learning science.
Learning is an individual task, but it is best accomplished in the company of others with whom one interacts intellectually. Challenging questions and time allow ideas to evolve and consolidate. In the interest of creativity and motivation, having views from different perspectives and cultures will surely be nurturing.
The question, then, is: how can we go beyond good intentions? As Phillips writes, "the first thing to acknowledge about diversity is that it can be difficult."
How can having students in a class with diverse levels of experience in the topic help everyone reach a better understanding?
Physics and Our Universe: How It All Works
Professor Richard Wolfson Ph.D.
Middlebury College
Course No. 1280
Course Overview
About This Course
60 lectures | 30 minutes per lecture
Physics is the fundamental science. It explains how the universe behaves at every scale, from the subatomic to the extragalactic. It describes the most basic objects and forces and how they interact. Its laws tell us how the planets move, where light comes from, what keeps birds aloft, why a magnet attracts and also repels, and when a falling object will hit the ground, and it gives answers to countless other questions about how the world works.
Physics also gives us extraordinary power over the world, paving the way for devices from radios to GPS satellites, from steam engines to nanomaterials. It's no exaggeration to say that every invention ever conceived makes use of the principles of physics. Moreover, physics not only underlies all of the natural sciences and engineering; its discoveries also touch on the deepest philosophical questions about the nature of reality.
Which makes physics sound like the most complicated subject there is. But it isn't. The beauty of physics is that it is simple, so simple that anyone can learn it. In 60 enthralling half-hour lectures, Physics and Our Universe: How It All Works proves that case, giving you a robust, introductory college-level course in physics. This course doesn't stint on details and always presents its subject in all of its elegance—yet it doesn't rely heavily on equations and mathematics, using nothing more advanced than high school algebra and trigonometry.
Your teacher is Professor Richard Wolfson, a noted physicist and educator at Middlebury College. Professor Wolfson is author or coauthor of a wide range of physics textbooks, including a widely used algebra-based introduction to the subject for college students. He has specially designed Physics and Our Universe to be entirely self-contained, requiring no additional resources. And for those who wish to dig deeper, he includes an extensive list of suggested readings that will enhance your understanding of basic physics.
Explore the Fundamentals of Reality
Intensively illustrated with diagrams, illustrations, animations, graphs, and other visual aids, these lectures introduce you to scores of fundamental ideas such as these:
• Newton's laws of motion: Simple to state, these three principles demolish our intuitive sense of why things move. Following where they lead gives a unified picture of motion and force that forms the basis of classical physics.
• Bernoulli effect: In fluids, an increase in speed means a decrease in pressure. This effect has wide application in aerodynamics and hydraulics. It explains why curve balls curve and why plaque in an artery can cause the artery to collapse.
• Second law of thermodynamics: Echoing the British novelist and physicist C. P. Snow, Professor Wolfson calls this law about the tendency toward disorder "like a work of Shakespeare's" in its importance to an educated person's worldview.
• Maxwell's equations: Mathematically uniting the theories of electricity and magnetism, these formulas have a startling outcome, predicting the existence of electromagnetic waves that move at the speed of light and include visible light.
• Interference and diffraction: The wave nature of light looms large when light interacts with objects comparable in size to the light's wavelength. Interference and diffraction are two intriguing phenomena that appear at these scales.
• Relativity and quantum theory: Introduced in the early 20th century, these revolutionary ideas not only patched cracks in classical mechanics but led to realms of physics never imagined, with limitless new horizons for research.
A Course of Breathtaking Scope
The above ideas illustrate the breathtaking scope of Physics and Our Universe, which is broken into six areas of physics plus an introductory section that take you from Isaac Newton's influential "clockwork universe" in the 17th century to the astonishing ideas of modern physics, which have overturned centuries-old views of space, time, and matter. The seven sections of the course are these:
• Introduction: Start the course with two lectures on the universality of physics and its special languages.
• Newtonian Mechanics: Immerse yourself in the core ideas that transformed physics into a science.
• Oscillations, Waves, Fluids: See how Newtonian mechanics explains systems involving many particles.
• Thermodynamics: Investigate heat and its connection to the all-important concept of energy.
• Electricity and Magnetism: Explore electromagnetism, the dominant force on the atomic through human scales.
• Optics: Proceed from the study of light as simple rays to phenomena involving light's wave properties.
• Beyond Classical Physics: Review the breakthroughs in physics that began with Max Planck and Albert Einstein.
As vast as this scope is, you will not be overwhelmed, because one set of ideas in physics builds on those that precede it. Professor Wolfson constantly reviews where you've been, tying together different concepts and giving you a profound sense of how one thing leads to another in physics. Since the 17th century, physics has expanded like a densely branching tree, with productive new shoots continually forming, some growing into major limbs, but all tracing back to the sturdy foundation built by Isaac Newton and others—which is why Physics and Our Universe and most other introductory physics courses have a historical focus, charting the fascinating growth of the field.
An interesting example is Newtonian mechanics. Developments in the late 19th century showed that Newton's system breaks down at very high speeds and small scales, which is why relativity and quantum theory replaced classical physics in these realms. But the Newtonian approach is still alive and well for many applications. Newtonian mechanics will get you to the moon in a spacecraft, allow you to build a dam or a skyscraper, explain the behavior of the atmosphere, and much more. On the other hand, for objects traveling close to the speed of light or events happening in the subatomic realm, you learn that relativity and quantum theory are the powerful new tools for describing how the world works.
Seeing Is Believing
Physics would not be physics without experiments, and one of the engaging aspects of this course is the many on-screen demonstrations that Professor Wolfson performs to illustrate physical principles in action. With a showman's gifts, he conducts scores of experiments, including the following:
• Whirling bucket: Why doesn't water fall out of a bucket when you whirl it in a vertical circle? It is commonly believed that there is a force holding the water up. But this is a relic of pre-Newtonian thinking dating to Aristotle. Learn to analyze what's really going on.
• Bowling ball pendulum: Would you bet the safety of your skull on the conservation of energy? Watch a volunteer release a pendulum that swings across the room and hurtles back directly at her nose, which escapes harm thanks to the laws of physics.
• Big chill: What happens when things get really cold? Professor Wolfson pours liquid nitrogen on a blown-up balloon, demonstrating dramatic changes in the volume of air in the balloon. Discover other effects produced by temperature change.
• Energy and power: How much power is ordered up from the grid whenever you turn on an electric light? Get a visceral sense by watching a volunteer crank a generator to make a light bulb glow. Try a simple exercise to experience the power demand yourself.
• Total internal reflection: How does a transparent medium such as glass act as an almost perfect mirror without a reflective coating? See a simple demonstration that reveals the principle behind rainbows, binoculars, and optical fibers.
• Relativity revelation: What gave Einstein the idea for his special theory of relativity? Move a magnet through a coil, then move a coil around a magnet. You get the same effect. But in Einstein's day there were two separate explanations, which made him think ...
Math for Those Who Want to Probe Deeper
Professor Wolfson doesn't just perform memorable experiments. He introduces basic mathematics to analyze situations in detail—for example, by calculating exactly the speed a rollercoaster needs to travel to keep passengers from falling out at the top of a loop-the-loop track, or by showing that the reason high voltage is used for electrical power transmission is revealed in the simple expression that applies Ohm's law, relating current and voltage, to the formula for power.
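That high-voltage argument can be sketched numerically. This is a hypothetical line, not a calculation from the course: combining Ohm's law with P = IV gives a resistive loss of I²R, so raising the transmission voltage tenfold cuts the current tenfold and the loss a hundredfold.

```python
def line_loss_watts(power_w, volts, resistance_ohms):
    current = power_w / volts              # I = P / V for the delivered power
    return current ** 2 * resistance_ohms  # loss = I^2 * R (Ohm's law + P = IV)

# Deliver 100 kW over a line with 1 ohm of resistance:
low_v = line_loss_watts(100_000, 1_000, 1.0)    # at 1 kV: 10,000 W lost
high_v = line_loss_watts(100_000, 10_000, 1.0)  # at 10 kV: 100 W lost
print(low_v / high_v)  # 100.0: ten times the voltage, one hundredth the loss
```

The same two-line calculation explains why grids step voltage up for transmission and back down near the consumer.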
You also see how amazing insights can be hidden in seemingly trivial mathematical details. Antimatter was first postulated when physicist Paul Dirac was faced with a square root term in an equation, and instead of throwing out one of the answers, as would normally have been done, he decided to pursue the implications of both solutions.
Whenever Professor Wolfson introduces an equation, he explains what every term in the equation means and the significance of the equation for physics. You need not go any further than this to follow his presentation, but for those who wish to probe deeper he works out solutions to many problems, showing the extraordinary reach of mathematics in analyzing nature. But he stresses that physics is not about math; it's the ideas of physics that are crucial.
Understand the World in a New Way
Above all, the ideas of physics are simple. As you discover in this course, just a handful of important concepts permeate all of physics. Among them are
• conservation of energy,
• conservation of momentum,
• second law of thermodynamics,
• conservation of electric charge,
• principle of relativity, and
• Heisenberg uncertainty principle.
The key is not just to think in terms of these principles, but also to let go of common misconceptions, such as the idea that force causes motion; in fact, force causes change in motion. As you progress through Physics and Our Universe, you'll inevitably start to see the world differently.
"I love teaching physics and I love to see the understanding light up in people's eyes," says Professor Wolfson. "You'll see common, everyday phenomena with new understanding, like slamming on the brakes of your car and hearing the antilock brake system engage and knowing the physics of why it works; like going out on a very cold day and appreciating why your breath is condensing; like turning on your computer and understanding what's going on in those circuits. You will come to a much greater appreciation of all aspects of the world around you."
60 Lectures
• 1
The Fundamental Science
Take a quick trip from the subatomic to the galactic realm as an introduction to physics, the science that explains physical reality at all scales. Professor Wolfson shows how physics is the fundamental science that underlies all the natural sciences. He also describes phenomena that are still beyond its explanatory power.
• 2
Languages of Physics
Understanding physics is as much about language as it is about mathematics. Begin by looking at how ordinary terms, such as theory and uncertainty, have a precise meaning in physics. Learn how fundamental units are defined. Then get a taste of the basic algebra that is used throughout the course.
• 3
Describing Motion
Motion is everywhere, at all scales. Learn the difference between distance and displacement, and between speed and velocity. Add to these the concept of acceleration, which is the rate of change of velocity, and you are ready to delve deeper into the fundamentals of motion.
• 4
Falling Freely
Use concepts from the previous lecture to analyze motion when an object is under constant acceleration due to gravity. In principle, the initial conditions in such cases allow the position of the object to be determined for any time in the future, which is the idea behind Isaac Newton's "clockwork universe."
• 5
It's a 3-D World!
Add the concept of vector to your physics toolbox. Vectors allow you to specify the magnitude and direction of a quantity such as velocity. The vector's direction can be along any axis, allowing analysis of motion in three dimensions. Then use vectors to solve several problems in projectile motion.
• 6
Going in Circles
Circular motion is accelerated motion, even if the speed is constant, because the direction, and hence the velocity, is changing. Analyze cases of uniform and non-uniform circular motion. Then close with a problem challenging you to pull out of a dive in a jet plane without blacking out or crashing.
• 7
Causes of Motion
For most people, the hardest part of learning physics is to stop thinking like Aristotle, who believed that force causes motion. It doesn't. Force causes change in motion. Learn how Galileo's realization of this principle, and Newton's later formulation of his three laws of motion, launched classical physics.
• 8
Using Newton's Laws—1-D motion
Investigate Newton's second law, which relates force, mass, and acceleration. Focus on gravity, which results in a force, called weight, that's proportional to an object's mass. Then take a ride in an elevator to see how your measured weight changes due to acceleration during ascent and descent.
• 9
Action and Reaction
According to Newton's third law, "for every action there is an equal and opposite reaction." Professor Wolfson has a clearer way of expressing this much-misunderstood phrase. Also, see several demonstrations of action and reaction, and learn about frictional forces through examples such as antilock brakes.
• 10
Newton's Laws in 2 and 3 Dimensions
Consider Newton's laws in cases of two and three dimensions. For example, how fast does a rollercoaster have to travel at the top of a loop to keep passengers from falling out? Is there a force pushing passengers up as the coaster reaches the top of its arc? The answer may surprise you.
• 11
Work and Energy
See how the precise definition of work leads to the concept of energy. Then explore how some forces "give back" the work done against them. These conservative forces lead to the concept of stored potential energy, which can be converted to kinetic energy. From here, develop the important idea of conservation of energy.
• 12
Using Energy Conservation
A dramatic demonstration with a bowling ball pendulum shows how conservation of energy is a principle you can depend on. Next, solve problems in complicated motion using conservation of energy as a shortcut. Close by drawing the distinction between energy and power, which are often confused.
• 13
Newton realized that the same force that makes an apple fall to the ground also keeps the moon in its orbit around Earth. Explore this force, called gravity, by focusing on circular orbits. End by analyzing why an orbiting spacecraft has to decrease its kinetic energy in order to speed up.
• 14
Systems of Particles
How do you analyze a complex system in motion? One special point in the system, called the center of mass, reduces the problem to its simplest form. Also learn how a system's momentum is unchanged unless external forces act on it. Then apply the conservation of momentum principle to analyze inelastic and elastic collisions.
• 15
Rotational Motion
Turn your attention to rotational motion. Rotational analogs of acceleration, force, and mass obey a law related to Newton's second law. This leads to the concept of angular momentum and the all-important conservation of angular momentum, which explains some surprising and seemingly counterintuitive phenomena involving rotating objects.
• 16
Keeping Still
What's the safest angle to lean a ladder against a wall to keep the ladder from slipping and falling? This is a problem in static equilibrium, which is the state in which no net force or torque (rotational force) is acting. Explore this condition and develop tools for determining whether equilibrium is stable or unstable.
• 17
Back and Forth—Oscillatory Motion
Start a new section in which you apply Newtonian mechanics to more complex motions. In this lecture, study oscillations, a universal phenomenon in systems displaced from equilibrium. A special case is simple harmonic motion, exhibited by springs, pendulums, and even molecules.
• 18
Making Waves
Investigate waves, which transport energy but not matter. When two waves coexist at the same point, they interfere, resulting in useful and surprising applications. Also examine the Doppler effect, and see what happens when an object moves through a medium faster than the wave speed in that medium.
• 19
Fluid Statics—The Tip of the Iceberg
Fluid is matter in a liquid or gaseous state. In this lecture, study the characteristics of fluids at rest. Learn why water pressure increases with depth, and air pressure decreases with height. Greater pressure with depth causes buoyancy, which applies to balloons as well as boats and icebergs.
• 20
Fluid Dynamics
Explore fluids in motion. Energy conservation requires low pressure where fluid velocity is high, and vice versa. This relation between pressure and velocity results in many practical and sometimes counterintuitive phenomena, collectively called the Bernoulli effect—explaining why baseballs curve and how airplane speedometers work.
• 21
Heat and Temperature
Beginning a new section, learn that heat is a flow of energy driven by a temperature difference. Temperature can be measured with various techniques but is most usefully quantified on the Kelvin scale. Investigate heat capacity and specific heat, and solve problems in heating a house and cooling a nuclear reactor.
• 22
Heat Transfer
Analyze heat flow, which involves three important heat-transfer mechanisms: conduction, which results from direct molecular contact; convection, involving the bulk motion of a fluid; and radiation, which transfers energy by electromagnetic waves. Study examples of heat flow in buildings and in the sun's interior.
• 23
Matter and Heat
Heat flow into a substance usually raises its temperature. But it can have other effects, including thermal expansion and changes between solid, liquid, and gaseous forms—collectively called phase changes. Investigate these phenomena, starting with an experiment in which Professor Wolfson pours liquid nitrogen onto a balloon filled with air.
• 24
The Ideal Gas
Delve into the deep link between thermodynamics, which looks at heat on the macroscopic scale, and statistical mechanics, which views it on the molecular level. Your starting point is the ideal gas law, which approximates the behavior of many gases, showing how temperature, pressure, and volume are connected by a simple formula.
• 25
Heat and Work
The first law of thermodynamics relates the internal energy of a system to the exchange of heat and mechanical work. Focus on isothermal (constant temperature) and adiabatic (no heat flow) processes, and see how they apply to diesel engines and the atmosphere.
• 26
Entropy—The Second Law of Thermodynamics
Turn to an idea that has been compared to a work of Shakespeare: the second law of thermodynamics. According to the second law, entropy, a measure of disorder, always increases in a closed system. Order can only increase at the cost of even greater entropy elsewhere in the system.
• 27
Consequences of the Second Law
The second law puts limits on the efficiency of heat engines and shows that humankind's energy use could be better planned. Learn why it makes sense to exploit low-entropy, high-quality energy for uses such as transportation, motors, and electronics, while using high-entropy random thermal energy for heating.
• 28
A Charged World
Embark on a new section of the course, devoted to electromagnetism. Begin by investigating electric charge, which is a fundamental property of matter. Coulomb's law states that the electric force depends on the product of the charges and inversely on the square of the distance between them.
• 29
The Electric Field
One of the most important ideas in physics is the field, which maps the presence and magnitude of a force at different points in space. Explore the concept of the electric field, and learn how Gauss's law describes the field lines emerging from an enclosed charge.
• 30
Electric Potential
Jolt your understanding of electric potential difference, or voltage. A volt is one joule of work or energy per coulomb of charge. Survey the characteristics of voltage—from batteries, to Van de Graaff generators, to thunderstorms, which discharge lightning across a potential difference of millions of volts.
• 31
Electric Energy
Study stored electric potential energy in fuels such as gasoline, where the molecular bonds represent an enormous amount of energy ready to be released. Also look at a ubiquitous electronic component called the capacitor, which stores an electric charge, and discover that all electric fields represent stored energy.
• 32
Electric Current
Learn the definition of the unit of electric current, called the ampere, and how Ohm's law relates the current in common conductors to the voltage across the conductor and the conductor's resistance. Apply Ohm's law to a hard-starting car, and survey tips for handling electricity safely.
• 33
Electric Circuits
All electric circuits need an energy source, such as a battery. Learn what happens inside a battery, and analyze simple circuits in series and in parallel, involving one or more resistors. When capacitors are incorporated into circuits, they store electric energy and introduce time dependence into the circuit's behavior.
• 34
In this introduction to magnetism, discover that magnetic phenomena are really about electricity, since magnetism involves moving electric charge. Learn the right-hand rule for the direction of magnetic force. Also investigate how a current-carrying wire in a magnetic field is the principle behind electric motors.
• 35
The Origin of Magnetism
No matter how many times you break a magnet apart, each piece has a north and south pole. Why? Search for the origin of magnetism and learn how magnetic field lines differ from those of an electric field, and why Earth has a magnetic field.
• 36
Electromagnetic Induction
Probe one of the most fascinating phenomena in all of physics, electromagnetic induction, which shows the direct relationship between electric and magnetic fields. In a demonstration with moving magnets, see how the relative motion of a magnet and an electric conductor induces current in the conductor.
• 37
Applications of Electromagnetic Induction
Survey some of the technologies that exploit electromagnetic induction: the electric generators that supply nearly all the world's electrical energy, transformers that step voltage up or down for different uses, airport metal detectors, microphones, electric guitars, and induction stovetops, among many other applications.
• 38
Magnetic Energy
Study the phenomenon of self-inductance in a solenoid coil, finding that the magnetic field within the coil is a repository of magnetic energy, analogous to the electric energy stored in a capacitor. Close by comparing the complementary aspects of electricity and magnetism.
• 39
Direct current (DC) is electric current that flows in one direction; alternating current (AC) flows back and forth. Learn how capacitors and inductors respond to AC by alternately storing and releasing energy. Combining a capacitor and inductor in a circuit provides the electrical analog of simple harmonic motion introduced in Lecture 17.
• 40
Electromagnetic Waves
Explore the remarkable insight of physicist James Clerk Maxwell in the 1860s that changing electric fields give rise to magnetic fields in the same way that changing magnetic fields produce electric fields. Together, these changing fields result in electromagnetic waves, one component of which is visible light.
• 41
Reflection and Refraction
Starting a new section of the course, discover that light often behaves as rays, which change direction at boundaries between materials. Investigate reflection and refraction, answering such questions as, why doesn't a dust mote block data on a CD? How do mirrors work? And why do diamonds sparkle?
• 42
See how curving a mirror or a piece of glass bends parallel light rays to a focal point, allowing formation of images. Learn how images can be enlarged or reduced, and the difference between virtual and real images. Use your knowledge of optics to solve problems in vision correction.
• 43
Wave Optics
Returning to themes from Lecture 18 on waves, discover that when light interacts with objects comparable in size to its wavelength, its wave nature becomes obvious. Examine interference and diffraction, and see how these effects open the door to certain investigations, while hindering others.
• 44
Cracks in the Classical Picture
Embark on the final section of the course, which covers the revolutionary theories that superseded classical physics. Why did classical physics need to be replaced? Discover that by the late 19th century, inexplicable cracks were beginning to appear in its explanatory power.
• 45
Earth, Ether, Light
Review the famous Michelson-Morley experiment, which was designed to detect the motion of Earth relative to a conjectured "ether wind" that supposedly pervaded all of space. The failure to detect any such motion revealed a deep-seated contradiction at the heart of physics.
• 46
Special Relativity
Discover the startling consequences of Einstein's principle of relativity—that the laws of physics are the same for all observers in uniform motion. One result is that the speed of light is the same for all observers, no matter what their relative motion—an idea that overturns the concept of simultaneity.
• 47
Time and Space
Einstein's special theory of relativity upends traditional notions of space and time. Solve the simple formulas that show the reality of time dilation and length contraction. Conclude by examining the twins paradox, discovering why one twin who travels to a star and then returns ages more slowly than the twin back on Earth.
• 48
Space-Time and Mass-Energy
In relativity theory, contrary to popular views, reality is what's not relative—that is, what doesn't depend on one's frame of reference. See how space and time constitute one such pair, merging into a four-dimensional space-time. Mass and energy similarly join, related by Einstein's famous E = mc².
• 49
General Relativity
Special relativity is limited to reference frames in uniform motion. Following Einstein, make the leap to a more general theory that encompasses accelerated frames of reference and necessarily includes gravity. According to Einstein's general theory of relativity, gravity is not a force but the geometrical structure of spacetime.
• 50
Introducing the Quantum
Begin your study of the ideas that revolutionized physics at the atomic scale: quantum theory. The word "quantum" comes from Max Planck's proposal in 1900 that the atomic vibrations that produce light must be quantized—that is, they occur only with certain discrete energies.
• 51
Atomic Quandaries
Apply what you've learned so far to work out the details of Niels Bohr's model of the atom, which patches one of the cracks in classical physics from Lecture 44. Although it explains the energies of photons emitted by simple atoms, Bohr's model has serious limitations.
• 52
Wave or Particle?
In the 1920s physicists established that light and matter display both wave- and particle-like behavior. Probe the nature of this apparent contradiction and the meaning of Werner Heisenberg's famous uncertainty principle, which introduces a fundamental indeterminacy into physics.
• 53
Quantum Mechanics
In 1926 Erwin Schrödinger developed an equation that underlies much of our modern quantum-mechanical description of physical reality. Solve a simple problem with the Schrödinger equation. Then learn how the merger of quantum mechanics and special relativity led to the discovery of antimatter.
• 54
Drawing on what you now know about quantum mechanics, analyze how atoms work, discovering that the electron is not a point particle but behaves like a probability cloud. Investigate the exclusion principle, and learn how quantum mechanics explains the periodic table of elements and the principle behind lasers.
• 55
Molecules and Solids
See how atoms join to make molecules and solids, and how this leads to the quantum effects that underlie semiconductor electronics. Also probe the behavior of matter in ultradense white dwarfs and neutron stars, and learn how a quantum-mechanical pairing of electrons at low temperatures produces superconductivity.
• 56
The Atomic Nucleus
In the first of two lectures on nuclear physics, study the atomic nucleus, which consists of positively charged protons and electrically neutral neutrons, held together by the strong nuclear force. Many combinations of protons and neutrons are unstable; such nuclei are radioactive and decay with characteristic half-lives.
• 57
Energy from the Nucleus
Investigate nuclear fission, in which a heavy, unstable nucleus breaks apart; and nuclear fusion, where light nuclei are joined. In both, the released energy is millions of times greater than the energy from chemical reactions and comes from the conversion of nuclear binding energy to kinetic energy.
• 58
The Particle Zoo
By 1960 a myriad of seemingly elementary particles had been discovered. Survey the standard model that restored order to this subatomic chaos, describing a universe whose fundamental particles include six quarks; the electron and two heavier cousins; elusive neutrinos; and force-carrying particles such as the photon.
• 59
An Evolving Universe
Trace the discoveries that led astronomers to conclude that the universe began some 14 billion years ago in a big bang. Detailed measurements of the cosmic microwave background and other observations point to an initial period of tremendous inflation, followed by slow expansion and an as-yet inexplicable accelerating phase.
• 60
Humble Physics—What We Don't Know
Having covered the remarkable discoveries in physics, turn to the great gap in our current knowledge, namely the nature of the dark matter and dark energy that constitute more than 95% of the universe. Close with a look at other mysteries that physicists are now working to solve.
Your professor
Richard Wolfson, Ph.D.
Middlebury College
Professor Wolfson's published work encompasses diverse fields such as medical physics, plasma physics, solar energy engineering, electronic circuit design, observational astronomy, theoretical astrophysics, nuclear issues, and climate change. His current research involves the eruptive behavior of the sun's outer atmosphere, or corona, as well as terrestrial climate change and the sun–Earth connection.
Professor Wolfson is the author of several books, including the college textbooks Physics for Scientists and Engineers; Essential University Physics; and Energy, Environment, and Climate. He is also an interpreter of science for the nonspecialist, a contributor to Scientific American, and author of the books Nuclear Choices: A Citizen's Guide to Nuclear Technology and Simply Einstein: Relativity Demystified.
Rated 4.1 out of 5 by 29 reviewers.
Rated 5 out of 5 by My math potbelly Dr. Wolfson’s PHYSICS AND OUR UNIVERSE: HOW IT ALL WORKS is an extremely comprehensive overview of physics. He goes as far as one can go using only high school algebra and trigonometry. 60 lectures is nevertheless a heavy time commitment. It is designed, I think, for three basic constituencies: 1) Young adults thinking of a career in physics. At that level, it gives a marvellous bird’s-eye view without forcing them to slog through the minutia of every sub-discipline (mechanics, thermodynamics, optics, etc.) using calculus. That will come soon enough if they stick with it. What is crucial at the beginning is to get a feeling for the whole, its trends, heroes and challenges. Is this fun for them or not? 2) Adults with a science or engineering background who wish to enjoy a good overview of the subject with some of the latest developments. They are likely to nod with pleasure during the whole thing. 3) The rest of us (ROU) with liberal arts backgrounds who may have enjoyed other TTC science courses and sadomasochistically WANT MORE. Let me review these audiences in more detail: 1) My sons are too old to consider physics as a possible career. Still, this lecture series is organized very visually to make each lesson as intuitive as possible. There are plenty of concrete experiments using simple objects, as well as a large flat-screen TV to explain each equation and calculation. Finally, almost every lesson ends with a problem and its solution for the mathematically inclined. 2) Not much to say here. The engineers know their pleasures. The cool flat screen is a clever way to introduce a lot of written material that could have been placed directly on our screen. But then, the reaction would likely be “What? I paid for a DVD and got a PowerPoint presentation?” Watch instead a man standing next to a large flat screen and that’s OK. More eye candy. 3) This is where I stand; very painful. 
I enjoyed Filippenko’s UNDERSTANDING THE UNIVERSE, Wysession’s HOW THE EARTH WORKS and Schumacher’s QUANTUM MECHANICS. I even lapped up the more philosophical offerings such as Goldman’s GREAT SCIENTIFIC IDEAS THAT CHANGED THE WORLD and Grabiner’s MATHEMATICS, PHILOSOPHY AND THE REAL WORLD. All great courses. So I should be OK, eh? I sat through the complete lecture series (2/day) to get a quick sense of the whole. Very, very impressed. Wolfson moves quickly and concisely. Nooo fat here. But the new concepts pile on rapidly, and since they are closely interrelated across the span of 60 lessons, you are quickly reduced to a wide-eyed, Homer Simpson-like stupor — I even nodded off before my significant other at one mortifying — if you don’t feverishly review the Course Guidebook and/or Wikipedia after each course. A single sitting is obviously only the first step in a long journey for the likes of me. For some of the ROU, in other words, this is obviously a hugely ASPIRATIONAL PRODUCT. We buy it for the person we wish to become, not the sad excuse of a human being that greets us in the mirror every morning. To sum up, this is a very, very good and complete course. But it is also very, very challenging if your math pecs reflect the deficiencies of a liberal arts education combined with middle-aged forgetfulness. If the existing TTC science courses satisfy your needs, this one might be too much of a good thing. If they leave you hungering for more and you love math, I strongly recommend Wolfson's course. November 22, 2011
Rated 5 out of 5 by TTC and Dr. Wolfson have done it again. Professor Wolfson and The Teaching Company have collaborated to produce another excellent science course. This non-Calculus based college-level introductory course in physics is much more than a survey course because of its depth and its use of enough algebra and trigonometry to let you not only understand the ideas presented but to actually solve problems involving these concepts. The list of topics covered and the pace are almost breathtaking but will leave you feeling quite knowledgeable as well as satisfied. After finishing this course you will understand just how important the study of physics is because of the myriad of situations in which its concepts are essential to the advancement of mankind and why it is considered the fundamental science. October 15, 2011
Rated 5 out of 5 by Another Stepping Stone for TTC This course has many novel features including a giant screen TV, multiple videos, experiments almost every lectures and much deeper mathematical explanations of physical phenomenons. All of these combined with the already outstanding quality the teaching company is notorious for, serves not only to enhance the learning experience, but also to entertain and keep our interest peaked. I am amazed how much knowledge I had missed my entire life before watching this course. If you have not been versed already in the teaching company's collection, and you are interested in physics, I highly recommend this course. I had watched Dr Wolfson's first course on relativity many years ago and listened to his course on how the earth's climate behave, and I enjoyed both. My mind was literally changed by the course on relativity, no less. This course does contain a lot of what both other courses have to offer, but better. For those who have seen other TTC course on physics, I think you will really enjoy this one. A lot of new material, and the topics you may have seen before is covered more deeply and with an original breakdown of the different component. The course is also faster and more densely packed. It kept me coming back. Out of all the different courses from TTC I have watched, this is the most fun, deep, and immersive. Thank you TTC, and thanks to you Dr. Wolfson! October 6, 2011
Rated 2 out of 5 by ambiguous title & what the course does not deliver With the title "Physics and Our Universe: How It All Works" and the course description, I was expecting "it" to refer to how the universe works. However, what Professor Wolfson consistently delivers instead is how the physics describing the universe works. That is, the emphasis is on how physics goes about measuring and calculating different aspects of the universe, not on exploring the nature of the universe itself. What Wolfson does, he does admirably; but, this is not what I was expecting or interested in. December 6, 2014
531fbe245b4618c6 | Symmetry, Integrability and Geometry: Methods and Applications (SIGMA)
SIGMA 4 (2008), 014, 7 pages arXiv:0802.0482
Symmetry Transformation in Extended Phase Space: the Harmonic Oscillator in the Husimi Representation
Samira Bahrami a and Sadolah Nasiri b
a) Department of Physics, Zanjan University, Zanjan, Iran
b) Institute for Advanced Studies in Basic Sciences, Iran
Received October 08, 2007, in final form January 23, 2008; Published online February 04, 2008
In a previous work the concept of quantum potential was generalized to extended phase space (EPS) for a particle in linear and harmonic potentials. It was shown there that, in contrast to Schrödinger quantum mechanics, an appropriate extended canonical transformation yields the Wigner representation of phase-space quantum mechanics, in which the quantum potential is removed from the dynamical equation. In other words, the ordinary Hamilton-Jacobi equation remains form-invariant in this representation. Mathematically, the situation is similar to the disappearance of the centrifugal potential in going from spherical to Cartesian coordinates. Here we show that the Husimi representation is another representation in which the quantum potential for the harmonic potential disappears and the modified Hamilton-Jacobi equation reduces to the familiar classical form. This happens when the parameter in the Husimi transformation assumes the specific value corresponding to the Q-function.
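For context, the quantum potential at issue is, in its standard Bohmian form (the paper works with an extended-phase-space generalization of this), the term Q that appears alongside the classical potential when the polar decomposition \(\psi = \sqrt{\rho}\, e^{iS/\hbar}\) is inserted into the Schrödinger equation:

```latex
Q(\mathbf{x},t) = -\frac{\hbar^{2}}{2m}\,
  \frac{\nabla^{2}\sqrt{\rho(\mathbf{x},t)}}{\sqrt{\rho(\mathbf{x},t)}},
\qquad
\frac{\partial S}{\partial t}
  + \frac{\left(\nabla S\right)^{2}}{2m}
  + V(\mathbf{x}) + Q(\mathbf{x},t) = 0 .
```

A representation in which Q vanishes is thus one in which the second equation is exactly the classical Hamilton-Jacobi equation.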
Key words: Hamilton-Jacobi equation; quantum potential; Husimi function; extended phase space.
|
2c0a653105b86004 | March 20, 2006
Computational Chemistry advance: 100 times more accurate
James Sims of NIST and Stanley Hagstrom of IU announced a new high-precision calculation of the energy required to pull apart the two atoms in a hydrogen molecule (H2). Accurate to 1 part in 100 billion, these are the most accurate energy values ever obtained for a molecule of that size, 100 times better than the best previous calculated value or the best experimental value. This advance could be useful for creating better computer simulations for molecular nanotechnology. The algorithmic improvement that yields faster and more accurate solutions was adapted to use parallel processing (about 140 processors were used for the calculation over a weekend).
More computer systems are being developed that will help take advantage of this kind of algorithmic advance: systems with 1,000 processors built from FPGAs are expected for about $100,000 later this year and next.
Intel is promising hundreds of processor cores within ten years.
Background on supercomputer architectures
1. vector processors that can execute particular types of mathematical problems very quickly. (traditional Cray type machines)
2. large numbers of regular processors typically placed in a large number of networked computers. (Big Blue type supercomputers)
3. field-programmable gate arrays (FPGAs), chips that can be reconfigured on the fly to run specific programs very quickly.
4. Multithreaded chips
Details on the algorithmic advance:
The calculation requires solving an approximation of the Schrödinger equation, one of the central equations of quantum mechanics. It can be approximated as the sum of an infinite number of terms, each additional term contributing a bit more to the accuracy of the result. For all but the simplest systems or a relative handful of terms, however, the calculation rapidly becomes impossibly complex. Precise calculations have been done for systems of three components but this is for four. Their calculations were carried out to 7,034 terms. Two earlier algorithms were merged. They also developed improved computer code for a key computational bottleneck (high-precision solution of the large-scale generalized matrix eigenvalue problem) using parallel processing. The final calculations were run on a 147-processor parallel cluster at NIST over the course of a weekend.
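The bottleneck mentioned above, the high-precision solution of a large generalized matrix eigenvalue problem, has the form H c = E S c, where H is the Hamiltonian matrix in a non-orthogonal basis and S is the overlap matrix. A minimal sketch of this step (a toy random symmetric-definite pair, not the NIST code or its 7,034-term basis) can be written with SciPy:

```python
import numpy as np
from scipy.linalg import eigh

# Toy illustration (not the NIST code): variational methods reduce the
# Schrödinger equation to a generalized eigenvalue problem H c = E S c,
# where H is the Hamiltonian matrix and S the (non-orthogonal) overlap matrix.
rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
H = (A + A.T) / 2                      # symmetric "Hamiltonian"
B = rng.standard_normal((n, n))
S = B @ B.T + n * np.eye(n)            # symmetric positive-definite "overlap"

# eigh solves the generalized symmetric-definite problem directly
E, C = eigh(H, S)

# The lowest eigenvalue is the variational ground-state estimate; its
# eigenvector satisfies H c = E S c to machine precision.
residual = H @ C[:, 0] - E[0] * (S @ C[:, 0])
print(E[0], np.max(np.abs(residual)))
```

In a real calculation the lowest eigenvalue plays the role of the variational energy estimate; enlarging the basis (more terms) can only lower it, which is why accuracy improves with more terms.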
|
d96a5fc14ea13f6a |
From what I understand, the Dirac equation is supposed to be an improvement on the Schrödinger equation in that it is consistent with relativity theory. Yet all methods I have encountered for doing actual ab initio quantum mechanical calculations use the Schrödinger equation. If relativistic effects are important, one adds a relativistic correction. If the Dirac equation is a more correct description of reality, shouldn't it give rise to easier calculations? If it doesn't, is it really a more correct description?
I don't think "more correct description of reality" is positively correlated with "easier calculations". – twistor59 May 11 '13 at 11:45
Are you referring to the general Schrodinger equation for time evolution in quantum mechanics: $i\hbar\partial_t|\psi\rangle = H|\psi\rangle$ or the special case of the equation for a non-relativistic particle with Hamiltonian $H = p^2/2m$? The distinction is important because the former is valid even in the context of relativistic quantum field theories; it is the fundamental equation of the time evolution of quantum states even for relativistic quantum systems. – joshphysics May 11 '13 at 18:37
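joshphysics's point can be made concrete with a minimal sketch (a hypothetical two-level Hamiltonian, units with ħ = 1) of the general time-evolution equation $i\hbar\partial_t|\psi\rangle = H|\psi\rangle$, whose formal solution for a time-independent H is $|\psi(t)\rangle = e^{-iHt}|\psi(0)\rangle$:

```python
import numpy as np
from scipy.linalg import expm

# Sketch (units hbar = 1): the general Schrödinger equation
# i d|psi>/dt = H|psi> has the formal solution |psi(t)> = exp(-i H t)|psi(0)>,
# valid for any time-independent Hamiltonian, relativistic or not.
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])            # two-level "Rabi" Hamiltonian (sigma_x)
psi0 = np.array([1.0, 0.0], dtype=complex)

def evolve(t):
    return expm(-1j * H * t) @ psi0

# Probability of remaining in the initial state oscillates as cos^2(t)
for t in (0.0, np.pi / 4, np.pi / 2):
    p = abs(evolve(t)[0]) ** 2
    print(t, p)
```

The same equation governs relativistic quantum systems as well; what changes between the non-relativistic and relativistic cases is the Hamiltonian, not the time-evolution law.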
Think it with an example, Einstein's field equations are much more precise than Newton's law of gravity, but it's much more complicated to solve a Classical Mechanics problem with General Relativity.
More fundamental and precise doesn't mean that it will give easier calculations. If it did, then chemistry, medicine, etc. wouldn't exist as separate disciplines, because they can be described almost completely using Dirac's equation.
Good answer. A question though: What do you mean by "almost completely"? As far as I can see, either it's a complete illusion that you can integrate out the very low scale physics and obtain all the right chemical behaviour because nobody has dared to try it yet (and I mean finer results than just stoichiometry), or, at the other extreme, you get the exact relations on the chemical level. – NikolajK May 11 '13 at 15:15
I was thinking that the tiny gravitational effects, QCD, etc... would have some effect. Now I wonder if these effects would be able to interfere with your calculations and give you wrong results. – jinawee May 11 '13 at 15:34
IMO, the reason the Dirac equation is not used a lot is that we have a better theory of relativistic quantum mechanics called Quantum Field Theory.
The Dirac equation is one of the key equations of QFT but the calculations in QFT do not rely on solving the Dirac equation explicitly. Rather properties of the solutions of the Dirac equation (normalization and orthogonality of spinors, projection operators on positive/negative "energy" states) are used.
It is true that the Dirac equation takes into account theory of relativity, so in this respect it is more correct than the Schroedinger equation.
However, the problem with the Dirac equation is that it involves a function $\psi$ defined on space-time, not on configuration space, and it naturally describes one particle under the action of an electromagnetic field. For more than one particle, this makes a big difference. For example, for two interacting particles, like two electrons in the field of fixed nuclei, we have the Schroedinger equation for a function $\psi$ on 6-dimensional space, but it is not clear how to do a similar thing with the Dirac equation, because if we want to claim it is more accurate, we need to describe the interaction between the particles in a better way than just by an electrostatic potential.
This is partially accomplished by the Breit equation, which is a kind of modified Schroedinger equation containing relativistic corrections, but it is still not entirely consistent with relativity and has some problems with the new terms; some quantities diverge which should not, so it is not a satisfactory equation.
This and other problems lead people to reinterpret the Dirac equation as describing a kind of "quantum field", not a distinct particle. Unfortunately the resulting theory seems too difficult and problematic to be used regularly for complicated calculations of properties of molecules. The non-relativistic theory is much more developed for this purpose and people that worked on it (for example, John Slater, David Cook) say that in its basic form it works quite well for common atoms and molecules (I think unless one wants to include more subtle details like those relativistic corrections).
The Dirac equation is indeed widely used in ab initio calculations in quantum chemistry (you may wish to google "relativistic quantum chemistry"). For example, the use of the Dirac equation is especially important where you have heavy nuclei: one requires the Dirac equation to explain operation of even such mundane devices as lead-acid batteries (Phys. Rev. Lett. 106, 018301 (2011)).
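The heavy-nucleus point can be made quantitative with the one case where the Dirac equation is exactly solvable: a hydrogen-like ion. The sketch below (standard textbook formulas in Hartree atomic units, not taken from the cited paper) compares the Dirac and Schrödinger ground-state energies and shows the relativistic correction growing from roughly 0.001% at Z = 1 to roughly 11% at Z = 82 (lead):

```python
import math

# Hedged illustration: for a one-electron (hydrogen-like) ion the Dirac
# equation has an exact ground-state energy. In Hartree atomic units
# (c = 1/alpha), comparing it with the Schrödinger value shows why a
# relativistic treatment matters for heavy nuclei such as lead (Z = 82).
c = 137.035999  # speed of light in atomic units (inverse fine-structure constant)

def e_schrodinger(Z):
    # non-relativistic hydrogen-like ground state, in hartree
    return -Z**2 / 2.0

def e_dirac(Z):
    # exact Dirac ground state for a point Coulomb potential, rest energy subtracted
    return c**2 * (math.sqrt(1.0 - (Z / c)**2) - 1.0)

for Z in (1, 29, 82):
    e_nr, e_d = e_schrodinger(Z), e_dirac(Z)
    print(Z, e_nr, e_d, (e_d - e_nr) / e_nr)
```

For hydrogen the two values agree to a few parts per million, which is why non-relativistic calculations plus perturbative corrections suffice for light elements, while for lead the Dirac energy is about 11% deeper.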
|
d61dd11ae15ee431 |
PhysicsWorld Archive
Fortran faces the future at 50
1. Peter Crouch
2. Clive Page
3. John Pelan
December 2007
Article summary
It may come as a surprise to find out that computational physics – the discipline that today uses powerful supercomputers to crunch through vast amounts of data on, say, the Earth's climate – actually began in the late 1920s, some three decades before the first electronic computers were built. One of the people responsible for kick-starting this field was the physicist and mathematician Douglas Hartree, who at the time was trying to calculate atomic wavefunctions in order to determine the structural properties of atoms. While the Schrödinger equation can easily be solved for the hydrogen atom, repeating the feat for multi-electron atoms involves calculations that at the time were generally thought to be intractable. Hartree, however, developed a technique known as the self-consistent field method, which allowed these problems to be solved numerically. Unfortunately, his method was so laborious using the facilities available at the time – mechanical calculators operated by humans – that very little use was made of it until electronic computers became available. |
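The self-consistent field idea Hartree pioneered can be sketched in a few lines: the effective Hamiltonian depends on the electron density, which in turn comes from diagonalizing that Hamiltonian, so one iterates until the two agree. The toy two-site mean-field model below (hypothetical parameters, not Hartree's atomic calculation) shows the structure of such a loop:

```python
import numpy as np

# Toy self-consistent field (SCF) loop in the spirit of Hartree's method:
# a two-site mean-field model (hypothetical parameters, not Hartree's atomic
# problem). Each electron moves in the average field of the other, so the
# Hamiltonian depends on the density, which depends on the Hamiltonian,
# hence the iteration to self-consistency.
t, U, de = 1.0, 2.0, 0.5           # hopping, interaction, site-energy offset
n = np.array([0.5, 0.5])           # initial guess for the site densities

for it in range(100):
    # mean-field Hamiltonian: each site shifted by U times the mean density
    H = np.array([[U * n[0],        -t],
                  [-t,       de + U * n[1]]])
    E, C = np.linalg.eigh(H)
    n_new = C[:, 0] ** 2           # occupy the lowest orbital
    if np.max(np.abs(n_new - n)) < 1e-10:
        break
    n = 0.5 * n + 0.5 * n_new      # damped update for stable convergence
print(it, n)
```

On a mechanical desk calculator each such iteration was a day's work, which is why Hartree's method saw little use until electronic computers made the loop cheap.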
e273ca21fb83a1d7 | It is susceptible to attack by many insect-pests, and most severely affected by the fruit and shoot borer (FSB). These insects effectively damage (60–70%) the crop even following the average application of 4.6 kg of insecticides and pesticides per hectare [2]. Therefore, to control the indiscriminate use of insecticides, the transgenic approach is being opted for, as it is eco-friendly and shows promise to control the FSB infesting brinjal. The use of insecticidal proteins from the bacterium Bacillus thuringiensis (Bt) in the improvement of crop productivity via transgenic crops (Bt crops) is being promoted in most cases. However, the potential risk associated with the impact of
transgenic crops on non-target microorganisms and flora and fauna in the environment, is still a matter of concern. Bt crops have the potential to alter the microbial community dynamics in the soil agro-ecosystem
owing to the release of toxic Cry proteins into the soil via root exudates [3], and through decomposition of the crop residues [4]. The available reports, however, are not consistent regarding the nature of interaction of transgenic crops with the native microbial community. Icoz and Stotzky [5] presented a comprehensive analysis of the fate and effect of Bt crops in the soil ecosystem and emphasized the need for risk assessment studies of transgenic crops. Phylogenetically, actinomycetes are members of taxa under the high G + C sub-division of the Gram-positive bacteria [6]. Apart from bacteria and fungi, actinomycetes are an important microbial group known to be actively involved in the degradation of complex organic materials in soils and to contribute to the biogeochemical cycle [7]. The presence of Micromonospora in soils contributes to the production
of secondary metabolites (antibiotics) like anthraquinones [8], and Arthrobacter globiformis degrades substituted phenyl urea in soil [9]. The Nakamurella group are known for the production of catalase and for storing polysaccharides [10]. Thermomonospora, common to decaying organic matter, are known for plant cell degradation [11]. Frankia is widely known for N2 fixation [12], Sphaerisporangium album for starch hydrolysis and nitrate reduction in soils [13], and Agromyces sp. degrades organophosphate compounds via phosphonoacetate metabolism through catabolite repression by glucose [14]. Janibacter in rhizospheric soils are widely known to degrade 1,1-dichloro-2,2-bis(4-chlorophenyl)ethylene (DDE) [15], while Streptomyces is known for the production of chitinase as well as antibiotics [16]. These studies suggest that most of the representative genera of actinomycetes in the soil contribute to the maintenance of soil fertility. Most studies on transgenic crops have been carried out on cotton, corn, tomato, papaya, rice, etc., with emphasis on protozoal, bacterial and fungal communities [5].
VanSaun2, Lynn M. Matrisian2, D. Lee Gorden2 1 Department of Surgery, St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea Republic, 2 Department of Cancer Biology, Vanderbilt University, Nashville, Tennessee, USA Purpose: Pro-inflammatory processes of the early postoperative state may induce peritoneal metastases in patients with advanced disease. To identify whether the wound healing response
after an abdominal incision leads to increased MMP-9 activity locally, therefore providing a favorable environment for peritoneal metastasis. Increased MMP9 in a post-operative injury setting increases the number and severity of peritoneal metastasis when compared to mice without wounds. Methods: Eighteen C57bl/6 J male
mice were obtained at 8 weeks of age. Metastatic tumors were initiated using a peritoneal injection model with syngeneic MC38 murine colon cancer cells. Peritoneal injections were performed into the peritoneum at the right lower quadrant area via a 25G syringe. A 1.5 cm upper midline incision was made in the abdominal wall to recapitulate the postoperative wound model. The abdominal wall was closed by a continuous 4-0 prolene suture with 5 stitches. Mice were sacrificed at various time points, and we observed the rate of peritoneal metastasis in each group. Results: By making an incision into the abdominal wall, we induced inflammation in the mouse and observed that the incidence of peritoneal metastasis was increased (Fig. 1). The early stage of the wound healing process
increases pro-inflammatory cytokines and the number of inflammatory cells in the peritoneum, and this leads to increased pro-MMP9 proteins. The inflammatory process initiated by the wound, in turn, increased the proliferation of the mesothelial cells, provoked expression of the inflammatory cells and increased parietal peritoneal metastasis. Conclusion: The early stage of the wound healing process increases pro-inflammatory cytokines and the number of inflammatory cells in the peritoneum, and this leads to increased pro-MMP9 proteins. So the increased pro-MMP9 proteins play a key role in the growth and progression of cancer cells in peritoneal metastasis. Figure 1. Poster No. 87 Cytokine-Mediated Activation of Gr-1 + Inflammatory Cells and Macrophages in Squamous Cell Carcinoma towards a Tumor-Supporting Pro-Invasive and Pro-Angiogenic Phenotype Nina Linde 1 , Dennis Dauscher1, Margareta M. Mueller1 1 Group Tumor and Microenvironment, German Cancer Research Center, Heidelberg, Germany Inflammatory cells have been widely accepted to contribute to tumor formation and progression. In a HaCaT model for human squamous cell carcinoma (SCC) of the skin, we have observed that infiltration of inflammatory cells does not only promote tumorigenesis but is indispensable for persistent angiogenesis and the development of malignant tumors.
Recent studies suggest that BRCA proteins are required for protecting the genome from damage [12]. Mutations in BRCA genes have been established to predispose women to breast and ovarian cancer, the end point of BRCA protein
dysfunction. Mutations in both genes are spread throughout the entire gene. More than 600 different mutations have been identified in the BRCA1 gene and 450 mutations in BRCA2. The majority of mutations known to be disease-causing result in a truncated protein due to frame shift, nonsense, or splice site alterations. Nonsense mutations occur when the nucleotide substitution produces a stop codon (TGA, TAA, or TAG) and translation of the protein is terminated at this point. Frame shift mutations occur when one or more nucleotides are either inserted or deleted, resulting in a missing or non-functional protein. Splice
site mutations cause abnormal inclusion or exclusion of DNA in the coding sequence, resulting in an abnormal protein. Another kind of mutation resulting from a single nucleotide substitution is the missense mutation, in which the substitution changes a single amino acid but does not affect the remainder of the protein translation [13, 14]. Studies of BRCA1 mutation occurrence suggested that nearly half of the families at high risk for breast cancer carried a BRCA1 mutation [15]. However, other analyses suggest that the actual incidence of BRCA1 in high risk families (>3 cases of breast and/or ovarian cancer) might be as low as 12.8% to 16% [4]. Substantial variation in the prevalence of BRCA1 mutations in high risk families in various
countries has been observed; these are more common than BRCA2 mutations [16, 17]. The main objectives of the present work were to identify germline mutations in BRCA1 (exons 2, 8, 13, 22) and BRCA2 (exon 9) genes for the early detection of presymptomatic mutation carriers in Egyptian healthy females who were first degree relatives of affected women from families with and without a family history of breast cancer. Subjects and Methods Patients and families Sixty breast cancer patients (index patients), derived from 60 families considered to be at high risk based on medical examination (they were grade 3 patients), were selected for molecular genetic testing of the BRCA1 and BRCA2 genes. They were referred to the Clinical Oncology Unit in the Medical Research Institute, Alexandria University, for chemotherapy as part of their curative treatment after mastectomy. Selected index patients were preferred to be of early onset age at diagnosis, possessing a positive family history and bilateral breast cancer. The study also included one hundred and twenty healthy first degree female relatives of index patients, either sisters and/or daughters, for early detection of mutation carriers. The decision to undergo genetic testing was taken after the participants were informed about the benefits and importance of genetic testing.
CrossRef 31. Tang CG, Chen YH, Xu B, Ye XL, Wang ZG: Well-width dependence of in-plane optical anisotropy in (001) GaAs/AlGaAs quantum
wells induced by in-plane uniaxial strain and interface asymmetry . J Appl Phys 2009,105(10):103108.CrossRef 32. Tang CG, Chen YH, Ye XL, Wang ZG, Zhang WF: Strain-induced in-plane optical anisotropy in (001) GaAs/AlGaAs superlattice studied by reflectance difference spectroscopy . J Appl Phys 2006,100(11):113122.CrossRef 33. Krebs O, Voisin P: Giant optical anisotropy of semiconductor heterostructures with no common atom and the quantum-confined Pockels effect . Phys Rev Lett 1996, 77:1829.CrossRef 34. Yu J, Chen Y, Cheng S, Lai Y: Spectra of circular and linear photogalvanic effect at inter-band excitation in In 0.15 Ga 0.85 As/Al 0.3 Ga 0.7 As multiple quantum wells . Phys E: Low-dimensional Systems and Nanostructures 2013,49(0):92–96. 35. Takagi T: Refractive index of Ga 1-x In x As prepared by vapor-phase epitaxy . Japanese J Appl Phys 1978, 17:1813–1817.CrossRef 36. Park YS, Reynolds DSC: Exciton structure in photoconductivity of CdS, CdSe, and CdS: Se single crystals . Phys Rev 1963, 132:2450–2457.CrossRef 37. Ohno Y, Terauchi R, Adachi T, Matsukura
F, Ohno H: Spin relaxation in GaAs(110) quantum wells . Phys Rev Lett 83:4196–4199. 38. Damen TC, Via L, Cunningham JE, Shah J, Sham LJ: Subpicosecond spin relaxation dynamics of excitons and free carriers in GaAs quantum wells . Phys Rev Lett 1991, 67:3432–3435.CrossRef 39. Roussignol P, Rolland P, Ferreira R, Delalande C, Bastard G, Vinattieri A, Martinez-Pastor J, Carraresi L, Colocci M, Palmier JF, Etienne B: Hole polarization and slow hole-spin relaxation
in an n-doped quantum-well structure . Phys Rev B 1992, 46:7292–7295.CrossRef 40. Mattana R, George J-M, Jaffrès H, Nguyen Van Dau F, Fert A, Lépine B, Guivarc’h A, Jézéquel G: Electrical detection of spin accumulation in a p-type GaAs quantum well . Phys Rev Lett 2003, 90:166601.CrossRef 41. Bulaev DV, Loss D: Spin relaxation and decoherence of holes in quantum dots . Phys Rev Lett 2005, 95:076805.CrossRef 42. Gvozdic DM, Ekenberg U: Superefficient electric-field-induced spin-orbit splitting in strained p-type quantum wells . Europhys Lett 2006, 73:927.CrossRef 43. Chao CY, Chuang SL: Spin-orbit-coupling effects on the valence-band structure of strained semiconductor quantum wells . Physical Review B 1992,46(7):4110.CrossRef 44. Foreman BA: Analytical envelope-function theory of interface band mixing . Phys Rev Lett 1998,81(2):425.CrossRef 45. Muraki K, Fukatsu S, Shiraki Y, Ito R: Surface segregation of in atoms during molecular-beam epitaxy and its influence on the energy-levels in InGaAs/GaAs quantum-wells . Appl Phys Lett 1992,61(5):557–559.CrossRef 46. Chen YH, Wang ZG, Yang ZY: A new interface anisotropic potential of zinc-blende semiconductor interface induced by lattice mismatch . Chinese Phys Lett 1999,16(1):56–58.CrossRef 47.
The inset in (e) shows the corresponding selected area diffraction pattern with a zone axis of [1–30]. The second processing parameter we investigated was the vapor pressure. Figure 3a,b,c show our SEM studies for 100, 300, and 500 Torr, respectively. It turns out that
CoSi nanowires grew particularly well at the reaction pressure of 500 Torr. In this experiment, the higher the vapor pressure, the longer the nanowires grew. Additionally, with increasing vapor pressure, the number of nanoparticles reduces, but the size of the nanoparticles increases. Figure 3 SEM images of CoSi nanowires. At vapor pressures of (a) 100, (b) 300, and (c) 500 Torr, respectively. For the synthesis of cobalt silicide nanowires, the third and final processing parameter we studied was the gas flow rate. We conducted experiments
at the gas flow rate of 200, 250, 300, and 350 sccm, obtaining the corresponding results shown in Figure 4a,b,c,d, respectively. It can be found in the SEM images of Figure 4 that at 850°C ~ 880°C, the number of CoSi nanowires reduced with the increasing gas flow rate; thus, more CoSi nanowires appeared as the gas flow rate was lower. Figure 4 SEM images of CoSi nanowires. At gas flow rates of (a) 200, (b) 250, (c) 300, and (d) 350 sccm, respectively. The growth mechanism of the cobalt silicide nanowires in this work is of interest. Figure 5
is the schematic illustration of the growth mechanism, showing the proposed growth steps of CoSi nanowires with a SiOx outer layer. When the system temperature did not reach the reaction temperature, CoCl2 reacted with H2 (g) to form Co following step (1) of Figure 5: Figure 5 The schematic illustration of the growth mechanism. (1) CoCl2(g) + H2(g) → Co(s) + 2HCl(g), (2) 2CoCl2(g) + 3Si(s) → 2CoSi(s) + SiCl4(g), (3) SiCl4(g) + 2H2(g) → Si(g) + 4HCl(g), (4) 2Si(g) + O2(g) → 2SiO(g), and (5) Co(solid or vapor) + 2SiO(g) → CoSi(s) + SiO2(s). The Co atoms agglomerated to form Co nanoparticles on the silicon substrate. When the system temperature reached the reaction temperatures, 850°C ~ 880°C, CoCl2 reacted with the silicon substrate to form a CoSi thin film and SiCl4 based on step (2) of Figure 5: The SiCl4 product then reacted with H2(g) to form Si(g) following step (3) of Figure 5: The Si here reacted with either residual oxygen or the exposed SiO2 surface to form SiO vapor from step (4) of Figure 5[30]: The SiO vapor reacted with Co nanoparticles via vapor-liquid–solid mechanism.
We will comment not only on the strengths but also on the technical pitfalls and the current limitations of the technique, discussing the performance of DFT and the foreseeable achievements in the near future. Theoretical background To appreciate the special place of DFT in the modern arsenal of quantum chemical methods, it is useful first to have a look into the more traditional wavefunction-based approaches. These attempt to provide approximate solutions to the Schrödinger equation,
the fundamental equation of quantum mechanics that describes any given chemical system. The most fundamental of these approaches originates from the pioneering work of Hartree and Fock in the 1920s (Szabo and Ostlund 1989). The HF
method assumes that the exact N-body wavefunction of the system can be approximated by a single Slater determinant of N spin-orbitals. By invoking the variational principle, one can derive a set of N-coupled equations for the N spin orbitals. Solution of these equations yields the Hartree–Fock wavefunction and energy of the system, which are upper-bound approximations of the exact ones. The main shortcoming of the HF method is that it treats electrons as if they were moving independently of each other; in other words, it neglects electron correlation. For this reason, the efficiency and simplicity of the HF method are offset by poor performance for systems of relevance to bioinorganic chemistry. Thus, HF is now principally used merely as a starting point for more elaborate "post-HF" ab initio quantum chemical approaches, such as coupled cluster or configuration interaction methods, which provide different ways of recovering the correlation missing from HF and approximating the exact wavefunction. Unfortunately, post-HF methods usually present difficulties in their application to bioinorganic and biological systems, and their cost is currently still prohibitive for molecules containing more than about 20 atoms. Density functional theory attempts to address both the inaccuracy
of HF and the high computational demands of post-HF methods by replacing the many-body electronic wavefunction with the electronic density as the basic quantity (Koch and Holthausen 2000; Parr and Yang 1989). Whereas the wavefunction of an N electron system is dependent on 3N variables (three spatial variables for each of the N electrons), the density is a function of only three variables and is a simpler quantity to deal with both conceptually and practically, while electron correlation is included in an indirect way from the outset. Modern DFT rests on two theorems by Hohenberg and Kohn (1964). The first theorem states that the ground-state electron density uniquely determines the electronic wavefunction and hence all ground-state properties of an electronic system.
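The variational principle invoked above can be illustrated with a textbook toy problem (not a bioinorganic system): for the 1D harmonic oscillator in units ħ = m = ω = 1, a Gaussian trial wavefunction ψ ∝ exp(−a x²) gives ⟨E⟩(a) = a/2 + 1/(8a), which is an upper bound on the exact ground-state energy 1/2 for every a, with equality at a = 1/2:

```python
import numpy as np

# Minimal sketch of the variational principle (toy problem, units
# hbar = m = omega = 1): for the harmonic oscillator with a Gaussian trial
# wavefunction psi ~ exp(-a x^2), the energy expectation value is
# E(a) = a/2 + 1/(8a). Every trial energy is an upper bound on the exact
# ground-state energy 1/2, with equality at the optimal a = 1/2.
def energy(a):
    return a / 2.0 + 1.0 / (8.0 * a)

a_grid = np.linspace(0.05, 2.0, 2000)
E_grid = energy(a_grid)
a_best = a_grid[np.argmin(E_grid)]
print(a_best, E_grid.min())   # minimum near a = 0.5, E near 0.5
```

Minimizing over a family of trial functions is exactly what the HF equations do, only with Slater determinants instead of a single Gaussian, which is why the HF energy is an upper bound on the exact one.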
The extracted ΦB values of these samples are presented in Figure 4. The highest ΦB value, attained by the sample annealed in O2 ambient (3.72 eV), was higher than that of metal-organic decomposed CeO2 (1.13 eV) spin-coated on an n-type GaN substrate [20]. No ΦB value has been extracted for the sample annealed in N2 ambient due to the low E B and high J of this sample, wherein the gate oxide breaks down prior to the FN tunneling mechanism. Figure 7 Experimental data fitted well with
FN tunneling model. Experimental data (symbol) of samples annealed in O2, Ar (HJQ and KYC, unpublished work), and FG ambient fitted well with the FN tunneling model (line). Table 1 compares the computed ΔE c values from the XPS characterization with the ΦB values extracted from the FN tunneling model. From this table, it can be seen that the E B of the sample annealed in O2 ambient is dominated by the breakdown of IL, as the obtained
value of ΦB from the FN tunneling model is comparable with the value of ΔE c(IL/GaN) computed from the XPS measurement. For samples annealed in Ar and FG ambient, the acquisition of ΦB value that is comparable to the ΔE c(Y2O3/GaN) indicates that the E B of these samples is actually dominated by the breakdown of bulk Y2O3. Since the leakage current of the sample annealed in N2 ambient is not governed by FN tunneling mechanism, a conclusion in determining whether the
E B of this sample is dominated by the breakdown of IL, Y2O3, or a combination of both cannot be deduced. Based on the obtained values of ΔE c(Y2O3/GaN), ΔE c(IL/GaN), and ΔE c(Y2O3/IL), the E B of this sample is unlikely to be dominated by IL due to the acquisition of a negative ΔE c(IL/GaN) value for this sample. Thus, the E B of this sample is most plausibly dominated by either Y2O3 or a combination of Y2O3 and IL. However, the attainment of a ΔE c(Y2O3/IL) value which is larger than the ΔE c(Y2O3/GaN) value obtained for the samples annealed in Ar and FG ambient eliminates the latter possibility. The reason is that if the E B of the sample annealed in N2 ambient were dominated by the combination of Y2O3 and IL, this sample should be able to sustain a higher E B and a lower J than the samples annealed in Ar and FG ambient. Therefore, the E B of the sample annealed in N2 ambient is most likely dominated by the breakdown of bulk Y2O3.
Table 1 Comparison of the obtained ΔE c (XPS conduction band offset) and ΦB (from J-E) values, in eV:
Ambient | ΔE c(Y2O3/GaN) | ΔE c(IL/GaN) | ΔE c(Y2O3/IL) | Barrier height ΦB
O2 | 3.00 | 3.77 | 0.77 | 3.72
Ar | 1.55 | 1.40 | 0.15 | 1.58
FG | 0.99 | 0.68 | 0.31 | 0.92
N2 | 0.70 | −2.03 | 2.73 | (a)
(a) Not influenced by FN tunneling; therefore, the barrier height is not extracted from the FN tunneling model.
A 100 μL drop of MSgg was mounted on top of the biofilm and NO microprofiles were measured immediately with an NO microsensor as described previously [43]. For each experimental treatment, MSgg was supplied either with or without 300 μM of the NO donor SNAP. SNAP was mixed
into MSgg directly before the experiment. Experimental treatments were as follows: (i) wild-type: B. subtilis 3610 for which MSgg agar and drop were added without further supplementation; (ii) wild-type: B. subtilis 3610 for which MSgg agar and drop were supplemented with 100 μM L-NAME; and (iii) B. subtilis 3610 Δnos for which MSgg agar and drop were added without further supplementation. Acknowledgements We thank Bernhard Fuchs (MPI Bremen) for help with flow cytometry and Pelin Yilmaz (MPI Bremen) for help during initial stages of swarming experiments. This study was supported by the Max Planck Society. Electronic supplementary material Additional file 1: Figure S1. Theoretical formation of NO from the NO donor Noc-18. The figure shows the calculated formation of NO over time for different starting concentrations of Noc-18. Figure S2. Theoretical formation of NO from the NO donor SNAP. The figure shows the calculated formation of NO over time for different starting concentrations of SNAP. (PDF 160 KB) References 1. Bredt DS, Snyder SH: Nitric-Oxide – a Physiological Messenger Molecule. Annu Rev Biochem 1994, 63:175–195.PubMedCrossRef
2. Alderton WK, Cooper CE, Knowles RG: Nitric oxide synthases: structure,
function and inhibition. Biochem J 2001, 357:593–615.PubMedCrossRef 3. Stamler JS, Lamas S, Fang FC: Nitrosylation: The prototypic redox-based signaling mechanism. Cell 2001, 106:675–683.PubMedCrossRef 4. Sudhamsu J, Crane BR: Bacterial nitric oxide synthases: what are they good for? Trends Microbiol 2009, 17:212–218.PubMedCrossRef 5. Adak S, Aulak KS, Stuehr DJ: Direct evidence for nitric oxide production by a nitric-oxide synthase-like protein from Bacillus subtilis. J Biol Chem 2002, 277:16167–16171.PubMedCrossRef 6. Gusarov I, Nudler E: NO-mediated cytoprotection: Instant adaptation to oxidative stress in bacteria. Proc Natl Acad Sci USA 2005, 102:13855–13860.PubMedCrossRef 7. Gusarov I, Shatalin K, Starodubtseva M, Nudler E: Endogenous Nitric Oxide Protects Bacteria Against a Wide Spectrum of Antibiotics. Science 2009, 325:1380–1384.PubMedCrossRef 8. Kers JA, Wach MJ, Krasnoff SB, Widom J, Cameron KD, Bukhalid RA, Gibson DM, Crane BR, Loria R: Nitration of a peptide phytotoxin by bacterial nitric oxide synthase. Nature 2004, 429:79–82.PubMedCrossRef 9. Spiro S: Regulators of bacterial responses to nitric oxide. Fems Microbiol Rev 2007, 31:193–211.PubMedCrossRef 10. Zumft WG: Nitric oxide reductases of prokaryotes with emphasis on the respiratory, heme-copper oxidase type. J Inorg Biochem 2005, 99:194–215.PubMedCrossRef 11. Aguilar C, Vlamakis H, Losick R, Kolter R: Thinking about Bacillus subtilis as a multicellular organism.
Similar observations were made for the total score of these questionnaires (Fig. 3). Patients with a fracture on the right side had significantly higher scores immediately after the fracture for the IOF physical function domain [right vs left, median (interquartile range, IQR): 89 (75, 96) vs 71 (61, 86), P = 0.002]. A fracture on the dominant side was associated with higher scores than a fracture on the non-dominant side with regard to physical function [89 (75, 96) vs 70 (59, 82), P < 0.001] and overall score [67 (54, 79) vs 56 (47, 67), P = 0.016]. The latter is shown in Fig. 4. Patients undergoing surgical treatment had lower scores of Qualeffo-41, indicating better quality of life, on general health
(P = 0.013) and mental health (P = 0.004) than patients with non-surgical treatment. Patients
using analgesics had higher scores on the IOF-wrist fracture questionnaire for pain (P = 0.009) and physical function (P = 0.001), and a higher overall score (P = 0.002), than patients not using analgesics.
Table 5 Comparison of IOF-wrist domain and EQ-5D scores over time. Each cell gives the median score (IQR); the rows below each time point give the p value for the difference from baseline and the number of subjects.
Time point | IOF-wrist Pain | Upper limb symptoms | Physical function | General health | Overall score | EQ-5D overall score
Baseline | 50 (25, 50) | 25 (8, 42) | 75 (61, 93) | 75 (50, 75) | 60 (50, 73) | 0.59 (0.26, 0.72)
(n) | 104 | 104 | 105 | 92 | 105 | 104
6 weeks | 25 (25, 50) | 29 (8, 42) | 57 (36, 79) | 50 (25, 75) | 48 (31, 65) | 0.66 (0.59, 0.78)
(p; n) | 0.002; 98 | 0.688; 98 | <0.001; 98 | 0.001; 95 | <0.001; 98 | <0.001; 97
3 months | 25 (25, 50) | 25 (8, 42) | 25 (11, 46) | 25 (0, 50) | 25 (13, 46) | 0.76 (0.66, 0.88)
(p; n) | <0.001; 89 | 0.007; 89 | <0.001; 89 | <0.001; 88 | <0.001; 89 | <0.001; 85
6 months | 25 (0, 50) | 17 (8, 33) | 14 (0, 33) | 25 (0, 50) | 15 (4, 34) | 0.78 (0.69, 1.00)
(p; n) | <0.001; 87 | <0.001; 87 | <0.001; 87 | <0.001; 87 | <0.001; 87 | <0.001; 86
12 months | 0 (0, 25) | 8 (0, 25) | 4 (0, 29) | 0 (0, 25) | 8 (2, 27) | 0.80 (0.69, 1.00)
(p; n) | <0.001; 87 | <0.001; 87 | <0.001; 87 | <0.001; 86 | <0.001; 87 | <0.001; 85
Fig. 2 IOF-wrist fracture median domain scores by time point. Fig. 3 IOF-wrist fracture and Qualeffo-41 (spine) median overall scores by time point. Fig. 4 IOF-wrist fracture median overall score by side of fracture and by time point.
Utility data could be calculated from the EQ-5D results. Immediately after the fracture, the utility was 0.59, increasing to 0.76 after 3 months and to 0.80 after 1 year. Assuming that the quality of life and the utility after 1 year are similar to those before the fracture, the utility loss due to the distal radius fracture is more than 0.20 in the first weeks. Most of the utility loss was regained after 3 months.
Discussion
The results from this study show that the IOF-wrist fracture questionnaire has adequate repeatability, since the kappa statistic was moderate to good for most questions and quite similar to data obtained with Qualeffo-41 [10].
The concentration of RNA was adjusted to 100 ng/μl, and the samples were stored at −70°C.
cDNA templates were synthesized from 50 ng RNA with the PrimeScript™ 1st strand cDNA Synthesis Kit (TaKaRa) and gene-specific primers at 42°C for 15 min, then 85°C for 5 s. Real-time PCR was performed with the cDNA and SYBR Premix Ex Taq (TaKaRa) using a StepOne Real-Time PCR System (Applied Biosystems). The quantity of cDNA measured by real-time PCR was normalised to the abundance of 16S cDNA. Real-time RT-PCR was repeated three times in triplicate parallel experiments.
Statistical analysis
The paired t test was used for statistical comparisons between groups. The level of statistical significance was set at a P value of ≤ 0.05.
Results
AI-2 inhibits biofilm formation in a concentration-dependent manner under static conditions
Previous studies showed that biofilm formation was influenced by the LuxS/AI-2 system both in Gram-positive and Gram-negative bacteria [32, 34]. The genome of S. aureus encodes a typical luxS gene, which plays a role in the regulation of capsular polysaccharide synthesis and virulence [43]. In this study, to investigate whether the LuxS/AI-2
system regulates biofilm formation in S. aureus, we monitored the biofilm formation of the S. aureus WT strain RN6390B and the isogenic derivative ΔluxS strain using a microtitre plate assay. As shown in Figure 1A, the WT strain formed almost no biofilm after 4 h incubation at 37°C. However, the ΔluxS strain formed strong biofilms as measured by quantitative spectrophotometric analysis based
on OD560 after crystal violet staining (Figure 1A). This discrepancy could be complemented by introducing a plasmid that contains the luxS gene (Figure 1B). Figure 1 Biofilm formation under static conditions and chemical complementation by DPD of different concentrations. Biofilm growth of S. aureus WT (RN6390B), ΔluxS and ΔluxS complemented with different concentrations of chemically synthesized DPD in 24-well plates for 4 h under aerobic conditions (A1: 0.39 nM, A2: 3.9 nM, A3: 39 nM, A4: 390 nM). The cells that adhered to the plate after staining with crystal violet were measured by OD560. The effects of LuxS could be attributed to its central metabolic function or to AI-2-mediated QS regulation, which has been reported to influence biofilm formation in some strains [32–34]. To determine if AI-2, as a QS signal, regulates biofilm formation in S. aureus, the chemically synthesized pre-AI-2 molecule DPD at concentrations from 0.39 nM to 390 nM was used to complement the ΔluxS strain. The resulting data suggested that exogenous AI-2 could decrease biofilm formation of the ΔluxS strain, and the effective concentration for complementation was from 3.9 nM to 39 nM DPD (Figure 1A). As expected, these concentrations were within the range that has been reported [51]. Interestingly, higher concentrations of AI-2 had no effect on biofilm formation, a phenomenon that has also been found in other species [51].
Quantum Tunneling
Quantum tunneling is a phenomenon where particles may "tunnel through" a barrier which they have insufficient kinetic energy to overcome according to classical mechanics. Tunneling is a result of the wavelike nature of quantum particles, and cannot be predicted by any classical system.
Tunneling was first directly observed in the early 1900s, when researchers studying the electrical behavior of closely spaced electrodes in gases found an unexplained component of current that persisted even through high vacuum. Within a few years, physicists began to find solutions to the new Schrödinger equation. When solving for a double potential well, it was found that the particle could move between the two wells if the barrier separating them was sufficiently low and narrow.
The researchers were observing a phenomenon now called "field emission": at high potentials, electrons may tunnel through the potential barrier at the metal-vacuum interface, after which they fall through the potential to the counter-electrode. Electron tunneling is exploited in technologies such as Esaki diodes, Zener diodes, field emission-based electron guns, and scanning tunneling microscopy. For phenomena more germane to the study of chemistry, tunneling is responsible for the rate of alpha decay, since the alpha particles can escape from the nucleus even though they do not have enough energy to surmount the barrier that confines them.
Basic description
As previously stated, quantum tunneling is a result of the wave nature of quantum particles. A traveling or standing wave function incident on a finite potential barrier decays inside the barrier as \( A_0 e^{-\alpha x}\), where \(A_0\) is the amplitude at the boundary, \(\alpha \) grows with the square root of the barrier height above the particle energy, and \(x\) is the distance into the barrier. If a second well exists at infinite distance from the first well, the amplitude there goes to zero, so the probability of a particle existing in the second well is zero. If the second well is brought closer to the first well, the amplitude of the wave function at its boundary is no longer zero, so the particle may tunnel into that well from the first well. It would appear that the particles are 'leaking' through the barrier; they can travel through it without having to surmount it.
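To put a number on this exponential decay, the sketch below (an illustrative Python snippet, not part of the original page) evaluates the 1/e penetration depth \(1/\alpha\), using \(\alpha = \sqrt{2m(V_0-E)}/\hbar\) as derived in the advanced section; the barrier height and particle energy are arbitrary example values.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # 1 eV in joules

def penetration_depth(barrier_eV, energy_eV, mass=M_E):
    """1/alpha: distance over which the wavefunction amplitude falls by 1/e
    inside a barrier of height V0 for a particle of energy E < V0."""
    alpha = math.sqrt(2 * mass * (barrier_eV - energy_eV) * EV) / HBAR
    return 1.0 / alpha

# Example: a 1 eV electron facing a 5 eV barrier penetrates on the order of 0.1 nm.
d = penetration_depth(5.0, 1.0)
```

Raising the barrier (or lowering the particle energy) increases \(\alpha\) and shrinks the penetration depth, which is why tunneling is only significant across very thin barriers.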
An important point to keep in mind is that tunneling conserves energy. The final sum of the kinetic and potential energy of the system cannot exceed the initial sum. Therefore, the potential on both sides of the barrier does not need to be the same, but the sum of the ground state energy and the potential on the opposite side of the barrier may not be larger than the initial particle energy and potential.
Advanced description
Tunneling is found from the Schrödinger equation, so we will start there. To simplify the calculations, the time-independent, one-dimensional Schrödinger equation is used (Equation \ref{eq1}).
\[ -{\frac{\hbar^2}{2m}} {\frac{\partial^2 \psi}{\partial x^2}} + V(x) \psi = E \psi \label{eq1}\]
To find solutions to a particular system, the potential \(V (x)\) must be defined. In this case, we will set the potential to zero for all space, except for the region between 0 and \(a\), which we will set as \(V_0\). This is represented by the piecewise function in Equation \ref{eq2}.
\[ V(x) =
\begin{cases}
0 & \text{if } -\infty<x\leq 0\\[3pt]
V_0 & \text{if } 0<x<a\\[3pt]
0 & \text{if } a\leq x<\infty
\end{cases} \label{eq2}\]
To solve this, the equation must be solved separately for each region. However, the boundary conditions at 0 and \(a\) for each region must be consistent such that \(\psi (x)\) is continuous for all \(x\), so that \(\psi (x)\) is a valid wavefunction. The general solution for each region, before applying the boundary conditions, is then
\[ \psi =
\begin{cases}
A\sin kx + B\cos kx & \text{if } -\infty<x\leq 0\\[3pt]
Ce^{-\alpha x} + De^{\alpha x} & \text{if } 0<x<a\\[3pt]
E\sin kx + F\cos kx & \text{if } a\leq x<\infty
\end{cases} \]
where \(k = \frac{\sqrt{2mE}}{\hbar} \) and \(\alpha = \frac{\sqrt{2m(V_o -E)}}{\hbar} \). To enforce continuity, the solutions in neighboring regions are set equal at each boundary. This is expressed as \(\psi_1 (0) = \psi_2 (0) \) and \(\psi_2 (a) = \psi_3 (a) \).
\[A\sin 0 + B\cos 0 = Ce^{0} + De^{0}\]
which implies that \( A=0 \), and \( B=C+D \). At the opposite boundary,
\[E\sin ka + F\cos ka = Ce^{-\alpha a} + De^{\alpha a}\]
It may be observed that, as \( a \) goes to infinity, the right hand side of this equation goes to infinity, which does not make physical sense. To reconcile this, \( D \) is set to zero. For the final region, the coefficients \( E \) and \( F \) present a potentially intractable problem. However, if one realizes that the value at the boundary \( a \) drives the wave in the region from \( a \) to \( \infty \), it may also be realized that the wavefunction there can be rewritten as \( Ce^{-\alpha a}\cos(k(x-a)) \), phase shifting the wavefunction by the value of \( a \) and setting the amplitude to the boundary value. The wavefunction is then
\[ \psi =
\begin{cases}
B\cos kx & \text{if } -\infty<x\leq 0\\[3pt]
Be^{-\alpha x} & \text{if } 0<x<a\\[3pt]
Be^{-\alpha a}\cos(k(x-a)) & \text{if } a\leq x<\infty
\end{cases} \]
where the first piece follows from \( A=0 \) and \( B=C \) (since \( D=0 \)).
The amplitude of the wavefunction is attenuated by the barrier as \( e^{-a\frac{\sqrt{2m(V_o -E)}}{\hbar}} \), where \( a \) represents the width of the barrier and \( (V_o -E) \) is the difference between the potential energy of the barrier and the current energy of the particle. Since the square of the wavefunction is the probability distribution, the probability of transmission through a barrier is \( e^{-2a\frac{\sqrt{2m(V_o -E)}}{\hbar}} \). As the barrier width or height approaches zero, the probability of a particle traveling through the barrier approaches 1. Also of note is that \( k \) is unchanged on the other side of the barrier. This implies that the energy of the particles is exactly the same as it was before they tunneled through the barrier; as stated earlier, the only thing that changes is the fraction of particles going in that direction. The rest are reflected off the barrier, and go back the way they came.
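The transmission formula above can be evaluated directly. The following is a minimal illustrative sketch (the barrier width, height and particle energy are invented example values, not from the text):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # 1 eV in joules

def transmission_probability(width_m, barrier_eV, energy_eV, mass=M_E):
    """Tunneling probability exp(-2*a*sqrt(2m(V0-E))/hbar) for E < V0."""
    v0, e = barrier_eV * EV, energy_eV * EV
    if e >= v0:
        raise ValueError("formula assumes E < V0")
    alpha = math.sqrt(2 * mass * (v0 - e)) / HBAR
    return math.exp(-2 * alpha * width_m)

# Example: a 1 eV electron and a 5 eV barrier, 1 nm wide (illustrative numbers).
t = transmission_probability(1e-9, 5.0, 1.0)
```

Consistent with the text, letting the width go to zero drives the probability to 1, while widening or raising the barrier suppresses it exponentially.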
Contributors and Attributions
• Ryan Bunk
WKB expansion for a fractional Schrödinger equation with applications to controllability
Authors: - 05 April 2018
In [3], we develop a WKB analysis for the propagation of the solutions to the following one-dimensional nonlocal Schrödinger equation
with highly oscillatory initial datum
In (\ref{main_eq}), $\ffl{s}{}$ is the fractional Laplacian, defined for all $s\in(0,1)$ and for any function $f$ sufficiently smooth as the following singular integral
$$ \ffl{s}{f}(x) = \ccs\,\mathrm{P.V.}\int_{\RR}\frac{f(x)-f(y)}{|x-y|^{1+2s}}\,dy, $$
with $\ccs$ a normalization constant given by
$$ \ccs = \frac{s\,2^{2s}\,\Gamma\!\left(\frac{1+2s}{2}\right)}{\sqrt{\pi}\,\Gamma(1-s)}, $$
where $\Gamma$ is the usual Gamma function. The parameter $\varepsilon$ in (\ref{main_eq}) and in (\ref{in_dat}) represents the fast space and time scale introduced in the equation, as well as the typical wavelength of oscillations of the initial data. Moreover, we will assume the initial phase $u_{\textrm{\small in}}$ to be an $L^2(\RR)$ function, so that we have $u_0\in L^2(\RR)$.
Our study is motivated by control problems. Indeed, the well-known boundary controllability and identifiability properties of solutions of wave-like equations hold because of the fact that the energy of solutions is driven by characteristics that reach the boundary where the controllers or observers are placed. This, in particular, allows the so-called observability of the solutions (namely, the possibility to obtain estimates of the total energy in terms of the energy concentrated on the support of the control along time), which is by now known to be equivalent to control properties. In the framework of wave-like processes, observability is possible if and only if the geometric control condition (GCC), requiring all rays of geometric optics to enter the control region during the control time, holds ([1]).
In the particular case under analysis, the construction that we obtain is then applied to the study of controllability properties for the one-dimensional fractional Schrödinger equation
where $\omega$ is a neighborhood of the boundary of the space domain $(-1,1)$. In particular, it is possible to confirm the following facts, proved in [2]:
• For $s> 1/2$, null controllability holds in any finite time $T>0$. In other words, given any $u_0\in L^2(-1,1)$ there exists a control function $g\in L^2(\omega\times(0,T))$ such that the solution to (\ref{schr_control}) satisfies $u(x,T)=0$.
• For $s=1/2$, the same result holds if we assume the controllability time $T$ to be large enough, i.e $T\geq T_0>0$.
• For $ s< 1/2 $, the equation (\ref{schr_control}) is not null controllable.
The above facts are related to the well-known property that the solutions to wave-like equations propagate along the so-called rays of geometric optics. These rays are nothing more than the projection to the physical time-space of the null bicharacteristic, solutions to the Hamiltonian system associated to the equation.
In our case, since $\mathcal{P}_s=i\partial_t+\ffl{s}{}$ is a pseudo-differential operator with symbol $ p_s(x,t,\xi,\tau) = \tau - |\xi|^{2s} $ , the Hamiltonian system is given by
Moreover, without losing generality we may assume $t_0=0$. Then, (\ref{char_syst}) can be solved explicitly, and we obtain the following expressions for the bicharacteristics
In particular, the rays of $ \mathcal{P}_s $ are given by the curves
Notice that, as one expects since the operator has constant coefficients, these rays are straight lines.
The approach that we use for building localized solutions is quite standard. In particular, we look for quasi-solutions to (\ref{main_eq}) by introducing the ansatz
where the normalization constant $\varepsilon^s$ is chosen by requiring that the function $\ue{u}$ has $H^s(\RR)$-norm of the order $\mathcal O(1)$. The identification of the $a_j$-s is then carried out by imposing
thus obtaining a series of PDEs in which it is possible to clearly separate the leading order terms, with respect to $\varepsilon$, from several remainders which vanish as $\varepsilon\to 0$. This generates a cascade system for the functions $a_j$, which can then be determined as the solutions of certain given partial differential equations. In our case, the cascade system is the following one
with $\tau:=\varepsilon^{\frac 32 s}t$ and where $\mathcal{D}^{\,\beta}$ denotes the following fractional derivative of order $\beta$
Moreover, (\ref{cascade_system}) is uniquely solvable with initial conditions imposed at $\tau = 0$ and this, of course, allows one to identify the expressions of the functions $a_j$. See [3] for more details.
Now, it is possible to show that the quasi-solutions that can be computed by using the ansatz (\ref{ansatz}) are in fact localized along rays. In more detail, we have the following result
Theorem 1. Let $u_{\textrm{\small in}}\in L^2(\RR)$ and let $\ue{u}$ be constructed employing the expansion (\ref{ansatz}). Then, for any $\varepsilon>0$ we have:
1. The functions $\ue{u}$ are approximate solutions to (\ref{main_eq}): $$ \begin{align*} \norm{u_0(x)-\ue{u}(x,0)}{L^2(\RR)} = \mathcal{O}(\varepsilon^{\frac{1}{2}}), \end{align*} $$ $$ \begin{align*} \norm{u(x,t)-\ue{u}(x,t)}{L^2(\RR)} = \mathcal{O}(\varepsilon^{\frac 12}). \end{align*} $$
2. The initial energy of $\ue{u}$ remains bounded as $\varepsilon\to 0$, i.e. $$ \begin{align*} \norm{\ue{u}(x,0)}{H^s(\RR)}^2 \approx 1. \end{align*} $$
3. The energy of $\ue{u}$ is exponentially small off the ray $(t,x(t))$: $$ \begin{align*} \int_{|x-x(t)|>\varepsilon^{\frac 14}} \left|\ffl{\frac s2}{\ue{u}}(x,t)\right|^2\,dx = \mathcal O(\varepsilon^{\frac 14}). \end{align*} $$
Since the quasi-solutions $\ue{u}$ to our original system are concentrated along the rays, they propagate with the group velocity of the plane wave solutions, which can be computed as follows. First of all, rewrite
Hence, $ v:= \vert x/t \vert $ is obtained solving the equation
from which we immediately find that
Moreover, this group velocity can be analyzed in terms of $s$ and of the frequency $\xi_0$. Firstly, we immediately see that for $s=1/2$ we have
i.e. the velocity is constant and independent of the frequency $\xi_0$. For $s\in(0,1/2)$, instead, we have that $1-2s>0$. Hence, taking $\varepsilon<1$, we easily get
Finally, for $s\in(1/2,1)$ the situation is the opposite. We have $1-2s<0$ and, for $\varepsilon<1$,
In view of these behaviors, we can conclude that:
• For $s> 1/2$, the group velocity increases with the frequency.
• For $s=1/2$, the group velocity remains constant.
• For $s<1/2$, the group velocity decreases with the frequency.
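These three regimes can be checked numerically. The sketch below (illustrative only; the two sample frequencies are arbitrary choices, not values from the paper) evaluates the group velocity $2s|\xi|^{2s-1}$ of the dispersion relation $\omega(\xi)=|\xi|^{2s}$ at a low and a high frequency:

```python
import math

def group_velocity(xi, s):
    """Group velocity d(omega)/d(xi) of omega(xi) = |xi|^(2s),
    i.e. 2s|xi|^(2s-1) carrying the sign of xi."""
    return 2 * s * abs(xi) ** (2 * s - 1) * math.copysign(1.0, xi)

def ray(x0, xi0, s, t):
    """Rays of geometric optics are straight lines: x(t) = x0 + v_g(xi0) * t."""
    return x0 + group_velocity(xi0, s) * t

# Compare |v_g| at a low and a high frequency for the three regimes of s.
low, high = 1.0, 10.0
regimes = {s: abs(group_velocity(high, s)) - abs(group_velocity(low, s))
           for s in (0.1, 0.5, 0.9)}
# regimes[s] > 0 means the high-frequency ray travels faster.
```

For $s=0.9$ the difference is positive, for $s=0.5$ it vanishes (constant unit speed), and for $s=0.1$ it is negative, matching the three bullet points above.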
Therefore, the high-frequency solutions travel faster and faster for $s>1/2$, and slower and slower for $s<1/2$. This behavior then has consequences from the point of view of observation properties for the solutions. In more detail,
• For $s>1/2$, the velocity of propagation of the rays allow them to be observable in any finite time $T>0$.
• For $s=1/2$, the velocity of propagation being constant, a minimum observation time $T_0$ is needed.
• For $s<1/2$, the high frequency rays may not reach the control region, thus implying the failure of controllability properties.
This, in particular, confirms the already known results presented in [2].
In what follows, we present some simulations which show the propagation of solutions to the fractional Schrödinger equation (\ref{main_eq}) corresponding to initial data in the form (\ref{in_dat}).
For the numerical resolution of the equation, we employed a uniform mesh in the space variable and a FE discretization of the fractional Laplacian, obtained following the methodology presented in [4]. Moreover, we used a Crank–Nicolson scheme in time, which is known to be stable for the Schrödinger equation. The initial data $u_0$ has been chosen as
where the profile $u_{\textrm{in}}(x)$ is given by a Gaussian with standard deviation measured in terms of the parameter $\gamma$, which is related to the mesh size $h$. In particular we chose $\gamma = h^{0.9}$. Finally, for the oscillations we considered frequencies $\xi_0=\pi^2/16$ and $\xi_0=2\pi^2$.
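As a rough illustration of the time stepping (a hedged sketch: it uses a periodic spectral fractional Laplacian rather than the FE discretization of [4], and the grid size, time step and datum parameters are arbitrary), one Crank–Nicolson step for $i u_t = (-\Delta)^s u$ can be written as:

```python
import numpy as np

def crank_nicolson_step(u, dt, s, L=2.0):
    """One Crank-Nicolson step for i u_t = (-Delta)^s u on a periodic grid.
    With a Fourier fractional Laplacian the step is diagonal in frequency
    space, and the amplification factor has modulus one (norm-preserving)."""
    n = u.size
    xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # angular Fourier frequencies
    lam = np.abs(xi) ** (2 * s)                  # symbol of (-Delta)^s
    u_hat = np.fft.fft(u)
    # (1 + i dt/2 lam) u^{n+1} = (1 - i dt/2 lam) u^n, solved per frequency
    u_hat *= (1 - 0.5j * dt * lam) / (1 + 0.5j * dt * lam)
    return np.fft.ifft(u_hat)

x = np.linspace(-1, 1, 256, endpoint=False)
u = np.exp(-100 * x**2) * np.exp(1j * 2 * np.pi**2 * x)  # oscillatory Gaussian
v = crank_nicolson_step(u, dt=1e-3, s=0.5)
```

The $L^2$ norm of the discrete solution is conserved up to round-off, which reflects the unconditional stability of the scheme mentioned above.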
In Video 1, we show the plots for $\xi_0=2\pi^2$ and different values of $s\in(0,1)$. The space domain has been chosen to be the interval $(-1,1)$, while we considered a time interval of $5$ seconds.
Video 1. Propagation of the solution for $\xi_0 = 2\pi^2$ and different values of $s$.
It is seen there that, for small values of $s$, say $s=0.1$, the solution remains concentrated along rays which propagate only in the vertical direction. In other words, there is no propagation in space and, as we mentioned before, this implies that it will not be possible to control these solutions, no matter how one places the controls. For $s=0.5$, instead, the plots show that the solutions propagate along rays which reach the boundary of the space domain in finite time and are reflected according to the laws of optics. This means that, provided the time is large enough, it will be possible to control these solutions, acting with a control distributed in a neighbourhood $\omega$ of the boundary. The case of high values of the power $s$ of the fractional Laplacian is the most puzzling one. For instance, for $s=0.9$ our plots seem to show a loss of concentration of the solution along the ray, while our theoretical results would suggest that this concentration is preserved. On the other hand, we believe that what the simulations are showing is not in contradiction with the theory. In our opinion, it is only a numerical effect, which has two possible interpretations.
First of all, for $s>1/2$, the velocity of propagation of the solutions is increasing. As a consequence, in the time framework that we are considering, the waves reach the boundary and are reflected many more times than in the case $s=1/2$. This large number of reflections is quite hard to distinguish clearly at the discrete level, even when working with very fine meshes (for our simulations, for instance, we are considering a mesh of $500$ points, corresponding to a step $h$ of the order of $10^{-3}$).
A second possible interpretation is that this strange phenomenon appearing in the plot can be explained with the accumulation of higher order terms in the asymptotic expansion of $\ue{z}$ which, combined with the small size of the space interval considered, enhance a chaotic behavior. This interpretation is supported by the fact that, as it is shown in Video 3, enlarging the space domain up to $(-6,6)$ it is possible to appreciate again the localization of the solution along the rays.
In Video 2, the simulations have been run with an initial datum with frequency $\xi_0=\pi^2/16$. The plots obtained show a behavior which is entirely analogous to what is observed in Video 1:
• For $s=0.1$, the solutions are once again concentrated along vertical rays, without propagation in space and, therefore, without possibility of being controlled.
• For $s=0.5$, we have propagation with constant velocity, and the ray reaches the boundary in finite time.
• For $s=0.9$ the chaotic behavior is still present.
Video 2. Propagation of the solution for $\xi_0 = \pi^2/16$ and different values of $s$.
Once again, the most surprising case is the last one, for $s>0.5$, in which the simulations seem to display dispersive features. Nevertheless, as we mentioned before, we maintain that this does not contradict the theoretical results, and we think that this chaotic behavior is purely a numerical effect generated by the small size of the space interval $(-1,1)$, by the high velocity of propagation of the solutions, and by the accumulation of high order terms in the asymptotic expansion. In fact, we can observe that enlarging the space domain up to $(-6,6)$ seems to fix the problem, and the localization of the solution along the rays appears once again (see Video 3). This, in our opinion, justifies our interpretation of the phenomenon.
Video 3. Propagation of the solution for $s=0.9$ on the space interval $(-6,6)$.
[1] Bardos, C., Lebeau, G., and Rauch, J. Sharp sufficient conditions for the observation, control, and stabilization of waves from the boundary. SIAM J. Control Optim. 30, 5 (1992), 1024–1065.
[2] Biccari, U. Internal control for non-local Schrödinger and wave equations involving the fractional Laplace operator. ESAIM: COCV, to appear (2018).
[3] Biccari, U., and Aceves, A. B. WKB expansion for a fractional Schrödinger equation with applications to controllability. Submitted (2018).
[4] Biccari, U., and Hernández-Santamaría, V. Controllability of a one-dimensional fractional heat equation: theoretical and numerical aspects. HAL preprint hal-01562358 (2017).
Spanning the gap from Schrödinger to Newton by deriving a classical force field from quantum mechanical training data for zeolites
25237 / Model and software development
Promotor(en): L. Vanduyfhuys / Begeleider(s): L. Vanduyfhuys, M. Bocus
Background and problem
Most molecular simulations currently in use rely on the Born-Oppenheimer approximation. Herein, one first solves the electronic Schrödinger equation and constructs so-called Born-Oppenheimer surfaces (also called potential energy surfaces or PES) representing the potential energy as a function of the nuclear coordinates. Next, this PES is used to solve the nuclear Schrödinger equation. However, for molecular systems consisting of heavy nuclei, one can approximate the dynamics of the nuclei by means of classical mechanics, in other words by solving Newton's equations of motion. Even though this represents a huge reduction in complexity, molecular simulations using the quantum mechanical PES are still beyond the capabilities of current supercomputers when investigating large molecular systems (> 1000 atoms) and tracing their dynamic evolution over long periods of time (> 1 ns). In order to make such large length and time scales accessible, one can apply an additional approximation, which is represented by the titular force fields.
When using a force field in a molecular simulation, one does not compute the potential energy and forces corresponding to a particular point on the PES by solving the electronic Schrödinger equation. Instead, the PES is locally approximated by means of a well-chosen mathematical expression. Such expression starts from the many-body expansion of the potential energy V in a sum of two-body, three-body, four-body, … contributions. To further illustrate this, we consider one of the most simple, yet most widely used, expressions for a force field as given below:
The first contribution is a 2-body term representing chemical bonds between neighboring atoms by means of harmonic springs using Hooke's law (see figure 1). The bond lengths figuring in this expression depend on the nuclear coordinates R. The second term is a 3-body contribution as a function of the angle θ between two neighboring bonds (which again depends on the nuclear coordinates R). The third term is a 4-body contribution in terms of the dihedral angles. Together, the first three contributions represent the covalent energy describing the intramolecular interactions. The last two contributions cover the intermolecular interactions, consisting of electrostatic interactions between fixed atomic charges (fourth term) and van der Waals interactions by means of the Lennard-Jones potential (fifth term). All these contributions contain parameters that are a priori unknown, such as the equilibrium bond lengths r0, the force constants Kr, … As such, the development of a new force field boils down to choosing the appropriate mathematical form as well as estimating the best suited values for the unknown parameters.
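A minimal sketch of how such an expression is evaluated in practice is given below (illustrative Python only: the functional forms follow the description above, but all parameter values are invented placeholders, not fitted force-field parameters):

```python
import math

def harmonic(k, x, x0):
    """Generic harmonic term K*(x - x0)^2, used for bond and bend contributions."""
    return k * (x - x0) ** 2

def lennard_jones(eps, sigma, r):
    """Van der Waals interaction: 4*eps*((sigma/r)^12 - (sigma/r)^6)."""
    sr6 = (sigma / r) ** 6
    return 4 * eps * (sr6 ** 2 - sr6)

def coulomb(qi, qj, r, ke=1.0):
    """Electrostatics between fixed atomic charges (ke sets the unit system)."""
    return ke * qi * qj / r

# Toy evaluation for a single Si-O bond, one Si-O-Si bend and one pair
# interaction; every number here is a made-up placeholder.
energy = (
    harmonic(k=450.0, x=1.65, x0=1.60)                              # 2-body bond
    + harmonic(k=50.0, x=math.radians(150), x0=math.radians(145))   # 3-body bend
    + coulomb(qi=1.2, qj=-0.6, r=3.2)                               # electrostatics
    + lennard_jones(eps=0.15, sigma=3.0, r=3.2)                     # van der Waals
)
```

Fitting a force field then amounts to choosing the parameters (k, x0, charges, eps, sigma, …) so that such an energy expression reproduces the quantum mechanical training data, which is exactly what Horton and QuickFF automate.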
Figure 1: Illustration of the potential energy contribution of harmonic bond terms in a force field (right) associated with chemical bonds in a molecular structure (left).
At the Center for Molecular Modeling, we have developed a state-of-the-art methodology to derive new force fields [1,2,3]. This approach, which is implemented in the software packages Horton [4] and QuickFF [5], estimates the unknown parameters such as covalent force constants and atomic charges in such a way that they reproduce well-chosen quantum mechanical input data as well as possible. This methodology has been well tested for various nanoporous crystalline materials such as covalent organic frameworks (COFs), which are purely organic, and metal organic frameworks (MOFs), which are hybrid organic-inorganic. However, it has not yet been applied to construct force fields for the purely inorganic nanoporous materials known as zeolites. Zeolites, depicted in the figure below, are nanoporous (i.e. containing empty voids on the nanoscale) aluminosilicate (i.e. consisting of Al, Si and O) minerals (i.e. pure crystalline materials that can be found in nature), which have applications in a wide array of fields including cat litter, softening of water, refinement of crude oil and chemical catalysis. One of the known difficulties of developing zeolite force fields is the treatment of the electrostatic interactions as well as an accurate representation of the floppy Si-O-Si bend terms.
Figure 2: Illustration of the nanoporous molecular structure of the zeolite ZSM-5. Yellow balls correspond to Si, red balls to oxygen.
In this thesis, the student will derive force fields for well-characterized zeolite frameworks such as MFI, FER, AFI, BEA, CHA, … for which experimental and theoretical data are available. Furthermore, for frameworks containing relatively few atoms (e.g. CHA with 108 atoms) a comparison with periodically generated ab initio data will be performed. More specifically, the student will first become familiar with the methodology using Horton and QuickFF. As such, (s)he will derive force fields for the previously mentioned zeolites and perform various molecular simulations that allow one to compute the structural (equilibrium geometry), mechanical (bulk modulus), thermal (thermal expansion coefficient) and adsorption properties (water and methanol uptake). By comparing with experimental and/or computational data from literature, the force field will be validated and possible weaknesses can be identified. Finally, the student will be encouraged to suggest possible improvements to the force field to allow for a better validation.
1. Study programme
Master of Science in Engineering Physics [EMPHYS], Master of Science in Physics and Astronomy [CMFYST]
Quantum mechanics, Born-Oppenheimer surface, potential energy, force field, electrostatic interactions, charges
[1] L. Vanduyfhuys et al., J. Comput. Chem., 2018, 39(16), 999
[2] L. Vanduyfhuys et al., J. Comput. Chem., 2015, 36(13), 1015
[3] T. Verstraelen et al., J. Chem. Theory Comput., 2016, 12(8), 3894
[4] HORTON,
[5] QuickFF,
Who was Max Planck?
Early Life and Education:
Black Body Radiation:
Quantum Mechanics:
World War I and World War II:
Death and Legacy:
What Is Bohr’s Atomic Model?
Atomic theory has come a long way over the past few thousand years. Beginning in the 5th century BCE with Democritus' theory of indivisible "corpuscles" that interact with each other mechanically, then moving on to Dalton's atomic model in the early 19th century, and then maturing in the 20th century with the discovery of subatomic particles and quantum theory, the journey of discovery has been long and winding.
Arguably, one of the most important milestones along the way has been Bohr's atomic model, which is sometimes referred to as the Rutherford-Bohr atomic model. Proposed by Danish physicist Niels Bohr in 1913, this model depicts the atom as a small, positively charged nucleus surrounded by electrons that travel in circular orbits (defined by their energy levels) around the center.
Atomic Theory to the 19th Century:
Discovery of the Electron:
By the late 19th century, scientists also began to theorize that the atom was made up of more than one fundamental unit. However, most scientists ventured that this unit would be the size of the smallest known atom – hydrogen. By the end of the 19th century, this would change drastically, thanks to research conducted by scientists like Sir Joseph John Thomson.
Through a series of experiments using cathode ray tubes (known as Crookes' tubes), Thomson observed that cathode rays could be deflected by electric and magnetic fields. He concluded that rather than being composed of light, they were made up of negatively charged particles that were 1,000 times smaller and 1,800 times lighter than hydrogen.
The Plum Pudding model of the atom proposed by J.J. Thomson. Credit: britannica.com
This effectively disproved the notion that the hydrogen atom was the smallest unit of matter, and Thomson went further to suggest that atoms were divisible. To explain the overall charge of the atom, which consisted of both positive and negative charges, Thomson proposed a model whereby the negatively charged “corpuscles” were distributed in a uniform sea of positive charge – known as the Plum Pudding Model.
The Rutherford Model:
Subsequent experiments revealed a number of scientific problems with the Plum Pudding model. For starters, there was the problem of demonstrating that the atom possessed a uniform positive background charge, which came to be known as the “Thomson Problem”. Five years later, the model would be disproved by Hans Geiger and Ernest Marsden, who conducted a series of experiments using alpha particles and gold foil – aka. the “gold foil experiment.”
Diagram detailing the “gold foil experiment” conducted by Hans Geiger and Ernest Marsden. Credit: glogster.com
The Bohr Model:
In addition, Bohr’s model refined certain elements of the Rutherford model that were problematic. These included the problems arising from classical mechanics, which predicted that electrons would release electromagnetic radiation while orbiting a nucleus. Because of the loss in energy, the electron should have rapidly spiraled inwards and collapsed into the nucleus. In short, this atomic model implied that all atoms were unstable.
Diagram of an electron dropping from a higher orbital to a lower one and emitting a photon. Image Credit: Wikicommons
The model also predicted that as electrons spiraled inward, their emission would rapidly increase in frequency as the orbit got smaller and faster. However, experiments with electric discharges in the late 19th century showed that atoms only emit electromagnetic energy at certain discrete frequencies.
Bohr resolved this by proposing that electrons orbit the nucleus in ways that are consistent with Planck's quantum theory of radiation. In this model, electrons can occupy only certain allowed orbitals with a specific energy. Furthermore, they can only gain and lose energy by jumping from one allowed orbit to another, absorbing or emitting electromagnetic radiation in the process.
These orbits were associated with definite energies, which he referred to as energy shells or energy levels. In other words, the energy of an electron inside an atom is not continuous, but “quantized”. These levels are thus labeled with the quantum number n (n=1, 2, 3, etc.), which he claimed could be determined using the Rydberg formula – a rule formulated in 1888 by Swedish physicist Johannes Rydberg to describe the wavelengths of spectral lines of many chemical elements.
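To see the Rydberg formula in action, here is a minimal Python sketch; the value used for the Rydberg constant of hydrogen is the modern tabulated one, not a number from the original article:

```python
# The Rydberg formula: 1/wavelength = R * (1/n1**2 - 1/n2**2), for an electron
# dropping from level n2 down to level n1 of a hydrogen atom.
R_H = 1.0967758e7  # Rydberg constant for hydrogen, in 1/m (modern value)

def rydberg_wavelength_nm(n1, n2):
    """Wavelength in nanometres of the photon emitted in the n2 -> n1 jump."""
    inv_wavelength = R_H * (1.0 / n1**2 - 1.0 / n2**2)
    return 1e9 / inv_wavelength

# The n = 3 -> n = 2 jump is the red H-alpha line of the Balmer series (~656 nm);
# n = 2 -> n = 1 is the ultraviolet Lyman-alpha line (~122 nm).
print(rydberg_wavelength_nm(2, 3))
print(rydberg_wavelength_nm(1, 2))
```

These two numbers are exactly the kind of discrete spectral lines that Bohr's quantized orbits were invented to explain.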
Influence of the Bohr Model:
While Bohr’s model did prove to be groundbreaking in some respects – merging Rydberg’s constant and Planck’s constant (aka. quantum theory) with the Rutherford Model – it did suffer from some flaws which later experiments would illustrate. For starters, it assumed that electrons have both a known radius and orbit, something that Werner Heisenberg would disprove a decade later with his Uncertainty Principle.
In addition, while it was useful for predicting the behavior of electrons in hydrogen atoms, Bohr’s model was not particularly useful in predicting the spectra of larger atoms. In these cases, where atoms have multiple electrons, the energy levels were not consistent with what Bohr predicted. The model also didn’t work with neutral helium atoms.
The Bohr model also could not account for the Zeeman Effect, a phenomenon noted by Dutch physicist Pieter Zeeman in 1896, where spectral lines are split into two or more in the presence of an external, static magnetic field. Because of this, several refinements were attempted with Bohr’s atomic model, but these too proved to be problematic.
In the end, this would lead to Bohr’s model being superseded by quantum theory – consistent with the work of Heisenberg and Erwin Schrodinger. Nevertheless, Bohr’s model remains useful as an instructional tool for introducing students to more modern theories – such as quantum mechanics and the valence shell atomic model.
It would also prove to be a major milestone in the development of the Standard Model of particle physics, a model characterized by “electron clouds”, elementary particles, and uncertainty.
We have written many interesting articles about atomic theory here at Universe Today. Here’s John Dalton’s Atomic Model, What is the Plum Pudding Model, What is the Electron Cloud Model?, Who Was Democritus?, and What are the Parts of the Atom?
How Does Light Travel?
Ever since Democritus – a Greek philosopher who lived between the 5th and 4th centuries BCE – argued that all of existence was made up of tiny indivisible atoms, scientists have been speculating as to the true nature of light. Whereas scientists ventured back and forth between the notion that light was a particle or a wave until the modern era, the 20th century led to breakthroughs that showed us that it behaves as both.
These included the discovery of the electron, the development of quantum theory, and Einstein’s Theory of Relativity. However, there remain many unanswered questions about light, many of which arise from its dual nature. For instance, how is it that light can apparently be without mass, but still behave as a particle? And how can it behave like a wave and pass through a vacuum, when all other waves require a medium to propagate?
Theory of Light to the 19th Century:
During the Scientific Revolution, scientists began moving away from Aristotelian scientific theories that had been seen as accepted canon for centuries. This included rejecting Aristotle’s theory of light, which viewed it as being a disturbance in the air (one of his four “elements” that composed matter), and embracing the more mechanistic view that light was composed of indivisible atoms.
In many ways, this theory had been previewed by atomists of Classical Antiquity – such as Democritus and Lucretius – both of whom viewed light as a unit of matter given off by the sun. By the 17th century, several scientists emerged who accepted this view, stating that light was made up of discrete particles (or “corpuscles”). This included Pierre Gassendi, a contemporary of René Descartes, Thomas Hobbes, Robert Boyle, and most famously, Sir Isaac Newton.
The first edition of Newton’s Opticks: or, a treatise of the reflexions, refractions, inflexions and colours of light (1704). Credit: Public Domain.
Newton’s corpuscular theory was an elaboration of his view of reality as an interaction of material points through forces. This theory would remain the accepted scientific view for more than 100 years, the principles of which were explained in his 1704 treatise “Opticks, or, a Treatise of the Reflections, Refractions, Inflections, and Colours of Light“. According to Newton, the principles of light could be summed as follows:
• These corpuscles are perfectly elastic, rigid, and weightless.
This represented a challenge to “wave theory”, which had been advocated by 17th century Dutch astronomer Christiaan Huygens. These theories were first communicated in 1678 to the Paris Academy of Sciences and were published in 1690 in his Traité de la lumière (“Treatise on Light”). In it, he argued a revised version of Descartes’ views, in which the speed of light is infinite and propagated by means of spherical waves emitted along the wave front.
Double-Slit Experiment:
By the early 19th century, scientists began to break with corpuscular theory. This was due in part to the fact that corpuscular theory failed to adequately explain the diffraction, interference and polarization of light, but was also because of various experiments that seemed to confirm the still-competing view that light behaved as a wave.
The most famous of these was arguably the Double-Slit Experiment, which was originally conducted by English polymath Thomas Young in 1801 (though Sir Isaac Newton is believed to have conducted something similar in his own time). In Young’s version of the experiment, he used a slip of paper with slits cut into it, and then pointed a light source at them to measure how light passed through them.
According to classical (i.e. Newtonian) particle theory, the results of the experiment should have corresponded to the slits, the impacts on the screen appearing in two vertical lines. Instead, the results showed that the coherent beams of light were interfering, creating a pattern of bright and dark bands on the screen. This contradicted classical particle theory, in which particles do not interfere with each other, but merely collide.
The only possible explanation for this pattern of interference was that the light beams were in fact behaving as waves. Thus, this experiment dispelled the notion that light consisted of corpuscles and played a vital part in the acceptance of the wave theory of light. However subsequent research, involving the discovery of the electron and electromagnetic radiation, would lead to scientists considering yet again that light behaved as a particle too, thus giving rise to wave-particle duality theory.
Electromagnetism and Special Relativity:
Prior to the 19th and 20th centuries, the speed of light had already been determined. The first recorded measurement was performed by Danish astronomer Ole Rømer, who in 1676 used timings of eclipses of Jupiter’s moon Io to show that light travels at a finite speed (rather than instantaneously).
Prof. Albert Einstein delivering the 11th Josiah Willard Gibbs lecture at the meeting of the American Association for the Advancement of Science on Dec. 28th, 1934. Credit: AP Photo
By the late 19th century, James Clerk Maxwell proposed that light was an electromagnetic wave, and devised several equations (known as Maxwell’s equations) to describe how electric and magnetic fields are generated and altered by each other and by charges and currents. By conducting measurements of different types of radiation (magnetic fields, ultraviolet and infrared radiation), he was able to calculate the speed of light in a vacuum (represented as c).
In 1905, Albert Einstein published “On the Electrodynamics of Moving Bodies”, in which he advanced one of his most famous theories and overturned centuries of accepted notions and orthodoxies. In his paper, he postulated that the speed of light was the same in all inertial reference frames, regardless of the motion of the light source or the position of the observer.
Exploring the consequences of this theory is what led him to propose his theory of Special Relativity, which reconciled Maxwell’s equations for electricity and magnetism with the laws of mechanics, simplified the mathematical calculations, and accorded with the directly observed speed of light and accounted for the observed aberrations. It also demonstrated that the speed of light had relevance outside the context of light and electromagnetism.
For one, it introduced the idea that major changes occur when things move close to the speed of light, including the time-space frame of a moving body appearing to slow down and contract in the direction of motion when measured in the frame of the observer. After centuries of increasingly precise measurements, the speed of light was determined to be 299,792,458 m/s in 1975.
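The "slowing down" near light speed is governed by the Lorentz factor, γ = 1/√(1 − v²/c²). A small sketch makes the scale of the effect concrete:

```python
import math

c = 299792458.0  # speed of light in m/s

def lorentz_gamma(v):
    """Time-dilation / length-contraction factor for a body moving at speed v."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# At 10% of light speed the effect is about half a percent; at 99% a clock
# runs roughly seven times slow and lengths contract by the same factor.
print(lorentz_gamma(0.10 * c))
print(lorentz_gamma(0.99 * c))
```

This is why relativistic corrections are invisible in everyday life but unavoidable in particle accelerators.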
Einstein and the Photon:
In 1905, Einstein also helped to resolve a great deal of confusion surrounding the behavior of electromagnetic radiation when he proposed that electrons are emitted from atoms when they absorb energy from light. Known as the photoelectric effect, Einstein based his idea on Planck’s earlier work with “black bodies” – materials that absorb electromagnetic energy instead of reflecting it (as opposed to white bodies).
At the time, Einstein’s photoelectric effect was an attempt to explain the “black body problem”, in which a black body emits electromagnetic radiation due to the object’s heat. This was a persistent problem in the world of physics, arising from the discovery of the electron, which had happened only eight years earlier (thanks to British physicists led by J.J. Thomson and experiments using cathode ray tubes).
At the time, scientists still believed that electromagnetic energy behaved as a wave, and were therefore hoping to be able to explain it in terms of classical physics. Einstein’s explanation represented a break with this, asserting that electromagnetic radiation behaved in ways that were consistent with a particle – a quantized form of light which he named “photons”. For this discovery, Einstein was awarded the Nobel Prize in 1921.
Wave-Particle Duality:
Subsequent theories on the behavior of light would further refine this idea, which included French physicist Louis-Victor de Broglie calculating the wavelength at which light functioned. This was followed by Heisenberg’s “uncertainty principle” (which stated that measuring the position of a photon accurately would disturb measurements of its momentum and vice versa), and Schrödinger’s paradox that claimed that all particles have a “wave function”.
In the quantum mechanical explanation, Schrödinger proposed that all the information about a particle (in this case, a photon) is encoded in its wave function, a complex-valued function roughly analogous to the amplitude of a wave at each point in space. At some location, the measurement of the wave function will randomly “collapse”, or rather “decohere”, to a sharply peaked function. This was illustrated in Schrödinger’s famous paradox involving a closed box, a cat, and a vial of poison (known as the “Schrödinger’s Cat” paradox).
In this illustration, one photon (purple) carries a million times the energy of another (yellow). Some theorists predict travel delays for higher-energy photons, which interact more strongly with the proposed frothy nature of space-time. Yet Fermi data on two photons from a gamma-ray burst fail to show this effect. Credit: NASA/Sonoma State University/Aurore Simonnet
According to his theory, the wave function also evolves according to a differential equation (aka. the Schrödinger equation). For particles with mass, this equation has solutions; but for particles with no mass, no solution exists. Further experiments involving the Double-Slit Experiment confirmed the dual nature of photons, where measuring devices were incorporated to observe the photons as they passed through the slits.
When this was done, the photons appeared in the form of particles and their impacts on the screen corresponded to the slits – tiny particle-sized spots distributed in straight vertical lines. By placing an observation device in place, the wave function of the photons collapsed and the light behaved as classical particles once more. As predicted by Schrödinger, this could only be resolved by claiming that light has a wave function, and that observing it causes the range of behavioral possibilities to collapse to the point where its behavior becomes predictable.
The development of Quantum Field Theory (QFT) was devised in the following decades to resolve much of the ambiguity around wave-particle duality. And in time, this theory was shown to apply to other particles and fundamental forces of interaction (such as weak and strong nuclear forces). Today, photons are part of the Standard Model of particle physics, where they are classified as boson – a class of subatomic particles that are force carriers and have no mass.
So how does light travel? Basically, it travels at an incredible speed (299,792,458 m/s) and at different wavelengths, depending on its energy. It also behaves as both a wave and a particle, able to propagate through mediums (like air and water) as well as space. It has no mass, but can still be absorbed, reflected, or refracted if it comes in contact with a medium. And in the end, the only thing that can truly divert it, or arrest it, is gravity (i.e. a black hole).
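The link between a photon's wavelength and its energy is E = hc/λ, which a few lines make concrete; the constants are the standard CODATA values, not figures from the article:

```python
# A photon's energy is set by its wavelength: E = h * c / wavelength.
h = 6.62607015e-34    # Planck's constant, J*s
c = 299792458.0       # speed of light, m/s
eV = 1.602176634e-19  # one electron volt, in joules

def photon_energy_eV(wavelength_m):
    return h * c / wavelength_m / eV

# A red-light photon (~656 nm) carries about 2 eV; an X-ray photon (~0.1 nm)
# carries thousands of times more energy.
print(photon_energy_eV(656e-9))
print(photon_energy_eV(0.1e-9))
```

Shorter wavelength means more energy per photon, which is the quantitative content of "different wavelengths, depending on its energy".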
What we have learned about light and electromagnetism has been intrinsic to the revolution which took place in physics in the early 20th century, a revolution that we have been grappling with ever since. Thanks to the efforts of scientists like Maxwell, Planck, Einstein, Heisenberg and Schrodinger, we have learned much, but still have much to learn.
For instance, its interaction with gravity (along with weak and strong nuclear forces) remains a mystery. Unlocking this, and thus discovering a Theory of Everything (ToE) is something astronomers and physicists look forward to. Someday, we just might have it all figured out!
We have written many articles about light here at Universe Today. For example, here’s How Fast is the Speed of Light?, How Far is a Light Year?, What is Einstein’s Theory of Relativity?
If you’d like more info on light, check out these articles from The Physics Hypertextbook and NASA’s Mission Science page.
What Are The Parts Of An Atom?
How Do Black Holes Evaporate?
The LHC. Image Credit: CERN
Proton Parts
The proton has three parts, two up quarks and one down quark … and the gluons which these three quarks exchange, which is how the strong (nuclear) force binds them together.
The proton’s world is a totally quantum one, and so it is described entirely by just a handful of numbers, characterizing its spin (a technical term, not to be confused with the everyday English word; the proton’s spin is 1/2), electric charge (+1 e, or 1.602176487(40)×10⁻¹⁹ C), isospin (also 1/2), and parity (+1). These properties are derived directly from those of the proton parts, the three quarks; for example, the up quark has an electric charge of +2/3 e, and the down −1/3 e, which sum to +1 e. Another example, color charge: the proton has a color charge of zero, but each of its constituent three quarks has a non-zero color charge – one is ‘blue’, one ‘red’, and one ‘green’ – which ‘sum’ to zero (of course, color charge has nothing whatsoever to do with the colors you and I see with our eyes!).
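The charge bookkeeping is easy to verify with exact fractions:

```python
from fractions import Fraction

# Quark electric charges, in units of the elementary charge e.
up = Fraction(2, 3)
down = Fraction(-1, 3)

proton_charge = up + up + down     # uud: two ups and a down
neutron_charge = up + down + down  # udd: the same bookkeeping gives zero

print(proton_charge, neutron_charge)
```

The same three fractions explain why the neutron, with one up and two downs, is electrically neutral.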
Murray Gell-Mann and George Zweig independently came up with the idea that the proton’s parts are quarks, in 1964 (though it wasn’t until several years later that good evidence for the existence of such parts was obtained). Gell-Mann was later awarded the Nobel Prize in Physics for this, and other work on fundamental particles (Zweig has yet to receive a Nobel).
The quantum theory which describes the strong interaction (or strong nuclear force) is quantum chromodynamics, QCD for short (named in part after the ‘colors’ of quarks), and this explains why the proton has the mass it does. You see, the up quark’s mass is about 2.4 MeV (mega-electron volts; particle physicists measure mass in MeV/c²), and the down’s about 4.8 MeV. Gluons, like photons, are massless, so the proton should have a mass of about 9.6 MeV (= 2 × 2.4 + 4.8), right? But it is, in fact, 938 MeV! QCD accounts for this enormous difference by the energy of the QCD vacuum inside the proton; basically, the self-energy of ceaseless interactions of quarks and gluons.
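The arithmetic in that paragraph, spelled out:

```python
# Naive sum of the quark masses versus the measured proton mass,
# using the approximate values quoted above (in MeV/c^2).
m_up, m_down = 2.4, 4.8
m_proton = 938.0

naive_sum = 2 * m_up + m_down  # two up quarks plus one down quark
fraction = naive_sum / m_proton

print(naive_sum)  # 9.6 -- barely 1% of the real mass
print(fraction)   # the other ~99% is QCD binding energy
```

In other words, almost all of the mass of ordinary matter is not "stuff" at all, but the energy of the strong interaction.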
Further reading: The Physics of RHIC (Brookhaven National Lab), How are the protons and neutrons held together in a nucleus?, and Are protons and neutrons fundamental? (the Particle Adventure) are three good places to go!
Some of the Universe Today articles relevant to proton parts are: Final Detector in Place at the Large Hadron Collider, Hidden Stores of Deuterium Discovered in the Milky Way, and New Study Finds Fundamental Force Hasn’t Changed Over Time.
Two Astronomy Cast episodes you won’t want to miss, on proton parts: The Strong and Weak Nuclear Forces, and Inside the Atom.
What is Schrodinger’s Cat?
Schrodinger’s cat is named after Erwin Schrödinger, a physicist from Austria who made substantial contributions to the development of quantum mechanics in the 1930s (he won a Nobel Prize for some of this work, in 1933). Apart from the poor cat (more later), his name is forever associated with quantum mechanics via the Schrödinger equation, which every physics student has to grapple with.
Schrodinger’s cat is actually a thought experiment (Gedankenexperiment) – and the cat may not have been Erwin’s, but his wife’s, or one of his lovers’ (Erwin had an unconventional lifestyle) – designed to test a really weird implication of the physics he and other physicists were developing at the time. It was motivated by a 1935 paper by Einstein, Podolsky, and Rosen; this paper is the source of the famous EPR paradox.
In the thought experiment, Schrodinger’s cat is placed inside a box containing a piece of radioactive material, and a Geiger counter wired to a flask of poison in such a way that if the Geiger counter detects a decay, then the flask is smashed, the poison gas released, and the cat dies (fun piece of trivia: an animal rights group accused physicists of cruelty to animals, based on a distorted version of this thought experiment! though maybe that’s just an urban legend). The half-life of the radioactive material is an hour, so after an hour, there is a 50% probability that the cat is dead, and an equal probability that it is alive. In quantum mechanics, these two states are superposed (a technical term), and the cat is neither dead nor alive, or half-dead and half-alive, or … which is really, really weird.
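The 50% figure follows from basic decay statistics: the chance that at least one decay has occurred after a time t, for a half-life T, is 1 − (1/2)^(t/T). A quick sketch (this treats the source as a single decaying atom, which is what the thought experiment implicitly assumes):

```python
# Chance that at least one decay has happened after t hours, for a source
# with a given half-life: P = 1 - (1/2)**(t / half_life).
def decay_probability(t_hours, half_life_hours=1.0):
    return 1.0 - 0.5 ** (t_hours / half_life_hours)

print(decay_probability(1.0))  # 0.5 -- the cat's even-odds superposition
print(decay_probability(2.0))  # 0.75 -- wait longer and the odds tilt
```

Waiting exactly one half-life is what makes the dead and alive branches of the superposition carry equal weight.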
Now the theory – quantum mechanics – has been tested perhaps more thoroughly than any other theory in physics, and it seems to describe how the universe behaves with extraordinary accuracy. And the theory says that when the box is opened – to see if the cat is dead, alive, half-dead and half-alive, or anything else – the wavefunction (describing the cat, Geiger counter, etc) collapses, or decoheres, or that the states are no longer entangled (all technical terms), and we see only a dead cat or a cat very much alive.
There are several ways to get your mind around what’s going on – or several interpretations (you guessed it, yet another technical term!) – with names like Copenhagen interpretation, many worlds interpretation, etc, but the key thing is that the theory is mute on the interpretations … it simply says you can calculate stuff using the equations, and what your calculations show is what you’ll see, in any experiment.
Fast forward to some time after Schrödinger – and Einstein, Podolsky, and Rosen – had died, and we find that tests of the EPR paradox were proposed, then conducted, and the universe does indeed seem to behave just like Schrödinger’s cat! In fact, the results from these experimental tests are used for a kind of uncrackable cryptography, and the basis for a revolutionary kind of computer.
Keen to learn more? Try these: Schrödinger’s Rainbow is a slideshow review of the general topic (California Institute of Technology; caution, 3MB PDF file!); Schrodinger’s cat comes into view, a news story on a macroscopic demonstration; and Schrödinger’s Cat (University of Houston).
Schrodinger’s cat is indirectly referenced in several Astronomy Cast episodes, among them Quantum Mechanics, and Entanglement; check them out!
Sources: Cornell University, Wikipedia
What is Loop Quantum Gravity?
The two best theories we have, today, in physics – the Standard Model and General Relativity – are mutually incompatible; loop quantum gravity (LQG) is one of the best proposals for combining them in a consistent way.
General Relativity is a theory of spacetime, but it is not a quantum theory. Since the universe seems to be quantized in so many ways, one approach to extending GR is to quantize spacetime … somehow. In LQG, space is made up of a network of quantized loops of gravitational fields (see where the name comes from?), which are called spin networks (and which become spin foam when viewed over time). The quantization is at the Planck scale (as you would expect). LQG and string theory – perhaps the best known of theories which aim to both go deeper and encompass the Standard Model and General Relativity – differ in many ways; one of the most obvious is that LQG does not introduce extra dimensions. Another big difference: string theory aims to unify all forces, LQG does not (though it does include matter).
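The Planck scale mentioned above is fixed by three constants of nature, l_P = √(ħG/c³). A short sketch, with the standard tabulated constants:

```python
import math

# The Planck length, l_P = sqrt(hbar * G / c**3): the scale at which
# LQG's spin networks quantize space.
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 299792458.0         # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)
print(planck_length)  # ~1.6e-35 m
```

At roughly 10⁻³⁵ metres, this is some twenty orders of magnitude below the size of a proton, which is why direct laboratory tests of LQG are so hard to come by.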
Starting with the Einstein field equations of GR, Abhay Ashtekar kicked off LQG in 1986, and in 1988 Carlo Rovelli and Lee Smolin built on Ashtekar’s work to introduce the loop representation of quantum general relativity. Since then lots of progress has been made, and so far no fatal flaws have been discovered. However, LQG suffers from a number of problems; perhaps the most frustrating is that we don’t know if LQG becomes GR as we move from the (quantized) Planck scale to the (continuum) scale at which our experiments and observations are done.
OK, so what about actual tests of LQG, you know, like in the lab or with telescopes?
Well, there are some potential tests, such as whether the speed of light is indeed constant, and recently the Fermi telescope team reported the results of just such a test (result? No clear sign of LQG).
Interested in learning more? There is a lot of material freely available on the web, from easy reads like Quantum Foam and Loop Quantum Gravity and Lee Smolin’s Loop Quantum Gravity, to introductions for non-experts like Abhay Ashtekar’s Gravity and the Quantum, to reviews like Carlo Rovelli’s Loop Quantum Gravity, to this paper on an attempt to explain some observational results using loop quantum gravity (Loop Quantum Gravity and Ultra High Energy Cosmic Rays).
As you’d expect, Universe Today has several articles on, or which feature, loop quantum gravity; here is a selection What was Before the Big Bang? An Identical, Reversed Universe, Before the Big Bang?, and Before the Big Bang.
Source: Wikipedia |
d53a203fe302171c | Through Einstein’s equation we learn that mass can be transformed into energy, and vice versa. Albert Einstein was an absolute master of finding equations that exemplify this property. Volume IV presents the foundations of quantum physics in a simple way, with little math, using many puzzles and observations taken from everyday life. Quantum electrodynamics (QED) is the study of how electrons and photons interact. String theorists had already been working to translate this geometric problem into a physical one. John David Jackson is Professor Emeritus at the University of California, Berkeley. Hidden Structure What Is a Particle? When I studied physics, mathematics students had to follow a few thorough courses in physics, in quantum mechanics, for example. At its heart quantum mechanics is a mathematically abstract subject expressed in terms of the language of complex linear vector spaces — in other words, linear algebra. In a sense, they’re writing a full dictionary of the objects that appear in the two separate mathematical worlds, including all the relations they satisfy. Mathematics has the wonderful ability to connect different worlds. Nevertheless, a rigorous construction of interacting quantum field theory is still missing. It is given that you ought to know some of the material in order to understand it properly. Edition, Kindle edition by Walter Thirring (author), E.M. Harrell (translator). Format: Kindle edition. I know the math of QM, and this book didn't properly explain any of it. But since mathematics is the language of nature, it’s required to quantify the predictions of quantum mechanics. Mathematical methods in quantum mechanics: with applications to Schrödinger operators / Gerald Teschl.
ed85142996e862d6 | Wavelets are proposed as appropriate analysis tool for the proposed NMEP, additionally to Fourier analysis technique. There are at least two approaches to wavelet analysis, both are addressing the somehow contradiction by itself, that a function over the one-dimensional space R can be unfolded into a function over the two-dimensional half-plane (HoM1):
The first approach interprets the wavelet transform as a time-frequency analysis tool: a one-dimensional piece of information (a one-parameter family of pure oscillations) is 'unfolded' into the two-dimensional time-frequency plane, i.e. a function over the real line is mapped to a function over the time-frequency plane that tells 'when' which 'frequency' occurs. The (human) hearing of a concert reflects this kind of compromise between the (mathematically exact) alternative of either time localization or frequency localization on the one hand, and the perceived melodies, and hence music, based on a received one-dimensional signal on the other. In this context the wavelet transform of a one-dimensional signal is interpreted as a time-frequency analysis with the physical parameters "time" and "frequency" at constant relative bandwidth.
The second approach uses wavelet analysis as a mathematical microscope. The idea is to look at the details that are added when one passes from a scale "a" to a scale "a-da", where "da" is infinitesimally small. This second approach is closely linked to approximation theory, e.g. in the context of the construction of Calderón-Zygmund operators based on the truncation of kernels (MeY). The mathematical microscope tool 'unfolds' a function over the one-dimensional space R into a function over the two-dimensional half-plane of "positions" and "details" (where is which detail generated?). This two-dimensional parameter space may also be called the position-scale half-plane. In this context the wavelet transform is interpreted as a mathematical microscope with the physical parameters "position", "enlargement" (the scale parameter a) and "optics" (the wavelet function g).
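The microscope picture can be made concrete in a few lines. The following minimal Python sketch (all function names and parameters are illustrative, not taken from any referenced source) computes the wavelet transform W(a,b) of a signal with the Mexican hat wavelet playing the role of the "optics" g: the microscope responds strongly at the position of a localized detail and is negligible far away from it.

```python
import math

def mexican_hat(x):
    # Second derivative of the Gaussian (up to sign and normalization):
    # a standard admissible wavelet with zero mean.
    return (1.0 - x * x) * math.exp(-0.5 * x * x)

def cwt(signal, a, b, t0=-10.0, t1=10.0, n=4000):
    """W(a, b) = (1/sqrt(a)) * integral of s(t) * g((t - b)/a) dt,
    approximated by a midpoint Riemann sum; 'a' is the scale
    ("enlargement"), 'b' the position, g the "optics"."""
    dt = (t1 - t0) / n
    acc = 0.0
    for k in range(n):
        t = t0 + (k + 0.5) * dt
        acc += signal(t) * mexican_hat((t - b) / a)
    return acc * dt / math.sqrt(a)

# A sharp Gaussian bump at t = 2: the microscope "sees" the detail
# at position b = 2, and essentially nothing at b = -5.
bump = lambda t: math.exp(-8.0 * (t - 2.0) ** 2)
on_detail = cwt(bump, a=0.5, b=2.0)
off_detail = cwt(bump, a=0.5, b=-5.0)
```

Scanning b at fixed a yields one "slice" of the position-scale half-plane; repeating for decreasing a reveals ever finer details.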
In the context of this homepage (see also (FeP)) we propose the second approach, based on the following rationale:
1. (HoM1): the time-frequency approach is based on an (analytic) smoothing (cut-off) function v (compactly supported, with normalized L(1)-norm). The related wavelet function g of the second approach is basically of the form g(w) = (d/dw)v(w), with correspondingly reduced regularity requirements; the normalized L(1)-norm condition on the regular function v(w) is replaced by the admissibility condition on the wavelet function g(w).
The second approach overcomes current handicaps in the use of frequency cut-off functions, which occur for example
a. in the relativistic scattering theory of spin-1/2 particles (key words: Coulomb potential, effective cross-section, frequency cut-off function)
handicap: the infrared catastrophe
b. in the microscopic dynamics of plasmas with the related Landau damping effect, i.e. the damping (exponential decrease as a function of time) of longitudinal space-charge waves in a plasma, which prevents an instability from developing and creates a region of stability in the parameter space. The effect is caused by energy exchange between an electromagnetic wave with given phase velocity and particles in the plasma with approximately (!) "equal" velocity; the mathematical tool is perturbation ("approximation") theory in the Vlasovian frame, solving the Cauchy problem for the evolution (Vlasov-Poisson) equations
handicap: the assumption of "analytic" functions (without physical justification or requirement) in the current proof of non-linear Landau damping (C. Villani).
2. (LoA), remark 1.1.10: the second, mathematical microscope approach enables a purely (distributional) Hilbert scale framework in which the "microscope observations" of two wavelet (optics) functions f, g can be compared with each other by the corresponding "reproducing" ("duality") formula (see also (*) below), whereby
- the "bra(c)"-wavelet transform W(f) is inverted by the adjoint operator of the "(c)ket"-wavelet transform W(g) (provided the corresponding admissibility conditions hold)
- the identity (*) also provides an additional degree of freedom: in order to analyze a signal s(t), the wavelet f can be chosen appropriately for the special situation of the underlying mathematical model. The price to be paid comes only later, when the "re-building" wavelet g has to be constructed accordingly to enable the corresponding "synthesis"
- the Hilbert transform operator (which is defined on every Hilbert scale) is a "natural" partner of the wavelet transform operator, as it is skew-symmetric and rotation invariant, and every Hilbert-transformed "function" has a vanishing constant Fourier term. The example in the context above is the Hilbert transform of the Gaussian/Maxwellian distribution function: the (odd) Dawson function, which vanishes at +/- infinity only at a polynomial rate.
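The Dawson function mentioned above, D(x) = exp(-x^2) * integral from 0 to x of exp(t^2) dt, is elementary to compute by quadrature. The sketch below (function name and step count are our choices) also exhibits its oddness and its slow, merely polynomial decay like 1/(2x), in contrast to the exponential decay of the Gaussian itself.

```python
import math

def dawson(x, n=2000):
    """Dawson function D(x) = exp(-x^2) * integral_0^x exp(t^2) dt,
    approximated by the midpoint rule. Up to a constant factor it is
    the Hilbert transform of the Gaussian."""
    if x < 0.0:
        return -dawson(-x, n)   # D is an odd function
    h = x / n
    s = sum(math.exp(((i + 0.5) * h) ** 2) for i in range(n))
    return math.exp(-x * x) * s * h

# For large x the function decays like 1/(2x): e.g. dawson(10.0) is
# close to 0.05, far larger than the Gaussian value exp(-100).
```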
Further details
The sine and cosine functions have unbounded support and do not vanish at infinity, while their spectra are very local, consisting of a finite sum of Dirac measures. Conversely, if one uses approximations based on finite sums of Dirac measures, the spectrum of the corresponding basis "functions" (basically the function cos(x*s) + i*sin(x*s)) does not vanish at infinity in the frequency domain.
The wavelet concept tries to overcome this issue by looking for an orthogonal basis of a Hilbert space (e.g. L(2)=H(0) or H(-1/2)) constructed from a single generating function g (the scaling function) via translations, dilations and linear combinations, where g can be localized both in x (the space variable) and in s (the Fourier variable). The admissibility condition for a wavelet governs the behavior of the wavelet in the neighborhood of the frequency zero; it is obviously related to the H(-1/2) Hilbert space norm in the case of space dimension m=1.
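A necessary consequence of the admissibility condition (finiteness of the integral of |g^(w)|^2/|w|) is g^(0) = 0, i.e. a vanishing mean of g. This is why the Gaussian itself is not a wavelet, while its second derivative, the Mexican hat, is. A quick numerical check of this behavior at frequency zero (the quadrature routine is an illustrative sketch):

```python
import math

def mean_integral(f, t0=-20.0, t1=20.0, n=8000):
    """Integral of f over the real line (midpoint rule): this is, up to
    normalization, the Fourier transform value at frequency zero."""
    dt = (t1 - t0) / n
    return sum(f(t0 + (k + 0.5) * dt) for k in range(n)) * dt

gauss = lambda t: math.exp(-0.5 * t * t)
mex_hat = lambda t: (1.0 - t * t) * math.exp(-0.5 * t * t)

# The Mexican hat has vanishing mean (admissible); the Gaussian has
# mean sqrt(2*pi), so its admissibility integral diverges at w = 0.
mex_mean = mean_integral(mex_hat)
gauss_mean = mean_integral(gauss)
```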
We note that the hypothesis that the function g has compact support is essential here; otherwise it can be shown that there are infinitely supported solutions of the corresponding scaling equation. For instance, the Hilbert transform of the function g satisfies the scaling recursion whenever g does.
We further note the two fundamental examples of universal scaling functions (scaling functions for every rank): the sinc and the Haar scaling functions, which are Fourier transforms of each other.
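For the Haar scaling function (the indicator of the interval [0,1)) the scaling recursion mentioned above reads phi(x) = phi(2x) + phi(2x-1), and it can be verified pointwise in a few lines (illustrative sketch):

```python
def haar(x):
    """Haar scaling function: indicator of the interval [0, 1)."""
    return 1.0 if 0.0 <= x < 1.0 else 0.0

def refined(x):
    # Right-hand side of the two-scale (refinement) equation
    # phi(x) = phi(2x) + phi(2x - 1).
    return haar(2.0 * x) + haar(2.0 * x - 1.0)

# The scaling equation holds at every test point, inside and
# outside the support of the Haar function.
matches = all(haar(x) == refined(x)
              for x in (-0.5, 0.0, 0.25, 0.5, 0.75, 0.99, 1.0, 1.5))
```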
The wavelet transform W(g)(v) of a function v with respect to a wavelet function g is an isometric mapping; the corresponding adjoint operator, restricted to the range, gives the inverse wavelet transform. Let u, v denote two elements of a Hilbert space with inner product (u,v), and let ((*,*)) denote the inner product of the Hilbert space H(-1/2). Let further f, g denote two wavelets with bounded inner product ((f,g)), and let (((*,*))) denote the inner product of the corresponding wavelet transforms W(f)(u), W(g)(v) with respect to the underlying Haar measure. Then (up to a constant) it holds that
(*) (((W(f)(u),W(g)(v)))) = ((f,g)) * (u,v) .
This identity (in combination with the considerations below) enables a combined wave-wavelet (H(0), H(-1)) concept for the analysis of the H(-1/2) = H(0) * H(0)(ortho) framework, where in this specific case it holds that (u,v) := ((u,v)).
In (PaR) the wavelet transform for a class of distributions is provided, and the corresponding inversion formula is established by interpreting convergence in a weak distributional sense. In the above context we note that log(2 sin(x/2)) (with its corresponding 1st and 2nd derivatives, essentially the cot(x/2) and 1/sin^2(x/2) functions, up to constant factors) is an L(2) function fulfilling the admissibility condition.
The Gaussian function stands out since it minimizes the Heisenberg uncertainty principle (DaS). The corresponding windowed Fourier (integral) transform is applied e.g. in quantum physics, where it is used for defining and investigating coherent states. It is related to the Weyl-Heisenberg group, while the corresponding wavelet (integral) transform is related to the affine group. In other words, from a group theory perspective windowed Fourier transforms and wavelet transforms are constructions of the same nature.
The wavelet mother function directly connected to the Gaussian function (which itself is not a wavelet) is the Mexican hat function, which is basically the second derivative of the Gaussian function. In (DaS) a new interpretation of the Mexican hat function is provided: it can be interpreted as a minimizing function of an uncertainty principle, in case its rotation-invariant form "A" has a certain representation.
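The statement "basically the second derivative of the Gaussian" means, in the usual normalization, (1 - x^2) * exp(-x^2/2) = -(d^2/dx^2) exp(-x^2/2), which can be checked by a central finite difference (illustrative sketch):

```python
import math

def gauss(x):
    return math.exp(-0.5 * x * x)

def mexican_hat(x):
    # Closed form of minus the second derivative of the Gaussian.
    return (1.0 - x * x) * gauss(x)

def neg_second_derivative(f, x, h=1e-4):
    """Central finite-difference approximation of -f''(x)."""
    return -(f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

# Closed form and numerical -f'' agree at several sample points.
err = max(abs(mexican_hat(x) - neg_second_derivative(gauss, x))
          for x in (-2.0, -1.0, 0.0, 0.5, 1.7))
```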
The affine-linear group of unitary operators (each element of this group has two parameters, while the Weyl-Heisenberg group has three), equipped with the Haar measure, is locally compact, i.e. the group multiplication and the inversion are continuous mappings. For locally compact groups an orthogonality relation is valid, which provides the common group-theoretical denominator of windowed Fourier and wavelet transforms (GrA).
Linking back to the primary topic of this homepage, we note that the continuous, periodic Riemann function (Fourier series) representation belongs to the space C(1/2) (HoM). It is best analyzed by a continuous wavelet transform using the specific complex wavelet g(x) := 1/(x+i); the corresponding wavelet transform is given by the Jacobi Theta function. S. Jaffard (JaS) turned his attention to the irrational points of the Riemann function, building on the fundamental result of J. Gerver that the Riemann function belongs to the space C(1) at the rationals p/q with p and q both odd (i.e. the derivative of the Riemann function R still exists and equals -1 at each rational point of the type t = p/q with p and q odd), while the Riemann function is non-differentiable elsewhere.
The Davenport and Chowla identity asserts the equality of two infinite series: the Riemann function on the one hand, and an infinite series built from the Liouville function (a prime-number-theoretical entity) and the saw-tooth Fourier series on the other (ChK). "The corresponding integrated identity can be derived from the functional equation only, but to differentiate it, one needs the estimate for the error term for the Liouville function. This is as deep as the prime number theorem and is known to be very difficult."
From the RH-related papers we recall that the periodic saw-tooth function, as well as its Hilbert transform, belongs to the periodic L(2) Hilbert space. The Hilbert transform corresponds to the log(2sin(x/2)) function, i.e. its first derivative is a cot-function, (1/2)cot(x/2).
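The classical Fourier identities behind this pair can be checked numerically. The sketch below is my own partial-sum check (the evaluation point x and the truncation length N are arbitrary choices), using the standard normalizations sum sin(nx)/n = (pi - x)/2 on (0, 2*pi) and sum cos(nx)/n = -log(2 sin(x/2)):

```python
import numpy as np

# Partial sums of the saw-tooth series and its conjugate (Hilbert-transform pair).
# x and N are arbitrary illustrative choices; convergence is O(1/N).
x = 1.3
n = np.arange(1, 200001)
sawtooth = np.sum(np.sin(n * x) / n)
conjugate = np.sum(np.cos(n * x) / n)

print(sawtooth, (np.pi - x) / 2)              # saw-tooth closed form on (0, 2*pi)
print(conjugate, -np.log(2 * np.sin(x / 2)))  # log(2 sin(x/2)) closed form
```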
A natural extension of this result is given in (OsK), where the (generalized) solution of the Cauchy initial value problem for the Schrödinger equation is analyzed. The real part of the restriction of this solution on the line x=0 is given by the Riemann function. A basic role in (OsK) is played by a representation of the differences of the function via Poisson's summation formula and the oscillatory Fresnel integral.
Looking at open questions about the irrationality/transcendence of certain numbers like the Euler constant (in the context of the overall "solution concept of a fractional Hilbert space H(a) framework of this homepage to solve/answer the RH, NSE and YME Millennium problems") we note the following: the Riemann function (continuous, periodic, Fourier representation) belongs to the H(0) Hilbert space, while its dual representation with respect to the inner product of the H(-1/2) Hilbert space is given by the Fourier series representation of the cot-function. The latter belongs to the Hilbert space H(-1), the same as the entire Zeta function on the critical line, i.e. their corresponding weak forms belong to the Hilbert space H(-1/2).
The Sobolev embedding theorem provides the relationship to the continuous (and the corresponding n-time differentiable) space(s) C(n). In this sense Gerver's theorem (resp. its generalization for the solution of the Cauchy initial value problem of the Schrödinger equation, (OsK)) might provide opportunities for new related irrationality proof techniques.
(ChK) Chakraborty K., Kanemitsu S., Long L. H., Quadratic reciprocity and Riemann's "non-differentiable" function, Research in Number Theory, Springer Open Journal, (2015) 1-14
(DaS) Dahlke S., Maass P., The Affine Uncertainty Principle in One and Two Dimensions, Computers Math. Applic. Vol. 30, No. 3-6, (1995), pp. 293-305
(FeP) Federbush P., Navier and Stokes Meet the Wavelet, Commun. Math. Phys. 155, 219-248 (1993)
(GrA) Grossmann A., Morlet J., Paul T., Transforms associated to square integrable group representations I: General results, J. Math. Phys. 26, (1985) pp. 2473-2479
(HoM) Holschneider M., Tchamitchian P., Pointwise analysis of Riemann's "non-differentiable" function, Invent. Math. 105, (1991) pp. 157-175
(HoM1) Holschneider M., Wavelets, An Analysis Tool, Oxford Science Publications, Clarendon Press, Oxford, 1995
(LoA) Louis A. K., Maaß P., Rieder A., Wavelets, Theorie und Anwendungen, B. G. Teubner Verlag, Stuttgart, 1998
(MeY) Meyer Y., Coifman R., Wavelets, Calderon-Zygmund and multilinear operators, Cambridge studies in advanced mathematics 48, Cambridge University Press, 1996
(OsK) Oskolkov K. I., Chakhkiev M. A., On Riemann "Nondifferentiable" Function and Schrödinger Equation, Proceedings of the Steklov Institute of Mathematics, 2010, Vol. 269, pp. 186-196
(PaR) Pathak R. S., Singh A., Distributional Wavelet Transform, Proc. Natl. Acad. Sci., India, Sect. A. Phys. Sci. 86(2), (2016) 273-277
Universal nature of Van der Waals forces for Coulomb systems
Elliott Lieb, Walter E. Thirring
The nonrelativistic Schrödinger equation is supposed to yield a pairwise R^{-6} attractive interaction among atoms or molecules for large separation, R. Up to now this attraction has been investigated only in perturbation theory or else by invoking various assumptions and approximations. We show rigorously that the attraction is at least as strong as R^{-6} for any shapes of the molecules, independent of other features such as statistics or sign of charge of the particles. More precisely, we prove that two neutral molecules can always be oriented such that the ground-state energy of the combined system is less than the sum of the ground-state energies of the isolated molecules by a term -cR^{-6} provided R is larger than the sum of the diameters of the molecules. When several molecules are present, a pairwise bound of this kind is derived. In short, we prove that in the quantum mechanics of Coulomb systems everything binds to everything else if the nuclear motion is neglected.
Original language: English (US)
Pages (from-to): 40-46
Number of pages: 7
Journal: Physical Review A
Issue number: 1
State: Published - Jan 1 1986
All Science Journal Classification (ASJC) codes
• Atomic and Molecular Physics, and Optics
ca6d32e1060059ea | Five Popular Posts Of The Month
Monday, August 17, 2020
The true mystery of quantum mechanics.
In all known experiments – all 100% of them – conducted to this day for about a century, quantum objects have revealed themselves as objects localized in space, i.e. as particles.
The most common examples of such experiments are:
the photoelectric effect,
the Compton’s scattering,
the Rutherford experiment (and all other collision experiments),
mass-spectrometry (including trajectory visualizing techniques like a cloud chamber),
counting techniques (e.g. a Geiger counter, a scintillation counter).
That is why each existing quantum object is called “a particle” (with a specific name – an electron, a proton, a photon, etc.).
The most puzzling feature of those particles is that even though they exist and exhibit themselves as particles, they behave in a way similar to the behavior of classical waves, e.g. the waves on a surface of water.
Our common sense makes us believe that nothing can be a particle and a wave at the same time – both are "size-less", but in opposite ways: a particle has no size because it is basically a dot, and a wave has no definite size because it is spread over a vast region of space (theoretically, in the most abstract sense, over the whole universe).
And yet, in order to explain all known experiments, scientists had to treat quantum objects in a very contradictory way – like particles and like waves – at the same time.
For about a century, this “contradiction” has been the source of deep confusions and intense discussions.
Those discussions had led to several famous word-tags, such as:
The Schrödinger’s cat,
Wave-particle duality,
The uncertainty principle,
A wave-function collapse,
and also, to one of the most discussed quantum thought experiments: a double-slit electron diffraction.
Since I already have publications on those matters, I forward readers to this page.
I would recommend starting from:
Since all those pieces were a reaction to something I read, they may have similar parts, as well as ideas unique to that particular piece.
Here I just want to add two short notes – one on a wave-function collapse, and another one on the double-slit experiment.
I. A standard textbook on quantum mechanics describes two types of evolution of a wave function.
For example, Dr. Richard Fitzpatrick, Professor of Physics at The University of Texas at Austin, writes: “There are two types of time evolution of the wavefunction in quantum mechanics. First, there is a smooth evolution which is governed by Schrödinger's equation. This evolution takes place between measurements. Second, there is a discontinuous evolution which takes place each time a measurement is made.”
This is a very common view shared by many physicists: “In general, quantum systems exist in superpositions of those basis states that most closely correspond to classical descriptions, and, in the absence of measurement, evolve according to the Schrödinger equation. However, when a measurement is made, the wave function collapses—from an observer's perspective—to just one of the basis states, and the property being measured uniquely acquires the eigenvalue of that particular state. After the collapse, the system again evolves according to the Schrödinger equation.”
But not all scientists share that view. Following Wikipedia, the existence of the wave-function collapse is required in some interpretations (such as the Copenhagen interpretation), while in others – for example the consistent histories approach, self-dubbed "Copenhagen done right" – the collapse is considered a redundant or optional approximation.
This is an illustration of the simple fact that even today, almost a hundred years after the development of the quantum theory of matter, physicists are still not united about its interpretation.
This is a notable fact.
Physicists do not have different interpretations of classical mechanics, or classical electrodynamics. They even agree on the meaning of the special and general theories of relativity.
But when they talk about quantum mechanics – they are divided.
My view on the so-called wave-function collapse is simple – it does not exist. The wave function always evolves according to the Schrödinger’s equation, but when a quantum object interacts with a large classical system the equation is simply too complicated for scientists to solve, or even analyze, using current mathematical tools. For example, a problem with three electrons orbiting a heavy nucleus is already borderline complicated. An act of measurement – as an act of interaction between a quantum and a classical system – is much more complicated than that. And physicists cover up their inability to solve the problem of measurement by invoking a miracle called “a collapse”.
There is no such thing as “a discontinuous evolution”. There is only evolution that is as yet too difficult to analyze.
II. One of the premises of a double-slit electron diffraction experiment is that after traveling through the slits (in whatever way fits the view of a given author) electrons do not travel along one certain path, but reach the screen via many possible paths, and for each path there is a number called “the probability amplitude for an electron ‘choosing’ that path”. The probability for an electron to get from point A (e.g. slit #1) to point B (e.g. a given location on the screen) is then based on the sum of the probability amplitudes for all possible paths leaving point A and arriving at point B.
This picture leads to a clear and robust mathematical description, called “path integrals” – one of the most fundamental mathematical instruments used in all quantum theories.
It works.
It explains all known experiments.
The only problem with it is that it contradicts the nature of the experiment used for its own development.
Electron diffraction exists. Experiments show it; those experiments have become so routine that there is a standard lab on them.
And yet, the notion that electrons can travel via different paths is wrong.
It is not easy to see the paths of those electrons; what we see is the interference pattern on a fluorescent screen.
But we can use a robust analogy to visualize what we would see if we could see the trajectories of those diffracted electrons.
That analogy is based on the original similarity between particles and waves, or, more specifically, between quantum particles and light waves.
At the dawn of the quantum mechanics, light waves were used as a means for understanding wave-like behavior of electrons, and other quantum particles.
Light waves form diffraction patterns, electrons form diffraction patterns, hence electrons are kind of like waves.
But why don’t we use this similarity backwards?
Electrons are particles that exhibit a wave-like behavior. Light waves exhibit a wave-like behavior. Hence light is also made of particles – photons.
That was the idea that brought the Nobel Prize to Albert Einstein.
Let’s use this similarity again.
We know that light is quantum matter formed by photons. And we know that those photons travel through a double-slit in a special way. Hence electrons, because they are also quantum particles, should travel through a double-slit in the same special way.
And that way does NOT show many possible paths – not at all!
In fact, what we see in a very standard diffraction experiment is a set of several specific trajectories.
In reality, an actual experiment is much simpler and clearer when it is done with a diffraction grating (optical for photons, crystalline for electrons).
When photons travel through a diffraction grating they travel along a small number of clear paths. It is impossible to predict which path will be “selected” by which photon, but we do NOT see photons traveling in a cloud that “collapses” when that cloud reaches a screen (again – no “wave-function collapse”!).
This picture shows an example of those trajectories (here is a short video).
I am sure, a similar experiment with electrons traveling through a cloud chamber would show a similar picture.
This picture proves that the model of many different paths for an electron traveling from the grating (or slits) to the screen is simply wrong – despite the fact that it correctly describes, mathematically, the probability to find a particle at a given location on the screen.
How can it be that a model that contradicts the physical nature of a process (traveling toward a screen) also provides a correct mathematical description of the results of that process (arriving at a screen)?
There is no answer to this question.
To this day, no one knows why quantum mechanics works so well.
This experiment also shows a common methodological misconception – namely, that quantum particles travel according to the wave function provided by the solution of a Schrödinger’s equation with the given potential energy.
That wave function gives the probability of finding a particle at a given point at a given time – but saying that a particle can travel along many paths with different probability amplitudes is wrong – it contradicts a simple experiment.
This experiment also illuminates one of the most famous mysteries of the double-slit diffraction experiment – how do electrons or photons get through the slits? Because the way they get through the slits (or a grating) prescribes their future behavior – particularly, the path they travel to a screen. For example, in the picture, we see that each photon “selects” one of the three paths; the probability to travel along a different path is zero (or almost zero).
We have three possibilities to describe the behavior of particles in this experiment, and each next one is even worse than the previous one.
(A) Particles “learn” what path they have to choose when they interact with the grating (or slits) and then travel along that chosen path. If that is the case, then a particle should “learn” its path even if there is only one slit! (BTW: a fact escaping the mind of every single author discussing the double-slit electron diffraction experiment.)
A single-slit diffraction is well known for light, and it should be observed for electrons and other quantum particles as well. But the explanation should be based on the solution of a Schrödinger’s equation for a particle interacting with a large classical object, and as we know from part I, no one yet knows how to do that (even for one slit!). Plus, this would negate the basis of the path-integral approach: if everything is “decided” at the beginning of each path (i.e. at the end of the particle–grating/slit interaction), then the probability of each path is set before that path begins.
(B) Particles “learn” what path they have to choose based on the whole system – the grating (slits, a slit) and the screen. Clearly, this is an even more complicated problem. The way around it is to ignore the particle–grating/slit interaction and invoke the path-integral approach. But to explain how a far-away screen affects the “choice” of a path, we would run into non-locality, or faster-than-light interactions.
(C) In addition to the mystery of “learning the path”, particles might be able to “jump” from one path onto another, and back. That would definitely lead to non-locality and faster-than-light interactions.
This experiment demonstrates that the real mystery of quantum mechanics is not “how do electrons travel through two slits at the same time?” (they don’t) but “how do electrons ‘choose’ their path?”
artificial boundary definition
The aim of the paper is to design high-order artificial boundary conditions for the Schrödinger equation on unbounded domains, in parallel with a treatment of the heat equation. The approach is to introduce a circular artificial boundary dividing the unbounded definition domain into a bounded computational domain and an unbounded exterior domain, and then to find exact or approximate artificial boundary conditions on that circle, which lead to a problem posed on the bounded computational domain. The monograph Artificial Boundary Method (Han Houde and Wu Xiaonan) systematically introduces this method for the numerical solution of partial differential equations in unbounded domains, with detailed discussions of different types of problems, including the Laplace, Helmholtz, heat, and Schrödinger equations. The same framework applies, for example, to the numerical solution of the time-fractional diffusion-wave equation on a two-dimensional unbounded spatial domain.

In land surveying and property law the term has a different meaning. A boundary is every separation, natural or artificial (man-made), which marks the confines or line of division of two contiguous estates. An artificial boundary is a boundary formed by a straight line or a curve of prescribed radius joining points established on the ground by monuments, whereas a natural boundary is a natural object or landmark (a river, a canyon, a mountain ridge, a lake, an ocean) used as the boundary of a tract of land, or as a beginning point for a boundary line. Natural monuments are confined to the works of nature; artificial monuments are the works of man, such as stones or other markers inserted in the earth on the confines of two estates (Patrick C. Garner, Boundary Monuments: Artificial and Natural Markers).

Several related but distinct notions also appear under this heading. In mathematics, boundary conditions are the constraints imposed at the limits of a problem's domain. In machine learning, a decision boundary is the separator between classes learned by a model in a binary or multi-class classification problem; in a binary classification problem, for example, the decision boundary is the frontier between the two classes. In national accounts, the production boundary includes the production of all individual or collective goods or services that are supplied to units other than their producers, or intended to be so supplied, including the production of goods or services used up in the process of producing such goods or services. Finally, artificial intelligence (AI), a name coined by John McCarthy in 1955, is the branch of computer science dealing with the simulation of intelligent behavior in computers: the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.
932ac2fc827365ee | Skip to main content
Chemistry LibreTexts
16.3: Classical Treatment of Nuclear Motion
For all but very elementary chemical reactions (e.g., D + HH \(\rightarrow\) HD + H or F + HH \(\rightarrow\) FH + H) or scattering processes (e.g., CO (v,J) + He \(\rightarrow\) CO (v',J') + He), the above fully quantal coupled equations simply cannot be solved even when modern supercomputers are employed. Fortunately, the Schrödinger equation can be replaced by a simple classical mechanics treatment of nuclear motions under certain circumstances.
For motion of a particle of mass \(\mu\) along a direction R, the primary condition under which a classical treatment of nuclear motion is valid
\[ \dfrac{\lambda}{4\pi}\dfrac{1}{\text{p}}\bigg|\dfrac{\text{dp}}{\text{dR}}\bigg| \ll 1 \]
relates to the fractional change in the local momentum defined as:
\[ \text{p} = \sqrt{2\mu (\text{E - E}_j\text{(R)})} \]
along R within the 3N - 5 or 3N - 6 dimensional internal coordinate space of the molecule, as well as to the local de Broglie wavelength
\[ \lambda = \dfrac{2\pi\hbar}{|\text{p}|}. \]
The inverse of the quantity \( \dfrac{1}{\text{p}}\bigg| \dfrac{\text{dp}}{\text{dR}} \bigg| \) can be thought of as the length over which the momentum changes by 100%. The above condition then states that the local de Broglie wavelength must be short with respect to the distance over which the potential changes appreciably. Clearly, whenever one is dealing with heavy nuclei that are moving fast (so |p| is large), one should anticipate that the local de Broglie wavelength of those particles may be short enough to meet the above criteria for classical treatment.
It has been determined that for potentials characteristic of typical chemical bonding (whose depths and dynamic range of interatomic distances are well known), and for all but low-energy motions (e.g., zero-point vibrations) of light particles such as Hydrogen and Deuterium nuclei or electrons, the local de Broglie wavelengths are often short enough for the above condition to be met (because of the large masses \(\mu\) of non-Hydrogenic species) except when their velocities approach zero (e.g., near classical turning points). It is therefore common to treat the nuclear-motion dynamics of molecules that do not contain H or D atoms in a purely classical manner, and to apply so-called semi-classical corrections near classical turning points. The motions of H and D atomic centers usually require quantal treatment except when their kinetic energies are quite high.
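To make the criterion concrete, note that \( \dfrac{\lambda}{4\pi}\dfrac{1}{\text{p}}\big|\dfrac{\text{dp}}{\text{dR}}\big| = \dfrac{\hbar}{2\text{p}^2}\big|\dfrac{\text{dp}}{\text{dR}}\big| \). The sketch below (atomic units; the harmonic potential and all parameter values are illustrative choices, not taken from the text) evaluates this dimensionless quantity for a light and a heavy particle in the same potential:

```python
import numpy as np

hbar = 1.0  # atomic units

def classicality(mu, E, V, dV, R):
    """(lambda/4pi)(1/p)|dp/dR| = hbar |dp/dR| / (2 p^2); << 1 favors a classical treatment."""
    p = np.sqrt(2.0 * mu * (E - V(R)))
    dp_dR = -mu * dV(R) / p          # from differentiating p^2 = 2 mu (E - V(R))
    return hbar * abs(dp_dR) / (2.0 * p**2)

# Illustrative harmonic potential V(R) = k R^2 / 2 (made-up parameters):
k = 0.5
V  = lambda R: 0.5 * k * R**2
dV = lambda R: k * R

light = classicality(mu=1.0,    E=1.0, V=V, dV=dV, R=0.5)   # electron-scale mass
heavy = classicality(mu=1836.0, E=1.0, V=V, dV=dV, R=0.5)   # proton-scale mass
print(light, heavy)   # the heavier particle satisfies the criterion far better
```

The heavy-mass value is smaller by roughly the mass ratio, consistent with the statement that non-hydrogenic nuclei are usually safe to treat classically away from turning points.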
Classical Trajectories
To apply classical mechanics to the treatment of nuclear-motion dynamics, one solves Newtonian equations
\[ \text{m}_{\text{k}}\dfrac{\text{d}^2\text{X}_{\text{k}}}{\text{dt}^2} = - \dfrac{\text{dE}_{\text{j}}}{\text{dX}_{\text{k}}} \]
where \(\text{X}_{\text{k}}\) denotes one of the 3N cartesian coordinates of the atomic centers in the molecule, m\(_{\text{k}}\) is the mass of the atom associated with this coordinate, and \( \frac{\text{dE}_{\text{j}}}{\text{dX}_{\text{k}}} \) is the derivative of the potential, which is the electronic energy \(\text{E}_{\text{j}}\)(R), along the \(\text{k}^{\text{th}}\) coordinate's direction. Starting with coordinates {\(\text{X}_{\text{k}}\)(0)} and corresponding momenta {\(\text{P}_{\text{k}}\)(0)} at some initial time t = 0, and given the ability to compute the force - \( \frac{\text{dE}_{\text{j}}}{\text{dX}_{\text{k}}} \) at any location of the nuclei, the Newton equations can be solved (usually on a computer) using finite-difference methods:
\[ \text{X}_{\text{k}}(t+\delta t) = \text{X}_{\text{k}}(t) + \text{P}_{\text{k}}(t) \dfrac{\delta t}{\text{m}_{\text{k}}} \]
\[ \text{P}_{\text{k}}(t+\delta t) = \text{P}_{\text{k}}(t) - \dfrac{\text{dE}_j}{\text{dX}_k}(t) \delta t. \]
In so doing, one generates a sequence of coordinates {\(\text{X}_k(\text{t}_n)\)} and momenta {\(\text{P}_k(\text{t}_n)\)}, one for each "time step" \(t_n\). The histories of these coordinates and momenta as functions of time are called "classical trajectories". Following them from early times, characteristic of the molecule(s) at "reactant" geometries, through to late times, perhaps characteristic of "product" geometries, allows one to monitor and predict the fate of the time evolution of the nuclear dynamics. Even for large molecules with many atomic centers, propagation of such classical trajectories is feasible on modern computers if the forces - \( \frac{\text{dE}_j}{\text{dX}_k} \) can be computed in a manner that does not consume inordinate amounts of computer time.
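The finite-difference update above can be coded in a few lines. The sketch below (a 1-D harmonic potential with made-up parameters m = k = 1) propagates one classical trajectory for roughly one vibrational period; note that evaluating the force at the already-updated coordinate (semi-implicit Euler) would conserve energy better, but here the update equations are taken literally:

```python
import math

# Propagate the textbook update equations for E_j(X) = k X^2 / 2.
# m, k, dt, and the initial conditions are illustrative, not from the text.
m, k, dt = 1.0, 1.0, 0.001
X, P = 1.0, 0.0                         # coordinate and momentum at t = 0
n_steps = int(round(2 * math.pi / dt))  # ~ one period, T = 2 pi sqrt(m/k)
for _ in range(n_steps):
    F = -k * X                          # force = -dE_j/dX, evaluated at time t
    X = X + P * dt / m                  # X(t + dt) = X(t) + P(t) dt / m
    P = P + F * dt                      # P(t + dt) = P(t) - (dE_j/dX)(t) dt
print(X, P)                             # close to the initial point (1, 0)
```

With this small time step the trajectory returns near its starting phase-space point after one period; the small residual drift is the expected first-order discretization error.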
In Section 6, methods by which such force calculations are performed using first-principles quantum mechanical methods (i.e., so-called ab initio methods) are discussed. Suffice it to say that these calculations are often the rate-limiting step in carrying out classical trajectory simulations of molecular dynamics. The large effort involved in the ab initio determination of electronic energies and their gradients - \(\frac{\text{dE}}{\text{dX}_k}\) motivates one to consider using empirical "force field" functions \(V_j(R)\) in place of the ab initio electronic energy \(E_j(R)\). Such model potentials \(V_j(R)\) are usually constructed in terms of easy-to-compute and easy-to-differentiate functions of the interatomic distances and valence angles that appear in the molecule. The parameters that appear in the attractive and repulsive parts of these potentials are usually chosen so that the potential is consistent with certain experimental data (e.g., bond dissociation energies, bond lengths, vibrational energies, torsion energy barriers).
For a large polyatomic molecule, the potential function V usually contains several distinct contributions:
\[ V = V_{\text{bond}} + V_{\text{bend}} + V_{\text{vanderWaals}} + V_{\text{torsion}} + V_{\text{electrostatic}}. \]
Here \(\text{V}_{\text{bond}}\) gives the dependence of V on stretching displacements of the bonds (i.e., interatomic distances between pairs of bonded atoms) and is usually modeled as a harmonic or Morse function for each bond in the molecule:
\[ \text{V}_{\text{bond}} = \sum\limits_J \dfrac{1}{2}k_J (R_J - R_{\text{eq,J}})^2 \]
or
\[ \text{V}_{\text{bond}} = \sum\limits_J \text{D}_{\text{e,J}} \left( 1-e^{-a_J(\text{R}_J - \text{R}_{\text{eq,J}})} \right)^2 \]
where the index J labels the bonds and the \(k_J \text{ , } a_J \text{ and } R_{eq,J}\) are the force constant and equilibrium bond length parameters for the \(J^{th}\) bond.
\(\text{V}_{\text{bend}}\) describes the bending potentials for each triplet of atoms (ABC) that are bonded in a A-B-C manner; it is usually modeled in terms of a harmonic potential for each such bend:
\[ \text{V}_{\text{bend}} = \sum\limits_J \dfrac{1}{2}k^{\theta}_{J}\left( \theta_J - \theta_{\text{eq,J}} \right)^2. \]
The \(\theta_{\text{eq,J}} \text{ and } k^{\theta}_J\) are the equilibrium angles and force constants for the J\(^{th}\) angle.
\(\text{V}_{\text{vanderWaals}}\) represents the van der Waals interactions between all pairs of atoms that are not bonded to one another. It is usually written as a sum over all pairs of such atoms (labeled J and K) of a Lennard-Jones 6-12 potential:
\[ \text{V}_{\text{vanderWaals}} = \sum\limits_{J<K} \left[ a_{J,K}(R_{J,K})^{-12} - b_{J,K}(R_{J,K})^{-6} \right] \]
where \(a_{J,K} \text{ and } b_{J,K}\) are parameters relating to the repulsive and dispersion-attraction forces, respectively, for the \(\text{J}^{\text{th}} \text{ and } \text{K}^{\text{th}}\) atoms.
\(\text{V}_{\text{torsion}}\) contributions describe the dependence of V on angles of rotation about single bonds. For example, rotation of a CH\(_3\) group around the single bond connecting the carbon atom to another group may have an angle dependence of the form:
\[ \text{V}_{\text{torsion}} = V_0(1 - \cos(3\theta)) \]
where \(\theta\) is the torsion rotation angle, and \(V_0\) is the magnitude of the interaction between the C-H bonds and the group on the atom bonded to carbon.
\(\text{V}_{\text{electrostatic}}\) contains the interactions among polar bonds or other polar groups (including any charged groups). It is usually written as a sum over pairs of atomic centers (J and K) of Coulombic interactions between fractional charges {Q\(_J\)} (chosen to represent the bond polarity) on these atoms:
\[ \text{V}_{\text{electrostatic}} = \sum\limits_{J<K}\dfrac{\text{Q}_J\text{Q}_K}{\text{R}_{J,K}} \]
Although the total potential V as written above contains many components, each is a relatively simple function of the Cartesian positions of the atomic centers. Therefore, it is relatively straightforward to evaluate V and its gradient along all 3N Cartesian directions in a computationally efficient manner. For this reason, the use of such empirical force fields in so-called molecular mechanics simulations of classical dynamics is widely used for treating large organic and biological molecules.
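A minimal sketch of how such a potential and its gradient along all 3N Cartesian directions are evaluated. Only the harmonic bond and Coulomb terms are included, the parameters are made up for demonstration, and the gradient is taken by central finite differences rather than the analytic derivatives a real molecular mechanics code would use.

```python
import numpy as np

def potential(coords, bonds, charges):
    """V = V_bond + V_electrostatic for atoms at Cartesian positions coords (N x 3).
    bonds: list of (i, j, k_J, R_eq) tuples; charges: fractional charges Q_J."""
    V = 0.0
    for i, j, k_J, r_eq in bonds:
        r = np.linalg.norm(coords[i] - coords[j])
        V += 0.5 * k_J * (r - r_eq) ** 2                     # harmonic bond stretch
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            V += charges[i] * charges[j] / np.linalg.norm(coords[i] - coords[j])
    return V

def gradient(coords, bonds, charges, h=1e-6):
    """dV/dX_k along all 3N Cartesian directions by central differences."""
    g = np.zeros_like(coords)
    for idx in np.ndindex(*coords.shape):
        dp, dm = coords.copy(), coords.copy()
        dp[idx] += h
        dm[idx] -= h
        g[idx] = (potential(dp, bonds, charges) - potential(dm, bonds, charges)) / (2 * h)
    return g

# Illustrative bent three-atom system with made-up parameters:
coords = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [1.1, 1.0, 0.0]])
bonds = [(0, 1, 450.0, 1.0), (1, 2, 450.0, 1.0)]
charges = [-0.4, 0.8, -0.4]
V = potential(coords, bonds, charges)
grad = gradient(coords, bonds, charges)
```

A useful sanity check is translational invariance: because V depends only on interatomic distances, the forces on all atoms sum to (nearly) zero.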
Initial Conditions
No single trajectory can be used to simulate chemical reactions or collisions that relate to realistic experiments. To generate classical trajectories that are characteristic of particular experiments, one must choose many initial conditions (coordinates and momenta), the collection of which is representative of the experiment. For example, to use an ensemble of trajectories to simulate a molecular beam collision between H and Cl atoms at a collision energy E, one must follow many classical trajectories that have a range of "impact parameters" (b) from zero up to some maximum value b\(_{max}\) beyond which the H\(\cdots\)Cl interaction potential vanishes. The figure shown below describes the impact parameter as the distance of closest approach that a trajectory would have if no attractive or repulsive forces were operative.
Figure 16.3.1: The impact parameter b: the distance of closest approach a trajectory would have if no attractive or repulsive forces were operative.
Moreover, if the energy resolution of the experiment makes it impossible to fix the collision energy closer than an amount \(\delta\)E, one must run collections of trajectories for values of E lying within this range.
If, in contrast, one wishes to simulate thermal reaction rates, one needs to follow trajectories with various E values and various impact parameters b from initiation at t = 0 to their conclusion (at which time the chemical outcome is interrogated). Each of these trajectories must have its outcome weighted by an amount proportional to the Boltzmann factor \( e^{\frac{-E}{RT}} \), where R is the ideal gas constant and T is the temperature, because this factor specifies the probability that a collision occurs with kinetic energy E.
As the complexity of the molecule under study increases, the number of parameters needed to specify the initial conditions also grows. For example, classical trajectories that relate to \(\text{F + H}_2 \rightarrow \text{ HF + H }\) need to be specified by providing (i) an impact parameter for the F to the center of mass of \(\text{H}_2\), (ii) the relative translational energy of the F and \(\text{H}_2\), (iii) the radial momentum and coordinate of the \(\text{H}_2\) molecule's bond length, and (iv) the angular momentum of the \(\text{H}_2\) molecule as well as the angle of the H-H bond axis relative to the line connecting the F atom to the center of mass of the \(\text{H}_2\) molecule. Many such sets of initial conditions must be chosen and the resultant classical trajectories followed to generate an ensemble of trajectories pertinent to an experimental situation.
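The selection of initial conditions can be sketched as follows. This is an illustrative sampling scheme, not the procedure of any particular code: for an isotropic collision, the area between impact parameters b and b + db is 2πb db, so b is drawn with probability proportional to b, and collision energies are drawn with the Boltzmann weight e^(−E/RT) mentioned above. The numerical values of b_max and RT are made up.

```python
import math
import random

random.seed(1)
b_max = 5.0   # impact parameter beyond which the interaction vanishes (illustrative)
RT = 2.5      # R*T in the same energy units as E (illustrative)

def sample_initial_conditions(n):
    """Draw (b, E) pairs for an ensemble of trajectories:
    b with P(b) proportional to b (area weighting 2*pi*b*db),
    E with the Boltzmann weight exp(-E/RT) (an exponential deviate)."""
    conds = []
    for _ in range(n):
        b = b_max * math.sqrt(random.random())          # P(b) ~ b on [0, b_max]
        E = -RT * math.log(1.0 - random.random())       # P(E) ~ exp(-E/RT)
        conds.append((b, E))
    return conds

ensemble = sample_initial_conditions(20000)
mean_b = sum(b for b, _ in ensemble) / len(ensemble)    # approaches 2*b_max/3
mean_E = sum(E for _, E in ensemble) / len(ensemble)    # approaches RT
```

The ensemble averages converge to the analytic values, a quick check that the sampling weights are implemented correctly.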
It should be clear that even the classical mechanical simulation of chemical experiments involves considerable effort because no single trajectory can represent the experimental situation. Many trajectories, each with different initial conditions selected so they represent, as an ensemble, the experimental conditions, must be followed and the outcome of all such trajectories must be averaged over the probability of realizing each specific initial condition.
Analyzing Final Conditions
Even after classical trajectories have been followed from t = 0 until the outcomes of the collisions are clear, one needs to properly relate the fate of each trajectory to the experimental situation. For the \(\text{F + H}_2 \rightarrow \text{ HF + H}\) example used above, one needs to examine each trajectory to determine, for example, (i) whether HF + H products are formed or a non-reactive collision regenerating F + \(\text{H}_2\) has occurred, (ii) the amount of rotational energy and angular momentum that is contained in the HF product molecule, (iii) the amount of relative translational energy that remains in the H + FH products, and (iv) the amount of vibrational energy that ends up in the HF product molecule.
Because classical rather than quantum mechanical equations are used to follow the time evolution of the molecular system, there is no guarantee that the amount of energy or angular momentum found in degrees of freedom for which these quantities should be quantized will be so. For example, \(\text{ F + H}_2 \rightarrow \text{ HF + H }\) trajectories may produce HF molecules with internal vibrational energy that is not a half-integral multiple of the fundamental vibrational frequency \(\omega\) of the HF bond. Also, the rotational angular momentum of the HF molecule may not fit the formula \( J(J+1)\frac{h^2}{8\pi^2I} \), where I is HF's moment of inertia.
To connect such purely classical mechanical results more closely to the world of quantized energy levels, a method known as "binning" is often used. In this technique, one assigns the outcome of a classical trajectory to the particular quantum state (e.g., to a vibrational state v or a rotational state J of the HF molecule in the above example) whose quantum energy is closest to the classically determined energy. For the HF example at hand, the classical vibrational energy \(\text{E}_{\text{cl,vib}}\) is simply used to define, as the closest integer, a vibrational quantum number v according to:
\[ v = \dfrac{\text{E}_{\text{cl,vib}}}{\hbar\omega}-\dfrac{1}{2}. \]
Likewise, a rotational quantum number J can be assigned as the closest integer to that determined by using the classical rotational energy \(\text{E}_{\text{cl,rot}}\) in the formula:
\[ J = \dfrac{1}{2}\left[ \sqrt{1+\dfrac{32\pi^2IE_{\text{cl,rot}}}{h^2}} -1 \right] \]
which is the solution of the quadratic equation \( J(J+1)\frac{h^2}{8\pi^2I} = \text{E}_{\text{cl,rot}} \). By following many trajectories and assigning vibrational and rotational quantum numbers to the product molecules formed in each trajectory, one can generate histograms giving the frequency with which each product-molecule quantum state is observed for the ensemble of trajectories used to simulate the experiment of interest. In this way, one can approximately extract product-channel quantum state distributions from classical trajectory simulations.
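The binning prescription translates directly into code. Units are arbitrary but consistent, and the function names are illustrative:

```python
import math

def bin_vibration(E_cl_vib, hbar_omega):
    """Closest integer v to E_cl_vib/(hbar*omega) - 1/2."""
    return round(E_cl_vib / hbar_omega - 0.5)

def bin_rotation(E_cl_rot, I, h):
    """Closest integer J solving J(J+1) h^2 / (8 pi^2 I) = E_cl_rot."""
    J = 0.5 * (math.sqrt(1.0 + 32.0 * math.pi ** 2 * I * E_cl_rot / h ** 2) - 1.0)
    return round(J)

# A classical vibrational energy of 2.6 quanta lies closest to v = 2,
# since the quantum energies for v = 2 and v = 3 are 2.5 and 3.5 quanta.
v = bin_vibration(2.6, 1.0)
```

Accumulating these integer assignments over the whole ensemble of trajectories produces the product-state histograms described above.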
cbcc87dc4f3f03a9 | Landau Genius Scale ranking of the smartest physicists ever
Lev Landau (1908-1968) was one of the Soviet Union's best physicists. He made contributions to nuclear theory, quantum field theory, and astrophysics, among other fields. In 1962, he won the Nobel Prize in Physics for developing the mathematical theory of superfluidity. Landau also wrote an immensely influential textbook on physics, teaching generations of scientists.
A brilliant mind, Landau liked to classify everything in his life. He ranked people by their intelligence, beauty (he had a penchant for blondes), contributions to science, how they dressed, and even how they talked – often with a healthy dose of sarcasm.
One of the most famous of Landau's classifications that has been passed down is his ranking of the greatest physicists of the 20th century. Of course, it doesn't include later physicists, as he died in 1968, but it arguably covers the most significant names.
This scale is logarithmic, meaning that a physicist in rank 1 contributed (according to Landau) ten times more than a physicist in rank 2, and so forth. In other words, the higher the number, the less valuable the physicist.
Here's how this scale broke down:
Rank 0.5 – Albert Einstein (1879 - 1955)
Einstein, the creator of the Theory of General Relativity, is in a class of his own. Landau thought he was by far the greatest mind among a very impressive group that redefined modern physics.
Landau added, however, that if the list were expanded to scientists of previous centuries, Isaac Newton (1643 - 1727), the titan of classical physics, would join Einstein in first place at 0.5.
Rank 1
The group in this class of the smartest physicists included the top minds that developed the theories of quantum mechanics.
Werner Heisenberg (1901 - 1976) - a German theoretical physicist who achieved pop-culture fame as the namesake of Walter White's alter ego in Breaking Bad. He is known for the Heisenberg Uncertainty Principle, and his 1932 Nobel Prize citation flatly states that it was awarded for nothing less than "the creation of quantum mechanics".
Erwin Schrödinger (1887 - 1961) - an Austrian-Irish physicist who gave us the famous "Schrödinger's Cat" thought experiment and other mind-benders from quantum mechanics. The Nobel laureate's Schrödinger equation calculates the wave function of a system and how it changes over time.
Paul Dirac (1902 - 1984) - another quantum mechanics giant, this English theoretical physicist shared the 1933 Nobel Prize with Erwin Schrödinger "for the discovery of new productive forms of atomic theory."
Niels Bohr (1885 - 1962) - a Danish physicist who made founder-level additions to what we know of atomic structure and quantum theory, which led to his 1922 Nobel Prize in Physics.
Satyendra Nath Bose (1894 - 1974) - an Indian mathematician and physicist, known for his quantum mechanics work. He collaborated with Einstein to develop the Bose-Einstein statistics and the theory of the Bose–Einstein condensate. Boson particles are named after him.
Eugene Wigner (1902 - 1995) - a Hungarian-American theoretical physicist who received the 1963 Nobel Prize in Physics for work on the theory of the atomic nucleus and the elementary particles. Famously, he took part in the meeting with Leo Szilard and Albert Einstein that led to them writing a letter to President Franklin D. Roosevelt which resulted in the creation of the Manhattan Project.
Louis de Broglie (1892 - 1987) - a French theorist who made key contributions to quantum theory. He proposed the wave nature of electrons, suggesting that all matter has wave properties – an example of the concept of wave-particle duality, central to the theory of quantum mechanics.
Enrico Fermi (1901 - 1954) - an Italian-American physicist who has been called the "architect of the nuclear age" as well as the "architect of the atomic bomb". He also created the world's first nuclear reactor and won the 1938 Nobel Prize in Physics for work on induced radioactivity and for discovering transuranium elements.
Wolfgang Pauli (1900-1958) - an Austrian theoretical physicist, known as one of the pioneers of quantum physics. He won the 1945 Nobel Prize in Physics for discovering a new law of nature, the exclusion principle (aka the Pauli principle), and for developing spin theory.
Max Planck (1858-1947) - a German theoretical physicist who won the 1918 Nobel Prize in Physics for his discovery of energy quanta. He was the originator of quantum theory, the physics of atomic and subatomic processes.
Rank 2.5
Rank 2.5 is where Landau initially ranked himself, rather modestly, thinking he hadn't produced any foundational accomplishments. He later moved himself up to 1.5 as his achievements mounted.
2587fa7e1d859c59 | Path integral formulation
From Wikipedia, the free encyclopedia
This article is about a formulation of quantum mechanics. For integrals along a path, also known as line or contour integrals, see line integral.
This formulation has proven crucial to the subsequent development of theoretical physics, because manifest Lorentz covariance (time and space components of quantities enter equations in the same way) is easier to achieve than in the operator formalism of canonical quantization. Unlike previous methods, the path-integral allows a physicist to easily change coordinates between very different canonical descriptions of the same quantum system. Another advantage is that it is in practice easier to guess the correct form of the Lagrangian of a theory, which naturally enters the path integrals, than the Hamiltonian. Possible downsides of the approach include that unitarity (this is related to conservation of probability; the probabilities of all physically possible outcomes must add up to one) of the S-matrix is obscure in the formulation. The path-integral approach has been proved to be equivalent to the other formalisms of quantum mechanics and quantum field theory. Thus, by deriving either approach from the other, problems associated with one or the other approach (as exemplified by Lorentz covariance or unitarity) go away.
The basic idea of the path integral formulation can be traced back to Norbert Wiener, who introduced the Wiener integral for solving problems in diffusion and Brownian motion.[1] This idea was extended to the use of the Lagrangian in quantum mechanics by P. A. M. Dirac in his 1933 paper.[2][3] The complete method was developed in 1948 by Richard Feynman. Some preliminaries were worked out earlier in his doctoral work under the supervision of John Archibald Wheeler. The original motivation stemmed from the desire to obtain a quantum-mechanical formulation for the Wheeler–Feynman absorber theory using a Lagrangian (rather than a Hamiltonian) as a starting point.
Quantum action principle
In quantum mechanics, as in classical mechanics, the Hamiltonian is the generator of time-translations. This means that the state at a slightly later time differs from the state at the current time by the result of acting with the Hamiltonian operator (multiplied by the negative imaginary unit, i). For states with a definite energy, this is a statement of the de Broglie relation between frequency and energy, and the general relation is consistent with that plus the superposition principle.
The Hamiltonian in classical mechanics is derived from a Lagrangian, which is a more fundamental quantity relative to special relativity. The Hamiltonian indicates how to march forward in time, but the time is different in different reference frames. So the Hamiltonian is different in different frames, and this type of symmetry is not apparent in the original formulation of quantum mechanics.
In quantum mechanics, the Legendre transform is hard to interpret, because the motion is not over a definite trajectory. In classical mechanics, with discretization in time, the Legendre transform becomes,
where the partial derivative with respect to \(\dot{q}\) holds q(t + ε) fixed. The inverse Legendre transform is:
If the multiplications implicit in this formula are reinterpreted as matrix multiplications, the first factor is
\[ e^{-ipq/\hbar}, \]
which changes basis from q to p. Next comes
\[ e^{-i\epsilon H(q,p)/\hbar}, \]
which evolves an infinitesimal time into the future. Finally, the last factor in this interpretation is
\[ e^{ip\,q(t+\epsilon)/\hbar}, \]
which means change basis back to q at a later time.
This is not very different from just ordinary time evolution: the H factor contains all the dynamical information – it pushes the state forward in time. The first part and the last part are just Fourier transforms to change to a pure q basis from an intermediate p basis.
Dirac observed that this quantity "corresponds to" \( \exp\left(\tfrac{i}{\hbar}\int L \, dt\right) \) (Dirac 1933, p. 69).
Feynman's interpretation
Dirac's work did not provide a precise prescription to calculate the sum over paths, and he did not show that one could recover the Schrödinger equation or the canonical commutation relations from this rule. This was done by Feynman.[4] Feynman's postulates are:
1. The probability for an event is given by the squared modulus of a complex number called the "probability amplitude".
2. The probability amplitude is given by adding together the contributions of all paths in configuration space.
3. The contribution of a path is proportional to eiS/ħ, where S is the action given by the time integral of the Lagrangian along the path.
Because contributions from paths far from the stationary action cancel by interference, the classical path arises naturally in the classical limit.
In order to find the overall probability amplitude for a given process, then, one adds up, or integrates, the amplitude of the 3rd postulate over the space of all possible paths of the system in between the initial and final states, including those that are absurd by classical standards. In calculating the probability amplitude for a single particle to go from one space-time coordinate to another, it is correct to include paths in which the particle describes elaborate curlicues, curves in which the particle shoots off into outer space and flies back again, and so forth. The path integral assigns to all these amplitudes equal weight but varying phase, or argument of the complex number. Contributions from paths wildly different from the classical trajectory may be suppressed by interference (see below).
Concrete formulation
Feynman's postulates can be interpreted as follows:
Time-slicing definition
For a particle in a smooth potential, the path integral is approximated by zigzag paths, which in one dimension is a product of ordinary integrals. For the motion of the particle from position xa at time ta to xb at time tb, the time sequence
can be divided up into n + 1 smaller segments \(t_j - t_{j-1}\), where j = 1, ..., n + 1, of fixed duration
\[ \varepsilon = \frac{t_b - t_a}{n+1}. \]
This process is called time-slicing.
An approximation for the path integral can be computed as proportional to
where L(x,v,t) is the Lagrangian of the one-dimensional system with position variable x(t) and velocity \(v = \dot{x}(t)\) considered (see below), and dxj corresponds to the position at the jth time step, if the time integral is approximated by a sum of n terms.[5]
In the limit n → ∞, this becomes a functional integral, which, apart from a nonessential factor, is directly the product of the probability amplitudes \(\langle x_b, t_b | x_a, t_a \rangle\) (more precisely, since one must work with a continuous spectrum, the respective densities) to find the quantum mechanical particle at ta in the initial state xa and at tb in the final state xb.
where H is the Hamiltonian,
and the abovementioned "zigzagging" corresponds to the appearance of the terms:
in the Riemann sum approximating the time integral, which are finally integrated over x1 to xn with the integration measure dx1...dxn; \(\tilde{x}_j\) is an arbitrary value of the interval corresponding to j, e.g. its center, \((x_j + x_{j-1})/2\).
the singularity is removed and a time-sliced approximation exists, that is exactly integrable, since it can be made harmonic by a simple coordinate transformation, as discovered in 1979 by İsmail Hakkı Duru and Hagen Kleinert.[6][7] The combination of a path-dependent time transformation and a coordinate transformation is an important tool to solve many path integrals and is called generically the Duru–Kleinert transformation.
Free particle
The path integral representation gives the quantum amplitude to go from point x to point y as an integral over all paths. For a free particle action (for simplicity let m = 1, ħ = 1):
the integral can be evaluated explicitly.
Splitting the integral into time slices:
and the result is:
The proportionality constant is not really determined by the time slicing approach, only the ratio of values for different endpoint choices is determined. The proportionality constant should be chosen to ensure that between each two time-slices the time evolution is quantum-mechanically unitary, but a more illuminating way to fix the normalization is to consider the path integral as a description of a stochastic process.
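One way to see the normalization requirement concretely is in imaginary time, where the free propagator becomes the heat kernel; only the correctly normalized kernel composes consistently when an intermediate time slice is integrated out. A minimal numerical check, with m = ħ = 1 and illustrative endpoint values:

```python
import numpy as np

def heat_kernel(x, y, t):
    """Wick-rotated free propagator (m = hbar = 1):
    K(x, y; t) = exp(-(x - y)^2 / (2 t)) / sqrt(2 pi t)."""
    return np.exp(-(x - y) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)

# Composition over one intermediate time slice:
#   K(x, z; t1 + t2) = integral dy  K(x, y; t1) K(y, z; t2)
y = np.linspace(-20.0, 20.0, 4001)
dy = y[1] - y[0]
x, z, t1, t2 = 0.3, -0.7, 0.5, 1.5
lhs = heat_kernel(x, z, t1 + t2)
rhs = np.sum(heat_kernel(x, y, t1) * heat_kernel(y, z, t2)) * dy
```

Any other choice of the overall constant in front of the Gaussian would spoil this composition property, which is the time-slicing consistency condition in Euclidean form.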
This means that any superposition of Ks will also obey the same equation, by linearity. Defining
then ψt obeys the free Schrödinger equation just as K does:
Simple harmonic oscillator
The Lagrangian for the simple harmonic oscillator is
Write its trajectory x(t) as the classical trajectory plus some perturbation, x(t) = xc(t) + δx(t) and the action as S = Sc + δS. The classical trajectory can be written as:
This trajectory corresponds to the classical action:
Next, expand the non-classical contribution to the action δS, as a Fourier series, which gives
This means the propagator is
for some normalization
Using the infinite product representation of the sinc function
the propagator can be written as
Let \(T = t_f - t_i\). We can write our propagator in terms of energy eigenstates as,
Using the identities \( i \sin \omega T = \tfrac{1}{2} e^{i\omega T}\left(1 - e^{-2i\omega T}\right) \) and \( \cos \omega T = \tfrac{1}{2} e^{i\omega T}\left(1 + e^{-2i\omega T}\right) \),
We can absorb all the terms after the first \(e^{-i\omega T/2}\) into R(T), thereby giving,
We can expand R(T) in powers of \(e^{-i\omega T}\). All the terms in that expansion get multiplied by the \(e^{-i\omega T/2}\) factor in the front, so we get terms that look like
\[ e^{-i\omega T/2}\, e^{-i n \omega T} = e^{-i\left(n + \frac{1}{2}\right)\omega T}. \]
Comparing that to the eigenstate expansion, we get the energy spectrum of the simple harmonic oscillator:
\[ E_n = \left(n + \tfrac{1}{2}\right)\hbar\omega. \]
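The spectrum obtained this way, En = (n + 1/2)ħω, can be cross-checked against the closed form of its Boltzmann-weighted sum, since Σn e^(−β(n+1/2)ω) is a geometric series equal to e^(−βω/2)/(1 − e^(−βω)). A small numerical sketch with ħ = 1 and illustrative values of ω and β:

```python
import math

omega, beta = 1.3, 2.0   # illustrative frequency and inverse temperature (hbar = 1)

# Closed form of the geometric series over the spectrum E_n = (n + 1/2) * omega:
closed_form = math.exp(-beta * omega / 2) / (1.0 - math.exp(-beta * omega))

# Truncated direct sum over the first 200 levels:
partial_sum = sum(math.exp(-beta * (n + 0.5) * omega) for n in range(200))
```

The truncation error is exponentially small, so the two agree to machine precision.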
The Schrödinger equation
Equations of motion
Since the states obey the Schrödinger equation, the path integral must reproduce the Heisenberg equations of motion for the averages of the x and \(\dot{x}\) variables, but it is instructive to see this directly. The direct approach shows that the expectation values calculated from the path integral reproduce the usual ones of quantum mechanics.
Start by considering the path integral with some fixed initial state
Now note that x(t) at each separate time is a separate integration variable. So it is legitimate to change variables in the integral by shifting: x(t) = u(t) + ε(t) where ε(t) is a different shift at each time but ε(0) = ε(T) = 0, since the endpoints are not integrated:
The change in the integral from the shift is, to first infinitesimal order in ε:
which, integrating by parts in t, gives:
this is the Heisenberg equation of motion.
Stationary phase approximation
If the variation in the action exceeds ħ by many orders of magnitude, we typically have destructive phase interference other than in the vicinity of those trajectories satisfying the Euler–Lagrange equation, which is now reinterpreted as the condition for constructive phase interference. This can be shown using the method of stationary phase applied to the propagator. As ħ decreases, the exponential in the integral oscillates rapidly in the complex domain for any change in the action. Thus, in the limit that ħ goes to zero, only points where the classical action does not vary contribute to the propagator.
Canonical commutation relations
Note that the distance that a random walk moves is proportional to \(\sqrt{t}\), so that:
The quantity xẋ is ambiguous, with two possible meanings:
In elementary calculus, the two are only different by an amount which goes to 0 as ε goes to 0. But in this case, the difference between the two is not 0:
Defining the time order to be the operator order:
For a general statistical action, a similar argument shows that
Particle in curved space
The path integral and the partition function
is the action of the classical problem in which one investigates the path starting at time t = 0 and ending at time t = T, and Dx denotes integration over all paths. In the classical limit, S[x] ≫ ħ, the path of minimum action dominates the integral, because the phase of any path away from this fluctuates rapidly and different contributions cancel.[9]
The connection with statistical mechanics follows. Considering only paths which begin and end in the same configuration, perform the Wick rotation it = τ, i.e., make time imaginary, and integrate over all possible beginning-ending configurations. The path integral now resembles the partition function of statistical mechanics defined in a canonical ensemble with inverse temperature proportional to imaginary time, 1/T = kBτ/ħ. Strictly speaking, though, this is the partition function for a statistical field theory.
Clearly, such a deep analogy between quantum mechanics and statistical mechanics cannot be dependent on the formulation. In the canonical formulation, one sees that the unitary evolution operator of a state is given by
where the state α is evolved from time t = 0. If one makes a Wick rotation here, the amplitude to go from any state back to the same state in (imaginary) time iT is given by
which is precisely the partition function of statistical mechanics for the same system at the temperature quoted earlier. One aspect of this equivalence was also known to Erwin Schrödinger, who remarked that the equation named after him looked like the diffusion equation after Wick rotation.
Measure theoretic factors
This factor is needed to restore unitarity.
For instance, if
then it means that each spatial slice is multiplied by the measure \(\sqrt{g}\). This measure cannot be expressed as a functional multiplying the Dx measure because they belong to entirely different classes.
Quantum field theory
The propagator
This is called the propagator. Superposing different values of the initial position x with an arbitrary initial state ψ0(x) constructs the final state.
and in p-space the proportionality factor here is constant in time, as will be verified in a moment. The Fourier transform in time, extending K(p; T) to be zero for negative times, gives Green's function, or the frequency space propagator:
It is also possible to reexpress the nonrelativistic time evolution in terms of propagators which go toward the past, since the Schrödinger equation is time-reversible. The past propagator is the same as the future propagator except for the obvious difference that it vanishes in the future, and in the Gaussian t is replaced by −t. In this case, the interpretation is that these are the quantities to convolve the final wavefunction with so as to get the initial wavefunction.
Since the only change is the sign of E and ε, the parameter E in the Green's function can either be the energy if the paths are going toward the future, or the negative of the energy if the paths are going toward the past.
The integral above is not trivial to interpret, because of the square root. Fortunately, there is a heuristic trick. The sum is over the relativistic arclength of the path of an oscillating quantity, and like the nonrelativistic path integral should be interpreted as slightly rotated into imaginary time. The function K(xy,τ) can be evaluated when the sum is over paths in Euclidean space.
This describes a sum over all paths of length Τ of the exponential of minus the length. This can be given a probability interpretation. The sum over all paths is a probability average over a path constructed step by step. The total number of steps is proportional to Τ, and each step is less likely the longer it is. By the central limit theorem, the result of many independent steps is a Gaussian of variance proportional to Τ.
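The central-limit statement (the endpoint distribution is a Gaussian whose variance grows linearly with the number of steps) can be checked by direct simulation; the step distribution and sample sizes here are illustrative:

```python
import random

random.seed(0)

def endpoint_variance(n_steps, n_walks=20000):
    """Sample variance of the endpoint of a walk of unit-variance Gaussian steps."""
    total = 0.0
    for _ in range(n_walks):
        x = sum(random.gauss(0.0, 1.0) for _ in range(n_steps))
        total += x * x
    return total / n_walks

v10 = endpoint_variance(10)
v40 = endpoint_variance(40)
# Quadrupling the number of steps quadruples the endpoint variance.
```

This diffusive scaling is exactly the proportionality between variance and Τ invoked above.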
The usual definition of the relativistic propagator only asks for the amplitude to travel from x to y, after summing over all the possible proper times it could take.
where W(Τ) is a weight factor, the relative importance of paths of different proper time. By the translation symmetry in proper time, this weight can only be an exponential factor, and can be absorbed into the constant α.
This is the Schwinger representation. Taking a Fourier transform over the variable (x − y) can be done for each value of Τ separately, and because each separate Τ contribution is a Gaussian, its Fourier transform is another Gaussian with reciprocal width. So in p-space, the propagator can be reexpressed simply:
which is the Euclidean propagator for a scalar particle. Rotating p0 to be imaginary gives the usual relativistic propagator, up to a factor of i and an ambiguity which will be clarified below.
This expression can be interpreted in the nonrelativistic limit, where it is convenient to split it by partial fractions:
Unlike the nonrelativistic case, it is impossible to produce a relativistic theory of local particle propagation without including antiparticles. All local differential operators have inverses which are nonzero outside the light cone, meaning that it is impossible to keep a particle from travelling faster than light. Such a particle cannot have a Green's function which is only nonzero in the future in a relativistically invariant theory.
Functionals of fields
Expectation values
In quantum field theory, if the action is given by the functional S of field configurations (which only depends locally on the fields), then the time-ordered vacuum expectation value \(\langle F \rangle\) of a polynomially bounded functional F is given by
The symbol Dϕ here is a concise way to represent the infinite-dimensional integral over all possible field configurations on all of spacetime. As stated above, the unadorned path integral in the denominator ensures proper normalization.
As a probability
Strictly speaking, the only question that can be asked in physics is: "What fraction of states satisfying condition A also satisfy condition B?" The answer is a number between 0 and 1 that can be interpreted as a conditional probability, written P(B|A). In terms of path integration, since P(B|A) = P(A∩B)/P(A), this means:
where the functional Oin[ϕ] is the superposition of all incoming states that could lead to the states we are interested in. In particular, this could be a state corresponding to the state of the Universe just after the Big Bang, although for actual calculation this can be simplified using heuristic methods. Since this expression is a quotient of path integrals, it is naturally normalized.
Schwinger–Dyson equations
which now becomes
for any polynomially-bounded functional F.
in the DeWitt notation.
These equations are the analog of the on-shell Euler–Lagrange equations. The time ordering is taken before the time derivatives inside the S,i.
Note that
Basically, if Dφ eiS[φ] is viewed as a functional distribution (this shouldn't be taken too literally as an interpretation of QFT, unlike its Wick-rotated statistical mechanics analogue, because we have time-ordering complications here!), then ⟨φ(x1) ... φ(xn)⟩ are its moments and Z is its Fourier transform.
and G is a functional of J, then
Then, from the properties of the functional integrals
we get the "master" Schwinger–Dyson equation:
If the functional measure is not translationally invariant, it might be possible to express it as the product M[φ] Dφ where M is a functional and Dφ is a translationally invariant measure. This is true, for example, for nonlinear sigma models where the target space is diffeomorphic to Rn. However, if the target manifold is some topologically nontrivial space, the concept of a translation does not even make any sense.
In that case, we would have to replace the S in this equation by another functional
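As a concrete illustration of the master Schwinger–Dyson equation in the simplest, translation-invariant case, consider the free scalar field. This is a standard textbook check, sketched here in the notation above (S,i denotes the functional derivative of the action; sign and i conventions vary between texts, and the convention ⟨S,i F⟩ = i⟨F,i⟩ is assumed here):

```latex
% Free scalar action:
S[\varphi] = \int d^dx\, \tfrac{1}{2}\left(\partial_\mu\varphi\,\partial^\mu\varphi - m^2\varphi^2\right),
\qquad
S_{,i} = \frac{\delta S}{\delta\varphi(x)} = -\left(\Box + m^2\right)\varphi(x).
% Choosing F[\varphi] = \varphi(y) in the master equation gives
\left(\Box_x + m^2\right)\bigl\langle \varphi(x)\,\varphi(y)\bigr\rangle = -i\,\delta^d(x-y),
```

i.e. the time-ordered two-point function is a Green's function of the Klein–Gordon operator — the Feynman propagator, once boundary conditions are fixed.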
Ward–Takahashi identities
Let's also assume
which implies
Now, let's assume even further that Q is a local integral
so that
Then, we would have
The above two equations are the Ward–Takahashi identities.
The need for regulators and renormalization
The path integral in quantum-mechanical interpretation
Quantum gravity
Whereas in quantum mechanics the path integral formulation is fully equivalent to other formulations, it may be possible to extend it to quantum gravity, which would make it different from the Hilbert-space model. Feynman had some success in this direction, and his work has been extended by Hawking and others.[11] Approaches that use this method include causal dynamical triangulations and spinfoam models.
Quantum tunneling
Quantum tunnelling can be modeled by using the path integral formulation to determine the action of the trajectory through a potential barrier. Using the WKB approximation, the tunneling rate (Γ) can be determined to be of the form
with the effective action Seff and pre-exponential factor Ao. This form is specifically useful in a dissipative system, in which the system and its surroundings must be modeled together. Using the Langevin equation to model Brownian motion, the path integral formulation can be used to determine an effective action and pre-exponential factor and thereby see the effect of dissipation on tunnelling.[12] From this model, tunneling rates of macroscopic systems (at finite temperatures) can be predicted.
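The exponent in such a tunneling rate can be evaluated numerically. The sketch below (illustrative units with ħ = 1; the barrier and particle parameters are made up for the example) computes the WKB barrier-penetration exponent 2∫√(2m(V−E))dx by the midpoint rule and checks it against the closed form for a square barrier.

```python
import math

HBAR = 1.0  # natural units; all quantities below are illustrative

def wkb_exponent(V, E, m, a, b, n=10000):
    """Numerically evaluate the WKB barrier-penetration exponent
    (2/hbar) * int_a^b sqrt(2 m (V(x) - E)) dx by the midpoint rule."""
    dx = (b - a) / n
    s = 0.0
    for i in range(n):
        x = a + (i + 0.5) * dx
        s += math.sqrt(max(2.0 * m * (V(x) - E), 0.0))
    return 2.0 * s * dx / HBAR

# Square barrier of height V0 on [0, L]: the exponent has the closed form
# 2 L sqrt(2 m (V0 - E)) / hbar, so the numeric result should match it.
m, E, V0, L = 1.0, 0.5, 2.0, 1.0
num = wkb_exponent(lambda x: V0, E, m, 0.0, L)
exact = 2.0 * L * math.sqrt(2.0 * m * (V0 - E)) / HBAR
rate_suppression = math.exp(-num)  # Gamma ~ A0 * exp(-S_eff / hbar)
```

For a smooth barrier V(x) the closed form is lost, but the same numerical integral still gives the suppression factor.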
See also
1. ^ Chaichian, Masud; Demichev, Andrei Pavlovich (2001). "Introduction". Path Integrals in Physics Volume 1: Stochastic Process & Quantum Mechanics. Taylor & Francis. p. 1ff. ISBN 0-7503-0801-X.
2. ^ Dirac, Paul A. M. (1933). "The Lagrangian in Quantum Mechanics" (PDF). Physikalische Zeitschrift der Sowjetunion. 3: 64–72.
3. ^ Van Vleck, John H. (1928). "The correspondence principle in the statistical interpretation of quantum mechanics". Proceedings of the National Academy of Sciences of the United States of America. 14 (2): 178–188. Bibcode:1928PNAS...14..178V. doi:10.1073/pnas.14.2.178. PMC 1085402. PMID 16577107.
4. ^ Both noted that in the limit of action that is large compared to the reduced Planck's constant ħ (using natural units, ħ = 1), or the classical limit, the path integral is dominated by solutions which are in the neighborhood of stationary points of the action.
6. ^ Duru, İ. H.; Kleinert, Hagen (1979-06-18). "Solution of the path integral for the H-atom" (PDF). Physics Letters. 84B (2): 185–188. Bibcode:1979PhLB...84..185D. doi:10.1016/0370-2693(79)90280-6. Retrieved 2007-11-25.
7. ^ For details see Chapter 13 in Kleinert's book cited above.
8. ^ Feynman, R. P. (1948). "Space-Time Approach to Non-Relativistic Quantum Mechanics". Reviews of Modern Physics. 20 (2): 367–387. Bibcode:1948RvMP...20..367F. doi:10.1103/RevModPhys.20.367.
9. ^ Feynman, Richard P.; Hibbs, Albert R.; Styer, Daniel F. (2010). Quantum Mechanics and Path Integrals. Mineola, NY: Dover Publications. pp. 29–31. ISBN 0-486-47722-3.
10. ^ Sinha, Sukanya; Sorkin, Rafael D. (1991). "A Sum-over-histories Account of an EPR(B) Experiment". Foundations of Physics Letters. 4 (4): 303–335. Bibcode:1991FoPhL...4..303S. doi:10.1007/BF00665892.
11. ^ Gell-Mann, Murray. "Most of the Good Stuff". In Brown, Laurie M.; Rigden, John S. Memories Of Richard Feynman. American Institute of Physics. [ISBN missing]
12. ^ Caldeira, A. O.; Leggett, A. J. (1983). "Quantum tunnelling in a dissipative system". Annals of Physics. 149 (2): 374–456. Bibcode:1983AnPhy.149..374C. doi:10.1016/0003-4916(83)90202-6.
Further reading
External links
Are there any examples of fermionic particles or quasiparticles for which the interaction potential is a globally smooth function? i.e. no singularities or branch points.
As an example, in Flügge's Practical Quantum Mechanics, problem 148 has two repulsive particles on a circle. This is supposed to model the two helium electrons in the ground state. The equation he gives is
$$ -\frac{\hbar^2}{2mr^2}\left(\frac{\partial^2 \psi}{\partial x_1^2}+\frac{\partial^2 \psi}{\partial x_2^2}\right)+V_0\cos(x_1-x_2)\psi=E\psi$$
I don't quite follow why this potential does not have a singularity when $x_2\rightarrow x_1$. Are there other such examples?
To clarify, you want a physical system for which there is a hamiltonian $H$ which is a very good approximation over some energy-range and whose term fourth order in fermions, $\rho(r_1)V(r_1-r_2)\rho(r_2)$, has $V(r)$ a $C^{\infty}$ function over all of space? – BebopButUnsteady Jun 27 '11 at 14:40
Yes. Ideally, $V(r_1-r_2)$ is $C^\infty$. At the very least is there any such model for fermionic particles where $V(r_1-r_2)$ is continuous at $r_1=r_2$? – Greg von Winckel Jun 27 '11 at 15:03
I'm not sure I completely understand the question, but if the two electrons have their spin degrees of freedom in a singlet, then the spatial wavefunction is symmetric under exchange of 1 & 2. There are no nodes in the ground-state wavefunction, so an effective potential doesn't have to introduce any singularities. – wsc Jun 27 '11 at 16:14
Well, there's no particular reason for a textbook problem to actually model a physical system... But one can certainly write something like this as a completely valid approximation. Take Flügge's example with He-3, so it's fermionic [fn. 2]. Say the size of the atoms is very small, much smaller than the scales on which the ground-state wavefunction varies, which is reasonable enough.
Now there should really be a term $V_{repulse}(x_1-x_2)\psi$ where $V_{repulse}$ gets really big when $|x_1-x_2|\rightarrow 0$, to capture the fact that you can't put the two atoms on top of each other [fn. 2]. But this is going to be really short-ranged, almost zero if $|x_1-x_2|$ is significantly bigger than the size of the atom. On the other hand, we know that $\psi(x_1,x_2)\rightarrow 0$ when $x_1 \rightarrow x_2$. So in precisely the region where $V_{repulse}$ would matter, $\psi$ is basically zero, and we can basically ignore $V_{repulse}\psi$. More exactly, the term is proportional to (size of atom)/(size of circle) squared, which could be very small.
So you don't always have to include a repulsive term. It can actually be quite negligible, even though it seems like a fact you can't ignore.
[fn. 1] There are also times when you can ignore the repulsive interactions of bosons, although it's not suppressed as it is for fermions.
[fn. 2] It's not really true that it should diverge as $x_1\rightarrow x_2$. If you really got the two atoms on top of each other, they would stop behaving like pointlike atoms, so your model would stop being applicable, rather than anything going to infinity.
On reading again it seems maybe you want the potential to go to infinity to enforce Pauli exclusion. If that's the case, then the answer is that the Pauli exclusion is a purely geometrical fact, and has nothing to do with potentials. – BebopButUnsteady Jun 27 '11 at 20:58
On further reading I'm not sure what the question is, so you should clarify it. – BebopButUnsteady Jun 27 '11 at 22:17
I have written a spectral code for computing eigenstates of 1D fermion systems with arbitrary confinement and interaction potentials. I am looking for model problems to test the code on and have already tried solving the (no spin) $n$-particle problem $$\left\{-\frac{\hbar^2}{2m}\nabla^2 + \sum\limits_{j=1}^n V_{ext}(x_j) + \sum\limits_{j=1}^n\sum\limits_{k=j+1}^n V_{int}(x_j-x_k)\right\}\psi(\mathbf{x})=E\psi(\mathbf{x}).$$ When I use the Coulomb interaction for $V_{int}$, the method converges quadratically. I am looking for problems with smooth $V_{int}$ to see if the convergence improves. – Greg von Winckel Jun 28 '11 at 6:23
Why not just put in some smooth interaction, say $\frac{1}{(x^2 +1)^2}$ and see if it converges? Or if you're looking for an analytically solved model to compare against, there's a fairly extensive set of 1D fermionic systems that are analytically tractable. The confining potential will probably make things difficult, but if you make the scale of the interaction much smaller than the confinement you can probably get something to work. – BebopButUnsteady Jun 28 '11 at 14:34
I have indeed tried a Lorentzian potential and observed spectral convergence. My hope, however, was not to try an arbitrary smooth potential, but one that is used in practice to model something. Some analytically solvable 1D fermionic systems would be of interest anyway. Do you have a reference for any of those? – Greg von Winckel Jun 29 '11 at 8:00
Apparently the Gaussian effective potential is used a fair amount. This would be something like $$V(x_1,x_2)=V_0 \exp(-\alpha(x_1-x_2)^2).$$ Thanks for the responses.
In atomic physics, effective 1D soft-core Coulomb potentials are routinely used for the interaction between particles, particularly when external fields are present. For example, for the interaction between an electron and a (space-fixed) proton at the center of coordinates: $$V(x) = - \frac{1}{\sqrt{x^2 + \epsilon^2}}$$ where atomic units are used and $\epsilon$ is usually fitted or taken as one. It is much simpler to integrate the time-dependent Schrödinger equation for this potential.
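As a minimal numerical illustration (not the asker's spectral code, and with illustrative grid parameters): a second-order finite-difference discretization of the one-particle Hamiltonian with this soft-core potential, diagonalized with NumPy. The potential is smooth and bounded, V(0) = −1 in atomic units, so the ground-state energy must lie between −1 and 0.

```python
import numpy as np

# Soft-core Coulomb potential from the answer (atomic units, epsilon = 1).
def v_soft(x, eps=1.0):
    return -1.0 / np.sqrt(x**2 + eps**2)

# Finite-difference Hamiltonian H = -1/2 d^2/dx^2 + V on a uniform grid
# with implicit Dirichlet boundary conditions.
L, n = 40.0, 800                 # box [-L/2, L/2]; illustrative choices
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]
lap = (np.diag(np.full(n - 1, 1.0), -1)
       - 2.0 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) / h**2
H = -0.5 * lap + np.diag(v_soft(x))

E = np.linalg.eigvalsh(H)        # eigenvalues in ascending order
ground = E[0]                    # bound, with -1 < ground < 0
```

For epsilon = 1 the ground-state energy comes out around −0.67 hartree; a spectral method should reproduce this with far fewer grid points, which is exactly the kind of comparison the question is after.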
Properties of water
From Wikipedia, the free encyclopedia
"Hydrogen monoxide" redirects here. For the hoax involving the chemical name of water, see Dihydrogen monoxide hoax.
Water (H2O)
[Figures: the basic geometric structure of the water molecule; ball-and-stick model; space-filling model; a drop of water falling towards water in a glass]
IUPAC name: water, oxidane
Other names: hydrogen oxide, dihydrogen monoxide (DHMO), hydrogen monoxide, dihydrogen oxide, hydrogen hydroxide (HH or HOH), hydric acid, hydrohydroxic acid, hydroxic acid, hydrol,[1] μ-oxido dihydrogen
CAS number: 7732-18-5
ChEBI: CHEBI:15377
ChEMBL: ChEMBL1098659
ChemSpider: 937
PubChem: 962
RTECS number: ZC0110000
Molar mass: 18.01528(33) g/mol
Appearance: white solid, or almost colorless, transparent crystalline solid or liquid with a slight hint of blue[2]
Odor: none
Density: liquid 999.9720 kg/m3 ≈ 1 tonne/m3 = 1 kg/L = 1 g/cm3 ≈ 62.4 lb/ft3 (maximum, at ~4 °C); solid 917 kg/m3 = 0.917 tonne/m3 = 0.917 kg/L = 0.917 g/cm3 ≈ 57.2 lb/ft3
Boiling point: 100.02 °C; 212.04 °F; 373.17 K[3][a]
Solubility: poorly soluble in haloalkanes, aliphatic and aromatic hydrocarbons, ethers;[4] improved solubility in carboxylates, alcohols, ketones, amines; miscible with methanol, ethanol, isopropanol, acetone, glycerol
Vapor pressure: 3.1690 kPa or 0.031276 atm[5]
Acidity (pKa): 13.995[6][b]
Basicity (pKb): 13.995
Thermal conductivity: 0.6065 W/(m·K)[8]
Refractive index: 1.3330 (20 °C)[9]
Viscosity: 0.890 cP[10]
Dipole moment: 1.8546 D[11]
Molar heat capacity: 75.375 ± 0.05 J/(mol·K)[12]
Standard molar entropy: 69.95 ± 0.03 J/(mol·K)[12]
Standard enthalpy of formation: −285.83 ± 0.040 kJ/mol[4][12]
Gibbs free energy of formation: −237.24 kJ/mol[4]
Main hazards: drowning; water intoxication; avalanche (as snow); see also the dihydrogen monoxide hoax
Flash point: non-flammable
Related compounds: hydrogen sulfide, hydrogen selenide, hydrogen telluride, hydrogen polonide, hydrogen peroxide; water vapor; heavy water
Water (H2O) is a polar inorganic compound that is at room temperature a tasteless and odorless liquid, nearly colorless with a hint of blue. It is by far the most studied chemical compound and is described as the "universal solvent" for its ability to dissolve many substances.[13][14] This allows it to be the "solvent of life".[15] It is the only common substance to exist as a solid, liquid, and gas in nature.[16]
Water molecules form hydrogen bonds with each other and are strongly polar. This polarity allows it to separate ions in salts and strongly bond to other polar substances such as alcohols and acids, thus dissolving them. Its hydrogen bonding causes its many unique properties, such as having a solid form less dense than its liquid form, a relatively high boiling point of 100 °C for its molar mass, and a high heat capacity.
Water dissociates slightly into H+ and OH− ions by self-ionization; this regulates the concentrations of H+ and OH− ions in water.
The accepted IUPAC name of water is oxidane or simply water,[17] or its equivalent in different languages, although there are other systematic names which can be used to describe the molecule. Oxidane is only intended to be used as the name of the mononuclear parent hydride used for naming derivatives of water by substituent nomenclature.[18] These derivatives commonly have other recommended names. For example, the name hydroxyl is recommended over oxidanyl for the –OH group. The name oxane is explicitly mentioned by the IUPAC as being unsuitable for this purpose, since it is already the name of a cyclic ether also known as tetrahydropyran.[19][20]
The simplest systematic name of water is hydrogen oxide. This is analogous to related compounds such as hydrogen peroxide, hydrogen sulfide, and deuterium oxide (heavy water).
The polarized form of the water molecule, H+OH−, is also called hydron hydroxide by IUPAC nomenclature.[21]
In keeping with the basic rules of chemical nomenclature, water would have a systematic name of dihydrogen monoxide,[22] but this is not among the names published by the International Union of Pure and Applied Chemistry.[17] It is a rarely used name of water, and mostly used in various hoaxes or spoofs that call for this "lethal chemical" to be banned, such as in the dihydrogen monoxide hoax.
Other systematic names for water include hydroxic acid, hydroxylic acid, and hydrogen hydroxide, using acid and base names.[c] None of these exotic names are used widely.
Water is the chemical substance with chemical formula H2O; one molecule of water has two hydrogen atoms covalently bonded to a single oxygen atom.[23] Water is a tasteless, odorless liquid at ambient temperature and pressure, and appears colorless in small quantities, although it has its own intrinsic very light blue hue.[24][2] Ice also appears colorless, and water vapor is essentially invisible as a gas.
Water is primarily a liquid under standard conditions, which is not predicted from its relationship to other analogous hydrides of the oxygen family in the periodic table, which are gases such as hydrogen sulfide. The elements surrounding oxygen in the periodic table, nitrogen, fluorine, phosphorus, sulfur and chlorine, all combine with hydrogen to produce gases under standard conditions. The reason that water forms a liquid is that oxygen is more electronegative than all of these elements with the exception of fluorine. Oxygen attracts electrons much more strongly than hydrogen, resulting in a net positive charge on the hydrogen atoms, and a net negative charge on the oxygen atom. These atomic charges give each water molecule a net dipole moment. Electrical attraction between water molecules due to this dipole pulls individual molecules closer together, making it more difficult to separate the molecules and therefore raising the boiling point. This attraction is known as hydrogen bonding.
The molecules of water are constantly moving in relation to each other, and the hydrogen bonds are continually breaking and reforming at timescales faster than 200 femtoseconds (2×10−13 seconds).[25] However, these bonds are strong enough to create many of the peculiar properties of water, some of which make it integral to life.
Water can be described as a polar liquid that slightly dissociates (self-ionizes) into a hydronium ion and a hydroxide ion.
2 H2O ⇌ H3O+ + OH−
The dissociation constant for this dissociation is commonly symbolized as Kw and has a value of about 10−14 at 25 °C (the value varies with temperature).
Water, ice, and vapor
Like many substances, water can take numerous forms, which are broadly categorized by phase of matter. The liquid phase is the most common among water's phases (within the Earth's atmosphere and surface) and is the form that is generally denoted by the word "water". The solid phase of water is known as ice and commonly takes the structure of hard, amalgamated crystals, such as ice cubes, or loosely accumulated granular crystals, like snow. For a list of the many different crystalline and amorphous forms of solid H2O, see the article ice. The gaseous phase of water is known as water vapor (or steam), in which water takes the form of a transparent cloud. (Visible steam and clouds are, in fact, water in the liquid form as minute droplets suspended in the air.)
The fourth state of water, that of a supercritical fluid, is much less common than the other three and only rarely occurs in nature, in extremely hostile conditions. When water achieves a specific critical temperature and a specific critical pressure (647 K and 22.064 MPa), the liquid and gas phases merge to one homogeneous fluid phase, with properties of both gas and liquid. A likely example of naturally occurring supercritical water is in the hottest parts of deep water hydrothermal vents, in which water is heated to the critical temperature by volcanic plumes and the critical pressure is caused by the weight of the ocean at the extreme depths where the vents are located. This pressure is reached at a depth of about 2200 meters: much less than the mean depth of the ocean (3800 meters).[26]
Heat capacity and heats of vaporization and fusion
Heat of vaporization of water from melting to critical temperature
Water has a very high specific heat capacity of 4.1814 J/(g·K) at 25 °C – the second highest among all the heteroatomic species (after ammonia) – as well as a high heat of vaporization (40.65 kJ/mol or 2257 kJ/kg at the normal boiling point), both of which are a result of the extensive hydrogen bonding between its molecules. These two unusual properties allow water to moderate Earth's climate by buffering large fluctuations in temperature. According to Josh Willis of NASA's Jet Propulsion Laboratory, the oceans can absorb one thousand times more heat than the atmosphere without changing their temperature much, and are absorbing 80 to 90% of the heat from global warming.[27]
The specific enthalpy of fusion (more commonly known as latent heat) of water is 333.55 kJ/kg at 0 °C: the same amount of energy is required to melt ice as to warm ice from −160 °C up to its melting point, or to heat the same amount of water by about 80 °C. Of common substances, only the latent heat of ammonia is higher. This property confers resistance to melting on the ice of glaciers and drift ice. Before and since the advent of mechanical refrigeration, ice was and still is in common use for retarding food spoilage.
The specific heat capacity of ice at −10 °C is 2.03 J/(g·K)[28] and the heat capacity of steam at 100 °C is 2.08 J/(g·K).[29]
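The equivalences quoted above follow directly from the quoted constants; a quick arithmetic check:

```python
# Quick check of the equivalences quoted above.
latent_heat_fusion = 333.55e3    # J/kg, enthalpy of fusion at 0 C
c_water = 4181.4                 # J/(kg*K), liquid water at 25 C
c_ice = 2030.0                   # J/(kg*K), ice at -10 C

# Temperature rise of liquid water with the same energy as melting:
water_rise = latent_heat_fusion / c_water   # ~79.8 K, i.e. "about 80 C"

# Temperature rise of ice with the same energy as melting:
ice_rise = latent_heat_fusion / c_ice       # ~164 K, consistent with the
# "warm ice from -160 C" figure (which uses an average heat capacity,
# since the heat capacity of ice itself varies with temperature)
```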
Density of water and ice
Density of ice and water as a function of temperature
The density of water is about 1 gram per cubic centimetre (62 lb/cu ft): this relationship was originally used to define the gram.[30] The density varies with temperature, but not linearly: as the temperature increases, the density rises to a peak at 3.98 °C (39.16 °F) and then decreases.[31] This unusual negative thermal expansion below 4 °C (39 °F) is also observed in molten silica.[32] Regular, hexagonal ice is also less dense than liquid water—upon freezing, the density of water decreases by about 9%.[33]
These effects are due to the reduction of thermal motion with cooling, which allows water molecules to form more hydrogen bonds that prevent the molecules from coming close to each other.[31] Below 4 °C, the breakage of hydrogen bonds due to heating allows water molecules to pack closer despite the increase in thermal motion (which tends to expand a liquid); above 4 °C, water expands as the temperature increases.[31] Water near the boiling point is about 4% less dense than water at 4 °C (39 °F).[33][d]
Other substances that expand on freezing are acetic acid, silicon, gallium,[34] germanium, antimony, bismuth, plutonium and also chemical compounds that form spacious crystal lattices with tetrahedral coordination.
Under increasing pressure, ice undergoes a number of transitions to other allotropic forms with higher density than liquid water, such as ice II, ice III, high-density amorphous ice (HDA), and very-high-density amorphous ice (VHDA).[35][36]
Temperature distribution in a lake in summer and winter
The unusual density curve and lower density of ice than of water is vital to life—if water was most dense at the freezing point, then in winter the very cold water at the surface of lakes and other water bodies would sink, the lake could freeze from the bottom up, and all life in them would be killed.[33] Furthermore, given that water is a good thermal insulator (due to its heat capacity), some frozen lakes might not completely thaw in summer.[33] The layer of ice that floats on top insulates the water below.[37] Water at about 4 °C (39 °F) also sinks to the bottom, thus keeping the temperature of the water at the bottom constant (see diagram).[33]
Density of saltwater and ice
WOA surface density
The density of salt water depends on the dissolved salt content as well as the temperature. Ice still floats in the oceans, otherwise they would freeze from the bottom up. However, the salt content of oceans lowers the freezing point by about 1.9 °C[38] (see here for explanation) and lowers the temperature of the density maximum of water to the freezing point. This is why, in ocean water, the downward convection of colder water is not blocked by an expansion of water as it becomes colder near the freezing point. The oceans' cold water near the freezing point continues to sink. So creatures that live at the bottom of cold oceans like the Arctic Ocean generally live in water 4 °C colder than at the bottom of frozen-over fresh water lakes and rivers.
As the surface of salt water begins to freeze (at −1.9 °C[38] for normal salinity seawater, 3.5%) the ice that forms is essentially salt-free, with about the same density as freshwater ice. This ice floats on the surface, and the salt that is "frozen out" adds to the salinity and density of the sea water just below it, in a process known as brine rejection. This denser salt water sinks by convection and the replacing seawater is subject to the same process. This produces essentially freshwater ice at −1.9 °C[38] on the surface. The increased density of the sea water beneath the forming ice causes it to sink towards the bottom. On a large scale, the process of brine rejection and sinking cold salty water results in ocean currents forming to transport such water away from the Poles, leading to a global system of currents called the thermohaline circulation.
Miscibility and condensation
Red line shows saturation
Main article: Humidity
Water is miscible with many liquids, including ethanol in all proportions, forming a single homogeneous liquid. On the other hand, water and most oils are immiscible, usually forming layers according to increasing density from the top. This can be predicted by comparing polarity: water, being a relatively polar compound, will tend to be miscible with liquids of high polarity such as ethanol and acetone, whereas compounds with low polarity, such as hydrocarbons, will tend to be immiscible and poorly soluble.
As a gas, water vapor is completely miscible with air. On the other hand, the maximum water vapor pressure that is thermodynamically stable with the liquid (or solid) at a given temperature is relatively low compared with total atmospheric pressure. For example, if the vapor's partial pressure is 2% of atmospheric pressure and the air is cooled from 25 °C, starting at about 22 °C water will start to condense, defining the dew point, and creating fog or dew. The reverse process accounts for the fog burning off in the morning. If the humidity is increased at room temperature, for example, by running a hot shower or a bath, and the temperature stays about the same, the vapor soon reaches the pressure for phase change, and then condenses out as minute water droplets, commonly referred to as steam.
A gas in this context is referred to as saturated or 100% relative humidity, when the vapor pressure of water in the air is at the equilibrium with vapor pressure due to (liquid) water; water (or ice, if cool enough) will fail to lose mass through evaporation when exposed to saturated air. Because the amount of water vapor in air is small, relative humidity, the ratio of the partial pressure due to the water vapor to the saturated partial vapor pressure, is much more useful. Water vapor pressure above 100% relative humidity is called super-saturated and can occur if air is rapidly cooled, for example, by rising suddenly in an updraft.[e]
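A dew-point estimate of the kind described above can be sketched numerically. The Magnus approximation used below is an assumption for the example (it is not given in the article); the constants 6.112/17.62/243.12 are one commonly used parameter set for saturation vapor pressure over liquid water.

```python
import math

# Magnus approximation for saturation vapor pressure over water, in hPa.
# Illustrative assumption: one common fit, valid roughly for -45..60 C.
def e_sat(t_celsius):
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def dew_point(vapor_pressure_hpa):
    """Invert the Magnus fit: the temperature at which e_sat equals the
    given partial pressure of water vapor (i.e. 100% relative humidity)."""
    g = math.log(vapor_pressure_hpa / 6.112)
    return 243.12 * g / (17.62 - g)

# Air at 25 C holding water vapor at 60% relative humidity: cooling it
# to the dew point saturates the vapor, and condensation (dew/fog) begins.
e = 0.60 * e_sat(25.0)   # partial pressure of the vapor, hPa
td = dew_point(e)        # dew point, a bit under 17 C for this case
```

By construction, air that is already saturated has a dew point equal to its temperature, which gives a simple round-trip check on the two functions.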
Vapor pressure
Vapor pressure diagrams of water
Compressibility
The compressibility of water is a function of pressure and temperature. At 0 °C, in the limit of zero pressure, the compressibility is 5.1×10−10 Pa−1. At the zero-pressure limit, the compressibility reaches a minimum of 4.4×10−10 Pa−1 around 45 °C before increasing again with increasing temperature. As the pressure is increased, the compressibility decreases, being 3.9×10−10 Pa−1 at 0 °C and 100 megapascals (1,000 bar).[39]
The bulk modulus of water is about 2.2 GPa.[40] The low compressibility of non-gases, and of water in particular, leads to their often being assumed as incompressible. The low compressibility of water means that even in the deep oceans at 4 km depth, where pressures are 40 MPa, there is only a 1.8% decrease in volume.[40]
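The 1.8% figure is consistent with the compressibilities quoted above, since to first order the fractional volume change is ΔV/V ≈ κ·ΔP:

```python
# Rough consistency check: with a compressibility between 3.9e-10 and
# 5.1e-10 per Pa, compressing water by 40 MPa (about 4 km of ocean depth)
# shrinks it by roughly kappa * dP, bracketing the 1.8% quoted above.
dP = 40e6             # Pa
low = 3.9e-10 * dP    # ~1.6 % fractional volume change
high = 5.1e-10 * dP   # ~2.0 % fractional volume change
```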
Triple point
The various triple points of water
Phases in stable equilibrium Pressure Temperature
liquid water, ice Ih, and water vapor 611.657 Pa[41] 273.16 K (0.01 °C)
liquid water, ice Ih, and ice III 209.9 MPa 251 K (−22 °C)
liquid water, ice III, and ice V 350.1 MPa 256.2 K (−17.0 °C)
liquid water, ice V, and ice VI 632.4 MPa 273.3 K (0.16 °C)
ice Ih, ice II, and ice III 213 MPa 238 K (−35 °C)
ice II, ice III, and ice V 344 MPa 249 K (−24 °C)
ice II, ice V, and ice VI 626 MPa 203 K (−70 °C)
The temperature and pressure at which solid, liquid, and gaseous water coexist in equilibrium is called the triple point of water. This point is used to define the units of temperature (the kelvin, the SI unit of thermodynamic temperature and, indirectly, the degree Celsius and even the degree Fahrenheit).
As a consequence, water's triple point temperature, as measured in these units, is a prescribed value rather than a measured quantity.
Phase diagram of water
This pressure is quite low, about 1/166 of the normal sea-level barometric pressure of 101,325 Pa. The atmospheric surface pressure on the planet Mars is 610.5 Pa, which is remarkably close to the triple-point pressure. The altitude of this surface pressure was used to define zero elevation or "sea level" on that planet.[42]
Although it is commonly named as "the triple point of water", the stable combination of liquid water, ice I, and water vapor is but one of several triple points on the phase diagram of water. Gustav Heinrich Johann Apollon Tammann in Göttingen produced data on several other triple points in the early 20th century. Kamb and others documented further triple points in the 1960s.[43][44][45]
Melting point
The melting point of ice is 0 °C (32 °F; 273 K) at standard pressure; however, pure liquid water can be supercooled well below that temperature without freezing if the liquid is not mechanically disturbed. It can remain in a fluid state down to its homogeneous nucleation point of about 231 K (−42 °C; −44 °F).[46] The melting point of ordinary hexagonal ice falls slightly under moderately high pressures, by 0.0073 °C (0.0131 °F)/atm[f] or about 0.5 °C (0.90 °F)/70 atm[g][47] as the stabilization energy of hydrogen bonding is exceeded by intermolecular repulsion, but as ice transforms into its allotropes (see crystalline states of ice) above 209.9 MPa (2,072 atm), the melting point increases markedly with pressure, i.e., reaching 355 K (82 °C) at 2.216 GPa (21,870 atm) (triple point of Ice VII[48]).
Electrical properties
Electrical conductivity
Pure water containing no exogenous ions is an excellent insulator, but not even "deionized" water is completely free of ions. Water undergoes auto-ionization in the liquid state when two water molecules form one hydroxide anion (OH−) and one hydronium cation (H3O+).
Because water is such a good solvent, it almost always has some solute dissolved in it, often a salt. If water has even a tiny amount of such an impurity, then it can conduct electricity far more readily.
It is known that the theoretical maximum electrical resistivity for water is approximately 18.2 MΩ·cm (182 kΩ·m) at 25 °C.[49] This figure agrees well with what is typically seen on reverse-osmosis, ultra-filtered and deionized ultra-pure water systems used, for instance, in semiconductor manufacturing plants. A salt or acid contaminant level exceeding even 100 parts per trillion (ppt) in otherwise ultra-pure water begins to noticeably lower its resistivity by up to several kΩ·m.[citation needed]
In pure water, sensitive equipment can detect a very slight electrical conductivity of 0.05501 ± 0.0001 µS/cm at 25.00 °C.[49] Water can also be electrolyzed into oxygen and hydrogen gases, but in the absence of dissolved ions this is a very slow process, as very little current is conducted. In ice, the primary charge carriers are protons (see proton conductor).[50] Ice was previously thought to have a small but measurable conductivity of 1×10−10 S/cm, but this conductivity is now thought to be almost entirely from surface defects; without those, ice is an insulator with an immeasurably small conductivity.[31]
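The maximum-resistivity and intrinsic-conductivity figures above are reciprocals of each other, which makes a quick unit-conversion check possible:

```python
# The quoted maximum resistivity (18.2 MOhm*cm) and the measured intrinsic
# conductivity of pure water (0.05501 uS/cm) should be reciprocals.
conductivity_S_per_cm = 0.05501e-6          # 0.05501 uS/cm in S/cm
resistivity_Mohm_cm = 1.0 / conductivity_S_per_cm / 1e6
# resistivity_Mohm_cm is about 18.18, matching the 18.2 MOhm*cm figure
```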
Polarity, hydrogen bonding and intermolecular structure
A diagram showing the partial charges on the atoms in a water molecule
An important feature of water is its polar nature. The structure has a bent molecular geometry for the two hydrogens from the oxygen vertex. The oxygen atom also has two lone pairs of electrons. One effect usually ascribed to the lone pairs is that the H–O–H gas phase bend angle is 104.48°,[51] which is smaller than the typical tetrahedral angle of 109.47°. The lone pairs are closer to the oxygen atom than the electrons sigma bonded to the hydrogens, so they require more space. The increased repulsion of the lone pairs forces the O–H bonds closer to each other.[52]
Another effect of the electronic structure is that water is a polar molecule. Due to the difference in electronegativity, there is a bond dipole moment pointing from each H to the O, making the oxygen partially negative and each hydrogen partially positive. In addition, the lone pairs of electrons on the O are in the direction opposite to the hydrogen atoms. This results in a large molecular dipole, pointing from a positive region between the two hydrogen atoms to the negative region of the oxygen atom. The charge differences cause water molecules to be attracted to each other (the relatively positive areas being attracted to the relatively negative areas) and to other polar molecules. This attraction contributes to hydrogen bonding, and explains many of the properties of water, such as solvent action.[53]
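The size of the molecular dipole can be estimated by adding the two O–H bond dipoles as vectors along the bisector of the bond angle. The per-bond dipole value used below (~1.5 D) is an illustrative assumption; the angle is the one quoted above:

```python
import math

# Net molecular dipole as the vector sum of two O-H bond dipoles
# separated by the H-O-H angle of 104.48 degrees.
# The per-bond dipole magnitude (~1.5 D) is an illustrative assumption.
mu_OH = 1.5                      # per-bond dipole, debye (assumed)
theta = math.radians(104.48)     # H-O-H angle from the text

mu_net = 2 * mu_OH * math.cos(theta / 2)
print(f"{mu_net:.2f} D")   # ≈ 1.84 D, close to the measured 1.85 D
```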
Although hydrogen bonding is a relatively weak attraction compared to the covalent bonds within the water molecule itself, it is responsible for a number of water's physical properties. These properties include its relatively high melting and boiling point temperatures: more energy is required to break the hydrogen bonds between water molecules. In contrast, hydrogen sulfide (H2S) has much weaker hydrogen bonding due to sulfur's lower electronegativity. H2S is a gas at room temperature, in spite of having nearly twice the molar mass of water. The extra bonding between water molecules also gives liquid water a large specific heat capacity. This high heat capacity makes water a good heat storage medium (coolant) and heat shield.
Proposed structures
Model of hydrogen bonds (1) between molecules of water
A single water molecule can participate in a maximum of four hydrogen bonds because it can accept two bonds using the lone pairs on oxygen and donate two hydrogen atoms. Other molecules like hydrogen fluoride, ammonia and methanol can also form hydrogen bonds. However, they do not show anomalous thermodynamic, kinetic or structural properties like those observed in water because none of them can form four hydrogen bonds: either they cannot donate or accept hydrogen atoms, or there are steric effects in bulky residues. In water, intermolecular tetrahedral structures form due to the four hydrogen bonds, thereby forming an open structure and a three-dimensional bonding network, resulting in the anomalous decrease in density when cooled below 4 °C. This repeated, constantly reorganizing unit defines a three-dimensional network extending throughout the liquid. This view is based upon neutron scattering studies and computer simulations, and it makes sense in the light of the unambiguously tetrahedral arrangement of water molecules in ice structures.
However, there is an alternative theory for the structure of water. In 2004, a controversial paper from Stockholm University suggested that water molecules in liquid form typically bind not to four but to only two others; thus forming chains and rings. The term "string theory of water" (which is not to be confused with the string theory of physics) was coined. These observations were based upon X-ray absorption spectroscopy that probed the local environment of individual oxygen atoms. Water, the team suggests, is a muddle of the two proposed structures. They say that it is a soup flecked with "icebergs" each comprising 100 or so loosely connected molecules that are relatively open and hydrogen bonded. The soup is made of the string structure and the icebergs of the tetrahedral structure.[54]
Cohesion and adhesion
Dew drops adhering to a spider web
Water molecules stay close to each other (cohesion), due to the collective action of hydrogen bonds between water molecules. These hydrogen bonds are constantly breaking, with new bonds being formed with different water molecules; but at any given time in a sample of liquid water, a large portion of the molecules are held together by such bonds.[55]
Water also has high adhesion properties because of its polar nature. On extremely clean/smooth glass the water may form a thin film because the molecular forces between glass and water molecules (adhesive forces) are stronger than the cohesive forces. In biological cells and organelles, water is in contact with membrane and protein surfaces that are hydrophilic; that is, surfaces that have a strong attraction to water. Irving Langmuir observed a strong repulsive force between hydrophilic surfaces. To dehydrate hydrophilic surfaces—to remove the strongly held layers of water of hydration—requires doing substantial work against these forces, called hydration forces. These forces are very large but decrease rapidly over a nanometer or less.[56] They are important in biology, particularly when cells are dehydrated by exposure to dry atmospheres or to extracellular freezing.[57]
Surface tension
This paper clip is under the water level, which has risen gently and smoothly. Surface tension prevents the clip from submerging and the water from overflowing the glass edges.
Temperature dependence of the surface tension of pure water
Water has a high surface tension of 71.99 mN/m at 25 °C,[58] the highest of the common non-ionic, non-metallic liquids, caused by the strong cohesion between water molecules. This can be seen when small quantities of water are placed onto a sorption-free (non-adsorbent and non-absorbent) surface, such as polyethylene or Teflon, and the water stays together as drops. Just as significantly, air trapped in surface disturbances forms bubbles, which sometimes last long enough to transfer gas molecules to the water.[citation needed]
Another surface tension effect is capillary waves, which are the surface ripples that form around the impacts of drops on water surfaces, and sometimes occur with strong subsurface currents flowing to the water surface. The apparent elasticity caused by surface tension drives the waves.
Capillary action
Due to an interplay of the forces of adhesion and surface tension, water exhibits capillary action whereby water rises into a narrow tube against the force of gravity. Water adheres to the inside wall of the tube and surface tension tends to straighten the surface causing a surface rise and more water is pulled up through cohesion. The process continues as the water flows up the tube until there is enough water such that gravity balances the adhesive force.
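The equilibrium height described above is given by Jurin's law, h = 2γ cos θ / (ρgr). A sketch with the surface tension quoted earlier; the tube radius and the zero contact angle (clean glass) are assumptions:

```python
import math

# Capillary rise from Jurin's law: h = 2 * gamma * cos(theta) / (rho * g * r).
gamma = 0.07199    # surface tension of water at 25 °C, N/m (from the text)
theta = 0.0        # contact angle; ~0 for clean glass (assumption)
rho   = 997.0      # density of water at 25 °C, kg/m^3
g     = 9.81       # gravitational acceleration, m/s^2
r     = 0.5e-3     # tube radius, m (example value)

h = 2 * gamma * math.cos(theta) / (rho * g * r)
print(f"{h * 100:.1f} cm")   # ≈ 2.9 cm in a 0.5 mm radius tube
```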
Surface tension and capillary action are important in biology. For example, when water is carried through xylem up stems in plants, the strong intermolecular attractions (cohesion) hold the water column together and adhesive properties maintain the water attachment to the xylem and prevent tension rupture caused by transpiration pull.
Water as a solvent
Main article: Aqueous solution
Presence of colloidal calcium carbonate from high concentrations of dissolved lime turns the water of Havasu Falls turquoise.
Water is also a good solvent, due to its polarity. Substances that will mix well and dissolve in water (e.g. salts) are known as hydrophilic ("water-loving") substances, while those that do not mix well with water (e.g. fats and oils), are known as hydrophobic ("water-fearing") substances. The ability of a substance to dissolve in water is determined by whether or not the substance can match or better the strong attractive forces that water molecules generate between other water molecules. If a substance has properties that do not allow it to overcome these strong intermolecular forces, the molecules are "pushed out" from the water, and do not dissolve. Contrary to the common misconception, water and hydrophobic substances do not "repel", and the hydration of a hydrophobic surface is energetically, but not entropically, favorable.
When an ionic or polar compound enters water, it is surrounded by water molecules (hydration). The relatively small size of water molecules (~3 angstroms) allows many water molecules to surround one molecule of solute. The partially negative dipole ends of the water are attracted to positively charged components of the solute, and vice versa for the positive dipole ends.
In general, ionic and polar substances such as acids, alcohols, and salts are relatively soluble in water, and non-polar substances such as fats and oils are not. Non-polar molecules stay together in water because it is energetically more favorable for the water molecules to hydrogen bond to each other than to engage in van der Waals interactions with non-polar molecules.
An example of an ionic solute is table salt; the sodium chloride, NaCl, separates into Na+ cations and Cl− anions, each being surrounded by water molecules. The ions are then easily transported away from their crystalline lattice into solution. An example of a nonionic solute is table sugar. The water dipoles make hydrogen bonds with the polar regions of the sugar molecule (OH groups) and allow it to be carried away into solution.
Quantum tunneling
The quantum tunneling dynamics in water was reported as early as 1992. At that time it was known that there are motions which destroy and regenerate the weak hydrogen bond by internal rotations of the substituent water monomers.[59] On 18 March 2016, it was reported that the hydrogen bond can be broken by quantum tunneling in the water hexamer. Unlike previously reported tunneling motions in water, this involved the concerted breaking of two hydrogen bonds.[60] Later in the same year, the discovery of the quantum tunneling of water molecules was reported.[61]
Chemical properties in nature
Action of water on rock over long periods of time typically leads to weathering and water erosion, physical processes that convert solid rocks and minerals into soil and sediment, but under some conditions chemical reactions with water occur as well, resulting in metasomatism or mineral hydration, a type of chemical alteration of a rock which produces clay minerals. It also occurs when Portland cement hardens.
Water ice can form clathrate compounds, known as clathrate hydrates, with a variety of small molecules that can be embedded in its spacious crystal lattice. The most notable of these is methane clathrate, 4 CH4·23 H2O, naturally found in large quantities on the ocean floor.
Pure water has a concentration of hydroxide ions (OH−) equal to that of hydronium (H3O+) or hydrogen (H+) ions, which gives a pH of 7 at 298 K. In practice, pure water is very difficult to produce. Water left exposed to air for any length of time will dissolve carbon dioxide, forming a dilute solution of carbonic acid, with a limiting pH of about 5.7. As cloud droplets form in the atmosphere and as raindrops fall through the air, minor amounts of CO2 are absorbed, and thus most rain is slightly acidic. If high amounts of nitrogen and sulfur oxides are present in the air, they too will dissolve into the cloud and rain drops, producing acid rain.
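The pH values above follow from pH = −log10[H+]; for neutral water at 298 K, [H+] = √Kw with Kw = 1.0×10−14:

```python
import math

# pH = -log10([H+]); in pure water at 298 K, [H+] = sqrt(Kw).
Kw = 1.0e-14                  # ion product of water at 298 K
h_pure = math.sqrt(Kw)
print(round(-math.log10(h_pure), 1))   # 7.0

# The limiting pH of ~5.7 for water in equilibrium with atmospheric CO2
# corresponds to a hydrogen-ion concentration of about 2e-6 mol/L:
print(f"{10 ** -5.7:.1e} mol/L")   # 2.0e-06 mol/L
```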
Electromagnetic absorption
Water is relatively transparent to visible light, near ultraviolet light, and far-red light, but it absorbs most ultraviolet light, infrared light, and microwaves. Most photoreceptors and photosynthetic pigments utilize the portion of the light spectrum that is transmitted well through water. Microwave ovens take advantage of water's opacity to microwave radiation to heat the water inside of foods. The very weak onset of absorption in the red end of the visible spectrum lends water its intrinsic blue hue (see Color of water).
Heavy water and isotopologues
Several isotopes of both hydrogen and oxygen exist, giving rise to several known isotopologues of water.
Hydrogen occurs naturally in three isotopes. The most common isotope, 1H, sometimes called protium, accounts for more than 99.98% of hydrogen in water and consists of only a single proton in its nucleus. A second stable isotope, deuterium (chemical symbol D or 2H), has an additional neutron. Deuterium oxide, D2O, is also known as heavy water because of its higher density. It is used in nuclear reactors as a neutron moderator. The third isotope, tritium (chemical symbol T or 3H), has one proton and two neutrons and is radioactive, decaying with a half-life of about 4500 days. THO exists in nature only in minute quantities, being produced primarily via cosmic ray-induced nuclear reactions in the atmosphere. Water with one protium and one deuterium atom, HDO, occurs naturally in ordinary water in low concentrations (~0.03%), and D2O in far lower amounts (0.000003%); any such molecules are temporary, as the atoms recombine.
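The quoted HDO and D2O abundances follow from simple statistics if hydrogen atoms pair randomly, given the natural deuterium fraction of about 155 ppm:

```python
# If hydrogen atoms combine randomly, the fraction of water molecules that
# are HDO or D2O follows from the deuterium abundance x ≈ 155 ppm.
x = 155e-6                 # deuterium fraction of hydrogen (from the text)
hdo = 2 * x * (1 - x)      # one H, one D (factor 2: D in either position)
d2o = x * x                # both positions deuterium

print(f"HDO: {hdo:.2%}")    # ≈ 0.03 %
print(f"D2O: {d2o:.7%}")    # ≈ 0.0000024 %
```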
The most notable physical differences between H2O and D2O, other than the simple difference in specific mass, involve properties that are affected by hydrogen bonding, such as freezing and boiling, and other kinetic effects. This is because the nucleus of deuterium is twice as heavy as protium, and this causes noticeable differences in bonding energies. The difference in boiling points allows the isotopologues to be separated. The self-diffusion coefficient of H2O at 25 °C is 23% higher than that of D2O.[62] Because water molecules exchange hydrogen atoms with one another, hydrogen deuterium oxide (DOH) is much more common in low-purity heavy water than pure dideuterium monoxide, D2O.

Consumption of pure isolated D2O may affect biochemical processes – ingestion of large amounts impairs kidney and central nervous system function. Small quantities can be consumed without any ill effects; humans are generally unaware of taste differences,[63] but sometimes report a burning sensation[64] or sweet flavor.[65] Very large amounts of heavy water must be consumed for any toxicity to become apparent. Rats, however, are able to avoid heavy water by smell, and it is toxic to many animals.[66]
Oxygen also has three stable isotopes, with 16O present in 99.76%, 17O in 0.04%, and 18O in 0.2% of water molecules.[67]
Light water refers to deuterium-depleted water (DDW), water in which the deuterium content has been reduced below the standard 155 ppm level.
Standard water
Vienna Standard Mean Ocean Water is the current international standard for water isotopes. Naturally occurring water is almost completely composed of the neutron-less hydrogen isotope protium. Only 155 ppm include deuterium (2H or D), a hydrogen isotope with one neutron, and fewer than 20 parts per quintillion include tritium (3H or T), which has two neutrons.
Acid-base reactions
Water is amphoteric: it has the ability to act as either an acid or a base in chemical reactions.[68] According to the Brønsted-Lowry definition, an acid is a proton (H+) donor and a base is a proton acceptor.[69] When reacting with a stronger acid, water acts as a base; when reacting with a stronger base, it acts as an acid.[69] For instance, water receives an H+ ion from HCl when hydrochloric acid is formed:

HCl + H2O ⇌ H3O+ + Cl−
In the reaction with ammonia, NH3, water donates an H+ ion, and is thus acting as an acid:

NH3 + H2O ⇌ NH4+ + OH−
Because the oxygen atom in water has two lone pairs, water often acts as a Lewis base, or electron pair donor, in reactions with Lewis acids, although it can also react with Lewis bases, forming hydrogen bonds between the electron pair donors and the hydrogen atoms of water. HSAB theory describes water as both a weak hard acid and a weak hard base, meaning that it reacts preferentially with other hard species:
H+ (Lewis acid) + H2O (Lewis base) → H3O+

Fe3+ (Lewis acid) + 6 H2O (Lewis base) → [Fe(H2O)6]3+

Cl− (Lewis base) + H2O (Lewis acid) → Cl−·H2O (hydrogen-bonded)
When a salt of a weak acid or of a weak base is dissolved in water, water can partially hydrolyze the salt, producing the corresponding base or acid, which gives aqueous solutions of soap and baking soda their basic pH:

Na2CO3 + H2O ⇌ NaOH + NaHCO3
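As a rough check, the basic pH of such a solution can be estimated with the standard approximation for an amphiprotic species, pH ≈ (pKa1 + pKa2)/2; the carbonic acid pKa values below are common textbook figures, used here as assumptions:

```python
# Approximate pH of a sodium bicarbonate solution using the standard
# amphiprotic-species formula: pH ≈ (pKa1 + pKa2) / 2.
# The pKa values of carbonic acid are common textbook figures (assumed).
pKa1, pKa2 = 6.35, 10.33
pH = (pKa1 + pKa2) / 2
print(round(pH, 2))   # 8.34 — mildly basic, as the text states
```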
Ligand chemistry
Water's Lewis base character makes it a common ligand in transition metal complexes, examples of which range from solvated ions, such as [Fe(H2O)6]2+, to perrhenic acid, which contains two water molecules coordinated to a rhenium atom, and various solid hydrates, such as CoCl2·6H2O. Water is typically a monodentate ligand: it forms only one bond with the central atom.[70]
Organic chemistry
As a hard base, water reacts readily with organic carbocations; for example, in a hydration reaction, a hydroxyl group (OH−) and an acidic proton are added to the two carbon atoms bonded together in the carbon-carbon double bond, resulting in an alcohol. When addition of water to an organic molecule cleaves the molecule in two, hydrolysis is said to occur. Notable examples of hydrolysis are the saponification of fats and the digestion of proteins and polysaccharides. Water can also be a leaving group in SN2 substitution and E2 elimination reactions; the latter is then known as a dehydration reaction.
Water in redox reactions
Water contains hydrogen in the oxidation state +1 and oxygen in the oxidation state −2.[71] It oxidizes chemicals such as hydrides, alkali metals, and some alkaline earth metals.[72][73][74] One example of an alkali metal reacting with water is:[75]
2 Na + 2 H2O → H2 + 2 Na+ + 2 OH−
Some other reactive metals, such as aluminum and beryllium, are oxidized by water as well, but their oxides adhere to the metal and form a passive protective layer.[76] Note, however, that the rusting of iron is a reaction between iron and oxygen[77] that is dissolved in water, not between iron and water.
Water can be oxidized to emit oxygen gas, but very few oxidants react with water even if their reduction potential is greater than the potential of O2/H2O. Almost all such reactions require a catalyst.[78] An example of the oxidation of water is:
4 AgF2 + 2 H2O → 4 AgF + 4 HF + O2
Main article: Electrolysis of water
Water can be split into its constituent elements, hydrogen and oxygen, by passing an electric current through it. This process is called electrolysis. The cathode half reaction is:
2 H+ + 2 e− → H2
The anode half reaction is:
2 H2O → O2 + 4 H+ + 4 e−
The gases produced bubble to the surface, where they can be collected. The standard potential of the water electrolysis cell (when heat is added to the reaction) is a minimum of 1.23 V at 25 °C. The operating potential is actually 1.48 V (or above) in practical electrolysis when heat input is negligible.
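Both voltages follow directly from the thermodynamics of water formation, E = ΔG°/(nF) for the reversible minimum and ΔH°/(nF) for the thermoneutral (no external heat) case:

```python
# Minimum (reversible) and thermoneutral voltages for water electrolysis:
# E_rev = ΔG° / (n*F),  E_tn = ΔH° / (n*F), per mole of H2 (n = 2 electrons).
F = 96485.0      # Faraday constant, C/mol
n = 2            # electrons transferred per H2 molecule
dG = 237.1e3     # standard Gibbs energy of formation of liquid water, J/mol
dH = 285.8e3     # standard enthalpy of formation of liquid water, J/mol

E_rev = dG / (n * F)
E_tn  = dH / (n * F)
print(f"{E_rev:.2f} V")   # 1.23 V — minimum, with heat supplied externally
print(f"{E_tn:.2f} V")    # 1.48 V — when heat input is negligible
```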
Henry Cavendish showed that water was composed of oxygen and hydrogen in 1781.[79] The first decomposition of water into hydrogen and oxygen, by electrolysis, was done in 1800 by English chemists William Nicholson and Anthony Carlisle.[79][80] In 1805, Joseph Louis Gay-Lussac and Alexander von Humboldt showed that water is composed of two parts hydrogen and one part oxygen.[81]
Gilbert Newton Lewis isolated the first sample of pure heavy water in 1933.[82]
The properties of water have historically been used to define various temperature scales. Notably, the Kelvin, Celsius, Rankine, and Fahrenheit scales were, or currently are, defined by the freezing and boiling points of water. The less common scales of Delisle, Newton, Réaumur and Rømer were defined similarly. The triple point of water is a more commonly used standard point today.[83][better source needed]
2. ^ A commonly quoted value of 15.7 used mainly in organic chemistry for the pKa of water is incorrect.[7]
3. ^ Both acid and base names exist for water because it is amphoteric (able to react both as an acid or an alkali)
4. ^ (1-0.95865/1.00000) × 100% = 4.135%
5. ^ Adiabatic cooling resulting from the ideal gas law
6. ^ The source gives it as 0.0072°C/atm. However the author defines an atmosphere as 1,000,000 dynes/cm2 (a bar). Using the standard definition of atmosphere, 1,013,250 dynes/cm2, it works out to 0.0073°C/atm
7. ^ Using the fact that 0.5/0.0073 = 68.5
1. ^ "Definition of Hydrol". Merriam-Webster. (subscription required (help)).
2. ^ a b Braun, Charles L.; Smirnov, Sergei N. (1993-08-01). "Why is water blue?". Journal of Chemical Education. 70 (8): 612. Bibcode:1993JChEd..70..612B. doi:10.1021/ed070p612. ISSN 0021-9584.
3. ^ Water in Linstrom, P.J.; Mallard, W.G. (eds.) NIST Chemistry WebBook, NIST Standard Reference Database Number 69. National Institute of Standards and Technology, Gaithersburg MD. (retrieved 2016-5-27)
4. ^ a b c Anatolievich, Kiper Ruslan. "Properties of substance: water".
5. ^ Lide, David R. (2003-06-19). CRC Handbook of Chemistry and Physics, 84th Edition. CRC Handbook. CRC Press. Vapor Pressure of Water From 0 to 370° C in Sec. 6. ISBN 9780849304842.
6. ^ Lide, David R. (2003-06-19). CRC Handbook of Chemistry and Physics, 84th Edition. CRC Handbook. CRC Press. Chapter 8: Dissociation Constants of Inorganic Acids and Bases. ISBN 9780849304842.
7. ^ "What is the pKa of Water". University of California, Davis.
9. ^ Lide, David R. (2003-06-19). CRC Handbook of Chemistry and Physics, 84th Edition. CRC Handbook. CRC Press. 8—Concentrative Properties of Aqueous Solutions: Density, Refractive Index, Freezing Point Depression, and Viscosity. ISBN 9780849304842.
10. ^ Lide, David R. (2003-06-19). CRC Handbook of Chemistry and Physics, 84th Edition. CRC Handbook. CRC Press. 6-186. ISBN 9780849304842.
11. ^ Lide, David R. (2003-06-19). CRC Handbook of Chemistry and Physics, 84th Edition. CRC Handbook. CRC Press. 9—Dipole Moments. ISBN 9780849304842.
12. ^ a b c Water in Linstrom, P.J.; Mallard, W.G. (eds.) NIST Chemistry WebBook, NIST Standard Reference Database Number 69. National Institute of Standards and Technology, Gaithersburg MD. (retrieved 2014-06-01)
13. ^ Greenwood, Norman N.; Earnshaw, Alan (1997). Chemistry of the Elements (2nd ed.). Butterworth-Heinemann. p. 620. ISBN 0-08-037941-9.
14. ^ "Water, the Universal Solvent". USGS.
17. ^ a b Leigh, G. J.; et al. (1998). Principles of chemical nomenclature: a guide to IUPAC recommendations (PDF). Blackwell Science Ltd, UK. p. 34. ISBN 0-86542-685-6. Archived (PDF) from the original on 2011-07-26.
18. ^ Nomenclature of Inorganic Chemistry: IUPAC Recommendations 2005 (PDF). Royal Society of Chemistry. 22 Nov 2005. p. 85. ISBN 978-0-85404-438-2. Retrieved 2016-07-31.
19. ^ Leigh, G. J.; et al. (1998). Principles of chemical nomenclature: a guide to IUPAC recommendations (PDF). IUPAC, Commission on Nomenclature of Organic Chemistry. Blackwell Science Ltd, UK. p. 99. ISBN 0-86542-685-6. Archived (PDF) from the original on 2011-07-26.
20. ^ "Tetrahydropyran". Pubchem. National Institutes of Health. Retrieved 2016-07-31.
21. ^ "Compound Summary for CID 22247451". Pubchem Compound Database. National Center for Biotechnology Information.
22. ^ Leigh, G. J.; et al. (1998). Principles of chemical nomenclature: a guide to IUPAC recommendations (PDF). Blackwell Science Ltd, UK. pp. 27–28. ISBN 0-86542-685-6. Archived (PDF) from the original on 2011-07-26.
24. ^ "Water (Code C65147)". NCI Thesaurus. National Cancer Institute. Retrieved 2016-08-01.
25. ^ Smith, Jared D.; Christopher D. Cappa; Kevin R. Wilson; Ronald C. Cohen; Phillip L. Geissler; Richard J. Saykally (2005). "Unified description of temperature-dependent hydrogen bond rearrangements in liquid water" (PDF). Proc. Natl. Acad. Sci. USA. 102 (40): 14171–14174. Bibcode:2005PNAS..10214171S. doi:10.1073/pnas.0506899102. PMC 1242322free to read. PMID 16179387.
26. ^ Deguchi, Shigeru; Tsujii, Kaoru (2007-06-19). "Supercritical water: a fascinating medium for soft matter". Soft Matter. 3 (7): 797. Bibcode:2007SMat....3..797D. doi:10.1039/b611584e. ISSN 1744-6848.
27. ^ NASA – Oceans of Climate Change. (2009-04-22). Retrieved on 2011-11-22.
28. ^ Lide, David R. (2003-06-19). CRC Handbook of Chemistry and Physics, 84th Edition. CRC Handbook. CRC Press. Chapter 6: Properties of Ice and Supercooled Water. ISBN 9780849304842.
29. ^ Lide, David R. (2003-06-19). CRC Handbook of Chemistry and Physics, 84th Edition. CRC Handbook. CRC Press. 6. Properties of Water and Steam as a Function of Temperature and Pressure. ISBN 9780849304842.
31. ^ a b c d Greenwood, Norman N.; Earnshaw, Alan (1997). Chemistry of the Elements (2nd ed.). Butterworth-Heinemann. p. 625. ISBN 0-08-037941-9.
32. ^ Shell, Scott M.; Debenedetti, Pablo G. & Panagiotopoulos, Athanassios Z. (2002). "Molecular structural order and anomalies in liquid silica" (PDF). Phys. Rev. E. 66: 011202. arXiv:cond-mat/0203383free to read. Bibcode:2002PhRvE..66a1202S. doi:10.1103/PhysRevE.66.011202.
33. ^ a b c d e Perlman, Howard. "Water Density". The USGS Water Science School. Retrieved 2016-06-03.
34. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 938. ISBN 978-1-13-361109-7.
35. ^ Loerting, Thomas; Salzmann, Christoph; Kohl, Ingrid; Mayer, Erwin; Hallbrucker, Andreas (2001-01-01). "A second distinct structural "state" of high-density amorphous ice at 77 K and 1 bar". Physical Chemistry Chemical Physics. 3 (24): 5355–5357. doi:10.1039/b108676f. ISSN 1463-9084.
36. ^ Greenwood, Norman N.; Earnshaw, Alan (1997). Chemistry of the Elements (2nd ed.). Butterworth-Heinemann. p. 624. ISBN 0-08-037941-9.
37. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 493. ISBN 978-1-13-361109-7.
38. ^ a b c "Can the ocean freeze?". National Ocean Service. National Oceanic and Atmospheric Administration. Retrieved 2016-06-09.
40. ^ a b Nave, R. "Bulk Elastic Properties". HyperPhysics. Georgia State University. Retrieved 2007-10-26.
41. ^ Review of the vapour pressures of ice and supercooled water for atmospheric applications. D. M. Murphy and T. Koop (2005) Quarterly Journal of the Royal Meteorological Society, 131, 1539.
43. ^ Schlüter, Oliver (2003-07-28). "Impact of High Pressure — Low Temperature Processes on Cellular Materials Related to Foods" (PDF). Technischen Universität Berlin.
44. ^ Tammann, Gustav H.J.A (1925). "The States Of Aggregation". Constable And Company.
45. ^ Lewis, William C.M. & Rice, James (1922). A System of Physical Chemistry. Longmans, Green and Co.
46. ^ Debenedetti, P. G. & Stanley, H. E. (2003). "Supercooled and Glassy Water" (PDF). Physics Today. 56 (6): 40–46. Bibcode:2003PhT....56f..40D. doi:10.1063/1.1595053.
47. ^ Sharp, Robert Phillip (1988-11-25). Living Ice: Understanding Glaciers and Glaciation. Cambridge University Press. p. 27. ISBN 0-521-33009-2.
48. ^ "Revised Release on the Pressure along the Melting and Sublimation Curves of Ordinary Water Substance" (PDF). IAPWS. September 2011. Retrieved 2013-02-19.
49. ^ a b Light, Truman S.; Licht, Stuart; Bevilacqua, Anthony C.; Morash, Kenneth R. (2005-01-01). "The Fundamental Conductivity and Resistivity of Water". Electrochemical and Solid-State Letters. 8 (1): E16–E19. doi:10.1149/1.1836121. ISSN 1099-0062.
50. ^ Crofts, A. (1996). "Lecture 12: Proton Conduction, Stoichiometry". University of Illinois at Urbana-Champaign. Retrieved 2009-12-06.
51. ^ Hoy, AR; Bunker, PR (1979). "A precise solution of the rotation bending Schrödinger equation for a triatomic molecule with application to the water molecule". Journal of Molecular Spectroscopy. 74: 1–8. Bibcode:1979JMoSp..74....1H. doi:10.1016/0022-2852(79)90019-5.
52. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 393. ISBN 978-1-13-361109-7.
53. ^ Campbell, Mary K. & Farrell, Shawn O. (2007). Biochemistry (6th ed.). Cengage Learning. pp. 37–38. ISBN 978-0-495-39041-1.
54. ^ Ball, Philip (2008). "Water—an enduring mystery". Nature. 452 (7185): 291–292. Bibcode:2008Natur.452..291B. doi:10.1038/452291a. PMID 18354466.
55. ^ Campbell, Neil A. & Reece, Jane B. (2009). Biology (8th ed.). Pearson. p. 47. ISBN 978-0-8053-6844-4.
56. ^ Chiavazzo, Eliodoro; Fasano, Matteo; Asinari, Pietro; Decuzzi, Paolo (2014). "Scaling behaviour for the water transport in nanoconfined geometries". Nature Communications. 5: 4565. Bibcode:2014NatCo...5E4565C. doi:10.1038/ncomms4565.
57. ^ "Physical Forces Organizing Biomolecules" (PDF). Biophysical Society. Archived from the original on August 7, 2007.
58. ^ Lide, David R. (2003-06-19). CRC Handbook of Chemistry and Physics, 84th Edition. CRC Press. Water in Table Surface Tension of Common Liquids. ISBN 9780849304842.
59. ^ Pugliano, N. (1992-11-01). "Vibration-Rotation-Tunneling Dynamics in Small Water Clusters". Lawrence Berkeley Lab., CA (United States): 6. doi:10.2172/6642535.
60. ^ Richardson, Jeremy O.; Pérez, Cristóbal; Lobsiger, Simon; Reid, Adam A.; Temelso, Berhane; Shields, George C.; Kisiel, Zbigniew; Wales, David J.; Pate, Brooks H. (2016-03-18). "Concerted hydrogen-bond breaking by quantum tunneling in the water hexamer prism". Science. 351 (6279): 1310–1313. Bibcode:2016Sci...351.1310R. doi:10.1126/science.aae0012. ISSN 0036-8075. PMID 26989250. Retrieved 2016-04-23.
61. ^ Kolesnikov, Alexander I. (2016-04-22). "Quantum Tunneling of Water in Beryl: A New State of the Water Molecule". Physical Review Letters. 116 (16): 167802. Bibcode:2016PhRvL.116p7802K. doi:10.1103/PhysRevLett.116.167802. PMID 27152824. Retrieved 2016-04-23.
62. ^ Hardy, Edme H.; Zygar, Astrid; Zeidler, Manfred D.; Holz, Manfred; Sacher, Frank D. (2001). "Isotope effect on the translational and rotational motion in liquid water and ammonia". J. Chem Phys. 114 (7): 3174–3181. Bibcode:2001JChPh.114.3174H. doi:10.1063/1.1340584.
63. ^ Urey, Harold C.; et al. (15 Mar 1935). "Concerning the Taste of Heavy Water". Science. 81 (2098). New York: The Science Press. p. 273. doi:10.1126/science.81.2098.273-a.
64. ^ "Experimenter Drinks 'Heavy Water' at $5,000 a Quart". Popular Science Monthly. 126 (4). New York: Popular Science Publishing. Apr 1935. p. 17. Retrieved 7 Jan 2011.
65. ^ Müller, Grover C. (June 1937). "Is 'Heavy Water' the Fountain of Youth?". Popular Science Monthly. 130 (6). New York: Popular Science Publishing. pp. 22–23. Retrieved 7 Jan 2011.
66. ^ Miller, Inglis J., Jr.; Mooser, Gregory (Jul 1979). "Taste Responses to Deuterium Oxide". Physiology & Behavior. 23 (1): 69–74. doi:10.1016/0031-9384(79)90124-0.
67. ^ "Guideline on the Use of Fundamental Physical Constants and Basic Constants of Water" (PDF). IAPWS. 2001.
68. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 659. ISBN 978-1-13-361109-7.
69. ^ a b Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 654. ISBN 978-1-13-361109-7.
70. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 984. ISBN 978-1-13-361109-7.
71. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 171. ISBN 978-1-13-361109-7.
72. ^ "Hydrides". Chemwiki. UC Davis. Retrieved 2016-06-25.
73. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 932. ISBN 978-1-13-361109-7.
74. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 936. ISBN 978-1-13-361109-7.
75. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 338. ISBN 978-1-13-361109-7.
76. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 862. ISBN 978-1-13-361109-7.
77. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 981. ISBN 978-1-13-361109-7.
78. ^ Charlot, G. (2007). Qualitative Inorganic Analysis. Read Books. p. 275. ISBN 1-4067-4789-0.
79. ^ a b Greenwood, Norman N.; Earnshaw, Alan (1997). Chemistry of the Elements (2nd ed.). Butterworth-Heinemann. p. 601. ISBN 0-08-037941-9.
80. ^ "Enterprise and electrolysis...". Royal Society of Chemistry. August 2003. Retrieved 2016-06-24.
81. ^ "Joseph Louis Gay-Lussac, French chemist (1778-1850)". 1902 Encyclopedia. Footnote 122-1. Retrieved 2016-05-26.
82. ^ Lewis, G. N.; MacDonald, R. T. (1933). "Concentration of H2 Isotope". The Journal of Chemical Physics. 1 (6): 341. Bibcode:1933JChPh...1..341L. doi:10.1063/1.1749300.
83. ^ A Brief History of Temperature Measurement. Retrieved on 2011-11-22.
External links
Wikiversity has small "student" steam tables suitable for classroom use.
Dietrich Zawischa
Atomic spectra
Light emitted or absorbed by single atoms contributes only very little to the colours of our surroundings. Neon signs (or other gas discharge tubes) as used for advertising, sodium or mercury vapour lamps show atomic emission; the colours of fireworks are due to it. The aurora borealis (northern light) is very rare at our latitudes, and to appreciate the colours of cosmic objects, powerful telescopes are necessary. Neon, which gives red colour in a gas discharge, is a colourless gas. If the light of the sun is spread out into different colours by a simple glass prism, the narrow absorption lines cannot be seen.
Firework. Photo © Pete Lawrence, shown with permission.
Nevertheless, to understand how the colours which surround us come about, one needs some basic knowledge on the smallest parts of matter.
Atomic structure
Atomic structure can only be understood with quantum theory, which is, so to speak, the mathematical formulation of particle–wave duality. We cannot dive into the mathematical details here, but the basic principles shall be sketched.
Waves always have some spatial extension, while one may imagine the elementary, indivisible particles as being “pointlike”. The fact that these apparently contradictory attributes are compatible in matter waves and also in light (photons) is hard to understand, but all experimental data indicate that this is the case.
Thus the electrons bound by electric force to an atomic nucleus (which contains almost all of the atom's mass) must be considered to be waves. Wavefunctions are used to calculate observable quantities; in particular, the probability to find the (pointlike) particle in some volume is given by the squared value of the wavefunction integrated over the volume.
The hydrogen atom is the simplest of all atoms. Its nucleus carries one unit of positive elementary charge and thus binds only one electron to it. Its possible wavefunctions can be obtained as solutions of the Schrödinger equation. This is described in detail in all textbooks on quantum mechanics. For us it is important to realize that the electron forms some kind of standing wave. Some simple examples will be used to demonstrate general properties of oscillating systems, standing waves in particular.
Normal modes
The exact way how a guitar's string vibrates depends on the spot where it has been plucked. It is always possible to describe the motion of the string as a superposition of simple modes which have the peculiar property that all parts of the string move sinusoidally with the same frequency and phase. These are called normal modes or eigenmodes. The superposition of different normal modes is heard as superposition of ground- and overtones.
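This decomposition into normal modes can be made concrete numerically. The sketch below (stdlib Python only; the triangular pluck shape, unit string length, and unit pluck height are illustrative assumptions, not taken from the text) projects the initial shape of a plucked string onto the first few normal modes sin(nπx/L) by trapezoidal integration:

```python
import math

def mode_amplitude(n, pluck_pos, length=1.0, height=1.0, samples=2000):
    """Coefficient of the n-th normal mode sin(n*pi*x/L) in the Fourier
    decomposition of a triangular "plucked" initial shape."""
    def shape(x):
        # Triangle: rises linearly to `height` at pluck_pos, then falls.
        if x <= pluck_pos:
            return height * x / pluck_pos
        return height * (length - x) / (length - pluck_pos)

    dx = length / samples
    integral = 0.0
    for i in range(samples + 1):
        x = i * dx
        weight = 0.5 if i in (0, samples) else 1.0  # trapezoidal rule
        integral += weight * shape(x) * math.sin(n * math.pi * x / length) * dx
    return 2.0 / length * integral

# A string plucked at its midpoint: the fundamental dominates and all
# even harmonics are absent by symmetry.
amps = [mode_amplitude(n, pluck_pos=0.5) for n in (1, 2, 3)]
```

For a midpoint pluck the exact coefficients are 8h/(n²π²)·sin(nπ/2): about 0.81 for the fundamental, zero for every even harmonic.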
The picture below shows how a string vibrates in the lowest three normal modes. The motion is so fast that it cannot be resolved by the eye, one sees a sequence of nodes and antinodes.
key tone
first harmonic
second harmonic
Important properties of three-dimensional waves cannot be seen on strings; vibrating membranes show somewhat more. Instead of nodes the normal modes exhibit nodal lines. In the case of vibrating metal plates, the nodal lines are known from classroom demonstrations as Chladni figures.
Water waves are actually more difficult to describe than the waves we are interested in, but if we restrict our considerations to small amplitudes, we may neglect this complication. Water waves have the advantage of being slow, so the wave motion can clearly be seen. The following pictures are idealized and in slow motion.
In the simplest mode in a glass of water, which can easily be excited, the liquid is swashing to and fro. This can happen in an arbitrary direction and therefore there are infinitely many standing waves of that kind, all with the same oscillation frequency.
[Figures (a)–(d): swashing modes of the water surface]
An essential property of waves is that they can be added (superposed). By a linear combination (i.e. superposition) of the modes shown in figures (a) and (b) with suitable weight factors, all the infinitely many others can be obtained. Also the modes shown in figures (c) and (d) may be obtained from a superposition of (a) and (b), with phase shifts of 90° or –90°, respectively. Therefore, there are only two linearly independent normal modes of the swashing type.
[Figures (e)–(g): rotationally symmetric and circulating modes]
The mode of figure (e) can be excited by falling drops. It has rotational symmetry and therefore there is no other possible orientation. The mode (f) may be obtained by gently shaking the glass with the right frequency; again different directions of motion are possible, and, similar to the above, there are two linearly independent modes, the superposition of which will yield all the others, in particular also waves going round clockwise or counterclockwise (g). The dark spot in the middle of the surface indicates that this point does not move.
Electron clouds in the hydrogen atom
To illustrate the matter waves of the electron in a hydrogen atom, the square of the local amplitude is plotted. This quantity is identical with the particle density, and this cloudy density distribution is plotted as a white mist over a dark background. Each of the clouds shown contains exactly the one electron of the hydrogen atom, even if it consists of several distinct parts.
The state with lowest energy is spherically symmetric without any nodes. The electron is very close to the nucleus.
Scale: The width of this and the following pictures is sixty times the Bohr radius, i.e. 3.17 nm.
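How close “very close” is can be quantified with the standard textbook result for the 1s state: with ψ ∝ e^(−r/a₀), the probability of finding the electron within radius R is 1 − e^(−2R/a₀)(1 + 2R/a₀ + 2(R/a₀)²). A minimal numerical sketch (this closed form is the well-known hydrogen 1s result, not taken from the text above):

```python
import math

def prob_within(R_over_a0):
    """Probability of finding a 1s hydrogen electron inside radius R,
    with R given in units of the Bohr radius a0."""
    rho = 2.0 * R_over_a0  # substitution rho = 2R/a0
    return 1.0 - math.exp(-rho) * (1.0 + rho + rho * rho / 2.0)
```

Within one Bohr radius the probability is already about 32%, and within five Bohr radii well above 99%, consistent with the small clouds in the pictures.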
For the next higher possible energy, two states of different shape are possible. One of them is spherically symmetric and has a spherical nodal area (left), the other is rotationally symmetric around a symmetry axis. The symmetry axis is vertical in the right picture, but it may have any orientation in space. Each one of these infinitely many functions may be written as a superposition (linear combination) of three basis functions, the symmetry axes of which are conveniently chosen along the coordinate axes.
The energy states are labeled by a number n called the “principal quantum number”. The ground state has n=1, the first excited state n=2, and so on. The letters s, p, d, f written after the principal quantum number under the images originate from properties once ascribed to the corresponding spectral series (sharp, principal, diffuse, fundamental); these letters stand for the “azimuthal quantum number”, the quantum number of orbital angular momentum: s means l = 0, p means l = 1, d stands for l = 2, f for l = 3, and then it continues alphabetically.
Three different shapes of waves belong to the next higher possible energy. One of them is again spherically symmetric (left), the other rotationally symmetric around a symmetry axis (vertical in the picture in the middle) which may have any direction in space, and, exactly as discussed above, all orientations can be considered as linear combinations of three suitably chosen basis functions. The third kind of electron cloud for this energy exhibits two nodal planes; the centroids of the four parts lie in a plane (which is the plane of the screen in the right picture). Again, any orientation in space is possible, and again all of these may be obtained as a superposition of a few suitably chosen basis functions.
Assume that the center of force is the origin, the x-axis is perpendicular to the plane of the screen and pointing outwards, the y-axis points to the right and the z-axis upwards.
It is easy to figure out six different 3d electron clouds in special orientations in the coordinate system chosen. In the picture shown above, the maxima of the density are in the y-z-plane, and the x-y and x-z planes are nodal planes.
If the cloud is rotated by 90° around the z-axis, the maxima are located in the z-x-plane – this is the second simple special orientation, and the third one is that where the maxima are in the x-y-plane.
Next we assume the 3d-cloud of the figure above to be rotated by 45° around the x-axis. This yields the density distribution shown to the left. Here the maxima of the density lie on the coordinate axes in the y-z-plane. Two more states again result from rotations by 90° around the z- and y-axis, respectively.
Now let us take a closer look at the latter three. For this purpose, it is convenient to write down the wave functions. We denote by r the distance from the origin; r² = x² + y² + z². The three functions can be written as
(y² − z²) f(r),
(z² − x²) f(r), and
(x² − y²) f(r).
Adding these three expressions, the sum is zero. In other words: the sixth of the functions which we have found can be written as a linear combination of the fourth and the fifth, and thus is not independent of the others.
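That the three functions are linearly dependent can be checked directly: their angular parts sum to zero at every point, so any one of them equals minus the sum of the other two. A tiny numerical illustration (the radial factor f(r) drops out of the argument and is set to 1 here):

```python
import random

def d_funcs(x, y, z):
    """Angular parts of the three d-type functions discussed above,
    with the radial factor f(r) set to 1."""
    return (y * y - z * z, z * z - x * x, x * x - y * y)

random.seed(0)
for _ in range(100):
    x, y, z = (random.uniform(-1, 1) for _ in range(3))
    a, b, c = d_funcs(x, y, z)
    assert abs(a + b + c) < 1e-12  # the three functions sum to zero
```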
3d (m=0)
Instead of the three functions written above one usually chooses the last one and the difference of the first two lines (divided by the square root of 3). The corresponding density distribution is shown at the left; it is rotationally symmetric around the vertical z-axis.
The selection of basis functions sketched here for the hydrogen atom is different from the one presented in most textbooks. For several reasons another selection is preferable in most cases: instead of standing waves with fixed orientation in space, one chooses waves with definite projections of angular momentum onto the z-axis. Our choice corresponds to cases (a) and (b) in the water-waves model above, the usual one to cases (c) and (d).
Energy level scheme and spectrum of hydrogen
The Schrödinger equation supplies both the energies and the wavefunctions of the possible states of an electron in a Coulomb potential well (hydrogen atom and hydrogen-like ions). The zero of the energy scale is chosen to correspond to infinite separation of an electron at rest from the nucleus. Then the energies of the bound states are negative, and the absolute values are equal to the minimum energy necessary to ionize the atom, i.e. to separate the electron from the nucleus.
E1 = –13.6 eV
For an electron in a Coulomb potential, the energies depend only on the principal quantum number (which we have introduced by simply numbering the energies):
En = E1/n²,
and thus the following level scheme results:
Energy level scheme of the hydrogen atom. The levels have been drawn separately for the different values of (orbital) angular momentum and are labelled by the principal quantum number n and the azimuthal quantum number, where the letters s, p, d, f, g, h stand for l = 0, 1, 2, 3, 4, 5, respectively.
Transitions from lower to higher states can occur if the necessary energy is supplied by an electromagnetic wave or by a collision with another particle (if the temperature is high enough); vice versa, transitions from higher to lower states can occur through emission of radiation or in collisions with other atoms or molecules. The emitted photons carry the energy difference between the initial and final state of the atom. For photons, the basic quantum mechanical relation between energy and frequency ν holds (h is the Planck constant):
Ephoton = h ν
This yields the wavelength λ = c/ν where c is the velocity of light, and finally for a transition from an initial state with number ni to a final one nf
1/λ = (1/nf² – 1/ni²) |E1| / (hc)
Only the lines with nf = 2 are in the visible range (Balmer series). The result is shown below. The lines with nf = 1 (Lyman series) belong to the ultraviolet region, those with nf > 2 (Paschen, Brackett, and Pfund series) to the infrared.
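The line positions follow directly from the formula above with |E1| = 13.6 eV. A short sketch (CODATA values for h and c; the rounded 13.6 eV is an assumption of this illustration):

```python
# Hydrogen line wavelengths from 1/lambda = (1/nf^2 - 1/ni^2) * |E1| / (h*c)
H = 6.62607015e-34      # Planck constant, J s
C = 2.99792458e8        # speed of light, m/s
EV = 1.602176634e-19    # electron volt, J
E1 = 13.6 * EV          # hydrogen ionization energy (absolute value)

def wavelength_nm(ni, nf):
    """Wavelength of the photon emitted in the transition ni -> nf."""
    inv_lambda = (1.0 / nf**2 - 1.0 / ni**2) * E1 / (H * C)
    return 1e9 / inv_lambda

balmer = [wavelength_nm(ni, 2) for ni in (3, 4, 5, 6)]
# roughly 656, 486, 434, and 410 nm: one red, one blue-green, two violet lines
```

The same function gives the ultraviolet Lyman lines for nf = 1, e.g. about 122 nm for the 2 → 1 transition.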
Computed visible part of the hydrogen spectrum. The scale gives the wavelength in nm (nanometers). In experiment, the spectral lines are images of the spectroscope's entrance slit which appear at different places depending on the wavelength.
Starting from the Schrödinger equation we have obtained the emission spectrum of the hydrogen atom. Historically, the path towards knowledge was worked through in the opposite direction.
Some remarks on matter waves: we have used the analogy with water waves and other mechanical oscillations to make their properties plausible. But until now nothing has been seen of any wavy motion! Indeed there really is no motion in states of fixed energy; such states are “stationary”. This is due to the peculiar quantum mechanical time dependence: instead of being like cos(ωt + φ), it is described by the complex exponential e^(i(ωt + φ)) (with ω = 2πν = 2πE/h). But if there is a mixture of states with different energies (and thus different frequencies), there is real time dependence. An example is given in the following slow-motion pictures:
Superposition of 2p and 3d
Two stationary states of the hydrogen atom and a superposition of both.
In the times of the old Bohr theory of quanta, the transition from one energy eigenstate to another was called a “quantum leap”; this notion has even found its way into colloquial language (at least in German). However, the conception of leaps is not necessary. Though it is not possible to observe such a transition in detail, it is an appealing idea that the initial state continuously changes into the final one, while the electric charge oscillates and emits (or absorbs) an electromagnetic wave.
The periodic system of elements
After hydrogen, the helium atom is the next simplest atom. Its nucleus carries two units of positive electric charge and binds two electrons. An exact analytical solution of the Schrödinger equation is no longer possible; one has to use approximate or numerical methods.
In an approximate treatment of atoms with more than one electron one replaces the interaction between electrons by a static potential. At large distance from the nucleus, an electron “feels” attraction by the nucleus and repulsion by the other electrons which screens the attraction. The closer the electron comes to the nucleus, the weaker is this screening. The effective potential for an electron in a neutral atom approaches the Coulomb potential of a single elementary charge for large distances, while for very small distances, that of the full nuclear charge.
The change of the potential affects the shape of the wave functions and the resulting density distributions (electron clouds), but the general appearance is not changed, in particular the symmetries and the ordering of nodal surfaces remain the same.
Here, the density distributions of the electron in some selected modes of the hydrogen atom are shown once more; all in the same scale (the width of each picture is 3.17 nm).
3d (m=0)
4d (m=0)
4f (m=0)
It is a peculiarity of the Coulomb potential that states with the same principal, but different azimuthal quantum numbers have the same energy. If the effective potential is close to that of a single elementary charge at large radii, but more attractive near the nucleus, then electrons in s-states will “feel” this most strongly, as the density has its maximum just at the nucleus. Of all the states with the same principal quantum number n, those with l = 0 are lowered in energy more than those with l = 1, these more than those with l = 2 and so on, for the larger l is, the larger is the distance of the electron clouds to the nucleus, as can be seen in the pictures above.
Now our preliminary definition of the principal quantum number as the number of the energy level is no longer true. If we denote by nr the number of spherical nodal surfaces and by l the azimuthal (orbital angular momentum) quantum number, then n can be defined as
n = nr + l + 1,
and then n and l can be used to label the states as before.
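The relation n = nr + l + 1 and the spectroscopic letters translate directly into the usual state labels; a minimal sketch:

```python
SPECTROSCOPIC = "spdfgh"  # letters for l = 0, 1, 2, 3, 4, 5

def state_label(n_r, l):
    """Label a state from its number of spherical (radial) nodes n_r and
    its orbital angular momentum quantum number l, using n = n_r + l + 1."""
    n = n_r + l + 1
    return f"{n}{SPECTROSCOPIC[l]}"

# The nodeless l = 0 state is 1s; one radial node gives 2s;
# the nodeless l = 2 state is 3d.
```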
In the ground state of the helium atom, both electrons occupy the lowest possible single-particle level 1s; this is denoted by 1s².
One might expect that in the lithium atom (the nucleus of which has three units of positive charge) the electron configuration is 1s³. But this is not the case. What one has learned from the optical spectra is that only two electrons may be in the lowest single-particle state; the next one must be content with the next higher one, which is the 2s state. Thus the electron configuration of lithium is 1s²2s.
The periodic system of elements can only be explained if one assumes that every single-particle mode characterized by the two quantum numbers n and l and its orientation in space can be taken on by only two electrons. These two electrons must differ in another property characterized by one more quantum number (the spin quantum number), which can take only two values, 1/2 or –1/2. This is a special case of the more general Pauli principle (Wolfgang Pauli 1924), which says that the states of two electrons (at the same place) can never be identical in all quantum numbers.
Therefore it is important to know how many independent orientations are possible for a given shape of a mode. The s-waves are spherically symmetric, thus there is only one s-wave for every principal quantum number. We have seen that a p-wave in an arbitrary direction can always be written as a superposition of three independent basis functions, thus there are three independent p-waves for each n, which can be chosen along the coordinate axes. For the d-waves (l=2), we have seen that there are five independent functions, and it generally holds that for each value of l there are 2l+1 independent possible orientations, which for free atoms usually are labelled by the magnetic quantum number ml which can take on integer values from –l to l.
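The counting just described — 2l+1 orientations per subshell, each holding two electrons of opposite spin — gives the familiar subshell and shell capacities; a quick sketch:

```python
def orientations(l):
    """Number of independent orientations (m_l values) for angular momentum l."""
    return 2 * l + 1

def subshell_capacity(l):
    """Maximum electron count in a subshell: two spin states per orientation."""
    return 2 * orientations(l)

def shell_capacity(n):
    """Maximum electron count in shell n (l runs from 0 to n-1), i.e. 2n^2."""
    return sum(subshell_capacity(l) for l in range(n))
```

This reproduces the well-known shell sizes 2, 8, 18, 32 for n = 1 to 4.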
Considering a nucleus with multiple charge Z and “filling in” electrons successively in the lowest possible state (as allowed by the Pauli principle), the following sequence is obtained:
1s, 2s, 2p, 3s, 3p, (4s, 3d), 4p, (5s, 4d), 5p, 6s, (4f, 5d), 6p, 7s, (5f, 6d), 7p
The levels written in brackets are very close in energy, with the consequence that, when electrons are successively added, the second level in the bracket gets some occupation already before the first one is completely filled.
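Filling electrons into this sequence can be sketched as follows. Note that this naive scheme ignores the near-degeneracies in brackets, so it reproduces light atoms such as oxygen correctly but misses exceptions like molybdenum's actual 5s¹4d⁵ occupation:

```python
# Single-particle levels in the order given above (brackets flattened).
AUFBAU_ORDER = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p",
                "5s", "4d", "5p", "6s", "4f", "5d", "6p", "7s", "5f", "6d", "7p"]
CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}  # 2*(2l+1) electrons per subshell

def naive_configuration(Z):
    """Fill Z electrons into the levels in order, up to each subshell's capacity."""
    parts = []
    for level in AUFBAU_ORDER:
        if Z <= 0:
            break
        take = min(Z, CAPACITY[level[-1]])
        parts.append(f"{level}{take}")
        Z -= take
    return " ".join(parts)
```

For Z = 42 this naive filling ends in 5s2 4d4, whereas the measured configuration of molybdenum ends in 5s1 4d5 — exactly the kind of reordering the bracketed near-degeneracies allow.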
As an example, the electron configuration of molybdenum:
₄₂Mo: 1s², 2s², 2p⁶, 3s², 3p⁶, 4s², 3d¹⁰, 4p⁶, 5s¹, 4d⁵.
The density distributions shown above illustrate that the spatial extension increases rapidly with increasing principal quantum number n. The set of states with the same n is called a shell; in real space, the different shells are not well separated from each other, but states of the same shell have roughly the same spatial extension. States with the same n and l form subshells. If all states of a subshell are occupied by electrons, their density distribution is spherically symmetric. A shell completely filled with electrons is particularly stable and therefore chemically inert. The possible interactions of an atom with its neighbours – the chemical properties – are essentially determined by the outermost electrons. From the order of levels given above and the fact that the same symmetry properties occur repeatedly, the periodic table of the elements results.
Flame test, spectroscopy
Hydrogen is a colourless gas; under “normal” circumstances the atoms are bound in pairs to H2-molecules and nothing can be seen of the possibility that light may be absorbed or emitted. Air (mainly nitrogen and oxygen) and the noble gases are colourless, and the same holds for many other substances. To observe emission of light or even spectral lines, one has to supply energy to excite the atoms.
In a gas discharge tube, the molecules are broken by collisions with electrons and ions, atoms are excited or even ionized by collisions, and then emission of light as well as absorption can be observed.
High temperatures have the same effect: in the sun there is atomic hydrogen in excited states, and in the solar spectrum the absorption lines of hydrogen can be seen.
The temperature of the flame of a Bunsen burner is sufficiently high to split molecules and to ionize atoms, which after recombination give off their energy by emission of photons. With traces of alkali metal or alkaline earth metal ions (and other substances as well) flames can be coloured; this is used in pyrotechnics and also for quick tests for these substances in minerals etc., see e.g. the Wikipedia article “Flame test”.
Lithium: crimson
Sodium: intense orange-yellow
Barium: light green
Boron: bright green
(Photos: Herge. The images have been taken from Wikimedia Commons and are reduced in size.)
The spectra which, after the one of hydrogen, are the simplest to explain, are those of the alkali metals. These atoms have a single, relatively weakly bound electron in the outermost shell in addition to the spherical, noble-gas like core. The transitions of the outer electron from the low lying excited states to the ground state produce the visible part of the spectrum.
The sodium spectrum is dominated by a line of 589 nm wavelength, coming from the transition from the 3p state to the 3s state. (In fact, due to fine structure splitting of the p state, which has not been dealt with here, this line is actually a doublet, i.e. two very closely neighbouring lines.)
“Flame spectrum” of sodium. Scale: wavelength in nm. Data from: Sansonetti and Martin, Handbook of Basic Atomic Spectroscopic Data
If there is more than one electron in the outermost shell, there are many more possibilities for electronic transitions, and the spectra accordingly become more involved.
“Flame spectrum” of calcium. (Only the stronger lines are shown.) Scale: wavelength in nm. Data source as above
Most elements give no conspicuous colour to flames. If the valence electrons are more strongly bound, the energies of the lowest excitations correspond to the ultraviolet region; the strongest spectral lines therefore are invisible for us.
Lightning. Left: Photo © Axel Rouvin. Right: Photo: Imac Vincent, May 14, 2008.
In lightning, the molecules of the air (nitrogen, oxygen and water vapour) are dissociated and the atoms ionized and excited. The emitted light is white due to the many lines in the visible spectrum; see the figures below. In addition to the lines of nitrogen and oxygen, those of hydrogen are present. The varying “red” contribution of the latter (depending on the amount of rain falling or on the density of the clouds) causes the slight differences in observed colour. A lightning spectrum is shown in an “Optics Picture Of the Day” (OPOD) of Les Cowley's Atmospheric Optics site.
Top: Spectrum of nitrogen. Bottom: Spectrum of oxygen. Scale: wavelengths in nm. The images are derived from the Saha-LTE (local thermodynamic equilibrium) spectra provided by the NIST Atomic Spectra Database with the parameters temperature 12 eV, electron density 2.0×10²² cm⁻³.
Real spectra may differ considerably from these synthesized ones in that lines shown here are not seen there or that lines occur which are not included in the above pictures. See the lightning spectrum as an example.
The relative intensities of the lines depend on the circumstances of excitation and de-excitation of the atoms and ions and emission and possible re-absorption of the photons. Assuming a thin plasma sheet in thermodynamic equilibrium, only temperature and electron density are important. But not so in most other cases.
An electric spark in air produced with a small Wimshurst machine, photographed through a diffraction grating. There is a continuous background due to photon emission by unbound electrons when captured by ions.
The violet lines are blue in the picture, and the yellow and orange lines of nitrogen show up green and red instead. Spectral colours cannot be reproduced faithfully in photography. This has been discussed already in connection with experiments with the prism. Click on the above spectrum to see an image which has been processed to make it somewhat more similar to the visual impression!
Coupling rules, “allowed” and “forbidden” transitions
The dominant spectral lines of the polar light belong to atomic oxygen with wavelengths of 558 nm (green) and 630 nm (red). In the above figure of the oxygen spectrum these lines are not present at all!
Consider the electron configuration of oxygen: the nucleus carries eight units of positive charge; adding electrons successively to the lowest possible single-particle levels, one reaches the following configuration:
₈O: 1s², 2s², 2p⁴.
The four 2p-electrons may be distributed over six possible single-particle states. If there were no interaction between the electrons, all the fifteen possibilities would lead to the same energy. But due to their charge, the electrons repel each other.
Electrons with parallel spins tend to avoid each other due to the Pauli principle. Thus, parallel spins are energetically favoured. The larger the total orbital angular momentum, the smaller is the mutual overlap of the electron densities. This leads to a level splitting according to total orbital angular momentum (“gross structure”). In addition, there is an interaction between orbital and spin angular momentum, leading to additional splitting according to total angular momentum (“fine structure”). For the lighter elements, the spin-orbit interaction is weak. As long as there is no preferred direction in space, total angular momentum is conserved and the states can be labelled by its quantum numbers.
Usually total orbital angular momentum is denoted by capital letters S, P, D, F … standing for L = 0, 1, 2, 3 …, respectively. The spin multiplicity is given as a left upper index (2S+1).
Thus the lowest few levels of oxygen are the following (experimental data taken from the compilation of Sansonetti and Martin):
Configuration        Term  J   Level (cm⁻¹)
1s² 2s² 2p⁴          ³P    2   0.000
1s² 2s² 2p⁴          ¹D    2   15867.862
1s² 2s² 2p⁴          ¹S    0   33792.583
1s² 2s² 2p³(⁴S)3s    ⁵S    2   73768.200
. . .
In the Level column the energies divided by h·c are given (h being the Planck constant and c the velocity of light). This is very convenient for the computation of the wavelength λ of the spectral lines, as
1/λ = ν/c = (Ei – Ef)/(hc)
For the transition ¹S→¹D one gets a wavelength of 558 nm – that is the green line, and for ¹D→³P the wavelength is 630 nm, the red line of the aurora. But why are these lines not seen under laboratory conditions?
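Indeed, the wavelengths follow directly from the tabulated levels: with energies given in cm⁻¹, 1/λ is just the level difference. A quick check using the oxygen values from the table above:

```python
# Oxygen I levels from the table above, in cm^-1.
LEVELS = {"3P": 0.000, "1D": 15867.862, "1S": 33792.583}

def line_wavelength_nm(initial, final):
    """Vacuum wavelength of a transition, from level energies in cm^-1."""
    delta = LEVELS[initial] - LEVELS[final]   # energy difference in cm^-1
    return 1e7 / delta                        # 1 cm = 1e7 nm

green = line_wavelength_nm("1S", "1D")   # about 558 nm
red = line_wavelength_nm("1D", "3P")     # about 630 nm
```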
In the treatment of dyes, emission and absorption probabilities are dealt with in some more detail. Large probabilities are found only for electric dipole transitions; transitions which occur much less frequently have been called “forbidden” since the early days of atomic spectroscopy, as often they are not observable at all.
In that sense, both transitions are “forbidden”: ¹S→¹D is an electric quadrupole transition, and in ¹D→³P a spin flip occurs.
As mentioned in the beginning, light emission from single atoms can only seldom be observed in nature. Lightning and polar light are the only prominent earthly examples, but when seen, they are impressive!
Polar light (aurora borealis and australis)
Northern (and southern) light is caused by the charged particles (protons and electrons mostly) of the solar wind. The earth's magnetic field forces them on curved paths around the earth, but a fraction is captured and eventually reaches the ionosphere. The details are quite complicated, see the pages of the Atmospheric Optics and the COMET® (Meteorology Education and Training) websites.
Left: Northern light, October 22, 2001, 22:00 UT, as seen north of Wiesbaden, Germany.
Right: Northern light, October 30, 2003, 19:57 UT, close to Wolfsheim (Rheinhessen, Germany).
Both photos: © Ulrich Rieth; shown with permission.
Currents of energetic charged particles flow at high altitude along the magnetic field lines (the single particles following coiled paths). They come closest to the ground at high latitudes. In collisions, energy is transferred to the particles of the upper atmosphere and is then radiated off as visible light. The polar light originates from heights between roughly 100 and 200 km; in times of strong solar activity also much higher, and then it can be seen even at middle latitudes. The composition of the atmosphere at these heights is very different from that of the lower layers: nitrogen molecules and oxygen atoms are much more abundant than all the rest.
The red line of oxygen (630 nm) is dominant at high altitudes, but fades below 150 km, as the excited oxygen atoms in the ¹D state are de-excited by collisions with nitrogen molecules much faster than by radiation. The green colour of the aurora below 150 km height stems from the 558 nm line of oxygen. It is not seen at larger heights since the ¹S state is not reached from the ³P ground state in collisions with electrons or protons. It is assumed that this state is produced in collisions of O(³P) with excited nitrogen molecules, which give off their energy and take over angular momentum:
N₂* + O(³P) → N₂ + O(¹S)
This line vanishes at high altitudes where the concentration of N₂ is too low.
Northern light, February 22, 2001, 0:14 UT, Finland, close to the Arctic Circle.
Photo: © Ulrich Rieth.
The following two diagrams are included to substantiate what has been said above. The source of this material is the COMET® Website of the University Corporation for Atmospheric Research (UCAR), sponsored in part through cooperative agreement(s) with the National Oceanic and Atmospheric Administration (NOAA), U.S. Department of Commerce (DOC). ©1997-2009 University Corporation for Atmospheric Research. All Rights Reserved.
The composition of the thermosphere (atmosphere above 100 km height). Between 100 and 200 km molecular nitrogen and atomic oxygen are the most abundant. © UCAR, source: COMET® site.
Typical spectrum of (greenish) polar light. By removing or setting checkmarks the different components can be identified. © UCAR, source: COMET® site.
The units of the left scale are kR (kiloRayleigh). These units do not measure perceptual lightness, but the number of photons per unit area and time. The apparently strong lines below 400 nm (4000 Å) and above 700 nm (7000 Å) are hardly visible (and those between 670 and 700 nm are only very faint).
As seen from the above figure, molecular nitrogen can emit blue, red and thus also purple light, and quite multicoloured displays are possible. However, a detailed discussion would lead us too far; see the Atmospheric Optics site for more details and wonderful images.
Gas tubes
Far more often than in natural phenomena one can see artificially induced atomic emission – at least in the city. Here as an example a neon-filled gas discharge tube in front of a black wall is shown. (This and the next picture show exhibits of the “Universum® Science Centre” in Bremen.)
Plasma lamps, plasma balls
Plasma lamps, invented by Nikola Tesla, have been re-invented as decorative objects, plasma spheres or balls, by Bill Parker. They contain a rarefied noble gas or a mixture of noble gases; the energy to excite the atoms is supplied by a high-frequency high voltage at the inner electrode (ca. 2 kV or more depending on the size, ca. 10–40 kHz). Typical gas pressures range from 2 to 10 Torr. The ball shown to the left presumably contains a mixture of neon and xenon (95% Ne, 5% Xe is customary); other mixtures produce streamers in different colours.
It has already been mentioned that under normal terrestrial conditions only the noble gases consist of single atoms. All other elements have the tendency that their atoms bind to other ones, forming molecules, the energy of which is lower than that of the separate constituents.
In general, the charge distribution in a compound molecule of different atoms is not symmetric; an electric dipole moment results, the molecules attract each other and thus condense to a liquid or a solid.
Small molecules built from the lighter elements are colourless. Like free atoms, they cannot absorb visible light; only in the ultraviolet region do the photons have sufficient energy to electronically excite the molecule. On the other hand, the vibrational excitations of the molecules lead to absorption and emission of infrared radiation and cause only a very faint colour, as in water.
The large and strongly coloured molecules shall be treated in the special section “dyes”, another section is devoted to the colours of minerals and crystals.
23524aeb6dc02627 | SpringerOpen Newsletter
Receive periodic news and updates relating to SpringerOpen.
Open Access Nano Express
Received:18 July 2012
Accepted:23 August 2012
Published:12 November 2012
© 2012 Donmez et al; licensee Springer.
Quantum well infrared photodetector (QWIP) structures have been developed since the 1990s [1]. There are many different types of QWIP structures. QWIPs can be categorized by their electrical properties: photovoltaic or photoconductive, or by their layer thicknesses: multi-quantum wells (MQW) or superlattice structures. They can also be categorized by having optical responsivity at a single or multiple wavelengths. Multi-color QWIPs can be composed of double barriers [2], stepped quantum wells [3], and stepped barriers. The structures with stepped barriers are also called staircase-like QWIPs in the literature [4].
In this work, photomodulated reflectance (PR) and photoluminescence (PL) experiments were carried out on two different staircase-like QWIP structures at room temperature. PR is a powerful characterization method to determine optical transitions in both bulk and low-dimensional multilayer semiconductor structures. Its absorption-like character and high sensitivity make it possible to observe optical transitions between ground and excited states, even at room temperature. PR spectroscopy utilizes the modulation of the built-in electric field at the semiconductor surface or at the interfaces through photo-injection of electron–hole pairs generated by a chopped incident laser beam. This technique produces sharp spectral features related to the critical points of the band structure, which allows a more explicit comparison of experimental results with theoretical models. In contrast, PL only gives information about ground state transitions in QWs at room temperature. PR spectra were analyzed using the third derivative functional form (TDFF) in order to fit the optical transition energies, and the results were compared to the theoretical values calculated using the transfer matrix method.
The transfer matrix technique is a common method for solving the Schrödinger equation in MQW structures, which consist of layers with different band gaps and effective masses. With this technique, energy levels and wave functions under zero or constant electric field can be calculated in complex structures [5-7]. In this work, we employed this technique to calculate the energy levels in each QW at 300 K.
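As an illustration of the technique (a minimal sketch, not the authors' code: the function names, the offset-derived barrier height, and the 5 nm GaAs/Al0.3Ga0.7As test case are illustrative assumptions), the following Python propagates the vector (ψ, ψ′/m) through piecewise-constant layers with BenDaniel–Duke matching and locates bound states where the decaying boundary conditions are satisfied on both sides:

```python
import math

HBAR2_2M0 = 3.80998  # hbar^2 / (2*m0) in eV·Angstrom^2 (m0 = free-electron mass)

def layer_matrix(E, V, m, d):
    """2x2 matrix carrying (psi, psi'/m) across a layer of constant
    potential V (eV), relative effective mass m, and width d (Angstrom)."""
    if E > V:
        k = math.sqrt(m * (E - V) / HBAR2_2M0)
        return ((math.cos(k * d), (m / k) * math.sin(k * d)),
                (-(k / m) * math.sin(k * d), math.cos(k * d)))
    q = math.sqrt(m * (V - E) / HBAR2_2M0)
    return ((math.cosh(q * d), (m / q) * math.sinh(q * d)),
            ((q / m) * math.sinh(q * d), math.cosh(q * d)))

def mismatch(E, layers, Vb, mb):
    """Boundary mismatch; it vanishes at a bound-state energy 0 < E < Vb."""
    qb = math.sqrt(mb * (Vb - E) / HBAR2_2M0)
    psi, u = 1.0, qb / mb                  # exponentially decaying left-barrier solution
    for V, m, d in layers:
        (a, b), (c, e) = layer_matrix(E, V, m, d)
        psi, u = a * psi + b * u, c * psi + e * u
    return u + (qb / mb) * psi             # decay condition in the right barrier

def bound_states(layers, Vb, mb, n=2000):
    """Scan an energy grid in (0, Vb) and bisect each sign change of the mismatch."""
    Es = [1e-4 + i * (Vb - 2e-4) / (n - 1) for i in range(n)]
    roots = []
    for a, b in zip(Es, Es[1:]):
        fa = mismatch(a, layers, Vb, mb)
        if fa * mismatch(b, layers, Vb, mb) < 0:
            for _ in range(60):            # bisection on the bracketed root
                mid = 0.5 * (a + b)
                if fa * mismatch(mid, layers, Vb, mb) <= 0:
                    b = mid
                else:
                    a, fa = mid, mismatch(mid, layers, Vb, mb)
            roots.append(0.5 * (a + b))
    return roots

# Illustrative test case: a 5 nm GaAs well between Al(0.3)Ga(0.7)As barriers.
# Barrier height: 60% conduction-band offset of dEg ~ 1.247*x eV; barrier mass by Vegard.
well = [(0.0, 0.067, 50.0)]                # (V in eV, m/m0, d in Angstrom)
Vb = 0.6 * 1.247 * 0.3
mb = 0.067 + (0.15 - 0.067) * 0.3
levels = bound_states(well, Vb, mb)        # single confined level, roughly 0.07 eV here
```

For the actual QWIP structures, the graded and stepped barriers of Figure 2 would enter as additional (V, m, d) layers, and a constant electric field can be approximated by a staircase of thin constant-potential layers.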
In order to determine the band gap of GaAs at room temperature, the Varshni equation [8] was used:
E_g(T) = E_g(0) − αT² / (T + β)     (1)
where Eg(0) is the band gap of GaAs at T = 0 K; α = 5.405 × 10−4 eV/K and β = 204 K are the Varshni parameters at the Γ point. For AlxGa1−xAs ternary alloys, the temperature dependence of the band gap for x < 0.4 can be estimated by:
E_g(x, T) = E_g(x, 0) − αT² / (T + β)     (2)
where α and β are Varshni parameters of AlxGa1−xAs. Adachi showed that compositional dependence of Varshni parameters becomes significant in AlxGa1−xAs ternary alloys for x > 0.4 [9]. However, since x < 0.4 for AlxGa1−xAs in our structures, we used the same values as GaAs [9,10]. The conduction and the valance band offsets were chosen as 60% and 40%, respectively. In the calculations of energy levels, the effective mass for each layer was considered separately. The effective masses of electrons in AlAs and GaAs were taken as 0.15 and 0.067, respectively. Using these values, the effective mass of electrons in AlxGa1−xAs layers was calculated by applying Vegard’s law:
m*(AlxGa1−xAs) = x m*(AlAs) + (1 − x) m*(GaAs)     (3)
Similarly, the effective masses of holes in the AlxGa1−xAs layers were also calculated using Equation 3, taking the density of states heavy hole effective masses as 0.81 and 0.55, and the averaged light hole effective masses were taken as 0.16 and 0.083 in AlAs and GaAs, respectively [9].
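As a quick numerical check of Equations 1 and 3 (a sketch assuming the standard E_g(0) = 1.519 eV for GaAs, which is not quoted explicitly above; the quoted Varshni parameters and AlAs/GaAs masses are used as given):

```python
def varshni(eg0, alpha, beta, T):
    """Varshni relation: E_g(T) = E_g(0) - alpha*T**2 / (T + beta), energies in eV."""
    return eg0 - alpha * T ** 2 / (T + beta)

def vegard_mass(x, m_alas=0.15, m_gaas=0.067):
    """Linear (Vegard) interpolation of the electron effective mass in AlxGa1-xAs."""
    return x * m_alas + (1.0 - x) * m_gaas

eg_gaas_300k = varshni(1.519, 5.405e-4, 204.0, 300.0)  # about 1.422 eV at room temperature
m_e_x03 = vegard_mass(0.30)                            # about 0.092 m0 for x = 0.30
```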
PR spectra were fitted using the linear combination of several Aspnes’ TDFFs [11], expressed as:
ΔR/R(E) = Re[ Σ_{j=1}^{n} A_j e^{iφ_j} (E − E_gj + iΓ_j)^{−m_j} ] + f(E)     (4)
where n is the number of spectral features to be fitted; E is the photon energy; Aj, φj, Egj, and Гj are the amplitude, phase, band gap energy, and line broadening of the jth feature, respectively. mj represents the type of critical point depending on the dimensionality of the structure, and its value is 2.5 or 3 for 3-D (bulk) or 2-D cases, respectively. The background signal in the measurements was simulated and suppressed from Equation 4 by a linear f(E) function.
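The fit function of Equation 4 is straightforward to evaluate; the sketch below uses a single hypothetical 2-D (m_j = 3) feature, with parameter values chosen for illustration rather than taken from the fits:

```python
import cmath

def tdff(E, features, background=lambda E: 0.0):
    """Aspnes third-derivative line shape:
    dR/R(E) = Re[ sum_j A_j * exp(i*phi_j) * (E - Eg_j + i*Gamma_j)**(-m_j) ] + f(E),
    where each feature is a tuple (A, phi, Eg, Gamma, m)."""
    s = sum(A * cmath.exp(1j * phi) * (E - Eg + 1j * G) ** (-m)
            for A, phi, Eg, G, m in features)
    return s.real + background(E)

# One 2-D excitonic feature at 1.50 eV with 8 meV broadening and zero phase:
feat = [(1e-6, 0.0, 1.50, 0.008, 3.0)]
on_res = tdff(1.50, feat)    # the real part vanishes at E = Eg for phi = 0, m = 3
off_res = tdff(1.60, feat)   # wings fall off as |E - Eg|**(-m)
```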
PR and PL measurements were carried out on two different MQW structures at room temperature. A tunable monochromatic probe light was provided by a 100-W tungsten lamp, dispersed by a single grating monochromator, and the sample was pumped with a modulated 10-mW He-Ne laser at 632.8 nm that was mechanically chopped at 280 Hz. The reflected probe beam was measured by a Si photodiode. The AC and DC components of reflectance (R) and differential changes in R (ΔR) were acquired by a computer, simultaneously.
The structures used in this study consist of n-doped GaAs QWs sandwiched between undoped AlxGa1−xAs barriers with three different x compositions, producing staircase-like barriers. The structures were designed as QWIP devices. Details of the structures are given in Figure 1. The main differences between the two structures are the doping profile, the doping concentration, and the barrier composition of the triangular well. All QWs in ANA-coded samples have a doping concentration of 2 × 1018 cm−3. On the other hand, in IQE samples, QWs with 5.5- and 5-nm well widths have doping concentrations of 3 × 1018 and 1 × 1018 cm−3, respectively. The central doping width of 1.2 nm in the ANA structure was increased to 2.5 nm in the IQE structure. ANA samples have triangular wells in the active period with barriers containing 30% Al, whereas IQE-coded samples have 27% Al. One edge of the triangular quantum well is formed by a graded barrier; the other edge is formed by a fixed barrier, as seen from the potential profile of the conduction band given in Figure 2. We introduced the graded barrier in the structure in order to provide a quasi-electric field which facilitates the drift of photo-excited carriers to adjacent layers. In Figure 2, quantum wells which have different barrier heights are labeled with numbers.
thumbnailFigure 1. Schematic layer structure of (a) ANA14 and (b) IQE14 sample.
thumbnailFigure 2. Conduction band profile. (a) ANA14 and (b) IQE14 structures. One period of the active region is shown in the diagram.
Results and discussion
Experimental results on reflectivity (R), PL, and PR spectra of the ANA14 and IQE14 structures are given in Figure 3. Calculated PR spectra are also included in the figure. The R spectra are shown for reference only and are not discussed further. The PL signal begins to rise from the fundamental band edge of bulk GaAs and peaks at about the combined excitonic transition region. Details of the excitonic transitions are smeared out. However, in the PR spectra, the fundamental band gaps of the bulk GaAs cap layer and AlxGa1−xAs barrier regions, and a series of excitonic transitions, are clearly resolvable. In order to analyze the obtained PR spectra, we divided the spectrum into three regions. The first region, from 1.4 to 1.51 eV, includes signals from the bulk GaAs and the effective band gap due to e1-hh1 excitonic transitions of doped GaAs QWs having AlxGa1−xAs barriers with x = 0.21. The second region, ranging from 1.51 to 1.6 eV, includes the other excitonic transitions such as e1-hh1, e1-hh2, and e1-lh1, coming mainly from the active period of the structures. Finally, the 1.6 to 1.8 eV region includes PR signals of the AlxGa1−xAs layers. The experimental results exhibit Lorentzian-like peaks which are obtained from Equation 4 as the modulus of PR resonances according to the equation below:
|ΔR/R|_j = |A_j| / [ (E − E_gj)² + Γ_j² ]^{m_j/2}     (5)
thumbnailFigure 3. R, PL, and PR spectra. (a) ANA14 and (b) IQE14 sample. Red line is the experimental, while open circles are the calculated PR spectra points.
Using this equation, bulk and excitonic transition parameters A and Γ of each signal were determined. These parameters were placed into Equation 4, and then the optical transition energies were calculated. The calculated energy levels and corresponding PR peaks in the spectra are summarized in Table 1. All possible excitonic transitions in the calculated PR spectra are clearly distinguishable. Experimental and calculated PR spectra are in excellent agreement. Although most of the excitonic transitions are identified, some of the calculated transitions were not observed in the PR spectra (Table 1).
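The modulus in Equation 5 produces a Lorentzian-like peak of height |A_j|/Γ_j^{m_j} centered at E = E_gj; a hypothetical single-resonance check (parameter values are illustrative, not fitted):

```python
def pr_modulus(E, A, Eg, Gamma, m):
    """Modulus of one PR resonance: |A| / ((E - Eg)**2 + Gamma**2)**(m/2)."""
    return abs(A) / ((E - Eg) ** 2 + Gamma ** 2) ** (m / 2.0)

# Illustrative parameters: A = 1, Eg = 1.50 eV, Gamma = 10 meV, m = 3 (2-D case).
peak = pr_modulus(1.50, 1.0, 1.50, 0.01, 3.0)         # maximum |A| / Gamma**m at E = Eg
half = pr_modulus(1.50 + 0.01, 1.0, 1.50, 0.01, 3.0)  # one broadening away: factor 2**(-m/2)
```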
Table 1. Electron and hole energy levels and PR peaks of the QWIP structure obtained at T = 300 K
PL studies showed just a single broad peak for ANA14 structure at 1.525 eV and for IQE14 structure at 1.539 eV at room temperature. As seen from the calculated values of the excitonic transitions in different quantum wells, the energy differences between them are quite small; therefore, the observed PL peak cannot be attributed to just one transition. It can be concluded that observed PL peak represents additive information about some of the optical transitions. However, PR provides detailed information, resolving closely separated energy levels, even at room temperature.
The importance of photomodulated reflectance spectroscopy for complicated semiconductor QW structures, and hence for QWIPs, has been verified by the experimental and theoretical results obtained in this work. QWs with barriers having minor differences in alloy composition can clearly be distinguished by PR measurements at room temperature. Indeed, e1-hh1, e1-hh2, and e1-lh1 transitions were clearly observed and resolved. In PL measurements, on the other hand, only one single photoluminescence peak was observed.
MQW: multi-quantum well; PL: photoluminescence; PR: photomodulated reflectance; QW: quantum well; QWIP: quantum well infrared photodetector; R: reflectance; TDFF: third derivative functional form.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
OD carried out the experiments and fitted the PR spectra in collaboration with AE, MCA, and FN. FN calculated the energy levels. OD, FN, MCA, and AE contributed to the manuscript preparation. YE is the designer of the QWIP structure. All authors read and approved the final manuscript.
We are grateful to Dr. Bulent Aslan and Dr. Ugur Serincan from Anadolu University for growing the ANA samples with MBE. This work was partially supported by The Scientific and Technological Research Council of Turkey (TUBITAK; project number 108T721), COST Action MP0805, Scientific Research Projects Coordination Unit of Istanbul University (project numbers 3587 and UDP 16607), and the Ministry of Development of Turkey (project number 2010K121050).
1. Levine BF: Quantum-well infrared photodetectors. J Appl Phys 1993, 74:R1-R81.
2. Luna E, Guzman A, Sánchez-Rojas J, Sanchez J, Munoz E: GaAs-based modulation-doped quantum-well infrared photodetectors for single- and two-color detection in 3–5 μm. IEEE J Selected Topics in Quantum Electronics 2002, 8:992-997.
3. Mii YJ, Wang KL, Karunasiri RPG, Yuh PF: Observation of large oscillator strengths for both 1→2 and 1→3 intersubband transitions of step quantum wells. Appl Phys Lett 1990, 56:1046.
4. Eker S, Hostut M, Ergun Y, Sokmen I: A new approach to quantum well infrared photodetectors: staircase-like quantum well and barriers. Infrared Physics and Technology 2006, 48:101-108.
5. Jonsson B, Eng S: Solving the Schrodinger equation in arbitrary quantum-well potential profiles using the transfer matrix method. IEEE J Quantum Electron 1990, 26:2025-2035.
6. Li W: Generalized free wave transfer matrix method for solving the Schrodinger equation with an arbitrary potential profile. IEEE J Quantum Electron 2010, 46:970-975.
7. Lantz KR: Two color photodetector using an asymmetric quantum well structure. California: Naval Postgraduate School, Monterey; 2002. [PhD thesis]
8. Varshni YP: Temperature dependence of the energy gap in semiconductors.
9. Adachi S: Properties of Semiconductor Alloys: Group-IV, III-V and II-VI Semiconductors. Wiltshire: Wiley; 2009:159-160.
10. Aspnes DE: GaAs lower conduction-band minima: ordering and properties. Phys Rev B 1976, 14:5331-5343.
11. Aspnes DE: Third-derivative modulation spectroscopy with low-field electroreflectance. Surf Science 1973, 37:418-442.
Office of the Registrar
Faculty of Science (2011/2012)
9.9 Physics and Physical Oceanography
Physics courses are designated by PHYS.
Introductory Physics I
(F) & (W)
is a non-calculus based introduction to mechanics. This course may be completed by someone who has no physics background provided some extra effort is made.
CO: Mathematics 1090
CR: PHYS 1050
LH: 3; six laboratory sessions per semester
OR: optional tutorials will be available, on average one hour per week
Introductory Physics II
(F) & (W)
CO: Mathematics 1000
LH: 3; normally there will be six laboratory sessions per semester
PR: PHYS 1020 or 1050 and Mathematics 1090 or 1000
General Physics I: Mechanics
(F) & (W)
is a calculus based introduction to mechanics. The course will emphasize problem solving. For more details regarding PHYS 1050, see Note 4 under Physics and Physical Oceanography.
CO: Mathematics 1000
CR: PHYS 1020
PR: Mathematics 1000
General Physics II: Oscillations, Waves, Electromagnetism (F) (W) & (S)
CO: Mathematics 1001
PR: PHYS 1050, or 1021, or 1020 (with a minimum grade of 65%) and Mathematics 1001
Fluids and Thermal Physics
examines elasticity, fluid mechanics, thermodynamics, kinetic theory and statistical mechanics.
CO: Mathematics 1001 and PHYS 1051
LH: 3
PR: Mathematics 1001 and PHYS 1051
Electricity and Magnetism
examines Gauss' Law, the electrostatic potential, capacitance, magnetic forces and the magnetic field, electromagnetic induction, magnetic materials, ac circuits, superconductivity, the displacement current and Maxwell's equations.
CO: Mathematics 2000
LH: 3
PR: Mathematics 2000 and PHYS 1051
Stellar Astronomy and Astrophysics
(F) & (W)
PR: 6 credit hours in Mathematics courses at the first year level
Modern Physics
CO: Mathematics 1001 and PHYS 1051
CR: PHYS 2056
PR: Mathematics 1001 and PHYS 1051
Computational Mechanics
(F) & (W)
covers Newtonian dynamics and celestial mechanics, numerical differentiation and integration, numerical solutions to mechanics problems, data and spectral analysis, Fourier series and normal modes, oscillations and vibrations, linear and non-linear oscillators, and nonlinear dynamics and chaos.
CO: Mathematics 2000
LC: 5
LH: 5
PR: PHYS 1051, Mathematics 2000
Physics of Device Materials
covers structures of crystalline and amorphous solids. Excitations and transport in metals, semiconductors, and dielectrics; electronic band structures. Physics of multi-material devices including photodiodes, solid state lasers, and field-effect transistors.
PR: PHYS 2055 or registration in Academic Term 3 of the Electrical Engineering Program
Astrophysics I
is a review of macroscopic and microscopic physics. The sun: luminosity, mass, spectrum, photosphere, corona, interior. Principles of stellar structure; radiative and convective transport of energy. The virial theorem. Thermonuclear fusion; temperature dependence; the solar neutrino problem. Nucleosynthesis; the curve of binding energy; the synthesis of heavy elements. White dwarfs, neutron stars, and black holes; degenerate electron and neutron gases; Chandrasekhar's Limit. Population I and Population II stars; the Hertzsprung-Russell diagram; relationships among luminosity, mass, and effective temperature for main sequence dwarfs. Evolution of post main sequence stars.
PR: PHYS 2053, 2750 (or 2056), and 2820
Astrophysics II
covers stellar spectra and classification of stars. Hertzsprung-Russell diagram; equations of stellar structure for a star in equilibrium; temperature and density dependencies of nuclear processes. Formation and classification of binary stars; mass and energy transfer in binary star systems; semidetached binaries; cataclysmic variables, pulsars, etc. Galaxies and galactic structure; active galactic nuclei; cosmological redshift. Cosmology.
PR: PHYS 3150 and 3220
Classical Mechanics I
CO: PHYS 2820 and Mathematics 3260
PR: PHYS 2820 and Mathematics 3260
Classical Mechanics II
covers rigid body motion. Lagrange's equations. Hamilton's equations. Vibrations. Special theory of relativity.
PR: PHYS 3220 and 3810 (or Mathematics 3202) and Mathematics 3260
Introduction to Physical Oceanography
PR: PHYS 2053 and Mathematics 2000
Principles of Environmental Physics
will explore the basic physical principles of light, heat, energy and sound in the natural environment. Several key aspects of physics in the environment will be covered, including climate and the physical evolution of the planet; the present role of the atmosphere and ocean; spectroscopy in the atmosphere; measurement and observation of the atmosphere; and principles of energy generation and pollution transport in the atmosphere and ocean.
PR: Mathematics 2000 and PHYS 2053
covers the first and second laws of thermodynamics. Entropy. Thermodynamics of real substances. Kinetic theory of matter. Introduction to statistical mechanics.
PR: Mathematics 2000, PHYS 2053 and PHYS 2750 or 2056
Electromagnetic Fields I
examines electrostatic Field: field, potential, Poisson's equation, Laplace's equation, capacitance, dielectrics, polarization, electric displacement, boundary conditions. Magnetic Field: electric current and magnetic field, vector potential, Lorentz force and relativity, changing magnetic field, inductance, magnetic materials, magnetization. Maxwell's equations.
PR: PHYS 2055 and 3810 (or Mathematics 3202)
Electric Circuits
covers circuit elements. Simple resistive circuits. Techniques of circuit analysis. Topology in circuit analysis. Operational amplifiers. Reactive circuit elements. Natural response and step response of RL, RC and RLC circuits. Circuits driven by sinusoidal sources. Mutual inductance. Series and parallel resonance. Laplace transforms in the analysis of frequency response.
CO: Mathematics 3260
CR: Engineering 3821
LC: 6
LH: 6
PR: Mathematics 2050, Mathematics 3260, PHYS 2055
Analogue Electronics
is a review of network analysis. Feedback. Electron tubes. Semiconductor diodes. Introduction to transistors. Introduction to amplifiers. Small signal models. Small signal analysis of amplifiers. Operational amplifiers. Selected topics in circuit design such as biasing, voltage regulators and power circuits, noise. This course is recommended for students with an interest in experimental Physics.
LC: 6
LH: 6
PR: PHYS 3550 and Mathematics 3260
Optics and Photonics I
covers geometrical Optics: thin lenses, mirrors, optical systems. Two-beam and multiple-beam interference phenomena. Fraunhofer Diffraction. Introduction to Maxwell's Theory: reflection, transmission, and polarization. Modulation of light waves. Fibre-optical light guides: intermodal dispersion, index profiles, loss mechanisms, single mode fibres. Optical communication systems: free space and fibre systems, emitters, detectors, amplifiers, wavelength-division multiplexing, integrated optics.
PR: Mathematics 2000 and PHYS 2055
Quantum Physics I
covers wave-particle duality of nature. Introduction to Quantum Mechanics. Schrödinger equation. One electron atoms. Quantum statistics.
CO: PHYS 3220 and 3810 or Mathematics 3202
PR: PHYS 2750 (or 2056), 3220 and 3810 (or Mathematics 3202)
Quantum Physics II
covers multielectron atoms. Molecules. Solids - conductors and semiconductors. Superconductors. Magnetic properties. Nuclear models. Nuclear decay and nuclear reactions. Properties and interactions of elementary particles.
PR: PHYS 3750
Computational Physics
is a project-based course intended to train students to become functional in computational methods, by writing and compiling computer code (C/Fortran) in a Unix environment to solve problems drawn from different areas of physics. Students will complete several projects selected from different areas of physics. Each project will introduce the students to a particular class of numerical methods. Lectures and tutorials will cover the theory that underlies the computational methods and background for code development and the application of the required numerical methods.
CO: Any two 2000-level Physics courses plus at least one other 3000-level Physics course
LC: 5
LH: 5
PR: Computer Science 1510, PHYS 2820, Mathematics 3202, Mathematics 3260
Mathematical Analysis
- inactive course.
Mathematical Physics III
PR: Mathematics 3260 and PHYS 3810 (or Mathematics 3202)
Physics Laboratory I
is a selection of experiments based primarily on material covered in the third year courses.
LH: 6
PR: at least two of PHYS 2053, 2820, 2055, and PHYS 2750 (or 2056)
Solid State Physics
covers crystal structure and binding, phonons and lattice vibrations, thermal properties of solids. Electrons in solids, energy bands, semi-conductors, superconductivity, dielectric properties. Magnetic properties of solids.
PR: PHYS 3400 and 3750 or waiver approved by the instructor
Classical Mechanics III
- inactive course.
Introduction to Fluid Dynamics
CR: Mathematics 4180
PR: PHYS 3220 and either Mathematics 4160 or the former PHYS 3821 or waiver approved by the instructor
Continuum Mechanics
- inactive course.
Introduction to General Relativity
(Mathematics 4130) studies both the mathematical structure and physical content of Einstein’s theory of gravity. Topics include the geometric formulation of special relativity, curved spacetimes, metrics, geodesics, causal structure, gravity as spacetime curvature, the weak-field limit, geometry outside a spherical star, Schwarzschild and Kerr black holes, Robertson-Walker cosmologies, gravitational waves, an introduction to tensor calculus, Einstein’s equations, and the stress-energy tensor.
CO: Mathematics 4230
CR: Mathematics 4130
PR: Mathematics 3202 and one of PHYS 3220, Mathematics 4230 or waiver approved by the instructor
Advanced Physical Oceanography
PR: PHYS 3300 and 3820 or registration in Academic term 6 of the Ocean and Naval Architectural Engineering program, or waiver approved by the instructor
Topics in Physical Oceanography
- inactive course.
Modelling in Environmental Physics
covers the basic principles underlying environmental modelling and the techniques used to build and apply models. Techniques for numerical modelling will be developed, and simple numerical models will be constructed for use in terrestrial, atmospheric and oceanic environments. Free and forced systems will be discussed, along with the transition to chaos and some aspects of chaotic dynamics.
PR: PHYS 3340 and PHYS 3820 or waiver approved by the instructor
Statistical Mechanics
covers ensembles. Classical and quantum statistical mechanics. Statistical mechanics of phase transitions. Advanced topics in statistical mechanics.
CO: PHYS 3750
PR: PHYS 3400 and 3750
Electromagnetic Fields II
covers multipole expansions, electrostatic fields as boundary value problems, polarizability of molecules in dielectric media, Clausius-Mossotti relation, gauges. Electromagnetic Waves: Poynting's theorem, reflection and transmission of electromagnetic waves, cavity resonators, wave guides. Electromagnetic Radiation: dipoles, antennas, quantum mechanics and electro-magnetic interactions. Selected topics in electrodynamics and applied electromagnetism.
PR: PHYS 3500 and 3820 or waiver approved by the instructor
Optics and Photonics II
is a review of basic topics in wave optics. Phase sensitive imaging. Electromagnetic waves in anisotropic media. Scattering of electromagnetic waves. The physics of light sources and applications. Non-linear optics and applications.
CO: PHYS 3751
PR: PHYS 3500, 3600, and PHYS 3751 or waiver approved by the instructor
Atomic and Molecular Physics
- inactive course.
Nuclear Physics
- inactive course.
Mathematical Physics III
covers further topics on partial differential equations of Mathematical Physics and boundary value problems; Sturm-Liouville theory, Fourier series, generalized Fourier series, introduction to the theory of distributions, Dirac delta function, Green’s functions, Bessel functions, Legendre functions, spherical harmonics.
PR: PHYS 3820
Quantum Mechanics
examines postulates of quantum mechanics. Operators and operator algebra. Matrix representations. Spin and magnetic fields. Approximation methods: WKB method, time independent perturbation theory, time dependent perturbation theory, variational methods. Elementary scattering theory.
PR: PHYS 3230, 3750, 3820 or waiver approved by the instructor
Advanced Quantum Mechanics
covers general formulation of quantum mechanics, measurement theory and operators. Hilbert spaces. Advanced topics selected from: electron in a strong magnetic field and the Aharonov-Bohm effect; advanced scattering theory; systems of identical particles; Feynman path integral formulation of quantum mechanics; relativistic quantum mechanics; second quantization; symmetry and group theory; density matrix and mixtures.
PR: PHYS 4850 and the former 3821 or waiver approved by the instructor
Physics Laboratory II
is a selection of experiments at the senior level.
LH: 6
PR: PHYS 3900
Honours Physics Thesis
is required of students in the Honours program.
Underwater Acoustics
covers basic theory of sound, sound in the ocean environment, wave equation, ray tracing, sonar system operation, transducers, applications.
PR: PHYS 3810 (or the former Mathematics 3220) and 3220, or waiver approved by the instructor
Ocean Climate Modelling
covers numerical techniques, finite difference, finite element and spectral methods. Introduction to the climate system. Ocean climate models. Box models. Variability on interdecadal, centennial and geological scales. Zonally averaged models. 3-D ocean modelling. Thermohaline circulation. General circulation models. Climate modelling and global warming.
PR: PHYS 3810 (or Mathematics 3202), PHYS 3300 and the completion of any 15 credit hours in core courses at the 3000 or 4000 level in the Faculty of Science or waiver approved by the instructor
From Wikipedia, the free encyclopedia
In particle physics, supersymmetry (SUSY) is a proposed type of spacetime symmetry that relates two basic classes of elementary particles: bosons, which have an integer-valued spin, and fermions, which have a half-integer spin.[1] Each particle from one group is associated with a particle from the other, known as its superpartner, the spin of which differs by a half-integer. In a theory with perfectly "unbroken" supersymmetry, each pair of superpartners would share the same mass and internal quantum numbers besides spin. For example, there would be a "selectron", a bosonic superpartner of the electron with the same mass as the electron, that would be easy to find in a laboratory. Thus, since no superpartners have been observed, if supersymmetry exists it must be a spontaneously broken symmetry so that superpartners may differ in mass.[2][3] Spontaneously broken supersymmetry could solve many mysterious problems in particle physics, including the hierarchy problem. The simplest realization of spontaneously broken supersymmetry, the so-called Minimal Supersymmetric Standard Model, is one of the best studied candidates for physics beyond the Standard Model.
There is only indirect evidence and motivation for the existence of supersymmetry. Direct confirmation would entail production of superpartners in collider experiments, such as the Large Hadron Collider (LHC). The first run of the LHC found no evidence for supersymmetry (all results were consistent with the Standard Model), and thus set limits on superpartner masses in supersymmetric theories. While some remain enthusiastic about supersymmetry,[4] this first run at the LHC led some physicists to explore other ideas.[5] Mikhail Shifman, one of the enthusiastic supporters of supersymmetry during the 1980s, urged the theoretical community in an article dated October 2012[6] to acknowledge the failure of the theory and to search for new ideas, in order to spare the next generation of theoreticians useless work and the fate of becoming a lost generation. The LHC resumed its search for supersymmetry and other new physics in its second run.
There are numerous phenomenological motivations for supersymmetry close to the electroweak scale, as well as technical motivations for supersymmetry at any scale.
The hierarchy problem
Supersymmetry close to the electroweak scale ameliorates the hierarchy problem that afflicts the Standard Model.[7] In the Standard Model, the electroweak scale receives enormous Planck-scale quantum corrections. The observed hierarchy between the electroweak scale and the Planck scale must be achieved with extraordinary fine tuning. In a supersymmetric theory, on the other hand, Planck-scale quantum corrections cancel between partners and superpartners (owing to a minus sign associated with fermionic loops). The hierarchy between the electroweak scale and the Planck scale is achieved in a natural manner, without miraculous fine-tuning.
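The cancellation can be sketched with the standard one-loop textbook estimate (a schematic form with an ultraviolet cutoff Λ_UV; the couplings λ_f, λ_S and the factor counting follow the usual treatment rather than this article):

```latex
% One-loop corrections to the Higgs mass parameter from a Dirac fermion with
% Yukawa coupling \lambda_f and from the two complex scalars that supersymmetry
% pairs with it, each with quartic coupling \lambda_S:
\Delta m_H^2 \;\simeq\;
  -\frac{|\lambda_f|^2}{8\pi^2}\,\Lambda_{\mathrm{UV}}^2
  \;+\; 2\cdot\frac{\lambda_S}{16\pi^2}\,\Lambda_{\mathrm{UV}}^2
  \;=\; \frac{\Lambda_{\mathrm{UV}}^2}{8\pi^2}\,\bigl(\lambda_S - |\lambda_f|^2\bigr)
```

Unbroken supersymmetry enforces λ_S = |λ_f|², so the quadratically divergent pieces cancel and only logarithmic corrections survive.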
Gauge coupling unification
The idea that the gauge symmetry groups unify at high energy is called grand unification theory. In the Standard Model, however, the weak, strong and electromagnetic couplings fail to unify at high energy. In a supersymmetric theory, the running of the gauge couplings is modified, and precise high-energy unification of the gauge couplings is achieved. The modified running also provides a natural mechanism for radiative electroweak symmetry breaking.
Dark matter
TeV-scale supersymmetry (augmented with a discrete symmetry) typically provides a candidate dark matter particle at a mass scale consistent with thermal relic abundance calculations.[8][9]
Other technical motivations
Supersymmetry is also motivated by solutions to several theoretical problems, for generally providing many desirable mathematical properties, and for ensuring sensible behavior at high energies. Supersymmetric quantum field theory is often much easier to analyze, as many more problems become exactly solvable. When supersymmetry is imposed as a local symmetry, Einstein's theory of general relativity is included automatically, and the result is said to be a theory of supergravity. It is also a necessary feature of the most popular candidate for a theory of everything, superstring theory, and a SUSY theory could explain the issue of cosmological inflation.
Another theoretically appealing property of supersymmetry is that it offers the only "loophole" to the Coleman–Mandula theorem, which prohibits spacetime and internal symmetries from being combined in any nontrivial way, for quantum field theories like the Standard Model with very general assumptions. The Haag-Lopuszanski-Sohnius theorem demonstrates that supersymmetry is the only way spacetime and internal symmetries can be combined consistently.[10]
A supersymmetry relating mesons and baryons was first proposed, in the context of hadronic physics, by Hironari Miyazawa during 1966. This supersymmetry did not involve spacetime, that is, it concerned internal symmetry, and was broken badly. Miyazawa's work was largely ignored at the time.[11][12][13][14]
J. L. Gervais and B. Sakita (during 1971),[15] Yu. A. Golfand and E. P. Likhtman (also during 1971), and D. V. Volkov and V. P. Akulov (1972)[16] independently rediscovered supersymmetry in the context of quantum field theory: a radically new type of symmetry of spacetime and fundamental fields, which establishes a relationship between elementary particles of different quantum nature, bosons and fermions, and unifies spacetime and internal symmetries of microscopic phenomena. Supersymmetry with a consistent Lie-algebraic graded structure, on which the Gervais–Sakita rediscovery was directly based, first arose during 1971[17] in the context of an early version of string theory by Pierre Ramond, John H. Schwarz and André Neveu.
Finally, Julius Wess and Bruno Zumino (during 1974)[18] identified the characteristic renormalization features of four-dimensional supersymmetric field theories, which identified them as remarkable QFTs, and they and Abdus Salam and their fellow researchers introduced early particle physics applications. The mathematical structure of supersymmetry (Graded Lie superalgebras) has subsequently been applied successfully to other topics of physics, ranging from nuclear physics,[19][20] critical phenomena,[21] quantum mechanics to statistical physics. It remains a vital part of many proposed theories of physics.
The first realistic supersymmetric version of the Standard Model was proposed during 1977 by Pierre Fayet and is known as the Minimal Supersymmetric Standard Model or MSSM for short. It was proposed to solve, amongst other things, the hierarchy problem.
Extension of possible symmetry groups[edit]
One reason that physicists explored supersymmetry is that it offers an extension to the more familiar symmetries of quantum field theory. These symmetries are grouped into the Poincaré group and internal symmetries, and the Coleman–Mandula theorem showed that under certain assumptions, the symmetries of the S-matrix must be a direct product of the Poincaré group with a compact internal symmetry group or, if there is no mass gap, the conformal group with a compact internal symmetry group. During 1971 Golfand and Likhtman were the first to show that the Poincaré algebra can be extended through the introduction of four anticommuting spinor generators (in four dimensions), which later became known as supercharges. During 1975 Haag, Lopuszanski and Sohnius analyzed all possible superalgebras in the general form, including those with an extended number of supergenerators and central charges. This extended super-Poincaré algebra paved the way for obtaining a very large and important class of supersymmetric field theories.
The supersymmetry algebra[edit]
Main article: Supersymmetry algebra
Traditional symmetries of physics are generated by objects that transform by the tensor representations of the Poincaré group and internal symmetries. Supersymmetries, however, are generated by objects that transform by the spin representations. According to the spin-statistics theorem, bosonic fields commute while fermionic fields anticommute. Combining the two kinds of fields into a single algebra requires the introduction of a Z2-grading under which the bosons are the even elements and the fermions are the odd elements. Such an algebra is called a Lie superalgebra.
The simplest supersymmetric extension of the Poincaré algebra is the super-Poincaré algebra. Expressed in terms of two Weyl spinors Q_α and Q̄_β̇, it has the following anti-commutation relation:

{Q_α, Q̄_β̇} = 2(σ^μ)_{αβ̇} P_μ
and all other anti-commutation relations between the Qs and commutation relations between the Qs and Ps vanish. In the above expression, the P_μ are the generators of translation and the σ^μ are the Pauli matrices.
There are representations of a Lie superalgebra that are analogous to representations of a Lie algebra. Each Lie algebra has an associated Lie group and a Lie superalgebra can sometimes be extended into representations of a Lie supergroup.
The Supersymmetric Standard Model[edit]
Incorporating supersymmetry into the Standard Model requires doubling the number of particles since there is no way that any of the particles in the Standard Model can be superpartners of each other. With the addition of new particles, there are many possible new interactions. The simplest possible supersymmetric model consistent with the Standard Model is the Minimal Supersymmetric Standard Model (MSSM) which can include the necessary additional new particles that are able to be superpartners of those in the Standard Model.
One of the main motivations for SUSY comes from the quadratically divergent contributions to the Higgs mass squared. The quantum mechanical interactions of the Higgs boson cause a large renormalization of the Higgs mass, and unless there is an accidental cancellation, the natural size of the Higgs mass is the greatest scale possible. This problem is known as the hierarchy problem. Supersymmetry reduces the size of the quantum corrections by having automatic cancellations between fermionic and bosonic Higgs interactions. If supersymmetry is restored at the weak scale, then the Higgs mass is related to supersymmetry breaking, which can be induced from small non-perturbative effects, explaining the vastly different scales in the weak interactions and gravitational interactions.
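The cancellation between fermionic and bosonic loops can be illustrated with a schematic one-loop estimate. This is a toy sketch, not a real loop calculation: the function name, the couplings and the overall 1/16π² factor are illustrative, and the SUSY ingredient is the relation lam_s = lam_f**2 between the superpartner's quartic coupling and the fermion's Yukawa coupling.

```python
import math

# Schematic quadratically divergent one-loop Higgs mass correction from a
# fermion loop (Yukawa coupling lam_f) plus a scalar loop (quartic lam_s):
#   delta m_H^2 ~ (lam_s - lam_f**2) * Lambda**2 / (16 * pi**2)
# Supersymmetry relates the couplings, lam_s = lam_f**2, so the Lambda^2
# piece cancels between partners and superpartners.
def higgs_mass_correction(lam_f, lam_s, cutoff_gev):
    return (lam_s - lam_f**2) * cutoff_gev**2 / (16 * math.pi**2)

PLANCK = 1.2e19  # GeV
print(higgs_mass_correction(1.0, 0.0, PLANCK))  # no superpartner: Planck-sized
print(higgs_mass_correction(1.0, 1.0, PLANCK))  # SUSY-related couplings: 0.0
```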
In many supersymmetric Standard Models there is a heavy stable particle (such as the neutralino) which could serve as a weakly interacting massive particle (WIMP) dark matter candidate. The existence of a supersymmetric dark matter candidate is closely related to R-parity.
The standard paradigm for incorporating supersymmetry into a realistic theory is to have the underlying dynamics of the theory be supersymmetric, while the ground state of the theory does not respect the symmetry: supersymmetry is broken spontaneously. Supersymmetry breaking cannot be accomplished by the particles of the MSSM as they currently appear, which means there must be a new sector of the theory that is responsible for the breaking. The only constraint on this new sector is that it must break supersymmetry and must give superparticles TeV-scale masses. There are many models that can do this, and most of their details do not matter. In order to parameterize the relevant features of supersymmetry breaking, arbitrary soft SUSY breaking terms are added to the theory; these break SUSY explicitly in the effective theory but are presumed to arise from a complete theory of supersymmetry breaking.
Gauge-coupling unification[edit]
One piece of evidence for supersymmetry is gauge coupling unification. The renormalization group evolution of the three gauge coupling constants of the Standard Model is somewhat sensitive to the particle content of the theory. These coupling constants do not quite meet at a common energy scale if we run the renormalization group using the Standard Model.[22] With the addition of minimal SUSY, joint convergence of the coupling constants is projected at approximately 10^16 GeV.[22]
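The running can be sketched numerically at one loop. In the snippet below, the beta coefficients and the inverse couplings at M_Z are approximate textbook values, and the MSSM coefficients are applied from M_Z upward (superpartner thresholds are ignored), so this is an illustration rather than a precision fit.

```python
import numpy as np

# One-loop running of the inverse gauge couplings:
#   alpha_i^{-1}(mu) = alpha_i^{-1}(M_Z) - (b_i / 2 pi) ln(mu / M_Z)
M_Z = 91.19                                  # GeV
ALPHA_INV_MZ = np.array([59.0, 29.6, 8.5])   # U(1)_Y (GUT-normalized), SU(2), SU(3)
B_SM = np.array([41 / 10, -19 / 6, -7.0])    # Standard Model coefficients
B_MSSM = np.array([33 / 5, 1.0, -3.0])       # MSSM coefficients

def alpha_inv(mu_gev, b):
    """Inverse couplings at the scale mu_gev for beta coefficients b."""
    return ALPHA_INV_MZ - b / (2 * np.pi) * np.log(mu_gev / M_Z)

mu_gut = 2e16  # GeV, the approximate MSSM unification scale
print("SM  :", alpha_inv(mu_gut, B_SM))    # the three couplings miss each other
print("MSSM:", alpha_inv(mu_gut, B_MSSM))  # the three couplings nearly meet
```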
Supersymmetric quantum mechanics[edit]
Supersymmetric quantum mechanics adds the SUSY superalgebra to quantum mechanics as opposed to quantum field theory. Supersymmetric quantum mechanics often becomes relevant when studying the dynamics of supersymmetric solitons, and due to the simplified nature of having fields which are only functions of time (rather than space-time), a great deal of progress has been made in this subject and it is now studied in its own right.
SUSY quantum mechanics involves pairs of Hamiltonians which share a particular mathematical relationship, which are called partner Hamiltonians. (The potential energy terms which occur in the Hamiltonians are then known as partner potentials.) An introductory theorem shows that for every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with the same energy. This fact can be exploited to deduce many properties of the eigenstate spectrum. It is analogous to the original description of SUSY, which referred to bosons and fermions. We can imagine a "bosonic Hamiltonian", whose eigenstates are the various bosons of our theory. The SUSY partner of this Hamiltonian would be "fermionic", and its eigenstates would be the theory's fermions. Each boson would have a fermionic partner of equal energy.
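The degeneracy theorem can be checked numerically. The sketch below uses the illustrative superpotential W(x) = x and units in which ħ = 2m = 1, builds the partner Hamiltonians H∓ = −d²/dx² + W² ∓ W′ by finite differences, and compares their spectra.

```python
import numpy as np

# Partner potentials for the superpotential W(x) = x:
#   V_minus = W**2 - W' = x**2 - 1,   V_plus = W**2 + W' = x**2 + 1
# SUSY QM predicts: the spectrum of H_plus equals the spectrum of H_minus
# with its zero-energy ground state removed.
N, L = 800, 10.0
x = np.linspace(-L, L, N)
dx = x[1] - x[0]

# -d^2/dx^2 via the standard 3-point finite-difference stencil
T = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / dx**2

W, W_prime = x, np.ones(N)
E_minus = np.linalg.eigvalsh(T + np.diag(W**2 - W_prime))
E_plus = np.linalg.eigvalsh(T + np.diag(W**2 + W_prime))

print(np.round(E_minus[:4], 3))  # approximately [0, 2, 4, 6]
print(np.round(E_plus[:3], 3))   # approximately [2, 4, 6]
```

Every excited level of H− matches a level of H+ to discretization accuracy, as the theorem requires.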
Supersymmetry: Applications to condensed matter physics[edit]
SUSY concepts have provided useful extensions to the WKB approximation. Additionally, SUSY has been applied to disorder-averaged systems, both quantum and non-quantum (through statistical mechanics), the Fokker-Planck equation being an example of a non-quantum theory. The 'supersymmetry' in all these systems arises from the fact that one is modelling one particle, so the 'statistics' don't matter. The supersymmetry method provides a mathematically rigorous alternative to the replica trick, which attempts to address the so-called 'problem of the denominator' under disorder averaging, but it works only in non-interacting systems. For more on the applications of supersymmetry in condensed matter physics, see the book by Efetov.[23]
Supersymmetry in optics[edit]
Integrated optics was recently found[24] to provide a fertile ground on which certain ramifications of SUSY can be explored in readily accessible laboratory settings. Making use of the analogous mathematical structure of the quantum-mechanical Schrödinger equation and the wave equation governing the evolution of light in one-dimensional settings, one may interpret the refractive index distribution of a structure as a potential landscape in which optical wave packets propagate. In this manner, a new class of functional optical structures with possible applications in phase matching, mode conversion[25] and space-division multiplexing becomes possible. SUSY transformations have also been proposed as a way to address inverse scattering problems in optics and as a one-dimensional form of transformation optics.[26]
SUSY is also sometimes studied mathematically for its intrinsic properties. This is because it describes complex fields satisfying a property known as holomorphy, which allows holomorphic quantities to be exactly computed. This makes supersymmetric models useful "toy models" of more realistic theories. A prime example of this has been the demonstration of S-duality in four-dimensional gauge theories[27] that interchanges particles and monopoles.
The proof of the Atiyah-Singer index theorem is much simplified by the use of supersymmetric quantum mechanics.
General supersymmetry[edit]
Supersymmetry appears in many related contexts of theoretical physics. It is possible to have multiple supersymmetries and also have supersymmetric extra dimensions.
Extended supersymmetry[edit]
It is possible to have more than one kind of supersymmetry transformation. Theories with more than one supersymmetry transformation are known as extended supersymmetric theories. The more supersymmetry a theory has, the more constrained are the field content and interactions. Typically the number of copies of a supersymmetry is a power of 2, i.e. 1, 2, 4, 8. In four dimensions, a spinor has four degrees of freedom, so the minimal number of supersymmetry generators is four, and having eight copies of supersymmetry means that there are 32 supersymmetry generators.
The maximal number of supersymmetry generators possible is 32. Theories with more than 32 supersymmetry generators automatically have massless fields with spin greater than 2. It is not known how to make massless fields with spin greater than two interact, so the maximal number of supersymmetry generators considered is 32; this is related to the Weinberg–Witten theorem. The 32-generator case corresponds to an N = 8 supersymmetry theory, and theories with 32 supersymmetries automatically have a graviton.
For four dimensions there are the following theories, with the corresponding multiplets[28] (CPT adds a conjugate copy whenever a multiplet is not invariant under that symmetry):
• N = 1
Chiral multiplet: (0, 1/2); vector multiplet: (1/2, 1); gravitino multiplet: (1, 3/2); graviton multiplet: (3/2, 2)
• N = 2
Hypermultiplet: (−1/2, 0^2, 1/2); vector multiplet: (0, (1/2)^2, 1); supergravity multiplet: (1, (3/2)^2, 2)
• N = 4
Vector multiplet: (−1, (−1/2)^4, 0^6, (1/2)^4, 1); supergravity multiplet: (0, (1/2)^4, 1^6, (3/2)^4, 2)
• N = 8
Supergravity multiplet: (−2, (−3/2)^8, (−1)^28, (−1/2)^56, 0^70, (1/2)^56, 1^28, (3/2)^8, 2)
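The multiplicities appearing in these multiplets are binomial coefficients: acting with k of the N helicity-lowering supercharges on the top-helicity state yields C(N, k) states of helicity λ_max − k/2. A short sketch (the function name is ours) reproducing the N = 8 supergravity numbers:

```python
from math import comb

# Helicity content of a massless multiplet with top helicity l_max under N
# supersymmetries: k lowering supercharges give multiplicity C(N, k) at
# helicity l_max - k/2.
def multiplet(N, l_max):
    return [(l_max - k / 2, comb(N, k)) for k in range(N + 1)]

for helicity, mult in multiplet(8, 2):
    print(helicity, mult)
# Multiplicities 1, 8, 28, 56, 70, 56, 28, 8, 1 for helicities 2 down to -2
```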
Supersymmetry in alternate numbers of dimensions[edit]
It is possible to have supersymmetry in dimensions other than four. Because the properties of spinors change drastically between different dimensions, each dimension has its own characteristic features. In d dimensions, the size of spinors is approximately 2^(d/2) or 2^((d−1)/2). Since the maximum number of supersymmetries is 32, the greatest number of dimensions in which a supersymmetric theory can exist is eleven.
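A rough version of this counting (ignoring Majorana/Weyl reality conditions, which can halve the count but do not change the exponential growth):

```python
# A spinor in d dimensions has about 2^floor(d/2) components. With at most
# 32 supercharges, d = 11 is the largest dimension that still fits a single
# minimal spinor's worth of supersymmetry generators.
largest = max(d for d in range(2, 20) if 2 ** (d // 2) <= 32)
for d in range(4, 13):
    print(d, 2 ** (d // 2))
print("largest d:", largest)  # -> 11
```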
Supersymmetry in quantum gravity[edit]
Supersymmetry is part of a larger enterprise of theoretical physics to unify everything we know about the universe into a single consistent set of physical principles, known as the quest for a Theory of Everything (TOE). A significant part of this larger enterprise is the quest for a theory of quantum gravity, which would unify the classical theory of general relativity and the Standard Model, which explains the other three basic forces in physics (electromagnetism, the strong interaction, and the weak interaction), and provides a palette of fundamental particles upon which all four forces act. Two of the most active methods of forming a theory of quantum gravity are string theory and loop quantum gravity (LQG), although in theory, supersymmetry could be a component of other theories as well.
If experimental evidence confirms supersymmetry in the form of supersymmetric particles such as the neutralino that is often believed to be the lightest superpartner, some people believe this would be a major boost to string theory. Since supersymmetry is a required component of string theory, any discovered supersymmetry would be consistent with string theory. If the Large Hadron Collider and other major particle physics experiments fail to detect supersymmetric partners or evidence of extra dimensions, many versions of string theory which had predicted certain low mass superpartners to existing particles may need to be significantly revised. The failure of experiments to discover either supersymmetric partners or extra spatial dimensions, as of 2013, has encouraged loop quantum gravity researchers.
Current status[edit]
Supersymmetric models are constrained by a variety of experiments, including measurements of low-energy observables – for example, the anomalous magnetic moment of the muon at Brookhaven; the WMAP dark matter density measurement and direct detection experiments – for example, XENON-100 and LUX; and by particle collider experiments, including B-physics, Higgs phenomenology and direct searches for superpartners (sparticles), at the Large Electron–Positron Collider, Tevatron and the LHC.
Historically, the tightest limits were from direct production at colliders. The first mass limits for squarks and gluinos were set at CERN by the UA1 experiment and the UA2 experiment at the Super Proton Synchrotron. LEP later set very strong limits,[29] which in 2006 were extended by the D0 experiment at the Tevatron.[30][31] From 2003 to 2015, WMAP's and Planck's dark matter density measurements have strongly constrained supersymmetric models, which, if they are to explain dark matter, have to be tuned to invoke a particular mechanism to sufficiently reduce the neutralino density.
Prior to the beginning of the LHC, in 2009 fits of available data to CMSSM and NUHM1 indicated that squarks and gluinos were most likely to have masses in the 500 to 800 GeV range, though values as high as 2.5 TeV were allowed with low probabilities. Neutralinos and sleptons were expected to be quite light, with the lightest neutralino and the lightest stau most likely to be found between 100 and 150 GeV.[32]
The first run of the LHC found no evidence for supersymmetry, and, as a result, surpassed existing experimental limits from the Large Electron–Positron Collider and Tevatron and partially excluded the aforementioned expected ranges.[33]
During 2011 and 2012, the LHC discovered a Higgs boson with a mass of about 125 GeV, and with couplings to fermions and bosons which are consistent with the Standard Model. The MSSM predicts that the mass of the lightest Higgs boson should not be much higher than the mass of the Z boson, and, in the absence of fine tuning (with the supersymmetry breaking scale on the order of 1 TeV), should not exceed 130 GeV. Furthermore, for values of the MSSM parameter tan β ≤ 3, it predicts a Higgs mass below 114 GeV over most of the parameter space.[34] This region of Higgs mass was excluded by LEP by 2000. The LHC result is somewhat problematic for the minimal supersymmetric model, as the value of 125 GeV is relatively large for the model and can only be achieved with large radiative loop corrections from top squarks, which many theorists consider to be "unnatural" (see naturalness and fine tuning).[35] On the other hand, the lightest Higgs boson in the MSSM is Standard Model-like, which is consistent with measurements of the Higgs boson couplings at the LHC.
See also[edit]
1. ^ Haber, Howie. "SUPERSYMMETRY, PART I (THEORY)" (PDF). Reviews, Tables and Plots. Particle Data Group (PDG). Retrieved 8 July 2015.
2. ^ Martin, Stephen P. (1997). "A Supersymmetry Primer". arXiv:hep-ph/9709356.
3. ^ Dine, Michael (2007). Supersymmetry and String Theory: Beyond the Standard Model. p. 169.
4. ^ Ellis, John. "The Physics Landscape after the Higgs Discovery at the LHC". arXiv:1504.03654.
5. ^ Wolchover, Natalie (November 20, 2012). "Supersymmetry Fails Test, Forcing Physics to Seek New Ideas". Quanta Magazine.
6. ^ M. Shifman: Reflections and Impressionistic Portrait at the Conference Frontiers Beyond the Standard Model, FTPI (pdf), FTPI, 31 October 2012
8. ^ Jonathan Feng: Supersymmetric Dark Matter (pdf), University of California, Irvine, 11 May 2007
9. ^ Torsten Bringmann: The WIMP "Miracle" (pdf) University of Hamburg
10. ^ R. Haag, J. T. Lopuszanski and M. Sohnius, "All Possible Generators Of Supersymmetries Of The S Matrix", Nucl. Phys. B 88 (1975) 257
11. ^ H. Miyazawa (1966). "Baryon Number Changing Currents". Prog. Theor. Phys. 36 (6): 1266–1276. Bibcode:1966PThPh..36.1266M. doi:10.1143/PTP.36.1266.
12. ^ H. Miyazawa (1968). "Spinor Currents and Symmetries of Baryons and Mesons". Phys. Rev. 170 (5): 1586–1590. Bibcode:1968PhRv..170.1586M. doi:10.1103/PhysRev.170.1586.
13. ^ Michio Kaku, Quantum Field Theory, ISBN 0-19-509158-2, pg 663.
14. ^ Peter Freund, Introduction to Supersymmetry, ISBN 0-521-35675-X, pages 26-27, 138.
15. ^ Gervais, J. -L.; Sakita, B. (1971). "Field theory interpretation of supergauges in dual models". Nuclear Physics B. 34 (2): 632–639. Bibcode:1971NuPhB..34..632G. doi:10.1016/0550-3213(71)90351-8.
16. ^ D.V. Volkov, V.P. Akulov, Pisma Zh.Eksp.Teor.Fiz. 16 (1972) 621; Phys.Lett. B46 (1973) 109; V.P. Akulov, D.V. Volkov, Teor.Mat.Fiz. 18 (1974) 39
17. ^ Ramond, P. (1971). "Dual Theory for Free Fermions". Physical Review D. 3 (10): 2415–2418. Bibcode:1971PhRvD...3.2415R. doi:10.1103/PhysRevD.3.2415.
18. ^ Wess, J.; Zumino, B. (1974). "Supergauge transformations in four dimensions". Nuclear Physics B. 70: 39–50. Bibcode:1974NuPhB..70...39W. doi:10.1016/0550-3213(74)90355-1.
19. ^ suggested here
20. ^ Iachello, F. (1980). "Dynamical Supersymmetries in Nuclei". Physical Review Letters. 44 (12): 772–775. Bibcode:1980PhRvL..44..772I. doi:10.1103/PhysRevLett.44.772.
21. ^ Friedan, D.; Qiu, Z.; Shenker, S. (1984). "Conformal Invariance, Unitarity, and Critical Exponents in Two Dimensions". Physical Review Letters. 52 (18): 1575–1578. Bibcode:1984PhRvL..52.1575F. doi:10.1103/PhysRevLett.52.1575.
22. ^ a b Gordon L. Kane, The Dawn of Physics Beyond the Standard Model, Scientific American, June 2003, page 60 and The frontiers of physics, special edition, Vol 15, #3, page 8
23. ^ Supersymmetry in Disorder and Chaos, Konstantin Efetov, Cambridge university press, 1997.
24. ^ Miri, M.-A.; Heinrich, M.; El-Ganainy, R.; Christodoulides, D. N. (2013). "Supersymmetric optical structures". Physical Review Letters. 110 (23): 233902. arXiv:1304.6646. Bibcode:2013PhRvL.110w3902M. doi:10.1103/PhysRevLett.110.233902. PMID 25167493.
25. ^ Heinrich, M.; Miri, M.-A.; Stützer, S.; El-Ganainy, R.; Nolte, S.; Szameit, A.; Christodoulides, D. N. (2014). "Supersymmetric mode converters". Nature Communications. 5: 3698. arXiv:1401.5734. Bibcode:2014NatCo...5E3698H. doi:10.1038/ncomms4698. PMID 24739256.
26. ^ Miri, M.-A.; Heinrich, Matthias; Christodoulides, D. N. (2014). "SUSY-inspired one-dimensional transformation optics". Optica. 1 (2): 89. arXiv:1408.0832. doi:10.1364/OPTICA.1.000089.
27. ^ Krasnitz, Michael (2002). Correlation functions in supersymmetric gauge theories from supergravity fluctuations (PDF). Princeton University Department of Physics. p. 91.
28. ^ Polchinski, J. String Theory. Vol. 2: Superstring Theory and Beyond, Appendix B.
29. ^ LEPSUSYWG, ALEPH, DELPHI, L3 and OPAL experiments, charginos, large m0 LEPSUSYWG/01-03.1
30. ^ The D0 Collaboration (2009). "Search for associated production of charginos and neutralinos in the trilepton final state using 2.3 fb−1 of data". arXiv:0901.0646. Bibcode:2009PhLB..680...34D. doi:10.1016/j.physletb.2009.08.011.
31. ^ The D0 Collaboration (2006). "Search for squarks and gluinos in events with jets and missing transverse energy using 2.1 fb−1 of pp̄ collision data at √s = 1.96 TeV". arXiv:0712.3805. Bibcode:2008PhLB..660..449D. doi:10.1016/j.physletb.2008.01.042.
32. ^ O. Buchmueller; et al. (2009). "Likelihood Functions for Supersymmetric Observables in Frequentist Analyses of the CMSSM and NUHM1". The European Physical Journal C. 64 (3): 391–415. arXiv:0907.5568. Bibcode:2009EPJC...64..391B. doi:10.1140/epjc/s10052-009-1159-z.
33. ^ Roszkowski, Leszek; Sessolo, Enrico Maria; Williams, Andrew J. (11 August 2014). "What next for the CMSSM and the NUHM: improved prospects for superpartner and dark matter detection". Journal of High Energy Physics. 2014 (8). arXiv:1405.4289. Bibcode:2014JHEP...08..067R. doi:10.1007/JHEP08(2014)067.
34. ^ Marcela Carena and Howard E. Haber (2003). "Higgs Boson Theory and Phenomenology". Progress in Particle and Nuclear Physics. 50: 63–152. arXiv:hep-ph/0208209. Bibcode:2003PrPNP..50...63C. doi:10.1016/S0146-6410(02)00177-1.
35. ^ Patrick Draper; et al. (December 2011). "Implications of a 125 GeV Higgs for the MSSM and Low-Scale SUSY Breaking". Physical Review D. 85 (9): 095007. arXiv:1112.3068. Bibcode:2012PhRvD..85i5007D. doi:10.1103/PhysRevD.85.095007.
Further reading[edit]
Theoretical introductions, free and online[edit]
On experiments[edit]
External links[edit]
Some questions from readers
A Black scientist wrote to me in response to my list of white privileges:
I prefer to stay out of discussions about race for personal reasons, but in reading your most recent blog post, I keep feeling that there is an undercurrent of "class-ism" that is generally overlooked. For example, I would add to Dr. McIntosh's list something along the lines of "I can enjoy a meal in a wealthier part of town without having my presence questioned because of my race," the assumption being that I must be too poor or uneducated to be in such an establishment given my race. I imagine that this would be an unlikely occurrence for a member of a majority group. Is this something you have come across?
My answer:
I agree with your addition to Dr. McIntosh's list. This highlights how class and race are intertwined.
I strongly encourage you to read Seeing White. Chapter 5 covers the intersection of socioeconomic class and white privilege.
In that chapter, the authors point out that the things we associate with upper class (classical music, art, wine) are things that are also a part of white society. For example, a person from India might enjoy classical music that has roots that go back centuries further than Beethoven, but Indian classical music, while a class signifier in India, is not in our country. For an Indian person in the US, they need to switch to listening to Western European classical music to have the proper musical signifier of class. They need to be more white to be more upper class.
The reason upper class is associated with white is because throughout history Black people in particular have been forced into a lower socioeconomic class. Even today, a Black family earning $150K/year is much more likely to live near poverty than a white family earning the same amount. And no matter the income of a Black family, there exists an 18:1 wealth gap (where wealth = assets - debts).
Finally, class is mutable. A white person from Appalachia can go to Hollywood, make it big, and enter into an upper class. Or they can go to college, graduate to Wall Street, and become upper class. But a Black person, no matter their accomplishments, will remain a Black person. When people see Oprah, they may see a successful woman. But they more likely see a successful Black woman first. But when people see Martha Stewart, despite her criminal record, she's just a successful woman, with no mention of her whiteness.
In a discussion about racism, focusing on class is like walking into a room full of people with tuberculosis and saying, "Jeez, I wish someone would do something about all this coughing!" Yes, coughing is bad. It spreads germs. It's a problem. But the root is the disease (tuberculosis). The root of class differences for Black people is racism. Racism is the disease, and we have an epidemic in our country.
I hope this helps!
From a white scientist:
First off, I've seen the "racism = prejudice + power" line a fair amount recently, but only recently. Has this idea of what constitutes racism been around longer and I've just not noticed, or is it a relatively recent idea? By modern standards, it mostly means that one can't be racist against white people in the United States. It may just be the unfamiliarity of the newer definition (or proper academic definition I wasn't aware of), but something about it just doesn't seem right to me. To my mind, there's a difference between systemic racism (which is largely what you were talking about) and individual racism (which, from the right people, ultimately powers systemic racism). I see that this will be discussed in a future post, so I hope that ends up being enlightening. In the paragraph where you mention this, you seem to imply that the individual racism could (or should) be called bigotry instead. Am I reading that properly? I think just by writing this out, I've been able to get some things straight in my head, but I need to bounce my thoughts off of someone who knows more than I do.
First of all, I received this email more than a month ago, so I hope the reader looked into this a bit more and has been following my subsequent posts. Here's an excellent discussion of the various definitions of racism. Here's another essay on definitions. Here's another that I just found by Googling "Definitions of Racism."
My take-away from these resources and others is that, first of all, precision in the words we use matters. As George Orwell wrote:
Our language must be precise and carry maximal information content in order to advance society. I think we should refine the definition of racism to focus on what matters, namely the power imbalance that allows for active oppression at the systemic level and reinforcement at the personal/interpersonal level. A Black man can intensely hate white people, keep his kids from playing with white kids, call them names behind their back. But do these actions affect the privileges that white people enjoy? Does his hatred toward white people increase his ability to acquire wealth or exert political/social power in his life?
Hell no.
So if we want the word "racism" to carry precise meaning, it's important to distinguish it from prejudice and bigotry, and keep the focus on the power that makes racism so much more powerful than these other concepts.
Another question from a white reader:
What can I, as a white guy, do to help? I've read your posts and I'm feeling like my viewpoints are changing. But when I go to work, I feel like I don't know how to influence my coworkers or change our work environment.
First, thanks for your desire to help. But let me start by saying that I've never received the question "how can I help" from a white person who has done their homework. People who have dug into multiple books, joined anti-racism discussion groups, started following other blogs, and embarked on a journey of discovery rapidly start seeing where and how they can help.
Secondly, I spent months not responding to this question because I was busy answering this very question through my writing. The message I keep pounding on is straight-forward but not easy: read, read, read. If you didn't know anything about quantum mechanics, then of course you'd be confused about how to solve the time-independent Schrödinger equation. Of course the concept of a probability distribution for an electron's position would make no sense to you. How could you fix it? Well, I suppose you could email a physicist. But once she figured out that you've never taken a Quantum class, never read a book on the subject, and therefore lacked any meaningful background knowledge, she'd probably stop responding to your emails.
Finally, white people need to recognize that their local person of color is not the designated race relations officer of their department. The average person of color is putting in more work with more on their mind and dealing with more shit than the average white person. So rather than burdening a person of color with your basic questions, do a Google search and/or pick up a book. I'm sorry to sound harsh about this, but if you want to learn, it's on you, not the nearest Black person.
And just to prove that I'm not trying to be all righteous on that last one, here's me catching myself almost doing the same thing about an LGBT issue:
Sunday, March 12, 2017
Jacques Distler vs some QFT lore
Three weeks ago, in the article titled
What's going on? Indeed, textbooks and instructors often – and, according to some measures, always – say that quantum mechanics of one particle ceases to behave well once you switch to relativity – to theories covariant under the Lorentz transformations.
Are these statements right? Are they wrong? And are the correct statements one can make important? It depends what exact statements you have in mind.
What Distler discusses is the existence of the Hilbert space – and Hamiltonian – for one particle, e.g. the Klein-Gordon particle. Does it exist? You bet. If you believe that a Hilbert space of particles exists in free quantum field theory, do the following: Write the basis vectors of that Hilbert space as Fock-space basis vectors, i.e. in terms of the basis vectors of the form\[
a^\dagger_{\vec k_1} \cdots a^\dagger_{\vec k_n} \ket 0
\] And simply pick those basis vectors that contain exactly one creation operator. This one-particle subspace of the Hilbert space will evolve to itself under the empty-spacetime evolution operators. In fact, if you write the basis in the momentum basis as I did, the Hamiltonian for one real quantum of the real Klein-Gordon equation will be simply\[
H = \sqrt{|\vec k|^2 + m^2}.
\] This is something you may derive from quantum field theory. The operator above is perfectly well-defined in the momentum space. The energy is non-negative, the norms of states are positive, everything works fine.
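As a numerical sanity check, the positivity and the non-relativistic limit of this Hamiltonian can be sketched in a few lines (illustrative values in \(\hbar=c=1\) units, nothing specific to Distler's post):

```python
from math import sqrt

def energy(k, m):
    """Relativistic one-particle energy E = sqrt(k^2 + m^2) in hbar = c = 1 units."""
    return sqrt(k**2 + m**2)

m = 1.0
# the spectrum is bounded below by the rest mass: no negative energies
for k in (0.0, 0.01, 0.1, 1.0, 10.0):
    assert energy(k, m) >= m

# for k << m the energy reduces to the familiar non-relativistic m + k^2/(2m)
k = 0.01
assert abs(energy(k, m) - (m + k**2 / (2 * m))) < 1e-8
```

The leading correction beyond this expansion is the \(-k^4/8m^3\) term, which is where relativistic kinematics first shows up perturbatively.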
So has Distler shown that all the statements of the type "one particle isn't consistent in relativistic quantum mechanics" are wrong?
Nope, he hasn't. In particular, he was talking about the statement
...replacing the [non-relativistic, e.g. one-particle] Schrödinger equation with Klein-Gordon make[s] no sense...
But this statement is right at the level of one-particle quantum mechanics because his equation for the evolution of the wave function is not the Klein-Gordon equation. You know, the Klein-Gordon equation is\[
\left(\frac{\partial^2}{\partial t^2} - \frac{\partial^2}{\partial x^2} - \frac{\partial^2}{\partial y^2} - \frac{\partial^2}{\partial z^2} + m^2 \right) \Phi = 0.
\] That's a nice, local – perfectly differential equation. On the other hand, the replacement for the non-relativistic Schrödinger equation\[
i\hbar\frac{\partial}{\partial t} \psi = -\frac{\hbar^2}{2m} \Delta \psi + V(x) \psi
\] that he derived and that describes the evolution of one-particle states was\[
i\hbar\frac{\partial}{\partial t} \psi = c \sqrt{m^2c^2-\hbar^2\Delta} \psi + V(x) \psi
\] Because the square root has a never-ending Taylor expansion, the function of the Laplace operator is a terribly non-local "integral operator" acting on the wave function \(\psi(x,y,z,t)\) in the position representation. So this equation for one particle, even though it follows from the Klein-Gordon quantum field theory, doesn't have the nice and local Klein-Gordon form. It isn't pretty and it isn't fundamental. If you wrote this equation in isolation, you should be worried that the resulting theory isn't relativistic because relativity implies locality and this equation allows the localized wave function packet to spread superluminally!
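That non-terminating Taylor series is easy to exhibit: \(\sqrt{m^2c^2-\hbar^2\Delta}\) is defined through the expansion of \(\sqrt{1+u}\), and none of its generalized binomial coefficients vanish, so derivatives of every order appear, which is the hallmark of a non-local operator. A sketch in plain arithmetic (nothing model-specific):

```python
def sqrt_series_coeff(n):
    """n-th Taylor coefficient of sqrt(1 + u): the generalized binomial (1/2 choose n)."""
    c = 1.0
    for k in range(n):
        c *= (0.5 - k) / (k + 1)
    return c

# no coefficient ever vanishes, so sqrt(m^2 - Delta) contains derivatives
# of arbitrarily high order acting on the wave function
assert all(sqrt_series_coeff(n) != 0 for n in range(20))

# sanity check: the series really sums to sqrt(1 + u) inside its radius of convergence
u = 0.2
approx = sum(sqrt_series_coeff(n) * u**n for n in range(60))
assert abs(approx - (1 + u) ** 0.5) < 1e-12
```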
What the statements mean is that if you want to use some nice and local equation for a wave function for one particle – i.e. if you literally want to replace Schrödinger's equation by the similar Klein-Gordon equation – you won't find a way to construct (in terms of local functions of derivatives etc.) the probability current and density etc. that would have the desired positivity properties etc. And this statement is just true and important!
If you want to return to simple, fundamental, justifiable, beautiful equations, you can indeed use the Klein-Gordon, Dirac, Maxwell, and other equations. But you must appreciate that they're equations for (field) operators, not for wave functions.
This statement is important because it's not just a mathematical one. It's highly physical, too. In particular, if you consider any relativistic quantum mechanical theory of particles – quantum field theory or something grander, like string theory – it's unavoidable that when you confine particles to distances shorter than the Compton wavelength \(\hbar / mc\) of that particle, you will unavoidably have enough energy so that particle-antiparticle pairs will start to be produced with nonzero probabilities. And in relativity, it's normal for a particle to move at a speed comparable to the speed of light, and then its wavelength is comparable to the Compton wavelength. You can't really trust the one-particle theory at distances comparable to its normal de Broglie wavelength! So the theory is wrong in some very strong sense.
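For a sense of scale, here is a sketch computing the reduced Compton wavelength \(\hbar/mc\) for the electron; the constants are typed in by hand, so treat the trailing digits as approximate:

```python
hbar = 1.054_571_817e-34   # J s
c = 2.997_924_58e8         # m / s
m_e = 9.109_383_7e-31      # kg, electron mass

# reduced Compton wavelength: below this distance a one-particle
# description of the electron cannot be trusted
lambda_c = hbar / (m_e * c)
assert 3.8e-13 < lambda_c < 3.9e-13   # about 3.86e-13 m
```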
The antiparticles (which are the same as the original particle in the real Klein-Gordon case, just to be sure) inevitably follow from relativity combined with quantum mechanics, and so does the pair production of particles and antiparticles. This physical statement has lots of nearly equivalent mathematical manifestations. For example, local observables in a relativistic quantum theory have to be constructed out of quantum fields. So the 1-particle Hilbert space doesn't have any truly local observables: You can't construct the Klein-Gordon field \(\Phi(x,y,z,t)\) out of operators acting on the 1-particle Hilbert space because the latter operators never change the number of particles while \(\Phi(x,y,z,t)\) does (by one or minus one – it's a combination of creation and annihilation operators). In fact, you can't construct the bilinears in \(\Phi\) and/or its derivatives, either, because while those operators in QFT contain some terms that preserve the number of particles, they also contain terms that change the number of particles by two (particle-antiparticle pair production or pair annihilation), and those are equally important for obtaining the right commutators and other things. The mixing of creation operators for particles and the annihilation operators for antiparticles is absolutely unavoidable if you want to define observables at points (or regions smaller than the Compton wavelength).
There's one more statement that Distler made and that is really wrong. Distler wrote that the problems only begin when you start to consider interactions – and from the context, it's clear that he meant interactions involving several quanta of quantum fields, several particles in the quantum field theory sense. But that's not true.
Problems of "one-particle relativistic quantum mechanics" already appear if you consider the behavior of the single particle in external classical fields. Just squeeze a Klein-Gordon particle – e.g. a Higgs boson – in between two metallic plates whose distance is sent to zero. Will it make sense? No, as I mentioned, the walls start to produce particle-antiparticle quanta in general. Time-dependent Hamiltonians lead to particle production, if you wish. Similarly, if you place these particles in any external classical field, the actual Klein-Gordon field may react in a way to create particle pairs.
So the truncation of the Hilbert space of a quantum field theory to the one-particle subspace is inconsistent not only if you consider interactions of particles in the usual Feynman diagrammatic sense – but even if you consider the behavior of the particle in external classical fields. Whatever you try to do with the particle that goes beyond the stupid simple single free-particle Hamiltonian will force you to acknowledge that the truncated one-particle theory is no good.
We want to do something more with the theory than just write an unmotivated non-local Hamiltonian of the kind \(H\sim \sqrt{m^2+p^2}\) if I use \(\hbar=c=1\) units here. And as soon as we do anything else – justify this ugly and seemingly non-local (and therefore seemingly relativity-violating) Hamiltonian by an elegant theory, study particle interactions, study the behavior of one particle in external classical fields – we just need to switch to the full-blown quantum field theory, otherwise our musings will be inconsistent.
One extra comment. I mentioned that the non-local differential operator allows the wave packet to spread superluminally. How is it possible that such a thing results from a relativistic theory? Well, quantum field theory has no problem with that because when you do any doable measurement, the processes in which a particle spreads in the middle get combined with processes involving antiparticles. When you calculate the "strength of influences spreading superluminally", some Green's functions – which are nonzero for spacelike separations – will combine to the "commutator correlation function" which vanishes at spacelike separation. So the inseparable presence of antiparticles will save the locality for you. The truncation to particles-only (without antiparticles) would indeed violate locality required by relativity as long as you could experimentally verify it (you need at least some interactions of that particle with something else for that).
While Jacques is right about the possibility of truncating the Hilbert space of quantum field theories to the one-particle subspaces, he's morally wrong about all these big statements – and some of his statements are literally wrong, too. At least morally, the lore that drives him up the wall is right and there are ways to formulate this lore so that it is both literally true and important, too.
So students in Austin are encouraged to actively ignore their grumpy instructor's tirades against the quantum field theory lore and even more encouraged to understand in what sense the lore is true.
As I explain in the comments, many quantum field theory textbooks have wonderful explanations – usually at the very beginning – of the wisdom that Jacques Distler seems to misunderstand, namely why quantum fields and the mixing of sectors with different numbers of particles is unavoidable for consistency of quantum mechanics with special relativity.
The 2008 textbook by my adviser Tom Banks starts the explanation on Page 3, in the section "Why quantum field theory?" It says that the probability amplitude for a particle emission at spacetime point \(x\) and its absorption at point \(y\) is unavoidably nonzero for spacelike separations. Because it would only be nonzero for one of the two time orderings of \(x,y\), and the ordering of spacelike-separated events isn't Lorentz-invariant, Lorentz invariance would be broken; one must actually demand that only amplitudes in which both orderings are summed over are allowed. In other words, as argued on page 5, the only known consistent way to solve this clash with Lorentz invariance is to postulate that every emission source must also be able to act as an absorption sink and vice versa. When both terms are combined, the sum is still nonzero in the spacelike region but has no brutal discontinuities when the ordering gets reversed.
Also, when the particle carries charges, the emission and absorption in the two related processes must involve particles of opposite charges and one predicts (and Dirac predicted) the existence of antiparticles that are needed for things to work.
Weinberg QFT Volume 1 explains the negative probabilities and energies of the relativistic equations naively used instead of the non-relativistic Schrödinger equation on pages 7, 12, 15... Read it for a while. It's OK but, in my opinion, much less deep than Tom's presentation.
Peskin and Schroeder's textbook on quantum field theory discusses the non-vanishing of the amplitudes in the spacelike region on page 14, and pages 27-28 discuss that the actual influence of one measurement on another is measured by the commutator of two field operators. And that vanishes for spacelike separations – again, because two processes that are opposite to each other are subtracted.
Without the mixing of creation operators (for particles) and annihilation operators (for antiparticles), you just can't define any observables that would belong to a point or a region and that would behave relativistically (respecting the independence of observables that are spacelike separated). Quantum fields are the only known way to avoid this conflict between quantum mechanics and relativity. They are unavoidably superpositions of positive- and negative-energy solutions, and therefore are expanded in sums of creation and annihilation operators. That's why all local discussions make it necessary to allow emission and absorption at the same time – and, consequently, the combination of quantum mechanics and relativity makes it necessary to consider the whole Fock space with a variable number of particles. The one-particle truncation is inconsistent with relativistic dynamics such as time-dependent interactions, emission, or absorption.
In the mathematical language, fields and their functions are necessary for any local observables in relativistic quantum mechanical theories. They always contain terms that change the number of particles – except for the trivial constant operator \(1\). In the physical language, relativity and quantum mechanics simultaneously imply that emission and absorption are linked, antiparticles exist, and scattering amplitudes for particles and antiparticles have to obey identities such as the crossing symmetry.
The teaching of a quantum field theory course could be a good opportunity for Jacques to learn this basic stuff that is often presented on pages such as 3,5,7,12,14... of introductory textbooks.
Earth Science
Scientific Cruise Meets Perfect Storm, Inspires Extreme Wave Research 107
Posted by Unknown Lamer
from the creative-punishment-for-copyright-infringers-discovered dept.
An anonymous reader writes "The oceanographers aboard RRS Discovery were expecting the winter weather on their North Atlantic research cruise to be bad, but they didn't expect to have to negotiate the highest waves ever recorded in the open ocean. Wave heights were measured by the vessel's Shipborne Wave Recorder, which allowed scientists from the National Oceanography Centre to produce a paper titled 'Were extreme waves in the Rockall Trough the largest ever recorded?' It's that paper, in combination with the first confirmed measurement of a rogue wave (at the Draupner platform in the North Sea), that led to 'a surge of interest in extreme and rogue waves, and a renewed emphasis on protecting ships and offshore structures from their destructive power.'"
Scientific Cruise Meets Perfect Storm, Inspires Extreme Wave Research
Comments Filter:
• by Anonymous Coward on Tuesday April 17, 2012 @12:06AM (#39707379)
This scientific cruise also proved that the only kind of cruise where nobody gets laid is a "scientific cruise"
• by cplusplus (782679) on Tuesday April 17, 2012 @12:15AM (#39707423) Journal
I only RTFAs to find out how high the waves were - it turns out they were up to 29.1 meters (95.5 feet).
• Rogue waves (Score:3, Funny)
by gstrickler (920733) on Tuesday April 17, 2012 @12:23AM (#39707453)
Outlaw them and put out a bounty (or a Bounty?)
• 2006 (Score:5, Informative)
by Anonymous Coward on Tuesday April 17, 2012 @12:32AM (#39707491)
The article was published in 2006. How is this 'new?'
• The article was published in 2006. How is this 'new?'
I guess it's some sort of tie in with the 100th anniversary of the Titanic making it almost all the way across the Atlantic.
• The wave was so high that the ship did a loopty-loop, causing a rift in time where they just ended up here. The same phenomenon can be seen if you can swing high enough on a swingset to go around once
• by jlehtira (655619)
Well, I agree with your point. But six years is a good time to let scientific papers simmer. Less than that is not enough time for other scientists to evaluate the correctness and value of some paper.
• by Anonymous Coward
Many researchers were lost during the peer-review of this paper.
• by dreemernj (859414)
2006? Wasn't that around the time a rogue wave was recorded on The Deadliest Catch?
• Data collected in 2000. Paper published in 2006. Reported in /. in 2012. The pace of good science is slow and deliberate.
• by Anonymous Coward on Tuesday April 17, 2012 @12:43AM (#39707553)
look up Schrödinger wave equations and apply them to ocean waves. You will get 30+ meter tall waves with a trough next to the "wall" of water, (the wave is tall and narrow - like a wall). This trough adds to the great difficulty in surviving one of these waves. Ships that are designed to withstand forces of 10 tons/m2 have to contend with 10 times that force. I believe there was a study in which someone, (don't remember her name :( ) mapped the entire earth over a two week period and found something on the order of 20 of these waves. Fascinating stuff.
• by phantomfive (622387) on Tuesday April 17, 2012 @03:21AM (#39708089) Journal
Oh yeah, just found it [bbc.co.uk]. They found about 10 giant waves.
• by Anonymous Coward
FYI the Schrodinger wave equation does not describe ocean waves. Water waves are described by the Navier-Stokes (N-S) equations. Turbulence models fall out of N-S, however only electrons sometimes fall out from Schrodinger :)
	• There is a non-linear version of the Schrödinger equation. Some theories attempt to explain rogue waves in the open sea using these non-linear equations as a model, because the distribution of wave heights that would result from the linear model substantially underpredicts the occurrence and size of rogue waves.
• by Anonymous Coward
The nonlinear Schrödinger equation is one of the many equations that can be used to describe the behaviour of water waves in various regimes, with a tiny bit about it on Wikipedia here [wikipedia.org]. Although the NLS is mostly used for the behaviour of the envelope of deep-water waves, which means you can show soliton-based rogue-wave-like behaviour, but not say much about trough-to-peak steepening as in the grandparent post.
The set of equations and theories used to model nonlinear water waves is quite diverse, wit
• by WaffleMonster (969671) on Tuesday April 17, 2012 @12:52AM (#39707605)
For those looking for more details about this voyage http://eprints.soton.ac.uk/294/ [soton.ac.uk]
• Specifically in 1998, a 120ft wave off the east coast of tasmania http://www.swellnet.com.au/news/124-a-short-history-of-tasman-lows [swellnet.com.au]
• Since extreme waves were not the subject of their expedition, they had not read all the prior literature.
• by TapeCutter (624760) on Tuesday April 17, 2012 @02:48AM (#39708017) Journal
The Tasman sea is notorious for rogue waves. Many moons ago I worked a fishing trawler in Bass Strait, I never saw anything like 120ft but the regular waves were tall enough that the radar was blocked by the peaks when the boat was in a trough, I'm guessing the radar mast was about 30ft above the water line. A lot like riding in a giant roller coaster carriage really, slowly climb up one wave, crest, then race down the other side and watch the bow dig under the next one, throw the water over the wheel house as the bow pops up to the surface, and starts the next climb. From what I've heard, the problem with rogue waves is not so much their height but the fact that they are too steep to climb.
• Wow, that is incredibly exciting.
• I detect a hint of sarcasm but to be honest it was downright fucking scary the first trip but after a few trips it became as exciting to me as an old fashioned roller coaster is to the guy who stands up on it all day operating the brake. Although a stingray the size of a family dinner table flapping about on an 8X12 deck was never boring.
• No sarcasm at all. If the human lifespan weren't so short I would definitely consider going down and trying it out for a few years. I don't know about that stingray thing, though. I know people who go ocean kayaking but that's nothing in comparison.
• by tlhIngan (30335)
Waves are never boring, especially big ones. The key is to cut through them - if you let them hit the side, you risk capsizing. The only way to do this is engine power (run
• by serbanp (139486)
Does this mean that "The Perfect Storm" depiction of how the Andrea Gail sank was technically inaccurate? In that film, the ship went with its bow straight into the freak wave but could not reach the top and fell over.
	• Yep, it's a lot like a plane, if the engine is fucked, gravity takes over and you basically fall off the wave..
• That article claims 42.5m is 120 feet - it's actually 140 feet. The wave was probably recorded as 120 feet and someone mangled the conversion rather than the other way round.
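The arithmetic supports the parent's guess. A quick sketch:

```python
METERS_PER_FOOT = 0.3048

feet = 42.5 / METERS_PER_FOOT
assert round(feet) == 139                        # 42.5 m is ~139 ft, i.e. roughly 140 ft
assert round(120 * METERS_PER_FOOT, 1) == 36.6   # while 120 ft is only ~36.6 m
```

Either way, 42.5 m and 120 ft cannot both describe the same wave.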
• by Sarten-X (1102295) on Tuesday April 17, 2012 @12:53AM (#39707611) Homepage
Rogue waves: Demonstrating yet again that reality is a fascinatingly weird place.
• by iamhassi (659463)
And we don't understand our planet as much as we think. We are always focused on exploring strange new worlds, to seek out new life and new civilizations, to boldly... um, you get the idea, but look, there's new things happening on our own planet. How can we understand new planets when we don't understand the one we are on? Not saying never explore space, just saying maybe we should focus on what we have.
• by Anonymous Coward
How can we understand this planet when we have nothing to compare it to?
Rhetorical questions only cater to people's emotional responses, but they don't make much of an argument.
• by Sarten-X (1102295)
Reminds me of the TV show seaQuest... for almost a whole season, they had interesting episodes based around real weirdness in the oceans.
What fascinates me even more is the emergent behavior observable in simple systems, such as growing crystals, diffusing liquids, convection currents... all of those delightfully complex results from simple principles. There's beauty in the result, and simplicity in the process.
• by Anonymous Coward
Although the paper might have spurred interest in rogue waves, the wave in the paper linked in the summary wouldn't really be considered a rogue wave. Usually a cut-off is arbitrarily picked at 2 times the significant wave height (the average of the highest third of waves). In this case, the wave was about 1.5 times the significant wave height. Statistically speaking, you would expect about 1 in a 100 waves to be 1.5 times the wave height, just from the mixing and constructive interference of waves, whil
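The Rayleigh numbers quoted above can be sketched directly. For linear, narrow-banded seas, the classic estimate is P(H > a * Hs) = exp(-2 a^2), with Hs the significant wave height (illustrative thresholds below):

```python
from math import exp

def exceedance(alpha):
    """Rayleigh estimate of P(wave height > alpha * significant wave height)."""
    return exp(-2 * alpha ** 2)

# roughly 1 wave in 90 exceeds 1.5 * Hs, matching the "about 1 in a 100" above
assert 80 < 1 / exceedance(1.5) < 100

# the conventional rogue-wave cutoff of 2 * Hs is about 1 in 3000; rare, but a
# ship meeting a wave every ~10 seconds encounters thousands of waves per day
assert 2500 < 1 / exceedance(2.0) < 3500
```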
• Big waves (Score:4, Interesting)
by MarkRose (820682) on Tuesday April 17, 2012 @01:06AM (#39707665) Homepage
Waves over 20 m (about 65 ft) tall are actually pretty common in some places. My dad is senior keeper at Triple Island Lightstation [fogwhistle.ca], located just off the BC coast. In severe winter storms, the waves will often crest over the square part of the building, which is about 20 m above sea level. This January, one such wave blew in a storm window on the top floor -- several tons of water will sometimes do that. The building stays up because it's constructed with 2 ft thick rebar concrete walls.
• Re:Big waves (Score:5, Informative)
by tirerim (1108567) on Tuesday April 17, 2012 @01:39AM (#39707811)
TFA is talking about waves in the open ocean, though. Waves get higher when they reach shallower water, so the 20 m waves you're talking about would have been significantly smaller in the open ocean -- which makes 29 m open ocean waves that much more impressive.
• Nice traditional exterior, but sad to see the drop ceiling [lighthousememories.ca] on the interior. At least the wood floor is original.
• Interesting link but some of the text is reminiscent of Julian and Sandy (http://en.wikipedia.org/wiki/Julian_and_Sandy) from "Round the Horne", I mean, "The Triple Island light was built to guide mariners through the rocky waters of Brown Passage, on their way to the port of Prince Rupert.", I ask ya!
	• It's interesting how often myth and legend end up being scientific fact. There has been talk since sailors took to the sea of rogue waves that reached 100' or more. Science has been confirming these myths in recent years. Most myths have an element of truth in them. On the practical side it's a serious concern since surviving a 100' rogue wave is not something all seaworthy ships can do, yet they can face them without warning. I read years ago the theoretical limit was twice what has been recorded so the
• The paper is from 2006, and describes a wave observed in 2000.
Satellite-based radar altimeters produce a lot of data about wave height world wide, but they don't, apparently, have quite enough resolution yet to see this kind of thing. A view of such waves from above, over a few minutes, would tell us a lot. Is it an intersection of two or more waves? How far does it travel? How long does it persist?
The U.S. Navy has put considerable effort into answering questions like that.
• bad statistics (Score:4, Interesting)
by Tom (822) on Tuesday April 17, 2012 @04:24AM (#39708283) Homepage Journal
What has fascinated me about freak/rogue waves is that sailors have known about them for decades if not centuries, but scientists were telling them it can't be.
And the reason is badly understood statistics. I've recently read Black Swan, and that gave me a few new concepts to work with, but the basic idea is exactly that: We don't really have a good understanding of statistics and probabilities, especially about extremely low probabilities in big numbers.
Or, as Tim Minchin put it: One-in-a-million things happen all the time.
And it's not just in the oceans. The entire financial crisis was caused by the people in charge taking huge (but low probability) risks, ignoring that once enough people have taken enough of those "low probability" risk, they become very likely to actually happen.
Freak waves are cool because they are in the gray area between the normal distribution and the really freaky - thus they happen often enough that they are rare, but not bigfoot-rare. We can actually study them.
• Re: (Score:3, Interesting)
by edxwelch (600979)
There's an interesting article about that, here: http://www.bbc.co.uk/science/horizon/2002/freakwave.shtml [bbc.co.uk]
Apparently, there are two scientific models: the linear one, which says freak waves are essentially impossible, and one based on equations from quantum physics, which says they are possible.
• by Tom (822)
The problem is that a Gaussian approach to the numbers assumes that random fluctuations will even out. But the equations used in quantum physics allow for waves to combine, and that's what is happening - interference, just not between 2 waves as in the double-slit experiment, but between dozens or maybe hundreds of waves.
This article here: http://dev.physicslab.org/Document.aspx?doctype=3&filename=PhysicalOptics_InterferenceDiffraction.xml [physicslab.org] shows towards the bottom how massive peaks you can get with mult
• by Anonymous Coward
Linear wave theory allows for interference and combining of waves (that is kind of actually one of the major properties of linear theories in a lot of situations). The statistics on linear theory waves (which ends up being a Rayleigh distribution, not a Gaussian) is what says that waves much larger than those around it are very unlikely. What nonlinear theories add is not just overlapping like interference, but soliton like solutions, where a single wave or small wave train much larger than neighboring wa
• by Tom (822)
Thanks, AC. In 12+ years of /. this was one of the most informative AC comments I've come across.
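The linear-interference picture in this sub-thread can be made concrete with a toy superposition (a sketch, not an ocean model): N unit-amplitude components with random phases give a typical RMS surface of sqrt(N/2), while a momentary phase alignment piles up to N, a factor of sqrt(2N) above typical; for N = 64 that is already an 11x "freak" peak.

```python
import math
import random

N, M = 64, 4096                      # 64 wave components, 4096 samples over one period
random.seed(0)
phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def surface(t, phs):
    """Toy sea surface: sum of N unit-amplitude cosines with integer frequencies."""
    return sum(math.cos(n * t + phs[n - 1]) for n in range(1, N + 1))

ts = [2 * math.pi * i / M for i in range(M)]
mean_square = sum(surface(t, phases) ** 2 for t in ts) / M

# over a full period the cross terms cancel, so the mean square is N/2
# no matter what the random phases are
assert abs(mean_square - N / 2) < 1e-6

# fully constructive interference (all phases aligned) piles up to amplitude N
peak = surface(0.0, [0.0] * N)
rms = math.sqrt(N / 2)
assert abs(peak - N) < 1e-9
assert abs(peak / rms - math.sqrt(2 * N)) < 1e-9   # ~11.3x the typical height
```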
• We have bigger waves in Texas!
• I've never understood that particular idiocy. Texans know they don't live in the biggest US state, right? Texas is less than half the size of Alaska.
• by dtmos (447842) * on Tuesday April 17, 2012 @07:09AM (#39708623)
My uncle retired as a US Navy Captain. For many years he had two photographs displayed in his house, which he ascribed to Admiral "Bull" Halsey's "second" typhoon [navy.mil], in June 1945. At that time my uncle was an ensign, assigned to a destroyer, and on his first sea voyage.
The two photographs were of a sister destroyer. In the first photograph, all one sees is a giant wave, with the bow of the destroyer sticking out of one side, and the stern sticking out of the other. The middle of the ship, including the masts and superstructure, is submerged and not visible.
In the second photo, taken a few seconds later, the middle of the ship is now visible, but both the bow and stern are now submerged in the wave train. And as a kid, the part that fascinated me the most: You could see an air gap below the middle of the ship, between the ship's keel and the wave trough below.
• I'm surprised I can't get for my boat (or raft) a platform with accelerometers that operates a hydraulic piston to compensate for wave action. It might need some lateral actuator too, as wave motion is circular. But it might not, if the light floats slide along the surface as the piston pushes down on them keeping the heavy inertial payload in place.
Just accelerometers, hydraulic pistons, and DSP. Big bonus points for a device that harvests that energy moving through the site to power the hydraulics.
Quantum gravity
Quantum gravity is the field of theoretical physics attempting to unify quantum mechanics, which describes three of the fundamental forces of nature (electromagnetism, the weak interaction, and the strong interaction), with general relativity, the theory of the fourth fundamental force: gravity. One hoped-for outcome of such a unification is a single framework for all fundamental forces, called a "theory of everything" (TOE).
Much of the difficulty in merging these theories at all energy scales comes from the different assumptions that these theories make about how the universe works. Quantum field theory depends on particle fields embedded in the flat space-time of special relativity. General relativity models gravity as a curvature within space-time that changes as a gravitational mass moves. Historically, the most obvious way of combining the two (such as treating gravity as simply another particle field) ran quickly into what is known as the renormalization problem. In the old-fashioned understanding of renormalization, gravity particles would attract each other, and adding together all of the interactions results in many infinite values which cannot easily be cancelled out mathematically to yield sensible, finite results. This is in contrast with quantum electrodynamics where, although the perturbation series still does not converge, the divergent interactions are few enough in number to be removable via renormalization.
Effective field theories
In recent decades, however, this antiquated understanding of renormalization has given way to the modern idea of effective field theory. All quantum field theories come with some high-energy cutoff, beyond which we do not expect the theory to provide a good description of nature. The "infinities" then become large but finite quantities proportional to this finite cutoff scale, and correspond to processes that involve very high energies near the fundamental cutoff. These quantities can then be absorbed into an infinite collection of coupling constants, and at energies well below the fundamental cutoff of the theory, only a finite number of these coupling constants need to be measured in order to make legitimate quantum-mechanical predictions to any desired precision. This same logic works just as well for the highly successful theory of low-energy pions as for quantum gravity. Indeed, the first quantum-mechanical corrections to graviton scattering and Newton's law of gravitation have been explicitly computed (although they are so astronomically small that we may never be able to measure them), and any more fundamental theory of nature would need to replicate these results in order to be taken seriously. In fact, gravity is in many ways a much better quantum field theory than the Standard Model, since it appears to be valid all the way up to its cutoff at the Planck scale. (By comparison, the Standard Model is expected to start to break down above its cutoff at the much smaller scale of around 1000 GeV.)
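The two cutoff scales mentioned here are easy to reproduce numerically (a sketch with hand-typed constants; the trailing digits are approximate). The Planck energy sqrt(hbar * c^5 / G) comes out near 1.22e19 GeV, some sixteen orders of magnitude above the ~1000 GeV quoted for the Standard Model:

```python
from math import sqrt

hbar = 1.054_571_817e-34    # J s
c = 2.997_924_58e8          # m / s
G = 6.674_30e-11            # m^3 / (kg s^2)
J_PER_GEV = 1.602_176_634e-10

# Planck energy: the scale at which the effective field theory of gravity fails
E_planck_gev = sqrt(hbar * c**5 / G) / J_PER_GEV
assert 1.1e19 < E_planck_gev < 1.3e19      # about 1.22e19 GeV

# roughly sixteen orders of magnitude above the Standard Model's ~1000 GeV cutoff
assert E_planck_gev / 1000 > 1e16
```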
While confirming that quantum mechanics and gravity are indeed consistent at reasonable energies (in fact, the complete structure of gravity can be shown to arise automatically from the quantum mechanics of spin-2 massless particles), this way of thinking makes clear that near or above the fundamental cutoff of our effective quantum theory of gravity (the cutoff is generally assumed to be of order the Planck scale), a new model of nature will be needed. That is, in the modern way of thinking, the problem of combining quantum mechanics and gravity becomes an issue only at very high energies, and may well require a totally new kind of model.
Quantum gravity theory for the highest energy scales
The general approach taken in deriving a theory of quantum gravity that is valid at even the highest energy scales is to assume that the underlying theory will be simple and elegant, and then to look at current theories for symmetries and hints for how to combine them into an overarching theory. One problem with this approach is that it is not known whether quantum gravity will actually be a simple and elegant theory, one that resolves the conundrums raised by special and general relativity (the equivalence of acceleration and gravity in the former case, and spacetime curvature in the latter).
Such a theory is required in order to understand those problems involving the combination of very large mass or energy and very small dimensions of space, such as the behavior of black holes, and the origin of the universe.
Quantum Mechanics and General Relativity
The graviton
At present, one of the deepest problems in theoretical physics is harmonizing the theory of general relativity, which describes gravitation, and applies to large-scale structures (stars, planets, galaxies), with quantum mechanics, which describes the other three fundamental forces acting on the atomic scale. This problem must be put in the proper context, however. In particular, contrary to the popular claim that quantum mechanics and general relativity are fundamentally incompatible, one can demonstrate that the structure of general relativity essentially follows inevitably from the quantum mechanics of interacting theoretical spin-2 massless particles (called gravitons).
While there is no concrete proof of the existence of gravitons, quantized theories of matter necessitate their existence. Supporting this idea is the observation that every other fundamental force is mediated by one or more messenger particles, leading researchers to believe that gravity most likely has one as well; they have dubbed this hypothetical particle the graviton. Many of the proposed unified theories of physics since the 1970s, including string theory, superstring theory, M-theory, and loop quantum gravity, assume, and to some degree depend upon, the existence of the graviton. Many researchers view the detection of the graviton as vital to validating their work. CERN plans to dedicate a large timeshare to searching for the graviton using the Large Hadron Collider.
Nonrenormalizability of gravity
Historically, many believed that general relativity was in fact fundamentally inconsistent with quantum mechanics. General relativity, like electromagnetism, is a classical field theory. One might expect that, as with electromagnetism, there should be a corresponding quantum field theory. However, gravity is perturbatively nonrenormalizable: quantizing it along the same lines produces, order by order in perturbation theory, new divergences that can only be absorbed by introducing infinitely many independent coupling constants, which would destroy the theory's predictive power at all energy scales.
As explained below, there is a way around this problem by treating QG as an effective field theory.
Any meaningful theory of quantum gravity that makes sense and is predictive at all energy scales must have some deep principle that reduces the infinitely many unknown parameters to a finite number that can then be measured.
• One possibility is that normal perturbation theory is not a reliable guide to the renormalizability of the theory, and that there really is a UV fixed point for gravity. Since this is a question of non-perturbative quantum field theory, it is difficult to find a reliable answer, but some people still pursue this option.
• Another possibility is that there are new symmetry principles that constrain the parameters and reduce them to a finite set. This is the route taken by string theory, where all of the excitations of the string essentially manifest themselves as new symmetries.
QG as an effective field theory
In an effective field theory, all but the first few of the infinite set of parameters in a nonrenormalizable theory are suppressed by huge energy scales and hence can be neglected when computing low-energy effects. Thus, at least in the low-energy regime, the model is indeed a predictive quantum field theory. (A closely analogous situation occurs in the effective field theory of low-energy pions.) Furthermore, many theorists agree that even the Standard Model should really be regarded as an effective field theory as well, with "nonrenormalizable" interactions suppressed by large energy scales and whose effects have consequently not been observed experimentally.
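The derivative expansion underlying this logic can be sketched schematically. One writes the most general diffeomorphism-invariant action for gravity as a series in powers of the curvature, with the higher terms suppressed by the Planck scale; conventions for the normalization of the constants below vary between references, so this should be read as a schematic rather than canonical form:

```latex
S_{\text{eff}} \;=\; \int d^4x\,\sqrt{-g}\,
\left[
  \Lambda
  \;+\; \frac{2}{\kappa^2}\,R
  \;+\; c_1\,R^2
  \;+\; c_2\,R_{\mu\nu}R^{\mu\nu}
  \;+\; \cdots
\right],
\qquad
\kappa^2 \sim \frac{1}{M_{\text{Pl}}^2}.
```

At energies E far below the Planck mass, the higher-curvature terms contribute only at relative order (E/M_Pl)^2, which is why only the first couplings (the cosmological constant and Newton's constant) need to be measured to make low-energy predictions.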
Recent work has shown that by treating general relativity as an effective field theory, one can actually make legitimate predictions for quantum gravity, at least for low-energy phenomena. An example is the well-known calculation of the tiny first-order quantum-mechanical correction to the classical Newtonian gravitational potential between two masses. Such predictions would need to be replicated by any candidate theory of high-energy quantum gravity.
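For concreteness, the corrected Newtonian potential referred to above is commonly quoted in the following form; the numerical coefficients shown are those of the frequently cited effective-field-theory computation of Bjerrum-Bohr, Donoghue, and Holstein, while earlier calculations obtained somewhat different numbers, so the precise coefficients should be treated with that caveat:

```latex
V(r) \;=\; -\,\frac{G\,m_1 m_2}{r}
\left[
  1
  \;+\; 3\,\frac{G\,(m_1+m_2)}{r\,c^2}
  \;+\; \frac{41}{10\pi}\,\frac{G\,\hbar}{r^2 c^3}
  \;+\; \cdots
\right].
```

The second term is a classical post-Newtonian correction, while the third term, proportional to \(\hbar\), is the genuine quantum correction; at any laboratory-accessible distance it is suppressed by roughly the square of the Planck length over the separation, hence unobservably small.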
Spacetime background dependence
A fundamental lesson of general relativity is that there is no fixed spacetime background, as found in Newtonian mechanics and special relativity; the spacetime geometry is dynamic. While easy to grasp in principle, this is the hardest idea to understand about general relativity, and its consequences are profound and not fully explored, even at the classical level. To a certain extent, general relativity can be seen to be a relational theory, in which the only physically relevant information is the relationship between different events in space-time.
On the other hand, quantum mechanics has depended since its inception on a fixed background (non-dynamical) structure. In the case of quantum mechanics, it is time that is given and not dynamic, just as in Newtonian classical mechanics. In relativistic quantum field theory, just as in classical field theory, Minkowski spacetime is the fixed background of the theory.
String theory
String theory started out as a generalization of quantum field theory where instead of point particles, string-like objects propagate in a fixed spacetime background. Although string theory had its origins in the study of quark confinement and not of quantum gravity, it was soon discovered that the string spectrum contains the graviton, and that "condensation" of certain vibration modes of strings is equivalent to a modification of the original background. In this sense, string perturbation theory exhibits exactly the features one would expect of a perturbation theory: it may show a strong dependence on asymptotics (as seen, for example, in the AdS/CFT correspondence), which is a weak form of background dependence.
Background independent theories
Loop quantum gravity is the fruit of an effort to formulate a background-independent quantum theory.
Topological quantum field theory provided an example of background-independent quantum theory, but with no local degrees of freedom, and only finitely many degrees of freedom globally. This is inadequate to describe gravity in 3+1 dimensions which has local degrees of freedom according to general relativity. In 2+1 dimensions, however, gravity is a topological field theory, and it has been successfully quantized in several different ways, including spin networks.
Fields vs particles
Quantum field theory on curved (non-Minkowskian) backgrounds, while not a quantum theory of gravity, has shown that some of the assumptions of quantum field theory cannot be carried over to curved spacetime, let alone to full-blown quantum gravity. In particular, the vacuum, when it exists, is shown to depend on the path of the observer through space-time (see Unruh effect).
Also, some argue that in curved spacetime, the field concept is seen to be fundamental over the particle concept (which arises as a convenient way to describe localized interactions). However, since it appears possible to regard curved spacetime as consisting of a condensate of gravitons, there is still some debate over which concept is truly the more fundamental.
Points of tension
There are two other points of tension between quantum mechanics and general relativity.
• First, classical general relativity breaks down at singularities, and quantum mechanics becomes inconsistent with general relativity in a neighborhood of singularities (however, no one is certain that classical general relativity applies near singularities in the first place).
• Second, it is not clear how to determine the gravitational field of a particle, since under the Heisenberg uncertainty principle of quantum mechanics its location and velocity cannot be known with certainty. The resolution of these points may come from a better understanding of general relativity.
Candidate theories
There are a number of proposed quantum gravity theories. Currently, there is still no complete and consistent quantum theory of gravity, and the candidate models still need to overcome major formal and conceptual problems. They also face the common problem that, as yet, there is no way to put quantum gravity predictions to experimental tests, although there is hope for this to change as future data from cosmological observations and particle physics experiments becomes available.
String theory
One suggestive starting point is ordinary quantum field theory, which, after all, is successful in describing the other three fundamental forces in the context of the Standard Model of elementary particle physics. However, while this leads to an acceptable effective (quantum) field theory of gravity at low energies, gravity turns out to be much more problematic at higher energies. Where, for ordinary field theories such as quantum electrodynamics, a technique known as renormalization is an integral part of deriving predictions which take into account higher-energy contributions, gravity turns out to be nonrenormalizable: at high energies, applying the recipes of ordinary quantum field theory yields models that are devoid of all predictive power.
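The power counting behind gravity's nonrenormalizability can be sketched by dimensional analysis. Newton's constant has negative mass dimension, so each additional loop in perturbation theory brings extra powers of the energy over the Planck mass; the scaling below is schematic and suppresses numerical and logarithmic factors:

```latex
G \;\sim\; \frac{1}{M_{\text{Pl}}^2}
\qquad\Longrightarrow\qquad
\mathcal{A}_{L\text{-loop}}(E) \;\sim\;
\left(\frac{E}{M_{\text{Pl}}}\right)^{2L} \mathcal{A}_{\text{tree}}(E).
```

Each order is therefore more divergent than the last, and absorbing the divergences requires new counterterms (\(R^2\), \(R_{\mu\nu}R^{\mu\nu}\), and so on) at every order, each with its own undetermined coefficient. This is precisely the "infinitely many unknown parameters" problem mentioned above, which the effective-field-theory viewpoint tames at low energies.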
One attempt to overcome these limitations is to replace ordinary quantum field theory, which is based on the classical concept of a point particle, with a quantum theory of one-dimensional extended objects: string theory. At the energies reached in current experiments, these strings are indistinguishable from point-like particles, but, crucially, different modes of oscillation of one and the same type of fundamental string appear as particles with different (electric and other) charges. In this way, string theory promises to be a unified description of all particles and interactions. The theory is successful in that one mode will always correspond to a graviton, the messenger particle of gravity; however, the price to pay is unusual features such as six extra dimensions of space in addition to the usual three. In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity.
Loop quantum gravity
Another approach to quantum gravity starts with the canonical quantization procedures of quantum theory. Starting with the initial-value formulation of general relativity (cf. the section on evolution equations, above), the result is an analogue of the Schrödinger equation: the Wheeler-DeWitt equation, which, regrettably, turns out to be ill-defined. A major breakthrough came with the introduction of what are now known as Ashtekar variables, which represent geometric gravity using mathematical analogues of electric and magnetic fields. The resulting candidate for a theory of quantum gravity is loop quantum gravity, in which space is represented by a network structure called a spin network, evolving over time in discrete steps.
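In schematic form, the canonical-quantization program leads to a Hamiltonian constraint imposed on a wave functional of spatial geometries. Up to factor-ordering ambiguities and convention-dependent prefactors (both genuinely unresolved issues), the Wheeler-DeWitt equation can be written as:

```latex
\hat{\mathcal{H}}\,\Psi[h_{ij}] \;=\;
\left(
  G_{ijkl}\,\frac{\delta^2}{\delta h_{ij}\,\delta h_{kl}}
  \;+\; \sqrt{h}\;{}^{(3)}\!R
\right)\Psi[h_{ij}] \;=\; 0,
```

where \(h_{ij}\) is the spatial metric, \({}^{(3)}R\) its scalar curvature, and \(G_{ijkl}\) the DeWitt supermetric. The ill-definedness mentioned above stems in part from the product of functional derivatives evaluated at the same spatial point.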
Other candidates
There are a number of other approaches to quantum gravity. The approaches differ depending on which features of general relativity and quantum theory are accepted unchanged, and which features are modified.
Weinberg-Witten theorem
There is a theorem in quantum field theory called the Weinberg-Witten theorem which places some constraints on theories of composite gravity/emergent gravity.
Nautilus' disillusioned ex-physicist
Bob Henderson wrote an autobiography for Nautil.Us (via CIP):
What Does Any of This Have To Do with Physics?
OK, let's start to review his memoir. In 1993, he went to Rochester's graduate school to study theoretical physics. He had read a lot about Einstein and Feynman; they were great guys. But Henderson also mentions The Tao of Physics and Zen and the Art of Motorcycle Maintenance. I know these two titles but haven't read the books.
In spite of that, I feel almost certain that these are not the books that the people whom I consider physicists or prospective physicists are attracted to. Books like that may use the words borrowed from physics but their whole way of thinking is largely unscientific. If you are a physicist who has a friend who believes in mysterious stuff peppered with physics vocabulary (or vice versa), I don't have to explain to you what's the difference between you and your friend, do I?
There may be some similarities – some shared excitement about mental, spiritual, or non-practical questions – but the differences between science and religion/superstition are perhaps greater than the similarities.
It is imaginable that people attracted to New Agey books could do good physics. But in general, I think that it's safe to say that an overwhelming majority of readers of similar books are simply not equipped to do physics. You know, the "opinion" that these superstitious and religious approaches aren't the most sensible way to approach the fundamental laws of physics isn't something that people like me were adopting when they joined a graduate school.
With a debatable 1-week high school exception, I have never had any inclination to look into these superstitious and religious books claiming to be books about physics. Those books reflect a naive, unscientific approach to the truth. They propose easy solutions. Just believe in something, we're all united, God penetrates all of us and is spread to all our bodies, whatever (I am vaguely reproducing some excited lessons I received from a New Age friend LOL), and you get close to the deepest truths about the Universe.
Sorry, you can't. With these mysterious vague superstitious proclamations, you haven't learned a damn thing. The learning of the physical truth about the Universe obviously does require some calculations, often long ones, or careful argumentation and hours of mental work in which the brain often burns while producing nothing useful for the path most of the time.
This is a sketch of the "path towards the deep laws of the Universe" that I already had in mind when I was 4 years old or so – and I think that other physicists who don't relate to Henderson's complaints would tell you something similar. Henderson is telling us that he was gradually discovering some of these things during his grad school years. One actually has to work hard at some moment, be materially modest, be confused much of the time, and try many paths that don't lead to interesting outcomes, while the greatest discovery in a century arrives relatively rarely (approximately once a century, if you want to know).
Those are shocking facts!
You should have known it before you entered the graduate school. Quite generally, I would guess that people who read about "tao" and "zen" are likely to face some problems as grad students of theoretical physics – as far as I can say, those problems may be exactly as severe as the problems of those whose background is all about "Jesus" or "Mohammed". Those are not helpful prerequisites for the discipline. And if the readers are told that those are good prerequisites for the research in theoretical physics, I think that these readers including Henderson have been deceived by the writers of the superstitious books and they may demand compensation.
Another detail is that Henderson went to University of Rochester, NY to study theoretical physics. It may be an OK school but it is in no way a university that is close to the top in the world's cutting-edge theoretical physics. Henderson's adviser Sarada G. Rajeev may be a local Rochester star in theoretical physics but that doesn't necessarily mean that he's a global star. Click at the hyperlink to see his papers. It's a decent list for a career at such a university but it's not quite the same list as if you look at e.g. Polchinski's record.
I am saying it because if Henderson wanted to search for a theory of everything, going to Rochester doesn't look like a straightforward, sensible path towards that goal. It's plausible that someone at Rochester – or someone with a degree from Rochester – would find a theory of everything. But if that's so, she will have to be repeatedly lucky. The starting point looks more troublesome when you combine all these strange details. If you want to professionally search for a theory of everything, read "tao" and "zen" and go to Rochester. Well, not really. ;-)
A minute ago, I mentioned tennis and the people's ability to understand that they're not as good tennis players as the world's best tennis players. In fact, I am absolutely convinced that the intellectual gap between the best theoretical physics groups in the world and those at Rochester (or worse) is far deeper than the difference between Djokovic and the average Portuguese players, to pick a random non-stellar tennis nation. It's questionable whether places at the level of Rochester (or Portugal) should claim to produce "researchers of a theory of everything" at all. A theory of everything could be too big a game for such places that simply don't belong to the elite. The very statement that they're doing something of the sort is deceptive for most of those non-elite places. These non-elite places should describe their work with a more humble language, otherwise they're deceiving prospective students and sponsors.
A big part of Henderson's story is about the modest material conditions that physics graduate students – and even postdocs etc. – sometimes experience. They're sometimes poor, sometimes they're not. But I do think that the folks who have no problem with such modest conditions – similar to those of monks – are more likely to be "natural theoretical physicists". Readers of "tao" and "zen" books may think that it's cool to search for the deep truths and be as materially undemanding as monks – when it comes to the housing, food, beverages, traveling, sex, whatever – but when the reality arrives, they may find out that they are not really this modest and the usual biological needs do play a big role for them.
Again, I can't relate to Henderson's story because I really don't have a problem with extremely modest material conditions but also solitude and other things. Some people earn bucks by writing about saints and then there are people who just shut up and they are saints. I must humbly admit that I am one of those ;-) while Henderson probably never was. Just to be sure, I am not saying that all theoretical physicists live like monks. You can earn lots of money (think of the Milner $3 million prizes), jobs like the Harvard Junior Fellowship bring some mandatory opulent life, and people who become career professors are materially insured for their life from most viewpoints.
But I want to spend more time with Henderson's disillusion about the research projects. He had to study lots of papers and he really didn't know how much he had to learn. When you're thrown into research, it is different from a university course. At school, the instructor may have outlined the path for you and you are just following the plan. Many students have probably done "almost the same sequence of steps" before you. Locally, in each lecture, you may deviate a bit, you may calculate various things by different methods, learn something differently than others, but the big picture of the path is clear.
There's nothing of the sort when you're an independent researcher. Tens of thousands of papers (and thousands of books) have been written about theoretical physics. You can't – well, you shouldn't – read all of them. You must pick a subset that is useful for your goals or, to say the least, that is useful for a goal that you may pick as your own even though your expectations could have been different.
You should have a rough plan to get through this hopeless chaos. One aspect of the plan is the realization that most of the papers that have been written are redundant noise (or they are wrong). You want to do something more interesting than what the authors of average papers did. The second aspect is that even among the valuable papers, there's a lot of redundancy so you don't need to read everything – it gets repeated – and you may and you should rediscover many of the important things yourself, anyway. And the third aspect is some degree of specialization. You must admit that you won't understand absolutely everything that was written by the other physicists, even if it is correct, and you must live with this fact. Non-scientists live with it happily. As a physicist, you should still understand a vastly greater percentage of the physics wisdom than the non-physicists.
Some self-confidence is therefore highly desirable, much like some humility. On one hand, you must know that you will rely on the work of others, stand on the shoulders of giants from the past and present, use some textbooks or reviews or standard courses, and use the skills and comparative advantages of your collaborators. On the other hand, you must feel that you basically don't need those things. You don't need to read tens of thousands of papers most of the time. You may rediscover everything you need or at least find the right place where you may learn a known thing when you need it. The dominant theme should be that you are refining your own picture of the laws of physics and all the other people from the past and present are just helping you. Most of the time, you are thinking for yourself and you believe that you're smarter than almost everyone else. If this ambitious belief of yours is rubbish, you should get eliminated. But some people may survive and they really are using their brains independently, their intellectual self-confidence is justified (even though they sometimes hide it), and those have really created and are creating the skeleton of physics.
For a grad student or any "junior" member of a collaboration, it's quite normal – and logically justifiable – to do some brute force work whose broader importance isn't understandable to him. Professors are sometimes abusing grad students as slaves or robots. And they love to repeat (true) jokes about it and to count the research work in kilo-graduate-student-hours. But this fact should be obvious to everyone who cares. It is not an exclusive feature of physics and it has an understandable justification, too.
You know, the professor who directs the "big picture" of the research project may be doing the seemingly "easier" part of the job – and he may work for much less than 15 hours a day, a figure that Henderson mentions – but he may still be doing the more important part, just like the boss of an innovative company. His skills to direct the "big picture" of the project are the most scarce resources. Imagine that you live 100 years ago and want to produce cars. To do so, you need some experience e.g. from Henry Ford's company. Well, you are probably going to do some more ordinary, boring work. That has an easy explanation: You're not (a) Henry Ford (yet). Of course you're not the one who is inventing the big strategy and giving the orders to lots of employees. You're not the damn Henry Ford. It is not even clear whether you're good at the things that Henry Ford is reasonably good at. So how could you be Henry Ford?
There's a simple recipe if you're dissatisfied with your place. If you want to do things like Ford and give orders to others, become a Henry Ford yourself, if you can. You must accumulate some capital – money, fame, and credibility, whatever you need – and then you will be able to employ your workers. Or your graduate students. These two examples – and many others – are obviously analogous.
Theoretical physics research may be among the occupations with the smallest role played by plans. One really has a lot of freedom in making his decisions – what he should read and study and calculate and focus on – and indeed, that's why one can get completely lost, too. The shape of the final product (theories of Nature) is almost completely unpredictable, too. But this freedom (which may lead to good or bad outcomes) and the unparalleled depth of the initially unknown wisdom is one of the features that makes theoretical physics so remarkable.
It's hard to give some recommendations that would help everyone escape the potential mess. No universal solutions like that exist. It's unavoidable that they don't exist and it's good that they don't exist. There are many decisions to make, so some people – and probably most people – will unavoidably get lost. What should you do if you don't want to get lost? Be smart, be hard-working, but don't be submissive, be stubborn, be successful, and don't be unsuccessful. These recipes are not too helpful, of course. Some people aren't that smart. They aren't independent enough. They get manipulated. And if they don't get manipulated, they really don't know what to do. Indeed, being an independent researcher – and especially a "principal investigator", if I put it in this way – means to be able to make many such decisions. So the whole idea of recommendations "what you should do" in such an occupation is an oxymoron. If someone else could tell you what to do, nothing would be left for your actual job. The decisions are your job. To ask "what to do" is basically equivalent to asking "do the job for me".
In the college but also in the later years, I was talking to lots of people who begged for recommendations like that. What should I do not to get lost? My answer was never so direct but yes, my current answer would be: If you need this leadership repeatedly, just quit it. If you don't know what you're doing, why you're doing it, and where you are going, and how you may roughly get there, then it's a bad idea to start or continue the journey. People who are picking an occupation should feel some "internal drive" and they should have at least a vague idea what they're doing, why, and how. Again, I don't think that this common sense only holds in theoretical physics. Theoretical physics only differs by the deeper caves in which one may get lost – because deeper caves are being discovered or built by theoretical physicists, too.
Another complaint by Henderson was that his adviser (who was 5 years older) "knew" what the result of their joint project was supposed to be and that's where they ultimately got, indeed. This finding was shocking and disappointing for Henderson, a junior collaborator. I don't understand why it's disappointing. It's common sense. Many projects work like that: One has a hope that there's a certain kind of an answer that can be found and sufficiently rigorously justified. The "senior", usually more experienced (and sometimes, indeed, "more talented") members of the collaborations have some hopefully correct vision about the "big picture" while the other members are expected to do much of the brute force calculations. How could it be otherwise? This story only says that some researchers should have some idea where they're roughly going. And then it's saying that some collaborators – well, the "senior ones" – have a better idea than others. Does one really need to torture himself for years in the graduate school to understand these common-sense tautologies?
In the previous paragraph, I've used some big words. But the actual project that Henderson discussed was his paper with Rajeev, Quantum gravity on a circle and the diffeomorphism invariance of the Schrödinger equation. Well, this paper from 1994 only has 3 citations at this moment. I know the rough content. The tiny number of citations after 22 years indicates that this was probably not a paper that finally found the theory of everything. Or anything else that was revolutionary. Well, it was a much weaker paper than the average paper in the field, too.
Some appraisals by Henderson are therefore correct. This paper couldn't have fulfilled Henderson's dreams about "tao" and "zen". Also, if you have this particular paper in mind, new light is shed on many other claims by Henderson. For example, he said that Rajeev's vision about the final result was finally confirmed, after difficult calculations. Was Rajeev a visionary? Well, a more accurate evaluation could be a bit different: Rajeev simply invented some kind of a paper, including the conclusions, and he employed his grad student Henderson to fill in some details so that the story looks at least somewhat convincing. This is the "Al Gore Rhythm" of writing papers that is used often if not predominantly in soft scientific disciplines such as climate science. The conclusion is decided in advance and all the seemingly complex, long, and technical language and formulae are only inserted to make the conclusion look more scientific! It's not real hard science, however. If you verify the argumentation really carefully, you usually find out that something important is wrong with the paper even though "local regions" of the paper may look kosher.
But the paper still doesn't look too convincing. You know, there are better physicists than Rajeev and most of them would probably agree that the paper hasn't found any important principle or mechanism in quantum gravity at all. 22 years after the paper appeared, most top theoretical physicists would almost certainly disagree with the conclusions by Rajeev and Henderson, e.g. that there's a canonical link between distance and the phase of a wave function in quantum gravity. It is one of the papers that try to study quantum gravity as if it were a local field theory. But quantum gravity isn't quite a local field theory. In spacetime dimensions lower than four, theories of quantum gravity may look almost indistinguishable from local field theories (and there exists e.g. a formal proof of the equivalence of 3D quantum gravity and 3D Chern-Simons theory) but I think it's right to say that even in the low dimensions, this similarity is deceitful and overlooks some delicate details that become very important in higher dimensions. At any rate, what they found couldn't have been meaningfully applied in the theories of quantum gravity that are really interesting and that we care about, in \(d\geq 4\).
It means that Henderson was a junior member of this collaboration, a status that understandably involves a shortage of independence. But 22 years after the paper was written, we may see that the shortage of independence was more severe than previously thought. Henderson still failed to understand that their "solution" to the problem of the "quantum mechanics on a circle" wasn't necessarily "the" right solution or "the" right approach to this kind of a problem – according to the truly best physicists in the world. While Henderson understands that "quantum gravity on a circle" is a special toy model that isn't likely to teach us much about the big problems of quantum gravity, he still doesn't see that even this toy model was probably solved in a way that is conceptually uninteresting if not strictly wrong. Henderson misunderstands his own paper to the extent of not being able to imagine that something could be problematic about it.
You know, only a small portion of physics PhDs get really close to the world's elite. But I think that after some years, even the other ones should be able to understand and see the difference between the top physicists and those who are not top physicists at all, at least in a fuzzy way. If they can't even see why top physicists are generally more influential than the mediocre ones, it shows that they really don't have the talent for the discipline.
We also learn that Henderson began to hate Rajeev because the latter didn't care about the suffering of the former and dashed his dreams. For a year, Henderson tried to work in isolation. It didn't work too well. He returned, Rajeev accepted him, but soon afterwards, Henderson was hurt when Rajeev asked "Do I have to explain the fiber bundles again?" Come on, is it so terrible to hear this question? Fiber bundles are a hard enough concept – used by people who really want to think like trained mathematicians – but if they're important enough for some project and if Rajeev spends some time explaining them to someone else, it may be frustrating for Rajeev to see that he has wasted his time on the pedagogic efforts. So why couldn't Rajeev ask "Do I have to explain the fiber bundles again?" Is it a question that one may really get offended by? Have you tried to think about the interaction from Rajeev's perspective, Mr Henderson? Again, I think that this situation is not specifically tied to theoretical physics. If a coach teaches something to a tennis player and it's completely ignored a day later, the coach may also get reasonably upset and make an irritated remark, can't he?
A theme underlying the story is the tough job market. The number of faculty (and postdoc) jobs is too small relative to the number of theoretical physics graduate students. I think it's true, the tension has gotten even more extreme in recent years, and the suffering that many young brilliant theoretical physicists I have known had to repeatedly go through was almost heartbreaking. On the other hand, I am pretty sure that the number of faculty jobs shouldn't grow enough to turn e.g. Mr Henderson into a theoretical physics professor. I think that his – nicely written – story makes it clear that he pretty much never had a clue about theoretical physics and he still doesn't have a clue. He isn't thinking as a physicist.
And it's not just about the Virasoro algebra and the Yamabe problem, phrases that Henderson used in his and Rajeev's 1994 paper but, as he told us, "couldn't define them for us today". He was clearly misunderstanding, and he is still misunderstanding, some much more general issues about theoretical physics and what it really means to do research in it (and maybe in science in general). Years after he joined that field, he may still be shocked when he discovers that physicists sometimes have to make independent decisions, and similar spectacularly profound wisdoms. ;-)
Again, his prose is impressive – and includes all the linguistically colorful, redundant, and emotional inserted details that make some writers famous and that guarantee that I have never been a reader of novels LOL :-) – but his opinions about physical concepts that are described in his prose are typical opinions held by the laymen, especially when it comes to the frustration caused by some features of physical theories that physicists actually love. A paragraph complains that there are at least three "pictures" to define the time evolution in quantum mechanical theories – the Heisenberg picture, the Feynman approach, and the Schrödinger picture. Henderson was apparently disappointed – and is still disgusted – by the huge number of the pictures (three) – it's not shocking that many crackpots display irrational, anxious reactions to theories with \(10^{500}\) solutions because many people find "three" to be a terrifyingly high integer, too – and he was and he still is repelled by the idea that the deeper theories of particle physics could suffer from the same "problem". He says that the Holy Grail could be a hall of mirrors. It's a great literary metaphor but what's not great is that the hall of mirrors clearly scares him.
Please, give me a break. The transition from the Heisenberg picture to the Schrödinger picture is a simple time-dependent unitary change of basis on the Hilbert space. It's obvious that in every theory that has some time-dependent quantities (and every theory that we use deals with those), one may redefine them by field redefinitions and, when they carry Hilbert space vector indices, those include the unitary transformations of the Hilbert space. Of course this freedom will always exist as long as physics is based on some quantities (undoubtedly) or on Hilbert spaces (almost certainly as well). Why would one be disappointed by the existence of the two pictures? How could someone possibly think about doing research on quantum gravity if he's frightened by the existence of the Schrödinger and Heisenberg pictures?
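For concreteness, the equivalence of the two pictures can be verified numerically in a toy two-level system (a sketch of mine, not part of the original argument; the Hamiltonian, observable, and state below are arbitrary choices):

```python
# A toy check: expectation values agree in the Schrodinger and Heisenberg
# pictures for an arbitrary Hermitian Hamiltonian (hbar = 1).
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.5], [0.5, -1.0]])       # some Hermitian Hamiltonian
A = np.array([[0.0, 1.0], [1.0, 0.0]])        # observable (sigma_x)
psi0 = np.array([1.0, 0.0], dtype=complex)    # initial state
t = 0.7

U = expm(-1j * H * t)                         # time-evolution operator

# Schrodinger picture: the state evolves, the operator stays fixed.
psi_t = U @ psi0
ev_s = np.vdot(psi_t, A @ psi_t).real

# Heisenberg picture: the operator evolves, the state stays fixed.
A_t = U.conj().T @ A @ U
ev_h = np.vdot(psi0, A_t @ psi0).real

assert abs(ev_s - ev_h) < 1e-12
```

The two numbers agree to machine precision, which is exactly the content of the unitary change-of-basis argument.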
In a similar way, one may show the equivalence of these two pictures with the Feynman path integral approach whenever some quantities similar to those in classical physics – like \(x(t), \phi(x,y,z,t)\) – exist in the theory. The proof of the equivalence of the path integral to the operator approaches indeed works (before Feynman, it was already sketched by Dirac) and is rather universally applicable. It's enough to learn it once and you're done. It's a cute piece of the puzzle that has been mastered and that a theoretical physicist happily learns and teaches. Yes, it's one of the mirrors in the hall surrounding the room with the Holy Grail. Why would one be disappointed by those? It makes absolutely no sense.
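That equivalence can likewise be illustrated numerically (a sketch under assumed conventions that are not in the text: a harmonic oscillator on a periodic grid, hbar = m = 1). The Trotterized product of kinetic and potential factors, which is precisely the time-sliced object behind the path integral, reproduces the exact propagator:

```python
# The Trotter product behind the path integral: for H = T + V, the
# split-operator product (e^{-iV dt/2} e^{-iT dt} e^{-iV dt/2})^N converges
# to e^{-iHT}; we compare it against the exact grid propagator.
import numpy as np
from scipy.linalg import expm

n, L, T_total, N = 256, 20.0, 1.0, 400
dx = L / n
x = (np.arange(n) - n // 2) * dx
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
V = 0.5 * x**2                                  # harmonic potential
dt = T_total / N

psi = np.exp(-(x - 1.0)**2).astype(complex)     # initial wave packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Split-operator (Trotter) evolution: alternate potential and kinetic factors.
half_v = np.exp(-0.5j * V * dt)
kin = np.exp(-0.5j * k**2 * dt)
phi = psi.copy()
for _ in range(N):
    phi = half_v * np.fft.ifft(kin * np.fft.fft(half_v * phi))

# Reference: exact evolution with the full discretized Hamiltonian matrix.
F = np.fft.fft(np.eye(n), axis=0)               # DFT matrix
H = np.fft.ifft(np.diag(0.5 * k**2) @ F, axis=0) + np.diag(V)
phi_exact = expm(-1j * H * T_total) @ psi

assert np.max(np.abs(phi - phi_exact)) < 2e-3   # Trotter error is O(dt^2)
```

Halving dt should shrink the discrepancy by roughly a factor of four, as expected for Strang splitting.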
In fact, these mirrors – rhetorically different but physically equivalent descriptions – became even more widespread, important, and omnipresent in theoretical physics of recent decades when the string and field theory dualities were uncovered. And they're absolutely wonderful, not disappointing. It's surprising that a guy who claims to have been shaped by books about Feynman would think that this multitude of descriptions is disappointing. Feynman always emphasized his hobby of looking at problems from many different perspectives. It's so great. Even Apple had the slogan "think different" years before it turned its consumers into a brain-dead mass of sheep using the same boring, uninnovative smartphones and suffering from the maximum imaginable groupthink (not only when it comes to phones but even politics and other things). New perspectives – including new equivalent pictures in quantum mechanics and new descriptions of string or field theory related by dualities – enrich our mind and give us new abilities to solve certain problems or see previously overlooked analogies and isomorphisms. A mirror is an object that a kid physicist likes and is intrigued by. There's just nothing wrong with the idea that a mature physicist who makes important steps towards an important theory has to master a hall of mirrors. Isn't it exactly the kind of activity that he was trained for as a kid and that he liked? Well, one may see that "tao" and "zen" books are encouraging the readers to do very different and less physical things than to investigate a network of mirrors and how it works.
If Mr Henderson doesn't like the physicist's ability to look at the phenomena through many perspectives or pictures, his thinking is clearly nothing like Feynman's. So maybe Mr Henderson was excited to hear that Feynman was picking locks but he must have understood that picking locks is not the most characteristic kind of work done by theoretical physicists, right? Looking at things with new eyes is what theoretical physicists often need to do – they must be good at it and they're happy and proud about it. If those things (looking at the Universe with new eyes) make you frustrated instead, theoretical physics just clearly isn't the occupation for you.
The final theory may indeed be a "hall of mirrors" in some literary metaphor but if it is so, it's great. A big part of the physicists' task will be to understand how the mirrors work, where they are located, and learn how to use their seemingly complex reflections to learn about phenomena of Nature, including the phenomena that previously looked "trivial" but they were hiding a complex game with mirrors. Again, this is a development that makes a true theoretical physicist happy. A theoretical physicist just wants to see under the surface. He wants to ask "why" even when the practically oriented laymen are "satisfied" and don't ask a damn thing. Many things look simple but this impression is misleading and something rather elaborate may be hiding behind the surface. Theoretical physicists naturally have the desire to remove the surface layer of illusions and see what's inside – and if the interior includes a hall of mirrors, then it's very interesting to know and understand in detail.
I could discuss other aspects of his opinions about physics. One implicit assumption at Rochester – and other schools that don't belong to the global elite – is that you may search for a theory of everything or a theory of quantum gravity while ignoring string theory. This is of course a lie, a lie that certain people maliciously try to spread, and if you combine this ignoring of string theory with the hatred towards the pictures of quantum mechanics, dualities, fiber bundles, and other things, your chances to contribute to the search for a theory of everything really drop close to zero.
At the end, even though this guy is a good writer and I would prefer if people were never emotionally frustrated or disappointed, it's hard for me to feel much sympathy for him. He may have been deceived by pop-science books which made him believe that theoretical physics is something entirely different than what it is. But he continued to lie to himself and to others and he's still searching for problems at the wrong places. Sorry, Mr Henderson, but the end of your love affair with theoretical physics wasn't the fault of theoretical physics.
Analytic Properties of Feynman Diagrams in Quantum Field Theory
Format: Paperback
Language: English
Format: PDF / Kindle / ePub
Size: 7.91 MB
Downloadable formats: PDF
A fibre with a sharp change between the refractive index of the core fibre and the refractive index of the cladding is called a step index fibre. At that point though, it still wasn't proven, although the fact that everything is made up of ENERGY, was. It doesn't leave a "pattern" of any kind, just one little blotch. The quantum potential formulation of the de Broglie-Bohm theory is still fairly widely used.
Pages: 168
Publisher: Pergamon Pr; 1st edition (June 1986)
ISBN: 0080165443
Using the energy constant for light, it is now possible to complete de Broglie's calculations and determine the rest mass of a single quantum of light. He presumed that the light wasn't really a continuous wave as everyone assumed, but perhaps could exist with only specific amounts, or "quanta," of energy. Planck didn't really believe this was true about light; in fact he later referred to this math gimmick as "an act of desperation." The wave particle duality exists to ensure that life would continue. If it didn't exist, black holes would eat up stars and eventually become 100% dark energy in the universe. The wave particle duality overcomes time and space as a conscious cosmic function of pro-reality and life. Instead, it is an oscillation of 0 and 1. Reality and non-reality are in a dance, moving with a probability of existence. It is one of the strange, but fundamental, concepts in modern physics that light has both a wave and particle state (but not at the same time), called wave-particle dualism. Perhaps the foremost scientist of the 20th century was Niels Bohr, the first to apply Planck's quantum idea to problems in atomic physics. In the early 1900's, Bohr proposed a quantum mechanical description of the atom to replace the early model of Rutherford. Is SAP true, in which case why prefer physical explanations to it, or is it false, in which case why ever apply it? It is precisely MW's unfalsifiability that bothers some leading physicists such as Alan Guth (the inflationary universe theory), George Smoot (who led the COBE effort: experimental verification of the inflationary universe) and Brian Greene (superstring theorist).
Capillary action: rise of liquid in a narrow tube due to surface tension. Carnot efficiency: ideal efficiency of a heat engine or refrigerator working between two constant temperatures. This is coincidentally equal to the speed of light in a vacuum, c = 3 × 10^8 m s^-1. Furthermore, a measurement of the speed of a particular light beam yields the same answer regardless of the speed of the light source or the speed at which the measuring instrument is moving. The reader may be aware from quantum theory of Schrödinger's equation, which describes the probability of a particle (or system of particles) being in each possible state that it could be in. In Penrose's example, the equation would be simple in that it would include just two probabilities: one giving the chance of finding the electron spinning 'up', and the other giving the probability of the electron spinning 'down'.
Almost All About Waves
Principles and Applications of Wavelet Transform
Time-Harmonic Electromagnetic Fields (McGraw-Hill Texts in Electrical Engineering)
It is now time to define this concept more precisely. A quantum mechanical wave function is said to be invariant under some transformation if the transformed wave function is observationally indistinguishable from the original. If the universe is infinite, and there is infinitely more matter beyond the visible universe, that gravity would be balanced, and there would be no reason to postulate a cosmological constant at all. If that is true, then cosmological redshift may have something to do with the dissipation of energy as light waves move through the aether. It has always been known that making observations affects a phenomenon, but the point is that the effect cannot be disregarded, minimized, or decreased arbitrarily by rearranging the apparatus. When we look for a certain phenomenon we cannot help but disturb it in a certain minimum way, and the disturbance is necessary for the consistency of the viewpoint. Whatever you think about and believe to be true, regardless of whether those beliefs are based on "real truth" or "perceived truth", is what determines how your life will unfold. Quantum physics has shown us that there exists no such thing as "untruth", only physical experiences in each area of our life which are formed based on our individual "perceptions" of truth. The two sound very much alike, but they are different. This resonance work energy was in addition to the thermal energy already inherent in the system as a result of its temperature. The amount of resonance work energy at the microscale is the resonance work variable, "rA".
In the solvent system example, individual elements in the system irradiated with resonant EM waves possessed greater energy than the elements in the thermal system.
Probabilistic Methods in Quantum Field Theory and Quantum Gravity (NATO Science Series B: Physics)
Theory of Solitons in Inhomogeneous Media
Physics of Waves (Fundamentals of Physics)
Mechanics and Wave Motion
Radiation and Quantum Physics (Oxford physics series, 3)
Integrable Quantum Field Theories (Nato Science Series B:)
PCT, Spin & Statistics, and All That
Approximations and Numerical Methods for the Solution of Maxwell's Equations (The Institute of Mathematics and its Applications Conference Series, New Series)
Beyond Conventional Quantization
Strings, Branes and Gravity
Distributed Feedback Laser Diodes: Principles and Physical Modelling
Quantum Field Theory and String Theory (Nato Science Series B:)
Physics of Solitons
Advances in Topological Quantum Field Theory: Proceedings of the NATO Adavanced Research Workshop on New Techniques in Topological Quantum Field ... 22 - 26 August 2001 (Nato Science Series II:)
Solitary Waves in Dispersive Complex Media: 149 (Springer Series in Solid-State Sciences)
Introduction To Nearshore Hydrodynamics (Advanced Series on Ocean Engineering (Paperback))
Influencing variables (temperature, surface electrodes, distance between them, nature and concentration of the solution). Ionic conductivity of a solution, σ. Linear chain, branched or cyclic, saturated and unsaturated. From the Schrödinger equation can be derived the fact that the average position varies according to the average momentum. This coincides with the setting of classical mechanics! Even though I can prove it mathematically, I have no understanding of the fundamental reason why the Schrödinger equation links average position and average momentum. And both would be solutions by superposition. So that's the end of the theorem, because then these things are even or odd and have the same energy. So the solutions can be chosen to be even or odd under x. So if you've proven this, you've got it already. For bound states in one dimension, the solutions are even or odd – not anymore the word "chosen". IIT JEE 1980 - 2009. Transverse wave – here, the elements of the disturbed medium of the travelling wave move perpendicular to the direction of the wave's propagation. A particle at the crest / trough has zero velocity. The distance between two consecutive crests / troughs is equal to the wavelength of the wave. Therefore, the distance between a consecutive crest and trough is half of the wave's wavelength. Frequency refers to the addition of time. Wave motion transfers energy from one point to another, displacing particles of the transmission medium with little or no associated mass transport. Waves consist, instead, of oscillations or vibrations (of a physical quantity) around almost fixed locations.
Mechanical waves propagate through a medium, and the substance of this medium is deformed. This is the Pauli exclusion principle. All particles with half-integer spin, including electrons, behave this way and are called fermions. For particles with integer spin, including photons, the wave function does not change sign. Such particles are called bosons. Electrons in an atom arrange themselves in shells because they are fermions, but light from a laser emerges in a single superintense beam, essentially a single quantum state, because light is composed of bosons. Interpretations of quantum mechanics address questions such as what the relation is between the wave function, the underlying reality, and the results of experimental measurements. An important aspect is the relationship between the Schrödinger equation and wavefunction collapse. I think the photos we used provided a good mix of the reality at Malibu when there's a good swell. Quantum theory permits the quantitative understanding of molecules, of solids and liquids, and of conductors and semiconductors. It explains bizarre phenomena such as superconductivity and superfluidity, and exotic forms of matter such as the stuff of neutron stars and Bose-Einstein condensates, in which all the atoms in a gas behave like a single superatom. Augustine's classical philosophical argument that 'the effect of the universe's existence requires a suitable cause' is unambiguously applicable here.
1011038d5767a190 | Login Register
Thread Rating:
• 0 Vote(s) - 0 Average
• 1
• 2
• 3
• 4
• 5
Lumo's attack on dBB continues
With the post "Bohmians' self-confidence evaporates as soon as they're expected to calculate anything", Lumo continues his attack against dBB theory. It contains, as usual, a lot of name-calling, now even personally against me (I'm named a "crank"), and a link to this forum posted in the comments was rewritten so that one cannot directly click it. Whatever, this is what has to be expected.
Quote:He obviously meant that it was done in proper quantum field theory governed by the standard, "Copenhagen" postulates of quantum mechanics (at most reformulated with a different "accent" but not a different "content"). And because he must believe that Bohmian mechanics has "conquered" the standard quantum mechanics and may claim credit for all of the successes of quantum mechanics (while taking no responsibility for the alleged drawbacks), he just doesn't need to write anything, he believes.
It's not that I believe that one does not need to write anything; what I believe is that one has to prove an equivalence theorem. Only if such an equivalence theorem is proven can one use the computations made in one theory in the other theory too.
I agree: if one thought that dBB theory made different experimental predictions somewhere than standard quantum theory, then one would have to make all the computations, to find out where the predictions are indistinguishable, where one can distinguish them, and which theory is true. But this is not the case. Where it is possible to construct a dBB theory, it is, in its experimental predictions, equivalent to standard quantum theory.
What are, in this case, the advantages of dBB theory? They are conceptual. The Copenhagen interpretation subdivides the world into a classical and a quantum part. This is unproblematic from a pragmatic point of view, but not really nice. I prefer a theory without such an artificial subdivision, and that is dBB.
Quote:These Bohmian people often make claims such as "all the physics of QFT works just fine in Bohmian theory". References to incoherent preprints that make similar claims are the "evidence" you may get. All these preprints contain some "extra" (and very ugly) mathematical constructions that are absolutely different from the standard QM/QFT and it's obvious that they can't be producing the same predictions in general. But if someone claims that the Bohmian theory makes sense, shouldn't he be able to write the "proper modern [Bohmian]" textbook replacing the existing textbooks of quantum field theory? At least a few chapters, up to a calculation of some annihilation processes of QED.
The last time I have seen a really serious objection - serious enough to question the equivalence of dBB field theory and its viability - I have made some computations and published them, here is the reference:
I. Schmelzer, Overlaps in pilot wave field theories, Foundations of Physics vol. 40 nr. 3, 289-300 (2010), arXiv:0904.0764 [quant-ph].
The point of mentioning some extra constructions, ugly or not, is beyond me. Of course, once there is an equivalence theorem, there is no need to repeat standard QFT computations. And as for a textbook about QFT: fine, a good idea, except that there is no need to rewrite a lot of the QFT texts themselves. In such a book I would start with the basic definitions based on the dBB interpretation, so that the initial part would indeed differ. But then I would focus on the development of the standard mathematical apparatus of QT resp. QFT. There would be some shift toward lattice regularizations, away from, say, dimensional regularization, because for lattice regularizations everything is nice: we have a well-defined dBB interpretation as well as a well-defined quantum lattice theory without infinities.
Quote:By its definition, the Bohmian mechanics must have a result for the measurement of the "photon position" that is ready before the measurement. Except that there can't be any equations – at least not local or otherwise natural equations – that could govern the motion of such "real Bohmian photons".
The point being? The dBB picture I prefer does not use photon positions as beables but, instead, the EM field. There are problems introducing photon trajectories into a dBB picture? Fine, so don't do it.
Has anybody cared if one can define phonon trajectories in quantum condensed matter theory? Would this be an argument against a dBB variant for condensed matter theory based on trajectories for the atoms?
Quote:Well, it's simple. In quantum mechanics, the energy conservation follows from the collapse of the wave function and by the very definition of the Bohmian mechanics, Bohmian mechanics avoids the collapse at the moment of the measurement. In any Bohmian picture, the measured values must be already prepared a femtosecond before the measurement. ...
The Bohmian theory never collapses the atom's wave function to an energy eigenstate. In fact, in the Bohmian theories, the "real particle" doesn't influence the pilot wave at all!
No, dBB theory does not avoid the collapse; it describes the collapse, giving the evolution equation for the effective wave function of the subsystem in terms of the global wave function (which contains the macroscopic measurement device too) and the (macroscopically observable) trajectory of the measurement device, by the formula:
\[ \psi^{eff}(q_{sys}, t) = \psi^{full}(q_{dev}(t),q_{sys}, t).\]
As one can see from the formula, it is not the trajectory of the "real particle", which would be \(q_{sys}(t)\), that defines the collapse; it is the trajectory of the macroscopic device, \(q_{dev}(t)\), that defines the result of the collapse. But don't forget that \(q_{dev}(t)\) is defined by the guidance equation and, via the guidance equation, influenced by \(q_{sys}(t)\). And, indeed, the global wave function \(\psi^{full}(q_{dev},q_{sys}, t)\) is influenced neither by \(q_{dev}(t)\) nor by \(q_{sys}(t)\). But the effective wave function \(\psi^{eff}(q_{sys}, t)\) already depends on \(q_{dev}(t)\) and is thus influenced by the "real particle" \(q_{sys}(t)\) too.
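A minimal numerical sketch of this formula (the entangled full wave function below is an arbitrary illustrative choice, not from the text): the effective wave function of the subsystem is simply the full wave function evaluated at the actual device configuration.

```python
# Conditioning a two-variable wave function on the actual device configuration.
import numpy as np

q_dev = np.linspace(-5, 5, 201)
q_sys = np.linspace(-5, 5, 201)
dq = q_sys[1] - q_sys[0]
QD, QS = np.meshgrid(q_dev, q_sys, indexing='ij')

# Entangled full wave function psi_full(q_dev, q_sys): the device tracks the system.
psi_full = np.exp(-(QD - QS)**2) * np.exp(-0.1 * QS**2)

# Condition on the actual device trajectory, here supposing q_dev(t) = 1.3:
i_dev = np.argmin(np.abs(q_dev - 1.3))
psi_eff = psi_full[i_dev, :]
psi_eff = psi_eff / np.sqrt(np.sum(np.abs(psi_eff)**2) * dq)

# The conditioned state is peaked near q_sys = q_dev(t), as the entanglement dictates.
assert abs(q_sys[np.argmax(np.abs(psi_eff))] - 1.3) < 0.2
```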
Quote:This is a rather brutal feature of the Bohmian theory: the theory is very loud about the influence of the pilot wave on the particle but it basically assumes that the particle doesn't affect the pilot wave at all. You should always be suspicious about theories with similar "asymmetric" influences. They sound like a theory about God who can influence everyone else but can't be influenced. In proper physics at the fundamental level, all interactions go in both ways.
This is, indeed, a strange feature of dBB theory, one which is worth considering and discussing. But it is, in fact, only relevant for a hypothetical, theoretical entity: the wave function of the whole universe.
The effective wave function of a subsystem is, as we have seen, influenced by the trajectory, whenever the subsystem interacts with its environment.
Quote:The absence of the collapse in Bohmian mechanics means that the atom can simply never collapse to an energy eigenstate, even though the photon that has known about the atom's energy has gone through the prism and was detected. The wave functions of the Bohmian mechanics never really collapse.
But the effective wave function of the atom collapses. It is only the wave function of atom + EM field, later of atom + EM field + prism + detector, and finally the wave function of the whole universe, which does not collapse.
Quote:And the positions of the electron and proton aren't affected by the detection of the photon – even though they should really be correlated with the photon's energy.
That's wrong. Once there is a nontrivial interaction between the atom and the EM field, the trajectory of the configuration of the EM field is influenced by the trajectory of the atom. How? By the guiding equation: to define the velocity \(\dot{q}_{atom}(t)\) by the guiding equation, we need the full wave function \(\psi^{full}(q_{EM},q_{atom}, t)\) as well as the actual configurations \(q_{EM}(t),q_{atom}(t)\) of all relevant parts at that moment of time.
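To make the role of the guiding equation explicit, here is a small sketch (an assumed 1D discretization with hbar = m = 1, not from the text): the Bohmian velocity field \(v(q) = \operatorname{Im}(\partial_q \psi/\psi)\), evaluated at the actual configuration, gives the expected group velocity for a Gaussian packet carrying momentum \(k_0\).

```python
# The guidance equation v(q) = Im(psi'(q)/psi(q)) for a momentum-carrying packet.
import numpy as np

q = np.linspace(-10, 10, 2001)
dq = q[1] - q[0]
k0 = 2.0
psi = np.exp(-q**2) * np.exp(1j * k0 * q)     # Gaussian packet with momentum k0

dpsi = np.gradient(psi, dq)                   # finite-difference derivative
v = np.imag(dpsi / psi)                       # Bohmian velocity field

i0 = np.argmin(np.abs(q))                     # actual configuration q_sys(t) = 0
assert abs(v[i0] - k0) < 1e-3                 # velocity at the center is ~k0
```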
Of course, the influence matters only as long as there is an interaction between the system and the measurement device. If there is no such interaction, and \(\psi^{full}(q_{dev},q_{sys}, t)=\psi^{dev}(q_{dev},t) \psi^{sys}(q_{sys},t)\), there will be no such influence between the two trajectories. There is also some influence if the two systems are in a superposition state \[\psi^{full}(q_{dev},q_{sys}, t)= \psi_1^{dev}(q_{dev},t) \psi_1^{sys}(q_{sys},t)+\psi_2^{dev}(q_{dev},t) \psi_2^{sys}(q_{sys},t).\]
If, say, the wave functions \(\psi_{1/2}^{dev}\) do not overlap, and \(q_{dev}(t)\) is inside the support of \(\psi_{1}^{dev}\), then the trajectory \(q_{sys}(t)\) will be the same as if it were guided by \(\psi_1^{sys}\); otherwise, as if guided by \(\psi_2^{sys}\). So, there is an influence.
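This branch selection can be illustrated with a small sketch (the wave functions below are arbitrary Gaussians chosen for illustration): for non-overlapping device packets, conditioning on a device configuration inside the support of \(\psi_1^{dev}\) yields an effective wave function proportional to \(\psi_1^{sys}\), the other branch being exponentially suppressed.

```python
# Branch selection by the actual device configuration.
import numpy as np

q = np.linspace(-10, 10, 401)

psi1_dev = np.exp(-(q + 4)**2)                # device pointer for outcome 1
psi2_dev = np.exp(-(q - 4)**2)                # device pointer for outcome 2
psi1_sys = np.exp(-q**2)                      # system state in branch 1
psi2_sys = q * np.exp(-(q - 1)**2)            # system state in branch 2

# Full wave function: superposition of the two decohered branches.
psi_full = np.outer(psi1_dev, psi1_sys) + np.outer(psi2_dev, psi2_sys)

# Actual device configuration q_dev(t) = -4, inside the support of psi1_dev:
i_dev = np.argmin(np.abs(q + 4.0))
psi_eff = psi_full[i_dev, :]

# psi_eff is (numerically) proportional to psi1_sys: the ratio is constant.
ratio = psi_eff / psi1_sys
assert np.allclose(ratio, ratio[0], atol=1e-10)
```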
All this has nothing to do with any "ad hoc fixes"; it is simply the application of the standard equations of dBB theory.
PS: I see that secur has been banned there too:
Quote:BTW I saw your explanations on Schmelzer's crackpot website how you play with your nicknames and why you came to my server. This is in clear violation of the basic integrity rules required on this server so you were banned.
Looks quite natural: If one has only bad arguments, one has to ban those who may easily refute them.
Last but not least, there was a time when he felt more secure about his arguments, when he gave me the opportunity to participate on his blog with an argumentation about de Broglie-Bohm pilot wave theory. He has, it seems, learned the lesson that his arguments are too weak, so that he cannot allow such counterarguments on his blog.
I can. I have no problem with lumo coming here for "debunking" all this "crackpot nonsense" here. He would have to restrict himself to arguments about the content; personal attacks would not be allowed, that is all. I would guess that if he had some arguments about the content, he would make them here too. But if there are no such counterarguments about the content, it would be unreasonable for him to appear here. We will see ;-)
I have found here a strange quote from lumo:
Quote:BTW as I said many times, Feynman could have sold the path-integral formulation as something much more, a new interpretation that is meant to "replace" Copenhagen, fix it, and so on. But he was modest and he knew better which is why he preferred to say that it was physically equivalent - and he could give a proof. The physical equivalence depends on a careful isolation of what is physical/measurable and what is just a calculational/linguistic convention. Many others ("interpreters") are much less careful about the "physicality" yet much less modest.
Strange, because this is what is done by dBB theory too: For those cases where one can construct a dBB interpretation, it is, in the quantum equilibrium, physically equivalent to standard quantum theory. And they can and do prove it. Without such a proof - which was the actual situation between de Broglie's presentation at the Solvay conference 1927 and Bohm's paper 1952 where the equivalence proof was given - dBB theory would be as dead as possible for a physical theory.
At http://physics.stackexchange.com/, where I cannot post yet simply because I have only just created an account and therefore do not have any reputation points, I have found some other points worth replying to:
Quote:The more realistic field theories that we would wish to discuss, simply have not been defined mathematically in a way which would allow concrete Bohmian calculations to be exhibited. Practical QFT can ignore that constraint because of the EFT philosophy, but Bohmian field theory cannot.
Why not? The EFT philosophy is nicely compatible with dBB theory.
What I propose is to consider lattice regularizations of QFT. With a finite number of lattice nodes this fits nicely into ordinary quantum theory and, just as well, into ordinary dBB theory. The relativistic Hamiltonian, regularized on a lattice, has the form \(H=\sum_n \pi_n^2 + V(\varphi)\), where the potential \(V(\varphi)\) depends only on the configuration variables, which is what we need to apply the standard construction of a dBB theory. So we have a dBB lattice theory equivalent to the corresponding lattice QFT. And insofar as such a lattice regularization is effectively equivalent to the QFT itself (however defined), we also have effective equivalence of that lattice dBB theory to that QFT. Of course, lattice regularizations are not unproblematic (fermion doubling, the problems of chiral lattice theories), but these are problems one would like to solve anyway to get a correctly defined QFT based on lattice regularizations.
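For concreteness, a toy version of this construction: on a 1D lattice with two sites and a discretized field variable per site, \(H=\sum_n \pi_n^2 + V(\varphi)\) becomes an ordinary self-adjoint operator on a configuration space, which is exactly the setting the standard dBB construction needs. A minimal numerical sketch (all parameters illustrative, not taken from any actual lattice study):

```python
import numpy as np

# Illustrative parameters: 2 lattice sites, each field variable phi_n
# discretized on a grid of 32 points covering [-4, 4).
n_grid, L = 32, 8.0
dphi = L / n_grid
phi = np.linspace(-L / 2, L / 2, n_grid, endpoint=False)

# One-site kinetic term pi^2 = -d^2/dphi^2, second-order finite differences.
T1 = (2.0 * np.eye(n_grid)
      - np.diag(np.ones(n_grid - 1), 1)
      - np.diag(np.ones(n_grid - 1), -1)) / dphi**2

I = np.eye(n_grid)
T = np.kron(T1, I) + np.kron(I, T1)          # sum of pi_n^2 over both sites

# Potential V(phi): mass term plus nearest-neighbour coupling; it depends
# only on the configuration variables, as the dBB construction requires.
P0, P1 = np.meshgrid(phi, phi, indexing="ij")
V = 0.5 * (P0**2 + P1**2) + 0.5 * (P1 - P0)**2
H = T + np.diag(V.ravel())                   # ordinary self-adjoint Hamiltonian

E = np.linalg.eigvalsh(H)
print(E[:3])    # low-lying spectrum of the 2-site lattice theory
```

For these toy parameters the system is two coupled oscillators, so the continuum spectrum is known in closed form and the discretization can be checked directly; nothing in the construction depends on the specific potential.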
Quote:Gauge symmetry might be a more serious problem. Bohmian mechanics can deal with the problem of special relativity by just picking a reference frame and saying that's the real one. That's a kind of gauge-fixing and it's going to work. I have no similar confidence that gauge-fixing would work for "Bohmian gauge field theory".
First, for the classical gauge theory there is no problem at all, for the gauge condition we have a very natural candidate, the Lorenz gauge \(\partial_\mu A^\mu = 0\). Classically, there is no difference.
Then, in quantum theory, there may be some differences, related to Gribov copies. But so what? It would be interesting to find out whether this really leads to observable effects, and, if it does, which theory is better.
But this is about another hidden variable - the gauge potential.
Quote:I know that Lubos blogged on another occasion that Bohmian mechanics absolutely couldn't deal with fermion fields, because they are based on Grassmann variables and you can't have "Grassmann beables". I don't know if that argument is valid; if it is, maybe you could still get by just with beables for the bosonic fields;
About how to handle fermion fields, see my post A dBB theory for fermion fields. But it is worth noting that an incomplete configuration space - say, based on bosons only - would be sufficient for all practical purposes too. What one needs to prove an equivalence theorem is, essentially, only that different macroscopic states of the measurement instruments can already be distinguished by this incomplete part of the configuration. I have to admit that I have never liked this idea and cannot now give a good reference for it or provide the relevant formulas, but I have seen papers where this has been proposed: for particles with spin, where only the position was used, and for field theory, where only bosons were used.
Wow, another continuation of the attack titled Bohmists' inequivalence & dishonesty.
The physically interesting part of the answer I have put into a separate thread.
I have to thank him for providing the link to http://www.bohmianmechanics.org/, a nice site; I had not seen it before. And, good work: if one wants to discredit a site, the best place to look would be something satirical, like the following "advice for Bohmians":
Quote:Relax about it. It is unlikely that you will ever convince anyone. The result either will be “You can’t say that!” or “Sounds nice, but there must be something wrong with it otherwise the physics community would embrace it.” Seriously. You are not going to win an argument.
Still here? Wow, you are a glutton for punishment.
and then to find an objectionable joke which can be presented out of context as evil.
Motl has changed his policy toward this forum and linked it himself. Fine. The reason for linking it was quite laughable: to support a conspiracy theory that secur and I are the same person, together with some user7348 on Stack Exchange. LOL. Faking users to create the impression that there are more supporters of dBB than there are in reality? Sorry, this is something I leave to democrats, who think that the number of supporters of some nonsense matters. But even if I did, it would be more plausible to invent a supporter of my ether theories, where I really am alone so far, instead of some dBB supporters; dBB is, in comparison, mainstream with a lot of supporters. Whatever, it is not an argument about physics at all.
Then there is a link to a seemingly unpublished paper, Pisin Chen, Hagen Kleinert, Deficiencies of Bohmian Trajectories in View of Basic Quantum Principles, which is so dubious that even Motl doubts its main technical claim. Then we see a string theorist lamenting about Bohmist referees rejecting anti-Bohmian papers:
Quote:By the way, it seems clear to me what happens when people – including Hagen Kleinert, a collaborator of Feynman in his last years and a co-author of an ingenious path-integral solution to the hydrogen atom – send a paper criticizing Bohmian mechanics to a journal. The editor almost certainly sends such a paper to some Bohmist referees. And what a surprise, Bohmists are dishonest Marxist aßholes who will prevent the publication even if they know that the paper is correct. The very existence of this would-be "subfield" of physics is a problem that should have been prevented.
I have emphasized the last sentence, because I think it describes the main difference between his (string-theoretic?) view and my own: I would not object to the very existence of string theory, even if I think this direction is an impasse. He thinks that the very existence of alternatives is a problem which should be prevented.
Then there is a lot of argumentation based on thermodynamics. In principle, I had planned to answer it, but I have seen that this requires more work, because I would also have to cover a lot of the conceptual foundations of thermodynamics. The problem is that about these foundations I now tend to favor the modern, Bayesian approach, described, for example, by Caticha, Entropic Inference and the Foundations of Physics, USP Press, Sao Paulo, Brazil 2012. So I guess this would open another can of worms of disagreement, now about what entropy is.
Whatever, once the microscopic theory is proven to be equivalent, the macroscopic approximation has to be equivalent too. If not, there is some error in the macroscopic approximation.
After this, Motl quotes some thread on stack exchange without giving a link. So, here is what he has not quoted.
Quote:I think it is possible to reduce it to the quantum mechanical calculations. I would acknowledge that what I have written here is an oversimplification, and that this problem needs a more detailed discussion. Of course, the von Neumann entropy is 0 for a pure state, but a measurement transforms a pure state into a mixed one, and a position measurement would transform it into a state such that the entropy I have given would be the von Neumann entropy, so it is something close enough. The details have to be discussed. But I doubt here would be the right place. – Schmelzer
Ilja, none of the statements you are making is correct. For example, in your latest comment, it is not true that a "measurement transforms a pure state to a mixed state". Where did you get this misconception? Your formulae for the entropy aren't "oversimplifications". Instead, they're so wrong that they have nothing at all to do with the correct result. Just try to calculate the numerical magnitude of the heat capacities - which you were supposed to do, anyway. You will see that they have nothing to do with the tiny measured heat capacities. – Luboš Motl
If one ignores the result of the measurement, the result is a mixed state. It is the same mixed state which you can predict before the measurement, if all you know is that the measurement will be done. – Schmelzer
But, of course, if one quoted such things as me saying "Of course, the von Neumann entropy is 0 for a pure state", one could no longer make claims such as
Quote:Instead, the entropy is S=0 for any pure state |ψ⟩ while for a mixed state, we must use the von Neumann entropy ... You may see that Schmelzer doesn't understand statistical physics (and probably quantum mechanics) at all. He confuses pure states and mixed states (mixed states are absolutely needed to meaningfully discuss any nonzero values of entropy etc.) and does many other stupid things.
So, I can understand that Motl preferred not to link the discussion itself.
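Since the dispute turns on a small technical point, it is easy to check directly: the von Neumann entropy \(S(\rho) = -\mathrm{Tr}\,\rho\ln\rho\) vanishes for every pure state, and a projective measurement whose outcome is ignored turns a pure state into a mixed one with \(S>0\). A minimal numerical check (a two-dimensional toy "position" space, illustrative only):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho ln rho), computed from the eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]                      # 0 * log 0 -> 0
    return float(-np.sum(w * np.log(w)))

# A pure state: equal superposition of two "position" basis states.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())
print(von_neumann_entropy(rho_pure))      # vanishes for any pure state

# Projective position measurement with the outcome ignored:
# rho -> sum_i P_i rho P_i, in general a mixed state.
projectors = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
rho_mixed = sum(P @ rho_pure @ P for P in projectors)
print(von_neumann_entropy(rho_mixed))     # ln 2 for this state
```

Both statements quoted above are therefore compatible: S = 0 for the pure state before the measurement, S = ln 2 for the mixture obtained by ignoring the outcome.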
And yet another attack by Lubos Motl: "Slowly for Peter Shor: Page 1 of Dirac", this time against Peter Shor. The case is rather funny: Motl thinks that some quotes from Dirac provide a sufficient argument against any classical theory. But even a consideration of the context raises doubt: Dirac wrote this text at a time when de Broglie had given up his pilot wave theory (because he was unable to find the measurement theory which was found later by Bohm). So, that Dirac thought a return to a classical worldview impossible is a triviality; otherwise he would have published a paper containing Bohm's theory, or something similar, himself. In such a situation one has to expect that there is some problem on the way which seems unsolvable, and that one writes about this problem if one writes a textbook.
The Dirac textbook quotes themselves are irrelevant to dBB; no wonder, given that they were written, not about dBB, by someone who did not even know that something like dBB could exist at all. In the remaining text I have found nothing that has not already been answered here. Or, maybe, this?
Quote:I must emphasize that if you extend the atom's |ψ⟩ to the quantum field theory's |Ψ⟩ that you rebrand as a "second-quantized pilot wave", you may obviously extend the theory to "mathematically coincide" with the whole quantum field theory. But the meaning of the symbol |Ψ⟩ is completely different than in quantum field theory and if you want to preserve the "realist" i.e. classical character of your theory, you must ultimately say that some observable quantities are functions of the classical degrees of freedom |Ψ⟩.
In this way, you're just postponing the moment when you admit that it doesn't work at all. At the end, you know that the only correct interpretation of the complex numbers in |ψ⟩ or |Ψ⟩ is that they are probability amplitudes that determine the probabilities of otherwise random outcomes, via the Born rule. But Bohmian mechanics ultimately wants to say that |ψ⟩ or |Ψ⟩ are classical degrees of freedom that should be observable "directly", in a single experiment.
Hm, what dBB-like theories explicitly say is that the configuration is what we immediately observe around us. The status of the wave function is much less clear. But the majority opinion is clearly that the wave function is, or defines, some really existing object.
But Bohmians are certainly not primitive positivists, who refuse to grant something the status of existence if they are unable to observe it directly. Do they want to observe the wave function directly? I would guess nobody would object if this came for free, but I doubt anyone would really try hard to achieve it. What one wants is a consistent set of equations, where the future of all the really existing things is defined by really existing things only, instead of ghosts or gods or so. So, once the evolution equation for the really existing configuration depends, via the guiding equation, on the wave function, there is a point in awarding the status of real existence to the wave function too.
But let's also note that this is not at all obligatory. All the wave functions which appear in real life are, from the point of view of dBB theory, only effective wave functions. Now, effective wave functions are defined in terms of the wave function of the universe by
\[ \psi^{eff}(q_{sys},t) = \psi^{universe}(q_{everything\, else}(t), q_{sys}, t)\]
thus it is, by definition, something which depends not only on the wave function of the universe, but also on the trajectories \(q_{everything\, else}(t)\), which, without doubt, are real and which, moreover, define (via the preparation procedure) the effective wave function completely.
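The definition can be made concrete in a toy model: take a "universe" with one system variable and one environment variable, and slice the universal wave function at the actual environment configuration. A minimal sketch (the wave function and the value of \(q_{env}(t)\) below are illustrative assumptions):

```python
import numpy as np

# Toy universe of two configuration variables: q_sys and q_env.
x = np.linspace(-5.0, 5.0, 201)
dx = x[1] - x[0]
QS, QE = np.meshgrid(x, x, indexing="ij")

# A hypothetical entangled "wave function of the universe" (unnormalized).
psi_universe = np.exp(-0.5 * (QS - QE)**2) * np.exp(-0.25 * QE**2)

# The actual environment trajectory q_env(t) picks out a slice:
q_env_t = 1.0                                  # assumed value at time t
j = int(np.argmin(np.abs(x - q_env_t)))
psi_eff = psi_universe[:, j]                   # psi_eff(q_sys) = psi_u(q_env(t), q_sys)

# Normalize; here the effective wave function is a Gaussian packet
# centered at the actual environment configuration.
psi_eff = psi_eff / np.sqrt(np.sum(np.abs(psi_eff)**2) * dx)
peak = x[np.argmax(np.abs(psi_eff))]
print(peak)    # the effective packet is centered at q_env(t) = 1.0
```

The point of the sketch: psi_eff is fixed completely by the universal wave function together with the real trajectory of everything else, exactly as stated above.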
Whatever, it does not matter at all what the status is - ontological, epistemic, or some strange mixture of the above. Since Bohmists are not positivists, they will not require direct observability of the wave function.
Or maybe I have misunderstood Motl here? Maybe he sees a conflict in that QM defines, via the Born rule, only probabilities, while dBB assumes the wave function is real? This interpretation would require that Motl has not understood that in dBB theory there is a difference between the observable probabilities for trajectories and the wave function, and that the Born rule holds only for a particular subclass of dBB states, namely states in quantum equilibrium. So there is no such conflict at all. Probabilities for trajectories are one thing, the wave function is another.
Quote:At any rate, Bohmian mechanics is a theory where the pilot wave and the particle position are equally "real". So if a physical system reaches the equilibrium, it spends an "equal amount of time" at every possible region of the total phase space.
So? I think there are a lot of assumptions about what classical physics looks like which one has to use to reach such conclusions. Motl uses the term "phase space" here. Hm. It sounds like he presupposes that the equations are of Hamiltonian form, so that one can define a microscopic entropy \(S=\int \rho \ln \rho \,dp\, dq\) which is preserved in time during the evolution, no?
Another line of argumentation which can easily be seen to be invalid in this form, but which is worth commenting on:
Quote:In a classical theory describing the pilot wave |ψ⟩, a set of classical degrees of freedom, there is really no reason whatever to assume that the fundamental equations controlling these degrees of freedom are linear. Everything that is not forbidden is allowed. And because this non-linearity isn't reducing the consistency of this non-quantum theory at all, one must assume that it exists – both in the normal evolution of the pilot wave as well as in its impacts on other parts of the system that we use to measure the atom (e.g. the electromagnetic field). This is an example of Dirac's claim that quantum mechanics is far less arbitrary than classical physics: the linearity of the evolution and other operators (observable) is needed for consistency (because the linearity basically coincides with the linearity rules in the probability calculus) while no similar linearity constraint may ever be justified in a classical theory.
Why is this invalid? Because the fundamental equation for \(\psi\) is not derived from something more fundamental; it is simply postulated. It is the Schrödinger equation. End of discussion. (This is not string theory; it is physics, where we have well-defined equations.)
But does this mean that one would not like to understand why the equation is the Schrödinger equation? Of course one would. It would be nice to be able to derive the Schrödinger equation from something else, preferably something simpler or easier to understand. Linearity itself is so easy to understand - for almost everything, small variations are approximately linear - that it is not even a problem. But linearity is, of course, not the only thing one would like to understand there.
And now note the possibility mentioned above - that the wave function is not something really existing, but epistemic, describing our knowledge of reality, not reality itself. Such an interpretation requires a very different consideration. In particular, in this case one cannot simply say that the guiding equation defines the velocity. Our knowledge about reality cannot define how reality behaves - except in wishful thinking. Thus, the guiding equation obtains a different status - it becomes a sort of consistency condition between the real velocity and our knowledge about it defined by the wave function. Or it also obtains a status of knowledge - as the average velocity for a given state of knowledge. There are logical consistency rules, which distinguish rational, consistent sets of knowledge. And if one assumes that the wave function describes such a state of knowledge, this clearly enforces quite rigorous restrictions on the equations of evolution of such knowledge.
And, indeed, these restrictions would be more rigorous than the restrictions which exist if one assumes that the wave function is some really existing object.
In this sense, assuming that the wave function is a really existing object is a cheap solution. If one uses it, almost every system of equations would be fine: all one needs for a realistic theory is that the evolution equations depend only on really existing (according to the theory) things. This can simply be arranged by giving all the items which appear in the equations the status of really existing things. This simple solution has its advantages - it produces a simple realistic and causal theory, to counter quantum mysticism, anti-realism and the rejection of causality.
But it does not promise a deeper understanding, an explanation, of the nature of the Schrödinger equation.
"But it does not promise a deeper understanding, an explanation, of the nature of the Schrödinger equation".
I think it does. But what you end up with isn't de Broglie - Bohm theory any more. As for what it is, I don't know. But see https://arxiv.org/pdf/1103.3506v2.pdf on page 16 where you quote Bell?
The scintillation is not the particle, just as the eye of the storm is not the storm.
[Image: 300px-Hurricane_Isabel_from_ISS.jpg]
Of course, what I will end up with, if the program is successful, will not be dBB. It differs on such a central question as what defines the complete state of reality. It is for this reason that I have invented a new name for this program: the paleoclassical interpretation.
Actually, I think Caticha's entropic dynamics is quite close to this program too. So I am quite close to giving up this program of my own and supporting Caticha. But I still have some ideas beyond Caticha's: not yet written down, but work in progress.
Don't give up your own program to support Caticha. See things like this from http://arxiv.org/abs/1601.01708 :
"Our goal has been to extend entropic dynamics to curved spaces which is an important preliminary step toward an entropic dynamics of gravity."
I'm confident that this is a step in the wrong direction.
Of course, I do not have to support every movement in every direction.
I'm looking at Bohm (1952). On page 171 he outlines 3 ad hoc assumptions which are needed to guarantee equivalence with quantum mechanics. (3) says that the probability density of position is given by the usual square of the magnitude of the wave function. It then says that this is due to not knowing the initial conditions of the particle. I do not understand why the initial conditions of the particle are unknown. This seems like another assumption 3b. Furthermore, if the wavefunction is a position eigenstate which always occurs after a position measurement, then P(x) is 1 at a specific location and 0 everywhere else so that the precise location of the position is well defined which undermines the reasoning of (3).
Yes, Bohm 1952 requires the assumption that the particle distribution is initially in quantum equilibrium.
But this is no longer an independent assumption. Today we have Valentini's subquantum H-theorem, which shows that quantum equilibrium will be reached, in analogy to Boltzmann's H-theorem for thermal equilibrium. This has also been supported by numerical computations by Valentini and Westman. See https://en.wikipedia.org/wiki/Antony_Valentini for references.
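As an aside, the weaker statement that quantum equilibrium, once reached, is preserved (equivariance) is easy to illustrate numerically: for a free Gaussian packet the Bohmian trajectories are known in closed form, and a Born-distributed ensemble stays Born-distributed. A sketch in units \(\hbar = m = 1\) (this illustrates equivariance only, not Valentini's relaxation result):

```python
import numpy as np

rng = np.random.default_rng(0)
hbar = m = 1.0
sigma0 = 1.0                       # |psi_0|^2 is N(0, sigma0^2)

def sigma(t):
    """Width of the freely spreading Gaussian packet."""
    return np.sqrt(sigma0**2 + (hbar * t / (2 * m * sigma0))**2)

# For the free Gaussian the guidance equation integrates in closed form:
# each trajectory is carried along with the spreading, x(t) = x0*sigma(t)/sigma0.
x0 = rng.normal(0.0, sigma0, size=200_000)   # Born-distributed initial ensemble
t = 3.0
xt = x0 * sigma(t) / sigma0

# Equivariance: the transported ensemble is still Born-distributed, i.e.
# its spread matches the sigma(t) predicted from |psi(x,t)|^2.
print(np.std(xt), sigma(t))    # both ≈ 1.80
```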
(08-18-2016, 06:28 PM)user7348 Wrote: Furthermore, if the wavefunction is a position eigenstate which always occurs after a position measurement, then P(x) is 1 at a specific location and 0 everywhere else so that the precise location of the position is well defined which undermines the reasoning of (3).
Of course, the wavefunction is not a "position eigenstate". dBB theory is a realistic theory, so it is assumed to describe what really exists, not what is measured.
Wednesday, May 06, 2015
Many uses in researching quantum dots
Because nanoparticles are so small, millions of times smaller than the width of a human hair, they have "tremendous surface area," raising the possibility of using them to design materials with more efficient solar-to-electricity and solar-to-chemical energy pathways, says Ari Chakraborty, an assistant professor of chemistry at Syracuse University.
"They are very promising materials," he says. "You can optimize the amount of energy you produce from a nanoparticle-based solar cell."
Chakraborty, an expert in physical and theoretical chemistry, quantum mechanics and nanomaterials, is seeking to understand how these nanoparticles interact with light after changing their shape and size, which means, for example, they ultimately could provide enhanced photovoltaic and light-harvesting properties. Changing their shape and size is possible "without changing their chemical composition," he says. "The same chemical compound in different sizes and shapes will interact differently with light."
Specifically, the National Science Foundation (NSF)-funded scientist is focusing on quantum dots, which are semiconductor crystals on a nanometer scale. Quantum dots are so tiny that the electrons within them exist only in states with specific energies. As such, quantum dots behave similarly to atoms, and, like atoms, can achieve higher levels of energy when light stimulates them.
Chakraborty works in theoretical and computational chemistry, meaning "we work with computers and computers only," he says. "The goal of computational chemistry is to use fundamental laws of physics to understand how particles of matter interact with each other, and, in my research, with light. We want to predict chemical processes before they actually happen in the lab, which tells us which direction to pursue."
These atoms and molecules follow natural laws of motion, "and we know what they are," he says. "Unfortunately, they are too complicated to be solved by hand or calculator when applied to chemical systems, which is why we use a computer."
The "electronically excited" states of the nanoparticles influence their optical properties, he says.
"We investigate these excited states by solving the Schrödinger equation for the nanoparticles," he says, referring to a partial differential equation that describes how the quantum state of some physical system changes with time. "The Schrödinger equation provides the quantum mechanical description of all the electrons in the nanoparticle.
"However, accurate solution of the Schrödinger equation is challenging because of the large number of electrons in the system," he adds. "For example, a 20 nanometer CdSe quantum dot contains over 6 million electrons. Currently, the primary focus of my research group is to develop new quantum chemical methods to address these challenges. The newly developed methods are implemented in open-source computational software, which will be distributed to the general public free of charge."
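The size dependence described here can be illustrated with the simplest textbook model, a particle confined in a sphere with an effective mass (a toy estimate, not the group's quantum chemical methods; the effective-mass value is an assumed, CdSe-like number):

```python
import numpy as np

# Particle-in-a-sphere model: the lowest confinement level of an electron
# in a dot of radius R is E_1 = hbar^2 * pi^2 / (2 * m_eff * R^2), so
# shrinking the dot widens the effective gap -- size-tunable optics.
hbar = 1.054571817e-34      # J*s
m_e = 9.1093837015e-31      # kg
m_eff = 0.13 * m_e          # assumed effective electron mass (CdSe-like)
eV = 1.602176634e-19        # J per electronvolt

def confinement_energy_eV(R_nm):
    R = R_nm * 1e-9
    return (hbar * np.pi / R)**2 / (2 * m_eff) / eV

for R_nm in (2.0, 5.0, 10.0):
    # roughly 0.72 eV at 2 nm down to about 0.03 eV at 10 nm
    print(R_nm, confinement_energy_eV(R_nm))
```

The 1/R² scaling is the reason the same chemical compound in different sizes interacts differently with light, as described above.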
Solar photovoltaics "requires a substance that captures light, uses it, and transfers that energy into electrical energy," he says. With solar cell materials made of nanoparticles, "you can use different shapes and sizes, and capture more energy," he adds. "Also, you can have a large surface area for a small amount of material, so you don't need a lot of it."
Nanoparticles also could be useful in converting solar energy to chemical energy, he says. "How do you store the energy when the sun is not out?" he says. "For example, leaves on a tree take energy and store it as glucose, then later use the glucose for food. One potential application is to develop artificial leaves for artificial photosynthesis. There is a huge area of ongoing research to make compounds that can store energy."
Medical imaging presents another useful potential application, he says.
"For example, nanoparticles have been coated with binding agents that bind to cancerous cells," he says. "Under certain chemical and physical conditions, the nanoparticles can be tuned to emit light, which allows us to take pictures of the cancerous cells. You could pinpoint the areas where there are cancerous cells in the body. The regions where the nanoparticles are located show up as bright spots in the photograph."
Chakraborty is conducting his research under an NSF Faculty Early Career Development (CAREER) award. The award supports junior faculty who exemplify the role of teacher-scholars through outstanding research, excellent education and the integration of education and research within the context of the mission of their organization. NSF is funding his work with $622,123 over five years.
As part of the grant's educational component, Chakraborty is hosting several students from a local high school—East Syracuse Mineoa High School—in his lab. He also has organized two workshops for high school teachers on how to use computational tools in their classrooms "to make chemistry more interesting and intuitive to high school students," he says.
"The really good part about it is that the kids can really work with the molecules because they can see them on the screen and manipulate them in 3-D space," he adds. "They can explore their structure using computers. They can measure distances, angles, and energies associated with the molecules, which is not possible to do with a physical model. They can stretch it, and see it come back to its original structure. It's a real hands-on experience that the kids can have while learning chemistry."
Given a symmetric (densely defined) operator in a Hilbert space, there might be quite a lot of selfadjoint extensions to it. This might be the case for a Schrödinger operator with a "bad" potential. There is a "smallest" one (Friedrichs) and a largest one (Krein), and all others are in some sense in between. Considering the corresponding Schrödinger equations, to each of these extensions there is a (completely different) unitary group solving it. My question is: what is the physical meaning of these extensions? How do you distinguish between the different unitary groups? Is there one which is physically "relevant"? Why is the Friedrichs extension chosen so often?
I am asking this question as a mathematician trying to understand the meaning and motivation of the objects I am working with. – András Bátkai Sep 15 '11 at 19:49
2 Answers
The differential operator itself (defined on some domain) encodes local information about the dynamics of the quantum system. Its self-adjoint extensions depend precisely on choices of boundary conditions for the states that the operator acts on, hence on global information about the kinematics of the physical system.
This is even true fully abstractly, mathematically: in a precise sense the self-adjoint extensions of symmetric operators (under mild conditions) are classified by choices of boundary data.
More information on this is collected here
See the references on applications in physics there for examples of choices of boundary conditions in physics and how they lead to self-adjoint extensions of symmetric Hamiltonians. And see the article by Wei-Jiang there for the fully general notion of boundary conditions.
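A standard toy example of how boundary data select an extension: p = -i d/dx on [0,1] is symmetric on functions vanishing at the boundary, and each condition psi(1) = e^{i theta} psi(0) yields a distinct self-adjoint extension, with spectrum {theta + 2 pi n}. A small numerical illustration of the spectrum tracking theta (central-difference discretization; only the low-lying eigenvalues are accurate):

```python
import numpy as np

def momentum_spectrum(theta, N=400):
    """Spectrum of a discretized -i d/dx on [0,1] with twisted boundary data."""
    h = 1.0 / N
    S = np.zeros((N, N), dtype=complex)       # shift operator: psi_j -> psi_{j+1}
    S[np.arange(N - 1), np.arange(1, N)] = 1.0
    S[N - 1, 0] = np.exp(1j * theta)          # the boundary condition enters here
    P = -1j * (S - S.conj().T) / (2 * h)      # central-difference -i d/dx
    assert np.allclose(P, P.conj().T)         # Hermitian for every theta
    return np.sort(np.linalg.eigvalsh(P))

for theta in (0.0, 0.7, 2.0):
    eigs = momentum_spectrum(theta)
    # for each theta, an eigenvalue sits at theta (up to O(h^2) error)
    print(theta, eigs[np.argmin(np.abs(eigs - theta))])
```

Different values of theta give genuinely different operators with different spectra, which is the sense in which the boundary data, not the formal differential expression, fix the physics.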
A typical interpretation of the self-adjoint extensions of the free Hamiltonian on a line segment is that you get a four-parameter family of possible boundary conditions that preserve unitarity. Some of them just "bounce" the wave; some others "teletransport" it from one wall to the other. So it is also traditional to imagine this segment as a circle from which you have removed a point, and then you are in the business of studying "point interactions", or generalisations of Dirac-delta potentials. The topic resurfaces from time to time, but surely some old references can be dug up starting from M. Carreau, Four-parameter point-interaction in 1d quantum systems, Journal of Physics A, 26:427, 1993. Among other works, I would also mention Seba and Polonyi.
Sometimes the extensions are linked to the question of the domain of definition of the operator, and then to the existence of anomalies. See Phys. Rev. D 34:674-677, 1986, "Anomalies in conservation laws in the Hamiltonian formalism", revisited by the same author, J. G. Esteve, later in Phys. Rev. D 66:125013, 2002 ( http://arxiv.org/abs/hep-th/0207164 ). These topics have been alive for years at the University of Zaragoza; some related material, perhaps more about boundary conditions than about extensions, is http://arxiv.org/abs/0704.1084, http://arxiv.org/abs/quant-ph/0609023, http://arxiv.org/abs/0712.4353
I hadn't been aware of the reference by Esteve. I have added it to the references of the nLab entry ncatlab.org/nlab/show/quantum+anomaly (many more references are currently still missing there, of course). – Urs Schreiber Sep 16 '11 at 11:14
@Urs Schreiber Thanks for the add. The topic was common folklore in Zaragoza in the nineties and it was not infrequent in PhD theses, but I think that its main role was motivational, either aiming towards other topics, or used as a guide when exploring some other concept. For instance, it was very valuable to me in order to navigate Albeverio et al, who had got into a confusing notation/naming for some self adjoint extensions classifying these "1D point interactions". – user135 Sep 16 '11 at 11:35
Thanks, I like both answers very much. The references are great. Unfortunately, I have to choose one answer to accept... – András Bátkai Sep 16 '11 at 19:23
Incite 1: Visualization of Electron Walkers Computed by Quantum Monte Carlo Simulation of Energy Pathways in Photosynthesis Reactions
In this article, we show how the NERSC Visualization Group has worked closely with the Quantum Monte Carlo Study of Photoprotection via Carotenoids in Photosynthetic Centers Project at NERSC. The self-guided demonstration will walk you through some of the steps taken by the NERSC Visualization Group during the collaborative effort.
This project, led by William A. Lester, Jr. of LBNL and UC Berkeley, aims to increase understanding of the complex processes which occur during photosynthesis, the process by which plants and bacteria convert the sun's light into energy, taking in carbon dioxide and producing oxygen in the process. This project proposes to investigate the electronic structures behind a defense mechanism within the photosynthetic system that protects plants from absorbing more solar energy than they can immediately utilize, and, as a result, suffering from oxidation damage.
The systems of interest are the light-harvesting protein II, LH-II, of Rs. molischianum, and Photosystem I, PS I, of cyanobacteria. Carotenoids prevent the formation of singlet oxygen molecules by quenching the lowest triplet state of chlorophyll.
³Chl* + ¹Car → ¹Chl + ³Car*
The main objective of this project is to determine the ground- to triplet-state energy difference of the carotenoids present in LH-II and PS I using Quantum Monte Carlo (QMC) methods. QMC is a stochastic method for solving the Schrödinger equation [1][2].
First Light
The first step required in the visualization process is to transform data from the form output by the simulation into a form suitable for use by visualization tools. The first data sets that we had were of Variational Monte Carlo walkers for the electron density of spheroidene (the carotenoid in this case), generated as HDF5 files. Each walker is a snapshot of the configuration of the 3N electronic coordinates (for N electrons). The idea is to sample the electronic density using fictitious kinetics that, in the limit of large simulation time, yield the density at equilibrium [2]. The random walkers are propagated following a transition probability for a coordinate to move from one position to another. The image below shows our "first light" result, which served as a sanity check that we didn't do anything untoward with the data while reading it in. We obviously had the nuclei and the electrons in different units!
First Light from Incite1: the coordinates are in different units!
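The walker propagation just described — moves accepted according to a transition probability so that, over time, walkers sample the electron density — can be illustrated with a minimal Metropolis sketch. This is not the project's QMC code; the Gaussian |ψ|² and all parameters are illustrative stand-ins:

```python
import numpy as np

def propagate_walkers(walkers, psi_sq, step_size=0.2, rng=None):
    """One Metropolis sweep over an ensemble of walkers.

    walkers : (n_walkers, n_coords) array of electronic coordinates
    psi_sq  : callable returning |psi|^2 for each configuration
    """
    rng = rng or np.random.default_rng()
    proposal = walkers + step_size * rng.standard_normal(walkers.shape)
    # Accept each move with probability min(1, |psi(new)|^2 / |psi(old)|^2)
    ratio = psi_sq(proposal) / psi_sq(walkers)
    accept = rng.random(len(walkers)) < ratio
    walkers[accept] = proposal[accept]
    return walkers

# Toy target density |psi|^2 ∝ exp(-x^2) in 1D (a hypothetical stand-in)
psi_sq = lambda x: np.exp(-np.sum(x**2, axis=1))
walkers = np.zeros((5000, 1))
for _ in range(500):
    walkers = propagate_walkers(walkers, psi_sq)
print(np.var(walkers))  # ≈ 0.5, the variance of a density ∝ exp(-x^2)
```

After enough sweeps the ensemble of walkers is distributed according to |ψ|², so snapshots of walker positions can be visualized directly as samples of the electron density.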
3D Visualization
Once we had established that we had a correct representation of the system, we started to plan more elaborate ways of visualizing the data. The walkers are scattered points in 3D space, so we binned them to visualize isosurfaces of electron density as the simulation progresses.
Spheroidene molecule. Isosurface of density 1 for the walkers. All the walkers.
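The binning step above can be sketched with NumPy's `histogramdd`: scattered walker positions become voxel counts on a regular grid, which an isosurfacing tool can then contour. The point cloud and grid parameters here are illustrative, not the project's actual data:

```python
import numpy as np

# Hypothetical walker snapshot: each row is an (x, y, z) sampled position
rng = np.random.default_rng(0)
points = rng.normal(size=(100_000, 3))

# Bin the scattered points into a regular 3D grid; each voxel then holds
# a count proportional to the local electron density
density, edges = np.histogramdd(points, bins=(64, 64, 64),
                                range=[(-4.0, 4.0)] * 3)
density /= density.sum()  # normalize to a probability per voxel

print(density.shape)  # (64, 64, 64), ready for isosurface extraction
```

A visualization tool such as VisIt or ParaView can then extract isosurfaces of the `density` volume, frame by frame, to show the density evolving as the simulation progresses.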
The next step was to follow the walkers along their trajectories. The idea was to make a "tail" that gets darker and more transparent for older positions of the walkers. A tail size of 10 steps seemed like a reasonable length.
Trajectories, first try. Fading walkers. Another view.
A high resolution view of three walkers colored with different colors. Click on the image to see a 4096x4096 view.
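The fading-tail effect described above amounts to assigning each of the last 10 positions of a walker an opacity that decreases with age. A minimal sketch of such an opacity ramp (the linear fade is an assumption; any monotone ramp would do):

```python
import numpy as np

TAIL = 10  # number of past positions kept per walker

def tail_opacity(n=TAIL):
    """Opacity ramp for a walker tail: newest position opaque, oldest faint."""
    age = np.arange(n)        # 0 = current position, n-1 = oldest
    return 1.0 - age / n      # linear fade; darkening the color works the same way

alphas = tail_opacity()
print(alphas)  # [1.0, 0.9, ..., 0.1]
```

Each rendered tail segment then uses the opacity for its age, producing the darker, more transparent trail behind each walker.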
Adding the chlorophyll molecule completes the picture of the system.
Spheroidene and Chlorophyll system. The Chlorophyll was rendered using a colored stick representation.
Animations of the evolution of the simulation allow us to visualize electron trajectories and analyze their behavior.
Quick Time movie of the walkers moving along their trajectories (54M)
MPEG movie of the walkers moving along their trajectories (8.3M)
Discussion and Next Steps
As we continue working with the Incite 1 group, we will visualize Diffusion Monte Carlo data sets and we will implement an Electron Pair Localization function (EPLF) that describes the pairing of electrons in a molecular system.
A preliminary visualization of the difference of densities of alpha and beta electrons shows promising results.
alpha - beta densities, blue is negative, red is positive.
MPEG movie of the alpha-beta densities in time (5.4M)
[1] Monte Carlo Methods in Ab Initio Quantum Chemistry, B. L. Hammond, W. A. Lester Jr., and P. J. Reynolds (World Scientific, Singapore, 1994).
[2] A. Aspuru-Guzik and W. A. Lester, Jr., "Quantum Monte Carlo Methods for the Solution of the Schrödinger Equation for Molecular Systems," in Handbook of Numerical Analysis, Vol. X, Special Volume: Computational Chemistry, C. Le Bris, Ed. (Elsevier, 2003).
Genesis of Eden
Genesis Home
The latest version of this research: Biocosmology
PDF versions:
1. Prebiotic Epoch: Cosmic Symmetry-breaking and Molecular Evolution (pdf)
2. Evolutionary Epoch: Complexity, Chaos and Complementarity (pdf)
3. Consummating Cosmology: Quantum Cosmology and the Hard Problem of the Conscious Brain (pdf)
Critical research developments
1. Chemist Shows How RNA Can Be the Starting Point for Life May 14, 2009
A pivotal article showing how nucleotides can be synthesized from simple molecules.
2. First Cells, Proton-Pumping and Undersea Rock Pores (research article pdf password "model") Oct 19, 2009
A breakthrough in understanding how the first living cells could have been created at an undersea rock-pore interface.
Fig 1 (a) Divergence of the four forces from a single superforce. (b) Non-zero vacuum polarization at minimum energy is the cause of electro-weak symmetry-breaking. (c) Cosmic abundances of the bioelements. (d) Global t-RNA structure and (e) protein lysozyme with substrate (purple). Primary, secondary, tertiary and global structures and conformation changes in biomolecules are the result of a fractal hierarchy of strong covalent and ionic and weaker chemical interactions - H-bonding, hydrophilic, polar and van der Waals interactions - arising from the unresolved non-linear nature of chemical bonding (figures from King, except (e) Watson et. al.).
Biocosmology: Cosmic Symmetry-Breaking and Molecular Evolution
We now explore the structural relationship between cosmological symmetry-breaking and the form of molecular evolution leading to biological systems on Earth. This approach forms an alternative to historical hypotheses in which the form of biogenesis is taken to be the product of a linked sequence of specific conditions, bridged by stochastic selection processes.
The rich diversity of structure in molecular systems is made possible by the profound asymmetries between the nuclear forces and electromagnetism. Although molecular dynamics is founded on electromagnetic orbitals, the diversity of the elements and their asymmetric charge structure, with electrons captured by a spectrum of positively charged nuclei, is made possible through the divergence of symmetry of the four fundamental forces. The non-linear electromagnetic charge interactions of these asymmetric structures is responsible for both chemical bonding and the hierarchy of weak bonding interactions which result in the non-periodic secondary and tertiary structures of proteins and nucleic acids. It also provides the basis for a bifurcation theory which could give biogenesis the same generality that nucleogenesis has.
Differentiation and Inflation: The Microscopic and Cosmic Scales
Force Differentiation: The strong (nuclear binding) and weak (neutron decay) forces, electromagnetism and gravity are believed to have emerged from a single superforce shortly after the big bang, fig 1(a). The strong force is believed to be a secondary effect of the colour force, in much the same way that molecular bonding is a secondary consequence of the formation of atoms. The weak force has become short range because it is mediated by massive particles, which are believed to gain an extra degree of freedom by assimilating a Higgs boson (Georgi 1981, 't Hooft 1980, Veltman 1986). The symmetry between the Z and W particles of the weak force and the massless photon of electromagnetism is thus broken by the lower energy of the polarized configuration, fig 1(b). Even heavier particles are believed to separate the strong force from these two. Force reconvergence occurs at the unification temperature, fig 1(c). The strong-force mesons gain mass from a different mechanism, being the energies of the bound states of the colour force, whose gluons are massless but confined. The separation of gravity from the other forces is more fundamental because it involves the structure of space-time, and may be described by a higher-dimensional superstring force in which particles become excited loops or strings in a higher-dimensional space-time which is compactified into our 4-dimensional form (Green 1985, 1986, Goldman 1988, Freedman & van Nieuwenhuizen 1985).
Cosmic Inflation: A universe in a symmetrical state, but below its unification temperature, is in an unstable high-energy false vacuum. The energy of the Higgs field causes inflation, in which the universe has net gravitational repulsion and expands exponentially, smoothing irregularities to fractal structures on the scale of galaxies (Guth & Steinhardt 1984). The breakdown of the false vacuum then releases a stream of high-energy particles as latent heat, to form the hot expanding universe under attractive gravitation. The gravitational potential energy thus gained equals that of the energetic particles, making the generation of the universe possible from a quantum fluctuation. Variations observed in the cosmic background radiation are consistent with a big bang smoothed by inflation (Smoot 1992).
Interactive Dynamics:
The interaction between the resulting wave-particles also results in distinct effects on the microscopic and cosmic scales, namely galaxy and star formation and genesis of nuclei, chemical elements, and finally molecules, in which the non-linear nature of chemical bonding becomes fully expressed in complex tertiary structures. These interactions are modified indirectly by the nuclear forces which contribute asymmetries, spin-effects, weak decay and the nuclear energy of stars.
Particle Interaction-1: Nucleosynthesis as a Cosmological Dynamical System. The nucleosynthesis pathway generates over 100 atomic nuclei from the already composite proton and neutron. Parity between protons and neutrons is slightly broken via weak decay, fig 1(e), balancing the filling of the lowest nuclear quantum states against increasing electromagnetic repulsion. The process is exothermic and moderated by the catalytic action of several of the isotopes of lighter elements such as carbon and oxygen. The cosmic abundance of the elements, fig 1(d), reflects the binding energies of the nuclei and stable α-particle-like shells (Moeller et. al. 1984). The nucleosynthesis pathway has a cosmologically general form despite some variation in individual star systems.
Particle Interaction-2: Moleculosynthesis, the Culminating Dynamic. Although, by comparison with the energies of cosmic creation or even astronomical bodies, the structures of biomolecules seem much too fragile to be a cosmological feature, symmetry-breaking of the forces leads inevitably to molecular structures as a hierarchical culmination of the interactive phase. Quarks are bound by gluons into composite particles such as the proton p+ and neutron n. These interact by the strong force via the nucleosynthesis pathway to form the elementary nuclei. Subsequently the weaker electromagnetic force interacts, also in two phases, firstly by the formation of atoms around nuclei and then by secondary interaction to form molecules. The latter phase occurs in a sequence of stages through successive strong and weak bonding interactions, producing the complex tertiary structures of biomolecules, fig 1(f,g).
The Cosmic Interaction Sequence: The Pathway to the Planetary Biosphere. Galaxy formation is followed by the generation of the chemical nuclei in the supernova explosion of a short-lived hot star. In the second phase these elements are drawn into a lower-energy, long-lived sun-like star, the lighter [bio]elements - occurring in high cosmic abundance as a result of nucleosynthesis dynamics, fig 1(d) - becoming concentrated on mid-range planets. The final re-entry of the forces is thus represented by stellar photon irradiation of molecular systems, under gravitational stabilization on a planetary surface.
A Brief Survey of Non-linear Orbital Theory
The fact that the laws of chemistry were discovered sooner and were relatively easier to explore than the conditions underlying the unification of electromagnetism with the nuclear forces has resulted in an anomalous historical perspective, which has helped to obscure some of the most interesting and complex manifestations of chemistry as a final interactive consequence of cosmological quantum symmetry-breaking. The increasing nuclear charge permits an unparalleled richness and complexity of quantum bonding structures in which electron-electron repulsions, spin-orbit coupling, and other effects perturb the periodicity of orbital properties and lead to the development of higher-order molecular structures.
Although quanta obey linear wave amplitude superposition, chemistry inherits non-linearity in the form of the attractive and repulsive charge interactions between orbital systems. Such non-linear interaction, combined with Pauli exclusion, is responsible for the diversity of chemical interaction from the covalent bond to the secondary and tertiary effects manifest in the complex structures of proteins and nucleic acids.
The source of this non-linear interaction is the foundation of all chemical bonding, the electric potential. Although the state vector of a quantum-mechanical system comprises a linear combination of eigenfunctions, the electrostatic charge of the electron causes orbital interaction to have non-linear energetics.
The atomic and molecular structure of molecular orbitals illustrates the geometrical complexity that arises from the asymmetrical charge distributions of atomic orbitals. If the nuclear force did not provide up to 100 stable nuclei, and the electromagnetic force were not also asymmetrically distributed between the nucleus and the electrons, this complexity would be impossible. Top left: the radial density waves of 1s, 2s and 3s orbitals. Top right: the 1+3+5 pattern of s, p, and d orbitals showing their geometry. This explains the periodic properties of the table of elements. Bottom left: s and p orbitals can form hybrids by superposition in elements such as carbon, nitrogen, and oxygen. Bottom centre: two types of molecular orbitals are illustrated in which s and p orbitals are combined to create a chemical bond. Bottom right: the linear, planar, and tetrahedral arrangements of the hybrid orbitals. Pi-orbitals are also capable of forming delocalized molecular orbitals which span a whole molecule. These and the tetrahedral sp3 hybrids form the backbone of biomolecular bonding.
In the Schrödinger equation for the hydrogen atom, −(ℏ²/2m)∇²ψ + V(r)ψ = Eψ, with V(r) = −e²/4πε₀r,
the potential function results in charge attraction and a negative energy. The electric potential provides the principal non-linear basis for subsequent bonding phenomena because it results in an inverse square law force and non-linear attraction-repulsion dynamics in four-dimensional space-time. Further effects such as spin-orbit coupling add complicating terms to the Hamiltonian.
The underlying linearity of wave superposition is illustrated in the formation of linear combinations of s & p wave functions to form the four sp3 hybrid orbitals. The treatment of more complex atoms is generally simplified by approximation using perturbation theory or the self-consistent field method, in which a hydrogen-like orbital is based on purely radial repulsion factors for the inner electrons. Variation theory succinctly illustrates the interactive non-linearity of bond formation. The total energy is represented by the resonance integral of the Hamiltonian composed with the wave function, divided by the normalizing overlap integral S.
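In standard textbook notation, the variational expression just described, together with the two-centre trial function and the resonance and overlap integrals it involves, reads:

```latex
E[\psi] \;=\; \frac{\int \psi^{*}\hat{H}\,\psi \, d\tau}{\int \psi^{*}\psi \, d\tau},
\qquad
\psi = c_a \phi_a + c_b \phi_b,
\qquad
H_{ab} = \int \phi_a^{*}\hat{H}\,\phi_b \, d\tau,
\quad
S_{ab} = \int \phi_a^{*}\phi_b \, d\tau .
```

Here φa and φb are atomic orbitals on the two centres; minimizing E with respect to the coefficients ca, cb yields the secular equations of the bond.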
In the case of the one-electron hydrogen molecule ion, with Saa = Sbb normalized to 1, there are two solutions, the bonding and antibonding combinations, with energies E± = (Haa ± Hab)/(1 ± Sab).
Quantum matrix methods are generally simplified to take account of only one aspect of molecular interaction and involve extensive approximations, such as the independent-particle approximation and Hückel theory (Brown 1972). The non-linear interactions of electron repulsions and spin-orbit coupling in the global context of molecular tertiary structure require complex computational techniques, for example to predict the 3-D structure of protein molecules. These are only beginning to simulate the folding of complex molecules, again requiring approximation techniques.
The capacity of orbitals, including unoccupied orbitals, to cause successive perturbations of bonding energetics results in an interaction succession from strong covalent and ionic bond types [200-800 kJ/mole] through to their residual effects in the variety of weaker H-bonding, polar, hydrophobic, and van der Waals interactions [4-40 kJ/mole], merging into the average kinetic energies at biological temperatures [2.6 kJ/mole at 25°C] (Watson et. al. 1988). These are responsible for secondary structures such as the α-helix of proteins and the base-pairing and stacking of nucleic acids, and result in the tertiary effects central to enzyme action, whose energetics are determined by global interactions in complex molecules.
The cooperative reactivity of the active site of hexokinase demonstrates how, even after resolving the covalent and successive weaker bonding effects, the local interactions of individual side chains, and the larger fractal structures arising from weak bonds forming secondary and tertiary protein structure, the entire enzyme is still capable of marked global conformation changes of a highly energetic nature. Chemical forces are thus fractal, leading right up to the globally fractal tissue structures we see in organismic biology, from the lungs to the brain. This is confirmed in the fractal dynamics of key cell structures (Watson et. al.).
Belousov-Zhabotinsky type reaction giving rise to three-dimensional scroll waves (CK).
2.2 Fractal and Chaotic Dynamics and Structure in Molecular Systems.
Most minerals adopt periodic crystal geometries. Although some anomalies are disordered, many, such as the superconducting perovskites, have higher-order geometrical regularity. By contrast, the irregularity in polymers such as polypeptides and RNA is critical in establishing the richness of their tertiary structures and their bio-activity. Variable-sequence polymers with significant tertiary structure are non-periodic because the unlimited variety of monomeric primary sequences induces irregular secondary and tertiary structures. These irregularities are central to biochemistry because they result in powerful catalysts which can alter the reaction dynamics through the generation of local activating sites globally potentiated through intermolecular weak-bonding associations. They also permit allosteric regulation. Despite being genetically coded, such molecules form fractal structures both in stereochemical terms and in terms of their relaxation dynamics.
Prigogine's theory of non-equilibrium thermodynamics replaces maximum entropy with a more general critical point of entropy production, which in an open system may not be a maximum. The associated oscillating chemical systems, such as the Belousov-Zhabotinsky reaction, have demonstrated the capacity of chemical systems to enter into non-linear concentration dynamics, including limit-cycle bifurcations. Period-doubling bifurcations and chaotic concentration dynamics have also been observed. Similar dynamics occur in electrochemical membrane excitation. The living cell is a non-equilibrium open thermodynamic system whose boundary, the membrane, exchanges material with the outside world. This makes it possible for life to be a negentropic system within a universe where entropy is increasing. The photosynthetic conversion of light to chemical energy and structural growth in our great forests is a prime example.
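The limit-cycle concentration dynamics of oscillating reactions can be sketched with the Brusselator, a standard two-species model often used as a stand-in for Belousov-Zhabotinsky-type kinetics. This is an illustration of the limit-cycle idea only, not the actual BZ mechanism; the parameters a and b are arbitrary choices in the oscillatory regime:

```python
import numpy as np

# Brusselator: dx/dt = a - (b+1)x + x^2 y,  dy/dt = b x - x^2 y
# For b > 1 + a^2 the fixed point (a, b/a) is unstable and a limit cycle appears.
a, b = 1.0, 3.0
x, y = 1.0, 1.0
dt, steps = 0.001, 200_000   # simple forward-Euler integration
xs = np.empty(steps)
for i in range(steps):
    dx = a - (b + 1.0) * x + x * x * y
    dy = b * x - x * x * y
    x += dt * dx
    y += dt * dy
    xs[i] = x

# Sustained oscillation: the concentration keeps swinging instead of settling
late = xs[steps // 2:]
print(late.max() - late.min())  # amplitude remains large at late times
```

The concentration x never relaxes to the fixed point: entropy production in the open system sustains the oscillation, which is the qualitative behaviour the text attributes to BZ-type reactions and membrane excitation.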
By contrast, viruses do not form a thermodynamic system as such, but rather a system of pure information. The first emergence of polynucleotides may similarly have been associated with the accrual of such information by a more direct negentropic route: phase transition.
Fig 2: (a) Symmetry-breaking model of selection of bioelements, as an interference interaction between H and CNO, followed by secondary ionic, covalent and catalytic interactions. (b) Boiling points of hydrides illustrate the optimality of H2O as polar H-bonding and structural medium for biological structure (CK). 3-D periodic table Sci Am. Sep 98
Biogenesis as a Central Synthetic Pathway
One of the central ideas of the cosmological biogenesis model is that the molecular interactions forming the pathways to the origins of life as we know it are not just an accidental set of chemical reactions arising out of a great variety of ad-hoc initial conditions, but that they represent a fundamental bifurcation arising ultimately from the cosmological symmetry-breaking of the four forces. The non-linear properties of electron orbitals cause the periodic table to have a critical sequence of bifurcations relating to the fundamental interactions.
Traditionally chemists have become so wedded to the idea of atoms and molecules as simply the "building blocks of the universe", as Isaac Asimov once put it, that they cannot comprehend how they might interact as a quantum dynamical system. The fact that chemical bonding is possible between a large variety of atoms in some form or other leads to the loss of an understanding of how the non-linear electronic interactions gave rise to chemical bonding in the first place. It also leads to a mechanistic view of biogenesis, in which there is no underlying dynamical theory, but simply a search for the special or initial conditions which caused the first self-replicating reaction to get going. The aim is thus either to set up a laboratory reaction by placing extreme order on the system, to elucidate this reaction pathway, or to use random processes and probabilistic arguments to model the likelihood that some collection of replicating molecules might accidentally come together. This has marred prebiotic research and profoundly slowed its advances.
Two illustrations highlight this conceptual barrier. There is a 40-year time span between Miller and Urey's first spark experiments elucidating pathways from simple precursors to the purine nucleic acid bases, and the modification of this synthesis which led to good yields of the pyrimidines. Likewise there have been two decades of research attempting to polymerize ribonucleotides, littered with failures due to oversimplification of RNA interactions and mechanistic variants such as peptide-nucleic acids, before the ribonucleotide evolution techniques of Szostak, and finally the simple relationship between polymerizing ribonucleotides and montmorillonite clays, became obvious.
Atomic radii and electronegativity scales indicate the optimal bonding energetics of CNO and particularly the strong polarity of O. A clear differentiation can also be seen between the ionic potentials of Na and K.
The cosmological biogenesis theory asserts the following three points:
1. All molecular interaction is highly non-linear, and forms an unresolved fractal interactive milieu which permits not only the cascade of weak bonding and global interactions characterizing protein enzymes and nucleic acids but also, on a larger scale, the tissue structures of whole organisms. This means that, while nature can be crystalline, it can also display emergent properties on larger scales which are very difficult to predict from an examination of the components: "the whole is more than the sum of the parts". The non-linear perspective realizes emergence within an in-principle reductionist viewpoint, because the underlying principles are quantum chemistry, but the consequences are emergent fractal interaction. This situation is clearly illustrated in the great difficulty of fully accurate modelling of the electronic dynamics of even simple atoms, because they are many-body problems, and by the complexity of the protein-folding problem (Sci Am. Jan 91; see also Shape is All, NS 24 Oct 98 42).
2. The entire molecular environment is non-linear in a way which is capable of exploring its phase space in the manner of a chaotic dynamical system. This means that planetary, terrestrial and molecular systems display sufficient chaos to generate all the varieties of structural interaction possible. These non-linearities make the natural environment a quantum equivalent of a Mandelbrot set in which a potentially infinite variety of dynamics are possible. The overwhelming majority of chemical experiments into the origins of life (with the notable exception of the original spark experiments) attempt to defeat this process by introducing simple overweening conditions of order to force simple clear-cut products out of the system.
3. Underlying this rich chaotic interaction is a universal bifurcation pathway which is a direct consequence of the form of cosmological symmetry-breaking of the four quantum forces. While there may be more than one way that molecular replication could occur in chemistry, the RNA-based form of life is nevertheless a central bifurcation product of the interaction between the fundamental forces and by no means a mere accident of unlikely circumstances.
The unique and key nature of H-bonding in biogenesis is illustrated in the
structure of the alpha helix of proteins and the pairing of nucleotide bases (Watson et. al.).
Principal Symmetry-splitting : The Covalent Interaction of H with C, N, O.
Quantum interference interaction between the two-electron 1s orbital and the eight-electron 2sp3 hybrid. The resulting three-dimensional covalent bonds give C, N and O optimal capacity to form diverse polymeric structures in association with H. Symmetry is split because the 1s has only one binding electron state, while the 2sp3 has a series of states with differing energies and varied occupancy as the nuclear charge increases. The 1s orbital is unique in the generation of the hydrogen bond through the capacity of the bare proton to interact with a lone-pair orbital. The CNO group all possess the same tetrahedral sp3 bonding geometry and form a graded sequence in electronegativity, with one and two lone pairs appearing successively in N and O.
Polymeric condensation of unstable high-energy multiple-bonded forms. Some of the strongest covalent bonds are the multiple bonds such as C≡C, C≡N, and >C=O. These can be generated by applying any one of several high-energy sources such as u.v. light, high temperatures (900°C), or spark discharge. Because of the higher energy of the resulting pi-orbitals, these bonds possess a specific type of structural instability in which one or two pi-bonds can open to form polymeric structures, particularly when bound to H and alkyl groups, as under reducing conditions. Most of the prebiotic molecular complexity generated by such energy sources can be derived from mutual polymerizations of HC≡CH, HCN, and H2C=O, including purines, pyrimidines, key sugar types, amino acids, porphyrins etc. They form a core pathway from high-energy stability to structurally unstable polymerization, which we will examine in the next section.
Radio-telescope data demonstrates clouds of HCN and H2CO spanning the region in the Orion nebula where several new stars are forming. All of A, U, G, and C have been detected in carbonaceous chondrite meteorites, which also contain membrane-forming products. HCN and HCHO polymerizations also lead to membranous microcellular structures. Although the presence of CO2 as a principal atmospheric gas on the early earth could have reduced the quantities of such reduced molecules, HCN could have been produced as a transient in the early atmosphere leading to heterocyclic products. A variety of microenvironments would still have had access to reducing conditions.
The formation of conjugated double and single bonds in these reactions results in the appearance of delocalized pi-orbitals. Such orbitals in heterocyclic (N, C) rings with conjugated resonance configurations also enable lone-pair n → π* as well as π → π* transitions, resulting in increased photon absorption. These effects in combination play a key role in many biological processes including photosynthesis, electron transport and bioluminescence.
The dynamical subtlety beyond simple crystalline form is demonstrated by the extreme variety of snowflake form, which displays both fractal dendrite growth from the vapour-solid interface, varying with external conditions of temperature and humidity, and a global coherence of geometry which develops partly from quantum interactions, including H2O molecules 'walking' across the crystal surface.
Secondary Splitting between C, N, and O : Electronegativity Bifurcation.
In addition to varying covalent valencies, lone pairs etc., the 8-electron 2sp3 hybrid generates a sequence of elements with increasing electronegativity, arising from the increasing nuclear charge. This results in a variety of secondary effects in addition to the oxidation-reduction parameter, from the polarity bifurcation into aqueous and hydrophobic phases to the complementation of CO2 and NH3 as organic acid and base.
The tetrahedral H-bonding of H2O molecules in ice. In liquid water, 80% of the molecules are in such ordered relationships at any given time. The structures of DNA and proteins derive their energetics from their interaction with water. The stability of polynucleotides is a combination of hydrophobic base-stacking and polar interaction with the phosphate-sugar backbone. Soluble proteins (myoglobin is illustrated) derive their stability and energetics through their interactions with induced water structures. Generally these form micelle structures which have a hydrophobic interior surrounded by polar amino acids (see the illustration of lysozyme). All polymerization pathways of polysaccharides, polynucleotides and polypeptides share a common feature - dehydration is the basis of polymerization, making water the common factor in the negentropic assimilation of order (Watson et. al.).
Optimality of H2O: Polarity, Phase and Acid-base bifurcations. Ionic and Hydrogen bonding.
Apart from metals such as mercury, water has one of the highest specific heats. This is a reflection of the large number of conformational degrees of freedom it contains. It is also capable of an unusually large number of interactions - ionic, polar, H-bonding, acid-base - and the polarity bifurcation into hydrophilic (water-loving) and hydrophobic (oily) phases in biological molecules and structures such as the lipid membrane, which is a sandwich of oily and watery moieties.
(a) Isolation of non-polar molecules into the hydrophobic phase is facilitated by water-bonding clathrate structures whose impressed order is reduced and entropy increased, by reducing these strictures to one common envelope. (b) polypeptides and (c) polynucleotides both form by elimination of H2O, accompanied by ATP energy in the latter as illustrated in the left of the two monomers.
Dehydration is the common currency of polymerization, beginning with the mineral pyrophosphate linkage of ATP. The central biopolymers - polynucleotides, polypeptides and polysaccharides - are uniformly linked by the removal of a molecule of water: dehydration in the aqueous medium. Furthermore the three-dimensional structures of the nucleic acid double-helix, globular enzymes, membranes and ion channels are all made structurally and energetically possible only through the interactions of these molecules with water and the induced H2O structures that form around them in solution. Both nucleic acids and proteins consist of a balance of hydrophilic and hydrophobic interactions, which in the former give hydrophobic base-stacking within a polar backbone, and in enzymes a non-polar micelle surrounded by hydrophilic groups.
Differential electronegativity results in several coincident bifurcations associated with water structure. A symmetry-breaking occurs between the relatively non-polar CH bond and the increasingly polar NH and OH. This results in phase bifurcation of the aqueous medium into polar and non-polar phases, in association with low-entropy water-bonding structures induced around non-polar molecules. This is directly responsible for the development of a variety of structures, from the membrane in the context of lipid molecules, to the globular enzyme form and the base-stacking of nucleic acids.
The optimal nature of water as a hydride is illustrated in boiling points. By comparison with ammonia H3N, water H2O has balanced donating and accepting H-bonds and a stronger polarity. Such polar properties are also clearly optimal over H2S, alcohols etc.
The discovery by ISO, the Infrared Space Observatory, of the widespread incidence of water around stars, planets and throughout the universe where stars are forming has lent increasing weight to the cosmological status of water as a precursor to life. - AP Apr 98
Water provides several other secondary bifurcations besides polarity. The dissociation of H2O into H+ and OH- lays the foundation for the acid-base bifurcation, while ionic solubility generates the anion-cation bifurcation. H-bonding structures are also pivotal in determining the form of polymers, including the alpha helix, base pairing, and the solubility of molecules such as sugars. Many properties of proteins and nucleic acids are derived from water-bonding structures in which a mix of H-bonding and phase-bifurcation effects occur. The large diversity of quantum modes in water is exemplified by its high specific heat, contrasting with that of proteins (Cochran 1971). Polymerization of nucleotides, amino acids and sugars all involve dehydration elimination of H2O, giving water a central role in polymer formation.
P and S as Low-energy Covalent Modifiers - the delicate role of Silicon.
The second-row covalent elements are sub-optimal in their mutual covalent interactions and in their interaction with H. Their size is more compatible with interaction with O, forming e.g. the SiO3^2-, PO4^3- and SO4^2- ions, including crystalline minerals. The silicones are notable for their O content by comparison with hydrocarbons. However, in the context of the primary H-CNO interaction, two new optimal properties are introduced.
PO4^3- is unique in its capacity to form a series of dehydration polymers, both in the form of pyro- and poly-phosphates and in interaction with other molecules such as sugars. The energy of phosphorylation falls neatly into the weak bond range (30-60 kJ/mol), making it suitable for conformational changes. The universality of dehydration as a polymerization mechanism in polynucleotides, polypeptides, polysaccharides and lipids, the involvement of phosphate in ATP energetics, RNA and membrane structure, and the fact that the dehydration mechanism easily recycles, unlike the organic condensing agents, give phosphate uniqueness and optimality as a dehydrating salt.
The function of S in biosystems highlights a second optimality. The lowered energy of oxidation transitions in S, particularly S-S ↔ S-H, by comparison with first-row elements, gives S a unique role both in tertiary bonding and in low-energy respiration and photosynthesis pathways.
It has recently been discovered that oligoribonucleotides will polymerize effectively on silicate clay surfaces, where positive ions of atoms such as Al make polar interactions with the phosphate backbones of RNA, stabilizing the molecules and making further polymerization possible in an ordered geometry. This constitutes a major breakthrough in the modelling of life's origins and demonstrates the sensitivity of the biogenic pathway to the subtle differences in electronegativity of the second-row covalent elements phosphorus and silicon.
Ionic Bifurcation.
The cations bifurcate in two phases: monovalent-divalent, and series (Na-K, Mg-Ca). Although ions such as K+ and Na+ are chemically very similar, their radii of hydration differ significantly enough to result in a bifurcation between their properties in relation to water structures and the membrane. Smaller Na+ and H3O+ require water structures to resolve their more intense electric fields. Larger K+ is soluble with less hydration, making it smaller in solution and more permeable to the membrane (King 1978). Ca2+ and Mg2+ have a similar divergence, Ca2+ having stronger chelating properties. This causes a crossed bifurcation between the two series, in which K+ and Mg2+ are intracellular, Mg2+ having a pivotal role in RNA transesterifications. Cl- remains the central anion along with organic groups. These bifurcations are the basis of membrane excitability and of the maintenance of concentration gradients in the intracellular medium which distinguish the living medium from the environment at large.
Transition Element Catalysis
These add d-orbital effects, forming a catalytic group. Almost all of the transition elements, e.g. Mn, Fe, Co, Cu, Zn, are essential biological trace elements (Frieden 1972), promote prebiotic syntheses (Kobayashi and Ponnamperuma 1985) and are optimal in their catalytic ligand-forming capacity and valency transitions. Zn2+ for example, by coupling to the PO4^3- backbone, catalyses RNA polymerization in prebiotic syntheses and occurs both in polymerases and in DNA-binding proteins. Both the Fe2+-Fe3+ transition and the spin-orbit coupling conversion of electrons into the triplet state in Fe-S complexes occur in electron and oxygen transport (McGlynn et al. 1964). Other metal atoms such as Mo and Mn have similar optimal functions, e.g. in N2 fixation.
Fig 3: (a) The perturbing effect of the neutral weak force results in violation of chiral symmetry in electron orbits. Without perturbation (i) the orbits are non-chiral, but the action of the Z0 results in a perturbing chiral rotation. (b) Autocatalytic symmetry-breaking causes random chiral bifurcation (i). Weak perturbation results in only one chiral form (iii) (King).
Chirality bifurcation.
Although the electromagnetic force has chiral symmetry, the electron also interacts via the neutral weak force when close to the nucleus. This perturbs the electronic orbit, causing it to become selectively chiral, fig 3(a) (Bouchiat & Pottier 1984, Hegstrom & Kondepudi 1990). In a polymeric system with competing D and L variants, in which there is negative feedback between the two chiral forms of polymerization, making the system unstable, the chiral weak force provides a symmetry-breaking perturbation. In a simulation, fig 3(b)(i), high [S][T] causes autocatalytic bifurcation of the system (ii), resulting in random symmetry-breaking into products D or L. Chiral weak perturbation (iii) results in one form only. The selection of D-nucleotides could have resulted in L-amino acids by a stereochemical association (Lacey et al. 1988).
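The competing-enantiomer dynamics described here can be sketched with a minimal Frank-type rate model. This is an illustrative simplification: the rate constants, concentrations and the chiral bias g below are invented for the sketch (the real weak-force bias is many orders of magnitude smaller) and are not taken from the simulation of fig 3.

```python
# Minimal Frank-type model: D and L enantiomers autocatalytically consume
# substrate S and mutually annihilate; a small bias g on the D channel stands
# in for the chiral weak-force perturbation. All constants are illustrative.

def simulate(g=0.0, steps=20000, dt=0.01):
    D, L, S = 0.01, 0.01, 10.0          # equal traces of both enantiomers
    k_auto, k_annih = 1.0, 1.0
    for _ in range(steps):
        dD = (k_auto * (1 + g) * D * S - k_annih * D * L) * dt
        dL = (k_auto * L * S - k_annih * D * L) * dt
        dS = -(k_auto * (1 + g) * D * S + k_auto * L * S) * dt
        D, L, S = D + dD, L + dL, S + dS
    return D, L

# Even a 1% bias drives the system essentially to one chiral form:
D, L = simulate(g=0.01)
excess = (D - L) / (D + L)   # enantiomeric excess approaches 1
```

With g = 0 this deterministic model stays exactly racemic; in the fluctuating setting of fig 3(b) the unbiased system instead breaks symmetry at random into D or L.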
Inner Circles (New Scientist, 8 Aug 98, p. 11) reports findings that there is a 17% net circular polarization of light in gas clouds in the Orion nebula where new stars are forming. Although this was infrared light, James Hough says it should also apply to ultraviolet light. This would explain the excess of L-amino acids found in the Murchison meteorite, suggesting a cosmic rather than accidental origin for the handedness of biological molecules on Earth.
Tertiary Interaction of Mineral Interface.
Both silicates such as clays and volcanic magmas have been the subject of intensive interest as catalytic or information-organizing adjuncts to prebiotic evolution. Clays have been proposed as a primitive genetic system and include both adsorbent and catalytic sites. The mineral interface involves crucial processes of selective adsorption, chromatographic migration and fractional concentration, which may be essential to explain how rich concentrations of nucleotide monomers could have occurred over geological time scales.
More recently a fundamental interaction between RNA and clays has been elucidated which appears to be central in enabling oligo-ribonucleotides to polymerize in an ordered way while bound to the positively charged metal groups in montmorillonite clays, bridging the gap between small random ribo-oligomers and RNA molecules of a length capable of self-replication.
Key polymerizations
Key polymerizations such as those of HCN and HCHO are proposed to generate a series of generic bifurcation structures through combined autocatalytic and quantum bond effects, which include major components of the metabolism, including nucleotides, polypeptides and key membrane components. These will be examined in the next section.
The astronomical perspective
The presence of HCN and HCHO clouds in the Orion nebula (left) attests to the ubiquitous nature of the energetic monomeric precursors of biomolecules, in particular RNA. CO clouds (right) occupy a protosolar disc 20 times the size of the orbit of Pluto (King, Scientific American).
The occurrence of the key precursors of biomolecules is not in any way confined to Earth or to the specific conditions of Earth. Much of the organic material on Earth is believed to have peppered down from comets and carbonaceous meteorites, especially earlier in the evolution of the solar system, when less of the original material from the proto-solar gas and dust cloud had been swept away by collision. Protosolar gas clouds in the Orion nebula are known to contain precisely HCN and HCHO, as shown above. Certain parts of the universe give off an infrared signal not unlike that of carbohydrates. Interstellar dust grains are also known to contain organic molecules. In fact the occurrence of organic molecules is essentially ubiquitous around all second-generation sun-like stars containing a mix of the elements of nucleosynthesis formed from the material of previous supernovas.
Indeed their presence is so commonplace, and the incidence of life on Earth so early, that the possibility that life arose prior to the formation of Earth cannot entirely be ruled out. Cosmological biogenesis is however ideally suited to the conditions actually occurring on Earth, with plentiful water, a temperature just above the liquefaction point of water, a good supply of organic molecules and a steady mild solar input.
Planets Venus to Uranus: It is clear that the extreme variety of our own planetary system, and the similar variety of the many Galilean satellites, spells out a very important feature of astronomical law. In four-dimensional space-time gravitational forces follow an inverse square law. In addition to the natural graduation of temperature noted above in the protosolar disc, providing for life as we know it on the outer edge of the inner region where water is liquid, the chaotic variation of conditions produced by such non-linear force fields generates a situation of extreme variety in the universe, much as the Mandelbrot set does. While this may seem to make the other planets of our system a little alien, it does guarantee that the universe will explore its entire phase space of possibilities, virtually guaranteeing it will leave no stone (planet) unturned in its exploration of the possibilities that life will emerge (Hubble public gallery).
Just as one can consider the non-linearities of the electromagnetic force in developing the fractal dynamics of molecules, one can also appreciate the significance of non-linearities in gravitation in forming the rich diversity of planets and satellites we see in our own solar system. Other stars now seem to be quite richly endowed with planets, but these again show very marked variation. Such marked variation is characteristic of non-linear dynamics, which serves to accentuate existing differences, for example in temperature and composition between the planets, to cause unique effects, such as the highly acidic, electrified runaway-greenhouse atmosphere of Venus.
Far from these extreme variations reducing the likelihood of finding life on other planets, what they demonstrate is that on an astronomical scale, as well as the microscopic, the universe behaves very much like a Mandelbrot set, establishing dynamics of uniqueness and diversity which explore the dynamical space of possibilities.
On to Biocosmology Part 2: Central Polymerization Pathways?
Self-polarization effects in spherical inverted core–shell quantum dot
Part of the following topical collections:
1. Focus on Optics and Bio-photonics, Photonica 2017
A single electron in a spherical shell with infinite potential barriers is investigated. The Schrödinger equation in the effective mass approximation for this spherical heterosystem is solved to obtain eigenenergies and the corresponding wave functions. In the case of an infinite potential barrier the ground state energy depends only on the shell width. The self-polarization potential is obtained by solving the Poisson equation in three concentric spherical regions: core, shell and surrounding medium. The shell self-polarization energy depends on the geometry (core and shell radius) and on the dielectric mismatch at the quantum dot boundaries. The self-polarization energy is treated as a perturbation. One-electron ground state energy results are presented in this paper. CdSe is placed in the shell. The surrounding medium is a dielectric of smaller permittivity (vacuum or water), which is the most realistic case. The influence of the core dielectric permittivity on the ground state energy is analysed. The self-polarization corrections to the ground state energy for different compositions, i.e. core and shell radii, are presented. For a smaller core radius and the same shell thickness (which implies the same unperturbed ground state energy) the energy correction to the perturbed energy state is bigger, for a fixed value of the core dielectric permittivity. An increase of the core dielectric permittivity produces a decrease of the energy correction.
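The infinite-barrier result quoted in the abstract (ground state energy depending only on the shell width) has a closed form: for l = 0 the radial problem maps onto a one-dimensional box of length w = R2 - R1, giving E1 = ħ²π²/(2 m* w²), independent of the core radius. A quick numerical sketch; the CdSe electron effective mass m* = 0.13 m0 used below is a commonly quoted literature value assumed here, not a parameter taken from this paper:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M0 = 9.1093837015e-31    # free electron mass, kg
EV = 1.602176634e-19     # joules per electron-volt

def shell_ground_state_eV(width_nm, m_eff=0.13):
    """Ground state (l = 0) of an electron between infinite barriers at
    r = R1 and r = R2: E1 = hbar^2 pi^2 / (2 m* w^2), w = R2 - R1.
    Note the energy depends on the shell width only, not on R1 itself."""
    w = width_nm * 1e-9
    return (HBAR * math.pi / w) ** 2 / (2 * m_eff * M0) / EV

e_2nm = shell_ground_state_eV(2.0)   # roughly 0.7 eV for a 2 nm shell
```

The 1/w² scaling is why, in the text above, the same shell thickness implies the same unperturbed ground state energy regardless of core radius.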
Keywords: Core/shell nanostructure · Poisson equation · Self-polarization energy
This work is supported by the Serbian Ministry of Education and Science, under Project No. III45003.
Authors and Affiliations
Institute of Physics, University of Belgrade, Belgrade, Serbia
Quantum Theories
T is for ... Tunnelling
This happens when quantum objects “borrow” energy in order to bypass an obstacle such as a gap in an electrical circuit. It is possible thanks to the uncertainty principle, and enables quantum particles to do things other particles can’t.
M is for ... Many Worlds Theory
Some researchers think the best way to explain the strange characteristics of the quantum world is to allow that each quantum event creates a new universe.
Q is for ... Quantum biology
A new and growing field that explores whether many biological processes depend on uniquely quantum processes to work. Under particular scrutiny at the moment are photosynthesis, smell and the navigation of migratory birds.
S is for ... Schrödinger’s Cat
A hypothetical experiment in which a cat kept in a closed box can be alive and dead at the same time – as long as nobody lifts the lid to take a look.
G is for ... Gluon
These elementary particles hold together the quarks that lie at the heart of matter.
C is for ... Cryptography
People have been hiding information in messages for millennia, but the quantum world provides a whole new way to do it.
P is for ... Planck's Constant
This is one of the universal constants of nature, and relates the energy of a single quantum of radiation to its frequency. It is central to quantum theory and appears in many important formulae, including the Schrödinger Equation.
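The relation referred to here is E = hν. As a sketch, the energy of one quantum of green light (taking an illustrative frequency of about 540 THz) works out to roughly 2.2 electron-volts:

```python
H = 6.62607015e-34    # Planck's constant, J*s (exact by SI definition)
EV = 1.602176634e-19  # joules per electron-volt

def photon_energy_eV(freq_hz):
    """Energy of one quantum of radiation: E = h * nu, in electron-volts."""
    return H * freq_hz / EV

green = photon_energy_eV(5.4e14)   # ~540 THz, green light: about 2.2 eV
```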
K is for ... Kaon
These are particles that carry a quantum property called strangeness. Some fundamental particles have the property known as charm!
G is for ... Gravity
Our best theory of gravity no longer belongs to Isaac Newton. It’s Einstein’s General Theory of Relativity. There’s just one problem: it is incompatible with quantum theory. The effort to tie the two together provides the greatest challenge to physics in the 21st century.
D is for ... Dice
Albert Einstein decided quantum theory couldn’t be right because its reliance on probability means everything is a result of chance. “God doesn’t play dice with the world,” he said.
I is for ... Interferometer
Some of the strangest characteristics of quantum theory can be demonstrated by firing a photon into an interferometer: the device’s output is a pattern that can only be explained by the photon passing simultaneously through two widely-separated slits.
C is for ... Computing
The rules of the quantum world mean that we can process information much faster than is possible using the computers we use now.
S is for ... Schrödinger Equation
This is the central equation of quantum theory, and describes how any quantum system will behave, and how its observable qualities are likely to manifest in an experiment.
R is for ... Reality
Since the predictions of quantum theory have been right in every experiment ever done, many researchers think it is the best guide we have to the nature of reality. Unfortunately, that still leaves room for plenty of ideas about what reality really is!
J is for ... Josephson Junction
This is a narrow constriction in a ring of superconductor. Current can only move around the ring because of quantum laws; the apparatus provides a neat way to investigate the properties of quantum mechanics.
H is for ... Hidden Variables
One school of thought says that the strangeness of quantum theory can be put down to a lack of information; if we could find the “hidden variables” the mysteries would all go away.
U is for ... Universe
To many researchers, the universe behaves like a gigantic quantum computer that is busy processing all the information it contains.
E is for ... Entanglement
When two quantum objects interact, the information they contain becomes shared. This can result in a kind of link between them, where an action performed on one will affect the outcome of an action performed on the other. This “entanglement” applies even if the two particles are half a universe apart.
R is for ... Radioactivity
The atoms of a radioactive substance break apart, emitting particles. It is impossible to predict when the next particle will be emitted as it happens at random. All we can do is give the probability that any particular atom will have decayed by a given time.
F is for ... Free Will
Ideas at the heart of quantum theory, to do with randomness and the character of the molecules that make up the physical matter of our brains, lead some researchers to suggest humans can’t have free will.
W is for ... Wave-particle duality
It is possible to describe an atom, an electron, or a photon as either a wave or a particle. In reality, they are both: a wave and a particle.
Q is for ... Qubit
One quantum bit of information is known as a qubit (pronounced Q-bit). The ability of quantum particles to exist in many different states at once means a single quantum object can represent multiple qubits at once, opening up the possibility of extremely fast information processing.
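The scaling behind this entry can be made concrete: n qubits are described by one complex amplitude per basis state, 2^n in all, so a classical description doubles in size with every added qubit. A minimal sketch in plain Python (no quantum library; the "plus" state below is the standard equal superposition of |0> and |1>):

```python
import math

def n_amplitudes(n_qubits):
    # A state of n qubits is a list of 2**n complex amplitudes,
    # one per basis state |00...0> through |11...1>.
    return 2 ** n_qubits

def total_probability(state):
    # Squared magnitudes of the amplitudes are probabilities; they sum to 1.
    return sum(abs(a) ** 2 for a in state)

# One qubit in an equal superposition of |0> and |1>:
plus = [1 / math.sqrt(2), 1 / math.sqrt(2)]
```

The exponential growth of `n_amplitudes` with `n_qubits` is exactly the "multiple qubits at once" resource the entry alludes to.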
A is for ... Atom
This is the basic building block of matter that creates the world of chemical elements – although it is made up of more fundamental particles.
V is for ... Virtual particles
Quantum theory’s uncertainty principle says that since not even empty space can have zero energy, the universe is fizzing with particle-antiparticle pairs that pop in and out of existence. These “virtual” particles are the source of Hawking radiation.
S is for ... Superposition
Quantum objects can exist in two or more states at once: an electron in superposition, for example, can simultaneously move clockwise and anticlockwise around a ring-shaped conductor.
A is for ... Alice and Bob
In quantum experiments, these are the names traditionally given to the people transmitting and receiving information. In quantum cryptography, an eavesdropper called Eve tries to intercept the information.
B is for ... Bell's Theorem
In 1964, John Bell came up with a way of testing whether quantum theory was a true reflection of reality. In 1982, the results came in – and the world has never been the same since!
P is for ... Probability
Quantum mechanics is a probabilistic theory: it does not give definite answers, but only the probability that an experiment will come up with a particular answer. This was the source of Einstein’s objection that God “does not play dice” with the universe.
N is for ... Nonlocality
When two quantum particles are entangled, it can also be said they are “nonlocal”: their physical proximity does not affect the way their quantum states are linked.
I is for ... Information
Many researchers working in quantum theory believe that information is the most fundamental building block of reality.
L is for ... Light
We used to believe light was a wave, then we discovered it had the properties of a particle that we call a photon. Now we know it, like all elementary quantum objects, is both a wave and a particle!
T is for ... Teleportation
Quantum tricks allow a particle to be transported from one location to another without passing through the intervening space – or that’s how it appears. The reality is that the process is more like faxing, where the information held by one particle is written onto a distant particle.
Z is for ... Zero-point energy
Even at absolute zero, the lowest temperature possible, nothing has zero energy. In these conditions, particles and fields are in their lowest energy state, with an energy proportional to Planck’s constant.
O is for ... Objective reality
Niels Bohr, one of the founding fathers of quantum physics, said there is no such thing as objective reality. All we can talk about, he said, is the results of measurements we make.
H is for ... Hawking Radiation
In 1975, Stephen Hawking showed that the principles of quantum mechanics would mean that a black hole emits a slow stream of particles and would eventually evaporate.
L is for ... Large Hadron Collider (LHC)
At CERN in Geneva, Switzerland, this machine is smashing apart particles in order to discover their constituent parts and the quantum laws that govern their behaviour.
B is for ... Bose-Einstein Condensate (BEC)
At extremely low temperatures, quantum rules mean that atoms can come together and behave as if they are one giant super-atom.
W is for ... Wavefunction
The mathematics of quantum theory associates each quantum object with a wavefunction that appears in the Schrödinger equation and gives the probability of finding it in any given state.
Y is for ... Young's Double Slit Experiment
In 1801, Thomas Young proved light was a wave, and overthrew Newton’s idea that light was a “corpuscle”.
U is for ... Uncertainty Principle
One of the most famous ideas in science, this declares that it is impossible to know all the physical attributes of a quantum particle or system simultaneously.
X is for ... X-ray
In 1923 Arthur Compton shone X-rays onto a block of graphite and found that they bounced off with their energy reduced exactly as would be expected if they were composed of particles colliding with electrons in the graphite. This was the first indication of radiation’s particle-like nature.
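The energy loss Compton observed follows what is now the standard Compton relation, Δλ = (h / mₑc)(1 − cos θ). A quick check of the size of the effect (constants are CODATA values; the picometre scale explains why X-rays, with wavelengths of similar size, were needed to see it):

```python
import math

H = 6.62607015e-34      # Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
C = 2.99792458e8        # speed of light, m/s

def compton_shift_pm(theta_deg):
    """Wavelength increase of a photon scattered through theta, in picometres.
    The prefactor h/(m_e c) is the electron's Compton wavelength, ~2.43 pm."""
    dlam = (H / (M_E * C)) * (1 - math.cos(math.radians(theta_deg)))
    return dlam * 1e12

shift_90 = compton_shift_pm(90)   # one Compton wavelength at 90 degrees
```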
A is for ... Act of observation
Some people believe this changes everything in the quantum world, even bringing things into existence.
M is for ... Multiverse
Our most successful theories of cosmology suggest that our universe is one of many universes that bubble off from one another. It’s not clear whether it will ever be possible to detect these other universes.
R is for ... Randomness
Unpredictability lies at the heart of quantum mechanics. It bothered Einstein, but it also bothers the Dalai Lama.
D is for ... Decoherence
Unless it is carefully isolated, a quantum system will “leak” information into its surroundings. This can destroy delicate states such as superposition and entanglement. |
Sunday, 24 February 2013
2nd Coming of the 2nd Law
• dK/dt = W - D
• dE/dt = - W + D
• D ≥ 0
• dE + p dV = T dS = dQ
• dS ≥ 0
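The bookkeeping in the first two relations can be checked numerically: whatever the work rate W and however the nonnegative dissipation D is chosen, dK/dt = W - D and dE/dt = -W + D force the total K + E to stay constant. The particular W and D below are arbitrary illustrative choices, not taken from any model on this page:

```python
import math

def integrate(steps=10000, dt=1e-3):
    # dK/dt = W - D and dE/dt = -W + D: whatever drains from K lands in E.
    K, E, t = 1.0, 2.0, 0.0
    for _ in range(steps):
        W = 0.5 * math.cos(t)   # arbitrary sign-indefinite work rate
        D = 0.1 * K * K         # arbitrary dissipation, nonnegative by construction
        delta = (W - D) * dt
        K += delta              # kinetic energy bookkeeping
        E -= delta              # internal energy gains exactly what K loses
        t += dt
    return K, E
```

Since D ≥ 0, the dissipation term transfers energy one way only, from K to E; that one-way street is what the entropy inequality dS ≥ 0 expresses in this formulation.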
PS2 Lubos refers to Bohr's view of physics:
24 comments:
1. Exactly where is the confusion and mystery in the last link you give, to the blog post by Motl?
Reading the post, it lucidly explains where the confusion about the second law originates (you seem to run into this trap yourself, looking at your presented theory) and it dispels any mystery. Or it certainly will in the near future: one of the first comments mentions coarse-graining, and the original writer mentions a will to write a specific blog post touching upon this.
Oh, and a final question. How does your version of the second law work in a solid?
2. Friction in a solid corresponds to turbulent diffusion in a gas. The confusion with statistics is that statistics is done by humans, not by physical objects. If you believe in physics independent of human observation, statistics is not an option. If you think that you are the center of the universe, statistics is fine.
3. Did you read the blog-post that you refer to?
4. The post is a repetition of standard statistical mechanics, which in my opinion is not physics.
5. That was the point...
Statistical mechanics is neither confusing nor mysterious.
That you don't consider it to be physics is more your loss than a loss for those who use it as an indispensable tool in describing and predicting nature.
6. Yes, maybe it reflects a limitation of my mind. But all that glitters in mechanics is not gold, according to Schrödinger and Einstein...
7. In your opinion, what kind of theory should one use on length and time scales where continuum mechanics breaks down? What is the physical meaning of D in those regimes?
The theory you present here looks exactly like the second law defined from a dissipation function originating from mass, momentum and energy conservation in a fluid. This is in the curriculum of an ordinary master-level course in continuum mechanics. What is new?
8. A proper version of the Schrödinger equation may be used. This is also a continuum model and as such subject to the presence of D, as a reflection of the impossibility of exactly satisfying the conservation law expressed by Schrödinger's equation, which reflects the inevitable appearance of turbulence in systems with many components/particles/atoms. The novelty in our approach is to give the dissipation a meaning, as reflecting an impossibility of satisfying the exact conservation laws in finite precision computation/finite precision physics, in which by necessity local mean values will be taken, which is the essence of turbulence. This also gives a way to understand turbulence, as shown in my book Computational Turbulent Incompressible Flow. To simply assume positive dissipation/friction, without explaining where the dissipation/friction is coming from, does not answer the key question of why there is a 2nd law and why there is dissipation/friction. This is what I seek to do. It can be viewed as a very primitive form of statistics without the drawbacks of statistical mechanics, with its horrendous calculations of numbers of microstates. The irreversibility of smashing an egg is then explained as an impossibility of realizing the high precision required in finite time.
9. So you do acknowledge that statistics is unavoidable?
Further, there seem to be a lot of unproven assumptions and loose ends. Have you tried this approach on nano-systems? Does it work when applied to all the empirical data connected to nano-science?
10. Claes, I do not disagree with your assessment. The reason I pointed to Lubos's post was that he clearly determines, using statistics, that heat flows one way, as does time, and that DLR cannot occur in the macrostate if the atmospheric temperature is less than the surface temperature, because in his terms that would decrease entropy.
I find Lubos's string theory interesting. I think he is very knowledgeable in his field, but I recognise that he has no experience in engineering science such as heat and mass transfer, or fluid dynamics. For example he makes some incorrect assumptions in his post about Venus, http://motls.blogspot.com.au/2010/05/hyperventilating-on-venus.html, but at least he comes up with the answer that there is no significant "greenhouse effect" on Venus. His post and others have discredited the gurus of AGW such as Sir John Houghton (who included a greenhouse Venus in his poor-quality book "The Physics of Atmospheres", 1986, 2nd edition).
11. No, it is not statistics but finite precision computation, which is like chopping decimals up or down, a very primitive and hence understandable form of statistics, very different from counting numbers of microstates. Finite precision computation gives classical deterministic models a new life, to the benefit of mankind. Applications are endless.
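The "chopping decimals" idea in comment 11 can be illustrated in a few lines: iterate a deterministic chaotic map but round the state to three decimals at every step (the map and the precision are chosen purely for illustration, not taken from the commenter's work). Distinct states merge onto the same coarse value, after which their futures coincide forever, so the evolution cannot be run backwards uniquely — a loss of information of exactly the kind the irreversibility argument invokes:

```python
def step(x, digits=3):
    # Logistic map (chaotic for r = 3.9), then chop the state to fixed decimals.
    return round(3.9 * x * (1 - x), digits)

a, b = 0.20004, 0.20006   # two distinct microstates
# One rounded step maps both onto the same coarse value (0.624); from then on
# their trajectories are identical, and no inverse map can tell them apart.
```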
12. In my view statistics is not real objective physics but rather subjective physics of the mind of the observer, which at least for macroscopic physics is against the basic principle of science of objectivity and repeatability.
13. If you come to the same result as Boltzmann and Planck with your finite precision computation method, what then is the problem? That is very good, I think. The result is even more true the more ways you can prove it. But with your way of calculating things, you can never prove the nonexistence of DLR, however much you try. Planck's law tells us it exists, and furthermore Kirchhoff's radiation law tells us that DLR can be absorbed by matter at the surface of the earth. If you say something else, you are denying these laws. Are you?
14. DLR violates the 2nd law, and thus cannot exist as a physical phenomenon, only as a phantasm in twisted minds.
15. Then Planck's and Kirchhoff's radiation laws violate the 2nd law! Do you really believe that?
16. DLR violates the 2nd Law. If Planck were alive he could tell us whether he insists that his radiation law includes DLR. In his absence we have to think for ourselves, and this is what I have done.
17. Claes,
you write: "DLR violates the 2nd Law."
Why? DLR is a consequence of the temperature of the atmosphere.
So why is it violating the 2nd law?
Can you explain this without just repeating your claim?
Best regards
18. Transfer of heat energy from cold to warm without external forcing violates the 2nd law.
19. (I repeat:)
CO2 (in the atmosphere) radiates at 667 cm-1. Also, according to Planck's law, the earth's radiation spectrum contains 667 cm-1. Then Kirchhoff's law tells us that the earth can absorb this radiation. Do you deny this?
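The spectral claim in this comment can be checked against Planck's law directly. A minimal sketch (the 288 K surface temperature and the 220 K temperature of a colder CO2-emitting layer are illustrative assumptions, not values from the thread):

```python
import math

H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light, m/s
KB = 1.380649e-23     # Boltzmann constant, J/K

def planck_wavenumber(nu_cm, T):
    """Planck spectral radiance per unit wavenumber, W m^-2 sr^-1 (m^-1)^-1.
    nu_cm is the wavenumber in cm^-1; T is the temperature in kelvin."""
    nu = nu_cm * 100.0  # cm^-1 -> m^-1
    return 2.0 * H * C**2 * nu**3 / (math.exp(H * C * nu / (KB * T)) - 1.0)

# Radiance at the 667 cm^-1 CO2 bending-mode band:
b_surface = planck_wavenumber(667.0, 288.0)  # assumed ~288 K surface
b_co2 = planck_wavenumber(667.0, 220.0)      # assumed colder emitting layer

# The 288 K Planck curve is indeed nonzero (close to its peak) at 667 cm^-1,
# and the warmer body out-radiates the colder one at every wavenumber.
print(b_surface, b_co2)
```

This only confirms the narrow point that the surface emission spectrum overlaps the CO2 band; it takes no side in the thermodynamic argument of the thread.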
20. I repeat: If you are convinced that non-forced heat transfer from cold to warm is possible, I suggest that you present your ideas to Vattenfall and study the reaction, instead of bombarding me with silly questions.
21. Claes, I don't say that heat is transferred from CO2 to the earth. I just say that, in the first step, radiation from CO2 evidently, according to Planck's and Kirchhoff's laws, is absorbed by the earth. In the next step, of course, the earth is reemitting this energy, or more, back to some receiver, according to the same laws. This means that the earth can't be directly warmed by colder matter, like CO2, but DLR from a colder body exists. Is this so hard to understand? This also means that the 2nd law is not applicable in the first step above, but after the second step. The 2nd law is best suited for macroscopic systems. It was invented before the atomic behaviour was fully clear.
I don´t think my view on things is more silly than yours, rather the opposite. Can you agree on something I have written here.
22. Lasse H, it seems you do not want to understand or open your closed thinking. I have suggested that you read chapter 4 (Thermodynamics) and chapter 5 (Heat & Mass Transfer) of Perry's Chemical Engineers' Handbook, which has been in existence since 1934 with many revisions and editions to keep it up to date. Marks' Mechanical Engineering Handbook has similar sections, but less detail about the work of Prof Hoyt Hottel, who carried out a vast amount of research on the absorption and emission of gases (from combustion) containing water vapor and carbon dioxide. Clearly you do not understand Kirchhoff's law, or radiative & convective heat transfer (are you aware of the Nusselt number? I think not). What you say above is wrong. You are just repeating the nonsense spread by alarmists who a) have no qualifications in engineering science and b) have had no experience of the design and measurement of combustion and heat transfer systems and equipment.
23. Cementafriend: Planck's and Kirchhoff's laws speak for themselves. I have not invented them.
24. The Uranus Dilemma
Consideration of the planet Uranus very clearly indicates that radiative models (and any type of "Energy Budget" similar to those produced by the IPCC) can never be used to explain observed temperatures on Uranus. We can deduce that there must be some other physical process which transfers some of the energy absorbed in the upper levels of the Uranus atmosphere from the meagre 3 W/m^2 of Solar radiation down into its depths, and that same mechanism must "work" on all planets with significant atmospheres.
Uranus is an unusual planet in that there is no evidence of any internal heat generation. Yet, as we read in this Wikipedia article, the temperature at the base of its (theoretical) troposphere is about 320 K - quite a hot day on Earth. But it gets hotter still as we go further down in an atmosphere that is nearly 20,000 km in depth. Somewhere down there it is thought that there is indeed a solid core with about half the mass of Earth. The surface of that mini Earth is literally thousands of degrees. And of course there's no Solar radiation reaching anywhere near that depth.
Think about it, and I'll be happy to answer any questions - and explain what actually happens, not only on Uranus, Venus, Jupiter etc, but also on Earth. |
Imagine that we find very special 3D cameras on sale. We get a few hundred of them and give them to several groups of students for a weekend project. Each student gets a camera and just one hydrogen atom in a special box. Their task is to take as many photographs of their hydrogen atom as possible. Well, you know that in the real world this would not be possible for many reasons, but there are few limits to our imagination.
We collect all the digital pictures taken and add them together in Photoshop 3D in such a way that all nuclei are perfectly aligned at the point where the Cartesian axes intersect. In our special pictures, the nucleus is invisible, but electrons show up as dots. Our composite image (left below) represents a measure of the probability of finding the electron in different volumes of space around the nucleus of the ground-state hydrogen atom. You may easily discern the probability density pattern. It has spherical symmetry, with high density toward the center (nucleus) and low and diminishing densities at the outer edges of our observation box. By taking thousands of pictures of thousands of electrons in thousands of individual positions in thousands of atoms, we have learned about the probability of finding one electron in any “space” (unit volume cube) around the nucleus.
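The photograph-collecting experiment can be simulated in a few lines. For the ground state, the electron-nucleus distance r (in bohr) follows the radial distribution 4r^2 e^(-2r), which is exactly a Gamma distribution with shape 3 and scale 1/2 (a standard quantum-mechanics identity, not something stated on this page), so Python's standard library can generate the "snapshots":

```python
import random
from collections import Counter

random.seed(0)

# Each "photograph" records one electron position; for the 1s state the
# electron-nucleus distance r (in bohr) has density 4 r^2 exp(-2r),
# i.e. a Gamma distribution with shape 3 and scale 1/2.
N = 100_000
snapshots = [random.gammavariate(3.0, 0.5) for _ in range(N)]

mean_r = sum(snapshots) / N     # exact quantum-mechanical value: <r> = 1.5 bohr

# Histogram in 0.1-bohr bins; the most populated bin marks the mode,
# which should sit near 1 bohr (0.529 angstrom).
bins = Counter(int(r / 0.1) for r in snapshots)
mode_r = 0.1 * (bins.most_common(1)[0][0] + 0.5)

print(mean_r, mode_r)
```

Note the distinction the page develops later: the *average* distance (1.5 bohr) differs from the *most probable* distance (1 bohr) because more volume is available at larger r.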
[cross-sections of electron distributions in the hydrogen atom]
Interestingly, collections of pictures from students from some groups showed different patterns (above on the right). It turns out that these groups were given boxes with an extra energy supply (batteries included!) that kept the hydrogen atoms in various excited states. The ground-state distribution (a) is repeated to match the size of the others, and all are shown in cross-sections to reveal the internal details. Some of these probability patterns (b, d) are still spherical, although they have larger spreads than that observed for the ground state hydrogen atom (a). They also have some white spaces within their interiors, illustrating regions where electrons are not allowed to be (nodes). Other probability distributions (c, e, f) are a bit more complicated. They have “directionality” and more complex node patterns.
These pictures are our first exposure to the different shapes and sizes of hydrogen orbitals. All orbitals with spherical symmetry (a, b, d) are s-type orbitals: 1s, 2s, and 3s, with quantum numbers n and l (=0) defined. The dumbbell shapes (c, e) are p-type orbitals, 2p and 3p, with nodes at the nucleus (l=1). They are representatives of a 3-piece set in each case. The last picture (f) is a 3d orbital (l=2) that belongs to a 5-member family of even more diverse profiles. In the hydrogen atom, orbitals with the same n are degenerate, so, for example, 3s, 3p, and 3d all have the same energy. This means that in our imaginary experiment we would have great difficulty sorting the pictures from the students given boxes with n = 3 hydrogen atoms (where a total of 9 orbitals, i.e. different electron distributions, have the same energy!). In reality, the shapes, sizes and directions of orbitals are obtained by solving the Schrödinger equation.
The applications of the Schrödinger equation, HΨ = EΨ, require advanced calculus, but its solutions and some important basic ideas may often be presented graphically in a qualitative but useful manner. At first glance the equation looks quite simple, but the “quantum devil” is in the details. It all starts with de Broglie's idea that electrons in atoms are described by standing waves. The mathematical expressions of these waves are called wavefunctions and are usually represented by a Greek Ψ (psi). The waves cannot be measured directly (they have no direct physical meaning), but their squares (Ψ2) represent the probability of an electron being in a given tiny volume of space (probability density). The wavefunctions for an atom (or a molecule, or any quantum system) are found by solving the Schrödinger equation. In that equation, “H” hides a set of mathematical operators (instructions on how to manipulate the wavefunction) that calculate all the energies of the system, including all electrostatic interactions between the nuclei and electrons and the kinetic energies of all electrons. The results of the calculations are the energies of the quantum system, E, and the wavefunctions describing it.
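To see how solving the Schrödinger equation produces both quantized energies and wavefunctions, here is a minimal numerical sketch for the simplest textbook case, a particle in a one-dimensional box (atomic units; the shooting-method approach and the unit box width are illustrative choices, not anything used on this page):

```python
import math

def shoot(E, L=1.0, steps=2000):
    """Integrate -psi''/2 = E*psi from x=0 with psi(0)=0 and return psi(L).

    A standing wave fits the box only when psi(L) is also zero, which
    happens just for special (quantized) values of E.
    """
    h = L / steps
    psi_prev, psi = 0.0, h      # psi(0) = 0, arbitrary small initial slope
    for _ in range(steps - 1):
        psi_prev, psi = psi, 2.0 * psi - psi_prev - (2.0 * E) * psi * h * h
    return psi

# Bisect for the lowest energy with psi(L) = 0 (ground state of a unit box).
lo, hi = 3.0, 7.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(mid) > 0:
        lo = mid
    else:
        hi = mid

E1 = 0.5 * (lo + hi)
print(E1, math.pi**2 / 2)   # numerical vs exact ground-state energy (hartree)
```

The same logic, in spherical coordinates and with the Coulomb potential, is what produces the hydrogen orbitals discussed below.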
Here is our standing wave illustration: tall in the middle, then dropping below the level of the undisturbed water surface (serving as our reference plane), to climb again, and drop again in a series of diminishing concentric hills and valleys. If we produce a cross-section trace of the water’s surface on graph paper, with the height of the water’s surface represented on the vertical axis (z) and the separation from the center on the horizontal axis (x), we get a mathematical function that has positive algebraic values in the upper part of the plot and negative values below the x axis.
[left: water wave; right: two-dimensional representation of the wave]
The function shown above (right) is two-dimensional; for each point [x] we have a value of the wavefunction as shown on the z axis [Ψ(x)]. If we spin our function around the z axis we can recreate the water’s surface. That surface is three-dimensional; for each point [x,y] in the xy plane the function has a value expressed on the z axis [Ψ(x,y)]. To transition to atomic wavefunctions we have to add another dimension. For each point in space [x,y,z] around the nucleus we have to specify the value of our wavefunction [Ψ(x,y,z)]. Unfortunately, displaying things in four dimensions is rather difficult. We will rely on various cross-sections to reduce dimensionality (as we did in our water-wave example here) or use other graphical “tricks” to simplify the presentation.
The probability of finding an electron in a minuscule volume ("cube") of space around a point [x,y,z] is equal to the square of the value of the wavefunction at this point. We need to use these tiny volumes (dxdydz cubes, for those mathematically inclined) as points themselves have no volume. This probability density (probability per unit volume) is also called electron density and can be visualized as a fraction of an electron in that minuscule volume cube. Notice that the squared values are never negative, just as probabilities cannot be, even if the wavefunctions may be negative in certain regions of space. Electron density is a directly observable quantity, and many modern techniques allow us to probe it for atoms and molecules.
Now we are ready to explore some specific solutions of the Schrödinger equation for the hydrogen atom. All plots have the same horizontal scale, and, within sets, the same vertical scale. You may click on any of the plots to enlarge them for side-by-side comparisons. The distance from the nucleus, r, is measured in atomic units (bohr, ao) and in Å. The functions are plotted in both directions along the horizontal axis in a way analogous to our water wave. The specific direction in space and the sign of r are arbitrary and irrelevant for orbitals with spherical symmetry.
1s wavefunction has no nodes and drops off rapidly with r. The top part of the plot was cut off to show all wavefunctions on the same scale.
2s wavefunction has one node and spreads farther out from the nucleus. It has regions with opposite algebraic signs of the wavefunction.
3s wavefunction has two nodes (i.e. three separate regions with different algebraic signs) and stretches far from the nucleus.
These wavefunctions have the highest value at the origin (nucleus) and they drop off equally in all directions as r grows. The wavefunctions for 2s and 3s extend further from the center than 1s, have regions of positive and negative values, and have nodes in spots where the wavefunctions change their sign.
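The node counts described above can be verified from the standard closed-form hydrogen radial functions R10, R20 and R30 (textbook expressions in atomic units, assumed here rather than printed on this page):

```python
import math

# Hydrogen radial wavefunctions for n = 1, 2, 3 and l = 0 (atomic units).
def R10(r): return 2.0 * math.exp(-r)
def R20(r): return (1.0 / (2.0 * math.sqrt(2.0))) * (2.0 - r) * math.exp(-r / 2.0)
def R30(r): return (2.0 / (81.0 * math.sqrt(3.0))) * (27.0 - 18.0 * r + 2.0 * r**2) * math.exp(-r / 3.0)

def radial_nodes(R, r_max=20.0, steps=4000):
    """Count sign changes of R(r) on (0, r_max): the radial nodes."""
    rs = [r_max * (i + 1) / steps for i in range(steps)]
    signs = [R(r) > 0 for r in rs]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

print(radial_nodes(R10), radial_nodes(R20), radial_nodes(R30))  # 0, 1, 2
```

The 2s node falls at exactly r = 2 bohr, and the two 3s nodes at about 1.9 and 7.1 bohr, matching the "spots where the wavefunctions change their sign" in the plots.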
The orbital cross-sections are aligned with the corresponding electron density 2D plots (below). The circular (i.e. spherical in 3D) nodes are readily apparent.
1s electron density profile is quite compact; beyond 3 Å the values are very close to zero.
2s electron density is more spread out, with a node around 2 Å.
3s electron density has two nodes and stretches beyond 10 Å.
The squares of the wavefunctions represent electron density, i.e. the probability of finding an electron in a tiny volume of space around each point of atomic territory. These are given as fractional numbers (probabilities) per unit volume (in atomic units). The squares of the wavefunctions are always positive (or zero), as probabilities must be.
The peaks in radial probability plots correspond to the most probable distance for the electron. For 1s it is 1 bohr (ao = 0.529 Å).
The 2s radial probability has two distinct peaks, with the larger (the most probable) at ca. 3 Å.
For the 3s orbital the most probable distance of the electron is about 7 Å, but the orbital stretches well beyond 10 Å.
In the radial probability plots, all the electron density of the tiny cubes at a given r is added up to provide the total probability of a given separation between the nucleus and the electron. These plots demonstrate well the increasing spread of electron density with n and the normalization of atomic orbitals (the total area under each plot equals unity).
Since the s orbitals have spherical symmetry, i.e. they stretch equally from the nucleus in all directions of space, we may ask what the most probable separation distance is between the electron and the nucleus. Looking at the electron density plots, an intuitive answer might be that the electron should “statistically” be at (or very, very near) the proton. That is not correct, however. Although the electron density is high at or near the nucleus, that volume of space is very small. Electron density is expressed as probability per volume element (dxdydz), i.e. a small “cube” of space. The number of such cubes is very small at the center, but increases with r. At a given r, the probability function is simply [Ψ(r)]2 multiplied by the surface area of the sphere with radius r (i.e. multiplied by 4πr2). In other words, we add up the probabilities of all the cubes contained in a thin spherical shell ("onion layer") of radius r. The results of these calculations are shown by the last set of functions above, called radial probability. Close to the nucleus we have high values of electron density, but few volume cubes. At larger distances the electron density drops off, but the number of volume cubes increases as our spherical shell gains in radius. As a result, the most probable distance for the electron in the ground state is 0.529 Å, i.e. 1 bohr, which is the atomic unit of distance and corresponds to the radius of the Bohr orbit, ao, for n = 1. After all, Bohr was “almost” right! The most probable distance increases for the other s orbitals.
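The conclusion that the 1s radial probability peaks at exactly 1 bohr is easy to confirm numerically, using the standard normalized 1s wavefunction ψ(r) = e^(-r)/√π in atomic units (the closed form is assumed here, not given on this page):

```python
import math

def psi_1s(r):
    # Normalized 1s wavefunction in atomic units (r in bohr).
    return math.exp(-r) / math.sqrt(math.pi)

def radial_probability(r):
    # Probability per unit r: density |psi|^2 times the shell area 4*pi*r^2.
    return 4.0 * math.pi * r**2 * psi_1s(r)**2

# Scan a fine grid for the maximum of the radial probability.
rs = [i * 0.001 for i in range(1, 10000)]
r_peak = max(rs, key=radial_probability)

print(r_peak, r_peak * 0.52918)  # ~1.0 bohr, ~0.53 angstrom
```

The competition described in the text (density falling off, shell area growing) is exactly what the product r^2 e^(-2r) encodes; its maximum at r = 1 reproduces the Bohr radius.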
The total probability of finding an electron within the orbital must be unity (orbitals are normalized); that is, all the probability densities for all volume units, added up, must equal 1. As you may have noticed, the wavefunctions and electron density functions stretch to infinity, albeit with extremely low values of electron density at larger r. To facilitate graphical presentations of orbitals, an arbitrary cut-off point is chosen in such a way that the probability of finding the electron inside the demarcated volume is 90%. That cutoff point corresponds to a very low value of electron density, let's say 0.002 au (anyway, what’s a 10% difference between friends?). For orbitals, it represents an artificial isosurface (a surface on which the electron density is everywhere equal to 0.002 au, in our example) that serves as an imaginary border (“skin”). This isosurface makes the outside orbital layer smooth in our orbital representations, instead of fuzzy, as in our electron "pictures" at the top of this page. It also makes our 4D problem disappear. Now we have to show only the [x,y,z] points at which the electron probability has the set value (0.002 au, for our example), i.e. we are dealing with 3D representations. We have sacrificed all the details of the internal electron density distribution, but we gain in simplicity of presentation, which is important for our qualitative (non-mathematical) models.
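The 90% cut-off can be made concrete: integrate the 1s radial probability 4r^2 e^(-2r) (atomic units; a standard result assumed here, not given on the page) outward until nine tenths of the total probability is enclosed. A sketch using trapezoidal integration and bisection:

```python
import math

def radial_probability(r):
    # 1s radial probability density in atomic units: 4 r^2 exp(-2r).
    return 4.0 * r**2 * math.exp(-2.0 * r)

def enclosed_probability(R, steps=2000):
    """Trapezoidal integral of the radial probability from 0 to R."""
    if R <= 0.0:
        return 0.0
    h = R / steps
    total = 0.5 * (radial_probability(0.0) + radial_probability(R))
    total += sum(radial_probability(i * h) for i in range(1, steps))
    return total * h

# Bisect for the radius that encloses 90% of the electron probability.
lo, hi = 0.0, 20.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if enclosed_probability(mid) < 0.9:
        lo = mid
    else:
        hi = mid
r90 = 0.5 * (lo + hi)

print(r90, r90 * 0.52918)  # ~2.66 bohr, ~1.41 angstrom
```

So the "90% skin" of the 1s orbital is a sphere of roughly 1.4 Å radius; the remaining 10% of the electron density lies in the infinite tail outside it.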
The only other information we will need is the algebraic sign of the underlying wavefunction. We will display it on the surface of our orbitals with color or other graphical marks. Since we do not have a way to experimentally distinguish the algebraic signs of the wavefunctions (we only observe the squared values) the labels are assigned arbitrarily and only aim to show if the signs are the same or opposite for different orbitals or different orbital lobes. That information will be important when orbitals start to overlap and form bonds. We would need to know then if they are going to interfere (like water waves) in a constructive or destructive way. But that is a subject for another page. Let's now look at our s orbitals in 3D!
We start by drawing the 1s orbital, or more precisely an isosurface encompassing 90% of its electron density. Well... it is quite boring, just a sphere painted blue to represent one of the possible signs of the underlying wavefunction. Let's (arbitrarily) say it is positive to match the picture above. You may spin it or move it, but it looks the same from all sides. That's what spherical symmetry means. For all the artists out there, we can be more adventurous and try to use contour lines to indicate the curvature of the surface. And if we draw the lines more densely we get a mesh presentation. It looks a bit like a cage for the electron. At this scale, the proton (the nucleus of the hydrogen atom) would be smaller than one pixel at the crossing point of the coordinate axes.
If you play with the models using your mouse, spinning them, moving them, zooming in or out, remember to press the "reset" button before moving on to the next model. The reset will return the model to the same size and orientation for easy comparisons.
Sometimes it is convenient to make the orbital skin translucent to see what's inside. This presentation technique is especially useful for molecular surfaces, where we may want to peek inside to check which atoms are participating in bonding and what the molecular geometry is. For now, let's just make our orbital skin translucent. It is still quite monotonous, like its solid-color analogue. To add some accents we may instead show a cross-section of the orbital, replacing dot density (see above) with a color gradient. We can even take it for a spin.
On the other hand, for those who like hiking, contour maps are the preferred way to read the topography of the terrain, or of the electron density (you may click the picture and use the mouse wheel to zoom in for a closer look). One of the contour lines, when spun around its axis, generates back the orbital isosurface. Finally, we can show three mutually perpendicular cross-sections at the same time, just for fun.
The 2s orbital is very similar, except twice as big, and it takes a bit longer to calculate and display (patience, young grasshopper). It is "painted" red to consistently show (see the plots above) that the "outside" algebraic sign of the 2s wavefunction is opposite to that on the "inside". We can barely see the axes unless we make the surface translucent. If we do, we can immediately recognize the two "inside" spheres, the bigger one of the same sign as the outside surface and the smaller one of the opposite algebraic sign, all matching the absolute value of the outside isosurface. The existence of these three concentric isosurfaces may be confirmed by examination of the 2s electron density plot above, which shows that a line drawn at some low value of electron density (parallel to the horizontal axis) will cross the blue Ψ2 function 3 times on each side of the origin. As before, to add some color we can paint a cross-section to show some internal values of the wavefunction.
On this scale, the 3s orbital would take the width of a page and a half of your monitor. Its interior would be even more complex (what is the maximum number of internal spheres that could be present inside the 3s orbital?), but the beauty of our simplified presentation is that all s orbitals may be drawn as uniformly colored spheres of various sizes.
The p orbitals come in sets of three, all having the same overall shape but each pointing in a different direction in space. We call them px, py, and pz to indicate their relative spatial orientation. Examples of wavefunctions and radial probability for pz orbitals are shown below in a way analogous to that used for s orbitals. In this case the horizontal axis is chosen to be the z axis. Individual p orbitals do not have spherical symmetry, so the choice of direction is important. The px and py orbitals look the same, except they are perpendicular to pz and to each other. The functions shown here are accurate, but only along the z axis, not in all directions as was the case for the s-wavefunctions.
2p wavefunction has a node at the nucleus.
3p wavefunction has a node at the nucleus and one additional node spreading on both "sides" of the orbital.
The p-wavefunctions have nodes at the nucleus and opposite algebraic signs on the two sides of the axis. They have "spreads" similar to those of the corresponding 2s and 3s orbitals, respectively.
The orbital cross-sections are aligned with the corresponding electron density 2D plots (below). The nodal regions are clearly visible. Both orbitals have one nodal plane cutting through the nucleus, and 3p has an additional radial node.
2p electron density; spinning the function around the z axis would yield a dumbbell shape.
3p electron density; spinning the function around the z axis would yield a "double dumbbell" shape.
The squares of the wavefunctions (all positive) represent electron density. Spinning the functions around the z axis will give you an approximation of the 3D representations (see below).
2p radial probability; the values apply only along the z axis.
3p radial probability; the values are valid only along the z axis.
As before, we can ask about the most probable distance for the electron by plotting radial probabilities. Because the p orbitals are not spherical, the plotted data apply only along the z axis. Note that the positions of the maxima are similar to those for the 2s and 3s orbital radial probabilities.
Let's start reviewing the 3D attributes of p orbitals with the familiar 2pz dumbbell, its two lobes carrying opposite algebraic signs of the wavefunction. We need to scale our models down a little as compared to the s orbitals (above), since we are dealing with larger orbitals with quantum numbers of 2 or 3. Here, we align the z axis to match the plots above, even if the z axis is usually given a vertical orientation (directions in space are chosen arbitrarily). We can turn the surface into a mesh or make it translucent to explore the interior, but there is nothing there of interest. To spruce it up we add colors by drawing a contour or cross-section plot. All of these pictures tell us why a quick drawing of an "8" is a fair symbolic representation of p-type orbitals.
The other two 2p orbitals (px and py) look the same, but point in different directions in space. We can visualize all of them at once, using different colors for the lobes to make them easier to distinguish from each other. The familiar 2pz comes first, then px and py, and finally all three in mesh representation so we can look through them. You should give them a spin to gain a 3D appreciation of their relative spatial orientation.
Let's add them up, starting with two of them and then looking at all three together. As you may notice, the three p orbitals as a group again have spherical symmetry. This is true for all complete subshell sets of orbitals (p, d, and f), although in organic chemistry we concentrate on s and p orbitals only.
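The claim that the three p orbitals add up to a spherically symmetric whole can be checked numerically: the real 2p orbitals are proportional to x, y and z times a common radial factor, so the sum of their squares depends on r only. A small sketch in atomic units (the normalization constant is the textbook value, not taken from this page):

```python
import math, random

random.seed(1)
N = 1.0 / (4.0 * math.sqrt(2.0 * math.pi))   # textbook 2p normalization (a.u.)

def p_density_sum(x, y, z):
    """|2px|^2 + |2py|^2 + |2pz|^2 at a point; the 2p orbitals are
    N*x*exp(-r/2), N*y*exp(-r/2) and N*z*exp(-r/2)."""
    r = math.sqrt(x * x + y * y + z * z)
    return (N * math.exp(-r / 2.0))**2 * (x * x + y * y + z * z)

def random_unit_vector():
    while True:
        v = [random.uniform(-1.0, 1.0) for _ in range(3)]
        n = math.sqrt(sum(c * c for c in v))
        if 1e-6 < n <= 1.0:
            return [c / n for c in v]

r = 2.0
values = [p_density_sum(*(r * c for c in random_unit_vector())) for _ in range(5)]
spread = max(values) - min(values)

# Every direction at the same r gives the same total density: the filled
# p subshell is spherically symmetric even though each orbital is not.
print(values[0], spread)
```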
Finally, we can have some fun with the 3pz orbital, which may be explored in all of the display modes listed above.
We have learned all we ever wanted to know about hydrogen orbitals and their shapes. The simple summary is that they are very similar on the outside: spherical for s orbitals, dumbbell-shaped for p orbitals. Gratifyingly, atoms other than hydrogen have very similar orbitals, with two small but important differences. The atoms have varying nuclear charges, and since the electrons shield each other, given electrons experience different effective nuclear charges. As a result, the orbitals generally shrink (as compared to the corresponding hydrogen orbitals) and have lower energies than those in the hydrogen atom. These important differences affect the bonding properties of the orbitals.
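The shrinking and energy lowering described here can be illustrated with hydrogen-like scaling, where energies go as -Z^2/(2n^2) and s-orbital mean radii as 3n^2/(2Z) (standard hydrogen-like formulas in atomic units, assumed rather than derived on this page; the effective charge of 3.1, roughly a carbon 2p estimate, is an illustrative assumption):

```python
# Hydrogen-like scaling sketch (atomic units): with an effective nuclear
# charge Z, orbital energies scale as Z^2 and s-orbital radii as 1/Z.
# Z_EFF = 3.1 below (roughly a carbon 2p effective-charge estimate) is an
# illustrative assumption, not a value taken from this page.

def orbital_energy(Z, n):
    """Energy (hartree) of a hydrogen-like orbital: -Z^2 / (2 n^2)."""
    return -Z**2 / (2.0 * n**2)

def mean_radius_s(Z, n):
    """Mean radius (bohr) of a hydrogen-like s orbital: 3 n^2 / (2 Z)."""
    return 3.0 * n**2 / (2.0 * Z)

Z_EFF = 3.1
hydrogen_2s_energy = orbital_energy(1.0, 2)
shielded_2_energy = orbital_energy(Z_EFF, 2)

# Higher effective charge: lower (more negative) energy and a smaller orbital.
print(hydrogen_2s_energy, shielded_2_energy)
print(mean_radius_s(1.0, 2), mean_radius_s(Z_EFF, 2))
```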
Molecular Gallery Last updated 08/31/12 Copyright 1997-2013
All novelties of quantum mechanics are consequences of nonzero commutators
The distinction between classical physics and quantum physics may really be described through a single parameter, Planck's constant:\[
\begin{array}{|l|c|c|}
\hline {\rm physics} & {\rm classical} & {\rm quantum} \\
\hline {\rm value} & \hbar = 0 & \hbar\neq 0 \\
\hline
\end{array}
\] Assuming that you know how to work with both classical and quantum physics, the value of this single parameter is enough to discriminate between them. It's that simple.
Now, quantum mechanics is a more general framework while classical mechanics is a special case. A classical theory may be typically obtained as the \(\hbar\to 0\) limit of a quantum mechanical theory. When a small \(\hbar\) becomes infinitesimal and is "really" sent to zero, the resulting limiting theory may be said to have \(\hbar=0\) and it's therefore a classical theory if the limit exists at all.
In classical physics, there's just no way to have \(\hbar\neq 0\). This is true by definition because \(\hbar\) is the constant that measures the deviation of a physical theory from the class of theories of classical physics. Because classical theories aren't sufficient to discuss the quantum mechanical ones, we must use the more general framework – the quantum mechanical framework – if we want to compare classical and quantum mechanics.
This need is completely analogous to the need to use relativistic theories when we want to compare relativistic and non-relativistic theories. In that case, the deviation from a non-relativistic theory is measured by the parameter known as \(1/c\), the inverse maximum speed in Nature. The limit \(1/c\to 0\) i.e. \(c\to \infty\) is the non-relativistic limit and it is analogous to the \(\hbar\to 0\) classical limit of a quantum mechanical theory.
In both cases, the limiting theory obeys some extra axioms that are not valid in the more general theory. In particular, the non-relativistic theories (\(c\to \infty\) limits of relativistic ones) obey the "absolute character of simultaneity" while the \(\hbar\to 0\) classical limits of quantum theories obey the extra axiom about the objectively well-defined values of observables (independently of observers or before the observations).
The more general theories or frameworks reject these extra axioms. Relativity says that the simultaneity is relative i.e. dependent on the inertial system; quantum mechanics says that the values of observables are only well-defined relatively to an observer and what the observer considers to be observations (events in which the observer acquired the information).
OK, what is Planck's constant operationally?
Planck's constant \(\hbar\) is omnipresent in quantum mechanical theories. Well, it's omnipresent if we use the system of units in which \(\hbar\) has to be inserted at all. Particle physicists and some other adult physicists often use units in which \(\hbar=1\) which simplifies most of the formulae in any quantum mechanical research.
However, if we want to compare quantum mechanical theories with their limits, we need to use units in which \(\hbar\) is variable – i.e. it is not set to one (or another constant) – because a particular variation, the \(\hbar\to 0\) limit, is how we get from the general quantum mechanical theories to the classical ones (the limits).
Because \(\hbar\) appears at so many places, we may define its value in many ways. But ultimately all of them may be shown to be equivalent to the commutator of some observables – the refusal of observables to commute with each other. For example, we may define \(\hbar\) as\[
\hbar = i(px-xp)
\] in any theory that contains positions \(x\) and momenta \(p\). In classical physics, the commutator is zero, in a quantum mechanical theory, it's not. The precise value dictates the shape of the wave associated with a momentum \(p\) particle. The wavelength (period) of this wave is simply \(2\pi\hbar / p\).
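This defining relation \(\hbar = i(px-xp)\) may be verified numerically by letting \(x\) act as multiplication and \(p\) as \(-i\hbar\, d/dx\) via central differences on a grid. A minimal sketch with \(\hbar=1\); the Gaussian test function is an arbitrary choice:

```python
import math

HBAR = 1.0
h = 1e-3                                        # grid spacing
xs = [-2.0 + i * h for i in range(4001)]        # grid on [-2, 2]
psi = [math.exp(-x * x) for x in xs]            # arbitrary smooth test function

def ddx(f):
    """Central-difference derivative; endpoints are left undefined."""
    inner = [(f[j + 1] - f[j - 1]) / (2.0 * h) for j in range(1, len(f) - 1)]
    return [None] + inner + [None]

d_psi = ddx(psi)                                # d(psi)/dx
d_xpsi = ddx([x * p for x, p in zip(xs, psi)])  # d(x psi)/dx

# With p = -i*hbar*d/dx, the combination i(px - xp) acting on psi reduces
# to hbar * (d(x psi)/dx - x * d(psi)/dx), which should equal hbar * psi.
errors = [abs(HBAR * (d_xpsi[j] - xs[j] * d_psi[j]) - HBAR * psi[j])
          for j in range(1, len(xs) - 1)]

print(max(errors))  # O(h^2), i.e. tiny: i(px - xp) acts as hbar times the identity
```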
Equivalently, we may define \(\hbar\) as\[
\hbar = -i \cdot J_z^{-1} \cdot (J_x J_y - J_y J_x)
\] The commutators between various components of the angular momentum are zero in classical physics and nonzero in quantum mechanics, and these commutators may be easily derived from the formulae for \(J\) as a function of \(x,p\) and from the commutator \([x,p]\) that we discussed a minute ago.
Now, the commutators such as \([J_x,J_y]\) are enough to determine that the eigenvalues of each component are quantized. \(\hbar/2\) is the smallest allowed nonzero value of \(J_z\) or any component of the angular momentum, for example. Obviously, we could discuss as many examples of commutators of observables – "elementary" or "composite" ones – in many mechanical or other theories.
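A minimal numerical sketch of these statements for spin 1/2, with \(\hbar=1\) and the standard Pauli matrices (2x2 complex matrices as nested Python lists, no external libraries):

```python
# Spin-1/2 check that [Jx, Jy] = i*hbar*Jz, using the Pauli matrices.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mat_sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def mat_scale(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

HBAR = 1.0
Jx = mat_scale(HBAR / 2, [[0, 1], [1, 0]])
Jy = mat_scale(HBAR / 2, [[0, -1j], [1j, 0]])
Jz = mat_scale(HBAR / 2, [[1, 0], [0, -1]])

commutator = mat_sub(mat_mul(Jx, Jy), mat_mul(Jy, Jx))
expected = mat_scale(1j * HBAR, Jz)

# The diagonal of Jz also exhibits the +/- hbar/2 eigenvalues directly.
print(commutator == expected, Jz[0][0], Jz[1][1])
```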
The constant \(\hbar\) also appears in the Schrödinger equation. Does this appearance have anything to do with nonzero commutators? You bet. A more conceptually sensible way to rewrite the Schrödinger equation is to apply the time-dependent unitary transformation and pass to the Heisenberg picture with the Heisenberg equations of motion. In that picture, the observables obey \[
i\hbar \frac{\mathrm{d} L(t)}{\mathrm{d} t} = [L(t),H]
\] This equation explicitly says that the commutators with the Hamiltonian are the time derivatives of the operators multiplied by the tiny constant \(i\hbar\) once again. The dynamical equations of the Heisenberg picture are examples of the more general fact that the commutators are proportional to \(\hbar\).
Classical physics emerges in the limit \(\hbar\to 0\). It recognizes that the commutators are small because they're proportional to \(\hbar\) which is tiny in the units used to study problems where classical physics becomes a good approximation. So classical physics "amplifies" these commutators, multiplies them by \(1/i\hbar\), and this product is called the "Poisson bracket". Effectively, only the leading terms in a power expansion in \(\hbar\) are kept in the classical limit. That's how we construct a classical theory.
The volatile anti-quantum zealot I have referred to has written things like
What distinguishes quantum from classical mechanics is how the observables of a composite system are related to the observables of the individual systems.
But this opinion is completely wrong – the bold face fonts used for that sentence only highlight how much wrong he is. When we have a conventional composite system, the observables \(A_j,B_k\) describing the subsystems \(A,B\) of this composite system commute with all operators in the other group:\[
[A_j,B_k] = 0.
\] The commutator is zero, just like in classical physics, so the mere addition of "subsystems" to the whole system simply cannot make the physical system more quantum. The composite system may display lots of characteristically quantum behavior but that behavior only arises due to the nonzero commutators, i.e. \[
[A_j,A_{j'}] \neq 0, \quad [B_{k},B_{k'}]\neq 0.
\] All the novel quantum behavior appears "inside" \(A\) or inside \(B\), inside the individual subsystems of the composite system!
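This can be checked mechanically with Kronecker products; the Pauli matrices below merely stand in for generic subsystem observables:

```python
# Operators acting on different subsystems commute: [A (x) 1, 1 (x) B] = 0.
# Sketch with 2x2 Pauli matrices and a hand-rolled Kronecker product.

def kron(A, B):
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n * m)] for i in range(n * m)]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

I2 = [[1, 0], [0, 1]]
sigma_x = [[0, 1], [1, 0]]
sigma_z = [[1, 0], [0, -1]]

A = kron(sigma_x, I2)   # acts only on subsystem A
B = kron(I2, sigma_z)   # acts only on subsystem B

commutator = [[x - y for x, y in zip(r1, r2)]
              for r1, r2 in zip(mat_mul(A, B), mat_mul(B, A))]
print(all(v == 0 for row in commutator for v in row))   # cross-subsystem: zero

# By contrast, two operators on the SAME subsystem need not commute:
Ax, Az = kron(sigma_x, I2), kron(sigma_z, I2)
same_side = [[x - y for x, y in zip(r1, r2)]
             for r1, r2 in zip(mat_mul(Ax, Az), mat_mul(Az, Ax))]
print(any(v != 0 for row in same_side for v in row))    # same subsystem: nonzero
```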
Now, the "quantum mechanics is non-local" crackpots love to say not only crazy things about the extra non-locality of quantum mechanics – which I have thoroughly debunked in many previous specialized blog posts – but they also love to present the quantum entanglement as some absolutely new voodoo, a supernatural phenomenon that has absolutely no counterpart in classical physics and whose divine content has nothing to do with the uncertainty principle or the nonzero commutators.
Except that all this voodoo is just fog.
Quantum entanglement is nothing else than the most general pure-state description of a correlation between two subsystems in quantum mechanics. Assuming that we use the correct laws and formalism of quantum mechanics to describe systems that obey quantum mechanics,
an entangled state and a state with a correlation between two subsystems are absolutely synonymous.
If you study a singlet state of two spin-1/2 particles, you may measure \(J_z\) of both entangled particles. As I have discussed in numerous recent blog posts, this anticorrelation between the two spins is absolutely equivalent to the anticorrelation between the colors of two socks of Dr Bertlmann – a system we may describe by classical physics.
The idea that the correlation between the two electrons is "something fundamentally different" from the correlation between the two socks' colors – an idea pioneered by John Bell – is totally and absolutely wrong. After all, in the real world around us, even socks of Dr Bertlmann are accurately described by quantum mechanics. Classical physics isn't quite right, even for socks. So if we have measured a complete set of observables to bring the two socks in a pure state – but an entangled one (and be sure it's possible) – then the socks will inevitably be in an entangled state. If you're accurate about socks in the real world, you need to describe them as an entangled state, just like the two spins, and not just as a classical correlation. It's true simply because the classical theory is never quite right in the world around us!
So the opinion that the correlation of (real world) Bertlmann's socks is something "totally and fundamentally different" from the entanglement of the two spins contains the self-evidently incorrect assumption that socks in Austria don't obey the laws of quantum mechanics. But be sure that they do. Everything in this damn Universe does.
What's new about the quantum entanglement – relatively to the "ordinary" classical correlation – is that the two subsystems such as the two spins may have correlated many other properties at the same moment. The singlet state has \[
\vec J_1 = -\vec J_2
which simply means that \((\vec J_1+\vec J_2)\ket\psi = 0\). So you may measure \(J_{1z},J_{2z}\) and obtain a perfect anticorrelation. But if the two experimenters happen to measure \(J_{1x},J_{2x}\) instead, they get a perfect anticorrelation, too. And similarly for \(J_{1y},J_{2y}\). You couldn't invent a classical model with two classical bits that would emulate this behavior. And the spin components \(J_x,J_y\) are entirely different from, and independent of, \(J_z\).
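The triple anticorrelation is easy to verify directly. A short numpy check (my illustration, not the author's; the \(\hbar/2\) factors are dropped since only the sign of the correlation matters) computes \(\langle\psi|\sigma\otimes\sigma|\psi\rangle\) for the singlet along each axis:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# singlet state (|01> - |10>)/sqrt(2), written in the z basis
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

# correlation <psi| s(1) s(2) |psi> along each axis
corrs = [round(float((psi.conj() @ np.kron(s, s) @ psi).real), 12)
         for s in (sx, sy, sz)]
print(corrs)  # [-1.0, -1.0, -1.0]: perfect anticorrelation along x, y and z
```

No pair of classical bits can reproduce a perfect anticorrelation along all three axes simultaneously, which is the point of the paragraph above.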
The real-world socks are only simpler because of one fact: due to the complexity that produces decoherence etc., it is extremely hard and practically impossible to measure anything such as \(J_x,J_y\) for the socks – observables that don't commute with the sock color i.e. are non-diagonal in a basis of the color-of-sock eigenstates. These non-diagonal observables operationally don't exist for socks.
But all this new behavior depends on the nonzero commutators \([J_{1z},J_{1x}]\) and similar ones! What prevents you from this really "intimate" correlation between the two bits in classical physics is that in classical physics, all observables are simple functions on the phase space \(F=F(x_j,p_k)\) – or the set of possible values of all the bits if the information is described in terms of bits. For this reason, in classical physics, it's always enough to measure the values of \(x_j,p_k\) etc. and you know everything.
However, this is not the case in quantum mechanics. You can't measure all "elementary" observables \(x_j,p_k\) at the same moment because of the uncertainty principle – because of the nonzero commutators. Instead, you must decide what you measure and the post-measurement state will be an eigenstate of this measured observable \(L\). Moreover, you really need to list all conceivable observables which have all conceivable eigenstates if you want to exhaust all the options. And the set of observables – Hermitian matrices – is very large.
In a classical description of two bits, the phase space would have four points \(00,01,10,11\) which correspond to the four arrangements of the values \(J_{1z},J_{2z}\), to pick the standard convention. Once you knew that \(J_{1z},J_{2z}\) may only take \(2\times 2 =4\) values, i.e. that they are two bits, and that the knowledge of these two bits is the "maximum" you may know about the system, it would follow in classical physics that no other interesting correlation may be present in the state of the two bits. All observables are functions of \(J_{1z},J_{2z}\). It means that you may at most represent the possibilities \(00,01,10,11\) by four arbitrary numbers \(e,f,g,h\). There are just four fixed options and a perfect correlation may at most mean that some of these four options are ruled out.
However, in quantum mechanics, there exist operators such as \(J_{1x}\) which don't commute with one of the observables \(J_{1z},J_{2z}\) – in this case with \(J_{1z}\). It's exactly this nonzero commutator that makes \(J_{1x}\) sensitive to the relative phase between the complex probability amplitudes that know about the options \(J_{1z}=+\hbar/2\) and \(J_{1z}=-\hbar/2\). Also, it's exactly the nonzero commutator \([J_{1x},J_{1z}]\) that says that \(J_{1x}\) isn't a diagonal matrix in the basis of the \(J_{1z}\) eigenstates. And it's the non-diagonal matrices for observables that contain all the novelties of quantum mechanics, including the "tighter" correlation that the quantum entanglement may guarantee in comparison with the correlations in classical physics.
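Both facts invoked here – the nonzero commutator and the non-diagonal matrix – can be exhibited in a few lines of numpy (my sketch, not from the post; spin-1/2 operators \(J=(\hbar/2)\sigma\) written in the \(J_z\) eigenbasis, units \(\hbar=1\)):

```python
import numpy as np

hbar = 1.0
# spin-1/2 operators J = (hbar/2) sigma, written in the J_z eigenbasis
Jx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
Jy = hbar / 2 * np.array([[0, -1j], [1j, 0]])
Jz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)

comm = Jz @ Jx - Jx @ Jz
print(np.allclose(comm, 1j * hbar * Jy))      # True: [Jz, Jx] = i hbar Jy
print(np.allclose(Jx, np.diag(np.diag(Jx))))  # False: Jx is non-diagonal in the Jz basis
```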
Aside from the consequences of the nonzero commutators – i.e. of the need to use off-diagonal matrices for the most general observables that may really be measured – there is simply nothing "qualitatively new" in quantum mechanics. Whoever fails to understand these points misunderstands the character of quantum mechanics and the relationship between quantum mechanics and classical physics, too.
|
0550a57b691a9bbf | FZN
Fyzikální základy nanotechnologií
Course: Physical Principles of Nanotechnology
Department/Abbreviation: KEF/BFZN
Year: 2018/2019
Guarantor: doc. Mgr. Jiří Tuček, Ph.D.
Annotation: The topics cover the phenomena and properties occurring in the nanoworld.
Course review:
1. Crystal structure of solids and their changes upon decrease in size of the (nano)material.
2. FCC nanoparticles (structural magic numbers), tetrahedrally-coordinated semiconducting structures (ionic model, covalent model, Vegard's law).
3. Schrödinger equation for a system of electrons and nuclei and its approximations, Bloch theorem, Bloch function, localized and delocalized electrons, localization of electrons with decreasing size of a (nano)material, the hole (a quasi-particle with positive charge and positive effective mass), excitons (Mott-Wannier excitons and Frenkel excitons, Saha equation).
4. Properties of individual nanoparticles, metal nanoclusters (preparation methods, structural and electronic magic numbers, superatoms, jellium model, basics of molecular orbital theory and density functional theory (DFT)).
5. Semiconducting nanoclusters (optical properties of semiconducting nanoclusters and their change with size, regimes of strong and weak confinement of the exciton, blue shift and size of semiconducting nanoclusters, change in the bandgap with the size of semiconducting clusters), photofragmentation, Coulomb explosion.
6. Clusters of inert gases (van der Waals potential, Lennard-Jones potential), non-viscous nanoclusters, Bose-Einstein condensation (qualitative description), molecular nanoclusters (the water molecule and symmetrically hydrogen-bonded water).
7. Bulk nanostructured disordered materials, mechanisms of defect evolution in grained materials, mechanical properties of disordered nanostructures (Young modulus, Hall-Petch equation, elasticity, brittleness and hardness of disordered nanostructures), nanostructured multilayered disordered materials (effect of nanolayer thickness on the hardness of the material), electrical properties of disordered nanostructured materials (conductivity and electron tunneling), nanocomposite glasses containing metal nanoclusters (optical properties and plasmon absorption, non-linear optical phenomena - non-linear refractive index, methods of preparing nanocomposite glasses), porous silicon (luminescence, photoluminescence and phosphorescence, Jablonski diagram - qualitative description, radiative and non-radiative transitions, pore size and its effect on the luminescence of silicon).
8. Nanostructured crystals: natural nanocrystals, arrays of nanoparticles in zeolites, lattices of nanoparticles in colloidal suspensions (principle of hard and soft repulsion, Kirkwood-Alder transition, transition between FCC and BCC ordering), photonic crystals (definition and production of photonic crystals, Maxwell equations of photonic crystals in operator form, Helmholtz equation for the magnetic and electric field intensities, periodicity of the relative permittivity, bands of allowed and forbidden energies, dielectric and air bands, calculation of the dispersion relation for a simple 1D photonic crystal, resonant chamber, frequency and hole radius in 2D and 3D photonic crystals).
9. Quantum nature of the nanoworld (wave function, Schrödinger equation in one dimension, time-dependent and time-independent Schrödinger equation, particle trapped in one dimension, linear combination of solutions, expectation values and two-particle wave functions, reflection from and tunneling through a potential step, tunneling through a potential barrier, particles trapped in two and three dimensions, quantum dots, two-dimensional bands and quantum wires, simple harmonic oscillator, magnetic moments).
10. Quantum consequences for the macroworld, nanosymmetry and diatomic molecules, covalent bond and covalent antibond as purely nanophysical phenomena, definition of the exchange interaction, polar and van der Waals fluctuation forces, electric polarization of neutral atoms and molecules, dipole-dipole interactions of neutral and symmetric atoms, Casimir force, experimental setup for measuring the Casimir force, hydrogen bond.
11. Single-electron tunneling, Coulomb blockade, Coulomb staircase, superconductivity and quantum nanostructures |
4188223c3ec5bfe6 |
Univ. Paris-Saclay
Foundational questions of quantum information
April 4-5, 2012
Workshop "Foundational questions of quantum information"
Dates: April 4-5, 2012
Jointly organized by LARSIM and QuPa
Venue: Amphi Opale, 46 rue Barrault, Paris 13e
April 4
9:30-9:45 Coffee and Opening
9:45-10:45 Robert Raussendorf (University of British Columbia)
10:45-11:00 Coffee
11:00-12:00 Oscar Dahlsten (University of Oxford)
14:15-15:15 Matthew Pusey (Imperial College London)
15:15-16:15 Michel Bitbol (CREA, CNRS-Ecole Polytechnique)
16:15-16:45 Coffee
16:45-17:45 Virginie Lerays (LRI, Université Paris Sud)
April 5
9:30-9:45 Coffee
9:45-10:45 Damian Markham (LTCI, CNRS-Télécom ParisTech)
10:45-11:00 Coffee
11:00-12:00 Kavan Modi (University of Oxford and Centre for Quantum Technologies, National University of Singapore)
14:15-15:15 Giacomo Mauro d'Ariano (University of Pavia)
15:15-16:15 Caslav Brukner (University of Vienna)
16:15-16:45 Coffee
16:45-17:45 Alexei Grinbaum (LARSIM, CEA-Saclay)
Robert Raussendorf
"Symmetry constraints on temporal order in measurement-based quantum computation"
We discuss the interdependence of resource state, measurement setting and temporal order in measurement-based quantum computation. The possible temporal orders of measurement events are constrained by the principle that the randomness inherent in quantum measurement should not affect the outcome of the computation. We provide a classification for all temporal relations among measurement events compatible with a given initial quantum state and measurement setting, in terms of a matroid. Conversely, we show that classical processing relations necessary for turning the local measurement outcomes into computational output determine the resource state and measurement setting up to local equivalence. Further, we find a symmetry transformation related to local complementation that leaves the temporal relations invariant.
Oscar Dahlsten
"Tsirelson’s bound from a Generalised Data Processing Inequality"
The strength of quantum correlations is bounded from above by Tsirelson’s bound. We establish a connection between this bound and the fact that correlations between two systems cannot increase under local operations, a property known as the data processing inequality. More specifically, we consider arbitrary convex probabilistic theories. These can be equipped with an entropy measure that naturally generalizes the von Neumann entropy, as shown recently by Short and Wehner. We prove that if the data processing inequality holds with respect to this generalized entropy measure then the underlying theory necessarily respects Tsirelson’s bound. We moreover generalise this statement to any entropy measure satisfying certain minimal requirements. Based on arXiv:1108.4549.
Matthew Pusey
"Comparing two explanations for qubits"
I will discuss two long-standing realist models for qubits - one due to Bell and the other to Kochen and Specker. I will argue that the latter provides a much more compelling explanation of various quantum information phenomena, mainly thanks to the feature that multiple quantum states can apply to the same real state. Finally I will show that, on the other hand, it is precisely this feature that prevents the latter model from explaining a very particular phenomenon. Based on arXiv:1111.3328.
Michel Bitbol
"Kant and quantum mechanics: a middle way between the ontic and epistemic approaches"
Instead of either formulating new metaphysical images of the so-called "quantum reality" or rejecting any metaphysical attempt in an empiricist spirit, the case of quantum mechanics might require a redefinition of metaphysics. The sought redefinition will be performed in the spirit of Kant, according to whom metaphysics is the discipline of the boundaries of human knowledge. This can be called a "reflective" conception of metaphysics. From this perspective, theoretical structures are neither ontic nor purely epistemic. They do not express exclusively the structure of reality out there, or the form of our own knowledge, but their active interface. Our understanding of the structure of quantum mechanics then works in two steps:
(1) The most basic structures of quantum mechanics are neither imposed onto us (by some pre-structured reality) nor arbitrary (just meant to "save the phenomena"), but made necessary by the general characteristics of our demand of knowledge.
(2) Yet, there can also be additional features of theoretical structures corresponding to special characteristics of our demand of knowledge, adapted to certain directions of research or to cultural prejudice. The "surplus structure" of some of the most popular interpretations of quantum mechanics will be understood this way.
Finally, it will be shown that some of the major "paradoxes" of quantum mechanics, such as the measurement problem, can easily be dissolved by way of this reflective attitude.
Virginie Lerays
"Detector efficiency and communication complexity"
In the standard setting of communication complexity, two players each have an input and they wish to compute some function of the joint inputs. This has been the object of much study in computer science and a wide variety of lower bound methods have been introduced to address the problem of showing lower bounds on communication. Physicists have considered a closely related scenario where two players share a predefined entangled state. Each is given a measurement as input, which they perform on their share of the system. The outcomes of the measurements follow a distribution which is predicted by quantum mechanics. The goal is to rule out the possibility that there is a classical explanation for the distribution, through loopholes such as communication or detector inefficiency. In an experimental setting, Bell inequalities are used to distinguish truly quantum from classical behavior.
Bell tests and communication complexity are both measures of how far a distribution is from the set of local distributions (those requiring no communication), and one would expect that if a Bell test shows a large violation for a distribution, it should require a lot of communication and vice versa.
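For a concrete instance of such a "large violation", the CHSH value of the singlet state can be computed in a few lines. A numpy sketch (my illustration, not from the abstract; the measurement angles and the sign pattern are the standard textbook CHSH choice):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)  # singlet state

def obs(theta):
    # spin measurement along angle theta in the x-z plane
    return np.cos(theta) * sz + np.sin(theta) * sx

def E(a, b):
    # quantum correlation <psi| A(a) (x) B(b) |psi>; equals -cos(a - b)
    return float((psi.conj() @ np.kron(obs(a), obs(b)) @ psi).real)

a0, a1 = 0.0, np.pi / 2            # Alice's two settings
b0, b1 = np.pi / 4, 3 * np.pi / 4  # Bob's two settings
S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
print(abs(S))  # 2*sqrt(2) ~ 2.828, above the classical (local) bound of 2
```

Any local (no-communication) distribution obeys |S| ≤ 2, while the quantum value saturates Tsirelson's bound 2√2 mentioned in the Dahlsten abstract above.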
We present a new lower bound technique for communication complexity based on the notion of detector inefficiency for the setting of simulating distributions, and show that it coincides with the best lower bound in communication complexity known until now. We show that it amounts to constructing an explicit Bell inequality. Joint work with Sophie Laplante and Jérémie Roland.
Damian Markham
"On non-linear extensions of quantum mechanics"
We present some observations on the restrictions imposed on non-linear extensions of quantum mechanics with respect to non-signaling. We see that non-signaling can be understood as imposing the destruction of correlations, a property noticed for closed time-like curves by Bennett et al, arising from the 'non-linearity trap'. We discuss in what sense such theories can still allow for 'local' cloning and state discrimination. Joint work with Julien Degorre.
Kavan Modi
"Entanglement distribution with quantum communication"
Two distant labs cannot increase the entanglement between them via classical communication. However, they can do so via quantum communication. Surprisingly, the communicated system need not be entangled with either or both of the labs, but it must be quantum correlated (as determined by quantum discord). We show that it is quantum discord that bounds the increase in entanglement via quantum communication. Additionally, the bound also leads to subadditivity of entropy and gives an interpretation for negative conditional entropy.
Giacomo Mauro d'Ariano
"Physics from Informational Principles"
Recently quantum theory has been derived from six principles that are of purely informational nature. The "(epistemo)logical" nature of these principles makes them rock solid. We want now to pause and reflect on the general foundations of Physics, and re-examine how solid principles such as Galilean relativity and the Einsteinian equivalence principle really are. Are they truly compelling? Why are they under dispute, and why are violations considered? Following the route of the informational paradigm, I will suggest three new candidate principles, all of informational nature: 1) the Church–Turing–Deutsch principle, namely that the theory must allow simulating any physical process by a universal finite computer (this implies that the information involved in any process is locally bounded); 2) topological locality of interaction; 3) topological homogeneity of interactions. These principles along with the six ones for Quantum Theory suggest a new foundation of Quantum Field Theory as Quantum Cellular Automata theory. I will show how this framework can actually provide an extension of Quantum Field Theory to include localized states and observables, whereas Galileo's and Einstein's covariance and other symmetries are only approximate, and to be recovered only in the field-limit, whereas their violation makes the extended theory in-principle falsifiable. The new informational principles open totally unexpected routes and re-definitions of mechanical notions (as inertial mass, Planck constant, Hamiltonian, Dirac equation as free flow of information), Minkowskian space-time as emergent, and an unexpected role for the Majorana field in the solution of the so-called Feynman problem of simulating anti-commuting fields by the automaton.
Caslav Brukner
"Tests distinguishing between quantum and more general probabilistic theories"
The historical experience teaches us that every theory that was accepted at a certain time was later inevitably replaced by a deeper and more fundamental theory. There is no reason why quantum theory should be an exception in this respect. At present, quantum theory has been tested against very specific alternative theories, such as hidden variables, non-linear Schrödinger equations or the collapse models. The common feature of all of them is that they keep one or the other basic principle of the classical world intact. Yet, it is very unlikely that a post-quantum theory will be based on pre-quantum concepts. In contrast, it is likely that it will break not only principles of classical but also quantum physics. This gives us a motivation for the following research program: 1) To reconstruct quantum mechanics from a set of axioms. 2) To weaken the axioms and to look for broader structures. 3) To test quantum theory against them. Following this approach I will present two tests that can distinguish between quantum theory and more general probabilistic theories.
Alexei Grinbaum
"Quantum observers and Kolmogorov complexity"
Different observers do not have to agree on how they identify a quantum system. We explore a condition based on algorithmic complexity that allows a system to be described as an objective "element of reality". We also suggest an experimental test of the hypothesis that any system, even much smaller than a human being, can be a quantum mechanical observer.
Last updated: 28/03/2012 |
ca3f608e69a82c72 |
John Stewart Bell
In 1964 John Bell showed how the 1935 "thought experiments" of Einstein, Podolsky, and Rosen (EPR) could be made into real experiments. He put limits on local "hidden variables" that might restore a deterministic physics in the form of what he called an "inequality," the violation of which would confirm standard quantum mechanics.
Some thinkers, mostly philosophers of science rather than working quantum physicists, think that Bell's work has restored the determinism in physics that Einstein had wanted and that Bell recovered the "local elements of reality" that Einstein hoped for.
But Bell himself came to the conclusion that local "hidden variables" will never be found that give the same results as quantum mechanics. This has come to be known as Bell's Theorem.
All theories that reproduce the predictions of quantum mechanics will be "nonlocal," Bell concluded. Nonlocality is an element of physical reality and it has produced some remarkable new applications of quantum physics, including quantum cryptography and quantum computing.
Bell based his proposal for real experiments on the 1952 ideas of David Bohm. Bohm proposed an improvement on the original EPR experiment (which measured position and momentum). Bohm's reformulation of quantum mechanics postulates (undetectable) deterministic positions and trajectories for atomic particles, where the instantaneous collapse happens in a new "quantum potential" field that can move faster than light speed. But it is still a "nonlocal" theory.
So Bohm (and Bell) believed that nonlocal "hidden variables" might exist, and that some form of information could come into existence at remote "space-like separations" at speeds faster than light, if not instantaneously.
The original EPR paper was based on a question of Einstein's about two electrons fired in opposite directions from a central source with equal velocities. Einstein imagined them starting from a distance at t0 and approaching one another with high velocities, then being in contact with one another for a short time interval from t1 to t1 + Δt, during which experimental measurements could be made on the momenta, after which they separate. At a later time t2 one could then measure electron 1's position and would therefore know the position of electron 2 without measuring it explicitly.
Einstein used the conservation of linear momentum to "know" the symmetric position of the other electron. This knowledge implies information about the remote electron that is available instantly. Einstein called this "spooky action-at-a-distance." It might better be called "knowledge-at-a-distance."
Bohm's 1952 thought experiment used two electrons that are prepared in an initial state of known total spin. If one electron spin is 1/2 in the up direction and the other is spin down or -1/2, the total spin is zero. The underlying physical law of importance is still a conservation law, in this case the conservation of spin angular momentum.
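The conservation law can be checked on the singlet state itself. A small numpy sketch (my illustration, not from the article; spins written in units of hbar/2): the total spin operator along every axis annihilates the singlet, which is the precise sense in which the total spin is zero.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# singlet state (|up,down> - |down,up>)/sqrt(2): total spin zero
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

# total spin along each axis, in units of hbar/2: sigma(1) + sigma(2)
annihilates = [np.allclose((np.kron(s, I2) + np.kron(I2, s)) @ psi, 0)
               for s in (sx, sy, sz)]
print(annihilates)  # [True, True, True]
```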
In his 1964 paper "On the Einstein-Podolsky-Rosen Paradox," Bell made the case for nonlocality.
The paradox of Einstein, Podolsky and Rosen was advanced as an argument that quantum mechanics could not be a complete theory but should be supplemented by additional variables. These additional variables were to restore to the theory causality and locality. In this note that idea will be formulated mathematically and shown to be incompatible with the statistical predictions of quantum mechanics. It is the requirement of locality, or more precisely that the result of a measurement on one system be unaffected by operations on a distant system with which it has interacted in the past, that creates the essential difficulty. There have been attempts to show that even without such a separability or locality requirement no 'hidden variable' interpretation of quantum mechanics is possible. These attempts have been examined [by Bell] elsewhere and found wanting. Moreover, a hidden variable interpretation of elementary quantum theory has been explicitly constructed [by Bohm]. That particular interpretation has indeed a gross non-local structure. This is characteristic, according to the result to be proved here, of any such theory which reproduces exactly the quantum mechanical predictions.
Bell titled his 1976 review of the first tests of his theorem about his predicted inequalities, "Einstein-Podolsky-Rosen Experiments."
He described his talk as about the "foundations of quantum mechanics," and it was the early days of a movement by a few scientists and many philosophers of science to challenge the "orthodox" quantum mechanics. They particularly attacked the Copenhagen Interpretation, with its notorious speculations about the role of the "conscious observer" and its attacks on physical reality.
From the earliest presentations in the late 1920s of the ideas of the supposed "founders" of quantum mechanics, Einstein had deep misgivings about the work going on in Copenhagen, although he never doubted the calculating power of their new mathematical methods, and he came to accept the statistical (indeterministic) nature of quantum physics, which he himself had reluctantly discovered. He described their work as "incomplete" because it is based on the statistical results of many experiments and so makes only probabilistic predictions about individual experiments. Nevertheless, Einstein hoped to visualize what is going on in an underlying "objective reality."
Bell was deeply sympathetic to Einstein's hopes for a return to the "local reality" of classical physics. He identified the EPR paper's title, "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?" as a search for new variables to provide the completeness. Bell thought David Bohm's "hidden variables" were one way to achieve this, though Einstein had called Bohm's approach "too cheap," probably because Bohm included vector potentials traveling faster than light speed, an apparent violation of Einstein's special theory of relativity.
I have been invited to speak on “foundations of quantum mechanics”...
The area in question is that of Einstein, Podolsky, and Rosen. Suppose for example, that protons of a few MeV energy are incident on a hydrogen target. Occasionally one will scatter, causing a target proton to recoil.
Suppose (Fig. 1) that we have counter telescopes T1 and T2 which register when suitable protons are going towards distant counters C1 and C2. With ideal arrangements, registering of both T1 and T2 will then imply registering of both C1 and C2 after appropriate time delays. Suppose next that C1 and C2 are preceded by filters that pass only particles of given polarization, say those with spin projection +1 along the z axis. Then one or both of C1 and C2 may fail to register. Indeed for protons of suitable energy one and only one of these counters will register on almost every suitable occasion — i.e., those occasions certified as suitable by telescopes T1 and T2. This is because proton-proton scattering at large angle and low energy, say a few MeV, goes mainly in S wave. But the antisymmetry of the final wave function then requires the antisymmetric singlet spin state. In this state, when one spin is found “up” the other is found “down”. This follows formally from the quantum expectation value
<singlet|σz(1)σz(2)|singlet> = -1
where ½σz(1) and ½σz(2) are the z component spin operators for the two particles.
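The quoted expectation value is easy to check numerically. The short sketch below (using NumPy; the variable names are ours) builds the singlet state in the z basis and evaluates ⟨singlet|σz(1)σz(2)|singlet⟩:

```python
import numpy as np

# Pauli z matrix and the single-particle basis states
sz = np.array([[1, 0], [0, -1]], dtype=complex)
up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

# Singlet state (|+ -> - |- +>) / sqrt(2)
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

# Two-particle operator sigma_z(1) sigma_z(2)
szsz = np.kron(sz, sz)

expectation = np.real(singlet.conj() @ szsz @ singlet)
print(round(float(expectation), 6))  # -1.0
```

Replacing σz by σx or σy in both factors gives the same value -1, reflecting the rotational invariance of the singlet state.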
Suppose now the source-counter distances are such that the proton going towards C1 arrives there before the other proton arrives at C2. Someone looking at counter C1 will not know in advance whether it will or will not register. But once he has noted what happens to C1 at the appropriate time, he immediately knows what will happen subsequently to C2, however far away C2 may be.
Some people find this situation paradoxical. They may, for example, have come to think of quantum mechanics as fundamentally indeterministic. In particular they may have come to think of the result of a spin measurement on an unpolarized particle (and each particle, considered separately, is unpolarized here) as utterly indefinite until it has happened. And yet here is a situation where the result of such a measurement is perfectly definitely known in advance.
It did become determined (but it was not predetermined beforehand) by the measurement at C1, which collapses the entangled two-particle wave function.
Did it only become determined at the instant when the distant particle passed the distant filter? But how could what happens a long way off change the situation here? Is it not more reasonable to assume that the result was somehow predetermined all along?
Since Bell's original work, many other physicists have defined other "Bell inequalities" and developed increasingly sophisticated experiments to test them. Most recent tests have used oppositely polarized photons coming from a central source. It is the total photon spin of zero that is conserved.
The first experiments that confirmed Bell's Theorem were done by John Clauser and Stuart Freedman in 1971. Clauser and Abner Shimony described the first few experiments in a 1978 review. There they agreed with Bell about measurements on two spin-1/2 particles, as suggested by David Bohm.
A variant of EPR’s argument was given by Bohm (1951), formulated in terms of discrete states. He considered a pair of spatially separated spin-1/2 particles produced somehow in a singlet state, for example, by dissociation of the spin-0 system...
Suppose that one measures the spin of particle 1 along the x axis. The outcome is not predetermined by the description [wave function] Ψ. But from it, one can predict that if particle 1 is found to have its spin parallel to the x axis, then particle 2 will be found to have its spin antiparallel to the x axis if the x component of its spin is also measured.
Thus, the experimenter can arrange his apparatus in such a way that he can predict the value of the x component of spin of particle 2 presumably without interacting with it (if there is no action-at-a-distance).
When the x component is measured, it disrupts the y and z components, rendering them indeterminate. All the spin components are not simultaneously definite! See Dirac's three polarizers.
Likewise, he can arrange the apparatus so that he can predict any other component of the spin of particle 2. The conclusion of the argument is that all components of spin of each particle are definite, which of course is not so in the quantum-mechanical description. Hence, a hidden-variables theory seems to be required.
If all three x, y, z components of spin had definite values of ℏ/2, the resultant vector (the diagonal of a cube with side ℏ/2) would have magnitude (√3/2)ℏ. This is impossible. A measured spin component is always quantized at ±ℏ/2. Unmeasured components are in a linear combination of +ℏ/2 and -ℏ/2 (average value zero!).
The concept of "local" hidden variables is then the simultaneous existence of definite values of σy and σz (both equal to ±ℏ/2) at the same time σx has the measured value ℏ/2!
Although Bell's Theorem is one of the foundational documents in the "Foundations of Quantum Mechanics," it is cited much more often than the confirming experiments are explained, because they are quite complicated. The most famous explanations are given in terms of analogies, with flashing lights, dice throws, or card games. What is needed is an explanation describing what happens to the quantum particles and their statistics.
The most important experiments were likely those done by John Clauser, Michael Horne, Abner Shimony, and Richard Holt (known collectively as CHSH), and later by Alain Aspect, who performed more sophisticated tests.
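The quantity tested in CHSH-type experiments can be sketched from the quantum prediction for the singlet state, where the correlation between spin measurements along coplanar axes at angles θa and θb is E = -cos(θa - θb). With the standard CHSH angle settings (an illustrative choice), the combination S reaches 2√2, above the classical bound of 2:

```python
import numpy as np

def E(theta_a, theta_b):
    """Singlet correlation for spin measurements along coplanar axes
    at angles theta_a and theta_b: E = -cos(theta_a - theta_b)."""
    return -np.cos(theta_a - theta_b)

# The standard CHSH angle settings (radians)
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(round(abs(S), 3))  # 2.828 = 2*sqrt(2), violating the classical bound of 2
```

Any local hidden-variable theory of the kind Bell considered predicts |S| ≤ 2, which is what the experiments rule out.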
When the two particles reach the polarizers a and b they are always found in opposite spin states (one up or +, the other down or -). This is consistent with Einstein's "objective reality" that the particles have had those values since their mutual state was prepared at t=0.
Now this is true whether σx or σy is measured (assuming the transmission axis is along the z direction). But keep in mind that if σx is measured, σy is then indeterminate.
This is why we say that the outcome of a measurement depends on the "free choice" of the experimenter. Measuring in the x direction gives us a value in that direction. Did the spin in the x direction exist before the measurement? Did the spins in the two orthogonal directions exist before the measurement? Those orthogonal spins definitely do not exist after the measurement, since the measurement is also a state preparation.
All three potential spins are latent in the sense that whichever direction is chosen, if the same direction is chosen for the other particle it will be found to have opposite spin. If a different direction is chosen for the other particle, it will not be correlated at all with the first particle spin. By latent we mean perfectly opposite correlations are inherent in all three directions. Now the electron spin is limited to
This is because under exchange of the two indistinguishable particles, the antisymmetric wave function changes its sign.
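The statistics described above — perfect anticorrelation when both sides measure along the same axis, no correlation at all for orthogonal axes — can be illustrated with a small Monte Carlo sketch. The sampling rule below simply encodes the Born-rule probability P(opposite results) = cos²(θ/2) for measurement axes separated by angle θ; it is an illustration of the quantum statistics, not a local hidden-variable model:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_pair(theta, n):
    """Sample n singlet pairs: Alice measures along z, Bob along an axis
    at angle theta from z.  Born rule: P(opposite results) = cos^2(theta/2)."""
    alice = rng.choice([-1, 1], size=n)                # each outcome 50/50
    opposite = rng.random(n) < np.cos(theta / 2) ** 2
    bob = np.where(opposite, -alice, alice)
    return alice, bob

n = 100_000
a0, b0 = sample_pair(0.0, n)          # same axis
corr_same = np.mean(a0 * b0)          # -1.0: always opposite
a1, b1 = sample_pair(np.pi / 2, n)    # orthogonal axes
corr_orth = np.mean(a1 * b1)          # ~ 0: uncorrelated
print(corr_same, round(corr_orth, 3))
```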
Experimental Results
Bell Theorem tests always add what Bell called "filters," polarization analyzers whose polarization angles can be set, sometimes at high speeds between the so-called "first" and "second" measurements. Notice that this represents an interaction that alters the underlying physics, projecting the particles into unpredictable states.
On David Bohm's "Impossible" Pilot Wave
Why is the pilot wave picture ignored in textbooks? Should it not be taught, not as the only way, but as an antidote to the prevailing complacency? To show that vagueness, subjectivity, and indeterminism are not forced on us by experimental facts, but by deliberate theoretical choice?
Bohm’s 1952 papers on quantum mechanics were for me a revelation. The elimination of indeterminism was very striking. But more important, it seemed to me, was the elimination of any need for a vague division of the world into “system” on the one hand, and “apparatus” or “observer” on the other. I have always felt since that people who have not grasped the ideas of those papers ... and unfortunately they remain the majority ... are handicapped in any discussion of the meaning of quantum mechanics.
A preliminary account of these notions was entitled “Quantum field theory without observers, or observables, or measurements, or systems, or apparatus, or wavefunction collapse, or anything like that”. This could suggest to some that the issue in question is a philosophical one. But I insist that my concern is strictly professional. I think that conventional formulations of quantum theory, and of quantum field theory in particular, are unprofessionally vague and ambiguous. Professional theoretical physicists ought to be able to do better. Bohm has shown us a way.
Following John Bell's idea, Nicholas Gisin and Antoine Suarez argue that something might be coming from "outside space and time" to correlate results in their own experimental tests of Bell's Theorem. Roger Penrose and Stuart Hameroff have proposed causes coming "backward in time" to achieve the perfect EPR correlations, as has philosopher Huw Price.
A Preferred Frame?
A little later in the same BBC interview, Bell suggested that a preferred frame of reference might help to explain nonlocality and entanglement.
[Davies] Bell's inequality is, as I understand it, rooted in two assumptions: the first is what we might call objective reality - the reality of the external world, independent of our observations; the second is locality, or non-separability, or no faster-than-light signalling. Now, Aspect's experiment appears to indicate that one of these two has to go. Which of the two would you like to hang on to?
[Bell] Well, you see, I don't really know. For me it's not something where I have a solution to sell! For me it's a dilemma. I think it's a deep dilemma, and the resolution of it will not be trivial; it will require a substantial change in the way we look at things. But I would say that the cheapest resolution is something like going back to relativity as it was before Einstein, when people like Lorentz and Poincare thought that there was an aether - a preferred frame of reference - but that our measuring instruments were distorted by motion in such a way that we could not detect motion through the aether. Now, in that way you can imagine that there is a preferred frame of reference, and in this preferred frame of reference things do go faster than light. But then in other frames of reference when they seem to go not only faster than light but backwards in time, that is an optical illusion.
The standard explanation of entangled particles usually begins with an observer A, often called Alice, and a distant observer B, known as Bob. Between them is a source of two entangled particles. The two-particle wave function describing the indistinguishable particles cannot be separated into a product of two single-particle wave functions.
The problem of faster-than-light signaling arises when Alice is said to measure particle A and then puzzle over how Bob's (later) measurements of particle B can be perfectly correlated, when there is not enough time for any "influence" to travel from A to B.
Back in the 1960s, C. W. Rietdijk and Hilary Putnam argued that physical determinism could be proved to be true by considering the experiments and observers A and B in a "spacelike" separation and moving at high speed with respect to one another. Roger Penrose developed a similar argument in his book The Emperor's New Mind. It is called the Andromeda Paradox.
Because the term "preferred frame" already carries a meaning in special relativity, where all inertial frames are equivalent and none is preferred, we might call this frame a "special frame."
The EPR "paradox" is the result of a naive non-relativistic description of events. Although the two events (measurements of particles A and B) are simultaneous in our special frame, the space-like separation of the events means that from Alice's point of view, any knowledge of event B is out in her future. Bob likewise sees Alice's event A out in his future. These both cannot be true. Yet they are both true (and in some sense neither is true). Thus the paradox.
Instead of just one particle making an appearance in the collapse of a single-particle wave function, in the two-particle case, when either particle is measured, we know instantly those properties of the other particle that satisfy the conservation laws, including its location equidistant from, but on the opposite side of, the source, and its other properties such as spin.
You can compare the collapse of the two-particle probability amplitude above to the single-particle collapse here.
We can enhance our visualization of what might be happening between the time two entangled electrons are emitted with opposite spins and the time one or both electrons are detected.
Quantum mechanics describes the state of the two electrons as in a linear combination of | + - > and | - + > states. We can visualize the electron moving left to be both spin up | + > and spin down | - >. And the electron moving right would be both spin down | - > and spin up | + >. We could require that when the left electron is spin up | + >, the right electron must be spin down | - >, so that total spin is always conserved.
Consider this possible animation of the experiment, which illustrates the assumption that each electron is in a linear combination of up and down spin. It imitates the superposition (or linear combination) with up and down arrows on each electron oscillating quickly.
Notice that if you move the animation frame by frame by dragging the dot in the timeline, you will see that total spin = 0 is conserved. When one electron is spin up the other is always spin down.
Note that we can challenge the idea that spins are oscillating. Would a force of some kind be needed to change the spins in sync? Perhaps we can see the rapid changes like resonance phenomena in molecular bonds?
Standard quantum mechanics says we cannot know the spin until it is measured, our minimal information estimate is a 50/50 probability between up and down.
This is the same as assuming Schrödinger's Cat is 50/50 alive and dead. But what this means of course is simply that if we do a large number of identical experiments, the statistics for live and dead cats will be approximately 50/50. We never observe/measure a cat that is both dead and alive!
As Einstein noted, QM tells us nothing about individual cats. Quantum mechanics is incomplete in this respect. He is correct, although Bohr and Heisenberg insisted QM is complete, because we cannot know more before we measure, and reality is created (they say) when we do measure.
Despite accepting that a particular value of an "observable" can only be known by a measurement (knowledge is an epistemological problem), Einstein asked whether the particle actually (really, ontologically) has a path and position before we measure it. His answer was yes.
             left                      right
      σx     σy     σz          σx     σy     σz
      +      +/-    +/-         -      +/-    +/-
      +/-    +      +/-         +/-    -      +/-
      +/-    +/-    +           +/-    +/-    -
      -      +/-    +/-         +      +/-    +/-
      +/-    -      +/-         +/-    +      +/-
      +/-    +/-    -           +/-    +/-    +
Below is an animation that illustrates the unprovable assumption that the two electrons are randomly produced in states that have latent components that conserve spin momentum, and that they remain in those states no matter how far they separate, provided neither interacts with anything else before the measurement.
Since each electron has only one unit of electron spin (a magnetic moment equal to one Bohr magneton), we can only say that if measured in a given direction, the spin will be projected into that direction for the left electron, into the opposite direction for the right electron.
The table shows six possible outcomes. This animation measures in the y direction (the second - yellow - row).
Werner Heisenberg and later Paul Dirac and others refer to the "free choice" of the experimenter as to which direction is chosen to measure. But then Dirac adds that nature makes a random choice as to whether to find the electron spin is up or down in that chosen direction.
Now entanglement adds the nonlocality and non-separability that is caused by the (single) two-particle wave function collapsing symmetrically and simultaneously in our special frame.
How Mysterious Is Entanglement?
Schrödinger knew that his two-particle wave function Ψ12 could not have the same simple interpretation as the single particle, which can be visualized in ordinary 3-dimensional configuration space. And he is right that entanglement apparently exhibits a richer form of the "action-at-a-distance" and nonlocality that Einstein had already identified in the collapse of the single particle wave function.
But the main difference is that two particles acquire new properties instead of one, and they appear to do so instantaneously (at faster than light speed), just as in the single-particle case, where a measurement instantly makes the probability of finding that particle anywhere else zero.
Nonlocality and entanglement are thus just another manifestation of Richard Feynman's "only" mystery. In both single-particle and two-particle cases paradoxes appear only when we attempt to describe individual particles following specific paths to measurement by observer A (and/or observer B).
We cannot know the specific paths at every instant without measurements. But Einstein has told us that at every instant the particles are conserving linear momentum and electron spin, despite our lack of knowledge during individual experiments.
We can ask what happens if Bob is not at the same distance from the origin as Alice, but farther away. When Alice detects the particle (with say spin up), at that instant the other particle also becomes determinate (with spin down) at the same distance on the other side of the origin. It now continues, in that determinate state, to Bob's measuring apparatus.
Recall Bell's description of the process (quoted above), with its mistaken bias toward assuming first one measurement is made, and the other measurement is made later.
If measurement of the component σ1 • a, where a is some unit vector, yields the value + 1 then, according to quantum mechanics, measurement of σ2 • a must yield the value — 1 and vice versa... Since we can predict in advance the result of measuring any chosen component of σ2, by previously measuring the same component of σ1, it follows that the result of any such measurement must actually be predetermined.
Since the collapse of the two-particle wave function is indeterminate, nothing is pre-determined, although σ2 is indeed determined to have opposite sign (to conserve spin momentum) once σ1 is measured. Here Bell is describing the "following" measurement as being in the same direction as the "previous" measurement. In Bell's terms, Bob is measuring "the same component" as Alice.
If Bob should measure even a fraction of a second after Alice, but measure in a different spin direction (a different component), his measurements will be found to be 50/50, up and down, or + and -.
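This 50/50 prediction follows directly from collapsing the two-particle wave function. A minimal sketch with NumPy (our own variable names): project the singlet onto Alice's +z outcome, then compute the probability that Bob finds +x:

```python
import numpy as np

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

# Alice finds spin up along z: project particle 1 onto |up><up|
P_up1 = np.kron(np.outer(up, up.conj()), np.eye(2))
collapsed = P_up1 @ singlet
collapsed = collapsed / np.linalg.norm(collapsed)  # Bob's particle is now |down_z>

# Bob measures along x; his "+" eigenstate is (|up> + |down>)/sqrt(2)
plus_x = (up + down) / np.sqrt(2)
P_plusx2 = np.kron(np.eye(2), np.outer(plus_x, plus_x.conj()))
p = np.linalg.norm(P_plusx2 @ collapsed) ** 2
print(round(float(p), 6))  # 0.5: Bob's x results are 50/50
```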
To recap our picture of entanglement measurements:
1) the position measurements are always symmetric and equidistant from the central source as Einstein clearly explained, in order to conserve linear momentum.
2) Spin momentum is also conserved, without any hidden variables or communications between the particles. Here the relationship is antisymmetric, as required under interchange of the indistinguishable electrons.
And 3), we can imagine the two spin vectors precessing as the electrons travel through coordinate space, always preserving opposite directions to conserve momentum. Without measurements, no observer can ever know these values. But measurements will always produce one of the six possible outcomes predicted above, confirming the quantum statistics.
In 1987, Bell contributed an article to a centenary volume for Erwin Schrödinger entitled
Are There Quantum Jumps? Schrödinger denied such jumps or any collapses of the wave function. Bell's title was inspired by two articles with the same title by Schrödinger in 1952 (Part I, Part II).
Just a year before Bell's death in 1990, physicists assembled for a conference on 62 Years of Uncertainty (referring to Werner Heisenberg's 1927 principle of indeterminacy).
John Bell's contribution to the conference was an article called "Against Measurement." In it he attacked Max Born's statistical interpretation of quantum mechanics. And he praised the new ideas of GianCarlo Ghirardi and his colleagues, Alberto Rimini and Tullio Weber:
In the beginning, Schrödinger tried to interpret his wavefunction as giving somehow the density of the stuff of which the world is made. He tried to think of an electron as represented by a wavepacket — a wave-function appreciably different from zero only over a small region in space. The extension of that region he thought of as the actual size of the electron — his electron was a bit fuzzy. At first he thought that small wavepackets, evolving according to the Schrödinger equation, would remain small. But that was wrong. Wavepackets diffuse, and with the passage of time become indefinitely extended, according to the Schrödinger equation. But however far the wavefunction has extended, the reaction of a detector to an electron remains spotty. So Schrödinger's 'realistic' interpretation of his wavefunction did not survive.
Then came the Born interpretation. The wavefunction gives not the density of stuff, but gives rather (on squaring its modulus) the density of probability. Probability of what exactly? Not of the electron being there, but of the electron being found there, if its position is 'measured.'
Why this aversion to 'being' and insistence on 'finding'? The founding fathers were unable to form a clear picture of things on the remote atomic scale. They became very aware of the intervening apparatus, and of the need for a 'classical' base from which to intervene on the quantum system. And so the shifty split.
The kinematics of the world, in this orthodox picture, is given a wavefunction (maybe more than one?) for the quantum part, and classical variables — variables which have values — for the classical part: (Ψ(t, q, ...), X(t),...). The Xs are somehow macroscopic. This is not spelled out very explicitly. The dynamics is not very precisely formulated either. It includes a Schrödinger equation for the quantum part, and some sort of classical mechanics for the classical part, and 'collapse' recipes for their interaction.
It seems to me that the only hope of precision with the dual (Ψ, x) kinematics is to omit completely the shifty split, and let both Ψ and x refer to the world as a whole. Then the xs must not be confined to some vague macroscopic scale, but must extend to all scales. In the picture of de Broglie and Bohm, every particle is attributed a position x(t). Then instrument pointers — assemblies of particles have positions, and experiments have results. The dynamics is given by the world Schrödinger equation plus precise 'guiding' equations prescribing how the x(t)s move under the influence of Ψ. Particles are not attributed angular momenta, energies, etc., but only positions as functions of time. Peculiar 'measurement' results for angular momenta, energies, and so on, emerge as pointer positions in appropriate experimental setups. Considerations of KG [Kurt Gottfried] and vK [N. G. van Kampen] type, on the absence (FAPP) [For All Practical Purposes] of macroscopic interference, take their place here, and an important one, in showing how usually we do not have (FAPP) to pay attention to the whole world, but only to some subsystem and can simplify the wave-function... FAPP.
The Born-type kinematics (Ψ, X) has a duality that the original 'density of stuff' picture of Schrödinger did not. The position of the particle there was just a feature of the wavepacket, not something in addition. The Landau—Lifshitz approach can be seen as maintaining this simple non-dual kinematics, but with the wavefunction compact on a macroscopic rather than microscopic scale. We know, they seem to say, that macroscopic pointers have definite positions. And we think there is nothing but the wavefunction. So the wavefunction must be narrow as regards macroscopic variables. The Schrödinger equation does not preserve such narrowness (as Schrödinger himself dramatised with his cat). So there must be some kind of 'collapse' going on in addition, to enforce macroscopic narrowness. In the same way, if we had modified Schrödinger's evolution somehow we might have prevented the spreading of his wavepacket electrons. But actually the idea that an electron in a ground-state hydrogen atom is as big as the atom (which is then perfectly spherical) is perfectly tolerable — and maybe even attractive. The idea that a macroscopic pointer can point simultaneously in different directions, or that a cat can have several of its nine lives at the same time, is harder to swallow. And if we have no extra variables X to express macroscopic definiteness, the wavefunction itself must be narrow in macroscopic directions in the configuration space. This the Landau—Lifshitz collapse brings about. It does so in a rather vague way, at rather vaguely specified times.
In the Ghirardi—Rimini—Weber scheme (see the contributions of Ghirardi, Rimini, Weber, Pearle, Gisin and Diosi presented at 62 Years of Uncertainty, Erice, Italy, 5-14 August 1989) this vagueness is replaced by mathematical precision. The Schrödinger wavefunction even for a single particle, is supposed to be unstable, with a prescribed mean life per particle, against spontaneous collapse of a prescribed form. The lifetime and collapsed extension are such that departures of the Schrödinger equation show up very rarely and very weakly in few-particle systems. But in macroscopic systems, as a consequence of the prescribed equations, pointers very rapidly point, and cats are very quickly killed or spared.
The orthodox approaches, whether the authors think they have made derivations or assumptions, are just fine FAPP — when used with the good taste and discretion picked up from exposure to good examples. At least two roads are open from there towards a precise theory, it seems to me. Both eliminate the shifty split. The de Broglie—Bohm-type theories retain, exactly, the linear wave equation, and so necessarily add complementary variables to express the non-waviness of the world on the macroscopic scale. The GRW-type theories have nothing in the kinematics but the wavefunction. It gives the density (in a multidimensional configuration space!) of stuff. To account for the narrowness of that stuff in macroscopic dimensions, the linear Schrödinger equation has to be modified, in this GRW picture by a mathematically prescribed spontaneous collapse mechanism.
The big question, in my opinion, is which, if either, of these two precise pictures can be redeveloped in a Lorentz invariant way.
...All historical experience confirms that men might not achieve the possible if they had not, time and time again, reached out for the impossible. (Max Weber)
...we do not know where we are stupid until we stick our necks out. (R. P. Feynman)
On the 22nd of January 1990, Bell gave a talk explaining his theorem at CERN in Geneva, organized by Antoine Suarez, director of the Center for Quantum Philosophy. There are links on the CERN website to the video of this talk, and to a transcription.
In this talk, Bell summarizes the situation as follows:
It just is a fact that quantum mechanical predictions and experiments, in so far as they have been done, do not agree with [my] inequality. And that's just a brutal fact of nature...that's just the fact of the situation; the Einstein program fails, that's too bad for Einstein, but should we worry about that?
I cannot say that action at a distance is required in physics. But I can say that you cannot get away with no action at a distance. You cannot separate off what happens in one place and what happens in another. Somehow they have to be described and explained jointly.
Bell gives three reasons for not worrying.
1. Nonlocality is unavoidable, even if it looks like "action at a distance."
[It does not, with a proper understanding of quantum physics. See our EPR page.]
2. Because the events are in a spacelike separation, either one can occur before the other in some relativistic frame, so no "causal" connection can exist between them.
3. No faster-than-light signals can be sent using entanglement and nonlocality.
He concludes:
So as a solution of this situation, I think we cannot just say 'Oh oh, nature is not like that.' I think you must find a picture in which perfect correlations are natural, without implying determinism, because that leads you back to nonlocality. And also in this independence as far as our individual experiences goes, our independence of the rest of the world is also natural. So the connections have to be very subtle, and I have told you all that I know about them. Thank you.
The work of GianCarlo Ghirardi that Bell endorsed is a scheme that makes the wave function collapse by adding small (of order 10⁻²⁴) nonlinear and stochastic terms to the linear Schrödinger equation. GRW cannot predict when and where their collapse occurs (it is simply random), but contact with macroscopic objects such as a measuring apparatus (with on the order of 10²⁴ atoms) makes the probability of collapse of order unity.
Information physics removes Bell's "shifty split" without "hidden variables" or making ad hoc non-linear additions like those of Ghirardi-Rimini-Weber to the linear Schrödinger equation. The "moment" at which the boundary between quantum and classical worlds occurs is the moment that irreversible observable information enters the universe.
So we can now look at John Bell's diagram of possible locations for his "shifty split" and identify the correct moment - when irreversible information enters the universe.
In the information physics solution to the problem of measurement, the timing and location of Bell's "shifty split" (the "cut" or "Schnitt" of Heisenberg and von Neumann) are identified with the interaction between quantum system and classical apparatus that leaves the apparatus in an irreversible stable state providing information to the observer.
As Bell may have seen, it is therefore not a "measurement" by a conscious observer that is needed to "collapse" wave functions. It is the irreversible interaction of the quantum system with another system, whether quantum or approximately classical. The interaction must be one that changes the information about the system. And that means a local entropy decrease and overall entropy increase to make the information stable enough to be observed by an experimenter and therefore be a measurement.
Against Measurement (PDF)
Beables for Quantum Field Theory (PDF)
On the Einstein-Podolsky-Rosen Paradox (PDF)
On the Impossible Pilot Wave (PDF)
Are There Quantum Jumps? (PDF, Excerpt)
BBC Interview (PDF, Excerpt)
9710b5df6c022d2f | In many-body theory (and quantum field theory I suppose) we often work in the grand canonical ensemble, where the number of particles in the system is only fixed on average. The density operator used to compute expectation values is
$$ \rho = \frac{e^{-\beta (H - \mu N)}}{Z_{G}} = \frac{e^{-\beta K}}{Z_{G}} $$
where $K$ is the so-called grand canonical hamiltonian.
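For concreteness, the diagonal of $\rho$ and the average particle number can be evaluated for a single bosonic mode in a truncated Fock space. This is a minimal illustrative sketch; all parameter values are assumptions, not taken from the question:

```python
import numpy as np

# Grand canonical density operator rho = exp(-beta*(H - mu*N)) / Z_G
# for a single bosonic mode of energy eps, truncated to d Fock states.
# All parameter values here are illustrative assumptions.
d, eps, mu, beta = 30, 1.0, 0.2, 2.0

n = np.arange(d)          # eigenvalues of the number operator N
K = (eps - mu) * n        # grand canonical hamiltonian K = H - mu*N (diagonal)
w = np.exp(-beta * K)     # unnormalized Boltzmann weights
Z_G = w.sum()             # grand partition function (truncated sum)
rho = w / Z_G             # diagonal of the density operator; sums to 1

N_avg = (n * rho).sum()   # <N> is fixed only on average; here it reproduces
                          # the Bose-Einstein value 1/(exp(beta*(eps-mu)) - 1)
print(N_avg)
```

The truncation at `d = 30` is harmless here since the Boltzmann weights fall off geometrically.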
My problem is when we substitute $K$ for $H$ in the expression of the evolution operator, $ U(t) = e^{-iHt} \rightarrow e^{-iKt} $, which is done most of the time because it simplifies calculations. It seems equivalent to saying that the Schrödinger equation is invariant under the change $H \rightarrow K$.
The justifications I've seen so far are based on the fact that since the original hamiltonian conserves the number of particles and thus commutes with the operator $N$, this replacement is just a displacement in energy and does not essentially change the dynamics of the system. I'm not really convinced by this, because the replacement with $K$ is not equivalent to adding a simple constant to the hamiltonian.
Additionally, it seems that if this argument is true, then in general one would be justified in constructing a new hamiltonian $H' = H + \mathcal{O}$ to describe the dynamics of a system, as long as $[H, \mathcal{O}]=0$.
Given that, my questions are as follows:
1. Is the above statement true, that you can replace a hamiltonian $H$ with $H' = H + \mathcal{O}$ when $[H, \mathcal{O}]=0$? If it isn't, is there anything special about the case $\mathcal{O} = \mu N$, or are there some caveats?
2. Thermodynamic ensembles are often defined by density operators of the form $\rho=e^{-\beta S}$, with $S$ a linear combination of $H$ and various conserved operators $c_j \mathcal{O}_j$. Could we get rid of those ensembles by just including the operators $c_j \mathcal{O}_j$ in the hamiltonian in the first place and finding $c_j$ such that $\langle \mathcal{O}_j \rangle$ is the value we want? $c_j$ would have the physical interpretation of a classical field that couples to $\mathcal{O}_j$.
• $\begingroup$ AndreaPaco, the questions you've put in the bounty message might work better as a new question... $\endgroup$ – Nathaniel Jun 25 '17 at 23:56
The Schrödinger equation is not invariant under the change $H\rightarrow K$. However, in the grand canonical ensemble with some values of inverse temperature $\beta$ and chemical potential $\mu$, the fluctuation of the number of particles $N$ is of the order of $\sqrt{\langle N \rangle }$, where $\langle N \rangle $ is the average number of particles in this ensemble. This fluctuation is much less than $\langle N\rangle$, so the states of this ensemble most probably have a number of particles that is very close to $ \langle N \rangle $, and $K=H-\mu N$ gives pretty much the same evolution as $K^\prime=H-\mu \langle N\rangle$ for this specific grand canonical ensemble. And $K^\prime$ is indeed just $H$ with a constant shift.
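The $\sqrt{\langle N \rangle}$ scaling invoked here is easy to check for independent fermionic modes, where $\langle N \rangle = \sum_k f_k$ and $\mathrm{Var}(N) = \sum_k f_k(1-f_k)$ with $f_k$ the Fermi-Dirac occupation. A sketch with made-up mode energies (all parameters are illustrative):

```python
import numpy as np

# Fluctuation of N in the grand canonical ensemble for M independent
# fermionic modes: <N> = sum_k f_k and Var(N) = sum_k f_k (1 - f_k),
# with f_k the Fermi-Dirac occupation. Mode energies are made up; this
# only illustrates the sqrt(<N>) scaling.
rng = np.random.default_rng(0)
beta, mu = 5.0, 0.5
eps = rng.uniform(0.0, 1.0, size=1_000_000)     # M random mode energies

f = 1.0 / (np.exp(beta * (eps - mu)) + 1.0)     # mean occupations
N_avg = f.sum()
N_std = np.sqrt((f * (1.0 - f)).sum())          # independent modes add variances

# Since f(1-f) <= f, the standard deviation is at most sqrt(<N>), and
# the relative fluctuation N_std / N_avg vanishes as the system grows.
print(N_std / N_avg)
```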
• $\begingroup$ Thanks for your reply, but I fear you did not answer some points. What you've explained is the usual role of the grand-canonical ensemble, a typical approach used also in classical Statistical Physics. What I am interested in is how the substitution $H\to K$ affects the $dynamics$ of the system. As I asked in the bounty disclaimer, please explain how to compute the shift for energies and explain the physical meaning of the Heisenberg equations that one can derive from $K$. From a dynamical point of view, does it correspond to a canonical transformation? $\endgroup$ – AndreaPaco Jun 30 '17 at 7:33
$\begingroup$ @AndreaPaco: I did not intend to answer all questions, but I believe I did explain how the substitution affects the dynamics: not much, as the number of particles in the ensemble is pretty close to $\langle N \rangle$; and the energy shift is approximately $-\mu\langle N \rangle$ $\endgroup$ – akhmeteli Jun 30 '17 at 23:15
• $\begingroup$ @AndreaPaco: I kind of like this argument. However, your reasoning seems to be applicable to any macroscopic operator, not just $N$. Also, what do you mean by "for this specific grand canonical ensemble"? $\endgroup$ – Undead Jul 1 '17 at 23:33
• $\begingroup$ @Undead : "However, your reasoning seems to be applicable to any macroscopic operator, not just $N$." - This is an interesting thought, and it may be correct in some cases, but not always. For example, the reasoning is not applicable for some macroscopic operators in systems with spontaneous symmetry breaking (such as magnetization in ferromagnetics). "Specific grand ensemble" - a grand ensemble with a specific Hamiltonian, temperature and chemical potential. $\endgroup$ – akhmeteli Jul 2 '17 at 1:44
Actually, replacing $H \to H - \mu N$ does affect the dynamics substantially, but it does so in a rather trivial way. This is precisely because $H$ and $N$ commute. If two operators $A$ and $B$ commute, and only then, we can factor an exponential as $e^{A + B} = e^{A} e^{B}$. This allows us to conclude that $e^{-iHt} = e^{-iN\mu t} e^{-iKt}$. That is, we are free to evolve with $K$ as long as we remember that at the end of the day we have to apply $e^{-iN\mu t}$ to recover the "true" time evolution. This is generally pretty easy since $N$ is a very simple operator. Another way to think about it is that we are working in a "rotating frame" -- we study the evolution not of the actual state of the system $|\psi(t)\rangle$, but of the "rotated" state $e^{iN\mu t} |\psi(t)\rangle$.
The answer above, which claimed that the differences are insignificant because $\sqrt{ \langle (N - \bar{N})^2 \rangle }$ is small, is not correct. In the thermodynamic limit, this quantity still goes to infinity. Its ratio with $\bar{N}$ goes to zero, but it would have to go to zero in absolute value to not affect the dynamics, and this is not the case. In fact, one can verify that, for example, the expectation value (in a bosonic system) of a boson creation operator $\langle a^{\dagger} \rangle$ rotates at a different frequency under $K$ than under $H$ (specifically, the difference in frequency is $\mu$). This is precisely accounted for by the extra rotation by $e^{iN \mu t}$ discussed above.
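The factorization can be checked numerically. With $K = H - \mu N$ and $[H, N] = 0$, the exponentials factor as $e^{-iHt} = e^{-i\mu N t}\, e^{-iKt}$. The sketch below verifies this for two bosonic modes coupled by a hopping term, so that $H$ conserves $N$ but is not diagonal; the Fock-space truncation and all parameter values are illustrative assumptions:

```python
import numpy as np

def expm_herm(A, c):
    """e^{c A} for Hermitian A, via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(c * w)) @ V.conj().T

# Two bosonic modes (Fock truncation d = 3 each) with a hopping term g,
# so H conserves N = n1 + n2 but is not diagonal. Parameters are made up.
d = 3
a = np.diag(np.sqrt(np.arange(1, d)), 1)   # single-mode annihilation operator
I = np.eye(d)
a1, a2 = np.kron(a, I), np.kron(I, a)
n1, n2 = a1.T @ a1, a2.T @ a2

w1, w2, g, mu, t = 1.0, 1.3, 0.4, 0.7, 2.0
H = w1 * n1 + w2 * n2 + g * (a1.T @ a2 + a2.T @ a1)
N = n1 + n2
K = H - mu * N

assert np.allclose(H @ N, N @ H)           # [H, N] = 0, so exponentials factor

# With K = H - mu*N the factorization reads e^{-iHt} = e^{-i mu N t} e^{-iKt}
U_H = expm_herm(H, -1j * t)
U_K = expm_herm(K, -1j * t)
assert np.allclose(U_H, expm_herm(N, -1j * mu * t) @ U_K)
```

Note that `U_H` and `U_K` themselves are not equal: the $K$-evolution differs from the true one by exactly the rotation $e^{-i\mu N t}$, which acts nontrivially on operators like $a^{\dagger}$ that connect sectors of different $N$.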
• $\begingroup$ I am afraid I don't understand your critique of my answer. It looks like your arguments (for example, the argument about the boson creation operator) are also applicable to the case where you just add a constant to the Hamiltonian. If you believe the dynamics will be different in that case, I disagree, as that would mean that, for example, a gauge change changes dynamics. $\endgroup$ – akhmeteli Nov 11 '18 at 19:26
• $\begingroup$ @akhmeteli I don't understand your comment. If you add a constant to the Hamiltonian then the wavefunction just accumulates an extra phase. The N operator is a non-trivial operator hence generates non-trivial dynamics. It's true that in each N sector, you are just adding a constant. So the dynamics in each sector will not change. But that doesn't apply to $a^\dagger$ as it connects sectors of different N. $\endgroup$ – Dominic Else Nov 12 '18 at 4:25
• $\begingroup$ Relative fluctuations of the energy in the ensemble are of the same order as the relative fluctuations of the number of particles, so I don't understand why changes of dynamics due to fluctuations of the number of particles are more important. $\endgroup$ – akhmeteli Nov 12 '18 at 6:46
• $\begingroup$ @akhmeteli The Hamiltonian $H$ generates the correct time evolution for any state, regardless of its energy distribution. Passing from $H$ to $H - \mu N$ is the only thing that needs to be justified, and that requires considering the dynamics generated by $N$. $\endgroup$ – Dominic Else Nov 14 '18 at 5:43
• $\begingroup$ "The Hamiltonian $H$ generates the correct time evolution for any state" - technically, that is correct, but it does not matter, as you cannot be sure that your specific state has the density operator of the canonical ensemble. E.g., how can you be sure that your specific state should not be described by the microcanonical ensemble? The reason we can successfully use the canonical state is that small deviations from this state do not affect the results. So when we use the grand canonical ensemble we make an error of the same order of magnitude as the error we make using the canonical ensemble. $\endgroup$ – akhmeteli Nov 14 '18 at 7:53
Quantum math: garbage in, garbage out?
8 thoughts on “Quantum math: garbage in, garbage out?”
1. In considering the “line of sight” that acts as the third axis, this is considered the z-axis and the direction of propagation, as you probably know. However, I too have found the asymmetry of orbitals to be bizarre. When solving for rotational motion, we work around the z-axis again, which seems arbitrary unless we consider the z-axis again as the direction of propagation. However, I think this means we can conclude that the atom itself cannot STOP moving, otherwise the motion of the electron around the nucleus becomes erratic. (After all, the electron is or is acting along a wave in one direction.) Instead, the orbitals themselves exist in such asymmetric forms because the atom is traveling along the z-axis. If we work with this assumption, then this may actually make solving the three-particle problem (proton-electron-electron, p-p-e, e-e-e, etc.) (slightly) easier. After all, the xyz-axis would not be arbitrary NOR could the quantum solutions of each electron around the proton be arbitrary, because both electrons have to rotate with motion with respect to the z-axis. In some ways, this makes me think that the rotation of the electrons around the proton can again be treated as “two-dimensional” problems, though admittedly, we would need an integral (from calculus) to create the full “stack” of solutions. What do I mean? Consider this: Let each electron be rotating around the proton at a FIXED z-axis value. Then, the full “solution” is the “sum” of all of these solutions for each z-axis value of the electron. I make it sound simple, though admittedly, the math won’t be “easy” per se. However, a fixed z-axis value eliminates one degree of freedom.
And I just thought of all this as I’m writing. Now I really want to try the math on this. (Of course, the math may not work out as I’d like, but we’ll see.) Thank you so much for your inspiration!
2. I sat down this evening and punched out an equation for the potential energy to depend only on the distances of the electrons from the z-axis (which can be kept constant) and an angle theta that defines the offset angle (from the x-axis) of the primary free particle (electron) while keeping the secondary free particle (electron) static. I did this by forbidding motion of the electron along the z-axis. If I plugged my potential energy formula into the Schrödinger equation, I could use it to give me a 3D slice of the complete 5D solution (5-degrees of freedom because we can confine the secondary free particle to the x-axis, or more precisely, one plane of freedom (after all, it doesn’t matter where the x-axis is placed around the z-axis – only the relative distance between electrons matters for the xy-plane). (I misspoke in my previous comment by saying I needed an “integral from calculus”. What I was thinking was that my solutions were going to form slices of the whole picture.)
Obviously, using only theta gives us one degree of freedom, which creates a… um… circle. The interesting part is the amplitudes at each point in the circle, which no doubt will not be the same as in a two-particle system. I would not expect the amplitude to be uniform, however, if the electron is truly acting as a wave (as in the two-particle system solution), then I would expect it to have a nice, complete oscillation around the center.
To have a complete 3D slice requires two other degrees of freedom. I think the most useful thing to do (and in order to complete two “sets”, which I’ll mention in a moment) would be to allow the distances of the electrons from the z-axis to change, but NOT the distance along the z-axis. The nice part about this is that, rather than using the complicated spherical coordinate curl operator, we only have to use the polar coordinate one (which is much more friendly). These three degrees of freedom (theta, e1 distance from z-axis, and e2 distance from z-axis) create a general solution (or a “set” of solutions) for a pair of z-axis values z1 and z2 (the distances of the electrons from the nucleus along the z-axis). The next general solution (set of solutions) is that of picking a fixed theta and distances from the z-axis value. Then we let the electron distances from the nucleus along the z-axis vary. In fact, some of these solutions are even symmetrical (as they should be)! There are only two cases: where both electrons are “above” (or both below) the nucleus and where each electron is on opposite sides of the nucleus.
To get the true trajectory of the electron would require picking initial conditions and incrementing the electrons along their identified trajectories, as you would have to do for any sufficiently advanced problem. But that doesn’t give us nice pretty pictures. At least by obtaining slices, we can create graphs on the computer and combine them to create a more complete representation of these 5D orbitals.
All of this treats the electrons as particles following harmonic oscillation. As I think about this, it seems to me that the wave function itself isn’t necessarily a “thing” so much as a law – that the particle, by its nature, must obey harmonic motion. This says something fundamental about the particle and the wave function because it means that the wave function is not some meaningless mathematical gimmick – it is the MOTION path of the electron, which is what it should be. At the same time, it doesn’t defy the interpretation that the squared value of the amplitude of the wave function is also the probability of finding the particle. However, rather than “God rolling the dice”, the electron is simply moving along that path. This makes quantum tunneling easier to understand: rather than the particle randomly “popping” out of its potential energy “well” (hole, ditch, whatever), it happens to be moving outside of it all along! Then, as it finds itself on the part of its path outside of the potential well, perhaps something collides with it or the nucleus moves away from it, wrecking its path just enough to where it finds itself escaping.
And there you have it: Quantum tunneling from a deterministic perspective. It doesn’t seem like such a weird phenomenon to me now.
Now let’s answer the question as to WHY the electron must obey the law of harmonic motion: The electron seems to be an oscillating wave of electromagnetic energy and mass, which is what we’re discussing here. In other words, the oscillations that form the wave function of the particle must “align” so that the wave doesn’t interfere with itself and self-destruct. The existence of the particle is the wave.
That said, we can say that the particle does have both definite position AND velocity, despite the Heisenberg Uncertainty Principle (“HUP” for short). In fact, the HUP says something about the nature of the available SOLUTIONS but NOT about the nature of the particle. In other words, the HUP is misinterpreted, in my opinion. When we pick a position of the particle, the wave function equation spreads out over space. This isn’t wrong at all. In fact, this is what reality SHOULD say: it means that any SINGLE position in space can have ANY wave function. Hence, any and every wave is a valid solution! When we pick a velocity of the particle, the wave function extends to infinity again but allows us any point along that direction of propagation. Again, this is not wrong at all. It simply means that a particle of a particular velocity can be located anywhere along that wave. In fact, ALL positions along that wave are valid!
Thus, my view of the electron now is that it is a point-wave that is obeying its “space field”. Imagine if you were to create a 3D wave (2D is easier to imagine) extending over all of space and that this wave maintained complete uniformity throughout. That is, it looked no different anywhere. Let’s call this “wave-space”. Then pick a single point in wave-space. That’s a particle. Whenever that particle moves, it takes on the new wave amplitude of that position in wave-space. Interference with other particles makes it “pick” a new wave-space (not just a new wave-space position). Wave-space itself is a 3D electromagnetic field, wherein each point represents some amplitude of electricity and amplitude of magnetism.
Naturally, then, this helps us make sense as to why photons, electrons, and protons interact, but not photons and, say, Higgs bosons (if they exist). Photons directly interfere with the very essence of the electron (a point in wave-space), causing it to move into a new wave-space. The resulting wave-space is still uniform (making the electron motion harmonic), but now the electron is traveling with a new velocity.
Notably, I say “wave-space” as though it were its own “thing”, but I don’t mean that the space itself exists in its entirety alone and apart from the electron. The only “existing” part of wave-space is the point that the electron (or proton or photon) occupies. It’s simply a tool to represent the oscillation of the particle itself, which is an electromagnetic point oscillating like a wave as it moves through space.
I’ll pause there for now. Thoughts?
• Wow! As mentioned in my other reply, we shouldn’t discuss by filling pages of comments. Let’s work on a little paper together for the Los Alamos e-print archives or… Well… If we can add enough academic references and all of the other garbage reviewers expect to see, perhaps for a more serious physics e-journal?
3. Oh, and adding to that too: It should make sense now why an electron can interfere with itself in a single-slit diffraction experiment. The other particles forming the hole affect the wave-space that the electron can have, which ultimately affects its trajectory. In other words, the electron is “doomed” or “predestined” to follow along the harmonic path. The hole itself determines the wave-space, and the individual electron positions as they pass through the hole will determine where along that path they travel. And since there are multiple particles involved, the wave-space isn’t aligned like window blinds. Instead, as we see, the trajectories are spread out. Even still, an electron isn’t necessarily going to “fan out” (that is, just because it’s at the “top” of the slit doesn’t mean it’s going to be at the “top” of the detector screen). Instead, its final position may be determined by even the tiniest of digits in its angle as it enters the slit. The distance from the slit to the detector merely serves to amplify what “wave-space looks like” in an area the size of the slit. In other words, the single slit experiment acts like a kind of microphone, amplifying space. The smaller the slit, the smaller the sample of space. The larger the slit, the larger the sample of space, which, as it gets noisier, causes a loss of the diffraction pattern.