The Nature of the Wave Function in the Time Independent Schrödinger Equation
San José State University
Thayer Watkins
Silicon Valley
& Tornado Alley
When Erwin Schrödinger formulated the wave mechanics version of quantum physics in 1926 he did not specify what the wave function ψ represented. He thought its squared magnitude would represent something physical such as charge density. Max Born suggested that its squared magnitude represented the probability density for finding the particle near a particular location. Niels Bohr and his group in Copenhagen concurred, and the notion that the wave function represents the intrinsic indeterminacy of the particles of the system came to be known as the Copenhagen Interpretation.
First consider a particle classically traversing a periodic trajectory. Let s be path length and let the velocity v be given as a function of s, say v(s). The time spent in an interval of length Δs is Δs/|v(s)|, where |v(s)| is the average speed of the particle in that interval. As Δs→ds the ratio ds/|v(s)| becomes something in the nature of a density. If the values of 1/|v(s)| are normalized the resulting function can be considered to be a probability density function P(s). The normalizing constant is just the time period of the trajectory cycle.
P(s) = 1/(T|v(s)|)
where ∫ds/|v(s)| = ∫dt = T
The kinetic energy of the particle is given by
K = ½mv²
and thus
v = (2K/m)½
and hence
P(s) = (m/2)½/(T·K(s)½)
Any constant multiplier is irrelevant in determining the probability density because it shows up in the normalizing constant as well and cancels out. The probability density for a classical particle is inversely proportional to the square root of the kinetic energy.
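A short numerical sketch (my illustration, not part of the original argument) makes this concrete for a classical harmonic oscillator: histogramming the positions of a trajectory sampled uniformly in time reproduces the dwell-time density P(x) = 1/(π√(A²−x²)), which is indeed proportional to 1/√K(x).

```python
# Illustrative sketch: classical dwell-time density for a harmonic oscillator.
# The histogram of positions sampled uniformly in time should match
# P(x) = 1/(pi*sqrt(A^2 - x^2)), i.e. be proportional to 1/sqrt(K(x)).
import numpy as np

A, omega = 1.0, 2.0                      # amplitude, angular frequency (arbitrary units)
T = 2 * np.pi / omega                    # period of the trajectory cycle

t = np.linspace(0.0, T, 2_000_000, endpoint=False)
x = A * np.sin(omega * t)                # the trajectory
counts, edges = np.histogram(x, bins=50, range=(-A, A), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

P = 1.0 / (np.pi * np.sqrt(A**2 - centers**2))   # analytic dwell-time density
interior = slice(3, -3)                  # skip turning-point bins (integrable singularity)
print(np.max(np.abs(counts - P)[interior] / P[interior]))   # small relative error
```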
The Quantum Mechanical Wave Function
The time independent Schrödinger equation for a particle in a potential field V(r) is
−h²∇²ψ + V(r)ψ = Eψ
where E is the energy, ψ is the wave function whose nature is at issue, and h² is used here as shorthand for ℏ²/2m.
This equation can be multiplied on both sides by ψ and rearranged to give
h²ψ∇²ψ = −(E−V(r))ψ²
or, equivalently
h²ψ∇²ψ = −K(r)ψ²
where K(r) is the kinetic energy of the particle expressed as a function of the radial distance from the center of the potential field.
Now consider the vector calculus identity
div(ψ·grad(ψ)) = ψ·div(grad(ψ)) + grad(ψ)·grad(ψ)
which in the ∇ notation is
∇·(ψ∇ψ) = ψ∇²ψ + ∇ψ·∇ψ
or, equivalently
ψ∇²ψ = ∇·(ψ∇ψ) − ∇ψ·∇ψ
Thus the modified time independent Schrödinger equation is equivalent to
h² (∇·(ψ∇ψ) − ∇ψ·∇ψ) = −K(r)ψ²
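A quick symbolic check (my verification, not part of the original text) of the one-dimensional version of the identity used here, ψψ″ = (ψψ′)′ − (ψ′)², for an arbitrary smooth test function:

```python
# Symbolic check of the 1-D form of the identity used above:
#   psi * psi'' = (psi * psi')' - (psi')^2
import sympy as sp

x = sp.symbols('x')
psi = sp.exp(-x**2) * sp.cos(3 * x)      # arbitrary smooth test function

lhs = psi * sp.diff(psi, x, 2)
rhs = sp.diff(psi * sp.diff(psi, x), x) - sp.diff(psi, x)**2
print(sp.simplify(lhs - rhs))            # prints 0
```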
Note that relative extremes of ψ² occur where
ψ∇ψ = 0
which is where either
ψ = 0, in which case ψ² has a minimum of 0, or
∇ψ = 0, in which case ψ² has a relative maximum.
Note furthermore that
grad(ψ²) = 2ψgrad(ψ)
and hence
grad(ψ) = ½grad(ψ²)/ψ
and thus
grad(ψ)·grad(ψ) = (1/4)grad(ψ²)·grad(ψ²)/ψ²
or in ∇ notation
∇ψ·∇ψ = (1/4)∇(ψ²)·∇(ψ²)/ψ²
When the RHS of this equation is substituted for its LHS in the last modified version of the time independent Schrödinger equation the result is
h²(∇·(ψ∇ψ) − (1/4)∇(ψ²)·∇(ψ²)/ψ²) = −K(r)ψ²
The relative maxima of ψ² thus form a looped chain of bumps separated by minima of zero. This has to be established in general, but for the case of a particle in a central potential field, polar coordinates (as shown in the Appendix) lead to the angular component of ψ² described below.
[Figure: ψ² as a function of angle at a constant value of r, for a principal quantum number of 6.]
[Figure: the probability bumps in the plane of the particle's motion for a principal quantum number of 6.]
The particle moves quantum mechanically relatively slowly in a probability bump, otherwise known as a state, and then relatively rapidly to the next state (bump).
The analysis now returns to the equation previously developed,
h²(∇·(ψ∇ψ) − (1/4)∇(ψ²)·∇(ψ²)/ψ²) = −K(r)ψ²
If this equation is integrated from one maximum to the next or from one maximum to an adjacent minimum, the first term in the above equation evaluates to zero because at the end points of the integration either ψ is zero or ∇ψ is the zero vector. What is left is
−(1/4)h²(∫∇(ψ²)·∇(ψ²)ds)/ψ*² = −K(r)ψ#²Δs
or, equivalently
(1/4)h²(∫∇(ψ²)·∇(ψ²)ds/Δs)/ψ*² = K(r)ψ#²
or also
(1/4)h²⟨∇(ψ²)·∇(ψ²)⟩/ψ*² = K(r)ψ#²
where ψ* and ψ# are values of ψ within the interval of integration and ⟨∇(ψ²)·∇(ψ²)⟩ denotes the average of ∇(ψ²)·∇(ψ²) over that interval. Multiplying both sides by ψ*²/K(r), and denoting the geometric mean of ψ*² and ψ#² as ψ̄², gives the equation
(ψ̄²)² = (1/4)h²⟨∇(ψ²)·∇(ψ²)⟩/K(r)
and hence
ψ̄² = (1/2)h⟨∇(ψ²)·∇(ψ²)⟩½/K(r)½
Here again the probability density is inversely proportional to the square root of the kinetic energy.
The squared magnitude of the wave function is a probability density, but that probability density is the proportion of the time the particle spends in its various allowable locations.
Appendix
The Laplacian operator ∇² in polar coordinates (r, θ) is
∇² = (∂²/∂r²) + (1/r)(∂/∂r) + (1/r²)(∂²/∂θ²)
Thus, for an attractive potential V(r) = −Q/r and a bound state of energy E = −|E| (with h here denoting the reduced Planck constant), the equation to be satisfied by ψ is:
−(h²/2m)[(∂²ψ/∂r²) + (1/r)(∂ψ/∂r) + (1/r²)(∂²ψ/∂θ²)] + (|E| − Q/r)ψ = 0
At this point it will be assumed that ψ(r, θ) is equal to R(r)Θ(θ). This is the separation of variables assumption. This is a mathematical convenience that is fraught with danger of precluding the physically relevant solutions. In this case it is alright because only circular orbits will be dealt with later.
When R(r)Θ(θ) is substituted into the equation it can be reduced to
−(h²/2m)[R"(r)/R + (1/r)R'(r)/R + (1/r²)(Θ"(θ)/Θ)] + (|E| − Q/r) = 0
This equation may be put into the form
r²R"(r)/R + rR'(r)/R + (2m/h²)(−|E|r² + Qr) = −(Θ"(θ)/Θ)
The LHS of the above is a function only of r and the RHS a function only of θ. Therefore their common value must be a constant. Let this constant be denoted as n².
(Θ"(θ)/Θ) = −n²
or, equivalently
Θ"(θ) + n²Θ(θ) = 0
This equation has solutions of the form
Θ(θ) = A·cos(n·θ + θ0)
where A and θ0 are constants. Through a proper orientation of the polar coordinate system θ0 can be made equal to zero. So Θ(θ) = A·cos(n·θ). In order for Θ(θ+2π) to be equal to Θ(θ), n must be an integer. The probability density is the squared magnitude of the wave function. Therefore the probability density is proportional to cos²(nθ).
[Figure: the shape of cos²(nθ) for n = 6.]
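A small numeric sketch (added here for illustration, not part of the original appendix): normalising cos²(nθ) over [0, 2π) and counting its maxima confirms the 2n = 12 probability bumps for n = 6.

```python
# Angular probability density proportional to cos^2(n*theta), for n = 6.
import numpy as np

n = 6
# Offset the grid slightly so all maxima fall at interior sample points.
theta = np.linspace(0.0, 2 * np.pi, 100_000, endpoint=False) + np.pi / (4 * n)
density = np.cos(n * theta) ** 2
density /= np.trapz(density, theta)      # normalise so total probability is 1

peaks = np.sum((density[1:-1] > density[:-2]) & (density[1:-1] > density[2:]))
print(np.trapz(density, theta), peaks)   # ~1.0 and 12 bumps
```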
Consider Schrödinger's time-independent equation $$ -\frac{\hbar^2}{2m}\nabla^2\psi+V\psi=E\psi. $$ In typical examples, the potential $V(x)$ has discontinuities, called potential jumps.
Outside these discontinuities of the potential, the wave function is required to be twice differentiable in order to solve Schrödinger's equation.
In order to control what happens at the discontinuities of $V$ the following assumption seems to be standard (see, for instance, Keith Hannabus' An Introduction to Quantum Theory):
Assumption: The wave function and its derivative are continuous at a potential jump.
1) Why is it necessary for a (physically meaningful) solution to fulfill this condition?
2) Why is it, on the other hand, okay to abandon twofold differentiability?
Edit: One thing that just became clear to me is that the above assumption guarantees a well-defined probability/particle current.
You may want to look at – Willie Wong Jul 26 '10 at 13:11
Thanks Willie. This provides some explanation concerning my second question. – Rasmus Bentmann Jul 26 '10 at 13:40
@Downvoter: Why did you downvote? – Rasmus Bentmann Aug 20 '10 at 12:37
Accepted answer:
To answer your first question:
Actually the assumption is not that the wave function and its derivative are continuous. That follows from the Schrödinger equation once you make the assumption that the probability amplitude $\langle \psi|\psi\rangle$ remains finite. That is the physical assumption. This is discussed in Chapter 1 of the first volume of Quantum mechanics by Cohen-Tannoudji, Diu and Laloe, for example. (Google books only has the second volume in English, it seems.)
More generally, you may have potentials which are distributional, in which case the wave function may still be continuous, but not even once-differentiable.
To answer your second question:
Once you deduce that the wave function is continuous, the equation itself tells you that the wave function cannot be twice differentiable, since the second derivative is given in terms of the potential, and this is not continuous.
Your first argument is not clear to me - I'll take a look at Cohen-Tannoudji. – Rasmus Bentmann Jul 26 '10 at 15:23
The idea is the following: suppose that $V$ has isolated discontinuities and let $x_0$ be the location of one such discontinuity. Replace $V$ on $[x_0-\epsilon, x_0+\epsilon]$ with another potential which is continuous and which tends to $V$ as $\epsilon\to 0$. Then you show that the wave-function which solves the Schrödinger equation for this new potential tends in the limit as $\epsilon\to0$ to the wave-function you want and that in this limit the first derivative remains continuous. This is not really proven in Cohen-Tannoudji et al. but only sketched. The details are not hard, though. – José Figueroa-O'Farrill Jul 26 '10 at 16:56
There is a very clear physical reason why the wavefunction should be continuous: it's derivative is proportional to the momentum of the particle, so discontinuities imply that the state has an infinite-momentum component. – Jess Riedel Apr 27 '11 at 3:47
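A numerical illustration of the accepted answer (my sketch, not from the thread): diagonalising a finite-difference Hamiltonian with a step potential shows the ground state's ψ and ψ′ varying smoothly across the jump, while ψ″ jumps, as ψ″ = (2m/ℏ²)(V−E)ψ requires.

```python
# Finite-difference sketch: a potential step of height 5 at x = 0 inside a box.
# The ground state's psi and psi' are continuous across the step; psi'' jumps.
import numpy as np

hbar = m = 1.0
N = 2000
x = np.linspace(-5.0, 5.0, N)
dx = x[1] - x[0]
V = np.where(x > 0, 5.0, 0.0)                     # the potential jump

main = hbar**2 / (m * dx**2) + V                  # tridiagonal H, hard walls at grid ends
off = -hbar**2 / (2 * m * dx**2) * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E, vecs = np.linalg.eigh(H)
psi = vecs[:, 0]                                  # ground state, E[0] ~ 0.2 < step height
d1 = np.gradient(psi, dx)
d2 = np.gradient(d1, dx)
i = np.searchsorted(x, 0.0)
print(d1[i - 3], d1[i + 3])                       # nearly equal: psi' continuous
print(d2[i - 3], d2[i + 3])                       # opposite signs: psi'' jumps at the step
```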
Since you talk about 'jump' discontinuities, I guess you are interested in a one dimensional Schroedinger equation, i.e., $x\in\mathbb{R}$. In this situation a nice theory can be developed under the sole assumption that $V\in L^1(\mathbb{R})$ (and real valued of course). By a nice theory I mean that the operator $-d^2/dx^2+V(x)$ is selfadjoint, with continuous spectrum the positive real axis, and (possibly) a sequence of negative eigenvalues accumulating at 0. Better behaviour can be produced by requiring that $(1+|x|)^a V(x)$ be integrable (e.g. for $a=1$ the negative eigenvalues are at most finite in number). If you are interested in this point of view, a nice starting point might be the classical paper by Deift and Trubowitz in Communications on Pure and Applied Mathematics, 1979. Notice that the solutions are at least $H^1_{loc}$ (hence continuous) and even something more.
A theory for the case $V$ = Dirac delta (or combination of a finite number of deltas) was developed by Albeverio et al.; the definition of the Schroedinger operator must be tweaked a little to make sense of it. This is probably beyond your interests.
Summing up, no differentiability at all is required on the potential to solve the equation in a meaningful way. However, I suspect that this point of view is too mathematical and you are actually more interested in the physical relevance of the assumptions.
Here is a tangential response to your first question: sometimes these discontinuities do have physical significance and are not just issues of mathematical trickery surrounding pathological cases. Wavefunctions for molecular Hamiltonians become pointy where the atomic nuclei lie, which indicate places where the 1/r Coulomb operator becomes singular. There are equations like the Kato cusp conditions (T. Kato, Comm. Pure Appl. Math. 10, 151 (1957)) that relate the magnitude of the discontinuity at the nucleus to the size of the nuclear charge. I have heard this explained as a result of requiring the energy (which is the Hamiltonian's eigenvalue) to remain finite everywhere, thus at places where the potential is singular, the kinetic energy operator must also become singular at those places. Since the kinetic energy operator also controls the curvature of the wavefunction, the wavefunction at points of discontinuity must change in a nonsmooth way.
Materials in Electronics/Schrödinger's Equation
Schrödinger's Equation is a differential equation that describes the evolution of Ψ(x) over time. By solving the differential equation for a particular situation, the wave function can be found. It is a statement of the conservation of energy of the particle.
Schrödinger's Equation in 1-Dimension
In the simplest case, a particle in one dimension, it is derived as follows:
• T(x) is the kinetic energy of the particle
• V(x) is the potential energy of the particle
• E is the energy of the particle, which is constant
Substituting for the kinetic energy of the wave, T = ℏ²k²/2m (as shown here), the conservation of energy T(x) + V(x) = E becomes:
ℏ²k²/2m + V(x) = E
Now we need to get this differential equation in terms of Ψ(x). Assume that Ψ(x) is given by
Ψ(x) = A·exp(ikx)
Double differentiating our trial solution:
d²Ψ/dx² = −k²Ψ(x)
Rearranging for k²:
k² = −(1/Ψ)(d²Ψ/dx²)
Substituting this in the differential equation gives:
−(ℏ²/2m)(1/Ψ)(d²Ψ/dx²) + V(x) = E
Multiplying through by Ψ(x) gives us Schrödinger's Equation in 1D:
−(ℏ²/2m)(d²Ψ/dx²) + V(x)Ψ(x) = EΨ(x) [Schrödinger's Equation in 1D]
Solving the Schrödinger Equation gives us the wavefunction of the particle, which can be used to find the electron distribution in a system.
This is a time-independent solution - it will not change as time goes on. It is straightforward to add time-dependence to this equation, but for the moment we will consider only time-independent wave functions, so it is not necessary. The time-dependent wavefunction is denoted by Ψ(x, t).
While this equation was derived for a specific function, a complex exponential, it is more general than it appears, as Fourier analysis can express any continuous function over a range L as a sum of functions of this kind:
f(x) = Σn An·exp(i·2πnx/L)
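A symbolic check (not part of the original page) that the complex-exponential trial solution satisfies the equation just derived, with E = ℏ²k²/2m + V for a constant potential V:

```python
# Verify that Psi = A*exp(i*k*x) solves -(hbar^2/2m) Psi'' + V Psi = E Psi
# when E = hbar^2 k^2 / (2m) + V and V is constant.
import sympy as sp

x, k, hbar, m, A, V = sp.symbols('x k hbar m A V', positive=True)
Psi = A * sp.exp(sp.I * k * x)

E = hbar**2 * k**2 / (2 * m) + V
residual = -hbar**2 / (2 * m) * sp.diff(Psi, x, 2) + V * Psi - E * Psi
print(sp.simplify(residual))   # prints 0
```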
The Schrödinger Equation as an Eigenequation
The Schrödinger Equation can be expressed as an eigenequation of the form:
Hψ = Eψ [Schrödinger Equation as an Eigenequation]
• ψ is the eigenfunction (or eigenstate, both mean the same thing)
• E is the eigenvalue corresponding to the energy.
• H is the Hamiltonian operator given by:
H = −(ℏ²/2m)(d²/dx²) + V(x) [1D Hamiltonian Operator]
This means that by applying the operator, H, to the function ψ(x), we will obtain a solution that is simply a scalar multiple of ψ(x). This multiple is E - the energy of the particle.
This also means that every wavefunction (i.e. every solution to the Schrödinger Equation) has a particular associated energy.
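To make the eigenequation concrete, here is a minimal sketch (my addition, not from the original chapter) that discretises H on a grid, turning Hψ = Eψ into an ordinary matrix eigenvalue problem; for an infinite well of width L the computed eigenvalues approach En = n²π²ℏ²/(2mL²).

```python
# Discretised 1-D Hamiltonian for an infinite well (V = 0, psi = 0 at walls).
import numpy as np

hbar = m = L = 1.0
N = 1000
dx = L / (N + 1)                        # interior grid points only

main = hbar**2 / (m * dx**2) * np.ones(N)
off = -hbar**2 / (2 * m * dx**2) * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:3]           # lowest three eigenvalues of H psi = E psi
n = np.arange(1, 4)
exact = n**2 * np.pi**2 * hbar**2 / (2 * m * L**2)
print(E)       # ~ [4.93, 19.74, 44.41]
print(exact)   # [4.934..., 19.739..., 44.413...]
```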
Higher Dimensions
The equation that we just derived is the Schrödinger equation for a particle in one dimension. Adding more dimensions is not difficult. The three dimensional equation is:
−(ℏ²/2m)∇²ψ(r) + V(r)ψ(r) = Eψ(r)
where ∇² is the Laplace operator, which, in Cartesian coordinates, is given by:
∇² = (∂²/∂x²) + (∂²/∂y²) + (∂²/∂z²)
See this page for the derivation. It is also possible to add more dimensions, but this does not generally yield useful results, given that we inhabit a 3D universe.
In order to integrate Schrödinger's equation with relativity, Paul Dirac showed that electrons have an additional property, called spin. This does not actually mean the electron is spinning on an axis, but in some ways it is a useful analogy.
The spin on an electron can take two values: s = +1/2 or s = −1/2.
We can incorporate spin into the wavefunction, Ψ, by multiplying by an additional component - the spin wavefunction, σ(s), where s is ±1/2. These are often just called "spin-up" and "spin-down", respectively. The full, time-dependent wavefunction is now given by:
Ψ(x, t, s) = Ψ(x, t)σ(s)
Conditions on the Wavefunction
In order to represent a particle's state, the wavefunction must satisfy several conditions:
• It must be square-integrable, and moreover, the integral of the wavefunction's probability density over all of space must be equal to unity, as the electron must exist somewhere in all of space (a numerical check of this condition is sketched after this list):
∫ |Ψ(r)|² dV = 1
For 1D systems this is:
∫ |Ψ(x)|² dx = 1, with the integral taken over all x
• Ψ must be continuous, because its derivative, which is proportional to momentum, must be finite.
• dΨ/dx must be continuous, because its derivative, which is proportional to kinetic energy, must be finite.
• Ψ must satisfy the boundary conditions. In particular, as x tends to infinity, ψ(r) tends to zero. (This is required to satisfy the normalisation condition above.)
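A quick numerical check (my addition) of the normalisation condition for a Gaussian trial wavefunction:

```python
# Normalise a trial wavefunction so the total probability is one.
import numpy as np

x = np.linspace(-10.0, 10.0, 10_001)
psi = np.exp(-x**2 / 2.0)                     # unnormalised trial function
psi /= np.sqrt(np.trapz(np.abs(psi)**2, x))   # enforce the condition above

print(np.trapz(np.abs(psi)**2, x))            # prints 1.0
```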
Examples of Use of Schrödinger's Equation
Schrödinger's Equation can be used to find wavefunctions for many physical systems. See Confined Particles for more information.
• Schrödinger's Equation (SE) is a statement of the Law of Conservation of Energy.
• It is given by −(ℏ²/2m)(d²ψ/dx²) + V(x)ψ = Eψ
• By solving the equation, one can obtain the wavefunction, ψ.
• From the wavefunction we find the electron's probability distribution.
• The probability of the electron existing over all space must be 1.
• SE gives a set of discrete wavefunctions, each with an associated energy.
• An electron cannot exist at an energy other than these.
When we solve the Schrödinger equation on an infinite domain with a given potential $U$, much of the time the lowest possible energy for a solution corresponds to a non-zero energy. For example, for the simple harmonic oscillator with frequency $\omega$, the possible energies are $\hbar\omega(n+\frac12)$ for $n =0,1,\dots$ . Some of the time, solutions with zero energy are possible mathematically, but boundary conditions would mean that such solutions would be everywhere zero, and hence the probability of finding a particle anywhere would be zero. (For example, an infinite potential well).
However, when solving the Schrödinger equation for a particle moving freely on a circle of length $2\pi$ with periodic boundary conditions $\Psi(0,t)=\Psi(2\pi,t)$ and $\frac{\partial\Psi}{\partial x}(0,t)=\frac{\partial\Psi}{\partial x}(2\pi,t)$, I have found a (normalised) solution $\Psi(x,t)=\frac{1}{\sqrt{2\pi}}$ with corresponding energy $0$. I can't find a way to discount this mathematically, and it seems to make sense physically as well. Is this a valid solution, and so is it sometimes allowable to have solutions with $0$ energy? Or is there something I'm missing?
You might want to read [particles in a ring][1], it seems to be a legitimate solution that occurs in organic chemistry..., [1]: – innisfree May 3 '13 at 23:39
Doesn't this state get in trouble with the Uncertainty Principle, though? It appears to have precisely zero momentum... – Rococo May 4 '13 at 2:10
@BenCrowell But all the basis states $|x + 2n \pi\rangle$ with $n$ integer are physically equivalent. I would argue that the correct variance of the position is finite ($2\pi / \sqrt{3}$ if my algebra's right). Presumably the apparent conflict with the canonical position-momentum uncertainty relation is resolved by properly deriving the analogous expression on a compact space with periodic BCs. I'm going to bed but will try to formalise this tomorrow. Or leave it as a challenge... – Mark Mitchison May 4 '13 at 3:33
@Mark Mitchison: The issue of how to construe the uncertainty relation in a space that wraps around is interesting, but I think it's a completely separate question. – Ben Crowell May 5 '13 at 3:42
@BenCrowell Fair point. In case you're interested, though, this problem was apparently considered way back in '63. The upshot is that one can modify the uncertainty relation in this case to allow for a state $\psi(x) = \sqrt{1/2\pi}$ of definite angular momentum. See On the uncertainty relation for Lz and φ by D. Judge, Physics Letters 5 p. 189 (1963). – Mark Mitchison May 5 '13 at 18:22
Accepted answer:
In my view, the important question to answer here is a special case of the more general question
Given a space $M$, what are the physically allowable wavefunctions for a particle moving on $M$?
Aside from issues of smoothness of wavefunctions (which can be tricky; consider the Dirac delta potential well on the real line for example), as far as I can tell there are precisely two other conditions that one needs to consider:
1. Does the wavefunction in question satisfy the desired boundary conditions?
2. Is the wavefunction in question square integrable?
If a wavefunction satisfies these properties, then I would be inclined to assert that it is physically allowable.
In your case where $M$ is the circle $S^1$, the constant solution is smooth, satisfies the appropriate conditions to be a function on the circle (periodicity), and is square integrable, so it is a physically allowed state. It also happens to be an eigenvector of the Hamiltonian operator with zero eigenvalue; there's nothing wrong with a state having zero energy.
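A numerical aside (mine, not the answerer's; conventions ℏ = 2m = 1): discretising the kinetic-energy operator on the circle with periodic boundary conditions gives a lowest eigenvalue of essentially zero, whose eigenvector is the constant function, exactly the state in question.

```python
# Periodic-boundary kinetic operator -d^2/dx^2 on a circle of circumference 2*pi.
import numpy as np

N = 400
dx = 2 * np.pi / N

# Second-difference matrix with wrap-around corners coupling first and last points.
H = (2 * np.eye(N) - np.roll(np.eye(N), 1, axis=1)
     - np.roll(np.eye(N), -1, axis=1)) / dx**2

E, psi = np.linalg.eigh(H)
print(E[:4])                 # ~ [0, 1, 1, 4]: zero, then doubly degenerate n = 1, ...
print(np.std(psi[:, 0]))     # ~ 0: the ground state is flat on the ring
```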
Your solution is valid. It has zero kinetic energy. It doesn't necessarily have zero energy. It can have any potential energy you'd like. Just because your particle is "freely moving," that doesn't mean the potential is zero. You could have $V(x)=k$ for any constant $k$. The value of $k$ is not observable and has no physical significance.
In general there is no special significance to having zero energy in a solution to the Schrodinger equation. Any solution can be defined to have zero energy simply by changing the potential appropriately like $V\rightarrow V+c$, where $c$ is some constant.
A realistic example involving zero kinetic energy and a constant wavefunction would be some particle-rotor models of nuclei, in which the deformed (prolate) nucleus (rotor) has some orientation in space, specified by one or two angular coordinates. If the odd particle has some component $K$ of its angular momentum along the symmetry axis, you get a rotational band with energies proportional to $J(J+1)$, starting with a ground state at $J=K$. In the ground state for the $K=0$ case, the rotor has zero kinetic energy, and its wavefunction is a constant as a function of the angular coordinates.
I suppose this is true, but doesn't changing the potential change the form of the solutions as well as the energies? Unlike the gravitational potential say, the quantum potential determines the form of the allowed solutions. Doesn't changing it lead to a different problem? – Tom Oldfield May 4 '13 at 8:57
@TomOldfield: Yes, adding a constant to the potential changes the form of the solutions. However, that isn't really all that surprising. A Galilean boost changes the form of the solutions to the Schrodinger equation in a similar way. – Ben Crowell May 5 '13 at 3:43
Saturday, May 29, 2010
This weekend, I'll be on my way back to Sweden. My time here at Perimeter Institute turned out to be busier than expected, but it has also been very productive. It is somewhat sad that every time I come back more people I knew have left. Those postdocs who I spent my years with here have either left already, sit on packed bags about to start a new job, or are due to apply for a new job this fall.
The weather here in Waterloo has been brilliant the last two weeks, and the construction at PI has proceeded rapidly. At the risk of boring you to death, here are more photos of the building extension. Meanwhile, one can imagine how the result will look. The photo below is from the back of the building. To the right, you see the old part of the building; the glass boxes are the researchers' offices.
This is a close-up of the new part of the building, with the goldish shimmering glass front:
And this is again the view from the parking lot, compare to three weeks ago. If I recall correctly, that's where the new main entrance will be.
So, now I have to pack my bag. You'll hear from me once I'm back in Stockholm. Meanwhile, a great weekend to all of you!
Thursday, May 27, 2010
Learning to deal with information
In the 21st century, information is cheap. Or is it? I have written several times on this blog that it is a naive illusion to think of the internet as a democratic provider of information. Moreover, the simple provision of information is not equivalent to people being well informed.
The availability of information on the internet is not democratic but, if anything, anarcho-capitalist. If you have the money to pay people who know something about search engine optimization, and others to spam links to your site wherever they won't be immediately deleted, you can pimp up your website's ranking dramatically. Even Google's PageRank algorithm itself is clearly not democratic: it gives more weight to a link from a site that has itself more links. That's what makes it so powerful and so useful. Sure, we all profit from this clever ordering of information. I'm certainly not complaining about it. It's just not democratic and shouldn't be sold as democratic since not everybody's voice has the same weight. Google itself smartly does not call the algorithm "democratic" but writes that "PageRank relies on the uniquely democratic nature of the web." Uhm, which democratic nature are we talking about again? But maybe more importantly, Google's PageRank also doesn't tell you anything about the quality of information you obtain. That the voices of the wealthy have more impact is hardly surprising, and merely a reflection of what has been going on in the media and news press for a long time.
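For the curious, here's a toy version of the idea (my sketch, not Google's actual implementation): in power-iteration PageRank, a page's score is fed by the scores of the pages linking to it, so a link from a well-linked page is worth more than one from an obscure one.

```python
# Toy power-iteration PageRank on a hypothetical 4-page web.
import numpy as np

# links[i] = list of pages that page i links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
N, d = 4, 0.85                                  # d is the usual damping factor

rank = np.full(N, 1.0 / N)
for _ in range(100):
    new = np.full(N, (1.0 - d) / N)
    for i, outs in links.items():
        for j in outs:
            new[j] += d * rank[i] / len(outs)   # weight shared among out-links
    rank = new

print(rank)   # page 2, linked by everyone, dominates; page 3, linked by no one, trails
```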
Now one could of course argue that it's up to me to just go through all the hits that my search brought up and find the best piece of information. But as a matter of fact, most people don't do that. I usually don't do it either. And that's not even irrational, because scanning through all the hits that one gets on a query is very time-intensive and the result rarely justifies the effort. Thus, most people will skim maybe the first 20 hits, if at all, and conclude that they've gotten a fair cross-section of what there is to know about the topic. That's the part of the information that is "cheap." Everything else, for example checking sources, becomes increasingly costly in terms of time and effort. And since most websites don't list their sources, there's few shortcuts to that. What is left is that whoever dominates the "cheap" information does, for all practical purposes, dominate the information market. The only cure for that is information literacy.
The other day, I read an interesting article by Mark Moran. Moran is CEO of a Web publisher that offers free content and tools that teach students how to use the Web effectively. He writes:
"[A]s the founder of a company whose mission is to teach the effective use of the Internet, I have pored through dozens of studies, and recently oversaw one myself, that all came to the same conclusion: Students do not know how to find or evaluate the information they need on the Internet.
In a recent study of fifth grade students in the Netherlands, most never questioned the credibility of a Web site, even though they had just completed a course on information literacy. When my company asked 300 school students how they searched, nearly half answered: "I type a question." When we asked how students knew if a site was credible, the most common answers were "if it sounds good" or "if it has the information I need." Equally dismal was their widespread failure to check a source’s date, author or citations."
I find this seriously scary! As I have expressed in my earlier post Cast Away, the passing on of knowledge to the next generation is one of the most essential ingredients to continuing progress. How are people supposed to make informed decisions if they can't tell what the relevant information is to begin with? Where does that leave our political systems? But then I read the following:
Wise words, eh? Guess where that's from? Guess, don't Google! It's a press release from the White House. No, really. It's an announcement for the "National Information Literacy Awareness Month" that was last year in October, which somehow passed me by. While recognizing a problem isn't the same as solving it, it is certainly a good first step. Let's hope that other nations will follow that example, there's clearly hope. Yes, we can do it! Indeed, there is more hopeful news today: The Pew Research Center's Project for Excellence in Journalism some days ago published new data comparing the news coverage on blogs to that in the traditional press. Here's an interesting number: only 2% of news in the traditional press are about science and technology. But on the blogs, it's 18%.
Sunday, May 23, 2010
On the Edge of Chaos
I saw this advert about a month ago, and it got me thinking. It doesn't matter much if you don't understand German, the visuals speak for themselves:
It's an advert for craftsmanship (Handwerk). The song lyrics are roughly saying: imagine how life would be without them. (The long list in the end is a list of professions.) To me it shows so nicely how incredibly complex our life has become, and how much that we take for granted is only a very recent achievement in the history of mankind.
Friday, May 21, 2010
Terra Incognita
As you know, I am presently at a workshop at Perimeter Institute about the Laws of Nature: Their Nature and Knowability. Yesterday, we had a talk by Marcelo Gleiser titled “What can we know of the world?”. It occurred to me somewhat belatedly that I recently read an article by Gleiser in New Scientist, “The imperfect universe: Goodbye, theory of everything.” In that article, he writes that after “Fifteen years [as] a physicist hard at work hunting for a theory of nature that would unify the very big and the very small” he has come to the conclusion that “the very notion of a final theory is faulty.” In a nutshell, that was also what his talk was about.
The only thing that's interesting about this insight is that it took him 15 years to arrive there. And maybe, why it got printed in New Scientist. Of course the notion of a final, fundamental, theory of all and everything is faulty. For the simple reason that even if we had a theory that explained everything we know, we could never be sure it would eternally remain the theory of everything we know. As Popper already realized about a century ago, one cannot verify a theory, it can only be falsified. Thus, theories we have are forever out for test, always on the risk that some new data does not fit in. That's exactly what makes a theory scientific. It's also one of the points I made in my FQXi essay. You see, I'm an even Newer Scientist.
That we can never know whether a theory is truly fundamental and able to explain all observable phenomena of course does not mean there is no fundamental theory. It just means we can never know - so your belief that such a theory exists belongs in the realm of religion, not science.
In any case, in his talk (video and audio here), Gleiser touched on another topic that reminded me of something else. He had a sketch of our expanding knowledge, with a filled circle representing “The Known” in the middle, which is expanding into what is now the unknown (“perennial ignorance”) outside.
I used a similar, though slightly different analogy for the progress of science in my PI public lecture some years ago (which incidentally has the same title as the FQXi essay, I'm very into recycling). In this case though, I used a map of Middle Earth.
The message that I wanted to convey is that the process of knowledge discovery is very similar to exploring unknown territory. There are parts that you have already seen and that you know very well, though details may be missing. And let me be clear that with “The Known” (in contrast to Gleiser) I don't mean laws themselves but the data from which the laws were extracted. Otherwise you lose information that is possibly important about the range of applicability (information you at first possibly didn't think was relevant).
You try to explain the known by a theory, and if everything fits you point somewhere into the unknown (make a prediction). Ideally, experimentalists go there and find what you told them they would find. You don't want to point out too far because people today are quite impatient, and if your prediction is not measurable within their lifetime it won't help you get tenure. The other way progress happens is that there is data available for which a theoretical explanation is missing. Or a theory might be sketchy and not work very well. That's the situation of the experimentalist saying: we've seen something on the horizon, please explain that. The body of knowledge that we have is usually not neatly simply connected, but typically has some pieces that don't really match with anything else.
Which brings me back to Gleiser's article then. The essential question is not whether you do or don't believe in a fundamental theory of everything. The essential question is what is a good and promising way to expand what is known. You can believe in flying spaghetti monsters, reincarnation, or a theory of everything: if it helps you with your research, by all means, go ahead, just don't put your beliefs in the abstract of your paper.
Experimental input is of course essential to progress all along. On the theoretical side, the obvious reason why people are looking for a unification of the known forces is that unification has worked previously and has been tremendously successful. The same holds for symmetry principles. Sure, that doesn't mean these procedures will continue to be successful, but it's the obvious thing to try. It's the same reason why a band's second hit sounds like the first, and why, after my move to Sweden, I first had to learn that asking to speak to a supervisor and complaining about lacking customer service is not a very successful tactic in this country. Similarly, we might have to reconsider our tactics and learn new ways of thinking if we remain unsuccessful in making headway on today's big questions in physics. For example when it comes to resolving the apparent tension between General Relativity and quantum mechanics, or to explaining the arrow of time, respectively the initial conditions of the universe: It's terra incognita and there may be dragons.
That's why I find meetings like the current one at PI very useful to become more aware of our standard mode of thinking, for awareness and acknowledgement of limitations of a procedure is the first step to improvement.
Monday, May 17, 2010
Abramowitz/Stegun goes online
Did you ever need to learn about the properties of some obscure mathematical function which turns up when you try to solve, say, the Schrödinger equation with a linear potential?
In the times before Wikipedia and Eric Weisstein's World of Mathematics/MathWorld, the usual way to proceed was to go to the library and look up in the "Abramowitz/Stegun", a compilation of formulas, relations, graphs and data tables for all kinds of functions you can think of.
[Figure: Airy functions Ai(x), Bi(x) and M(x).]
Over the last years, Milton Abramowitz' and Irene A. Stegun's time-honored "Handbook of Mathematical Functions" has been carried over to the internet age as the Digital Library of Mathematical Functions. Published by the US National Institute for Standards and Technology (NIST),
... the NIST Digital Library of Mathematical Functions (DLMF), is the culmination of a project that was conceived in 1996 at the National Institute of Standards and Technology (NIST). The project had two equally important goals: to develop an authoritative replacement for the highly successful Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, published in 1964 by the National Bureau of Standards (M. Abramowitz and I. A. Stegun, editors); and to disseminate essentially the same information from a public Web site operated by NIST. (From the DLMF Preface)
Parts of the DLMF have been available for some time, but the complete site went online just last week, on May 11.
In comparison to the old printed book, there are more functions and formulas, which all can be copied as Latex or MathML code. And while the function graphs at MathWorld are interactive, the DLMF features more detailed descriptions of applications in mathematics and physics, and links to freely available software libraries.
Should I ever need to code Jacobian Elliptic Functions, I'll know where to look them up.
Via bit-player, where you can also read more about the history of the Abramowitz/Stegun.
Saturday, May 15, 2010
The Future of the Conference
As you know, I am here at Perimeter Institute for the upcoming workshop on the Laws of Nature: Their Nature and Knowability. Every time I lift my bag on a scale at a check-in counter I am wondering if there will come a time when, instead of stepping on a plane, we will meet in cyberspace.
The Past
The classical conference in academia is still omnipresent. You go, you sit and listen to a dozen talks a day, you smalltalk over coffee and cheap cookies and try to get to know some people at the "social event," typically a reception with buffet or a conference dinner. Also typical is that the average participant pays a horrendously high fee that covers the VIP guests' airfare and four-star hotels. But that's okay because most of the participants have a travel grant for exactly this purpose.
That might sound a bit dull, and it frankly sometimes is, but there are a lot of good reasons both to organize and go to a conference.
On the participant's side: Most notably, conferences are useful to obtain or keep an overview of the research going on in one's field. An overview both of the what and of the who. It is possible to do that by other means, but a conference is an especially efficient way to do it, in particular if you're a newcomer. In contrast to reading a review, you can go and talk to the people who work on stuff similar to yours, and face-to-face communication is still the best way to exchange information. You might learn about some unfinished work and find a new collaborator. You might get to hear a talk by some of the more well-known people in the field who might be unlikely to pass by your own institution. And last but not least, you have the opportunity to communicate your own research, get feedback and advice.
On the organizer's side: Organizing a conference takes a lot of work and time. One doesn't make money with a scientific conference - in fact the first steps will be trying to find sponsors. Reasons for organizing a conference are most notably to advance the research in one's own field. It's to bring together and support the community, to spread ideas, to foster the formation of collaborations. Conferences are also frequently used to advertise the institution where they take place, which most often is one of the main sponsors. As an organizer one often has, to some extent, the possibility to select speakers that one is interested in hearing or meeting. And then there is the communication of the research field's relevance beyond one's own community. Many conferences will have a public lecture and will have some media coverage, at least in the news magazine of the university where it takes place.
There are a few variations to the conference scheme. A workshop for example will typically have fewer participants and fewer talks, and these talks will be more specialized and leave more room for discussion. The large conferences will often separate talks into plenary talks and parallel session. Sometimes they will dump people into a poster session.
The Present
During the last years, with the spread of social networking tools and the continuing improvements in information technologies, one could start to see some modifications of the standard scheme. To begin with, it is nowadays possible to let a speaker deliver a talk by video link and it is similarly possible to let people participate by streaming video. I have been to a few conferences where talks were given by video and they typically are not very well attended. I'm not entirely sure why. It's like people think "Oh, he won't really be here." Or maybe it's that, not so surprisingly, these talks typically imply a lot of technical fiddling and are fault-prone. That however will improve the more often it happens.
Another quite obvious change that is now so common one easily takes it for granted, is that most conferences have the slides of talks online and in many cases even a recording. This is so omnipresent that indeed most conferences no longer publish proceedings. I find this extremely useful because I don't have to take notes and write down a reference on the speaker's slide. I can just go and look it up later.
Then there are the changes in format. I wrote previously that I was at the SciBar Camp in Toronto and later at the SciFoo Camp in California. In this case the schedule is not set before the start of the meeting, but assembled by self-organization and participant's interests upon arrival which is greatly aided if the participants had in advance a possibility to exchange their interests online. This spontaneous self-assembly has advantages and disadvantages. The advantage is that it's a more flexible format that expresses the participants' rather than the organizers' interests, is more lively and interactive. Disadvantage is that most of the sessions will be pretty much unprepared. Since I actually prefer listening to thought-through arguments instead of improvised babble, I am not too much in favor of this mode of organization. I think that for scientific purposes, a combination of both, the old-fashioned scheduled talks and some more flexible sessions would be more suitable.
And then of course there is the use of social networking tools, like Twitter, Friendfeed and blogging or just setting up a networking site specifically dedicated to the event. Whether or not that works well depends crucially on how many people make use of it. But if it works well, this can serve a lot of purposes. One is clearly the communication to the public. But besides this it also improves the exchange between the participants. In particular if you are at a large conference you might not actually know who is interested in similar topics like you or who you might want to talk to in the coffee break. If a conference wants to make good use of Web2.0 tools first thing they should do is to aggregate the participants' feeds. It is somewhat ironic, but you might not actually know that the person sitting next to you is writing a blog you read frequently. And installations like a Tweetwall for example (a screen displaying tweets by participants, see picture) add a completely new layer to the discussions that can take place at a meeting and can greatly improve information exchange and facilitate networking.
It is interesting to see how more and more conference organizers are making use of these possibilities. It depends of course a lot on the technical support that they have. For example I read the other day on Resonaances that the International Conference on High Energy Physics 2010 has launched an official blog and recruited some bloggers to cover the event. How cool is that? One should add that of course these things have been done for years in some communities that are especially dedicated to advancing these changes in networking, blogging, outreach and information technology. And Perimeter Institute's outreach efforts have been playing around with all these possibilities for years, for example with the recent Quantum2Cosmos Festival. The point I'm trying to make is that the use of these tools is now slowly spreading and becoming more common.
The Future?
So what's next? NatureNetworks organizes conferences that are live-streamed to Second Life. Will this become the conference of the future? I think it is very well possible. I don't think it is very likely that conferences will become entirely decoupled from physical reality in the sense that we exclusively meet online. But it will become increasingly more common to attend a "real" conference "virtually" if one cannot be there in person for one or the other reason, be it lack of funding or illness.
I also think that conferences will obtain several more virtual layers in the near future. For example, I imagine that you go to a talk and while you are there can "log in" to the respective website, and so could people who are not physically there but following online. You could then for example skip back and forth in the slides as you wish or ask your colleague in the second row what he thinks about what the speaker just said. I think that this happens to some extent today by people sending emails, but it could become much better aggregated. I am not sure however that such a complex environment as Second Life is necessary for these purposes. Though it has of course the advantage that the technology is already in place.
As to the increased flexibility in format: There is a quite obvious hurdle to having an academic conference that does not have a program online a month ahead with neatly scheduled speakers: Many people can only justify their participation and receive reimbursement for their expenses when they are giving a talk. This is a typical example where requiring "accountability" can be misguided (see also) and hinders improvements. However, I think that this problem will resolve by itself once funding agencies notice that there are other means to document one's participation in an event than being listed on the program. Basically, all they really want to know is that you didn't just spend the week on the beach at their expense. But taking part in online discussions or blogging can serve a similar purpose.
Real change is happening and I think we'll see more of it!
Thursday, May 13, 2010
Paradigm Shifts
One day, when I'm old and my hair is grey, and I'm sitting in a rocking chair stroking a cat on my lap when the neighbor's son comes with a book on the history of science, I want to say "Yes, I was there."
There are as many different motivations to become a physicist as there are physicists. But one of them is certainly the wish to be part of something greater, an event of historical importance. It's the wish to be there and have a say when our view of the world fundamentally changes; when a new picture comes into focus that will be passed on to future generations.
The change of our fundamental understanding of Nature, the emergence of a new way of thinking about the world is what is known as a "paradigm shift." It's a notion that occasionally creeps up in a discussion. It most often does so either as a means of defense, when a new proposal is widely rejected or when the speaker tries to make himself more interesting.
I was wondering the other day which paradigms might be shifting today. In the 22nd century's textbooks, what of our present understanding will appear in the historical appendix instead?
There are three such potential paradigm shifts that I've come across.
The first one is about the limits of reductionism. With the incredible success reductionism has had in physics in the first half of the century, there came the belief that one day we'll be able to explain everything by taking it apart into smaller and smaller pieces. This paradigm has by now pretty much shifted over in favor of acknowledging that emergent features might not be possible to explain, either in practice or in principle, by reduction to more elementary constituents (see also). This change in our perception came along with the rise of chaos theory and complexity, features that are both very common in natural systems and hard if not impossible to address by reductionist approaches. It is funny in fact how silently this shift seems to have taken place. You sometimes find people in talks today vigorously arguing that reductionism has limitations, just to find there's nobody actually disagreeing with them. Except for the old professor in the front row.
The second potential paradigm shift that has crossed my way is the multiverse. The multiverse I have in mind is the one forced upon you by the string theory landscape, a vast number of possible universes with different laws of Nature, versus the previously prevailing idea that our universe is unique and is so for a reason that we have to find. Various other sorts of multiverses seem to creep up from other considerations. The multiverse is presently a very hotly discussed topic, with strong defenders both for and against it. I have previously expressed my opinion that the multiverse isn't so much a new paradigm but a new way of thinking about an old paradigm. Instead of finding a way to derive the parameters of the standard model as a 'most optimal' configuration of some sort, one now searches for a measure in the multiverse by means of which our universe is (ideally) most likely. It's a watered-down version of the same game. In any case, I recall that Keith Dienes (the guy with the String Vacuum Project) spoke about the probabilistic attempt as a new way of thinking about "why" we have exactly these laws. And yes, I was thinking, maybe he's right and in some decades from now that will be how we think about our reality: that we're embedded in a vast number of different universes with different laws of Nature, and our grandchildren will laugh that we once thought we were unique.
The third potential paradigm shift is that spacetime might not be a fundamental entity. I think that everybody who works on quantum gravity (of whatever sort) is familiar with the idea. But I noticed on occasion, most recently when I was talking to Sophie about Verlinde's emergent gravity scenario, that the idea that space-time only seems to be a smooth, continuous manifold with a metric on it, and on small scales might be nothing like this, is not very widespread outside the community. While there are many approaches towards finding a more fundamental description of spacetime, they each suffer from their own problems. So I think it is pretty unclear presently whether this will turn out to be a true (and useful) description of Nature in some way. But it's certainly a thought hanging in the air. On the completely opposite side is the idea that space-time instead is the only fundamental entity and that matter indeed emerges from it (an idea that dates back at least to Kaluza and Klein). Or that neither is fundamental, but arises from something unified that's neither matter nor space-time.
These are all ideas that physicists have been chewing on for quite some while now. I am curious how people will think about them in the future, whether they will laugh about our foolishness or admire our imagination.
Tuesday, May 11, 2010
Under Construction
As you have probably seen from my Twitter-feed, I have made it to Waterloo, despite ash cloud and all. Perimeter Institute is, as usual, buzzing with activity. Since I have to contribute my part to the buzz, here's just a short update on the building construction. The photos from February are here. It is oddly pleasing to see reality evolve by plan, just as we've been shown in the models. It's so different from my every day life...
Sunday, May 09, 2010
Knowledge for the sake of knowledge
In May 2007, Canadian Prime Minister Stephen Harper announced the government's new Science and Technology agenda. The location they chose for this happened to be Perimeter Institute. I blogged about the event here. To introduce the Prime Minister, Mike Lazaridis, founder of the Institute, said a few welcoming words (Video and audio here). An excerpt:
“We believe that bold focus and continuing investments in theoretical physics and its applications result in further breakthroughs. These in turn will be the basis for tomorrow's goods and services [...]
If we learned anything from this great experience that is Perimeter and IQC, it is this: Canada can lead the world in key scientific fields from which future economic prosperity and job creation will flow - as long as the private sector and governments make bold, focused, and long-term investments in carefully chosen fields.”
I didn't write about Lazaridis' words then because I distinctly recall feeling let down. So all the fundamental research we do, in the end it's all to produce something that you can go and buy at the mall? Needless to say, I do believe there is value in knowledge just for the sake of knowledge. Curiosity and the wish to know - where we come from, what we are made of, what is out there - is one of the key drivers of our development, both as a species and as a person. Material prosperity is a certainly welcome and desired result that better knowledge of the laws of Nature can bring. But knowledge itself also feeds our desires, even if it remains immaterial. Whenever somebody justifies fundamental research as an investment in future technologies (international competitiveness! economic prosperity! job creation!) they are missing half of the story. Yes, that's one of the reasons. But the other reason is that we just want to know.
Now I've never talked to Lazaridis and I actually don't know what his opinion is on the matter. It is very well possible that his speech to the Prime Minister was just a collection of “right things to say on such occasion.” (My only encounter with Big Mike was a very short one. I had been to PI's gym on a weekend and after half an hour or so on the treadmill had the sudden urge to look up something in a book. Sweaty and without glasses I headed over to the library, where a group of important looking grey suits were just being shown around. Thinking that I might leave a somewhat unfortunate impression, I silently vanished.) But leaving aside Lazaridis and the Canadians for a moment, the idea that tax-paid fundamental research in the end should produce some sort of *stuff* is unfortunately widespread.
Last year Paul Drayson, Minister of Science in the UK, said
“Scientists should be accountable where work is funded by the taxpayer and therefore I think it is right that scientists should be asked to think about the impact that they have had.”
(as quoted in THE "Science, we have a problem."). What bothers me about this quotation is the implicit assumption that scientists do not think about the impact they have. As if scientists would not care whether their work is relevant for the society they are part of. I previously wrote that in my experience it is a crude misconception. Most people I know who work in fundamental research do actually suffer from feeling useless for the exact reason that the impact of their work is difficult, if not impossible, to quantify. It is often even difficult to communicate. But think about it: there is the prospect that their work will fundamentally change the way we understand our own place in this world, possibly some centuries into the future. How do you want them to account for that?
To come back to the Canadians one more time, on page 20 of Canada’s Science and Technology Strategy, Mobilizing Science and Technology to Canada’s Advantage, you can read:
“Science and technology is not an end unto itself. It is a means by which we can pursue sustainable development.”
(via Jeff Sharom). I am all for sustainable development. I am also totally in favor of innovation and I love my BlackBerry. I have a lot of imagination and can picture how our rapidly improving understanding of, for example, the first moments of the universe might one day lead in mysterious ways to an application. But sometimes science is an end unto itself.
Saturday, May 08, 2010
If the charming Icelandic volcano lets me, I'll be flying to Toronto on the weekend. Next week, I'll be attending the PI workshop on the "Laws of Nature: Their Nature and Knowability," which promises to be interesting. I have some more trips upcoming this summer. Towards the end of June, there's a meeting of the "Working group on quantum black holes" in Bonn (which is part of the COST action "Black holes in a violent universe"), and SUSY 2010 happens to be also in Bonn, in August. The Planck 2010 is May 31 to June 4 at CERN, where I will not be going because it conflicts with my Toronto trip. And of course there's the ESQG 2010, July 12-16, here in Stockholm. To top off the summer, my 15-year high school reunion is also planned for August. Not the busiest summer ever, but still it seems I'll collect some frequent flyer miles.
You'll hear from me when I'm on the other side of the big water. Meanwhile, have a nice weekend or, as the Swedes say, ha det så bra.
Thursday, May 06, 2010
Why'd you have to go and make things so complicated?
"Why'd you have to go and make things so complicated?
I see the way you're actin' like you're somebody else
Gets me frustrated
Life's like this you
You fall and you crawl and you break
And you take what you get, and you turn it into
Honestly, you promised me
I'm never gonna find you fake it
No no no"
It is interesting, if you follow the news press, how frequently one finds references to "complex" problems, issues and questions: "Illegal immigration is a complex [...] issue with no easy solution," "Toyota faces complex legal woes as lawsuits mount," "Senate passes complex, controversial energy reform bill," "[T]he subject of radicalization [...] is a complex problem," and so on and so forth. One is left to wonder, what is not complex?
The difference between complex and complicated is that a complex system has new, emergent features that you would not have seen coming from studying its constituents alone. (For the meaning of "emergent", see my earlier post on Emergence and Reductionism.) A complex problem can't be decomposed. It can't be reduced. It's global, it's interrelated, it's on many timescales, and it doesn't respect professional boundaries either. Worse, you don't know where it begins and ends. It's full of "unknown unknowns." It's not only their problem, it's our problem too.
If you need any evidence for the popular appeal to complexity, even the Pope had something to say about it last year:
"The current global economic crisis must also be viewed as a test: are we ready to look at it, in all its complexity, as a challenge for the future and not just as an emergency that needs short-lived responses?"
In a recent article in the New York Times, David Segal wrote
"[C]omplexity has a way of defeating good intentions. As we clean up the messes, there's no point in hoping for a new age of simplicity. The best we can do is hope the solutions are just complicated enough to work."
Calling a problem "complex" nowadays seems to be an acknowledgement that one doesn't really know how to handle it. A complicated problem, sure, we'd figure out what to do. After all, evolution has kindly endowed us with big brains. But a complex problem? Our political and social systems can't deal with that. *Shrug shoulders* Now what? Let's clean up the messes and hope that a complicated solution will do.
It is true that the problems we are facing are becoming ever more complex. This is a consequence of our world becoming ever more, and ever better, connected. This creates new opportunities and fosters progress, but along the way it causes interdependencies and, when left unattended, lowers resilience.
It is however not true that we don't know what to do with a complex problem. We just don't do it. In contrast to our political systems, humans are good at solving complex problems. It's the complicated ones that you'd better leave to a computer. Look at the quote from Avril Lavigne that is the title of this post. She's talking about relationships. Navigating a human society is a multi-layered task on many time-scales with unexpected emergent features. It's full of unknown unknowns. That's not a complicated problem - it's a complex one. We have the skills to deal with that.
The reason why we can't use our abilities to deal with economic or political problems is simply lack of input and lack of method. These are solvable problems. And they are neither complex nor complicated.
I have written previously about the three requirements that have to be fulfilled for a system to be able to develop into an optimal state, to find a good solution to a problem. One is free variation. Democracy and a free market economy are good conditions for that. The second one is the ability to detect whether a small variation is an improvement or not. The third is the ability to react to the result of the variation. That's basically a poor man's way to find a maximum: go a step in each direction and take the direction that goes up*.
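For concreteness, here is a minimal sketch of that three-ingredient loop in Python. The objective function, step size, and iteration count are made up purely for illustration:

import random

def hill_climb(f, x, step=0.1, iters=1000):
    """Poor man's maximization: make a small random variation,
    keep it only if it improves the objective, repeat."""
    fx = f(x)
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)  # 1. free variation
        fc = f(candidate)                            # 2. detect whether it is an improvement
        if fc > fx:                                  # 3. react to the result
            x, fx = candidate, fc
    return x, fx

# Toy objective: a single hump with its maximum at x = 3.
best_x, best_f = hill_climb(lambda x: -(x - 3.0)**2, x=0.0)
print(best_x, best_f)  # ends up near x = 3, a local maximum in general

The three numbered comments mark exactly the three requirements from the paragraph above.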
This procedure however dramatically fails whenever data is missing to find out whether a change is an improvement or not, or when there's no way to react to it. Take the recent economic crisis. There were people all over the place who felt something odd was going on; that this money creation out of nothing didn't make sense. They had the data, but they had no way to act on it. There was no feedback mechanism for their odd feeling. Only much too late would one hear them say they had sensed all along that something was wrong. From a transcript of the radio broadcast "This American Life" (audio, pdf transcript):
mortgage broker: was unbelievable... my boss was in the business for 25 years. He hated those loans. He hated them and used to rant and say, “It makes me sick to my stomach the kind of loans that we do.”
Wall St. banker: ...No income no asset loans. That's a liar's loan. We are telling you to lie to us. We're hoping you don't lie. Tell us what you make, tell us what you have in the bank, but we won't verify? We’re setting you up to lie. Something about that feels very wrong. It felt wrong way back when and I wish we had never done it. Unfortunately, what happened ... we did it because everyone else was doing it.
Italics added. (We previously discussed this in my post The Future of Rationality.)
It's not that nobody noticed what was going on. There was a variation taking place, but part of the change it was creating wasn't monitored. And there was no way to feed notice of the change back into the system. Computer programs made the risk assessments. They might not have made sense, but you wouldn't question them because everybody played the same game. In a recent NewScientist article, the economist Ernst Fehr is quoted saying
"Almost everyone in business, finance or government studies some economics along the way and this is what they think is the norm. It's a biased way of perceiving the world."
"Biased" is another way to say there's input missing.
We notice similar failures with other examples. Our economic systems are slow if not incapable of dealing with ecological problems because the problems don't automatically feed back into the system (at least not on useful timescales). There is a variation, but the optimization process can't work properly.
The reason why this close monitoring of the system (our global political, social, ecological systems) has become necessary, and why no return to simplicity is possible, is that even small groups of humans can cause a significant change to their environment. That may be a natural, social, or organizational environment, all of which could be summarized as "background". In earlier days, we were trying to achieve an optimization in a fixed background. Now, we can no longer neglect that we are changing the background by our own actions. In physics, this is commonly known as "backreaction."
Take for example the deflection of light at the sun: to compute the deflection you treat the photon as propagating in the fixed background field of the sun. That is an excellent approximation. Yet to be precise, the photon does actually change the background field too. If you took heavier objects passing by the sun, you'd eventually come to notice that they contribute to the gravitational field too. The approximation of a fixed background is often made. For example, for the Hawking radiation of black holes, one commonly neglects the backreaction of the emitted radiation. This, again, is an excellent approximation, but one that breaks down at some point (in this case, when the energy of the emitted particles comes close to the mass of the black hole itself).
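Just to put a number to that fixed-background example: in this approximation the deflection angle for a light ray grazing the sun is α = 4GM/(c²b). A few lines of Python evaluate it with standard solar values; this is a back-of-the-envelope sketch, not a full general-relativistic calculation:

# Deflection of light grazing the sun in the fixed-background approximation,
# i.e. the photon is assumed not to alter the sun's gravitational field.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30    # solar mass, kg
c = 2.998e8     # speed of light, m/s
b = 6.957e8     # impact parameter: the solar radius, m

alpha = 4 * G * M / (c**2 * b)   # deflection angle in radians
print(alpha * 206265)            # in arcseconds: about 1.75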
If you are however in a regime where you can no longer neglect backreaction, as we are now with humans living on planet Earth, then you have to find a common solution for both the system and the background. Or you could say, they form a common system. This necessity to find a solution for both the background and the objects in it is one of the great insights of Einstein's theory of General Relativity, where the background is space-time, formerly thought to be an unchanging, fixed entity. You cannot have a time evolution for a system and just look at what the background will do, or the other way round. You have to find a solution for both together. It is somewhat of a stretch of the notion of a "background", but I think that we are facing exactly this problem today when we are trying to find a sustainable solution for mankind living on this planet. We can either return to an era in which backreaction was negligible and the background was an eternally static, unchanging thing at our disposal. Or we can learn how to find a stable solution to the full problem: us and our environment.
This issue is far more complex than you might think. That's because we are now in a situation where the change we cause to our environment influences our own evolution and adaptation to that environment. Human culture has demonstrably been an evolutionary force for thousands of years already. And we are now only just short of actively shaping our own evolution, not to mention that of other species. Whether that's a good idea or not depends on whether we are able to learn fast enough, i.e. whether assessment of and reaction to a change is fast enough so the system doesn't just run down the hill before we can say bullshit.
And that's why I keep saying we need to finish the scientific revolution. Trial and error may have worked well to organize our living together for thousands of years, but this method has its limits. In an increasingly interconnected world, errors are too costly. We need to use a smarter method, a scientific method.
To be able to find a stable, sustainable, and good integration of the ongoing human development into the environment, we first of all need to know what's going on. It is not too far-fetched to think that Google will play a role in that, with its proposals for a "real-time natural crisis tracking system," a "real-world issue reporting system" or "collecting and organizing the world's urban data" (see: Project 10 to the 100). The next step is to find a good way to extract meaning from all this data, to be able to react in a timely manner to changes. People often seem to think that by this I mean the system's dynamics has to be predicted. And let us be clear again that the system we are talking about is the global political, economic and ecological system. Having a model that makes good predictions would be nice, but it is questionable whether this is possible or even desirable. It is, in fact, not necessary.
You don't need to predict the dynamics of the system. You just need to know what parameter space it will smoothly operate in so optimization works. You want to stay away from threshold effects, abrupt changes with potentially disastrous consequences. Think again about how we deal with human relationships. You don't predict what your friends, relatives or your partner will be doing. This would be pretty much impossible. But after you have got to know them you'll have an idea what to expect from them, and you'll be able to maintain a sustainable relationship on a balance of taking and giving. The same holds for the systems that govern our lives. You don't need to predict their evolution. You just need to know the limits. Life's like this...
* This does not find you a global maximum, but that's a more complicated problem that we'll discuss some other time.
Tuesday, May 04, 2010
Physics Bits and Bites
Here are three interesting and intriguing physics items I came across recently:
• Last year, the American Association for the Advancement of Science (AAAS) had organized a symposium called "Quest for the Perfect Liquid: Connecting Heavy Ions, String Theory, and Cold Atoms". Perfect, low-viscosity liquids can be observed when there is a very strong interaction between the constituents of the fluid, as is the case for the quarks and gluons created in heavy ion collisions at RHIC, or clouds of ultracold lithium atoms in optical traps. The strongly coupled quark gluon plasma can be described using the AdS/CFT correspondence, which brings string theory into play (see also this earlier post). At the AAAS symposium, physicist-blogger Clifford Johnson (from Asymptotia) and Peter Steinberg (from Entropy Bound) discussed this connection, and a write-up of their presentation has now come out as a feature article in the May 2010 issue of Physics Today, "What black holes teach about strongly coupled particles" (free access).
• You may be aware of the ongoing quest for the densest possible packing of tetrahedra. The NYT wrote about this in January, and the article mentions that a paper on the subject "prompted Paul M. Chaikin, a professor of physics at New York University, to buy tetrahedral dice by the hundreds and have a high school student stuff them into fish bowls and other containers." This project has now resulted in a Physical Review Letter, with an experimentally determined volume fraction of 0.76±0.02 (the current theoretical "record" is at 0.856). Analysis of the experiment was done using Magnetic Resonance Imaging to look "into" the container crammed with tetrahedra, which shows that the packing is highly disordered. More background can be found in an article at "Physics", which also contains a free link to the PRL paper.
• Also via Physics, I've learned about what may be the fastest (and possibly smallest) analogue computer to perform Fourier transforms: a single iodine molecule. An iodine molecule consists of two iodine atoms, which can vibrate against each other, realizing a tiny harmonic oscillator. During one period, a harmonic oscillator follows a circular trajectory in phase space, which means that the Wigner function describing the quantum state of the oscillator "switches" position and momentum coordinates every quarter period. Going from real space to momentum space corresponds to a Fourier transform, so when the wave function of the iodine molecule is prepared in real space, after a quarter of a period the wave function encodes the Fourier transform of the initial configuration. Using lasers, it is possible to prepare the molecule in a definite state, and to probe the state again later. This allows discrete Fourier transforms for four and eight elements, all within 145 femtoseconds, "which is shorter than the typical clock period of the current fastest Si-based computers by 3 orders of magnitudes" ("Ultrafast Fourier Transform with a Femtosecond-Laser-Driven Molecule", PRL). A toy numerical illustration of the quarter-period trick follows below.
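The quarter-period trick is easy to check numerically. Below is a minimal split-step sketch in natural units (ħ = m = ω = 1, so the oscillator period is 2π); it is of course not the experiment's actual analysis, just an illustration that evolving a wave packet in a harmonic potential for a quarter period reproduces its Fourier transform:

import numpy as np

# Natural units: hbar = m = omega = 1, so the oscillator period is T = 2*pi.
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# Initial state: an off-centered Gaussian wave packet (arbitrary test input).
psi = np.exp(-(x - 2.0) ** 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)
psi0 = psi.copy()

# Split-step (Strang) evolution under H = p^2/2 + x^2/2 for a quarter period.
steps = 2000
dt = (np.pi / 2) / steps
for _ in range(steps):
    psi *= np.exp(-0.5j * dt * 0.5 * x ** 2)  # half step in the potential
    psi = np.fft.ifft(np.exp(-1j * dt * 0.5 * k ** 2) * np.fft.fft(psi))  # kinetic step
    psi *= np.exp(-0.5j * dt * 0.5 * x ** 2)  # half step in the potential

# After T/4 the position distribution equals the initial momentum distribution,
# i.e. the squared magnitude of the Fourier transform of the initial state.
phi0 = np.fft.fftshift(np.fft.fft(psi0)) * dx / np.sqrt(2 * np.pi)
p = np.fft.fftshift(k)
deviation = np.abs(psi) ** 2 - np.interp(x, p, np.abs(phi0) ** 2)
print("max deviation:", np.max(np.abs(deviation)))  # tiny: the evolution performed a Fourier transform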
Saturday, May 01, 2010
Publication Cut-off
The German Research Foundation (DFG) has taken an important and overdue step. To limit their applicants' attempts to blind the reviewer with publications, from July 1st, 2010 on, a maximum of 5 publications can be listed in the CV. In addition, only papers that are already published can be listed. Previously, it was possible to also list papers that had been submitted but not yet published. The change in this policy is apparently a reaction to an instance last year in which applicants (in the area of biodiversity) invented publications. (More details on the new regulation here.) It remains unclear to me whether a paper on the arxiv counts as published or unpublished.
With this decision, the DFG is clearly signaling that it's quality that matters, and not quantity. Or at least that's what should matter for their referees. Another reason for the change is that other countries have similar restrictions. The NSF for example also has a limit of 5 publications relevant for the project, and the NIH 15.
Matthias Kleiner, President of the DFG said
“With this we want to show: For us it is the content that matters for the judgement and the support of science.”
And he bemoans that today
“The first question often is no longer what somebody's research is about, but where and how much he has published.”
(As quoted in Physik Journal, April 2010, my translation).
The DFG is the funding source for scientific research in Germany. Not the only one, but without doubt the most important one. This decision will therefore have a large impact. The impact is however limited, in that the other major reason publication numbers are ever increasing is that hiring committees pay attention to these numbers - or are at least believed to pay attention, which is already sufficient to create the effect. The President of the German Higher Education Association (DHV*), Bernhard Kempen, comments
“To assess a candidate's qualification in a hiring process, it should likewise be solely the content of the provided publications, not their number, that is decisive for an appointment.”
(as quoted here, my translation.)
Since I have written many times that it hinders scientific progress when selection criteria set incentives for researchers to strive for secondary goals (many publications) instead of primary goals (good research), it should be clear that I welcome this decision by the DFG.
* DHV stands for Deutscher Hochschulverband. The literal translation of the German word "Hochschule" is "high school" but the meaning is different. "Hochschule" in Germany is basically all sorts of higher education, past finishing what's "high school" in America. The American "high school" is in German instead called "Oberstufe," lit. "upper step." See also Wikipedia.
Sunday Scribbles #31 - Bedtime Stories
Once upon a time, in a Land filled with instant gratification, there lived an Urban Princess. The Princess was due at the Ball the next morning, but the excitement of it left her mind racing at night.
Each evening, the Princess laid her petite head upon the pillow, yet sleep eluded her.
Surely, she thought, I have been hexed by that powerful witch of a Department Manager.
Surely, she lamented, it is the fault of the President, who wages war for barrels of oil.
Surely, her mind cried out in the darkened room, I can not sleep because it is a conspiracy of the Liberal Media and Dick Cheney to keep me awake for days on end so that my blurry eyes can not guide my fair hand on Election Day.
(In this Land, it was right and proper to blame everyone else for your shortcomings.)
As she gazed at her popcorn ceiling, vowing to have contractors come out and replaster it sometime before next Summer, a gentle glow filled the room. A large, near translucent green butterfly had gently floated in through her open window while she had been contemplating. It fluttered above her head, sweetly humming a tune.
"My, but aren't you a gaudy, CG-gauzy thing?" she cried, reaching for her slipper.
"Wait!" the butterfly pleaded, ducking behind a half-finished glass of rum on the rocks. "I am magical!"
The Princess gritted her teeth and calculated how much force would be needed in order to squash the insignificant bug without damaging her Ethan Allen Tuscany nightstand. "Magic, my ass!"
"Fair Urban Princess, it is true!" it said, skittering away from the glass and hiding behind an empty package of Unisom. "I can give you and your restless mind the sleep you need. So you can finally enjoy a restful night and a fresh start! So from the time your head hits the pillow until the second your alarm clock sounds, you're getting the peaceful sleep you need."
"Hmmm?" the Princess questioned, a bit intrigued.
"I am designed to give you a restful night's sleep. It not only helps most people fall asleep quickly, I help you stay asleep all night long with fewer interruptions and you will wake up refreshed. I will not lose my effectiveness over time as shown in a 6-month study. Additionally, I am approved for long-term use. That is what makes me unique."
The Princess scratched her chin and contemplated the fact that she was having a conversation with a green-glowing CG-animated butterfly.
"You can feel quite good about taking me," the butterfly added, sensing the close of a sale. "When you're about to go to bed, simply swallow me with a bit of water and get ready to enjoy a restful night's sleep."
"Oh, alright," the Princess said, grateful to have a bit of magic to usher her into slumber.
"Important Safety Information!" babbled the Butterfly suddenly, so quickly that it nearly sounded as though he were an Auctioneer in his larval stage,
"I should only be taken immediately before bedtime. Be sure you have at least eight hours to devote to sleep before becoming active. You should not engage in any activity after taking me that requires complete alertness, such as driving a car or operating machinery. You should use extreme care when engaging in these activities the morning after taking me. Do not use alcohol while taking any sleep medicine. Most sleep medicines carry some risk of dependency. Do not use sleep medicines for extended periods without first talking to your doctor. Side effects may include unpleasant taste, headache, drowsiness and dizziness."
"Oh, alright!" moaned the Princess, too eager to capture a dream to bother with words that seemed as if spoken in fine print. "Shut up already and get in my mouth."
She quickly fell asleep moments later, and awoke feeling refreshed the next day. She dressed for the ball, jumped into her Mercedes ML320, and drove along the highway - only to realize that she was still partially asleep as her SUV kissed the guardrail and plummeted off a steep cliff.
The moral of this story: there is no magic pill that will solve all your problems. Treat the condition; do not simply medicate the symptom.
The End.
Read about other Bedtime Stories: Sunday Scribblings: #31
Pharmaceuticals and Caribou
This sabbatical has taken much longer than expected, and I apologize to my regular readers for leaving them out in the cold. Thank you for all the warm email wishes. At least I feel good. I feel much better than I did the day before. To feel good (to feel well) is to feel alive.
I would like to thank all the little people who made this wonderful event of "feeling good" possible, like Glaxo Smith Kline.
What an interesting topic. Alas, I shall not waste any effort to untangle the treacherous web of pharmaceutical manufacturers. I shall say that, in 2005, Old GSK's pharmaceutical sales accounted for 18.88 billion Pounds Sterling, which was roughly 86% of their total sales. 6.9 billion Pounds Sterling was their profit. Don't we all love those clever Brits? How about some shits and giggles? I found a conversion table: $33,883,192,823.84. "B" as in Billion dollars.
Billions. I think I will be sick. Excuse me whilst I go pop a pill. Never fear! It is a generic one, manufactured by Watson Pharmaceuticals, a humble little joint that provides us with generic formulas to better usurp funds from those tricky Brits at GSK. Watson does respectably, and reports indicate that for the six months ended June 30, 2006, total net revenue increased 12 percent to $917.6 million, as compared to $817.1 million for the first six months of 2005. Net income for the first six months of 2006 was $9.6 million, or $0.09 per basic and diluted share, as compared to net income of $79.1 million, or $0.67 per diluted share, for the same period of 2005.
Million. Much more soothing than Billion. I feel better scarfing my name-brand knock-off poison now, knowing in my soul that I am supporting good old American attitude. I shall now move to Canada to take advantage of the one thing our country can not seem to provide: socialized health care.
Take that, GSK!
I have always wanted to live in Thunder Bay (that is in Ontario, dear readers), and dance upon the majestic shores as bald eagles and Peregrine falcons soar free over my head. I would, in all probability, dance myself off a damned rock and crack my skull on the ground, to be run over by a rogue caribou as I slip into unconsciousness. Not to worry, I will have free health care in Canada, and Watson makes generic hydrocodone to ease my pains.
I could save myself some trouble and move to England, another land of The Free Health Care System, but I'm deathly afraid of Routemaster double-decker buses. I know they were thinned out in the 1980's, but I just will not take the chance of walking near Trafalgar Square and having one jump out at me suddenly. People have been gruesomely mauled by Routemasters, and the 1208GMT 159 bus out of Marble Arch is known to be especially savage.
Aut? What are you on?
Please do not ask that again. You do know that I hate to share.
Friday the 13th: the Muse Roams Blogland.
Happy Friday the 13th to all of you. Friday is my blogging day, where I can sit back, read, and catch up on everyone's week. I thought it would be nice to share some of my favorites with you today:
I have just come back from Roadchick's
Roadtrip, where I LMAO over her spewing pumpkin and Friday the 13th Follies. I'm down to the granny panties, but not desperate enough to do the Man Solution and turn things inside out (after first conducting a Sniff Test), at least as far as undergarments go. Better Half has brought only a few items up from the Dungeon Laundry Cell, so by my visual perspective of the upstairs closet, it would appear sniff tests might come into play this weekend. I'm certain (hoping, praying!) that there is more clean laundry downstairs.
Stopping by
To Love, Honor and Dismay, I saw an interesting article concerning "How not to ask your husband for help." Thank God for Better Half. I may have to motivate him from time to time, but he is not ashamed of running a vacuum or using cleaning liquids. I did crack up over Dr. Andrew's interview on Basil's Blog.
Paris Parfait has delighted my senses once again, and her photograph of La Giralda Cathedral takes my breath away!
I was shocked to hear that someone I knew passed away this week. Although JerryALT and I didn't chat often, I loved his insight into the Jewish faith. David Shelton gave a wonderful
tribute to him in his blog. David has been working hard on his new book, The Rainbow Kingdom, and it is now available for preorders.
On that note, Michael sent me an autographed copy of his latest work, and I am working on a review of it for Amazon. If I can just get it completed, I will offer a copy to B&Noble. Michael's book can be found at your local book store, or you can order it
here. I will publish my review here once it is completed. Michael also mailed me a copy of his article, Kidnapped in Iraq, which is the story of peace activist James Loney and his partner, Dan. This article was published in the August 29, 2006 edition of The Advocate.
Lori~Flower had me grinning as she shared that mothers never cease to mother, even after we have left our roaring twenties. My mother would have done the same. Actually, now that I think about it, Mum never fails to give good advice at least once per phone call.
The Benedict Notes, by AnnieElf, set my heart soaring - the return of the Latin Mass. It's about time! I am probably one of the few people who really enjoys Latin, and the greatest beauty is hearing an entire Mass in that tongue. Many Americans will scorn it, to be certain, but they hardly have a say in it, especially as most of them don't bother to even learn what the Mass is about. They sit quietly and recite prayers and have no clue as to what they are supposed to be thinking as they pray. It is my opinion that the average American Catholic is a pod. Can you tell that I have never been a Vatican II fan? Annie's other blog has a lovely haiku about a blackbird, which I promptly printed.
Darren Naish had an interview on the BBC news, which I missed. He also explores the controversial origins of the family dog, and our views are very similar on that subject.
Finally (for today at least) I ended my reading with
Sunday Scribblings. This week's theme is #29 - If I could stop time. . . and I encourage all of you to check out their blog and post your own entry.
My Apologies
I'm sorry I have not made much of an effort to keep up my blog this week. I will probably lose a few readers over that.
It has been a bad month for me physically. The month itself, and the environment around me, is epically beautiful however, and I managed to get a few pictures of the season before the leaves are snatched from the trees by the hands of winter. I'll try to get them posted tomorrow.
As I write this, the first snowflakes are falling outside!
Other than that, I have not been able to do much. My apologies to you.
A Catfish Paprika Recipe Worthy of George Totin
What does one do with 4 pounds of fresh catfish? The packets were on sale yesterday, and for roughly $3, we would have the makings of a good fish fry. We would have, that is, if Better Half wasn't too sore from doing yard work yesterday.
There are many things I can do well, but frying fish is not one of them. Better Half, the Southern Boy, can fry anything to perfection. He surely channels Paula Deen and Bobby Flay, if not Emeril. I woke up this morning anticipating spicy fried catfish and a side of winter squash.
Better Half woke up anticipating going back to bed.
What does one do with 4 pounds of fresh catfish? They improvise.
A family favorite in this house (handed down from my Father) is Chicken Paprika. It's a Hungarian dish, heavy on the paprika and sour cream; a true comfort food. I used to make this traditional dish every Father's Day, but since moving away from my parents, I have not bothered to whip up a batch (with the exception of their visit out here this past summer.) It is not that the recipe is difficult, or the ingredients too hard to come by. It simply reminds me of my Dad, and to make chicken paprika is to admit that it saddens me as he is not here in person to enjoy it with us. I have made the dish with chicken, beef and pork... and my mind pondered the possibilities of catfish. Would it work? Would it taste terrible? Could I add the squash to it? Oh, what the hell! Let's go.
Catfish Paprika Recipe
4 pounds fresh catfish, 1" cubes
1 yellow onion, diced small
1 Patty Pan squash, diced small
1 Tablespoon butter
Salt, to taste
Pepper, to taste (we use 1 tablespoon)
Paprika, ground, to taste (we use a whopping 1/8 cup or more!)
1 can low fat, no MSG chicken broth
1 cup sour cream
1 cup shell noodles, cooked
Melt butter on med-hi heat in a large skillet, then add diced onions, salt, pepper, and 1/2 the paprika. Cook until translucent. Add squash and fry for about 2 minutes, or until it begins to become tender. If things get too dry, you can add a tad more butter.
Add catfish to the pan, and stir fry for a few minutes, then add chicken broth and remaining paprika. Cook until catfish is done. Lower heat and add sour cream, a bit at a time, working it into the broth mixture. If you would like the sauce to be heartier, you can thicken it with flour. Once sour cream is combined, add noodles. Your completed dish should have a medium pale-orange color to it.
Enjoy! Share a hearty pot with friends and family.
I tried it... and I like it! The fish doesn't overpower, and the flavor blends well with the paprika and sour cream. Normally, this recipe would be done without squash (and you can substitute pork or chicken). However, the squash added a bit of harvest aroma to it.
So here's to you, Dad! Wish you were here to savor Catfish Papikosh with us!
Sunday Scribblings #28 - An Assignment
This Sunday Scribblings was a tough one, as I'm sort of homebound this week - thanks to my crappy body.
As I can not write about any people that I observe (and writing about Better Half becomes too mundane for some of my readers), I'll draw you into Bold's world.
The Autumn air has chilled, and the leaves prepare to slip their bonds. The sun dances through them, and the canopy of the tree becomes a stained glass church. It is here that Bold dwells, the summer and autumn of this, his first year, a true test of his stock.
Bold is a rugged thing, a burly thick-bodied American Tree Sparrow (Spizella arborea), masquerading among the Chipping sparrows. His red cap is eternally tousled, bits of feather sticking up at odd angles as he pecks frantically at the harvest seed in the hanging feeder. When I first laid eyes on him, earlier this year, I thought him perhaps sickly, as no healthy bird would run about with such a poorly preened coating. Yet he remained, steadfast against all odds, the mutant Tree-Chipping Sparrow skulking amidst his beautiful cousins. He never offered a humble chirp, but always chose to announce his presence with a rather throaty CHURP, accompanied by the strangest dancing displays. After closer observation, I believe he was either lost, or else the two different species mated to produce him. If 'o' is a typical Chipping sparrow, then Bold waddles in with 'O'... larger, rounder, louder, and much much bolder. He drives even the largest of Ravens from his territory.
For some weeks, I have lost track of our house wrens, cardinals, and chipping sparrows. I have not heard the haunting cry of our mourning doves in quite a while. I have not been able to sit on my front porch to enjoy their community as it draws together each day in celebration of bountiful food and water. I have kept my eyes open, hoping for some small sign that my freakish little bird was still about. Bold was my companion, and my inspiration to keep fighting, no matter how heavily stacked against me the odds are.
Yesterday graced us with a heavy downpour of rain. Better Half was about to start the mower, and I had patted my hair into place and had ventured outside to keep him company. The rains began almost immediately. I grabbed the container of seed, urging Better Half to at least get that feast set up for our friends, and then tore open the seed cake packets for the mesh feeders. A flash of lightning sent a few lurking Chipping sparrows racing for the protection of the canopy of our large tree... and in that flash, I saw Bold.
He stood firm in the tree, his head cocked to one side as he waited for Better Half to resupply the hanging feeder. Rain and thunder be damned, for Mother Nature herself would not drive him from his perch. I shouted to Better Half and pointed, crying "Oh look, there's Bold" as the poor man did his best to get food in place while being drenched by the storm. Unfortunately, Better Half could not see Bold through his rain streaked glasses.
Bold has changed in the past few weeks. He is even larger, and more bedraggled in feather. He is every bit as lively, however, and offered a singular CHURP in gratitude for the free meal. His is an unquenchable spirit; each passing day means another chance to profit from the last moments of summer. His robust form skittered from twig to twig, and he regarded me momentarily before ducking out of sight behind a particularly large clump of leaves. I have never been able to capture him on film, yet his enthusiasm for life is etched upon my heart.
I struggled with insomnia until early this morning. I had left the bedroom window open, preferring the feel of the crisp night air. Better Half let the dogs out around 7 am, and closed the door, allowing me the luxury of a warm bed sans any dogs, cats or other humans. In the still morning air, I heard a particularly pleasing CHURP, and lifted the blinds just so. I lay quietly, allowing the early sunshine to warm the air, and my eyes gazed into the depths of the maple tree. The CHURP came again, and I spotted Bold on a limb. He shook himself, and glistening droplets flung out from his plumage. He cocked his head and stared at me from one gleaming black eye, and then ducked his head under his wing to nibble at some small itch. When his head emerged, his cap was just as scruffy as ever, and I smiled silently and thought of how tousled my own hair must look.
What was he thinking at that moment? Was he already dwelling upon the bird feeder below, or perhaps testing his resolve to migrate to some distant place? I do not know, in honesty, but it seemed to me that he had come up to the very top of the tree just to check in on me, for he stayed quite a while. I closed my eyes and lay back, warmed by the occasional song he offered in lullaby.
The sun climbed higher, and the bright light dragged me from my groggy state. I got up quietly and began to shut the blind - and there Bold sat, still in the same place. I whispered "Good morning, Bold" and offered him a nod. He scratched his head lazily with his leg, tearing several spent downy feathers from his neck and chest in the process, and then gave a final CHURP in return. It was the bit of peace that I needed, and it ushered me into a deep sleep.
As I write this, I hear a familiar song creeping in through the cracked window in the office. My heart soars.
On Entropy, the Arrow of Time, and Anthropic bias
I am going to digress from my usual rambling to allow you a brief snapshot into what Better Half and I do while driving: we communicate. Talking is a lost art to many people. It is more than a method of conveying needs; it is the prime method whereby we convey thoughts, theory, and philosophical ideas. To dialog, to communicate what seems incommunicable, is divine.
This entire topic began when I purchased a cheap watch. I have owned many in my life, but seldom wear one. I tend to exist outside the ideals of the space-time continuum, as I ignore time as a dimension.
(In physics, spacetime is a mathematical model that combines three-dimensional space and one-dimensional time into a single construct called the space-time continuum, in which time plays the role of the 4th dimension. According to Euclidean space perception, our universe has three dimensions of space, and one dimension of time. By combining space and time into a single manifold, physicists have significantly simplified a good deal of physical theory, as well as described in a more uniform way the workings of the universe at both the supergalactic and subatomic levels.)
Time is a strange thing. We can have a perception of the passage of time, as things move along in a sequence - the sun rises, and the sun also sets. This is what most think of when they hear the word "time" itself - the Time of Day/Night (and you don't even need to know who Isaac Newton is!)
I hold closer to Immanuel Kant's view of time: time is part of the fundamental intellectual structure (together with space and number) within which we sequence events, quantify the duration of events and the intervals between them, and compare the motions of objects. In this view, time does not refer to any kind of entity that "flows", that objects "move through", or that is a "container" for events.
I simply couldn't care less when I wake up, when I go to bed, or when I eat breakfast. I do not keep a schedule that is set, as I set my own schedule and never seem to do things exactly the same from day to day. I lose track of time, not because I fail to pay attention to its passing, but because I have no need to bother with tracking it at all. Clocks assault my vision in just about every room, but how often have I bothered to actually glance at one simply for the desire to know what time of day it is? Hardly ever, unless my existence must suddenly grind itself back to a more mundane path due to the pressing need to coordinate my personal time with the synchronicity of the rest of the world (or to keep an appointment in time with a doctor or group.) Thus I exist, and thus Isaac Newton rolls over in his grave. Kant, I am sure, would applaud that there is at least one being who does not need to rely upon Newton's theories in order to maintain sanity. I am quite happy to exist without a schedule or the knowledge of "what time it is" right now.
Hence, I shrug at time. I am chronologically challenged, meaning that the time arrow does not affect me mentally (although I do age), yet I see all things as relevant. I balk at the evidence of time's passing, for it means nothing. I am not immortal, yet my mortality is not hinged upon moving forward in time or in time's stagnation (for if time stagnates, then nothing moves forward, and the only option is to find out why, or hold on as we surf the event horizon and the effects of the reversal of time back to the black hole of Antioch. Never mind. You had to be there - 19 years ago - in the singularity of that moment, for that joke to hit home as humor.)
Alright. I'll try to explain (and will borrow, heavily, from other sources!)
In the natural sciences, time’s arrow, or arrow of time as it is also known, is a term used to distinguish a direction of time on a four-dimensional relativistic map of the world - which can be determined by a study of organizations of atoms, molecules, and bodies.
The thermodynamic arrow of time is provided by the Second Law of Thermodynamics, which says that in an isolated system entropy will only increase with time; it will not decrease with time. Entropy can be thought of as a measure of disorder; thus the Second Law implies that time is asymmetrical with respect to the amount of order in an isolated system: as time increases, a system will always become more disordered. This asymmetry can be used empirically to distinguish between future and past. (I won't delve into Chaos Theory here.)
The Second Law does not hold with strict universality: any system can fluctuate to a state of lower entropy (see the Poincaré recurrence theorem). However, the Second Law seems accurately to describe the overall trend in real systems toward higher entropy.
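As a toy illustration of both statements (the overall trend and the rare fluctuations), one can watch a coarse-grained entropy rise and then jitter in a simple simulation: start a gas of non-interacting particles in one half of a box and track the Shannon entropy of the binned position distribution. This is only a sketch, with made-up parameters:

import numpy as np

rng = np.random.default_rng(0)    # toy parameters throughout
n = 500
pos = rng.uniform(0.0, 0.5, n)    # all particles start in the left half of a [0, 1] box
vel = rng.normal(0.0, 1.0, n)

def coarse_entropy(pos, bins=10):
    """Shannon entropy of the binned (coarse-grained) position distribution."""
    counts, _ = np.histogram(pos, bins=bins, range=(0.0, 1.0))
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum()

dt = 0.001
for step in range(2001):
    pos += vel * dt
    pos = np.abs(pos)             # reflect off the wall at 0
    pos = 1.0 - np.abs(1.0 - pos) # reflect off the wall at 1
    if step % 500 == 0:
        print(step, round(coarse_entropy(pos), 3))

# The entropy climbs toward its maximum (log 10, about 2.303, for 10 bins) and
# then fluctuates around it; brief decreases do occur, but they are small and rare.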
Certain subatomic interactions involving the weak nuclear force violate the conservation of parity, but only very rarely. According to the CPT Theorem, this means they should also be time irreversible, and so establish an arrow of time. Such processes should be responsible for matter creation in the early universe. To me, in my daily life, time follows that pathway perfectly. I can not undo what has been done. I can not reverse time to change things that would later become a pinnacle by which I gain the desire to change so that the pinnacle does not take place, therefore changing my own timeline infinitely as that pinnacle is reshaped and reformed with each attempt to rid myself of it (and should I remove it I remove the desire to return to that point in time, thereby it does happen... or does it? Parallel universes explode, and Mickey Mouse does the Mashed Potato on Elvis' grave.)
This arrow is not linked to any other arrow by any proposed mechanism, and if it pointed in the opposite time direction, the only difference would be that our universe would be made of anti-matter rather than of matter. More accurately, the definitions of matter and anti-matter would just be reversed. Does it matter? Not to me, but that is because I am just weird. It affects me, as I can not escape the clutches of time itself.
That parity is broken so rarely means that this arrow only "barely" points in one direction, setting it apart from the other arrows whose direction is much more obvious.
Quantum evolution is governed by the Schrödinger equation, which is time-symmetric, and by wave function collapse, which is time-irreversible. As the mechanism of wave function collapse is still obscure, it is not known how this arrow links to the others. While at the microscopic level collapse seems to show no favor to increasing or decreasing entropy, some believe there is a bias which shows up on macroscopic scales as the thermodynamic arrow. According to the theory of quantum decoherence, and assuming that the wave function collapse is merely apparent, the quantum arrow of time is a consequence of the thermodynamic arrow of time. Geeks everywhere are wondering if I would touch upon "the cat". I won't. I don't believe it exists, and I walk through it. I won't let anthropic bias hinder me.
"Anthropic bias" is a term coined by the philosopher Nick Bostrom, as an expression for the bias arising when "your evidence is biased by observation selection effects". This is, basically an extreme generalization of the confirmation bias and the cognitive bias, involving not only mind-set, memory and methodology, but the whole way in which one sees oneself as an entity investigating an environment. As the etymology of the term suggests (from the Greek word for "human being") Bostrom's main claim could be reduced to saying that being a human being itself constitutes a bias for, and consequently a hindrance to, objective observation. In my own pondering, I tend to take things from different perspectives, and I often forget that I am approaching things as a human being. I escape the bounds and limitations of time and space, disregard biological necessities, and "lose track" of time as a whole. I spend hours probing a forming hypothesis, testing it to see if it would withstand the beatings necessary to become theory. I cease to the be entity, and become that which I study, bit by bit, on a mental scale. I leave the realm of hard science and embrace philosophy, but science remains my grounding point as the laws of mathematics must always be applied.
Bostrom suggests a way out using what amounts to quasi-empirical methods, and I enjoy embracing his philosophy. In his book Anthropic Bias: observation selection effects in science and philosophy, Bostrom explores the implications of these for "polling, cosmology (how many universes are there?), evolution theory (how improbable was the evolution of intelligent life on our planet?), the problem of time's arrow (can it be given a thermodynamic explanation?), game theoretic problems with imperfect recall (how to model them?), traffic analysis (why is the "next lane" faster?)."
It has been suggested that the whole idea of an anthropic bias is irrefutable. How could a criticism, presumably made by a human being, against the theory of anthropic biases be conceived? If it is not possible to review it critically, the whole theory becomes a will-o'-the-wisp without any practical consequences for our human lives here on Earth. I can tell you that existing the way I do when I'm on a mental tangent is harmful, as the "real life" things that are critical are often ignored. To remove oneself, one must remove one's self. To remove one's self, one neglects others. Few people can so totally remove themselves and remain sane. Perhaps that is an indication that I am insane, yet do we base sanity upon how an individual reacts to his environment, or do we base it on that individual's ability to grasp reality? Even on a "tangent", I assure you that I grasp reality for what it is. I simply choose to ignore that which is not immediately essential for me to complete my reasoning.
Another problem with the theory purporting the existence of a general anthropic bias is that it sounds self-referentially inconsistent: if Nick Bostrom is a human being, and the anthropic principle is valid, then his observations will be biased; the anthropic principle is an observation made by Nick Bostrom; hence, either (α) Nick Bostrom is not a human being (or alternatively, knowledge of the anthropic principle was supernaturally revealed to him), or (β) the anthropic principle is itself anthropically biased, or (γ) at least one observation made by a human being (e.g. N.B.'s observation of the anthropic bias) is not biased. γ is a counterexample to the general anthropic principle, and all three alternatives (α, β, and γ) point to Bostrom's theory being poorly conceived.
Needless to say, this entire line of thinking stems from a conversation between Better Half and myself, in which we dialoged concerning what terminology would best apply to me as far as my attunement to time is concerned. Am I chronologically challenged, in regards to my complete ignorance of the actual time of day? Am I entropically hindered, as I throw the 4th dimension out the window on a daily basis? Perhaps we are socially challenged, Better Half and I. Perhaps other spouses discuss the kids, or groceries, or shoes? Perhaps they dwell upon stupid, mundane matters such as what to eat next Friday? Perhaps the only thing holding them together is the daily events that bind them, and their relationship goes stagnant as they attempt to stay cohesive as a pair by interjecting commentaries about how they think things should be when the sun rises? I don't know. Better Half and I have always had the ability to remove ourselves from the "mate" prospect, male and female, in order to explore the scientific and philosophical nature of things as a combined mind. That, dear readers, is why I married him. Time destroys, breaks down mountains and turns seas into deserts.
In time, relationships based solely upon sexual fulfillment fall by the wayside. When I chose Better Half, it was for his mind as well as his body. As we age, and as the arrow of time reminds us that we are indeed mortal, we will lose our bodies to the ravages of time, yet we have a bond that will keep cohesive for as long as our minds hold out. For those that are curious - I exist in a world that does not rely upon time, and my thread of connectivity to the real world is held by a being that is content to obey the laws of time: hence, I am grounded.
Sofja Kovalevskaja Award 2006 - Award Winners
Jens Bredenbeck
Molecular dynamics - the mainspring of chemistry and biology
Elementary vital functions, chemical reactions, the behaviour of substances in our environment - the driving force behind all of these phenomena is the movement of molecules. Molecules interact and change their structure, sometimes slowly, sometimes at incredible speed. Jens Bredenbeck is developing new measuring techniques which can keep up with the molecules' pace. Multidimensional infrared spectroscopy is the name given to the method which measures molecular motion with ultrashort infrared laser pulses. This molecular motion detector should help us to understand what important processes on the molecular level look like in real time, such as how biomolecules fold themselves into the right structure and how they fulfil their vitally important tasks.
Host Institute: Frankfurt a.M. University, Institute of Biophysics
Host: Prof. Dr. Josef Wachtveitl
Dr. Jens Bredenbeck,
born in Germany in 1975, studied chemistry at Darmstadt Technical University, Göttingen University and Zürich University, Switzerland, where he completed his doctorate at the Institute of Physical Chemistry in 2005. He is currently continuing his research at the FOM Institute for Atomic and Molecular Physics in Amsterdam, Netherlands.
Jure Demsar
Solid State Physics
New impulse for developing usable superconductors
Greenhouse gases, climate change and rising prices - the consequences of our use of energy are onerous. The idea that it might be possible to conduct electrical current without loss, to transform it and use it in engines - in a completely new way - sounds rather like a fairytale. Precisely this, however, i.e. the superconductivity of certain materials, has long since become reality. But only in the lab. Problems with the materials as well as the low temperatures required make it difficult to transform the new superconductors into usable electricity conductors. Thus, Jure Demsar is investigating novel so-called strongly-correlated high temperature superconductors. With the aid of ultrafast laser pulses he is observing in real time how electrons and other excitations behave and interact in this highly correlated superconducting state, and drawing inferences for optimising the material. The dream of loss-free conductors and other new electronic applications could move a step closer thanks to superconductivity research.
Host Institute: Konstanz University, Department of Physics, Modern Optics and Photonics
Host: Prof. Dr. Thomas Dekorsy
Dr. Jure Demsar,
born in Slovenia in 1970, studied physics at Ljubljana University, where he took his doctorate in 2000. He continued his research in Ljubljana at the Jožef Stefan Institute, in the Complex Matter Department, before receiving a two-year fellowship to research at the Los Alamos National Laboratory in the United States. Since then, Demsar has been working at the Jožef Stefan Institute where he attained his professorial qualification (Habilitation) in 2005.
Felix Engel
Cell biology
Hearts that heal themselves
The human heart is a unique organ in the true sense of the word: adult heart cells are unable to divide. If they die off as a result of a heart attack, for instance, the tissue cannot rebuild itself. Felix Engel is searching for a way of encouraging adult heart cells to divide - a capacity inherent in youthful cells, but one which they lose shortly after birth. As Felix Engel and his colleagues discovered, responsibility for this lies with a protein. If it is blocked, the cell regains its capacity to divide. What has worked in experiments on animals is now supposed to be used to treat humans successfully and thus be developed as an alternative to the controversial treatment with stem cells.
Host Institute: Max Planck Institute for Heart and Lung Research, Bad Nauheim
Host: Prof. Dr. Thomas Braun
Dr. Felix Engel,
born in Germany in 1971, previously worked at the Children's Hospital/Harvard Medical School in Boston. Engel studied biotechnology at Berlin Technical University and completed his doctorate there in 2001 after working on his thesis externally at the Max Delbrück Centre for Molecular Medicine in Berlin.
Natalia Filatkina
Historical Philology
From the Tally Stick to the Database
What did the German expression, einen blauen Mantel umhängen (literally, to hang a blue coat around someone), mean in the Middle Ages? What is a tally stick, what has it got to do with committing a criminal offence, and how does it come about that this term is still used in the same context in modern German? These are the questions being answered by Natalia Filatkina, who is investigating the history of such formulaic figures of speech in German. These so-called phraseologisms are, after all, a salient feature of all languages and essential for understanding them. What are the social, historical and cultural phenomena underlying these ancient phraseologisms? What conclusions can be drawn for modern language? So far, there have only been fragmentary investigations in this field. In her pioneering work, which combines historical philology with the international technologies of markup languages, Natalia Filatkina is preparing an electronic corpus of texts from the 8th to the 17th centuries and interpreting them according to modern linguistic criteria. In this way, a database is being created that will bring a part of cultural history nearer not only to an interdisciplinary circle of experts but also to a broad non-academic public, and will generate new knowledge for the present day.
Host Institute: Trier University, Department of German, Older German Philology
Host: Prof. Dr. Claudine Moulin
Dr. Natalia Filatkina,
born in the Russian Federation in 1975, studied at the Moscow State Linguistic University, the Humboldt University Berlin on a DAAD scholarship, the University of Luxembourg, and Bamberg University where she took her doctorate in 2003. Her dissertation on the Luxembourg language was awarded the Prix d'encouragement for young researchers by the University of Luxembourg. She is working in the field of Older German Philology in the Department of German at Trier University.
Olga Holtz
Numerical Analysis
The way out of the data jungle
Whether you are looking at the handling and flying qualities of the new Airbus, developing a new drug to combat Aids or designing the ideal underground timetable for a city with more than a million inhabitants - at some time or other you will have to do some complicated computations. The amount of data computers have to cope with is extremely large - we are talking in terms of millions of equations and unknowns - and they only have a finite number of digits for representing a number. In order to solve this problem using reliable and fast algorithms you need to know as much about computers as about mathematics. Olga Holtz is working at the interface of pure and applied mathematics. She is searching for methods which are both fast and reliable - which in this field of applied mathematics is usually a contradiction in terms. Her project, developing a method of matrix multiplication, should provide the solution to a multitude of computational problems in science and engineering.
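To give a flavor of what a "fast" matrix multiplication method looks like, here is a short Python sketch of Strassen's classic algorithm - shown purely for illustration, it is not Holtz's own method. It trades one of the eight recursive block products for extra additions, which is precisely the fast-versus-reliable tension described above:

import numpy as np

def strassen(A, B):
    """Strassen multiplication for n x n matrices, n a power of two.
    Seven recursive products instead of eight gives O(n^2.81) arithmetic,
    but the extra additions make it less numerically stable than the
    schoolbook method."""
    n = A.shape[0]
    if n <= 64:  # below this size, plain multiplication is fine
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
print(np.max(np.abs(strassen(A, B) - A @ B)))  # agrees with the schoolbook result up to round-off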
Host Institute: Berlin Technical University, Institute of Mathematics
Host: Prof. Dr. Volker Mehrmann
Dr. Olga Holtz,
born in the Russian Federation in 1973, studied applied mathematics in her own country at the Chelyabinsk State Technical University and at the University of Wisconsin Madison in the United States, where she received a doctorate in mathematics in 2000 and subsequently continued her research in the Department of Computer Sciences. She was a Humboldt Research Fellow at Berlin Technical University before being appointed to the University of California, Berkeley, where she has been working ever since.
Reinhard Kienberger
Electron and Quantum Optics
Using x-ray flashes to visualise inconceivable speed
If you want to observe and understand how chemical bonds evolve, how electrons move in semi-conductors or how light is turned into chemical energy through photosynthesis, you have to be pretty fast, because these chemical, atomic and biological processes play out in tiny fractions of a second, so-called attoseconds, each a billionth of a billionth of a second. Reinhard Kienberger has significantly contributed to developing observation methods which use ultrafast, intensive x-ray flashes on the attosecond scale to visualise, and in future maybe even control, what has so far been unobservable. Novel lasers based on ultraviolet light or x-rays as well as improved radiation therapies in medicine are just a few of the possible future applications ensuing from the young discipline of attosecond research.
Host Institute: Max Planck Institute of Quantum Optics, Laboratory for Attosecond and High-Field Physics, Garching, near Munich
Host: Prof. Dr. Ferenc Krausz
• Dr. Reinhard Kienberger,
studied at Vienna Technical University, Austria, and completed his doctorate there with a dissertation on quantum mechanics in 2002. He subsequently became a fellow of the Austrian Academy of Sciences, researching at Stanford University's Stanford Linear Accelerator Center, Menlo Park in California. He is currently working at the Max Planck Institute of Quantum Optics in Garching.
Marga Cornelia Lensen
Macromolecular Chemistry
Turning to nature: made-to-measure hydrogels for medical systems
If the first thing you associate with a happy baby is a dry nappy, it probably does not occur to you that both the parents and the baby actually have the blessings of biomaterial research to thank for this satisfactory state of affairs. The reason for this is that nappies and other hygiene products for absorbing moisture contain the magic anti-moisture ingredients known as hydrogels. These are three-dimensional polymer networks which can store many times their own weight in water and release it again. Humans have copied this principle from nature, where hydrogels proliferate, in plants for instance. But hydrogels have much greater potential than this, for example in bioresearch or medicine. They might release doses of drugs in the body or act as sensors. They might also be used as artificial muscles or to bond natural tissue with artificial implants. This would require gels with properties made to measure through utilising nanotechnology. To lay the foundations for this, Marga Cornelia Lensen is investigating ways of changing the structure of the gels and how they interact with cells. Among other things, she is going to use novel nanoimprint technology, which has so far largely been tested on hard materials, to structure hydrogels and employ them as carriers for experiments on living cells.
Host Institute: RWTH Aachen, German Wool Research Institute
Host: Prof. Dr. Martin Möller
• Dr. Marga Cornelia Lensen,
born in the Netherlands in 1977, studied chemistry at Wageningen University and at Radboud University Nijmegen, where she took her doctorate in 2005. She has been working at her host institute at RWTH Aachen as a Humboldt Research Fellow since October 2005 and will continue her research there as a Kovalevskaja Award Winner.
Martin Lövden
Developmental Psychology
Tracking down the secret of life-long learning
In our aging societies in Europe, the idea of life-long learning has gained a special relevance. But although the learning ability of young brains is considerable and has been well researched, there are not many studies on the reasons for the deterioration of learning ability in old age and how to deal with it. Martin Lövden is investigating the neurochemical, neuroanatomical and neurofunctional conditions for successful learning in old age and the consequences for everyday life. To this end, he uses neuroimaging methods, such as functional magnetic resonance imaging and magnetic resonance spectroscopy, by which he can observe the brains of old and young test subjects during memory training in order to track down the neurological secret of successful learning and its limitations in old age.
Host Institute: Max Planck Institute for Human Development, Research Area Lifespan Psychology, Berlin
Host: Prof. Dr. Ulman Lindenberger
• Dr. Martin Lövden,
born in Sweden in 1972, studied psychology at Salzburg University in Austria and at the universities of Lund and Stockholm in Sweden as well as neuroscience at the Karolinska Institute in Stockholm. He was awarded his doctorate at Stockholm University in 2002. He continued his research at the Saarland University in Saarbrücken and is currently working at the Max Planck Institute for Human Development in Berlin.
Thomas Misgeld
Nerve fibres: the brain's fast wire
In the nervous system information is transported in the form of electrical impulses. To this end, every nerve cell has an appendage whose function is similar to that of a telephone cable - the nerve fibre, also called the axon. Axons run through the brain and the spinal cord to the switch points at the nerve roots and have a certain capacity for learning: they are able to adapt to new requirements. Not a lot is known about how this adaptation functions and how axons protect themselves against damage. Thomas Misgeld is therefore investigating the axons of living mice using high resolution microscopy. He wants to discover how nerve fibres are nourished and adapted, and how they maintain their efficiency in a healthy organism. This basic information could lead to the development of new therapies for diseases like multiple sclerosis or for spinal cord injuries.
Host Institute: Munich Technical University, Institute of Neurosciences
Host: Prof. Dr. Arthur Konnerth
• Dr. Thomas Misgeld,
born in Germany in 1971, studied medicine at Munich Technical University where he completed his doctorate in 1999. He continued his research in the department of clinical neuroimmunology at the Max Planck Institute of Neurobiology in Martinsried and at Washington University in St. Louis. His most recent position was at Harvard University in Cambridge. In 2005, he was granted the first ever Wyeth Multiple Sclerosis Junior Research Award and the Robert Feulgen Prize by the Society for Histochemistry.
Benjamin Schlein
Mathematical Physics
Seeking evidence in the quantum world
In the first half of the 20th century, when physicists observed that light interacting with matter revealed new properties, classical physics reached its limits. It was the birth of quantum mechanics, whose principles are part of common knowledge in physics nowadays, such as the fact that material particles exhibit wave properties, just like light. This is a principle used in modern electron microscopes. One of the main pillars of quantum mechanics is the Schrödinger equation which, to this day, has been very successful in predicting experiments. But when it comes to examining macroscopic systems - i.e. systems composed of multitudes of the tiniest particles - the amount of data is so enormous that even the most modern computers are not powerful enough to solve the Schrödinger equation. Benjamin Schlein is trying to develop mathematical methods which will make it possible to derive simpler equations to describe the dynamics of macroscopic systems. He wants to create a solid mathematical basis on which to assess and develop further applications in quantum mechanics.
Host Institute: Munich University, Institute of Mathematics
Host: Prof. Dr. Laszlo Erdös
• Dr. Benjamin Schlein,
born in Switzerland in 1975, studied theoretical physics at the Swiss Federal Institute of Technology (ETH) in Zürich and completed his doctorate there with a dissertation on mathematical physics in 2002. He subsequently continued his research in the United States, at New York University, Stanford University, Harvard University and the University of California, Davis.
Taolei Sun
Medical Biochemistry
Novel biocompatible materials for medical systems
"Surfaces are a creation of the devil", the famous physicist and Nobel Prize Winner, Wolfgang Pauli, once remarked when he realised how much more complex the surfaces of materials were than their massive substance. Many technical, indeed everyday applications depend on the properties of material surfaces and their interactions, which is especially important in the biomedical fields. Just consider the surfaces of artificial joints and other implants, or artificial access to the human bloodstream in intensive medicine or cancer treatment. All of them have to get along really well with the surfaces of human tissue or human cells. Taolei Sun is working on biocompatible, artificial implants and medical devices, combining modern nanotechnology with chemical surface modification. His aim is to use nanostructured polymeric surfaces with special wettability as a platform for the emergence of a new generation of biocompatible materials.
Host Institute: Münster University, Institute of Physics
Host: Prof. Dr. Harald Fuchs
• Dr. Taolei Sun,
born in China in 1974, studied at Wuhan University and at the Technical Institute of Physics and Chemistry in the Chinese Academy of Sciences in Beijing, where he took his doctorate in 2002, and then continued his research. He subsequently worked at the National Center for Nanosciences and Technology of China in Beijing before becoming a Humboldt Research Fellow in the Institute of Physics at Münster University where he will now carry out research as a Kovalevskaja Award Winner.
Kristina Güroff
Press, Communications
and Marketing
Tel.: +49 228 833-144/257
Fax: +49 228 833-441
Georg Scholl
Head of Press,
Communications and Marketing
Tel.: +49 228 833-258
Fax: +49 228 833-441
Thursday, 28 February 2013
Basic Evidence of Low Emissivity of CO2
A basic estimate of the radiative forcing of CO2 as atmospheric trace gas from its main resonance at wave number 667, can be obtained as follows from Planck's Law in the form
• R(n,T) = gamma T n^2 for n < 4T,
where n is wave number, T ~ 300 K is temperature, R(n,T) is radiation per unit of wave number and gamma is a constant. The total outgoing long wave radiation OLR from the Earth surface at temperature T, assuming the atmosphere to be transparent, is then equal to the integral of R(n,T) over 0 < n < 4T:
• OLR = gamma * 64/3 T^4 .
Adding the trace gas CO2 will block radiation in an interval around 667 ~ 2T of width 1 (motivated on Computational Blackbody Radiation as a basic phenomenon of near-resonance), with a blocking effect B given by
• B = gamma * 4 T^3 .
The emissivity of an atmosphere with CO2 as trace gas can then be estimated as the relative blocking effect B/OLR = 12/(64T) ~ 0.0006 < 0.001 of the main resonance at 667.
This is similar to the estimate 0.002 presented in a previous post, with the doubling resulting from the weaker spectral lines away from the main resonance.
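For concreteness, the estimate can be checked numerically; a minimal Python sketch, assuming T = 300 K, a blocked interval of width 1 taken at n = 2T, and gamma normalized to 1 (it cancels in the ratio):

```python
# Minimal numerical check of the emissivity estimate B/OLR.
# Assumptions: T = 300 K, cut-off at n = 4T, blocked interval of
# width 1 at n = 2T, gamma normalized to 1 (it cancels in the ratio).
T = 300.0
gamma = 1.0

def R(n, T):
    # Planck's law in the form R(n,T) = gamma*T*n^2 for n < 4T
    return gamma * T * n**2

OLR = gamma * (64.0 / 3.0) * T**4   # integral of R(n,T) over 0 < n < 4T
B = R(2 * T, T) * 1.0               # blocking of an interval of width 1

print(f"emissivity estimate B/OLR = {B / OLR:.5f}")  # 12/(64T) ~ 0.0006
```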
Notice that the blocking effect of the main resonance of CO2 is in principle independent of concentration, which can be seen as an extreme form of logarithmic dependence with full effect at saturation already for small concentration.
The radiative forcing corresponding to an emissivity of 0.002 will be smaller than 0.5 W/m2 (0.2% of a total of about 200 W/m2), which is a factor of 10 smaller than the 3.7 W/m2 serving as the basis of CO2 alarmism, as predicted by Modtran.
With the emissivity from the main resonance at 667 very small, the 3.7 W/m2 must result from the Modtran models of line broadening for spectral lines on the "shoulders" of the spectrum away from 667. CO2 alarmism thus critically depends on theoretical models of a phenomenon, which is so subtle that experimental evidence appears to be impossible.
Basis of CO2 Alarmism = Modtran = 0
As CO2 global warming alarmism is losing momentum in the absence of any warming for 15 years, with politicians turning to other noble causes, it may now be possible to question the very basis of this movement, which has threatened to throw humanity back to the Stone Age by tough regulations to "decarbonize" society.
The scientific evidence of the warming effect of the trace gas CO2 consists of theoretical predictions of the "radiative forcing" effect using models for radiative transfer such as Modtran based on spectral data from the data base Hitran. A version of Modtran can be run on the web, which makes it possible to test its performance, as we did in a previous post.
Modtran gives a "radiative forcing" of 3.7 W/m2 upon doubling of atmospheric CO2 from 300 ppm to 600 ppm. This serves as the starting point of CO2 global warming alarmism by giving the trace gas CO2 a substantial warming effect as a powerful "greenhouse gas" GHG. Experimental evidence of this effect is lacking. Without the 3.7 W/m2 produced by Modtran, there would be no IPCC and no CO2 alarmism.
Can we then trust the Modtran prediction of radiative forcing of 3.7 W/m2?
Well, Modtran is, as a radiative transfer model, a very simplistic model of the complex atmosphere. Modtran is further supposed to model the effect of a very small cause, since CO2 is an atmospheric trace gas, and capturing a small cause requires high accuracy. The scientific warning signs are thus blinking for Modtran.
Let us here check how Modtran reacts to very low concentrations of CO2 from 0.001 ppm to 1 ppm as an extension of previous posts. We get for a standard atmosphere with CO2 as the only greenhouse gas present, the following total outgoing long wave radiation OLR in W/m2 for different ppm of CO2:
• OLR = 397.524 for 0 (ppm CO2)
• OLR = 397.524 for 0.001
• OLR = 397.21 for 0.01
• OLR = 396.582 for 0.05
• OLR = 395.64 for 0.1
• OLR = 392.814 for 0.5
• OLR = 390.616 for 1 (ppm CO2).
We see a radiative forcing of 0.3 W/m2 from 0 to 0.01 ppm, 2 W/m2 from 0 to 0.1 ppm and 7 W/m2 from 0 to 1 ppm. This is a substantial effect from a cause as small as one part in 100 million. It is hard to believe that this effect can be viewed as a scientifically evidenced real effect.
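The quoted forcings follow by simple subtraction from the listed OLR values; a small sketch of the arithmetic, with the values copied from the list above:

```python
# Radiative forcing relative to 0 ppm CO2, from the Modtran OLR
# values (W/m2) listed above.
olr = {0: 397.524, 0.001: 397.524, 0.01: 397.21, 0.05: 396.582,
       0.1: 395.64, 0.5: 392.814, 1: 390.616}

for ppm, value in olr.items():
    print(f"{ppm:>6} ppm: forcing {olr[0] - value:.2f} W/m2")
# 0.01 ppm -> 0.31, 0.1 ppm -> 1.88, 1 ppm -> 6.91 W/m2
```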
This test gives yet another reason to question Modtran as the basic scientific evidence of a global warming effect of CO2, for both climate alarmists and skeptics who have accepted Modtran as truth.
If climate skeptics dared to take the step of questioning Modtran, that could very well be the final nail in the coffin of IPCC.
Tuesday, 26 February 2013
IR-Photons as Optical Phonons as Waves
In climate science it is common to view radiative heat transfer as a two-way flow of IR-photon particles carrying lumps of energy back and forth between e.g. the Earth surface and the atmosphere.
This view lacks physical rationale because it includes heat transfer by IR-photons not only from warm to cold, but also from cold to warm, in violation of the 2nd Law of Thermodynamics. The usual way to handle this contradiction is to say that the net transfer is from warm to cold, and so there is no violation of the 2nd Law. But this requires the two-way transfer to be connected, which conflicts with the idea of an independent two-way transfer.
On Computational Blackbody Radiation I present a model of radiative heat transfer which is based on a wave equation for a collection of oscillators with small damping subject to periodic forcing, solved by finite precision computation. Fourier analysis shows that the oscillators in resonance take on a periodic motion which is out-of-phase with the forcing, which connects to optical phonons as wave motion in an elastic lattice with large amplitude (as compared to acoustical phonons with smaller amplitude).
Optical phonons typically occur in a lattice composed of two atoms of different mass, one big and one small, which connects to the radiation wave model with small damping.
We thus find reason to view IR-photons as a wave phenomenon similar to optical phonons, rather than as "particles".
The radiation wave model includes two-way propagation of waves but only one-way transfer of heat energy, as an effect of the cut-off of high frequencies due to finite precision computation.
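A minimal sketch of the near-resonance behaviour for a single damped oscillator under periodic forcing; the parameter values are chosen ad hoc for illustration and are not taken from the model on Computational Blackbody Radiation:

```python
import math

# Damped oscillator x'' + d*x' + w0^2*x = cos(w*t) forced at resonance
# (w = w0) with small damping d. The steady state is x = sin(w*t)/(d*w):
# the displacement is 90 degrees out of phase with the forcing, as for
# optical phonons.
w0, d, w = 1.0, 0.05, 1.0
dt, t_end = 0.01, 400.0   # long enough for the transient exp(-d*t/2) to die
x, v, amp = 0.0, 0.0, 0.0
for k in range(int(t_end / dt)):
    t = k * dt
    v += (math.cos(w * t) - d * v - w0**2 * x) * dt
    x += v * dt
    if t > 0.9 * t_end:   # record the steady-state amplitude
        amp = max(amp, abs(x))

print(f"measured amplitude {amp:.1f}, predicted 1/(d*w) = {1/(d*w):.1f}")
```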
Monday, 25 February 2013
Difference between Climate Skeptics and Deniers
CO2 Radiative Forcing by Modtran/Hitran??
The warming effect of atmospheric CO2 as a "greenhouse gas" is evidenced by the atmospheric radiative transfer computer code Modtran based on the high-resolution transmission molecular absorption data base Hitran, while direct observational evidence is lacking. CO2 alarmism is thus based on Modtran/Hitran.
In a previous post I noticed that Modtran assigns 1 ppm of CO2 a radiative forcing or warming effect of 6 W/m2. A very big effect from a very small cause! It is indeed very difficult to believe that one CO2 molecule per one million O2/N2 molecules can change anything observable. This is like changing a single grain of sand!
Clive Best, in The CO2 GHE Demystified, uses a radiative transfer model similar to Modtran to show a radiative forcing effect upon doubling of CO2 from 300 ppm to 600 ppm of 3.52 W/m2, in close correspondence with the 3.7 W/m2 put forward by IPCC. The radiative forcing is shown to result from an increase of the effective altitude of radiation around wave numbers 600 and 750, which are far out on the "shoulders" of the CO2 spectrum centered at the main resonance 667. This is again a big effect of a small cause, since the spectrum on the shoulders is very sparse. The model further shows that the main emission from the band around 667, without the shoulders, occurs from altitudes of 30-40 km in the very thin stratosphere.
In both cases CO2 is attributed strong absorptivity away from the main resonance at 667, with very sparse spectral lines depending on concentration. Both results are most remarkable as a big effect of a small cause, and as such they call for a thorough investigation of the validity of the underlying radiative transfer model as concerns the effect of an atmospheric trace gas.
Sunday, 24 February 2013
2nd Coming of the 2nd Law
The 2nd Law of Thermodynamics has remained a main mystery of physics ever since it was first formulated by Clausius in 1865 as the non-decrease of entropy, despite major efforts by mathematical physicists to give it a rational, understandable meaning.
The view today, based on the work of Ludwig Boltzmann, is that the 2nd Law is a statistical law expressing a lack of precise human knowledge of microscopic physics, rather than a physical law independent of human observation and measurement. This view prepared the way for the statistical interpretation of quantum mechanics as the basis of modern physics.
Modern physics is thus focussed on human observation of realities, while classical physics concerns realities independent of human observation. To involve the observer in the observed makes physics subjective, which means a departure from objectivity as the essence of physics. A 2nd Law based on statistics thus comes along with many difficulties, which in the end cost Boltzmann his life, and it is natural to seek a formulation in terms of classical physics without statistics.
Such a formulation is given in Computational Thermodynamics based on the Euler equations for an ideal compressible gas solved by finite precision computation. In this formulation the 2nd Law is a consequence of the following equations expressing conservation of kinetic energy K and internal (heat) energy E:
• dK/dt = W - D
• dE/dt = - W + D
• D >= 0,
where W is work and D is nonnegative turbulent dissipation (rates). The crucial element is the turbulent dissipation rate D which is non-negative, and thus signifies one-way transfer of energy from kinetic energy K into heat energy E.
The work W, positive in expansion and negative in compression, allows a two-way transfer between K and E, while the turbulent dissipation D >= 0 can only transfer kinetic energy K into heat energy E, and not the other way.
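A toy integration of these rate equations illustrates the one-way character of D; in the sketch below W and D are ad hoc illustrations, not quantities derived from the Euler equations:

```python
import math

# Toy integration of dK/dt = W - D, dE/dt = -W + D with an oscillatory
# work rate W (two-way exchange) and a nonnegative dissipation rate D
# (one-way transfer K -> E).
dt, t_end = 0.001, 10.0
K, E = 1.0, 1.0
for k in range(int(t_end / dt)):
    t = k * dt
    W = math.sin(2 * math.pi * t)   # work: alternately positive and negative
    D = 0.05 * K                    # turbulent dissipation: D >= 0
    K += (W - D) * dt
    E += (-W + D) * dt

print(f"K = {K:.3f}, E = {E:.3f}, K + E = {K + E:.3f}")
# K + E stays at 2.0 while D steadily pumps energy from K into E.
```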
We compare dE/dt = - W + D, rewritten as dE/dt + W = D, as an alternative formulation of the 2nd Law, with the classical formulation found in books on thermodynamics:
• dE + pdV = TdS = dQ
• dS >= 0,
where p is pressure, V is volume (with pdV corresponding to W), T is temperature, S is entropy and dQ is added heat energy.
We see that D >= 0 expresses the same relation as dS >= 0 since T > 0, and thus the alternative formulation expresses the same effective physics as the classical formulation.
The advantage of the alternative formulation is that turbulent dissipation rate D with D >= 0 has a direct physical meaning, while the physical meaning of S and dS >= 0 has remained a mystery.
The alternative formulation thus gives a formulation in terms of physical quantities without any need to introduce a mysterious concept of entropy, which cannot decrease for some mysterious reason. A main mystery of science can thus be put into the wardrobe of mysteries without solution and meaning, together with phlogistons.
Notice the connection to Computational Blackbody Radiation with an alternative proof of Planck's radiation law with again statistics replaced by finite precision computation.
For a recent expression of the confusion and mystery of the 2nd Law, see Ludwig Boltzmann: a birthday by Lubos.
PS1 The reason to define S by the relation dE + pdV = TdS is that for an ideal gas with pV = RT this makes dS = dE/T + pdV/T an exact differential, thus defining S in terms of T and p. The trouble with S thus defined is that it lacks direct physical meaning.
PS2 Lubos refers to Bohr's view of physics, that physics concerns not how nature is but what we can say about nature:
This idea has ruined modern physics by encouraging a postmodern form of medieval mysticism away from rational objectivity as the essence of science, where the physical world is reduced to a phantasm in the mind of an observer busy counting statistics of non-physical micro states.
PS3 Recall that statistics was introduced by Boltzmann to give a mathematical proof of the 2nd Law, which appeared to be impossible using reversible Newtonian micromechanics, followed by Planck to prove his law of radiation, followed by Born to give the multidimensional Schrödinger equation an interpretation. But this was overkill. It is possible to prove a 2nd Law and a law of radiation using, instead of full statistics, a concept of finite precision computation, as shown in Computational Thermodynamics and Computational Blackbody Radiation, which maintains the rationalism and objectivity of classical mechanics while avoiding the devastating trap of reversible micromechanics.
Saturday, 23 February 2013
Dysfunctional Peer Review of New Science?
The scholarly peer review system may be functional for normal science, or puzzle-solving routine science in the sense of Kuhn, but it is not well suited to handle non-normal new science challenging an existing paradigm. This is because any new idea poses a threat to existing normal science and as such often meets overly negative reviews by referees without sufficient knowledge of the novelty. Correct new science may thus get rejected without good reasons, but it is also possible that incorrect new science gets accepted by uncritical referees.
Further, incorrect normal science may be perpetuated by the peer review system, because incorrect normal science can only be questioned by new science.
In short, the peer review system is not suitable to handle new science, because either (i) good articles are rejected on bad grounds, or (ii) bad articles are accepted without good grounds.
An example of new science is given by the article New Theory of Flight presented on The Secret of Flight. The article was rejected by AIAA Journal and is now under review by Journal of Mathematical Fluid Mechanics JMFM.
JMFM has a difficult case to handle: referees from the normal science of fluid mechanics are not eager to touch the article, and if they do, the review will be negative because the existing paradigm is challenged. On the other hand, referees from outside the fluid mechanics community under AIAA may not be able to give a credible review.
The normal science of flight is an example of an incorrect theory, formulated 100 years ago, which has survived as normal science in the absence of a correct theory, carried by the peer review system and AIAA.
One option in such a case would seem to be to publish the article without peer review and then open to discussion with participation from normal science.
PS The peer review system has been eroded in particular by IPCC, using peers to uncritically promote publication of articles supporting IPCC's climate alarmism and to selectively stop publication of articles not supporting this message.
Friday, 22 February 2013
IR Photons as Phlogistons
A photon is, as an elementary particle, the carrier of the electromagnetic force.
A phonon is, as a collective elastic excitation in a lattice of atoms or molecules, the carrier of sound, and is referred to as a "quasiparticle".
A phonon is a collective sound wave while a photon is a "light particle". In a previous post I considered an acoustic model of radiative heat transfer between the Earth surface and the atmosphere and outer space, in the form of a string instrument with energy transfer from string to soundboard to surrounding air.
It is common to describe infrared radiative heat transfer between two bodies as a two-way flow of IR photon particles carrying "energy quanta" back and forth between the bodies. I have argued that this view is non-physical in the sense that energy is supposed to be carried not only from warm to cold, but also from cold to warm which is in violation of the 2nd law of thermodynamics.
To understand that the particle view is non-physical, it is illuminating to consider a model of the string instrument where the concept of phonon wave is replaced by "phonon particle" as an acoustic counterpart to a photon particle. A "phonon particle" would thus be a form of elementary particle as "sound particle" and "carrier of sound (force)".
We would then view the sound produced by the string instrument as consisting of a two-way flow of phonons between string and soundboard and between soundboard and surrounding air. In this model the sound of the surrounding air would send phonons to the soundboard, which would send phonons to the string. This would be in conflict with our experience, reflecting the 2nd law, that it is the string which makes the soundboard vibrate, which in turn makes the sound wave in the air.
We understand that a phonon particle model of a string instrument is non-physical, as a violation of the 2nd law, and thus misleading.
In the same way an IR photon particle model of infrared radiative heat transfer is non-physical, as a violation of the 2nd law, and thus misleading. Yet this model underlies the idea of "backradiation" from the cold atmosphere to the warmer Earth surface, which is a central part of CO2 alarmism.
Such a photon theory, postulating heat transfer by photon particles without mass, charge, color, odor or taste, can be compared with the phlogiston theory postulating that all flammable materials contain phlogiston, a substance without color, odor, taste, or weight that is given off in burning.
Notice that because of the long wavelength of infrared radiation, an IR-photon is similar to a phonon and thus is better described as a collective wave phenomenon than as a discrete particle. Compare with the previous post on the subject.
PS There is a connection between optical phonons as large amplitude out-of-phase wave vibration of a lattice of two different atoms with different mass (as compared to acoustic small amplitude in-phase vibration), and the analysis of blackbody radiation on Computational Blackbody Radiation with incoming and outgoing radiation out-of-phase (also characteristic of a string instrument designed to give large amplitude output).
Thursday, 21 February 2013
Summary of Big Bluff of Warming Effect of CO2
CO2 alarmism fostered by IPCC is based on a proclaimed "heat trapping" or "radiation blocking" effect of CO2 as atmospheric trace gas, causing a "radiative forcing" of 3.7 W/m2 upon doubling of the preindustrial concentration to 600 ppm from today's 390 ppm.
The evidence of substantial "radiative forcing" of CO2 presented by IPCC consists of spectra of outgoing longwave radiation OLR and downwelling longwave radiation DLR produced by instruments specially designed to measure DLR and OLR.
In a recent sequence of posts on DLR and OLR as parts of Big Bluff, I have given evidence that
• DLR-measurement is based on a formula without physical reality.
• OLR-measurement shows non-physical strong emissivity of CO2 in the 600 - 800 band.
The main ground for CO2 alarmism, a proclaimed warming effect of CO2 supposedly supported by measurements of DLR and OLR, thus evaporates upon inspection into fabricated evidence without real physics, only fictional invented physics.
In this fabrication governmental institutions use instruments and software designed by commercial companies according to governmental specifications, to fabricate instrumental evidence of non-existing physics, such as DLR. The scale of this scientific fraud is unprecedented, with the DLR-Pyrgeometer Formula as a glaring example of unphysical fabricated physics.
Of course you may argue that because the fraud is unprecedented, it simply cannot be true that it is fabricated evidence, and thus it must be correct science. But this is not a scientific argument: The fact that the scale of the (possibly) fabricated evidence is beyond comparison, does not make it true.
String Instrument as Model of Blackbody Radiation 2
The analogy between instrument and atmosphere:
• soundboard = radiating gas
• forcing of the soundboard by the vibrating string through the bridges = incoming radiation
• sound waves generated by the vibrating soundboard = outgoing radiation.
The sequence of events:
1. the string is plucked into vibration
2. the string force is transmitted to the soundboard through the bridges
3. the soundboard vibrates in resonance with the string vibration.
The soundboard is passive when absorbing the string forcing (= incoming radiation) and active when generating sound waves (= outgoing radiation): the vibrating soundboard generates sound waves, as the atmosphere radiates to outer space.
Wednesday, 20 February 2013
Illuminating Model of Global Energy Balance
The radiative heat transfer from the Earth surface to outer space via the atmosphere can, for resonant frequencies of the atmosphere, be illustrated by a simple water flow model for two connected containers, with container 2 representing the Earth surface with water level H_2 representing temperature, and container 1 representing the atmosphere with level/temperature H_1 < H_2 and an outlet representing outer space.
If the channel connecting 2 with 1 has the same dimension as the outlet of 1 and the channel flow Q is proportional to level difference, we have by conservation of water, normalizing the constant of proportionality to one:
• Q = H_2 - H_1
• Q = H_1
and thus Q = 0.5 x H_2. We compare with the situation of container 2 pouring directly into outer space (no atmosphere), which would give the double outlet flow 2Q = H_2. This would correspond to non-resonant frequencies for which the atmosphere is transparent.
Introducing 1 (atmosphere) between 2 (Earth surface) and outer space thus reduces the flow by a factor of 2, the reduction coming from requiring the water to pass two channels instead of one.
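The stationary balance of the model is a two-line computation; a minimal sketch in normalized units:

```python
# Two-container water flow model in normalized units: container 2
# (Earth surface) at level H2, container 1 (atmosphere) at level H1,
# flows proportional to level differences.
H2 = 1.0
H1 = H2 / 2.0             # stationarity: H2 - H1 = H1
Q_with_atmosphere = H1    # = 0.5 * H2
Q_direct = H2             # container 2 pouring directly into outer space

print(f"Q with atmosphere = {Q_with_atmosphere}, Q direct = {Q_direct}")
# The intermediate container halves the outflow at resonant frequencies.
```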
The model exhibits the following fundamental aspects of heat transfer in the Earth-atmosphere system:
• One water flow from high (2) to low level (1): One-way heat transfer from warm Earth surface to colder atmosphere: No "back radiation".
• 1 as passive mediator between 2 and outer space reduces the flow: Decrease of outgoing long wave radiation OLR for resonant frequencies of the atmosphere. No decrease for non-resonant frequencies.
• 1 is a passive mediator in the sense that whatever it absorbs from 2 is emitted into outer space.
The total reduction of OLR or "radiative forcing" caused by radiation through the atmosphere, is then determined by the denseness of the resonant frequencies of the atmosphere. The trace gas CO2 has a very sparse spectrum which gives small "radiative forcing" (small emissivity), as shown in previous posts.
1. Warming effect of the atmosphere, acting as a passive intermediate "blanket" between the Earth surface and pure space, for resonant frequencies.
2. Non-warming effect for non-resonant frequencies.
3. Total warming effect dependent on denseness of resonant frequencies.
4. One-way flow of heat energy from warm to cold.
5. Not two-way flow of heat energy carried by "photons" traveling back and forth.
For a new approach to radiative heat transfer connecting to this post, see Computational Blackbody Radiation.
CO2 alarmism is based on the hypothesis that the spectrum of the atmosphere with CO2 as a trace gas is dense in the entire wave number band 600 - 800, with a suggested "radiative forcing" of 3.7 W/m2 upon doubling of the concentration from preindustrial level to 600 ppm.
But the spectrum of CO2 as atmospheric trace gas is not dense but very sparse, except in the narrow band 667 - 669, and thus the basis of the "radiative forcing" of 3.7 W/m2 appears to be grossly incorrect, probably a factor of 10 too big. Without this "radiative forcing" of 3.7 W/m2, CO2 alarmism collapses.
Tuesday, 19 February 2013
Radiation of Solid vs Gas
A solid like a glowing lump of iron shows a continuous radiation emission spectrum in accordance with Planck's radiation law, while a gas shows an emission/absorption line spectrum with resonances at specific wavelengths. What then is the difference between a solid and a gas, which generates these different spectra?
The analysis presented on Computational Blackbody Radiation suggests the following answer: a solid can be modeled as a continuous web or string of atoms which by collective vibration can generate a full sequence of harmonics with frequencies n ranging over the natural numbers n = 1, 2, 3, ..., with higher frequencies like 10000, 10001, 10002, ... practically generating a continuum.
The acoustic analog is a vibrating guitar string capable of generating all harmonics corresponding to n =1, 2, 3, ..., because it can macroscopically be viewed as a continuum governed by a wave equation over a continuum of real numbers.
In this perspective a gas would instead be modeled as a finite collection of oscillators, each oscillator with a specific resonance frequency, thus with a discrete line spectrum. The coupling between molecules in a solid, allowing collective coordinated vibration generating a continuous spectrum, would thus be missing in a gas, with the effect that the gas spectrum is restricted to a discrete set of molecular resonances.
In short: The strong coupling of atoms in a solid allows collective coordinated vibration over a continuum of resonances, while the free flying atoms of a gas can only sustain discrete atomic or molecular resonances. In general the total emissivity of a solid is big and of a gas small.
For perspective, recall in particular the previous post on radiation and radiative heat transfer as a resonance phenomenon rather than an exchange of energy-carrying photons.
Low Emissivity of Atmospheric CO2: Hottel and Leckner
According to measurements by Hottel and Leckner the total emissivity of the Earth's atmosphere with its present concentration of 0.039% CO2 (without water vapor), can be estimated to be smaller than 0.002.
This means that out of the total outgoing long wave radiation of 240 W/m2, less than 0.5 W/m2 can be attributed to CO2.
This is almost a factor of 10 smaller than the "radiative forcing" of 3.7 W/m2 from doubling to 0.06% from 0.03%, which is the basis of CO2 alarmism.
This is in direct contradiction to the prediction by Modtran in the previous post of 6 W/m2 from 0.0001% CO2.
The low emissivity of low concentration CO2 reflects the extreme sparseness of its emission spectrum.
PS. More of the same here.
Monday, 18 February 2013
Modtran: High Emissivity of 1 ppm CO2!
The uchicago Modtran solver produces the following outgoing long wave radiation OLR spectra for a dry atmosphere with varying concentrations of CO2: 0 ppm, 1 ppm, 400 ppm and 600 ppm.
We see an effect of reducing OLR from 379 W/m2 to 373 W/m2 by adding 1 ppm CO2 to a carbon free atmosphere: Thus a warming effect of 6 W/m2 by adding 0.0001% CO2 to a dry atmosphere!!
We see a warming effect of about 2 W/m2 from increasing CO2 to 600 ppm from the present 400 ppm.
We see that the effect of 6 W/m2 comes from the ditch in the spectrum centered at the main resonance of CO2 at wave number 667, which develops by adding just 1 ppm of CO2!
We thus see a very big effect from a very small cause, which directly triggers sound scientific skepticism: if one grain of salt can change the world, then either 10 grains will end it, or the effect quickly saturates, maybe to no effect from adding more. Since the first option is absurd, only the second is thinkable, and this is the IPCC logarithmic saturation effect: 6 W/m2 from 0 to 1 ppm, and 2 W/m2 from 400 to 600 ppm.
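For comparison, the logarithmic saturation is commonly expressed by the simplified formula of Myhre et al. (1998), dF = 5.35 ln(C/C0) W/m2, which reproduces the quoted 3.7 W/m2 for a doubling but diverges as C0 tends to 0, so it cannot describe the step from 0 to 1 ppm at all; a minimal sketch:

```python
import math

# Simplified logarithmic forcing formula (Myhre et al. 1998):
# dF = 5.35 * ln(C/C0) W/m2. It reproduces ~3.7 W/m2 for a doubling
# but diverges as C0 -> 0, so the 0 -> 1 ppm step is outside its scope.
def forcing(C, C0):
    return 5.35 * math.log(C / C0)

print(f"300 -> 600 ppm: {forcing(600, 300):.2f} W/m2")  # ~3.71
print(f"400 -> 600 ppm: {forcing(600, 400):.2f} W/m2")  # ~2.17
```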
We compare with the absorption spectrum for 1000 m of 1 ppm CO2 computed by SpectralCalc, showing extreme sparsity of the absorption away from the narrow interval 667-669.
We thus find good reason to question the spectra produced by Modtran which serve as the main scientific evidence of a warming effect of CO2.
Sunday, 17 February 2013
Low Emissivity of Atmospheric CO2
Recent posts suggest a low emissivity of atmospheric CO2 away from its main resonance at wave number 667. To check, let us compute the transmittance of a transparent atmosphere at 225 K after adding 400 ppm CO2 over a distance of 100 m at a pressure of 200 mb, using the free version of the commercial software SpectralCalc, to get a spectrum with a blowup around 667.
We see that the transmittance is zero in the narrow interval 664 - 668, where 400 ppm CO2 makes the atmosphere opaque, while outside this interval the transmittance is low only in a small portion of the spectrum. In other words, the total emissivity of 400 ppm CO2 is very small which means that CO2 has a very limited capability to "block radiation" from the Earth surface, thus contradicting typical OLR spectra with full blocking in the interval 600 - 800.
Further, changing to 600 ppm gives almost the same transmittance spectrum, signaling small radiative forcing from doubling of the preindustrial concentration of CO2.
PS Further evidence is given by Ed Caryl.
String Instrument as Model of Blackbody Radiation
A string instrument like a guitar or piano offers a conceptual model of blackbody radiation which can help to remove the mystery surrounding this phenomenon. The sound of a string instrument is generated by plucking strings in contact through bridges with a soundboard which generates sound waves in the surrounding air. The basic mathematical model takes the form:
• wave equation for soundboard + acoustic damping force = string force,
where the acoustic damping force models the sound force output from the instrument and the string force is the force on the soundboard transmitted from a plucked string through bridges.
The analysis presented on Computational Blackbody Radiation shows the following fundamental relation as a consequence of resonance between sound board and string:
• output sound energy = string energy
which is to be compared with a case of non-resonance:
• output sound energy << string energy.
We see that a soundboard in resonance with a string transmits the full string plucking energy into output sound energy, while in the case of non-resonance only a small fraction is transmitted; see the PS below for some more details.
In blackbody radiation this phenomenon comes out as high emissivity in the case of resonance and low emissivity in the case of non-resonance.
For example, CO2 has a main resonance at wave number 667, which gives high emissivity for wave numbers close to 667 independent of concentration, but low emissivity away from 667.
CO2 alarmism is based on high emissivity of atmospheric CO2 in the whole wave number band 600 - 800, which however most likely is an incorrect assumption.
PS The analysis on Computational Blackbody Radiation exhibits a phenomenon of near-resonance under small acoustic damping with the string force being in-phase with the soundboard displacement (and thus out-of-phase with the soundboard velocity), as the key to a good instrument with string and soundboard working together to produce a good sound.
Saturday, 16 February 2013
Does the Mattekommission Need KTH's President?
Investor CEO Börje Ekholm, Microsoft CEO Per Adolfsson, Tobias Krantz, head of education at Svenskt Näringsliv, and KTH President Peter Gudmundson, among others, announce on Brännpunkt SvD 15/2 that Sweden needs the Mattekommission:
• We believe that Swedish pupils' mathematical proficiency is a decisive question for Sweden's growth and for our future prospects of becoming a world-class knowledge nation. We are therefore launching the Mattekommission - a broad collaborative initiative with eleven representatives from education, research and industry, whose goal is to raise all pupils' proficiency in, and interest in, mathematics.
• Through collaboration, advocacy and concrete activities, the Mattekommission wants to achieve the following three main goals:
• Strengthen Swedish pupils' proficiency in mathematics and increase interest in mathematics-intensive programmes, so that Sweden can live up to the EU agreement on increased admission to science and engineering studies.
• Raise the minimum level of Swedish pupils in mathematics; failing to meet the basic knowledge requirements in mathematics limits the individual's opportunities in working life and private life.
• Strengthen the top tier of knowledge so that high-performing pupils get the opportunity to develop their interest in mathematics and science.
• The Government's investment in Matematiklyftet is not enough to remedy the great challenges we face. Much remains to be done to reach an acceptable level as regards teacher education, opportunities for further training, and the teaching of mathematics, so that pupils get a more creative, contextualised and hands-on learning environment.
This is the same KTH President who in the autumn of 2010, by means of a media campaign, trashed Mathematical Simulation Technology MST in the middle of an ongoing test course at KTH, described as KTH-gate, and thereby stopped all attempts to reform a mathematics education, at KTH and in Sweden, that has stiffened into unfruitful forms.
This is the same KTH President who in 2010 issued a total ban on using MST at KTH, because MST offered "pupils a more creative, contextualised and hands-on learning environment", which threatened the status quo, and who repeated this papal bull in the autumn of 2012.
Students and industry need a modern, reformed mathematics education, but KTH delivers an outdated education and works against reform. KTH acts as a brake on the renewal of the content and form of mathematics teaching that would be possible if mathematics were coupled with IT, and that would benefit Sweden.
So what does KTH's President have to contribute to the Mattekommission?
Friday, 15 February 2013
Fabricated Evidence of GHE from CO2
The blogosphere offers many attempts to explain the CO2 greenhouse effect GHE, since it is not well explained in the scientific literature and it is the basis of CO2 alarmism. We find on Barrett Bellamy Climate, in The GH effect of CO2, the following outgoing long wave emission OLR spectra with varying concentrations of CO2 as a trace gas (from 0 to 1000 ppm, with 390 the present level), computed by Modtran:
Figure 2A: Portions of the emission spectra of the atmosphere with varying concentrations of CO2 (in ppmv as indicated in each portion). The portions of Planck curves are for comparative temperatures; from the top downwards the curves are appropriate for the temperatures 300 K, 280 K, 260 K, 240 K and 220 K. The horizontal axis gives the wave numbers in cm-1.
We see the ditch around the main resonance at 667 widening as the concentration of CO2 increases. We see that the effect of going from the present 390 to a doubling of the preindustrial level to 600 is barely noticeable as a slight widening of the ditch. Barrett Bellamy offers the further remarkable information about the total emissions:
• The spectral portions show only the emissions from water vapour and CO2 when it is present. Consider the emission when CO2 is absent and assume that the global mean temperature is 280 K (7°C). The radiance to space is estimated to be 286.2 W/m2, considerably greater than the value required for radiative balance (235 W/m2).
• Adding just 1 ppmv of CO2 produces a noticeable effect and the Q branch of the spectrum is particularly obvious. The estimated radiance to space is 281.7 W/m2, a reduction of 4.5 W/m2. Such an atmosphere would be radiating less energy to space and the system as a whole would be warmer. Even 1 ppmv of CO2 has a warming effect!
We read that even 1 ppm (0.0001%) of CO2 added to a carbon free atmosphere would have a warming effect or "radiative forcing" of 4.5 W/m2. Amazing!
We read that the total effect of the present 390 ppm of CO2 is about 50 W/m2, about 20% of the total forcing from the Sun. Remarkable. (It is stated that Modtran with 390 ppm gives OLR of 258.7 W/m2 to be compared with 235 for radiative balance, suggesting that something is wrong).
Both 4.5 W/m2 for 1 ppm and 50 (or 28) W/m2 for 390 ppm signal a big effect of CO2 as a trace gas, and thus serve as the chief scientific evidence of the existence of a GHE from CO2. Not surprisingly, it is also claimed that doubling to 600 would give a radiative forcing of about 4 W/m2, which fits with the canon defined by IPCC.
But is the evidence credible? Well, the numbers are computed by the commercial software Modtran marketed by Spectral Sciences Incorporated. The numbers are not supported by direct observation. The numbers are surprisingly big and against all physical intuition about the possible effect of a trace gas: 4.5 W/m2 from 1 ppm simply seems impossible!
In a sequence of posts on OLR I have questioned the ability of a small presence of CO2 to block the radiation from the Earth surface in the entire interval 600 - 800 represented by the ditch. The analysis of blackbody radiation presented on Computational Blackbody Radiation suggests that CO2 even as a trace gas can absorb and emit radiation in a narrow band around its main resonance at 667, but that the emissivity is small away from 667. The analysis thus gives mathematical support to the intuitive conviction that 1 ppm of CO2 cannot cause a radiative forcing of 4.5 W/m2.
We are thus led to suspect that Modtran does not give a correct description of atmospheric radiation, and therefore that the main evidence of a GHE from CO2 is fabricated incorrect evidence.
A similar attempt to justify a GHE from CO2 is given on The Science of Doom. It is remarkable that the most serious attempts to prove GHE are those made by amateur bloggers.
It is also remarkable that virtually nobody seems to be willing to question the evidence of GHE supplied by Modtran, as if Modtran cannot be questioned. Maybe it is like Coca Cola, which with its secret recipe cannot be questioned.
Wednesday, 13 February 2013
Mathematics Reform Initialized in Estonia
WSJ reports on reform of mathematics education combining the power of the human brain and the computer:
• Schoolchildren in 30 schools in Estonia may be able to escape this misery, as they will shortly be embarking on a pilot for a new way of learning mathematics, computer-based mathematics, that reduces the emphasis on computation — doing sums — and increases the emphasis on understanding the uses of mathematics in real-world examples (“should I insure my mobile, how long will I live, or what makes a beautiful shape”).
• Estonia has been at the forefront of education reform. Last year it rolled out a trial to teach children from as young as seven robotics and how to program.
This is a signal to launch Mathematical Simulation Technology.
The "Hockey Stick" of the OLR Spectrum
As CO2 global warming alarmism is now losing credibility after 15 years of stationary temperatures and emerging fear of a coming ice age, it becomes possible for the first time to scrutinize the core scientific evidence of the warming effect of CO2, which has been accepted by leading skeptics including Lindzen, Spencer, Singer and WUWT: namely the ditch between wave numbers 600 and 800 in the outgoing long wave radiation OLR spectrum produced by the IRIS and AIRS infrared spectrometers carried by satellites.
It is difficult to question evidence of this form, which bears the sign of hard physics as precise numbers produced by elaborate, expensive instrumentation, because doing so requires knowledge of both the instrument and the processing of directly measured data, both of which can be hidden in difficult technicalities.
Therefore the ditch in the spectrum, interpreted as a warming effect or "radiative forcing" from atmospheric CO2, has served CO2 alarmism as an "undeniable scientific fact" which cannot be questioned, with a warming effect of about 1 C upon doubling of atmospheric CO2. To refer to the spectrum has come to signify a deeper insight, shared by both alarmists and skeptics, hidden from ordinary people not used to reading spectra.
To question this "undeniable scientific fact" makes you into a "denier" destined to dwell on one of the lowest levels of Dante's Purgatorium.
In any case I have done so, in a sequence of posts on OLR, and I have come to the conclusion that the ditch in the spectrum attributed to CO2 is a misrepresentation of reality, or a fabrication of fake evidence, similar to that of the "hockey stick" which started the fall of global warming alarmism.
I hope that skeptics are now ready to question the OLR spectrum as the key evidence of CO2 warming with the same ardor as in the case of the hockey stick. Physicists have a special responsibility because the OLR spectrum is physics and not climate science.
But to be honest, very few seem to be interested in discussing the OLR spectrum, as if it were given once and for all by some superhuman intellect and thus beyond human understanding and scrutiny. But it is fabricated by people like you and me, and since it is the very basis of CO2 alarmism, maybe some day someone will pick up the thread. Since the first "hockey stick" attracted so much attention, maybe this new "hockey stick", if it is one, will deserve some as well.
Anyway, here are the key questions:
• How was the OLR spectrum produced?
• What was directly measured by the sensors, and what was computed in post processing?
• Does the spectrum describe reality with radiation "blocked" by CO2?
These are precise scientific questions which can be answered by using basic physics and mathematics, if only there is an interest in doing so. Alarmists are not interested but skeptics should be.
Maybe Fred Singer will then find reason to reconsider his message:
Singer is convinced that "CO2 certainly is a greenhouse gas", probably because he takes for granted that the OLR spectrum is correct science. Singer is a physicist and should be able upon close inspection to tell if it is or not.
If it is impossible to detect that "CO2 is a greenhouse gas", then it would be incorrect physics to declare that in any case "CO2 is a greenhouse gas". If ghosts cannot be detected, then it is not correct physics to nevertheless declare that "there certainly are ghosts" but they are "so small we cannot detect them". Right Fred?
PS1 WUWT reports:
• Following last night’s State of the Union Address in which the president pledged to implement a job-killing climate change agenda, U.S. Rep. Blaine Luetkemeyer (MO-3) today introduced legislation to prohibit the United States from contributing taxpayer dollars to the United Nations Intergovernmental Panel on Climate Change (IPCC) and the United Nations Framework Convention on Climate Change (UNFCCC).
Apparently, Luetkemeyer has read the OLR spectrum and understood that the science is dubious...
PS2 Fred does not seem to be willing to answer my question about the reality of the OLR spectrum, but I think the question asks for an answer.
Tuesday, 12 February 2013
OLR Spectra Decoded as Fake!?
Consider the following outgoing long wave radiation OLR spectrum delivered by the Atmospheric Infrared Sounder AIRS flying on the Aqua satellite.
The graphs show the brightness temperature as a function of wave number, where the brightness temperature is the temperature of a blackbody with the same radiance at the given wave number as that recorded by the spectrometer (in principle a bolometer) sensor, with in particular a zoom of the wave number interval 645 - 685 containing the main resonance of CO2 at 667.
We see a low brightness temperature of about 220 K and a peak at the main resonance at 667 of 250 K. Both these brightness temperatures are lower than the temperature of 295 K in the atmospheric window between 800 and 1200, as if the sensor has recorded the presence of CO2 both at the tropopause (220 K) and in the middle of the troposphere (250 K).
We understand that a bolometer sensor measures radiance calibrated to blackbody radiance and thus cannot distinguish between low emissivity/high temperature and high emissivity/low temperature. This means that the assignment of brightness temperature is influenced by an unknown emissivity, which explains why the assigned brightness temperature is high at the main resonance 667, for which the emissivity is high, and low at the weak resonances surrounding the main resonance, for which the emissivity is low. But it does not make sense that CO2 radiates from different temperatures at different frequencies, because all frequencies are assumed to have the same temperature.
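The ambiguity can be made concrete by inverting Planck's law for the brightness temperature; a minimal sketch with standard physical constants, where the emissivity value is chosen purely for illustration:

```python
import math

# A bolometer records radiance I = eps * B(nu, T) but reports the
# brightness temperature T_b solving I = B(nu, T_b), so low emissivity
# at high temperature mimics a low temperature.
h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck(nu_cm, T):
    nu = nu_cm * 100.0                    # wave number in m^-1
    return 2 * h * c**2 * nu**3 / math.expm1(h * c * nu / (k * T))

def brightness_T(nu_cm, I):
    nu = nu_cm * 100.0
    return (h * c * nu / k) / math.log(1 + 2 * h * c**2 * nu**3 / I)

nu, T_surface, eps = 667.0, 295.0, 0.2    # eps chosen for illustration
I = eps * planck(nu, T_surface)
print(f"T_b = {brightness_T(nu, I):.0f} K")  # ~199 K although T = 295 K
```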
The spectrum constructed is thus an artificial spectrum reflecting the sensitivity of the bolometer, which may be chosen so that the resonances of CO2 are picked up before the continuous spectrum from the Earth surface and then assigned brightness temperatures according to radiance, as shown above. The weak resonances with low total emissivity of CO2 away from 667 are then assigned a low brightness temperature (220 K) at full emissivity, as if all of the radiation from the Earth surface were blocked in the whole interval 600 - 800.
The OLR spectrum delivered by AIRS is thus an artificial spectrum constructed so as to hide that away from the main resonance 667, CO2 has small emissivity and thus cannot block all of the radiation from the Earth surface (even under doubled concentration from preindustrial level).
OLR spectra delivered by AIRS (and IRIS) are viewed as the key evidence of "heat trapping" or "radiation blocking" by atmospheric CO2. If these OLR spectra turn out to be fakes misrepresenting physics, the main scientific argument of CO2 alarmism evaporates. So what does true science tell us: fake or not fake?
PS To help discussion, recall the sparseness of the CO2 spectrum around 667 as pictured in the previous post.
Sunday, 10 February 2013
Model of Atmosphere with CO2 shows Small Emissivity
Computed transmittance of atmosphere with CO2 with main resonance at wave number 667. Notice that the atmosphere is effectively transparent, except in a small interval around 667. The effect of CO2 is thus small.
Here is another argument indicating that the effect of the atmospheric trace gas CO2 on the radiation balance of the Earth is small.
We recall the model of blackbody radiation studied on Computational Blackbody Radiation: a collection of oscillators with small damping, with equal oscillator internal energy T representing temperature, with oscillator resonance frequencies n varying from 1 to a cut-off set at T, and with each oscillator radiating
• E_n = gamma T n^2
where gamma is a universal constant, which is Planck's law. Summing over n from 1 to T, we obtain the total radiance
• E = sum_n gamma T n^2 = sigma T^4
which is Stefan-Boltzmann's law with sigma = gamma/3. In the case of only one resonance frequency n = T, the radiance would be reduced to
• e = gamma T T^2 = gamma T^3 ~ E/T
with the reduction factor 1/T.
An atmosphere which is fully opaque over the entire spectrum would radiate E, while an atmosphere opaque only at a specific frequency near the cut-off T would radiate e ~ E/T, a reduction by the factor 1/T.
We conclude that the emissivity of a transparent atmosphere with a trace gas like CO2, with only a few isolated resonances, would scale like 1/T and thus be small as soon as T is bigger than, say, 100 K.
We thus find theoretical evidence from a basic model that the emissivity of the Earth's atmosphere with the trace gas CO2 would be small, and thus that CO2 would have little effect on the Earth's radiation balance.
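The scaling can be checked by direct summation; a minimal sketch with gamma = 1:

```python
# Full-spectrum radiance vs a single resonance at the cut-off n = T,
# with gamma = 1.
T = 300

E_full = sum(T * n**2 for n in range(1, T + 1))   # ~ (1/3) * T^4
e_single = T * T**2                               # = T^3

print(f"E = {E_full:.3e}, (1/3)T^4 = {T**4 / 3:.3e}")
print(f"single-resonance fraction e/E = {e_single / E_full:.4f}")  # ~ 3/T
```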
Note that the sparseness of CO2 as a trace gas gets expressed as a sparseness of the absorption spectrum rather than as a small mass fraction, because of the universality of blackbody radiation as being independent of the mass of the oscillators.
Thursday, 7 February 2013
Story of Vanishing Evidence of CO2 Warming
In the late 20th century, as the cold war was coming to an end, the fear of nuclear war was replaced by a fear of anthropogenic global warming caused by emissions of CO2 from the burning of fossil fuels.
Governments united under UN gave massive support to science for producing evidence of global warming by CO2, evidence which governments could use to motivate a special tax on CO2, which would lead society into a new world without CO2-emissions, with lots of tax money to be spent by politicians.
Scientists started, with great enthusiasm stimulated by generous government grants, searching for evidence of a warming effect of CO2 in the form of a "greenhouse effect", with CO2 named a "greenhouse gas" because of its capability of absorbing and emitting infrared radiation at isolated specific frequencies, due to its unsymmetric molecular structure, as observed in an early study by the Swedish Nobel Laureate Svante Arrhenius from 1896.
But finding evidence was difficult because CO2 is an atmospheric trace gas (now 390 ppm) and it is always difficult to find evidence of a big effect from a small cause, because this can only happen in unstable systems and such systems have no permanence allowing study.
An idea which developed was that a doubling of CO2 would cause a "radiative forcing" of an extra 2 - 4 W/m2 to be added to the 240 W/m2 received from the Sun, which was connected to a global warming of 1 C. Of course 1 C would be barely noticeable, but that was the best that could be squeezed out of CO2 alone; with feedback from the system under "radiative forcing" from doubled CO2, however, the warming could be inflated to 3 C, which would be enough to motivate a tax on CO2 to save the world.
The whole story of global warming by CO2 then rested on finding evidence of "radiative forcing" of 2 - 4 W/m2 from doubled CO2 and scientists were ordered to construct instruments for recording radiation spectra, which could show this effect.
But this turned out to be virtually impossible, because 2 - 4 W/m2 was too small to be measured: it required an accuracy below 1%, which was impossible to achieve in practice. The difficulty of a big effect from a small cause showed its real face.
To counter this "small cause - big effect" syndrome, new physics of "back radiation" was invented, showing a warming of the Earth surface by 300 W/m2 from the colder atmosphere as a big cause; but the new physics violated the 2nd law of thermodynamics and thus belonged to fiction.
What then remained of scientific support of CO2 alarmism based on observation was a recorded global warming of 0.5 C during 1970 - 1998, which was connected to an increase of atmospheric CO2 from 330 to 370 ppm over the same period.
But this connection disappeared during 1998 - 2013, which went by without global warming, while CO2 continued to increase to 390 ppm.
The evidence of global warming by CO2 has thus today crumbled to nil, which presents a difficult case for the new report by the IPCC, the UN body in charge of CO2 alarmism, to be presented in September. Without scientific support, a CO2 tax cannot be motivated, and the whole BIG BLUFF project of unprecedented dimensions collapses.
... and All the King's Horses and all the King's Men Couldn't put Humpty together again...
This is a summary of a study I was led into starting in 2009 and have reported on this blog.
Radiative Heat Transfer as Resonance Phenomenon
The analysis of blackbody radiation presented on Computational Blackbody Radiation suggests that radiative heat transfer is a phenomenon of near-resonance between bodies communicating through electromagnetic waves, combined with a phenomenon of high-frequency cut-off, which effectively leads to one-way heat transfer from warm to cold.
Viewing radiative heat transfer this way removes the non-physical aspects which appear when viewing it as a two-way exchange of photon particles carrying heat energy back and forth, from warm to cold and from cold to warm. The latter view is common in e.g. climate science, with in particular downwelling longwave radiation DLR from a cold atmosphere supposedly warming the Earth surface. The non-physical aspects concern the idea of infrared photons and the violation of the 2nd law in heat transfer from cold to warm.
The model analyzed on Computational Blackbody Radiation consists of a system of bodies with each body consisting of a set of oscillators subject to small radiative damping, which communicate by sharing a common force carried as an electromagnetic wave. In equilibrium the bodies share a common temperature and there is no heat transfer between the bodies.
Each body is like a radio receiver/sender communicating with the other bodies through resonance transmitted by a force carried by electromagnetic waves, thus interacting over distance by resonance.
If one body is heated (e.g. internally), then its oscillator amplitude increases, and so does the corresponding balancing force; the residual force is transmitted to the other bodies, which in resonance restore force balance, reaching a common temperature. The result is that the heated body transfers heat energy to the surrounding colder bodies, by resonance over distance.
With this view, the functioning of an infrared thermometer can be understood as a set of oscillators which by resonance assume the same temperature as a target at a distance.
Similarly, a selective infrared thermometer can be conceptualized as an oscillator with a specific resonance frequency, capable of measuring at a distance the temperature of a body with the same resonance. It will operate like a sensitive radio receiver which can tune in on a weak sender at a specific frequency.
The Infrared Interferometer Spectrometer IRIS carried by the Nimbus 4 satellite can be seen as such a selective infrared thermometer, capable of measuring the temperature of the atmospheric trace gas CO2 through its main resonance at wave number 667, which produced the following spectrum supposedly demonstrating the warming effect of CO2 as the ditch around 667:
But as discussed in previous posts on emissivity, it is not at all clear that the above spectrum, constructed from measuring the temperature of the trace gas CO2, describes the emission spectrum of the Earth + atmosphere in the range of resonance of CO2. Most likely, it does not.
PS Here is a transmittance spectrum of CO2 from Scienceofdoom, computed with spectralcalc, illustrating the sparseness of the absorption away from 667. It does not seem plausible that the transmittance of an O2 - N2 atmosphere with a trace of CO2 is close to zero in the whole interval 600 - 800.
Here is a close-up of computed transmittance through 1 m of atmosphere with a typical CO2 concentration at 0.1 bar, showing the sparseness of absorption:
Wednesday, 6 February 2013
Inflated Modtran Effect of Atmospheric Trace Gases
What is the effect of the atmospheric trace gas CO2 (390 ppm or 0.039%) on global climate? The answer by IPCC consensus is given in the following graph:
which shows the spectrum of the outgoing longwave radiation OLR from the Earth with atmosphere as computed by the software Modtran on the basis of certain satellite measurements.
The idea is that the ditch in the spectrum in the wave number interval 600 - 800 can be attributed to the main absorption line at 667 of CO2. The total effect is then supposed to be the area between the Planck curves for 288 K and 220 K in the interval 600 - 800, which comes out to be about 40 W/m2. The total "radiative forcing" of atmospheric CO2 is thus supposed to be about 40 W/m2 (disregarding overlap with H2O), which by Stefan-Boltzmann's Law can be connected to a temperature rise of about 10 C. The total effect of CO2, as the main cause of the ditch, would thus be global warming of 10 C. (With half the ditch attributed to CO2, the effect would be 5 C, which Lindzen cuts down to 2.5 C without disclosing the reason. PS With 20 instead of 40 W/m2 for the ditch, and half of the 20 attributed to CO2, one would get 10 W/m2 and thus Lindzen's 2.5 C.)
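For readers who want to check the order of magnitude themselves, here is a small sketch (my own, not taken from the post) that integrates the difference between the two Planck curves over the interval 600 - 800; the result is sensitive to the band limits and to the emissivities assumed at the two temperatures, so it should be read as an order-of-magnitude check rather than a confirmation of any particular value:

  #include <cmath>
  #include <cstdio>

  // Planck spectral radiance per unit wavenumber, SI: W m^-2 sr^-1 (m^-1)^-1
  double planck(double nu, double T) {
      const double h = 6.62607e-34, c = 2.99792458e8, k = 1.380649e-23;
      return 2.0 * h * c * c * nu * nu * nu / (std::exp(h * c * nu / (k * T)) - 1.0);
  }

  int main() {
      const double pi = 3.14159265358979;
      const double lo = 60000.0, hi = 80000.0;   // 600 - 800 cm^-1 in m^-1
      const int N = 2000;
      const double dnu = (hi - lo) / N;
      double area = 0.0;
      for (int i = 0; i < N; ++i) {
          double nu = lo + (i + 0.5) * dnu;      // midpoint rule
          area += pi * (planck(nu, 288.0) - planck(nu, 220.0)) * dnu;  // flux = pi * radiance
      }
      std::printf("area between Planck curves, 600 - 800 cm^-1: %.0f W/m2\n", area);
      return 0;
  }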
If CO2 were complemented by other trace gases together covering the whole spectrum, the warming effect would be 288 - 220, that is a whopping 68 K, from the mere presence of trace gases.
Is this reasonable? Is it possible that trace gases, making the atmosphere opaque for certain wave numbers, can make the full atmosphere opaque over the entire spectrum?
No, it does not seem to be reasonable, as detailed in an earlier discussion, which means that the above spectrum computed by Modtran does not seem to describe reality.
Yet the Modtran spectrum is the very basis of CO2 alarmism, as the core evidence that the presence of a trace gas can cause substantial global warming.
So there is the question: Can the presence of trace gases make the atmosphere fully opaque?
NASA is now spending big money on projects such as CERES to find instrumental evidence of anthropogenic global warming AGW, but finds nothing. It is similar to the fruitless efforts by the CIA to find evidence of weapons of mass destruction WMD in Saddam Hussein's Iraq. Is this an expression of paralyzed US politics?
LB Abandons IPCC: When Will KVA's Recantation Come?
The Katternö magazine reports that Lennart Bengtsson is well on his way to abandoning the CO2 alarmism he previously advanced as the person responsible for KVA's statement in support of the IPCC:
• The temperature increase is so small that hardly anyone would have noticed it, had we meteorologists not informed the public about it.
• I would rather compare it with the medieval indulgence letters of the Catholic Church, which were an effective way of getting a frightened public to pay to escape the horrors of hell. The Catholic Church of that era showed great skill here. We should be grateful that Luther managed to put a stop to this malpractice, at least in our Protestant parts.
• I am not just surprised, I am astonished!
• Worrying that Antarctica is about to melt is almost on the same level as worrying that the Earth and Venus might collide within a billion years or so [which some model computations show].
Now we are just waiting for LB to rewrite KVA's statement, from support into repudiation of the IPCC's CO2 alarmism. When will this come, LB? The people and government of Sweden are waiting for word in order to move on!
Perhaps LB could even begin to appreciate a person like me, and not just dismiss what I write as "gibberish", as in his earlier statement in DN? Perhaps the clock has come full circle.
CERES and Radiative Forcing
The Earth's energy budget according to CERES showing unphysical "back radiation" of 340 W/m2
An overview of the CERES project is given in CERES, a review: Past, present and Future (2011):
• The Clouds and Earth Radiant Energy System (CERES) project’s objectives are to measure the reflected solar radiance (shortwave) and Earth-emitted (longwave) radiances and from these measurements to compute the shortwave and longwave radiation fluxes at the top of the atmosphere (TOA) and the surface and radiation divergence within the atmosphere. The fluxes at TOA are to be retrieved to an accuracy of 2%.
• The first objective of CERES is to measure OLR radiances to an accuracy of 1% and reflected solar radiances to 2%. The global mean OLR flux is approximately 240 W/m2, so the requirement is 2.4 W/m2 accuracy. Likewise the global mean reflected flux is 100 W/m2, thus the requirement for shortwave flux is 2 W/m2.
We conclude that the accuracy of CERES, even at the tighter level of 1% for OLR, would not be good enough to detect the effects of "radiative forcing" by CO2.
That the "radiative forcing" of doubled CO2 would be 3.7 W/m2 is a wild guess by IPCC without experimental support, which serves as the following cornerstone of CO2 alarmism:
Here the "easy undisputed calculation" is dQ = 4 dT, where dQ is "radiative forcing" and dT corresponding global warming as a differentiated form of Stefan-Boltzmann's Law Q = sigma T^4.
The number 3.7 or 4 W/m2 is so chosen by IPCC that the global warming comes out as 1 C according to the "easy undisputed calculation": not too big to be impossible, and not too small to be negligible. The basic argument of CO2 alarmism is that with feedback 1 C can become 3 C, which is alarming.
The whole idea of "radiative forcing" lacks a sound physical basis, since the forcing comes from the Sun and only from the Sun; this is reflected by the fact that it cannot be discovered by instruments measuring physical phenomena, that is, it is the fiction of a BIG BLUFF.
PS CERES is selling itself, e.g. in a video telling us
• When you add greenhouse gases such as CO2 and methane you change that radiation balance at the top of the atmosphere and you change the amount of outgoing radiation, so that imbalance means more energy in the system...part of it goes into the ocean...and part of it goes into actually warming the Earth...all of those things should give you a coherent picture of how things are changing as we warm the climate...
The purpose is obvious: use CERES to show global warming by radiative forcing from CO2. The only trouble is that CERES shows nothing of the sort...all attempts to measure radiative forcing by CO2 seem to fail miserably...the scale of the BIG BLUFF, with its organized governmental science support, is really impressive...
Tuesday, 5 February 2013
CERES: No Global Warming Detected
In previous posts we have decoded the BIG BLUFF of CO2 alarmism, based on fabricating evidence of a global warming effect of CO2 in large-scale experimental projects such as ERBE, followed by CERES, designed to produce measurements of radiative fluxes. While ERBE focussed on outgoing longwave radiation OLR as a one-way heat flux from the Earth + atmosphere into outer space at 3 K, which is a physical heat flux, CERES has lifted the ambition to include two-way heat fluxes between different parts of the atmosphere and the Earth surface, which lack physical reality:
The objective of CERES is described as follows in the Daily Press, Febr. 14, 2012:
• Clouds and the Earth's Radiant Energy System CERES is one of five earth science experiments aboard Suomi NPP, a $1.5 billion satellite NASA launched into space Oct. 28, 2011, from Vandenberg Air Force Base in California.
• Managed at NASA Langley Research Center in Hampton, CERES measures the amount of sunlight that enters Earth and how much sunlight and thermal radiation is reflected back to space. The concept is known as Earth's energy budget.
• According to NASA, the sun annually provides the planet about 340 watts per square meter — roughly the energy radiated from six incandescent light bulbs. If the planet returned an equal amount of energy to space, temperatures would be constant, Loeb said.
• That is not occurring. Instead, roughly 0.8 watts per square meter stays on Earth.
• The energy is trapped by greenhouse gases, such as water vapor and carbon dioxide, that come from burning fossil fuels and other sources. Clouds also play a role; they reflect sunlight back into space and, depending on their height and thickness, prevent it from leaving the planet.
• The imbalance helps explain why global temperatures increased 1.4 degrees since last century and sea levels are rising, Loeb said.
• NASA uses computer models to summarize the images into daily and monthly reports that date back to 1985. That's when another Langley instrument, ERBE, or Earth Radiation Budget Experiment, began monitoring the planet.
• Four successive CERES instruments — the one launched in October is a fifth generation — followed, providing data used by, among others, the Intergovernmental Panel on Climate Change.
• The panel, which shared a Nobel Peace Prize with former Vice President Al Gore, wrote what many view as the definitive report on climate change.
CERES has four main objectives:
1. For climate change analysis, provide a continuation of the ERBE record of radiative fluxes at the top of the atmosphere (TOA), analyzed using the same algorithms that produced the ERBE data.
2. Double the accuracy of estimates of radiative fluxes at TOA and the Earth's surface.
3. Provide the first long-term global estimates of the radiative fluxes within the Earth's atmosphere.
4. Provide cloud property estimates that are consistent with the radiative fluxes from surface to TOA.
• The instruments carry scanning thermistor bolometer sensors which measure Earth-reflected and Earth-emitted filtered radiances in the broadband shortwave (0.3 μm - 5.0 μm), broadband total-wave (0.3 μm - >100 μm), and narrow-band water vapor window (8 μm - 12 μm) spectral regions.
We see that CERES is used to record radiative heat fluxes using sophisticated technology, with the ambition to discover effects of global warming by CO2 as a radiative imbalance. To the disappointment of the designers and users of CERES, including the IPCC and Al Gore, the instruments show next to nothing: an imbalance of 0.8 W/m2 out of 340 W/m2 is smaller than any thinkable measurement error.
CERES (like the LHC) thus discovers nothing, and what we can now expect is a new, bigger project with more sensitive instruments of doubled accuracy, in the hope of finding something which is not zero as evidence of global warming by CO2.
This will not be cheap, but when you go to an expensive restaurant you expect to be offered something special, and so the high price and sophistication of the instrumentation can be seen as a guarantee that something will be discovered. |
c9ddd090ac1888d7 | Format: Paperback
Language: English
Format: PDF / Kindle / ePub
Size: 10.94 MB
Downloadable formats: PDF
This corresponds to the following formula: i(hbar)(dΨ/dt) = HΨ. This formulation is the most general form of the Schrödinger equation. As every observation requires an energy exchange (photon) to create the observed 'data', some energy (wave) state of the observed object has to be altered. Quantum mechanics has allowed us to build very effective circuitry and the like without ever needing to worry about the philosophical implications.
Pages: 372
Publisher: Merchant Books (March 4, 2009)
ISBN: 1603861866
Shakespeare's Comedy of a Midsummer Night's Dream
So, in terms of information, we look at our double-slit setup as a choice between two paths, where one path is 1 and the other path is 0. This is referred to as a which-path determination; particles follow paths. In the meantime, Maroney's team at Oxford is collaborating with a group at the University of New South Wales in Australia to perform similar tests with ions, which are easier to track than photons. "Within the next six months we could have a watertight version of this experiment," says Maroney.

If the principle of relativity weren't true, we would have to do all our calculations in some preferred reference frame. However, the more fundamental problem is that we have no idea what the velocity of this preferred frame might be. How about the velocity of the center of our galaxy or the mean velocity of all the galaxies?

Thus our deterministic Bohmian model yields the usual quantum predictions for the results of the experiment. As the preceding paragraph suggests, and as we discuss in more detail later, Bohmian mechanics does not need any "measurement postulates" or axioms governing the behavior of other "observables".

Within a wave function all 'knowable' (observable) information is contained (e.g. position (x), momentum (p), energy (E), ...). Connected to each observable there is a corresponding operator [for momentum: p = -i(hbar)(d/dx)]. When the operator operates on the wave function it extracts the desired information from it.

The study of waves dates back to the ancient Greeks, who observed how the vibrating strings of musical instruments would generate sound. While there are two fundamental types of waves - longitudinal and transverse - waves can take many forms (e.g., light, sound, and physical waves).

The darker red areas are where the wave crests are the largest. As the waves propagate further towards the island, the wave crests reach a maximum value and then start to break; note that the color intensity decreases due to the wave breaking (towards the upper left in the figure). In this picture, the offshore wave is 10 ft high with a 15 sec. wave period. The grid points here are about 8.8 feet apart.
Towards the later part of his scientific career, de Broglie worked towards developing a causal explanation of wave mechanics, in opposition to the wholly probabilistic models which dominate quantum theory, but he had to abandon it in the face of severe criticism from fellow scientists.

This idea that nature is inherently probabilistic — that particles have no hard properties, only likelihoods, until they are observed — is directly implied by the standard equations of quantum mechanics. But now a set of surprising experiments with fluids has revived old skepticism about that worldview. The bizarre results are fueling interest in an almost forgotten version of quantum mechanics, one that never gave up the idea of a single, concrete reality.

In 1905, Albert Einstein put into words the light quantum hypothesis, which states that light of the frequency n consists of parts, called photons, with energy E = h·n.
The Pinch Technique and its Applications to Non-Abelian Gauge Theories (Cambridge Monographs on Particle Physics, Nuclear Physics and Cosmology)
Integrated Photonics: Fundamentals
Administration in the public sector
If you find yourself in the branch where the spin is up, your coefficient is α, but so what? How do you know what kind of coefficient is sitting outside the branch you are living on? A 'random' number represents the outcome of a missing cause-effect chain.

The most basic wave (a form of plane wave) may be expressed in the form: where σ determines the spread of k1-values about k, and N is the amplitude of the wave.

Perhaps the foremost scientist of the 20th century was Niels Bohr, the first to apply Planck's quantum idea to problems in atomic physics. In the early 1900s, Bohr proposed a quantum mechanical description of the atom to replace the early model of Rutherford. It also seems to show that there can be no reality outside the universe, hence God is the universe or there is no God. This follows from a proof of Von Neumann in the 1930s, which demonstrated that 'Hidden Variables' cannot exist. 'Hidden Variables' is the reductionist view that there exists an underlying physical explanation for quantum mechanics, but it is hidden from view.

Prerequisites: Physics 218B. (S) A project-oriented laboratory course utilizing state-of-the-art experimental techniques in materials science. The course prepares students for research in a modern condensed matter-materials science laboratory. Under supervision, the students develop their own experimental ideas after investigating current research literature.

Transverse waves are waves where the displacement of the particles is perpendicular to the direction of travel of the waves. The critical difference between transverse and longitudinal waves is that transverse waves can be polarised whereas longitudinal ones cannot. Polarised waves are ones where the vibrations of the wave are in a single plane.

But it's true that, morally speaking, it seems to represent more than one particle. So now we talk a little about the nature of the spectrum. We want to go back to the Schrodinger equation here. And just move one term to the right hand side. And just see what can happen in terms of singularities and discontinuities. So first of all, we always begin with the idea that psi must be continuous.
Electromagnetic Surface Waves: A Modern Perspective (Elsevier Insights)
P(φ)₂ Euclidean (Quantum) Field Theory (Princeton Series in Physics)
Quantum Field Theory
Seismic wave mechanics (Chinese Edition)
Shakespeare's Play of a Midsummer Night's Dream
Microwave Photonics: Devices and Applications
Nonperturbative Quantum Field Theory (Nato Science Series B:)
Theory and Applications of Ocean Surface Waves (Third Edition) (in 2 Volumes) (Advanced Series on Ocean Engineering (Hardcover))
On The Nature of Spacetime: Ontology of Physics - Classical And Quantum
Engineering Satellite-Based Navigation and Timing: Global Navigation Satellite Systems, Signals, and Receivers
Probabilistic Treatment of Gauge Theories (Contemporary Fundamental Physics)
Standard Model Measurements with the ATLAS Detector: Monte Carlo Simulations of the Tile Calorimeter and Measurement of the Z → τ τ Cross Section (Springer Theses)
The Power and Beauty of Electromagnetic Fields
Schrödinger used the relativistic energy-momentum relation to find what is now known as the Klein-Gordon equation in a Coulomb potential. In January 1926, Schrödinger published in Annalen der Physik the paper "Quantisierung als Eigenwertproblem".

More interestingly, in an atom, the positive charges of the nucleus affect the potential energies of the wave functions of electrons, which in turn modify the Hamiltonian. The particle itself, being a wave, has its position spread out in space. The entirety of information about particles is encoded in the wavefunction Ψ, which is computed in quantum mechanics using the Schrodinger equation - a partial differential equation that can determine the nature and time development of the wavefunction.

Almost a century after its invention, experts still do not agree on the interpretation of such fundamental features as measurement or preparation of a quantum system. If these issues and applications intrigue you, then this course is where you should start.

So, if we add up the probabilities that the particle is somewhere over all of space - the probability that the particle is in this little dx, integrated - that must be equal to 1. In terms of things to notice here, maybe one thing you can notice is the units of Psi.

Rather they are a mathematical description of what we can actually know about the system. They serve only to make statistical statements and predictions of the results of all measurements which we can carry out upon the system. (Albert Einstein, 1940) It seems to be clear, therefore, that Born's statistical interpretation of quantum theory is the only possible one.

The electronic cloud effectively shields the nuclear charge towards the outside, making up a neutral whole, but is inefficient inside; in computing its structure, its own field must not be taken into account, only the field of the nucleus. iii) It seemed impossible to account for e.g.

An on-the-surface look, from a strictly intellectual vantage point, at what Quantum Physics is and what quantum physicists do may, and perhaps does, sound a bit ho-hum or perhaps even really complex.

Take the Schrodinger equation, change all x's to minus x's and show that, in fact, not only is psi of x a solution, but psi of minus x is also a solution. So prove that psi of minus x is a solution with the same energy.
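A one-line check of that parity exercise (my own working, assuming an even potential, $V(-x) = V(x)$): if $-\frac{\hbar^2}{2m}\psi''(x) + V(x)\psi(x) = E\,\psi(x)$ and $\phi(x) := \psi(-x)$, then $\phi''(x) = \psi''(-x)$ by applying the chain rule twice, so $-\frac{\hbar^2}{2m}\phi''(x) + V(x)\phi(x) = E\,\phi(x)$; that is, $\psi(-x)$ solves the same equation with the same energy $E$.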
If the matter wave to the left of the discontinuity is ψ1 = sin(k1x x + k1y y − ω1 t) and to the right is ψ2 = sin(k2x x + k2y y − ω2 t), then the wavefronts of the waves will match across the discontinuity for all time only if ω1 = ω2 ≡ ω and k1y = k2y ≡ ky.

The Milky Way's halo is curved spacetime. A moving particle has an associated aether displacement wave. In a double slit experiment the particle travels through a single slit and the associated wave in the aether passes through both.
|
2e5687be863bf720 | • Content count
• Joined
• Last visited
Community Reputation
259 Neutral
About coelurus
• Rank
Advanced Member
1. Learning Math really from scratch
You say you studied for grades in school. What is your goal this time, to study only enough maths so you can do the programming you're thinking of right now, or to actually gain that luscious deeper understanding and have a larger set of mathematical tools to choose from in the future? The best way of learning depends of course on who you are, but having a printed book is hard to beat. As has been mentioned, there are loads of really nice free video lectures online as complementary material, have a look. And considering the level of English in your post, I'd say maths books at the level you're interested in shouldn't be a problem. As always, there's no magic book to open the doors for you, just grab whatever you can find at the local library and have fun. Search for natural science programs in high schools and universities and see in what order they teach maths, and go by that. Keep programming and try to find ways to apply what you're learning, and eventually you'll do just that with linear algebra. Also, don't skip too many parts, thinking they are not relevant to rotating a 3D object. Maths is wonderfully intertwined, and starting with a strong foundation, new things in maths will be "obvious" rather than something you need to cram or constantly look up (i.e. "the reason for this expression is this, so the answer to my question is that" vs "was it this or that, wikipedia help!"). That's what I did, and what I'd do again...
2. Note that this will create a higher density of points close to the center of the disc. If you'd prefer a uniform distribution of points in the xy-plane, either generate (x, y)-pairs until one falls within the circular sector of the player of interest, or scale the points by sqrt(L):
[source lang="C++"]
x = sqrt(L) * cos(angle);
y = sqrt(L) * sin(angle);
[/source]
Read up on polar integration for more info.
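For completeness, here is the whole sampling routine in one place (a sketch of the sqrt-scaled version; it assumes L and angle come from a uniform random source, and the function name is just illustrative):
[source lang="C++"]
#include <cmath>
#include <cstdlib>

// Returns a point uniformly distributed over the unit disc.
// sqrt(L) compensates for the polar area element r*dr*dtheta growing with r.
void uniformDiscPoint(float &x, float &y)
{
    float L = std::rand() / (float)RAND_MAX;                    // uniform in [0, 1]
    float angle = 6.2831853f * (std::rand() / (float)RAND_MAX); // uniform in [0, 2*pi)
    float r = std::sqrt(L);
    x = r * std::cos(angle);
    y = r * std::sin(angle);
}
[/source]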
3. reason i love my wife
To indulge the OP: A good friend of mine started using Linux many years ago after having seen me tinker. He currently lives with his fiancee who plays the organ in a church and pretty much organizes any music (the choir, inviting "classical bands" some nights etc). They have at least 5 PCs or so (a media server, media player, one or two office computers and a laptop, he's a collector), all running Ubuntu (I think he is trying Arch on the laptop atm) during the day with the power horses dual booting to Windows for hefty games. Even though his fiancee is not a computer monster (she does what "normal people do" on computers), she really likes Ubuntu and have no issues or complaints about it. My good friend takes care of tech related issues anyway, and afaik that's a rather lazy "job". Wouldn't dare say this is the reason they love each other, but they both use Linux and they live a happy life together. For the nervy people: She is really pretty too... And does not smell funny! Aah, online communities, the metal grater back rub.
4. There are a few things you can do to compress the storage:
*) Non-uniform zenith angle: use more detail at the horizon, can reduce # verts to at most 60-70.
*) Reuse color sets: use one "day color set" and one "night color set" for several hours, 8 color sets should suffice.
*) As joe_bubamara stated, you don't need 32 bits of precision, or even 24 bits, since you interpolate; you will get all possible colors anyway.
*) From the good old links by filousnt, skip azimuth dependence for color and modulate brightness.
Let's say you have a 12*5 skydome, 8 color sets, 16 bits of color precision and zenith-only dependence. With all the above it'd take 5*8*2 bytes = 80 bytes. For the entire skydome, 12*5*8*2 bytes = 960 bytes. A couple of years ago I specified 4 angles for azimuth and interpolated between them, 4*5*8*2 bytes = 320 bytes.
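A sketch of how such a table could be laid out and sampled (my own illustration; struct and function names invented, colors stored as RGB565 to hit the 16-bit budget):
[source lang="C++"]
// 8 daily color sets x 5 zenith samples, 2 bytes (RGB565) each: 8*5*2 = 80 bytes.
struct SkyTable { unsigned short c[8][5]; };

static void unpack565(unsigned short p, float rgb[3])
{
    rgb[0] = ((p >> 11) & 31) / 31.0f;   // 5 bits red
    rgb[1] = ((p >> 5) & 63) / 63.0f;    // 6 bits green
    rgb[2] = (p & 31) / 31.0f;           // 5 bits blue
}

// t in [0, 8) picks the daily color set (wrapping around midnight),
// z in [0, 4] the zenith sample; blends bilinearly between neighbours.
void sampleSky(const SkyTable &sky, float t, float z, float rgb[3])
{
    int t0 = (int)t, t1 = (t0 + 1) % 8;
    int z0 = (int)z, z1 = z0 < 4 ? z0 + 1 : 4;
    float ft = t - (float)t0, fz = z - (float)z0;
    float a[3], b[3], p[3], q[3];
    unpack565(sky.c[t0][z0], a); unpack565(sky.c[t0][z1], b);
    unpack565(sky.c[t1][z0], p); unpack565(sky.c[t1][z1], q);
    for (int i = 0; i < 3; ++i) {
        float lo = a[i] + (b[i] - a[i]) * fz;
        float hi = p[i] + (q[i] - p[i]) * fz;
        rgb[i] = lo + (hi - lo) * ft;
    }
}
[/source]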
5. In short, there's no explicit expression.
6. two 16 bits into one 32 bit
Convert to and from fixed point:
[source lang="C++"]
/* Pack: scale each float in [0, 1] to 16-bit fixed point and shift into place. */
unsigned int i32 = ((unsigned int)(v1 * 65535) << 16) | (unsigned int)(v2 * 65535);
/* Unpack: mask out each half and scale back to [0, 1]. */
float o1 = (i32 >> 16) / 65535.0, o2 = (i32 & 0xFFFF) / 65535.0;
[/source]
I don't know the specifics of HLSL, but something similar to the above code should do the trick.
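If the inputs can fall outside [0, 1], a clamped variant keeps the bits of one half from bleeding into the other (a C++ sketch of the same idea; function names invented):
[source lang="C++"]
#include <algorithm>

// Pack two floats in [0, 1] into one 32-bit word, 16 bits of fixed point each.
unsigned int pack16x2(float v1, float v2)
{
    unsigned int a = (unsigned int)(std::min(std::max(v1, 0.0f), 1.0f) * 65535.0f);
    unsigned int b = (unsigned int)(std::min(std::max(v2, 0.0f), 1.0f) * 65535.0f);
    return (a << 16) | b;
}

// Recover the two floats; the round trip is exact to within 1/65535.
void unpack16x2(unsigned int i32, float &o1, float &o2)
{
    o1 = (i32 >> 16) / 65535.0f;
    o2 = (i32 & 0xFFFF) / 65535.0f;
}
[/source]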
7. Event handling in X11
Don't remember what happens to keyboard input when you do the override thingy, but what you say sounds reasonable. Grabbing the keyboard will fix it:
[source lang="C++"]
XGrabKeyboard(dis, win, true, GrabModeAsync, GrabModeAsync, CurrentTime);
[/source]
Make sure to ungrab and regrab whenever you should. I would strongly suggest you take a look at some existing source code, X has a rather steep learning curve. The simplest sources are games, but they still cover the most important parts.
8. Event handling in X11
It's all handled with bitmasks:
[source lang="C++"]
/* Enable redirection override. */
att.override_redirect = True;
/* Mask all wanted event bits together. */
att.event_mask = KeyPressMask | KeyReleaseMask | PointerMotionMask |
                 ButtonPressMask | ButtonReleaseMask | etc...;
/* Tell the window to take care of both override redirection and input events. */
XChangeWindowAttributes( dis, win, CWOverrideRedirect | CWEventMask, &att );
[/source]
9. Event handling in X11
You don't need the override thingie for input, but events:
[source lang="C++"]
att.event_mask = KeyPressMask;
XChangeWindowAttributes( dis, win, CWEventMask, &att );
[/source]
Check out the source code for the Quake games, they've got a lot of useful stuff on X.
10. I'm in the process of writing my own general-purpose GUI and it'll replace wxWidgets in an editor and be used in-game. Written as a plugin and "only" takes care of the spatial relationships and events, rendering takes place in GUI renderer plugins. Only got a silverish renderer atm, but it'd be possible to make one that co-works with the scene plugins for 3D GUIs with all the available graphics effects, whatever one would do that for.
11. tangent space normal mapping
Do you save your normals using angles? Try vectors, and transform your lights into tangent space with a TBN matrix (time to google!); vectors will represent normals in one reference system and you won't get those discontinuities.
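A rough sketch of that transform (my own illustration, names invented): with the per-vertex tangent T, bitangent B and normal N, moving a world-space light vector into tangent space is just three dot products, since the rows of the TBN matrix are the basis vectors.
[source lang="C++"]
struct Vec3 { float x, y, z; };

static float dot(const Vec3 &a, const Vec3 &b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Project v onto each basis vector of the (tangent, bitangent, normal) frame.
Vec3 toTangentSpace(const Vec3 &v, const Vec3 &T, const Vec3 &B, const Vec3 &N)
{
    Vec3 r = { dot(v, T), dot(v, B), dot(v, N) };
    return r;
}
[/source]
With the light in tangent space, the normal map itself never has to leave its own reference frame, which is what removes the discontinuities.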
12. Quaternions
I have no experience with FBX files and which parts are relevant to each other, but do the animation rotation angles follow the rotation axis order parameter, as in the FBX SDK GetEulerRotation{X,Y,Z} and GetRotationOrder?
13. Shader System
How about writing one effect file for each possible case regarding both effect and implementation? Gonna be a lot of effect files, so just split them up into different directories by implementation. This is very much a decision you will have to make, it's mainly a small organizational issue that determines how the effect development will work. Doesn't affect the higher level of the rendering process, so just choose one for now.
14. Quaternions
Regarding the "coordinate frames", it's already been covered in here. Don't concatenate quats from different coordinate systems (like one from object space and another from world space). But we already know that, right? :p Why are you trying to reconstruct a quaternion from Euler angles extracted from another quaternion? Look at each Euler angle you get and see if they're correct, simple examination that should tell you what goes wrong.
15. The math pros in here...
I studied "programming maths" by myself a couple of years back when I was still in elementary school, had one or two old high school grade maths books at home. University only showed me that the stuff I did was all true and how to extend upon what I already knew, plus interesting ways to use programming practically (solving the time-depenent Schrödinger equation in realtime in two and three dimensions, yummy!). |
294c9e33d7bb1aa0 | Modern Mathematical Physics: what it should be - Faculty of Physics, St Petersburg State University
Saint Petersburg State University
Modern Mathematical Physics: what it should be
L. D. Faddeev
Steklov Mathematical Institute, St Petersburg 191011, Russia
When somebody asks me, what I do in science, I call myself a specialist in mathematical physics. As I have been there for more than 40 years, I have some definite interpretation of this combination of words: «mathematical physics.» Cynics or purists can insist that this is neither mathematics nor physics, adding comments with a different degree of malice. Naturally, this calls for an answer, and in this short essay I want to explain briefly my understanding of the subject. It can be considered as my contribution to the discussion about the origin and role of mathematical physics and thus to be relevant for this volume.
The matter is complicated by the fact that the term «mathematical physics» (often abbreviated by MP in what follows) is used in different senses and can have rather different content. This content changes with time, place and person.
I did not study properly the history of science; however, it is my impression that, in the beginning of the twentieth century, the term MP was practically equivalent to the concept of theoretical physics. Not only Henri Poincaré, but also Albert Einstein, were called mathematical physicists. Newly established theoretical chairs were called chairs of mathematical physics. It follows from the documents in the archives of the Nobel Committee that MP had a right to appear both in the nominations and discussion of the candidates for the Nobel Prize in physics [1]. Roughly speaking, the concept of MP covered theoretical papers where mathematical formulae were used.
However, during an unprecedented bloom of theoretical physics in the 20's and 30's, an essential separation of the terms «theoretical» and «mathematical» occurred. For many people, MP was reduced to the important but auxiliary course «Methods of Mathematical Physics» including a set of useful mathematical tools. The monograph of P. Morse and H. Feshbach [2] is a classical example of such a course, addressed to a wide circle of physicists and engineers.
On the other hand, MP in the mathematical interpretation appeared as a theory of partial differential equations and variational calculus. The monographs of R. Courant and D. Hilbert [3] and S. Sobolev [4] are outstanding illustrations of this development. The theorems of existence and uniqueness based on the variational principles, a priori estimates, and imbedding theorems for functional spaces comprise the main content of this direction. As a student of O. Ladyzhenskaya, I was immersed in this subject since the 3rd year of my undergraduate studies at the Physics Department of Leningrad University. My fellow student N. Uraltseva now holds the chair of MP exactly in this sense.
MP in this context has as its source mainly geometry and such parts of classical mechanics as hydrodynamics and elasticity theory. Since the 60’s a new impetus to MP in this sense was supplied by Quantum Theory. Here the main apparatus is functional analysis, including the spectral theory of operators in Hilbert space, the mathematical theory of scattering and the theory of Lie groups and their representations. The main subject is the Schrödinger operator. Though the methods and concrete content of this part of MP are essentially different from those of its classical counterpart, the methodological attitude is the same. One sees the quest for the rigorous mathematical theorems about results which are understood by physicists in their own way.
I was born as a scientist exactly in this environment. I graduated from the unique chair of Mathematical Physics, established by V. I. Smirnov at the Physics Department of Leningrad State University already in the 30's. In his venture V. I. Smirnov got support from V. Fock, the world-famous theoretical physicist with very wide mathematical interests. Originally this chair played the auxiliary role of being responsible for the mathematical courses for physics students. However, in 1955 it got permission to supervise its own diploma projects, and I belonged to the very first group of students using this opportunity. As I already mentioned, O. A. Ladyzhenskaya was our main professor. Although her own interests were mostly in nonlinear PDE and hydrodynamics, she decided to direct me to quantum theory. During the last two years of undergraduate studies I was to read the monograph of K. O. Friedrichs, «Mathematical Aspects of Quantum Field Theory,» and relate it to our group of 5 students and our professor in a special seminar. At the same time my student friends from the chair of Theoretical Physics were absorbed in reading the first monograph on Quantum Electrodynamics by A. Ahieser and V. Berestevsky. The difference in attitudes and language was striking, and I was to become accustomed to both.
After my graduation O. A. Ladyzhenskaya remained my tutor, but she left me free to choose research topics and literature to read. I read both mathematical papers (i.e. on direct and inverse scattering problems by I. M. Gelfand and B. M. Levitan, V. A. Marchenko, M. G. Krein, A. Ya. Povzner) and «Physical Review» (i.e. on formal scattering theory by M. Gell-Mann, M. Goldberger, J. Schwinger and H. Ekstein) as well. Papers by I. Segal, L. Van Hove and R. Haag added to my first impressions of Quantum Field Theory taken from K. Friedrichs. In the process of this self-education my own understanding of the nature and goals of MP gradually deviated from the prevailing views of the members of the V. Smirnov chair. I decided that it is more challenging to do something which is not known to my colleagues from theoretical physics rather than supply theorems of substantiality. My first work on the inverse scattering problem, especially for the many-dimensional Schrödinger operator, and that on the three-body scattering problem confirm that I really tried to follow this line of thought.
This attitude became even firmer when I began to work on Quantum Field Theory in the middle of the 60's. As a result, my understanding of the goal of MP was drastically modified. I consider the main goal of MP to be the use of mathematical intuition for the derivation of really new results in fundamental physics. In this sense, MP and Theoretical Physics are competitors. Their goals in unraveling the laws of the structure of matter coincide. However, the methods, and even the estimates of the importance of the results of work, may differ quite significantly.
Here it is time to say in what sense I use the term «fundamental physics.» The adjective «fundamental» has many possible interpretations when applied to the classification of science. In a wider sense it is used to characterize the research directed to unraveling new properties of physical systems. In the narrow sense it is kept only for the search for the basic laws that govern and explain these properties.
Thus, all chemical properties can be derived from the Schrödinger equation for a system of electrons and nuclei. Alternatively, we can say that the fundamental laws of chemistry in a narrow sense are already known. This, of course, does not deprive chemistry of the right to be called a fundamental science in a wide sense.
The same can be said about classical mechanics and the quantum physics of condensed matter. Whereas the largest part of physical research lies now in the latter, it is clear that all its successes including the theory of superconductivity and superfluidity, Bose-Einstein condensation and quantum Hall effect have a fundamental explanation in the nonrelativistic quantum theory of many body systems.
An unfinished fundamental physical problem in the narrow sense is the physics of elementary particles. This puts this part of physics into a special position. And it is here that modern MP has the best chance of a breakthrough.
Indeed, until recently, all physics developed along the traditional circle: experiment - theoretical interpretation - new experiment. So theory traditionally followed experiment. This imposes a severe censorship on theoretical work. Any idea, bright as it may be, which is not supported by the experimental knowledge of the time when it appeared, is to be considered wrong and as such must be abandoned. Characteristically, the role of censors might be played by theoreticians themselves, and the great L. Landau and W. Pauli were, as far as I can judge, the most severe ones. And, of course, they had very good reason.
On the other hand, the development of mathematics, which is also to a great extent influenced by applications, nevertheless has its internal logic. Ideas are judged not by their relevance but more by esthetic criteria. The totalitarianism of theoretical physics gives way to a kind of democracy in mathematics and its inherent intuition. And exactly this freedom could be found useful for particle physics. This part of physics is traditionally based on the progress of accelerator techniques. The very high cost and restricted possibilities of the latter will soon become an uncircumventable obstacle to further development. And it is here that mathematical intuition could give an adequate alternative. This was already stressed by famous theoreticians with mathematical inclinations. Indeed, let me cite a paper [5] by P. Dirac from the early 30's:
The steady progress of physics requires for its theoretical formulation a mathematics that gets continually more advanced. This is only natural and to be expected. What, however, was not expected by the scientific workers of the last century was the particular form that the line of advancement of the mathematics would take, namely, it was expected that the mathematics would get more complicated, but would rest on a permanent basis of axioms and definitions, while actually the modern physical developments have required a mathematics that continually shifts its foundations and gets more abstract. Non-euclidean geometry and non-commutative algebra, which were at one time considered to be purely fictions of the mind and pastimes for logical thinkers, have now been found to be very necessary for the description of general facts of the physical world. It seems likely that this process of increasing abstraction will continue in the future and that advance in physics is to be associated with a continual modification and generalization of the axioms at the base of mathematics rather than with logical development of any one mathematical scheme on a fixed foundation.
There are at present fundamental problems in theoretical physics awaiting solution, e.g., the relativistic formulation of quantum mechanics and the nature of atomic nuclei (to be followed by more difficult ones such as the problem of life), the solution of which problems will presumably require a more drastic revision of our fundamental concepts than any that have gone before. Quite likely these changes will be so great that it will be beyond the power of human intelligence to get the necessary new ideas by direct attempts to formulate the experimental data in mathematical terms. The theoretical worker in the future will therefore have to proceed in a more indirect way. The most powerful method of advance that can be suggested at present is to employ all the resources of pure mathematics in attempts to perfect and generalise the mathematical formalism that forms the existing basis of theoretical physics, and after each success in this direction, to try to interpret the new mathematical features in terms of physical entities.
Similar views were expressed by C. N. Yang. I did not find a compact citation, but the whole spirit of his commentaries on his own collection of papers [6] shows this attitude. Also, he used to tell me this in private discussions.
I believe that the dramatic history of establishing gauge fields as a basic tool in the description of interactions in Quantum Field Theory gives a good illustration of the influence of mathematical intuition on the development of fundamental physics. Gauge fields, or Yang-Mills fields, were introduced to the wide audience of physicists in 1954 in a short paper by C. N. Yang and R. Mills [7], dedicated to the generalization of the electromagnetic field and the corresponding principle of gauge invariance. The geometric sense of this principle for the electromagnetic field was made clear as early as the late 20's due to the papers of V. Fock [8] and H. Weyl [9]. They underlined the analogy between the gauge (or gradient, in the terminology of V. Fock) invariance of electrodynamics and the equivalence principle of Einstein's theory of gravitation. The gauge group in electrodynamics is commutative and corresponds to the multiplication of the complex field (or wave function) of the electrically charged particle by a phase factor depending on the space-time coordinates. Einstein's theory of gravity provides an example of a much more sophisticated gauge group, namely the group of general coordinate transformations. Both H. Weyl and V. Fock were to use the language of the moving frame with spin connection, associated with local Lorentz rotations. Thus the Lorentz group became the first nonabelian gauge group, and one can see in [8] essentially all the formulas characteristic of nonabelian gauge fields. However, in contradistinction to the electromagnetic field, the spin connection enters the description of space-time and not the internal space of electric charge.
In the middle of the 30’s, after the discovery of the isotopic spin in nuclear physics, and forming the Yukawa idea of the intermediate boson, O. Klein tried to geometrise these objects. His proposal was based on his 5-dimensionalpicture. Proton and neutron (as well as electron and neutrino, there were no clear distinction between strong and weak interactions) were put together in an isovector and electromagnetic field and charged vector meson comprised a 2×2 matrix. However the noncommutative SU(2) gauge group was not mentioned.
Klein’s proposal was not received favorably and N. Borh did not recommend him to publish a paper. So the idea remained only in the form of contribution to proceedings of Warsaw Conference «New Theories in Physics» [10].
The noncommutative group, acting in the internal space of charges, appeared for the first time in the paper [7] of C. N. Yang and R. Mills in 1954. It is no wonder that Yang received a cool reaction when he presented his work at Princeton in 1954. The dramatic account of this event can be found in his commentaries [6]. Pauli was in the audience and immediately raised the question about mass. Indeed, gauge invariance forbids the introduction of mass for the vector charged fields, and masslessness leads to long-range interaction, which contradicts experiment. The only known massless particles (and accompanying long-range interactions) are the photon and the graviton. It is evident from Yang's text that Pauli was well acquainted with the differential geometry of nonabelian vector fields, but his own censorship did not allow him to speak about them. As we know now, the boldness of Yang and his esthetic feeling were finally vindicated. And it can rightly be said that C. N. Yang proceeded according to mathematical intuition.
In 1954 the paper of Yang and Mills did not move to the forefront of high energy theoretical physics. However, the idea of the charged space with a noncommutative symmetry group acquired more and more popularity, due to the increasing number of elementary particles and the search for a universal scheme of their classification. And at that time the decisive role in the promotion of the Yang-Mills fields was also played by mathematical intuition.
At the beginning of the 60’s, R. Feynman worked on the extension of his own scheme of quantization of the electromagnetic field to the gravitation theory of Einstein. A purely technical difficulty v- the abundance of the tensor indices v- made his work rather slow. Following the advice of M. Gell-Mann, he exercised first on the simpler case of the YangvMills fields. To his surprise, he found that a naive generalization of his diagrammatic rules designed for electrodynamics did not work for the Yang-Mills field. The unitarity of the S-matrix was broken. Feynman restored the unitarity in one loop by reconstructing the full scattering amplitude from its imaginary part and found that the result can be interpreted as a subtraction of the contribution of some fictitious particle. However his technique became quite cumbersome beyond one loop. His approach was gradually developed by B. De-Witt[11]. It must be stressed that % the physical senselessness of the YangvMills field did not preclude Feynman from using it for mathematical construction.
The work of Feynman [12] became one of the starting points for my work in Quantum Field Theory, which I began in the middle of the 60's together with Victor Popov. Another equally important point was the mathematical monograph by A. Lichnerowicz [13], dedicated to the theory of connections in vector bundles. From Lichnerowicz's book it followed clearly that the Yang-Mills field has a definite geometric interpretation: it defines a connection in a vector bundle, the base being space-time and the fiber the linear space of a representation of the compact group of charges. Thus, the Yang-Mills field finds its natural place among the fields of geometrical origin, between the electromagnetic field (which is its particular example for the one-dimensional charge) and Einstein's gravitation field, which deals with the tangent bundle of the Riemannian space-time manifold.
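In that geometric language (standard formulas, added here for orientation rather than taken from the essay): the Yang-Mills field is a connection one-form $A$ with curvature $F = dA + A \wedge A$, transforming under a gauge transformation $g(x)$ as $A \mapsto g A g^{-1} + g\, dg^{-1}$; the electromagnetic potential is the abelian special case, for which the $A \wedge A$ term vanishes.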
It became clear to me that such a possibility cannot be missed and, notwithstanding the unsolved problem of zero mass, one must actively tackle the problem of the correct quantization of the Yang-Mills field.
The geometric origin of the Yang-Mills field gave a natural way to resolve the difficulties with the diagrammatic rules. The formulation of the quantum theory in terms of Feynman's functional integral happened to be most appropriate from the technical point of view. Indeed, to take into account the gauge equivalence principle one has to integrate over the classes of gauge equivalent fields rather than over every individual configuration. As soon as this idea is understood, the technical realization is rather straightforward. As a result, V. Popov and I came out at the end of 1966 with a set of rules valid for all orders of perturbation theory. The fictitious particles appeared as auxiliary variables giving the integral representation for the nontrivial determinant entering the measure over the set of gauge orbits.
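In formulas (the standard modern way of writing the construction; the essay itself keeps to prose): integrating over gauge orbits leads to the gauge-fixed functional integral
$$Z = \int \mathcal{D}A \, \delta(f(A)) \, \det M_f \, e^{iS[A]},$$
and the determinant $\det M_f$, the nontrivial factor in the measure mentioned above, is given an integral representation by anticommuting ghost fields $c$, $\bar{c}$:
$$\det M_f = \int \mathcal{D}\bar{c} \, \mathcal{D}c \, \exp\!\Big(i \int \bar{c} \, M_f \, c \, d^4x\Big),$$
which is where the fictitious particles enter the diagrammatic rules.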
The correct diagrammatic rules for quantization of the Yang-Mills field, obtained by V. Popov and me in 1966-1967 [14], [15], did not immediately attract the attention of physicists. Moreover, the time when our work was done was not favorable for it. Quantum Field Theory was virtually forbidden, especially in the Soviet Union, due to the influence of Landau. «The Hamiltonian is dead» - this phrase from his paper [16], dedicated to the anniversary of W. Pauli, shows the extreme of Landau's attitude. The reason was quite solid: it was based not on experiment, but on the investigation of the effects of renormalization, which led Landau and his coworkers to believe that the renormalized physical coupling constant is inevitably zero for all possible local interactions. So there was no way for Victor Popov and me to publish an extended article in a major Soviet journal. We opted for a short communication in «Physics Letters» and were happy to be able to publish the full version in the preprint series of the newly opened Kiev Institute of Theoretical Physics. This preprint was finally translated into English by B. Lee as a Fermilab preprint in 1972, and from the preface to the translation it follows that it was known in the West already in 1968.
A decisive role in the successful promotion of our diagrammatic rules into physics was played by the works of G. 't Hooft [17], dedicated to the Yang-Mills field interacting with the Higgs field (and which ultimately led to a Nobel Prize for him in 1999), and by the discovery of dimensional transmutation (the term of S. Coleman [18]). The problem of mass was solved in the first case via spontaneous symmetry breaking; the second development was based on asymptotic freedom. There exists a vast literature dedicated to the history of this dramatic development. I refer to the recent papers of G. 't Hooft [19] and D. Gross [20], where the participants in this story share their impressions of this progress. As a result, the Standard Model of unified interactions got its main technical tool. From the middle of the 70's until our time it remains the fundamental base of high energy physics. For our discourse it is important to stress once again that the paper [14], based on mathematical intuition, preceded the works made in the traditions of theoretical physics.
The Standard Model did not complete the development of fundamental physics, in spite of its unexpected and astonishing experimental success. The gravitational interaction, whose geometrical interpretation is slightly different from that of the Yang-Mills theory, is not included in the Standard Model. The unification of quantum principles, Lorentz-Einstein relativity and Einstein gravity has not yet been accomplished. We have every reason to conjecture that the modern MP and its mode of working will play the decisive role in the quest for such a unification.
Indeed, the new generation of theoreticians in high energy physics has received an incomparably higher mathematical education. They are not subject to the pressure of old authorities maintaining the purity of physical thinking and/or terminology. Furthermore, many professional mathematicians, tempted by the beauty of the methods used by physicists, have moved to the position of modern mathematical physics. Let us cite from the manifesto written by R. MacPherson during the organization of the Quantum Field Theory year at the School of Mathematics of the Institute for Advanced Study at Princeton:
The goal is to create and convey an understanding, in terms congenial to mathematicians, of some fundamental notions of physics, such as quantum field theory. The emphasis will be on developing the intuition stemming from functional integrals.
One way to define the goals of the program is by negation, excluding certain important subjects commonly pursued by mathematicians whose work is motivated by physics. In this spirit, it is not planned to treat, except peripherally, the magnificent new applications of field theory, such as the Seiberg-Witten equations to Donaldson theory. Nor is the plan to consider fundamental new constructions within mathematics that were inspired by physics, such as quantum groups or vertex operator algebras. Nor is the aim to discuss how to provide mathematical rigor for physical theories. Rather, the goal is to develop the sort of intuition common among physicists for those who are used to thought processes stemming from geometry and algebra.
I propose to call the intuition to which MacPherson refers that of mathematical physics. I also recommend that the reader look at the instructive drawing by R. Dijkgraaf on the dust cover of the volumes of lectures given at the School [21].
The union of these two groups constitutes an enormous intellectual force. In the next century we will learn if this force is capable of substituting for the traditional experimental base of the development of fundamental physics and pertinent physical intuition.
1. B. Nagel, The Discussion Concerning the Nobel Prize of Max Planck, in Science, Technology and Society in the Time of Alfred Nobel (New York: Pergamon, 1982).
2. P. Morse and H. Feshbach, Methods of Theoretical Physics (New York: McGraw-Hill, 1953).
3. R. Courant and D. Hilbert, Methoden der mathematischen Physik (Berlin: Springer, 1931).
4. S.L. Sobolev, Nekotorye primeneniya funktsional'nogo analiza v matematicheskoi fizike (Some Applications of Functional Analysis in Mathematical Physics) (Leningrad: Leningrad Gos. Univ., 1950).
5. P. Dirac, Quantized Singularities in the Electromagnetic Field, Proc. Roy. Soc. London A 133, 60-72 (1931).
6. C.N. Yang, Selected Papers 1945-1980 with Commentary (San Francisco: Freeman, 1983).
7. C.N. Yang and R. Mills, Conservation of Isotopic Spin and Isotopic Gauge Invariance, Phys. Rev. 96, 191-195 (1954).
8. V. Fock, L'équation d'onde de Dirac et la géométrie de Riemann, J. Phys. et Rad. 70, 392-405 (1929).
9. H. Weyl, Electron and Gravitation, Zeit. Phys. 56, 330-352 (1929).
10. O. Klein, On the Theory of Charged Fields, submitted to the conference New Theories in Physics, Warsaw, 1938; Surv. High Energy Phys. 5, 269 (1986).
11. B. DeWitt, Quantum Theory of Gravity II: The Manifestly Covariant Theory, Phys. Rev. 162, 1195-1239 (1967).
12. R.P. Feynman, Quantum Theory of Gravitation, Acta Phys. Polon. 24, 697-722 (1963).
13. A. Lichnerowicz, Théorie globale des connexions et des groupes d'holonomie (Roma: Ed. Cremonese, 1955).
14. L. Faddeev and V. Popov, Feynman Diagrams for the Yang-Mills Field, Phys. Lett. B 25, 29-30 (1967).
15. V. Popov and L. Faddeev, Perturbation Theory for Gauge-Invariant Fields, Preprint NAL-THY-57, National Accelerator Laboratory (1972).
16. L. Landau, in Theoretical Physics in the Twentieth Century: A Memorial Volume to Wolfgang Pauli, ed. M. Fierz and V. Weisskopf (Cambridge, USA, 1956).
17. G. 't Hooft, Renormalizable Lagrangians for Massive Yang-Mills Fields, Nucl. Phys. B 35, 167-188 (1971).
18. S. Coleman, Secret Symmetries: An Introduction to Spontaneous Symmetry Breakdown and Gauge Fields, lecture given at the 1973 International Summer School in Physics Ettore Majorana, Erice (Sicily), Erice Subnucl. Phys. (1973).
19. G. 't Hooft, When Was Asymptotic Freedom Discovered? Rehabilitation of Quantum Field Theory, Preprint hep-th/9808154 (1998).
20. D. Gross, Twenty Years of Asymptotic Freedom, Preprint hep-th/9809060 (1998).
21. R. Dijkgraaf, Quantum Fields and Strings: A Course for Mathematicians, vols. I, II (AMS, IAS, 1999).
Physical justification for Uncertainty principle
1. May 10, 2006 #1
Most popular or elementary textbook level expositions advance the following physical argument in favor of the uncertainty principle:
In order to observe a state we must disturb it. Thus we have changed the state by our very act of observation and uncertainty creeps in.
Now this explanation is obviously wrong to me for more than one reason, but what is the real explanation? Does an explanation exist, or is it taken as an unexplainable axiom? Thanks.
3. May 10, 2006 #2
It's a logical consequence of the 5+1 axioms of QM.
4. May 10, 2006 #3
Don't stick too much to the 'axioms' or to the 'observation' stuff.
The wavepacket picture is in itself already very clear and much simpler:
if a wavepacket has a sharp localisation it cannot have a sharp wavevector spectrum, and vice-versa
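In symbols (a standard statement of the Fourier bandwidth theorem, added here for concreteness): for a normalized packet ψ(x) with Fourier transform φ(k), the root-mean-square widths satisfy

Δx · Δk ≥ ½

with equality only for Gaussian packets; the de Broglie relation p = ħk then turns this into the familiar Δx · Δp ≥ ħ/2.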
This picture covers only a small part of the applications of the uncertainty principle.
But it shows that the "axiom" or "observation" points of view are not absolutely necessary.
However, there is a link. Of course, if you want to 'prepare' or to 'choose' a wave-system that has some localisation in space, then you know that - because it is a wave system - you will have an indefinite wavevector.
Now, there is still something to clarify: the relation between momentum and wavevector. Saying that this is simply Planck's relation tells us rather little, except that a universal constant is involved. But where does this relation come from, physically? What is its interpretation? There is none, I think, only that QM is more general than CM.
I find it fascinating how Euler, Lagrange, Hamilton, and others were able to put classical mechanics in the so-called 'canonical form' without knowing anything about the quantum. This is so impressive that I feel guilty for not being able to explain the path from momentum to wavevector.
5. May 10, 2006 #4
Loom, I dunno what your level of knowledge on the subject is, but mine is way below the level of the two above answers. They're of no help to me. I'll attempt to answer it in layperson's terms. Forgive me if I underestimate your level of knowledge.
First, be careful of the term "obvious". Nothing should be obvious. We're dealing with atomic and subatomic particles here, something that cannot be compared to macroscopic objects like rocks and people.
Second, be careful of the assumptions you make when talking about "observing" something. In the macro world, we use an observing medium (light) that is WAY smaller than the objects we are trying to observe - it has no noticeable effect on them. But in the atomic world, our observing methods are on the scale of the things being observed.
Consider what would happen if you scaled your atomic observations up to macro size. Say you wanted to observe the speed and position of a grapefruit-sized atom across the table. Not too hard to do - unless your only source of information about its speed and position was by pelting it with apples! Further, you don't get to actually SEE the apples OR the grapefruit; all you get to do is record where the APPLES stopped after they bounced.
What's worse, slow-moving apples do a lousy job of telling you where the grapefruit is, or is going. Faster-moving apples will give you a better idea, but how do you think these faster-moving apples will affect the velocity and position of the grapefruit? It'll knock the grapefruit around even MORE.
Now, after the first few apples have ricocheted off the grapefruit, do you think it still has the same position or velocity?
6. May 10, 2006 #5
The physical justification is that it works.
Dave, what you've said would imply that the Uncertainty Principle is a purely experimental phenomenon. In fact, it is absolutely necessary for our current formulation of quantum mechanics, and is a consequence of the formal structure, not just some appendix to it.
What the OP said is more or less correct: to observe we must disturb, and unlike in classical mechanics, in quantum mechanics there is no omniscient observer measuring every dynamical quantity. The observer is part of the system, so the order of measurements that an observer chooses directly affects the system being measured, and there's no way around it.
7. May 10, 2006 #6
I will have to concur with the previous post. You are implying that the HUP is a "measurement uncertainty". This is not correct. If it is, then does that mean that if I have a better instrument, I can actually change the HUP? The uncertainty that one gets in a single measurement, which is contained in what you have described above, is not the HUP.
Again, take note that the HUP is a natural consequence of the formulation and not a consequence of measurement instrumentation. The formulation provides no way around it.
8. May 10, 2006 #7
Yes, it was not meant to be my implication that it was merely a measurement thing.
What I was trying to get across is that there is no way of making an observation without affecting the thing being observed.
9. May 10, 2006 #8
Is the canonical comm. relation one of dextercioby's "5+1 postulates"?
10. May 10, 2006 #9
Here's a different formulation of Heisenberg based on Fourier analysis:
Consider the quantum wavefunction as describing the probability amplitude for finding the particle at each location. Take the Fourier transform of the wavefunction.
Suppose you have determined the position of the particle to great accuracy. The quantum wavefunction then "looks" like a spike at the location where the particle is and zero elsewhere. What does the Fourier transform look like? It's all over the place, since the range of frequencies that must be added up to get the wavefunction is huge - so the spread in wave number of the quantum wavefunction is gigantic. Since wave number is related to momentum, position and momentum are therefore mutually uncertain.
Similarly, suppose you have determined the wave number (i.e., momentum) to great certainty. The Fourier transform of the wavefunction is then a spike, indicating that the wavefunction has few frequency components - so the quantum wavefunction is all spread out in space, meaning there is uncertainty in position.
Finally, some knowledge of Fourier analysis says the minimum uncertainty happens when the quantum wavefunction is a Gaussian. The wave-number distribution is then also a Gaussian, and the product of the two spreads is 1/2.
This can be converted into the familiar form by substituting momentum for wave number, and saying that 1/2 is the absolute minimum (which is where the greater-than-or-equals comes in).
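For the curious, this is easy to check numerically. Below is a minimal sketch (NumPy; the grid size and packet width are arbitrary choices of mine, not anything from this thread) that builds a Gaussian packet, Fourier-transforms it, and verifies that the product of the rms spreads comes out at the Gaussian minimum of 1/2:

import numpy as np

N, L = 4096, 200.0                     # number of samples, spatial extent
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

sigma = 2.0                            # width parameter of the packet
psi = np.exp(-x**2 / (4 * sigma**2))   # Gaussian wavepacket
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

# rms spread in position from |psi|^2
delta_x = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)

# Fourier transform; k is the wave number, spaced by 2*pi/L
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi)
delta_k = np.sqrt(np.sum(k**2 * np.abs(phi)**2) * (2 * np.pi / L))

print(delta_x * delta_k)               # ~0.5, the Gaussian minimum

Replacing the Gaussian with any other normalized shape makes the printed product come out larger than 0.5, which is the inequality in action.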
11. May 10, 2006 #10
If I am not mistaken this was Heisenberg's original justification for his principle.
12. May 10, 2006 #11
I'm not exactly sure if it was his original justification, but I remember reading that Heisenberg initially drew the same wrong conclusion - that it is error introduced by observation - and was convinced by Born (or possibly Bohr?) to look at his work again, after which he determined it was really a fundamental uncertainty.
13. May 10, 2006 #12
Molu, uncertainty has nothing to do with disturbances. To see this, we just have to remember that the quantum-mechanical probability algorithm is time-symmetric. It allows us to assign probabilities not only to the possible outcomes of future measurements (on the basis of actual outcomes of present measurements) but also to the possible outcomes of past measurements (on the same basis). (In both cases we use the Born rule. We can even assign probabilities to the possible outcomes of present measurements on the basis of past and future measurements if we use the Aharonov-Bergmann-Lebowitz rule instead.)
Suppose that a measurement at t1 yields the value v1 and a measurement at the later time t2 yields the value v2. If the measurements are of the repeatable kind and the Hamiltonian is 0, then according to the usual time-asymmetric interpretation of the formalism, the observable measured possesses the value v1 from t1 to t2, at which time its value changes to v2. If this is a valid interpretation, then so is the following: the observable measured possesses the value v2 from t1 (at which time its value suddenly changes from v1) until t2. If the latter interpretation is harebrained, then so is the first. If there is no collapse backward in time, then there is no collapse forward in time. And if there is no collapse, the question of disturbance does not arise. Observables have values only if, when, and to the extent that they are measured.
If you want to say something half testable about the values of observables between measurements, you can say it in terms of the probabilities of the possible outcomes of unperformed measurements.
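For reference, the Aharonov-Bergmann-Lebowitz rule mentioned above can be written out (this is its standard textbook form, not something from the post): for a measurement with possible outcomes a_j sandwiched between a preparation |ψ⟩ at t1 and a post-selection |φ⟩ at t2, with trivial dynamics in between,

P(a_j) = |⟨φ|a_j⟩⟨a_j|ψ⟩|² / Σ_k |⟨φ|a_k⟩⟨a_k|ψ⟩|²

which is manifestly symmetric between the earlier state |ψ⟩ and the later state |φ⟩, as the time-symmetry argument above requires.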
14. May 10, 2006 #13
Daniel, right you are, but there is nothing sacrosanct about those axioms. You can choose others, as long as they give you the same probability assignments. These days, post string theory and Susskind's landscape, anthropic arguments are again in vogue. Quoting Jeffrey Bub:
the fact that we find ourselves in a quantum world where measurement is possible... will surely involve the same sort of explanation as the fact that we find ourselves in a world where we are able to exist as carbon-based life forms.
In this spirit, require the existence of objects that
• have spatial extent (they "occupy" space),
• are composed of a (large but) finite number of objects that lack spatial extent,
• are stable—they neither collapse nor explode the moment they are formed,
and find that a precondition for the existence of such objects is the fuzziness of both the relative positions and the relative momenta of their components, as I have explained in https://www.physicsforums.com/showthread.php?t=116582. For a stable equilibrium between
• the tendency of interatomic relative positions to become less fuzzy due to electrostatic attraction and
• the tendency of interatomic relative positions to become more fuzzy due to the fuzziness of the corresponding momenta,
we need a relation between Δx and Δp such that a decrease in one implies an increase in the other. It stands to reason that Δp goes to infinity as Δx goes to zero. A sharp position then implies a maximally fuzzy momentum (that is, a flat distribution containing no information whatever). This suffices to derive the free-particle propagator and the free Schrödinger equation. Once we have that, the possibilities of incorporating interactions into the resulting formalism are very limited and largely determined by requiring the existence of measurement devices or, in other words, the consistency of the formalism, inasmuch as this presupposes the existence of measurement devices.
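That reciprocal relation can be made quantitative with a standard back-of-the-envelope estimate (my addition, not the poster's): taking Δp ~ ħ/Δx, the energy of an electron confined to a region of size a is roughly

E(a) ≈ ħ²/(2ma²) − e²/a   (Gaussian units)

The first term (momentum fuzziness) favors spreading out, the second (electrostatic attraction) favors contraction; setting dE/da = 0 gives a = ħ²/(me²), the Bohr radius. A stable atomic size exists precisely because Δp grows as Δx shrinks.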
15. May 11, 2006 #14
Michel, the concepts that feature in classical physics have their roots in the mathematical features of the quantum-mechanical probability algorithms. The classical concepts of energy and momentum are rooted in the time and space dependence of the phases of quantum-mechanical probability amplitudes. (These amplitudes are complex numbers. While a real number has an absolute value and a sign, a complex number has an absolute value and a phase, which is a real number.)
16. May 11, 2006 #15
The explanation you give, a highly popular one first given by Heisenberg himself, is nevertheless a wrong one, as pointed out by others and as I said myself. In fact, post-EPR, it is my understanding that it is entirely possible to avoid the observer effect. If Bob measures one of two entangled particles, he is also in fact measuring Alice's particle without in any way 'disturbing' it. Because of this, any uncertainty principle constructed on the basis of the observer effect would hold in classical mechanics but break down when the Bell inequality is violated.
17. May 11, 2006 #16
I think that's the best explanation I've seen here. It explained things at the right level for me. Thanks :) Though I'm afraid I didn't take the trouble to understand Fourier Analysis, I can get the essence of the argument.
18. Jun 13, 2006 #17
There's actually not much magical stuff to the whole "uncertainty principle." "It comes from the axioms of QM." Well... it can... but you can also get it just from arguments involving the Fourier transform of a finite wave pulse. As such it can just as easily be obtained classically as it can quantum mechanically.
Even classically, if you look at a pulse of light and take its Fourier transform, you'll find that it behaves as though it has a distribution of frequencies, with an average frequency that corresponds exactly to the frequency that normally appears in Planck's E = hν. If you then say that with each frequency there is an associated energy of the same form as above, then you can obtain, via classical arguments, the "uncertainty principle." It's not really an uncertainty principle per se, since this is classical, but you will find that the length of the time pulse and the associated energy distribution obey exactly the same inequality, as spelled out below. See? Oooooo! QM from classical! Of course the wave for "particles" requires quantum mechanics, but the principle can be obtained in a similar way...
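Spelled out (a standard time-bandwidth calculation, added for concreteness): a classical pulse of rms duration Δt has an rms spectral width Δν satisfying

Δt · Δν ≥ 1/(4π)

and assigning each frequency component the energy E = hν converts this into

Δt · ΔE ≥ h/(4π) = ħ/2

formally identical to the quantum energy-time inequality, exactly as described above.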
If you want a better feel for the uncertainty principle, screw elementary texts and ad hoc hand-waving explanations like the one you mentioned and study Fourier transforms and some wave theory. I always found that elementary texts just made physics more confusing by replacing valid mathematics with the suspicious invocation of intuition and ad hoc explanations anyway.
EDIT: Oh! It looks like someone got to the Fourier transform point before I did. I would still just like to emphasize that in that argument, which you already get the gist of, there was NO mention of quantum mechanical phenomena aside from the wave function associated with the "particle." I think this raises the question: what other supposedly quantum mechanical phenomena can be obtained from classical arguments? You'd be surprised how much. Think about it some. Whenever you encounter something funny in quantum mechanics, ask "can this be obtained with equivalently classical methods?" There's been a lot of work in stochastic mechanics by people like Nelson and Boyer showing that classical particles subject to stochastically varying potentials or fields appear to obey quantum mechanical laws. Thought-provoking, no?
Sorry to get off topic.
25 Responses to “Calling all Quantum Theorists and Cosmologists who can be patient with innumerate humanists and theists…”
1. Janet Says:
Whoops! Gavin has already commented. You’ve got to see this over on the “conversation continues” comment thread! It’s hi-lar-i-ous….
2. Gavin Says:
I can’t play ask-a-physicist without setting some limits, or I won’t have time for anything else in my life. Here are some rules:
1. I don't discuss theories outside the mainstream of science. Science is a huge search. Vast regions of possible truth have been searched and have produced nothing. Small regions are proving fertile, and we are concentrating our attention there. I cannot go over every barren region again with newcomers. Faith healing, the realm of Platonic forms, a spiritual plane, Penrose's link between quantum gravity and wave function collapse, and Bohm's theory are all in the vast barren region. Sorry, we've moved on.
I do make some exception for widely held or forcefully promoted ideas: creationism and intelligent design, and quantum woo. I will not, however, be polite. There’s nothing useful to be said about these concepts while remaining polite.
2. First priority always goes to the issues that are directly related to the faith issues that we eventually want to reach. The quantum nature of the universe is at the wrong energy scale for addressing questions of God or a soul. The issues surrounding John McCain are far more relevant.
Now for Janet’s remaining questions.
1) No, the ball will not go through the wall. I know what they are trying to say, but this is the wrong way to say it.
2) This question contained seven questions, so I’m going to pick one: “Now why is it that the collapse of the wave function is so worrisome to theorists[?]” The problem is that quantum mechanics obeys one rule between measurements, and another when a measurement occurs. This would be fine if somebody could tell me what a measurement is. So, there are two different rules and no sure way to know which one should be used. That is the problem.
You seem to think the problem is that we lost determinism. It is not. We don’t like randomness, and we don’t like many-worlds either, but we understand that nature doesn’t care what we like. We can cope with randomness, if we have clear rules for when to use it. We know how to use it for water splashing and rifle shots.
3. Janet Says:
Uh huh. Okay!
Yeah, I get your rules. I'm very sympathetic to the time constraint, which I feel when trying to explain semiotic theory. So maybe others will help out. Even very short answers help me a lot with the issues that come up in my own field that I can't explain to the physicists exactly.
After all, aren't we learning that we can't become experts in one another's fields, so we have to be willing to talk at each other in this half-frustrating and half-very-valuable manner? Those who simply say "study the books" are giving up on the conversation. Because I am studying the textbooks, and I still have questions that only those much more informed than I am can clarify. And those clarifications are more to help me think from my own vantage point and training than to think as a physicist... (and vice-versa).
But I must say that I’m surprised Roger Penrose is in “the vast barren region” with all those others Gavin names. There’s no indication he is either thinking about God (a hidden agenda, so to speak) or that he’s in the “quantum woo” camp that uses Bohm a lot, is there? So Gavin just reports that quantum gravity is unfruitful? Or is it the brain-and-QM connection in general that seems unfruitful? (Gavin, isn’t that question inevitable given the centrality of defining measurement?)
Moving on. Gavin says: “We can cope with randomness, if we have clear rules for when to use it. We know how to use it for water splashing and rifle shots.” And he says: “The problem is that quantum mechanics obeys one rule between measurements, and another when a measurement occurs.”
Okay, that helps me much.
This helps me too. But, then, why don’t you expect an underlying mechanism to be found that will explain both behaviors in one theory? Or is it just that every avenue for finding that has proven fruitless, so you are sticking with the current enigma as being more true to the data?
Gavin, are there measurements made in nature without any intentional observer or measurer or a measuring machine made by such, or is that precisely what we cannot know because we’d have to measure to find out? Like a photon hitting an eyeball. Is that a measurement? (The electromagnetic wave acts like a particle?)
Finally, this is really fascinating! Gavin says: “The quantum nature of the universe is at the wrong energy scale for addressing questions of God or a soul. The issues surrounding John McCain are far more relevant.”
What??? This is so surprising to me. In the West, starting with the Greeks, the divine has always been sought at the most fundamental level of the material universe. We look to the most underlying element(s) to find the “causes” of the abundant orders and the coherent emergences we see all around us. But that is “causes” in the explanatory sense, not necessarily the mechanical sense that science has focused on from the beginning.
Can you say why the meaning structures surrounding “John McCain” seem more fruitful for you? Is it simply that you think that for God to exist, God must have a physical kernel of reference, like the physical body of John McCain? But that is absolutely ruled out by the very definition of “God” in the West.
(This btw is why the notion of the Incarnation is so entirely scandalous. To localize the non-local in a physical body? To assume finiteness and vulnerability (and even death) by what is infinite and omnipotent? These are supposed to be utterly mind-blowing and very offensive contradictions. Something that is “a foolishness to the Greeks, and to the Jews, a stumbling block.”)
The Schrödinger equation gives you probabilities. Why is it worrisome to physicists that each wave can only collapse in ONE of those probable locations? Isn't that what happens in every statistically probable future? What ACTUALLY happens is only one of the several likely results that WOULD or COULD happen?
I know a lot of you are following this conversation (thank heavens for blog stats!), so someone else should try to relieve Gavin once in a while. On the other hand, sometimes it takes a lot of prior conversation to get to the point where you have covered enough common ground to communicate across these barriers of disciplinary background....
4. Janet Says:
Or is Roger Penrose “the mathematician” that Maria mentions? (In comments on “The conversation continues”)
5. HI Says:
Poor Gavin, indeed. I will try to comment on Janet's questions 3) and 4), even though I don't have the credentials of a working physicist.
3) Regarding Penrose:
It was a long time ago when I read “The Emperor’s New Mind” by Roger Penrose. It didn’t make much sense to me even when I tried to read it carefully then and it is even more difficult to follow his argument as I try to skim through it now. So, I cannot give too detailed comments but will give you my impressions from reading it a long time ago.
As I understand it, Penrose is trying to somehow connect three poorly understood subjects: quantum gravity, quantum measurement, and consciousness. Since we don't understand much about any of these, we cannot say outright that Penrose is wrong. When we are ignorant, there is more room for speculation, as I think Gavin wrote before. But that doesn't mean that there is a high probability that any such speculation is right. And Penrose didn't make a very convincing case that his particular speculation is right. In fact, I think many people have reasons to think that Penrose is likely to be wrong on this.
Do we need to take gravity into account in order to understand quantum measurement? We can ignore gravity to understand most quantum phenomena, because the effect of gravity is negligibly weak. So it doesn't make much sense that, when we think about measurement, quantum gravity suddenly becomes fundamental. Even if he is right, how helpful is his speculation when there is no successful theory of quantum gravity yet? Penrose may be thought-provoking, but he is not providing anything very substantial, unlike the EPR paradox and Bell's theorem, which led to better understanding of quantum measurement. I think that the right way to progress is to try for better understanding of each subject. If there indeed is a fundamental connection between these subjects of the kind Penrose proposes, it is bound to be found. But there is no convincing reason to believe in such a connection now.
Are quantum effects important for consciousness? Again, it seems to me that quantum effects don't play a significant role in most basic neurobiological processes, such as the firing of neurons and synaptic transmission. And while neuroscientists may have yet to explain consciousness, they have learned a great deal about how neurons and our brains work. Penrose, brilliant man as he is, is not an expert in neuroscience. Don't you think it is a bit arrogant of him to claim that he knows better than the neuroscientists, especially when he is not making a good argument about the connection between quantum effects and neurobiological phenomena? You have to realize that the revolutions of quantum mechanics and relativity are very exceptional events in the history of science. The problem is rarely that we don't have adequate fundamental laws. Often the difficult part is to understand the more complex higher-order phenomena from the simple laws and the basic processes. Classical physics is sufficient to predict the weather, but it remains difficult to forecast the weather. Taking quantum mechanics and relativity into account won't help. Suppose that the string theorists are successful in coming up with the so-called "theory of everything" that unites quantum mechanics and general relativity. It is not going to affect biologists, chemists, or even condensed-matter physicists. You don't even need quarks to explain the structure of DNA, or chemical bonds, or superconductivity.
I might add that I don’t find it very fruitful to connect quantum measurement to consciousness (like Wigner did, for example). (And it is interesting that Penrose doesn’t like Wigner’s interpretation, either.) It is difficult enough to quantum mechanically describe a measuring apparatus. Consciousness is even more poorly defined. To think that consciousness somehow does something magical seems like baseless speculation and wishful thinking to me.
4) Regarding Bohm’s theory:
I have never studied Bohm’s theory, so all I know is based on what other people wrote about it.
My understanding is that Bohm's theory is a variation on hidden-variables theories. These are deterministic theories of quantum mechanics in which the probabilistic nature is explained as a consequence of our ignorance of hidden variables. The most straightforward hidden-variables theories are ruled out. Apparently Bohm's theory, unlike the others, is not ruled out, but this comes at a great cost: it includes non-local interactions that are weird and complicated. It may be deterministic all right, but it looks too contrived and is not appealing to many physicists. They'd rather take probabilistic quantum mechanics, which may not be entirely satisfactory, but is simpler.
6. Gavin Says:
I said there are two rules in quantum mechanics, one for use between measurements and one for measurements. The one to use between measurements is the Schrödinger equation, which is deterministic. The Schrödinger equation does not give probabilities. You tell it what wave function or density operator you have at the start and it tells you exactly what wave function or density operator you will have at the end. There’s nothing random about it; it’s totally deterministic.
The rule for measurements is wave function collapse. You tell it what wave function or density operator you have and it gives you the probabilities for a bunch of different wave functions or density operators that you might get out, with their associated measurement results. This is a random, non-deterministic process, but the probabilities are predictable.
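In symbols (a compact summary of the two rules, added for concreteness; the notation is standard, not Gavin's): between measurements the state evolves deterministically,

iħ ∂ψ/∂t = Hψ,   so   ψ(t) = e^{−iHt/ħ} ψ(0)

while a measurement of an observable with eigenstates ψ_a yields outcome a with probability

P(a) = |⟨ψ_a, ψ⟩|²

after which the state is ψ_a. The first rule is deterministic; the second is random but with predictable probabilities.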
Janet asks “But, then, why don’t you expect an underlying mechanism to be found that will explain both behaviors in one theory?” In fact, the underlying mechanism has already been found. There is one theory that explains both the Schrödinger equation and wave function collapse. That theory is the Schrödinger equation. If you just use the Schrödinger equation all the time, even when you do a measurement, then you can predict all of the features of wave function collapse. What should we call this new theory? We call it quantum mechanics, which is exactly what we called the old theory, so I can understand why people get confused. The process of wave function collapse is called “decoherence.”
Note that it doesn't work the other way: you can't use wave function collapse to explain the Schrödinger equation. It just doesn't work. So the reason we got rid of the non-deterministic random aspect of quantum mechanics isn't that we don't like randomness (although we don't like randomness, so we're pretty happy about this); it is that wave function collapse was a useless and ill-defined part of the theory.
All of this is well understood and accepted by the experts in the field. We can use the Schrödinger equation to understand measurement in great detail. In particular, we can answer with confidence all of the questions you ask about measurement:
1) “Gavin, are there measurements made in nature without any intentional observer or measurer or a measuring machine made by such…?” Yes, all the time. In fact, natural measurements happen at an absolutely staggering rate. This is the main reason that the macroscopic world does not look quantum mechanical (and why the ball won’t go through the wall).
2) “Like a photon hitting an eyeball. Is that a measurement?” Yes. The photon hitting a tree or a rock is a measurement too. A photon hitting a mirror is not.
3) “The electromagnetic wave acts like a particle?” Yes, mostly. It retains enough of its wave characteristic for us to see color (approximately).
Not only can we understand what is and isn't a measurement, we can perform partial measurements. This isn't just a crazy theory; partial measurements are used in modern atomic clocks. The state of excited atoms is partially measured in one stage, and the measurement is finished in another, much later stage. The long duration of the measurement allows a much more accurate measurement of the frequency of the atoms' oscillations, in accordance with the Heisenberg uncertainty principle. Also, the field of quantum computing is based on understanding the details of measurements, including partial measurements.
Now, I said that all of this is well understood and accepted by the experts. That is not a terribly large population, and it certainly doesn't include all physicists. I can back up everything I've said, but the math is difficult, even by physicists' standards. Furthermore, some of the implications are startling and not fully understood, causing some concern. Nonetheless, the people who actually have to earn a living doing quantum measurements are on board, because it is the only approach that makes sense and works.
Roger Penrose does not have to earn a living doing quantum measurements. Quantum gravity is a worthy pursuit (it is what I do), Prof. Penrose is a good physicist, and wave function collapse is an interesting (solved) problem. Prof. Penrose’s link between quantum gravity and wave function collapse didn’t pan out. The quantum woo community’s link between consciousness and wave function collapse also failed. (Penrose promoted this connection in The Emperor’s New Mind, so his record on wave function collapse is not good. He’s very good at relativity.) Decoherence is the winner.
Perhaps I jumped the gun with my comment about God and souls. If you could tell me one thing that God or a soul does, that would be very helpful.
Roger Penrose is not "the mathematician" that Maria mentions. I think she is talking about compact extra dimensions, which are common in string theory and are explained in several popular books about string theory, including Lisa Randall's Warped Passages.
7. Maria Kirby Says:
God gives life. God creates. God predicts the future. Souls live eternally. Souls might be considered the life force of bodies. -Not particularly testable actions.
I was speaking of Lisa Randall and Raman Sundrum, whose theories, elaborated in their papers RS1 and RS2, will hopefully be tested next year at CERN.
I do think it's very interesting that we create a reality or form like Hamlet or Saint Paul's Cathedral (before it was constructed) and then proceed to create an instance of the form as an actual building. (Some persons take on the role or character of a literary form, thus giving it an instance, maybe for the duration of the play, maybe for a lifetime. An actor who does this brings the character to 'life' for the audience.)
It seems like mathematics/physics does a similar process, albeit sometimes in reverse. We observe certain phenomena or instances. We then try to create a form (mathematical equations) which we can use to reproduce an instance equivalent to the one observed. The form is validated when it can not only reproduce an equivalent instance but can be used to predict other observable instances. The laws of motion work for dropping balls from the tower of Pisa as well as for the movement of the planets.
It seems to me that when it comes to spiritual things we're kind of like blind people trying to understand color. I think it's very interesting that churches report more miracles in locales where persons are more prone to believe in demons and magic. While I think psychology is an important factor, I don't think it explains everything. It seems that there is some connection between what we believe/think (its form) and what is observed to occur in the physical realm (its instance). And I don't necessarily think that quantum mechanics can explain the phenomenon.
(I personally think that a number of people have hijacked quantum theories to make them support certain philosophical ideas, instead of letting quantum theories speak for themselves. But there do seem to be implied corollaries; I've heard the laws of motion used in connection with human interaction.)
If mathematicians/physicists can prove that there are more than three dimensions, then it seems that the obvious next question is: what is in those dimensions? Does what is observed in three dimensions project at all into any of those other dimensions? If something like gravity can project from a fourth dimension into our three, then can the strong, weak, or electromagnetic force project into the fourth? And what would that look like?
I may be naive, but I still believe in angels. It seems to me that angels are spiritual beings, who at times, have a physical presence. (Unlike ourselves who are physical beings with a spiritual presence.) Why would it be so unreasonable to say that what we attribute to spirit is not a force of another dimension?
8. Janet Says:
I’ll respond to Hi, then Gavin, and then Maria.
Thanks so much! I'm really interested to hear that Wigner tried to make a QM-consciousness connection, too, and your judicious comments about Penrose and the different areas of his work were very helpful and clarifying. (I want to look at Wigner - do you happen to know where he published this?) Eugene Wigner, remember folks, wrote the elegant essay "The Unreasonable Effectiveness of Mathematics in the Natural Sciences" that we discussed earlier under Part 4 of my lit theory lectures....
On Bohm and non-locality, I've been wanting to prod Hi and Gavin to say something about EPR and Bell's theorem.... Gavin, would you rule out from a physics standpoint any connection between non-locality in QM and the idea that deep reality might be non-local? (At one point I read Bell's entire book, Speakable and Unspeakable in QM, so I have a right to ask this, I think! But it's not right for H & G to have to answer it. But how can you science-guys blame us innumerate humanists for getting stirred up by this stuff? It is downright IN-EV-IT-A-BLE.)
As for Gavin: Gavin, I don't think you realize how new-style you are (as opposed to old-style scientific thought). You grew up with paradigm shift and an indefinite future for physics to evolve into, and so you are always surprised that I am concerned with determinism or old-style locality in space and time. But speaking HISTORICALLY, it was precisely that emphasis of Galileo, Descartes, and Newton on "reality" being a natural world of empirical time and space (and Newton did try for an absolute time and space against which to measure relative space and motion) that led to the general incapacity of modern Westerners to imagine as "real" anything that is not something you could rest your coffee cup on (or kick).
Yet, the thoughtful or philosophical among Christians have always been uncomfortable with the modern notion that God or the soul are supposed to be “immaterial” entities that are divorced from the natural world. Yes, modern theists often do think of them this way, but this is because Descartes cut the mind or soul out of the material world and made it into a non-material thing, and created the “ghost” in the machine.
Now Maria, you are brave and straight-forward and obviously doing some reading and thinking here, and I’m glad you joined us. Knowing Gavin and Hi, though, as I do, do you mind if I echo your remarks in a somewhat different manner?
Also, may I say in passing — and this may be only my view and not Maria’s — that I think we theists would get further if we made it clear that in saying that “God creates” or “the soul is immortal” are not meant as naturalistic knowledge-claims. Theists bear witness to what they have come to believe on the basis of other disciplines and practices. These are things we say we “know” in the sense of having intimately experienced and/or of being committed to as a grounding hope. (The soul’s immortality, for example, is to me one of the most speculative of religious beliefs. The Jews in OT times generally didn’t have an afterlife in view. They worshipped Yahweh but in THIS life. It didn’t make them any less theistic. But the VALUE and PRECIOUSNESS of the soul is not speculative, because it is not about the unknowable future. It is experienced as a present reality, and as an ethical and esthetic commitment that is imposed upon anyone who desires to “imitate” God.)
Okay, so Gavin asks, “name something God or a soul DOES.” But you mean, “Name something God or the soul does that I cannot account for in other ways, scientifically.”
Remember, Gavin, that a scientific account is just that: a naturalistic account of something physically detectable in the world, according to the standards and methods of science. You want physical causes or physical effects. But there are other causes and other accounts, depending on the discipline or the way of knowing you are working in.
I want to say that God "does" everything that happens in nature and physics and chemistry and biology, but not as a physical cause-and-effect. That would make God either a mere part of the physical world, or else an absolute determiner of the physical world, so that it would have no independent life or being. Instead, God "does" it all in the sense of giving to the world the capacities and potencies to unfold as it therefore can, and to do all the kinds of things that it therefore can do.
And as it says in Genesis, or as Plato and Aristotle realized, the higher living creatures and especially this strange “speaking” and “thinking” creature that we are is “most like” that underlying immanence called potentiality or capacity, because we are capable of recognizing and thinking about and naming and re-enacting those unfolding laws and principles and kinds of things.
In other words, I am thinking of the world as something that can move into the future on its own, based on inherent potentialities that shape what is possible but do not absolutely determine it. God is the name we use to refer to the nature of those potentialities as potentialities, and therefore the name of their source and their direction, even if those are internal to universe itself.
Perhaps I should simply say that theists and pre-scientific thinkers all tend to see the universe as dynamic, not inert. And whatever it is in the cosmos and in living things and in history that keeps the patterned changes going and the developments developing and the processes processing and the “inert” elements being formed and the exploitations of every possibility for higher-order complexities exploited and evolution evolving, that is the indwelling divinity of the universe. And yet this divinity comes to us somehow as itself and not merely as the sum of all the separate processes. This is the fundamental human reaction to nature, and even the non-theistic scientists share it, as long as it isn’t called “God.”
By the way, no matter how “personal” a God one worships, I think in our day something is missing if one does not have the god of the philosophers included in the notion of God. (Perhaps this is why the early and medieval Christian church was so much more profound in its thought than the modern churches tend to be.)
So when Maria answers, “God creates.” And “a soul is the life-force of the body.” Then I want to say….
… that Maria doesn’t mean that God creates in the sense of a physical cause or a physical law or mechanism “creating” a certain physical state. It is not a push-pull cause and effect like in Newtonian mechanics. It is something far more philosophical, and yet felt by people on a daily level. Sometimes it is said that God is the “condition of possibility” for these natural mechanisms to exist and operate, and that God is the “reason” that there is something rather than nothing. Science is getting waaay beyond its sphere of expertise when anyone in science claims that science speaks to these questions or is able to speak to them.
Suppose that a mechanism for the Big Bang is found and we come to know that it occurred because of a whole series of other and preceding factors or suppose we come to find that it makes no sense to talk of a “before” the Big Bang (which was Augustine’s position) or some other scientific insight is arrived at 10 years or 1000 years from now. This will not do away with the basic philosophical questions and their cogency for human beings. It is wishful thinking for scientists to try to say that science will or science does do away with these questions. They are rational and inevitable. And I’m not going to be here in 1000 years and maybe not in 10. So I have to go with looking at all of the arts and sciences and all of my life journey and doing the best I honestly can to arrive at a worldview that I can keep faith with and that accords with my deepest knowing.
Now part of my deepest knowing is the model of the scientific, and science is so beautiful to me that I don’t want any of those laws to be abridged or changed by any interventions. (I hate the idea of miracles, if you want to know the truth!!) But what I cannot get away from is the way those beautiful coherencies and those intricate emergences of higher-order complexities depend upon potentialities that lie within the natural world itself. What is im-manent — what “abides in” those empirical things — and at the same time “lies beyond” those things, is the most fundamental meaning of the term “God” for me, and in our Western tradition of thought.
In other words, our cosmos has had the potentiality within itself from the beginning to bring forth all that it has brought forth and will bring forth, and that potentiality ITSELF is exactly what Aristotle meant by the word “form.” The potentiality is something separate from every single instance of it. There is something in this universe of which the universe is an instance and yet that is not the same as this universe itself. There is something unfolded in the history of this universe of which this universe’s history is only an instance, and that is not the same as the history of this universe.
Folks, this is highly philosophical, what I just said. Only human beings (as far as we know) among living creatures in our universe can observe the "existence" of what I was just speaking about. I think this is why Heidegger said that we are the kind of being that "raises the question of the being of beings," and by so doing, identify ourselves as the kind of being that we uniquely are.
So I love Maria’s the soul is the life-force of the body. And the life-force of the body is clearly something different from the “stuff” that is left as the “body” without its life, because we have all seen the life leave the body, and the body is no longer a real and living body without it. (That doesn’t mean that the life-force exists forever, by the way, or even that the life can be life without its body. It is interesting that before Descartes, Westerners believed that angels, like all created beings, HAD to have bodies, even if the bodies were made of “ether” or some other more perfect composition. And all self-directing “bodies” had to have an indwelling formal element that held them together.)
Now I am one of those who thinks that every single thing that happens in our consciousness has to be related to brain chemistry. The soul or mind or personality for me is not something detached from the brain. It is instead an emergent phenomenon that is entirely based on chemistry and physics but that has a complex “being” and an organization on its own level of being.
One hundred years ago (Gavin), to say that everything in our minds was based on brain chemistry would have been to say that our minds are strictly determined by rigid laws of cause-and-effect. Scientific determinism raised those questions of free will that so occupied people in the Newtonian period. But now, as Dennett says, we see that, scientifically and naturalistically speaking, "freedom evolves." The more highly developed the consciousness of a kind of creature turns out to be, the more room for freedom has evolved in its mental determinations, beginning with moving away from danger and toward prey and so forth.
Maria, I think the notion of “dimensions” is quite different for a mathematician or a physicist than for most of the rest of us. Extra dimensions beyond the ones we normally perceive do not necessarily imply a mysteriously “other” world of being in nature. Perhaps I should apply this advice to myself too, as regards EPR non-locality. But I don’t want to find a mysterious other realm of being. I just want to be able to say that much of our cosmos is alive, and that what is not alive is nonetheless potently capable of exploiting the conditions for life. It may take 4 billion years, but the amino acids will get it together! And the hydrogen molecules had to have already condensed.
I don’t see how science threatens the overwhelming reality of a universe that contains within itself potentialities such as we have seen and such as we have resulted from. To reduce this universe to a purely mechanistic model will not work any more, even if it seemed to (for some) in the 18th and 19th centuries. This universe has had direction from the beginning in the “form” of certain inherent potentialities, and it has evolved not only life but freedom and conscience. The anthropic principle cannot prove or disprove a creator God, or that our universe is a purposeful universe in a strictly religious sense. But it illustrates that we can no longer view this universe as an inert machine (as Dawkins knows).
So like good little liberal arts students we move back-and-forth between these new-style physical sciences to the other disciplines for a renewed conversation between all of them about the most basic metaphysical issues. I think we are verging toward naming an indwelling determinacy that is neither a law of strict necessity nor a chaos of pure chance, but that leaps into a future from a presence that came out of the past. Aristotle called it a coherent wholeness, one that is based on “that which is possible, according either to probability or necessity.”
9. Janet Says:
Yes. In my own work (off-line) I am trying to formalize what Maria is focusing on here in a way that works for all the ways of knowing. We often don’t see the importance of this form-al mediation because we tend to reduce it to our modern notion of “abstraction.” (Very 18th century!) In an earlier post, I tried talking about this instead under the name of “rehearsal.” (As soon as I get the software working, I’m going back to explaining the semiotic codes or normative principles we encounter with language and structures of language, by the way.)
Hume changed everything in the West when he questioned “induction.” He pointed out that no matter how many instances we encounter, we cannot be SURE and CERTAIN that the next instance won’t be a counter-instance and destroy the general principle. So Hume realized that when we go from empirical instances to form-ality we are moving away from Descartes’ ABSOLUTE certainty. This required Kant to go to work to save induction and cause-and-effect, the other big formalization that Hume demolished.
But look at how this Humean thinking is based on the requirement of absolute certainty. The very definition of "Knowledge" became, after Hume, "what we can know with absolute certainty." None of our science today would survive this requirement, because we realize today that our knowing is always "open to the future." In the future, we may revise or re-understand what we know today. (Notice that the empirical is always something that is slipping away into the past. We are left thinking about its significance for the future!)
But Plato & Aristotle rightly thought that induction was a dynamic part of everyday life and every human learning, beginning with language. We wouldn’t know what a word meant if we had to be absolutely certain or if the word was tied down to a static one-to-one relationship to a “closed” meaning. Instead we develop a theoretical construct open to the future, for every word we learn. This is why poststructuralists are always talking about how problematic imposed closures (associated with the absolutism of the 18th century) are for the well-being of our knowing and being.
Aristotle thought, contra Hume, that on occasion even ONE instance could be enough for a form-al interpretation (like the way we judge the other person on a date?). And most of the time, and in relation to most of the things we really, really need to know, we have to go with likelihood and probability and the hope of achieving a high degree of confidence, but not "absolute knowledge." So why not just go back to the notion of ike (techne or episteme) of the Greeks, where an ike is an attempt to come to know better the formal characteristics of a kind of thing (or kind of process)? It will have exactly as much likelihood as the kind of thing itself allows, but it will still be a valid discipline of that kind of thing. (MAJOR assumption of Western education before the laws of motion installed absolute certainty as the norm for a real "science.")
And let’s particularly notice the role of time — and “the future” — here. When the Newtonian laws of motion became the BIG cultural paradigm for knowledge, it appeared that the future could be predicted absolutely by laws, and hence was determined. This didn’t hold up, even for the physical sciences, and Hi talks about this above. Atmospheric science uses deterministic laws and you cannot predict the weather with more than certain varying degrees of probability.
For the Newtonian worldview (which of course is not the same as Newton), it looked like the future was only the current “actuality” all over again, repeating itself. But Aristotle — esp. in his literary theory but everywhere else as well — looked at it in this way. First, the past is no longer open. Once it has happened, it’s determined in the sense that it can’t be changed.
But only a part of what happened (actuality) was because of ordered principles of causation. A lot of it was accidental or contingent “stuff” because various causations happened to intersect in a random manner. So Aristotle thought of predicting the future as taking what was coherently causal in the past and formalizing it and then projecting that formality into the possible future, knowing that we can’t be certain because of all the different causal processes and their random interferences. And because some causal processes are simply less deterministic than others by nature.
THIS IS WHY EPISTEME IS FUNDAMENTAL TO KNOWING. Each kind of causation needs its own disciplinary community. We cannot know the whole world directly. We need to find coherent parts of the world (kinds of things) to formalize and we need to learn about formalization itself, first.
But where do we step back and put all the ikes together and think about the whole of life and the whole of where everything is going? (It’s called “First Philosophy” or metaphysics and there is a discipline for it. Theologies are also inherently a kind of first philosophy.)
There’s an existential core to each of our lives and we have a drive to achieve an integrated worldview and make some sense of things and also determine what kind of person we should be and how we should act (and how we get so we can act like that when we know we want to — the biggest problem of all).
There’s no absolute certainty in THESE areas, the ones that finally matter the most. And you don’t make any progress by simply accepting a ready-made worldview or ideology or religion either, especially if it’s Christianity, because this faith is so counter-intuitive and demands so much thought work and so much willingness to ALWAYS scrutinize and overturn one’s assumptions. (You can try to resist this, but it always gets done for you, anyway.)
Being a Christian is being called to a continuous inward revolution and requires the activity of the full mind and the whole person. Christianity is based upon the paradox that there is a fullness of truth toward which we aim our passion and we do experience it from time to time and try to chart our course by it, but we can never have in our finite and limited selves an adequate conception of the truth or what it means. The more we try to cling to and insist upon certainty, the more likely that the shells of our certainties will be overthrown to get us to a deeper truth.
10. Janet Says:
Let’s get back to the questions of the “existence” of:
Hamlet (the character — but consider the play)
John McCain
the electron
Note, with regard to “electron,” the difference between “an electron” and “the electron.” Here’s that Form-al awareness Maria was talking about, entering into the picture. (Bertrand Russell had to spend 100+ pages on the meaning of “the” in his Principia Mathematica!)
“An electron” usually refers to a particular instance of the electron, whereas “the electron” refers to the formal mode of being of the electron, as a theoretical construct (what Plato & Aristotle called a “logos,” a formal definition or account), the electron as a topic or a subject matter for formal inquiry. (The “eidetic,” as I am calling it in my off-line work, after Plato’s eidos or “Form” or Idea.)
Sooo…. Let’s not dismiss Plato’s Forms too quickly to the barren wilderness or the realm of quantum woo…. Too often, they have been interpreted to suggest an otherworldly realm of pure Ideality, but in practice, in the dialogues Plato wrote 2400 years ago, they emerge as tentative or provisional idea(l)s of the topic, and then the Form is used, paradoxically (or dialectically), to critique or to call into question all of the current (received) ideas about the topic. (Experimental and reflective testing is built in to the notion of the eidetic or the Form-al. The naming of the kind of thing within a philosophical inquiry opens the space of inquiry by opposing the Form as the ideal reality to the theory so far, or to whatever we unreflectively may have supposed.)
So, I’d like to say that the Form or the eidos is “The Putative Reality, As It Might Get To Be Known in the Future”! It is the practical and serviceable goal of our quest, though we never reach it. It is the Ideal Answer that we strive toward but do not yet have in its entirety. And there’s no sense, with Plato or Aristotle, that our disciplinary knowing is useless unless or until we do arrive at ultimate knowledge. The search is substantial and makes progress, and that gives us the experiential contact with reality that we need as human beings. It seasons us and makes us committed to the search for truth.
By the way, guess where the word “future” comes from? It shares its root with the Greek word physis, from which we get “physics.” It’s that active ending -sis added to the verb phuo that means to grow, bring forth, or give rise to. So again we see that what any discipline does is attend to what can be observed to have already happened and to be happening, in regard to a certain formal kind of thing and its process of coming-to-be, and then weed out the irrelevant noise and accidents and incidentals, and then formalize the potentialities that might have been in action there. Then, we will have the kind of episteme that enables us to make predictions about the future that are better than those of persons who do not have the episteme.
It’s not the predicting itself that matters here, though it is fundamental to scientific method. Episteme is not so that we gain “control” of the future per se. Knowing, instead, is about assimilating the know-how or expertise or deeper understanding that gives the member of the ike the “power to know” — the power to know “how to do” certain things, and that involves being able to gauge what most likely might or will or would happen. The important thing here is that the knower is trying to follow something that has produced a pattern in the past — and follow it into the future.
Remember Paul on how hard it is to dream up experiments to test new ideas? Harder than coming up with the ideas themselves? Science is inventive and creative, working along lines already laid down, and projecting them into a “future” that we MIGHT get to FROM HERE. This means that the Present must be viewed as being structured by formal organizations that can be hypothetically discerned from the past and projected into the future along the same principles. We are trying to move from the past into the future by assuming something that operated in the past (we think) and in the present (we think) will continue to operate (more or less, apart from accidental interferences and incidental complexities) in the future. We project our knowing as an expectation about the future that comes FROM the formal principles upon which we’ve come to think some of the stuff in the present and the past were based.
All of this requires the use of what we moderns tend to call “imagination,” but Aristotle called “poietike,” a kind of “making” of a “fictive future” of what “would” happen, in the sense of what “might” happen IF, as we suppose, our analysis in (of) the past has indeed been moving in a fruitful direction. The Possible, or The Possible-Probable, of Aristotle is not confined to the worlds of art. (P.S. The imagination is a Romantic concept only 200 years old and a bit too free-wheeling to pull together science and the arts as ways of knowing, in my judgment.)
Now, I want to remind us all that I insisted on adding to Gavin’s list of “things that exist” a couple more items (with a view to eventually discussing God and faith issues, as well as the liberal arts).
So let’s add:
John McCain
the market (as in “the market sets the prices”)
summer
“Summer” differs from the first three “things” because, unlike Hamlet, it is based more im-mediately upon empirical or physical observations and sensations and measurements of “it,” like “John McCain,” but we can’t just point to a “summer” sitting there as an entity in the empirical world. So it is like an “electron,” in that we have a theoretical construct to define something we have detected empirically, but it is unlike an electron in that it doesn’t have the same coherent wholeness or entity-ship.
“Summer” has edges that are blurry, and different cultures may divide up the seasons somewhat differently, so it may be a local construct. On the other hand, there is certainly in nature a fairly regular recurrence and patterning in the swinging around and repetition of the seasons. Yet you cannot simply identify “summer” with its empirical measurements, as I’ll show, in part because what constitutes a summer (actual temperatures, weather) may be different in Alaska than in Malaysia, and yet we still speak of a summer in both cases. (This is exactly like the identity of phonemes or morphemes in language.)
We CAN identify summer MOST coherently IN DIFFERENTIAL RELATIONSHIP TO THE OTHER SEASONS. “Seasons” is the fruitful category here, like a genus, and then we need the differentia that make the seasons differ from one another in each case….
So finally “summer” as a “thing” is a theoretical construct that “exists” for us because we have defined it in relationship to other closely related things within a certain coherent context (the cycle of seasons). But is the existence of this “summer” out there in the actual empirical world? If there is no one to observe the patterns in the weather and compare and contrast them from year to year and name them in the common language so little toddlers begin to learn about “summer” and “winter” as theoretical constructs, then does “summer” exist empirically in the natural world? This is NOT a yes/no question!
We can even say things like: “This was a very cool summer, hardly like a summer at all. More like late winter.” We are talking about and interacting with the natural world in these sentences, and we are also using the culturally prevalent constructions of all of that empirical data into the particular units or wholes that in our language and culture enable us to talk about the data on this more powerful formal level in meaningful terms.
But when we say that the cool summer was not really a “summer” at all, what exactly do we mean by summer? The cool summer is an actual instance. The summer that it is not, is our idealized or typical summer in our minds (Plato’s Form), against which we measure each actual occurrence.
So why don’t we just call this summer a winter if it is “more like” a winter than a summer? You know why. We have a whole theoretically precise set of constructs in place, and as a result, even though the specific manifestations of this particular summer don’t resemble the formal identity of summer, it is still a summer. For us, in terms of interpreting the data…
The identity of many “things” does not depend on their physical make-up so much as on the normative structures (based on physical instances) that we bring to evaluating them. (John McCain is a man, a senator, a POW….) With regard to linguistic units, this is so much the case that Saussure compares it to a game of chess, in which the formal rules remain and make the various “pieces” what they are. So you can replace a pawn or a rook with anything you want, a coin, say, and it is still a pawn or a rook so long as it differs from the other pieces enough for us to keep its identity straight.
The “being a Pawn” — or the mode of being called a pawn — does not depend on any physical substance the pawn is made out of. But this is NOT saying that the identities of pawns or of summers are merely socially constructed. It is simply the case that we aren’t done defining them if we designate a piece of polished wood of a certain shape or a set of temperature ranges and weather patterns. Physical structures are involved at every level, but the identities are formal and relational (differential) identities. As every structuralist knows, a relationship is always also a contrast and an identity is also always a difference, because identities as recognized by human knowers are always defined within a coherence context and with reference to one another.
Then, of course, Shakespeare’s Richard III says, “Now is the winter of our discontent, made glorious summer by this sun of York….” These are metaphors, not references to a “real” summer or “winter” at all, it would seem, and yet of course they are references to real summers! We wouldn’t even understand the metaphors if we didn’t have a form-al notion of “summer” based on many actual summers, in contrast to many actual winters, experienced and named by our speech community.
For the Greeks, that hypothetical or normative Idea is the “Real” and the actual summer is merely one actual instance of that real thing…. It’s a very, very helpful contrast for us, this contrast between the empirically actual, which is always gone (into the past), and the Real-ity of the formal theoretical constructs which we human knowers come up with, to use as we seek a deeper understanding.
But how in the world are we going to talk about the existence of “the market”? Where is it? (Like the Internet. It’s in our heads, and it’s Real, and it’s actual.) Here we have to start talking about invisible codes of “behavior” that connect all of the members of the economic community and “information” and “market forces” and these are not occult. They exist, if we can rely on observations, but the mode of their existence? I heard Alan Greenspan’s replacement say that if we could only figure out what causes “confidence” we could predict the market absolutely, but we can’t….
These “names” are technical vocabulary and refer to things going on in the world. Their existence is clearly in the mode of the “Real” or Form-al or ideal “things” we’ve talked about, like “being a pawn,” and not the merely actual or physical objects, only these constructs are removed from the first-hand data by more layers of theoretical construct. (We don’t even know what the data we want is until we have some kind of theory going.)
The big question in American academia the last 30 years or so has been, are the theoretical constructs in our heads also out there in the physical world? This is such a naive question. Only English speakers with our own tradition of reductive empiricism, from our “scientific” philosophy that valiantly struggled to model itself “logically” upon geometry, would think that if a thing is a construct that cannot be simply equated with a physical object, then “the construct” is “just socially constructed.”
All human knowing is “constructed” knowing, especially in the sciences, with those constructions always, always based upon constant interaction with the world. The reason we don’t see this as self-evident is that we have forgotten that there is a difference between an actual “thing” and a “kind of thing,” even though we never ever perceive and know any actual thing without the theory of the kind of thing and the theory of its difference from and relationship to other kinds of things mediating our knowing of it.
The very words in the lexicon of our language that we learn as we emerge as human persons in early childhood refer to the formal kinds of things, as we have learned to name them in the past (langue), and because they are formal constructs of that sort, therefore we can in the future use those form-al words to make specific references to instances of those kinds of things in the world, and in memory, in dream, in literature…. The formality of their identity enables us to transpose them into various realms that are realms of projected formal being….
11. Gavin Says:
I’m just going to pick one thing as an example. You say:
I am inclined to say, “Fine.” I personally don’t see any reason to personify those things, but if you want to, that’s great.
However, I run into trouble. My friend, Brent agrees with you but adds that God thinks homosexuality is an abomination. Then there’s Andrey who agrees with you and Brent, but also thinks that God has asked him to intimidate gays with physical violence, to the point of death.
How can I be respectful of non-empirical knowledge in some cases, and then oppose these other ways of knowing in other cases? As an elder in the Presbyterian church I spent considerable time watching men and women argue about what God wanted us to be doing in bed. It was typically a “He said, He said” debate with everyone quoting and interpreting passages from the Bible. It had no connection, that I could see, to the world, because it wasn’t based on anything empirical, and they got nowhere. If we had decided to work with empirical evidence, then the issue would have been rather easily resolved.
Asking everyone to stick to empirical evidence seems to be the best way to make progress in debates about practically anything, which makes me reluctant to say “fine” if you claim to have personal knowledge of some deity whose every action is undetectable.
12. HI Says:
Janet wrote:
But don’t you see exactly what non-theists have a problem with? Here you are talking as if God only means such “potentialities.” But of course that is not all that God is to you and most theists. Don’t Christians use words such as loving and caring to describe their god? And didn’t you confess how real that kind of God is to you? Why would you worship “potentialities” anyway? But the problem is that it is not self-evident that the God that is loving and caring and the God of “potentialities” or “the condition of possibility” are one and the same. (And I suspect that the God of “potentialities” is not the primary motivation for the faith of most theists.)
John McCain is a senator and a former POW at the same time. But a senator is not necessarily a former POW. We only know that John McCain is both a senator and a former POW, because we know that John McCain is a senator and we know that John McCain was a POW and we know that John McCain the senator and John McCain the former POW are the same person. Can you make a similar connection for God? It is more convenient to just talk about the more philosophical concept of God, but that is not going to be enough.
And even if we forget about your personal God of love and focus on the philosophical God, there still remains a question of how meaningful such a concept of God is. Read what Sean Carroll wrote.
(Also, in a different thread on Cosmicvariance, someone called Ali made the following comment. I’m not sure if this is the same Ali who also comments on the thread above.
“Speaking as a religious scholar, I think you’ll have to be careful about that first one, since in order to put forth any argument at all, you’ll have to very precisely define which conception of “God” you’ll be defending (there are so many, after all, not merely the American Protestant version). Some early Christian apologists, in an attempt to defend the existence of God according to the principles of the Greek philosophical tradition with which they were familiar, ended up identifying “God” with existence itself. It would be difficult to make a case against existence existing, after all. On the other hand, what you end up with is a tautology, albeit an interesting one.”
This sounds similar to what you are attempting. Do you care to comment on that?)
Regarding Wigner, I really don’t know much of anything beyond what was written in popular science books or what you can find on the internet. Among other things, Wigner proposed a thought experiment called “Wigner’s friend”, which is a variation of Schroedinger’s cat that essentially replaced the cat with a human (and the human doesn’t have to die, unlike Schroedinger’s cat). It was supposed to illustrate the importance of a conscious observer in the measurement, but it seems to illustrate the flaw in his thinking to me.
13. Maria Kirby Says:
Isn’t that exactly what Christians are claiming? The soul is immortal because we have empirical evidence that Jesus rose from the dead, in a new eternal body?
And we also know the Form of God because we know the Jesus who is the Word become flesh, the Form became an empirical experience?
14. Janet Says:
All of you are keeping ME on my toes!
Maria, you point out something I badly need to clarify, and it is connected with all of our impasses about empirical and semiotic and so forth.
I’m drafting a reply to everyone. Thanks!
15. Janet Says:
By the way, Hi, those links aren’t working for me. To Sean Carroll and Ali. Can you offer them again or name the posts to see at cosmicvariance? Thanks!
Gavin, your links are truly horrifying. Thanks for alerting us.
16. HI Says:
Here are the links.
Sean Carroll:
“Please Tell Me What “God” Means”
Comment #12 to “The Best Arguments for Things I Don’t Believe”
17. Gavin Says:
I was also going to recommend the post by Sean. It is at:
I agree with him except for his use of the word “dishonest.”
18. Janet Says:
I just read Sean’s post and I am surprised at him — I think he is being incredibly reductive and narrow-minded. (His review of Dawkins was much better, imho.) And Ali, there is so much more to the discussion of “existence” than what this “religious scholar” refers to.
If you scientific folks think that MY grasp of QM is not adequate, and I’ve done a lot of work there, then I have to say that to me these accounts of the theological, philosophical, and logical issues at stake surrounding “God” are not even kindergarten-level. And yet they don’t seem to recognize that they don’t know anything about what they are dismissing with their knock-down arguments; that there are intellectual worlds there of which they have no knowledge whatsoever, not to mention cultural, ethical, and daily worlds of which they apparently know nothing. It is as though they are color-blind or tone-deaf. Only what their own way of knowing illuminates, can “exist” or make a difference. Anything that other people might perceive or treasure simply doesn’t exist, because their little elite group has the only way of knowing, and anything else would complicate things too much. They are bound and determined in advance to know about nothing except what they want to know about.
And I don’t buy this demonizing of the Christian rank-and-file as having no theological or philosophical sophistication. You can’t have a genuine experience of God without having those profound philosophical ramifications entering into your new experience of life. (It should go without saying that there are of course “Christians” who have had no genuine experience of God, and non-Christians who have had. The Bible is full of this — remember the Pharisees?)
I will try again and reread that post tomorrow, but I am very sad. Sean speaks of a single world with a single way of knowing what’s what, and you guys agree with him? So what have we been talking about all this time? The very idea that so many people seem to think it is okay for Dawkins or anyone else to dismiss the very question of God as stupidity, without knowing any theology, makes me want to weep. If all YOU happen to know is the straw man that Dawkins attacks, then you simply are as uninformed as he is. It doesn’t mean it’s okay to reduce the whole thing to what you’ve encountered. This is prejudice and bigotry. Fanaticism always works like this.
How is this insistence that there’s no content to faith in God any different from narrow-minded and fanatical Christians saying that evolution is wrong and blasphemous, when they don’t know the science or understand or credit the scientific method on its own terms?
This kind of reductive and fanatical insistence that one way of knowing is the single obvious monolithic truth and that it speaks for itself and that everyone else is just dead wrong is just appalling. Asking other ways of knowing to justify themselves by the standards of your own field is deadly to thought and to any prospects for human peace and advancement.
You cannot base an argument on ignorance. You may decide you don’t like religion and that YOU don’t WANT to know anything about it, but you can’t then dismiss it and claim to be able to close it off and dispose of it in advance as empty and void of truth or reality for anyone. That is simply fanaticism, and Dawkins is in this respect as simple-minded and fanatical as they come. He is blindingly ignorant of what it is that he is dismissing. He even says that his atheism is “a victimless crime” — that it hurts no one. (Not being an atheist, but his militant attacks on all religions and religious people.) He is living in a dream world. He is fomenting hatred and aggression against what many human beings hold to be their most precious possession. That isn’t hurting anyone? Militant attacks always hurt people. The militants themselves above all.
Look, no one can tell me anything about the evils of religion that I don’t know first hand. But if you don’t know anything about its treasures and its depth and its meaningfulness and the daily goodness it has also supported, then how can you begin to make an evaluation of it? I don’t mind Dawkins disliking religion and he is entitled to his opinions. It is his claim that an entire rich dimension of human exploration and experience is worthless and empty and can be known to be worthless from outside, that qualifies him as a bigot. You cannot ignore the voices of human beings with other backgrounds and think that you don’t need to value their experiences and their insights — just because you know better than they do, in advance.
What if we asked artists to “name one thing that art has done that makes a difference.” Or, “How would the universe be different without art?” Or music?
What if we made it harder and asked, how would the universe be any different without government and politics? Just look at all the terrible things governments have done. Look at how destructive political fanaticism and ideologies have been. Let’s stop believing in it and it will go away.
Everyone is trying to make the universe much simpler than it is. For a person to dismiss as nonsense something like the “condition of possibility” is simply ignorance. It’s pitiful, to anyone in those fields. Dawkins’ arguments do make you cringe, just like Terry Eagleton says, they are so sophomoric. It’s exactly like a Fundamentalist getting an easy laugh from the audience by ridiculing the idea that humans descended from apes. It is a pitiful spectacle to watch supposedly liberally educated people indulge themselves in demeaning and demonizing whole segments of the human race instead of attempting to understand them and hear them on their own terms.
Depth experiences of God occur in all cultures, and in the biblical faiths, the experience of a “personal” God is simultaneous with the experience of an ultimate reality and with “the ground of existence” and the condition of possibility. These aren’t empty phrases pointing to nothing at all. Does me not knowing and understanding advanced elements of a field of science make that science empty and meaningless? Only if I think I can dismiss the work of other human beings in an arduous common enterprise, just because I haven’t been drawn to it or trained in it.
Does that mean we accept everything that claims a religious basis (like gay-bashing) or a scientific basis (like experiments that cause unconscionable suffering to helpless dogs and cats and other higher animals) just because they claim to be religious or scientific? No. We have to keep on struggling to interpret and distinguish. It’s never easy.
Gavin says we’re better off to simply “stick with the empirical,” and then we could settle things more easily, without the ambiguities of religion. It’s a nice hope, but I think that everything including science is pretty ambiguous ethically, and we are stuck in the middle of the whole mess having to struggle constantly with interpretations and decisions, individual and collective. We’re all in this together and demonizing each other isn’t helping. (Talk about “dishonest.” All these knee-jerk reactions and wholesale dismissals and sweeping assumptions that what is self-evident to me is therefore universally applicable to everyone, without even checking with the others first?)
I hate it when Christians are self-righteous and reductive and judgmental, but it isn’t really any nicer to see it in atheists, either.
If you “believe in God,” it is either because you have accepted a form of religion passed down to you, or because God has become unmistakably manifest to you, or both. In the latter cases, you don’t add up the “arguments” pro and con. You try to integrate the continuing reality of God with everything else you know, and that usually means finding a tradition that is capable of helping you to grow in your relationship with God.
Because you feel incredible gratitude to God, and a profound sense of the sacredness and goodness of the sacred dimension in your life, religion can become a powerful force for good or evil, and it is just as liable to become distorted and destructive as a marriage or a family or a community or any other human institution is. One feels of course that God is on the side of health and fruitfulness in all of these cases. But for us to know the good, and then to do the good? That is always the problem. But we have an evolving tradition that is very rich and profound to guide us.
I think that one of the differences that God makes is deeply inward. Genuine experience of God moves one into a journey of discovery in which you are just as foolish and intolerant as anyone else, but you aren’t left to your own resources. There’s an inexorable pressure to see through your own excuses eventually and become more humane. And there’s knowing and loving this incredibly suffering and loving presence…. I could say so much more, but I would have to do it by speaking of my own tradition and not so generally about the religious dimension in general.
What does God see when God looks at the world — imagine this, as a thought experiment if you will. The Christian tradition says that God sees the spiritual suffering and struggle of the world, because God values that above all else. (The Jewish tradition, also.) And that the spiritual is not separated from the yearnings of the natural and the animal world as well. God looks inwardly and sees the inward heart of things and God values even the smallest increase in the kingdom of love. And God is broken by every violence that breaks any one of us. God suffers with us and in us and for us. There’s nothing easy here. Nothing snappy. Just something unbearably relevant and real.
19. Janet Says:
Gavin says: “How can I be respectful of non-empirical knowledge in some cases, and then oppose these other ways of knowing in other cases?”
But you have to. You have to try to distinguish genuine ways of knowing from ones you cannot accept as genuine. You have to distinguish the Christian tradition from gay-bashing, for instance. You have to distinguish scientific knowing from the hideous torture of animals. You have to do the best you can, as thoughtfully as you can, and take your stand as best you can, but you can’t just throw out whole ways of knowing because they change, disagree, and sponsor terrible things.
And here you are saying “non-empirical” again…. Anything that requires human observation over time is no longer strictly empirical, but a weaving together of empirical observations at different times into a construct that is both empirically based and that “exists” in human consciousness, language, and history. You are talking about a way of knowing that has as its ultimate arbiter the conformity of these constructions to experimental testing.
Such a way of knowing cannot do ethics and perform in many other vital areas. It can help inform ethical decision-making, but it cannot make the decisions, because it isn’t designed to do that. How would empirical considerations settle the Presbyterian elders’ debates about what people should do in bed? It could inform the debate, but you would still need to make larger ethical arguments for how to interpret the scientific data in an ethical framework.
The science on homosexuality did settle my own stance as a Christian on homosexuality, but that is only because I have a larger context of religious and ethical theory, i.e. that the Cross shows that nothing trumps divine love. Therefore, if people are born with different sexual orientations, and have no choice in the matter, as we now believe based on science, then I don’t believe Christ would condemn non-heterosexual persons to live without intimacy and physical love. But I’m not allowed to condemn and hate Christians who cannot, in good faith, come to this view of the matter. (I know some of them for whom this is tearing them apart. Great suffering here on all sides.) To me, we are in another period of historical change. We’ve gone through this with abolition of slavery, with Christians on both sides, and then again with women in ministry, and now we are going through it again with homosexuality. But I do have to oppose any hating and persecuting of other persons because of homosexuality (or for any other reason).
The whole church will come around on this as it has on the other issues. We’re a species that is now evolving culturally as well as (or more than) genetically, but we still resist at every step the manifestations of a transcendent love and compassion that we also prize and adore above all else.
20. Janet Says:
I know many, many of the Greek Orthodox here in Seattle quite personally, because my Episcopal parish shared our building with a Greek Orthodox mission congregation until they were big enough to have their own building, but we are still very close, and I went to their larger gatherings, and more gentle and loving people you could never hope to meet. They took care of their elderly and adored their children and reached out to everyone — they were on fire with love. I can’t put it any other way. Their children would come home crying from second grade because little Evangelical children had told them they “weren’t really Christians.” When their tradition goes straight back to the early church. There isn’t much limit to our human pettiness and iniquity. And there isn’t much limit to those people’s love and the good that they do and are.
Aren’t the sociological reasons for those Russian men’s looking for scapegoats pretty obvious? Sean Carroll’s review of Richard Dawkins’ The God Delusion is a sophisticated discussion of why we can’t attribute all evil by religious persons to religious factors.
Also, folks, I’d like to add to the list of questions we’ve been building.
What difference does innocence make?
What difference does forgiveness make?
What difference does vicarious sacrifice make?
What does the figure of God on the Cross mean to the mothers of those who’ve been “disappeared” in South America, and does it make a difference for them that God’s son too was put to death as a criminal?
This narrow rationalism is as thin as water. God takes on our flesh and our blood and speaks to us in our deepest sufferings and rebukes all our iniquities by taking them all on personally. (But I can also see how people who have been suffocated by the perversions of religion can find in science freedom and space and fresh air, while for me the scientific attitude was the source of great harm. This is where we need semiotic theory. Things take on their identity in large part from the surrounding system of associations and the rules we have in place, like summer from the other seasons and a pawn in chess. The “Christ-event” for one person might be a forest and for another romantic love and for another science itself. Maybe we should read Dante together here….)
Don’t let the distortions fool you. The most powerful goods can be turned into the most powerful evils quite easily and it happens all the time. Humans are deeply irrational creatures, as well as deeply rational creatures, and we are in desperate need of interventions on all levels of our being. Wow. This is turning into a lead-in to discussing Shusaku Endo’s _Silence_! Starts tomorrow… on All Saints Day, in fact, as it happens.
21. Gavin Says:
As I said before, I agree with Sean except for his use of the word “dishonest.” You respond:
This passage stands out for its clarity, but mischaracterizations and insults continue throughout. I will not participate in a conversation like this.
Good luck,
22. Janet Says:
I am very sad. You have been a wonderful conversation partner, Gavin.
I was fresh from the lacerations I had just received from reading Sean’s piece and some of the comment thread. I should have waited until I was calmer.
I wasn’t talking about you and Sean personally, Gavin. I was talking about this militant attack on religion as a way of thinking and viewing the world. I still believe it is as tragically narrow as the fundamentalist biblical literalists’ crusade against Darwin.
What about all of us in the middle? I hope you reconsider, but in any case, I’ll always treasure the conversation — and reread the QM parts!
(Looks like I need to learn some spiritual lessons in humility from Shusaku Endo.)
23. Maria Kirby Says:
I would like to go back to your example of the reality of Hamlet as an example of semiotic knowledge and empirical knowledge. Hamlet as a play, as words written down, expresses certain ideas and concepts embedded in the character of Hamlet. When an actor performs Hamlet, he converts the semiotic knowledge into empirical knowledge. To the extent that the actor’s representation or characterization reflects accurately the semiotic knowledge of Hamlet, that semiotic knowledge becomes empirical to the actor and his audience.
I believe the same is true for religious concepts, particularly our knowing God through Jesus. To the extent that we understand the semiotic knowledge expressed in the Bible about Jesus, to the extent that our semiotic knowledge is developed through philosophy, nature, or other means, we can convert that knowledge into empirical knowledge through how we behave towards others, through how we embody Christ, or Love, or forgiveness.
It seems to me that one of the major themes in the Bible is that of transformation. God transforms evil into good. Forgiveness transforms enemies into friends. God’s love transforms us from dying or dead into living and alive. The resurrection transforms death into life. I see a similar phenomenon occurring in biological systems where DNA is torn apart, replicated, and restored. And in the process a new set of DNA is created and life is duplicated. Evil and death tear apart the present life. Forgiveness and love restore life, but it’s not a restoration to the previous conditions, it’s a restoration to new life, eternal life as seen (witnessed, empirically experienced) in the resurrection of Jesus.
Because the new life that Jesus lived after the resurrection had a physical form, an empirical form, and because when we forgive each other we are converting the semiotic form that Jesus represents into an empirical form of earthly experience, I would like to think that we are also creating an eternal empirical form of an eternal life. It seems that many passages in the NT indicate that eternal life is something that we receive not only because God forgives us, but because we forgive others.
24. Janet Says:
Thanks, Maria. I have been working on a response to your emails which I’ll be able to post soon.
And I’m posting on Shusaku Endo later today. Thanks everyone.
25. Janet Says:
That image of the DNA being torn apart and re-united with another “torn” DNA is a powerful image.
“Except a seed fall into the ground and die,” right?
Semiotics, word theory, is filled with deaths and rebirths within words, sustaining them. It is like Heidegger’s unconcealment (truth) and re-concealment being dialectically related.
I think from a semiotics standpoint, I want to comment on the nature of both what you call semiotic knowledge and what you call empirical knowledge, though I certainly see what you mean. The two are much more inter-related than usually appears on the surface. It’s fascinating to remember that the “word” sustains even what we think of as empirical being. More on this soon. (I keep saying soon, but truly….)
As for forgiveness, it’s forgiving oneself that is often most difficult, isn’t it? The intricate interrelationship between the present and the future is something I’ll be hitting on too, I hope. Thanks. More soon.
|
a3e70f1608394a7a | Package mpqc
Ab initio chemistry program
MPQC is the Massively Parallel Quantum Chemistry Program. It computes
properties of atoms and molecules from first principles using the time
independent Schrödinger equation. It runs on a wide range of
architectures ranging from individual workstations to symmetric
multiprocessors to massively parallel computers. Its design is object
oriented, using the C++ programming language.
General Commands
Command Description
molrender The molrender program reads a molecule from an input file and can render it in a...
mpqc The Massively Parallel Quantum Chemistry program (MPQC) computes the properties...
mpqcrun The mpqcrun program simplifies running MPQC.
scls The scls program is used to list objects in checkpoint files.
scpr The scpr program is used to print out objects in checkpoint files. |
ab49ede89dd9ed13 | Jan Vrbik
We show how a Monte Carlo procedure (based on random numbers) can generate a large sample of electron locations in any simple molecule. Based on this sampling, we can accurately estimate the molecule’s ground-state energy and other properties of interest. We demonstrate this using the LiH molecule.
Trial Solution to Schrödinger Equation
A mathematical description of a molecule (say LiH) is provided by a solution to the Schrödinger equation [1] (a differential eigenvalue problem with smallest eigenvalue $E_0$):

$$-\frac{1}{2}\sum_i \nabla_i^2\,\psi + V\,\psi = E_0\,\psi \qquad (1)$$

Here $\nabla_i^2$ is the Laplacian operator $\partial^2/\partial x_i^2 + \partial^2/\partial y_i^2 + \partial^2/\partial z_i^2$, the summation is over all electrons, $\psi$ is a function of their positions, and $E_0$ is the molecule's ground-state energy. The electrostatic potential in the molecule is given by

$$V = -\sum_{i,\,n}\frac{Z_n}{|\mathbf{r}_i - \mathbf{R}_n|} + \sum_{i<j}\frac{1}{|\mathbf{r}_i - \mathbf{r}_j|} + \sum_{n<m}\frac{Z_n Z_m}{|\mathbf{R}_n - \mathbf{R}_m|} \qquad (2)$$

Here, the first summation is over all electrons $i$ and nuclei $n$, where $Z_n$ and $\mathbf{R}_n$ are the nuclear charges and locations, respectively. The second summation is over all pairs of electrons, and the last one, analogously, is over all pairs of nuclei. The nuclear locations are kept fixed in accordance with the Born-Oppenheimer approximation [2]. We use atomic units, in which the electron's charge, mass, and Planck's constant are set equal to 1.
Thus, $V$ for the LiH molecule is computed by the following functions, assuming that the Li nucleus is located at the origin.
The argument Q consists of the four electron positions, each a list of three Cartesian coordinates.
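The Mathematica definitions from the original article did not survive extraction here. As a stand-in, the following is a minimal Python sketch of such a potential function; the nuclear geometry (H on the z axis at an assumed bond length of roughly 3.0 bohr) and all names are illustrative assumptions, not the article's:

    import numpy as np

    # Assumed nuclear data for LiH: Li (Z = 3) at the origin,
    # H (Z = 1) on the z axis at an assumed bond length of ~3.0 bohr.
    NUCLEI = [(3.0, np.array([0.0, 0.0, 0.0])),
              (1.0, np.array([0.0, 0.0, 3.0]))]

    def potential(Q):
        """Coulomb potential of eq. (2), in atomic units; Q is a list of
        four electron positions, each a list of three Cartesian coordinates."""
        Q = [np.asarray(q, dtype=float) for q in Q]
        V = 0.0
        for q in Q:                    # electron-nucleus attraction
            for Z, R in NUCLEI:
                V -= Z / np.linalg.norm(q - R)
        for i in range(len(Q)):        # electron-electron repulsion
            for j in range(i + 1, len(Q)):
                V += 1.0 / np.linalg.norm(Q[i] - Q[j])
        for a in range(len(NUCLEI)):   # nucleus-nucleus repulsion (a constant)
            for b in range(a + 1, len(NUCLEI)):
                V += (NUCLEI[a][0] * NUCLEI[b][0]
                      / np.linalg.norm(NUCLEI[a][1] - NUCLEI[b][1]))
        return V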
The solution to (1) is subject to two boundary conditions: that $\psi \to 0$ whenever one or more electrons approach infinity, and that $\psi$ must change sign whenever two electrons of the same spin are interchanged, in accord with the Pauli exclusion principle [3].
There are several techniques for finding an approximate solution to (1), which we denote by $\psi_T$, to distinguish it from the exact solution $\psi$. Monte Carlo is a method that "borrows" one of these solutions, called, in this context, a trial function [1], and seeks to improve its accuracy. This is done by generating, with the help of $\psi_T$, a random statistical sample, called an ensemble in this context, representing the exact solution. Based on this ensemble of electron locations, or configurations, one can then easily find, within the "standard" statistical error, the value of the molecule's ground-state energy, and related properties such as dipole moment and polarizability, etc.
To build a trial solution for LiH, we start with two molecular orbitals, linear combinations of four simple atomic orbitals [2].
The argument q is a list of three coordinates that describe a location of a single electron. The parameters of the MO functions and those of J in the next expression have been obtained by minimizing the variational energy (7).
Secondly, we define the so-called Jastrow function [2], its arguments being locations of each pair of electrons.
The resulting trial function has the following form:

$$\psi_T = D^{\uparrow}\, D^{\downarrow} \prod_{i\in\uparrow,\ j\in\downarrow} J(\mathbf{r}_i, \mathbf{r}_j)$$

The two factors $D^{\uparrow}$ and $D^{\downarrow}$ are determinants of the two molecular orbitals, each evaluated at the locations of the four electrons. Electrons 1 and 2 have spin "up," while 3 and 4 have spin "down." The remaining factors are Jastrow functions for each pair of opposite-spin electrons.
Next we estimate the ground-state energy of LiH based on this trial function.
Variational Estimate of the Smallest Eigenvalue
Let us rewrite (1) more compactly as

$$H\,\psi = E_0\,\psi \qquad (3)$$

where $H$ represents the sum of both operators on the left-hand side of (1). The variational principle tells us [2] that

$$\frac{\int \psi_T\, H\, \psi_T\; d\mathbf{R}}{\int \psi_T^{\,2}\; d\mathbf{R}} \;\geq\; E_0 \qquad (4)$$

The integration is over the three coordinates of each of the four electrons, altogether a 12-dimensional problem—no mean task—and $\psi_T$ is any trial solution to (1). The limit of equality holds only for the exact solution $\psi$, but for approximate solutions, called variational estimates of the ground-state energy, the left-hand side of (4) is usually quite close to $E_0$. The main problem is how to evaluate the two 12-dimensional integrals; this is impossible to do analytically and not feasible even numerically. Well, Monte Carlo has the answer.
Let us first define the so-called drift function by

$$\mathbf{F}(\mathbf{R}) = \frac{\nabla \psi_T(\mathbf{R})}{\psi_T(\mathbf{R})} \qquad (5)$$

and local energy by

$$E_L(\mathbf{R}) = \frac{H\,\psi_T(\mathbf{R})}{\psi_T(\mathbf{R})} = -\frac{1}{2}\sum_i \frac{\nabla_i^2\, \psi_T}{\psi_T} + V \qquad (6)$$

Here are the corresponding commands.
The coordinates of the four electrons are now called $R_1, R_2, \dots, R_{12}$, understanding that these are the $x$, $y$, $z$ coordinates of the first electron, followed by the $x$, $y$, $z$ coordinates of the second electron, etc. The left-hand side of (4) can now be rewritten as

$$\frac{\int \psi_T^{\,2}\, E_L\; d\mathbf{R}}{\int \psi_T^{\,2}\; d\mathbf{R}} \qquad (7)$$
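The corresponding commands above were Mathematica and are lost here. A rough Python equivalent of the local energy (6), substituting central finite differences for the article's analytic derivatives (the callables psi and V and the step h are assumptions of this sketch), might read:

    import numpy as np

    def local_energy(psi, V, R, h=1e-4):
        """E_L of eq. (6) for a flattened 12-vector R of electron coordinates:
        -(1/2) * (sum of second derivatives of psi) / psi + V(R), using
        central finite differences with step h."""
        R = np.asarray(R, dtype=float)
        psi0 = psi(R)
        lap = 0.0
        for k in range(R.size):        # accumulate all 12 second derivatives
            Rp, Rm = R.copy(), R.copy()
            Rp[k] += h
            Rm[k] -= h
            lap += (psi(Rp) - 2.0 * psi0 + psi(Rm)) / h**2
        return -0.5 * lap / psi0 + V(R)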
Next, we randomly generate a large sample of 1000 configurations of $R_1, \dots, R_{12}$ values, denoted collectively as Ro, and compute the corresponding $\psi_T$, $\mathbf{F}$, and $E_L$.

By averaging the 1000 values of $E_L$, we get an estimate of (7). Unfortunately, this estimate will be very inaccurate, since our random sample of configurations bears, at this point, no relationship to the $\psi_T^{\,2}$ density of (7).
To fix this, the sample must be equilibrated: each configuration is repeatedly moved, a proposed move (8) consisting of a drift by $\epsilon\,\mathbf{F}(\mathbf{R})$ plus a random Gaussian displacement with variance $\epsilon$ in each of the 12 coordinates. There is only one little snag: the result will still have an error proportional to the step size $\epsilon$. To correct for this, we would have to make $\epsilon$ impractically small and equilibration would take forever. Fortunately, there is another way, called Metropolis sampling [3]: for each proposed move (8) we compute a scalar quantity

$$q = \frac{\psi_{T,\mathrm{new}}^{\,2}\; T(\mathbf{R}_{\mathrm{new}} \to \mathbf{R}_{\mathrm{old}})}{\psi_{T,\mathrm{old}}^{\,2}\; T(\mathbf{R}_{\mathrm{old}} \to \mathbf{R}_{\mathrm{new}})} \qquad (9)$$

where $T$ denotes the Gaussian transition density of a move (8), and the subscripts new and old mean that $\psi_T$ and $\mathbf{F}$ have been evaluated at the new or old location, respectively. The move is then accepted with a probability equal to $\min(q, 1)$. When $q \geq 1$, the move is accepted automatically. When a move is rejected, the configuration simply remains at its old location $\mathbf{R}_{\mathrm{old}}$. The step size $\epsilon$ should be adjusted to yield a reasonable proportion of rejections, say between 10% and 30%.
Rejecting configurations in this manner creates the last small problem: in our original random sample there is usually a handful of configurations which, because they have landed at "wrong" locations, just would not move. To fix this, we have to monitor, for each configuration, the number of consecutive times a move has been rejected, and let it move, regardless of $q$, when this number exceeds a certain value, such as 10. After this is done and the sample equilibrates, the problem automatically disappears, and no configuration is ever refused its move more than six consecutive times (confirming that 10 consecutive rejections was a good indication of a "stuck" configuration).
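In outline, one Metropolis pass over the sample might look like the following Python sketch (not the article's Mathematica program; psi, drift, the step size eps, the rejection counters moni, and a NumPy generator rng are assumed to be defined as above):

    import numpy as np

    def sweep(ensemble, psi, drift, eps, moni, rng, max_rejects=10):
        """One pass of eqs. (8)-(9): drift plus Gaussian diffusion, acceptance
        with probability min(q, 1), and a forced move for any configuration
        rejected max_rejects times in a row."""
        for c, R in enumerate(ensemble):
            Rnew = R + eps * drift(R) + rng.normal(0.0, np.sqrt(eps), size=R.shape)
            # Gaussian transition densities T(old -> new) and T(new -> old) of (8)
            fwd = np.exp(-np.sum((Rnew - R - eps * drift(R))**2) / (2.0 * eps))
            rev = np.exp(-np.sum((R - Rnew - eps * drift(Rnew))**2) / (2.0 * eps))
            q = (psi(Rnew)**2 * rev) / (psi(R)**2 * fwd)
            if q >= rng.uniform() or moni[c] >= max_rejects:
                ensemble[c] = Rnew     # accepted (or forced) move
                moni[c] = 0
            else:
                moni[c] += 1           # rejected: stay at the old location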
The following program carries this out (its execution will take a minute or two).
Note that moni keeps track, for each configuration, of how many of its last consecutive proposed moves have been rejected, its value being reset to 0 as soon as a move is accepted. The program returns (in res) the values of all sample averages of the local energy $E_L$, together with the average acceptance rate.
This displays the former.
We see that about 50 iterations are necessary to reach equilibrium.
To get an accurate estimate of (7), we repeat the simulation with substantially more iterations, changing the Do loop's "count" from 60 to 1000, and then computing the grand mean of the $E_L$ values. In our case, this yields an estimate in atomic units, with an average acceptance rate of about 85%.
This improves the estimate, with a smaller standard error. The "exact" ground-state energy of LiH is about −8.07 atomic units. The obvious discrepancy, well beyond the statistical error, between our estimate and this value is due to our use of a rather primitive trial function. In accordance with the variational principle, our estimate remains higher than the exact value.
Monte Carlo Estimate of the Smallest Eigenvalue
When, in (7), we replace $\psi_T^{\,2}$ by $\psi_T\,\psi$, the expression then yields "nearly" the exact value of $E_0$, subject only to a small nodal error [1]. So, all we need to do is to modify our simulation program accordingly, to get a sample from a distribution whose probability density function is proportional to $\psi_T\,\psi$ instead of $\psi_T^{\,2}$. This can be achieved by assuming that each configuration carries a different weight, computed from

$$w = \exp\!\Big( -\epsilon \sum_{k \geq 0} \lambda^k \big( E_L^{(k)} - E_{\mathrm{ref}} \big) \Big) \qquad (10)$$

where $E_L^{(k)}$ is the local energy of the configuration as computed $k$ iterations ago, the summation is over all past iterations, and $E_{\mathrm{ref}}$ is a rough estimate of $E_0$ (the variational result will do). The sum in (10) "depreciates" the past $E_L$ values at a rate $\lambda$ that should resemble the decrease in serial correlation of the $E_L$ sequence, which can be easily monitored during the variational simulation.
1. Occasionally (e.g., when an electron moves too close to a nucleus), $E_L$ may acquire an unusually low value, making the corresponding weight $w$ rather large, sometimes larger than all the remaining weights combined. We must eliminate such "outliers" by keeping each value within an allowed range; it is better to do this in a symmetrical way, by truncating any out-of-range value to the nearest boundary of the interval.
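In running form, the exponent of (10) can be updated recursively at each iteration. A small Python sketch (the recursive update, the depreciation rate lam, and the symmetric clipping window spread are illustrative assumptions, not the article's code):

    import numpy as np

    def update_log_weight(logw, E_L, E_ref, eps, lam, spread):
        """One iteration of the exponent in eq. (10): clip an outlier E_L
        symmetrically to [E_ref - spread, E_ref + spread], then depreciate
        the accumulated past by lam and add the new deviation."""
        E_L = np.clip(E_L, E_ref - spread, E_ref + spread)
        return lam * logw - eps * (E_L - E_ref)

    # the configuration's weight is then w = exp(logw)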
This can all be achieved by the following simple modifications of the program from the previous section. Monte Carlo techniques in general require a long time to execute (this one may take several hours).
For this reason, we have made it (and the subsequent command, which processes its output) non-evaluatable.
A ListPlot of the iteration averages of $E_L$ will show that equilibration now takes many more steps (about 500) than in the case of variational simulation. We have thus decided to discard the first 1000 results and partition the remaining 6000 into six blocks of 1000.
Similarly, we can produce six such values with $\epsilon = 0.050$ and $\epsilon = 0.075$, calling them r050 and r075, respectively.
It is now easy to find the resulting intercept.
This yields the value of the corresponding intercept at zero step size. This is in reasonable agreement, in view of the nodal error, with the exact value in atomic units.
This visualizes the regression fit.
In a follow-up article, we will show how this procedure can be extended to estimate other significant molecular properties, including geometry and polarizability, etc. and how to optimize parameters of a trial function, to make the Monte Carlo method more “self-sufficient.”
[1] J. B. Anderson, “Quantum Chemistry by Random Walk: Higher Accuracy,” The Journal of Chemical Physics, 73(8), 1980 pp. 3897-3899. doi:10.1063/1.440575.
[2] P. J. Reynolds, D. M. Ceperley, B. J. Alder, and W. A. Lester, Jr., “Fixed-Node Quantum Monte Carlo for Molecules,” The Journal of Chemical Physics, 77(12), 1982 pp. 5593-5603. doi:10.1063/1.443766.
[3] D. M. Ceperley and B. J. Alder, “Quantum Monte Carlo,” Science, 231(4738), 1986 pp. 555-560. doi:10.1126/science.231.4738.555.
J. Vrbik, “Monte Carlo Simulation of Simple Molecules,” The Mathematica Journal, 2011. dx.doi.org/doi:10.3888/tmj.13-5.
About the Author
Jan Vrbik
Department of Mathematics, Brock University
500 Glenridge Ave., St. Catharines
Ontario, Canada, L2S 3A1 |
30e5c1d35ebb959a | Quantum many-particle dynamics in 1D with matrix product states
View the Project on GitHub amilsted/evoMPS
Tutorial videos:
evoMPS simulates time-evolution (real or imaginary) of one-dimensional many-particle quantum systems using matrix product states (MPS) and the time dependent variational principle (TDVP).
It can be used to efficiently find ground states and simulate dynamics.
The evoMPS implementation assumes a nearest-neighbour or next-nearest-neighbour Hamiltonian and one of the following situations:
It is based on algorithms published by:
and available on arxiv.org under arXiv:1103.0936v2. The algorithm for handling localized nonuniformities on infinite chains was developed by:
and is detailed in arXiv:1207.0691. For details, see doc/implementation_details.pdf and the source code itself, which I endeavour to annotate thoroughly.
evoMPS is implemented in Python using Scipy http://www.scipy.org and benefits from optimized linear algebra libraries being installed (BLAS and LAPACK). For more details, see INSTALL.
evoMPS was originally developed as part of an MSc project by Ashley Milsted, supervised by Tobias Osborne at the Institute for Theoretical Physics of Leibniz Universität Hannover http://www.itp.uni-hannover.de/.
The evoMPS algorithms are presented as python classes to be used in a script. Some example scripts can be found in the "examples" directory. To run an example script without installing the evoMPS modules, copy it to the base directory first e.g. under Windows::
copy examples\transverse_ising_uniform.py .
python transverse_ising_uniform.py
Essentially, the user defines a spin chain Hilbert space and a nearest-neighbour Hamiltonian and then carries out a series of small time steps (numerically integrating the "Schrödinger equation" for the MPS parameters)::
sim = EvoMPS_TDVP_Uniform(bond_dim, local_hilb_dim, my_hamiltonian)
for i in range(max_steps):
    sim.take_step(step_size)  # advance one small time step (as in the example scripts)
    my_exp_val = sim.expect_1s(my_op)
Operators, including the Hamiltonian, are defined as arrays like this::
pauli_z = numpy.array([[1, 0],
                       [0, -1]])
or as python callables (functions) like this::
def pauli_z(s, t):
    if s == t:
        return (-1.0)**s
    return 0
Calculating expectation values or other quantities can be done after each step as desired.
Switching between imaginary time evolution (for finding the ground state) and real time evolution is as easy as multiplying the time step size by a factor of i!
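For example, a minimal sketch (assuming, as in the loop above, that the step-size argument of take_step may simply be made complex; check this against the bundled example scripts)::

    sim.take_step(0.01)        # imaginary time: relaxes toward the ground state
    sim.take_step(0.01 * 1j)   # real time: unitary Schrödinger dynamics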
Please send comments to:
ashmilsted at
To submit ideas or bug reports, please use the GitHub Issues system http://github.com/amilsted/evoMPS/. |
3a68a6359974cb0d |
I read about Fractional Quantum Mechanics and it seemed interesting. But are there any justifications for this concept, such as some connection to reality, or other physical motivations, apart from the pure mathematical insight?
If there are none, why did anyone even bother to invent it?
Interesting question. Looking here it seems like you can derive generalizations of the usual formulas which look the same but with the Levy parameter in them (like the "fractional Bohr atom"), you get the usual answer by putting the parameter to 2. But, as you say....why? – twistor59 Nov 26 '12 at 12:37
Related: physics.stackexchange.com/q/4005/2451 and links therein. – Qmechanic Nov 27 '12 at 0:12
@Qmechanic interesting. Thanks for the link – namehere Nov 28 '12 at 16:33
1 Answer 1
It seems the goal here is to be able to explain all kinds of phenomena in complex situations, in which nonlinearity could be infeasible to handle, as it happens in non-quantic systems. According to this reference, the fractional Schrödinger equation

$$i\hbar\dfrac{\partial\Psi(\vec{x},t)}{\partial t}=\left[D_{\alpha}(-\hbar^2\Delta)^{\alpha/2}+V(\vec{x},t)\right]\Psi(\vec{x},t)$$

where $(-\hbar^2\Delta)^{\alpha/2}$ is the quantum Riesz fractional derivative

$$(-\hbar^2\Delta)^{\alpha/2}\Psi(\vec{x},t)=\frac{1}{(2\pi\hbar)^3}\int d^3p\; e^{i\frac{\vec{p}\cdot\vec{x}}{\hbar}}\,|\vec{p}|^{\alpha}\,\varphi(\vec{p},t)$$
still corresponds to/represents quantic systems. For instance, Laskin shows that the (fractal) uncertainty principle still holds, because
$$\langle|\Delta x|^\mu\rangle^{1/{\mu}}\cdot\langle|\Delta p|^\mu\rangle^{1/{\mu}}>\dfrac{\hbar}{(2\alpha)^{1/{\mu}}}$$
for $\mu<\alpha$ and $1<\alpha\leq 2$.
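Numerically, the Riesz derivative is just the Fourier multiplier $|\vec{p}|^{\alpha}$, which a short sketch makes concrete (1D, $\hbar=1$, periodic grid; an illustration of the definition above, not code from Laskin):

    import numpy as np

    def riesz_derivative(psi, dx, alpha):
        """Apply (-d^2/dx^2)^(alpha/2) to samples psi on a uniform periodic
        grid of spacing dx by multiplying by |p|**alpha in Fourier space
        (hbar = 1); for alpha = 2 this reduces to -psi''."""
        p = 2.0 * np.pi * np.fft.fftfreq(len(psi), d=dx)  # momentum grid
        return np.fft.ifft(np.abs(p)**alpha * np.fft.fft(psi))

Setting alpha = 2 recovers the ordinary kinetic term, which is the sanity check behind "you get the usual answer by putting the parameter to 2" in the comment above.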
quantic? You do mean quantum, right? I can see these equations, but I find no relating of the physics to the mathematics up there. I found Qmechanic's link more enlightening. Check it out, its not bad. – namehere Dec 7 '12 at 11:35
|
2c465c2cc5d17eca | Gyromagnetic ratio
In physics, the gyromagnetic ratio (also sometimes known as the magnetogyric ratio in other disciplines) of a particle or system is the ratio of its magnetic dipole moment to its angular momentum, and it is often denoted by the symbol γ, gamma. Its SI unit is the radian per second per tesla (rad·s−1·T−1) or, equivalently, the coulomb per kilogram (C·kg−1).
The term "gyromagnetic ratio" is sometimes used[1] as a synonym for a different but closely related quantity, the g-factor. The g-factor, unlike the gyromagnetic ratio, is dimensionless. For more on the g-factor, see below, or see the article g-factor.
Gyromagnetic ratio and Larmor precession
Main article: Larmor precession
Any free system with a constant gyromagnetic ratio, such as a rigid system of charges, a nucleus, or an electron, when placed in an external magnetic field B (measured in teslas) that is not aligned with its magnetic moment, will precess at a frequency f (measured in hertz), that is proportional to the external field:

f = \frac{\gamma}{2 \pi} B
For this reason, values of γ/(2π), in units of hertz per tesla (Hz/T), are often quoted instead of γ.
This relationship also explains an apparent contradiction between the two equivalent terms, gyromagnetic ratio versus magnetogyric ratio: whereas it is a ratio of a magnetic property (i.e. dipole moment) to a gyric (rotational, from Greek: γύρος, "turn") property (i.e. angular momentum), it is also, at the same time, a ratio between the angular precession frequency (another gyric property) ω = 2πf and the magnetic field.
Gyromagnetic ratio for a classical rotating body
Consider a charged body rotating about an axis of symmetry. According to the laws of classical physics, it has both a magnetic dipole moment and an angular momentum due to its rotation. It can be shown that as long as its charge and mass are distributed identically (e.g., both distributed uniformly), its gyromagnetic ratio is
$$\gamma = \frac{q}{2m}$$
where q is its charge and m is its mass. The derivation of this relation is as follows:
It suffices to demonstrate this for an infinitesimally narrow circular ring within the body, as the general result follows from an integration. Suppose the ring has radius r, area A = πr², mass m, charge q, and angular momentum L = mvr. Then the magnitude of the magnetic dipole moment is
$$\mu = IA = \frac{qv}{2\pi r}\times\pi r^2 = \frac{q}{2m}\times mvr = \frac{q}{2m} L.$$
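A short check of this relation against experiment: evaluating $e/(2m_p)$ for a proton and comparing with the measured value quoted in the table below shows the classical prediction is off by the proton's g-factor of about 5.59 (the constants are CODATA values, rounded).

```python
# Classical prediction gamma = q/(2m) for a proton versus the measured value.
e = 1.602176634e-19        # C
m_p = 1.67262192e-27       # kg
gamma_classical = e / (2 * m_p)           # ~4.79e7 rad s^-1 T^-1
gamma_measured = 267.513e6                # rad s^-1 T^-1 (1H, from the table below)
print(gamma_classical)
print(gamma_measured / gamma_classical)   # ~5.586, the g-factor of the proton
```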
Gyromagnetic ratio for an isolated electron
An isolated electron has an angular momentum and a magnetic moment resulting from its spin. While an electron's spin is sometimes visualized as a literal rotation about an axis, it cannot be attributed to mass distributed identically to the charge. The above classical relation does not hold, giving the wrong result by a dimensionless factor called the electron g-factor, denoted ge (or just g when there is no risk of confusion):
$$|\gamma_\mathrm{e}| = \frac{|-e|}{2m_\mathrm{e}}g_\mathrm{e} = \frac{g_\mathrm{e}\,\mu_\mathrm{B}}{\hbar},$$
where μB is the Bohr magneton. As mentioned above, in classical physics one would expect the g-factor to be g = 1. However, in the framework of relativistic quantum mechanics,
$$g_\mathrm{e} = 2\left(1+\frac{\alpha}{2\pi}+\cdots\right),$$
where $\alpha$ is the fine-structure constant. Here the small corrections to the relativistic result g = 2 come from quantum field theory. Experimentally, the electron g-factor has been measured to thirteen decimal places:[2]
$$g_\mathrm{e} = 2.0023193043617(15).$$
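A quick evaluation of the first correction term shows how close the one-loop (Schwinger) result already comes to the measured value quoted above:

```python
import math

alpha = 1 / 137.035999           # fine-structure constant
g_schwinger = 2 * (1 + alpha / (2 * math.pi))
g_measured = 2.0023193043617
print(g_schwinger)               # ~2.0023228
print(g_measured - g_schwinger)  # ~ -3.5e-6, removed by higher-order terms
```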
The electron gyromagnetic ratio is given by NIST[3][4] as
$$\left| \gamma_\mathrm{e} \right| = 1.760\,859\,708(39) \times 10^{11}\ \mathrm{rad\ s^{-1}\ T^{-1}},$$
$$\left| \frac{\gamma_\mathrm{e}}{2\pi} \right| = 28\,024.952\,66(62)\ \mathrm{MHz\ T^{-1}}.$$
The g-factor and γ are in excellent agreement with theory; see Precision tests of QED for details.
Gyromagnetic factor as a consequence of relativity
Since a gyromagnetic factor equal to 2 follows from Dirac's equation, it is a frequent misconception to think that a g-factor of 2 is a consequence of relativity; it is not. The factor 2 can be obtained from the linearization of both the Schrödinger equation and the relativistic Klein–Gordon equation (which leads to Dirac's). In both cases a 4-spinor is obtained, and for both linearizations the g-factor is found to be equal to 2; therefore, the factor 2 is a consequence of the wave equation's dependency on the first (and not the second) derivatives with respect to space and time.[5]
Gyromagnetic ratio for a nucleus
The sign of the gyromagnetic ratio, γ, determines the sense of precession. Nuclei such as 1H and 13C are said to have clockwise precession, whereas 15N has counterclockwise precession.[6][7] While the magnetic moments are oriented the same for both signs of γ, the spin angular momenta point in opposite directions. Spin and magnetic moment are in the same direction for γ > 0.
Protons, neutrons, and many nuclei carry nuclear spin, which gives rise to a gyromagnetic ratio as above. The ratio is conventionally written in terms of the proton mass and charge, even for neutrons and for other nuclei, for the sake of simplicity and consistency. The formula is:
$$\gamma_n = \frac{e}{2m_p}g_n = \frac{g_n\,\mu_\mathrm{N}}{\hbar},$$
where $\mu_\mathrm{N}$ is the nuclear magneton, and $g_n$ is the g-factor of the nucleon or nucleus in question.
The gyromagnetic ratio of a nucleus plays a role in nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI). These procedures rely on the fact that bulk magnetization due to nuclear spins precesses in a magnetic field at a rate called the Larmor frequency, which is simply the product of the gyromagnetic ratio with the magnetic field strength. With this phenomenon, the sign of γ determines the sense (clockwise vs counterclockwise) of precession.
Most common nuclei such as 1H and 13C have positive gyromagnetic ratios.[6][7] Approximate values for some common nuclei are given in the table below.[8][9]
Nucleus  γn (10⁶ rad·s⁻¹·T⁻¹)  γn/(2π) (MHz·T⁻¹)
1H 267.513 42.576
2H 41.065 6.536
3He 203.789 32.434
7Li 103.962 16.546
13C 67.262 10.705
14N 19.331 3.077
15N −27.116 −4.316
17O 36.264 5.772
19F 251.662 40.052
23Na 70.761 11.262
27Al 69.763 11.103
29Si −53.190 −8.465
31P 108.291 17.235
57Fe 8.681 1.382
63Cu 71.118 11.319
67Zn 16.767 2.669
129Xe 73.997 11.777
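A consistency check on the table above: the second column should equal the first divided by $2\pi$ (converting $10^6$ rad s⁻¹ T⁻¹ into MHz/T). A few rows sampled:

```python
import math

rows = {"1H": (267.513, 42.576), "13C": (67.262, 10.705), "15N": (-27.116, -4.316)}
for nucleus, (gamma, gamma_over_2pi) in rows.items():
    # Both printed values should match to the table's three decimal places.
    print(nucleus, round(gamma / (2 * math.pi), 3), gamma_over_2pi)
```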
See also
Notes
1. ^ For example, see: D.C. Giancoli, Physics for Scientists and Engineers, 3rd ed., page 1017. Or see: P.A. Tipler and R.A. Llewellyn, Modern Physics, 4th ed., page 309.
2. ^ B Odom, D Hanneke, B D'Urso and G Gabrielse (2006). "New measurement of the electron magnetic moment using a one-electron quantum cyclotron". Physical Review Letters 97 (3): 030801. Bibcode:2006PhRvL..97c0801O. doi:10.1103/PhysRevLett.97.030801. PMID 16907490.
3. ^ NIST: Electron gyromagnetic ratio. Note that NIST puts a positive sign on the quantity; however, to be consistent with the formulas in this article, a negative sign is put on γ here. Indeed, many references say that γ<0 for an electron; for example, Weil and Bolton, Electron Paramagnetic Resonance (Wiley 2007), page 578. Also note that the units of radians are added for clarity.
4. ^ NIST: Electron gyromagnetic ratio over 2 pi
5. ^ Walter Greiner Quantum Mechanics: An Introduction, Springer Verlag
6. ^ a b M H Levitt (2008). Spin Dynamics. John Wiley & Sons Ltd. ISBN 0470511176.
7. ^ a b Arthur G Palmer (2007). Protein NMR Spectroscopy. Elsevier Academic Press. ISBN 012164491X.
8. ^ M A Bernstein, K F King and X J Zhou (2004). Handbook of MRI Pulse Sequences. San Diego: Elsevier Academic Press. p. 960. ISBN 0-12-092861-2.
9. ^ R C Weast, M J Astle, ed. (1982). Handbook of Chemistry and Physics. Boca Raton: CRC Press. p. E66. ISBN 0-8493-0463-6. |
Interaction picture
In quantum mechanics, the interaction picture (also known as the Dirac picture) is an intermediate representation between the Schrödinger picture and the Heisenberg picture. Whereas in the other two pictures either the state vector or the operators carry time dependence, in the interaction picture both carry part of the time dependence of observables.[1] The interaction picture is useful in dealing with changes to the wave functions and observables due to interactions. Most field-theoretical calculations[2] use the interaction representation because they construct the solution to the many-body Schrödinger equation as the solution to the free-particle problem plus some unknown interaction parts.
To switch into the interaction picture, we divide the Schrödinger picture Hamiltonian into two parts,
$$H_S = H_{0,S} + H_{1,S},$$
where $H_{0,S}$ is typically a solvable part and $H_{1,S}$ contains the interaction.
State vectors

A state vector in the interaction picture is defined as
$$|\psi_I(t)\rangle = e^{iH_{0,S}t/\hbar}\,|\psi_S(t)\rangle,$$
where $|\psi_S(t)\rangle$ is the state vector in the Schrödinger picture.

Operators

An operator in the interaction picture is defined as
$$A_I(t) = e^{iH_{0,S}t/\hbar}\,A_S(t)\,e^{-iH_{0,S}t/\hbar}.$$
Note that AS(t) will typically not depend on t, and can be rewritten as just AS. It only depends on t if the operator has "explicit time dependence", for example due to its dependence on an applied, external, time-varying electric field.
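A numerical sketch of this transformation for a two-level system, with $\hbar = 1$ and illustrative matrices:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H0 = np.diag([0.0, 1.0])                  # solvable part of the Hamiltonian
A_S = np.array([[0, 1], [1, 0]])          # Schrödinger-picture observable

def to_interaction_picture(A, t):
    # A_I(t) = exp(i H0 t / hbar) A exp(-i H0 t / hbar)
    U = expm(1j * H0 * t / hbar)
    return U @ A @ U.conj().T

print(to_interaction_picture(A_S, t=0.5))
# The off-diagonal elements pick up phases e^{±it}; the diagonal is unchanged.
```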
Hamiltonian operator
For the operator $H_0$ itself, the interaction picture and Schrödinger picture coincide:
$$H_{0,I}(t) = e^{iH_{0,S}t/\hbar}\,H_{0,S}\,e^{-iH_{0,S}t/\hbar} = H_{0,S}.$$
For the perturbation Hamiltonian $H_{1,I}$, however,
$$H_{1,I}(t) = e^{iH_{0,S}t/\hbar}\,H_{1,S}\,e^{-iH_{0,S}t/\hbar},$$
where the interaction picture perturbation Hamiltonian becomes a time-dependent Hamiltonian—unless [H1,S, H0,S] = 0 .
Density matrix
The density matrix can be shown to transform to the interaction picture in the same way as any other operator. In particular, let ρI and ρS be the density matrix in the interaction picture and the Schrödinger picture, respectively. If there is probability pn to be in the physical state |ψn〉, then
$$\rho_I(t) = \sum_n p_n\,|\psi_{n,I}(t)\rangle\langle\psi_{n,I}(t)|.$$
Summary of the time evolution in each picture (for a time-independent $H_S$):
• Ket state: constant in the Heisenberg picture; $|\psi_I(t)\rangle = e^{iH_{0,S}t/\hbar}\,|\psi_S(t)\rangle$ in the interaction picture; $|\psi_S(t)\rangle = e^{-iH_S t/\hbar}\,|\psi_S(0)\rangle$ in the Schrödinger picture.
• Observable: $A_H(t) = e^{iH_S t/\hbar}\,A_S\,e^{-iH_S t/\hbar}$ in the Heisenberg picture; $A_I(t) = e^{iH_{0,S}t/\hbar}\,A_S\,e^{-iH_{0,S}t/\hbar}$ in the interaction picture; constant in the Schrödinger picture.
• Density matrix: constant in the Heisenberg picture; $\rho_I(t) = e^{iH_{0,S}t/\hbar}\,\rho_S(t)\,e^{-iH_{0,S}t/\hbar}$ in the interaction picture; $\rho_S(t) = e^{-iH_S t/\hbar}\,\rho_S(0)\,e^{iH_S t/\hbar}$ in the Schrödinger picture.
Time-evolution equations in the interaction picture

Expectation values

Using the density matrix expression for the expectation value, we get
$$\langle A(t)\rangle = \operatorname{Tr}\!\left(\rho_I(t)\,A_I(t)\right).$$

Time-evolution of states
Transforming the Schrödinger equation into the interaction picture gives:
$$i\hbar\frac{d}{dt}\,|\psi_I(t)\rangle = H_{1,I}(t)\,|\psi_I(t)\rangle.$$
This equation is referred to as the Schwinger–Tomonaga equation.
Time-evolution of operators
If the operator $A_S$ is time independent (i.e., does not have "explicit time dependence"; see above), then the corresponding time evolution for $A_I(t)$ is given by
$$i\hbar\frac{d}{dt}A_I(t) = \left[A_I(t),\,H_{0,S}\right].$$
Time-evolution of the density matrix
Transforming the Schwinger–Tomonaga equation into the language of the density matrix (or equivalently, transforming the von Neumann equation into the interaction picture) gives:
$$i\hbar\frac{d}{dt}\rho_I(t) = \left[H_{1,I}(t),\,\rho_I(t)\right].$$
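Since all pictures must yield the same physics, a useful numerical check is that $\operatorname{Tr}(\rho_S A_S) = \operatorname{Tr}(\rho_I A_I)$ at all times. A sketch for a two-level system (all matrices are illustrative choices, $\hbar = 1$):

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H0 = np.diag([0.0, 1.0])
H1 = 0.2 * np.array([[0, 1], [1, 0]])
H = H0 + H1
A_S = np.array([[1, 0], [0, -1]])
rho0 = np.array([[0.5, 0.5], [0.5, 0.5]])   # pure state (|0> + |1>)/sqrt(2)

t = 0.7
U_S = expm(-1j * H * t / hbar)              # full Schrödinger evolution
rho_S = U_S @ rho0 @ U_S.conj().T

U0 = expm(1j * H0 * t / hbar)               # transformation to the I picture
rho_I = U0 @ rho_S @ U0.conj().T
A_I = U0 @ A_S @ U0.conj().T

# Both traces agree: expectation values are picture independent.
print(np.trace(rho_S @ A_S).real, np.trace(rho_I @ A_I).real)
```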
Use of interaction picture
The interaction picture is convenient when considering the effect of a small interaction term, H1,S, being added to the Hamiltonian of a solved system, H0,S. By utilizing the interaction picture, one can use time-dependent perturbation theory to find the effect of H1,I,[4]:355ff e.g., in the derivation of Fermi's golden rule,[4]:359–363 or the Dyson series,[4]:355–357 in quantum field theory: In 1947, Tomonaga and Schwinger appreciated that covariant perturbation theory could be formulated elegantly in the interaction picture, since field operators can evolve in time as free fields, even in the presence of interactions, now treated perturbatively in such a Dyson series.
4. ^ a b c Sakurai, J. J.; Napolitano, Jim (2010), Modern Quantum Mechanics (2nd ed.), Addison-Wesley, ISBN 978-0805382914
See also
WKB approximation
In mathematical physics, the WKB approximation or WKB method is a method for finding approximate solutions to linear partial differential equations with spatially varying coefficients. It is typically used for a semiclassical calculation in quantum mechanics in which the wavefunction is recast as an exponential function, semiclassically expanded, and then either the amplitude or the phase is taken to be slowly changing.
The name is an acronym for Wentzel–Kramers–Brillouin. It is also known as the LG or Liouville–Green method. Other often-used acronyms for the method include JWKB and WKBJ, where the "J" stands for Jeffreys.
Brief history
This method is named after physicists Wentzel, Kramers, and Brillouin, who all developed it in 1926. In 1923, mathematician Harold Jeffreys had developed a general method of approximating solutions to linear, second-order differential equations, which includes the Schrödinger equation. Even though the Schrödinger equation was developed two years later, Wentzel, Kramers, and Brillouin were apparently unaware of this earlier work, so Jeffreys is often not given credit. Early texts in quantum mechanics contain any number of combinations of their initials, including WBK, BWK, WKBJ, JWKB and BWKJ.
Earlier references to the method are: Carlini in 1817, Liouville in 1837, Green in 1837, Rayleigh in 1912 and Gans in 1915. Liouville and Green may be said to have founded the method in 1837, and it is also commonly referred to as the Liouville–Green or LG method.[1] [2]
The important contribution of Jeffreys, Wentzel, Kramers and Brillouin to the method was the inclusion of the treatment of turning points, connecting the evanescent and oscillatory solutions at either side of the turning point. For example, this may occur in the Schrödinger equation, due to a potential energy hill.
WKB method
Generally, WKB theory is a method for approximating the solution of a differential equation whose highest derivative is multiplied by a small parameter ε. The method of approximation is as follows:
For a differential equation
$$\epsilon \frac{\mathrm{d}^ny}{\mathrm{d}x^n} + a(x)\frac{\mathrm{d}^{n-1}y}{\mathrm{d}x^{n-1}} + \cdots + k(x)\frac{\mathrm{d}y}{\mathrm{d}x} + m(x)y = 0,$$
assume a solution of the form of an asymptotic series expansion
$$y(x) \sim \exp\left[\frac{1}{\delta}\sum_{n=0}^{\infty}\delta^n S_n(x)\right]$$
in the limit $\delta \rightarrow 0$. Substitution of the above ansatz into the differential equation and canceling out the exponential terms allows one to solve for an arbitrary number of terms $S_n(x)$ in the expansion. WKB theory is a special case of multiple scale analysis.[3][4][5]
An example
Consider the second-order homogeneous linear differential equation
$$\epsilon^2 \frac{d^2 y}{dx^2} = Q(x) y,$$
where $Q(x) \neq 0$. Substituting
$$y(x) = \exp\left[\frac{1}{\delta}\sum_{n=0}^\infty \delta^n S_n(x)\right]$$
results in the equation
$$\epsilon^2\left[\frac{1}{\delta^2}\left(\sum_{n=0}^\infty \delta^n S_n'\right)^2 + \frac{1}{\delta}\sum_{n=0}^{\infty}\delta^n S_n''\right] = Q(x).$$
To leading order (assuming, for the moment, the series will be asymptotically consistent) the above can be approximated as
$$\frac{\epsilon^2}{\delta^2}S_0'^2 + \frac{2\epsilon^2}{\delta}S_0'S_1' + \frac{\epsilon^2}{\delta}S_0'' = Q(x).$$
In the limit $\delta \rightarrow 0$, the dominant balance is given by
$$\frac{\epsilon^2}{\delta^2}S_0'^2 \sim Q(x).$$
So δ is proportional to ε. Setting them equal and comparing powers renders
$$\epsilon^0: \quad S_0'^2 = Q(x),$$
which can be recognized as the Eikonal equation, with solution
$$S_0(x) = \pm \int_{x_0}^x \sqrt{Q(t)}\,dt.$$
Looking at first-order powers of $\epsilon$ gives
$$\epsilon^1: \quad 2S_0'S_1' + S_0'' = 0.$$
This is the unidimensional transport equation, having the solution
$$S_1(x) = -\frac{1}{4}\ln Q(x) + k_1,$$
where $k_1$ is an arbitrary constant. We now have a pair of approximations to the system (a pair because $S_0$ can take two signs); the first-order WKB approximation will be a linear combination of the two:
$$y(x) \approx c_1 Q^{-\frac{1}{4}}(x)\exp\left[\frac{1}{\epsilon}\int_{x_0}^x\sqrt{Q(t)}\,dt\right] + c_2 Q^{-\frac{1}{4}}(x)\exp\left[-\frac{1}{\epsilon}\int_{x_0}^x\sqrt{Q(t)}\,dt\right].$$
Higher-order terms can be obtained by looking at equations for higher powers of δ. Explicitly,
$$2S_0'S_n' + S''_{n-1} + \sum_{j=1}^{n-1}S'_jS'_{n-j} = 0$$
for $n \geq 2$. This example comes from Bender and Orszag's textbook (see references).
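A numerical check of this first-order approximation, choosing $Q(x) = (1+x^2)^2$ so that $\int\sqrt{Q}\,dt = x + x^3/3$ has a closed form (the values of $\epsilon$, the interval, and the tolerances are arbitrary choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.05
Q = lambda x: (1.0 + x**2) ** 2
# Growing WKB branch: y ≈ Q^{-1/4} exp((1/eps) * (x + x^3/3))
wkb = lambda x: Q(x) ** -0.25 * np.exp((x + x**3 / 3) / eps)

# Initial conditions at x = 0 matched to the WKB branch:
# Q(0) = 1 and Q'(0) = 0, so y(0) = 1 and y'(0) = 1/eps.
sol = solve_ivp(lambda x, y: [y[1], Q(x) * y[0] / eps**2],
                (0.0, 1.0), [1.0, 1.0 / eps], rtol=1e-10, atol=1e-12)

x_end = sol.t[-1]
print(sol.y[0][-1] / wkb(x_end))   # ≈ 1, up to a relative error of order eps
```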
Precision of the asymptotic series
The asymptotic series for $y(x)$ is usually a divergent series whose general term $\delta^n S_n(x)$ starts to increase after a certain value $n = n_\max$. Therefore the smallest error achieved by the WKB method is at best of the order of the last included term. For the equation
$$\epsilon^2 \frac{\mathrm{d}^2 y}{\mathrm{d}x^2} = Q(x) y,$$
with $Q(x) < 0$ an analytic function, the value $n_\max$ and the magnitude of the last term can be estimated as follows:[6]
$$n_\max \approx 2\epsilon^{-1} \left| \int_{x_0}^{x_{\ast}} dz\,\sqrt{-Q(z)} \right|,$$
$$\delta^{n_\max}S_{n_\max}(x_0) \approx \sqrt{\frac{2\pi}{n_\max}} \exp[-n_\max],$$
where $x_0$ is the point at which $y(x_0)$ needs to be evaluated and $x_{\ast}$ is the (complex) turning point where $Q(x_{\ast}) = 0$, closest to $x = x_0$. The number $n_\max$ can be interpreted as the number of oscillations between $x_0$ and the closest turning point. If $\epsilon^{-1}Q(x)$ is a slowly changing function,
$$\epsilon\left| \frac{dQ}{dx} \right| \ll Q^2,$$
the number $n_\max$ will be large, and the minimum error of the asymptotic series will be exponentially small.
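As a worked example of these estimates (the choice $Q(x) = -(1+x^2)$ with $x_0 = 0$ is illustrative): the nearest complex turning point is $x_\ast = i$, and $\left|\int_0^i \sqrt{-Q(z)}\,dz\right| = \pi/4$, so:

```python
import numpy as np

eps = 0.01
n_max = 2 * (np.pi / 4) / eps                        # ≈ 157 retained terms
min_error = np.sqrt(2 * np.pi / n_max) * np.exp(-n_max)
print(n_max, min_error)   # the optimal truncation error is ~1e-69: tiny
```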
Application to Schrödinger equation
The one-dimensional, time-independent Schrödinger equation is
$$-\frac{\hbar^2}{2m} \frac{\mathrm{d}^2}{\mathrm{d}x^2} \Psi(x) + V(x) \Psi(x) = E \Psi(x),$$
which can be rewritten as
$$\frac{\mathrm{d}^2}{\mathrm{d}x^2} \Psi(x) = \frac{2m}{\hbar^2} \left( V(x) - E \right) \Psi(x).$$
The wavefunction can be rewritten as the exponential of another function Φ (which is closely related to the action), which could be complex:
$$\Psi(x) = e^{\Phi(x)},$$
so that
$$\Phi''(x) + \left[\Phi'(x)\right]^2 = \frac{2m}{\hbar^2} \left( V(x) - E \right),$$
where $\Phi'$ indicates the derivative of $\Phi$ with respect to x. The derivative $\Phi'(x)$ can be separated into real and imaginary parts by introducing the real functions A and B:
$$\Phi'(x) = A(x) + i B(x).$$
The amplitude of the wavefunction is then $\exp\left[\int^x A(x')\,dx'\right]$, while the phase is $\int^x B(x')\,dx'$. The real and imaginary parts of the Schrödinger equation then become
$$A'(x) + A(x)^2 - B(x)^2 = \frac{2m}{\hbar^2} \left( V(x) - E \right),$$
$$B'(x) + 2 A(x) B(x) = 0.$$
Next, the semiclassical approximation is invoked. This means that each function is expanded as a power series in $\hbar$. From the equations it can be seen that the power series must start with at least an order of $\hbar^{-1}$ to satisfy the real part of the equation. In order to achieve a good classical limit, it is necessary to start with as high a power of Planck's constant as possible:
$$A(x) = \frac{1}{\hbar} \sum_{n=0}^\infty \hbar^n A_n(x),$$
$$B(x) = \frac{1}{\hbar} \sum_{n=0}^\infty \hbar^n B_n(x).$$
To the zeroth order in this expansion, the conditions on A and B can be written:
$$A_0(x)^2 - B_0(x)^2 = 2m \left( V(x) - E \right),$$
$$A_0(x) B_0(x) = 0.$$
If the amplitude varies sufficiently slowly as compared to the phase ($A_0(x) = 0$), it follows that
$$B_0(x) = \pm \sqrt{ 2m \left( E - V(x) \right) },$$
which is only valid when the total energy is greater than the potential energy, as is always the case in classical motion. After the same procedure on the next order of the expansion it follows that
$$\Psi(x) \approx C_0\, \frac{ e^{i \int \mathrm{d}x \sqrt{\frac{2m}{\hbar^2} \left( E - V(x) \right)} + \theta} }{\sqrt[4]{\frac{2m}{\hbar^2} \left( E - V(x) \right)}}.$$
On the other hand, if it is the phase that varies slowly (as compared to the amplitude), i.e. $B_0(x) = 0$, then
$$A_0(x) = \pm \sqrt{ 2m \left( V(x) - E \right) },$$
which is only valid when the potential energy is greater than the total energy (the regime in which quantum tunneling occurs). Finding the next order of the expansion yields
$$\Psi(x) \approx \frac{ C_{+} e^{+\int \mathrm{d}x \sqrt{\frac{2m}{\hbar^2} \left( V(x) - E \right)}} + C_{-} e^{-\int \mathrm{d}x \sqrt{\frac{2m}{\hbar^2} \left( V(x) - E \right)}}}{\sqrt[4]{\frac{2m}{\hbar^2} \left( V(x) - E \right)}}.$$
It is apparent from the denominator, that both of these approximate solutions become singular near the classical turning point, where E = V(x), and cannot be valid. These are the approximate solutions away from the potential hill and beneath the potential hill. Away from the potential hill, the particle acts similarly to a free wave—the wave-function is oscillating. Beneath the potential hill, the particle undergoes exponential changes in amplitude.
To complete the derivation, the approximate solutions must be found everywhere and their coefficients matched to make a global approximate solution. The approximate solution near the classical turning points $E = V(x)$ is yet to be found.
For a classical turning point $x_1$ and close to $E = V(x_1)$, the term $\frac{2m}{\hbar^2}\left(V(x)-E\right)$ can be expanded in a power series:
$$\frac{2m}{\hbar^2}\left(V(x)-E\right) = U_1 \cdot (x - x_1) + U_2 \cdot (x - x_1)^2 + \cdots.$$
To first order, one finds
$$\frac{\mathrm{d}^2}{\mathrm{d}x^2} \Psi(x) = U_1 \cdot (x - x_1) \cdot \Psi(x).$$
This differential equation is known as the Airy equation, and the solution may be written in terms of Airy functions:
$$\Psi(x) = C_A\, \textrm{Ai}\left( \sqrt[3]{U_1} \cdot (x - x_1) \right) + C_B\, \textrm{Bi}\left( \sqrt[3]{U_1} \cdot (x - x_1) \right).$$
This solution should connect the far-away and beneath solutions. Given the 2 coefficients on one side of the classical turning point, the 2 coefficients on the other side of the classical turning point can be determined by using this local solution to connect them. Thus, a relationship between $C_0, \theta$ and $C_{+}, C_{-}$ can be found.
Fortunately the Airy functions will asymptote into sine, cosine and exponential functions in the proper limits. The relationship can be found to be as follows (often referred to as "connection formulas"):
$$C_{+} = + \frac{1}{2} C_0 \cos\left(\theta - \frac{\pi}{4}\right),$$
$$C_{-} = - \frac{1}{2} C_0 \sin\left(\theta - \frac{\pi}{4}\right).$$
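The matching works because the Airy function's large-argument asymptotics reduce exactly to the WKB forms on either side of the turning point. A quick check with SciPy (the sample points $z = \pm 5$ are arbitrary):

```python
import numpy as np
from scipy.special import airy

# Ai(z) ~ exp(-2/3 z^{3/2}) / (2 sqrt(pi) z^{1/4})           for z >> 0
# Ai(z) ~ sin(2/3 |z|^{3/2} + pi/4) / (sqrt(pi) |z|^{1/4})   for z << 0
for z in (5.0, -5.0):
    Ai = airy(z)[0]                  # airy() returns (Ai, Ai', Bi, Bi')
    if z > 0:
        approx = np.exp(-2/3 * z**1.5) / (2 * np.sqrt(np.pi) * z**0.25)
    else:
        zeta = 2/3 * (-z) ** 1.5
        approx = np.sin(zeta + np.pi / 4) / (np.sqrt(np.pi) * (-z) ** 0.25)
    print(z, Ai, approx)             # the pairs agree to better than 1%
```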
Now the global (approximate) solutions can be constructed. For an estimate of the errors in this approximation, see Chapter 15 of Hall.
See also
1. ^ Adrian E. Gill (1982). Atmosphere-ocean dynamics. Academic Press. p. 297. ISBN 978-0-12-283522-3.
2. ^ Renato Spigler and Marco Vianello (1998). "A Survey on the Liouville–Green (WKB) approximation for linear difference equations of the second order". In Saber Elaydi, I. Győri, and G. E. Ladas. Advances in difference equations: proceedings of the Second International Conference on Difference Equations : Veszprém, Hungary, August 7–11, 1995. CRC Press. p. 567. ISBN 978-90-5699-521-8.
3. ^ Filippi, Paul (1999). Acoustics: basic physics, theory and methods. Academic Press. p. 171. ISBN 978-0-12-256190-0.
4. ^ Kevorkian, J.; Cole, J. D. (1996). Multiple scale and singular perturbation methods. Springer. ISBN 0-387-94202-5.
5. ^ Bender, C.M.; Orszag, S.A. (1999). Advanced mathematical methods for scientists and engineers. Springer. pp. 549–568. ISBN 0-387-98931-5.
6. ^ Winitzki, S. (2005). "Cosmological particle production and the precision of the WKB approximation". Phys. Rev. D 72: 104011, 14 pp. arXiv:gr-qc/0510001. Bibcode:2005PhRvD..72j4011W. doi:10.1103/PhysRevD.72.104011.
Modern references
• Bender, Carl; Orszag, Steven (1978). Advanced Mathematical Methods for Scientists and Engineers. McGraw-Hill. ISBN 0-07-004452-X.
• Child, M. S. (1991). Semiclassical mechanics with molecular applications. Oxford: Clarendon Press. ISBN 0-19-855654-3.
• Hall, B.C. (2013). Quantum Theory for Mathematicians. Springer.
• Liboff, Richard L. (2003). Introductory Quantum Mechanics (4th ed.). Addison-Wesley. ISBN 0-8053-8714-5.
• Olver, Frank J. W. (1974). Asymptotics and Special Functions. Academic Press. ISBN 0-12-525850-X.
• Razavy, Mohsen (2003). Quantum Theory of Tunneling. World Scientific. ISBN 981-238-019-1.
• Sakurai, J. J. (1993). Modern Quantum Mechanics. Addison-Wesley. ISBN 0-201-53929-2.
Historical references
External links
• Fitzpatrick, Richard (2002). "The W.K.B. Approximation". (An application of the WKB approximation to the scattering of radio waves from the ionosphere.)
Welcome to Fuwanovel Forums
Our community blogs
1. My upcoming project, The Last Birdling, alternates between Bimonia and Tayo’s perspectives. Today, I would like to explore some of the details behind this system. Your feedback is most welcome, and if you know of other visual novels that handle multiple viewpoints in a fresh way, please let me know!
The Last Birdling is written from a first person perspective, so the viewpoint character’s personality will come through in everything she observes. Her inner thoughts, reactions, they are all distinctly her own. On paper, or on screen in our case, this means different word choices as well as sentence structures.
That said, I dislike making characters act a certain way just to showcase their uniqueness. If you study writing books, you will often be advised to make every character sound different, to a point where you can tell who is speaking even without dialogue tags. Now, imagine a group of your friends. If you closed your eyes, and they all had the same voice, would you be able to tell them apart? I for one would have a hard time. For me, being truthful takes priority above all rules.
Since we are in a visual medium, we may as well take advantage of that when it comes to perspective shifts. Notice how the UI changes to green in the screenshot below:
This means we have switched to Tayo’s perspective in chapter two. The same concept applies to decision points:
In the world of traditional novels, shifting perspectives mid-scene is ill advised. To avoid disorienting readers, we want to jump into different heads during a scene or chapter break. The Last Birdling does not feature chapter titles, but there are end of scene cards to signal a perspective change:
This introduces several concerns in terms of programming. For instance, if players return to the previous scene, will the UI switch back as expected?
What about loading another saved game mid-session? I hope all the common scenarios have been addressed. We will see if players spot any edge cases after the game’s release.
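One way to handle this kind of bookkeeping is to keep the current viewpoint inside the same state object the engine saves and rolls back, so loading a save or rewinding a scene automatically restores the matching UI theme. A minimal sketch under that assumption, with purely hypothetical names (this is not The Last Birdling's actual code):

```python
UI_THEMES = {"bimonia": "red", "tayo": "green"}

class GameState:
    """State the engine persists in saves and rollback snapshots."""
    def __init__(self):
        self.perspective = "bimonia"   # travels with every save/rollback

    def ui_theme(self):
        return UI_THEMES[self.perspective]

state = GameState()
state.perspective = "tayo"     # a scene script switches the viewpoint
print(state.ui_theme())        # "green"; also correct after load or rollback
```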
So why would we alternate between perspectives? Once readers recognize the pattern, it is no longer something they need to worry about, which makes for an ideal reading experience. If the view changes went as follows:
Bimonia, Bimonia, Tayo, Tayo, Tayo, Tayo, Bimonia, Tayo, Bimonia, Bimonia
And so on, gamers would stumble every time we reached a new scene. We want to set up roadblocks for our characters, not our players.
To round things off, The Last Birdling follows the journeys of Bimonia and Tayo from childhood to adolescence. By the time these two reach their teens, they have suffered through an awful lot. To reflect both their physical and mental shifts, the UI will also update accordingly:
Find out more about The Last Birdling via:
The website contains a demo version, which illustrates how the perspective changes work. Please feel free to have a look.
To finish things off, I am happy to say The Last Birdling has been approved for publication on Steam several hours ago. Thank you so much for your support!
Hope to see you again soon :vinty:.
2. Now, I'm well aware that most people don't play VNs twice. Visual novels are a static medium, similar to one of the old 'choose your own adventure' novels in interactive terms, so this is only natural. To be blunt, the main reason I go back and play old VNs is because nothing is satisfying one of my itches amongst the more recent releases. That said, there are some pieces of advice I can give for those who habitually re-read their favorite books and rewatch their favorite anime.
1- Wait long enough for your memories to fade: The human brain has a tendency to 'compress' old memories, and it is a rare person who, through training or at birth, possesses an eidetic memory. As a result, details do fade over a period of time that tends to vary greatly with the individual. In my case, the base runs from a year to a year and a half for VNs that made a good impression and four months for ones that didn't.
2- Pick your paths: When it comes down to it, most of us are going back for a particular heroine or path. We aren't that interested in rehashing the heroine paths that we didn't find that interesting, and this is only natural. Sagaoz and other sites with complete saves can let you go to the true ending without bothering with the heroine endings, if that is what you want.
3- With gameplay hybrids, make full use of your save data: Most VN hybrids have NG+ built in, and as a result, you can breeze through the game portions of most of them rather easily by simply using your own save data. This is immensely helpful in games with a particularly tedious bent (like srpgs), where re-leveling would take forever.
4- Limit replays to your favorites: While I occasionally get a junk-food-like craving for something crappy that nonetheless remained in memory, in most cases I only really enjoy replaying my favorite VNs (in my case, a list of about fifty).
5- Nakige and utsuge work, but pure charage don't: I'm not kidding. Pure charage are agonizing to replay, no matter how long after you go back. I can still cry for the sad scenes in a Key game, but if you asked me to replay anything by Feng or most games by Navel, I'd rather cut off my balls and hang them out to dry on my windowsill.
6- If you fall asleep, just stop- In my experience, nothing is worse than getting bored of your favorites and then forcing yourself to continue. If you can't pay attention or if you suddenly lose interest, it is time to stop. If you force yourself to continue, there is a distinct possibility you will ruin your own impressions of the game in question for future playthroughs.
7- Stay away from pure mindfucks- I shouldn't have to explain this, but I will... the value of a mindfuck is in its surprise. Games centered on a mindfuck, with the sole purpose of trying to fool you into thinking one thing while something else is going on, are terrible for VN replays. This is because they are probably the most spoiler-vulnerable genre out there.
8- Highly emotional or intellectually stimulating works will often gain more depth: This isn't a fanciful statement. In my experience, a VN that is trying to get across something else besides pure story or something that is trying to make you cry will inevitably make for a better replay than something that is just shoving sex, romance, and comedy in your face. I could probably replay Houkago no Futekikakusha, for instance, three or four times in a year without the emotional aspects fading significantly, and I find new things out about Dies Irae, Vermilion, and Devils Devel Concept with each playthrough.
9- Infodumpers take longer to recover from: Bradyon Veda, I/O, Muramasa, etc... VNs that infodump seriously as part of the storytelling tend to leave a lot of info inside your brain. As a result, it takes significantly longer for your memories of them to fully 'compress'. Don't expect to be able to enjoy anything with frequent infodumps at less than one and a half times that of any of your other favorites.
10- A good night's sleep is your friend: Why am I emphasizing this? Because to get the best out of a truly great VN, a well-rested body and brain is necessary. Nothing kills enjoyment of a good story like being unable to grasp it due to brain-numbness from sleep deprivation.
Hope yall enjoyed my little lecture, lol.
Often when people talk about OELVNS, they fail to mention the otome games. I don't know if this is because the majority of VN readers are males who don't follow the OELVN otome scene, but either way, it's quite a shame. Many talk about how OELVNS follow Japanese tropes and try to be too Japanese. With otome games, I can sometimes see this due to the anime-inspired art, but there are really a lot of notable differences.
For example, OELVN otome MCs tend to be more distinct. They certainly have distinct personalities and a lot of character- something I prefer since I don't self-insert. The MC of the recently released free otoge Cinderella Phenomenon for example, Lucette, is a cold-hearted, mean-spirited person. She is someone who is certainly very flawed but is also interesting as a person and a character. Throughout the story, it is evident that it is her story and a lot about her background is revealed. The characters do adhere to certain tropes but I didn't find that to be detrimental to the overall experience. Cinderella Phenomenon has a decent length and story, making it ideal for anyone who wants to try an indie otome game in English. It is also relatively high-quality compared to most indies.
Another free indie otome is Lads in Distress. It has a finished NaNoRenO version with 170000+ words and three routes. It will have a more full-fleshed out version with longer routes and an additional 3 routes. The premise is centered on genderbent fairytale princesses with problems which the MC, Princess Charming, must work on. It has pretty decent art and story but it's really the characters who shine in this game. I loved how Princess Charming interacts with her love interests and her antics are amusing and fun to read. It is pretty lighthearted, although I suspect it won't be so when the full game comes out.
Mystic Destinies: Serendipity of Aeons is a commercial otome game available on Steam. It follows the pay-per-route format one sees in mobile otome games. Normally, I wouldn't touch it until it's complete and everything is available at once, but I caved in the end. My decision did not disappoint me as the writing, art, CGs, and music are of a higher quality than those mobile otoges I used to purchase. It's even more astounding when those mobages are produced by well-established companies while MD:SOA is produced by an indie team. I would recommend this to those who play mobages and are just getting into PC otome. Mystic destinies has a decent-length for its price, a well-written story, a good cast of characters, and excellent art. The music is also great. The money also goes towards developing the game as the routes are released after it's developed. The writing itself is quite good in contrast to many indies.
A cursory glance at the Lemmasoft forums will reveal a lot of otome games in development, which is why I believe that it's a shame that it's so often overlooked. Two otome games in development I want to highlight are Changeling and The Pirate Mermaid. Changeling, developed by Steamberry studios, is an otome game focused on the supernatural. It has a cast of characters based on folklore and mythology. From what I can see from the demo, the lore is also well thought out. It has recently been funded through Kickstarter and the dev regularly updates through tumblr. The artstyle is also very western and might turn off some people. I don't mind the art, however, as I am more drawn to the story. It follows Nora, who due to some strange events in her childhood ends up being estranged from her twin brother. She moves back to a small town and meets the love interests, who are all connected to the supernatural world in one way or another. The writing in this game, as with MD:SOA, is rather well-done.
What I'm really excited about, however, is The Pirate Mermaid. I played the demo a long time ago and then forgot about it. However, I started checking out their blog again recently and they have a Steam Greenlight campaign going on. The MC is a pirate captain who, abandoned by her crew, turns into a mermaid in search of mermaid treasure. I love her character design personally; it's rare to see an otome MC who looks like me. I don't self-insert, but to me (and I believe, some otoge fans out there), the representation means a lot. That aside, I'm impressed with the production values so far. It seems that it will feature a Ren'Py 3d camera and something incredibly rare: English and Japanese VAs. I will probably play the game with Japanese voice acting as I like it more based on what I've heard so far. The art is also done well and it has a story I'm interested in.
In conclusion, a big part of the OELVN scene are otome games and I think they deserve to be more represented when it comes to EVN discussions. The indie otome scene has been continuously growing these past few years and I'm excited to see the direction they're going. It seems that as time goes on, the writing does get better as the developers gain experience. The art may be subjective but I enjoy seeing both western-style art and anime-style art. I don't think EVN devs should be criticized for drawing what they can and want to draw. In the end, as the community grows, more EVN developers will get serious and release quality VNs. I'm excited as I've been watching it for two years now and I can see more groups starting to be serious and working on commercial titles that shows serious effort.
3. Visual Novel Translation Status (04/22/2017)
Since we had Nishikino Maki as one of the image headers there, I decided to make it look like Maki somehow has a reverse harem of ayakashi from Ayakashi Gohan lol (I know the ayakashi actually have one girl that they like, though). Well, sorry if it's quite cheesy, and let me say welcome to this week's VNTS Review. As for this week, compared to last week we had more updates from Mangagamer and some releases from both Sekai and fan translation, although if I may comment here, the releases still aren't quite big enough for me. For more detail, let's see what kind of updates we had this week, shall we?
If some people here are waiting for the Dies Irae release (I bet there are a lot here), be happy, because Light has announced when they will release the digital version for it: May 31st. Keep in mind that this is because of a delay, though; Light originally planned to release it by the middle of May. There's also the matter of the physical edition, which will be delivered on June 30th. These dates may change in the future, but for now I'm content that we have a release date. There's also Lump of Sugar, which plans to release another of their VNs overseas (Magical Charming); it has already been Greenlit, so of course it'll be released on Steam. No idea how much of the H content will be cut for now, seeing that Steam currently forbids games with explicit sexual content, although my first impression is that it's a fairly average VN, looking at the VNDB score. We'll see about Magical Charming later. Oh, and by the way, Fruitbat announced that the Kickstarter for Chuusotsu will begin on the 25th (good luck), and I'll report the progress of the Kickstarter here.
By the way, if there are people looking forward to Nekonin, be glad, because they finally released it. As for the VN itself, well, it's quite interesting if one is interested in catgirls and ninja, I guess. From what I know, it looks quite mediocre judging from both the EGS and VNDB average scores (both are at around 68). But then again, maybe some people here will find the good side of Nekonin, so feel free to try it out. Oh, and other than the Nekonin release, this week Sekai had only one update: Bokukotsu is at 57.27% translated, which is quite a big jump from last week. Other than those two pieces of news, there's still no word from Sekai in regard to Rakuen yet, so I guess it's probably going to be delayed again.
As usual, i.e. biweekly, this week we got another batch of updates from Mangagamer, which of course is good. The updates go like this: Sorcery Joker is at 87% translated, Hapymaher is at 74% translated (69% edited), Trinoline is at 11.5% translated (2% edited), Boukaku is fully edited, Naked Butler is at 63% retranslated, Hashihime is at 39% translated (3% edited), the Fata Morgana fandisc is at 44% edited, and Dal Segno is in testing. There are also some secret project updates: the 1st is at 88% translated, the 3rd at 43.5% translated (43% edited), the 4th at 97% translated (64% edited), and the 7th at 54% translated. There's also the matter of Imouto Paradise 2, for which our Doddler is currently handling the porting; he also plans to work on Bokuten next. For one last remark here, I wonder if they are about to prepare the Dal Segno release in the near future, although looking at Mangagamer's release pattern it'll probably be within this year (hopefully).
For the fan translation section, there were two nukige releases. One of those was Maki Fes (the other was a Black Lilith VN), which tells the story of how a very bland MC tries to build a friendship with Maki in the hope of becoming more intimate with her (of course Maki will like the MC lol). To tell the truth, the story is just a lemon (18+ fanfic) for Maki, so as a VN it's nothing special. The staff who worked on Maki Fes were quite interesting, though (there's Zakamutt and Asonn), and by the way, congratulations on the Maki Fes release.
As for the updates, first of all we finally have Majokoi continuing its translation progress now that Maki Fes is finished (the new translator also worked on Maki Fes), and for the update here we have Majokoi at 73.5% translated and 19% TLC-ed (good luck with the translation there). For the rest of the updates: Loverable finally reached the 60% translated mark (along with 33.45% TLC-ed and 18.5% edited); the Koiken Otome fandisc has Shiho's route at 15% translated; and Eustia is at 7.5% translated (in more detail, the prologue is at 81.44% translated and halfway (50.26%) edited); by the way, they plan to release a partial patch that covers up to Eris's arc. As for HatsuKoi, maybe they'll release Maya's patch later or tomorrow, but one thing is for sure: I'll be waiting for the update (and will report it here, of course, along with their next project).
That's all for this week, and see you next week.
PS - Right now Tsurezure has released Maya's patch, which of course translates Maya's route and all the other heroines' routes except Yukino's. As for the progress, Yukino's route is at 35% translated, and overall translation has already passed the 90% mark (91.47% translated). As for their next project, it's Tsujidou-san no Jun'ai Road, which was called some sort of Majikoi successor iirc. There was also a translation project for it here, but it looks like it's been dead for years. Look forward to it if you're a fan of Tsujidou, and I'll definitely report the progress for Tsujidou here.
4. This will actually be far too brief to make a blog post around, but alas...
What is "overrated"? Many things, apparently, but so far as I can tell the word itself is used to describe something that too many people like, or that is well liked for little reason.
This is an inherently subjective word. Infuriatingly so. At its most basic, it means "too many people have a different opinion than mine", which if phrased that way means nothing. Yet so many people employ the word. There is no such thing as "other people like X too much". It is precisely because people have different opinions that some things are successes and others are not, and if something is a success, then it is, for that very reason, not overrated -- if people like something, that something must've done something to deserve it, whether you understand what that is or not, because thousands of people don't like something "just because". And I suppose that's just where the issue lies. Once again, people have different opinions. It's that simple.
Such a shame it doesn't stop me from hating the word. It has no purpose. It's obvious and always implied whenever anyone has an opinion, and it only serves to bring a bad connotation and disagree with other peoples' opinions. "Underrated" at least has a purpose (in this context. Other contexts, more similar to "underestimate" are a different, unrelated, story); not to say people don't "like something enough" or people "dislike something too much", which is also another way to spell "too many people have a different opinion from mine", but to say that not enough people know of a particular thing. "Not enough people know about X", or "Not enough people gave X a chance".
You can use "overrated" properly. "The importance of X is overrated" can be used to describe situations where misinformation is popularly spread. But therein lies the key. Using it in "subjective" matters, in matters of opinion, means little. It's a way of complaining that other people have different opinions. And I dislike that, so I complain about it on forums online.
This post was sponsored by that thread asking if CLANNAD deserves its popularity or if it's overrated. I'd say when something makes you ask if it is "overrated", that's because it is popular to a point where it must have done some things the proletariat likes, and therefore can't be overrated.
Of course, that doesn't remove the merit of discussing what is "well done" or "badly done" according to other people.
5. 19 titles selected and 22 smaller ones in the year 1989. Same as in 1988, more or less. A good half of those games were likable, but I felt that the quality level was especially high with three games - Hare Nochi Oosawagi!, Tawhid, Soft de Hard na Monogatari 2. Still Hare Nochi was too light-minded, Soft de Hard too difficult and earthly (I'd rather read it as manga tbh) while Tawhid was perfect and it gets the VN of the year 1989 title.
First of all, let's try to distinguish what features year 1989 brought:
• the number of games with multiple endings greatly increased
• while still being interactive adventures, VNs started to have simplified command selection mechanics
• simplified game mechanics led to higher game volumes and thus higher satisfaction with the story
• most of year 1989 games are story-driven and it's the golden age for adventure theme in VNs
• proportion of rpg games and elements got higher
• school setting games got more numerous
• science fiction clearly lost to mystery
Great year, overall. I like it that there are few formal detective investigations and more informal ones. Great animations and voices start to show up. Graphics finally got enjoyable with 90% of games and it's still far till Windows era. Ok, time to elaborate.
1. Angel Hearts (VNDB)
The secret organization of the school is feeding on the students. Main character gets a request from a newspaper department manager to destroy that organization and rescue newspaper members that never returned from this task.
First "elf" game (technically, it's third already, but I condemned first two as eroge) and it's sitting in oblivion. Time to fix that!
It's an rpg+stealth. All the enemies are girls and hero uses attacks like whispers, kisses and chest massage. Since what you need the most is full HP, you're going to run back to school infirmary after each fight.
That's definitely an erotic game. So what does it even do here? Just to show that it's already elf's third game and that it's erotica-centered, monotonous, dull, and has bad visuals. CG is usually quite small, like a quarter of the screen, so it's not really that erotic. The first 7 games by elf were erotic crap, but the later ones had a tilt toward RPG, which probably led to Dragon Knight.
2. Burning Point (VNDB)
There's a bit of English info here, and since it's a detective story I'm not anxious to double-check. But since it's Enix, it should be a good story rather than tons of footwork.
3. Cobra: Kokuryuuou no Densetsu (VNDB)
If you like space anime then you already know what to expect. Mad action and pure awesomeness. The game version had to introduce gameplay mechanics to torture players, but it's still a good animated story. A couple of reviews I found are located here
4. Destruction Joukan (VNDB)
Soft Studio Wing here. Epic story based on studio's very first black and white series. So epic that it could not fit into one game, so it's the first part of the dilogy.
A TV broadcast was interrupted by an unusual voice saying "Look at the sky at midnight". At midnight a sky object resembling an angered human face appeared on the horizon, and people who saw it dropped down with their mouths full of blood, but got up with superpowers. Darkness is coming, and espers must become the warriors of light to stop it.
You play as a reporter tasked to investigate the appearance of the sky object that turned out to be absent on cameras and photos made at midnight.
Such an atmosphere can only be attained by Studio Wing. Part one is the introductory part. Tension rises, and little by little the seeds of darkness embrace the city. Let's wait for the culmination in part 2.
5. Dororo ~Jigoku Emaki no Shou~ (VNDB)
Well, the game basically retells the anime story with a different ending. And that anime is a black and white one from 1969, hardcore stuff. The only major difference is the ending: unlike the anime's, the game's is colorful, and it preserved the unique graphics of the original. A good action shounen story, but sadly I can't stand shounens.
6. Dragon Knight (VNDB)
Game was quite a beautiful RPG with a charismatic hero which was still new for the time. We've got a pretty thorough english review on it here
7. Famicom Tantei Kurabu Part II: Ushiro ni Tatsu Shoujo (VNDB)
For those still interested in detective VNs link to reviews
8. Gaudi ~Barcelona no Kaze~ (VNDB)
While the whole world is enjoying the 1992 Olympics in Barcelona, Catalonian political unrest grows in the very heart of the city. You are a consultant sent to settle matters with local separatists.
The setting was really remote for most gamers. The Barcelona Olympics, terrorism, and tackling a separatist movement attracted little interest. The main hero being an "information consultant" also raised lots of questions: you're neither a detective nor a fighter, but rather a negotiator of some kind who must rely mostly on information manipulation. Depending on the choices made, there were multiple endings. The graphics weren't good enough to depict the city's scenery well. A very unusual game that could not find its audience.
9. Hare Nochi Oosawagi! (VNDB)
A formal request was passed from school principal to you as the student council president to investigate ghost matter in school. As you question victim highschool girls, a connection with a mysterious incident that happened 300 years ago comes to light...
Although the system used the command-selection formula, there were very few flags to trigger, so the pace was very good. You question cute witness girls, get close to the mystery, get an ero scene late into the game, and then the game literally says "to be continued".
The girls were all quite different, so this work can be described as an early charage. Although the investigation was meant to be conducted in a serious manner, the atmosphere was rather that of a light comedy.
Anyway, this game made Cocktail Soft a serious player on the eroge market and encouraged it to work on similar games.
10. J.B. Harold no Jikenbo 3 - D.C. Connection (VNDB)
Third part in the series. Will get some review excerpts here
11. Kami no Machi (VNDB)
Kannai town in Yokohama became the ground of a major incident and was turned into a restricted zone. The lawless city of "Agarta" rose in its place. Two years after the incident, a young man named Yuuki sets out for Agartha in search of his lover, who went missing in the incident.
You gather ammo for your weapons and break through monsters to get into the maze-like complex with a couple of comrades. It's a solid animated story with the loss of comrades, the joy of reunion, and the mystery of the major incident. It's said that the game was in development for 3 years. It's quite sad that it was not especially popular.
12. Lipstick. ADV 2 (VNDB)
Gorou gets a request to deliver a suitcase to Osaka. However, a bomb is found inside and Gorou charged with being a terrorist.
So the scenario writer changed. But surprisingly enough, the charm of the characters and the plot remained at a high level. Whether Hiruda left a framework for it or the remaining staff was simply intact, the game is just as funny. The same heroine accompanies Gorou, with the sole change that she is now a college student. It suits her less, so in part 3 she's going to be a high school girl again. Still, since it's part 2 with relatively the same cast, the degree of freshness is low; it's no revolutionary work this time, but a very good one still.
13. Marusa no Onna (VNDB)
Wow, when I first saw gameplay videos, I thought it was a simulator of an ordinary working woman. It turned out to be a game remake of the film A Taxing Woman, about a woman who excels at uncovering tax avoidance schemes. There are some minor differences from the film in the details, but you can't talk about the game without seeing the movie, which I'm not really inclined to do.
14. Muteki Keiji Daidageki ~Shijou Saidai no Hanzai~ (VNDB)
So two detectives dig up that plutonium is being smuggled into the country and end up with all the mafia on their tail.
It's actually a sequel to the 1987 kidnapping detective story. The game starts seriously but constantly drops into silly situations and kidding, then returns to seriousness. It both tackles contemporary politics and may have some monster appear all of a sudden. The game has fighting scenes, mainly punches and kicks, so the genre is action parody or something like that. Almost all the lines are voiced (!), so listening to all the dialogue would push the length past 7 hours.
15. Oishinbo Kyuukyoku no Menyuu Sanbonshoubu (VNDB)
Err... what? A game based on a manga about cooking? The aim is to prepare the best banquet? Really? And it has an English translation? That's great news! So instead of digging it up, I'd rather post a review link
16. Seirei Gari (VNDB)
There's an English review and even an English YouTube walkthrough, so I won't interfere. It takes slightly over 2h to finish this horror adventure.
17. Soft de Hard na Monogatari 2 (VNDB)
In part 1 Hiroshi had to substitute for his ill father as the president of a small soft company and tear it from the claws of bankruptcy. This time Hiroshi returns to Mocha Systems after a while as a scenario writer, and a new receptionist girl, Ishida, joins the company and claims to be Hiroshi's fan.
Soft de Hard na Monogatari 2 is novelware with some choices that determine branching. The first versions were very difficult to get through, and only the FM Towns version has normal visual novel choices.
The game is great. The pictures are superb, since they are scanned. The humor is top notch. The atmosphere of an old software-developing company is unique and great. Lots of contemporary parodies make it even better. The plot matters little, since it's mostly situation comedy.
Well, I dumped part 1 erroneously. Now I get that the nico video featuring 1h of gameplay did not go to the end of the game. Now I think this game could really have rivaled Lipstick ADV in 1988...
18. Tawhid (VNDB)
Year 2031. The main character is a 28-year-old ex-operative photographer who travels around the world taking pictures of ruins destroyed by war. A 12-year-old girl who joined you in Egypt is being chased by a secret organization, and even the farthest corners of ruins around the world won't stop them.
Champion Soft's last game, made during the transition period to become Alice Soft. It was already brave to set it in the Middle East. Each country has its own unique atmosphere: Egypt, Israel, Mongolia and Nepal are the stages for each chapter, and finally the hero gets back to Japan. There's vast attention to detail, and the game can be enjoyed just by looking at the exotic views alone.
There is enough of both adventuring with exploration of ruins and escaping from pursuit. The interactive command selection is simplified, so the pace is good. There are adult scenes, since the girl gets naked on the battlefield, but it's by no means an eroge. The most important feature of the game is its unique atmosphere, never repeated by any other game, which makes it memorable years after a play-through.
19. Yami no Iyo Densetsu ~Joouzuka Satsujin Jiken~ (VNDB)
You're the spiritual detective Shura, requested to help the Metropolitan Police Department investigate the murder of a man who had all the blood sucked out of him and the death of a girl in a different prefecture.
It's a horror adventure, but the atmosphere is not as dense as in Wing Studio works. In battle scenes the game almost turns into an RPG, since you have HP and MP that increase as you win battles. There are few commands to choose from, no guide is needed to complete it, and the pace is good. All the characters are unique and likable. Quite a few ero scenes are present.
Overall a good story-driven VN with good length, interesting narration and high level of satisfaction.
Examined VNs that did not fulfill selection criteria:
1. 38 Man Kilo no Kokuu VNDB
Block reason:<2 hours (30m)
Block reason:Antique (could not find any useful text material on the game)
3. Arcshu ~Kagerou no Jidai o Koete~ VNDB
Block reason:Fandisc (it's an ADV mini games that feature characters from ARCS jrpg and previous Wolf Team works)
4. Arthur: The Quest for Excalibur VNDB
Block reason:<2 hours (30m)
5. Idol Hakkenden VNDB
Block reason:<2 hours (45m)
6. James Clavell's Shōgun VNDB
Block reason:<2 hours (25m)
7. Kibun wa Pastel Touch VNDB
Block reason:Eroge
Short synopsis for VNDB:Main character is count Dracula and he proposes to a Japanese girl Umi. Response is put on hold till Dracula picks her up in the daytime. Three months later he visits her school as a male student to charm girls with magic and get Umi whereabouts from them.
8. Koube Ren'ai Monogatari VNDB
Block reason:<2 hours (1h)
Short synopsis for VNDB:A love story unfolds while sightseeing in Kobe.
9. Lonely Heart VNDB
Block reason:Antique (No text info found).
10. Lupin III ~Babylon no Ougon Densetsu~ VNDB
Block reason:<2 hours (45m)
11. Meitantei Holmes - M Kara no Chousenjou VNDB
Block reason:<2 hours (45m)
12. Mephist VNDB
Block reason:Antique(no plot info found)
13. Misty Vol.1 VNDB
Block reason:<2 hours (I'm always looking at length of one story if multiple present)
14. Misty Vol.2 VNDB
15. Paragon Sexa Doll VNDB
Block reason:<2 hours (1h 35m)
Short synopsis for VNDB:Two stars had military agreements to avoid conflict, but a war broke out from conflict on unmanned planet Alpha. Many sex doll robots were upgraded to execute military functions. Sex Doll Tina volunteered to avenge for her friend Barbara who got caught with spying charges.
16. Psychic Detective Series Vol. 1: Invitation - Kage kara no Shoutaijou VNDB
Block reason:<2 hours (25m)
17. Psychic Detective Series Vol. 2: Memories VNDB
Block reason:<2 hours (25m)
18. Review -Jashin Fukkatsu- VNDB
Block reason:<2 hours (1h)
19. Tanteidan X VNDB
Block reason:Antique (Failed to find text info)
20. Teito Taisen VNDB
Block reason:Antique (Failed to find info on the game, and since it's just a movie adaptation, better to watch the movie)
21. Zerø - The 4th Unit Act.4 VNDB
Block reason:<2 hours (around 2h?) I'm not so sure about its length now, but considering it's an old series and the previous parts weren't too long, I'm dropping this one as well.
22. ZorkQuest: The Crystal of Doom VNDB
Block reason:<2 hours (30m)
6. I have a lot of trouble getting started on things. One area in particular that has been consistently difficult for me is going places – work, school, the store to buy muh colas before they close for the night, that sort of thing. I have recently discovered and begun formalizing a technique which seems to help with it. However, it is creepy, because it involves partially dissociating your mind from what your body is doing. Today I fashioned for it a shitty chuuni chant: there is strength in emptiness: Automaton!
To be honest, I’m half hoping the chant doesn’t work as a switch-flipper, considering how terrible it is. My apologies, O chuuni gods.
It began one day when I was lying around, trapped in a familiar sensation where I seemed unable to will myself to any action, enmeshed in repetitive and irrelevant thought and generally getting nowhere important. I felt out of touch with the world, like there was no thread connecting me to this plane – indeed, was the world even real?
A seductive call beckoned. Perhaps I should try something I had done piecewise before, but never so deliberately, so completely – could I give my estranged body over to an imagined automaton, let the automaton collect the knick-knacks and tie the shoelaces and lock the doors and ride the elevators, and take back control once I needed to once again be human?
The attempt was a resounding, if alarming, success. My body proceeded to smoothly go to work, while I, dissociated, observed that I probably tie my shoelaces faster and more efficiently when I'm doing it on autopilot. In the end I spontaneously reintegrated over time without having to force it, which was a relief – one time I was at the store and boredly (and likely boneheadedly) started practicing the mindfulness concept of framing your thoughts as things you are having rather than facts that are; while it didn't really elucidate much, I did find myself unable to easily exit the frame, which was honestly a bit unchill.
I have used the automaton takeover concept like, two times after that, and it really does seem to work. It’s not just mindlessly doing things on autopilot either – the disconnect ensures that you have actual thinking time while carrying out business, and is the prime difference between this and pure distracted flow.
To use this technique you should probably be somewhat comfortable with feelings of derealization. The good news is that these correlate with depression, and I swear half of you fuckers want to kill yourselves, so it’s vaguely plausible that someone else might have had a similar experience. I do wonder if I have accidentally stumbled upon and formalized a Normie Technique(tm) that nobody told me about, but the fact that I’m worried that someone with psych issues will try it and end up being unable to return to united reality or fucked up in general does bolster me somewhat. Uhh, be careful trying this at home, I guess.
My personal motivation was that I was pretty derealized already, so making things more formal wasn’t really going to be that much of a problem. If you find yourself able to convincingly make that argument, this method may be worth trying.
7. I recently came to the sobering realization that I've been editing translations of visual novels for about a year now. I've edited some 40,000 translated lines across large chunks of four works, and in the process I've learned a whole lot. Mostly what I've learned is about the mechanics of how to write well, and correspondingly that's mostly what I've written about on this blog, but today I'm tackling a slightly different subject: how to arrange the time you spend editing.
This advice is principally targeted to people working on longer projects. If you're working on something shorter than say 4,000 lines, things change a little bit because it's much more feasible to easily keep the whole thing in your head with just a couple of readings, whereas with longer works, you're going to have to plan for it to be a marathon. Even so, most of this advice still applies to shorter works, but the key difference is that it's much more feasible to knock out an entire short work in a month or so, then let the script rest for a month or so, and then go back and give it all another fairly quick once-over in a week or two, and then call it done. With a longer work, you'll end up working on sections at a time and need to go back and work on random sections periodically over a period of many months.
So, that explanation done, here are the various techniques which work for me. It's worth mentioning that most of these are applicable not just to editing, but also to translation:
Read It First
If at all possible, you should read the whole piece once before you start working on it. If you can't read the original language and you're following closely behind the translator, then you don't have much of an option here of course, but if it's possible for you to read it, do it. Reading first will both save you time and result in a higher-quality product. The benefit of reading first is more easily recognizing broad themes and motifs as soon as you first work on them, and similarly, recognizing smaller-scale things like running gags which need to be set up correctly early on. The earlier you can start handling these things correctly, the less work will be required to go back and fix them up afterwards, and the less likely you are to simply miss something while going back to fix them up.
Push Your Changes Frequently
Every day's chunk of work should be pushed to a central server for your team (Google Sheets, Git, SVN, whatever). Your team members need to be able to see what you're doing, and hopefully will be reading what you check in and offering critique; no one person has all the answers. Don't sit on local changes and fuss at them until they're perfect. Do a day's work and push it.
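If your team's central server happens to be Git, the daily push can even be scripted. Here's a minimal sketch in Python of such a helper; the repository path and commit message format are made up for illustration and aren't from any particular project:

# daily_push.py -- a sketch of the "do a day's work and push it" habit,
# assuming a Git-based workflow. REPO is a hypothetical path; adjust it.
import subprocess
from datetime import date

REPO = "path/to/translation-repo"

def push_days_work():
    # Stage today's edits, commit them with a dated message, and push,
    # so teammates can read and critique the changes right away.
    subprocess.run(["git", "add", "-A"], cwd=REPO, check=True)
    msg = "Edit pass for " + date.today().isoformat()
    subprocess.run(["git", "commit", "-m", msg], cwd=REPO, check=True)
    subprocess.run(["git", "push"], cwd=REPO, check=True)

if __name__ == "__main__":
    push_days_work()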
Always Check Your Whole Set Of Changes Over Before Pushing
This is the most important piece of advice I have here, so pay attention.
Every time I sit down to edit new lines, I generally work through about 100-200 lines of translated text, almost always with the game playing so that I can get all the added context (including voice over, but also scripting: scene changes aren't always obvious from your script editor, and sometimes they completely change the interpretation of a line). Once I'm done with that first editing pass with the game, I save my changes locally, and then I go read through all of my edited lines again in order (no game this time, and usually not even looking at the translation). During this second pass, I'm mostly looking for copy editing issues, like typos and grammar errors. I find a lot of them. Like, a whole lot. I'm a very good copy editor, but I've come to grips with the fact that when I'm line editing, I make a ton of mistakes. I rarely do any line editing again during this second pass (hopefully there's not much need to... although I do often find one or two lines I want to tweak), but I usually fix a solid 3-4 typos during this second pass, among the 100-200 lines I edited. Given that this second pass is pretty quick to do when the scene is still fresh in your mind, I consider this time very well-spent. My edited scripts still need QC (editing your own work is hard), but a great deal less than they would otherwise.
Keep Tweaking
After I've gone through that two-pass edit step, I usually won't look at a scene again for at least a month, often longer. However, I'll frequently hit natural stopping points when working through fresh sections of a script (e.g., maybe I finish a whole route, or I simply catch up with the translator on the route I'm working on). When that happens, I will go back and re-edit something I've already done. When I re-edit, usually I find things are fine, but I always find at least a few lines per scene I want to change. This second line edit takes much less time than the initial line edit, but still usually ends up with a fair number of changes. The rule for checking over these changes before pushing applies here, too: whenever you line edit, after you're done, save it all locally and read through the whole diff of changes for the day, mostly looking for copy editing mistakes: you'll find some, nearly every time.
The reason to do this is mostly that your perspective on the game will be evolving as you build more of a rapport with it: characters will become better established in your mind, and you'll want to make them consistent. Maybe your preferences around phrasing certain things will change. Because larger VN translation projects typically span a year (or multiple...), there's a lot of time for you to change your mind about things. You don't want the work to end up inconsistent, so the best remedy for this is to be constantly rereading chunks of it and tweaking them, massaging them until they're more internally consistent. These re-edits are always much faster than the initial edit, and doing them bears a lot of fruit in terms of quality.
In short:
10 Line edit
20 Copy edit
30 SLEEP 1 MONTH
40 GOTO 10
Work Slowly But Steadily: Avoid Burnout
VNs are long, and the time you can commit on any given day is always going to be a tiny fraction of what it will take to finish the work. If you tell yourself, "This weekend I'm going to sit down and work on this for six hours," you're only going to grow to dislike it before too long (it will feel like too much of a burden) and you're going to start slipping on those promises to yourself very quickly. The only way large projects get done on anything approaching a reasonable timeline is through a constant accumulation of bite-sized pieces of work. Plan to work on the project for 45 minutes a day, six or seven days a week, and you will be much less likely to get burned out and walk away from the project. Maybe every now and then you'll get motivated and work longer, getting more than the usual done on a given day, and that is all well and good, but such exceptional days will turn out to be a drop in the bucket compared to the constant steady progress from doing a regular, fixed amount of work every day.
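To put some rough numbers on that "drop in the bucket" point, here's a quick back-of-the-envelope calculation in Python, using only figures already quoted in this post (the ~40,000-line total and the 100-200 lines per sitting):

# Rough pace math: steady, bite-sized sessions add up surprisingly fast.
lines_total = 40_000       # about a year's worth of edited lines
lines_per_session = 150    # a modest daily session, mid-range of 100-200
sessions_needed = lines_total / lines_per_session
print(sessions_needed)     # ~267 sessions -- under a year of daily work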
In Summary
Working on a VN translation is a lot of difficult work, so treat it with respect. The above is what's worked for me to keep me going at this steadily for a year, constantly getting work done and constantly improving. What works for you? Got any tips to share?
8. A lot of people kept asking me where I've been and why I haven't updated my blog in the past 4 months. The reason is that I hit rock bottom; I was in a huge slump with games and VNs and any stories in general. It is all because of one single game that I played at the time: Hoshi o Miru Hito, also known as Stargazer, for the ancient but beloved Nintendo Famicom system. It wasn't simply a game, it was an experience. The knowledge that nothing like this has ever been made before or since the game's release brought a sense of emptiness and depression upon me. These past months I couldn't get into anything: no book was good enough, no VN was able to grab me, no anime was entertaining, no game was fun anymore. I just couldn't do anything anymore. This happens very rarely to me, and I can count on one hand the stories that made me feel this way, but even then it didn't take me so long to recover. I don't even know if I can do this game justice by writing about it; everything I write and everything I read looks like shit these days anyway, but I also feel like this is something I have to tell people about. It makes me even sadder to know that almost no one knows this game exists, and as time flies the game has less and less chance of being discovered by a new generation of gamers, and maybe it will eventually fade into the empty dark void of obscurity. I don't know if I'll ever come back to playing other games or even be able to enjoy any other kind of story in any medium, not only video games or jrpgs, visual novels or even books and movies. That is what Hoshi o Miru Hito left me feeling, and I don't see what else could fill this emptiness in me, so I might as well just give up on everything right now. Where should I go now, and how am I going to move on with my life when there is nothing that comes even close to a game like Hoshi o Miru Hito? I used to think I knew what a good story is, and there are some that I kept really close to my heart. Stories that moved me, got me thinking, taught me so much, and inspired me to write and use my creativity to create, build, and grow. What am I to do? I guess you want to know what brought me to this state, so let me tell you all about it. I hope you'll listen to my story and some day try to play Hoshi o Miru Hito on your own, and maybe then I'll have someone else to discuss a game nobody seems to know about. So here we go, let's begin with my review. Not exactly a review, I guess, just an opinion piece, but whatever. Let's begin!
Play the game without a walkthrough. This is the first thing I want to tell people who decide to play this masterpiece. It is an old jrpg from the era of the first Dragon Quest and Final Fantasy, and it has some of those gameplay conventions; however, it mostly avoids them with an original take on similar old ideas. I say old ideas, but at the time those ideas were still fresh in games, so if you think about it, even for its time Hoshi o Miru Hito broke new ground by refusing to do things the traditional way. Having said that, you do need to grind, but not as much as in its contemporaries, and usually you'll know when you need to grind. The game does a great job of giving gamers an idea of when is a good time to grind and for how long. But don't get discouraged, since there is so much more to the game than grinding. So take your time with the game; you won't be able to rush through anyway, because there is a lot of story here to get through, and you should take your time and enjoy the characters and what they are going through. Have fun reading the mental, witty dialogue between them and the very profound narration and descriptive sentences. It all flows so naturally you won't even notice how everything jells into each other, from plot to backstory to info dumping and back to plot. The game never puts the player in unfair, unforgiving, game-breaking, frustrating situations like most jrpgs of that era, and at every point it also maintains an amazing pace between story and gameplay sections, so you never get tired of one or the other. It's as if the game was play tested and polished for over 10 years, and if that's true then the game is even more ahead of its time. No two ways about it, the developers should be celebrated every year for making this wonderful game. There should be a Hoshi o Miru Hito holiday or something. Well, maybe not. But it damn sure would have been cool to celebrate every year.
Roughly speaking, the story is the biggest attraction of Hoshi o Miru Hito, and it makes up about 90% of the entire game. Right from the very first minute the story hooks you and throws you and the characters on an unbelievable journey of discovery, love, tragedy, loss, comedy, irony, growth, coming of age, despair, mystery, mind screws, life, philosophy and whatnot. It is difficult to categorize the story into a single genre because there is so much happening; it is genre-transcending in a scope beyond what can be encompassed in a single genre. At its core this is a science fiction story, and it's as hard as they come. I mean, there will be long and awesome info dumps about pretty much every relevant topic you can think of: everything from genetics, neural energy, psychological effects of going through therapeutic ecological deformation, different kinds of bacteria, nanotechnological medicine, self-replicating soils capable of separating themselves from earth and forming their own planets, historical and futuristic philosophies, the Schrödinger equation (they give the best explanation I have ever read), viruses transferred through light, microbiology, ecology and its effects on environmental evolutionary perpetual life-igniting systems, financial markets, complex mathematics, quantum entanglement, super symmetry, super position, anti-gravity physics, particle physics, liquid and rigid body physics, astrophysics, skin-discoloring light projectors, movie books and newspapers, molecular mutations, and it goes on and on. I could fill an entire encyclopedia with every new technical term the game throws at you, and that's just terms; if I were to write down every info dump then we would be here forever, until maggots feed on your raw stinky rotting carcass and your bones turn to ash, and we won't stop until the year 3000 rolls along, and maybe not even then. But the game doesn't have a TIPS system, unfortunately, so if you want to review some terms you won't be able to, so you better either write things down or remember, or you can just do nothing and simply enjoy the story. It is unfathomable how they even fit so much text into so little space, because nothing short of some math genius could come up with such a sophisticated, fast and powerful algorithm that even Google couldn't match. At a time when games struggled with telling stories, developers couldn't fit much text in a game, so much of the story was delegated to the game's manual, but here we have a game with so much text and it's all in there, in the game, and nothing story-related is in the manual. The game also never had any extra printed materials like art or fan books. Not even an anime or novel adaptation. It's probably for the better, because an adaptation would butcher the wonderful writing and prose.
If you thought this was all, then there is even more. The characters will be having lengthy and profound discussions about deep philosophical and transcendental issues relevant even today. Many questions will be raised (some will be answered), such as: what is life, what is a true death, has the universe just come to be or is there a grand design, do we have the right to evolve our biology and technology, should we simply live without concerning ourselves with anything other than procreating, are souls real, does life even have a meaning (the game will give you an answer, and it's not what you expect), does evil really exist or is it just a different perspective. Throughout the game you'll face many of these topics, and trust me when I say this is less than 10% of what is covered in this story. The most surprising thing is how the game handles each theme and topic. It doesn't just give you long (but fascinating and never boring) info dumps; it actually shows you how things play out throughout the story. Why and how ecology can shape life's existence. How a change in one branch of quantum physics led to a new prophesied philosophy, which in turn changed some scientific beliefs and led to huge evolutionary jumps, so much so that eventually people started developing ESP powers. It goes really deep into how ESP works and the inner workings of ESP-developed brains. I won't be surprised if one of the writers has 3 PhDs in Neuroscience. Thankfully you don't need a PhD yourself, as the game goes to extreme lengths to make things as clear and easy to swallow as possible. One wonders if the developers knew they were making a game for adults, because it's all a bit too heavy for children to read and understand. You can turn off the long expositions and info dumping in the options screen, but you'll be missing so much good stuff I don't recommend you do it. Besides, when you turn it off you lose out on the story's great pacing, since the information is very, very carefully spoon-fed to you; it's almost entirely pre-calculated when, how, and how much stuff is revealed to you at every single moment. It's not an easy style of writing to master, but somehow the game manages to avoid the usual clichés and the pacing-destroying, dragging-in-the-mud, unbearable, game-breaking problem of an unsatisfying gameplay-to-story ratio.
Largely the story is scientifically sound, with most of it being real existing or theoretical science. But the writers didn't shy away from diving into pseudoscience and sometimes even adding fantasy elements as well. Things like ESP are obviously in the realm of pseudoscience and not something we can confirm by any scientific theory or exact experiment. However, the writers put a lot of special attention into giving you a plausible explanation of how this might possibly work, and they do it with a straight and serious face, so you as a reader are compelled to believe that yeah, this is real science right here. There is no hand waving and no suspension of disbelief; every tiny idea is backed up by scientific reasoning that supplements real-life theories and practices. In one of the sub plots (the game is linear, so all the topics are tightly woven into a single coherent story), one of the main characters is captured and the rest of the team has to save her. The story leads into a laboratory where some crazy experiments were being conducted on humans; we later discover that through an incision into a very tiny part of the brain (the incision is made with a thin, less than .0002 micrometer knife) a tiny machine is inserted into the brain called the GME, or Genetic Memory Extractor. As its name suggests, the tiny microscopic material extracted from the brain possibly contains the soul of a human, and with it the entire genetic memory all the way back to the first ancestor. Yes, you just read that right. When the characters finally find their friend, her condition could only be described as a deep transelucidery state. Eventually, to save her life, two of the characters extract their own souls and fuse them with hers to fill up what she has lost. This causes them to lose some of their own souls too. But no matter what, she has already lost her own soul and will never get it back. This is the bare minimum of that plot; there is so much more going on, and this is one of the least interesting, least emotionally charged and least powerful events in the game. I tried to avoid spoilers as much as possible, so I plucked out one of the lesser parts of the game to give you an idea of how much the characters have to go through before the epic finally comes to its conclusion.
Furthermore, there is so much more going on that I can't possibly describe it in a single blog post. Actually, I don't think even a short summary could be less than 100 pages. And that's if I avoid all the spoilers. The game's long script unfortunately has led to some graphical and auditory compromises. The graphical tiles are somewhat repetitive at times, even more than what you're used to seeing in similar games like Dragon Quest. But even here the developers were clever enough to make graphics that are used in service of the story rather than just being there. There is so much detail you won't even notice it on your first playthrough. But later you'll notice how things are arranged in a very special way. How at first something looks like a tiled background making up the overworld, but suddenly the imagery is a visual representation of the main character's state of shattered mind. The soul-searching journey incubating throughout the story will come into full view once you decide to take your time and really think about what it is the game is trying to show you. The audio took the biggest hit. Obviously they had to compromise somewhere, and this is where the most compromising was done. There are just too few music tracks for such a long game. Yeah, the music is awesome, at least most of it, but some tracks are just there. These tracks serve no purpose, and it would have been better to use the resources somewhere else. Just a waste of effort in my opinion. The sound effects are a miss as well. They aren't any better than what you get in any archaic jrpg of the era. The effects do their job, so there's no complaint, but with a little bit more effort they could have been turned into so much more. None of these problems ruined the experience for me in any way, and I was able to get used to these sounds very quickly and with no trouble at all. The developers squeezed every last drop out of the Famicom's cartridge and every bit of space and memory the system has available. It was an issue of balancing how much memory they dedicated to the presentation and how much to the story and cutscenes. Any more attention to presentation would force the developers to cut the story, and putting any more effort into the story (beyond what's already there) would probably turn the game into interactive fiction without any images or graphics or sound. Game development pushes people to their limit, but sometimes you've got to sacrifice something in order to achieve a better and more equally balanced end result.
On the subject of the actual story, it's not possible to talk about it without showing why it is so good and what makes it so special among other jrpgs. So beware that I will discuss the story, but also beware that the story is so huge and so long that I'll barely scratch the surface. Our story begins with four young men and women waking up in the middle of what at first seems like a forest but very quickly appears to be an illusion. The only thing they know is that they don't know who they are, and they don't remember anything. Once they enter their first town, they discover a rumor about child kidnappings in the nearby forest. The children come back, but they aren't the same, almost as if they are empty and soulless. The group decides to rest at an inn. Before they go to sleep, they decide to share everything they know or can remember about themselves. One of the female characters starts twitching unnaturally, her eyes roll back and she starts screaming some kind of number sequence. The entire group panics and tries to help her, but their efforts are fruitless. Eventually she stops and comes back to her senses. She doesn't remember anything that happened just minutes before. Later that night, while one of the characters is taking a shower, he notices a very tiny unnatural bump in the back of his knee; he cuts it open and reveals something that looks like a stone. He talks about it with the rest of the crew, and all of them seem to have the same object in the back of their knee. The next day they are asked by a woman if they saw her child; feeling pity for the woman, they decide to try and find her child. This sub plot leads them to discover a strange experimental genocidal weapon. I won't spoil this part of the game, but let's just say that if you stay sane after this, you're simply not human. After the tragic event the group comes back to town, only to have it destroyed under mysterious circumstances. The story continues from there, following our characters as they go on their journey of discovery and understanding of life, the world, and the universe.
Over the course of your journey with the main cast you'll meet many other characters, and everyone is fleshed out enough to feel real; even the so-called bad guys aren't bad at all, and every villain has incredible depth no matter how cruel their tactics may seem to you. It almost feels unnatural how all these people feel like living, breathing personas, as if you had known them your whole life. This is a mastery of writing shining through and through in this game. Some characters you meet only exist during a single sub plot or side quest, but they are all unforgettable. One subplot involves a female mechanic who lives a lonely life, but later we discover she was the one who developed the Rociter Feractic Ensimenal Propulsion system used by almost every flying craft in the world. She lost her first son when an experiment went wrong and he got caught inside the machine. Her 2nd son turned into liquid when one of the machines she built started to melt because of the wrong dosage of Hyplicyclisic Estomonigmon (an in-game term, basically a power liquid used instead of electricity). Her husband left her soon after the death of the second son, and only a month after that he died too, when one of her machines exploded and killed him along with 200 other people. In another part of the plot the characters enter the 4th dimension in search of God; it's quite an amazing journey, and it's the best depiction of the fourth dimension I have ever seen; the way they visualize the world through it was simply brilliant. The story is not all tragic and horrifying, of course. There are plenty of bright and happy moments that will make you feel all fluffy inside. There are also a lot of really hilarious moments as well, like that time when they all go drinking and suddenly discover that one of them is underage. Another hilarious sub plot is that one of the main characters is scared of dogs (later we discover why, and it will make you cry), and this leads to lots of insanely funny scenes whenever some dog shows up, one time while he is in a shower, lol. Speaking of showers, the game doesn't shy away from naughty moments either: you can peek into the women's shower (though you don't see anything), and one of the characters even gets soap in one of his eyes for a few in-game hours.
Lastly, I must talk about the ending. Don't worry, there aren't any spoilers here, but I really feel like I have to say a few things about it, if only to prepare you for what's to come. The ending is… well, it is a happy ending, but it won't leave you happy. After all that the characters went through, after all the happy times and the mostly dark, twisted, tragic, heartbreaking, soul-wrenching events you go through, after all the deaths and carnage, the happy ending is an irony of circumstances. Could you really feel happy surviving and staying alive when you paid for it with so much bloodshed and loss? Can you call yourself happy when there's no one to share it with? The writers knew they were taking too many risks, so the final scenes of the epilogue (the epilogue is story only, no gameplay, and it's 50 minutes long) try to uplift your spirits. They try to give you hope, to make you look into the future, to live a better life, to become a better human being. But the question still remains: is it all worth it in the end? The many twists and turns the story took, the unexpected deaths, the bloodshed, the slaughter, the warm, happy, and hilarious moments, the pain most of the characters go through and the tragic end to it all will be hard on some of you. Even if the ending tries very hard to be happy, the memories leading up to it will prevent you from smiling; only tears will be welcomed, only tears.
So I hope this gave you a small idea of why I couldn't recover for months after playing Hoshi o Miru Hito. I'm still not fully recovered. Hoshi o Miru Hito is the most grand and epic story, with a huge all-encompassing scope, and the best story ever told in any jrpg. No, in any video game. No, the best story ever told in any medium ever invented by man. As a side note, there is a hack for the game that changes up the graphics and balances the gameplay, but it completely misses the point of the original. There is also a modern 16-bit style remake made by fans of the game, but without spoiling it I just want to say that there is a reason why the game works best in its original form, and a remake cannot live up to the original in any way. Why would anyone even want to remake and redo a perfect masterpiece anyway? It's not like the people who played it wished for it, and the rest don't even know it exists. It's like taking the Mona Lisa and remaking it to have a better smile and blue eyes and maybe blond hair and a flower garden in the background and a blimp in the sky. You don't tinker with classics, especially not when they are already as perfect as Hoshi o Miru Hito.
Recent Entries
Hear me, my dear sons, for now you stand within the kingdom's chalice. You look at the aristocrats with awe. You extend your hands, desiring their wealth, their lavish houses, their vibrant, elegant garments. You lust for their daughters. You envy their blood, their honors, their status. You wish to follow in their footsteps. As you grow, you discover the chasm that separates us; the bitter fate that awaits the common man. You learn to despise them – for all their glory, for all their light. Yet know this: all of that is but a flicker.
Be wary of the hidden depths masked behind those vicious smiles, the wickedness that smolders within their minds, the otherworldly intellect they possess, and the darkness that befalls the wide alleyways when the night comes. Petty kings rule petty kingdoms: blossoming gardens spiraling up into the sky, laden with lilacs and hydrangeas, leading up and far away from the downtown stench, where they can live oblivious to the pains of lesser men.
There is nothing in this world an aristocrat's money can't buy. Yet riches won't ever bring them true happiness: the richer they get, the poorer their fates.
Despite that, they keep on clinging to their usual lives, their putrid pasts. Unable to change, indifferent to the world around them. Their hearts remain cold, their gazes fixed somewhere beyond the murky horizon. What visions do they see? Maybe they don't want to change? Maybe they’ve already given up? Ultimately, what would they gain? Carnal pleasures became their only escape: the bitterness of the evening wine, the sweet, rose-fragrant lips, the blazing sensation of fever-moistened, intermingling bodies. The will and passion to live. The passion to live their short lives to the fullest extent, ignoring the dangers lurking behind every corner; chasing wildly after their dreams, until they run out of breath. Isn't it the same for us all? Do we really differ that much?
-The Sons and Daughters of Antioch, by anonymous writer
What is the true measure of a man?
9. Arcadeotic
Latest Entry
Well, I guess it’s time to show something.
Don't worry, dearest follower of this site, I'm not dead, I've just been incredibly busy. I've had exams left and right, and thus I've felt a little demotivated. But I've never given up on this project; there's no way I can. I mean, it's been over a year since I started this "passion project", and I still intend to finish it with all of my power.
I won't give up; there's no way I'll allow myself that. This project is important to me, way more than you think. And yes, while I've been incredibly dead for a while, I've still answered every question coming my way on Twitter, and I'm still as active as ever on my Discord server. There's no need to worry: Biman -1- will see the light of day.
As for the "beta" patch some people have been asking about, I will be looking for two beta-testers after proofreading is done; after the testing, I will ship the translation out to the masses, then focus on Biman -2.5-, which I've neglected, and then move on to bigger things.
I've been pretty horrible at keeping all of you up to date, but all I can do at this point is ask you all to wait patiently and to have faith in me.
Well then, this'll be another sporadic update that won't become the norm for a while, but the next planned update is the recruitment of those aforementioned beta-testers.
Until then, everyone!
10. Miu
Well, he was right. This is not a route but a mini-route, one that can be finished in 15 minutes tops.
The content itself is not bad, just standard charage stuff. Don't expect anything great.
Final thoughts
In hindsight, I might not have needed to split the whole VN into so many posts. But I wrote about each route as soon as I finished it, so it ended up being this format.
Anyway, I'll just list down my complaints first:
1. The central plot device could have been explained better, especially why the routes are ordered like they are. I understand they tried to put forward a world lines theory of sorts, but it doesn't hold a candle, in terms of clarity and usage, to a certain other famous VN with world lines as its genre.
2. Was there really a need for a Michiru route at all? She plays a brilliant support character in the other routes, so was a route of her own really necessary?
All that being said, I enjoyed almost all the routes, each in their own ways. Misaki's route had me grinning like anything, Makoto's route was really exciting, and Cro's route was an awesome ride and the best of the lot.
And I really liked the different route format, i.e. the climax is almost always about the confession, enabling ichaicha to be kept to a bare minimum (*glares at Senren Banka*).
So yeah, apart from Cro's route, this is your average charage I guess, with a more classic galge-type route structure. That's it.
Maybe I'll resume Rena's route in Senren Banka next. AKA the long flurry of posts should calm down for a while, lol.
11. Boring introduction to understand why I'm making this tutorial. (You can skip this if you want)
So this tutorial is about creating that so-called "mechanical immersion" when you play a VN, kinda (?)
Personally, I read old VNs with the help of chiitrans for parsing the text. To be able to immerse myself in the story, I need to read in full-screen mode (I just can't read it otherwise). This is not a problem when I read new novels, which allow me to kinda cheat and use chiitrans even in full-screen mode, but what about old VNs? With VNs that have resolutions of 640x480 or 800x600, you can't go full-screen and use chiitrans at the same time; the parser will either look really bad or simply won't show.
Now, to fix this issue I used to use a virtual machine called Oracle VM VirtualBox. That program allowed me to scale the screen to any size I wanted, so I could play VNs in "full-screen" mode, and it kinda worked... but the problem is... using a virtual machine was a pain in the ass in general :rubycry: so I searched for an easier solution, and here it is:
(End of the boring introduction)
What you will need:
(OnTopReplica, ResizeEnableRunner and maybe WindowsOnTop)
All the software is free and ad-free too, but you can scan it if you want.
1) Download, unpack and install everything (some of them don't require any installation).
2) See if you can take a shortcut by using ResizeEnableRunner. This won't work most of the time and it will make some VNs look bad, but you can try if you want.
Just resize the screen of the VN by clicking on the borders of the VN window itself and, while holding the left mouse button, dragging it to expand it, just like you do with any other program.
3) If that method doesn't work (80% of the time) or you just don't like how it looks, then use OnTopReplica.
a) Open the program and you will see this:
Left click on it...trust me on this one.:OurLordAndSavior:
b) Select your VN and click on "-whole-"
c) You will know your VN is selected because the program will duplicate its screen. Now you can resize it, but that's not what we want to do; we want to go into full-screen mode.
So go to: Resize > Full-screen. (you can also double click on the duplicated screen and it will do the same thing)
d) Now this is the important part: you will need to advance the text of the VN with your keyboard, but you can also use your mouse.
Position the VN behind the duplicated window, then right-click on the duplicated screen and click on "enable click-through"; the duplicated screen will now be "transparent", so you can click through it.
Here is how it looks and a comparison using the stretching mode (with ResizeEnableRunner) and using the duplicated screen.
NOTE: The ResizeEnableRunner program uses the same method as Visual Novel Reader to stretch visual novels.
Problems you may encounter while using the software:
*Some VNs love to stay on top of the screen, or they won't let you use your parser (in my case chiitrans); for those cases, use WindowsOnTop (see the sketch after this list).
Open it > assign a hotkey to it > and when you have that issue, just force the program to stay on top. This is how I solved it with some VNs, for example setsunai, or just old VNs in general.
*Why am I seeing black borders? It's annoying!
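As for the always-on-top problem from the first bullet above, here's a rough sketch in Python of what a tool like WindowsOnTop is presumably doing for you under the hood, via the Win32 SetWindowPos call (Windows only). The window title below is just an example; substitute your parser's actual title. This is an illustration, not a replacement for the tool:

# topmost.py -- pin a window above everything else (Windows only).
import ctypes

HWND_TOPMOST = -1       # insert-after handle meaning "always on top"
SWP_NOSIZE = 0x0001     # keep the window's current size
SWP_NOMOVE = 0x0002     # keep the window's current position

def force_on_top(window_title):
    user32 = ctypes.windll.user32
    hwnd = user32.FindWindowW(None, window_title)  # match by exact title
    if not hwnd:
        raise RuntimeError("No window titled %r found" % window_title)
    user32.SetWindowPos(hwnd, HWND_TOPMOST, 0, 0, 0, 0,
                        SWP_NOSIZE | SWP_NOMOVE)

force_on_top("Chiitrans Lite")  # hypothetical window title; adjust to yours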
Easy peasy Japanesey, now you can read your VNs full of runes and 象形文字 in full-screen mode for a perfect immersion.:rize:
Btw, if you are wondering about the delay between the duplicated screen and the original, well there isn't any. (I'm sure there is probably some but it's imperceptible)
12. I finished this game the other day, and decided to write up a short blog post on it.
Ryuukishi Bloody † Saga is the "sequel" to Akabei Soft3's Ryakudatsusha no Inen (though the only relevant connection is the main character, and it pretty much works as a standalone game tbh). The game completely breaks away from the dark nukige-like feel the first game had, and shifts the genre into something rather light-hearted (though it has some rather gore-filled fight scenes here and there). The art even changes completely, making it feel like a totally different game.
I made a post about the prequel a bit back, and the things I felt that game did wrong were handled very well in the sequel, so it blew my expectations away. For those interested in this game who don't like overly dark-themed stories and therefore want to steer clear of the prequel: you should be fine playing only the sequel, as long as you read a general summary of the first game. (There are a couple of smaller references to the first game in the sequel, though nothing major.)
For those interested, I wrote up a summary of the first game here. It doesn't cover absolutely everything, but it gives you the info you need to read the second game.
The game begins with the introduction of a young boy named Roy, and his family. They live a peaceful life, without a care in the world. That is, until a brutal gang of bandits knocks down their door and turns their lives into a living hell.
Sparing you some of the gruesome details: Roy's father and mother are killed, and Roy and his sister are captured by the gang. His sister is passed around among the bandits, while Roy is given to a sadistic woman named Hamiro, who loves breaking the minds of young boys.
After being "toyed with" by Hamiro and her already mindbroken young slaves, Roy is brought outside to witness the murder of his own sister. She is tortured to death before his eyes at the command of the gang's leader.
One would think all of this should destroy Roy's mind, but instead he refuses to give in and plots to join the gang and slowly gain their trust, so that he can one day get his revenge and slay them all.
Time passes, and Roy grows up. Though he is still Hamiro's "slave", he never lets his mind falter, and he still plans on taking the lives of the bandits once he grows strong enough.
During the story, Roy falls in love with Hamiro's daughter, Meshia(?) (damn katakana names). The two of them bond over their hellish lives, and Meshia tries to get Roy to run away with her. Sadly, before they can do anything about this plan, Hamiro decides Roy needs to learn his place, and has Meshia drugged and raped as Roy listens from the other side of a barricaded door. When he finally makes it into the room and murders everyone inside (including Hamiro, who simply laughs at the spectacle like the true monster she is), Meshia is beyond saving. Roy reluctantly kills her, and the last thing she does is mouth the words "thank you."
Enraged, Roy seeks out the leader of the bandits and fights him. Although he almost loses his own life in the struggle, Roy manages to kill him and the game ends with Roy simply standing still covered in blood, asking himself... "what do I do now?"
The sequel begins a while after this, and Roy has seemingly found a new purpose: pursuing his father's occupation. (A herbalist(?) Is that the right word? He collects all sorts of plants, makes medicine and stuff like that.)
Bloody Saga starts off with Roy walking through the forest without a care in the world, headed for a city called Veludylun. However, on his way there he notices dark smoke rising from within the city walls, and he is stopped by a group of knights protecting the entrance. The city is in a state of emergency, as a massive dragon is attacking. This is apparently not the first time this has happened, and as mortal weapons cannot harm the dragon because of its tough scales, the best the knights within the city can do is distract the dragon long enough until it leaves.
Seeing the people of the city in dire need of help, Roy volunteers to help take care of the wounded, as well as rebuild the parts of the city destroyed by the ruthless dragon's attack. Keeping his dark past locked away deep within his mind, Roy starts a new life in Veludylun. He befriends four female knights (who were originally selected by the king as "guards" to keep an eye on the outsider), and together with them he spends his days helping rebuild the city, uncertain of when the dragon will attack next.
The story of the game is nice, although it takes a bit of time for it to pick up. The slice of life moments are more than welcome though, as they don't feel boring at all. (At least not to me.) The heroines are great, and Roy is a fantastic main character. The villains are also well made, and the fight scenes are for sure one of the best aspects of the game.
I haven't read too many serious harem stories, so I don't have many games to compare it to, but this one pulls it off very well. All the heroines get enough screen time, they all have their own reasons for liking the MC, and the harem doesn't feel forced. (My only complaint is the high number of H-scenes... they really didn't need that many. 2-3 for each heroine would be fine, but instead there are like 6 per heroine, which is just a bit too much.)
Bloody Saga is an entertaining fantasy VN with good writing, epic action scenes and a decent amount of slice of life moments with the heroines. There is a good amount of focus on the romance between the MC and the knights, but enough weight is still put on the core story so that it doesn't become your standard charage love story. The setting and time period in which the story takes place is also a nice break from your standard VNs, and the voiced protagonist + the stunning art just makes the experience more enjoyable. Although the game has its darker moments, they fit into the story very well, and aren't sexualized pointlessly like in the first game. Overall this VN is a very enjoyable read, and I strongly recommend checking it out.
13. So, as the poll doesn't seem to give me an answer any time soon, I decided to start with this one.
For those who don't know, ableist means prejudiced against those with disabilities. As most people in society are able-bodied, disabled people, as a minority, have to cope with toxic words and expressions that hurt them. By using these words in a negative way, you are basically saying that disabled people are inferior. It's the same as using 'gay' as an insult. As those expressions are by no means necessary, we can all police ourselves in order to avoid them.
Also, when I say 'ableist expressions', I'm not saying those who use them are ableist. Most people aren't even aware of the ableist connotations, so they aren't the ones at fault; society is.
Here are some of the expressions and why they are ableist:
1. Blinded by ignorance, fear, etc. (offends blind people)
2. Crazy (offends people with mental diseases)
3. Cripple (a very offensive word for people with physical disabilities)
4. Dumb (refers to deaf people, or individuals with communication disorders)
5. Idiot (intellectual disabilities)
6. Imbecile (same as 5)
7. Lame (offends people who have mobility disabilities)
8. Stupid (same as 5)
9. Moron (same as 5)
10. Nuts (same as 5)
11. Psycho (same as 5)
12. Retarded (same as 5)
13. Special needs (a euphemism that is actually offensive. It's better to use the word "disabled")
There are, of course, many other offensive expressions, unfortunately. But this list is just to give you an idea, in the hope of convincing you to be more careful with your words from now on.
Recent Entries
I wanted to wait until the year was over before writing my list for this year, but of course I haven't had the time to actually look at the December releases, and now the year is over. Luckily, there isn't much of interest for me there anyway.
Before we start, just let me mention some noteworthy December titles:
• Venus Blood Ragnarok could be a nice game, but I don't really like the Venus Blood series, and as some kind of sequel it probably won't win me over anyway.
• I have played the trial of Meguru Sekai de Towanaru Chikai wo and it really didn't impress me. It looked like an even cheaper version of Izumo 4. And Izumo 4 was already a letdown last year. The RPG mechanics of Izumo 4 are very simple, and Meguru Sekai de seems even simpler. But that is not all: contrary to Izumo 4, this time the game also looks pretty ugly! Is that the reason why Yamamoto Kazue changed her name to Yamamoto ☆ Kazue for this? Well, I will probably play it nonetheless because I don't like to judge something I haven't really played, but eh, not in the near future.
That's it for December. The rest doesn't interest me or I already know it will be bad (You can't fool me Moonstone!). Did I just say I don't judge something I haven't played? Eh, whatever... :maple:
But now some honorable mentions of games that actually looked interesting but I haven't had the time to read them, so they won't be in my 2016 list:
• Dungeons & Daimeiwaku: I've read the trial, but haven't found the time to read the actual game. I'm sure it's good but it's kind of demotivating for me to read an RPG without any gameplay, heh.
• Iwaihime: Ryukishi07 and horror should be exactly what I'm looking for but for some reason I have this untouched on my hdd for a year now. Maybe I miss the Ryukishi07-art, since it doesn't look as good with generic moeblobs. :reeee:
• Trianthology: Now I have the art, but not a full Ryukishi07 game, haha. Still, I have to read this sometime soon.
• Kanojo * Step: This could probably be the "Best Charage" of 2016, from what I have seen of it. The humor is great, something that's really missing in pretty much every other modern charage. It's a shame that the charage genre has such a low priority for me, so I haven't really read much of this VN so far.
Now I mention some games people might expect in my list of 2016's best eroge, but I actually found them to be only mediocre:
• Shi ni Iku Kimi, Yakata ni Mebuku Zouo: Yeah, I wasn't as excited about this as everyone else. I do not fetishize gore. I like death as an element of a thrilling story, but death in Nikuniku has no impact. There is no meaning in killing someone when they can't die.
• Baka Dakedo Chinchin Shaburu no Dake wa Jouzu na Chii-chan: This game has a Genre-Shift tag of 3.0 on vndb, and some people claim it is different from what you expect it to be.
Spoiler: It's exactly what it looks like! WTF are the people talking about? lol
• Maitetsu: Borefest and not even usable as a lolige. Really only interesting for train otakus.
Now let's start with the list of maybe not necessarily the best but the most impressive eroge of 2016, for me at least:
Eroge of the Year: Natsu no Kusari
Best game without a doubt; there is no other contender here. If I made a Top 5 of 2016, the second place would already be many, many levels below this one. It's so good (or all the other recent games are just so bad, lol).
I have already written walls of text in other threads about this game so I won't repeat myself here. I will just say that this game is exactly what I'm looking for when I read an eroge, the reason why I even started with this medium.
Great characterization and the perfect use of narration, music, visuals and voices makes this one of the most compelling short stories I have read in years.
Best NTR Eroge: Dearest Blue
The lack of elf is really showing, and I certainly can't say that any LiLiM game is really good, but at least with Dearest Blue LiLiM improved on their formula and made their best game so far.
We still have some obvious "Do you want to get NTR'd" choices, but they are not as in-your-face as in previous titles. Some choices make you wonder what will actually happen if you go down this route, and that at least is an improvement. In some instances there is some believable drama, and one route in particular is good because you can cheer for both the protagonist and the rival, since both characters are likable. The main heroine is also well-characterized, being an independent and strong woman without becoming too bitchy or slutty.
Still, not everything is good. Pretty much every male character except the aforementioned one is a one-dimensional evil villain. The biggest problem, however, is that the story is built around some kind of NTR/death-game where all the players need to outwit each other... but in the end there are no mind games or twists happening! Heck, even I could probably win this game, because every other character is so bad at it, not having a plan or making retarded moves. What a disappointment! Read this for the relationship drama, not for the death game.
Best Modern Oldschool Eroge: Ryuudouji Shimon no Inbou
This might be an unpopular opinion, but I really think that the writing in many of the latest Mink games is top-notch. Really, if Mink games weren't conceptually so limited (being only nukige), they could be really great.
Why is that? Is it because Mink is such an old company that they still carry on old writing traditions which were established in the Golden Age of eroge? That at least would explain why Ryuudouji Shimon no Inbou is so good when it conceptually shouldn't be!
Don't misunderstand me, Ryuudouji Shimon no Inbou is not a good slave-training eroge. There is no real SLG gameplay here; instead, the only choice the player has is to visit 1 of 20 locations every day, hoping that some kind of event happens there. There is no indication of which area you have to visit, neither in the script nor in the interface; it's completely up to chance. Save-scumming and blindly gathering events at its finest, huh.
But if you can look past this (preferably with a walkthrough), you will find a game that feels like the late 90s. Mainly because this is a blatant clone of 99's Yakin Byoutou. It's pretty much "Night Shift School Girls". Aside from the setting, which is now a school, we have the same kind of characters, the same storyline and the same development Yakin Byoutou had. And it's great, because that formula still works 17 years later! The ugly main character, who has just enough understanding of the human psyche to get what he wants, the pure maidens who fall into his traps, not because they are sluts but because they are believably corrupted, and the evil female mastermind who is probably even worse than the main character. It's beautiful. The nostalgia is strong with this one.
Best Dark and Edgy Eroge: Tokage no Shippo Kiri
I'm cheating here a bit, because the game was released in November 2015. But since the fandisc was released in January last year, which means the full experience was only available to us in 2016, I see it as a 2016 game.
There is not much to say here that Derg hasn't already said. Games like this prove that even low-budget publishers can make good games if they just try. CYCLET is not Black Cyc, but it nonetheless tries to be something special and succeeds in setting itself apart from all the other low-budget publishers. Maybe that is the reason we only got one CYCLET game in 2016; they emphasize quality, not quantity.
Best RPG: Dungeon of Regalias
Let's not talk about the story and the characters. They are terrible, like in every Astronauts Sirius game. But damn, is the gameplay addicting, and actually good. Like, really, really good.
This is probably one of the best ero-dungeon crawlers in existence. They did everything right with this one. Hard difficulty available from the start. Skills as "items" which can be equipped, so you can change your build without any resetting of skill points like in other games. Monsters who all act differently, thus forcing you to constantly change your strategy, character builds and party. Great game.
Best SRPG: Sankai Ou no Yubiwa
I'm probably the only one, but... I actually liked 2016's Eushully game. Yeah, really. Ok, it wasn't really a good SRPG; it actually had many flaws, like being far too easy and exploitable, and the routes being too similar to each other... but... I liked the concept. Having six main characters who each represent a different kind of approach to the ero-RPG genre is really something I wish more games would do. I hate the good guy, the antihero and the maou archetypes, and sadly most Eushully games have one of these three as their main character. But Sankai Ou also has three other, more interesting protagonists to choose from. For example, I always wanted the angels to win in an Eushully game, and finally I have the opportunity to do so, because the obligatory fallen-angel storyline is already covered in the other routes. Nice.
Biggest disappointment: Extravaganza ~Mushigurui Hen~
I was so hyped for this one. The first true Black Cyc game after 5 years of absence. Written by Banya Izumi who we could trust with the task of continuing a beloved series, right? Right?
No, it's awful. This game turned out to be an abomination, which is quite remarkable because Black Cyc had already milked the Extravaganza series with bad side stories back when Black Cyc was still good for the most part. And Mushigurui Hen is even worse than those cheap nukige spinoffs. At least with a cheap nukige fandisc, we know we are getting something bad. Mushigurui Hen, on the other hand, could have been good. It could have been awesome. But it wasn't.
So what went wrong? You have these great characters with their epic background stories, but what do they do with them? Put them in school, because school life is, as we all know, the most important thing ever. And if a character is too old for school, just make them into rich snobs whose only problem in life is being too old (meaning being 23 years old) and lamenting about not having time to go to the cinema with friends because of work (meaning being a CEO and chilling all day in the office).
And if that wasn't bad enough, they also retcon a character death because this character was very popular back then and we need him for funny slice of life scenes now, yay.
The most offensive thing, however, is how this "sequel" actually takes place after the second arc of the original Extravaganza, which means before the last third of the original game. And all the important character events of the third arc, which really were the heart of the Extravaganza story, apparently don't take place in this timeline. Instead, we get inferior and meaningless drama with derailed characters in Mushigurui Hen, which is apparently now the canon timeline. Hurray.
Biggest insult: Everything released by Akabei Soft 3
I'm not going into this now, but literally every AKB3 game this year greatly offended me, just as Silky's Plus' games did last year. Let's just say that AKB3 makes games which represent everything that's wrong with modern eroge and have nothing left of what made eroge once great. (I might write a post explaining this statement in the near future.)
It saddens me how popular these games are. A dark prospect for 2017.
Hello again Fuwanovel, it's Dfbreezy (now named Kotario). This new entry is going to talk about the problems of motivating team members to work and to communicate.
There are a lot of problems in development. The best-known ones are funding, recruitment and meeting deadlines. Yes, those exist and are very well known, because devs highlight them frequently. But one other element that is largely overlooked is team motivation, or involvement. How is this a problem? Read further to find out.
Picture this: you have a nice concept. In fact, you've completed the first draft of said concept. You then go into recruitment based on a summary of the concept... not the concept itself. Of course the recruit does a pass over the summary, checks whether the genres fit, and inquires about the all-important pay.
This process is done and repeated over and over again in the EVN sector. It's basic practice to some extent. But that is where the problem begins, friends. After the payment is settled, almost every team member (minus the writers) never actually takes the time to assess the content of the project until it's over and done with. All they ask for is what you need, plus references.
That is the problem. By reading the content, you should know what is needed based on the description of the scenes in the concept material. But most of that is waived with an "I'm working on multiple projects".
I myself have had this problem with my team, with only 50% of the team actually reading the content. Luckily, my character artist falls into that half. Still, I feel we are at a distinct disadvantage against studios whose members actively take part in shaping the story itself, not just the VN aspect.
It may just be my assumption and speculation at this point, but what if team immersion could affect the final product positively and make it far better than it normally is? Some good food for thought, I'd say.
So this year I've taken to writing some independent OELVN pieces: two freebies that have both gone on some sort of hiatus, and one paid piece that I'm nearing completion of. For one of the freebies, the project lead kind of had her own vision for what I was writing (and I was too verbose). For the other, the lead liked what I wrote and even had a demo made of it; you can view it on YouTube. I did the writing, and yes, it's based on the anime "Free!" No money changed hands, in case you were wondering, and my work was also free. But I haven't heard back from her or about the project since late spring. Not sure if it's just on hiatus or she got other writers to do the character routes.
That brings me to the paid project I'm close to finishing: "Stay! Stay! Democratic People's Republic of Korea" is, as you would suspect, in part a parody of Overdrive's "Go! Go! Nippon!" The project leader has been really supportive, and has mostly liked what I've given him. (There was one part where he toned down some of the tropes a bit. Maybe one day after release I'll be allowed to show the original script.) The intro, all "date" routes and interludes have been written, and most have been edited. I'm down to the group outing and finale. I know where I want to go; it's a matter of actually getting it down. Sometimes it's hard working a full-time job, helping to raise two kids with a wife who also works full-time, and then writing on the side as well. So I'm not getting it done as fast as I'd like. I blame Fate/Grand Order, Drift Girls and Granblue Fantasy as well. CURSE YOU FUN PHONE/TABLET APP GAMES!!!
Darr over at DEVGRU_P has been keeping busy lately, adding in some other projects for production, such as Maid Mansion. He's a pretty nice guy and easy to work with. We're already talking future works beyond "Stay! Stay!" as well, so I'm pretty happy. I guess my main hope is that I'm as good a writer as I feel I am and the game/story goes over well.
Also, Eunji best Tsundere 2017!!! |
78953ad66851782e | History and Philosophy of Physics
1607 Submissions
[4] viXra:1607.0197 [pdf] submitted on 2016-07-16 18:47:40
Buenaventura Suarez the First American Astronomer in South America
Authors: Ramon Isasi
Comments: 11 Pages. Replacement of Version 2 and Version 1
In this short essay we consider some details of the life of Father Buenaventura Suarez: his astronomical research, and his activities as a craftsman who manufactured his scientific instruments entirely with the means that nature offered in the places where the Jesuit Missions were located. We also include a brief biography and a short description of his scientific productions and communications, together with some comments regarding his book Lunario de un Siglo (217 pages), whose fourth edition was published in 1752.
Category: History and Philosophy of Physics
[3] viXra:1607.0144 [pdf] replaced on 2016-08-12 09:53:15
A New Model of Physics
Authors: Ratikanta Das
Comments: 13 pages
Abstract: Our new model supports an alternative cosmology quite different from the hypothesis of an expanding universe with a big-bang origin. Everyone will be surprised to know that theories and equations obtained from this new cosmology are able to give simple explanations of many puzzles of physics, such as the internal structure of fundamental particles, the origin of mass, the origin of charge, the origin of the strong force, and the wave-particle duality of matter and radiation, among others. Our new model stands on classical mechanics, but it supports quantum mechanics by deriving the Schrödinger equation and the de Broglie hypothesis. The model also describes a 4D classical technique (named the spiral transformation) for the conversion of radiation into matter and vice versa. According to this model, particle-antiparticle pairs are created when our flat three-dimensional (3D) universe, which is supposed to be a grand 3D hyperspherical surface separating two super-grand 4D worlds on either side, is deformed locally into the two sides, forming two 4D structures. The model also gives an expression for a unified Coulomb and strong force.
Category: History and Philosophy of Physics
[2] viXra:1607.0054 [pdf] submitted on 2016-07-05 12:44:56
Time Comes Into Being if Two Different Three-Dimensional Spaces Meet
Authors: Tamas Lajtner
Comments: 13 Pages. New definition of time. New axiom of physics.
Time comes into being when two different three-dimensional spaces meet. This is the definition of time in the space-matter theory. Two different three-dimensional spaces exist if their elements have actions at different scales. The actions have minimum and maximum values in the given dimensions, which cannot be changed without losing the given dimension. Space can be converted into matter and matter can be converted into space, but space and matter have two different ranges of actions. Time is the meeting point of space and matter. The space-matter model allows us to find the common root of space, matter and time. In the space-matter model, matter causes space waves. Solely through the use of space waves, we can express spatial distance, time and energy. It is possible to express all these phenomena in electronvolts, so the metre can be converted into the second or into the kilogram and vice versa.
Category: History and Philosophy of Physics
[1] viXra:1607.0044 [pdf] submitted on 2016-07-04 10:36:39
A "Dark Energy Pulsation Principle" Summary Version.(2)
Authors: Terubumi Honjou
Comments: 14 Pages.
A "dark energy pulsation principle" summary version.(2) Chapter 2 elementary particle pulsation principle. [6]The summary of the elementary particle pulsation principle. [7] The hypothesis of "the elementary particle pulsation principle." (the original of the 1980 announcement). [8]Grounds to suppose to be a lump of the energy that an elementary particle is super-high-speed, and pulsates.
Category: History and Philosophy of Physics |
a347bd3cfbedb229 |
1 Rae §2.1, B&J §3.1, B&M §5.1 2.1 An equation for the matter waves: the time-dependent Schrödinger equation*** Classical wave equation (in one dimension), e.g. transverse waves on a string. Can we use this to describe the matter waves in free space?
2 An equation for the matter waves (2)
Seem to need an equation that involves the first derivative in time, but the second derivative in space (for matter waves in free space) 2222 Quantum Physics
3 An equation for the matter waves (3)
For particle with potential energy V(x,t), need to modify the relationship between energy and momentum: Total energy = kinetic energy + potential energy Suggests corresponding modification to Schrődinger equation: Time-dependent Schrődinger equation 2222 Quantum Physics Schrődinger
4 The Schrődinger equation: notes
This was a plausibility argument, not a derivation. We believe the Schrődinger equation to be true not because of this argument, but because its predictions agree with experiment. There are limits to its validity. In this form it applies to A single particle, that is Non-relativistic (i.e. has non-zero rest mass and velocity very much below c) The Schrődinger equation is a partial differential equation in x and t (like classical wave equation) The Schrődinger equation contains the complex number i. Therefore its solutions are essentially complex (unlike classical waves, where the use of complex numbers is just a mathematical convenience) 2222 Quantum Physics
5 The Hamiltonian operator
Can think of the RHS of the Schrődinger equation as a differential operator that represents the energy of the particle. This operator is called the Hamiltonian of the particle, and usually given the symbol Kinetic energy operator Potential energy operator Hence there is an alternative (shorthand) form for time-dependent Schrődinger equation: 2222 Quantum Physics
6 2.2 The significance of the wave function***
Rae §2.1, B&J §2.2, B&M §5.2 2.2 The significance of the wave function*** Ψ is a complex quantity, so what can be its significance for the results of real physical measurements on a system? Remember photons: the number of photons per unit volume is proportional to the electromagnetic energy per unit volume, hence to the square of the electromagnetic field strength. Postulate (Born interpretation): the probability of finding the particle in a small length δx at position x and time t is equal to |Ψ(x,t)|²δx. Note: |Ψ(x,t)|² is real, so the probability is also real, as required. The total probability of finding the particle between positions a and b is the integral of |Ψ|² from a to b.
7 Example Suppose that at some instant of time a particle’s wavefunction is What is: (a) The probability of finding the particle between x=0.5 and x=0.5001? (b) The probability per unit length of finding the particle at x=0.6? (c) The probability of finding the particle between x=0.0 and x=0.5? 2222 Quantum Physics
8 Normalization The total probability for the particle to be anywhere should be one (at any time): this is the normalization condition. Suppose we have a solution to the Schrödinger equation that is not normalized. Then we can calculate the normalization integral and re-scale the wave function. (This works because any solution to the S.E., multiplied by a constant, remains a solution, because the equation is linear and homogeneous.) Alternatively: the solution to the Schrödinger equation contains an arbitrary constant, which can be fixed by imposing the normalization condition (2.7).
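A minimal numerical sketch of this re-scaling step (the unnormalized Gaussian below is an assumed stand-in for whatever solution the slide has in mind):

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
psi = np.exp(-x**2)                      # an unnormalized trial wavefunction

norm2 = np.trapz(np.abs(psi)**2, x)      # normalization integral of |psi|^2
psi_normalized = psi / np.sqrt(norm2)    # re-scale so total probability is one

print(np.trapz(np.abs(psi_normalized)**2, x))  # ~1.0
```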
9 Normalizing a wavefunction - example
2222 Quantum Physics
10 2.3 Boundary conditions for the wavefunction
Rae §2.3, B&J §3.1 2.3 Boundary conditions for the wavefunction The wavefunction must: 1. Be a continuous and single-valued function of both x and t (in order that the probability density be uniquely defined) 2. Have a continuous first derivative (unless the potential goes to infinity) 3. Have a finite normalization integral. 2222 Quantum Physics
11 2.4 Time-independent Schrődinger equation***
Rae §2.2, B&J §3.5, B&M §5.3 2.4 Time-independent Schrődinger equation*** Suppose potential V(x,t) (and hence force on particle) is independent of time t: RHS involves only variation of Ψ with x (i.e. Hamiltonian operator does not depend on t) LHS involves only variation of Ψ with t Look for a solution in which the time and space dependence of Ψ are separated: Substitute: 2222 Quantum Physics
12 Time-independent Schrődinger equation (contd)
Solving the time equation: The space equation becomes: or Time-independent Schrődinger equation 2222 Quantum Physics
13 Notes In one space dimension, the time-independent Schrődinger equation is an ordinary differential equation (not a partial differential equation) The sign of i in the time evolution is determined by the choice of the sign of i in the time-dependent Schrődinger equation The time-independent Schrődinger equation can be thought of as an eigenvalue equation for the Hamiltonian operator: Operator × function = number × function (Compare Matrix × vector = number × vector) [See 2246] We will consistently use uppercase Ψ(x,t) for the full wavefunction (time-dependent Schrődinger equation), and lowercase ψ(x) for the spatial part of the wavefunction when time and space have been separated (time-independent Schrődinger equation) Probability distribution of particle is now independent of time (“stationary state”): For a stationary state we can use either ψ(x) or Ψ(x,t) to compute probabilities; we will get the same result. 2222 Quantum Physics
14 2.6 SE in three dimensions Rae §3.1, B&J §3.1, B&M §5.1
To apply the Schrődinger equation in the real (three-dimensional) world we keep the same basic structure: BUT Wavefunction and potential energy are now functions of three spatial coordinates: Kinetic energy now involves three components of momentum Interpretation of wavefunction: 2222 Quantum Physics
15 Puzzle The requirement that a plane wave
plus the energy-momentum relationship for free-non-relativistic particles led us to the free-particle Schrődinger equation. Can you use a similar argument to suggest an equation for free relativistic particles, with energy-momentum relationship: 2222 Quantum Physics
16 3.1 A Free Particle Free particle: experiences no forces so potential energy independent of position (take as zero) Linear ODE with constant coefficients so try Time-independent Schrődinger equation: General solution: Combine with time dependence to get full wave function: 2222 Quantum Physics
17 Notes Plane wave is a solution (just as well, since our plausibility argument for the Schrődinger equation was based on this being so) Note signs: Sign of time term (-iωt) is fixed by sign adopted in time-dependent Schrődinger Equation Sign of position term (±ikx) depends on propagation direction of wave There is no restriction on the allowed energies, so there is a continuum of states 2222 Quantum Physics
18 3.2 Infinite Square Well Rae §2.4, B&J §4.5, B&M §5.4
Consider a particle confined to a finite region -a<x<a by an infinitely high potential barrier. There is no solution in the barrier region (the particle would have infinite potential energy). In the well region the free-particle equation applies. Boundary conditions: continuity of ψ at x=a and at x=-a. Note that a discontinuity in dψ/dx is allowable, since the potential is infinite.
19 Infinite square well (2)
Add and subtract these conditions: Even solution: ψ(x)=ψ(-x) Odd solution: ψ(x)=-ψ(-x) Energy: 2222 Quantum Physics
20 Infinite well – normalization and notes
Notes on the solution: Energy is quantized to particular values (characteristic of bound-state problems in quantum mechanics, where a particle is localized in a finite region of space). The potential is even under reflection; stationary state wavefunctions may be even or odd (we say they have even or odd parity). Compare notation in 1B23 and in books: 1B23: well extends from x=0 to x=b; Rae and B&J: well extends from x=-a to x=+a (as here); B&M: well extends from x=-a/2 to x=+a/2 (with corresponding differences in wavefunction).
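The quantized levels and the even/odd wavefunctions can be made concrete numerically; a short sketch (the electron mass and the half-width a = 1 nm are assumed purely for illustration):

```python
import numpy as np

hbar = 1.0545718e-34   # J s
m_e = 9.1093837e-31    # kg (electron, assumed for illustration)
a = 1e-9               # half-width of the well in metres (assumed)

def E_n(n):
    """Energy of the nth level of an infinite well extending from -a to +a."""
    return (n * np.pi * hbar) ** 2 / (8 * m_e * a ** 2)

def psi_n(n, x):
    """Normalized stationary states: even (cosine) for odd n, odd (sine) for even n."""
    if n % 2 == 1:
        return np.cos(n * np.pi * x / (2 * a)) / np.sqrt(a)
    return np.sin(n * np.pi * x / (2 * a)) / np.sqrt(a)

for n in (1, 2, 3):
    print(n, E_n(n) / 1.602176634e-19, "eV")   # energies grow like n^2
```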
21 The infinite well and the Uncertainty Principle
Position uncertainty in the well: Momentum uncertainty in the lowest state follows from a classical argument (agrees with the fully quantum mechanical result, as we will see in §4). Compare with the Uncertainty Principle: the ground state is close to minimum uncertainty.
22 3.3 Finite square well Rae §2.4, B&J §4.6
Now make the potential well more realistic by making the barriers a finite height V0. The axis splits into region I (to the left of the well), region II (inside the well) and region III (to the right), to be solved separately.
23 Finite square well (2) Match value and derivative of wavefunction at region boundaries: Match ψ: Match dψ/dx: Add and subtract: 2222 Quantum Physics
24 Finite square well (3) Divide equations:
Must be satisfied simultaneously: Cannot be solved algebraically. Convenient form for graphical solution: 2222 Quantum Physics
25 Graphical solution for finite well
Parameters used in the plot: k0=3, a=1.
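The same solution can be found numerically instead of graphically; a sketch for the plotted parameters k0 = 3, a = 1 (the root brackets below are assumptions read off from the locations of the tangent singularities):

```python
import numpy as np
from scipy.optimize import brentq

k0, a = 3.0, 1.0   # values from the slide

# Even states: k*tan(k*a) = kappa; odd states: -k*cot(k*a) = kappa,
# with kappa = sqrt(k0^2 - k^2).
def even_eq(k):
    return k * np.tan(k * a) - np.sqrt(k0**2 - k**2)

def odd_eq(k):
    return -k / np.tan(k * a) - np.sqrt(k0**2 - k**2)

k_even = brentq(even_eq, 0.01, np.pi/2 - 0.01)
k_odd = brentq(odd_eq, np.pi/2 + 0.01, min(np.pi - 0.01, k0 - 1e-9))
print("even-state k:", k_even, " odd-state k:", k_odd)  # two bound states for k0*a = 3
```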
26 Notes Penetration of particle into “forbidden” region where V>E (particle cannot exist here classically) Number of bound states depends on depth of potential well, but there is always at least one (even) state Potential is even function, wavefunctions may be even or odd (we say they have even or odd parity) Limit as V0→∞: 2222 Quantum Physics
27 Example: the quantum well
A quantum well is a "sandwich" made of two different semiconductors in which the energy of the electrons is different, and whose atomic spacings are so similar that they can be grown together without an appreciable density of defects (e.g. material A = AlGaAs, material B = GaAs; the figure plots electron potential energy against position). Now used in many electronic devices (some transistors, diodes, solid-state lasers).
28 3.4 Particle Flux Rae §9.1; B&M §5.2, B&J §3.2
In order to analyse problems involving scattering of free particles, we need to understand the normalization of free-particle plane-wave solutions. Conclude that if we try to impose the usual normalization condition, we will get A=0. This problem is related to the Uncertainty Principle: position is completely undefined; a single particle can be anywhere from -∞ to ∞, so the probability of finding it in any finite region is zero, while the momentum is completely defined.
29 Particle Flux (2) More generally: what is the rate of change of the probability that a particle exists in some region (say, between x=a and x=b)? Use the time-dependent Schrödinger equation.
30 Particle Flux (3) Integrate by parts. Interpretation: the rate of change of the probability in the region equals the flux entering at x=a minus the flux leaving at x=b. Note: a wavefunction that is real carries no current. Note: for a stationary state we can use either ψ(x) or Ψ(x,t).
31 Particle Flux (4) Sanity check: apply to free-particle plane wave.
Makes sense: # particles passing x per unit time = # particles per unit length × velocity Wavefunction describes a “beam” of particles. 2222 Quantum Physics
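This sanity check is easy to reproduce symbolically; a sketch with sympy, evaluating the standard flux expression j = (ħ/2mi)(ψ* dψ/dx - ψ dψ*/dx) for the plane wave:

```python
import sympy as sp

x, k, m, hbar, A = sp.symbols('x k m hbar A', positive=True)
psi = A * sp.exp(sp.I * k * x)   # free-particle plane wave

# Probability flux j = (hbar / 2mi) (psi* dpsi/dx - psi dpsi*/dx)
j = (hbar / (2 * m * sp.I)) * (sp.conjugate(psi) * sp.diff(psi, x)
                               - psi * sp.diff(sp.conjugate(psi), x))
print(sp.simplify(j))   # -> A**2*hbar*k/m: density |A|^2 times velocity hbar*k/m
```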
32 3.5 Potential Step Rae §9.1; B&J §4.3
Consider a potential V(x) which rises suddenly from 0 to V0 at x=0. Boundary condition: particles are only incident from the left. Case 1: E<V0 (below the step); solve separately for x<0 and x>0.
33 Potential Step (2) Continuity of ψ at x=0:
Solve for reflection and transmission: 2222 Quantum Physics
34 Transmission and reflection coefficients
2222 Quantum Physics
35 Potential Step (3) Case 2: E>V0 (above step)
Solution for x>0 is now Matching conditions: Transmission and reflection coefficients: 2222 Quantum Physics
36 Summary of transmission through potential step
Notes: Some penetration of particles into the forbidden region even for energies below the step height (case 1, E<V0); no transmitted particle flux, 100% reflection (case 1, E<V0); reflection probability does not fall to zero for energies above the barrier (case 2, E>V0). Contrast classical expectations: 100% reflection for E<V0, with no penetration into the barrier; 100% transmission for E>V0.
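A quantitative sketch of these results (natural units ħ = m = 1 assumed); note that R + T = 1 in every case and R stays nonzero above the step:

```python
import numpy as np

def step_RT(E, V0, m=1.0, hbar=1.0):
    """Reflection and transmission probabilities for a potential step."""
    k1 = np.sqrt(2 * m * E) / hbar
    if E <= V0:
        return 1.0, 0.0                      # total reflection below the step
    k2 = np.sqrt(2 * m * (E - V0)) / hbar
    R = ((k1 - k2) / (k1 + k2)) ** 2
    T = 4 * k1 * k2 / (k1 + k2) ** 2
    return R, T

for E in (0.5, 1.5, 4.0):
    print(E, step_RT(E, V0=1.0))             # R + T = 1 in each case
```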
37 3.6 Rectangular Potential Barrier
Rae §2.5; B&J §4.4; B&M §5.9 Now consider a potential barrier of finite height V0 and finite thickness a. Boundary condition: particles are only incident from the left. Solve separately in region I (before the barrier), region II (inside it) and region III (beyond it).
38 Rectangular Barrier (2)
Match value and derivative of wavefunction at region boundaries: Match ψ: Match dψ/dx: Eliminate wavefunction in central region: 2222 Quantum Physics
39 Rectangular Barrier (3)
Transmission and reflection coefficients can then be extracted. For a very thick or high barrier the transmission is small but non-zero: there is "tunnelling" through the classically forbidden barrier region.
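The standard thick-barrier estimate (assumed here, since the slide's formula is an image) is T ≈ 16(E/V0)(1-E/V0)exp(-2κa); a minimal sketch in natural units:

```python
import numpy as np

def tunnel_T(E, V0, a, m=1.0, hbar=1.0):
    """Thick/high-barrier transmission estimate T ~ 16(E/V0)(1-E/V0) exp(-2 kappa a)."""
    kappa = np.sqrt(2 * m * (V0 - E)) / hbar   # decay constant inside the barrier
    return 16 * (E / V0) * (1 - E / V0) * np.exp(-2 * kappa * a)

print(tunnel_T(E=0.5, V0=1.0, a=5.0))   # exponentially small but non-zero
```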
40 Examples of tunnelling
Tunnelling occurs in many situations in physics and astronomy: 1. Nuclear fusion (in stars and fusion reactors): incident nuclei tunnel through the repulsive Coulomb barrier before the attractive strong nuclear force takes over (figure: potential against internuclear distance x). 2. Alpha-decay: the α-particle, starting from its initial energy inside the nucleus, tunnels out through the Coulomb barrier (figure: potential against the distance x of the α-particle from the nucleus). 3. Field emission of electrons from surfaces (e.g. in plasma displays): electrons tunnel from the material into the vacuum through a barrier related to the work function W (figure: potential against the distance x of the electron from the surface).
41 3.7 Simple Harmonic Oscillator
Rae §2.6; B&M §5.5; B&J §4.7 3.7 Simple Harmonic Oscillator Example: a particle of mass m on a spring, with a Hooke's-law restoring force of spring constant k. Time-independent Schrödinger equation: still a linear differential equation, but the coefficients are not constant. Simplify: change variable to a scaled coordinate y.
42 Simple Harmonic Oscillator (2)
Asymptotic solution in the limit of very large y: Check: Equation for H: 2222 Quantum Physics
43 Simple Harmonic Oscillator (3)
Must solve this ODE by the power-series method (Frobenius method); this is done as an example in 2246. We find: the series for H(y) must terminate in order to obtain a normalizable solution. This can be made to happen after n terms, for either the even or the odd terms in the series (but not both), by choosing the energy appropriately. Hn is known as the nth Hermite polynomial. Label the resulting functions H by the values of n that we choose.
44 The Hermite polynomials
For reference, the first few Hermite polynomials are listed on the slide. NOTE: Hn contains y^n as the highest power. Each H is either an odd or an even function, according to whether n is even or odd.
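Both the polynomials and the orthonormality of the resulting oscillator eigenfunctions can be checked numerically; a sketch in scaled units (ħ = m = ω = 1), using numpy's physicists' Hermite series:

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def sho_psi(n, y):
    """Oscillator eigenfunction: H_n(y) exp(-y^2/2) with the standard normalization."""
    coeffs = [0] * n + [1]                           # select H_n
    norm = 1.0 / sqrt(2**n * factorial(n) * sqrt(pi))
    return norm * hermval(y, coeffs) * np.exp(-y**2 / 2)

y = np.linspace(-10, 10, 4001)
# Orthonormality check: <psi_m | psi_n> should be the identity matrix
for m in range(3):
    print([round(np.trapz(sho_psi(m, y) * sho_psi(n, y), y), 6) for n in range(3)])
```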
45 Simple Harmonic Oscillator (4)
Transforming back to the original variable x, the wavefunction becomes: Probability per unit length of finding the particle is: Compare classical result: probability of finding particle in a length δx is proportional to the time δt spent in that region: For a classical particle with total energy E, velocity is given by 2222 Quantum Physics
46 Notes “Zero-point energy”: “Quanta” of energy: Even and odd solutions
Applies to any simple harmonic oscillator, including Molecular vibrations Vibrations in a solid (hence phonons) Electromagnetic field modes (hence photons), even though this field does not obey exactly the same Schrődinger equation You will do another, more elegant, solution method (no series or Hermite polynomials!) next year For high-energy states, probability density peaks at classical turning points (correspondence principle) 2222 Quantum Physics
47 4 Postulates of QM This section puts quantum mechanics onto a more formal mathematical footing by specifying those postulates of the theory which cannot be derived from classical physics. Main ingredients: The wave function (to represent the state of the system); Hermitian operators (to represent observable quantities); A recipe for identifying the operator associated with a given observable; A description of the measurement process, and for predicting the distribution of outcomes of a measurement; A prescription for evolving the wavefunction in time (the time-dependent Schrődinger equation) 2222 Quantum Physics
48 4.1 The wave function Postulate 4.1: There exists a wavefunction Ψ that is a continuous, square-integrable, single-valued function of the coordinates of all the particles and of time, and from which all possible predictions about the physical properties of the system can be obtained. Examples of the meaning of “The coordinates of all the particles”: For a single particle moving in one dimension: For a single particle moving in three dimensions: For two particles moving in three dimensions: The modulus squared of Ψ for any value of the coordinates is the probability density (per unit length, or volume) that the system is found with that particular coordinate value (Born interpretation). 2222 Quantum Physics
49 4.2 Observables and operators
Postulate 4.2.1: to each observable quantity is associated a linear, Hermitian operator (LHO). An operator is linear if and only if Examples: which of the operators defined by the following equations are linear? Note: the operators involved may or may not be differential operators (i.e. may or may not involve differentiating the wavefunction). 2222 Quantum Physics
50 Hermitian operators An operator O is Hermitian if and only if:
for all functions f,g vanishing at infinity. Compare the definition of a Hermitian matrix M: Analogous if we identify a matrix element with an integral: (see 3226 course for more detail…) 2222 Quantum Physics
51 Hermitian operators: examples
2222 Quantum Physics
52 Eigenvectors and eigenfunctions
Postulate 4.2.2: the eigenvalues of the operator represent the possible results of carrying out a measurement of the corresponding quantity. Definition of an eigenvalue for a general linear operator: Compare definition of an eigenvalue of a matrix: Example: the time-independent Schrődinger equation: 2222 Quantum Physics
53 Important fact: The eigenvalues of a Hermitian operator are real (like the eigenvalues of a Hermitian matrix). Proof: Postulate 4.2.3: immediately after making a measurement, the wavefunction is identical to an eigenfunction of the operator corresponding to the eigenvalue just obtained as the measurement result. Ensures that we get the same result if we immediately re-measure the same quantity. 2222 Quantum Physics
54 4.3 Identifying the operators
Postulate 4.3: the operators representing the position and momentum of a particle are (one dimension) (three dimensions) Other operators may be obtained from the corresponding classical quantities by making these replacements. Examples: The Hamiltonian (representing the total energy as a function of the coordinates and momenta) Angular momentum: 2222 Quantum Physics
55 Eigenfunctions of momentum
The momentum operator is Hermitian, as required: Its eigenfunctions are plane waves: 2222 Quantum Physics
56 Orthogonality of eigenfunctions
The eigenfunctions of a Hermitian operator belonging to different eigenvalues are orthogonal. If then Proof: 2222 Quantum Physics
57 Orthonormality of eigenfunctions
What if two eigenfunctions have the same eigenvalue? (In this case the eigenvalue is said to be degenerate.) Any linear combination of these eigenfunctions is also an eigenfunction with the same eigenvalue: So we are free to choose as the eigenfunctions two linear combinations that are orthogonal. If the eigenfunctions are all orthogonal and normalized, they are said to be orthonormal. 2222 Quantum Physics
58 Orthonormality of eigenfunctions: example
Consider the solutions of the time-independent Schrődinger equation (energy eigenfunctions) for an infinite square well: We chose the constants so that normalization is correct: 2222 Quantum Physics
59 Complete sets of functions
The eigenfunctions φn of a Hermitian operator form a complete set, meaning that any other function satisfying the same boundary conditions can be expanded as If the eigenfunctions are chosen to be orthonormal, the coefficients an can be determined as follows: We will see the significance of such expansions when we come to look at the measurement process. 2222 Quantum Physics
60 Normalization and expansions in complete sets
The condition for normalizing the wavefunction is now If the eigenfunctions φn are orthonormal, this becomes Natural interpretation: the probability of finding the system in the state φn(x) (as opposed to any of the other eigenfunctions) is 2222 Quantum Physics
61 Expansion in complete sets: example
2222 Quantum Physics
62 4.4 Eigenfunctions and measurement
Postulate 4.4: suppose a measurement of the quantity Q is made, and that the (normalized) wavefunction can be expanded in terms of the (normalized) eigenfunctions φn of the corresponding operator as Then the probability of obtaining the corresponding eigenvalue qn as the measurement result is Corollary: if a system is definitely in eigenstate φn, the result of measuring Q is definitely the corresponding eigenvalue qn. What is the meaning of these "probabilities" in discussing the properties of a single system? Still a matter for debate, but the usual interpretation is that the probability of a particular result determines the frequency of occurrence of that result in measurements on an ensemble of similar systems.
63 Commutators In general, operators do not commute: that is to say, the order in which we allow operators to act on functions matters, as for the position and momentum operators. We define the commutator as the difference between the two orderings; two operators commute only if their commutator is zero. So, for position and momentum, [x, p] = iħ, which is non-zero.
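This can be verified directly by acting with both orderings on an arbitrary test function; a sketch with sympy:

```python
import sympy as sp

x, hbar = sp.symbols('x hbar')
f = sp.Function('f')(x)          # arbitrary test function

# Momentum operator p = -i hbar d/dx acting on a function
p = lambda g: -sp.I * hbar * sp.diff(g, x)

commutator = x * p(f) - p(x * f)  # (x p - p x) f
print(sp.simplify(commutator))    # -> I*hbar*f(x), i.e. [x, p] = i hbar
```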
64 Compatible operators Two observables are compatible if their operators share the same eigenfunctions (but not necessarily the same eigenvalues). Consequence: two compatible observables can have precisely-defined values simultaneously. Measure observable R, definitely obtain result rm (the corresponding eigenvalue of R) Measure observable Q, obtain result qm (an eigenvalue of Q) Re-measure Q, definitely obtain result qm once again Wavefunction of system is corresponding eigenfunction φm Wavefunction of system is still corresponding eigenfunction φm Compatible operators commute with one another: Expansion in terms of joint eigenfunctions of both operators Can also show the converse: any two commuting operators are compatible. 2222 Quantum Physics
65 Example: measurement of position
2222 Quantum Physics
66 Example: measurement of position (2)
2222 Quantum Physics
67 Expectation values The average (mean) value of measurements of the quantity Q is therefore the sum of the possible measurement results times the corresponding probabilities: We can also write this as: 2222 Quantum Physics
68 4.5 Evolution of the system
Postulate 4.5: Between measurements (i.e. when it is not disturbed by external influences) the wave-function evolves with time according to the time-dependent Schrődinger equation. Hamiltonian operator. This is a linear, homogeneous differential equation, so the linear combination of any two solutions is also a solution: the superposition principle. 2222 Quantum Physics
69 Calculating time dependence using expansion in energy eigenfunctions
Suppose the Hamiltonian is time-independent. In that case we know that solutions of the time-dependent Schrődinger equation exist in the form: where the wavefunctions ψ(x) and the energy E correspond to one solution of the time-independent Schrődinger equation: We know that all the functions ψn together form a complete set, so we can expand Hence we can find the complete time dependence (superposition principle): 2222 Quantum Physics
70 Time-dependent behaviour: example
Suppose the state of a particle in an infinite square well at time t=0 is a 'superposition' of the n=1 and n=2 states; the slide then shows the wave function at a subsequent time t and the resulting probability density.
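A sketch of this example (equal amplitudes and natural units are assumed for illustration); the density sloshes back and forth at angular frequency (E2-E1)/ħ while the total probability remains one:

```python
import numpy as np

hbar, m, a = 1.0, 1.0, 1.0                  # natural units (assumed)
x = np.linspace(-a, a, 801)

E = lambda n: (n * np.pi * hbar) ** 2 / (8 * m * a ** 2)
psi1 = np.cos(np.pi * x / (2 * a)) / np.sqrt(a)   # n=1 (even)
psi2 = np.sin(np.pi * x / a) / np.sqrt(a)         # n=2 (odd)

def prob_density(t):
    """|Psi|^2 for the equal superposition at time t."""
    Psi = (psi1 * np.exp(-1j * E(1) * t / hbar)
           + psi2 * np.exp(-1j * E(2) * t / hbar)) / np.sqrt(2)
    return np.abs(Psi) ** 2

T = 2 * np.pi * hbar / (E(2) - E(1))        # oscillation period of the density
print(np.trapz(prob_density(0.0), x), np.trapz(prob_density(T / 2), x))  # both ~1
```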
71 Rate of change of expectation value
Consider the rate of change of the expectation value of a quantity Q: 2222 Quantum Physics
72 Example 1: Conservation of probability
Rate of change of total probability that the particle may be found at any point: Total probability is the “expectation value” of the operator 1. Total probability conserved (related to existence of a well defined probability flux – see §3.4) 2222 Quantum Physics
73 Example 2: Conservation of energy
Consider the rate of change of the mean energy: Even though the energy of a system may be uncertain (in the sense that measurements of the energy made on many copies of the system may give different results), the average energy is always conserved in time.
74 5.1 Angular momentum operators
Reading: Rae Chapter 5; B&J §§6.1, 6.3; B&M §§ Angular momentum is a very important quantity in three-dimensional problems involving a central force (one that is always directed towards or away from a central point). In that case it is classically a conserved quantity. The origin of r is the same central point towards/away from which the force is directed. We can write down a quantum-mechanical operator for it by applying our usual rules, and likewise for its individual components.
75 5.2 Commutation relations***
The different components of angular momentum do not commute with one another. By similar arguments get the cyclic permutations: 2222 Quantum Physics
76 Commutation relations (2)
The different components of L do not commute with one another, but they do commute with the (squared) magnitude of the angular momentum vector: Note a useful formula: Important consequence: we cannot find simultaneous eigenfunctions of all three components. But we can find simultaneous eigenfunctions of one component (conventionally the z component) and L2 2222 Quantum Physics
77 5.3 Angular momentum in spherical polar coordinates
On this slide, hats refer to unit vectors, not operators. Spherical polar coordinates (r, θ, φ) are the natural coordinate system in which to describe angular momentum (see 2246). In these coordinates the full (vector) angular momentum operator can be written in terms of the angular variables alone. To find the z-component, note that the unit vector k in the z-direction can be expanded in the spherical polar unit vectors.
78 L2 in spherical polar coordinates
On this slide, hats refer to unit vectors, not operators. Depends only on angular behaviour of wavefunction. Closely related to angular part of Laplacian (see 2246 and Section 6). 2222 Quantum Physics
79 5.4 Eigenvalues and eigenfunctions
Look for simultaneous eigenfunctions of L2 and one component of L (conventional to choose Lz) Eigenvalues and eigenfunctions of Lz: Physical boundary condition: wave-function must be single-valued Quantization of angular momentum about z-axis (compare Bohr model) 2222 Quantum Physics
80 Eigenvalues and eigenfunctions (2)
Now look for eigenfunctions of L2, in the form (ensures solutions remain eigenfunctions of Lz, as we want) Eigenvalue condition becomes 2222 Quantum Physics
81 The Legendre equation Make the substitution
This is exactly the Legendre equation, solved in 2246 using the Frobenius method. 2222 Quantum Physics
82 Legendre polynomials and associated Legendre functions
In order for solutions to exist that remain finite at μ=±1 (i.e. at θ=0 and θ=π) we require that the eigenvalue satisfies (like SHO, where we found restrictions on energy eigenvalue in order to produce normalizable solutions) The finite solutions are then the associated Legendre functions, which can be written in terms of the Legendre polynomials: where m is an integer constrained to lie between –l and +l. Legendre polynomials: 2222 Quantum Physics
83 Spherical harmonics The full eigenfunctions can also be written as spherical harmonics: Because they are eigenfunctions of Hermitian operators with different eigenvalues, they are automatically orthogonal when integrated over all angles (i.e. over the surface of the unit sphere). The constants C are conventionally defined so the spherical harmonics obey the following important normalization condition: First few examples (see also 2246): 2222 Quantum Physics
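The normalization and orthogonality conditions can be verified with scipy's built-in spherical harmonics (note scipy's sph_harm puts the azimuthal angle first):

```python
import numpy as np
from scipy.special import sph_harm
from scipy.integrate import dblquad

def overlap(l1, m1, l2, m2):
    """<Y_l1^m1 | Y_l2^m2> integrated over the unit sphere."""
    def integrand(theta, phi):    # theta: azimuth in [0, 2pi], phi: polar in [0, pi]
        Y1 = sph_harm(m1, l1, theta, phi)
        Y2 = sph_harm(m2, l2, theta, phi)
        return (np.conj(Y1) * Y2).real * np.sin(phi)   # sin(phi) from the area element
    val, _ = dblquad(integrand, 0, np.pi, 0, 2 * np.pi)
    return val

print(overlap(1, 0, 1, 0))   # ~1: normalized
print(overlap(1, 0, 2, 0))   # ~0: different eigenvalues, hence orthogonal
```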
84 Shapes of the spherical harmonics
To read the plots: the distance from the origin corresponds to the magnitude (modulus) of the plotted quantity; the colour corresponds to the phase (argument). The panels show the real and imaginary parts against the x, y, z axes.
85 Shapes of spherical harmonics (2)
86 5.5 The vector model for angular momentum***
To summarize: l is known as the principal angular momentum quantum number: determines the magnitude of the angular momentum m is known as the magnetic quantum number: determines the component of angular momentum along a chosen axis (the z-axis) These states do not correspond to well-defined values of Lx and Ly, since these operators do not commute with Lz. Semiclassical picture: each solution corresponds to a cone of angular momentum vectors, all with the same magnitude and the same z-component. 2222 Quantum Physics
87 The vector model (2) Example: l=2
The magnitude of the angular momentum is √(l(l+1))ħ = √6 ħ, and its component in the z direction can be mħ with m = -2, -1, 0, +1, +2.
88 6.1 The three-dimensional square well
Reading: Rae §3.2, B&J §7.4; B&M §5.11 6.1 The three-dimensional square well Consider a particle which is free to move in three dimensions everywhere within a cubic box, which extends from -a to +a in each direction. The particle is prevented from leaving the box by infinitely high potential barriers. The time-independent Schrödinger equation within the box is free-particle like. Separation of variables: write the wavefunction as a product X(x)Y(y)Z(z), with boundary conditions at the walls in each coordinate.
89 Three-dimensional square well (2)
Substitute into the Schrödinger equation and divide by XYZ: we obtain three effective one-dimensional Schrödinger equations.
90 Three-dimensional square well (3)
Wavefunctions and energy eigenvalues known from solution to one-dimensional square well (see §3.2). Total energy is 2222 Quantum Physics
91 6.2 The Hamiltonian for a hydrogenic atom***
Reading: Rae §§ , B&M Chapter 7, B&J §7.2 and §7.5 6.2 The Hamiltonian for a hydrogenic atom*** For a hydrogenic atom or ion having nuclear charge +Ze and a single electron (charge -e at distance r), the Hamiltonian is the kinetic term plus the Coulomb potential. Note the spherical symmetry: the potential depends only on r. Note: for greater accuracy we should use the reduced mass corresponding to the relative motion of the electron and the nucleus (since the nucleus does not remain precisely fixed; see 1B2x). The natural coordinate system to use is spherical polar coordinates, in which the Laplacian operator takes its standard form (see 2246). This means that the angular momentum about any axis, and also the total angular momentum, are conserved quantities: they commute with the Hamiltonian, and can have well-defined values in the energy eigenfunctions of the system.
92 6.3 Separating the variables
Write the time-independent Schrődinger equation as: Now look for solutions in the form Substituting into the Schrődinger equation: 2222 Quantum Physics
93 The angular equation We recognise that the angular equation is simply the eigenvalue condition for the total angular momentum operator L2: This means we already know the corresponding eigenvalues and eigenfunctions (see §5): Note: all this would work for any spherically-symmetric potential V(r), not just for the Coulomb potential. 2222 Quantum Physics
94 6.4 Solving the radial equation
Now the radial part of the Schrődinger equation becomes: Note that this depends on l, but not on m: it therefore involves the magnitude of the angular momentum, but not its orientation. Define a new unknown function χ by: 2222 Quantum Physics
95 The effective potential
This corresponds to one-dimensional motion with the effective potential V(r) First term: Second term: r 2222 Quantum Physics
96 Atomic units*** Atomic units: there are a lot of physical constants in these expressions. It makes atomic problems much more straightforward to adopt a system of units in which as many as possible of these constants are one. In atomic units we set: In this unit system, the radial equation becomes 2222 Quantum Physics
97 Solution near the nucleus (small r)
For small values of r the second derivative and centrifugal terms dominate over the others. Try a solution to the differential equation in this limit as We want a solution such that R(r) remains finite as r→0, so take 2222 Quantum Physics
98 Asymptotic solution (large r)
Now consider the radial equation at very large distances from the nucleus, when both terms in the effective potential can be neglected. We are looking for bound states of the atom, where the electron does not have enough energy to escape to infinity: Inspired by this, let us rewrite the solution in terms of yet another unknown function, F(r): 2222 Quantum Physics
99 Differential equation for F
Can obtain a corresponding differential equation for F: This equation is solved in 2246, using the Frobenius (power-series) method. The indicial equation gives 2222 Quantum Physics
100 Properties of the series solution
If the full series found in 2246 is allowed to continue up to an arbitrarily large number of terms, the overall solution behaves like (not normalizable) Hence the series must terminate after a finite number of terms. This happens only if So the energy is Note that once we have chosen n, the energy is independent of both m (a feature of all spherically symmetric systems, and hence of all atoms) and l (a special feature of the Coulomb potential, and hence just of hydrogenic atoms). n is known as the principal quantum number. It defines the “shell structure” of the atom. 2222 Quantum Physics
101 6.5 The hydrogen energy spectrum and wavefunctions***
Each solution of the time-independent Schrödinger equation is defined by the three quantum numbers n, l, m. For each value of n=1,2,… we have a definite energy. For each value of n, we can have n possible values of the total angular momentum quantum number l: l=0,1,2,…,n-1. For each value of l and n we can have 2l+1 values of the magnetic quantum number m. Traditional nomenclature: l=0: s states (from "sharp" spectral lines); l=1: p states ("principal"); l=2: d states ("diffuse"); l=3: f states ("fine"); and so on alphabetically (g, h, i, etc.). The total number of states (statistical weight) associated with a given energy En follows by summing 2l+1 over l.
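A small sketch of the resulting spectrum and shell degeneracies (the Rydberg energy 13.606 eV is assumed; spin is ignored at this stage, as in the slide):

```python
def hydrogen_levels(Z=1, n_max=4):
    """Energies E_n = -13.6 Z^2 / n^2 eV and shell degeneracies sum_l (2l+1) = n^2."""
    Ry = 13.605693  # eV
    for n in range(1, n_max + 1):
        degeneracy = sum(2 * l + 1 for l in range(n))   # = n**2 (spin ignored)
        print(f"n={n}: E = {-Ry * Z**2 / n**2:8.4f} eV, states = {degeneracy}")

hydrogen_levels()
```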
102 The radial wavefunctions
Radial wavefunctions Rnl depend on the principal quantum number n and the angular momentum quantum number l (but not on m). The full wavefunctions are products of Rnl with the spherical harmonics, with the normalization chosen appropriately. Note: the probability of finding the electron between radius r and r+dr involves a factor r² from the volume element. Only s states (l=0) are finite at the origin. Radial functions have (n-l-1) zeros.
103 Comparison with Bohr model***
Quantum mechanics Angular momentum (about any axis) shown to be quantized in units of Planck’s constant: Angular momentum (about any axis) assumed to be quantized in units of Planck’s constant: Electron otherwise moves according to classical mechanics and has a single well-defined orbit with radius Electron wavefunction spread over all radii. Can show that the quantum mechanical expectation value of the quantity 1/r satisfies Energy quantized and determined solely by angular momentum: Energy quantized, but is determined solely by principal quantum number, not by angular momentum: 2222 Quantum Physics
104 6.6 The remaining approximations
This is still not an exact treatment of a real H atom, because we have made several approximations. We have neglected the motion of the nucleus. To fix this we would need to replace me by the reduced mass μ (see slide 1). We have used a non-relativistic treatment of the electron and in particular have neglected its spin (see §7). Including these effects gives rise to “fine structure” (from the interaction of the electron’s orbital motion with its spin), and “hyperfine structure” (from the interaction of the electron’s spin with the spin of the nucleus) We have neglected the fact that the electromagnetic field acting between the nucleus and the electron is itself a quantum object. This leads to “quantum electrodynamic” corrections, and in particular to a small “Lamb shift” of the energy levels. 2222 Quantum Physics
105 7.1 Atoms in magnetic fields
Reading: Rae Chapter 6; B&J §6.8, B&M Chapter 8 (all go further than 2B22) 7.1 Atoms in magnetic fields Interaction of a classically orbiting electron (radius r, speed v) with a magnetic field: the orbit behaves like a current loop with magnetic moment μ. In the presence of a magnetic field B, the classical interaction energy involves μ and B, and the corresponding quantum mechanical expression (to a good approximation) involves the angular momentum operator.
106 Splitting of atomic energy levels
Suppose field is in the z direction. The Hamiltonian operator is We chose energy eigenfunctions of the original atom that are eigenfunctions of Lz so these same states are also eigenfunctions of the new H. 2222 Quantum Physics
107 Splitting of atomic energy levels (2)
(2l+1) states with same energy: m=-l,…+l (Hence the name “magnetic quantum number” for m.) Predictions: should always get an odd number of levels. An s state (such as the ground state of hydrogen, n=1, l=0, m=0) should not be split. 2222 Quantum Physics
108 7.2 The Stern-Gerlach experiment***
Produce a beam of atoms with a single electron in an s state (e.g. hydrogen, sodium). Study the deflection of the atoms in an inhomogeneous magnetic field (between the N and S pole pieces); the force on the atoms is proportional to the field gradient. Results show two groups of atoms, deflected in opposite directions, with equal and opposite magnetic moments. This is consistent neither with classical physics (which would predict a continuous distribution of μ) nor with our quantum mechanics so far (which always predicts an odd number of groups, and just one for an s state).
109 7.3 The concept of spin*** Try to understand these results by analogy with what we know about the ordinary ("orbital") angular momentum: they must be due to some additional source of angular momentum that does not require motion of the electron. Known as "spin". Introduce new operators to represent spin, assumed to have the same commutation relations as ordinary angular momentum, with corresponding eigenfunctions and eigenvalues (we will see in Y3 that these equations can be derived directly from the commutation relations).
110 Spin quantum numbers for an electron
From the Stern-Gerlach experiment, we know that electron spin along a given axis has two possible values, so choose s=1/2. But we also know from Stern-Gerlach the magnetic moments associated with the two possibilities: spin angular momentum is twice as "effective" at producing magnetic moment as orbital angular momentum. This fixes the general interaction with a magnetic field.
111 A complete set of quantum numbers
Hence the complete set of quantum numbers for the electron in the H atom is: n,l,m,s,ms. Corresponding to a full wavefunction Note that the spin functions χ do not depend on the electron coordinates r,θ,φ; they represent a purely internal degree of freedom. H atom in magnetic field, with spin included: 2222 Quantum Physics
112 7.4 Combining different angular momenta
So, an electron in an atom has two sources of angular momentum: orbital angular momentum (arising from its motion through the atom) and spin angular momentum (an internal property of its own). To think about the total angular momentum produced by combining the two, use the vector model once again. Vector addition between the orbital angular momentum L (of magnitude L) and the spin S (of magnitude S) produces a resulting angular momentum vector J: quantum mechanics says its magnitude lies somewhere between |L-S| and L+S, in integer steps. For a single electron, the corresponding 'total angular momentum' quantum numbers determine the length of the resultant angular momentum vector and its orientation.
113 Example: the 1s and 2p states of hydrogen
The 1s state: The 2p state: 2222 Quantum Physics
114 Combining angular momenta (2)
The same rules apply to combining other angular momenta, from whatever source. For example for two electrons in an excited state of He atom, one in 1s state and one in 2p state (defines what is called the 1s2p configuration in atomic spectroscopy): First construct combined orbital angular momentum L of both electrons: Then construct combined spin S of both electrons: Hence there are two possible terms (combinations of L and S): …and four levels (possible ways of combining L and S to get different total angular momentum quantum numbers) 2222 Quantum Physics
115 Term notation Spectroscopists use a special notation to describe terms and levels: The first (upper) symbol is a number giving the number of spin states corresponding to the total spin S of the electrons The second (main) symbol is a letter encoding the total orbital angular momentum L of the electrons: S denotes L=0 P denotes L=1 D denotes L=2 (and so on); The final (lower) symbol gives the total angular momentum J obtained from combining the two. Example: terms and levels from previous page would be: 2222 Quantum Physics
116 7.5 Wavepackets and the Uncertainty Principle revisited (belongs in §4 – non-examinable)
Can think of the Uncertainty Principle as arising from the structure of wavepackets. Consider a normalized wavefunction for a particle located somewhere near (but not exactly at) position x0, with its probability density. We can also write this as a Fourier transform over wavenumber k (see 2246), i.e. as an expansion in eigenstates of momentum.
117 Fourier transform of a Gaussian
2222 Quantum Physics
118 Wavepackets and Uncertainty Principle (2)
Mean-squared uncertainty in position: Mean momentum: Mean-squared uncertainty in momentum: In fact, one can show that this form of wavepacket ("Gaussian wavepacket") minimizes the product of Δx and Δp, so:
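This minimum-uncertainty property is easy to confirm numerically; a sketch (the width parameter σ is an arbitrary assumption), using <p²> = ħ²∫|dψ/dx|²dx, valid for a real ψ:

```python
import numpy as np

hbar = 1.0
sigma = 0.7                                  # assumed width parameter
x = np.linspace(-20, 20, 4001)
psi = (1 / (np.pi * sigma**2))**0.25 * np.exp(-x**2 / (2 * sigma**2))  # normalized Gaussian

dx = x[1] - x[0]
prob = np.abs(psi)**2
x_mean = np.trapz(x * prob, x)
dx_rms = np.sqrt(np.trapz((x - x_mean)**2 * prob, x))   # position uncertainty

dpsi = np.gradient(psi, dx)                  # psi is real, so <p> = 0
p2 = np.trapz(hbar**2 * dpsi**2, x)          # <p^2> = hbar^2 * integral of |dpsi/dx|^2
print(dx_rms * np.sqrt(p2) / hbar)           # -> 0.5, the minimum-uncertainty product
```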
119 Wavepackets and Uncertainty Principle (3)
Summary Three ways of thinking of Uncertainty principle: Arising from the physics of the interaction of different types of measurement apparatus with the system (e.g. in the gamma-ray microscope); Arising from the properties of Fourier transforms (narrower wavepackets need a wider range of wavenumbers in their Fourier transforms); Arising from the fact that x and p are not compatible quantities (do not commute), so they cannot simultaneously have precisely defined values. General result (see third year, or Rae §4.5): 2222 Quantum Physics
|
01e1bf5760b2c471 | Argumentation about de Broglie-Bohm pilot wave theory
A nice summary of standard arguments against de Broglie-Bohm theory can be found at R. F. Streater's "Lost Causes in Theoretical Physics" website. Ulrich Mohrhoff also combines the presentation of his position with an interesting rejection of pilot wave theory. I consider these arguments in a different file. Here, I consider the arguments proposed in several articles of Luboš Motl's blog "The Reference Frame": "David Bohm born 90 years ago" and "Bohmists & segregation of primitive and contextual observables", "Anti-quantum zeal", and in the off-topic responses to "Nonsense of the day: click the ball to change its color". Below, we refer to Luboš Motl simply as lumo (his nickname on his blog). Another argument (also with lumo's participation), related to Lorentz invariance, I have considered elsewhere.
If you know other interesting pages critical of de Broglie - Bohm pilot wave theory, Nelsonian stochastics, non-local hidden variable theories in general, as well as ether theories, please tell me about them.
The most important thing: Measurement theory
The most important part of physics is, of course, experiment. Moreover, this is also the point where lumo is simply wrong, so it is worth starting with it:
... it is not true that the de Broglie-Bohm theory gives the same predictions in general. It can be arranged to do so in the case of one spinless particle. But in the real quantum theories we find relevant today, such as quantum field theory, de Broglie-Bohm theory cannot be constructed to match probabilistic QFT exactly, and one can see that its very framework contradicts observable facts.
At another place, we find some hint where his misunderstanding is located:
Your equations about "X" are completely irrelevant for the measurement of the spin. The problem is not when one wants to measure "X". Indeed, the measurement of "X" might occur analogously to its measurement in the spinless case. The problem occurs when one actually wants to measure the spin itself.
The projection of the spin j_z is an observable that can have two values, in the spin 1/2 case, either +1/2 or -1/2. It is a basic and completely well-established feature of QM that one of these values must be measured if we measure it.
How is your 17th century deterministic theory supposed to predict this discrete value? Like with "X", it must already have a classical value for this quantity. Except that in this case, it has to be discrete, so it can't be described by any continuous equation. ...
Preemptively: you might also argue that any actual measurement of the spin reduces to a measurement of "X". But it's not true. I can design gadgets that either absorb or not absorb the electron depending on its j_z. So they measure j_z directly. deBB theories of all kinds will inevitably fail, not being able to predict that with some probability, the electron is absorbed, and with others, they're not. This has nothing to do with "X" or some driving ways. It is about the probability of having the spin itself.
The last paragraph gives the hint: lumo has interpreted the claim that all measurements reduce to position measurements as "all measurements on the electron reduce to position measurements of the electron". If that were true, I would concede that lumo's polemics against pilot wave theorists are justified. This was, by the way, the state of the art before Bohm's measurement theory appeared in 1952. Thus, lumo's arguments illustrate nicely why de Broglie had given up pilot wave theory.
Once the question has been asked how this 17th century deterministic theory manages to predict discrete values, let's tell the story. As a 17th century theory, with genuinely aristocratic origins, it leaves the hard work to servants (quantum operators), reserving for itself the final (and most important) decisions ;-).
First, there is some interaction of the wave function of the electron with the wave function of the measurement device. (There is of course also an equation for the position of the electron qel – the "X" in lumo's text – but it is completely irrelevant, not only at this stage, but in the whole process.) The result of the measurement is, as usual, a wave function of type
|ψ> = α1|up>|q1> + α2|down>|q2>.
This exploitation of standard QT is not enough – now decoherence will be exploited in an equally shameless way. We leave it to decoherence considerations to decide which observables of the measurement device become amplified or macroscopic. Assume the quantum states |q1>, |q2> are decoherence-preferred. In this case, decoherence amplifies the microscopic measurement results |q1>, |q2> into classical, macroscopically different states |c1>, |c2>. After finishing this hard job, it presents the following state:
|Ψ> = α1|up>|c1> + α2|down>|c2>.
Now everything is prepared; it remains to make the really important decision: which of the wave packets is the best one ;-). At this moment a hidden variable enters the scene. But, surprise, it is not the hidden variable of the electron qel (lumo's "X"), but that of the classical measurement device qc.
The job of qc is not a really hard one. After driving around (no, being driven around by quantum guides) in an almost unpredictable way, it simply takes the wave packet prepared for it by the quantum operators at the point of arrival ;-). In other words, we simply have to put the actual value of qc(t) into the full wave function |Ψ> to obtain the (unnormalized) effective wave function:
ψ(qel) = Ψ(qel, qc(t))
What we need for this scheme to work as an ideal quantum measurement is not much. We need that the two states of the macroscopic device |c1>, |c2> do not (significantly) overlap as functions of the hidden variable qc. In this case, whatever the value of qc, the result ψ(qel) will be a unique choice between two effective wave functions: |up> if qc is in the support of |c1>, and |down> otherwise. And we need the quantum equilibrium assumption for qc to obtain the probabilities |α1|² and |α2|², respectively, for these two choices.
Thus, everything works as in quantum theory – the Born rule as well as state preparation by measurement (only without any ill-defined wave function collapse or subdivision of the world into a classical and a quantum part, or the equally ill-defined "subdivision of the world into systems" used in many worlds or other decoherence-based approaches).
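To make the scheme concrete, here is a minimal numerical sketch. It assumes (my illustration, not part of the original argument) that the pointer states |c1>, |c2> are non-overlapping Gaussian packets in the device variable qc, and that qc is distributed in quantum equilibrium; the branch frequencies then reproduce the Born rule:

```python
import numpy as np

# Minimal sketch (illustration only) of the measurement scheme above.
# Assumptions not in the text: the macroscopic pointer states |c1>, |c2>
# are non-overlapping Gaussian packets in the device variable qc,
# centered at -5 and +5 with unit width.
rng = np.random.default_rng(0)
alpha1, alpha2 = 0.6, 0.8                 # amplitudes, |a1|^2 + |a2|^2 = 1

# Quantum equilibrium: qc is distributed with density |Psi(qc)|^2.
# Since the packets do not overlap, the cross terms vanish and |Psi|^2
# is simply the mixture a1^2*|c1|^2 + a2^2*|c2|^2.
n = 100_000
in_c1 = rng.random(n) < alpha1**2
qc = np.where(in_c1, rng.normal(-5.0, 1.0, n), rng.normal(+5.0, 1.0, n))

# The effective wave function is obtained by inserting qc into Psi:
# it is |up> whenever qc lies in the support of c1 (here: qc < 0).
p_up = np.mean(qc < 0.0)
print(f"frequency of effective |up>: {p_up:.3f} (Born rule: {alpha1**2:.3f})")
```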
But maybe one of the two assumptions we have used is wrong? Given Valentini's subquantum H-theorem, together with the numerical results of Valentini and Westman, which show a remarkable relaxation to equilibrium already in the two-dimensional case in a quite short period of time (arXiv:quant-ph/0403034), there is not much hope for observations of non-equilibrium in our universe.
One can, of course, also doubt that macroscopically different states do not have a significant overlap in the hidden variables. Such doubts have been expressed, for example, by Wallace and Struyve for pilot wave field theories. See my paper "Overlaps in pilot wave field theories", arXiv:0904.0764, for the solution of this problem.
About the zeros of the wave function
There is a second point where experiment is involved, with an easy solution:
How do we know that "m=l_z/hbar" must be an integer? Well, it is because the wave function "psi(x,y,z)" of the "m"-eigenstates depends on "phi", the longitude (one of the spherical or axial coordinates), via the factor "exp(i.m.phi)" which must be single-valued. Only in terms of the whole "psi", we have an argument.
However, when you rewrite the complex function "psi(r,theta,phi)" in the polar form, as "R.exp(iS)", the condition for the single-valuedness of "psi" becomes another condition for the single-valuedness of "S" up to integer multiples of 2.pi. If you write the exponential as "exp(iS/hbar)", the "action" called "S" here must be well-defined everywhere up to jumps that are multiples of "h = 2.pi.hbar".
That's a nice argument, and, because of this argument, today the original form of de Broglie's "pilot wave theory" is preferred over the "Bohmian mechanics" version proposed in 1952 by Bohm. In pilot wave theory, the pilot wave is really a wave, and one can apply the original argument to show that these observables are quantized. In Bohm's second order version, this is different, and the quantization of certain observables becomes, indeed, problematic. This has been another reason for me (beyond history, see arXiv:quant-ph/0609184) to prefer the name "pilot wave theory" over "Bohmian mechanics".
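Spelled out, the single-valuedness argument reads (a standard reconstruction, in the notation of the quote):

```latex
% Single-valuedness of \psi = R\,e^{iS/\hbar} along any closed loop C:
\oint_C \nabla S \cdot d\mathbf{l} = n\,h, \qquad n \in \mathbb{Z}.
% For an m-eigenstate, \psi \propto e^{im\varphi}, hence S = m\hbar\varphi,
% and the loop integral over \varphi \in [0, 2\pi] gives
2\pi m \hbar = n\,h = 2\pi n \hbar \quad\Longrightarrow\quad m = n \in \mathbb{Z}.
```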
More generally, something very singular seems to be happening near the "R=0" strings in the Bohmian model of space.
The "model of space" in pilot wave theory is a trivial one, nothing strange happens there if R = 0. The singularity of the velocity at these points is harmless – a simple rotor localized in a string, moreover, there is nothing in the place where velocity becomes undefined.
So even though the Bohmian mechanics stole the Schrödinger equation from quantum mechanics, the superficially innocent step of rewriting it in the polar form was enough to destroy a key consequence of quantum mechanics - the discreteness of many physical observables.
If there were property rights for equations or functions, one could argue as well that Schrödinger stole the wave function from de Broglie's pilot wave theory. Fortunately, such nonsense does not exist in science. But there is a point worth mentioning: without pilot wave theory, there would be no Schrödinger picture, and we would have to use the Heisenberg formalism all the time. And if some Bohm had found the Schrödinger equation later, it would likewise have been called an unnecessary superconstruction and banned from physics, for almost the same reasons.
About relativistic symmetry and the preferred frame
Last but not least, there are some claims that pilot wave theories will be unable to recover QFT predictions in the relativistic domain. Unfortunately for this argumentation, the equivalence theorem remains a theorem even in the relativistic domain – nothing used in it has any connection to the particular choice of spacetime symmetry. Thus, if the quantum theory has relativistic symmetry for its observable predictions, the same holds for the observable predictions of pilot wave theory.
More concretely, it is inconsistent with modern physics in many ways, as we will see.
Special relativity combined with the entanglement experiments is the most obvious example. Bell's theorem proves that if a similar deterministic theory reproduces the high correlations observed in Nature (and predicted by conventional quantum mechanics), namely the correlations that violate the so-called Bell's inequalities, the objects in the theory must actually send physical superluminal signals.
But superluminal signals would look like signals sent backward in time in other inertial frames. It follows that at most one reference frame is able to give us a causal description of reality where causes precede their effects. At the fundamental level, basic rules of special relativity are inevitably violated with such a preferred inertial frame.
I was already afraid that lumo does not even understand that in a preferred frame everything is fine with causality. The introduction, at least, was the highly dramatic one typical of such crank cases.
I like the formulation "at most". It sounds as if we would really like to have more reference frames and are now very disturbed that at most one preferred frame is available ;-).
You might think that the experiments that have been made to check relativity simply rule out a fundamentally privileged reference frame. Well, the Bohmists still try to wave their hands and argue that they can avoid the contradictions with the verified consequences of relativity.
Who is hand waving here? Lumo might, of course, think that experiments rule out a hidden preferred frame. But it is his job, in this case, to point out which observations rule out such a preferred frame. As long as he fails to do so, I do not even have any contradictions with verified consequences of relativity to wave my hands about.
I wonder whether they actually believe that there always exists a preferred reference frame, at least in principle, because such a belief sounds crazy to me (what is the hypothetical preferred slicing near a black hole, for example?).
I'm happy to answer this question: the preferred coordinates are harmonic. Given, additionally, the global CMBR frame, with the time after the big bang as the time coordinate, this prescription is already unique. For a corresponding theory of gravity – mathematically simply GR on a flat background in harmonic gauge, physically with a preferred frame and an ether interpretation – see my generalization of the Lorentz ether to gravity.
But it is possible to see that one can't get relativistic predictions of a Bohmian framework for all statistically measurable quantities at the same moment, not even in principle. If a theory violates the invariance under boosts "in principle", it is always possible to "amplify" the violation and see it macroscopically, in a statistically significant ensemble. If such a violation existed, we would have already seen it: almost certainly.
I would be interested to learn more about this mystical way to amplify high energy violations of Lorentz symmetry into the low energy domain, without access to the necessary high energies. So far, it is lumo who is waving his hands.
I know that there are some nice observations, which use the extremely large distances light has to travel in some astronomical observations, to obtain bounds on a frequency dependence of the velocity of light. Some of the bounds obtained in this and other ways even suggest that such Lorentz-violating effects are absent down to distances below the Planck length. But the Planck length is merely the distance where quantum gravity becomes important. The fundamental distance where our continuous field theories start to fail may be different.
In proper quantum mechanics, locality holds. If one considers a Hamiltonian that respects the Lorentz symmetry - such as a Hamiltonian of a relativistic quantum field theory - the Lorentz symmetry is simply exact and it guarantees that signals never propagate faster than light.
In proper quantum mechanics, one can define the operators that generate the Poincaré group and rigorously derive their expected commutators. Also, it is exactly true that operators in space-like-separated regions exactly commute with each other. This fact is sufficient to show that the outcome of a measurement in spacetime point B is never correlated with a decision made at a space-like-separated spacetime point A.
These facts allow us to say that quantum field theory respects relativity and locality. The actual measurements can never reveal a correlation that would contradict these principles. And it is the actual measurements that decide whether a statement in physics is true or not. Bohmian mechanics is different because these principles are directly violated. You may try to construct your mechanistic model in such a way that it will approximately look like a local relativistic theory but it won't be one. Consequently, you won't be able to use these principles to constrain the possible form of your theory. Moreover, tension with tests of Lorentz invariance may arise at some moment.
First, there is no reason not to use, for one part of the theory, symmetry principles which do not hold for another part of it. For example, the symplectic structure in the classical Hamilton formalism has a different symmetry group – the group of all canonical transformations – than the whole theory including the Hamiltonian.
Then, to postulate a fundamental Poincare symmetry is, of course, a technically easy way to obtain a theory with Poincare symmetry. But what is the purpose of a postulated global Poincare symmetry in a situation where the observable symmetry is different and depends on the physics, as in general relativity? Whatever the representation of gμν(x) on the Minkowski background, it will (except for simple conformally trivial cases) have a different light cone almost everywhere. If the Minkowski background light cone is the smaller one, the background Poincare symmetry has to be violated somewhere. It may always be the other way around. But in this case, the axioms of the theory give restrictions only for the background Minkowski light cone, not for the physical light cone. Thus, tensions with the physical Lorentz invariance may arise in the same way, because the theory only looks like one which, at the particular point x, has Lorentz invariance for the metric gμν(x). In reality it is a theory with Lorentz invariance for a different metric ημν, with a larger light cone, and thus allows superluminal information transfer relative to gμν(x).
String theory, as far as I understand, obtains gravity as a spin two field on a Minkowski background. This requires, as far as I understand, that this problem is solved in string theory. Fine. That means it is a solvable one.
The contradiction between relativity and semi-viable Bohmian models (that violate Bell's inequalities, and they have to in order not to be ruled out by experiments) is a very profound problem of these models. It can't really be fixed.
Again, a nice formulation. It sounds as if the poor Bohmians had tried hard not to violate Bell's inequalities and finally given up. "Semi-viable" is also a nice word. But the "very profound problem" remains hidden. (A nice place for problems in a hidden variable theory ;-))
Instead, I prefer to follow the weak suggestions one can obtain from mathematical equivalence proofs. When I construct a pilot wave theory based on a relativistic QFT, it seems really hard to escape the consequences of the equivalence theorem and violate observable Lorentz invariance – at least, I don't know how to manage this. We simply obtain a pilot wave theory which does not violate observable relativistic symmetries, because there is an equivalence proof for the observables.
Today, we have some more concrete reasons to know that the hidden-variable theories are misguided. Via Bell's theorem, hidden-variable theories would have to be dramatically non-local and the apparent occurrence of nearly exact locality and Lorentz invariance in the world we observe would have to be explained as an infinite collection of shocking coincidences.
I'm impressed by the verbal power of "dramatically nonlocal", and even more by the "infinite collection of shocking coincidences". Sounds really impressive. But I would not call a nonlocality which, because of an equivalence theorem, cannot be used even for information transfer, and can be observed only indirectly, via violations of Bell's inequality, a dramatic one. Instead, it seems to me the most undramatic one possible. Similarly, I would distinguish the simple and straightforward consequences of an equivalence theorem from an "infinite collection of shocking coincidences". Indeed, I would be more surprised if a quantum equilibrium, large distance, low energy limit did not change anything in the symmetry group of a theory.
Last but not least, the Lorentz group is simply the invariance group of a quite prosaic wave equation, an equation we find almost everywhere in nature. And such a wave equation (or its linearization) usually also defines an effective (and in general curved) Lorentz metric, so that the wave equation becomes the harmonic equation of this Lorentz metric. As a consequence, for everything which follows such a wave equation we obtain local Lorentz symmetry. (See arXiv:0711.4416, arXiv:gr-qc/0505065 for overviews.) To assume that a symmetry which so often, and for very different materials, appears as an effective symmetry in condensed matter theory is fundamental is a hypothesis which seems quite unnatural to me.
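For completeness, the effective metric construction alluded to here is the standard one (conventions vary; see the cited overviews):

```latex
% A generic linear wave equation with variable coefficients,
g^{\mu\nu}(x)\,\partial_\mu \partial_\nu \phi = 0,
% singles out an effective, in general curved, Lorentz metric g_{\mu\nu}(x):
% the equation is the harmonic (wave) equation of that metric, and its
% symmetry group at each point x is the local Lorentz group of g_{\mu\nu}(x).
% For constant coefficients this is the prosaic wave equation
\frac{1}{c^2}\,\partial_t^2 \phi - \nabla^2 \phi = 0,
% with the global Lorentz group as its invariance group.
```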
... and the ether ...
The similarity with the luminiferous aether seems manifest. ...
I just don't think that this is a rationally sustainable belief. It's just another repetition of the old story of the luminiferous aether.
About the similarity with the aether I fully agree with lumo ;-)))). But what is irrational in the belief that there is an ether? I would like to hear some details. It would be really interesting to hear which of the beliefs expressed in my ether model for particle physics are not rationally sustainable.
Now, it seems, we have finished with the claims of empirical inadequacy. It's time to consider the metaphysical arguments.
About signs of the heavens
It is not surprising in any way that the new, Bohmian equation for "X(t)" can be written down: it is clearly always possible to rewrite the Schrödinger equation as one real equation for the squared absolute value (probability density) and one for the phase (resembling the classical Hamilton-Jacobi equation). And it is always possible to interpret the first equation as a Liouville equation and derive the equation for "X(t)" that it would follow from. There's no "sign of the heavens" here.
I think there are "signs of the heavens" here. First, the guiding equation for the velocity is a nice, simple, and local (in configuration space) equation. The derivation mentioned by lumo could just as well have led to a dirty nonlocal one.
Then, the equation for the phase resembles the classical Hamilton-Jacobi equation, and for constant density it becomes simply identical with it. Moreover, the same guiding equation is also part of classical Hamilton-Jacobi theory – a theory which was in no way related to the conservation law of the first derivation.
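For reference, the decomposition in formulas (standard material, as found in any pilot wave introduction; conventions may differ):

```latex
% Insert \psi = R\,e^{iS/\hbar} into
%   i\hbar\,\partial_t \psi = -\tfrac{\hbar^2}{2m}\,\nabla^2\psi + V\psi.
% The imaginary part is a continuity (Liouville) equation for \rho = R^2:
\partial_t \rho + \nabla \cdot \Bigl(\rho\,\frac{\nabla S}{m}\Bigr) = 0,
% and the real part a Hamilton-Jacobi equation with a quantum correction:
\partial_t S + \frac{(\nabla S)^2}{2m} + V
  - \frac{\hbar^2}{2m}\,\frac{\nabla^2 R}{R} = 0.
% For constant R the correction vanishes and the classical
% Hamilton-Jacobi equation remains. The guiding equation is
\dot{q} = \frac{\nabla S}{m}.
```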
Now, Hamilton-Jacobi theory is really beautiful mathematics; it has all the properties of "signs of the heavens", even taken alone. See arXiv:quant-ph/0210140 for an introduction. That one and the same simple law for the velocity gives, on one hand, Hamilton-Jacobi theory in the classical limit and, on the other hand, a Liouville equation is, at least for me, a sufficiently strong hint from the mathematical heaven. In many worlds I have not seen any comparable signs of beauty.
And there is, of course, the really beautiful derivation of the whole quantum measurement formalism.
How to distinguish useful improvements from unnecessary superconstructions
The mechanistic models add a new layer of quantities, concepts, and assumptions.
Indeed, every new, more fundamental theory adds a new layer of quantities, concepts, and assumptions. So what?
[Einstein] called the picture an unnecessary superconstruction.
Appeal to authority does not count. And there is no reason to expect that the father of relativity would like a theory which violates his child. But how does one distinguish unnecessary superconstructions from interesting more fundamental theories? Both add something to the old theory. But useful more fundamental theories also allow one to explain parts of the old theory: some of its postulates can now be derived. So one has to compare what has to be added with what can now be derived.
This relation is quite nice for pilot wave theory: the new layer is, essentially, the configuration together with a single additional equation – the guiding equation for the configuration. What can be derived from this equation is, in turn, the whole measurement theory of quantum mechanics, including the Born rule and state preparation by measurement. Compared with the Copenhagen interpretation, the additional layer also replaces the "classical part" of that interpretation and removes the collapse from the theory.
These last two points have been a major motivation of other reinterpretations as well. In particular, for many worlds it seems to be the only aim. The interpretation I prefer to call "inconsistent histories" is focussed on this aim too. Thus, two things first obtained in pilot wave theory are widely recognized today as important contributions to the foundations of quantum theory. One can object that pilot wave theory does not get rid of the classical part, but even extends it into the quantum domain. This depends on what one considers problematic about the classical part: if the problem is the imprecision of this notion, the absence of well-defined rules for this part, then it is clearly solved in pilot wave theory. In any case, pilot wave theory was the first interpretation with completely unitary dynamics for the wave function, without a collapse.
One can perhaps create classical mechanistic models that mimic the internal workings of quantum mechanics in many situations. For example, one can write a computer simulation. But you can't say that the details of such a program or Bohmian picture is justified as soon as you confirm the predictions of conventional quantum mechanics.
There is no necessity to justify every detail. The important point of the pilot wave interpretation is that, to explain the observable facts, there is no necessity to reject classical logic or realism, or to introduce many worlds, inconsistent histories, correlations without correlata, or other quantum strangeness and mysticism. We have at least one simple, realistic, even deterministic explanation of all observable facts. That is enough to reject quantum mystery. Why should we justify every detail of some particular realistic model? There may be several realistic models compatible with observation. I would expect this anyway, given large distance universality.
The mechanistic models add a new layer of quantities, concepts, and assumptions. They are not unique and they are not inevitable. The similarity with the luminiferous aether seems manifest. If they only reproduce the statistical predictions of quantum mechanics, you could never know which mechanistic model is the right one: it could be a computer simulation written by Oracle for Windows Vista, after all.
But what's the problem with this? Is Nature obliged to work with theories which can be uniquely reconstructed by her internal creatures? You could never know? Big problem. Anyway, our theories are only guesses about Nature, and we can never know if they are really true. If you doubt this, I recommend reading Popper. (I ignore here, for simplicity, the modern ways to recognize the truth of theories, like counting the number of papers written about them, or getting inspirations about the language in which God wrote the world.)
Moreover, science has developed a lot of criteria which allow one to compare theories that do not make different predictions: internal consistency, simplicity, explanatory power, symmetry, mathematical beauty. Lumo uses such arguments himself; thus, he is aware of their power. They are usually sufficient to rule out most of the competing models. And if a few different theories remain, all in agreement with observation, this is not problematic at all – it is even useful: it allows one to see the difference between the empirically established parts of these theories – those parts will be shared by all viable theories – and the remaining, metaphysical parts, which may be very different in the different theories. Thus, they serve as a useful tool to show the boundaries of what science can tell at a given moment.
For example, today the existence of pilot wave theory shows that almost all of the quantum strangeness – in particular the rejection of realism, "quantum logic", and the esoterics of many worlds – is in no way forced on us by any empirical evidence, but consists of purely metaphysical choices of some particular interpretations.
What are the fundamental beables?
I could make things even harder for the Bohmian framework by looking into quantum field theory. What are the real, "primitive" properties in that case?
In the simplest case of a scalar field, the natural candidate for the "primitive property" or the "beable" is simply the field phi(x). This is a very old idea, proposed already by Bohm. But the effective fields of the standard model are bad candidates for really fundamental beables. They are, after all, only effective fields, not fundamental fields. In my opinion, one needs a more fundamental theory to find the true beables.
My proposal for such more fundamental beables can be found in my papers (see here). Even if pilot wave theory is not mentioned in the paper about my model, it is quite obvious that the canonical quantization proposal for fermion fields I have made there allows one to apply the standard formalism of pilot wave theory to obtain a pilot wave version of this theory.
Problems with spin and with particle ontology in quantum field theories
A large part of lumo's arguments is directed against two particular versions of pilot wave theory – which, strangely enough, I don't like either. The first one is the idea of describing particles with spin using wave functions with spin, but leaving the configuration without spin. In this case, the wave function is no longer a complex function on configuration space, but a function with values in some higher-dimensional Hilbert space. As a consequence, the very nice pilot wave way to obtain the classical limit via Hamilton-Jacobi theory no longer works, and one would have to use the dirty old way based on wave packets to obtain some classical limit.
There are other examples of such pilot wave theories. First, this trick was used by Bell, who proposed a pilot-wave-like field theory with beables for fermions, but not for bosons. One can argue that this is already sufficient and leave the bosons without beables. The reverse situation appears in a theory by Struyve and Westman for the electromagnetic field. Again, it has been argued that this is sufficient. And for the purpose of obtaining a realistic theory able to recover QFT predictions, it is. But I think such pilot wave theories are sufficient only for one purpose: to be used as a quick and dirty existence proof for realistic theories in situations where some parts of the theory cause problems. For this purpose, they are indeed sufficient, provided the part of the theory represented in the beables is large enough to distinguish all macroscopic states – a quite weak requirement. If one doubts that a theory without fermions, or without bosons, is sufficient for this, one should think about renormalization: if we use these incomplete theories to describe one type of the bare fields (at some energy), then all types of the dressed fields already depend on this single type.
The second type of theory I don't want to defend are theories with particle ontology in the domain of field theory. One reason is that semiclassical gravity shows nicely that fields are more fundamental, and the pilot wave beables have to be, of course, fundamental. Moreover, handling variable particle numbers is a dirty job. There should be something more beautiful. Particles which aspire to the status of beables should at least be conserved.
Therefore, I can leave unanswered the parts of the argumentation where lumo attacks particle theories. Let's note only that a short look at the particle-based approach to field theory in arXiv:quant-ph/0303156 suggests that lumo's arguments don't hit this target either. This version introduces stochastic jumps into the theory (showing, by the way, that pilot wave theorists are not preoccupied with determinism). But I leave the comparison to the reader.
About the "segregation" among observables
Because experiments eventually measure some well-defined quantities, the likes of Bohm think that there must exist preferred observables - and operators - that also exist classically. They are classical to start with, they think. Positions of objects are an important example.
But the quantum mechanical founding fathers have known from the very beginning that this was a misconception. All Hermitean operators acting on a Hilbert space may be identified with some real classical observables and none of them is preferred.
I think it is a misconception to interpret pilot wave theory as preferring some observables. It is not an accident that Bell even proposed another word, beables, for the configuration space variables in pilot wave theory. In particular, measurements of the beables play no special role at all, neither in the classical limit nor anywhere else in pilot wave theory. To derive the measurement theory, we don't need them (this would be circular anyway). What we need are the actual values of the beables, not some results of observations. Indeed, let's assume for simplicity that we consist of atoms, which are the beables of some simplified pilot wave theory. Then a theory about our observations does not need anything about our observations of atoms – if we "observe" them at all, it is only in a quite indirect way, and most people do not observe atoms at all. Therefore, observations of atoms cannot play any role in an explanation of our everyday observations. Of course, in these explanations atoms have to play a role, at least indirectly – as constituent parts of our brain cells. But these atoms inside our brain cells are nothing we observe when we observe something in everyday life. Thus, we use only the atoms themselves, not observations of atoms, in such explanations of our observations.
Thus, as observables the beables play no special role – in particular, the theory of their measurement can be derived in the same way, without danger of circularity. Their measurements, too, have to be described by self-adjoint operators or POVMs, like those of every other observable. In this sense, there are no preferred observables in pilot wave theory.
And this construction is actually very unnatural because it picks "X" as a preferred observable in whose basis the wave vector should be (artificially) separated into the probability densities and phases
Configurations (I prefer "q" instead of "X", because "X" is associated with ordinary space, while "q" is associated with configuration space) do indeed play a special role. But this is the same special role they play in the Lagrange formalism as well as in Hamilton-Jacobi theory. Both are very beautiful, useful approaches. I don't remember having heard any objection that the Lagrange formalism is unnatural because it picks "q" as a preferred observable. On the contrary, the Lagrange formalism is an extremely important tool in modern physics, in quantum field theory as well as in general relativity. Moreover, this "segregation" is a very natural one: if nothing changes, the configuration remains the same, while the velocities have to be zero. I have instead always found the symmetry between such different things as position and momentum in the Hamilton equations (and, similarly, in the canonical approach to quantum theory) strange and unnatural (even if, because of its symmetry, beautiful).
So why does lumo not fight against segregation in the Lagrange formalism? The segregation is the same; the poor momentum variables are degraded to the role of "derivatives". (Or maybe he does? I have not checked. Anyway, the important role of the Lagrange formalism in modern science, which is based on exactly the same "segregation", is a fact which shows that there is nothing wrong with this particular segregation.)
In order to celebrate the Martin Luther King Jr Day, I will dedicate the rest of the text to a fight against the segregation of observables. :-) So my statement is very modest – that observables can't be segregated into the "real" primitive ones and the "fictitious" contextual ones – a fact that trivially rules out all theories (such as the Bohmian ones) that are forced to do so.
... I guess that you must agree that the "philosophical democracy" between all observables is pleasing and natural.
I see no reason at all to find such a "democracy" pleasing. You can observe an honest guy telling us the truth. You can equally observe a liar telling us lies. Both are observable. There may be even more symmetry between them. They may even make the same claims: "I have seen that he has stolen the money". That means that without segregation among observables, without destroying observable symmetry, we would have to give them equal status. I don't plan to follow this idea, and will always prefer a segregation between truth and lies, even if this destroys some observable symmetries.
The segregation between contextual and non-contextual observables is less important, but it is part of our everyday life as well. You can ask somebody about things he has not decided yet. He will think about them, possibly argue with you, and maybe give you an answer. This answer does not exist before you have started to argue with him; it is, therefore, contextual. Arguing with somebody else, he could have made a different decision. (Last but not least, this is one purpose of communication – to modify our decisions if we hear good arguments for doing so.) But in a different situation, he may have already decided about this question, and the answer was already part of the reality of his thoughts when you asked him. In this case, the answer is not contextual. Both answers we observe as results of complex verbal interactions, and they are, in this sense, on an equal footing. Nonetheless, a realistic theory about his thoughts has to segregate between them. Without segregation, he would have to be either almighty, able to think and decide about all imaginable questions before you ask him, or completely dependent, deciding about nothing before you ask him.
In all these cases, the same "formalism" is used to obtain the results – communication in human language. Thus, that the same formalism – that of self-adjoint operators, or, more generally, of POVMs – is used to describe the results of interactions in quantum theory is in no way an argument against this particular segregation.
Clearly, some quantities in the real world look more classical than others. But what are the rules of the game that separates them? The Bohmists assume that everything that "smells" like "X" or "P" is classical while other things are not. ...
Clearly, they want some quantities that often behave classically in classical limits.
Clearly not. The "segregation" in pilot wave theory is between configuration and momentum variables, and it is in no way related to one of them being "more classical". In classical situations, both behave classically, and the same segregation exists in classical theory too, in the Lagrange formalism as well as in Hamilton-Jacobi theory. There is no place in pilot wave theory where one has to take care that something in the behaviour of the configuration is "classical": in the classical limit, it follows automatically, from the classical Hamilton-Jacobi equation, that everything behaves classically. For other questions this is simply irrelevant.
It is the many worlds community which is focussed on the classical limit. That's reasonable – they have a very hard job constructing something which at least sounds plausible (at least if one uses words like "contains" for a linear relation between some points in a Hilbert space, talks about "evolution" of branches without defining any evolution law, and applies decoherence techniques without explaining how to obtain the decomposition into systems one needs to apply them).
In order to simplify their imagination, the Bohmists imagined the existence of additional classical objects – the classical positions.
Simplification has, it seems, been removed from the aims of science. Ockham's razor is out, simple theories have to be rejected. The higher the dimension, the better.
But the objects are in no way additional. They have been part of the Copenhagen interpretation: its classical part contains, in particular, all the measurement results. And Schrödinger's cat proves that a unitary wave function alone is not sufficient, that we need something else – either some non-unitary collapse, or some particular configuration as in pilot wave theory. Something – be it the collapsed wave function or some different entity – has to describe the reality we see: either the dead or the living cat. Many worlds claims something different, but introduces, for this purpose, the "branches" – some sort of collapsed wave functions without collapse, or configurations without a guiding equation – which are claimed to be "contained" in the wave function. (How a decomposition of some vector into a linear combination of others defines a containment relation remains unclear. A concept where a function like ψ(q) = 42 "contains" all possible universes has its appropriate place in the Hitchhiker's Guide to the Galaxy, not in scientific journals.) The approach named "consistent histories" leaves us with many inconsistent histories, subdivided into families.
Theories with physical collapse need dirty and artificial non-unitary modifications of the Schrödinger equation. The branches of many worlds are, it seems, left today without any equations at all. (A very scientific approach, indeed. Time to rename it "many words".) Only pilot wave theory gives us a nice, simple, and beautiful equation for this "additional" entity. Moreover, it allows one, for free, to derive the whole measurement formalism of quantum theory.
Imagination is completely irrelevant for these questions. I see, of course, no reason to object if a theory allows us to simplify our imagination too. On the contrary, I would count it as one additional advantage of a theory. But I recognize that this attitude is not shared by other scientists. And there are, indeed, good reasons to prefer theories which are complex and mystical. Imagine you are in the company of nice girls (or boys, whatever you prefer), and they ask you what you are doing. Isn't it much more impressive if you can tell them about curved spacetimes, large dimensions, a strange new quantum realism, or even quantum logic, many worlds and other strange quantum things? Compare this with the poor 17th century scientist, the fighter against any form of mystery, the classical loser in every popular mystery film. The choice is quite obvious.
About history
Louis de Broglie wrote these equations for the position of one particle, David Bohm generalized them to N particles.
Not correct: the configuration space version of pilot wave theory was presented by de Broglie already at the Solvay conference. See de Broglie, L., in "Électrons et Photons: Rapports et Discussions du Cinquième Conseil de Physique", ed. J. Bordet, Gauthier-Villars, Paris, 105 (1928); English translation: G. Bacciagaluppi and A. Valentini, "Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference", Cambridge University Press, and arXiv:quant-ph/0609184 (2006).
I think that in analogous cases, we wouldn't be using the name of the "updater" for the final discovery.
After having read something about the history of this theory (I do not care that much about history), I use "pilot wave theory" instead of "Bohmian mechanics". But Bohm has a point too: de Broglie had abandoned his theory as not viable, being unable to develop the general measurement theory. This was done by Bohm. Therefore, when I use names, I now use the combination "de Broglie-Bohm".
Of course that I have always known that Bell constructed his inequalities because he wanted to prove exactly the opposite than what he proved at the end. He was unhappy until the end of his life. Bad luck. Nature doesn't care if some people can't abandon their prejudices.
This sounds as if lumo thinks that Bell tried to prove, with his inequalities, that quantum mechanics is wrong. This is not very plausible. It is quite clear that he liked Bohmian mechanics, that he saw its nonlocality as an argument against it, and that he tried to remove this argument by showing that this nonlocality is a necessary property of all hidden variable theories. About his bets before the experiments were performed, there is the following quote: "In view of the general success of quantum mechanics, it is very hard for me to doubt the outcome of such experiments. However, I would prefer these experiments, in which the crucial concepts are very directly tested, to have been done and the results on record. Moreover, there is always the slim chance of an unexpected result, which would shake the world." (Freire, arXiv:quant-ph/0508180, p. 20)
[arguing against "I've read that the Broglie-Bohm theory makes the same predictions that the normal quantum randomness theory makes but the latter was chosen because it was conceived first.":]
Concerning the first point, people can have various theories in the first run. But once they have all possible alternative theories, they can compare them.
Second, it is not true that the probabilistic interpretation was conceived "first". Quite on the contrary. Technically, it's true that de Broglie wrote his pilot wave theory in 1927, one year after Max Born proposed the probabilistic interpretation, but the very idea that the wave connected with the particle was "real" was studied for many years that preceded it. Both de Broglie (1924) and Schrödinger (1925) explicitly believed that the wave was real which is incorrect.
Given that de Broglie gave up pilot wave theory shortly after 1927, unable to find a viable measurement theory for observables other than position, one can say that pilot wave theory appeared in a viable form only in 1952, with Bohm's measurement theory. At that time, the Copenhagen interpretation was already well-established (even if the label "Copenhagen interpretation" was coined only later). So there was an advantage of historical accident for the standard interpretation.
In 1952, Bohm wrote down a very straightforward multi-particle generalization of de Broglie's equations and added a very controversial version of "measurement theory". Is it a substantial improvement you expect from 25 years of progress?
That depends on how many people worked on it during this time. In this case, for most of these 25 years nobody worked on it. In particular, de Broglie himself had abandoned it, because he was unable to find the "very controversial" measurement theory found later by Bohm. Bohm, who in 1927 was only 10 years old, did not work in this domain for most of this time either. Thus, very few man-years were sufficient to transform a theory abandoned by its creator as not viable into a viable theory. I would call this a sufficiently efficient and substantial improvement.
The next important defender of this theory – again almost alone for a long time – was Bell. The results of his work in the foundations of quantum theory are also well-known. Despite their foundational character, they have triggered a large experimental activity. Thus, again a quite efficient relation between man-years and results.
(Given that lumo has not understood the main point of Bohm's measurement theory, we can ignore the characterization of this theory as "very controversial").
About decoherence and the classical limit
Moreover, the question which of them will emerge as natural quantities in a classical limit cannot be answered a priori. Which observables like to behave classically? Well, it is those whose eigenstates decohere from each other.
The role of decoherence in the classical limit is largely exaggerated; see the Hyperion discussion about this (Ballentine, "Classicality without Decoherence: A Reply to Schlosshauer", Found. Phys. 38: 916-922 (2008), DOI 10.1007/s10701-008-9242-0; Schlosshauer, "Classicality, the ensemble interpretation, and decoherence: Resolving the Hyperion dispute", Found. Phys. 38: 796-803 (2008), DOI 10.1007/s10701-008-9237-x, arXiv:quant-ph/0605249; Wiebe and Ballentine, Phys. Rev. A 72: 022109 (2005), also arXiv:quant-ph/0503170).
Essentially, you can measure every operator jointly with every other one, as long as the accuracy of the joint measurement stays within the bounds of the uncertainty relations. And in the classical h → 0 limit they all like to behave classically.
Everything in this real world is quantum while the classical intuition can only be an approximation, and it is a good approximation only if decoherence is fast enough i.e. if the interference between the different eigenstates is eliminated. If it is so, the quantum probabilities may be imagined to be ordinary classical probabilities and Bell's inequalities are restored. So if you want to know whether a particular quantity may be imagined to be classical, you need to know how quickly its eigenvectors decohere from each other. And the answer depends on the dynamics. Decoherence is fast if the different eigenvectors are quickly able to leave their distinct fingerprints in the environment with which they must interact.
A nice description of the decoherence paradigm. The dirty little secret of decoherence is that it depends on some decomposition of the world into systems. Such a decomposition can be found without problems if we have some classical context, as in the Copenhagen interpretation, or some well-defined configuration of the universe, as in pilot wave theory, by considering an environment of the actual state of the universe. But without such a background structure you have nothing to start these decoherence considerations with. The different systems we see around us – cats, for example – cannot be used for this purpose, at least not if we want to avoid circular reasoning. The Hamilton operator, taken alone, is not enough to derive a decoherence-preferred basis uniquely.
Mechanistic models of state-of-the-art quantum theories are not available: it is partly because it's not really possible and it's not natural but it is also partly because the champions of Bohmian mechanics are simply not good enough physicists to be able to study state-of-the-art quantum theories. They're typically people with philosophical preconceptions who simply believe that the world has to respect their rules of "realism" or even "determinism".
I have a quite nice "mechanistic model" for the standard model of particle physics – one which essentially allows one to compute the SM gauge group (as a maximal group fulfilling a few simple "mechanistic" axioms). How many more years (and how many more man-years) does string theory need to reach something comparable?
The idea of "philosophical preconceptions" is quite funny. My concept is quite pragmatical: If there is a simple way to do the things, use it. Simplicity is a good thing, independent of the age or the popularity of the particular concept. About determinism I don't care even today, in particular I have certain sympathies for Nelson's stochastics. And I have as well looked at non-realistic interpretations of quantum theory, like the concept I prefer to name "inconsistent histories". But I think there should be really good evidence to justify the rejection of such simple, general, fundamental and beautiful principles like realism. But pilot wave theory would be preferable even without it, simply for the beauty of the guiding equation.
Last but not least, some funny but unimportant polemics
The attempts to return physics to the 17th century deterministic picture of the Universe are archaic traces of bigotry of some people who will simply never be persuaded by any overwhelming evidence – both of experimental and theoretical character – if the evidence contradicts their predetermined beliefs how the world should work.
Well formulated. I like such polemics. Especially replacing the standard 19th century of such flames with the 17th century is nice. But there is room for enhancement. In the philosophy of science, I follow Popper, who likes to trace the origin of some of his ideas back to Ancient Greece. I also prefer the economic system based on the ideas of Adam Smith to much more modern ones developed by Lenin and Mao, so one can identify this sympathy for old ideas as deeply rooted in my personality. Indeed, I think there is nothing wrong with old ideas.
To describe pilot wavers as "predetermined" sounds really nice, but is, unfortunately, wrong. There are, of course, people who follow predetermined ideas. But those are the ideas they learned in their youth. Where are the proponents of pilot wave ideas supposed to have learned them? What I was taught was quantum theory and Marxism-Leninism, not pilot wave theory and Adam Smith. And I remember, in particular, an uncritical fascination when learning von Neumann's proof of the impossibility of a classical picture. I had neither a prejudice for 17th century determinism nor any of the "bourgeois prejudices" the communists liked to argue against.
It was not predetermination but the power of arguments (in particular, of Bell's "Speakable and Unspeakable in Quantum Mechanics") which persuaded me to switch to pilot wave theory. And an important part of this argumentative power was the simple proof of equivalence between pilot wave theory and quantum theory. There simply is no experimental evidence against pilot wave theory.
And, indeed, the "experimental evidence" presented by lumo was (in his polarizer argument, and similar ones about spins) based on the common error of not taking the measurement device into account, or (in his quantization argument) not applicable to de Broglie's version of pilot wave theory. About the theoretical evidence, judge for yourself.
But the very fact that the Bohmists actually don't work on the cutting-edge physics of spins, fields, quarks, renormalization, dualities, and strings is enough to lead us to a very different conclusion: they're just playing with fundamentally wrong toy models and by keeping their focus on the 1-particle spinless case, they want to hide the fact that their obsolete theory contradicts pretty much everything we know about the real world.
It is always fun to compare the "very facts" of such claims with reality. The one-particle spinless case has never been the focus of my interest, except where it appears sufficient to show some serious problems of other interpretations (arXiv:0901.3262, arXiv:0903.4657). The results of my work with spins, fields, and quarks I have already mentioned. And even renormalization is on my todo list, even if some other problems currently have higher priority for me.
I'm not sure that calling strings and dualities "cutting-edge physics" is justified. This is clearly a domain of research I leave to lumo – it may have a value as a nice exercise in mathematics, which is an important part of human culture, even if it has nothing to do with physics. Of course, one never knows – results of pure mathematicians who were proud of doing things which would never find an application are applied today in cryptography. It would be a really nice joke if some result found by lumo found a physical application in some hidden variable ether theory ;-).
Thomson Reuters
Quantum Computers - Published: March 2010
Interview Date: May 2009
Daniel Lidar
From the Special Topic of Quantum Computers
According to our Special Topics analysis of quantum computers research over the past decade, the work of Dr. Daniel Lidar ranks at #4 by total number of papers, based on 79 papers cited a total of 1,814 times between January 1, 1999 and December 31, 2009.
In the Web of Science®, Dr. Lidar currently has 113 original articles, reviews, and proceedings papers from 1998-2010, cited a total of 3,318 times. Six of these papers have been named as Highly Cited Papers in the field of Physics in Essential Science Indicators℠ from Thomson Reuters.
Dr. Lidar is Associate Professor in the Departments of Electrical Engineering and Chemistry at the University of Southern California in Los Angeles. He is also the Director and co-founding member of the USC Center for Quantum Information Science and Technology (CQIST).
In this interview, he talks about his highly cited research on quantum computers.
What first drew your interest to the field of quantum computing?
As I was finishing my Ph.D. research on scattering theory and disordered systems I realized I wanted a change of subject. A fellow graduate student told me about Shor's algorithm—the algorithm for efficient factoring which launched the field of quantum computing—and I was hooked. You can read about my educational background and research experiences at the end of this page.
This was in '96, and Shor had published his algorithm two years earlier, so the field was still relatively embryonic and unpopulated. All the papers that had been written on the subject literally fit on my desk. I thought it would be a good idea to move into an exciting young field of study which I could read everything about, and decided to switch to quantum computing. Moreover, this field seemed to contain a nice mix of fundamental questions and practical applications, both of which appealed, and continue to appeal, to me.
I wrote my first quantum computing paper with my Ph.D. advisor Ofer Biham several months later. At the time there were very few postdoc positions in the field, and I was lucky enough to find one at UC Berkeley, in Birgitta Whaley's group. Several terrific graduate students joined the group, and we worked well together. It was a very productive and inspiring period, which solidified my interest in quantum computing.
What is your main focus in the field?
My main focus is on ensuring that quantum computers can work reliably in spite of their extreme fragility. Quantum computers are particularly susceptible to decoherence, which is the result of their inevitable interactions with their environments. Decoherence can be thought of as the process by which a quantum system becomes classical. For a quantum computer this means the loss of any computational advantage over classical computers.
I have worked on a variety of different approaches designed to ensure the reliable operation of quantum computers in the presence of inevitable decoherence and other sources of noise and imperfections. These approaches include "hiding" a quantum computer from its environment (decoherence-free subspaces), minimizing the interaction between computer and environment (dynamical decoupling), and correcting errors induced by the environment and other noise sources (quantum error correcting codes).
"One reason that decoherence-free subspaces are important is because they offer a way to protect quantum information 'for free."
Most of my recent work on overcoming decoherence has focused on dynamical decoupling. The basic idea comes from the spin-echo effect in nuclear magnetic resonance. More than 50 years ago Hahn observed that the rapidly decaying signal from a nuclear magnetic resonance measurement could be revived, or "refocused," by applying a series of strong and frequent modulating pulses to the system under investigation.
This idea was picked up in the quantum computing community, starting with Lorenza Viola and Seth Lloyd in 1998, to overcome decoherence in quantum computers. In 2005 my former student Kaveh Khodjasteh (now a postdoc at Dartmouth College) and I proposed a variation on dynamical decoupling we called "concatenated dynamical decoupling," which involves recursively constructed pulse sequences, can in principle reduce decoherence to arbitrarily low levels much faster than other decoupling methods, and is inherently fault-tolerant to some degree.
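The recursive character of the construction can be sketched in a few lines. The nesting of the universal X-Z-X-Z decoupling cycle shown here is my illustration; the precise pulse conventions of the 2005 paper may differ:

```python
def cdd(level):
    """Recursively build a concatenated dynamical decoupling sequence.

    Sketch only: level 0 is a free-evolution period 'f'; each higher
    level nests the previous sequence into the universal decoupling
    cycle X-Z-X-Z. Details of the actual pulse sequences in the
    Khodjasteh-Lidar paper may differ from this illustration.
    """
    if level == 0:
        return ["f"]
    inner = cdd(level - 1)
    return ["X"] + inner + ["Z"] + inner + ["X"] + inner + ["Z"] + inner

print(" ".join(cdd(1)))   # X f Z f X f Z f -- the base universal cycle
print(len(cdd(3)))        # the sequence length grows roughly as 4**level
```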
Very recently we generalized these ideas, together with Lorenza Viola, to show that arbitrarily accurate quantum logic gates can also be implemented using concatenated pulse designs ("Arbitrarily accurate dynamical control in open quantum systems," Physical Review Letters 104[9]: art. no. 090501, 5 March 2010). I continue to work on dynamical decoupling with my students Gregory Quiroz and Wan-Jung Kuo.
Another focus of my work (mostly with my former student Joseph Geraci, now a scientist at the Ontario Cancer Biomarker Network) has been the design of algorithms which could run more efficiently on quantum than on classical computers, especially those pertaining to simulations of classical physics.
I have also been quite interested in developing methods to experimentally measure and characterize quantum noise channels, technically known as quantum process tomography. My former student Masoud Mohseni (now a postdoc at MIT) and I found efficient ways to do this using techniques borrowed from quantum error correction theory.
Most recently I have been devoting much of my attention, together with my postdocs Dr. Ali Rezakhani and Alioscia Hamma (now at the Perimeter Institute) and graduate students Wan-Jung Kuo and Kristen Pudenz, to an approach to quantum computing called "adiabatic quantum computing," which I find particularly intriguing and promising.
In the adiabatic approach the idea is to very slowly change the interactions between the particles in the quantum computer, so that while the initial interactions correspond to a Hamiltonian with a very simple ground state, the final interactions correspond to a Hamiltonian with a complicated ground state that encodes the answer to a hard computational question.
There are some similarities between this idea and the well-known simulated annealing algorithm, but the difference is that in adiabatic quantum computing the system is supposed to always remain in its ground state, so that the entire evolution actually takes place very close to zero temperature. This implies some natural robustness against decoherence, as well as intriguing connections to quantum phase transitions, and more widely to condensed matter physics.
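The interpolation idea can be illustrated with a tiny toy model. Everything below (the 3-qubit size, the choice of H0 and the diagonal cost function H1) is my example, not the specific systems studied in the work described here:

```python
import numpy as np

# Toy adiabatic interpolation H(s) = (1-s)*H0 + s*H1 on 3 qubits.
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def op_on(site, op, n):
    """Embed a single-qubit operator `op` on qubit `site` of an n-qubit register."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, op if k == site else I2)
    return out

n = 3
H0 = -sum(op_on(k, sx, n) for k in range(n))      # simple ground state |+++>
# H1 is diagonal; its unique minimum (here basis state |111>) encodes
# the "answer" to the hypothetical computational question.
H1 = np.diag(np.array([5.0, 3.0, 6.0, 1.0, 7.0, 2.0, 4.0, 0.0]))

# The evolution must be slow compared to the inverse square of the
# minimal spectral gap along the interpolation path.
for s in np.linspace(0.0, 1.0, 5):
    evals = np.linalg.eigvalsh((1.0 - s) * H0 + s * H1)
    print(f"s = {s:.2f}   gap = {evals[1] - evals[0]:.3f}")
```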
As a result adiabatic quantum computing provides a natural bridge between computer science and physics, which I find particularly appealing. I have always believed in the fruitfulness of combining ideas and techniques from a variety of different fields, which is another reason I was initially drawn to quantum computing.
Your most influential paper in the field wasn't actually included in our analysis, as it was published a year prior to our 10-year time window, but it's important to address it—your 1998 Physical Review Letters paper, "Decoherence-Free Subspaces for Quantum Computation," (Lidar DA, et al., 81[12]: 2594-7, 21 September 1998). Why is this paper cited so much?
This paper was among the first to point out that symmetry can be used to hide quantum information from the detrimental effects of decoherence. Symmetry has always fascinated physicists, and putting it to use to overcome some of the initial skepticism directed at quantum computing in light of decoherence was apparently an idea that was appreciated by the community.
I should point out that my USC colleague Paolo Zanardi co-authored an earlier paper on a closely related topic (Zanardi P, Rasetti M, "Noiseless quantum codes", Physical Review Letters 79[17]: 3306-9, 27 October 1997) which really provided the inspiration for our 1998 paper.
Many of your papers in our analysis deal with decoherence-free subspaces. Could you explain what these are and why they are important?
A decoherence-free subspace is a way to exploit a pre-existing symmetry in order to hide quantum information from the environment that tries to corrupt this information by measuring the state of the quantum computer.
Here is a classical analogy: You have two coins and want to use them to store one bit of classical information. "Easy," you say, "since two coins represent two bits of information." But now imagine that some nasty demon keeps flipping the coins at random, so that your efforts to store a bit are frustrated. Fortunately the demon can only flip both coins simultaneously. Is it still possible to reliably store a classical bit?
A moment's reflection reveals that the answer is yes: define the two subspaces "equal" (even parity) and "opposite" (odd parity). The first is the subspace comprising the states {heads,heads} and {tails,tails}. The second subspace is {heads,tails} and {tails,heads}. Now call "equal" 0 and call "opposite" 1. Since the demon can only flip both coins together, these 0 and 1 are protected. Indeed, under the demon's action {heads,heads} ↔ {tails,tails} and {heads,tails} ↔ {tails,heads}, so the two subspaces never get mixed.
Thus, instead of encoding a bit into each coin, we should encode a bit into the parity of the two coins. What is special about parity? It is the fact that it respects the symmetry induced by the demon's inability to distinguish between the two coins, a permutation symmetry.
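As a toy illustration of the analogy (mine, not the interview's): the logical bit stored in the parity survives any number of the demon's collective flips.

```python
import random

# Two coins, represented as bits. The "demon" can only flip both coins
# simultaneously, so the parity of the pair -- the logical bit -- is immune.
def demon_flip(coins):
    return tuple(1 - c for c in coins)     # flips BOTH coins at once

def logical_bit(coins):
    return coins[0] ^ coins[1]             # 0 = "equal", 1 = "opposite"

coins = (0, 1)                             # encode logical 1 as odd parity
for _ in range(1000):
    if random.random() < 0.5:              # demon strikes at random
        coins = demon_flip(coins)
assert logical_bit(coins) == 1             # the stored bit is untouched
print("stored bit still reads:", logical_bit(coins))
```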
One reason that decoherence-free subspaces are important is that they offer a way to protect quantum information "for free." Almost all other methods require active intervention. But perhaps more importantly, it turns out that the idea of using symmetry to protect quantum information actually lies at the heart of all other quantum information protection methods as well. Thus the concept of a decoherence-free subspace provides the starting point for a unified theory of quantum information protection.
Another important reason is that of all the ideas for quantum information protection, decoherence-free subspaces have probably been the most thoroughly experimentally tested. There is now plenty of experimental evidence, in a variety of different systems (linear optics, trapped ions, nuclear magnetic resonance, quantum dots), that decoherence-free subspaces exist and can be used as a first layer of defense in the quest to protect quantum information.
The Physical Review Letters paper you wrote with Alireza Shabani last year, "Vanishing Quantum Discord is Necessary and Sufficient for Completely Positive Maps" (102[10]: art. no. 100402, 13 March 2009), has been receiving citation attention. Would you tell us a bit about this paper?
This paper addresses a fairly technical problem, with a foundational flavor. Almost all of the mathematical work in quantum computing, and more generally in quantum information theory, takes for granted that quantum dynamics of open (non-isolated) systems can be described using a relatively simple type of transformation called a completely positive map, which is in some sense a generalization of the Schrödinger equation.
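For readers unfamiliar with the formalism, here is a minimal numerical illustration of a completely positive, trace-preserving map written in Kraus form, ρ → Σᵢ Kᵢ ρ Kᵢ†. The single-qubit depolarizing channel used here is a standard textbook example, not one taken from the paper under discussion.

```python
import numpy as np

# A completely positive, trace-preserving map in Kraus form:
# the single-qubit depolarizing channel with error probability p.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarize(rho, p):
    kraus = [np.sqrt(1 - 3*p/4) * I] + [np.sqrt(p/4) * P for P in (X, Y, Z)]
    return sum(K @ rho @ K.conj().T for K in kraus)

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # pure state |0><0|
out = depolarize(rho, 0.2)
print(np.trace(out).real)         # 1.0: the map preserves the trace
print(np.linalg.eigvalsh(out))    # non-negative: the output is a valid state
```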
My former student Alireza Shabani (now a postdoc at Princeton) and I wanted to understand the precise conditions under which such transformations actually apply to open quantum systems. Building on an earlier breakthrough by Cesar Rodriguez-Rosario et al., we showed that the necessary and sufficient condition for the validity of the completely positive map description (given certain standard assumptions) is that the system (e.g., a quantum computer) and its environment are only classically correlated, i.e., contain no quantum correlations whatsoever.
Technically such correlations can be quantified using a quantity called quantum discord, and the case of purely classical correlations is when the discord vanishes. The implication of our result is that the standard tool of completely positive maps now has a well-defined and well-understood domain of applicability.
More fundamentally, the implication is that any correlations with non-vanishing quantum discord can be interpreted as not allowing a consistent separation between system and environment: it is well known that when the transformation describing the evolution of the system is not a completely positive map, non-physical predictions, in particular events with negative probabilities, can arise in some cases.
Were there any papers not covered in our analysis that you feel are particularly key to the field? Which papers and why?
Well, clearly the 1998 Physical Review Letters paper mentioned above is in this category, but it wasn't covered since it was published prior to 1999.
Two other papers, which are named as Highly Cited Papers in the field of Physics by Essential Science Indicators from Thomson Reuters, but which were left out of the Special Topics analysis are "Concatenating decoherence-free subspaces with quantum error correcting codes" (with D. Bacon and K.B. Whaley, Physical Review Letters 82[22]: 4556-9, 31 May 1999), and "Quantum phase transitions and bipartite entanglement" (with L.A. Wu and M.S. Sarandy, both former postdocs of mine with faculty positions in Spain and Brazil, respectively, Physical Review Letters 93[25]: art. no. 250404, 17 December 2004).
The first of these established that the information-protection methods of decoherence-free subspaces and quantum error correcting codes can be combined to yield a single method that is more economical and robust than either method used separately. The second paper resolved a problem at the interface of quantum computing and condensed matter theory: why quantum phase transitions are often accompanied by drastic changes in quantum entanglement.
How has the field of quantum computing changed in the past decade? Where do you hope to see it go in the next?
The field has undergone tremendous progress, from one driven mostly by theoretical work to a thriving discipline with numerous experimental groups and steady progress toward the goal of building larger and more robust quantum information-processing devices, and a much clearer theoretical understanding of the source of the power of quantum computers and of means of making them robust.
The field has also gained official recognition with a topical group in the American Physical Society, several dedicated journals, special sections in established journals, and dedicated federal funding.
In the next decade I would hope to see the first generation of commercially viable quantum computers, perhaps as dedicated machines capable of performing specialized simulation tasks (the efforts of the Canadian startup D-Wave Systems Inc. are notable in this regard).
Daniel Lidar, Ph.D.
Departments of Electrical Engineering and Chemistry
University of Southern California
Los Angeles, CA, USA
Additional Information: Read about the educational background and research experiences of Dr. Daniel Lidar.
Daniel Lidar's current most-cited paper in Essential Science Indicators, with 176 cites:
Kempe J, et al., "Theory of decoherence-free fault-tolerant universal quantum computation," Phys. Rev. A 63(4): art. no. 042307, April 2001. Source: Essential Science Indicators from Thomson Reuters.
|
7d6f1ccc9874bc96 | The Tight-binding model for electronic band structure
Like the free-electron-gas model, the tight-binding model belongs in the independent-electrons framework.
This model is essentially the only place (at the level of the Struttura della Materia course) where we carry out an actual calculation of an electronic band structure that explicitly takes into account the presence of the periodic lattice potential.
Contrary to the free-electron picture, the tight-binding model describes the electronic states starting from the limit of isolated-atom orbitals.
This simple model gives good quantitative results for bands derived from strongly localized atomic orbitals, which decay to essentially zero within a radius much smaller than half the nearest-neighbor distance in the solid.
For the more interesting bands, the conduction bands, the results of tight-binding are usually in rather poor agreement with experiment. As we shall see, tight binding could be systematically improved by including additional levels/bands, so that the accuracy of the calculated bands increases, at the expense of the simplicity and transparency of the model.
Here we shall apply the general formalism to the simplest case of an isolated s band.
The starting point of this model is the decomposition of the total single-electron Hamiltonian into:
H = Hat + ΔU(r) ,
where Hat contains the kinetic energy plus the potential of a single ion (the one placed at R=0, say), and ΔU(r) is the potential generated by all the other ions in the lattice except for the one already considered. The eigenfunctions of the atomic problem satisfy the Schrödinger equation
Hat φn(r) = En φn(r) ,
where n represents collectively a full set of (orbital) atomic quantum numbers. We then expand a generic state localized around the atom as
φ(r) = Σn bn φn(r)
[notice that this state is not quite generic, as the continuum states of the atom are left out of the sum]. We then make a combination of such localized states, with the symmetry of the lattice:
ψ(r) = ΣR exp(i k·R) φ(r-R)
[Exercise: verify that this state has the Bloch property ψ(r+R) = exp(i k·R) ψ(r) ].
For fixed k, we plug this state into the Schrödinger equation:
H ψ(r) = [Hat + ΔU(r)] ψ(r) = E(k) ψ(r)
This differential equation maps to a matrix equation by multiplication on the left by φm*(r) and integration over the r variable:
Σn A(k)mn bn = E(k) Σn B(k)mn bn
This, for each fixed k, is a generalized eigenvalue problem for the matrix A(k), with a "metric" matrix B in place of the identity. E(k) represents the eigenvalue, corresponding to the eigenvector b (of course, also b is k-dependent, in general). The "energy" matrix above is:
A(k)mn = ΣR exp(i k·R) ∫ φm*(r) H φn(r-R) dr
The "overlaps" matrix above is:
B(k)mn = ΣR exp(i k·R) ∫ φm*(r) φn(r-R) dr
where the sums over R extend over all the lattice points. This is conveniently rearranged (using the decomposition H = Hat + ΔU(r), the fact that the φm*(r) are (left) eigenstates of Hat, and the orthonormality of the φn(r)) into:
Σn C(k)mn bn = (E(k) - Em) Σn B(k)mn bn
where the new matrix C(k) contains what is left of the Hamiltonian, i.e.
C(k)mn = ΣR exp(i k·R) ∫ φm*(r) ΔU(r) φn(r-R) dr
Now the eigenvalue (E(k) - Em) of the generalized secular problem measures the displacement of the "band" energy E(k) with respect to the original atomic value Em.
The size of this matrix eigenvalue problem is clearly as large as the number of eigenstates of the atomic problem, i.e. infinite. It is therefore necessary to make some approximation here. In particular, one could hope that all the off-diagonal matrix elements of the matrices on the right-hand side of this equation could be neglected for some given level m. This cannot work for degenerate atomic levels (such as p, d, f... orbitals), where the couplings between the degenerate levels form the main part of the Hamiltonian, the one that resolves the degeneracy: for degenerate d levels, one has to solve a matrix problem at least as large as the degeneracy (5-dimensional), at each value of k.
The only case where it sometimes makes sense to neglect the interaction with all levels n ≠ m is that of nondegenerate atomic s (l=0) levels. In this approximation, the matrix equation becomes a trivial 1×1 problem, for which one takes bm=1 and all the other bn=0 (n ≠ m), and only the "m" energy equation:
C(k)mm = (E(k) - Em) B(k)mm
remains, with solution:
E(k) = Em + C(k)mm/B(k)mm
Now, in the infinite R-sums of the definitions of the B(k) and C(k) matrices, it is convenient to separate the contribution from R=0, the contribution from R in the first shell around the origin (first or nearest neighbors), the contribution of the second shell around the origin (second neighbors), etc. The R=0 contribution to B(k) is 1, and that to C(k) is
χ = ∫ ΔU(r) |φm(r)|² dr
which is a negative quantity, reflecting the attraction that the "other" nuclei produce on the band electron, which was not there when the atom was isolated. We shall indicate the R ≠ 0 contributions to B(k) and C(k) as
β(R) = ∫ φm*(r) φm(r-R) dr
γ(R) = ∫ φm*(r) ΔU(r) φm(r-R) dr
respectively. Note that, for real atomic orbitals on a lattice with inversion symmetry, β(-R) = β(R) and γ(-R) = γ(R). Accordingly, one can combine the complex exponentials into cosines, and rearrange the solution for E(k) to:
E(k) = Em + [χ + ΣR≠0 cos(k·R) γ(R)] / [1 + ΣR≠0 cos(k·R) β(R)]
This E(k) gives the tight-binding band structure in terms of a set of parameters β(R), χ and γ(R). We also have an explicit recipe to compute these parameters in terms of overlap integrals at different sites.
Due to the exponential decay of the atomic wave functions at large distance, both the overlap integrals β(R) and the energy integrals γ(R) become exponentially small for large distance R between the centers of the atoms. It therefore makes sense to ignore all the integrals outside some Rmax, which would bring in only negligible corrections to the bandstructure E(k). One may obtain a band structure depending on a minimal number of parameters by making two further, rather radical, approximations:
- neglect all the overlap corrections β(R), i.e. replace the denominator above by 1;
- retain the energy integrals γ(R) only for R in the first shell of nearest neighbors (NN).
In this approximation the band energy simplifies further to:
E(k) = Em + χ + γ ΣR∈NN cos(k·R)
where γ indicates the value of γ(R) for the nearest neighbors. In this approximation, the band is determined by 2 parameters only: Em + χ, which tunes the band mean energy, and γ, which sets the band width.
[Further details for this model can be studied, for example, in the Ashcroft-Mermin textbook "Solid State Physics" (Chap. 10).]
1. Write out explicitly the s-band tight-binding E(k) for a 1-dimensional lattice of spacing a, in the approximations outlined above. Draw it in the first Brillouin zone (-π/a, π/a), assuming that the original atomic level position is at -6.3 eV, that the on-site potential integral is χ = -0.7 eV and the NN energy integral is γ = -1.2 eV. How would that change upon inclusion of the second-nearest-neighbor integral γ' = 0.4 eV? How would the band change upon further inclusion of the nearest-neighbor orthogonalization correction β = 0.15? Draw the three bands.
nearest neighbor only: E(k) = Em + χ + 2 γ cos(k a) = -7 eV - 2.4 eV · cos(k a)
all corrections: E(k) = Em + [χ + 2 γ cos(k a) + 2 γ' cos(2 k a)] / [1 + 2 β cos(k a)]
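The three bands of exercise 1 can be checked numerically; the sketch below assumes a = 1 and the per-neighbor convention for β used throughout these notes (hence the factor 2 in the denominator).

```python
import numpy as np

# s-band E(k) in the first Brillouin zone for the three approximation levels
# of exercise 1: Em = -6.3 eV, chi = -0.7 eV, gamma = -1.2 eV,
# gamma2 = 0.4 eV (second neighbors), beta = 0.15 (NN overlap), a = 1.
Em, chi, gam, gam2, beta = -6.3, -0.7, -1.2, 0.4, 0.15
k = np.linspace(-np.pi, np.pi, 201)                          # k*a in (-pi, pi)

E_nn   = Em + chi + 2*gam*np.cos(k)                          # NN only
E_nnn  = E_nn + 2*gam2*np.cos(2*k)                           # + 2nd neighbors
E_full = Em + (chi + 2*gam*np.cos(k) + 2*gam2*np.cos(2*k)) \
            / (1 + 2*beta*np.cos(k))                         # + overlap beta

print(f"NN-only band spans [{E_nn.min():.2f}, {E_nn.max():.2f}] eV")
```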
2. In 3 dimensions E(k) is a function of the 3 variables (kx,ky,kz). Write out explicitly the s-band tight-binding E(k) for a 3-dimensional simple-cubic (SC) lattice of spacing a, in the simplest NN approximation (no β correction) outlined above. Draw a slice of the band in the first Brillouin zone for kx in (-π/a, π/a), ky=0, kz=0. On the same (kx,E(k)) plot, draw another slice of the band for ky changing to π/a (all the rest left unchanged). What is the total width of this band?
RESULT: E(k) = Em + χ + 2 γ [cos(kx a) + cos(ky a) + cos(kz a)]. Total bandwidth = 12 |γ|
3. Write out explicitly the s-band tight-binding E(k) for a 3-dimensional fcc (face centered cubic) lattice of spacing a, taking into account the overlap integrals to the 12 nearest neighbor sites. Expand E(k) around k=0 and show that (as happens in all cubic cases) the dispersion is isotropic for small k, and similar to that of free fermions.
NOTE: The first Brillouin zone of the fcc lattice has a nontrivial shape. This does not affect the solution of this specific exercise, but it is interesting to see the region of k-space containing all independent k points where the computed E(k) is defined.
4. Write out explicitly the s-band tight-binding E(k) for a 2-dimensional square lattice of spacing a, taking into account the overlap integrals to the 4 nearest neighbor sites. Fill the states according to the 0-temperature Fermi distribution, so that 1 (non-interacting spin-½) electron per site is present, as in, e.g., a hypothetical square-lattice Na. What is the resulting shape of the Fermi "surface" (i.e. Fermi line) in the (kx,ky) plane? Compare to the Fermi "surface" of the free-electron dispersion.
HINT: use the symmetry of the band around its energy center, and the fact that an s-band can accommodate 2 (spin-½) electrons at each k-point (thus a total 2×(number of sites) electrons in the completely filled band).
5. The benzene molecule C6H6 can be thought of as a ring of 6 sites, on each of which a C atom is placed. Each C atom contributes one electron to delocalized pz (out-of-plane) orbitals, which can be roughly described with the tight-binding s-band model (negative γ, including only nearest-neighbor C), in the non-interacting-electron approximation. The excitation energy to the lowest excited state is observed to be 2.0 eV.
1. Draw a single-particle energy diagram for these delocalized electronic states.
RESULT: allowed independent k = n 2 π/(6 a), with n=0, ±1, ±2, +3.
2. Fill the 6 lowest spin-orbital states with the electrons provided by the C atoms: compute the ground-state total energy.
RESULT: EGS = 8 γ
3. Draw the lowest excited 6-electron state, by promoting one electron to the lowest empty state. Compute the energy of this 1st excited state and its excitation energy referred to the ground state. By comparison to the experimental data, determine γ.
RESULT: E1st ex = 6 γ, ExcE1st ex = -2 γ, whence γ = -1.0 eV
4. Determine the excitation energies referred to the ground state of the 2nd and 3rd excited states.
RESULT: ExcE2nd ex = 3.0 eV, ExcE3rd ex = 4.0 eV
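The level scheme and the energies quoted in these results can be verified by direct diagonalization of the 6-site ring; a minimal sketch (with the on-site energy Em + χ set to zero, so all energies are relative to it, and γ = -1.0 eV as determined above):

```python
import numpy as np

# Nearest-neighbor tight-binding Hamiltonian of a 6-site ring (the benzene
# pi system), on-site energy 0, hopping gamma = -1.0 eV.
gam = -1.0
H = np.zeros((6, 6))
for i in range(6):
    H[i, (i + 1) % 6] = H[(i + 1) % 6, i] = gam

levels = np.linalg.eigvalsh(H)        # ascending: [2g, g, g, -g, -g, -2g]
print("levels:", levels)              # [-2. -1. -1.  1.  1.  2.]
E_gs = 2 * levels[:3].sum()           # fill the 3 lowest orbitals, 2 e each
print("E_GS =", E_gs, "= 8*gamma =", 8 * gam)
```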
1. The sign of γ(R) is determined by the relative signs of the tails of the wavefunctions φm(r) and φm(r-R) in the overlapping region. For s-orbitals, γ(R) has the same sign as ΔU(r), i.e. negative. As a consequence, s-band tight binding places the bottom of the band at k=0 (similarly to a free-electron-like dispersion, see ex. 3).
However, for p bands one can see qualitatively that the overlapping tails of the wavefunctions φm(r) and φm(r-R) can have opposite sign (the p wavefunction changes sign across a plane passing through the nucleus), thus γ(R)>0 in that case. As a consequence, a p band typically shows a band maximum at k=0, with negative curvature of E(k). This does not apply to the π-band of benzene in the exercise above, as the nodal plane where φm(r) changes sign is parallel to the line joining neighboring atoms.
2. The tight-binding method is an approximate method for computing bandstructures. The approximation involved is a truncation of the basis.
Another standard elementary technique is the perturbative method: the starting point of the free-electron parabolic dispersion is perturbed by a periodic potential, assumed to be "weak". A qualitative understanding of how the bands change and how gaps open can be gathered by a perturbative analysis. Exact bands are then obtained if the plane-waves basis is used in a full diagonalization approach as in current electronic-structure codes.
Another popular toy bandstructure scheme is the Kronig-Penney model.
The best thing about these methods is that they all give the same qualitative result: a band spectrum. This is not a random coincidence: it reflects the discrete translational invariance of the periodic potential, the symmetry enforcing Bloch's theorem.
Comments and debugging are welcome!
created: 11 Jan 2002 last modified: 16 Jan 2012 by Nicola Manini |
14a43ffa8be857b4 | Psychology Wiki
Wavefunction collapse
In certain interpretations of quantum mechanics, wave function collapse is one of two processes by which quantum systems apparently evolve according to the laws of quantum mechanics. It is also called collapse of the state vector or reduction of the wave packet. The reality of wave function collapse has always been debated, i.e., whether it is a fundamental physical phenomenon in its own right (which may yet emerge from a theory of everything) or just an epiphenomenon of another process, such as quantum decoherence. In recent decades the quantum decoherence view has gained popularity.
History and Context
By the time John von Neumann wrote his famous treatise Mathematische Grundlagen der Quantenmechanik in 1932, the phenomenon of "wave function collapse" was accommodated into the mathematical formulation of quantum mechanics by postulating that there were two processes of wave function change:
1. The probabilistic, non-unitary, non-local, discontinuous change brought about by observation and measurement, as outlined above.
2. The deterministic, unitary, continuous time evolution of an isolated system that obeys Schrödinger's equation (or nowadays some relativistic, local equivalent).
In general, quantum systems exist in superpositions of those basis states that most closely correspond to classical descriptions and, when not being measured or observed, evolve according to the time dependent Schrödinger equation, relativistic quantum field theory or some form of quantum gravity or string theory; this is process (2) mentioned above. However, when the wave function collapses -- process (1) -- from an observer's perspective the state seems to "leap" or "jump" to just one of the basis states and uniquely acquire the value of the property being measured, e_i, that is associated with that particular basis state. After the collapse, the system begins to evolve again according to the Schrödinger equation or some equivalent wave equation.
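As a toy numerical illustration of process (1), assuming nothing beyond the Born rule: a measurement outcome i is drawn with probability |c_i|², after which the state is replaced by the corresponding basis state.

```python
import numpy as np

# Projective measurement of a qubit in the computational basis:
# sample an outcome with Born probabilities, then "collapse" the state.
rng = np.random.default_rng(0)
psi = np.array([3, 4j]) / 5.0           # normalized superposition
probs = np.abs(psi)**2                  # Born rule: [0.36, 0.64]
i = rng.choice(len(psi), p=probs)       # the apparent "jump" to outcome i
collapsed = np.zeros_like(psi)
collapsed[i] = 1.0                      # post-measurement basis state
print("outcome:", i, "post-measurement state:", collapsed)
```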
Hence, in experiments such as the double-slit experiment each individual photon arrives at a discrete point on the screen, but as more and more photons are accumulated, they form an interference pattern overall.
The existence of the wave function collapse is required in the Copenhagen interpretation and in objective-collapse theories, which treat it as a genuine physical process.
On the other hand, the collapse is considered as redundant or just an optional approximation in approaches such as the Everett many-worlds interpretation and decoherence-based views, discussed below.
The cluster of phenomena described by the expression wave function collapse is a fundamental problem in the interpretation of quantum mechanics known as the measurement problem. The problem is not really confronted by the Copenhagen interpretation which simply postulates that this is a special characteristic of the "measurement" process. The Everett many-worlds interpretation deals with it by discarding the collapse-process, thus reformulating the relation between measurement apparatus and system in such a way that the linear laws of quantum mechanics are universally valid, that is, the only process according to which a quantum system evolves is governed by the Schrödinger equation or some relativistic equivalent. Often tied in with the many-worlds interpretation, but not limited to it, is the physical process of decoherence, which causes an apparent collapse. Decoherence is also important for the interpretation based on Consistent Histories.
Note that a general description of the evolution of quantum mechanical systems is possible by using density operators and quantum operations. In this formalism (which is closely related to the C*-algebraic formalism) the collapse of the wave function corresponds to a non-unitary quantum operation.
Note also that the physical significance ascribed to the wave function varies from interpretation to interpretation, and even within an interpretation, such as the Copenhagen Interpretation. If the wave function merely encodes an observer's knowledge of the universe then the wave function collapse corresponds to the receipt of new information -- this is somewhat analogous to the situation in classical physics, except that the classical "wave function" does not necessarily obey a wave equation. If the wave function is physically real, in some sense and to some extent, then the collapse of the wave function is also seen as a real process, to the same extent. One of the paradoxes of quantum theory is that the wave function seems to be more than just information (otherwise interference effects are hard to explain) and yet less than fully real, since the collapse seems to take place faster than light and to be triggered by observers.
|
5c5fb1eef8ebe5ec | Molecular modelling is changing chemistry – could it change education too? Ida Emilie Steinmark takes a closer look at computational chemistry methods
Chemistry isn’t what it used to be. There was a time when things were, some might argue, simpler. It was practical, wet, occasionally dirty and sometimes smelly. Now, a large group of chemists are exchanging the 8-hour lab sessions for screen marathons, not working with chemicals but simulating them. The rise of molecular modelling is changing the face of chemistry in exciting new ways, which could present both opportunities and challenges to the chemical sciences as a whole and to chemistry education.
Molecular modelling has been called the fourth axis of chemistry – it lies somewhere between theory, observation and experiment. At its core are attempts to describe the state and behaviour of molecules through computer simulations – a fundamentally different approach to the rest of chemistry. For Carmen Domene, a computational chemist at King’s College London, it is also a way to see chemistry from a completely new angle: ‘The computer is a microscope for things that other techniques can’t see,’ she says, paraphrasing the great computational biophysicist Klaus Schulten, who passed away in 2016.
Swift and simple or slow and steady?
Simulations are like calculated pictures of atoms and molecules, sometimes at a moment in time, sometimes over a dynamic range. This time dimension, along with size, mostly determines what type of simulation you can use. ‘If you want to try to model some particular situation, you have to bear in mind the [size] of your process, and the timescale,’ says Carmen. ‘The system might have one atom or one million atoms. You will have to choose which kind of technique is appropriate for what you want to model and what you want to understand.’ This means we can roughly divide the field into two halves: molecular mechanics and quantum chemistry.
Molecular mechanics is the classical physics approach and is therefore arguably more intuitive. ‘The atoms are represented like balls with a particular mass and a particular charge and they are linked to each other with springs,’ Carmen explains. A computational chemist will then calculate the energy of a molecule based on a function that includes energy terms related to bonding and non-bonding interactions – a function futuristically named the force field. ‘Then you can use Newton’s laws of motion and Hooke’s law to study a dynamic system,’ Carmen says. ‘You can generate movies of how the particles evolve with time. But with that approach, you can’t understand chemical reactions.’
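As a minimal sketch of this ball-and-spring picture (illustrative parameters, not a real force field): two unit-mass atoms joined by a single harmonic bond, integrated with the standard velocity Verlet scheme to generate the kind of ‘movie’ Carmen describes.

```python
import numpy as np

# Two atoms on a line, joined by a Hooke's-law bond of rest length r0 and
# force constant k_bond; Newton's equations integrated by velocity Verlet.
k_bond, r0, dt = 100.0, 1.0, 0.001
x = np.array([0.0, 1.2])                # bond initially stretched past r0
v = np.zeros(2)

def forces(x):
    f = -k_bond * (x[1] - x[0] - r0)    # force on atom 1 along the bond
    return np.array([-f, f])            # equal and opposite on atom 0

f = forces(x)
for step in range(5000):                # each step is one frame of the "movie"
    v += 0.5 * dt * f                   # half kick
    x += dt * v                         # drift
    f = forces(x)
    v += 0.5 * dt * f                   # half kick
print("final bond length:", x[1] - x[0])  # oscillates around r0 = 1.0
```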
This is where the shortcomings of molecular mechanics start to show: it is a quick and efficient method, but it is simplified and lacks accuracy. The other approach, quantum mechanics, makes up for this with its rigour. Leaving the simplistic ball-and-spring system behind, quantum mechanical methods are aimed at solving the Schrödinger equation, allowing scientists to study electronic structure and the details of bond-making and bond-breaking processes. Unfortunately, it is also very computationally intensive. ‘If we go to really large systems, like reactions taking place within proteins, you have a system with one hundred to one million atoms,’ Carmen says. ‘You can’t really use quantum chemistry with those large systems.’
As computers continue to improve, they’re capable of calculating even larger parts quantum chemically, but this isn’t necessarily the joyride it would appear to be. ‘We have better and better computers, but the problem we have now is the data,’ Carmen says. ‘We don’t have the storage, and when we try to analyse the data, it takes a long, long time.’ This issue isn’t limited to chemical fields. In fact, if anything is going to put a spanner in the works of the computational revolution, data could be it. ‘The bottleneck is the amount of data we produce and how to store it,’ she says. Luckily she adds: ‘But you don’t really need quantum for all the calculations we do.’
Clearly, the optimal method would be one that combined the speed of molecular mechanics and the accuracy of quantum mechanics. A massive step towards this was achieved in 1976 when two scientists, Arieh Warshel and Michael Levitt, published a paper about enzymatic reactions that outlined a new, powerful tool named QM/MM. ‘Imagine you are on top of a bridge over a motorway, and you can see the cars passing. If you use quantum chemistry, you just take a picture of one of these [cars passing],’ Carmen says. ‘But what [Warshel and Levitt] were able to do with this technique, was to see all the cars going past, one after the other. They used Newton’s laws of motion so you can see what happens in time, but the description of the system was quantum.’
Here, the most important part of the system under investigation, where actual chemistry takes place, is handled by quantum chemical calculations while the rest of the system is dealt with by molecular mechanics. By doing it this way, Warshel and Levitt were able to simulate a much larger system in a computationally efficient way. This was so groundbreaking they were awarded the 2013 Nobel prize in chemistry, along with Martin Karplus, who had distinguished himself in almost all types of molecular modelling, most notably in the molecular dynamics of biological systems. According to Carmen, the advantages of the approach are obvious. ‘In some systems, the environment is very important and you have to include it in your calculations and that’s why you need QM/MM,’ she explains. ‘Where the reaction takes place, you describe it in a quantum chemistry way, and then you see the rest of the environment classically.’
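Schematically, the additive QM/MM energy has the form sketched below. The function names and numbers are hypothetical placeholders standing in for calls to a quantum-chemistry engine and a force-field engine; this is a cartoon of the partition, not Warshel and Levitt's actual implementation.

```python
# Additive QM/MM energy partition (schematic). The reactive core is treated
# quantum mechanically, the environment classically, plus a coupling term.
def e_qm(core):                  # placeholder for a quantum-chemistry call
    return -75.3                 # arbitrary illustrative energy

def e_mm(env):                   # placeholder for a force-field call
    return -12.1                 # arbitrary illustrative energy

def e_coupling(core, env):       # electrostatic / van der Waals link terms
    return -0.8                  # arbitrary illustrative energy

def e_qmmm(core, env):
    # total = quantum core + classical environment + coupling between them
    return e_qm(core) + e_mm(env) + e_coupling(core, env)

print(e_qmmm(core=["C", "O"], env=["H2O"] * 100))
```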
Simulating the extra mile
Molecular modelling has, since its first implementation, proved to be immensely useful, particularly where chemistry overlaps with molecular biology. Simulation is a great way to study intricate biological problems like protein folding, enzyme reactions and conformational change. But it also finds great use in core chemical subjects. Natalie Fey, computational chemist and lecturer at Bristol University, is using computational methods to study organometallic catalysis. ‘Energetically a lot of catalytic cycles are very, very finely balanced. So what I’m interested in is whether we can use computational chemistry to support that process, but in the longer term also get ahead of it – to be able to predict what the best system is for a given reaction.’
She outlines three main ways in which computational methods can complement synthesis.
Firstly, it allows a more quantitative approach. Take characterising and comparing ligands on a transition metal catalyst for example. ‘You might say that chemists do this all the time by just looking at structures, and that’s absolutely true. What we can do by adding the computation is to make it quantitative. Instead of just saying “they look different”, we can actually say “they’re different by x amount”,’ she explains.
A second factor is predictability, which can avoid months of trial and error in the lab. ‘We can try to predict barriers to reactions,’ she says, ‘and we can see how a change to the catalyst can change those barriers. That is something that is very, very hard to do synthetically, but it’s our bread and butter computationally.’
And finally, there’s the issue of transition states. ‘Computational chemistry is really the only way to observe transition states in chemical reactions – they cannot be directly observed experimentally,’ Natalie explains. ‘By actually allowing us to plot reaction energy profiles, not only can we compare processes in terms of whether they’re thermodynamically favourable, but also whether they’re kinetically favourable.’
For anyone who’s ever taken a mechanistic organic or inorganic course, that should impress – and chemistry as a field certainly is impressed. ‘We really cannot satisfy the demand people now have for adding computational studies to their synthetic work,’ she says. ‘If you look through the literature now, there are more and more studies, and it’s being demanded by referees as well. They’ll say, “Ok, so you’ve come up with an idea about what the reaction mechanism might be, but can you prove it? Can you support that suggestion by running some calculations?”’
Worth a thousand words
It’s clear that computational chemistry has reached maturity within the chemical sciences, but that does leave some questions for educators. Should it be taught in school and in more undergraduate courses, and if so, how? Alan Shusterman, computational chemist and professor of chemistry at Reed College in Oregon, US, says it’s difficult to find the room even if you think it’s important. ‘From a teacher’s perspective, they don’t necessarily see how to make it work. Where would computers go in the lesson plan?’ he explains. ‘It’s a challenge for me, too, to find room in a very crowded curriculum.’
One trick might be to incorporate modelling methods into other topics that are already taught, not unlike how computation is being used outside purely computational research. ‘Computation is permeating all kinds of research now, so many experimental papers have computational components,’ Alan says. ‘Students in my organic chemistry class are working with molecular models and electronic structure models as part of their laboratory experience. They learn how to build models, how to predict the energy of different conformations of molecules and how to predict NMR and IR spectra.’ That way, students are exposed to molecular modelling while practicing their existing chemical knowledge.
In fact, using computation and modelling provides a great opportunity for teaching students, according to Alan. ‘Their [gained] skillset directly translates into being a chemist. I emphasise graphics a lot, so they’re seeing things they otherwise would only have explained to them using words or through mathematical equations,’ he says. ‘It’s about trying to make concepts real. How do molecules move? How are electrons distributed? How do molecules stick together? I can show them a picture.’
He also stresses the exposure gives them the ability to interpret computational data and results in a meaningful way. ‘Part of my advanced course is reading papers and discussing them because I know students want to understand that kind of language.’
Despite the advantages of modelling in chemistry education, there are also challenges. In the English AS- and A-level curriculum, while there’s a big focus on the students’ practical and mathematical abilities (with good reason), computer skills aren’t mentioned, despite a big push recently to introduce schoolchildren to computing and coding. As such, for chemistry students starting university, computational methods might feel a little out of place. ‘Most students don’t come to it thinking “oh yes, this is a natural part of chemistry”,’ Alan explains. ‘Some look at it and think, “This doesn’t contain the features of chemistry that drew me to the subject”.’ This bias can of course also work in the opposite direction – students who don’t particularly like laboratory work but enjoy computational work may not realise chemistry could still be for them.
Regardless of how the chemistry education field decides to handle computational methods and molecular modelling, it appears these tools are here to stay. A 2012 report from Goldbeck Consulting explains that the number of publications and the citation impact in simulation and modelling have grown faster than the science average – and that was five years ago. Currently, it is hard to imagine that development halting.
And while there are undoubtedly challenges to bringing molecular modelling into education, these challenges are hardly unique to modelling, but rather general for all new scientific approaches. And maybe the strengthening of computation in education will result in not just better computational chemists, but better chemists in general. |
e640d6c30b0d791d | Time's arrow and Boltzmann's entropy
From Scholarpedia
Joel L. Lebowitz (2008), Scholarpedia, 3(4):3448. doi:10.4249/scholarpedia.3448, revision #137152
Curator: Joel L. Lebowitz
The arrow of time expresses the fact that in the world about us the past is distinctly different from the future. Milk spills but doesn't unspill; eggs splatter but do not unsplatter; waves break but do not unbreak; we always grow older, never younger. These processes all move in one direction in time - they are called "time-irreversible" and define the arrow of time. It is therefore very surprising that the relevant fundamental laws of nature make no such distinction between the past and the future. This in turn leads to a great puzzle - if the laws of nature permit all processes to be run backwards in time, why don't we observe them doing so? Why does a video of an egg splattering run backwards look ridiculous? Put another way: how can time-reversible motions of atoms and molecules, the microscopic components of material systems, give rise to the observed time-irreversible behavior of our everyday world? The resolution of this apparent paradox is due to Maxwell, Thomson and (particularly) Boltzmann. These ideas also explain most other arrows of time, in particular why we remember the past but not the future.
What is time
Time is arguably among the most primitive concepts we have—there can be no action or movement, no memory or thought, except in time. Of course this does not mean that we understand, whatever is meant by that loaded word "understand", what time is. As put by Saint Augustine: "What then is time? If no one asks me, I know; if I wish to explain it to one who asks, I know not."
In a book entitled Time's Arrow and Archimedes' Point the Australian philosopher Huw Price describes well the "stock philosophical debates" about time. These have not changed much since the time of Saint Augustine or even earlier.
"... Philosophers tend to be divided into two camps. On one side there are those who regard the passage of time as an objective feature of reality, and interpret the present moment as the marker or leading edge of this advance. Some members of this camp give the present ontological priority, as well, sharing Augustine's view that the past and the future are unreal. Others take the view that the past is real in a way that the future is not, so that the present consists in something like the coming into being of determinate reality. .... Philosophers in the opposing camp regard the present as a subjective notion, often claiming that now is dependent on one's viewpoint in much the same way that here is. Just as "here" means roughly "this place", so "now" means roughly "this time", and in either case what is picked out depends where the speaker stands. In this view there is no more an objective division of the world into the past, the present, and the future than there is an objective division of a region of space into here and there.
Often this is called the block universe view, the point being that it regards reality as a single entity of which time is an ingredient, rather than as a changeable entity set in time."
A very good description of the block universe point of view is given by Kurt Vonnegut in his novel Slaughterhouse-Five. The coexistence of past, present and future forms one of the themes of the book. The hero, Billy Pilgrim, speaks of the inhabitants of Tralfamadore, a planet in a distant galaxy: "The Tralfamadorians can look at all different moments just the way we can look at a stretch of the Rocky Mountains, for instance. They can see how permanent all the moments are, and they can look at any moment that interests them. It is just an illusion we have here on earth that one moment follows another like beads on a string, and that once a moment is gone it is gone forever."
This view (with relativity properly taken into account) is certainly the one held by most physicists—at least when they think as physicists. It is well expressed in the often quoted passage from Einstein's letter of condolences upon the death of his youthful best friend Michele Besso: "Michele has left this strange world just before me. This is of no importance. For us convinced physicists the distinction between past, present and future is an illusion, although a persistent one."
There are however also more radical views about time among physicists. At a conference on the Physical Origins of Time Asymmetry which took place in Mazagon, Spain, in 1991, the physicist Julian Barbour conducted an informal poll about whether time is fundamental. Here is Barbour's account of that from his book The End of Time.
"During the Workshop, I conducted a very informal straw-poll, putting the following question to each of the 42 participants: Do you believe time is a truly basic concept that must appear in the foundations of any theory of the world, or is it an effective concept that can be derived from more primitive notions in the same way that a notion of temperature can be recovered in statistical mechanics?
The results were as follows: 20 said there was no time at a fundamental level, 12 declared themselves to be undecided or wished to abstain, and 10 believed time did exist at the most basic level. However, among the 12 in the undecided/abstain column, 5 were sympathetic to or inclined to the belief that time should not appear at the most basic level of theory."
Matter in space-time
In this article, the intuitive notion of space-time as a primitive undefined concept is taken as a working hypothesis. This space-time continuum is the arena in which matter, radiation and all kinds of other fields exist and change.
Many of these changes have a uni-directional order "in time", or display an arrow of time. One might therefore expect, as Feynman puts it, that there is some fundamental law which says, that "uxels only make wuxels and not vice versa." But we have not found such a law.... "so this manifest fact of our experience is not part of the fundamental laws of physics." The fundamental microscopic laws (with some, presumably irrelevant, exceptions) all turn out to be time symmetric. Newton's laws, the Schrödinger equation, the special and general theory of relativity, etc., make no distinction between the past and the future—they are "time-symmetric". As put by Brian Greene in his book "The Fabric of the Cosmos: Space, Time and the Structure of Reality", "no one has ever discovered any fundamental law which might be called the Law of the Spilled Milk or the Law of the Splattered Egg."
It is only secondary laws, which describe the behavior of macroscopic objects containing many, many atoms, such as the second law of thermodynamics (discussed below), which explicitly contain this time asymmetry. The obvious question then is: how does one go from a time symmetric description of the dynamics of atoms to a time asymmetric description of the evolution of macroscopic systems made up of atoms?
In answering that question, one may mostly ignore relativity and quantum mechanics. These theories, while essential for understanding both the very large scale and the very small scale structure of the universe, have a "classical limit" which is adequate for a basic understanding of time's arrow. One may also for simplicity ignore waves, made up of photons, and any entities smaller than atoms and talk about these atoms as if they were point particles interacting with each other via some pair potential, and evolving according to Newtonian laws.
In the context of Newtonian theory, the "theory of everything" at the time of Thomson, Maxwell and Boltzmann, the problem can be formally presented as follows: The complete microscopic (or micro) state of a classical system of \(N\) particles, is represented by a point \(X\) in its phase space \(\Gamma\ ,\) \( X =(r_1, p_1, r_2, p_2, ..., r_N, p_N), r_i\) and \(p_i\) being three dimensional vectors representing the position and momentum (or velocity) of the \(i\)th particle. When the system is isolated, say in a box \(V\) with reflecting walls, its evolution is governed by Hamiltonian dynamics with some specified Hamiltonian \(H(X)\) which we will assume for simplicity to be an even function of the momenta: no magnetic fields. Given \(H(X)\ ,\) the microstate \(X(t_0)\ ,\) at time \(t_0\ ,\) determines the microstate \(X(t)\) at all future and past times \(t\) during which the system will be or was isolated. Let \(X(t_0)\) and \(X(t_0+\tau)\ ,\) with \(\tau\) positive, be two such microstates. Reversing (physically or mathematically) all velocities at time \(t_0+\tau\ ,\) we obtain a new microstate, \(RX\ .\) \[ RX = (r_1,-p_1, r_2,-p_2, ...,r_N,-p_N). \] If we now follow the evolution for another interval \(\tau\) we find that the new microstate at time \(t_0 + 2\tau\) is just \(RX(t_0)\ ,\) the microstate \(X(t_0)\) with all velocities reversed: Hence if there is an evolution, i.e. a trajectory \(X(t)\ ,\) in which some property of the system, specified by a function \(f(X(t))\ ,\) behaves in a certain way as \(t\) increases, then if \(f(X) = f(RX)\) there is also a trajectory in which the property evolves in the time reversed direction. So why is one type of evolution, the one consistent with an entropy increase in accord with the "second law" of thermodynamics, common and the other never seen?
An example of the entropy increasing evolution is the approach to a uniform temperature of systems initially kept isolated at different temperatures, as exemplified by putting a glass of hot tea and a glass of cold water into an insulated container. It is common experience that after a while the two glasses and their contents will come to the same temperature.
This is one of the "laws" of thermodynamics, a subject developed in the eighteenth and nineteenth century, purely on the basis of macroscopic observations—primarily the workings of steam engines—so central to the industrial revolution then taking place. Thermodynamics makes no reference to atoms and molecules, and its validity remains independent of their existence and nature—classical or quantum. The high point in the development of thermodynamics came in 1865 when Rudolf Clausius pronounced his famous two fundamental theorems: 1. The energy of the universe is constant. 2. The entropy of the universe tends to a maximum.
The "second law" says that there is a quantity called entropy associated with macroscopic systems which can only increase, never decrease, in an isolated system. In Clausius' poetic language, the paradigm of such an isolated system is the universe itself. But even leaving aside the universe as a whole and just considering our more modest example of two glasses of water in an insulated container, this is clearly a law which is asymmetric in time. Entropy increase is identified with heat flowing from hot to cold regions leading to a uniformization of the temperature. But, if we look at the microscopic dynamics of the atoms making up the systems then, as noted earlier, if the energy density or temperature inside a box \(V\) gets more uniform as time increases, then, since the energy density profile is the same for \(X\) and \(RX\ ,\) there is also an evolution in which the temperature gets more nonuniform.
There is thus clearly a difficulty in deriving or showing the compatibility of the second law with the microscopic dynamics. This is illustrated by the impossibility of time ordering of the snapshots in Fig. 1 using solely the microscopic dynamical laws: the time symmetry of the microscopic dynamics implies that if (a, b, c, d) is a possible ordering so is (d, c, b, a).
Figure 1: A sequence of "snapshots", a, b, c, d taken at times \(t_a, t_b, t_c, t_d\ ,\) each representing a macroscopic state of a system, say a fluid with two "differently colored" atoms or a gas in which the shading indicates the local density. How would one order this sequence in time?
The explanation of this apparent paradox, due to Thomson, Maxwell and Boltzmann, shows that not only is there no conflict between reversible microscopic laws and irreversible macroscopic behavior, but, as clearly pointed out by Boltzmann in his later writings, there are extremely strong reasons to expect the latter from the former. (Boltzmann's early writings on the subject are sometimes unclear, wrong, and even contradictory. His later writings, however, are generally very clear). These reasons involve several interrelated ingredients which together provide the required distinction between microscopic and macroscopic variables and explain the emergence of definite time asymmetric behavior in the evolution of the latter despite the total absence of such asymmetry in the dynamics of the former.
To describe the macroscopic state of a system of \(N\) atoms in a box \(V\ ,\) say \( N {}^>_\sim 10^{20}\ ,\) we make use of a much cruder description than that provided by the microstate \(X\ .\) We shall denote by \(M\) such a macroscopic description or macrostate. As an example we may take \(M\) to consist of the specification, to within a given accuracy, of the energy and number of particles in each half of the box \(V\ .\) A more refined macroscopic description would divide \(V\) into \(K\) cells, where \(K\) is large but still \(K << N\ ,\) and specify the number of particles, the momentum, and the amount of energy in each cell, again with some tolerance.
Clearly \(M\) is determined by \(X\) but there are many \(X\)'s (in fact a continuum) which correspond to the same \(M\ .\) Let \(\Gamma_M\) be the region in \(\Gamma\) consisting of all microstates \(X\) corresponding to a given macrostate \(M\) and denote by \(|\Gamma_M|=(N! h^{3N})^{-1} \int_{\Gamma_M}\prod_{i=1}^N dr_i\, dp_i\ ,\) its symmetrized \(6N\) dimensional Liouville volume in units of \(h^{3N}\ .\) At this point this is simply an arbitrary choice of units. It is however a very convenient one for dealing with the classical limit of quantum systems.
Time evolution of macrostates: An example
Consider a situation in which a gas of \(N\) atoms with energy \(E\) (with some tolerance) is initially confined by a partition to the left half of the box \(V\ ,\) and suppose that this constraint is removed at time \(t_a\ ,\) see Fig. 1. The phase space volume available to the system for times \(t>t_a\) is then fantastically enlarged compared to what it was initially, roughly by a factor of \(2^N\ .\) If the system contains 1 mole of gas then the volume ratio of the unconstrained phase space region to the constrained one is far larger than the ratio of the volume of the known universe to the volume of one atom.
Let us now consider the macrostate of this gas as given by \(M=\left({N_L \over N} , {E_L \over E}\right)\ ,\) the fraction of particles and energy in the left half of \(V\) (within some small tolerance). The macrostate at time \(t_a\ ,\) \(M=(1, 1)\ ,\) will be denoted by \(M_a\ .\) The phase-space region \(\Sigma_E\) available to the system for \(t> t_a\ ,\) i.e., the region in which \(H(X) \in (E, E + \delta E), \delta E << E\ ,\) will contain new macrostates, corresponding to various fractions of particles and energy in the left half of the box, with phase space volumes very large compared to the initial phase space volume available to the system. We can then expect (in the absence of any obstruction, such as a hidden conservation law) that as the phase point \(X\) evolves under the unconstrained dynamics and explores the newly available regions of phase space, it will with very high probability enter a succession of new macrostates \(M\) for which \(|\Gamma_{M}|\) is increasing. The set of all the phase points \(X_t\ ,\) which at time \(t_a\) were in \(\Gamma_{M_a}\ ,\) forms a region \(T_t \Gamma_{M_a}\) whose volume is, by Liouville's Theorem, equal to \(|\Gamma_{M_a}|\ .\) The shape of \(T_t\Gamma_{M_a}\) will however change with \(t\) and as \(t\) increases \(T_t\Gamma_{M_a}\) will increasingly be contained in regions \(\Gamma_M\) corresponding to macrostates with larger and larger phase space volumes \(|\Gamma_M|\ .\) This will continue until almost all the phase points initially in \(\Gamma_{M_a}\) are contained in \(\Gamma_{M_{eq}}\ ,\) with \(M_{eq}\) the system's unconstrained macroscopic equilibrium state. This is the state in which approximately half the particles and half the energy will be located in the left half of the box, \(M_{eq} = ({1\over 2}, {1 \over 2})\ ,\) i.e. \(N_L /N\) and \(E_L/ E\) will each be in an interval \(\left({1 \over 2} - \epsilon, {1 \over 2} + \epsilon\right)\ ,\) \(N^{-1/2} << \epsilon << 1\ .\)
\(M_{eq}\) is characterized, in fact defined, by the fact that it is the unique macrostate, among all the \(M_\alpha\ ,\) for which \(|\Gamma_{M_{eq}}| / |\Sigma_E| \simeq 1\ ,\) where \(|\Sigma_E|\) is the total phase space volume available under the energy constraint \(H(X) \in (E, E + \delta E)\ .\) (Here the symbol \(\simeq\) means equality when \(N \to \infty\ .\)) That there exists a macrostate containing almost all of the microstates in \(\Sigma_E\) is a consequence of the law of large numbers. The fact that \(N\) is enormously large for macroscopic systems is absolutely critical for the existence of thermodynamic equilibrium states for any reasonable definition of macrostates, in the above example e.g. for any \(\epsilon\ ,\) such that, \(N^{-1/2} << \epsilon << 1\ .\) Indeed thermodynamics does not apply (is even meaningless) for isolated systems containing just a few particles. Nanosystems are interesting and important intermediate cases: Note however that in many cases an \(N\) of about 1,000 will already behave like a macroscopic system: see related discussion about computer simulations below.
After reaching \(M_{eq}\) we will (mostly) see only small fluctuations in \(N_L(t) / N\) and \(E_L(t) / E\ ,\) about the value \({1 \over 2}\ :\) typical fluctuations in \(N_L\) and \(E_L\) being of the order of the square root of the number of particles involved. (Of course if the system remains isolated long enough we will occasionally also see a return to the initial macrostate—the expected time for such a Poincaré recurrence is however much longer than the age of the universe and so is of no practical relevance when discussing the approach to equilibrium of a macroscopic system.)
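These statements are easily seen in a toy model (a sketch, not from the article): an Ehrenfest-urn version of the two-halves gas, in which at each step a randomly chosen particle hops to the other half, while we track the macrostate \(N_L/N\) and its Boltzmann entropy \(S_B = \log {N \choose N_L}\) (with \(k=1\)).

```python
import math, random

# All N particles start in the left half (the macrostate M_a); each step a
# randomly chosen particle hops to the other half. |Gamma_M| for the
# macrostate N_L is the binomial coefficient C(N, N_L), so
# S_B = log C(N, N_L), computed via log-gamma to avoid overflow.
random.seed(1)
N = 1000
n_left = N

def S_B(n_left):
    return (math.lgamma(N + 1) - math.lgamma(n_left + 1)
            - math.lgamma(N - n_left + 1))

for t in range(20001):
    if t % 4000 == 0:
        print(f"t={t:6d}  N_L/N={n_left/N:.3f}  S_B={S_B(n_left):7.1f}")
    if random.random() < n_left / N:     # the chosen particle was on the left
        n_left -= 1
    else:
        n_left += 1
```

\(N_L/N\) relaxes to \(1/2\) and \(S_B\) climbs to its maximum \(\approx N \log 2\ ,\) about which it then merely fluctuates; the fluctuations in \(N_L\) are of order \(\sqrt{N}\ ,\) as stated above.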
As already noted earlier, the scenario in which \(|\Gamma_{M(X(t))}|\) increase with time for the \(M_a\) shown in Fig.1 cannot be true for all microstates \(X\subset \Gamma_{M_a}\ .\) There will of necessity be \(X\)'s in \(\Gamma_{M_a}\) which will evolve for a certain amount of time into microstates \(X(t)\equiv X_t\) such that \(|\Gamma_{M(X_t)}|<|\Gamma_{M_a}|\ ,\) e.g. microstates \(X\in \Gamma_{M_a}\) which have all velocities directed away from the barrier which was lifted at \(t_a\ .\) What is true however is that the subset \(B\) of such "bad" initial states has a phase space volume which is very very small compared to that of \(\Gamma_{M_a}\ .\) This is what is meant by the statement that entropy increasing behavior is typical; a more extensive discussion of typicality is given later.
Boltzmann's entropy
The end result of the time evolution in the above example, that of the fraction of particles and energy becoming and remaining essentially equal in the two halves of the container when \(N\) is large enough (and `exactly equal' when \(N \to\infty\)), is of course what is predicted by the second law of thermodynamics.
It was Boltzmann's great insight to connect the second law with the above phase space volume considerations by making the observation that for a dilute gas \(\log |\Gamma_{M_{eq}}|\) is proportional, up to terms negligible in the size of the system, to the thermodynamic entropy of Clausius. Boltzmann then extended his insight about the relation between thermodynamic entropy and \(\log |\Gamma_{M_{eq}}|\) to all macroscopic systems, be they gas, liquid or solid. This provided for the first time a microscopic definition of the operationally measurable entropy of macroscopic systems in equilibrium.
Having made this connection Boltzmann then generalized it to define an entropy also for macroscopic systems not in equilibrium. That is, he associated with each microscopic state \(X\) of a macroscopic system a number \(S_B\) which depends only on \(M(X)\) given, up to multiplicative and additive constants (which can depend on \(N\)), by \[\tag{1} S_B(X) = S_B (M(X)) \]
with \[\tag{2} S_B(M) = k \log|\Gamma_{M}|, \]
This is the Boltzmann entropy of a classical system, Penrose (1970). N.B.: the definition uses two equations to emphasize their logical independence, which is important for the discussion of quantum systems.
Boltzmann then used phase space arguments, like those given above, to explain (in agreement with the ideas of Maxwell and Thomson) the observation, embodied in the second law of thermodynamics, that when a constraint is lifted, an isolated macroscopic system will evolve toward a state with greater entropy. In effect Boltzmann argued that due to the large differences in the sizes of \(\Gamma_M\ ,\) \(S_B(X_t) = k \log |\Gamma_{M(X_t)}|\) will typically increase in a way which explains and describes qualitatively the evolution towards equilibrium of macroscopic systems.
These very large differences in the values of \(|\Gamma_M|\) for different \(M\) come from the very large number of particles (or degrees of freedom) which contribute, in an (approximately) additive way, to the specification of macrostates. This is also what gives rise to typical or almost sure behavior. Typical, as used here, means that the set of microstates corresponding to a given macrostate \(M\) for which the evolution leads to a macroscopic increase (or non-decrease) in the Boltzmann entropy during some fixed macroscopic time period \(\tau\) occupies a subset of \(\Gamma_M\) whose Liouville volume is a fraction of \(|\Gamma_M|\) which goes very rapidly (exponentially) to one as the number of atoms in the system increases. The fraction of "bad" microstates, which lead to an entropy decrease, thus goes to zero as \(N\to \infty\ .\)
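These differences are easy to make concrete in a toy computation (an added illustration resting on a simplifying assumption: only the particle-number part of the macrostate is counted, so that \(|\Gamma_M|\) is taken proportional to the binomial coefficient \({N \choose N_L}\)). Evaluating Eq. (2) in units of \(k\) shows \(S_B\) rising from 0 for the constrained initial state to roughly \(N \log 2\) at \(N_L = N/2\ ,\) a difference of order \(N\ :\)

```python
# Toy evaluation of Eq. (2) (an illustration under a simplifying assumption:
# only the particle-number part of the macrostate is counted, so |Gamma_M|
# is taken proportional to the binomial coefficient C(N, N_L)).
from scipy.special import gammaln

def S_B(N, N_L):
    """Boltzmann entropy, in units of k, of the macrostate with N_L of N
    particles in the left half (log of the binomial coefficient)."""
    return gammaln(N + 1) - gammaln(N_L + 1) - gammaln(N - N_L + 1)

N = 10**6
for frac in (1.0, 0.75, 0.51, 0.5):
    print(f"N_L/N = {frac:4}: S_B/k = {S_B(N, int(frac * N)):.4e}")
# Output rises from 0 (all particles on the left, the initial state M_a)
# to ~N log 2 = 6.93e5 at the equilibrium macrostate N_L = N/2.
```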
Typicality is what distinguishes macroscopic irreversibility from the weak approach to equilibrium of probability distributions (ensembles) of systems with good ergodic properties having only a few degrees of freedom, e.g. two hard spheres in a cubical box. While the former is manifested in a typical evolution of a single macroscopic system the latter does not correspond to any appearance of time asymmetry in the evolution of an individual system. Maxwell makes clear the importance of the separation between microscopic and macroscopic scales when he writes: "the second law is drawn from our experience of bodies consisting of an immense number of molecules. ... it is continually being violated, ..., in any sufficiently small group of molecules ... . As the number ... is increased ... the probability of a measurable variation ... may be regarded as practically an impossibility."
On the other hand, because of the exponential increase of the phase space volume with particle number, even a system with only a few hundred particles, such as is commonly used in molecular dynamics computer simulations, will, when started in a nonequilibrium `macrostate' \(M\ ,\) with `random' \(X \in \Gamma_M\ ,\) appear to behave like a macroscopic system. After all, the likelihood of hitting, in the course of say one thousand tries, something which has probability of order \(2^{-N}\) is, for all practical purposes, the same, whether \(N\) is a hundred or \(10^{23}\ .\) Of course the fluctuations in \(S_B\ ,\) both along the path towards equilibrium and in equilibrium, will be larger when \(N\) is small, cf. [2b]. This will be so even when integer arithmetic is used in the simulations so that the system behaves as a truly isolated one; when its velocities are reversed the system retraces its steps until it comes back to the initial state (with reversed velocities), after which it again proceeds (up to very long Poincaré recurrence times) in the typical way.
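The retracing behavior mentioned above can be demonstrated with a toy integer map (an added illustration; the "force" below is an arbitrary deterministic integer-valued function, not a physical model). Because every update is exactly invertible in integer arithmetic, running the inverse map, which for a time-symmetric scheme corresponds to a velocity reversal, recovers every earlier state bit-for-bit:

```python
# Toy illustration: a leapfrog-style update over the integers is exactly
# invertible, so the trajectory can be retraced without any roundoff drift.
def force(x):
    return (x * x) % 7 - 3   # arbitrary deterministic integer "force"

def step(x, v):              # forward update: kick, then drift
    v = v + force(x)
    x = x + v
    return x, v

def back_step(x, v):         # exact inverse of step in integer arithmetic
    x = x - v
    v = v - force(x)
    return x, v

x, v = 1, 2
path = [(x, v)]
for _ in range(1000):
    x, v = step(x, v)
    path.append((x, v))

for expected in reversed(path[:-1]):      # retrace all 1000 steps
    x, v = back_step(x, v)
    assert (x, v) == expected
print("retraced all 1000 steps exactly")
```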
We might take as a summary of such insights in the late part of the nineteenth century the statement by Gibbs, quoted by Boltzmann (in a German translation) on the cover of his book Lectures on Gas Theory II:
"In other words, the impossibility of an uncompensated decrease of entropy seems to be reduced to an improbability."
Initial conditions
Once we accept the statistical explanation of why macroscopic systems evolve in a manner that makes \(S_B\) increase with time, there remains the nagging problem (of which Boltzmann was well aware) of what we mean by "with time": since the microscopic dynamical laws are symmetric, the two directions of the time variable are a priori equivalent and thus must remain so a posteriori.
In terms of Fig. 1 this question may be put as follows: why can one use phase space arguments to predict the macrostate at time \(t\) of an isolated system whose macrostate at time \(t_b\) is \(M_b\ ,\) in the future, i.e. for \(t > t_b\ ,\) but not in the past, i.e. for \(t < t_b\ ?\) After all, if the macrostate \(M\) is invariant under velocity reversal of all the atoms, then the same argument should apply equally to \(t_b + \tau\) and \(t_b -\tau\ .\) A plausible answer to this question is to assume that the nonequilibrium macrostate \(M_b\) had its origin in an even more nonuniform macrostate \(M_a\ ,\) prepared by some experimentalist at some earlier time \(t_a < t_b\) (as is indeed the case in Figure 1) and that for states thus prepared we can apply our (approximately) equal a priori probability of microstates argument, i.e. we can assume its validity at time \(t_a\ .\) But what about events on the sun or in a supernova explosion where there are no experimentalists? And what, for that matter, is so special about the status of the experimentalist? Isn't he or she part of the physical universe?
Put differently, where ultimately do initial conditions, such as those assumed at \(t_a\ ,\) come from? In thinking about this we are led more or less inevitably to introduce cosmological considerations by postulating an initial "macrostate of the universe" having a very small Boltzmann entropy. To again quote Boltzmann: "That in nature the transition from a probable to an improbable state does not take place as often as the converse, can be explained by assuming a very improbable [small \(S_B\)] initial state of the entire universe surrounding us. This is a reasonable assumption to make, since it enables us to explain the facts of experience, and one should not expect to be able to deduce it from anything more fundamental". While this requires that the initial macrostate of the universe, call it \(M_0\ ,\) be very far from equilibrium with \(|\Gamma_{M_0}|<< |\Gamma_{M_{eq}}|\ ,\) it does not require that we choose a special microstate in \(\Gamma_{M_0}\ .\) As also noted by Boltzmann elsewhere "We do not have to assume a special type [read microstate] of initial condition in order to give a mechanical proof of the second law, if we are willing to accept a statistical viewpoint...if the initial state is chosen at random...entropy is almost certain to increase." This is a very important aspect of Boltzmann's insight: it is sufficient to assume that this microstate is typical of an initial macrostate \(M_0\) which is far from equilibrium.
This going back to the initial conditions, i.e. the existence of an early state of the universe (presumably close to the big bang) with a much lower value of \(S_B\) than the present universe, as an ingredient in the explanation of the observed time asymmetric behavior, bothers some scientists. A common question is: how does the mixing of the two colors after removing the partitions in Fig. 1 depend on the initial conditions of the universe? The answer is that once you accept that the microstate of the system in 1a is typical of its macrostate the future evolution of the macrostates of this isolated system will indeed look like those depicted in Fig 1. It is the existence of inks of different colors separated in different compartments by an experimentalist, indeed the very existence of the solar system, etc. which depends on the initial conditions. In a "typical" universe everything would be in equilibrium.
It is the initial state of the universe plus the dynamics which determines what is happening at present. Conversely, we can deduce information about the initial state from what we observe now. As put by Feynman (Feynman et al., 1967): "It is necessary to add to the physical laws the hypothesis that in the past the universe was more ordered, in the technical sense, [i.e. low \(S_B\)] than it is today...to make an understanding of the irreversibility."
Figure 2: With a gas in a box, the maximum entropy state (thermal equilibrium) has the gas distributed uniformly; however, with a system of gravitating bodies, entropy can be increased from the uniform state by gravitational clumping leading eventually to a black hole. From Penrose (1990).
A very clear discussion of initial conditions is given by Roger Penrose in connection with the "big bang" cosmology, Penrose (1990, 2005). He takes for the initial macrostate of the universe the smooth energy density state prevalent soon after the big bang: an equilibrium state (at a very high temperature) except for the gravitational degrees of freedom which were totally out of equilibrium, as evidenced by the fact that the matter-energy density was spatially very uniform. That such a uniform density corresponds to a nonequilibrium state may seem at first surprising, but gravity, being purely attractive and long range, is unlike any of the other fundamental forces. When there is enough matter/energy around, it completely overcomes the tendency towards uniformization observed in ordinary objects at high energy densities or temperatures. Hence, in a universe dominated, like ours, by gravity, a uniform density corresponds to a state of very low entropy, or phase space volume, for a given total energy, see Fig. 2.
The local `order' or low entropy we see around us (and elsewhere), from complex molecules to trees to the brains of experimentalists preparing macrostates, is perfectly consistent with (and possibly even a necessary consequence of, i.e. typical of) this initial macrostate of the universe. The value of \(S_B\) at the present time, \(t_p\ ,\) corresponding to \(S_B (M_{t_p})\) of our current clumpy macrostate describing a universe of planets, stars, galaxies, and black holes, is much, much larger than \(S_B(M_0)\ ,\) the Boltzmann entropy of the "initial state", but still quite far from \(S_B(M_{eq})\ ,\) its equilibrium value. The `natural' or `equilibrium' state of the universe, \(M_{eq}\ ,\) is, according to Roger Penrose (1990, 2005), one with all matter and energy collapsed into one big black hole. Penrose gives an estimate \(S_B(M_0) / S_B(M_{t_p}) / S_B(M_{eq}) \sim 10^{88} / 10^{101} / 10^{123}\) in natural (Planck) units, see Fig. 3.
Figure 3: The creator locating the tiny region of phase space (one part in \(10^{10^{123}}\)) needed to produce a \(10^{80}\)-baryon closed universe with a second law of thermodynamics in the form we know it. From Penrose (1990). If the initial state were chosen randomly it would, with overwhelming probability, have led to a universe in a state of maximal entropy. In such a universe there would be no stars, planets, people or a second law.
It is this fact that we are still in a state of low entropy that permits the existence of relatively stable neural connections, of marks of ink on paper, which retain over relatively long periods of time shapes related to their formation. Such nonequilibrium states are required for memories, in fact for the existence of living beings and of the earth itself.
We have no such records of the future and the best we can do is use statistical reasoning which leaves much room for uncertainty. Equilibrium systems, in which the entropy has its maximal value, do not distinguish between past and future.
Penrose's consideration of the very far from equilibrium uniform density "initial state" of the universe is quite plausible, but it is obviously far from proven. In any case it is, as Feynman says, both necessary and sufficient to assume a far from equilibrium initial state of the universe, and this is in accord with all cosmological evidence. The "true" equilibrium state of the universe may also be different from what Penrose proposes. There are alternative scenarios in which the black holes evaporate and leave behind mostly empty space, cf. Carroll and Chen.
The question as to why the universe started out in such a very unusual low entropy initial state worries Penrose quite a lot (since it is not explained by any current theory) but such a state is just accepted as a given by Boltzmann. Clearly, it would be nice to have a theory which would explain the "cosmological initial state", but such a theory is not available at present. The "anthropic principle" in which there are many universes and ours just happens to be right, or we would not be here, is too speculative for an encyclopedic article.
• R. P. Feynman, The Character of Physical Law, MIT Press, Cambridge, Mass. (1967), ch. 5.
• a) S. Goldstein and J. L. Lebowitz, On the Boltzmann Entropy of Nonequilibrium Systems, Physica D 193, 53–66 (2004); b) P. Garrido, S. Goldstein and J. L. Lebowitz, The Boltzmann Entropy of Dense Fluids Not in Local Equilibrium, Phys. Rev. Lett. 92, 050602 (2003).
• J. L. Lebowitz, a) Macroscopic Laws and Microscopic Dynamics, Time's Arrow and Boltzmann's Entropy, Physica A 194, 1–97 (1993); b) Boltzmann's Entropy and Time's Arrow, Physics Today 46, 32–38 (1993), see also letters to the editor and response in Physics Today 47, 113–116 (1994); c) Microscopic Origins of Irreversible Macroscopic Behavior, Physica A 263, 516–527 (1999); d) A Century of Statistical Mechanics: A Selective Review of Two Central Issues, Reviews of Modern Physics 71, 346–357 (1999); e) From Time-symmetric Microscopic Dynamics to Time-asymmetric Macroscopic Behavior: An Overview, to appear in European Mathematical Publishing House, ESI Lecture Notes in Mathematics and Physics.
• O. Penrose, Foundations of Statistical Mechanics, Pergamon, Elmsford, N.Y. (1970); reprinted by Dover (2005).
• R. Penrose, The Emperor's New Mind, Oxford U.P., New York (1990), ch. 7; The Road to Reality, A. A. Knopf, New York (2005), ch. 27–29.
• S.M. Carroll and J. Chen, Spontaneous Inflation and the Origin of the Arrow of Time, arXiv:hep-th/0410270v1
Recommended reading
• For a general history of the subject and references to the original literature see S.G. Brush, The Kind of Motion We Call Heat, Studies in Statistical Mechanics, vol. VI, E.W. Montroll and J.L. Lebowitz, eds. North-Holland, Amsterdam, (1976).
• For a historical discussion of Boltzmann and his ideas see the articles by M. Klein, E. Broda, and L. Flamm in The Boltzmann Equation, Theory and Application, E.G.D. Cohen and W. Thirring, eds., Springer-Verlag (1973).
• For interesting biographies of Boltzmann, which also contain many quotes and references, see E. Broda, Ludwig Boltzmann, Man—Physicist—Philosopher, Ox Bow Press, Woodbridge, Conn. (1983); C. Cercignani, Ludwig Boltzmann: The Man Who Trusted Atoms, Oxford University Press (1998); D. Lindley, Boltzmann's Atom: The Great Debate that Launched a Revolution in Physics, Simon & Schuster (2001).
Ernst Chladni
Ernst Chladni, c1825. Lithograph by Ludwig Albert von Montmorillon.
Born: November 30, 1756, Wittenberg, Electorate of Saxony (today Germany)
Died: April 4, 1827 (aged 70), Breslau, Kingdom of Prussia (today Wrocław, Poland)
Ernst Florens Friedrich Chladni was a physicist and musician. His work includes research on vibrating plates and the calculation of the speed of sound for different gases. For this some call him the "father of acoustics". He also did pioneering work in the study of meteorites, and therefore is regarded by some as the "father of meteoritics" as well.
He was a corresponding member of the Academy of Sciences in St. Petersburg (1794), the Royal Society of Haarlem in the Netherlands, the Royal Society of Natural Scientists in Berlin, the Society of the Arts and Sciences in Mainz, the Academy of the Applied Arts in Erfurt, the Philomatic Society of Paris, the Batavian Society in Rotterdam, and societies in Munich and Göttingen.[12]
Martin Chladni
Chladni came from an educated family of academics and learned men. Chladni's great-grandfather, Georg Chladni (Chladenius, 1637–92), studied at the gymnasium in Banská Bystrica (Neusohl) and theology at the university in the Protestant city of Wittenberg. He served as head of a Protestant school in Špania Dolina (Herrengrund) near Banská Bystrica (1666), and from 1667 as Protestant rector of a church in Kremnické Bane (Johannesberg), 4 km from Kremnica (Kremnitz), a mining town in the Kingdom of Hungary (today central Slovakia). After living briefly in Görlitz, Upper Lusatia, he returned to Kremnica. But after criticising the Jesuits and the Catholic Church from his position as a Lutheran theologian, he had to flee the town during the Counter-Reformation (on 19 October 1673), along with his wife and four-year-old son, Martin. From 1680 he worked as rector of a church in Hanswald, where he died.[1][2]
Chladni's grandfather, Martin Chladni (Chladen, Chladenius, Chladenio, 25.10.1669–12.9.1725, born in Kremnica[13]), was also a Lutheran theologian. He studied philosophy and theology in Wittenberg (1688), and received a licentiate in theology in 1704. He served in Dresden, Ubigau (1695), and Jassen; in 1710 he became professor of theology at the University of Wittenberg; in 1719 provost; and in 1720–21 he was dean of the faculty of theology and later rector of the university. He authored around 70 religious scripts and dissertations, in Latin and German. He had one son, Ernst. He died in Wittenberg.[3]
Chladni's uncle, Justus Georg Chladni (Chladenius, 1701–65), was a law professor at University of Wittenberg. Another uncle, Johann Martin Chladni (1710–59), was a theologian and historian, and professor at the University of Erlangen and the University of Leipzig.
Chladni's birthplace, Mittelstrasse 5 (right), Wittenberg.
Ernst Florens Friedrich Chladenius was born in 1756 in Wittenberg, Electorate of Saxony (today Germany). As a consequence of the Reformation in the 16th century, Wittenberg had become a cultural centre of Europe through the work of Martin Luther and Philipp Melanchthon, and the University Leucorea of Wittenberg was considered the most important north of the Alps. By the 18th century, however, Wittenberg had declined to the status of a provincial town in Saxony, and the university too had lost its great renown.[4][5]
His father, Ernst Martin Chladni (Chladenius, 6.8.1715–4.3.1782 [14]), was first professor of law (1746), and ten times dean of the law faculty of the University of Wittenberg. He published almost 50 works, and was also a court counselor in Saxony and antiques expert.[6][7]
His mother was Johanna Sophia (born Clement, 10.5.1735–6.3.1761, married in 1753), daughter of a High Court notary (Hofgerichtsprotonotar) and later lawyer at the Consistory of Wittenberg, Johann Gottlieb Clement (9.11.1692–24.12.1759, born in Großbothen(?) near Leipzig, died in Wittenberg), and his wife Anna Sophia, a daughter of Johann Christoph Wichmannshausen.[15]
After the death of his mother, his father remarried; his second wife was Elisabeth Johanna Charlotte (born Greipziger, 1725–21.4.1801), widow of Meinecke, privy councilor of the Duke of Württemberg and Lord of Genthe.[8][16]
Early years
Ernst Florens Friedrich's sister died as a baby, and he grew up as an only child. Chladni was educated at home; his father had a very strict, even despotic, approach to upbringing. Ernst Friedrich could leave his study room only with his father's permission, could go into the garden only when accompanied by his mother, and was not allowed to have any friends. At the age of 7 he was already reading scientific books, studied stars and maps, secretly learned Dutch, and dreamt of becoming a seafarer.[9][10][11]
From May 1771 to March 1774 he attended the boarding school [Landesschule] of St. Augustinus at Grimma near Leipzig. These Landesschulen had been established in secularized monasteries after the Reformation, and served as colleges of the Saxon state for future government officials, teachers and Protestant preachers. The severe education at home continued in Grimma: Chladni was not allowed to stay in a hostel like the other pupils, but had to live in the flat of one of his teachers, and was therefore again under permanent supervision. His father disapproved of his interest in medicine and insisted that Ernst Friedrich become a lawyer.[12][13][14]
From 1776, Chladni studied law and philosophy (as well as mathematical and physical geography, physics, biology, and geometry) at the University of Leipzig, and obtained a philosophy degree in 1781 and a law degree in 1782. Around that time he changed his surname from Chladenius to Chladni. During his studies he joined a masonic lodge of Leipzig, Minerva zu den drei Palmen[17]. Upon his return to Wittenberg, his father arranged a lawyer's position for him.[15]
Scientific career
Lecturing in Wittenberg (1783–92)
After the death of his father in 1782, Chladni's life changed fundamentally. He felt responsible for his stepmother, which was the main motivation to stay in Wittenberg, although his financial situation was difficult. At the University of Wittenberg one of the two professors in mathematics passed away in 1784, and Chladni applied for the vacant position. But the position was cancelled, and he had to abandon his hopes. In 1783–92 he gave lectures, first on legal subjects, from 1784 on geometry and mathematical geography, and from 1786 in his real field of research, acoustics.[16][17]
Early experiments in acoustics (from 1782)
He moved to acoustics as a relatively underexplored scientific field at the time.[18] In 1782 he started extensive experiments in his flat. Chladni wrote: "For a long time it was my main activity to analyse such sound sources as had not yet been studied. Up to now only vibrations of strings and vibrations of air in wind instruments had been the subjects of study. And now I performed experiments on transversal vibrations of rods, which had been the subject of theoretical studies by Leonhard Euler and Daniel Bernoulli, and then on the vibrations of plates, which were an unknown field."[19][20][21]
Exploration of sound figures (from 1785)
Bowing Chladni plate.
First, Chladni investigated transversal vibrations of rods with different boundary conditions. The violin bow was his instrument of mechanical excitation; the idea for this came from a publication on the glass harmonica by the Bach biographer Nicolaus Forkel. This source also strongly influenced Chladni's work on instrument making. The essential idea for the sound patterns came from the works of Georg Christoph Lichtenberg. In 1777 Lichtenberg had succeeded in making spark discharges in dielectrics visible by dusting the objects with sulphur and minium powders (see below). This motivated Chladni to apply fine sand to his plates and rods. With this method of sound patterns he could confirm the formulas for the characteristic frequencies of rods, which had been derived theoretically. Chladni had a sensitive ear: he could discriminate frequencies differing by less than a semitone. With experiments on vibrating plates with flexural rigidity (the two-dimensional counterpart of rods), Chladni opened a field that had hardly been studied either theoretically or experimentally. Existing theories by Euler and Michael Golovin were in contradiction with Chladni's experiments.[22]
Chladni systematically studied the sound patterns of circular, square, and rectangular plates, fixing them with his fingers at different points and thus enforcing the occurrence of nodal lines at these points. The results were published in 1787 in Entdeckungen über die Theorie des Klanges, with 11 plates and a total of 166 figures. At the end of this book Chladni pointed to the still unsolved problem of the mathematical treatment of flexural vibrations of plates.[23]
Chladni, demonstrating his experiments in the palais of the prince of Thurn and Taxis, Regensburg, 1800.
Entrance to the house Zur Goldenen Kugel, Chladni's home in Wittenberg (1801–13).
Die Akustik, 1802.
Lecture tours (from 1791)
From 1791, after his invention of sound figures and the building of his musical instrument (the Euphon, see below), and also motivated by lack of money, he undertook lecture, performance and exhibition tours throughout his life, usually lasting several months, to present his work around Europe, each time returning for a short period to Wittenberg. In 1792 he sold his home in Mittelstrasse 5 (but lived in the house until the death of his stepmother in 1801). The first journey led him first to Dresden, then to Berlin. Originally he intended primarily to perform music on his instruments, but noticed that his sound figures attracted much more acclaim, and made them central to his presentations. Chladni's demonstrations in many royal academies and scientific institutions frequently drew large crowds, duly impressed with the aesthetically sophisticated qualities of vibrating plates. He quickly came to be known as a "traveling scientist", and used his visits to various cities to acquire new contacts among scientists, but also to study materials on acoustics in their libraries and archives. He visited Germany (Dresden, 1791, c1797; Berlin, Jan–Feb 1792, Dec 1798–Mar 1799, 1826–Feb 1827; Göttingen, Dec 1792–Jan 1793; Bremen, Feb 1793; Hamburg, Mar–Apr 1793; Erfurt, Aug 1795; Weimar, 1803; Munich, c1798, 1812), Denmark (Copenhagen, Apr 1793, 1797; Flensburg, 1794), Prussia (Danzig, 1794; Königsberg, 1794; Breslau, Feb–Apr 1827), Russia (Riga, 1794; Petersburg, May 1794; Tallinn, 1794), Austria/Bohemia (Prague, c1797; Vienna, c1798, 1812; Karlsbad, 1812), Netherlands (Amsterdam, Dec 1807), Belgium (Brussels, c1808), France (Paris, Dec 1808–Mar 1810; Strasbourg, 1810), Switzerland (Basel, May 1810; Zürich, 1810), Italy (Turin, October 1810–Apr 1811; Milan, Apr–May 1811; Bologna, 1811; Florence, May 1811; Venice, 1811), among other places. Between 1802 and 1812 he visited half of Europe. In 1815 he started a new period of journeys, and included his theory of meteors in his lectures.[24][25][26][27]
Contact with Lichtenberg (from 1792) and Kempelen
In December 1792–January 1793, Chladni visited Göttingen, where he met and befriended Lichtenberg. This contact was the beginning of Chladni's interest in meteors and later of his theory of their extraterrestrial origin.[28][29] Lichtenberg was often visited by Wolfgang von Kempelen, who had built a chess automaton to make fun of the scientists around the Viennese Court, and who was conducting serious research in acoustics (he built a speaking machine, 1769–1804[18], and published the book Mechanismus der menschlichen Sprache, 1791). Lichtenberg organised a meeting at which Kempelen presented his speaking machine to Chladni, who later tested and discussed Kempelen's hypotheses in his books Beiträge zur praktischen Akustik und zur Lehre vom Instrumentenbau (1821) and Über die Hervorbringung der menschlichen Sprachlaute (1824).[30]
New home (1801)
In 1801 he moved from his family house to the Zur Goldenen Kugel [To the Golden Ball] house on Schlossstrasse 10. In this house the future physicist Wilhelm Weber was born in 1804.[31]
Die Akustik (1802)
In 1802 he published his breakthrough work Die Akustik, which soon acquired the status of the foundational work of a new scientific field and earned him the title "father of acoustics". It was the first systematic description of the vibrations of elastic bodies. The arrangement of the book in chapters on (i) sound generation, (ii) sound propagation, and (iii) sound reception was new, too. In the book he compiled, commented on, and built upon numerous articles on acoustics found during his travels across Europe.
An encounter with Goethe (1803)
On the occasion of a visit to Weimar, in January 1803 Chladni met Goethe, to whom he gave a copy of his book. About their first encounter, Goethe wrote the following in a letter to Schiller: "Doctor Chladni has arrived and has brought his complete Acoustics in a quarto volume. I have already read half of it and shall give you a somewhat agreeable oral report on its content, substance, method, and form. He belongs to [..] those blissful persons who have not the faintest idea that there is such a thing as Naturphilosophie and who are only attentively trying to observe phenomena, which they will then classify and make use of as well as their natural talent is capable of and trained for." They met again in Goethe's home in June 1812. A few years later, Goethe made several positive statements on Chladni's behalf in public.[32][33]
Paris (1808–10), meeting Napoleon and translation of Die Akustik (both 1809)
Dedication page from Traité d'acoustique, 1809.[3]
During his stay in Paris in December 1808 Chladni presented his work at the French Academy of Sciences, which set up an evaluation commission consisting of the physicists Étienne de Lacépède, Prony and Haüy, and the musicians Méhul, Grétry and Gossec. His study met with very positive feedback, and Pierre-Simon Laplace, together with Gay-Lussac, Alexander von Humboldt and Arago, proposed that he translate it into French. Napoleon was also interested in a demonstration of Chladni's experiments and invited him to the Tuileries Palace through the mediation of Laplace. While performing artists were rather often invited to court, the invitation of a scientist was a rarity. In February 1809, Chladni presented to Napoleon (also present were Laplace, Lacépède and Berthollet) his sound figures and the mathematical foundations of acoustics, and performed a composition by Haydn on his Clavicylinder; the following day he received a 6,000-franc grant to translate his work into French. In doing so, he encountered a problem. For the German terms Schall, Klang and Ton, used by Chladni with different meanings, the French language knows only one word, namely son, i.e. sound. A Frenchman he asked about this matter gave him the following answer: "Notre diablesse de langue ne veut pas se prêter à l'expression de toutes les idées possibles. Il faut même quelquefois sacrifier une idée aux caprices de la langue." [Our devil of a language does not want to lend itself to the expression of all ideas possible. Sometimes it is even necessary to sacrifice an idea to the caprices of the language.] Chladni seized the opportunity for a thorough revision: he eliminated outdated material and added new ideas. The French edition was eventually published in November 1809, with a dedication to Napoleon (which caused him trouble after Napoleon became the enemy of the rest of Europe). Chladni left Paris in March 1810.[34][35][36][37]
Move to Kemberg (1813)
In the summer of 1813, Chladni suffered a great loss. After the retreat from Moscow, the remnants of Napoleon's army, returning to France, were entangled in numerous fights and skirmishes in Saxony. Wittenberg was besieged by the Prussians, so Chladni was compelled to move to a single-room dwelling in the small town of Kemberg, situated 15 km to the south of his home town (where he was visited in 1821 by Felix Mendelssohn-Bartholdy[38], among others). In the autumn of the same year the flat he had left in Wittenberg burnt out, set alight by an incendiary rocket which had hit the neighbouring house. Chladni deplored the loss of many objects dear to him, including excerpts and notes on experiments. He was, however, lucky to have rescued most of his belongings, among them the Euphon and the Clavicylinder. A single room served him as bedroom, workshop, and drawing room at the same time. He spent the rest of his life in Kemberg, interrupted only by his still numerous lecture tours.[39][40]
In Munich, Chladni met Joseph von Fraunhofer, a physicist who sparked his interest in optics.[41]
Room acoustics
In Die Akustik, a book with 310 pages, room acoustics took up only 7 pages, reflecting the state of the knowledge in 1802. At the beginning of the 19th century room acoustics was treated in geometrical terms. The importance of resonances had been recognized, and it was known that the ear can distinguish at most 9 different sound impulses per second. But on sound absorption and related questions there was a lot of obscurity.[42]
During a stay in Berlin in 1825 Chladni met the architect Carl Theodor Ottmer, who showed him his drafts for the new building of the Singakademie in Berlin (today the Maxim Gorki Theatre). The Singakademie was a choir, conducted by Carl Friedrich Zelter, devoted to the performance of works by Johann Sebastian Bach, and its building became one of the best music halls in Germany until its destruction in 1943.[43]
Weber brothers
Chladni's students, the brothers Ernst Heinrich and Wilhelm Eduard Weber, took up Chladni's research in acoustics and solved many of the questions he had outlined[19][20][21][22][23]. The resulting book, Wellenlehre, auf Experimente gegründet (1825)[24], is dedicated to Chladni. He tested their findings both in theory and experiment and did not find any errors.[44] (W. E. Weber, together with Carl Friedrich Gauss, went on to construct the first electromagnetic telegraph in 1833[25].)
Inspired by the acoustic research of the Weber brothers, Chladni resumed his work from Die Akustik, but the resulting book, Kurze Übersicht der Schall- und Klanglehre (1827), did not reach the level of its famous forerunner.[45]
Languages and art
Through reading fiction Chladni kept himself fluent in Greek, Latin, Dutch, French, English, and Italian, and also spoke oriental languages and dialects. He had a habit of reading non-German literature in its original language. Thanks to his extensive travels he also had a strong knowledge of painting and sculpture.[46]
In February 1827 Chladni went from Berlin to Breslau, where he gave lectures. He died on the night of 3–4 April. The evening before his death he met the mineralogist Henrik Steffens, who reported on this meeting in his memoirs[47]. Chladni was born in the same year as Mozart, and died in the same year as Beethoven. He lived a nomadic life from his mid-30s and, according to his own words, never received an acceptable offer of a professorship (despite offers of positions in Berlin, Jena, and Dresden in the late 1810s).[48] Chladni never married and had no children. In his will, he bequeathed his collection of 41 meteorites[49] to the Berlin Mineralogy Museum, 5,000 Thaler to his landlord [Hauswirth], 600 to the poor, and 600 to the city of Kemberg for a new tower clock and paving.[50] The site of his grave has since been forgotten.
He gave a detailed survey of his life in the introduction to the French edition of Akustik (1809) as well as in his Neue Beyträge (1817).
Goethe, who was himself often criticized for his diverse study of the natural sciences, wrote in 1817: "Who will criticize our Chladni, the pride of the nation? The world owes him gratitude, since he made sound visible. And what could be more distant from this subject than the study of meteorites? Yet an ingenious man feels the impetus to study two natural phenomena far removed from each other, and investigates both of them continuously. Let us be grateful for the benefit we gained from it!"[51]
A small lunar impact crater that lies near the northwest edge of Sinus Medii, in the central part of the Moon, is named after Chladni[26], as well as asteroid no. 5053.
In Doktor Faustus (1947), Thomas Mann based the character of the father of the novel's hero, Adrian Leverkühn, on Chladni. He is introduced in the opening chapter. This father likes to "speculate", e.g. "to labour on nature, to stimulate phenomena, to tempt it by exposing its work through experiments." These experiments are rather peculiar. There are drops moving like amoebae and eating each other up. There are structures grown in saline solutions that suggest mosses or algae. And there are sounds appearing in the form of geometrical patterns. All these phenomena, and the experimenting itself, are in the poet's eyes the work of the "tempter".[52]
Rejection of the monochord
In 1787, Chladni explained, on the basis of his sound patterns, how various sounds coexist in the vibration of the same body. But this simultaneous presence of compossible sounds, he said, cannot be reduced, in a Pythagorean way, to the overtones of a fundamental: there are also "inharmonic and irrational relationships", due to the irregularities of the vibrating body.[27] This is why Chladni dismisses the monochord, the instrument that had been the basis of acoustic theory and calculation since Greek antiquity; as Chladni writes in the introduction to his Akustik, "a string is only one sort of sonorous body," among many others. This dismissal of the Pythagorean monochord and its calculation is Chladni's revolution, his modernity. It opens the possibility of experimenting with all vibrating bodies, while his sound patterns allow us to see their complex vibrating structure.[53]
The debate over the vibrating string was part of the gradual divorce of acoustics from music that took place in the 18th century. Music, as one of the four sciences of the medieval quadrivium, was traditionally part of mathematics. The tradition still held in the 18th century to some extent. The beginning of acoustics as a branch of physics is often dated from Chladni's Entdeckungen (1787), but throughout the century natural philosophers raised questions about the production and propagation of sound that were not properly part of harmonics.[54]
Vibrations of a metal rod.
The problem in Chladni's time was that it was not possible to determine the frequency of a particular tone (pitch) of a sounding body (string, rod, membrane), since its vibrations were too fast to be followed and counted by eye. Based on Mersenne's laws of vibrating strings (1636/37) – the number of vibrations of a string is inversely proportional to the length of the string, and proportional to the square root of its tension (laws further refined by Joseph Sauveur[28]) – Chladni constructed a tonometer (1800). The idea was to clamp one end of a rod long enough that its vibrations at the free end could be counted (e.g. 4 per second). Shortening the rod to half its length raises the frequency four times (the square of 2), since for transverse vibrations of a rod the frequency is inversely proportional to the square of its length. Chladni would shorten the rod until its tone corresponded to the tone of the string whose frequency he was seeking.[29][55] Chladni built the tonometer out of bars whose rates of vibration were determined in this way, and with it he hoped to determine the rate of vibration of any sonorous body whatever. However, the results given by experiment only approximate those demanded by theory.[30]
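The two scaling laws behind the tonometer can be written out in a few lines. The sketch below uses illustrative numbers only, not Chladni's data; the rod law f ∝ 1/L² holds for transverse vibrations of a rod clamped at one end, and Mersenne's law gives the string frequency from its length L, tension T and linear density μ:

```python
import math

# Rod law: for transverse vibrations of a clamped rod, frequency is
# inversely proportional to the square of the length (illustrative numbers).
f0, L0 = 4.0, 1.0                  # a rod long enough to count 4 vib/s
def rod_frequency(L):
    return f0 * (L0 / L) ** 2

print(rod_frequency(0.5))           # half the length: 4 x 4 = 16 vib/s

# Mersenne's law for a string: f = (1/2L) * sqrt(T / mu),
# tension T in newtons, linear density mu in kg/m (values hypothetical).
def string_frequency(L, T, mu):
    return math.sqrt(T / mu) / (2.0 * L)

print(string_frequency(0.65, 60.0, 0.0006))   # ~243 Hz for these values
```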
Chladni figures (Klangfiguren)
Chladni figures, from Neue Beyträge (1817).[4]
Chladni figures, from Neue Beyträge (1817).[5]
The phenomenon was mentioned earlier by Leonardo da Vinci in his notebooks, and was also discussed by Galileo Galilei, who noticed that small pieces of bristle laid on the sounding board of a musical instrument were violently agitated on some parts of the surface, while on other parts they did not appear to move; he wrote about it in his Dialogo sopra i due massimi sistemi del mondo [Dialogue Concerning the Two Chief World Systems] (1632). Later, Robert Hooke of Oxford University proposed observing the vibrations of a bell by strewing flour upon it (c1680).[32]
Chladni did not mention the experiments of Galilei and Hooke in his own writings. Regardless of whether he was aware of them or not, he was the first to examine the phenomenon systematically. His original inspiration was the electrical figures of Lichtenberg, who had made the experiment of scattering an electrified powder over an electrified resin cake, the arrangement of the powder revealing the electric condition of the surface[33]. In 1785, Chladni set out to explore this phenomenon from the perspective of acoustics.[56] He explained in a biographical preface to the French edition of Die Akustik (1809):
"As an admirer of music, the elements of which I had begun to learn rather late, that is, in my nineteenth year, I noticed that the science of acoustics was more neglected than most other portions of physics. This excited in me the desire to make good the defect, and by new discovery to render some service to this part of science. In 1785 I had observed that a plate of glass or metal gave different sounds when it was struck at different places, but I could nowhere find any information regarding the corresponding modes of vibration. At this time there appeared in the journals some notices of an instrument made in Italy by the Abbé Mazzocchi, consisting of bells, to which one or two violin bows were applied. This suggested to me the idea of employing a violin bow to examine the vibrations of different sonorous bodies. When I applied the bow to a round plate of glass fixed at its middle it gave different sounds, which, compared with each other, were (as regards the number of their vibrations) equal to the squares of 2, 3, 4, 5, &c.; but the nature of the motions to which these sounds corresponded, and the means of producing each of them at will, were yet unknown to me. The experiments on the electric figures formed on a plate of resin, discovered and published by Lichtenberg, in the memoirs of the Royal Society of Göttingen, made me presume that the different vibratory motions of a sonorous plate might also present different appearances, if a little sand or some other similar substance were spread on the surface. On employing this means, the first figure that presented itself to my eyes upon the circular plate already mentioned, resembled a star with ten or twelve rays, and the very acute sound, in the series alluded to, was that which agreed with the square of the number of diametral lines."[34][35]
Birth of modern acoustics
‘Acoustics’ was an experimental investigative enterprise in the early 19th century. The group of so-called ‘acousticians’ included Chladni, Young, Félix Savart, Colladon, Faraday, Charles Wheatstone, Lissajous, Tyndall, Koenig, A. Mayer, etc. Their experimental work was summarized in John Tyndall's Sound (1867)[36]. In addition to the ‘acousticians’, there were researchers who worked theoretically, in a mathematical manner, on the production and transmission of sound. This group included Bernoulli, Jean le Rond d'Alembert, Euler, Joseph Louis Lagrange, Poisson, Sophie Germain, G. Ohm, Kirchhoff, Riemann, Donkin, S. Earnshaw, etc. For them, analysis was the central method of dealing with problems associated with sound. Their investigations were not closely connected to the empirical and experimental findings gathered by the ‘acousticians’.[37](p 2765)
Further developments in acoustics
Savart's toothed wheel.
The influence of Chladni's Traité d'acoustique (1809) on scientific research in France shows up in the work of Félix Savart, who was Chladni's direct successor in the field of experimental acoustics in France. With a gearwheel siren built by Savart, with a diameter of 82 cm and 720 teeth, precise measurement of the frequency of tones became possible: the frequency is simply the number of teeth passing per second, i.e. 720 times the rotation rate of the wheel. Savart measured the upper limit of audibility and found the high value of 24,000 Hz.[57]
In Germany, Hermann von Helmholtz wrote his Die Lehre von den Tonempfindungen 60 years later.
An unsolved problem in Chladni's days was tone quality, the timbre. It had long been known that a sound of the same pitch, produced on different musical instruments, has different qualities. Chladni assumed that weak noises (schwache Geräusche) coexisting with each sound were responsible for the different tone qualities. Georg Simon Ohm solved the problem in 1843. He found that the human ear performs a Fourier harmonic analysis of the tone; in addition, the ratios of the intensities of the harmonics are important, whereas phase differences between the harmonics are irrelevant. Harmonics had been a research field in acoustics which Chladni strongly underestimated.[58]
Mathematical solutions of Chladni figures
In his Akustik, Chladni offered no mathematical explanation of his figures, nor of other acoustic phenomena.
Sophie Germain
After Chladni's audience with Napoleon in early 1809 (see above), the emperor encouraged the Paris Academy of Sciences to announce a prize of 3,000 francs for a mathematical explanation of Chladni figures, specifically "to give the mathematical theory of the vibration of an elastic surface and to compare the theory to experimental evidence", which then spurred a plethora of research in waves and acoustics. The award was given in 1816 to the mathematician Sophie Germain for her work Recherches sur la théorie des surfaces élastiques[59],[38][60]. Her explanation was incomplete (she found the correct differential equation, but the hypothesis she applied in deriving it was partly incorrect, leading to wrong boundary conditions), but she received the prize all the same, since it was acknowledged that her treatise signified essential progress.[61]
Charles Wheatstone's approximation by cosine and sine functions
Charles Wheatstone had tried in 1833[62] to approximate Chladni figures using sine and cosine functions.[63] He showed that in square and rectangular plates every figure, however complicated, was the resultant of two or more sets of isochronous parallel vibrations; and by means of simple geometrical relations he carried out the principle of the ‘superposition of small motions’ without the aid of any profound mathematical analysis, and succeeded in predicting the curves that given modes of vibration should produce.[39]
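Wheatstone's superposition idea is easy to reproduce numerically. The sketch below is a simplified model: it combines the two degenerate cosine patterns of an idealized square plate, which only approximate the modes of a plate with free edges, and plots the nodal lines (the zero set along which the sand collects) of their sum and difference:

```python
import numpy as np
import matplotlib.pyplot as plt

# Simplified Wheatstone-style superposition on the unit square: combine the
# degenerate patterns cos(m pi x)cos(n pi y) and cos(n pi x)cos(m pi y);
# the nodal lines of the resultant resemble square-plate Chladni figures.
m, n = 5, 2
x = np.linspace(0.0, 1.0, 400)
X, Y = np.meshgrid(x, x)

for sign, style in ((+1, "-"), (-1, "--")):
    W = (np.cos(m * np.pi * X) * np.cos(n * np.pi * Y)
         + sign * np.cos(n * np.pi * X) * np.cos(m * np.pi * Y))
    plt.contour(X, Y, W, levels=[0.0], linestyles=style)

plt.gca().set_aspect("equal")
plt.title("Nodal lines of two cosine superpositions (m=5, n=2)")
plt.show()
```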
Sample from Ritz's mathematical solution of Chladni figures, 1909.
G. Kirchhoff's solution for circular plate
Kirchhoff came up with the correct mathematical model in 1850[64], treating Chladni figures on a square plate as eigenpairs (eigenvalues and corresponding eigenfunctions) of a biharmonic operator. He also managed to solve the Chladni problem for the special case of a circular plate, which, due to symmetry, is much easier to handle. For other configurations, the partial differential eigenvalue problem with the free boundary conditions simply proved to be too difficult to solve.[65]
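The eigenpair structure of the circular plate can be sketched numerically. For angular order n, the radial part of a mode is a combination A·J_n(kr) + B·I_n(kr) of Bessel and modified Bessel functions. The sketch below assumes a clamped edge, w(a) = w'(a) = 0, which is much simpler than the free edge Kirchhoff actually treated (where the conditions involve the Poisson ratio); the admissible values of ka are then roots of a 2×2 determinant:

```python
import numpy as np
from scipy.special import jv, iv, jvp, ivp
from scipy.optimize import brentq

# Clamped circular plate (a simplified stand-in for Kirchhoff's free-edge
# analysis): radial part A*J_n(k r) + B*I_n(k r); w(a) = w'(a) = 0 gives
# the frequency equation det = J_n(ka) I_n'(ka) - I_n(ka) J_n'(ka) = 0.
def det_clamped(ka, n):
    return jv(n, ka) * ivp(n, ka) - iv(n, ka) * jvp(n, ka)

for n in range(3):                      # n = number of diametral nodal lines
    roots, grid = [], np.linspace(0.5, 15.0, 300)
    for lo, hi in zip(grid[:-1], grid[1:]):
        if det_clamped(lo, n) * det_clamped(hi, n) < 0:
            roots.append(brentq(det_clamped, lo, hi, args=(n,)))
    print(f"n={n}: first ka values {[round(r, 4) for r in roots[:3]]}")
# Frequencies then follow as omega = (ka)^2 / a^2 * sqrt(D / (rho h)),
# with D the bending stiffness and rho*h the mass per unit area.
```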
W. Voigt's solution for rectangular plate with two or four clamped boundaries by elementary integration
In the case of clamped boundaries the problem simplifies greatly, and W. Voigt found the general solution in 1893[66] for a rectangular plate with two or four clamped boundaries by elementary integration. Toward the end of the 19th century, the great expert on sound, John William Strutt, later Baron Rayleigh, summarized the situation (1894): "The problem of a rectangular plate, whose edges are free, is one of great difficulty, and has for the most part resisted attack"[40][67].[68]
Walter Ritz's solution
In his groundbreaking paper (1909)[69], Walter Ritz presented a method for computing Chladni figures: instead of trying to solve the partial differential eigenvalue problem directly (or through the boundary conditions of the problem), he proposed to use the principle of energy minimization (Prinzip der kleinsten Wirkung), from which even those equations and conditions could be derived.[70]
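The spirit of the method can be conveyed in one dimension (an added illustration, not Ritz's own computation: it treats the free-free Euler-Bernoulli beam, the 1D analogue of the free plate, and uses a deliberately small monomial basis, since monomials become ill-conditioned). Stiffness and mass matrices are assembled from the energy alone and a small generalized eigenvalue problem is solved; no boundary conditions are imposed, and the two zero modes of the free beam emerge by themselves:

```python
import numpy as np
from scipy.linalg import eigh

# Ritz sketch for the free-free Euler-Bernoulli beam on [0, 1]: expand w(x)
# in monomials x^k and build the bending energy K_ij = int phi_i'' phi_j'' dx
# and the mass M_ij = int phi_i phi_j dx; no boundary conditions are imposed.
N = 8                           # keep small: monomials are ill-conditioned
K = np.zeros((N, N))
M = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        M[i, j] = 1.0 / (i + j + 1)
        if i >= 2 and j >= 2:
            K[i, j] = i * (i - 1) * j * (j - 1) / (i + j - 3)

evals = eigh(K, M, eigvals_only=True)   # generalized problem K c = lambda M c
print(np.round(evals[:4], 2))
# Two (numerically) zero eigenvalues: rigid translation and rotation of the
# free-free beam. The next ones approximate (beta L)^4 from above, where
# cos(beta L) cosh(beta L) = 1, i.e. 4.7300^4 = 500.6, 7.8532^4 = 3803.5, ...
```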
Chladni figures of irregularly shaped plates have acquired a surprising topicality in recent times. The reason is the equivalence between the stationary wave equation, the Helmholtz equation, and the stationary Schrödinger equation for a particle moving freely in a box with reflecting walls. This enables the examination of such quantum billiards and, in the case of irregularly formed walls, of quantum chaos by means of vibrating plates[41]. Nodal patterns are also of central importance in totally different domains: in fields of light[42], in earthquake damage patterns[43], and even in pattern formation in the visual cortex[44].[71]
Chladni figures in Naturphilosophie
Lichtenberg figures, 1777.[6][7][8][9][10][11]
The very fact that Chladni's work with the sound figures was triggered by Lichtenberg's electrostatic figures, for which he was trying to find an acoustic analogue, coincided with a naturphilosophische search for symmetries and signs of hidden relationships among natural forces.[72]
According to the most general version of this romantic theory, manifest for example in the writings of Herder, all of nature speaks through its form, and the physiognomy of the natural world is cast as language, the "book of nature" that merely awaits correct deciphering. A more restricted variant holds that only those aspects of nature which have a formal feature reminiscent of inscription are to be described as hieroglyphs. Here nature seems to be saying something in a language that the human race can no longer understand, that it has forgotten. But this language is in fact the most ordinary language, the Ur-alphabet in which creation was, as it were, spelled out. Indeed, unlike all subsequent languages, what marks this primordial language is that it does not require any code at all since, here, sign and referent are the same. These hieroglyphs are what they mean. Their unintelligibility today is simply an index of the extent to which the present era has lost touch with that nature. For the German romantics, there were generally only two ways to reestablish contact with this Ur-language: either through the direct, but ephemeral, recreation of that language through poetry, or the more tedious, step-by-step relearning of that alphabet through the scientific exploration of nature. The task of physics was thus to make legible once again the currently unintelligible hieroglyphs of nature. Indeed, for the romantics, the discoveries of contemporary physics seemed to confirm the promising visions of the poets.[73]
Through this reading, Lichtenberg's figures (1777) made the then mysterious phenomenon of electricity finally become readable. With Chladni figures (1787), for the first time, one could associate acoustic phenomena with specific graphic figures which, most importantly, were "drawn" by the sounds themselves. These were not arbitrary but stood in some sort of "necessary" (indexical) relation to the sounds.[74]
This very problem was explored in 1806, when the prominent Naturphilosoph Hans Christian Ørsted (who would become famous for his discovery of electromagnetism in 1820) used Chladni's technique (with alcohol and lycopodium powder instead of sand) in a further effort to disclose a connection between sound and electricity.[75]
Ritter (via Benjamin)
Both Lichtenberg's and Chladni's figures intrigued another romantic physicist, Johann Wilhelm Ritter. In them, Ritter saw nature's own form of script: Ur-images and hieroglyphs that constituted the true alphabet of the Book of Nature. He wrote in 1810[76]:
"It would be beautiful if what became externally clear here were also exactly what the sound pattern is for us inwardly: a light pattern, fire-writing. [..] Every sound would then have its own letter directly to hand [..] That inward connection of word and script - so powerful that we write when we speak [..] has long interested me. Tell me: how do we transform the thought, the idea, into the wqord; and do we ever have a thought or an idea without its hieroglyph, its letter, its script? Truly, it is so: but we do not usually think of it. But once, when human nature was more powerful, it really was more extensively thought about; and this is proved by the existence of word and script. Their original, and absolute, simultaneity was rooted in the fact that the organ of speech itself writes in order to speak. The letter alone speaks, or rather: word and script are, at source, one, and neither is possible without the other [..] Every sound pattern is an electric pattern, and every electric pattern is a sound pattern. [..] My aim [..] was therefore to re-discover, or else to find the primeval or natural script by means of electricity. [..] In reality the whole of creation is language, and so is literally created by the word, the created and creating word itself [..] But the letter is inextricably bound up with this word both in general and in particular. [..] All the plastic arts: architecture, sculpture, painting, etc. belong pre-eminently among such script, and developments and derivations of it."[45], quoted in Walter Benjamin[77].
Ritter held the opinion that material images, like Chladni figures, entailed the true language, a pictorial language, of science. He reveled in the pure multiformity of the Klangfiguren, their symmetry, and their relationship to other forms in nature. While the mathematical approach to sound was by no means excised, it was this respect for the image, and the attitude that pictures could give meaningful signs of phenomena, that excited the Naturphilosophen.[78]
The natural philosopher, Rosetta stone sleuth, and undulatory optical theorist Thomas Young embraced the pictorial approach to the study of sound. In 1800, Young introduced a new technique for obtaining a visual image of the motion of a vibrating string, while referring to Chladni figures. He also pioneered a means for creating permanent inscriptions of sonic vibrations (1807). In the 19th century, Wilhelm Weber and Guillaume Wertheim, as well as many other investigators, devised related ways to preserve the traces of styluses attached to sounding bodies, such as rods and tuning forks.[79]
The research of Chladni figures in the 1810s–20s by the physiologist Jan Evangelista Purkyně, which he later discussed with Goethe, is also notable. For more, see the article on Purkyně's work.
Chladni figures in modern aesthetics
Towards the end of the 19th century, the identification of recording instruments and graphs with language became less obvious, but the association was not entirely lost. German philosophers such as Friedrich Nietzsche, Walter Benjamin[80] and Theodor Adorno carried the pursuit of graphical "ur-languages" from Chladni and Ritter into the aesthetics of recorded music.
Nietzsche had referred to the sound figures in his On Truth and Falsity in Their Ultramoral Sense (1873)[46][47]:[81]
"One can imagine a man who is quite deaf and has never had a sensation of tone and of music; just as this man will possibly marvel at Chladni's sound figures in the sand, will discover their cause in the vibrations of the string, and will then proclaim that now he knows what man calls 'tone'; even so does it happen to us all with language. When we talk about trees, colours, snow and flowers, we believe we know something about the things themselves, and yet we only possess metaphors of the things, and these metaphors do not in the least correspond to the original essentials."[48]
Adorno in his essay "The Form of the Phonograph Record" (1934) saw in Chladni's sound figures a kind of primal gramophony.[82] For him, the mechanical reproduction of music reversed the process of turning signs (the musical score) into music and instead turned music into language:[83]
"This occurs at the price of its immediacy, yet with the hope that, once fixed in this way, it will some day become readable as the 'last remaining universal language since the construction of the tower', a language whose determined yet encrypted expressions are contained in each of its 'phrases'. If, however, notes were still the mere signs for music, then, through the curves of the needle on the phonograph record, music approaches decisively its true character as writing. Decisively, because this writing can be recognized as true language to the extent that it relinquishes its being as mere signs: inseparably committed to the sound that inhabits this and no other acoustic groove. If the productive force of music has expired in the phonograph records, if the latter have not produced a form through their technology, they instead transform the most recent sound of old feelings into an archaic text of knowledge to come. [..] A good part of this is due to physics, at least to Chladni's sound figures, to which--according to the discovery of one of the most important contemporary aesthetic theorists [here Adorno means Walter Benjamin]--Johann Wilhelm Ritter referred as the script-like Ur-images of sound."[84]
For Adorno, the phonograph record had the advantage over the musical score in that it had written on it a language, not "mere signs". The machine avoided the trap of semiosis--the "mechanical" assignment of mere signs to music--and preserved its aesthetic value in a new language. He attributed the source of this argument to Chladni and Ritter, who first saw the possibility of "inscribing music without it ever having sounded".[85] In its reification of both sound and time, for Adorno, the phonograph record recalled Chladni figures in another sense as well: where music writes itself there is no writing subject. The record eliminates the subject (and the concomitant economy of intentionality) from the musical inscription.[86]
Hans Jenny, Cymatics, 2001. On the cover: Light refracting through a small sample of water (about 1.5 cm in diameter) under the influence of vibration. Although there appear to be 12 elements comprising this figure, closer examination reveals that it consists of 2 opposed hexagonal elements.
The medical doctor and Anthroposophist Hans Jenny (1904–72) extended Chladni's explorations (inspired also by systems theory), conducting rigorous experiments with liquids and using oscillators for precise calibration of audio signals. Jenny delved deeply into many types of periodic phenomena, but especially the visual display of sound. He pioneered the use of laboratory-grown piezoelectric crystals, which were quite costly at that time. Hooked up to amplifiers and frequency generators, the crystals functioned as transducers, converting the frequencies into vibrations strong enough to set the plates into resonance. Jenny used a wide variety of materials for the plates, including glass, copper, wood, steel, cardboard and ceramics, on which he spread fine powders: lycopodium (the spores of a club moss) and quartz sand. He also performed experiments with liquid glycerin in water, and with light refracted in a single drop of water containing fine particles that reflect the light source - the experiments that yielded his most famous images (see the cover of his book). Much of this work is documented in still photos, which were compiled into two volumes of Kymatik [Cymatics], published in 1967 and 1972 and republished in 2001 as a single edition.[87][49] Later he speculated about the potential healing powers of certain sound frequencies, a speculation that has been presented as fact by some of his kookier followers.
Chladni figures in the arts
Alvin Lucier's piece The Queen of the South (1972) uses Chladni figures (Lucier is said to have been influenced by Jenny's book on cymatics).[50]
Jenny's work was also followed up by György Kepes, founder of the Center for Advanced Visual Studies (CAVS) at MIT. His work Flame Orchard included an acoustically vibrated sheet of metal into which small holes had been drilled in a grid; small gas flames burning through the holes made the thermodynamic patterns visible.
Photographer, philosopher and cymatics researcher Alexander Lauterwasser has used finely crafted crystal oscillators to resonate steel plates covered with fine sand and also to vibrate small samples of water in Petri dishes. His first book, Water Sound Images (2006), features imagery of light reflecting off the surface of water set into motion by sound sources ranging from pure sine waves to music by Ludwig van Beethoven, Karlheinz Stockhausen, the electroacoustic group Kymatik (who often record in surround sound ambisonics), and overtone singing.[51]
Carsten Nicolai's Milch (Milk) (2000) reveals how sound frequencies ranging from 10 to 150 Hz, almost imperceptible to the ear, alter the patterns of disturbance they cause in milk.[52]
In Protrude, Flow (2001) by Sachiko Kodama and Minako Takeno, sounds in the exhibition space, including those of the audience, interactively transform three-dimensional patterns in black magnetic fluid, which appears to be choreographed to its sonic environment.[53]
Instrument design
Variations of this technique are still commonly used in the design and construction of acoustic instruments such as violins, guitars, and cellos. Since the 20th century it has become more common to place a loudspeaker driven by an electronic signal generator over or under the plate, which gives a more accurate, adjustable frequency.
Musical instruments
Chladni's first Euphon, 1790.
Clavicylinder built for Chladni by Luigi Concone, Turin, 1811.
Since at least 1738, a musical instrument called a Glassspiel or Verillon created by filling 18 beer glasses with varying amounts of water was popular in Europe. The beer glasses would be struck by wooden mallets shaped like spoons to produce "church and other solemn music". Benjamin Franklin was sufficiently impressed by a verillon performance on a visit to London in 1757 that he created his own instrument, the "armonica" in 1761.
Franklin's armonica inspired several other instruments, including two created by Chladni. In 1790, Chladni invented a musical instrument called the Euphon (i.e. "beautiful sound"; not to be confused with the brass instrument euphonium), composed of glass rods and steel bars made to sound by rubbing with moistened fingers. Chladni also improved on Hooke's "musical cylinder" (or "string phone", of 1672) to produce another instrument, the Clavicylinder (1799)[54], which, however, never became as popular as Franklin's armonica.
Chladni described his Clavicylinder in the following way:
"The Clavi-cylinder contains a set of keys, and behind them a glass cylinder, seven centimeters (about three inches) in diameter, which is turned by means of a pedal, and loaded wheel. This cylinder is not the sounding body, but it produces the sound by friction on the interior mechanism. The sounds may be prolonged at pleasure, with all the gradations of crescendo, and diminuendo, in proportion as the pressure on the keys is increased or diminished. This instrument- is never out of tune. It contains four octaves and an half, from ut, the lowest in the harpsichord, up to fa."[55]
In 1794, Chladni published Über den Ursprung der von Pallas gefundenen und anderer ihr ähnlicher Eisenmassen, in which he proposed that meteorites have an extraterrestrial origin. This was a controversial statement at the time, since meteorites were thought to be of volcanic origin. With this book Chladni also became one of the founders of modern meteorite research.
Chladni was initially ridiculed for his claim of an outer-space origin for meteorites, but important minds of his period, including Lichtenberg and Humboldt, came to agree with this view, and his writings sparked the scientific curiosity that eventually led more researchers to support his theory. In 1795 a large stony meteorite (c. 28 kg) was observed during its fall to Earth at a cottage outside Wold Newton, Yorkshire, England. A piece of this ordinary chondrite, known as the Wold Cottage meteorite, was provided to the British chemist Edward Howard who, along with the French mineralogist Jacques de Bournon, carefully analyzed its elemental composition and concluded that an extraterrestrial origin was likely. In 1803 a meteor shower over L'Aigle, France peppered the town with over 3000 meteorite fragments, with hundreds of witnesses to the stones falling. The L'Aigle fall was investigated by the French physicist and astronomer Jean Baptiste Biot, under commission of the French Minister of the Interior. Unlike Chladni's book and the scientific publication by Howard and de Bournon, Biot's article was a popular and lively report on meteorites, and it convinced a number of people of the veracity of Chladni's initial insights.
Discovery of longitudinal vibrations
Longitudinal vibrations of rods.
When the bow was drawn at an acute angle to the string, Chladni heard notes 3 to 5 octaves higher than the usual tones. He examined the phenomenon first with strings and then with rods of different materials, and thus discovered the longitudinal (or dilatational) vibrations of bodies. In 1796 he gave a lecture in Erfurt at the Kurfürstlich Mainzische Akademie nützlicher Wissenschaften and presented the first results of his experiments on this subject. Chladni now distinguished between transversal and longitudinal vibrations, as is usual today.[88]
Looking into Chladni's last book, from 1827, one notes a confusion of notation on this subject in the papers of other authors. Chladni found that the frequencies are reciprocal to the length of the string or rod. If the diameter or the tension of the string is changed, there are only negligible variations of the frequency of the longitudinal vibrations. Chladni had difficulty finding the dependence of the frequency on the density of the material. When Chladni investigated cylindrical rods, he also discovered their torsional vibrations. From his first publications on this topic, in 1796 and 1797, one gets the impression that he classified this type of vibration as a third class in addition to the transversal and longitudinal ones; in his later publications, however, he argues against this possible misunderstanding and describes these vibrations as a special form of transversal vibrations.[89]
In 1825 the Weber brothers had an idea for how to make longitudinal vibrations visible. They used glass tubes and distributed dry sand in their interior. The tube was held horizontally and excited to longitudinal vibrations by rubbing; the grains of sand started to move and formed little piles. This method was further developed by August Kundt in 1866 into the well-known method of dust figures.[90]
Measuring the speed of sound in solids and gases
Chladni must have noticed quickly that the technique of longitudinal vibrations can be used to measure the velocity of sound in solid bodies. To this end he applied an indirect method. At that time only the velocity of sound in air was known (from work begun by Pierre Gassendi in 1635). Chladni assumed that the longitudinal vibrations of air in a cylinder (e.g. in an organ pipe) are analogous to the longitudinal vibrations of a rod. A rod of the material under examination is clamped at its centre, so that for the fundamental vibration the length of the rod is half the wavelength of the tone. This tone is then compared with the fundamental vibration of an organ pipe of the same length, which has the same vibrational state. From the known velocity of sound in air, Chladni could thus determine the velocity of sound in several solids (tin, silver, copper, glass, iron, several kinds of wood). He published the results in 1797.[91][56][92]
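As a quick check on the arithmetic of the method: since both the centre-clamped rod and the open pipe of the same length have fundamental wavelength 2L, the ratio of the fundamentals gives the ratio of the sound speeds directly. A minimal sketch (the modern value of the speed of sound in air and the example frequencies are illustrative assumptions, not Chladni's numbers):

```python
def sound_speed_in_rod(f_rod_hz, f_pipe_hz, v_air_m_s=343.0):
    """Chladni's comparison method: a rod clamped at its centre and an open
    organ pipe of the same length L both have fundamental wavelength 2L,
    so v_rod / v_air = f_rod / f_pipe."""
    return v_air_m_s * f_rod_hz / f_pipe_hz

# A rod sounding 15 times higher than the equal-length pipe:
print(sound_speed_in_rod(f_rod_hz=1500.0, f_pipe_hz=100.0))  # ~5145 m/s, roughly glass or iron
```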
In 1798 Chladni visited the chemist and botanist Franz von Jacquin in Vienna, and in his laboratory he did experiments to determine the velocity of sound in gases. In Wittenberg Chladni lacked the equipment necessary to carry out difficult scientific experiments, so he often used the devices of other scientists when visiting them on his journeys. To determine the velocity of sound in gases Chladni used the same idea he had applied earlier to solids: he compared the tone of an organ pipe filled with a particular gas with the tone of the same pipe in air. Thus he obtained the speed of sound in oxygen, nitrogen, carbon dioxide, nitrogen oxide and hydrogen.[93]
Vibrations of tuning forks
A tuning fork mounted on a sounding-box.
Musicians had used tuning forks since their invention in 1711 by John Shore, but they were not considered worthy of attention by scientists until the work of Ernst Chladni, who was the first to systematically investigate their vibrations.[57][58][59].
Chladni found that a tuning fork in its fundamental oscillation may be regarded as the flexural vibration of a rod with two nodal points; if the rod is bent into a fork, the nodal points in the middle approach each other. If the tuning fork is struck with a mallet, the higher eigenvibrations, being inharmonic with the fundamental vibration, are excited only weakly and decay very quickly. In 1826 Chladni published further studies on the tuning fork. Rotating a tuning fork through 360 degrees, he noticed four maxima and four minima of intensity. The tines vibrate in opposite phase, he argued, periodically approaching and receding from each other. In the latter case the air outside is given an outward velocity; at the same time the spacing between the tines widens and the air between them moves inward. Between these regions there must be directions in which the air velocity is zero. After half a period all velocities change sign, with an analogous change in the emission pattern.[94]
Chladni's law
In Die Akustik (1802) Chladni observed that adding one nodal circle raised the frequency of a circular plate by about the same amount as adding two nodal diameters, a relationship that Rayleigh (1894) called Chladni's law.[60]
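In modern notation this relationship is usually written f ≈ C(m + 2n)², where m is the number of nodal diameters, n the number of nodal circles, and C a constant depending on the plate; the combination m + 2n encodes exactly the observation that one circle counts for as much as two diameters.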
References
1. Rumanovský & Stadtrucker 1961, pp. 13–14
2. Mešterová 2006, p. 127
3. Mešterová 2006, pp. 127–128
4. Ullmann 2007, pp. 25–26
5. Stöckmann 2007, p. 15
6. Rumanovský & Stadtrucker 1961, pp. 14–15
7. Mešterová 2006, p. 128
8. Rumanovský & Stadtrucker 1961, p. 20
9. Rumanovský & Stadtrucker 1961, pp. 14–16
10. Mešterová 2006, p. 128
11. Stöckmann 2007, pp. 15–16
12. Rumanovský & Stadtrucker 1961, pp. 18–19
13. Ullmann 2007, p. 26
14. Stöckmann 2007, p. 16
15. Rumanovský & Stadtrucker 1961, pp. 18–19
16. Rumanovský & Stadtrucker 1961, p. 21
17. Ullmann 2007, p. 27
18. Rumanovský & Stadtrucker 1961, p. 22
19. Rumanovský & Stadtrucker 1961, p. 24
20. Ullmann 1996, p. 14
21. Ullmann 2007, p. 27
22. Ullmann 2007, p. 27
23. Ullmann 2007, p. 27
24. Rumanovský & Stadtrucker 1961, pp. 28–29
25. Ullmann 2007, pp. 28–30
26. Ullmann 1996, pp. 47–51
27. Ullmann 1996, pp. 117–121
28. Ullmann 1996, p. 53
29. Ullmann 2007, p. 28
30. Rumanovský & Stadtrucker 1961, pp. 34–37
31. Ullmann 2007, p. 28
32. Stöckmann 2007, pp. 18–19
33. Ullmann 1996, pp. 115–116
34. Melde 1866, pp. 14–16
35. Rumanovský & Stadtrucker 1961, pp. 43–49
36. Stöckmann 2007
37. Ullmann 2007, p. 30
38. Heise 2007, p. 3
39. Melde 1866, p. 18
40. Stöckmann 2007, p. 21
41. Ullmann 2007, p. 30
42. Ullmann 2007, p. 30
43. Ullmann 2007, pp. 30–31
44. Rumanovský & Stadtrucker 1961, pp. 51–52
45. Rumanovský & Stadtrucker 1961, p. 52
46. Rumanovský & Stadtrucker 1961, p. 53
47. Steffens 1844, pp. 291–297
48. Stöckmann 2007, p. 22
49. Ullmann 2007, p. 30
50. Melde 1866, p. 37
51. Goethe 1817
52. Stöckmann 2007, p. 19
53. Szendy 2008, p. 25
54. Hankins & Silverman 1995, p. 95
55. Mešterová 2006, pp. 128–129
56. Rumanovský & Stadtrucker 1961, pp. 24–27
57. Ullmann 2007, p. 31
58. Ullmann 2007, p. 31
59. Germain 1821
60. Rumanovský & Stadtrucker 1961, pp. 83–84
61. Stöckmann 2007, p. 21
62. Wheatstone 1833
63. Gander & Wanner 2012, p. 18
64. Kirchhoff 1850
65. Gander & Wanner 2012, pp. 17–18
66. Voigt 1893
67. Strutt (Baron Rayleigh) 1894
68. Gander & Wanner 2012, p. 19
69. Ritz 1909
70. Gander & Wanner 2012, pp. 19–28
71. Stöckmann 2007, p. 21
72. Hankins & Silverman 1995, pp. 130–132
73. Levin 1990, pp. 38–39
74. Levin 1990, p. 39
75. Hankins & Silverman 1995, p. 132
76. Ritter 1810, pp. 227–246
77. Benjamin 1998, pp. 213–214
78. Hankins & Silverman 1995, p. 132
79. Hankins & Silverman 1995, pp. 132–133
80. Benjamin 1998, pp. 213–214
81. Szendy 2008, pp. 22–23
82. Szendy 2008, pp. 21–22
83. Hankins & Silverman 1995, pp. 145–146
84. Adorno 1990, pp. 59–60
85. Hankins & Silverman 1995, p. 146
86. Levin 1990, p. 41
87. Jenny 2001
88. Ullmann 2007, pp. 28–29
89. Ullmann 2007, p. 29
90. Ullmann 2007, p. 29
91. Ullmann 2007, p. 29
92. Mešterová 2006, pp. 129–130
93. Ullmann 2007, p. 29
94. Ullmann 2007, p. 29
Chladni's dissertations
• De banno contumaciae, Dissertation, Leipzig, 1781. (Latin)
• De charactere ecclesiastico principum, Dissertation, 1782. (Latin)
Works by Chladni on acoustics
• Entdeckungen über die Theorie des Klanges, 1787.
• Die Akustik, 2nd edition, 1830.
• Traité d'acoustique, 1809.
Works by Chladni on meteoritics
• Über den Ursprung der von Pallas gefundenen und anderer ihr ähnlicher Eisenmassen und über einige damit in Verbindung stehende Naturerscheinungen [On the Origin of the Pallas Iron and Others Similar to it, and on Some Associated Natural Phenomena], Leipzig and Riga, 1794. (German)
• Reprinted as Über den kosmischen Ursprung der Meteorite und Feuerkugeln (1794), with commentary by Günter Hoppe, Leipzig: Ostwalds Klassiker der exakten Wissenschaften, 1979. (German)
• Über Feuermeteore und die mit denselben herabgefallenen Massen, Vienna: J. G. Heubner, 1820. (German) [68] [69] [70]
Monographs on Chladni
• Ivan Rumanovský, Ivan Stadtrucker, E.F.F. Chladný: Otec akustiky a meteoritiky, 1961. (Slovak)
• Dieter Ullmann, Ernst Florens Friedrich Chladni, 1983. (German)
• Dieter Ullmann, Chladni und die Entwicklung der Akustik von 1750-1860, 1996. (German)
Book chapters, Papers and Articles on Chladni
• "On the invention of the euphon, and other acoustic discoveries of C. F. F. Chladni", Philosophical Magazine Series 1, Vol. 2, No. 8 (1799), pp 391-398. [71]
• Goethe, Johann Wolfgang (1817). "Schicksal der Handschrift". Zur Morphologie.
• Wilhelm Weber, "Lebensbild E. F. F. Chladnis", in Allgemeine Enzyklopädie der Wissenschaften und Künste, Vol. 21, edited by J. S. Ersch and J. G. Gruber, Leipzig, 1830. Reprinted in: Wilhelm Weber, Werke, Vol. 1: Akustik, Mechanik, Optik und Wärmelehre Berlin, 1892. (German)
• Steffens, Henrich (1844). "Die letzten". Was ich erlebte: aus der Erinnerung niedergeschrieben, Vol. IX. Breslau: Josef Max. pp. 275–368 (German).
• Eugen von Lommel, "Chladni, Ernst Florenz Friedrich", Allgemeine Deutsche Biographie, Bd. 4, Leipzig: Duncker & Humblot, 1876, pp 124–126.
• Marie-Nicolas Bouillet, Alexis Chassang (eds.), "Ernst Chladni", in Dictionnaire universel d’histoire et de géographie, 1878. (French)
• Erich Ebstein, "Aus Chladnis Leben und Wirken", Mitteilungen zur Geschichte der Medizin und der Naturwissenschaften, 4 (1905), pp 438-460. (German)
• K. Loewenfeld, "E. F. F. Chladni. Skizze von Leben und Werk", Abhandlungen aus dem Gebiet der Naturwissenschaften, 22 (1929), Naturwiss. Verein Hamburg, pp 117-144. (German)
• R. Földes, "Chladni", Výročná správa Československej reálky, Košice, 1931. (Slovak)
• R. Földes, "Chladni. Otec akustiky", Časopis pro pěstování matematiky a fysiky, No. 8, 1931, p 9. (Slovak)
• Hans Schimank, "Beiträge Zur Lebensgeschichte von E. F. F. Chladni", Sudhoffs Archiv für die Geschichte der Medizin und der Naturwissenschaften, 37 (1953) pp 370-376. (German)
• Hans Schimank, "Chladni, Ernst Florenz Friedrich", Neue Deutsche Biographie, Bd. 3, Berlin: Duncker & Humblot, 1957, p 205. [72] (German)
• Günter Hoppe, "Das Pallas-Eisen, ein Ausgangspunkt fÜr die Meteoritenrheorie E. F. F. Chladnis (1794)", Zeitschrift fur geologische Wissenschaften, 4 (1976), Berlin, pp 521-528. (German)
• Günter Hoppe, "Ernst Florens Friedrich Chladni. Zum 150. Todestag des Begründers der Meteoritenkunde", Chemie der Erde, 36 (1977), pp 249-262. (German)
• Günter Hoppe, "Goethes Ansichten über Meteorite und sein Verhältnis zu dem Physiker Chladni", Goethe-Jahrbuch, 95 (1978) pp 227-240. (German)
• T. D. Rossing, "Chladni's Law for Vibrating Plates", American Journal of Physics 50 (1982), pp 271–274.
• Dieter Ullmann, "Chladnis Italienreise nach Briefen von J.P. Schulthesius", NTM-Schriftenreihe fur Geschichte der Naturwissenschaften, Technik und Medizin, Vol. 2, No. 19 (1982), Leipzig, pp 51-57. Correction ibid. No. 20 (1983), p 89. (German)
• Dieter Ullmann, "Chladni und die Entwicklung der experimentellen Akustik um 1800", Archive History Exact Sci. 31 (1984), pp 35-52. (German)
• Walther Killy (ed.), Literaturlexikon: Autoren und Werke deutscher Sprache, Bd. 2, 1988 p 408. (German)
• Dieter Ullmann, "Chladni und Ottmer - ein frühes Beispiel für die Zusammenarbeit von Akustiker und Architekt", Acustica 71, H.1 (1990), pp 58-63. (German)
• U.B. Marvin, "Ernst Florenz Friedrich Chladni (1756-1827) and the origins of modern meteorite research", Meteoritics & Planetary Science, 31 (1996), pp 545-588.
• "Ernst Florens Friedrich Chladni (1756–1827) and the origins of modern meteorite research". Meteoritics 31 (5): 545–588. 1996.
• Dieter Ullmann, "Chladnis Beiträge zur Raumakustik", NTM International Journal of History & Ethics of Natural Sciences, Technology & Medicine, Vol. 14, No. 1 (Feb 2006), pp 1-8. (German)
• Myles W. Jackson, Harmonious Triads: Physicists, Musicians, and Instrument Makers in Nineteenth-Century Germany, MIT Press, 2006.
• Mešterová, Jana (2006). "Ernest Florens Fridrich Chladný – Chladni: Fyzik so slovenskými koreňmi, nazývaný otec akustiky a meteoritiky". XXIII. Zborník dejín fyziky. Bratislava: Slovenská spoločnosť pre dejiny vied a techniky pri SAV. pp. 127–132 (Slovak).
• Heise, B. (June 2007). "Chladni's clavicylinder and some imitations". The European Physical Journal Special Topics 145 (1): 3–14.
• Stöckmann, Hans-Jürgen (June 2007). "Chladni meets Napoleon". The European Physical Journal Special Topics 145 (1): 15–23. German version, 2006.
• Ullmann, Dieter (June 2007). "Life and work of E.F.F. Chladni". The European Physical Journal Special Topics 145 (1): 25–32.
• J. Biała, "Ernst Florens Friedrich Chladni — ojciec akustyki i meteorytyki", Mat. V Konf. Meteorytowej, Wrocław, 2008. (Polish)
• W. Czajka, "Cmentarz Wielki we Wrocławiu - miejsce pochówku E.F.F. Chladniego", Mat. V Konf. Meteorytowej, Wrocław, 2008. (Polish)
• A. Dobrucki, "Ernst Chladni i początki nowoczesnej akustyki", Mat. V Konf. Meteorytowej, Wrocław, 2008. (Polish)
• Marian Stępniewski, Hubert Sylwestrzak, "Ernst Florens Friedrich Chladni (1756–1827) — ojciec meteorytyki", Przegląd Geologiczny, 3 (2008). (Polish)
Unitary Irreducible Representations of the Poincaré Group
Most particle physicists will recognise this title immediately, but to non-specialists it will be just gibberish.
But even if you need to skip the technical bits, you might find the observations about the sociology of physics interesting.
When, as a second-year graduate student in Oxford, I submitted a not-particularly-significant contribution to this subject to the journals in 1983, the reaction seemed to be one of boundless rage. I still do not understand why. Outside of Penrose's group in Oxford (which I did not belong to, having been attached to the physics rather than mathematics department) the only qualified person in any way supportive was Professor Lochlainn O'Raifeartaigh of the Dublin Institute for Advanced Studies.
Even if you do not understand the title, you may be prepared to believe that it (the title) is mathematically precise.
As one enrolled on one of the less mathematical and more practical physics courses as an undergraduate (Oxford), I found more-abstract mathematics a huge boon. I had the devil's own job getting to understand quantum mechanics on the basis of plausibility arguments and classical analogies that were all the rage at the time. But when I discovered infinite-dimensional complex vector spaces (from Dirac's book), my eyes were opened. A good strategy when working with quantum mechanics is this: do not try to understand it by relating it to the everyday world, just try to understand it through the mathematics. It opened up a whole world of operators and operator algebras. Indeed, one of the neater things that the physics undergrad ever does is to derive the spectrum of eigenstates of the angular momentum operators. I loved the power and generality of this, especially given how ad hoc and arbitrary so much of the other stuff we had been taught seemed to be. I now know that all one is doing here is classifying the Unitary Irreducible Representations of the Three-Dimensional Rotation Group (or, more accurately, its double cover SU(2)), but it seemed like a lot of output for little input: any state in a complex vector space on which three operators Jx, Jy and Jz obeying the SU(2) algebra act non-trivially must be expressible as a linear combination of states characterised by a spin (a non-negative half integer) and a spin z-component (a half-integer which steps from minus the spin to plus the spin in whole steps). What is more, having more than one set of operators does not invalidate the analysis - it merely makes it more interesting, and it is crucial to Atomic Physics that one may, using so-called Clebsch-Gordan coefficients, switch between different classifications of quantum states with different behaviour under the rotation group.
Traditionally, one represents spin states with the pair (s, ms), with ms being allowed to roam over the 2s+1 different values as shown in the table below:
Spin (s) Spin z-component (ms)
0 0
1/2 -1/2, 1/2
1 -1, 0, 1
3/2 -3/2, -1/2, 1/2, 3/2
2 -2, -1, 0, 1, 2
etc. etc.
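This classification is concrete enough to verify numerically. Below is a minimal sketch (hbar = 1; it uses only the standard ladder-operator matrix elements, and is my illustration rather than anything from the original argument) that builds the (2s+1)-dimensional spin matrices and checks the SU(2) commutation relation:

```python
import numpy as np

def spin_matrices(s):
    """Jz, J+, J- for spin s in the |s, m> basis, m = s, s-1, ..., -s (hbar = 1)."""
    m = np.arange(s, -s - 1, -1)
    jz = np.diag(m)
    # <s, m+1 | J+ | s, m> = sqrt(s(s+1) - m(m+1))
    jp = np.diag(np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1)), k=1)
    return jz, jp, jp.conj().T

jz, jp, jm = spin_matrices(1.5)                 # the spin-3/2 row of the table
jx, jy = (jp + jm) / 2, (jp - jm) / 2j
assert np.allclose(jx @ jy - jy @ jx, 1j * jz)  # the SU(2) algebra: [Jx, Jy] = i Jz
```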
However (s, ms) is not the only way to represent spin: one may also use SU(2) tensors (something, by the way, that is rarely taught in quantum mechanics courses). The basic building block here is a two-component vector in a complex vector space known as a spinor. It can be labelled Ta, where a = 1 or 2. Higher spins are then just formed as symmetrical tensor products, and the table above becomes
Spin (s) Tensor
0 T
1/2 Ta
1 Tab
3/2 Tabc
2 Tabcd
etc. etc.
That these tensors are symmetrical follows from irreducibility: for example, a rank two tensor that is not symmetric can be written as Tab + U εab, where εab is the alternating tensor (a preserved constant tensor of SU(2)), showing that the tensor is reducible, as it mixes the symmetric spin one Tab and the scalar (spin zero) U. Note that the numbers of components match: for example, in spin 3/2 the independent components are T111, T112, T122, and T222 (T121, for example, is not independent, as symmetrisation gives T112) - in one-to-one correspondence with the ms values -3/2, -1/2, 1/2 and 3/2.
It would be very inconvenient to do all of angular momentum in atomic physics in terms of SU(2) tensors, but if one did, at least one would have no need to calculate any Clebsch-Gordan coefficients, as here one need only follow the requirements of symmetry and covariance. Coupling two spin 1/2's, for example, leads to spin 1 and spin 0. Calculating the Clebsch-Gordan coefficients in this case is simple enough, but with SU(2) tensors it is even simpler: the symmetrization Xa Yb + Xb Ya is spin 1 and the antisymmetrization Xa Yb - Xb Ya, being proportional to the alternating tensor, is spin zero.
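The spin-1/spin-0 split of the product of two spinors is easy to check numerically as well; a minimal sketch (the component values are arbitrary):

```python
import numpy as np

X = np.array([1.0, 2.0])                    # spinor X_a, a = 1, 2
Y = np.array([3.0, -1.0])                   # spinor Y_a
T = np.outer(X, Y)                          # product tensor X_a Y_b

sym = (T + T.T) / 2                         # symmetric part: spin 1
eps = np.array([[0.0, 1.0], [-1.0, 0.0]])   # alternating tensor eps_ab
U = 0.5 * np.einsum('ab,ab->', eps, T)      # scalar coefficient: spin 0
assert np.allclose(T, sym + U * eps)        # T_ab = (spin 1 part) + U * eps_ab
```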
The rotation group is a symmetry that comes with the physical principle that space is isotropic. As is the case in classical mechanics, symmetries result in conservation laws: angular momentum conservation follows from rotational invariance, energy conservation from time translation invariance and momentum conservation from space translation invariance. But in quantum mechanics symmetry also tells one the size and shape of one's building blocks.
In Special Relativity, one also has symmetry with respect to different inertial frames of reference - in other words, the physical laws experienced by two inertial frames of reference in motion with respect to each other must be the same. The symmetry group of Special Relativity is known as the Poincaré Group and has ten dimensions: one for time translation, three for space translation, three for rotation and three for invariance under change of velocity of the reference frame.
The unitary irreducible representations of the Poincaré group were first classified by Wigner in 1939. This is an extremely interesting piece of mathematics, and one that has far-reaching consequences, as it shows that, in much the same way that quantum states in atomic physics are classified by their angular momentum quantum numbers, quantum states in relativistic quantum theory are composed of entities classified by mass and spin.
The classification by mass implies the Klein-Gordon equation, an equation for a free particle that reduces to the Schrödinger equation in the non-relativistic limit, and it is highly satisfying that it can be obtained from such general considerations and not merely from classical analogy. Similarly, the source-free Maxwell equations follow just from the stipulation that the photon has mass zero and spin one (helicity one, actually, but there is no need to get into that here).
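For reference: in units where hbar = c = 1 the Klein-Gordon equation reads (∂²/∂t² − ∇² + m²)φ = 0, and writing φ = e^(−imt)ψ and neglecting the term with two time derivatives of ψ recovers the free Schrödinger equation i ∂ψ/∂t = −∇²ψ/2m (a standard textbook reduction, included here only as a reminder).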
One thing that is not so nice about the Wigner analysis, though, is the way that he handles spin. He introduces a thing called (by everyone else) the Wigner rotation which, although covariant under the rotation group, is not covariant under the larger Lorentz group. This is unfortunate as things that are easy to demonstrate with the Lorentz group - or, more accurately, the group SL(2,C), which is its double cover - are very hard when one uses the Wigner rotation. For example, the proof of the spin-statistics theorem in Weinberg's Quantum Field Theory, Volume 1, Chapter 5, using this notation, is sufficiently complex that he advises his readers to skip it on the first reading. With SL(2,C), though, the proof is just a few lines. The difference between the Wigner rotation and SL(2,C) here is very much like the difference between the two tables of spin states for the rotation group above. Both are correct: it is just that for the more theoretical work, the second is easier to work with.
I had learned my SL(2,C) from the Penrose group, this being a pre-requisite for Twistor theory. Physicists, also, were having to learn it at the time (1982) as it was needed for supersymmetry. Realising that, unlike the careless physicists, Penrose had thought the subject out properly, I quickly switched to his conventions, and then worked out the tensor structures of the irreducible representations of the Poincaré group as an alternative to the Wigner rotation. The massless case was the most interesting, as the analysis gave an equation which was the massless Dirac equation for spin 1/2, the source-free Maxwell equations for spin 1 and linearised, source-free General Relativity for spin 2. This equation was well known to the Penrose group, and was so simple that it was hard to believe that papers in theoretical physics deriving Lagrangians for massless particles of arbitrary spin (by Fang and Fronsdal) were solving the same problem. These papers had ferociously complex formulae whose connection with the underlying principles was all-but-impossible to discern. Nonetheless, it was these Lagrangians that were the jumping-off point for higher-spin studies in particle physics.
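For the record, the equation in question is presumably the zero-rest-mass free-field equation of the 2-spinor literature, ∇^(AA′) φ_(AB...C) = 0, where φ_(AB...C) is totally symmetric with 2s indices: one index gives the massless Dirac (Weyl) equation, two give the source-free Maxwell equations, and four give linearised source-free General Relativity.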
So I decided to see if they did amount to the same thing. They did, and to see the Fang-Fronsdal formulae drop out after weeks of swimming in index soup was the second most satisfying moment of my brief academic career. The Fang-Fronsdal Lagrangian was in terms of the gauge potential whereas the Penrose one was in terms of the field strength. The field strength was the physical thing, but it could be derived from the gauge potential. Certain changes, known as gauge transformations, could be applied to the gauge potential without changing the field strength. The Fang-Fronsdal Lagrangians followed if one required that the Lagrangian was invariant under the largest possible class of gauge transformations. The Lagrangians were not unique, though. Just as gauge fixing terms can be added to the Maxwell Lagrangian without affecting any dynamics, so with higher spin, gauge fixing is possible by, amongst other things, leaving out auxiliary fields.
Anyway, feeling pleased with myself, I submitted my work to a journal. I am not sure what I was expecting, but what I got back was that I was a complete idiot who knew nothing. Apparently, the bits of my paper that were not merely repetitions of text books were just wrong. When I got them to be specific, it was easy to show that the criticisms were invalid, but that made no difference: whatever happened, they were not going to publish the paper. Looking back, I think that the mentality was just that there was only one way of doing this - their way, the right way, and if anyone tried anything different then they would do everything they could to prevent this being seen. They certainly were not going to consider the possibility that something as fundamental as the Wigner rotation could be improved on - and all this in spite of the evidence from Penrose and others. These journal referees were anonymous, and I doubt that they would have made some of their more intemperate remarks if they had had to take responsibility. But when criticisms are sufficiently ignorant, they become like water off a duck's back: I ignored them and the work became the core of a D.Phil. thesis on arbitrary-spin field theory.
I revisited a lot of the standard ground of so-called "axiomatic" field theory in the process, including the spin-statistics theorem. Luckily one of the experts in the field who was prepared to accept that the last word had not necessarily been written on Wigner's Unitary Irreducible Representations of the Poincaré Group was Professor O'Raifeartaigh, who was my external examiner, and I was passed in June 1984.
I do see papers on higher spin appearing on ArXiv on a fairly regular basis: mostly generalisations of one kind or another, but, given that so far we can only be sure that fundamental particles of spin 1/2 and 1 exist (no, I do not believe in the Higgs), I question the value. On the other hand the fact that one can build interacting field theory entirely from free particle states means that the Wigner work, instead of being interesting but ultimately irrelevant - as many seem to imagine - is essential. |
Tuesday, May 29, 2012
Study of photosynthesis reveals new physics
The experimental study of photosynthesis is beginning to reveal new physics. Already it has become clear that quantum effects are involved on much longer scales than orthodoxy has believed. The latest revelation is described in a Science News article titled Unusual Quantum Effect Discovered in Earliest Stages of Photosynthesis. When a photon excites the cell, it excites one electron inside a chromophore. The surprising finding was that a single photon actually excites several chromophores simultaneously!
What does this statement mean? One could think of the photon as a finite-sized object (in the TGD Universe, a space-time sheet that I have called a "massless extremal") with a size scale of one wavelength, which for visible photons is of the order of cell size. The photon would excite a superposition of states in which one of the chromophores is excited. The system formed by the chromophores behaves as a quantum coherent system. If quantum coherence is not present, one can say that the photon excites just a single chromophore at a time.
Quantum coherence makes possible an amplification of the process rate. In the incoherent situation the rate would be proportional to the number N of chromophores. In the coherent situation it would in the optimal case be proportional to N² (constructive interference). Note that one could also imagine a second option in which the photon literally excites several electrons belonging to different chromophores separately, but this does not make sense to me.
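A toy numerical check of the N versus N² scaling (unit amplitudes and random phases; nothing here is specific to chromophores):

```python
import numpy as np

N, trials = 100, 2000
rng = np.random.default_rng(0)
coherent = abs(np.ones(N).sum()) ** 2           # in-phase amplitudes: N**2
incoherent = np.mean([abs(np.exp(1j * rng.uniform(0, 2 * np.pi, N)).sum()) ** 2
                      for _ in range(trials)])  # random phases: ~N on average
print(coherent, incoherent)                     # 10000 vs roughly 100
```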
This kind of simultaneous excitation would make sense in the TGD framework if the photon space-time sheet has a size of the order of the wavelength, which is indeed of the order of cell size. If the photon is dark, then its size is scaled up by the ratio hbar/hbar0, so that even chromophores in several cells could be excited simultaneously!
For TGD based model of photosynthesis see the chapter Macroscopic quantum coherences and quantum metabolism as different sides of the same coin: part I of "TGD Universe as a Conscious Hologram".
Monday, May 28, 2012
Gamma ray diffraction from silicon prism?
Orwin O'Dowd sent a very interesting link to a popular article telling about the refraction of gamma rays by silicon prisms. This should not be possible, and since I love anomalies I got interested. Below I discuss the discovery from the standard physics point of view and from the TGD point of view.
Basic ideas about refraction
Absorption, reflection, and refraction are basic phenomena of geometric optics, which describes the propagation of light in terms of light rays, neglecting the interference and diffraction that make it possible for light to "go around the corner". The properties of the medium are described in terms of the refraction index n, which in general is a complex quantity. The real part of n gives the phase velocity of light in the medium using the vacuum velocity c as unit; contrary to a rather common misconception, the phase velocity can also be larger than c, since a phase velocity cannot be assigned to energy transfer. The imaginary part characterizes absorption. n depends in general on the frequency of the incoming light, and the resonant interactions of light with the atoms of the medium make themselves manifest in the frequency dependence of n - in particular in the absorption described by the imaginary part of n.
What happens at the boundary of two media - reflection or refraction - is characterized by the refraction indices and the boundary conditions for the radiation fields at the boundary, which are essentially Maxwell's equations at the discontinuity. Snell's law tells what happens to the direction of the beam: essentially, only the momentum component of the incoming photon normal to the boundary changes in these processes, since only the translational symmetry in the normal direction is broken.
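In code, Snell's law is just the statement n1·sin(θ1) = n2·sin(θ2). A minimal sketch (the numbers are illustrative):

```python
import numpy as np

def snell(theta_in_rad, n1, n2):
    """Snell's law n1 sin(theta_in) = n2 sin(theta_out); returns theta_out (radians)."""
    return np.arcsin(n1 * np.sin(theta_in_rad) / n2)

print(np.degrees(snell(np.radians(30.0), 1.0, 1.5)))  # air -> glass: ~19.5 degrees
```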
How is the refractive index determined?
What determines the index of refraction? To build a microscopic theory of n one must model what happens to the incoming beam of light in the medium - that is, the scattering of light from the atoms of the medium.
In the case of condensed matter, X-ray diffraction is an excellent example of this kind of theory. Here the lattice structure of the condensed matter system makes the situation simple. For an infinitely large medium and an infinitely wide incoming beam, the scattering amplitude is just the Fourier transform of the density of atoms, evaluated at the change of the wave vector (or equivalently the momentum) of the photon, which must be a vector in the reciprocal lattice of the crystal lattice. Therefore the beam is split into beams in precisely defined directions. The diffracted beam has a sharp maximum in the forward direction, and the amplitude in this direction is essentially the number of atoms.
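The sharp reciprocal-lattice peaking is easy to reproduce in a toy one-dimensional model (my illustration; nothing here is specific to the discussion above):

```python
import numpy as np

a, N = 1.0, 50                               # lattice constant, number of atoms
r = a * np.arange(N)                         # atom positions of a 1-D crystal
q = np.linspace(0.0, 4 * np.pi / a, 2001)    # wave-vector transfer
S = np.abs(np.exp(1j * np.outer(q, r)).sum(axis=1))  # |sum_j exp(i q r_j)|
print(q[S > 0.9 * N])  # peaks near q = 0, 2*pi/a, 4*pi/a; forward amplitude = N
```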
In a less regular situation, such as water or bio-matter, for which regular lattice structure typically exists only locally, the peaking in the forward direction is even more pronounced; in the first approximation the beam travels in the direction it has after entering the system, and only the phase velocity is changed while attenuation takes place. Diffraction patterns are however present also now and allow one to deduce information about the structure of the medium on short length scales. For instance, X-ray diffraction from biological matter allowed structural information about DNA to be deduced, and eventually its structure.
This description contains an important implicit assumption: the width and length of the incoming photon beam must be so large that the number of atoms inside it is large enough. If this condition is not satisfied, the large-scale interference effects crucial for diffraction do not take place. For very narrow beams the situation approaches scattering from a single atom; one expects that the beam is gradually widened, but that it does not make sense to speak about a refraction index or to apply Snell's law. Incoming photons see individual atoms rather than the lattice of atoms. For this reason the prevailing wisdom has been that it does not make sense to speak about the bending of gamma rays by a solid. A gamma ray photon with an energy of one MeV corresponds to a wavelength λ of about 10⁻¹² meters, which is of the same order as the electron Compton length. One expects that the width and length of a gamma ray beam is measured using λ as the natural unit. Even a width of 100 wavelengths corresponds to 1 Angstrom, which is the size scale of a single atom.
The real surprise was that gamma rays do bend in prisms made from silicon! The discovery was made by a group of scientists working at the Ludwig-Maximilians-Universität in Munich, led by Dietrich Habs. The article about the discovery is also published in the May 3 issue of Phys Rev Lett. The gamma ray energies were in the range 0.18–2 MeV. The bending known as refraction was very small by everyday standards: the value of the refractive index, which gives the ratio c/v of the vacuum light velocity c to the light velocity v in silicon, is 1 + 10⁻⁹, as one learns from another popular article. When compared to the predictions of the existing theory, the bending was however anomalously large. By the previous argument it should not even be possible to talk about bending.
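For a sense of scale, one can feed the reported index into the standard small-angle prism formula δ ≈ (n − 1)·A; the apex angle below is a hypothetical value, not taken from the experiment:

```python
n_minus_1 = 1e-9       # reported n - 1 for silicon at gamma-ray energies
apex_angle_rad = 0.5   # hypothetical prism apex angle
delta_rad = n_minus_1 * apex_angle_rad
print(f"deviation ~ {delta_rad:.1e} rad")  # ~5e-10 rad: tiny, but nonzero
```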
Dietrich Habs suggests that so-called Delbrueck scattering of gamma rays from virtual electron-positron pairs created in the electric fields of atoms could explain the result (see this). This scattering would be diffraction (scattering almost totally in the forward direction, as for light coming through a hole). This cannot however give rise to an effective scattering from a many-atom system unless the gamma ray beam is effectively or in a real sense scaled up: the scattering would still be from a single atom or even part of a single atom. One could of course imagine that atoms themselves have a hidden structure analogous to a lattice structure, but why would virtual electron pairs give rise to it?
In the following I discuss two TGD-inspired proposals for how the diffraction that should not occur could occur after all.
Could gamma rays scatter from quarks?
There is another strange anomaly that I discussed a couple of years ago, christened the incredibly shrinking proton. It was found that the proton's charge distribution deviates slightly from the expected one. The TGD-inspired explanation was based on the observation that the quarks in the proton are rather light, having masses of 5 and 20 MeV. These correspond to gamma ray energies. Therefore the Compton wavelengths of quarks are also rather long - much longer than the Compton length of the proton itself! Parts would be larger than the whole! The explanation for this quantum mystical fact would be that the Compton length corresponds to the length scale assignable to the color magnetic body of the quark. Could it be that the scattered gamma rays see the magnetic bodies of the 3 × 28 = 84 valence quarks of the 28 nucleons of the Si nucleus? The regular structure of the atomic nucleus as a composite of quark magnetic bodies would induce the diffraction pattern. If so, we could some day do nuclear physics and perhaps even study the structure of the proton by studying the diffraction patterns of gamma rays on nuclei!
Could part of the gamma beam transform to large-hbar gamma rays?
Also the hierarchy of Planck constants comes to mind. Scaling hbar at fixed photon energy scales up the wavelength of the gamma ray. Could some fraction of the incoming gamma rays undergo a phase transition increasing their Planck constant? The scaling of the Planck constant makes gamma rays behave like photons with a scaled-up wavelength. Also the width of the beam would be zoomed up. As a result the incoming gamma ray beam would see a group of atoms instead of a single atom, and for a large enough value of Planck constant one could speak of diffraction giving rise to refraction.
Years ago I considered half-jokingly the possibility that the hierarchy of Planck constants could imply quantum effects on much longer scales than usual (see this). Diffraction would be a typical quantum effect involving interference. Perhaps even the spots seen sometimes in an ordinary camera lens could be analogous to the diffractive spots generated by diffraction of large-hbar visible photons through a hole (they should usually appear on the scale of the visible wavelength, about a few microns, see this). Take this as a joke!
I also proposed that strong classical em fields provide the environment inducing increase of Planck constant at some space-time sheets. The proposal was that Mother Nature is theoretician friendly (see this). As perturbation expansion in powers of 1/hbar fails, Mama Nature scales up hbar to make the life of her theorizing children easier;-). Strong electric and magnetic fields of atomic nuclei believed by Habs to be behind the diffraction might provide the manner to generate large Planck constant phases and dark matter.
Does thermodynamics have a representation at the level of space-time geometry?
R. Kiehn has proposed what he calls Topological Thermodynamics (TTD) as a new formulation of thermodynamics. The basic vision is that thermodynamical equations could be translated to differential geometric statements using the notions of differential forms and Pfaffian systems. That TTD differs from TGD by only a single letter is of course not a reason to expect a relationship between them. Quantum TGD can however in a well-defined sense be regarded as a square root of thermodynamics in zero energy ontology (ZEO), and this leads one to ask seriously whether TTD might help to understand TGD at a deeper level. The thermodynamical interpretation of space-time dynamics would obviously generalize black hole thermodynamics to the TGD framework, and already earlier some concrete proposals have been made in this direction.
One can raise several questions. Could the preferred extremals of Kähler action code for the square root of thermodynamics? Could induced Kähler gauge potential and Kähler form (essentially Maxwell field) have formal thermodynamic interpretation? The vacuum degeneracy of Kähler action implies 4-D spin glass degeneracy and strongly suggests the failure of strict determinism for the dynamics of Kähler action for non-vacuum extremals too. Could thermodynamical irreversibility and preferred arrow of time allow to characterize the notion of preferred extremal more sharply?
It indeed turns out that one can translate Kiehn's notions to TGD framework rather straightforwardly.
1. Kiehn's work 1-form corresponds to the induced Kähler gauge potential, implying that the vanishing of the instanton density for the Kähler form becomes a criterion of reversibility; irreversibility is localized on the (4-D) "lines" of generalized Feynman diagrams, which correspond to space-like signature of the induced metric. The localization of heat production to generalized Feynman diagrams conforms nicely with the kinetic equations of thermodynamics based on reaction rates deduced from quantum mechanics. It also conforms with Kiehn's vision that dissipation involves topology change.
2. Heat produced in a given generalized Feynman diagram is just the integral of instanton density and the condition that the arrow of geometric time has definite sign classically fixes the sign of produced heat to be positive. In this picture the preferred extremals of Kähler action would allow a trinity of interpretations as non-linear Maxwellian dynamics, thermodynamics, and integrable hydrodynamics.
3. The 4-D spin glass degeneracy of TGD and the associated breaking of ergodicity suggest that the notion of global thermal equilibrium is too naive. The hierarchies of Planck constants and of p-adic length scales suggest a hierarchical structure based on CDs within CDs at the imbedding space level and space-time sheets topologically condensed on larger space-time sheets at the space-time level. The arrow of geometric time for quantum states could vary for sub-CDs and would have thermodynamical space-time correlates realized in terms of distributions of arrows of geometric time for sub-CDs, sub-sub-CDs, etc.
The hydrodynamical character of the classical field equations of TGD means that the field equations reduce to local conservation laws for the isometry currents and the Kähler gauge current. This requires the extension of Kiehn's formalism to include, besides forms and the exterior derivative, also the induced metric, the index-raising operation transforming 1-forms to vector fields, the duality operation transforming k-forms to (n−k)-forms, and the divergence, which vanishes for conserved currents.
For background see the chapter Basic Extremals of Kähler action of "Physics in Many-Sheeted Space-time" or the article Does thermodynamics have a representation at the level of space-time geometry?.
Friday, May 25, 2012
Negentropic entanglement, metabolism, and acupuncture
It is interesting to try to develop a detailed model of acupuncture in the TGD framework (the motivation came from a comment by Ulla on an earlier posting). Consider the following assumptions.
1. The ATP (metabolic energy) - negentropic entanglement connection is true, and the formation of a high-energy phosphate bond somehow generates negentropic entanglement.
2. Pain means loss of negentropic entanglement and healing at the fundamental level - in particular pain relief - involves regeneration of negentropic entanglement.
3. Fundamental metabolic energy currencies correspond to zero point kinetic energies E0 ≈ π²/(2mL²) at space-time sheets labelled by p-adic primes determining their size scale L = (hbar/hbar0)Lp (a numerical illustration follows this list). Therefore the generation of metabolic energy storages means at the fundamental level driving charged particles to smaller space-time sheets (the smaller the space-time sheet, the higher the zero point kinetic energy). The driving force is basically the electric force, so that electric fields are needed.
4. Metabolic energy storage - generation of ATP - means generation of negentropic entanglement. Assume that this entanglement is assignable to the smaller space-time sheet.
1. The simplest possibility is that the electrons at this space-time sheet form Cooper pairs and the negentropic entanglement is between them. The decay of Cooper pairs would make ATP unstable, and the decay to ADP would mean the use of a metabolic energy quantum and also a loss of negentropic entanglement. This conforms with the generalized form of the second law, which allows the generation of genuine negentropy but predicts that it does not last forever. The lifetime of ATP - about 40 minutes - gives an estimate for the lifetime of the electronic Cooper pairs. The negative charge of ATP would be due to the electronic Cooper pairs.
2. A simple estimate for the order of magnitude of the Kähler magnetic energy of the flux tube, assuming a far-from-vacuum extremal and quantization of the Kähler magnetic flux (BK·S = n×hbar for a constant magnetic field in a flux tube of cross section S), shows that the Kähler magnetic energy is much higher than the zero point kinetic energy of the electron pair - especially so for large values of hbar, since the magnetic energy behaves as EB ∝ hbar³·L0/S by the proportionalities B ∝ hbar·B0 and L = hbar·L0. In this case the magnetic flux tube should be pre-existing and correspond to an acupuncture meridian emerging from the node.
3. For near-vacuum extremals the flux tube could be generated in the process. The use of the metabolic energy would mean the dropping of electrons to a larger space-time sheet and possibly even the disappearance of the magnetic flux tube in this case. This option does not look too plausible, however.
5. The generation of metabolic energy storages (ATP) requires energy feed. In the formation of ATP from ADP the acceleration of protons and electrons in the electric field of the cell membrane plays a key role. The electric energy gained in the process is transformed to metabolic energy and could mean the formation of a flux tube carrying the Cooper pair. Assume that a similar process occurs also on much longer length scales for weaker electric fields scaling like 1/hbar² for a given p-adic prime (and like 1/Lp² as a function of the p-adic length scale), so that the electric potential between the ends of the flux tube remains the same. Assume that quantum direct currents are in question. If so, the function of the direct currents of Becker can be identified as a manner to generate metabolic energy and negentropic entanglement. This is natural since healing is involved.
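As a numerical illustration of the zero point kinetic energy formula in assumption 3 (hbar restored; the 1 nm length scale is a hypothetical choice, not a specific p-adic prediction):

```python
import scipy.constants as const

def zero_point_energy_eV(mass_kg, L_m):
    """E0 = pi^2 hbar^2 / (2 m L^2): zero point kinetic energy of a particle
    confined to a region of size L, expressed in electron volts."""
    return const.pi ** 2 * const.hbar ** 2 / (2 * mass_kg * L_m ** 2) / const.e

print(zero_point_energy_eV(const.m_e, 1e-9))  # electron at L = 1 nm: ~0.4 eV
print(zero_point_energy_eV(const.m_p, 1e-9))  # proton at L = 1 nm: ~2e-4 eV
```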
Armed with these assumptions one can try to understand why metal needles are essential for acupuncture.
1. The basic idea is that the presence of the needle makes possible the generation of direct quantal currents accelerating electrons in an electric field which is the sum of the pre-existing field and the field possibly generated by the needle. After gaining some minimum energy, electrons can jump to a smaller space-time sheet and give rise to negentropically entangled Cooper pairs.
2. The needle could serve as a mere donor of electrons giving rise to a quantal direct current in turn leading to the generation of metabolic energy and negentropic entanglement.
3. A second possibility is that the needle also generates a strong additional contribution to the existing electric field.
1. Basic wisdom from electrodynamics is that any sharp conducting charged object - such as a metal needle - tends to create a strong electric field around its tip. This is why one should not stand under a tree during a thunderstorm. Suppose that the acupuncture needle becomes charged when touching the skin. One could test this assumption by replacing acupuncture needles with needles of non-conducting material to see whether the healing effect is lost. One could also test whether the metal needle is at a non-vanishing potential with respect to Earth, or measure directly the electric field in the vicinity of the needle tip.
2. If the needle generates negative charge, an opposite charge must be generated somewhere else, with electric field lines connecting the needle to it. These field lines could run along magnetic flux tubes carrying also a longitudinal electric field. The natural assumption is that the flux tubes correspond to meridians emanating from the acupuncture node into which the needle is stuck. Another possibility is that the needle remains neutral overall but develops a surface charge density via polarization in the existing electric field. Also in this case an additional electric field is generated, and it should be analogous to that of a thin electric dipole in an external electric field.
3. Under these assumptions quantum currents can flow along the meridians and load the metabolic batteries, provided the strength of the generated field is high enough. The situation could quite closely resemble that for the generation of the nerve pulse. There would be a pre-existing electric field along the flux tube, not too far from the critical value for the generation of a quantal direct current. The field generated by the needle would induce depolarization, so that a quantal direct current of some minimal strength could flow between the ends of the flux tube, the acceleration providing electrons with the energy needed for the transfer to the smaller space-time sheet.
Nanna Goldman et al have provided empirical evidence for the expectation that the healing effect of acupuncture involves metabolism (see the popular article in Sciencedaily).
The group found that adenosine is essential for the pain-killing effects of acupuncture. For mice with a normal adenosine level, acupuncture reduced discomfort by two-thirds; in special "adenosine receptor knock-out mice" acupuncture had no effect. When adenosine was turned on in the tissues, the discomfort was reduced even in the absence of acupuncture. During and after an acupuncture treatment, the level of adenosine in tissues near the needles was 24 times higher than before the treatment. In the abstract of the article it is stated that it has long been known that acupuncture generates signals which induce the brain to generate natural pain-killing endorphins, but that adenosine also acts as a natural pain killer.
Adenosine is the basic building block of AXP, X = M, D, T (adenosine X-phosphate, X = mono, di, tri). Therefore the findings suggest that the electric fields generated or amplified by the presence of acupuncture needles load metabolic batteries by generating ATP. Adenosine could be partially generated as a decay product of AXPs. The tissue itself could increase the adenosine concentration to make possible its transformation to AXP utilizing electric field energy. From the popular article one cannot conclude whether the authors propose a connection with metabolism. The results are consistent with the assumption that the AXPs generated from adenosine accompany negentropic entanglement. This can occur in the scale of the entire body, and the meridians could also make possible direct signalling with the brain.
The contents of this posting can be found also at my homepage as an article with title Quantum Model for the Direct Currents of Becker.
Wednesday, May 23, 2012
Emotional about dark matter
Dark matter has become a subject of heated debates. There is a very heavy tendency to believe in the existence of dark matter in the standard sense, that is, dark matter appearing as spherical halos around galaxies. This belief is now challenged by experimental facts.
1. Indications for new particles such as that found by Fermi is automatically interpreted as indications for dark matter. This is strange and based only on the belief that the main stream view is correct. The signals could be real but need not have anything to do with galactic dark matter halo.
Here the situation resembles that for Higgs. Despite the fact that the decay signatures of the bump differ from those for standard model Higgs, Nobel prizes have been already shared by some bloggers for the discovery of Higgs.
2. Dark matter is automatically identified as a spherical halo of galactic dark matter. There are actually three quite recent observations challenging the existence of this halo. One of the observations led to the claim that the environment of the solar system does not contain dark matter as part of the galactic halo, as it should. As Resonaances tells, this claim was challenged by another group arguing that a change in the model for the velocity distributions of stars allows one to get the dark matter there. A non-specialist cannot say anything about this. But again Lubos drew without hesitation the conclusion which fits his beliefs and wishes: the problem is settled, the dark matter halo is there.
3. This tendency to draw very rapid conclusions looks strange in the light of the fact that there are two other observations challenging the dark matter halo (see this and also the earlier posting). About these findings neither Resonaances, Lubos, nor Sean Carroll says anything.
Sean also wonders why people get so emotional about dark matter. My guess is that a possible end to funding makes anyone emotional;-). We still remember how bitter the fight to save the status of string theory as the only possible theory of everything was. It is now over: no string theorists have been hired during this year, as Peter Woit reports, but still some true believers are raging.
If you were not born yesterday, it might occur to you that galactic dark matter halos define one of the basic assumptions of - I dare guess all - recent models of cosmology receiving funding. The incorrectness of this assumption would mean a catastrophic redistribution of funding. Researchers face a painful moral challenge: facts or funding? It is certainly always possible to play with parameters to get rid of experimental findings which do not fit the paradigm.
Personally I want to keep my mind open. It is easy to be honest when one has nothing to lose anymore! The TGD based explanation for the velocity distribution of distant stars around a galaxy relies on magnetic flux tubes carrying dark matter and having galaxies around them like pearls in a necklace. No halo is needed to explain the velocity spectrum, and the prediction is free motion of galaxies along the flux tubes. TGD predicts also dark matter as a hierarchy of phases with a non-standard value of Planck constant. The guess is that their contribution to the mass density is small, as are their gravitational effects, but this is just a guess.
Theoretical particle physics has been in a state of stagnation for four decades. This is a statement which I read more and more often in physics blogs. The statement is true. My conviction is that things went wrong when GUTs became the paradigm. To my view already the standard model is partially wrong: color is not a spin-like quantum number at the fundamental level, although it is such in an excellent approximation. And there are indeed experimental anomalies supporting the new view about color. Whether Higgs is there or not will be seen in the near future. The TGD prediction is that Higgs is effectively replaced by an entire scaled-up variant of hadron physics.
After GUTs came SUSY, and then string models became the paradigm. The string theory period is over, and SUSY in the standard sense is fighting desperately for survival. Also the dark matter halo is getting thinner and thinner!
What about GUTs? Is the GUT paradigm the next to go? For instance, Nima Arkani-Hamed has talked about a return to the roots. The basic prediction of GUTs is the instability of the proton against particular kinds of decays. Nothing has been detected despite continual efforts, but this fact has gone unnoticed. It does not require too much cynicism to understand the reluctance to notice the obvious. If the proton is stable, practically 40 years of particle physics theory comes tumbling down. This would have really tragic implications for funding!
Addition: Lubos has become even more emotional about dark matter. Lubos refuses to consider any view about dark matter other than the standard one and sees different views as a denial of dark matter. There are now three experiments suggesting strongly that galactic dark matter does not form a spherical halo around the galaxy. As mentioned, Lubos is completely silent about two of these experiments: this kind of intellectual dishonesty is part - maybe even an unconscious part - of the psychology of denial.
As explained, TGD allows one to understand the existing findings: in this model dark matter is carried by string like objects defined by magnetic flux tubes - string like objects, albeit not fundamental strings. Just by getting rid of the obsession with fundamental strings and replacing them with 3-D objects one would obtain a theory that works and predicts that the Universe is filled with string like objects having a concrete physical interpretation, whereas the strings of string models have no physical counterparts! How simple but how difficult to accept! If Witten had discovered TGD it would have been for decades the theory of everything:-).
Addition: An interesting new twist in the fight for the survival of galactic dark matter halos has emerged. Recall that Bidin, Carraro, and Mendez claimed that the upper bound for the density of dark matter in the nearby environment of the Sun is considerably lower than predicted by the galactic halo model. Then Bovy and Tremaine stated that the analysis contains an error and that their own analysis gives the value of the dark matter density predicted by the halo model. Lubos declared, with the necessary ad hominems, the situation settled: the galactic halo is there, just as the high priests of theoretical physics have decided. Now Hontas Farmer has written an interesting informal article claiming that the argument of Bovy and Tremaine contains a logical error: the predicted density appears as an input! Warmly recommended: at least I regard real analysis as more interesting than fanatic claims peppered with ad hominem insults.
Saturday, May 19, 2012
DC currents of Becker, part III: Are the direct currents quantal?
Becker summarizes his findings by stating that living matter is an effective semiconductor. There are pairs of structures at positive and negative potentials in various scales, and the current between the plates of this effective capacitor flows only above some minimum potential difference. The current flows from the positive to the negative pole and could be an electron current. Also a proton current in the opposite direction can be considered, but an electron current is experimentally favored. For instance, consciousness is lost when a magnetic field is used to deflect the current.
In the TGD framework the natural carriers of these currents would be magnetic flux tubes carrying also electric fields. A very simple deformation of the imbeddings of constant longitudinal magnetic fields gives also a longitudinal electric field. With a slight generalization one obtains helical electric and magnetic fields. A crucial difference is that these currents would be quantal rather than ohmic currents, even in the length scale of the biological body and even in the longer scales assignable to the magnetic body.
The following argument allows one to understand the physical situation.
1. A precise everyday analogy is vertical motion in the gravitational field of the Earth between the surface and some target at a given height h. If the kinetic energy is high enough, the particle reaches the target. If not, the particle falls back. In the quantum case one expects that the latter situation corresponds to a very small probability amplitude at the target (tunneling into the classically forbidden kinematic region).
2. Now an electric field replaces the gravitational field. Suppose that the classical electric force experienced by the particle is directed towards the capacitor plate taking the role of the surface of the Earth. Below a critical field strength the charged particle cannot reach the target classically, and quantum mechanically this occurs only by tunneling, with vanishingly small probability.
3. Particles with the opposite value of charge experience a force which accelerates them, and classically they certainly reach the second plate. What happens in the quantum situation? It seems that this situation is essentially identical with the first one: one has a linear potential in a finite interval and the wave functions are localized in this range. One can equivalently regard these states as localized near the second capacitor plate.
4. A good analogy is provided by atoms: classically the electron would fall into the nucleus, but quantization prevents this. Also now one can imagine stationary solutions for which the electric currents of the individual charges vanish at the plates, although classically there would be a current in one direction. Also quantum mechanically a non-vanishing conserved current is possible: everything depends on the boundary conditions.
Basic model
Consider now the situation at a more quantitative level.
1. One can assign complex order parameters Ψk to the various Bose-Einstein condensates of supra phases; these obey the Schrödinger equation

i∂tΨk = [−(hbar²/2mk)∂z² + qkEz] Ψk .

Here it is assumed that the situation is effectively one-dimensional. E is the value of the constant electric field.
2. The Schrödinger equation becomes non-linear when one expresses the electric field in terms of the total surface charge density associated with the plates of the effective capacitor. In the absence of an external electric field it is natural to assume that the net surface charge densities σ at the plates are of opposite sign, so that the electric field inside the capacitor is proportional to

σ = ∑i σi = ∑i qi ΨbariΨi .

This gives rise to a non-linear term completely analogous to that in the non-linear Schrödinger equation (a toy numerical sketch of this self-consistency is given at the end of this list). A more general situation corresponds to one in which the interval [a,b] bounded by the capacitor plates a and b belongs to a longer flux tube like structure [A,B]: [a,b] ⊂ [A,B]. In this case one has

Etot = E + E0 .

This option is needed to explain Becker's observation that a local strengthening of the electric field increases the electron current: this would be the case in the model to be discussed if this field has a direction opposite to the background field E0. One could also interpret E as the quantized part of the electric field and E0 as the classical contribution.
3. The electric currents are given by
jk = i×hbar(qk/2mk) (Ψk∂zΨbark − Ψbark∂zΨk) .
In a stationary situation the net current must vanish:

∑k jk = 0 .
A stronger condition is that individual currents vanish at the plates:
jk=0 .
It must be emphasized that this condition does not make sense classically.
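The self-consistency implied by the non-linear term can be made concrete with a toy numerical sketch. This is my own illustration, not part of the model proper: units hbar = m = q = 1 are assumed, the field is approximated as spatially constant, and the feedback coupling g and the "near plate a" region are ad hoc choices.

    import numpy as np

    # Toy fixed-point iteration for E_tot = E0 + E, with E approximated by an
    # ad hoc coupling g times the charge fraction near plate a.
    # Units hbar = m = q = 1; all numbers are illustrative only.
    N, L = 200, 1.0
    z = np.linspace(0.0, L, N)
    dz = z[1] - z[0]

    def ground_state(E_field):
        # finite-difference Hamiltonian for -(1/2) d^2/dz^2 + E_field * z
        diag = 1.0 / dz**2 + E_field * z
        off = -0.5 / dz**2 * np.ones(N - 1)
        H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
        w, v = np.linalg.eigh(H)
        psi = v[:, 0]
        return psi / np.sqrt(np.sum(psi**2) * dz)  # normalized ground state

    E0, g = 10.0, 0.5        # assumed background field and feedback coupling
    E_field = E0
    for _ in range(30):      # crude fixed-point iteration
        psi = ground_state(E_field)
        rho_a = np.sum(psi[: N // 10] ** 2) * dz  # charge fraction near plate a
        E_field = E0 + g * rho_a
    print("self-consistent field:", E_field)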
Explicit form of Schrödinger equation
Consider now the explicit form of the Schrödinger equation in a given electric field.
1. The equation is easy to solve by writing the solution ansatz in polar form (the index k labelling the charged particle species will be dropped for notational convenience):

Ψ = R (a exp(iU) + b exp(−iU)) exp(−iEn t) .

For real solutions the current vanishes identically, and this is something which is not possible classically. It is convenient to restrict the consideration to stationary solutions, which are energy eigenstates with energy eigenvalue En, and to express the general solution in terms of these.
2. The Schrödinger equation reduces, with the change of variable

x == (z0−z)/z1 ,
z0 = En/qE ,
z1 = (hbar²/2mqE)^(1/3) ,

to the form

(∂x² + x)Ψ = 0 .

The range [0,z0] for z is mapped to the range [0, z0/z1] for x. z0/z1 has a positive sign, as is easy to verify, so the value range of x is positive irrespective of the sign of qE. This is the Airy equation: Airy functions are encountered in the WKB approximation whenever the potential function is approximated as linear, and they appear also in the model of the rainbow (see the numerical sketch after this list).
The change of variable leads automatically to solutions concentrated near the plate, where the situation is completely analogous to that in the gravitational field of the Earth. For stationary solutions a test charge in a given background field would be localized near the capacitor plate with the opposite sign of charge. A strong background field could be created by charges which do not correspond to the ionic charges defining the ionic currents. Electrons and protons could define this field, possibly associated with flux tubes considerably longer than the distance between the capacitor plates.
3. Using the polar representation Ψ = R exp(iU), the Schrödinger equation reduces to the two equations
[(∂x² − (∂xU)² + x)R] cos(U) − [R∂x²U + 2∂xR ∂xU] sin(U) = 0 ,
[(∂x² − (∂xU)² + x)R] sin(U) + [R∂x²U + 2∂xR ∂xU] cos(U) = 0 .
Note that both (R,U) and (R,−U) represent solutions for a given value of the energy, so that the solution can be chosen to be proportional to cos(U) or sin(U). The electric current j is conserved, equal to its value at x=0, and given by
j = (hbar/2m) (∂xU/z1) R² , z1 = (hbar²/2mqE)^(1/3) .
The current vanishes if either ∂xU is zero or if the solution is of the real form Ψ = R sin(U).
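A quick numerical illustration of item 2 above (a minimal sketch of my own, using scipy; the normalization is arbitrary): the solutions of (∂x² + x)Ψ = 0 are Ai(−x) and Bi(−x), and the normalizable branch is concentrated on the classically allowed side of the turning point.

    import numpy as np
    from scipy.special import airy

    # Solutions of (d^2/dx^2 + x) psi = 0 are Ai(-x) and Bi(-x);
    # scipy's airy(t) returns the tuple (Ai, Ai', Bi, Bi') at argument t.
    x = np.linspace(-5.0, 15.0, 801)
    Ai, Aip, Bi, Bip = airy(-x)

    psi2 = Ai**2  # unnormalized density for the normalizable branch
    # For x > 0 (between the plate and the turning point) Ai(-x) oscillates;
    # for x < 0 (beyond the turning point) it decays like exp(-(2/3)|x|^(3/2)),
    # so the density is concentrated on the classically allowed side.
    print("density fraction at x<0:", psi2[x < 0].sum() / psi2.sum())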
Semiclassical treatment
In the semiclassical approximation the potential is regarded as so slowly varying that it can be treated as locally constant. In this situation one can write the solution of the form R exp(iU) as

Ψ = R0 exp[(i/hbar) ∫0z (2m)^(1/2)(En − qEz')^(1/2) dz'] = R0 exp(i ∫0x x'^(1/2) dx') .

The plate at which the initial values are given can be chosen so that the electric force is analogous to gravitation at the surface of the Earth. This requires only replacing the coordinate z with a new one vanishing at the plate in question, and it gives the energies a positive shift E0 = qE0h.
1. The semiclassical treatment of the equation leads to Bohr rules
∮ pz dz/hbar = (2/hbar) ∫0h pz dz = n .
This gives
∮ pz dz/hbar = (2(2m)^(1/2)/hbar) ∫0h (En − qEz)^(1/2) dz = 2∫0x0 x^(1/2) dx = (4/3)×x0^(3/2) = n .
Note that the turning point of the classical orbit corresponds to zmax = En/qE.
2. One obtains
En = n^(2/3) E0 ,
E0 = (1/2) × [qE×hbar/(r×m^(1/2))]^(2/3) ,
r = ∫01 (1−u)^(1/2) du = 2/3 .
The value of zmax is
zmax = En/qE = (n^(2/3)/2r^(2/3)) × (hbar²/qEm)^(1/3) .
3. The approximation R = R0 = constant can make sense only if the position of the second plate is below zmax. This is possible if the value of n is large enough (n^(2/3) proportionality), if the mass m of the charged particle is small enough (m^(-1/3) proportionality, raising the electron and also the proton to a special position), or if the strength of the electric field is small enough (E^(-1/3) proportionality). The value of zmax is proportional to hbar^(2/3), so that a phase transition increasing Planck constant can induce current flow. The numbers are illustrated below.
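The scalings in item 3 are easy to put in numbers. A hedged sketch follows; the field strength E = 1 V/m is an arbitrary assumption, chosen only to exhibit the m^(-1/3) ordering of the charge carriers:

    import scipy.constants as c

    # z_max = (n^(2/3)/(2 r^(2/3))) * (hbar^2/(q E m))^(1/3), with r = 2/3.
    def z_max(m, q, E_field, n=1, r=2.0 / 3.0):
        return (n ** (2.0 / 3.0) / (2.0 * r ** (2.0 / 3.0))) * \
               (c.hbar**2 / (q * E_field * m)) ** (1.0 / 3.0)

    E_field = 1.0  # V/m, an assumed value
    for name, m in [("electron", c.m_e), ("proton", c.m_p),
                    ("Na+ (A=23)", 23 * c.m_p), ("K+ (A=39)", 39 * c.m_p)]:
        print(name, z_max(m, c.e, E_field), "m")
    # The m^(-1/3) dependence singles out electrons and protons and implies
    # that the lighter ions start to flow first.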
Possible quantum biological applications
The proposed model for quantum currents could provide a quantum explanation for the effective semiconductor property assigned to the DC currents of Becker.
1. The original situation would be stationary, with no currents flowing. The application of an external electric field in the correct direction would reduce the voltage below the critical value and currents would start to flow. This is consistent with Becker's findings if there is a background electric field E0 and the applied field has a direction opposite to E0, so that the field strength experienced by the charged particles is reduced and it is easier for them to reach the second plate. The need for the background field is of course a possible objection against the proposal.
2. Becker's DC currents appear in several scales. They are assigned to the pairs formed by the CNS and the perineural tissue (this includes also glial cells) and by the frontal and occipital lobes. Acupuncture could involve the generation of a DC supra current, and this mechanism would be essential in the healing. Also the mechanism generating qualia could involve the generation of supra currents and a dielectric breakdown for them. The role of the magnetic flux tubes in TGD inspired biology suggests that the mechanism could be universal. If this were the case, one might even speak about a Golden Road to the understanding of living matter at the basic level.
Even the generation of the nerve pulse might be understood in terms of this mechanism. One can argue that neurons are at a higher evolutionary level than the system pairs to which only electron currents, or electron and proton currents, can be assigned. This is because the value of Planck constant is higher for the magnetic flux tubes carrying the quantal ionic currents.
1. For a Bose-Einstein condensate the simplest choice is n=1 at both plates. The energy eigenvalues would naturally differ by the shift E0 = qE0h at the two plates for a given particle type. Under these assumptions the current can flow appreciably only if the voltage is below the minimum value. This is certainly a surprising conclusion but brings to mind what happens in the case of the neuronal membrane. Indeed, hyper-polarization has a stabilizing effect - something difficult to understand classically but natural quantum mechanically.
2. The reduction of the membrane potential slightly below the resting potential generates the nerve pulse. Also a phase transition increasing the value of Planck constant might give rise to quantal direct currents and generate the flow of ionic currents giving rise to the nerve pulse. Stationary solutions are located near either capacitor plate. What comes to mind is that the nerve pulse involves a temporary change of the capacitor plate with this property.
3. If electron and proton currents flow as direct currents, one encounters a problem. The nerve pulse should begin with direct electron currents followed by direct proton currents, and only later should ions enter the game, if at all. The existing model for the nerve pulse however assumes that at least electrons flow as oscillating Josephson currents rather than as direct quantal currents. This is quite possible and makes sense if the cell membrane thickness is small - that is, comparable to the electron Compton length, as assumed in the large hbar model for the nerve pulse. This assumption might be necessary also for protons and would make sense if the Planck constant for the protonic flux tubes is large enough. For ions the Compton length would be much smaller than the thickness of the cell membrane and direct currents would be natural.
If the Planck constant is the same for the biologically important ions, direct quantum currents would be generated in a definite order, since for h < zmax one has zmax ∝ m^(-1/3) ∝ A^(-1/3). The lightest ions would start to flow first.
1. Nerve pulses can be generated by voltage gated channels for sodium and potassium. Voltage gated channels would correspond to magnetic flux tubes carrying an electric field. For these channels Na+ ions with atomic weight A=23 and nuclear charge Z=11 start to flow first, and then K+ ions with A=39 and Z=19 follow. This conforms with the prediction that the lightest ions flow first. The nerve pulse duration is of order 1 millisecond at most.
2. Nerve pulses can also be generated by voltage gated Ca+2 channels. In this case the duration can be 100 ms and even longer. Ca has A=40 and Z=20. The proper parameter is x = r²/qA, r = hbar/hbar0. One has

x(Ca++)/x(Na+) = [r(Ca++)/r(Na+)]² × 23/(2×40) .

r(Ca++)/r(Na+) ≈ 2 would allow one to compensate for the increased weight and charge of the Ca++ ions.
4. The objection is that Na+ and K+ are not bosons and therefore cannot form Bose-Einstein condensates. The first possibility is that one has Cooper pairs of these ions. This would imply

x(Ca++)/x(2Na+) = [r(Ca++)/r(Na+)]² × 23/20 .

Ca++ and the Na+ Cooper pair would be in a very similar position for a given value of Planck constant. This is a highly satisfactory prediction. Another manner to circumvent the problem is more science fictive and assumes that the Na+ ions are exotic nuclei behaving chemically as Na+ but having one charged color bond between nucleons (allowed in the TGD view about nuclear physics).
It remains to be seen whether this model is consistent with the model of the cell membrane as an almost vacuum extremal, or whether the vacuum extremal based model could be modified by treating the ionic currents as direct currents. In the vacuum extremal model a classical Z0 gauge potential is present and would give a contribution to the counterpart of the Schrödinger equation. The ratio x(Ca++)/x(2Na+) for the parameter x = r²/((A−Z)A) (the em charge q is replaced with the neutron number A−Z in good approximation) equals 1.38 and is therefore not very far from unity.
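The quoted ratios are easy to check numerically (a sketch of my own; r is set to 1 so that only the mass and charge factors are displayed):

    # x = r^2/(q*A); the variants below reproduce the ratios quoted above.
    def x(r, q, A):
        return r**2 / (q * A)

    print(x(1, 2, 40) / x(1, 1, 23))    # Ca++ vs Na+: 23/80, about 0.29
    print(x(1, 2, 40) / x(1, 2, 46))    # Ca++ vs Na+ Cooper pair: 23/20 = 1.15
    # Vacuum extremal variant with q replaced by the neutron number A-Z:
    print(x(1, 20, 40) / x(1, 24, 46))  # = 1.38 as stated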
The many-sheetedness of space-time is expected to play a key role, and one should specify precisely which sheets are almost vacuum extremals and which sheets are far from vacuum extremals. One expects that magnetic flux tubes are far from vacuum extremals, and if voltage gated ionic channels are magnetic flux tubes, the proposed model might be consistent with the model of the cell membrane as an almost vacuum extremal.
The effects of ELF em fields on vertebrate brain
The effects of ELF em fields on the vertebrate brain occur both in frequency and amplitude windows. Frequency windows can be understood if the effects occur at cyclotron frequencies and correspond to the absorption of large hbar photons. A finite variation width for the strength of the magnetic field gives rise to a frequency window. The observed quantal character of these effects, occurring at harmonics of the fundamental frequencies, leads to the idea about cyclotron Bose-Einstein condensates as macroscopic quantum phases. The above considerations support the assumption that fermionic ions form Cooper pairs.
I have tried to understand also the amplitude windows, but with no convincing results. The above model for the quantum currents however suggests a new approach to the problem. Since ELF em fields are in question, they are practically constant in the time scale of the dynamics involved. Suppose that the massless extremal representing the ELF em field is orthogonal to the flux tube, so that the ions flowing along the flux tube experience an electric force parallel to it. What would happen is that the ions at the flux tube would be topologically condensed at both the flux tube and the massless extremal simultaneously and would experience the sum of the two forces.
This situation is very much analogous to that defined by a magnetic flux tube carrying a longitudinal electric field, and also now quantum currents could set on. Suppose that the semiconductor property means that the ions must gain a large enough energy in the electric field to be able to leak to a smaller space-time sheet and gain one metabolic quantum characterized by the p-adic length scale in question. If the electric field is above the critical value, the quantum current does not however reach the second capacitor plate, as already found: classically this is of course very weird. If the electric field is too weak, the energy gain is too small to allow the transfer of the ions to the smaller space-time sheet and no effect takes place. Hence one would have an amplitude window.
The amplitude windows occur in the widely separated ranges 1-10 V/m and around 10⁻⁷ V/m. Of course, also other amplitude ranges might be possible. Fractality and the notion of the magnetic body suggest a possible explanation for the widely different ranges. Both the p-adic length scale hypothesis and the hierarchy of Planck constants suggest that some basic structures associated with the cell membrane have fractal counterparts in a wide length scale range and correspond to binary structures. Magnetic flux tubes carrying the quantal DC currents of Becker would be the most natural candidate in this respect, since these currents appear in several length scales inside the organism. Also the counterparts of the lipid layers of the cell membrane could be involved. If so, one must include in the hierarchy of amplitude windows also fields in the range corresponding to the cell membrane resting potential, about 6×10⁶ V/m. This is of course only a rough order of magnitude estimate, since it is perturbations of these fields that matter.
By fractality the most natural guess is that the voltage along the flux tube is invariant under the scaling of Planck constant. This would mean that the electric field behaves as 1/L² ∝ 1/hbar² as a function of the length scale L characterizing the scaled variant of the structure. If so, the range E = 1-10 V/m, assignable also to EEG, would correspond to a length scale of 7.7-24 μm, that is the cell length scale. Perhaps the direct currents run between cell layers. E = 10⁻⁷ V/m would in turn correspond to 7.8 cm, which is the size scale of a human brain hemisphere (the experiments were carried out for vertebrates). Could the direct quantum currents in question run between the brain hemispheres along the corpus callosum?
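The length scale estimates can be checked with a few lines. This is a sketch assuming the 1/L² scaling anchored to the cell membrane, with the membrane field 6×10⁶ V/m taken across an assumed thickness of 10 nm:

    # E scales as 1/L^2, normalized to the cell membrane: E_mem across L_mem.
    E_mem, L_mem = 6e6, 1e-8   # V/m and m; the 10 nm thickness is an assumption

    def length_for_field(E):
        return L_mem * (E_mem / E) ** 0.5

    for E in (10.0, 1.0, 1e-7):
        print(E, "V/m ->", length_for_field(E), "m")
    # gives roughly 7.7-24 micrometers for 1-10 V/m (cell scale) and
    # about 7.8 cm for 1e-7 V/m (brain hemisphere scale), as quoted above.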
The contents of this posting can be found also at my homepage as an article with the title Quantum Model for the Direct Currents of Becker.
DC currents of Becker, part II: earlier TGD based model
On the basis of Becker's and others' observations described in the previous posting one can try to develop a unified view about the effects of laser light, acupuncture, and DC currents. It is perhaps appropriate to start with the following - somewhat leading - questions, inspired by a strong background prejudice that the healing process - with control signals from the CNS included - utilizes the loading of many-sheeted metabolic batteries by supra currents as a basic mechanism. In the case of control signals the energy would go to the "moving of the control knob".
1. Becker assigns to the systems involved with DC currents an effective semiconductor property. Could the effective semiconductor property be due to the fact that the transfer of charge carriers to a smaller space-time sheet, by first accelerating them in an electric field, is analogous to the transfer of electrons between conduction bands in a semiconductor junction? If so, the semiconductor property would be a direct signature of the realization of the metabolic energy quanta as zero point kinetic energies.
2. Supra currents flowing along magnetic flux tubes would make possible a dissipation-free loading of the metabolic energy batteries. This holds even when oscillating Josephson currents are in question, since the transformation to ohmic currents in the semiconductor junction makes energy transfer possible only during one half of the oscillation period. Could this be a completely general mechanism applying in the various stages of the regeneration process? This might be the case. In the quantal situation the metabolic energy quanta have very precise values, as indeed required. For ohmic currents at room temperature the thermal energies are considerably higher than those corresponding to the voltages involved, so that they seem to be excluded. The temperature at the magnetic flux tubes should however be lower than the physiological temperature by a factor of order 10⁻², at least for the voltage of -1 mV. This suggests that high Tc super-conductivity is effective only at the magnetic flux tubes involved. The finding that the nerve pulse involves a slight cooling of the axonal membrane - proposed in the TGD based model of nerve pulse to be caused by convective cooling due to the return flow of ionic Josephson currents - would conform with this picture.
3. What are meridians and what kind of currents flow along them? Could these currents be supra currents making possible dissipation-free energy transfer in the healthy situation? Does the negative potential of order -1 mV make possible the flow of protonic supra currents and the loading of metabolic batteries by kicking protons to smaller space-time sheets? Could electronic supra currents in the opposite direction induce a similar loading of metabolic batteries? Could these two miniature metabolisms realize control signals (protons) and feedback (electrons)?
The model answering these questions relies on the following picture. Consider first meridians.
1. The direct feed of metabolic energy as universal metabolic currencies, realized as a transfer of charge carriers to smaller space-time sheets, is assumed to underlie all the phenomena involving a healing aspect. The meridian system would make possible a lossless metabolic energy feed - a transfer of "chi" - besides the transfer of chemically stored energy via the blood flow. The metabolic energy currencies involved are very small as compared to .5 eV and might be responsible only for "turning the control knobs". The correlation of the level of consciousness with the overall strength of the DC electric fields would reduce to the level of remote metabolic energy transfer.
2. The model should explain why meridians have not been observed. Dark currents along magnetic flux tubes are ideal for the energy transfer. If the length of the superconducting "wire" is long in the scale defined by the appropriate quantum scale proportional to hbar, the classical picture makes sense and the charge carriers can be said to accelerate and gain the energy ZeV. For large values of hbar an oscillating Josephson current would be in question. The semiconductor like structure at the end of the meridian - possibly realized in terms of a pair of space-time sheets with different sizes - makes possible a net transfer of metabolic energy even in this case, as pulses at each half period of the oscillation. The transfer of energy with minimal dissipation would thus explain why the semiconductor like property is needed and why acupuncture points have a high value of conductivity. The identification of meridians as invisible magnetic flux tubes carrying dark matter would explain the failure to observe them: one further direct demonstration of the presence of dark matter in biological systems.
3. In the case of the regeneration process NEJs would be accompanied by a scaled down version of the meridian, with magnetic flux tubes mediating the electronic Josephson current during blastema generation and the protonic supra current during the regeneration proper. The space-time sheets of the proton resp. the electron correspond to kp and ke = kp+11. In a static situation the many-sheeted Gauss law would guarantee that the voltages over the NEJ are the same.
4. One can of course worry about the smallness of the electrostatic energies ZeV as compared to the thermal energy. The zero point kinetic energy could correspond also to the magnetic energy of the charged particle. For sufficiently large values of Planck constant the magnetic energy scale is higher than the thermal energy, and the function of the voltage could be only to drive the charged particles along the flux tubes to the target - and perhaps to act as a control knob, with the electrostatic energy compensating for the small lacking energy. Suppose for definiteness a magnetic field strength of B = .2 Gauss, explaining the effects of ELF em fields on the brain and appearing in the model of EEG. Assume that the charged particle is in the minimum energy state with cyclotron quantum number n=1 and with the spin direction giving a negative interaction energy between the spin and the magnetic field, so that the energy is (g−2)hbar×eB/2m. Assume that the favored values of hbar correspond to number theoretically simple ones expressible as a product of distinct Fermat primes and a power of 2. In the case of the proton with g ≈ 2.7927 the standard metabolic energy quantum E0 = .5 eV would require roughly hbar/hbar0 = 17×2³⁴. For the electron g−2 ≈ α/π ≈ .002328 gives hbar/hbar0 = 5×17×2³⁰.
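The required values of hbar/hbar0 in item 4 can be estimated numerically. A rough order-of-magnitude sketch (B = .2 Gauss and the .5 eV target quantum as assumed above; the agreement with the quoted number theoretically preferred values 17×2³⁴ and 5×17×2³⁰ is only within a factor of a few):

    import scipy.constants as c

    B = 0.2e-4              # 0.2 Gauss in Tesla
    E_target = 0.5 * c.e    # 0.5 eV in Joules

    # proton: spin-field interaction energy (g-2)*hbar*e*B/(2*m_p), g = 2.7927
    E_p = (2.7927 - 2.0) * c.hbar * c.e * B / (2.0 * c.m_p)
    print("proton:   hbar/hbar0 ~", E_target / E_p)

    # electron: g-2 is approximately alpha/pi
    E_e = (c.alpha / c.pi) * c.hbar * c.e * B / (2.0 * c.m_e)
    print("electron: hbar/hbar0 ~", E_target / E_e)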
Consider next NEJs, the semiconductor like behavior, and the charging of the metabolic batteries.
1. Since the NEJ seems to resemble the cell membrane in some respects, the wisdom gained from the model of the cell membrane and of DNA as tqc can be used. The model for the nerve pulse and the model for DNA as a topological quantum computer suggest that dark ionic currents flowing along magnetic flux tubes characterized by a large value of Planck constant are involved with both meridians and NEJs and might even dominate. Magnetic flux tubes act as Josephson junctions generating oscillatory supra currents of ions and electrons. For large values of hbar also meridians are short in the relevant dark length scale and act as Josephson junctions carrying oscillatory Josephson currents.
2. The findings of Becker suggest that acu points correspond to sensory receptors which are normally at a negative potential. The model for the effects of laser light favors (but only slightly) the assumption that in the healthy situation it is protons arriving along magnetic flux tubes which are kicked to the smaller space-time sheets, and that the negative charge density at the acu point attracts protons to it. Electrons could of course flow in the reverse direction along their own magnetic flux tubes and be kicked to the smaller space-time sheets at the positive end of the circuit. In the case of the brain, the protonic end would correspond to the frontal lobes and the electronic end to the occipital lobes. This kind of structure could appear as fractally scaled variants. For instance, glial cells and neurons could form this kind of pair, with neurons at negative potential and glial cells at positive potential, as suggested by the fact that neuronal damage generates a positive local potential.
3. Classically the charge carriers would gain the energy E = ZeV as they travel along the magnetic flux tube to the NEJ. If this energy is higher than the metabolic energy quantum involved, it allows the transfer of the charge carrier to a smaller space-time sheet so that the metabolic resources are regenerated. Several metabolic quanta could be involved, and the value of V(t) would determine which quantum is activated. The reduction of V below the critical value would lead to a starvation of the cell, or at least to the failure of the control signals to "turn the control knob". This should relate to various symptoms like pain at the acupuncture points. In a situation requiring acupuncture the voltage along the flux tubes would be so small that the transfer of protons to the smaller space-time sheets becomes impossible. As a consequence, the positive charge carriers would accumulate at the acu point and cause a further reduction of the voltage. The acupuncture needle would create a "wound" stimulating a large positive potential, and the situation would be very much like that in the regeneration process, so that the de-differentiation induced by acupuncture could be understood.
Many questions remain to be answered.
1. What causes the de-differentiation of the cells? The mere charging of the metabolic energy batteries perhaps? If so, then the amount of metabolic energy - "chi" - possessed by the cell would serve as a measure for the biological age of the cell, and the meridian system feeding "chi", identified as dark metabolic energy, would serve as a rejuvenating agent also with respect to gene expression. Or does the electric field define an external energy feed to a self-organizing system and create an electromagnetic environment similar to that prevailing during morphogenesis, inducing a transition of the cells to a de-differentiated state? Or could DNA as tqc allow one to understand the modification of gene expression as being due to the necessity to use tqc programs appropriate for regeneration? Or should the cells and the wounded body part be seen as intentional agents doing their best to survive rather than as passive parts of a biochemical system?
2. Acupuncture and DC current generation are known to induce the generation of endorphins. Do endorphins contribute to welfare by reducing the pain, or do they give a conscious expression to the fact that the situation has improved as a result of the recharging of the metabolic energy batteries?
3. Could the continual charging of metabolic energy batteries by DC currents occur also in the case of the cell membrane? The metabolic energy quantum would be around .07 eV in this case and correspond to the p-adic length scale k=140 for the proton (the quantum is roughly a fraction 1/8 of the fundamental metabolic energy quantum .5 eV corresponding to k=137; see the sketch below).
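A sketch of the arithmetic behind the factor 1/8 (assuming, as elsewhere in TGD, that the zero point kinetic energy scales as 1/L(k)² with L(k) ∝ 2^(k/2)):

    # Zero point kinetic energy E(k) scales as 2^(-k); normalize E(137) = 0.5 eV.
    def E_quantum(k, E137=0.5):
        return E137 * 2.0 ** (137 - k)

    for k in (137, 139, 140, 141):
        print(k, E_quantum(k), "eV")
    # k = 140 gives 0.0625 eV, matching the ~.07 eV membrane quantum quoted.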
DC currents of Becker. Part I: Becker's and others' findings
Robert Becker proposed, on the basis of his experimental work, that living matter behaves as a semiconductor in a wide range of length scales, ranging from the brain scale to the scale of the entire body. Direct currents flowing only in a preferred direction would be essential for the functioning of living matter in this framework.
One of the basic ideas of the TGD inspired theory of living matter is that various currents, even ionic currents, are quantal currents. The first possibility is that they are Josephson currents associated with Josephson junctions, but already this assumption more or less implies also quantal versions of direct currents. The TGD inspired quantum model for the nerve pulse assumes that the quantal ionic currents through the cell membrane are Josephson currents, so that the situation is automatically stationary. One can criticize this assumption, since the Compton length of ions for the ordinary value of Planck constant is so small that the magnetic flux tubes carrying the current through the membrane look rather long in this length scale. Therefore either Planck constant should be rather large, or one should have a non-ohmic quantum counterpart of a direct current in the case of ions and perhaps also protons in the case of the neuronal membrane: electronic and perhaps also protonic currents could still be Josephson currents. This would conform with the low dissipation rate.
The notion of a direct current is indeed familiar already from the work of Robert Becker, and in the following the results related to laser induced healing, acupuncture, and DC currents are discussed first. The obvious question is whether these direct currents are actually quantal currents and whether they could be universal in living matter. A TGD inspired model for quantal direct currents is proposed and its possible implications for the model of nerve pulse are discussed. Whether the model is consistent with the vacuum extremal property remains an open question.
I have divided the blog posting into three parts. The postings can be found at my homepage as an article with the title Quantum Model for the Direct Currents of Becker.
The findings of Robert Becker (the book Electromagnetism and Life by Becker and Marino can be found on the web) meant a breakthrough in the development of bioelectromagnetics. One aspect of the bioelectromagnetic phenomena was Becker's discovery that DC currents and voltages play a pivotal role in various regeneration processes. Why this is the case is still poorly understood, and Becker's book is a treasure trove for anyone ready to challenge existing dogmas. The general vision guiding Becker can be summarized by a citation from the introduction of the book.
Growth effects include the alteration of bone growth by electromagnetic energy, the restoration of partial limb regeneration in mammals by small direct currents, the inhibition of growth of implanted tumors by currents and fields, the effect upon cephalocaudal axis development in the regenerating flatworm in a polarity-dependent fashion by applied direct currents, and the production of morphological alterations in embryonic development by manipulation of the electrochemical species present in the environment. This partial list illustrates the great variety of known bioelectromagnetic phenomena.
The reported biological effects involve basic functions of living material that are under remarkably precise control by mechanisms which have, to date, escaped description in terms of solution biochemistry. This suggests that bioelectromagnetic phenomena are fundamental attributes of living things - ones that must have been present in the first living things. The traditional approach to biogenesis postulates that life began in an aqueous environment, with the development of complex molecules and their subsequent sequestration from the environment by membranous structures. The solid-state approach proposes an origin in complex crystalline structures that possess such properties as semiconductivity, photoconductivity, and piezoelectricity. All of the reported effects of electromagnetic forces seem to lend support to the latter hypothesis.
Observations relating to CNS
The following more quantitative findings, many of them due to Becker, are of special interest as one tries to understand the role of DC currents in the TGD framework.
1. The CNS and the rest of the perineural tissue (the tissue surrounding neurons, including also glial cells) form a dipole like structure with the neural system at positive potential and the perineural tissue at negative potential. There is also an electric field along the neuron in the direction of nerve pulse propagation (dendrites correspond to - and the axon to +; note that motor nerves and sensory nerves form a closed loop). Also the microtubules within the axon carry an electric field, and these fields are probably closely related by the many-sheeted variants of Gauss's and Faraday's laws, implying that the voltages along two different space-time sheets in contact at two points are the same in a static situation.
2. A longitudinal potential along the front-to-back direction in the brain, with the frontal lobes at negative potential with respect to the occipital lobes and with a magnitude of a few mV, was discovered. The strength of the electric field correlates with the level of consciousness. As the potential becomes weaker and changes sign, consciousness is lost. Libet and Gerard observed traveling waves of potentials across the cortical layers (with speeds of about 6 m/s: the TGD inspired model of nerve pulse predicts this kind of waves). Propagating potentials were discovered also in glial cells. The interpretation was in terms of electrical currents.
3. It was found that brain injury generated a positive polarization, so that the neurons ceased to function in an area much larger than the area of the injury. Negative shifts of the neuronal potentials were associated with incoming sensory stimuli and motor activity, whereas sleep was associated with a positive shift. Very small voltages and currents could modulate the firing of neurons without affecting the resting potential. The "generating" potentials in the sensory receptors inducing the nerve pulse were found to be graded and non-propagating, and the sign of the generating potential correlated with the sensory input (say an increase or reduction of pressure). The standard wisdom about the cell membrane has difficulties in explaining these findings.
4. The natural hypothesis was that these electric fields are accompanied by DC currents. There are several experimental demonstrations of this. For instance, the deflection of the assumed DC currents by an external magnetic field (Hall effect) was shown to lead to a loss of consciousness.
Observations relating to regeneration
The second class of experiments used artificial electric currents to enhance the regeneration of body parts. These currents are nowadays used in clinical practice to induce healing or to retard tumor growth. Note that tissue regeneration is a genuine regeneration of an entire part of the organism rather than mere simple cell replication. Salamander limb regeneration is one of the most studied examples. Spontaneous regeneration becomes rare at higher evolutionary levels, and in humans it occurs spontaneously only in the fractures of long bones.
1. An interesting series of experiments on Planaria, a species of simple flatworm with a primitive nervous system and a simple head-to-tail axis of organization, was carried out. Electrical measurements indicated a simple head-tail dipole field. The animal had remarkable regenerative powers; it could be cut transversely into a number of segments, all of which would regenerate a new total organism. The original head-tail axis was preserved in each regenerate, with the portion nearest the original head end becoming the head of the new organism. The hypothesis was that the original head-tail electrical vector persisted in the cut segments and provided the morphological information for the regenerate. The prediction was that a reversal of the electrical gradient, achieved by exposing the cut surface to an external current source of proper orientation, should produce some reversal of the head-tail gradient in the regenerate. While performing the experiment it was found that as the current levels were increased the first response was to form a head at each end of the regenerating segment. With still further increases in the current the expected reversal of the head-tail gradient did occur, indicating that the electrical gradient which naturally existed in these animals was capable of transmitting morphological information.
2. Tissue regeneration occurs only if some minimum amount of neural tissue is present, suggesting that the CNS plays a role in the process although the usual neural activity is absent. Repeated needling of the stump had a positive effect on regeneration, and the DC current was found to be proportional to the innervation. Hence needling seems to stimulate innervation, or at least to induce the formation of DC currents. Something like this might occur also in the case of acupuncture.
3. Regeneration involves the de-differentiation of cells to form a blastema, from which the regenerated tissue is formed. Quite early it was learned that carcinogens induce de-differentiation of cells because of their steric properties and by making electron transfer possible, and that denervation induces tumor formation. From these findings Becker concluded that the formation of the blastema could be a relatively simple process analogous to tumor growth, whereas the regeneration proper is a complex self-organization process during which control by signals from the CNS is necessary, possibly realized in terms of potential waves.
4. Regeneration is possible in the salamander but not in the frog. This motivated Becker and collaborators to compare these situations. In an amputated leg of both the salamander and the frog the original negative potential of order -1 mV went first to a positive value of order +10 mV. In the frog it returned smoothly to its original value without regeneration. In the salamander it returned during three days to the original base line and then went to a much higher negative value, around -20 mV (the resting potential is around -70 mV), followed by a return to the original value as the regeneration was completed. Thus the large negative potential is necessary for the regeneration and is responsible for the formation of the blastema. Furthermore, an artificial electron current induced regeneration also in the case of the frog, even in the denervated situation. Thus the flow of electrons to the stump is necessary for the formation of the blastema, and the difference between the salamander and the frog is that the frog is not able to provide the needed electronic current although the positive potential is present.
5. It was also learned that the so-called neural epidermic junction (NEJ), formed in the healing process of the salamander stump, was responsible for the regeneration in the presence of innervation. The conclusion was that the DC voltage and the electronic current relevant for regeneration can be assigned to the interface between the CNS and the tissue rather than to the entire nerve, and that regeneration seems to be a local process, perhaps a feed of metabolic energy driving self-organization. Furthermore, the NEJ seems to make possible the flow of electrons from the CNS to the stump.
6. The red blood cells of animals other than mammals are complete and thus possess nuclei. Becker and collaborators observed that also red blood cells de-differentiated to form the blastema. Being normally in a quiescent state, they are ideal for studying de-differentiation. It was found that the electric current acted as a trigger at the level of the cell membrane, inducing de-differentiation reflected as an increased amount of mRNA serving as a signal for gene expression. Also a pulsed magnetic field was found to trigger the de-differentiation, perhaps via the induced electric field. By the way, the role of the cell membrane fits nicely with the view about the DNA-cell membrane system as a topological quantum computer, with the magnetic flux tubes connecting DNA and the cell membrane serving as braids.
7. The experiments of Becker and collaborators support the identification of the charge carriers of the DC currents responsible for the formation of the large negative potential of the stump as electrons. The test was based on the different temperature dependences of electronic and protonic conductivities: electronic conductivity increases with temperature whereas protonic conductivity decreases, and an increase was observed. In the TGD based model also super-conducting charge carriers are possible, and this finding does not tell anything about them.
Gene activation by electrostatic fields?
The basic question concerns the method of activation. The discovery of the chemists Guido Ebner and Guido Schuerch raises the hope that these ideas might be more than over-active imagination, and their work also provides a concrete proposal for the activation mechanism. Ebner and Schuerch studied the effect of electrostatic fields on the growth and morphogenesis of various organisms. Germs, seeds, or eggs were placed between conducting plates creating an electric field in the range .5-2 kV/m: note that the Earth's electric field is in the range .1-4 kV/m and thus of the same order of magnitude.
The outcome was rather surprising, and in the year 1989 their employer Ciba Geigy (now Novartis) applied for a patent "Method of enhanced fish breeding" for what is called the Ciba Geigy effect. The researchers describe how fishes (trouts) develop and grow much better if their eggs have been conditioned in an electrostatic field. The researchers report that also the morphology of the fishes was altered, to what seems to represent an ancient evolutionary form: this was not mentioned in the patent.
The chemists founded their own Institute of Pharmaceutical Research near Basel, where Guido Ebner applied for another, very detailed patent, which was never granted (it is not difficult to guess the reasons why!). In the patent he describes the effect of electrostatic fields on several life forms (cress, wheat, corn, fern, micro-organisms, bacteria) in their early stage of development. A clear change in the morphogenesis was observed. For instance, in one example a fern had all sorts of leaves in a single plant, apparently providing a series of snapshots of the evolution of the plant. The evolutionary age of the first leaf appeared to be about 300 million years, whereas the last grown-up leaf looked close to its recent form.
If one takes these findings seriously, one must consider the possibility that exposure to an electrostatic field can activate passive genes and change the gene expression so that older morphologies are expressed. The activation of not yet existing morphologies is probably more difficult, since strong consistency conditions must be satisfied (the activation of a program requires the activation of the proper hardware). This would suggest that the genome is a kind of archive containing also older genomes, even potential genomes, or that topological quantum computer programs (see this) determine the morphology to some extent and that external conditions such as the electric field determine the self-organization patterns characterizing these programs.
It is known that the developing embryo has an electric field along the head-tail axis and that this field plays an important role in the control of growth. These fields are much weaker than the fields used in the experiments. The p-adic length scale hierarchy however predicts an entire hierarchy of electric fields, and living matter is indeed known to be full of electret structures. The strength of the electric field in some p-adic length scale related to DNA might somehow serve as the selector of the evolutionary age. The recapitulation of phylogeny during ontogeny could mean a gradual shift of the activated part of the memome, perhaps assignable to tqc programs, controlled by the gradually evolving electric field strength.
The finding that led Ebner to his discovery was that it was possible to "wake up" ancient bacteria by an exposure to an electrostatic field. The interpretation would be in terms of the loading of metabolic batteries. This would also suggest that in the case of primitive life forms like bacteria the electric field of the Earth has served as a metabolic energy source, whereas in higher life forms endogenous electric fields have taken over the role of the Earth's electric field.
Thursday, May 10, 2012
A Universe from Nothing
The book A Universe from Nothing: Why There Is Something Rather than Nothing by Lawrence Krauss has stimulated a lot of aggressive debate between Krauss and some philosophers and has of course helped in gaining media attention.
Peter Woit wrote about the debate - not so much about the contents of the book - and regarded the book as boring and dull. He sees this book as an end to the multiverse mania: bad philosophy and bad physics. I tried to get an idea about what Krauss really says but failed: Woit's posting concentrates on the emotional side (the more negative the better;-)), as blog postings must do to maximize the number of readers.
Peter Woit wrote also a second posting about the same theme. It was about Jim Holt's book Why Does the World Exist?: An Existential Detective Story. Peter Woit found the book brilliant, but again it remained unclear to me what Jim Holt really said!
Sean Carroll has a posting about the book telling more about its contents. This posting was much more informative: not just anecdotes and names but an attempt to analyze what is involved.
In the following I will not consider the question "Why There Is Something Rather than Nothing", since I regard it as a pseudo question. The very fact that the question is posed implies that something - the person who poses the question - exists. One could of course define "nothing" as the vacuum state, as physicists might do, but with this definition the meaning of the question changes completely from what it is for philosophers. Instead, I will consider the notion of existence from the physics point of view and try to show what non-trivial implications the attempt to define this notion more precisely has.
What do we mean by "existence"?
The first challenge is to give a meaning to the question "Why There Is Something Rather than Nothing". This process of giving meaning is of course highly subjective, and I will discuss only my own approach. To my opinion the first step is to ask "What is existence?". Is there only a single kind of existence, or does existence come in several flavors? Indeed, several variants of existence seem to be possible: material objects, mathematical structures, theories, conscious experiences, etc. It is difficult to see them as members of the same category of existence.
This question was not asked by Sean Carroll, who equated all kinds of existence with material existence - irrespective of whether they become manifest as a reading on a scale, as mathematical formulas, or via emotional expressions. Carroll did not notice that already this assumption might lead astray. Carroll did the same as most mainstream physicists would do, and I am afraid that also Krauss makes the same error. I dare hope that the philosophers criticizing Krauss have avoided this mistake: at least they made clear what they thought about the depth of the philosophical thinking of the physicists of this century.
Why might Carroll have done something very stupid?
1. The first point is that this vision - known as materialism in philosophy - suffers from serious difficulties. The basic implication is that consciousness is reduced to physical existence. Free will is only an illusion, all our intentions are illusions, ethics is an illusion, and moral rules rely on an illusion. Everything was dictated in the Big Bang, at least in the statistical sense. Perhaps we should think twice before accepting this view.
2. The second point is that one ends up with heavy difficulties in physics itself: quantum measurement theory is the black sheep of physics, and it is not tactful to talk about quantum measurement theory at the coffee table of physicists. The problem is simply that the non-determinism of state function reduction - necessary for the interpretation of experiments in the Copenhagen interpretation - is in conflict with the determinism of the Schrödinger equation. The basic problem does not disappear in other interpretations. How is it possible that the world is both deterministic and non-deterministic at the same time? There seem to be two causalities: could they relate to two different notions of time? Could the times for the Schrödinger equation and for state function reduction be different?
I have just demonstrated that when one speaks about ontology, sooner or later one begins to talk about time. This is unavoidable. As inhabitants of the everyday world we of course know that the experienced time is not the same as the geometric time of physicists. But as professional physicists we have been painfully conditioned to identify these two times. Also Carroll as a physics professor makes this identification - and does not even realize what he is doing - and starts to speak about time evolution as Hamiltonian unitary evolution without a single word about the problems of quantum measurement theory.
With this background I am ready to state what the regular readers of this blog could probably state themselves. In the TGD Universe the notion of existence becomes a much more many-faceted thing than in the usual ultra-naive approach of the materialistic physicist. There are many levels of ontology.
1. The basic division is into "physical"/"objective" existence and conscious existence. Physical states identified as their mathematical representations ("identified" is important!: I will discuss this later) correspond to "objective" existence. Physical states generalize the solutions of the Schrödinger equation: they are not counterparts of time = constant snapshots of time evolutions but counterparts of entire time evolutions. Quantum jumps take place between these, so that state function reduction does not imply a failure of determinism, and one avoids the basic paradox. This however implies that one must assign subjective time to the quantum jumps and geometric time to the counterparts of the evolution of the Schrödinger equation. There are two times.
In this framework the talk about the beginning of the Universe and what was before the Big Bang becomes nonsense. One can speak about boundaries of space-time surfaces, but they have little to do with the beginning and end, which are notions natural in the case of experienced time.
2. One can divide objective existence into two sub-categories: quantum existence (quantum states as mathematical objects) and classical existence having space-time surfaces as its mathematical representation. Classical determinism fails in its standard form but generalizes, and classical physics ceases to be an approximation and becomes an exact part of quantum theory as Bohr orbitology, implied by General Coordinate Invariance alone. We have ended up with tripartism instead of monistic materialism.
3. One can divide geometric existence into sub-existences based on ordinary physics obeying real topology and the various p-adic physics obeying p-adic topology. p-Adic space-time sheets serve as space-time correlates for cognition and intentionality, whereas real space-time sheets are correlates for what we call matter.
4. Zero energy ontology (ZEO) also represents a new element. Physical states are replaced with zero energy states formed by pairs of positive and negative energy states at the boundaries of a causal diamond (CD); they correspond in the standard ontology to physical events formed by pairs of initial and final states. Conservation laws hold true only in the scale characterizing a given CD. Inside a given CD classical conservation laws are exact. This allows one to understand why the failure of classical conservation laws in cosmic scales is consistent with Poincare invariance.
In this framework the Schrödinger equation is only a starting point from which one generalizes. The notion of Hamiltonian evolution, seen by Carroll as something very deep, is not natural in the relativistic context and becomes nonsensical in the p-adic context. Only the initial and final states of the evolution defining the zero energy state are relevant, in accordance with the strong form of holography. The U-matrix, M-matrix and S-matrix become the key notions in ZEO.
5. A very important point is that there is no need to distinguish between physical objects and their mathematical description (as quantum states in a Hilbert space of some sort). A physical object is its mathematical description. This allows one to circumvent the question "But what about theories: do theories also exist physically, or in some other sense?". A quantum state is a theory about a physical state, and the physicist and mathematician exist in quantum jumps between them. Physical worlds define the Platonia of the mathematician, and conscious existence is hopping around in this Platonia: from one zero energy state to a new one. And ZEO allows all possible jumps! Could a physicist or mathematician wish for anything better ;-)!
This list of items shows how dramatically the situation changes when one realizes that the materialistic dogma is just an assumption, one in conflict with what we have known experimentally for almost a century.
Could physical existence be unique?
The identification of physical (or "objective") existence as mathematical existence raises the question whether physics could be unique, following from the requirement that the mathematical description with which it is identified exists. In the finite-dimensional case this is certainly not so: a given finite-D manifold allows an infinite number of different geometries. In the infinite-dimensional case the situation changes dramatically. One possible additional condition is that the physics in question is maximally rich in structure besides existing mathematically! Quantum criticality has been my own phrasing for this principle, and the motivation is that at criticality long range fluctuations set in and the system has a fractal structure and is indeed extremely richly structured.
This does not yet say much about what the basic objects of this possibly existing infinite-dimensional space are. One can however generalize Einstein's "Classical physics as space-time geometry" program to a "Quantum physics as infinite-dimensional geometry of the world of classical worlds (WCW)" program. Classical worlds are identified as space-time surfaces, since also the finite-dimensional classical version of the program must be realized. What is new is "surface": Einstein did not consider space-time as a surface but as an abstract 4-manifold, and this led to the failure of the geometrization program. Sub-manifold geometry is however much richer than manifold geometry and gives excellent hopes for the geometrization of the electro-weak and color interactions besides gravitation.
If one assumes that the basic objects are space-time surfaces of some dimension in some higher-dimensional space, one can ask whether it is possible for WCW to have a geometry. If one requires geometrization of quantum physics, this geometry must be Kähler. This is a highly non-trivial condition. The simplest spaces of this kind are loop spaces, relating closely to string models: their Kähler geometry is unique from the existence of the Riemann connection. This geometry also has maximal possible symmetries, defined by a Kac-Moody algebra, which looks very physical. Mere mathematical existence implies maximal symmetries and a maximally beautiful world!
Loops are 1-dimensional, but for higher-dimensional objects the mathematical constraints are much more stringent, as the divergence difficulties of QFTs have painfully taught us. General Coordinate Invariance emerges as an additional powerful constraint, and symmetries related to conformal symmetry, generalizing from the 2-D case to symmetries of 3-D light-like surfaces, turn out to be the key to the construction. The requirement of maximal symmetry realized by conformal invariance leads to the correct space-time dimension and also dictates that the imbedding space has an M4 × S decomposition, where the light-cone boundaries also possess huge conformal symmetries giving rise to additional infinite-D symmetries.
There are excellent reasons to believe that the WCW geometry is unique. Its existence would be guaranteed by a reduction to generalized number theory: M4 × CP2, forced by standard model symmetries, becomes the unique choice if one requires that the classical number fields are an essential part of the theory. "Physics as infinite-D geometry" and "Physics as Generalized Number Theory" would be the basic principles and would imply consistency with standard model symmetries.
Wednesday, May 09, 2012
Let all flowers flourish in the Garden of Science
Phil Gibbs has a very nice posting with the title Physics on the Fringe: Book Review, talking about the importance of allowing the communication of all ideas in science. In Big Science heavy censorship has been accepted as something completely natural during the last decades, and mediocrity and conformism have become the main virtues of a scientist. I encourage the readers to read the posting. I glue below my own comment describing, by a concrete example, how dangerous it is to censor out bottleneck ideas, and that sooner or later the community must face the truth in any case.
The best argument in favor of allowing all flowers to flourish in the Garden of Science comes from the development of particle physics during the last decades. One cannot speak of a triumph ;-).
We can obtain an energy band structure of a crystal from a DFT calculation in VASP. Each point in each band represents an energy eigenvalue with a corresponding wave function from the Schrödinger equation. I want to choose a specific k-point in the Brillouin zone for a specific band and investigate the symmetry of the corresponding wave function.
Supposedly, I can obtain the wave function from the WAVECAR file. It should yield a number of coefficients which can be used to construct the wave function from the basis functions used in the calculation. However, I do not need the full wave function. I only want to know the symmetry of the function. If my basis were small, I could easily identify the symmetry by inspection, but with hundreds (?) of coefficients this seems difficult.
How can I find the symmetry of the wave function from my VASP calculation? Please, also let me know if it seems like I am asking the wrong question.
• $\begingroup$ +1. I've just made some edits to make the "question" part of your post stand out more, and used a code block for WAVECAR. Also we have a VASP chat room. Please stop by and write "hello" there so that we remember you when there's any announcement or discussion about VASP which may be interesting for VASP users! $\endgroup$ Mar 25 at 15:42
• $\begingroup$ You may take a look at this: andrew.cmu.edu/user/feenstra/wavetrans $\endgroup$
– Jack
Mar 26 at 6:13
It seems you are looking for a package to compute the irreducible representations of electronic states computed by VASP. This has recently been developed:
J. Gao, Q. Wu, C. Persson, and Z. Wang, Irvsp: to obtain irreducible representations of electronic states in the VASP, Comput. Phys. Commun. 261, 107760 (2021). https://doi.org/10.1016/j.cpc.2020.107760
And the code is provided on github: https://github.com/zjwang11/irvsp
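For a quick by-hand orientation before (or alongside) irvsp: the symmetry label follows from the characters $\langle\psi|R|\psi\rangle$ over the operations $R$ of the little group at your k-point. Below is a schematic numpy sketch of that idea (my own illustration, not a VASP or irvsp interface; `psi` must first be reconstructed on a real-space grid from the WAVECAR plane-wave coefficients, e.g. with WaveTrans or pymatgen's `Wavecar` reader, and Bloch phases and nonsymmorphic translations are ignored for simplicity):

```python
import numpy as np

def character(psi, rotation):
    """Estimate <psi|R|psi> / <psi|psi> for a wavefunction sampled on an
    n x n x n grid of fractional coordinates; `rotation` is a 3x3 integer
    matrix acting on those coordinates."""
    n = psi.shape[0]
    i, j, k = np.meshgrid(*(np.arange(n),) * 3, indexing="ij")
    frac = np.stack([i, j, k], axis=-1).reshape(-1, 3) / n
    rotated = (frac @ rotation.T) % 1.0            # rotate, wrap into the cell
    idx = np.rint(rotated * n).astype(int) % n
    psi_rot = psi[idx[:, 0], idx[:, 1], idx[:, 2]].reshape(psi.shape)
    return np.vdot(psi, psi_rot) / np.vdot(psi, psi)

# usage: collect the characters over all little-group operations at k and
# match the resulting tuple against a row of the group's character table.
```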
4. Molecular structure
Presentation Transcript
1. 4. Molecular structure. The Schrödinger Equation for molecules; The Born-Oppenheimer approximation. 4.1. Molecular orbital theory: 4.1.1 The hydrogen molecule-ion H2+; 4.1.2 The structure of diatomic molecules; 4.1.3 Heteronuclear diatomic molecules; 4.1.4 Energy in the LCAO approach. 4.2. Molecular orbitals for polyatomic systems: 4.2.1 The Hückel approximation; 4.2.2 The band theory of solids.
2. The Schrödinger Equation for molecules. All properties of a molecule (= M nuclei + n electrons) can be evaluated if we find the wavefunction Ψ(x1, x2, ..., xn) by solving the Schrödinger equation (SE): HΨ = EΨ, with H = Ttot + Vtot = (TN + Te) + (VeN + Vee + VNN). The total kinetic energy operator of the molecule is composed of a part for the nuclei, TN, and one for the electrons, Te. The total potential energy operator is the sum of the electron/electron (Vee), electron/nucleus (VeN) and nucleus/nucleus (VNN) interactions. Rj are the nuclear coordinates and ri the electronic coordinates; Mj is the mass of nucleus j and mi is the electron mass; Zj is the number of protons in nucleus j and e is the charge of an electron. Solving the SE with the best approximation is the challenge of quantum chemistry (W. Kohn and J. Pople: Nobel Prize in Chemistry 1998). In this chapter, we introduce the vocabulary and the basic principles to understand the electronic structure of molecules.
3. The Born-Oppenheimer approximation. The electrons are much lighter than the nuclei (me/mH ≈ 1/1836), so their motion is much faster than the vibrational and rotational motions of the nuclei within the molecule. A good approximation is to neglect the coupling terms between the motion of the electrons and the nuclei: this is the Born-Oppenheimer approximation. The Schrödinger equation can then be divided into two equations: 1) One describes the motion of the nuclei. The eigenvalues of this nuclear part of the SE give the discrete energy levels of the vibration and rotation of the molecule → see Chap 16: the vibrational and rotational spectroscopies are used to observe transitions between these energy levels. 2) The other one describes the motion of the electrons around the nuclei, whose positions are fixed. This electronic part of the SE is the "electronic Schrödinger equation". The knowledge of the electronic wavefunction is necessary to understand chemical bonding and the electronic and optical properties of matter. In the rest of the chapter, we'll only speak about the electronic wavefunction.
4. The electronic Schrödinger equation. The nuclear coordinate R appears as a parameter in the expression of the electronic wavefunction. An electronic wavefunction ψelect(R, r) and an energy Eelect are associated to each structure of the molecule (set of nuclear coordinates R). For each variation of bond length in the molecule (each new R), we can solve the electronic SE and evaluate the energy that the molecule would have in this structure: the molecular potential energy curve is obtained (see Figure). The molecule is most stable (minimum of energy) for one specific position of the nuclei: the equilibrium position Re. The zero of energy corresponds to the dissociated molecule. The depth of the minimum, De, gives the bond dissociation energy D0, taking into account the fact that the vibrational energy is never zero, but ½ħω: D0 = De − ½ħω.
5. 4.2. Molecular orbital (MO) theory. 4.2.1 The hydrogen molecule-ion H-H+. A. Linear combination of atomic orbitals (LCAO). The electron (at distances rA and rB from nuclei A and B, separated by R) can be found in an atomic orbital (AO) belonging to atom A (i.e. 1s of H) and also in an atomic orbital belonging to B (i.e. 1s of H+). The total wavefunction should be a superposition of the 2 AOs, and it is called a molecular orbital, LCAO-MO. Let's write the atomic orbitals on the two atoms as the letters A and B: ψ± = N(A ± B), where N is the normalization constant and S is the overlap integral, related to the overlap of the 2 AOs.
6. B. Bonding orbitals. The LCAO-MO: ψ+ = {2(1+S)}^(−1/2)(A + B), with A = 1s of H, B = 1s of H+. Probability density: ψ+² = {2(1+S)}^(−1)(A² + B² + 2AB). A² = probability density to find the e- in the atomic orbital A; B² = probability density to find the e- in the atomic orbital B; 2AB = the overlap density, an enhancement of the probability density to find the e- in the internuclear region: the electron accumulates in regions where AOs overlap and interfere constructively, which creates a bonding orbital σ with one electron. C. Antibonding orbitals. ψ− = {2(1−S)}^(−1/2)(A − B). Probability density: ψ−² = {2(1−S)}^(−1)(A² + B² − 2AB). −2AB = reduction of the probability density to find the e- in the internuclear region: there is destructive interference where the two AOs overlap, which creates an antibonding orbital σ* with a nodal plane between the 2 nuclei.
7. D. Energy of the states σ and σ*. H-H+: one electron around 2 protons, with Hψ± = E±ψ±, ψ+ = {2(1+S)}^(−1/2)(A + B) and ψ− = {2(1−S)}^(−1/2)(A − B). The resulting energies E± involve three integrals: j = a measure of the interaction between a nucleus and the electron density centered on the other nucleus; k = a measure of the interaction between a nucleus and the excess probability in the internuclear region; S = a measure of the overlap between the 2 AOs. S decreases when R increases. Note: S = 0 for 2 orthogonal AOs.
8. 4.2.2 Structure of diatomic molecules. Now we use the molecular orbitals (σ = ψ+ and σ* = ψ−) found for the one-electron molecule H-H+ in order to describe many-electron diatomic molecules. A. The hydrogen and helium molecules. H2: 2 electrons; ground-state configuration 1σ², with an increase of electron density between the nuclei. He2: 4 electrons; ground-state configuration 1σ² 2σ*². Since the antibonding orbital is destabilized more than the bonding orbital is stabilized (E− lies further above the AO energy than E+ lies below it), He2 is not stable and does not exist.
9. B. Bond order. n = number of electrons in the bonding orbital; n* = number of electrons in the antibonding orbital. Bond order: b = ½(n − n*). The greater the bond order between atoms of a given pair of elements, the shorter is the bond and the greater is the bond strength. C. Period 2 diatomic molecules. According to molecular orbital theory, σ orbitals are built from all atomic orbitals that have the appropriate symmetry. In homonuclear diatomic molecules of Period 2, that means that the two 2s and the two 2pz orbitals should be used. From these four orbitals, four σ molecular orbitals can be built: 1σ, 2σ*, 3σ, 4σ*. With N atomic orbitals the molecule will have N molecular orbitals, which are combinations of the N atomic orbitals.
10. Dioxygen O2: 12 valence electrons. The two 2px give one πx and one πx*; the two 2py give one πy and one πy*. The last two e- occupy both the πx* and the πy* in order to decrease their repulsion. The most stable state for 2 e- in different orbitals is a triplet state: O2 has total spin S = 1 (paramagnetic). Bond order = 2. Note: the π orbitals together give rise to a cylindrical distribution of charge. Electrons circulating around this torus can create the magnetic effects detected in NMR.
11. B. Hybridization. sp³: h1 = s + px + py + pz; h2 = s − px + py − pz; h3 = s − px − py + pz; h4 = s + px − py − pz. An orthonormal set of hybrid orbitals is created by applying a transformation on the orthonormal hydrogenic orbitals. The sp³, sp² or sp hybrid orbitals are linear combinations of the AOs; they appear as the resulting interference between s and p orbitals. Each hybrid orbital has the same energy and can be occupied by one electron of the promoted atom: CH4 has 4 similar bonds.
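The orthonormality claimed for these hybrids is easy to verify numerically; in the sketch below each hybrid is represented by its coefficient vector over the orthonormal set (s, px, py, pz):

```python
import numpy as np

# rows are h1..h4 from the slide, each normalized by a factor 1/2
h = 0.5 * np.array([[1,  1,  1,  1],
                    [1, -1,  1, -1],
                    [1, -1, -1,  1],
                    [1,  1, -1, -1]])
print(h @ h.T)   # -> 4x4 identity: the sp3 hybrids are orthonormal
```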
12. sp² hybridization. The sp² hybrid orbitals lie in a plane and point towards the corners of an equilateral triangle (angles of 120°); 2pz is not involved in the hybridization, and its axis is perpendicular to the plane of the triangle. h1 = s + 2^(1/2) py; h2 = s + (3/2)^(1/2) px − (1/2)^(1/2) py; h3 = s − (3/2)^(1/2) px − (1/2)^(1/2) py. A hybrid orbital has pronounced directional character because it has enhanced amplitude in the internuclear region, coming from the constructive interference. Consequently, bond formation is accompanied by a high stability gain. In ethene CH2=CH2, the hybrid orbitals of each C atom create the backbone of the molecule via 3 σ bonds (2 C-H and 1 C-C). The remaining 2pz orbitals of the 2 C atoms create a π bond preventing internal rotation.
13. sp hybridization. In ethyne, HC≡CH: formation of 2 σ bonds (with C and H) using the 2 hybrid orbitals h1 = s + pz and h2 = s − pz. The remaining 2px and 2py can form two π bonds between the two carbon atoms. Other possible hybridizations? Note: 'Frozen' transition states: pentavalent carbon et al.; Martin, J. C.; Science, vol. 221, no. 4610; 5 Aug. 1983; p. 509-14. Organic ligands have been designed for the stabilization of specific geometries of compounds of nonmetallic elements. These ligands have made possible the isolation, or direct observation, of large numbers of trigonal bipyramidal organo-nonmetallic species. Many of these species are analogs of transition states for nucleophilic displacement reactions and have been stabilized by the ligands to such a degree that they have become ground-state energy minima. Ideas derived from research on these species have been applied to carbon species to generate a molecule that is an analog of the transition state for the associative nucleophilic displacement reaction. The molecule is a pentavalent carbon species that has been observed by nuclear magnetic resonance spectroscopy. Sketch of the transition-state species during a nucleophilic substitution SN2: CH3X + Nu- → CH3Nu + X-.
14. 4.2.3 Heteronuclear diatomic molecules. A diatomic molecule with different atoms can have a polar bond, a covalent bond in which the electron pair is shared unequally by the 2 atoms. A. Polar bonds. 2 electrons in a molecular orbital composed of one atomic orbital of each atom (A and B): ψ = cA·A + cB·B, where |ci|² = the proportion of the atomic orbital "i" in the bond. The situation of covalent polar bonds lies between 2 limiting cases: 1) the nonpolar bond (e.g. the homonuclear diatomic molecule): |cA|² = |cB|²; 2) the ionic bond in A+B-: |cA|² = 0 and |cB|² = 1. Example: HF. The H1s electron is at higher energy than the F2p orbital. The bond formation is accompanied by a significant partial negative charge transfer from H to F.
15. C. Variation principle. All the properties of a molecule can be found if the wavefunction is known: e.g., the electron density distribution or the partial atomic charges. The wavefunction can be found by solving the Schrödinger equation… but the latter is impossible to solve analytically!! We use the variation principle to get around that problem and find an approximate wavefunction. The variation principle is the basic principle for determining wavefunctions of complicated molecular systems. The idea is to optimize the coefficients cA and cB of the wavefunction ψ = cA·A + cB·B such that the system is the most stable, i.e. the energy is minimal. Variation principle: "If an arbitrary wavefunction is used to calculate the energy, the value calculated is never less than the true energy." At the end of the chapter, we'll try to find the best coefficients ci to give to the trial wavefunction in order to approach the true energy. More sophisticated methods: (i) increase the set of atomic orbitals, called "the basis set", from which the molecular orbital is expressed; (ii) improve the description of the system by using a more correct Hamiltonian H. In short: for ψ(cA, cB) = cA·A + cB·B, Hψ = Eψ gives E ≥ Etrue, and the aim is to find the (cA, cB) that minimize the calculated E.
16. 4.2.4 Energy in the LCAO approach. Writing E = ∫ψHψ dτ / ∫ψ² dτ for ψ = cA·A + cB·B, the numerator contains two kinds of integrals: α = ∫A H A dτ, a Coulomb integral, related to the energy of the e- when it occupies A (α < 0); and β = ∫A H B dτ, a resonance integral, which is zero if the orbitals don't overlap (at Re, β < 0). The denominator contains the overlap integral S = ∫A B dτ.
17. Let's find the "zeros" or roots of the secular polynomial for E versus cA and cB. We want the cA minimizing E, so we impose ∂E/∂cA = 0; we want the cB minimizing E, so we impose ∂E/∂cB = 0. These give the secular equations. In order to have a solution other than the trivial solution cA = cB = 0, the secular determinant must be zero. The 2 roots give the energies of the bonding and antibonding molecular orbitals formed from the AOs.
18. D. Two simple cases. 1) Homonuclear diatomic molecules: ψ = cA·A + cB·B with αA = αB = α. The two roots are (1) E+ = (α + β)/(1 + S), the bonding level, and (2) E− = (α − β)/(1 − S), the antibonding level, with ψbonding = {2(1+S)}^(−1/2)(A + B) and ψantibonding = {2(1−S)}^(−1/2)(A − B).
19. The destabilization Eantibonding = E− − α and the stabilization Ebonding = α − E+. Since 0 < S < 1, Eantibonding > Ebonding: the antibonding level is raised more than the bonding level is lowered. Note 1: He2 has 4 electrons, ground-state configuration 1σ² 2σ*² → He2 is not stable! Note 2: if we neglect the overlap integral (S = 0), Eantibonding = Ebonding = |β|, i.e. E± = α ± β. The resonance integral β is an indicator of the strength of covalent bonds.
20. 2) Heteronuclear diatomic molecules, when we neglect the overlap integral (S = 0): the two roots lie near αA and αB. Limiting case: if (αA − αB) ≫ |β|, then E+ → αA and ψ+ → A: the 2 electrons are localized on one atom; it is the case of a 100% ionic bond. The higher the difference (αA − αB), the more ionic the character of the bond; the smaller (αA − αB), the more covalent the bond. Note: for a complete resolution of the problem, we need to inject the values of E into the secular equations and find the coefficients of the wavefunction.
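As a numerical companion to slides 17-20, the two-orbital secular problem (H − ES)c = 0 can be solved directly as a generalized eigenvalue problem; the parameter values below are arbitrary illustrative numbers, not data from the slides:

```python
import numpy as np
from scipy.linalg import eigh

alpha_A, alpha_B, beta, S = -13.6, -18.6, -3.0, 0.3   # eV, illustrative

H = np.array([[alpha_A, beta],
              [beta,    alpha_B]])
S_mat = np.array([[1.0, S],
                  [S,   1.0]])

E, C = eigh(H, S_mat)          # solves H c = E S c
for energy, coeffs in zip(E, C.T):
    print(f"E = {energy:+.2f} eV, (cA, cB) = {np.round(coeffs, 3)}")
```

Setting alpha_A = alpha_B reproduces the homonuclear results of slide 18, and increasing |alpha_A − alpha_B| relative to |beta| localizes the lower orbital on one atom, as stated above.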
21. 4.3. Molecular orbitals for polyatomic systems. 4.3.1 The Hückel approximation. Here we investigate conjugated molecules, in which there is an alternation of single and double bonds along a chain of carbon atoms. In the Hückel approach, the π orbitals are treated separately from the σ orbitals; the latter form a rigid framework that determines the general shape of the molecule. All C atoms are considered similar → only one type of Coulomb integral α for the C2p atomic orbitals involved in the π molecular orbitals spread over the molecule. A. The secular determinant. The π molecular orbitals are expressed as linear combinations (LCAO) of C2pz atomic orbitals, which are perpendicular to the molecular plane. Ethene, CH2=CH2: ψ = cA·A + cB·B, where A and B are the C2pz orbitals of the two carbon atoms. Butadiene, CH2=CH-CH=CH2: ψ = cA·A + cB·B + cC·C + cD·D. The coefficients can be optimized by the same procedure described before: express the total energy E as a function of the ci, then minimize E with respect to those coefficients; inject the energy solutions into the secular equations and extract the coefficients minimizing E.
22. Following this method, and since αA = αB = α, we obtain the secular determinants for ethene and butadiene, built on the C2pz orbitals of atoms i, i+1, i+2, … Hückel approximation: 1) all overlap integrals Sij = 0 (i ≠ j); 2) all resonance integrals between non-neighbours are zero, βi,i+n = 0 for n ≥ 2; 3) all resonance integrals between neighbours are equal, βi,i+1 = βi+1,i+2 = β. A severe approximation, but it allows us to calculate the general picture of the molecular orbital energy levels.
23. B. Ethene and frontier orbitals. Within the Hückel approximation, the secular determinant gives E− = α − β, the energy of the Lowest Unoccupied Molecular Orbital (LUMO = 2π*), and E+ = α + β, the energy of the Highest Occupied Molecular Orbital (HOMO = 1π); they are separated by 2|β|. HOMO and LUMO are the frontier orbitals of a molecule; those are important orbitals because they are largely responsible for many chemical and optical properties of the molecule. Note: the energy needed to excite the molecule electronically, from the ground state 1π² to the first excited state 1π¹ 2π*¹, is provided roughly by 2|β| (β is often around −0.8 eV) → Chap 17.
24. Reaction scheme: ethene can be converted into ethyne by dehydrogenation over a nickel catalyst.
25. C. Butadiene and delocalization energy. The secular determinant is a 4th-order polynomial with 4 roots E1 < E2 < E3 < E4. There is 1 e- in each 2pz orbital of the four carbon atoms → 4 electrons to accommodate in the 4 π-type molecular orbitals → the ground-state configuration is 1π² 2π². The greater the number of internuclear nodes, the higher the energy of the orbital: 1π has 0 nodes (E1), 2π has 1 node (E2, HOMO), 3π* has 2 nodes (E3, LUMO), 4π* has 3 nodes (E4). Butadiene C4H6: total π-electron binding energy Eπ = 2E1 + 2E2 = 4α + 4.48β, with two π bonds. Ethene C2H4: Eπ = 2α + 2β with one π bond; two ethene molecules give Eπ = 4α + 4β for two separated π bonds. The energy of the butadiene molecule with two π bonds thus lies lower by 0.48β (−36 kJ/mol) than the sum of two individual π bonds: this extra stabilization of a conjugated system is called the "delocalization energy".
26. D. Benzene and aromatic stability. Each C has: 3 electrons in sp² hybrid orbitals → 3 σ bonds per C; 1 electron in 2pz → one π bond per C. 6 2pz electrons to accommodate in the 6 π molecular orbitals → the ground-state configuration is 1π² 2π² 3π². The orbital energies are E1 = α + 2β (1π), E2,3 = α + β (2π, 3π, degenerate), E4,5 = α − β (4π*, 5π*, degenerate) and E6 = α − 2β (6π*). Benzene C6H6: total π-electron binding energy Eπ = 2E1 + 4E2 = 6α + 8β, with three π bonds. Three ethene molecules give Eπ = 6α + 6β for 3 separated π bonds. The delocalization energy is 2β (−150 kJ/mol).
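The Hückel numbers quoted for butadiene and benzene are easy to reproduce; the sketch below uses the illustrative choice α = 0 and β = −1, so the printed energies read as α + x·β:

```python
import numpy as np

def huckel_energies(n_atoms, cyclic=False):
    """Eigenvalues of the Hückel matrix for a chain or ring of C2pz orbitals."""
    H = np.zeros((n_atoms, n_atoms))
    for i in range(n_atoms - 1):
        H[i, i + 1] = H[i + 1, i] = -1.0   # beta between neighbours
    if cyclic:
        H[0, -1] = H[-1, 0] = -1.0
    return np.sort(np.linalg.eigvalsh(H))

print(huckel_energies(4))               # butadiene: alpha +- 1.618 beta, alpha +- 0.618 beta
print(huckel_energies(6, cyclic=True))  # benzene: alpha+2b, alpha+b (x2), alpha-b (x2), alpha-2b
```

Filling the two lowest butadiene levels gives Eπ = 4α + 4.47β (the 4.48β of slide 25, to rounding), and the benzene levels give Eπ = 6α + 8β as on slide 26.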
27. Benzene C6H6 is more stable than hexatriene. Both molecules have 3 π bonds, but the cyclic structure of benzene stabilizes the π electrons even more. The symmetry of benzene creates two degenerate π orbitals (2π and 3π); when they are occupied, this is a more stable situation than the 1π² 2π² 3π² configuration of hexatriene → aromatic stability.
28. E. Is it possible to extend the conjugated chain? What happens to the electronic structure? We are used to the great impact scientific discoveries have on our ways of thinking. The Nobel Prize in Chemistry 2000 is no exception. What we have been taught about plastic is that it is a good insulator - otherwise we should not use it as insulation in electric wires. But now the time has come when we have to change our views. Plastic can indeed, under certain circumstances, be made to behave very like a metal - a discovery for which Alan J. Heeger, Alan G. MacDiarmid and Hideki Shirakawa received the Nobel Prize in Chemistry 2000. How can plastic become conductive? Plastics are polymers, molecules that form long chains, repeating themselves like pearls in a necklace. In becoming electrically conductive, a polymer has to imitate a metal; that is, its electrons need to be free to move and not bound to the atoms. The first condition for this is that the polymer consists of alternating single and double bonds, called conjugated double bonds. Polyacetylene, prepared through polymerization of the hydrocarbon acetylene, is the prototype. However, it is not enough to have conjugated double bonds. To become electrically conductive, the plastic has to be disturbed - either by removing electrons from (oxidation), or inserting them into (reduction), the material. The process is known as doping.
29. 4.3.2 The band theory of solids. Solids are composed of a 3-dimensional array of atoms bound to each other via their valence atomic orbitals. The resulting combinations of these atomic orbitals, involved in the bonds forming this 3-D array, are orbitals spread over the whole solid. Solids can be classified with respect to the behavior of their electrical conductivity σ vs. temperature T: a metallic conductor: σ decreases as T is raised; a semiconductor: σ increases as T is raised (doping increases σ further). Note that an insulator appears as a semiconductor with very low conductivity. Note: σ is measured in S·m⁻¹, where 1 S = 1 Ω⁻¹.
30. A. Formation of bands. A simple model for a solid: the one-dimensional solid, which consists of a single, infinitely long line of atoms, each one having one s orbital available for forming molecular orbitals (MOs). When the chain is extended: the range of energies covered by the MOs spreads, and this range of energies is filled in with more and more orbitals. For a chain of N → ∞ atoms, the width of the range of energies of the MOs stays finite (it tends to 4|β|), while the number of molecular orbitals becomes infinite: this is called an "s" band.
31. For a chain of N atoms, i.e. for N atomic 1s orbitals, the chain has N molecular orbitals. The energies of these MOs can be found by solving the secular determinant. Each molecular orbital is characterized by an energy Ek labeled by k: k = 1 is the lowest-energy, fully bonding orbital, and k = N is the highest-energy, fully antibonding orbital. For N → ∞, Ek − Ek+1 → 0, but the "s" band still has a finite width: EN − E1 → 4|β|. When the overlap of "p" atomic orbitals is taken into account, a "p" band is formed. There may be a "band gap": a range of energies to which no orbitals correspond.
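The closed-form solution of this secular problem is Ek = α + 2β cos(kπ/(N+1)), k = 1 … N; the snippet below (again with the illustrative α = 0, β = −1) shows the level spacing collapsing while the total width tends to 4|β|:

```python
import numpy as np

alpha, beta = 0.0, -1.0
for N in (2, 10, 100, 10_000):
    k = np.arange(1, N + 1)
    E = alpha + 2 * beta * np.cos(k * np.pi / (N + 1))
    print(f"N = {N:6d}: band width = {E.max() - E.min():.4f} (in |beta| units)")
```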
32. B. Influence of the temperature. If each atom of the solid (composed of N atoms) contributes one electron, and since 2 electrons can occupy one molecular orbital, only half of the N molecular orbitals are filled at T = 0. The HOMO is called the Fermi level. The HOMO-LUMO energy difference is zero → this is a metal. Since the energy between the HOMO and the LUMO is zero, just a little energy can promote an electron into an unoccupied level. Under an electrical potential difference, some electrons are therefore very mobile and give rise to electrical conductivity. The excitation energy can be provided via an increase of temperature. The population of the orbitals is given by the Fermi-Dirac distribution f(E) = 1/(exp[(E − μ)/kT] + 1), where μ is the electron chemical potential, equal to EF for metals at T = 0. When T increases, more electrons are excited towards empty orbitals… however the conductivity decreases, because the vibration of the nuclei increases → more collisions between the transported electrons and the nuclei → less efficient transport.
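A quick numerical illustration of this Fermi-Dirac population (energies relative to μ and temperatures kT in eV; the values are illustrative):

```python
import numpy as np

E = np.linspace(-1.0, 1.0, 9)     # energies relative to mu, in eV
for kT in (0.01, 0.05, 0.2):      # roughly 116 K, 580 K, 2300 K
    f = 1.0 / (np.exp(E / kT) + 1.0)
    print(f"kT = {kT:4.2f} eV:", np.round(f, 3))
```

At low kT the occupation is a near step function at μ; raising kT smears it, populating levels above μ.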
33. When the HOMO-LUMO energy difference is non-zero, there is an electronic gap. This is the case for semiconductors. If the band gap is small, thermal excitations can promote electrons to unoccupied levels; consequently, those electrons can participate in the electrical conductivity. That is why the conductivity of a semiconductor increases with temperature. Insulators are characterized by a huge band gap → the electrons cannot reach the unoccupied levels: no conductivity.
Discussion 12 – Hydrogenic Atom: Radial Wavefunction
In Discussion 11 you separated the wavefunction and Schrödinger equation for any central potential V(r) into a radial part R(r) and an angular part Y(θ,φ). You solved the angular part; that gave you the spherical harmonics Y_l^m(θ,φ). In Homework 11, you solve the radial equation for the simple harmonic oscillator. Here, we will
solve the radial equation for a very important system indeed: a hydrogenic atom, namely an atom with a single electron of charge e and a nucleus of charge Ze. The central potential seen by the electron is
V(r) = −Ze²/(4πε₀ r)
in SI units. At right is the same strategy box as on homework; it is pretty much universal for solving the radial part of the spherically-separated Schrödinger equation. It greatly resembles the method you used to obtain the energy eigenfunctions of a harmonic oscillator in a Cartesian coordinate, but there are two important differences when the radial coordinate r is the independent variable. The differences are highlighted in red.
Problem 1 : Separation of Variables & Step 1 Checkpoints 1
Our goal is, as always, to “solve the Schrödinger equation”, i.e. to find the eigenstates of the Hamiltonian, which are the energy eigenstates of the system. Last week you made huge progress: you found that for a central potential V(r),
Ĥ = p̂²/(2m) + V(r) = −(ℏ²/2m) ∇² + V(r) = −(ℏ²/2m) (1/r²) ∂/∂r ( r² ∂/∂r ) + L̂²/(2mr²) + V(r)
(a) Your separated form ψ(r⃗) = R(r) Y(θ,φ) led to a class of solutions Y_l^m(θ,φ) for the angular part that are eigenfunctions of both L² and Lz, with eigenvalues ℏ²l(l+1) and ℏm respectively. Plug this info into the SE, Ĥ R(r) Y_l^m(θ,φ) = E R(r) Y_l^m(θ,φ), to obtain the radial equation for R(r).
(b) The new element in step 1 of the strategy box is to switch from R(r) to u(r) ≡ r R(r). (This reduces the number of terms and makes the resulting equation more similar in form to the 1D SE.) It’s just algebra:
in terms of u(r) ≡ r R(r), the radial SE is

−(ℏ²/2m) u″ + [ V(r) + (ℏ²/2m) l(l+1)/r² ] u = E u
Next, we switch to dimensionless variables as much as possible. This is still step 1 and will enormously
Radial SE : Strategy Box
1. Use dimensionless quantities to simplify the equation to solve (SE), and switch to u(r) ≡ r R(r).
2. Find the asymptotic behaviour of solutions as r → ∞ and r → 0 to ensure normalizability.
3. Guess ψ = asymptotic behaviour × power series … & plug into SE.
4. Terminate the power series to again ensure normalizability.
Checkpoints 1:
(a) d/dr( r² dR/dr ) − (2mr²/ℏ²)[V(r) − E] R = l(l+1) R
(b) remember: ℏ has units of angular momentum … answer: 1/distance².
(c) (d²u/dr²) r² = u [ −(2mE/ℏ²) r² + l(l+1) − (Ze²/4πε₀)(2mr/ℏ²) ]
(d) Hint: think of the force and/or potential energy between two charges … answer: energy · distance
(e) energy · distance
(f) 197 eV·nm
(g,h) checked by later parts
(i) λ = Zα √(−2mc²/E)
(j) 0.53 × 10⁻¹⁰ m
simplify our work. It seems clear that we should multiply the radial SE by −2m/ℏ². That will give 2mE/ℏ² on the right-hand side. What are the units of 2mE/ℏ²?
(c) To make all the coefficients in front of u(r) dimensionless, we should therefore multiply the entire radial SE by −2m/ℏ² × distance² … so by −2mr²/ℏ². Multiply the radial SE in the box by −2mr²/ℏ² and rearrange the terms a bit so that the term with u″ is on its own on the left-hand side.
(d) Next let's work on the potential the electron sees from the nucleus, V(r) = −Ze²/(4πε₀ r). First, here are some REALLY BIG THINGS TO KNOW. What are the units of e²/4πε₀? Tactic: think of a familiar formula (look it up …) that is close to the combination you are analyzing; that is usually the fastest way to figure out the units of a term with a quantity like ε₀ in it that has highly non-trivial units.
(e) What are the units of the EXTREMELY USEFUL combination ℏc?
(f) Calculate ℏc in units of eV·nm, where 1 eV = 1.6 × 10⁻¹⁹ J of energy and 1 nm = 10⁻⁹ m of distance. Totally equivalent units are MeV·fm, where 1 MeV = 10⁶ eV and 1 fm = 10⁻⁶ nm.
(g) 197 is so close to 200 that EVERYONE in nuclear / particle physics knows that ℏc ≈ 200 MeV·fm, and EVERYONE in atomic / optical physics knows that ℏc ≈ 200 eV·nm. This is accurate to 1.5%, perfect! Super! OK, now take the ratio of the combinations in parts (d) and (e). This ratio is universally called α: α ≡ (e²/4πε₀)/(ℏc) = e²/(4πε₀ ℏc). It is dimensionless by construction, so it is a dimensionless measure of the strength of the electromagnetic interaction. It is often called the electromagnetic coupling constant. Using some consistent set of units, calculate the inverse of this number, 1/α.
(h) α = 1/137 to 4 significant digits! This is also a BIG THING TO KNOW. The particle whose wavefunction we are calculating is an atomic electron. Its mass m appears in our equations. Well, everyone in atomic or subatomic physics knows not the mass m of elementary particles exactly, but instead their rest energy mc². That comes out in units of energy, and for atomic or subatomic particles, the perfect energy unit is the electron-volt, 1 eV = 1.6 × 10⁻¹⁹ J. In atomic physics, the electron mass is universally known as mc² = 0.5 MeV, which is another BIG THING TO KNOW. Now back to the radial equation. We found the dimensionless combination 2mEr²/ℏ² in an earlier part, so let's introduce variables to exploit that:
K ≡ √(−2mE)/ℏ = √(−2mc²E)/(ℏc) has units of 1/distance, ∴ ρ ≡ Kr is dimensionless. ρ ≡ Kr will serve as our dimensionless distance. From part (c), our radial equation is:
(d²u/dr²) r² = u [ −(2mE/ℏ²) r² + l(l+1) − (Ze²/4πε₀)(2mr/ℏ²) ]
Rewrite this, replacing all incidences of r with ρ/K, so that we are solving for u(ρ) now instead of u(r), and so that u″ now means d²u/dρ² instead of d²u/dr².
(i) To the right of the obviously dimensionless term l(l+1) is the electric potential term. It should now look like (some dimensionless coefficient) · ρ. What is this coefficient? We'll henceforth label it λ.
CHECKPOINT: At this point your radial SE should have this form:

u″(ρ) = u(ρ) [ 1 − λ/ρ + l(l+1)/ρ² ],  where λ ≡ Zα √(−2mc²/E)
(j) There's one more important quantity to introduce: the Bohr radius, a₀ = ℏc/(α m_e c²). Calculate its value
using the fabulous numbers from the boxes on the previous page. It will turn out to be the average radius of the hydrogen ground state (in the somewhat unusual manner shown below).
That was the last BIG THING TO KNOW, i.e. the last of the numerical quantities that every physicist knows by heart (at least, those related to atoms).
a₀ = ℏc/(α m_e c²) = 0.5 Å = Bohr radius; we will find that the hydrogen ground state has ⟨1/r⟩_ground = 1/a₀.
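As a check of checkpoint (j), the three memorized constants give a₀ in one line (using the standard values rather than the rounded ℏc ≈ 200 eV·nm):

```python
hbar_c = 197.327            # eV nm
alpha = 1 / 137.036         # fine-structure constant
me_c2 = 0.511e6             # electron rest energy, eV

a0 = hbar_c / (alpha * me_c2)
print(f"a0 = {a0:.4f} nm = {a0 * 10:.3f} Angstrom")   # ~0.0529 nm = 0.529 A
```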
Problem 2 : Step 2 = Asymptotic Behaviour Checkpoints 2
Next step: find the asymptotic behaviour of u(ρ) . As you see in the strategy box, you have to consider not only the behaviour as ρ = K r→∞ but also the behaviour as ρ → 0 . The spherical coordinate system has “coordinate singularities” at the origin r = 0 and at the poles θ = 0 and π. We must always check these spots for unphysical behaviour like functions going to ∞ (which a physical wavefunction cannot do!)
(a) From the radial equation in the box at the top of the page, take the approximation ρ → ∞ and see what physically-reasonable asymptotic solution u∞(ρ) you obtain. REMEMBER from class: the asymptotic solution is an approximate solution to an approximate equation, which takes a bit of getting used to.
(b) Now do the same for the limit ρ → 0 . What physically reasonable asymptotic solution u0 (ρ) do you obtain in this region?
Problem 3 : Step 3 = Power Series Solution Checkpoints 3
Now that we have the behaviour of u(ρ) at large and small ρ , we can assume that the remaining behaviour in the “middle” region of finite ρ is a well-behaved function that we will call h(ρ) . Our proposed solution form is then u(ρ) = u∞(ρ) u0 (ρ) h(ρ) . We will try a power-series solution for h(ρ) – the polynomial method :
u(ρ) = e^(−ρ) ρ^(l+1) h(ρ), where h(ρ) = Σ_j a_j ρ^j. We plug this u(ρ) back into the radial SE and, after some tedious and completely uninstructive algebra, we get an equation for h(ρ):
ρ h″ + 2(l + 1 − ρ) h′ + [λ − 2(l + 1)] h = 0
Using this equation, find the recursion relation for the coefficients a_j in the power series.
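For checking your work (this is the result the worksheet asks you to derive, quoted here under the standard treatment): the recursion is a_{j+1} = [2(j + l + 1) − λ] / [(j + 1)(j + 2l + 2)] a_j, and terminating the series (step 4) forces λ = 2n, which with λ = Zα√(−2mc²/E) gives the Bohr energies E_n = −(Zα)² mc²/(2n²):

```python
alpha = 1 / 137.036
me_c2 = 0.511e6    # eV
Z = 1

for n in (1, 2, 3):
    E_n = -(Z * alpha) ** 2 * me_c2 / (2 * n ** 2)
    print(f"n = {n}: E = {E_n:8.3f} eV")   # -13.6, -3.40, -1.51 eV
```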
Quantum to classical?
The superposition principle is a hallmark of quantum theory which emerges from one of the most fundamental equations of quantum mechanics, the Schrödinger equation. It describes particles in the framework of wave functions, which, much like water waves on the surface of a pond, can exhibit interference effects. But in contrast to water waves, which are a collective behavior of many interacting water molecules, quantum waves can also be associated with isolated single particles.
Perhaps the most elegant example of the wave nature of particles is the double-slit experiment, in which a particle’s wave function simultaneously passes through two slits and interferes. This effect has been demonstrated for photons, electrons, neutrons, atoms and even molecules, and it raises a question that physicists and philosophers have struggled with since the earliest days of quantum mechanics: how do these strange quantum effects transition into the classical world with which we are all familiar?
Experimental approach
Alternative quantum models and macroscopicity
A generalized measure called macroscopicity is used to classify just how well alternative models are ruled out by such experiments, and the experiments of Fein et al. published in Nature Physics indeed represent an order of magnitude increase in macroscopicity. “Our experiments show that quantum mechanics, with all its weirdness, is also amazingly robust, and I’m optimistic that future experiments will test it on an even more massive scale,” says Fein. The line between quantum and classical is getting blurrier all the time.
Publication in Nature Physics
“Quantum superposition of molecules beyond 25 kDa”, Y. Y. Fein, P. Geyer, P. Zwick, F. Kiałka, S. Pedalino, M. Mayor, S. Gerlich, and M. Arndt, Nat. Phys. (2019). doi: 10.1038/s41567-019-0663-9
What is the physical reason for the Schrödinger equation to be linear, even though many interactions and dynamics in physics are found to be nonlinear?
2 Answers
It should be understood that physics - at least in its current form - does not provide answers to "Why these laws?" questions. It can only describe an emergent law in terms of a deeper and more fundamental one. Quantum theory is so far the most fundamental framework we have, so there is no more fundamental "reason" for its structure, aside from finding links between various properties of the theory.
The linearity of the Schrödinger equation is a consequence of the more general superposition principle. This principle states that causes add up linearly towards effects and it is postulated.
But what led us to this postulate? Experimental observations - wave effects such as interference, and certain experiments with the spin/polarization of particles. See e.g. the double-slit experiment for interference and Malus's law for polarized light - even though you pass a beam of perfectly polarized photons through a polarizer at a different angle, they can be "linearly decomposed" into photons of different polarizations, and a part of them passes through. I.e., the photons that pass through will be polarized in accord with the orientation of the polarizer, and this process can be fully understood just through the linearity of quantum-mechanical states.
However, the postulation of linearity was only a consequence of knowing mainly linear wave equations. These effects are conceivable in a theory with slight non-linearities and, indeed, such theories have been proposed. This article briefly reviews the proposals and their experimental tests, which showed that the proposed nonlinearities are below the scope of detection.
The linked article also provides a "proof" of linearity of evolution of quantum mechanics under some reasonable assumptions. But I would understand it more as a proof of a deeper connection between the usual structure of operators and linear state spaces with the general linearity of quantum mechanical evolution. I.e., the article shows that we would have to switch to a different framework, without states $|\psi\rangle$, linear hermitian operators and their usual interpretation, to include non-linearity in quantum mechanics.
So the conclusion is - it seems that linearity of the quantum-mechanical evolution (aka Schrödinger equation) is a vital part of the structure of the theory. Nevertheless, we can never justify the linearity completely, the main reason for it is "it just works". But that does not preclude the possibility of a paradigm shift including the introduction of non-linearity.
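A small numerical illustration of this linearity (my own sketch, not from the linked article): evolving two wavepackets and their sum with the same linear propagator gives the same result either way, to machine precision.

```python
import numpy as np

N, L, dt, hbar, m = 512, 100.0, 0.1, 1.0, 1.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

def evolve(psi, steps=100):
    """Free-particle evolution: exact in k-space for V = 0."""
    phase = np.exp(-1j * hbar * k ** 2 * dt / (2 * m))
    for _ in range(steps):
        psi = np.fft.ifft(phase * np.fft.fft(psi))
    return psi

packet = lambda x0, k0: np.exp(-(x - x0) ** 2 / 4) * np.exp(1j * k0 * x)
psi1, psi2 = packet(-10, 1.0), packet(10, -1.0)

diff = evolve(psi1 + psi2) - (evolve(psi1) + evolve(psi2))
print(f"max |U(psi1+psi2) - (U psi1 + U psi2)| = {np.max(np.abs(diff)):.2e}")
```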
• $\begingroup$ Could you please fix the reference to the paper you mention? That's the last link in the answer. It seems nobody followed it in all those years, since it goes to the wiki page on polarizer (the same as the link before). $\endgroup$ Sep 17, 2018 at 14:21
It is better seen in the Heisenberg representation. Physical quantities (observables) are represented by Hermitian linear operators. The equation of motion is then (for a non-relativistic massive particle):
$$ m \dfrac{d^2\hat X(t)}{dt^2} = - \dfrac{\partial V(\hat X)}{\partial \hat X}(t) \tag{1}$$
with the quantization condition:
$[\hat X(t),m \dfrac{d\hat X(t')}{dt}]_{|t=t'} =i \hbar$
The equation $(1)$ is an equation between operators, so we have :
$$\forall |\psi\rangle, \quad m \dfrac{d^2\hat X(t)}{dt^2} |\psi\rangle = - \dfrac{\partial V(\hat X)}{\partial \hat X}(t) |\psi\rangle\tag{2}$$
Here $|\psi\rangle$ is a constant state (not depending on time).
Equation $(2)$ clearly arises because, in Quantum mechanics, one is using linear operators.
Now, this does not mean that equation $(1)$ is a linear equation with respect to the position operator $\hat X(t)$. This is generally not the case, except for very particular cases (free particle, harmonic oscillator).
We may also use an energy integral equation which is also an equation between operators :
$$ \frac{m}{2} \dot {\hat X(t)}^2 + V(\hat X)(t) = E \tag{3}$$
where $E$ is a constant matrix (not depending on time). We then have:
$$\forall |\psi\rangle, \quad \frac{m}{2} \dot {\hat X(t)}^2|\psi\rangle + V(\hat X)(t)|\psi\rangle = E |\psi\rangle\tag{4}$$
As before, this equation is "linear" in $|\psi\rangle$, but does not correspond to a linear equation of motion for $\hat X(t)$, except if the potential is zero or at most quadratic in $\hat X$.
Theoretical Chemistry
Our main research topics are Quantum Monte Carlo methods and Chemical Bonding. Quantum Monte Carlo (QMC) is a stochastic technique for solving the Schrödinger equation. We apply QMC methods mainly to the electronic Schrödinger equation and calculate energy differences such as reaction energies, activation barriers, and the gaps between different states of the same molecule.
The diffusion quantum Monte Carlo variant, DMC, is one of the most accurate methods for electronic structure calculations, with the additional advantage of allowing for highly parallel computer code. This is a most important feature, as computer hardware has developed, and will keep developing for at least another decade, towards systems with very many cores. QMC codes are highly efficient on massively parallel computers with many thousands of cores.
Due to its stochastic nature, QMC allows one to employ highly accurate but compact many-electron wave functions. Through the analysis of our compact wave functions, new ways of insight into chemical bonding become possible, in particular into the many-electron nature of the wave function, which is ignored in the typical orbital analysis because orbitals are one-electron functions.
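To give a flavour of the variational variant of QMC (a toy illustration only, not the group's amolqc code): for the hydrogen atom with trial wavefunction ψ = e^(−ar), Metropolis sampling of |ψ|² and averaging of the local energy E_L = −a²/2 + (a−1)/r recovers the exact E = −0.5 Ha with zero variance at a = 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_energy(r, a):
    # E_L = (H psi)/psi for psi = exp(-a r), V = -1/r (atomic units)
    return -0.5 * a ** 2 + (a - 1.0) / r

def vmc(a, n_steps=100_000, step=0.5):
    x = np.array([0.5, 0.0, 0.0])
    energies = []
    for i in range(n_steps):
        x_new = x + rng.uniform(-step, step, 3)
        r, r_new = np.linalg.norm(x), np.linalg.norm(x_new)
        if rng.random() < np.exp(-2.0 * a * (r_new - r)):  # |psi_new/psi_old|^2
            x, r = x_new, r_new
        if i > 1000:                                        # skip equilibration
            energies.append(local_energy(r, a))
    return np.mean(energies), np.var(energies)

for a in (0.8, 1.0, 1.2):
    E, var = vmc(a)
    print(f"a = {a:.1f}: E = {E:+.4f} Ha, Var(E_L) = {var:.4f}")
```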
Our group has been developing its own QMC programme called „amolqc“ for more than a decade. The code features strong multideterminant and optimisation capabilities.
A. Lüchow, Quantum Monte Carlo methods, Wiley Interdisciplinary Reviews, Comp. Mol. Sci., Vol. 1, 388-402, 2011, DOI: 10.1002/wcms.40
A. Lüchow, Maxima of |Ψ|²: A connection between quantum mechanics and Lewis structures, J. Comput. Chem., 35, 854 – 864, 2014, DOI: 10.1002/jcc.23561
Hermitian Analyticity, IR/UV Mixing and Unitarity of Noncommutative Field Theories
Chong-Sun Chu, Jerzy Lukierski 1 and Wojtek J. Zakrzewski
Centre for Particle Theory, Department of Mathematical Sciences, University of Durham, Durham, DH1 3LE, UK
Institute for Theoretical Physics, University of Wroclaw, pl. M. Borna 9, 50-205 Wroclaw, Poland
¹ Supported by KBN grant 5P03B05620
The IR/UV mixing and the violation of unitarity are two of the most intriguing aspects of noncommutative quantum field theories. In this paper the relation between these two phenomena is explained and established in an explicit form. We start out by showing that the S-matrix of noncommutative field theories is hermitian analytic. As a consequence, a noncommutative field theory is unitary if the discontinuities of its Feynman diagram amplitudes agree with the expressions calculated using the Cutkosky formulae. These unitarity constraints relate the discontinuities of amplitudes with physical intermediate states, and allow us to see how the IR/UV mixing may lead to a breakdown of unitarity. Specifically, we show that the IR/UV singularity does not lead to the violation of unitarity in the space-space noncommutative case, but it does lead to its violation in a space-time noncommutative field theory. As a corollary, a noncommutative field theory without IR/UV mixing will be unitary in both the space-space and space-time noncommutative cases. To illustrate this, we introduce and analyse the noncommutative Lee model - an exactly solvable quantum field theory. We show that the model is free from the IR/UV mixing in both the space-space and space-time noncommutative cases. Our analysis is exact. Due to the absence of the IR/UV mixing one can expect that the theory is unitary. We present some checks supporting this claim. Our analysis provides a counterexample to the generally held belief that field theories with space-time noncommutativity are non-unitary.
Non-Commutative Geometry, Unitarity, Analyticity, S-Matrix
preprint: hep-th/0201144
1 Introduction
Recently there has been a lot of activity in constructing and understanding field theories on noncommutative spacetime (see e.g. [2, 4]). There are many reasons why such approaches are of interest, most of them related to the desire to take into consideration quantum gravity effects and to understand the nature of spacetime at very short distances (see e.g. [6, 8]). Some of the most recently considered noncommutative geometries are the noncommutative Minkowski space [9, 10, 11, 12, 13], the fuzzy sphere [14, 15, 16], and the κ-Minkowski spacetime [17, 18, 19]. The algebra of functions on noncommutative Minkowski space is generated by noncommutative space–time coordinates x̂^μ obeying the commutation relations (μ, ν = 0, 1, 2, 3)

[x̂^μ, x̂^ν] = iθ^{μν},   (1.1)
where θ^{μν} is an anti-symmetric constant matrix. The fuzzy sphere is generated by Hermitian operators x̂_i satisfying the defining relations (i, j, k = 1, 2, 3)

[x̂_i, x̂_j] = iλ ε_{ijk} x̂_k.   (1.2)
Here the noncommutativity parameter λ has the dimension of length and should be taken positive. The radius R of the fuzzy sphere is quantized, in units of λ, by

R = λ √(j(j+1)),  j = 0, 1/2, 1, …   (1.3)
The κ-Minkowski spacetime is defined by the basic relations between the three commuting space coordinates x̂_i (i = 1, 2, 3) and a noncommutative quantum time variable t̂:

[t̂, x̂_i] = (i/κ) x̂_i,  [x̂_i, x̂_j] = 0.   (1.4)
In this paper we consider the case of the noncommutative Minkowski space (1.1). This topic has been studied extensively (for a recent review, see e.g. [2, 4] and references therein). Field theory on this noncommutative space can be obtained by the replacement of standard products of fields by the Moyal ⋆-product induced by the relation (1.1),

(f ⋆ g)(x) = f(x) exp( (i/2) ∂⃖_μ θ^{μν} ∂⃗_ν ) g(x).   (1.5)
In the momentum basis, the result of such an operation is the appearance of an additional Moyal phase factor exp( −(i/2) Σ_{i<j} k_i ∧ k_j ), where k ∧ p ≡ k_μ θ^{μν} p_ν and the k_i are the momenta entering the vertex.
Due to this phase factor one has to fix a definite cyclic ordering (say, anti-clockwise) of the momenta that enter any vertex of a given Feynman diagram.
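As a concrete check of the ⋆-product (1.5) (our illustration, not taken from the paper): expanding the Moyal product for a single noncommuting pair reproduces [x, y]⋆ = iθ.

```python
import sympy as sp

x, y, theta = sp.symbols("x y theta", real=True)

def d(f, var, n):
    # n-th derivative, with the convention d(f, var, 0) = f
    return f if n == 0 else sp.diff(f, var, n)

def star(f, g, order=2):
    """Moyal product f*g for one noncommuting pair (x, y), expanded to
    `order` in theta; exact here since the arguments are linear."""
    total = sp.S(0)
    for n in range(order + 1):
        term = sp.S(0)
        for m in range(n + 1):   # binomial expansion of the bidifferential
            term += (sp.binomial(n, m) * (-1) ** (n - m)
                     * d(d(f, x, m), y, n - m)    # derivatives acting on f
                     * d(d(g, y, m), x, n - m))   # derivatives acting on g
        total += (sp.I * theta / 2) ** n / sp.factorial(n) * term
    return sp.expand(total)

print(star(x, y) - star(y, x))   # -> I*theta, i.e. [x, y]_* = i*theta
```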
An intriguing phenomenon for quantum field theory on noncommutative Minkowski space is the existence of an infrared/ultraviolet (IR/UV) mixing [20] in the quantum effective action. Due to this mixing, IR singularities arise from integrating out the UV degrees of freedom. This threatens the renormalizability and even the consistency of a QFT on noncommutative spacetime. Hence a better understanding (beyond the technical level) of the mechanism of the IR/UV mixing and of possible ways to resolve it is certainly highly desirable. We recall that so far in the literature, field theory on noncommutative spacetime has been quantized by following the standard perturbative procedure: namely, the action is expanded around the free action and the corresponding Feynman rules are then written down. This is justified in the commutative case; however, since the introduction of θ^{μν} necessarily breaks the Lorentz symmetry down to the smaller group that is left unbroken by the commutation relations (1.1), it is actually quite unnatural to employ the standard perturbative vacuum, i.e. the one defined by the free action and so respecting the full Lorentz symmetry. This leads one to suspect that the IR/UV mixing may be reflecting only the properties of the perturbation theory, and may be altered or disappear completely in the full nonperturbative regime (see for example [21]). An exactly solvable field theory would be a good ground for testing this idea [22]. This leads us to introduce and study the noncommutative Lee model.
Another intriguing phenomenon for any quantum field theory on noncommutative spacetime is that unitarity could be violated. It is commonly believed that noncommutative field theory with space-space noncommutativity is unitary, while theory with space-time noncommutativity is not. This is consistent with the fact that space-space noncommutative field theory can be embedded in string theory [13, 23, 24, 25, 26, 27], while field theory with space-time noncommutativity cannot [28, 29, 30]. In [31] it was found that the unitarity constraints (see (2.17)) are satisfied for noncommutative theories with space noncommutativity but are violated for theories with a noncommuting time (see also [32, 33, 34, 35] for recent discussions). However these constraints are, in general, actually a stronger statement than unitarity itself. The constraints presume a symmetric condition (see (2.18)) which is not generally valid. Without making any additional assumptions, in this paper we examine directly the analyticity and unitarity of the S-matrix of a general noncommutative field theory. We show that Feynman amplitudes of a noncommutative theory are hermitian analytic (see (2.12)), a useful characterization of the S-matrix as introduced and proven by Olive [36]. As a result, the statement that the S-matrix is unitary takes the boundary-analytic form (2.13); and the discontinuity of a Feynman diagram amplitude can be computed according to the Cutkosky formulae [37].
Although these two phenomena have received a lot of attention and have been thoroughly discussed in the literature, as far as we know the relation between them has not been identified explicitly and explained before. One of the main aims of this paper is to identify and explain such a relation between the IR/UV singularity and the possible violation of unitarity in a noncommutative field theory. This relation will be established through the boundary-analytic unitarity constraints (2.13). The basic idea is that the unitarity constraints allow one to relate the discontinuity of a scattering amplitude in a physical region with the appearance of intermediate states that can be put on-shell in this region. However, in a noncommutative theory, IR singularities can also be generated due to the IR/UV mixing. These new singularities do not correspond to any physical intermediate degrees of freedom. So, generally, one can expect that the unitarity constraints could be violated. In this paper we show that, in the case of space-space noncommutativity, the new IR singularities are safe in the sense that they do not generate any discontinuities in the scattering amplitudes. However, the IR singularities do generate such discontinuities in the space-time noncommutative case. This is the basic field-theoretic mechanism for the violation of unitarity in a noncommutative theory. We stress that this violation of unitarity occurs only if time is noncommuting and in the presence of singularities due to the IR/UV mixing.
To illustrate the above ideas, we introduce and analyse the noncommutative Lee model. The Lee model [38] is an exactly solvable, nonrelativistic model. The noncommutative Lee model can be defined by using the deformed product of fields (1.5). The model remains exactly solvable. We show that the noncommutative Lee model is free from the IR/UV mixing both at the perturbative level, and in the full exact answer. Thus the noncommutative Lee model does not provide a resolution of the IR/UV mixing issue. This may appear to be disappointing from the point of view of looking for a nonperturbative resolution of the IR/UV mixing issue. Nevertheless, the absence of an IR/UV singularity in a noncommutative field theory is nontrivial. This is one of the main results of this paper. Moreover, due to the absence of the IR/UV mixing, one can expect, from the above mentioned general arguments, that the Lee model with space-time noncommutativity is unitary. We provide some further arguments to support this claim.
The plan of our presentation is as follows. In section 2.1, we review some basic facts about the S-matrix of commutative field theory. In section 2.2, we prove that Feynman diagram amplitudes in a noncommutative field theory are hermitian analytic, and we investigate the consequences of this statement for the unitarity of the theory. We show that the usual form of the unitarity constraints used by many people is not correct in general; we derive the correct form of the unitarity constraints and show how they can be used to check the unitarity of a given noncommutative theory. In section 2.3, we explain how an IR/UV singularity may lead to a breakdown of unitarity in space-time noncommutative field theory. In section 3, we study the issue of the IR/UV mixing and unitarity in the noncommutative Lee model. In section 3.1 we describe the commutative Lee model. We show that this model is renormalizable, with the renormalization constants easily computed in closed form. It is well known that the original Lee model in 4-dimensional spacetime has a ghost state and is not unitary [38, 39, 40]. We discuss improved versions of the original Lee model that do not have these problems, and restrict ourselves to these models when we introduce noncommutativity and address the unitarity of the noncommutative model. This we do in section 3.2, where we introduce the space-space noncommutative and the space-time noncommutative Lee model via the substitutions (3.33) and (3.34). We show that there is no IR/UV mixing in either case, so that one can expect the theory to be unitary, and we present some arguments supporting this claim.
2 Unitarity and Hermitian Analyticity
In this section, we discuss some useful properties of the S-matrix. We refer the reader to [41] and to the excellent monograph [42] for further details on this subject. We follow the notations and nomenclature of [42].
2.1 The S-Matrix in the Commutative Case
First we consider the commutative case. Unitarity of a quantum field theory follows from the existence of a hermitian Hamiltonian. In terms of the on-shell S-matrix, unitarity is the statement that

S†S = SS† = 1.
Due to the cluster decomposition property of the S-matrix, it is meaningful to decompose it into two parts,

S = 1 + iT,

where T is the transition matrix. Written in terms of T, unitarity reads

⟨f|T|i⟩ − ⟨f|T†|i⟩ = i Σₙ ⟨f|T†|n⟩⟨n|T|i⟩   (2.9)
where the sum is over all intermediate states n, associated with putting particles on shell. The S-matrix and the transition matrix T are defined for external particles with real momenta. Since both are invariant under proper Lorentz transformations, their matrix elements (transition amplitudes) must be functions of the Lorentz scalars that can be formed out of the momenta. We call a combination of external lines of the amplitude for a given physical process a channel, and two channels whose lines are disjoint and exhaustive a reaction. For an amplitude with n external lines there are a number of distinct reactions, provided that we exclude reactions with single-particle channels and do not distinguish the direction of the reaction. The channel invariant variable s_c is the square of the total energy in the given channel c,

s_c = (p₁ + ⋯ + p_m)² = (p′₁ + ⋯ + p′_{m′})²,

where the p_i and p′_j are the momenta of the incoming and outgoing lines, respectively. The s_c's are generalizations of the Mandelstam variables s, t, u of 2 → 2 scattering. It is convenient to discuss the singularity structure of a scattering amplitude in terms of the space of these different channel invariants. For more details see [41].
The transition amplitudes typically have singularities. In perturbation theory, the transition amplitude is given by the sum of a number of Feynman diagrams G, each corresponding to a Feynman integral typically of the form

F_G = N ∫ ∏ⱼ dᵈkⱼ ∏ᵢ 1/(qᵢ² − mᵢ² + iε),

where N is a real normalization factor that contains the couplings and factors of 2π, etc., the kⱼ are the loop momenta, and each internal momentum qᵢ is a linear combination of loop momenta and the external momenta. As we have said before, the integral can be written in terms of the s_c's. If one extends s_c to the complex plane, then the singularities are typically branch points in the complex s_c-plane. (The locations of the singularities are determined by the Landau equations; see for example [42]. We remark that the Landau equations are entirely fixed by the singularity manifold of the integrand of the Feynman integral, and since noncommutativity modifies the integrand only by a phase factor, the Landau equations are unmodified by noncommutativity.) Extending to the complex domain, one can think of F_G (or T) as the boundary value of an analytic function defined on the complex s_c-plane. The resulting analytic function has singularities on the real s_c-axis that correspond to physically accessible momenta; these are called the physical region singularities. In addition, this analytic function may have further singularities that correspond to external momenta that are not physically accessible. The analysis of these additional singularities is more complicated and is not usually performed.
The existence of singularities in the amplitude is a consequence of unitarity [36]. The reasoning is that as the channel invariant increases past a certain threshold (in the physical region of the considered amplitude) that corresponds to a new possible intermediate state, a new term enters the unitarity equation, and this gives rise to a singularity in that channel. Such singularities are called normal thresholds. The physical region is divided into segments by the normal threshold singularities. It can be shown, within perturbation theory, that the amplitudes in these segments can be continued consistently into the complex plane and related analytically if one adopts in the Feynman integrals the iε prescription, replacing m² → m² − iε. This corresponds to associating a +iε with a channel invariant when it is close to a normal threshold. The s_c + iε prescription in the correct invariant is appropriate for all physical region normal thresholds in all amplitudes [42]. Furthermore, it can be shown that the Feynman amplitudes F_G (and hence also T) are hermitian analytic [36], i.e. they satisfy

F(s_c)* = F(s_c*).   (2.12)
As a consequence of the hermitian analyticity (2.12), the unitarity relation (2.9) can be put in a more elegant form,

T₊ − T₋ = i T₋ T₊.   (2.13)

Here f₊ and f₋ denote the boundary values, on the real axis, respectively from above and below the cut, of a complex function f,

f±(s_c) = lim_{ε→0⁺} f(s_c ± iε),

and Disc f is the discontinuity across this cut,

Disc f = f₊ − f₋.

The relation (2.13) is actually somewhat stronger. Indeed, as a result of unitarity and hermitian analyticity, it holds for each individual Feynman diagram [37],

F_G₊ − F_G₋ = i Σₙ (F_{G₁})₋ (F_{G₂})₊,   (2.16)

schematically, where the sum runs over the ways of cutting G into subdiagrams G₁, G₂ with the cut lines on shell.
In (2.13) and (2.16) the discontinuities in a given channel of the amplitude are associated with normal thresholds.
In terms of Feynman diagrams, the matrix elements T₊ and T₋ are given, respectively, in terms of the prescriptions m² → m² − iε and m² → m² + iε. The RHS of (2.16) can be computed using the “cutting rules” of Cutkosky [37]: first cut the diagram in all possible ways such that the cut propagators can go on shell simultaneously (for a given set of s_c's); then, for each cut, replace the cut propagators by −2πi θ(q⁰) δ(q² − m²) in the relativistic case, and by −2πi δ(q⁰ − E(q)) in the nonrelativistic case; finally, sum the contributions of all possible cuts.
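As a concrete numerical illustration of these statements, one can check hermitian analyticity and the cut discontinuity for the textbook scalar one-loop bubble in Feynman-parameter form. This is a standard example, not the Lee-model amplitude; the sketch below is our own, with our own names, and assumes numpy and scipy.

```python
# Hedged sketch: the standard scalar one-loop bubble in Feynman-parameter
# form (a textbook example, NOT an amplitude from this paper),
#     B(s) = Integral_0^1 dx log(m^2 - x(1-x) s),
# has, for s > 4 m^2, the cut discontinuity
#     B(s+i0) - B(s-i0) = -2 pi i sqrt(1 - 4 m^2 / s),
# as the cutting rules predict, and is hermitian analytic: B(s*)* = B(s).
import numpy as np
from scipy.integrate import quad

m = 1.0

def bubble(s):
    """B(s) for complex s, integrating real and imaginary parts separately."""
    f = lambda x: np.log(m**2 - x * (1 - x) * s + 0j)
    re, _ = quad(lambda x: f(x).real, 0.0, 1.0)
    im, _ = quad(lambda x: f(x).imag, 0.0, 1.0)
    return re + 1j * im

s, eps = 6.0, 1e-8                        # a point above threshold s = 4 m^2
disc = bubble(s + 1j * eps) - bubble(s - 1j * eps)
print("numerical Disc B =", disc)
print("predicted Disc B =", -2j * np.pi * np.sqrt(1 - 4 * m**2 / s))

sc = 3.0 + 0.5j                           # hermitian analyticity off the axis
print("B(s*)* - B(s)    =", np.conj(bubble(np.conj(sc))) - bubble(sc))
```

The two printed discontinuities agree, and the last line is numerically zero, matching (2.12) and (2.16) for this simple example.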
Before we embark on the noncommutative case, let us remark that the equation (2.9) is sometimes written in the form [43]

2 Im ⟨f|T|i⟩ = Σₙ ⟨n|T|f⟩* ⟨n|T|i⟩   (2.17)

(or, for f = i, the optical theorem 2 Im ⟨i|T|i⟩ = Σₙ |⟨n|T|i⟩|²). To arrive at this form, the following symmetric relation

⟨f|T|i⟩ = ⟨i|T|f⟩   (2.18)

has been assumed.
This relation holds, for example, when the theory is time-reversal invariant and rotationally invariant, and the basis vectors are chosen to be eigenstates of the total angular momentum [44]. However, we would like to stress that this relation is not true in general. Failure of (2.17) can be due to the symmetry condition (2.18) failing, to the unitarity relation (2.9) not being satisfied, or to the amplitude possessing singularities which are not due to the possible intermediate states. Therefore, generically, (2.17) is not a conclusive check of whether a given theory is unitary or not. In the next subsection we show that hermitian analyticity remains valid in the noncommutative case and that, therefore, (2.13) and (2.16) can be used to check the unitarity of a noncommutative theory.
2.2 The S-Matrix in the Noncommutative Case
In a noncommutative quantum field theory the propagators take the same form as in the commutative case, while the vertices are modified by the Moyal phase factor (1.6) that arises from the noncommutative multiplication. For example, in a noncommutative scalar self-interaction the modification of the (real) coupling is a multiplication by a real factor, schematically

g → g cos(k₁∧k₂/2),  with kₐ∧k_b ≡ kₐμ θ^{μν} k_bν,

where k₁ and k₂ are (two of) the momenta entering the vertex. However, it is easy to see that when the theory involves more fields, the modification of the vertex is, in general, a genuine phase factor. For example, this is the case for the noncommutative Lee model to be introduced in the next section. The phase factor (1.6) is cyclically symmetric but not permutation symmetric. Therefore, the symmetric relation (2.18) is, in general, not valid.
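The cyclic (but not full) permutation symmetry of the vertex factor is easy to see numerically. The minimal sketch below uses one common convention for the Moyal phase, exp(−(i/2) Σ_{a<b} kₐ∧k_b), which need not match the normalization of (1.6); all names and values are illustrative only.

```python
# Hedged sketch of a Moyal vertex factor (one common convention):
#     V(k_1,...,k_N) = exp( -(i/2) * sum_{a<b} k_a wedge k_b ),
#     k_a wedge k_b = k_a . theta . k_b,  theta antisymmetric.
# It is invariant under cyclic rotations of the momenta but goes to its
# complex conjugate under a transposition.
import numpy as np

theta = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])          # a 2d antisymmetric theta block

def wedge(ka, kb):
    return ka @ theta @ kb

def vertex_factor(momenta):
    s = sum(wedge(momenta[a], momenta[b])
            for a in range(len(momenta))
            for b in range(a + 1, len(momenta)))
    return np.exp(-0.5j * s)

k1 = np.array([1.0, 0.3])
k2 = np.array([-0.4, 0.7])
k3 = -(k1 + k2)                          # momentum conservation at the vertex

print(vertex_factor([k1, k2, k3]))       # reference ordering
print(vertex_factor([k2, k3, k1]))       # cyclic rotation: same value
print(vertex_factor([k2, k1, k3]))       # transposition: complex conjugate
```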
Since Lorentz invariance is broken, in addition to the channel invariants we have introduced above, the S-matrix of a noncommutative field theory generally depends also on θ-dependent variables, such as the wedge products pᵢ∧pⱼ of the external momenta and the combinations p∘p built from θ² (discussed below). A novelty of noncommutative theory is the possible existence of the IR/UV mixing [20], which states that the amplitudes in a noncommutative theory become singular in the limit in which one removes the cutoff, i.e. Λ → ∞. These singularities occur in the physical region of momenta but do not correspond to normal thresholds, since the IR/UV singularities are not related to any new degrees of freedom. One may extend the amplitude analytically to above the cut associated with these singularities by adding an iε to p∘p. This corresponds to extending the iε prescription for the Feynman diagram to the cutoff, since the combination 1/Λ² + p∘p often appears as a whole [20].
Hermitian analyticity
Next we examine the hermitian analyticity of a noncommutative Feynman diagram. We show that the Feynman amplitudes of noncommutative theories are hermitian analytic. To see this, we note that under complex conjugation the Moyal phase factor (1.6) behaves as

V(k₁, k₂, …, k_N)* = V(k_N, …, k₂, k₁),   (2.21)

i.e. conjugation reverses the cyclic ordering of the momenta entering the vertex. We can interpret the RHS as the Moyal phase factor of a vertex which is the mirror image of the original one; see figure 1. In the operator language, the RHS of (2.21) corresponds to a Wick contraction performed in the reverse order. In general, let Φ(G) be the product of the Moyal phase factors associated with the vertices of a Feynman diagram G. We have

Φ(G)* = Φ(G̅),

where G̅ is the mirror diagram of G.
Figure 1: A Feynman diagram G and its mirror diagram G̅. They have opposite Moyal phase factors.
In a noncommutative theory, the Feynman amplitude for a diagram G takes the form

F_G = N ∫ ∏ⱼ dᵈkⱼ ( ∏ᵢ Δ(qᵢ) ) Φ(G).

Here Δ(qᵢ) is the propagator of the i-th internal line, the mass squared has a small imaginary part, m² − iε, and N is a real normalization factor that contains the couplings and factors of 2π, etc. (We emphasize that the couplings, bare as well as renormalized, have to be real. As we discuss at the end of section 3.1, the original Lee model, defined in 4-dimensional spacetime and with the dispersion relations (3.2), has an imaginary bare coupling [38], and hermitian analyticity does not hold, in either the commutative or the noncommutative case. However, the improved Lee models have real couplings and so have a hermitian analytic S-matrix.) Complex conjugating, one has

F_G* |_{m² − iε} = F_G̅ |_{m² + iε},

where we have used in the last step the observation that a change of sign of the imaginary part of the mass (or of the cutoff) corresponds to a change of sign of the imaginary part of s_c (or of p∘p). In the discussion given above, for the clarity of the argument, we have been careful to indicate which diagram (G or G̅) is to be drawn for the Feynman amplitude to be computed. However, this is not really necessary, as the diagram to be drawn is already fixed once the process to be considered (i → f or f → i) is specified. Therefore, one can simply write

F(s_c)* = F(s_c*).
Thus we have shown that the Feynman diagrams (and hence the S-matrix) of a noncommutative theory are hermitian analytic. We stress that our result is general and does not depend on the detailed form of the propagators or vertices. For example, it applies to the noncommutative Lee model to be introduced in section 3.
2.3 Unitarity Constraints and their Relation to the IR/UV Singularities
Note that the symmetric condition (2.18) is, in general, not valid, and so the condition (2.17) may fail even if a theory is unitary. However, since Feynman amplitudes satisfy hermitian analyticity, (2.13) and (2.16) hold if the S-matrix is unitary. Therefore we propose to use (2.13) or (2.16), instead of (2.17), as a check of unitarity. (In [31], the propagator diagram and the two-particle scattering diagram in noncommutative scalar theories were considered. It is easy to see that the symmetric condition (2.18) is satisfied for these processes, and so checking (2.17) constitutes a valid test of unitarity for the noncommutative theories considered there.)
Before we consider a specific model, let us discuss how the IR/UV singularities may lead to a breakdown of unitarity in general. Generally, a new IR/UV singularity in a scattering amplitude is a pole or a branch point in p∘p for some momentum p. Note that, choosing for example the only nonvanishing components to be θ^{12} = −θ^{21} = θ (space-space noncommutativity), one finds

p∘p = θ²(p₁² + p₂²) ≥ 0.

Therefore, for space noncommutativity, p∘p is positive semi-definite, and so there is no new contribution to the discontinuity of the amplitude from this singularity. However, in the case of space-time noncommutativity, p∘p is not of definite sign in the physical region [31]. Therefore, if the amplitude has a branch point in p∘p, there will be a new contribution to the LHS of (2.16). Since the IR/UV singularities do not correspond to any intermediate degrees of freedom that can go on shell, these new contributions will not be accounted for by the “on-shell” sum, and (2.16) will be violated. This is the basic mechanism by which unitarity is violated by the IR/UV singularities when time is noncommuting: both the IR/UV singularity and the noncommuting time must be present in order to violate unitarity. Finally, we would like to add that, as was shown in [32], even when one tries to add new degrees of freedom to satisfy the cutting rules in a formal sense, these new degrees of freedom have to be tachyonic, and so the theory is inconsistent.
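The sign statement is easy to verify numerically. The sketch below uses one convention, p∘p = −(θp)_μ(θp)^μ with (θp)^μ = θ^{μν}p_ν and a mostly-minus metric; conventions differ across the literature, so treat the normalization as illustrative.

```python
# Hedged sketch of the sign of p o p.  With the convention-dependent
# definition  p o p = -(theta p)_mu (theta p)^mu,
# (theta p)^mu = theta^{mu nu} p_nu, metric diag(+,-,-,-):
# purely spatial theta gives p o p >= 0; a time-space theta does not.
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # mostly-minus Minkowski metric

def make_theta(mu, nu, val=1.0):
    th = np.zeros((4, 4))
    th[mu, nu], th[nu, mu] = val, -val
    return th

def p_circ_p(p_upper, theta):
    p_lower = eta @ p_upper
    tp_upper = theta @ p_lower           # (theta p)^mu = theta^{mu nu} p_nu
    return -tp_upper @ eta @ tp_upper    # -(theta p)^2

theta_space = make_theta(1, 2)           # space-space: theta^{12} != 0
theta_time = make_theta(0, 1)            # space-time:  theta^{01} != 0

rng = np.random.default_rng(0)
for _ in range(5):
    p = rng.normal(size=4)
    print(f"space-space: {p_circ_p(p, theta_space):+8.3f}   "
          f"space-time: {p_circ_p(p, theta_time):+8.3f}")
# The space-space column is always >= 0; the space-time column takes
# both signs, which is what opens the door to new discontinuities.
```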
3 An Application: The Noncommutative Lee Model
In this section, we consider the Lee model and its noncommutative generalization. In particular, we consider the issues of the IR/UV mixing and unitarity for the noncommutative Lee model. We will find that, due to the presence of the Moyal phase factors, the symmetric condition (2.18) is not satisfied; therefore one should check unitarity using (2.16). We show that the noncommutative Lee model is free from any IR/UV singularity, and this result is exact. As a result, one can expect that the noncommutative Lee model is unitary in both the space-space and the space-time noncommutative case. We give further arguments supporting this claim.
Another model which is free from the IR/UV mixing is the noncommutative Chern-Simons model. This model is finite and, as shown in [45], free from the IR/UV mixing at the one-loop level. However, it is actually a free theory, at least in the axial gauge [46], and thus it is not suitable for our purposes.
3.1 Commutative Case
The Lee model was originally introduced by Lee in [38], where it was shown that the model is renormalizable, with its mass, wavefunction and charge renormalizations easily performed in an exact manner. In the following, we follow the presentation of [44]. The model has two fermions V and N, with masses m_V and m_N respectively, and a scalar φ with mass μ. The Hamiltonian for the free fields is

H₀ = ∫ d³k [ ω_V(k) V†(k)V(k) + ω_N(k) N†(k)N(k) + ω(k) a†(k)a(k) ],   (3.1)

where ω_V(k), ω_N(k), ω(k) are the dispersion relations of the free V, N and φ particles, and V(k), N(k), a(k) are the annihilation operators of the V, N and φ particles, respectively. In the original Lee model [38], the V and N fermions are taken to be very heavy while φ is assumed to be relativistic. In this case, the dispersion relations are given by

ω_V(k) = m_V,   ω_N(k) = m_N,   ω(k) = √(k² + μ²).   (3.2)
The Galilei-invariant form [49],

ω_V(k) = m_V + k²/(2m_V),   ω_N(k) = m_N + k²/(2m_N),   ω(k) = μ + k²/(2μ),   (3.3)

as well as the relativistic choice [50],

ω_V(k) = √(k² + m_V²),   ω_N(k) = √(k² + m_N²),   ω(k) = √(k² + μ²),   (3.4)

were also studied in the literature. The interacting Hamiltonian of the model is taken to be given by

H_int = g ∫ d³k d³p f(k) [ V†(p) N(p − k) a(k) + a†(k) N†(p − k) V(p) ],   (3.5)

where f(k) is a form factor introduced to smooth out the interaction and avoid the divergences connected with a point interaction. (Note that, in principle, one can also use a more general form factor that depends on the momentum of the V N pair; it is easy to see that this amounts to a simple replacement of f in the analysis below.) In fact, f can be taken to be 1 and the divergences absorbed by renormalization; this is the case of interest to us. However, as we will see, the introduction of noncommutativity into the Lee model amounts to a modification of f by a phase factor. Therefore we will keep f explicitly in the presentation below, with the understanding that it is to be set to 1 (or to the Moyal phase factor in the noncommutative case) in the final answer.
We note that the interaction is nonlocal in space even in the limit f → 1. To see this, it is convenient to introduce the negative and positive frequency parts of the scalar field,

φ(x) = φ⁺(x) + φ⁻(x),   φ⁺(x) ∝ ∫ d³k a(k) e^{ik·x}.

In terms of φ⁺ and φ⁻, H_int can be written in coordinate space as

H_int = g ∫ d³x d³y F(x − y) [ V†(x) N(x) φ⁺(y) + φ⁻(y) N†(x) V(x) ],

where F is the Fourier transform of the Lee model form factor f, and F(x − y) → δ(x − y) in the limit f → 1. It is now clear that the coupling term is nonlocal in space, since the operation of taking the positive frequency part involves an integration over all space. However, the model is local in time.
Since the theory is local in time, it can be described equivalently in the Lagrangian formulation by performing the Legendre transformation. The Lagrangian density of the model is given by

L = L₀ + L_int,

where L₀ is the free part, consisting of the quadratic terms for V, N and φ that reproduce the free Hamiltonian (3.1), and the interaction is described by

L_int = −g [ V† N φ⁺ + φ⁻ N† V ].   (3.12)

The Lagrangian formulation will be useful when we introduce the electric deformation of the model.
The Lee model can be solved by considering directly the Schrödinger equation with the Hamiltonian H = H₀ + H_int, where H₀ is given by (3.1) with the choice (3.2) and H_int is given by (3.5). Due to the structure of the interaction (3.12), the only elementary process of the theory is

V ⇄ N + φ.

In a standard relativistic model, the antiparticle of φ would appear and the crossed reaction would be possible, but this is not allowed in the Lee model due to the particular form of the interaction Hamiltonian (3.5). The system possesses two simple conservation laws,

N_V + N_N = const,   N_V + N_φ = const,   (3.15)

where N_V, N_N, N_φ are the total numbers of V, N and φ particles, respectively. Due to the conservation laws (3.15), the eigenfunctions of H contain only a finite number of particles and, consequently, the theory is exactly solvable [38].
The quantization of the theory is straightforward: locality in time allows us to perform the standard canonical quantization. The nontrivial (anti)commutation relations of the field operators are

{V(k), V†(k′)} = {N(k), N†(k′)} = δ³(k − k′),   [a(k), a†(k′)] = δ³(k − k′),

with the rest equal to zero. The vacuum of the theory is defined by

V(k)|0⟩ = N(k)|0⟩ = a(k)|0⟩ = 0.

It is easy to verify that the single N and φ states are already eigenstates of the full Hamiltonian H; thus we can take the N- and φ-quanta as the physical particles (of masses m_N and μ, respectively), identify their bare masses with the physical ones, and there is only the renormalization of the mass of V to be considered.
Without any loss of generality, we consider the dispersion relations (3.2) in order to study the renormalization of the theory. Consider the sector of the theory associated with one physical V-particle, and denote the physical V-particle state by |V_ph⟩. Due to the conservation laws (3.15), we have

|V_ph⟩ = Z^{1/2} [ V†|0⟩ + ∫ d³k ψ(k) a†(k) N†|0⟩ ],   (3.20)

with the wavefunction ψ(k) still to be determined. Here |V_ph⟩ is an eigenstate of H,

H |V_ph⟩ = m_V |V_ph⟩,

with m_V the physical mass. The normalization ⟨V_ph|V_ph⟩ = 1 yields

Z⁻¹ = 1 + ∫ d³k |ψ(k)|².
Contracting (3.20) with the bare-V state, one obtains the eigenvalue equation for m_V. On the other hand, contracting (3.20) with the N φ states, one obtains the wavefunction,

ψ(k) ∝ g f(k) / (m_V − m_N − ω(k)),

which, substituted back, gives the eigenvalue condition in two forms: (3.24), valid when m_V − m_N lies below the N φ threshold, and (3.25), valid above it.
Note that eq. (3.24) corresponds to the case when the V particle is stable, i.e. it cannot spontaneously decay into an N and a φ particle; the decay of the V particle is allowed in the case of eq. (3.25). The renormalized coupling can be obtained by requiring the amplitude for the scattering process

N + φ → N + φ

to be nonzero in the limit f → 1.
As a result, we obtain the renormalization constants: the mass shift (3.27), the wavefunction renormalization Z (3.28), and the coupling renormalization (3.29). The integrals in (3.27) and (3.28) are generally divergent in the limit f → 1. As usual, all the scattering amplitudes (N φ → N φ, etc.) become finite after we have performed the renormalizations (3.27), (3.28) and (3.29). (For example, renormalized scattering amplitudes in higher sectors were studied in [47] and [48].)
Figure 2: Feynman rules for the Lee model
We would like to add a couple of comments:
i) One can perform a path integral quantization of the theory, obtaining the Feynman rules given in figure 2. Using these Feynman rules, it is straightforward to show that the above results for the renormalization can also be obtained in the Lagrangian framework and are exact in perturbation theory. Later we will use these Feynman rules to study the noncommutative Lee model, particularly in the time-noncommuting case.
ii) In the original Lee model [38], with the dispersion relations (3.2), the mass renormalization constant is linearly divergent while the wavefunction renormalization is logarithmically divergent. It has been shown that the alternative choices (3.3) (Galilean kinematics) and (3.4) (relativistic kinematics) of dispersion relations lead to finite renormalizations when f → 1.
Unitarity and the ghost state
The relation (3.29) between the renormalized coupling g and the bare coupling g₀ can be rewritten (with f set to 1), schematically, as

1/g₀² = 1/g² − I,   I = ∫ d³k / [ (2π)³ 2ω(k) (ω(k) − Δm)² ],   (3.30)

with Δm the V−N mass difference. In the original 4-dimensional model, I is logarithmically divergent. If g is to remain fixed and nonvanishing, the bare coupling has to be imaginary,

g₀² < 0,

and the wavefunction renormalization Z = g²/g₀² becomes negative. This contradicts the interpretation of Z as the probability of finding a bare V quantum in the physical V-particle state. Such negative probabilities imply that the S-matrix is not unitary. In fact, one can show [39] that there is a corresponding new state in the theory; this state has a negative norm and is referred to as the “ghost state” by Källén and Pauli. As a result, the S-matrix is explicitly non-unitary. The non-unitarity of the theory is related to the original Hamiltonian being non-hermitian, due to the presence of an imaginary bare coupling.

Two improvements of the original Lee model are possible. One is to consider other dispersion relations, e.g. (3.3) and (3.4); this leads to a 4-dimensional theory with finite renormalizations and without a ghost [49, 50]. Another possibility is to consider the Lee model in lower dimensions [51]: there the integral in (3.30) is finite, and so the model is ghost free for physical (real) coupling. The improved Lee model is still exactly solvable in both cases. To minimize the number of new formulae, we consider the second class of models when we generalize to the noncommutative case.
3.2 The Noncommutative Lee Model
The noncommutative framework is generated by using the ⋆-product (1.5). As mentioned in the introduction, the noncommutative deformation can be introduced either in the Hamiltonian or in the Lagrangian formulation in the magnetic case (θ^{0i} = 0). The replacement (1.5) amounts to the following substitution in the formula (3.5):

f(k) → f(k) e^{−(i/2) p∧k},  with the wedge built from the spatial components θ^{ij}.   (3.33)

In the electric case, with nonvanishing components θ^{0i} (besides the magnetic and electric cases one can also consider lightlike deformations [52], corresponding to θ of lightlike type), the substitution takes the analogous form

f(k) → f(k) e^{−(i/2) p∧k},  now with the wedge built from θ^{0i}.   (3.34)

Obviously, the ⋆-product then involves an infinite number of time derivatives. The nonlocalities in time destroy not just the usefulness of the Hamiltonian formulation, but also the standard way of relating the Lagrangian and the Hamiltonian descriptions. (For recent efforts at introducing a Hamiltonian framework for Lagrangian densities nonlocal in time, see [53, 54, 55]; we have not been able to employ these results here in a constructive way.) We are thus left only with the Lagrangian framework. For example, when θ^{01} is the only nonvanishing component, one obtains in the interaction Lagrangian (3.12) the modification of the product of a pair of fields, schematically

(V† ⋆ N)(x) = exp( (i/2) θ^{01} (∂_{x⁰} ∂_{y¹} − ∂_{x¹} ∂_{y⁰}) ) V†(x) N(y) |_{y=x}.   (3.35)

Note that, due to the associativity of the ⋆-product and the integration over spacetime, the ⋆-product of the three fields in (3.12) can be represented by a modification of the product of any pair of fields (V† and N as in (3.35), or another pair).
Note also that the phase factor in (3.33) and (3.34) does not reduce to a real factor, as it does in the noncommutative scalar case. Thus the noncommutative modification in the Lee model involves a genuinely complex factor. This, in particular, implies that the symmetric condition (2.18) is not satisfied.
Quantization of the magnetically deformed theory can be achieved either by canonical quantization or, equivalently, by path integral quantization. In the electric case, canonical quantization fails due to the nonlocality in time; nevertheless, formally, the theory can be quantized using the path integral method. In the following, we will use the path integral method to analyze both the magnetic and the electric Lee models. The Feynman rules are those of figure 2, with f given by (3.33) and (3.34), and they work for general θ. To be specific, below we consider the noncommutative Lee model with the standard dispersion relations (3.2).
Renormalization and (no) IR/UV mixing
Since the effect of noncommutativity is the modification (3.33) or (3.34) of f by a phase factor, it is clear that the mass, wavefunction and coupling renormalizations (which depend on f only through |f|²) are not affected. Thus we conclude that the renormalization constants of the noncommutative Lee model are exactly computable and independent of the noncommutativity parameter θ.
Moreover, one can easily convince oneself that the UV divergences of the theory reside in planar diagrams that simply do not have nonplanar counterparts. Thus the UV divergences of the noncommutative Lee model remain untouched in the limit in which the cutoff is removed. This is quite different from other noncommutative field theories, which display an intriguing IR/UV mixing [20]: in those models, the introduction of a nonzero noncommutativity improves the UV convergence of nonplanar diagrams but also leads to new IR singularities for these diagrams. In the present case of the noncommutative Lee model, there simply are no UV divergences in the nonplanar diagrams, and hence there are also no new IR singularities that could be generated. We conclude that the noncommutative Lee model is free from IR/UV mixing. This result is exact.
First we consider the unitarity constraints at the one-loop level. Due to the structure of the vertices (figure 2), it is easy to convince oneself that only planar diagrams can be drawn at one loop. Therefore the one-loop Feynman amplitudes take the form

F_G = Φ(p) F_G^c,   (3.36)

where F_G^c is the corresponding amplitude in the commutative case and Φ(p) is the Moyal phase factor associated with the planar diagram, which depends only on the external momenta. As a result, the equation (2.16) is satisfied, since

F_G₊ − F_G₋ = Φ(p) (F_G^c₊ − F_G^c₋) = Φ(p) · i Σₙ (F^c)₋ (F^c)₊ = i Σₙ F₋ F₊.

In the second step, we have used the fact that the constraint (2.16) is satisfied for the commutative Lee model, since this model is unitary (or one can verify this in a straightforward manner, since the amplitudes that appear in the sum are tree-level ones). In the last step we have used the fact that the planar Moyal phase factor of the one-loop diagram decomposes simply into the product of the tree-level ones:

Φ(p) = Φ₁(p) Φ₂(p).
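The planar/nonplanar distinction in the phase factors can be checked symbolically. The toy computation below uses an illustrative trivalent vertex convention (not necessarily the normalization of (1.6)) and assumes sympy: for a one-loop two-point diagram, the planar routing has a total Moyal phase independent of the loop momentum, while the nonplanar (crossed) routing of figure 3 retains a loop-momentum-dependent term.

```python
# Hedged toy check of the factorization above: with the illustrative
# trivalent vertex exponent -(1/2) sum_{a<b} q_a wedge q_b (all momenta
# incoming, sum = 0), the planar one-loop two-point routing has total
# phase exponent 0, while the nonplanar routing keeps a k wedge p term.
import sympy as sp

th, p0, p1, k0, k1 = sp.symbols('theta p0 p1 k0 k1')
p = sp.Matrix([p0, p1])
k = sp.Matrix([k0, k1])
Theta = sp.Matrix([[0, th], [-th, 0]])

def wedge(a, b):
    return (a.T * Theta * b)[0]

def phase_exponent(momenta):
    n = len(momenta)
    return -sp.Rational(1, 2) * sum(wedge(momenta[a], momenta[b])
                                    for a in range(n)
                                    for b in range(a + 1, n))

vA = phase_exponent([p, -k, k - p])               # splitting vertex
planar = vA + phase_exponent([-p, p - k, k])      # uncrossed rejoining
nonplanar = vA + phase_exponent([-p, k, p - k])   # crossed rejoining

print(sp.simplify(planar))     # -> 0: the loop momentum k has cancelled
print(sp.simplify(nonplanar))  # -> theta*(p0*k1 - p1*k0), i.e. p wedge k
```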
Note that, due to the form (3.36) of the one-loop amplitude, checking the imaginary-part condition (2.17) would lead to the incorrect conclusion that the noncommutative Lee model is not unitary at one loop. Note also that the above argument is general and does not depend on whether θ is spacelike or timelike. Therefore, we conclude that the noncommutative Lee model is unitary at the one-loop level for general θ; this result is valid to all orders in θ.
Figure 3: A nonplanar diagram
At higher loops one can have nonplanar diagrams, for example the one in figure 3. The phase factor associated with such a diagram is the product of a factor depending only on the external momenta and a second factor of the form e^{ik∧p}, where k is the loop momentum. This second, loop-momentum-dependent phase factor is the characterization of a nonplanar diagram. As one can check easily, the resulting amplitude is regular in the variable p∘p (and hence in θ). Generally, due to the absence of the IR/UV singularity, a nonplanar amplitude will be regular in the variable p∘p, and so there is no new discontinuity on the LHS of the unitarity equation (2.16). Since both the LHS and the RHS are regular in θ, the unitarity constraint will be satisfied at zeroth order in θ. Although we believe this to hold at all orders, it may not be easy to verify the unitarity relations to all orders in θ, as one would have to exploit various nontrivial relations among special functions and integrals. The fact that the unitarity constraints are satisfied at one loop, and also (at zeroth order in θ) for any higher-loop amplitude, is already a nontrivial property of the noncommutative Lee model. Without any other source of violation of unitarity in sight, we expect that the noncommutative Lee model is unitary for any θ.
4 Discussion
In this paper, we have discussed and examined two basic aspects of noncommutative field theories: the IR/UV mixing and unitarity. We have shown that the S-matrix of a noncommutative field theory is hermitian analytic. This implies that unitarity provides a direct evaluation of the discontinuities associated with the cuts of normal thresholds. We have also explained how the IR/UV singularities can lead to a violation of unitarity for field theories with space-time noncommutativity. As a corollary, we have argued that a noncommutative field theory without any IR/UV mixing will be unitary in both the space-space and the space-time noncommutative case.
As an illustration of the general discussion, we have introduced and analysed the noncommutative Lee model. We have found that the model is entirely free from the IR/UV mixing; this result is exact. Our general arguments show that the noncommutative Lee model is unitary in both the space-space and the space-time noncommutative case, and simple explicit checks are consistent with this claim. Thus we provide a counterexample to the general belief that field theories with space-time noncommutativity have to be non-unitary.
A consistent quantum field theory on a noncommutative spacetime should be unitary. It should also be free from the problems related to the IR/UV mixing. One can broadly divide the IR/UV mixing phenomena in noncommutative field theories into good ones and bad ones. For example, the IR/UV singularities which appear in a purely bosonic noncommutative gauge theory or in noncommutative QED are bad ones [56]. However, IR/UV singularities are milder, and may be absent [57], in the presence of supersymmetry. The milder form of the IR/UV mixing in supersymmetric noncommutative gauge theories leads to a decoupling of degrees of freedom in the IR [58, 59]. Not only do these degrees of freedom become free in the IR [59], they also trigger spontaneous supersymmetry breaking [60] in the presence of an appropriate Fayet-Iliopoulos D-term and play the rôle of the hidden sector. These we refer to as good IR/UV mixing effects; more details are provided in [61]. With unitarity better understood and (some) IR/UV mixing turned to our advantage, it seems not unreasonable to contemplate that nature could indeed be noncommutative (at least at some level of explanation of its phenomena).
CSC and JL would like to thank Luis Alvarez-Gaume for useful discussions and comments and the theory group at CERN, where this work was started, for its hospitality. We would also like to thank David Fairlie, Pei-Ming Ho, Valya Khoze, Rodolfo Russo, Lenny Susskind, Richard Szabo and Gabriele Travaglini for helpful discussions.
KukaXoco Quantum Theory
The equivalence of classical and quantum computing
This Web page is devoted to proofs that there are few if any mathematical differences between classical physics and quantum physics, especially given the lack of a well-formed semantics for physics. These proofs force the conclusion that there are mathematical systems for which the mathematics is equivalent for applied classical physics and applied quantum physics. A subset of this conclusion concerns the applied physics that is computing. And thus there are mathematical formalisms for which classical computing and quantum computing are equivalent. The challenge then, addressed on other parts of this Web site, is deriving mathematical formalisms that express these equivalences, formalisms that can be engineered in software to completely enable universal quantum algorithm execution on 'classical computers' (i.e., without physical/optical qubits).
The equivalence of classical and quantum physics
That classical physics and quantum physics have much in common, much overlap, dates back to the alternative formalism of quantum mechanics due to Eugene Wigner, who in a 1932 paper [Wigner1932] introduced a density-matrix-based, phase-space formulation of quantum mechanics. One consequence of his formalism is that, to quote [Matulis2019], "quantum symbols and equations are quite similar to their classical counterparts" (discussed below).
For a long time, Wigner's formalism was treated as a second-class formalism, as opposed to the wavefunction and matrix formalisms of Schrödinger and Heisenberg, because of one bizarre consequence of Wigner's formalism - the appearance of negative probabilities, a concept that even today lies outside the mainstream of probability theory, and is barely mentioned in undergraduate and graduate physics programs.
Even Wigner himself did not mention these negative probabilities much after his 1932 paper, and for decades they were referred to as 'pseudo-probabilities' - useful math tools, but not a part of reality because they weren't being measured. But starting in 1996, negative probabilities were measured in the laboratory, and more recently they have been engineered for practical uses (and are thus much more a part of reality than strings). Which lends support, starting from Wigner's formalism, to the argument that there are few if any differences between classical and quantum physics (much like the few differences between American English and British English :-)
What follows are excerpts from papers in recent years arguing that there are few if any differences between classical physics and quantum physics. Which forces the conclusion that there should be few differences between classical computing and quantum computing. (Note: this is Appendix C of a comprehensive review of negative probabilities.)
Note: this Web page can be equivalently titled "The Corruption of the Teaching of Quantum Mechanics and Quantum Computing", given that none of the following is taught in the introductory courses, even though the following makes it much easier to transition from the classical to the quantum.
FEJÉR - 1915
In 1915, L. Fejér published a paper on polynomials, "Über trigonometrische Polynome" (J. reine u. angew. Math., v146, 53-82), one of the implications of which is that there are no fundamental differences between classical and quantum mechanical probabilities, and that much of quantum mechanics can be derived from classical mechanics. I quote from a 1998 paper by F.H. Fröhner, "Missing link between probability theory and quantum mechanics: the Riesz-Fejér theorem" [Fröhner1998], that excellently explores the implications of Fejér's work with regard to the near mathematical equivalence of classical and quantum physics (implications pretty much ignored by everyone afterwards, and not recognized by Fejér and Riesz).
Abstract: ... The superposition principle is found to be a consequence of an apparently little-known mathematical theorem for non-negative Fourier polynomials published by Fejér in 1915 that implies wave-mechanical interference already for classical probabilities. Combined with the classical Hamiltonian equations for free and accelerated motion, gauge invariance and particle indistinguishability, it yields all basic quantum features - wave-particle duality, operator calculus, uncertainty relations, Schrödinger equation, CPT invariance and even the spin-statistics relationship - which demystifies quantum mechanics to quite some extent.
Page 651: The final conclusion is (1) traditional probability theory can be extended by means of the Riesz-Fejer superposition theorem, without change of the basic sum and product rules from which it unfolds, hence without violation of Cox's consistency conditions; (2) the resulting probability wave theory turns out to be essentially the formalism of quantum mechanics inferred by physicists with great effort from the observation of atomic phenomena.
Page 652: From this viewpoint quantum mechanics looks much like an error propagation (or rather information transmittal) formalism for uncertainty-afflicted physical systems that obey the classical equations of motion.
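The theorem Fröhner builds on is easy to demonstrate numerically: a non-negative trigonometric polynomial (a classical probability density on the circle) always factors as the squared magnitude of a "wavefunction". The sketch below is our own minimal illustration, assuming numpy; for simplicity it builds the density from a known factor and then recovers a factor by root selection.

```python
# Hedged sketch of the Riesz-Fejer factorization: a non-negative
# trigonometric polynomial p(t) = sum_k c_k e^{ikt} equals |h(e^{it})|^2
# for some polynomial h -- i.e. a classical density already carries a
# "wavefunction".  We build p from a known factor h0 and recover a
# factor by selecting the roots inside the unit disk.
import numpy as np

rng = np.random.default_rng(1)
h0 = rng.normal(size=4) + 1j * rng.normal(size=4)  # a degree-3 "wavefunction"

# Fourier coefficients of p = |h0|^2: c_k = sum_j h0[j+k] conj(h0[j])
c = np.correlate(h0, h0, mode='full')              # lags -3..3, c_{-k} = conj(c_k)

# z^3 p(z) is an ordinary polynomial; its roots come in pairs (r, 1/conj(r))
coeffs_desc = c[::-1]                              # descending powers for np.roots
roots = np.roots(coeffs_desc)
inside = roots[np.abs(roots) < 1]                  # one root from each pair
h = np.poly(inside)                                # monic spectral factor

# |h(e^{it})|^2 should match p(t) up to one positive constant
t = np.linspace(0.0, 2 * np.pi, 7)
z = np.exp(1j * t)
p_vals = (np.polyval(coeffs_desc, z) / z**3).real  # p(t) >= 0
h_vals = np.polyval(h, z)
print(np.round(p_vals / np.abs(h_vals)**2, 8))     # a constant vector
```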
WIGNER - 1932
The near equivalence of classical and quantum mechanics can be traced back as far as Wigner’s 1932 paper [Wigner1932] where he introduced his density matrix formalism of quantum mechanics (which led to the appearance of negative probabilities). [Matulis2019], Section 8, has a nice summary of the implicit equivalence in Wigner’s paper:
[The] main advantage [of the Wigner formalism] is that in this representation quantum symbols and equations are quite similar to their classical counterparts. For instance, the coordinate and momentum operators convert themselves just to the numbers similar to the coordinate x and the momentum v used in classical mechanics. Their mean values are expressed in terms of simple integrals where they are multiplied by the Wigner function. The density of the particles as a function of space and time is just the integral of the Wigner function. … In the case of simple systems (that of free particles) the density matrix in the Wigner representation satisfies the classical Liouville equation, and the quantum effects may reveal themselves only due to additional restrictions such as the boundary or initial conditions to be satisfied by the density matrix.
The Liouville equation describes the time evolution of the phase-space distribution function of classical mechanics; the Liouville theorem asserts that the phase-space distribution function is constant along the trajectories of a Hamiltonian dynamical system.
Sadly, Wigner died before negative probabilities were first measured in laboratory experiments. Had he lived long enough, he would have rewritten his 1932 paper by deleting the following paragraph (where P(x₁, . . . , xₙ; p₁, . . . , pₙ) is the Wigner function, for both pure and mixed states):
Of course, P(x1, . . . , xn; p1, . . ., pn) cannot be really interpreted as the simultaneous probability for coordinates and momenta, as is clear from the fact, that it may take negative values. But of course this must not hinder the use of it in calculations as an auxiliary function which obeys many relations we would expect from such a possibility.
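The negativity Wigner describes in this paragraph is easy to exhibit. A minimal sketch (our own, assuming numpy; units ħ = m = ω = 1) computes the Wigner function of the first excited harmonic-oscillator state, which is negative at the origin:

```python
# Hedged sketch: the Wigner function of the n = 1 oscillator state,
#     W(x,p) = (1/pi) Integral dy  psi(x+y)* psi(x-y) e^{2ipy},
# is negative at the origin, W(0,0) = -1/pi.
import numpy as np

def psi1(x):
    # n = 1 harmonic-oscillator eigenfunction (hbar = m = omega = 1)
    return np.sqrt(2.0) * x * np.exp(-x**2 / 2.0) / np.pi**0.25

y = np.linspace(-8.0, 8.0, 4001)
dy = y[1] - y[0]

def wigner(x, p):
    integrand = np.conj(psi1(x + y)) * psi1(x - y) * np.exp(2j * p * y)
    return (integrand.sum() * dy / np.pi).real

print(wigner(0.0, 0.0))   # approx -1/pi = -0.3183...: a negative "probability"
print(wigner(1.0, 1.0))   # positive farther out in phase space
```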
MOYAL - 1949
In 1949, José Enrique Moyal, in a paper "Quantum Mechanics as a Statistical Theory" [Moyal1949], independently derives the distributions of Wigner, and recognizes that they are quantum moment-generating functionals, and thus the basis of an elegant encoding of all quantum expectation values, and hence quantum mechanics, in classical phase space (from the Wikipedia page).
ZACHOS - 2002
Cosmas Zachos, at Argonne National Laboratory, in his paper "Deformation Quantization: quantum mechanics lives and works in phase space" [Zachos2002], provides an excellent summary of how Wigner's formalism gets rid of much of quantum mechanics: wavefunctions, Hilbert space, operators, etc.
[The third formalism, after the Schrödinger/Heisenberg Hilbert space formalism and then Feynman's path integrals] is the phase space formalism, based on Wigner's (1932) quasi-distribution function and Weyl's (1927) correspondence between quantum mechanical operators and ordinary c-number phase-space functions. The crucial composition structure of these functions, which relies on the *-product, was fully understood by Groenewold (1946), who, together with Moyal (1949), pulled the entire formulation together. ...
[Wigner's distribution function] is a special representation of the density matrix (in the Weyl correspondence). Alternatively, it is a generating function for all spatial autocorrelation functions of a given quantum mechanical wavefunction.
... the central conceit of this review is that the above input wavefunctions may ultimately be forfeited, since the Wigner functions are determined, in principle, as the solutions of suitable functional equations. Connections to the Hilbert space formalism of quantum mechanics may thus be ignored, ...
It is not only wavefunctions that are missing in this formulation. Beyond an all-important (noncommutative, associative, pseudodifferential) operation, the *-product, which encodes the entire quantum mechanical action, there are no operators. Observables and transition amplitudes are phase-space integrals of c-number functions (which compose through the *-product), weighted by the Wigner function, as in statistical mechanics. ... the computation of observables and the associated concepts are evocative of classical probability theory.
[Wigner's] formulation of quantum mechanics is useful in describing quantum transport processes in phase space, of importance in quantum optics, nuclear physics, condensed matter, and the study of semiclassical limits of mesoscopic systems and the transition to classical statistical mechanics. It is the natural language to study quantum chaos and decoherence, and provides intuition in quantum mechanical interference, probability flows as negative probability backflows, and measurements of atomic systems. ...
As a significant aside, the Wigner function has extensive practical applications in signal processing and engineering (time-frequency analysis), since time and energy (frequency) constitute a pair of Fourier-conjugate variables just like the x and p of phase space.
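The *-product Zachos refers to can be played with directly. A small sketch (our own, assuming sympy) expands the Moyal star product for polynomials on phase space, where the ħ expansion terminates, and recovers the canonical commutator x⋆p − p⋆x = iħ with no operators or Hilbert space in sight:

```python
# Hedged sketch of the Moyal *-product for phase-space polynomials,
# using the standard expansion
#     f * g = sum_n (i hbar / 2)^n / n! * f (d_x d_p' - d_p d_x')^n g.
# Kept to order hbar^2, which is exact for the low-degree examples below.
import sympy as sp

x, p, hbar = sp.symbols('x p hbar')

def star(f, g, order=2):
    total = 0
    for n in range(order + 1):
        term = 0
        for j in range(n + 1):
            term += (sp.binomial(n, j) * (-1)**j
                     * sp.diff(f, x, n - j, p, j)
                     * sp.diff(g, p, n - j, x, j))
        total += (sp.I * hbar / 2)**n / sp.factorial(n) * term
    return sp.expand(total)

print(star(x, p) - star(p, x))              # -> I*hbar : canonical commutator
print(star(x**2, p**2) - star(p**2, x**2))  # -> 4*I*hbar*x*p = I*hbar*{x^2,p^2}
```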
Claeys and Polkovnikov, in their paper "Quantum eigenstates from classical Gibbs distributions" [Claeys2020], show how complete the Wigner formalism is (sometimes referred to as the Wigner/Moyal or Wigner/Weyl formalism, reflecting the related work of Moyal and Weyl):
Abstract: ... We discuss how the language of wave functions (state vectors) and associated non-commuting Hermitian operators naturally emerges from classical mechanics by applying the inverse Wigner-Weyl transform to the phase space probability distribution and observables. In this language, the Schrödinger equation follows from the Liouville equation, with h-bar now a free parameter. ... We illustrate this correspondence by showing that some paradigmatic examples such as tunneling, band structures, and quantum eigenstates in chaotic potentials can be reproduced to a surprising precision from a classical Gibbs ensemble, without any reference to quantum mechanics and with all parameters (including h-bar) on the order of unity.
Note: what is of additional interest in [Claeys2020] is that their extension of classical mechanics also gives rise to negative probabilities. "Interestingly, it is now classical mechanics which allows for apparent negative probabilities to occupy eigenstates, dual to the negative probabilities in Wigner's quasiprobability distribution."
BRACKEN - 2008
A.J. Bracken, in his paper “Quantum mechanics as an approximation to classical mechanics in Hilbert space” [Bracken2008], uses Wigner’s function in another way to create a near classical/quantum equivalence:
Classical mechanics is formulated in complex Hilbert space with the introduction of a commutative product of operators, an antisymmetric bracket, and a quasi-density operator. These are analogues of the star product, the Moyal bracket, and the Wigner function in the phase space formulation of quantum mechanics. Classical mechanics can now be viewed as a deformation of quantum mechanics. The forms of semi-quantum approximations to classical mechanics are indicated.
VILLE - 1948
Jean-André Ville, in a paper “Theory and Applications of the Notion of a Complex Signal” [Ville1958], independently derives Wigner's quasiprobability functions from a purely signal theory point of view (one year later, Moyal also independently derives Wigner's function, but back again in the world of quantum mechanics). In Part Three, "Distribution of Energy in the Time Frequency Domain", equations (6), (8), and (10) on page 22 of the article are basically Wigner's function. While he does not discuss how his function goes negative, he does tie his analysis back to quantum mechanics (page 11):
We treat this question in Part III, according to the following principles: a signal may be considered as being a certain amount of energy, whose distribution in time (given by the form of the signal) and in frequency (given by the spectrum) is known. If the signal extends through an interval of time T and an interval of frequencies Ω, we have a distribution of energy in a rectangle TΩ. We know the projections of this distribution upon the sides of the rectangle, but we do not know the distribution in the rectangle itself. If we try to determine the distribution within the rectangle, we run into the following difficulty: if we cut up the signal on the time scale, we display frequencies; if we cut up on the frequency scale, we display the times. The distribution cannot be determined by successive measures. A simultaneous determination must be sought, which has only a theoretical significance. Therefore, we must operate either on the signal or on the spectrum. But for the signal where, for example, time is a variable, frequency is properly speaking an operator (the operator (1/2πj)d/dt, for frequencies in cps). We have determined the simultaneous distribution of t and of (1/2πj)d/dt, by methods of calculus of probabilities, which easily leads to the instantaneous spectrum (and just as easily to the distribution in time of the energy associated with one frequency). It is seen that the formal character of the method of calculation used is imposed by the difficulty encountered, which is analogous to that which occurs in quantum mechanics when non-permutable operators must be composed.
It may be more than “analogous”.
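Ville's "distribution of energy in the rectangle" is exactly the Wigner function applied to a signal, today called the Wigner-Ville distribution. A rough numerical sketch (our own, assuming numpy; discrete conventions simplified) shows it concentrating along the instantaneous frequency of a linear chirp:

```python
# Hedged sketch of the Wigner-Ville distribution of a discrete chirp:
# the energy concentrates along the instantaneous-frequency line in
# Ville's time-frequency rectangle.
import numpy as np

N = 256
t = np.arange(N)
nu0, nu1 = 0.05, 0.20                       # start/end instantaneous frequency
phase = 2 * np.pi * (nu0 * t + (nu1 - nu0) * t**2 / (2 * N))
s = np.exp(1j * phase)                      # linear chirp

def wigner_ville(s):
    n_pts = len(s)
    W = np.zeros((n_pts, n_pts))
    for n in range(n_pts):
        tau_max = min(n, n_pts - 1 - n)
        tau = np.arange(-tau_max, tau_max + 1)
        acf = s[n + tau] * np.conj(s[n - tau])   # local autocorrelation in tau
        row = np.zeros(n_pts, dtype=complex)
        row[tau % n_pts] = acf
        W[n] = np.fft.fft(row).real              # tau -> frequency
    return W

W = wigner_ville(s)
ridge = W.argmax(axis=1) / N     # peak frequency bin at each time
print(ridge[16::48])             # climbs roughly linearly (at twice nu(t),
                                 # the usual factor 2 of the discrete WVD)
```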
We thus see that, over 70 years ago, in the classical physics and mathematics of Fejér, Wigner, Moyal and Ville, one could already express much of quantum mechanics without any 'spooky' assumptions, just with the mathematics of classical mechanics. As others argue below, if you add the constraint that classical measurements can't be infinitely precise, you pretty much get all of quantum mechanics from classical mechanics. That is, the mathematics of the classical is much the same as the mathematics of the quantum. Which means that one subset of this latter statement, applied mathematics, allows us to state that the applied mathematics of the classical is much the same as the applied mathematics of the quantum. With computing being applied mathematics, you get that classical computing should be much the same as quantum computing. You just need to find the right mathematics. Since we engineers axiomatically accept that you can't measure anything with infinite precision (even with the best Hewlett-Packard equipment), we engineers accept the challenge of finding this mathematics.
And that all of this is barely discussed in standard quantum mechanics (starting with the fact that it is never taught), nor in standard quantum computing, is a sad statement of intellectual corruption.
BAUMGARTEN - 2018

Christian Baumgarten publishes an interesting paper, "How to (Un)-Quantum Mechanics" [Baumgart2018], in which he argues quite bluntly "that the real difference between Classical Mechanics and Quantum Mechanics is not mathematical":
If the metaphysical assumptions ascribed to classical mechanics are dropped, then there exists a presentation in which little of the purported difference between quantum and classical mechanics remains. This presentation allows us to derive the mathematics of relativistic quantum mechanics on the basis of a purely classical Hamiltonian phase space picture. It is shown that a spatio-temporal description is not a condition for but a consequence of objectivity. It requires no postulates. This is achieved by evading spatial notions and assuming nothing but time translation invariance. ... that the real difference between Classical Mechanics and Quantum Mechanics is not mathematical.
He couples this with a reference to much debunking of the mysteries:
Many, maybe most, of the alleged mysteries of QM have been debunked before, or their non-classicality has been critically reviewed. We went beyond a mere critique of the standard approach: as we have shown, there is little in the mathematical formalism of quantum theory that cannot be obtained from classical Hamiltonian mechanics.
With regards to non-locality, he writes:
The intrinsic non-locality of un-quantum mechanics explains why it makes only limited sense to ask where an electron or photon “really” is in space: the electron is not located at some specific position in space at all. Because physical ontology is not primarily defined by spatial notions, it is meaningless to ask if it can simultaneously “be” at different positions. Surely it can, since projected onto space-time, the electron has no definite location, but “is” a wave.
He concludes:
The math [of CM and QM] is literally the same. The only victim of our presentation is the metaphysical presupposition that space is fundamental. This however is in agreement with the experimental tests of Bell’s theorem: it is a price we have to pay anyhow.
In other papers, he shows how Schrödinger's equation [Baumgart2020] and Dirac's equation can be simply derived by abandoning the illogic of 'point' particles:
As Rohrlich's analysis reveals, the alleged intuitiveness and logic of the notion of the point particle fails, on closer inspection, to provide a physically and logically consistent classical picture. If we dispense with this notion, Schrödinger's equation can be easily derived and might be regarded as a kind of regularization that allows one to circumvent the problematic infinities of the 'classical' point-particle idealization. Our presentation demonstrates that the "Born rule", which states that ψ⋆ψ is a density (also that a "probability density" is positive semidefinite), can be made the initial assumption of the theory rather than its interpretation. However, as is well known, Schrödinger's equation is not the most fundamental equation, but is derived from the Dirac equation. Only for the Lorentz covariant Dirac equation can we expect full compatibility with electromagnetic theory. We have shown elsewhere how the Dirac equation can be derived from 'first' (logical) principles [14-16]. The derivation automatically yields the Lorentz transformations, the Lorentz force law [17-19] and even Maxwell's equations [14] in a single coherent framework.
Again, if the mathematics of classical and quantum mechanics are mostly the same, then there will be much overlap between the mathematics of classical and quantum computing, allowing the small gap to be filled in by negative probabilities, and thus eliminating the need for quantum computing hardware (other than your cellphone or personal computer).
AERTS & DURT - 1994

Diederik Aerts and Thomas Durt, in the paper "Quantum, classical and intermediate: a measurement model" [Aerts1994] (Free University of Brussels, 1994), present a measurement system that transfers continuously and smoothly (with the correct math) from classical, through a hybrid/intermediate regime, to a quantum state.
“The limit of zero fluctuations is classical and the limit of maximal fluctuations is quantum.”
“If we consider the structure of the intermediate case, we can show that the Hilbert space axioms of quantum mechanics are no longer valid, but are replaced by a more general structure, and this explains why it is not possible to have a continuous transition between quantum and classical within the orthodox Hilbert space quantum mechanics. We shall show that for the intermediate case, the probability model is not quantum (representable by a Hilbertian probability model). Both results indicate that the fundamental difficulty of describing the measurement process might be due to a structural shortcoming of the available physical theories (quantum mechanics and classical mechanics).”
MARDARI & GREENWOOD - 2004

Ghenadie Mardari and James Greenwood argue that one interpretation allows quantum superposition to be a classical process, in their paper "Classical sources of non-classical physics: the case of linear superposition" [Mardari2004]:
Classical linear wave superposition produces the appearance of interference. This observation can be interpreted in two equivalent ways: one can assume that interference is an illusion because input components remain unperturbed, or that interference is real and input components undergo energy redistribution. Both interpretations entail the same observable consequences at the macroscopic level, but the first approach is considerably more popular. This preference was established before the emergence of quantum mechanics. Unfortunately, it requires a non-classical underlying mechanism and fails to explain well-known microscopic observations. Classical physics appears to collapse at the quantum level. On the other hand, quantum superposition can be described as a classical process if the second alternative is adopted. The gap between classical mechanics and quantum mechanics is an interpretive problem.
'T HOOFT - 2012/2015/2021
Gerard 't Hooft argues that there are few differences between the classical and the quantum in his paper "Quantum mechanics from classical logic" [Hooft2012]; from the Abstract:
Although quantum mechanics is generally considered to be fundamentally incompatible with classical logic, it is argued here that the gap is not as great as it seems. Any classical, discrete, time reversible system can be naturally described using a quantum Hilbert space, operators, and a Schrödinger equation. The quantum states generated this way resemble the ones in the real world so much that one wonders why this could not be used to interpret all of quantum mechanics this way. Indeed, such an interpretation leads to the most natural explanation as to why a wave function appears to “collapse” when a measurement is made, and why probabilities obey the Born rule. Because it is real quantum mechanics that we generate, Bell’s inequalities should not be an obstacle.
Three years later, 't Hooft writes a 250 page paper on how to view quantum mechanics as nothing more than a tool, not a theory, for analyzing classical systems, using cellular automaton techniques, in his paper, "The Cellular Automaton Interpretation of Quantum Mechanics" [tHooft2015]. He writes:
Abstract: ... Quantum mechanics is looked upon as a tool, not as a theory. Examples are displayed of models that are classical in essence, but can be analyzed by the use of quantum techniques, and we argue that even the Standard Model, together with gravitational interactions, might be viewed as a quantum mechanical approach to analyze a system that could be classical at its core. We explain how such thoughts can conceivably be reconciled with Bell’s theorem, and how the usual objections voiced against the notion of ‘superdeterminism’ can be overcome, at least in principle. Our proposal would eradicate the collapse problem and the measurement problem. Even the existence of an “arrow of time” can perhaps be explained in a more elegant way than usual.
Six years later, 't Hooft shows how any quantum mechanical model can be modeled by a sufficiently complex classical system of equations, in his paper, "Explicit construction of Local Hidden Variables for any quantum theory up to any desired accuracy" [tHooft2021], while also arguing that we don't need the un-real real numbers of Cantor and Dedekind. He writes:
Abstract: The machinery of quantum mechanics is fully capable of describing a single ontological world. Here we discuss the converse: in spite of appearances, and indeed numerous claims to the contrary, any quantum mechanical model can be mimicked, up to any required accuracy, by a completely classical system of equations. An implication of this observation is that Bell's theorem cannot hold in many cases. This is explained by scrutinising Bell's assumptions concerning causality, retrocausality, statistical (in-)dependence, and his fear of 'conspiracy' (there is no conspiracy in our constructions).
Conclusion: What one can notice from the results of this paper is that we cannot have just any set of real numbers for these interaction parameters [of the Standard Model]. Finiteness of the lattice of fast fluctuating parameters would suggest that, if only we could guess exactly what the fast moving variables are, we should be able to derive all interactions in terms of simple, rational coefficients. Thus, a prudent prediction might be made:
All interaction parameters for the fundamental particles are calculable in terms of simple, rational coefficients.
Manaka Okuyama and Masayuki Ohzeki, in their paper, "Quantum speed limit is not quantum" [Okuyama2017], show yet another non-distinction between quantum mechanics and classical mechanics:
Abstract: The quantum speed limit (QSL), or the energy-time uncertainty relation, describes the fundamental maximum rate for quantum time evolution and has been regarded as being unique in quantum mechanics. In this study, we obtain a classical speed limit corresponding to the QSL using the Hilbert space for the classical Liouville equation. Thus, classical mechanics has a fundamental speed limit, and QSL is not a purely quantum phenomenon but a universal dynamical property of the Hilbert space. Furthermore, we obtain similar speed limits for the imaginary-time Schrödinger equations such as the master equation.
Flavio Del Santo and Nicolas Gisin, in their paper "Physics without determinism: alternative interpretations of classical physics" [Santo2019], argue that another supposed difference between classical and quantum mechanics is not true and is an arbitrary distinction (from the Conclusion):
However, it seems clear that the empirical results of both classical and quantum mechanics can fit either in a deterministic or indeterministic framework. Furthermore, there are compelling arguments to support the view that the same conclusion can be reached for any given physical theory – a trivial way to make an indeterminate theory fully determined is to "complete" the theory with all the results of every possible experiments that can be performed.
They continue these arguments in a later paper, "The relativity of indeterminacy" [Santo2021], "... in this paper, we note that upholding reasonable principles of finiteness of information hints at a picture of the physical world that should be both relativistic and indeterministic".
Del Santo argues much the same in another paper, "Indeterminism, causality and information: has physics ever been deterministic?" [Santo2020], concluding:
"... compelling arguments [of Suppes and Werndl] show that every physical theory, including classical and quantum mechanics, can be interpreted either deterministically or indeterministically and no experiment will ultimately discriminate between these two opposite worldviews."
This suggests that there is some classical computing theory that differs little from quantum computing.
KRYUKOV - 2020
Alexey Kryukov, in a series of papers in the last few years, argues much the same as Baumgarten, for example, in his paper "Mathematics of the classical and the quantum" [Kryukov2020] (from the Abstract):
Newtonian dynamics is shown to be the Schrödinger dynamics of states constrained to a sub-manifold of the space of states, identified with the classical phase space of the system. Quantum observables are identified with vector fields on the space of states. ... Under the embedding, the normal distribution of measurement results associated with a classical measurement implies the Born rule for the probability of transition of quantum states.
Ashida, Gong and Ueda at the University of Tokyo, in their paper "Non-Hermitian physics" [Ashida2020], review another similarity of classical and quantum physics – that in the real world, many classical and quantum processes violate one of the key postulates of quantum mechanics – Hermiticity (which "ensures the conservation of probability in an isolated system, and guarantees the real-valuedness of an expectation value of energy with respect to a quantum state"). But since few real-world systems are isolated, Hermiticity is less useful for distinguishing classical from quantum mechanics. They have a table of this commonality (page 3):
System/Process | Physical origin of non-Hermiticity | Theoretical Methods
Photonics | Gain/loss of photons | Maxwell equations
Mechanics | Friction | Newton equations
Electrical circuits | Joule heating | Circuit equations
Stochastic processes | Nonreciprocity of state transitions | Fokker-Planck equation
Soft matter/fluid | Nonlinear instability | Linear hydrodynamics
Nuclear reactions | Radiative decays | Projection methods
Mesoscopic systems | Finite lifetimes of resonances | Scattering theory
Open quantum systems | Dissipation | Master equation
Quantum measurement | Measurement backaction | Quantum trajectories
In a different approach to deriving quantum mechanics from (relativistic) classical mechanics, Andrzej Dragan and Artur Ekert [Dragan2020] derive quantum mechanics with one assumption – extending the Lorentz transformation into the superluminal region:
We show that the full mathematical structure of the Lorentz transformation, the one which includes the superluminal part, implies the emergence of non-deterministic dynamics, together with complex probability amplitudes and multiple trajectories. ... Here we show that if we retain the superluminal terms, and take the resulting mathematics of the Lorentz transformation seriously, then the notion of a particle moving along a single path must be abandoned, and replaced by a propagation along many paths, exactly like in quantum theory.
Starting in the 1960s, G R Allcock studied the idea of 'quantum probability backflow', an interference effect involving the wave-aspects of quantum particles. Bracken and Melloy [Bracken2014, discussed below] tie this backflow to negative probabilities: "Negative probability moving to the right has the same effect on the total probabilities in the left and right quadrants as positive probability moving to the left, thus giving rise to the backflow phenomenon."
More interestingly, a paper by Arseni Goussev at the University of Portsmouth [Goussev2020] argues that, by using the Wigner representation of the wave packet, one can show that the negative flow of probability seen in the quantum world is rooted in classical mechanics. And nicely, shortly after Goussev’s paper, a paper by Matulis and Acus [Matulis2020] proposes that a classical system of a chain of masses interconnected by springs, structured in a certain way, exhibits the negative flow of energy first seen in quantum systems in a classical mechanical wave. Both papers are discussed below at [Goussev2020]. Earlier, in “Classical analog to the Airy wave packet” [Matulis2019], Matulis and Acus offer a solution of the Liouville equation for an ensemble of free particles that is a classical analog to the non-dispersive quantum accelerating Airy wave packet.
MORGAN - 2020
Peter Morgan, at Yale University, in his paper "An algebraic approach to Koopman classical mechanics" [Morgan2020] writes:
Abstract: ... In this form [a variant of the Koopman–von Neumann approach], the measurement theory for unary classical mechanics can be the same as and inform that for quantum mechanics, expanding classical mechanics to include non-commutative operators so that it is close to quantum mechanics, ... The measurement problem as it appears in unary classical mechanics suggests a classical signal analysis approach that can also be successfully applied to the measurement problem of quantum mechanics.
Gabriele Carcassi and Christine Aidala, at the University of Michigan, in their paper "The fundamental connections between Hamiltonian mechanics, quantum mechanics and information entropy" [Carcassi2020], discuss one difference between classical and quantum physics that has no impact on their practical engineering:
Abstract: We show that the main difference between classical and quantum systems can be understood in terms of information entropy. ... As information entropy can be used to characterize how much of the state of the whole system identifies the state of its parts, classical systems can have arbitrarily small information entropy while quantum systems cannot.
Some years earlier, Johannes Kofler and Časlav Brukner [Kofler2007] made a similar point: as long as measurements cannot be made with unrestricted (that is, "arbitrarily small", as in [Carcassi2020]) precision, the classical and quantum descriptions largely overlap. They write:
Conceptually different from the decoherence program, we present a novel theoretical approach to macroscopic realism and classical physics within quantum theory. It focuses on the limits of observability of quantum effects of macroscopic objects, i.e., on the required precision of our measurement apparatuses such that quantum phenomena can still be observed. First, we demonstrate that for unrestricted measurement accuracy no classical description is possible for arbitrarily large systems. Then we show for a certain time evolution that under coarse-grained measurements not only macrorealism but even the classical Newtonian laws emerge out of the Schrödinger equation and the projection postulate.
FELDMAN - 2020
Michael Feldman, in his paper, "Information-theoretic interpretation of quantum formalism" [Feldman2020], writes of another reduction of quantum information processing to classical terms:
Abstract: We present an information theoretic interpretation of quantum formalism based on a Bayesian framework and devoid of any extra axiom or principle. Quantum information is merely construed as a technique of Bayesian inference for analyzing a logical system subject to classical constraints, while still enabling the use of all relevant Boolean variable batches. ... In the end, our major conclusion is that quantum information is nothing but classical information processed by Bayesian inference techniques and as such, consubstantial with Aristotelian logic.
David Ellerman (Univ. of Ljubljana) in his paper, "Probability theory with superposition events: a classical generalization in the direction of quantum mechanics", shows one can recreate much of quantum mechanics with an extension to basic probability theory:
Abstract: In finite probability theory, events are subsets of the outcome set. Subsets can be represented by 1-dimensional column vectors. By extending the representation of events to two dimensional matrices, we can introduce “superposition events”. Probabilities are introduced for classical events, superposition events, and their mixtures by using density matrices. Then probabilities for experiments or ‘measurements’ of all these events can be determined in a manner exactly like in quantum mechanics (QM) using density matrices. Moreover, the transformation of the density matrices induced by the experiments or ‘measurements’ is the Lüders mixture operation as in QM. And finally, by moving the machinery into the n-dimensional vector space over Z2, different basis sets become different outcome sets. That ‘non-commutative’ extension of finite probability theory yields the pedagogical model of quantum mechanics over Z2 that can model many characteristic non-classical results of QM.
Fabio Anza and James Crutchfield, at UC Davis, in their paper "Geometric quantum thermodynamics" [Anza2020], discuss similarities of classical and quantum thermodynamics:
Abstract: Building on parallels between geometric quantum mechanics and classical mechanics, we explore an alternative basis for quantum thermodynamics that exploits the differential geometry of the underlying state space. ...
VAUGON - 2020
Michel Vaugon, in his paper "A mathematician’s view of geometrical unification of general relativity and quantum mechanics" [Vaugon2020], uses pseudo-Riemannian geometry to describe both classical and quantum physics:
Abstract: This document contains a description of physics entirely based on a geometric presentation: all of the theory is described giving only a pseudo-Riemannian manifold (M,g) of dimension n>5 for which the tensor g is, in studied domains, almost everywhere of signature (−,−,+,…,+). No object is added in this space-time, no general principle is assumed. The properties we demand to some domains of (M,g) are only simple geometric constraints, essentially based on the concept of "curvature". These geometric properties allow to define, depending on considered cases, some objects (frequently depicted by tensors) that are similar to the classical physics ones, they are however built here only from the tensor g. The links between these objects, coming from their natural definitions, give, applying standard theorems from the pseudo-Riemannian geometry, all equations governing physical phenomena usually described by classical theories, including general relativity and quantum physics. The purely geometric approach introduced here on quantum phenomena is profoundly different from the standard one. Neither Lagrangian nor Hamiltonian is used. This document ends with a presentation of our approach of complex quantum phenomena usually studied by quantum field theory.
KLAUDER - 2020
John Klauder, in his paper, "A unified combination of classical and quantum systems" [Klauder2020], writes:
Abstract: ... In the final sections we illustrate how alternative quantization procedures, e.g., spin and affine quantizations, can also have smooth paths between classical and quantum stories, and with a few brief remarks, can also lead to similar stories for non-renormalizable covariant scalar fields as well as quantum gravity.
Xinyu Song and others at Shanghai University, in their paper, "Statistical analysis of quantum annealing" [Song2021], show how, up to a certain scale, classical and quantum annealing are equivalent:
[Our paper shows that for fewer than 1000 qubits, classical annealing (simulated annealing) and quantum annealing are mathematically identical]: “We show that if the classical and quantum annealing are characterized by equivalent Ising models, then solving an optimization problem, i.e., finding the minimal energy of each Ising model, by the two annealing procedures, are mathematically identical."
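To make the classical half of that claim concrete, here is a minimal simulated-annealing sketch on a small random Ising chain. It is an illustrative toy far below the ~1000-qubit scale discussed in [Song2021], and the couplings, cooling schedule, and seed are arbitrary choices; for an open chain every bond can be satisfied independently, so the exact ground energy gives a check on the annealer.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                      # spins, far below the ~1000-qubit scale discussed above
J = rng.normal(size=n - 1)  # random nearest-neighbour couplings (illustrative choice)

def energy(s):
    return -np.sum(J * s[:-1] * s[1:])

s = rng.choice([-1, 1], size=n)
for T in np.geomspace(2.0, 0.01, 5000):  # geometric cooling schedule
    i = rng.integers(n)
    s_new = s.copy()
    s_new[i] *= -1
    dE = energy(s_new) - energy(s)
    if dE <= 0 or rng.random() < np.exp(-dE / T):  # Metropolis acceptance
        s = s_new

# For an open chain every bond can be satisfied independently,
# so the exact ground energy is -sum|J_i|.
print("annealed energy:", energy(s))
print("exact ground energy:", -np.abs(J).sum())
```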
I’ve just uploaded to the arXiv my paper “Almost all Collatz orbits attain almost bounded values“, submitted to the proceedings of the Forum of Mathematics, Pi. In this paper I returned to the topic of the notorious Collatz conjecture (also known as the {3x+1} conjecture), which I previously discussed in this blog post. This conjecture can be phrased as follows. Let {{\bf N}+1 = \{1,2,\dots\}} denote the positive integers (with {{\bf N} =\{0,1,2,\dots\}} the natural numbers), and let {\mathrm{Col}: {\bf N}+1 \rightarrow {\bf N}+1} be the map defined by setting {\mathrm{Col}(N)} equal to {3N+1} when {N} is odd and {N/2} when {N} is even. Let {\mathrm{Col}_{\min}(N) := \inf_{n \in {\bf N}} \mathrm{Col}^n(N)} be the minimal element of the Collatz orbit {N, \mathrm{Col}(N), \mathrm{Col}^2(N),\dots}. Then we have
Conjecture 1 (Collatz conjecture) One has {\mathrm{Col}_{\min}(N)=1} for all {N \in {\bf N}+1}.
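For concreteness, here is a direct Python transcription of {\mathrm{Col}} and {\mathrm{Col}_{\min}} (a sketch, not from the paper; the loop presumes the orbit reaches {1}, which is exactly what the conjecture asserts and what holds for every {N} ever tested):

```python
def col(n: int) -> int:
    """One step of the Collatz map: 3n+1 for odd n, n/2 for even n."""
    return 3 * n + 1 if n % 2 else n // 2

def col_min(n: int) -> int:
    """Minimal element of the Collatz orbit of n. Terminates only if the
    orbit reaches 1, which holds for every n ever tested."""
    m = n
    while n != 1:
        n = col(n)
        m = min(m, n)
    return m

assert all(col_min(n) == 1 for n in range(1, 10_000))
```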
Establishing the conjecture for all {N} remains out of reach of current techniques (for instance, as discussed in the previous blog post, it is basically at least as difficult as Baker’s theorem, all known proofs of which are quite difficult). However, the situation is more promising if one is willing to settle for results which only hold for “most” {N} in some sense. For instance, it is a result of Krasikov and Lagarias that
\displaystyle \{ N \leq x: \mathrm{Col}_{\min}(N) = 1 \} \gg x^{0.84}
for all sufficiently large {x}. In another direction, it was shown by Terras that for almost all {N} (in the sense of natural density), one has {\mathrm{Col}_{\min}(N) < N}. This was then improved by Allouche to {\mathrm{Col}_{\min}(N) < N^\theta} for almost all {N} and any fixed {\theta > 0.869}, and extended later by Korec to cover all {\theta > \frac{\log 3}{\log 4} \approx 0.7924}. In this paper we obtain the following further improvement (at the cost of weakening natural density to logarithmic density):
Theorem 2 Let {f: {\bf N}+1 \rightarrow {\bf R}} be any function with {\lim_{N \rightarrow \infty} f(N) = +\infty}. Then we have {\mathrm{Col}_{\min}(N) < f(N)} for almost all {N} (in the sense of logarithmic density).
Thus for instance one has {\mathrm{Col}_{\min}(N) < \log\log\log\log N} for almost all {N} (in the sense of logarithmic density).
The difficulty here is one usually only expects to establish “local-in-time” results that control the evolution {\mathrm{Col}^n(N)} for times {n} that only get as large as a small multiple {c \log N} of {\log N}; the aforementioned results of Terras, Allouche, and Korec, for instance, are of this type. However, to get {\mathrm{Col}^n(N)} all the way down to {f(N)} one needs something more like an “(almost) global-in-time” result, where the evolution remains under control for so long that the orbit has nearly reached the bounded state {N=O(1)}.
However, as observed by Bourgain in the context of nonlinear Schrödinger equations, one can iterate “almost sure local wellposedness” type results (which give local control for almost all initial data from a given distribution) into “almost sure (almost) global wellposedness” type results if one is fortunate enough to draw one’s data from an invariant measure for the dynamics. To illustrate the idea, let us take Korec’s aforementioned result that if {\theta > \frac{\log 3}{\log 4}} one picks at random an integer {N} from a large interval {[1,x]}, then in most cases, the orbit of {N} will eventually move into the interval {[1,x^{\theta}]}. Similarly, if one picks an integer {M} at random from {[1,x^\theta]}, then in most cases, the orbit of {M} will eventually move into {[1,x^{\theta^2}]}. It is then tempting to concatenate the two statements and conclude that for most {N} in {[1,x]}, the orbit will eventually move into {[1,x^{\theta^2}]}. Unfortunately, this argument does not quite work, because by the time the orbit from a randomly drawn {N \in [1,x]} reaches {[1,x^\theta]}, the distribution of the final value is unlikely to be close to being uniformly distributed on {[1,x^\theta]}, and in particular could potentially concentrate almost entirely in the exceptional set of {M \in [1,x^\theta]} that do not make it into {[1,x^{\theta^2}]}. The point here is that the uniform measure on {[1,x]} is not transported by Collatz dynamics to anything resembling the uniform measure on {[1,x^\theta]}.
So, one now needs to locate a measure which has better invariance properties under the Collatz dynamics. It turns out to be technically convenient to work with a standard acceleration of the Collatz map known as the Syracuse map {\mathrm{Syr}: 2{\bf N}+1 \rightarrow 2{\bf N}+1}, defined on the odd numbers {2{\bf N}+1 = \{1,3,5,\dots\}} by setting {\mathrm{Syr}(N) = (3N+1)/2^a}, where {2^a} is the largest power of {2} that divides {3N+1}. (The advantage of using the Syracuse map over the Collatz map is that it performs precisely one multiplication of {3} at each iteration step, which makes the map better behaved when performing “{3}-adic” analysis.)
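In code, the Syracuse map is just the Collatz step followed by stripping all factors of {2} (again a sketch for illustration only):

```python
def syr(n: int) -> int:
    """Syracuse map on odd n: divide 3n+1 by the largest power of 2 dividing it."""
    assert n % 2 == 1
    m = 3 * n + 1
    while m % 2 == 0:
        m //= 2
    return m

print([syr(n) for n in (1, 3, 5, 7, 9)])  # -> [1, 5, 1, 11, 7]
```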
When viewed {3}-adically, we soon see that iterations of the Syracuse map become somewhat irregular. Most obviously, {\mathrm{Syr}(N)} is never divisible by {3}. A little less obviously, {\mathrm{Syr}(N)} is twice as likely to equal {2} mod {3} as it is to equal {1} mod {3}. This is because for a randomly chosen odd {\mathbf{N}}, the number of times {\mathbf{a}} that {2} divides {3\mathbf{N}+1} can be seen to have a geometric distribution of mean {2} – it equals any given value {a \in{\bf N}+1} with probability {2^{-a}}. Such a geometric random variable is twice as likely to be odd as to be even, which is what gives the above irregularity. There are similar irregularities modulo higher powers of {3}. For instance, one can compute that for large random odd {\mathbf{N}}, {\mathrm{Syr}^2(\mathbf{N}) \hbox{ mod } 9} will take the residue classes {0,1,2,3,4,5,6,7,8 \hbox{ mod } 9} with probabilities
\displaystyle 0, \frac{8}{63}, \frac{16}{63}, 0, \frac{11}{63}, \frac{4}{63}, 0, \frac{2}{63}, \frac{22}{63}
respectively. More generally, for any {n}, {\mathrm{Syr}^n(N) \hbox{ mod } 3^n} will be distributed according to the law of a random variable {\mathbf{Syrac}({\bf Z}/3^n{\bf Z})} on {{\bf Z}/3^n{\bf Z}} that we call a Syracuse random variable, and can be described explicitly as
\displaystyle \mathbf{Syrac}({\bf Z}/3^n{\bf Z}) = 2^{-\mathbf{a}_1} + 3^1 2^{-\mathbf{a}_1-\mathbf{a}_2} + \dots + 3^{n-1} 2^{-\mathbf{a}_1-\dots-\mathbf{a}_n} \hbox{ mod } 3^n, \ \ \ \ \ (1)
where {\mathbf{a}_1,\dots,\mathbf{a}_n} are iid copies of a geometric random variable of mean {2}.
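One can sample the law (1) directly and compare the empirical frequencies against the exact mod {9} probabilities displayed above (a Monte Carlo sketch; the sample size and seed are arbitrary choices, and {2^{-a}} is implemented as multiplication by the inverse of {2^{a}} modulo {3^n}):

```python
import numpy as np

rng = np.random.default_rng(1)

def syrac_sample(n: int) -> int:
    """One draw of Syrac(Z/3^n Z) via formula (1), with a_i i.i.d. geometric of mean 2."""
    mod = 3**n
    inv2 = pow(2, -1, mod)          # the inverse of 2 modulo 3^n
    a = rng.geometric(0.5, size=n)  # P(a = k) = 2^{-k}
    total, cum = 0, 0
    for j in range(n):
        cum += int(a[j])
        total += 3**j * pow(inv2, cum, mod)
    return total % mod

samples = [syrac_sample(2) for _ in range(200_000)]
freq = np.bincount(samples, minlength=9) / len(samples)
print("empirical:", np.round(freq, 3))
print("exact:    ", np.round(np.array([0, 8, 16, 0, 11, 4, 0, 2, 22]) / 63, 3))
```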
In view of this, any proposed “invariant” (or approximately invariant) measure (or family of measures) for the Syracuse dynamics should take this {3}-adic irregularity of distribution into account. It turns out that one can use the Syracuse random variables {\mathbf{Syrac}({\bf Z}/3^n{\bf Z})} to construct such a measure, but only if these random variables stabilise in the limit {n \rightarrow \infty} in a certain total variation sense. More precisely, in the paper we establish the estimate
\displaystyle \sum_{Y \in {\bf Z}/3^n{\bf Z}} | \mathbb{P}( \mathbf{Syrac}({\bf Z}/3^n{\bf Z})=Y) - 3^{m-n} \mathbb{P}( \mathbf{Syrac}({\bf Z}/3^m{\bf Z})=Y \hbox{ mod } 3^m)| \ \ \ \ \ (2)
\displaystyle \ll_A m^{-A}
for any {1 \leq m \leq n} and any {A > 0}. This type of stabilisation is plausible from entropy heuristics – the tuple {(\mathbf{a}_1,\dots,\mathbf{a}_n)} of geometric random variables that generates {\mathbf{Syrac}({\bf Z}/3^n{\bf Z})} has Shannon entropy {n \log 4}, which is significantly larger than the total entropy {n \log 3} of the uniform distribution on {{\bf Z}/3^n{\bf Z}}, so we expect a lot of “mixing” and “collision” to occur when converting the tuple {(\mathbf{a}_1,\dots,\mathbf{a}_n)} to {\mathbf{Syrac}({\bf Z}/3^n{\bf Z})}; these heuristics can be supported by numerics (which I was able to work out up to about {n=10} before running into memory and CPU issues), but it turns out to be surprisingly delicate to make this precise.
A first hint of how to proceed comes from the elementary number theory observation (easily proven by induction) that the rational numbers
\displaystyle 2^{-a_1} + 3^1 2^{-a_1-a_2} + \dots + 3^{n-1} 2^{-a_1-\dots-a_n}
are all distinct as {(a_1,\dots,a_n)} vary over tuples in {({\bf N}+1)^n}. Unfortunately, the process of reducing mod {3^n} creates a lot of collisions (as must happen from the pigeonhole principle); however, by a simple “Lefschetz principle” type argument one can at least show that the reductions
\displaystyle 2^{-a_1} + 3^1 2^{-a_1-a_2} + \dots + 3^{m-1} 2^{-a_1-\dots-a_m} \hbox{ mod } 3^n \ \ \ \ \ (3)
are mostly distinct for “typical” {a_1,\dots,a_m} (as drawn using the geometric distribution) as long as {m} is a bit smaller than {\frac{\log 3}{\log 4} n} (basically because the rational number appearing in (3) then typically takes a form like {M/2^{2m}} with {M} an integer between {0} and {3^n}). This analysis of the component (3) of (1) is already enough to get quite a bit of spreading on { \mathbf{Syrac}({\bf Z}/3^n{\bf Z})} (roughly speaking, when the argument is optimised, it shows that this random variable cannot concentrate in any subset of {{\bf Z}/3^n{\bf Z}} of density less than {n^{-C}} for some large absolute constant {C>0}). To get from this to a stabilisation property (2) we have to exploit the mixing effects of the remaining portion of (1) that does not come from (3). After some standard Fourier-analytic manipulations, matters then boil down to obtaining non-trivial decay of the characteristic function of {\mathbf{Syrac}({\bf Z}/3^n{\bf Z})}, and more precisely in showing that
\displaystyle \mathbb{E} e^{-2\pi i \xi \mathbf{Syrac}({\bf Z}/3^n{\bf Z}) / 3^n} \ll_A n^{-A} \ \ \ \ \ (4)
for any {A > 0} and any {\xi \in {\bf Z}/3^n{\bf Z}} that is not divisible by {3}.
If the random variable (1) was the sum of independent terms, one could express this characteristic function as something like a Riesz product, which would be straightforward to estimate well. Unfortunately, the terms in (1) are loosely coupled together, and so the characteristic function does not immediately factor into a Riesz product. However, if one groups adjacent terms in (1) together, one can rewrite it (assuming {n} is even for sake of discussion) as
\displaystyle (2^{\mathbf{a}_2} + 3) 2^{-\mathbf{b}_1} + (2^{\mathbf{a}_4}+3) 3^2 2^{-\mathbf{b}_1-\mathbf{b}_2} + \dots
\displaystyle + (2^{\mathbf{a}_n}+3) 3^{n-2} 2^{-\mathbf{b}_1-\dots-\mathbf{b}_{n/2}} \hbox{ mod } 3^n
where {\mathbf{b}_j := \mathbf{a}_{2j-1} + \mathbf{a}_{2j}}. The point here is that after conditioning on the {\mathbf{b}_1,\dots,\mathbf{b}_{n/2}} to be fixed, the random variables {\mathbf{a}_2, \mathbf{a}_4,\dots,\mathbf{a}_n} remain independent (though the distribution of each {\mathbf{a}_{2j}} depends on the value that we conditioned {\mathbf{b}_j} to), and so the above expression is a conditional sum of independent random variables. This lets one express the characteristic function of (1) as an averaged Riesz product. One can use this to establish the bound (4) as long as one can show that the expression
\displaystyle \frac{\xi 3^{2j-2} (2^{-\mathbf{b}_1-\dots-\mathbf{b}_j+1} \mod 3^n)}{3^n}
is not close to an integer for a moderately large number ({\gg A \log n}, to be precise) of indices {j = 1,\dots,n/2}. (Actually, for technical reasons we have to also restrict to those {j} for which {\mathbf{b}_j=3}, but let us ignore this detail here.) To put it another way, if we let {B} denote the set of pairs {(j,l)} for which
\displaystyle \frac{\xi 3^{2j-2} (2^{-l+1} \mod 3^n)}{3^n} \in [-\varepsilon,\varepsilon] + {\bf Z},
we have to show that (with overwhelming probability) the random walk
\displaystyle (1,\mathbf{b}_1), (2, \mathbf{b}_1 + \mathbf{b}_2), \dots, (n/2, \mathbf{b}_1+\dots+\mathbf{b}_{n/2})
(which we view as a two-dimensional renewal process) contains at least a few points lying outside of {B}.
A little bit of elementary number theory and combinatorics allows one to describe the set {B} as the union of “triangles” with a certain non-zero separation between them. If the triangles were all fairly small, then one expects the renewal process to visit at least one point outside of {B} after passing through any given such triangle, and it then becomes relatively easy to show that the renewal process usually has the required number of points outside of {B}. The most difficult case is when the renewal process passes through a particularly large triangle in {B}. However, it turns out that large triangles enjoy particularly good separation properties, and in particular after passing through a large triangle one is likely to encounter nothing but small triangles for a while. After making these heuristics more precise, one is finally able to get enough points on the renewal process outside of {B} that one can finish the proof of (4), and thus Theorem 2.
1. Elementary multiplicative number theory
2. Complex-analytic multiplicative number theory
3. The entropy decrement argument
4. Bounds for exponential sums
5. Zero density theorems
6. Halász's theorem and the Matomäki–Radziwiłł theorem
7. The circle method
8. (If time permits) Chowla’s conjecture and the Erdos discrepancy problem [Update: I did not end up writing notes on this topic.]
Measurement and evolution
In an earlier post, we sketched the basic mathematical description of quantum mechanics, culminating in the general description of quantum states as (reduced) density matrices. We also claimed that generic measurements are not orthogonal projections, and evolution is not unitary. We shall here expand upon the aforementioned infrastructure to explain these statements, resolving some unanswered questions in the process. We shall again draw from Preskill’s Quantum Information and Computation course notes, as well as a lecture given by Mario Flory on POVMs and superoperators.
The naïve picture is that, as a consequence of Schmidt decomposition, one can write the density matrix for a mixed state as an ensemble of orthogonal pure states, the eigenvalues of which are interpreted as the probability of their occurring. When we measure the system, we project onto one of these eigenstates, hence the notion of measurements as orthogonal projections. And indeed this works fine for isolated systems; but as explained previously, this is an idealization. The problem that demands a more generalized notion of measurement is that an orthogonal measurement in a tensor product {\mathcal{H}_A\otimes\mathcal{H}_B} is not necessarily orthogonal if we restrict to subsystem {A} alone.
Let us first make the notion of orthogonal projections a bit more precise, following von Neumann’s treatment thereof. To perform a measurement of an observable {M}, we couple the system to some classical pointer variable that we can actually observe, in the literal sense of the word. In particular, we assume that the pointer is sufficiently heavy that the spreading of its wavepacket can be neglected during the measurement process (it is classical, after all). The Hamiltonian describing the interaction of the pointer with the system is then approximated by {H=\lambda MP}, where {\lambda} is the coupling between the pointer’s momentum {P} and the observable under study. The time evolution operator is therefore
\displaystyle U(t)=\mathrm{exp}\left(-i\lambda tMP\right)=\sum_i\left|i\right>\mathrm{exp}\left(-i\lambda t M_iP\right)\left<i\right|~, \ \ \ \ \ (1)
where in the second equality we’ve expanded {M} in the diagonal basis, {M=\sum_i\left|i\right>M_i\left<i\right|}. (Note that we are implicitly assuming that either {\left[M,H_0\right]=0}, where {H_0} is the original, unperturbed Hamiltonian, or that the measurement occurs so quickly that free evolution of the system can be neglected throughout. We’re also suppressing hats/bold-print on the operators, since this is clear from context).
Since {P=-i\partial_x} is the generator of translations for the pointer, it shifts the position-space wavepacket thereof by some amount {x_0}: {e^{-ix_0P}\psi(x)=\psi(x-x_0)}. Thus, if the system is initially in a superposition of {M} eigenstates unentangled with the state of the pointer {\left|\psi(x)\right>}, then after time {t} it will evolve to
\displaystyle U(t)\left(\sum_i\alpha_i\left|i\right>\otimes\left|\psi(x)\right>\right) =\sum_i\alpha_i\left|i\right>\otimes\left|\psi\left( x-\lambda tM_i\right)\right>~. \ \ \ \ \ (2)
Now the position of the pointer is correlated with the value of the observable {M}. Thus, provided the pointer’s wavepacket is sufficiently narrow such that we can resolve all values of {M_i} (namely, {\Delta x\lesssim\lambda t\Delta M_i}, which can be guaranteed by making the pointer sufficiently massive since {\Delta x\gtrsim1/\Delta p=(mv)^{-1}}), observing that the position of the pointer has shifted by {\lambda tM_i} is tantamount to measuring the eigenstate {\left|i\right>}, which occurs with probability {\left|\alpha_i\right|^2}. In this manner, the initial state of the quantum system, call it {\left|\phi\right>}, is projected to {\left|i\right>} with probability {\left|\left<i|\phi\right>\right|^2}. This is von Neumann’s model of orthogonal measurement, which involves so-called projection valued measures, or PVMs.
Of course, in principle the measurement process could project out some superposition of eigenstates, rather than a single position eigenstate as in the above example. Indeed, if we can couple any observable to a pointer, then we can perform any orthogonal projection in Hilbert space. Thus to formulate the above more generally, consider a set of projection operators {P_a} such that {\sum_aP_a=1}. Carrying out the measurement procedure above takes the initial (pure) state {\left|\phi\right>\left<\phi\right|} to
\displaystyle \frac{P_a\left|\phi\right>\left<\phi\right|P_a}{\left<\phi|P_a|\phi\right>} \ \ \ \ \ (3)
with probability
\displaystyle \mathrm{Prob}(a)=\left<\phi|P_a|\phi\right>~, \ \ \ \ \ (4)
as usual.
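As a concrete illustration, the following NumPy sketch implements (3) and (4) for a single qubit measured in the computational basis (the input state is an arbitrary choice):

```python
import numpy as np

phi = np.array([np.cos(0.3), np.sin(0.3)])      # pure qubit state |phi>
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]  # orthogonal projectors, sum to 1

for a, Pa in enumerate(P):
    prob = phi @ Pa @ phi                # Prob(a) = <phi|P_a|phi>, eq. (4)
    post = Pa @ phi / np.sqrt(prob)      # post-measurement state, eq. (3)
    print(a, round(float(prob), 4), post.round(4))
    # Repeating the measurement reproduces the outcome: P_a post = post.
    assert np.allclose(Pa @ post, post)
```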
Thus far we have been referring to measurements on a single isolated Hilbert space, for which PVMs suffice. But in practice we only ever deal with subsystems, for which our concept of measurement must be suitably extended. As we shall see, the relevant entities for the job are positive operator valued measures, or POVMs. The key difference between a POVM and a PVM is that the latter are a subset of the former for which the eigenstates are orthogonal by construction.
Mathematically, a POVM is a measure (basically, a partition of unity) whose values are non-negative self-adjoint operators on Hilbert space. That is, denoting the set of operators that comprise the POVM by {\{F_a\}}, it has the properties {F_a=F_a^\dagger}, {\left<\psi|F_a|\psi\right>\geq0}, and {\sum_aF_a=1}, where {\left|\psi\right>\in\mathcal{H}}. The idea is that a POVM element {F_a} is assigned to every possible measurement result such that {\left<\psi|F_a|\psi\right>=\mathrm{Prob}(a)} (hence the requirement that these sum to 1).
Given the positivity of the operators {F_a}, there exists a (not necessarily unique) set of so-called measurement operators {\{M_a\}} such that {F_a=M_a^\dagger M_a}. Introducing these operators allows one to express the state immediately after measurement in the usual manner:
\displaystyle \left|\psi_a\right>=\frac{M_a\left|\psi\right>}{\left<\psi\right|M_a^\dagger M_a\left|\psi\right>^{1/2}}~. \ \ \ \ \ (5)
Note that this expression is precisely of the same form as that given for PVMs above, with {M_a} in place of {P_a}. The difference here is that in the case of a POVM, repeated measurement will not necessarily yield the same result. This is because unlike the {P_a}, which are idempotent orthogonal projection operators, the {F_a} are not projectors, and hence the state after measurement does not exist in a single orthogonal eigenstate. The PVM {\{P_a\}}, which is used in decomposing an observable {A=\sum_aa_aP_a}, corresponds to the special case of a POVM with {F_a=P_a\left(=M_a\right)}.
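A standard concrete example (not drawn from Preskill's notes, but illustrative) is the symmetric three-outcome "trine" POVM on a qubit, with elements {F_a=\frac{2}{3}\left|\psi_a\right>\left<\psi_a\right|} built from three real states {60^\circ} apart. The sketch below checks positivity and completeness, confirms that the {F_a} are not projectors, and exhibits one valid choice of measurement operators, {M_a=\sqrt{F_a}}. Note also that there are three POVM elements on a two-dimensional Hilbert space, anticipating the remark below that the number of {F_a}'s can exceed the dimension of {\mathcal{H}_A}.

```python
import numpy as np

# Trine POVM on a (real) qubit: three non-orthogonal states 60 degrees apart.
thetas = [0.0, np.pi / 3, 2 * np.pi / 3]
kets = [np.array([np.cos(t), np.sin(t)]) for t in thetas]
F = [(2 / 3) * np.outer(k, k) for k in kets]  # F_a = (2/3)|psi_a><psi_a|

assert np.allclose(sum(F), np.eye(2))  # completeness: sum_a F_a = 1
assert all(np.linalg.eigvalsh(Fa).min() >= -1e-12 for Fa in F)  # positivity
print("F_0 is a projector:", np.allclose(F[0] @ F[0], F[0]))    # -> False

# One (non-unique) choice of measurement operators with F_a = M_a^dagger M_a.
M = [np.sqrt(2 / 3) * np.outer(k, k) for k in kets]
assert all(np.allclose(Ma.T @ Ma, Fa) for Ma, Fa in zip(M, F))
```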
To elaborate on this slightly further, let us take the familiar example of a tensor product space {\mathcal{H}=\mathcal{H}_A\otimes\mathcal{H}_B}, containing an initial state {\rho_{AB}=\rho_A\otimes\rho_B} and a PVM given by {\{P_a\}}. We now wish to restrict our attention to {\mathcal{H}_A}, so we define a new set of operators {\{F_a\}} acting thereupon that faithfully reproduces the outcome labeled by index {a} of a measurement on {\mathcal{H}}, namely:
\displaystyle \mathrm{Prob}(a)=\mathrm{tr}\left( P_a\rho_{AB}\right)=\mathrm{tr}_A\left(\mathrm{tr}_B\left( P_a\rho_{AB}\right)\right)\equiv\mathrm{tr}_A\left({F_a\rho_A}\right)~. \ \ \ \ \ (6)
We may obtain an explicit expression for {F_a} by writing this expression in component form. Recall that a reduced density matrix can be written in terms of basis vectors as
\displaystyle \rho_A=\mathrm{tr}_B\left(\left|\psi\right>\left<\psi\right|\right)=\sum_{ijm}a_{mj}^*a_{ij}\left|i\right>_{A~A}\left<m\right|~. \ \ \ \ \ (7)
Since {j} is a dummy index, this requires two indices when written in matrix notation, {\left(\rho_A\right)_{im}}. This implies that four indices will label the tensor product {\rho_{AB}=\rho_A\otimes\rho_B}. The quantity {F_a\rho_A} therefore carries two free indices (since {F_a} is a map from {\mathcal{H}_A\rightarrow\mathcal{H}_A}), and similarly {P_a\rho_{AB}} carries four, all of which will be summed over when taking the appropriate traces. Hence the above expression, in component form, is
\displaystyle \begin{aligned} \sum_{ijmn}\left( P_a\right)_{nj,mi}\left(\rho_A\right)_{ij}&\left(\rho_B\right)_{mn}=\sum_{ij}\left( F_a\right)_{ji}\left(\rho_A\right)_{ij}\\ \implies\left( F_a\right)_{ji}=&\sum_{mn}\left( P_a\right)_{nj,mi}\left(\rho_B\right)_{mn}~, \end{aligned} \ \ \ \ \ (8)
where {\{\left|i\right>\}}, {\{\left|j\right>\}} and {\{\left|m\right>\}}, {\{\left|n\right>\}} are orthonormal bases for {\mathcal{H}_A} and {\mathcal{H}_B}, respectively. With this expression for {F_a} in hand, one can show (see, e.g., Preskill p87) that the {F_a} do indeed satisfy the properties claimed for it above, namely Hermiticity, positivity (non-negativity), and completeness {\left(\sum_aF_a=I_A\right)}. As we have emphasized however, they are not necessarily orthogonal, which is again the crucial difference between POVMs and PVMs. Indeed, the number of {F_a}‘s is limited by the dimension of the total Hilbert space {\mathcal{H}}, which may be arbitrarily greater than that of {\mathcal{H}_A}.
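The component formula (8) is easy to verify numerically. The sketch below (random density matrices and a random rank-one projector, purely for illustration) builds {F_a} from {P_a} and {\rho_B} and checks that it reproduces the probability computed on the full space:

```python
import numpy as np

rng = np.random.default_rng(2)
dA, dB = 2, 3

def rand_rho(d):
    """Random density matrix (Hermitian, positive, unit trace)."""
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    R = X @ X.conj().T
    return R / np.trace(R)

rhoA, rhoB = rand_rho(dA), rand_rho(dB)

# A rank-one orthogonal projector P_a on H_A tensor H_B (illustrative choice).
v = rng.normal(size=dA * dB) + 1j * rng.normal(size=dA * dB)
v /= np.linalg.norm(v)
Pa = np.outer(v, v.conj())

# Formula (8): (F_a)_{ji} = sum_{mn} (P_a)_{nj,mi} (rho_B)_{mn},
# with the flat index convention (i, m) -> i*dB + m.
P4 = Pa.reshape(dA, dB, dA, dB)            # indices (j, n, i, m)
Fa = np.einsum('jnim,mn->ji', P4, rhoB)

lhs = np.trace(Pa @ np.kron(rhoA, rhoB))   # Prob(a) computed on the full space
rhs = np.trace(Fa @ rhoA)                  # Prob(a) computed from F_a on H_A alone
print(np.allclose(lhs, rhs))               # -> True
```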
As one might have expected given that POVMs act on subspaces, a POVM can be lifted to a PVM by expanding the Hilbert space of the former and performing the latter in the resulting superspace. This is the content of Neimark’s (sometimes transliterated from the Cyrillic “Наймарк” as “Neumark”) theorem. Note that the converse also holds: any PVM on a Hilbert space reduces to a POVM on any subspace thereof. This means that one can realize a POVM as a PVM on an enlarged Hilbert space, which allows one to obtain the correct measurement probabilities (by which we mean, the relative weights in the ensemble; see below) by performing orthogonal projections. Conversely, an orthogonal measurement of a bipartite system {\mathcal{H}_A\otimes\mathcal{H}_B} may be a nonorthogonal POVM on {A} alone.
In addition to the crucial role they play in measurement, POVMs are useful for formulating a suitable generalization of evolution that applies to subsystems. By way of example, suppose the initial state in {\mathcal{H}=\mathcal{H}_A\otimes\mathcal{H}_B} is given by {\rho_{AB}=\rho_A\otimes\left|0\right>_{BB}\left<0\right|}. Since evolution of the total bipartite system is unitary, it is described by the action of a unitary operator {U_{AB}},
\displaystyle U_{AB}\left(\rho_A\otimes\left|0\right>_{BB}\left<0\right|\right) U_{AB}^\dagger~, \ \ \ \ \ (9)
whereupon the density matrix of subsystem {A} is
\displaystyle \rho'_A=\mathrm{tr}_B\left( U_{AB}\left(\rho_A\otimes\left|0\right>_{BB}\left<0\right|\right) U_{AB}^\dagger\right) =\sum_n{}_B\left<n\right|U_{AB}\left|0\right>_B\rho_A{}_B\left<0\right| U_{AB}^\dagger\left|n\right>_B~, \ \ \ \ \ (10)
where {\{\left|n\right>\}} is an orthonormal basis for {\mathcal{H}_{B}}, and {{}_B\left<n\right|U_{AB}\left|0\right>_B\equiv M_n} is an operator acting on {\mathcal{H}_{A}}. Note that it follows from the unitarity of {U_{AB}} that
\displaystyle \sum_nM_n^\dagger M_n =\sum_n{}_B\left<0\right|U_{AB}^\dagger\left|n\right>_{BB}\left<n\right| U_{AB}\left|0\right>_B ={}_B\left<0\right|U_{AB}^\dagger U_{AB}\left|0\right>_B =I_A~. \ \ \ \ \ (11)
We may thus express {\rho'_A} succinctly as
\displaystyle \rho'_A=\sum_nM_n\rho_A M_n^\dagger\equiv\$\left(\rho_A\right)~, \ \ \ \ \ (12)
where {\$} is a linear map that takes density matrices to density matrices (linear operators to linear operators). Such a map, when the above property of {M_n} is satisfied, is called a superoperator, which we’ve written here in the so-called operator sum or Kraus representation. The operator sum representation of a given superoperator {\$} is not unique, since performing the trace over {\mathcal{H}_B} in a different basis would lead to different measurement operators {N_i}. However, any two operator sum representations of the same superoperator are related by a unitary change of basis, e.g., {N_i=\sum_n U_{in}M_n} (in other words, the {M_n} may be thought of as a particular choice of the measurement operators {M_a} considered above).
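The construction {M_n={}_B\left<n\right|U_{AB}\left|0\right>_B}, the completeness relation (11), and the operator-sum action (12) can all be checked in a few lines (a sketch with a random unitary and an arbitrary example state, not a general-purpose implementation):

```python
import numpy as np

rng = np.random.default_rng(3)
dA, dB = 2, 2

# Random unitary on H_A tensor H_B via QR of a complex Gaussian matrix.
X = rng.normal(size=(dA * dB, dA * dB)) + 1j * rng.normal(size=(dA * dB, dA * dB))
U, _ = np.linalg.qr(X)

# Kraus operators M_n = <n|_B U |0>_B, using the flat index (i, m) -> i*dB + m.
U4 = U.reshape(dA, dB, dA, dB)           # indices: (out_A, out_B, in_A, in_B)
M = [U4[:, n, :, 0] for n in range(dB)]  # fix in_B = 0, read off out_B = n

# Completeness, eq. (11): sum_n M_n^dagger M_n = I_A.
assert np.allclose(sum(Mn.conj().T @ Mn for Mn in M), np.eye(dA))

# Operator-sum action, eq. (12), agrees with tr_B(U (rho_A x |0><0|) U^dagger).
rhoA = np.array([[0.7, 0.3], [0.3, 0.3]], dtype=complex)  # an example mixed state
out_kraus = sum(Mn @ rhoA @ Mn.conj().T for Mn in M)
full = U @ np.kron(rhoA, np.diag([1.0, 0.0])) @ U.conj().T
out_trace = full.reshape(dA, dB, dA, dB).trace(axis1=1, axis2=3)  # partial trace over B
print(np.allclose(out_kraus, out_trace))  # -> True
```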
The mapping {\$:\rho\rightarrow\rho'} inherits the usual properties from {\rho}: it is Hermitian, positive, and trace-preserving ({\mathrm{tr}\rho'=1} if {\mathrm{tr}\rho=1}). But these are not quite sufficient to ensure that our bipartite system evolves unitarily. The basic reason is that we are limiting our attention to subsystem {A}, and have no guarantee that there does not exist an uncoupled system {B} that evolves in such a manner as to screw things up. To amend this, we demand that {\$_A} instead satisfy complete positivity: given any extension of {\mathcal{H}_A} to {\mathcal{H}_A\otimes\mathcal{H}_B}, {\$_A} is completely positive in {\mathcal{H}_A} if {\$_A\otimes I_B} is positive for all such extensions. For an example of the necessity of this requirement, see Preskill p97-98 for an exposition of the transposition operator, {T:\rho\rightarrow\rho^T}, which is a positive operator that is not completely positive.
In addition to these three necessary properties, it is also customary to assume that {\$} is linear. As alluded in the previous post on the subject, non-linear evolution is difficult to reconcile with the ensemble interpretation, due to the inherently linear nature of probability. In some sense, linearity is demanded by the probabilistic interpretation — and indeed, as explained in Preskill, non-linear evolution can lead to rather strange consequences — but I’m not aware of any rigorous proof. Nonetheless, for the time being we shall demand this property of superoperators as well.
Unitary evolution, for an isolated system, is described by the Schrödinger equation. The analogous equation for general evolution by superoperators is called the Master equation. Preskill elaborates on this in some detail in section 3.5, but we will restrain ourselves from getting involved in such details here. Instead, we merely observe that unitary evolution can be thought of as the special case in which the operator sum contains only a single term. Under unitary evolution, pure states can only evolve to pure states:
\displaystyle \left|\psi\right>\left<\psi\right| \rightarrow U\left(\left|\psi\right>\left<\psi\right|\right) U^\dagger =\left|\psi'\right>\left<\psi'\right|~, \ \ \ \ \ (13)
and similarly mixed states remain mixed. But superoperators allow the evolution of pure states to mixed states. This is called decoherence. It is the process by which initially pure states become entangled, and consequently, it plays a fundamental role in both the mathematics of quantum mechanics and the (philosophical) interpretation thereof.
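A minimal example of decoherence in this language is the phase-damping channel on one qubit, {\rho\rightarrow(1-p)\rho+pZ\rho Z}, written below as a two-element Kraus set. Applied repeatedly to the pure state {\left|+\right>}, it drives the purity {\mathrm{tr}\,\rho^2} monotonically from {1} toward {1/2}; the dephasing strength {p} is an arbitrary choice for illustration.

```python
import numpy as np

# Phase-damping (dephasing) channel on one qubit as a two-element Kraus set.
p = 0.25  # dephasing strength (assumed value for illustration)
M0 = np.sqrt(1 - p) * np.eye(2)
M1 = np.sqrt(p) * np.diag([1.0, -1.0])  # sqrt(p) * Z
assert np.allclose(M0.T @ M0 + M1.T @ M1, np.eye(2))  # completeness, eq. (11)

plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)  # pure state |+><+|, purity 1
for step in range(5):
    print(f"purity after {step} applications:", round(np.trace(rho @ rho).real, 4))
    rho = M0 @ rho @ M0.T + M1 @ rho @ M1.T  # operator-sum action, eq. (12)
```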
To connect back to our earlier example, suppose we perform a POVM on {\mathcal{H}_A}. By (11) and (12), this is tantamount to evolving the system with a superoperator that takes
\displaystyle \rho\rightarrow\sum_a\sqrt{F_a}\rho\sqrt{F_a}~. \ \ \ \ \ (14)
By Neimark’s theorem, the POVM {\{F_a\}} has a unitary representation on the bipartite space {\mathcal{H}}, meaning that there exists a unitary {U_{AB}} such that
\displaystyle U_{AB}:\left|\phi\right>_A\otimes\left|0\right>_B\rightarrow\sum_a\sqrt{F_a}\left|\phi\right>_A\otimes\left|a\right>_B~. \ \ \ \ \ (15)
In other words, the bipartite system undergoes a unitary transformation that entangles {A} with {B},
\displaystyle \left|\phi\right>_A\left|0\right>_B\rightarrow\sum_aM_a\left|\phi\right>_A\left|0\right>_B~. \ \ \ \ \ (16)
We could thus describe the measurement by a PVM on {\mathcal{H}_B} that projects onto {\{\left|a\right>\}} with probability
\displaystyle \mathrm{Prob}(a)={}_A\left<\phi\right|M_a^\dagger M_a\left|\phi\right>_A=\mathrm{tr}\left( F_a\rho_A\right)~, \ \ \ \ \ (17)
where the second equality follows from comparison with (6). Normalizing the final state accordingly, we may write (14) as
\displaystyle \rho\rightarrow\$\rho=\frac{\sqrt{F_a}\rho_A\sqrt{F_a}}{\mathrm{tr}\left( F_a\rho_A\right)}~. \ \ \ \ \ (18)
We mentioned previously that for POVMs, repeated measurements will not necessarily yield the same result. Now we see why: the result of such a general measurement (that is, on a subsystem) is given an ensemble of pure states, and thus we require a description in terms of a density matrix rather than as a single (orthogonal) eigenstate.
This is also the description we would use if we knew only that a measurement had been performed, but were ignorant of the results. For example, suppose we perform a measurement by probing the system with a single particle (say, a photon from a laser). Immediately after the interaction with the probe, but before the interaction with the classical detector that records it, the system is in an entangled state. We would thus describe the process as evolution by a superoperator that produces a density matrix/ensemble as above. In other words, the system has slightly decohered: if the initial state were pure, some of the coherence has been lost upon evolution to a mixed state. The subsequent interaction with the (classical) detector that we colloquially think of as “measurement” is simply the same process of decoherence on a hugely expanded scale: the (now mixed) state becomes entangled with the trillions of particles that comprise the detector, decohering essentially instantaneously to a classical state. All the uniquely quantum information of the system has now been lost.
This is what is referred to as “collapse of the wavefunction” in the Copenhagen interpretation. The reason for the invalidity of this interpretation is that it posits a projection onto a single eigenstate as a result of observation (by which we simply mean, interaction with the measurement apparatus; anthropocentric language aside, consciousness is emphatically not involved in any fundamental way). But as we’ve seen above, a proper description of measurement is that of entanglement with the environment under evolution via superoperators. The measurement process proceeds by POVMs, not PVMs, on the (sub)system under study. And while at the end of the day one does arrive at an eigenstate in the expanded Hilbert space (that includes the measurement apparatus/detector/observer/etc), this is a consequence of decohering to a classical state, rather than directly projecting to it. Decoherence can thus be thought of as giving the appearance of wavefunction collapse; but as evidenced by the countless reams of confused literature on quantum foundations and related areas, it is most dangerous to indulge in such simplifications so blithely. (We note in passing that the “wavefunction of the universe” never decoheres, since evolution in an isolated system is unitary).
Another important fact that no doubt contributes to the collapse confusion is that decoherence is irreversible. Consider composing two superoperators to form a third: if {\$_1} describes the evolution from {t_0} to {t_1>t_0}, and {\$_2} describes the evolution from {t_1} to {t_2>t_1}, then {\$_2\circ\$_1} is a superoperator describing the evolution from {t_0} to {t_2}. But the inverse of a superoperator is only a superoperator if it is unitary. This is in stark contrast to unitary evolution, which is perfectly invertible: we can run the equations backwards as well as forwards. Not so for superoperators: inverting {\$_2\circ\$_1} will not result in a superoperator that evolves backwards from {t_2} to {t_0}. In other words, decoherence implies an arrow of time, and an irrevocable loss of quantum information. And while the former has philosophical implications which we shall not digress upon here, the latter is not at all surprising: as stated above, decoherence is the process by which quantum states become classical.
Several open questions remain. Perhaps chief among them is our failure to fully resolve the “disconcerting dualism” between deterministic evolution and probabilistic measurement. Insofar as probability is a statement of our ignorance and thus fundamentally epistemic, any formulation of quantum mechanics that relies thereupon is doomed to suffer the same characterization, for what does it mean to say that nature is fundamentally probabilistic? We may ask whether the associated lack of predictivity in quantum mechanics stems from the fact that there does not exist a state which is an eigenstate of all observables. One also wonders whether it is possible to formulate a consistent theory with non-linearly evolving superoperators, and what the interpretation thereof would be vis-à-vis probabilistic ensembles (that is, to what extent we can free ourselves from probability if we distance ourselves from the linearity it imposes). Zurek’s work on decoherence contains some clarifying insight into this issue, but that’s a subject for another post.
It is tempting to speculate that the issue of how to properly describe measurement and evolution lies at the heart of the black hole information paradox, wherein a black hole formed from the collapse of an initially pure state appears to evolve to a mixed state, in violation of the supposedly unitary S-matrix. Indeed, for various reasons, this picture is almost certainly too naïve. In particular, evolution is not unitary, but it remains to be shown precisely how a more ontologically accurate rendition of the problem would solve it.
Discrete & Continuous Dynamical Systems - B
June 2021 , Volume 26 , Issue 6
Variational solutions of stochastic partial differential equations with cylindrical Lévy noise
Tomasz Kosmala and Markus Riedle
2021, 26(6): 2879-2898. doi: 10.3934/dcdsb.2020209
In this article, the existence of a unique solution in the variational approach of a stochastic evolution equation driven by a cylindrical Lévy process $L$ is established [the displayed equation is omitted in the source]. The coefficients $F$ and $G$ are assumed to satisfy the usual monotonicity and coercivity conditions. The noise is modelled by a cylindrical Lévy process which is assumed to belong to a certain subclass of cylindrical Lévy processes and may not have finite moments.
Convergence rate of solutions toward stationary solutions to the isentropic micropolar fluid model in a half line
Haibo Cui and Haiyan Yin
2021, 26(6): 2899-2920. doi: 10.3934/dcdsb.2020210
In this paper, we study the asymptotic behavior of solutions to the initial boundary value problem for the one-dimensional compressible isentropic micropolar fluid model in a half line $\mathbb{R}_{+} := (0, \infty)$. We mainly investigate the unique existence, the asymptotic stability and convergence rates of stationary solutions to the outflow problem for this model. We obtain the convergence rates of global solutions towards corresponding stationary solutions if the initial perturbation belongs to the weighted Sobolev space. The proof is based on the weighted energy method by taking into account the effect of the microrotational velocity on the viscous compressible fluid.
Periodic solutions to non-autonomous evolution equations with multi-delays
Pengyu Chen
2021, 26(6): 2921-2939. doi: 10.3934/dcdsb.2020211
In this paper, we provide some sufficient conditions for the existence, uniqueness and asymptotic stability of time $\omega$-periodic mild solutions for a class of non-autonomous evolution equations with multi-delays. This work not only extends the autonomous evolution equation with multi-delays studied in [37] to non-autonomous cases, but also greatly weakens the condition presented in [37], even for the case $a(t)\equiv a$, by establishing a general abstract framework to find time $\omega$-periodic mild solutions for non-autonomous evolution equations with multi-delays. Finally, one illustrative example is supplied.
Global phase portraits and bifurcation diagrams for reversible equivariant Hamiltonian systems of linear plus quartic homogeneous polynomials
Yuzhou Tian and Yulin Zhao
2021, 26(6): 2941-2956. doi: 10.3934/dcdsb.2020214
This paper is devoted to the complete classification of global phase portraits for reversible equivariant Hamiltonian systems of linear plus quartic homogeneous polynomials. Such a system is affinely equivalent to one of five normal forms by an algebraic classification of its infinite singular points. Then, we classify the global phase portraits of these normal forms on the Poincaré disc. There are exactly $13$ different global topological structures on the Poincaré disc. Finally we provide the bifurcation diagrams for the corresponding global phase portraits.
Optimal control of leachate recirculation for anaerobic processes in landfills
Marzia Bisi, Maria Groppi, Giorgio Martalò and Romina Travaglini
2021, 26(6): 2957-2976. doi: 10.3934/dcdsb.2020215
A mathematical model for the degradation of the organic fraction of solid waste in landfills, by means of an anaerobic bacterial population, is proposed. Additional phenomena, like hydrolysis of insoluble substrate and biomass decay, are taken into account. The evolution of the system is monitored by controlling the effects of leachate recirculation on the hydrolytic process. We investigate the optimal strategies to minimize substrate concentration and recirculation operation costs. Analytical and numerical results are presented and discussed for linear and quadratic cost functionals.
Lyapunov exponents of discrete quasi-periodic Gevrey Schrödinger equations
Wenmeng Geng and Kai Tao
2021, 26(6): 2977-2996. doi: 10.3934/dcdsb.2020216
In the study of the continuity of the Lyapunov exponent for discrete quasi-periodic Schrödinger operators, there is a pioneering result by Wang-You [21], in which the authors constructed examples whose Lyapunov exponent is discontinuous in the potential with the $C^0$ norm for non-analytic potentials. In this paper, we consider these operators for some Gevrey potential, which is an analytic function having a Gevrey small perturbation, with Diophantine frequency. We prove that in the large coupling regions, the Lyapunov exponent is positive and jointly continuous in all parameters, such as the energy, the frequency and the potential. Note that all analytic functions are also Gevrey ones. Therefore, we also obtain that all of the large analytic potentials are non-perturbative weak Hölder continuous points of the Lyapunov exponent in the Gevrey topology with the $C^0$ norm. It is the first result about continuity in a non-analytic potential with this norm and is complementary to Wang-You's result.
Asymptotic profiles of the endemic equilibrium of a reaction-diffusion-advection SIS epidemic model with saturated incidence rate
Renhao Cui
2021, 26(6): 2997-3022. doi: 10.3934/dcdsb.2020217
In this paper, we consider a reaction-diffusion SIS epidemic model with saturated incidence rate in advective heterogeneous environments. The existence of the endemic equilibrium (EE) is established when the basic reproduction number is greater than one. We further investigate the effects of diffusion, advection and saturation on asymptotic profiles of the endemic equilibrium. The individuals concentrate at the downstream end when the advection rate tends to infinity. As the diffusion rate of the susceptible individuals tends to zero, a certain portion of the susceptible population concentrates at the downstream end, and the remaining portion of the susceptible population distributes in the habitat in a non-homogeneous way; on the other hand, the density of the infected population is positive on the entire habitat. The density of the infected vanishes on the habitat for small diffusion rates of the infected individuals or large saturation. The results may have implications for disease control and prediction.
The Keller-Segel system with logistic growth and signal-dependent motility
Hai-Yang Jin and Zhi-An Wang
2021, 26(6): 3023-3041. doi: 10.3934/dcdsb.2020218
The paper is concerned with the following chemotaxis system with nonlinear motility functions
subject to homogeneous Neumann boundary conditions in a bounded domain \begin{document}$ \Omega\subset \mathbb{R}^2 $\end{document} with smooth boundary, where the motility functions \begin{document}$ \gamma(v) $\end{document} and \begin{document}$ \chi(v) $\end{document} satisfy the following conditions
$(\gamma, \chi)\in [C^2[0, \infty)]^2$ with $\gamma(v)>0$, and $\frac{|\chi(v)|^2}{\gamma(v)}$ is bounded for all $v\geq 0$.
By employing the method of energy estimates, we establish the existence of globally bounded solutions of ($\ast$) with $\mu>0$ for any $u_0 \in W^{1, \infty}(\Omega)$ with $u_0 \geq (\not\equiv) 0$. Then, based on a Lyapunov function, we show that all solutions $(u, v)$ of ($\ast$) will exponentially converge to the unique constant steady state $(1, 1)$ provided $\mu>\frac{K_0}{16}$ with $K_0 = \max_{0\leq v \leq \infty}\frac{|\chi(v)|^2}{\gamma(v)}$.
The impact of toxins on competition dynamics of three species in a polluted aquatic environment
Yuyue Zhang, Jicai Huang and Qihua Huang
2021, 26(6): 3043-3068 doi: 10.3934/dcdsb.2020219
Accurately assessing the risks of toxins in polluted ecosystems and finding factors that determine population persistence and extirpation are important from both environmental and conservation perspectives. In this paper, we develop and study a toxin-mediated competition model for three species that live in the same polluted aquatic environment and compete for the same resources. Analytical analysis of positive invariance, existence and stability of equilibria, sensitivity of equilibria to toxin are presented. Bifurcation analysis is used to understand how the environmental toxins, plus distinct vulnerabilities of three species to toxins, affect the competition outcomes. Our results reveal that while high concentrations lead to extirpation of all species, sublethal levels of toxins affect competition outcomes in many counterintuitive ways, which include boosting coexistence of species by reducing the abundance of the predominant species, inducing many different types of bistability and even tristability, generating and reducing population oscillations, and exchanging roles of winner and loser in competition. The findings in this work provide a sound theoretical foundation for understanding and assessing population or community effects of toxicity.
An almost periodic Dengue transmission model with age structure and time-delayed input of vector in a patchy environment
Jing Feng and Bin-Guo Wang
2021, 26(6): 3069-3096 doi: 10.3934/dcdsb.2020220
In this paper, we propose an almost periodic multi-patch SIR-SEI model with age structure and time-delayed input of vector. The existence of the almost periodic disease-free solution and the definition of the basic reproduction ratio $R_{0}$ are given. It is shown that the disease is uniformly persistent if $R_0>1$, and it dies out if $R_0<1$, under the assumptions that there exists a small invasion and that the susceptible, infective and recovered host populations have the same travel rates between patches. Finally, we illustrate the above results by numerical simulations. In addition, a simple example shows that the basic reproduction ratio may be underestimated or overestimated if an almost periodic coefficient is approximated by a periodic one.
Mean-square delay-distribution-dependent exponential synchronization of chaotic neural networks with mixed random time-varying delays and restricted disturbances
Quan Hai and Shutang Liu
2021, 26(6): 3097-3118 doi: 10.3934/dcdsb.2020221
This paper investigates the delay-distribution-dependent exponential synchronization problem for a class of chaotic neural networks with mixed random time-varying delays as well as restricted disturbances. Given the probability distribution of the time-varying delay, a stochastic variable satisfying a Bernoulli distribution is introduced to produce a new system which includes the information of the probability distribution. Based on the Lyapunov-Krasovskii functional method, Jensen's integral inequality and the linear matrix inequality (LMI) technique, several delay-distribution-dependent sufficient conditions are developed to guarantee that the chaotic neural networks with mixed random time-varying delays are exponentially synchronized in mean square. Furthermore, the derived results are given in terms of simplified LMIs, which can be straightforwardly solved by Matlab. Finally, two numerical examples are provided to demonstrate the feasibility and effectiveness of the presented synchronization scheme.
A subgrid stabilizing postprocessed mixed finite element method for the time-dependent Navier-Stokes equations
Yueqiang Shang and Qihui Zhang
2021, 26(6): 3119-3142 doi: 10.3934/dcdsb.2020222
A postprocessed mixed finite element method based on a subgrid model is presented for the simulation of the time-dependent incompressible Navier-Stokes equations. This method consists of two steps: the first step is to solve a subgrid stabilized nonlinear Navier-Stokes system on a coarse grid to obtain an approximate solution $u_{H}(x,T)$ at the final time $T$, and the second step is to postprocess $u_{H}(x,T)$ by solving a stabilized Stokes problem on a finer grid or with higher-order finite elements defined on the same coarse grid. Stability of the method and error estimates of the postprocessed solution are analyzed. Numerical results on an example with known analytic solution and on the flow around a circular cylinder are given to verify the theoretical predictions and demonstrate the effectiveness of the proposed method.
A mathematical model to restore water quality in urban lakes using Phoslock
Pankaj Kumar Tiwari, Rajesh Kumar Singh, Subhas Khajanchi, Yun Kang and Arvind Kumar Misra
2021, 26(6): 3143-3175 doi: 10.3934/dcdsb.2020223
Urban lakes are the life lines for the population residing in a city. Excessive amounts of phosphate entering water courses through household discharges are one of the main causes of deterioration of water quality in these lakes, because of the way phosphate drives algal productivity and undesirable changes in the balance of aquatic life. The ability to remove biologically available phosphorus in a lake is therefore a major step towards improving water quality. Removing phosphate from the water column using Phoslock essentially deprives algae of the nutrient that drives its proliferation. In view of this, we develop a mathematical model to investigate whether the application of Phoslock would significantly reduce the bio-availability of phosphate in the water column. We consider phosphorus, algae, detritus and Phoslock as dynamical variables. In the modeling process, the introduction rate of Phoslock is assumed to be proportional to the concentration of phosphorus in the lake. Further, we consider a discrete time delay which accounts for the time lag involved in the application of Phoslock. Moreover, we investigate the behavior of the system by assuming the application rate of Phoslock to be a periodic function of time. Our results show that Phoslock essentially reduces the concentration of phosphorus and the density of algae, and plays a crucial role in restoring the quality of water in urban lakes. We observe that with a gradual increase in the magnitude of the delay involved in the application of Phoslock, the autonomous system develops limit cycle oscillations through a Hopf bifurcation, while the corresponding nonautonomous system shows chaotic dynamics through quasi-periodic oscillations.
Revisit of the Peierls-Nabarro model for edge dislocations in Hilbert space
Yuan Gao, Jian-Guo Liu, Tao Luo and Yang Xiang
2021, 26(6): 3177-3207 doi: 10.3934/dcdsb.2020224
In this paper, we revisit the mathematical validation of the Peierls–Nabarro (PN) models, which are multiscale models of dislocations that incorporate the detailed dislocation core structure. We focus on the static and dynamic PN models of an edge dislocation in Hilbert space. In a PN model, the total energy includes the elastic energy in the two half-space continua and a nonlinear potential energy, which is always infinite, across the slip plane. We revisit the relationship between the PN model in the full space and the reduced problem on the slip plane in terms of both governing equations and energy variations. The shear displacement jump is determined only by the reduced problem on the slip plane while the displacement fields in the two half spaces are determined by linear elasticity. We establish the existence and sharp regularities of classical solutions in Hilbert space. For both the reduced problem and the full PN model, we prove that a static solution is a global minimizer in a perturbed sense. We also show that there is a unique classical, global in time solution of the dynamic PN model.
Reversible polynomial Hamiltonian systems of degree 3 with nilpotent saddles
Montserrat Corbera and Claudia Valls
2021, 26(6): 3209-3233 doi: 10.3934/dcdsb.2020225
We provide normal forms and the global phase portraits in the Poincaré disk for all Hamiltonian planar polynomial vector fields of degree 3 symmetric with respect to the $x$-axis having a nilpotent saddle at the origin.
Invariant measures of stochastic delay lattice systems
Zhang Chen, Xiliang Li and Bixiang Wang
2021, 26(6): 3235-3269 doi: 10.3934/dcdsb.2020226
This paper is concerned with the existence and uniqueness of invariant measures for infinite-dimensional stochastic delay lattice systems defined on the entire integer set. For Lipschitz drift and diffusion terms, we prove the existence of invariant measures of the systems by showing the tightness of a family of probability distributions of solutions in the space of continuous functions from a finite interval to an infinite-dimensional space, based on the idea of uniform tail-estimates, the technique of dyadic division and the Arzelà-Ascoli theorem. We also show the uniqueness of invariant measures when the Lipschitz coefficients of the nonlinear drift and diffusion terms are sufficiently small.
Global well-posedness of a 3D Stokes-Magneto equations with fractional magnetic diffusion
Yingdan Ji and Wen Tan
2021, 26(6): 3271-3278 doi: 10.3934/dcdsb.2020227
This paper is devoted to the global well-posedness of three-dimensional Stokes-Magneto equations with fractional magnetic diffusion. It is proved that the equations admit a unique global-in-time strong solution for arbitrary initial data when the fractional index $\alpha\ge\frac{3}{2}$. This result might have a potential application in the theory of magnetic relaxation.
Dynamic observers for unknown populations
Chris Guiver, Nathan Poppelreiter, Richard Rebarber, Brigitte Tenhumberg and Stuart Townley
2021, 26(6): 3279-3302 doi: 10.3934/dcdsb.2020232
Dynamic observers are considered in the context of structured-population modeling and management. Roughly, observers combine a known measured variable of some process with a model of that process to asymptotically reconstruct the unknown state variable of the model. We investigate the potential use of observers for reconstructing population distributions described by density-independent (linear) models and a class of density-dependent (nonlinear) models. In both the density-dependent and -independent cases, we show, in several ecologically reasonable circumstances, that there is a natural, optimal construction of these observers. Further, we describe the robustness these observers exhibit with respect to disturbances and uncertainty in measurement.
Asymptotic behavior of non-autonomous random Ginzburg-Landau equation driven by colored noise
Lingyu Li and Zhang Chen
2021, 26(6): 3303-3333 doi: 10.3934/dcdsb.2020233
This paper mainly investigates the long-term behavior of the non-autonomous random Ginzburg-Landau equation driven by nonlinear colored noise on unbounded domains. Due to the noncompactness of Sobolev embeddings on unbounded domains, pullback asymptotic compactness of the random dynamical system associated with such a random Ginzburg-Landau equation is proved by the tail-estimates method. Moreover, it is proved that the pullback random attractor of the non-autonomous random Ginzburg-Landau equation driven by a linear multiplicative colored noise converges to that of the corresponding stochastic system driven by a linear multiplicative white noise.
Entropy-dissipating finite-difference schemes for nonlinear fourth-order parabolic equations
Marcel Braukhoff and Ansgar Jüngel
2021, 26(6): 3335-3355 doi: 10.3934/dcdsb.2020234
Structure-preserving finite-difference schemes for general nonlinear fourth-order parabolic equations on the one-dimensional torus are derived. Examples include the thin-film and the Derrida–Lebowitz–Speer–Spohn equations. The schemes conserve the mass and dissipate the entropy. The scheme associated to the logarithmic entropy also preserves the positivity. The idea of the derivation is to reformulate the equations in such a way that the chain rule is avoided. A central finite-difference discretization is then applied to the reformulation. In this way, the same dissipation rates as in the continuous case are recovered. The strategy can be extended to a multi-dimensional thin-film equation. Numerical examples in one and two space dimensions illustrate the dissipation properties.
Dynamics at infinity and Jacobi stability of trajectories for the Yang-Chen system
Yongjian Liu, Qiujian Huang and Zhouchao Wei
2021, 26(6): 3357-3380 doi: 10.3934/dcdsb.2020235
The present work is devoted to giving new insights into a chaotic system with two stable node-foci, named the Yang-Chen system. Firstly, based on a global view of the influence of equilibrium points on the complexity of the system, the dynamic behavior of the system at infinity is analyzed. Secondly, the Jacobi stability of the trajectories of the system is discussed from the viewpoint of Kosambi-Cartan-Chern theory (KCC theory). The dynamical behavior of the deviation vector near the whole trajectories (including all equilibrium points) is analyzed in detail. The obtained results show that in the sense of Jacobi stability, all equilibrium points of the system, including the two linearly stable node-foci, are Jacobi unstable. These studies show that one might witness chaotic behavior of the system trajectories before they enter a neighborhood of an equilibrium point or periodic orbit. There exists a sort of stability artifact that cannot be found without using the powerful method of Jacobi stability analysis.
Qualitative properties and bifurcations of a leaf-eating herbivores model
Jiyu Zhong
2021, 26(6): 3381-3407 doi: 10.3934/dcdsb.2020236
In this paper, we discuss the dynamics of a discrete-time leaf-eating herbivores model. First of all, to investigate the bifurcations of the model, we study the qualitative properties of a fixed point, including the hyperbolic and non-hyperbolic cases. Secondly, applying the center manifold theorem, we give the conditions under which the model produces a supercritical flip bifurcation and a subcritical flip bifurcation respectively, from which we find a generalized flip bifurcation point. We then prove rigorously that the model undergoes a generalized flip bifurcation and give three parameter regions in which the model possesses two period-two cycles, one period-two cycle, and none, respectively. Next, computing the normal form, we prove that the model undergoes a subcritical Neimark-Sacker bifurcation and produces a unique unstable invariant circle near the fixed point. Finally, by numerical simulations, we not only verify our results but also show a saddle period-five cycle and a saddle period-six cycle on the invariant circle.
Global existence and Gevrey regularity to the Navier-Stokes-Nernst-Planck-Poisson system in critical Besov-Morrey spaces
Jinyi Sun, Zunwei Fu, Yue Yin and Minghua Yang
2021, 26(6): 3409-3425 doi: 10.3934/dcdsb.2020237
The paper is concerned with the Navier-Stokes-Nernst-Planck-Poisson system arising from electrohydrodynamics in $\mathbb{R}^d$. By means of the implicit function theorem, we prove the global existence of mild solutions for the Cauchy problem of this system with small initial data in critical Besov-Morrey spaces. In comparison to previous works, our existence result provides a new class of initial data for which the problem is globally solvable. Meanwhile, based on the so-called Gevrey estimates, we verify that the obtained mild solutions are analytic in the spatial variables. As a byproduct, we show the asymptotic stability of solutions as time goes to infinity. Furthermore, decay estimates of higher-order derivatives of solutions are deduced in Morrey spaces.
Dynamic analysis on an almost periodic predator-prey system with impulsive effects and time delays
Demou Luo and Qiru Wang
2021, 26(6): 3427-3453 doi: 10.3934/dcdsb.2020238
This article is concerned with a generalized almost periodic predator-prey model with impulsive effects and time delays. By utilizing the comparison theorem and constructing a feasible Lyapunov functional, we obtain sufficient conditions to guarantee the permanence and global asymptotic stability of the system. By applying the Arzelà-Ascoli theorem, we establish the existence and uniqueness of almost-periodic positive solutions. A feasible numerical simulation is provided to explain the suitability of our main criteria.
On 3d dipolar Bose-Einstein condensates involving quantum fluctuations and three-body interactions
Yongming Luo and Athanasios Stylianou
2021, 26(6): 3455-3477 doi: 10.3934/dcdsb.2020239
We study the following nonlocal mixed order Gross-Pitaevskii equation
where $K$ is the classical dipole-dipole interaction kernel, $\lambda_3>0$ and $p\in(4,6]$; the case $p = 6$ being energy critical. For $p = 5$ the equation is currently considered the state-of-the-art model for describing the dynamics of dipolar Bose-Einstein condensates (Lee-Huang-Yang corrected dipolar GPE). We prove existence and nonexistence of standing waves in different parameter regimes; for $p\neq 6$ we prove global well-posedness and small data scattering.
|
9f3a6d56956dd3a5 | Weakly turbulent instability of anti-de Sitter space
Piotr Bizoń (Institute of Physics, Jagiellonian University, Kraków, Poland; Max-Planck-Institut für Gravitationsphysik, Albert-Einstein-Institut, Golm, Germany) and Andrzej Rostworowski (Institute of Physics, Jagiellonian University, Kraków, Poland)
February 21, 2022
We study the nonlinear evolution of a weakly perturbed anti-de Sitter (AdS) space by solving numerically the four-dimensional spherically symmetric Einstein-massless-scalar field equations with negative cosmological constant. Our results suggest that AdS space is unstable under arbitrarily small generic perturbations. We conjecture that this instability is triggered by a resonant mode mixing which gives rise to diffusion of energy from low to high frequencies.
Introduction. The past decade has witnessed growing interest in spacetimes which asymptotically approach anti-de Sitter space (AdS), which is the unique maximally symmetric solution of the vacuum Einstein equations with negative cosmological constant. Despite this flurry of activity, motivated mainly by the AdS/CFT duality conjecture, the very basic question "Is AdS stable?" has rarely been raised (cf. a0 for a notable exception), let alone answered. This is in stark contrast to the two other maximally symmetric solutions of the vacuum Einstein equations – Minkowski space (with zero cosmological constant) and de Sitter space (with positive cosmological constant) – which are known to be stable under small perturbations ck ; f1 . The key feature of asymptotically AdS spacetimes, distinguishing them from asymptotically flat or dS spacetimes, is the presence of a timelike boundary at (spatial and null) infinity where suitable boundary conditions need to be prescribed in order to make the evolution well-defined f2 . For no-flux boundary conditions one gets a Hamiltonian conservative system on an effectively bounded domain from which energy cannot escape (as for waves propagating inside a perfect cavity); hence the asymptotic stability of AdS space is precluded, while the question of its stability, as we will see below, touches upon the KAM theory for partial differential equations.
Model. In this Letter we report on numerical simulations which shed some new light on the problem of stability of AdS spacetime. As a simple model of asymptotically AdS dynamics we consider a self-gravitating spherically symmetric real massless scalar field in four spacetime dimensions, whose evolution is governed by the Einstein-scalar system with negative cosmological constant
For the metric we assume the ansatz
where the angular part is the standard metric on the round unit two-sphere. The ranges of the dimensionless coordinates are the whole real line in time (for the 'unwrapped' version of AdS considered in this paper) and a half-open interval in the radial variable. We assume that the metric functions and the scalar field depend on the time and radial coordinates only. Inserting the ansatz (3) into the field equations (1-2) and introducing auxiliary first-order variables for the spatial and time derivatives of the scalar field (hereafter overdots and primes denote derivatives with respect to the time and radial coordinates, respectively), we obtain a coupled quasilinear elliptic-hyperbolic system, consisting of the wave equation
and the constraints (using units where 8πG = 2)
Note that the AdS length scale drops out from the equations. The pure AdS solution is the one with trivial scalar field and unit, undeformed metric functions. We want to solve the system (4-6) for smooth initial data with finite total mass. Smoothness at the origin implies that near the center we have the power series expansions
where we used the normalization in which the time coordinate is the proper time at the center. These expansions are uniquely determined by a single free function of time. Smoothness at spatial infinity and finiteness of the total mass imply analogous expansions near the outer boundary, whose free functions uniquely determine all higher-order coefficients. It follows from these expansions that for smooth initial data there is no freedom in imposing the boundary data (this is directly related to the fact that the corresponding linearized operator is essentially self-adjoint iw ); hence the initial value problem is well-defined (see hs for the rigorous proof) without the need of specifying boundary data at infinity.
where and free functions , uniquely determine the power series expansions. It follows from (Weakly turbulent instability of anti-de Sitter space) that for smooth initial data there is no freedom in imposing the boundary data (this is directly related to the fact that the corresponding linearized operator is essentially self-adjoint iw ), hence the initial value problem is well-defined (see hs for the rigorous proof) without the need of specifying boundary data at infinity. Numerical results. We solved the system (4-6) numerically using a fourth-order accurate finite-difference code. We used the method of lines and a 4th-order Runge-Kutta scheme to integrate the wave equation (4) in time, where at each step the metric functions were updated by solving the hamiltonian constraint (5) and the slicing condition (6). Preservation of the momentum constraint was monitored to check the accuracy of the code.
Solutions shown in Figs. 1 and 2 were generated from Gaussian-type initial data of the form
with fixed width and varying amplitude. For such data the scalar field is well localized in space and propagates in time as a narrow wave packet. For large amplitudes the wave packet quickly collapses, which is signalled by the formation of an apparent horizon at a point where the metric function drops to zero. As the amplitude is decreased, the horizon radius decreases as well and goes to zero for some critical amplitude. This behavior is basically the same as in the asymptotically flat case, because for such amplitudes the influence of the AdS boundary is negligible. At criticality the cosmological-constant term becomes completely irrelevant; hence the near-critical solution asymptotes (locally, near the center) the discretely self-similar critical solution discovered by Choptuik in the corresponding model with zero cosmological constant matt . For amplitudes slightly below the first critical value the wave packet travels to infinity, reflects off the boundary, and collapses while approaching the center. Lowering the amplitude gradually we find a second critical value at which the horizon radius again goes to zero. As the amplitude keeps decreasing, this scenario repeats again and again; that is, we obtain a decreasing sequence of critical amplitudes for which the evolution, after making n reflections from the AdS boundary, locally asymptotes Choptuik's solution. Specifically, we verified that in each small right neighborhood of a critical amplitude the horizon radius scales according to Choptuik's power law. Fig. 1 shows that the horizon radius has the shape of a right-continuous sawtooth curve with finite jumps at each critical amplitude. Notice that the time of collapse grows with the number of reflections. We stress that this is the radius of the first apparent horizon that forms; eventually all the matter falls into the black hole and the solution settles down to the Schwarzschild-AdS black hole with mass equal to the initial mass (cf. hs2 ). It appears that the critical amplitudes tend to zero, indicating that there is no threshold for black hole formation; however, we did not determine their precise values for large n because the computational cost of bisection increases rapidly with n (since, in order to resolve the collapse, solutions have to be evolved for longer times on finer grids).
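The bisection used to locate a critical amplitude can be sketched as follows. Here `collapses` is a hypothetical placeholder for one full nonlinear evolution, and the threshold value inside it is invented purely for illustration:

```python
def collapses(eps):
    """Stand-in for a full evolution of data with amplitude eps,
    returning True if an apparent horizon forms.  The threshold
    below is an invented number for illustration only."""
    EPS_CRIT = 0.335
    return eps > EPS_CRIT

def bisect_critical(lo, hi, tol=1e-10):
    # Invariant: amplitude lo disperses, amplitude hi collapses.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if collapses(mid) else (mid, hi)
    return 0.5 * (lo + hi)

print(bisect_critical(0.0, 1.0))  # -> ~0.335
```

Each probe of `collapses` is a full evolution, which is why the cost of the bisection grows with the number of reflections that must be resolved.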
Let us mention that the analogous problem in 2+1 dimensions was studied previously by Pretorius and Choptuik cp , who emphasized the challenges inherent in numerical simulations of AdS dynamics; however, their analysis was primarily focused on the threshold for black hole formation before any reflection off the AdS boundary takes place (as for our data with amplitudes above the first critical value).
Figure 1: Horizon radius vs amplitude for initial data (9). The number of reflections off the AdS boundary before collapse varies from zero to nine (from right to left).
In the following we consider the development of general (gaussian and other) small initial data, focusing attention on the early and intermediate pre-collapse phases of evolution. We found that the Ricci scalar at the center can serve as a good indicator for the onset of instability. This quantity oscillates with a frequency set by the time it takes the wave packet to make the round trip from and back to the center. An upper envelope of these oscillations is shown in Fig. 2a, where several clearly pronounced phases of evolution can be distinguished. During the first phase the amplitude remains approximately constant, but after some time there begins a second phase of (roughly) exponential growth, followed by subsequent phases of steeper and steeper growth, until finally the solution collapses. We find that the time of onset of the second phase scales as the inverse square of the perturbation amplitude (see Fig. 2b), which means that arbitrarily small perturbations eventually start growing. Note that this behavior is morally tantamount to instability of AdS space, regardless of what happens later, in particular whether the solution will collapse or not. In the remainder of this Letter we sketch a preliminary attempt to explain the mechanism of this instability in the framework of weakly nonlinear perturbation theory.
Figure 2: (a) The Ricci scalar at the center for solutions with initial data (9) for four moderately small amplitudes. For clarity of the plot only the upper envelopes of rapid oscillations are depicted. After making between about fifty (for the largest amplitude) and five hundred (for the smallest) reflections, all solutions finally collapse.
(b) The curves from plot (a) after rescaling by appropriate powers of the amplitude.
Weakly nonlinear perturbations. We seek an approximate solution of the system (4-6) with initial data in the form
where all higher-order iterates have zero initial data. Inserting (10) into the system (4-6) and collecting terms of the same order in the perturbation parameter, we obtain a hierarchy of linear equations which can be solved order-by-order. At the first order we obtain
This equation is a particular case of the master equation describing the evolution of linearized perturbations of AdS space, analyzed in great detail by Ishibashi and Wald iw . The associated Sturm-Liouville operator is essentially self-adjoint on its natural Hilbert space. Below we denote the inner product on this Hilbert space by angle brackets. The eigenvalues of the operator are ω_j² = (3 + 2j)² (j = 0, 1, 2, …), with eigenfunctions e_j(x) proportional to cos³x times a Jacobi polynomial in cos 2x,
where the proportionality constant is the normalization factor ensuring that ⟨e_j, e_j⟩ = 1. The positivity of all the eigenvalues implies that AdS space is linearly stable. By elementary separation of variables, the solution of Eq.(11) is given by the superposition of eigenmodes
where the amplitudes and phases are determined by the initial data.
The back-reaction on the metric appears at the second order and can be readily integrated to yield
At the third order we get the inhomogeneous equation
where the source is built from the lower-order iterates. Projecting Eq.(16) on the basis (12) we obtain an infinite set of decoupled forced harmonic oscillators for the generalized Fourier coefficients
Let B be the set of indices of the nonzero modes in the linearized solution (13). A lengthy but straightforward calculation yields that each triad of modes from B whose combination frequency ω_i + ω_j − ω_k coincides with some eigenfrequency gives rise to a resonant term (i.e., a term oscillating exactly at that eigenfrequency). Some of these resonances can be removed by a multiscale (or Poincaré-Lindstedt) technique, but for general initial data there is no way to remove all the resonances and consequently we get secular terms which grow linearly in time and invalidate the perturbation expansion at times of the order of the inverse amplitude squared. For example, for single-mode initial data only the coefficient of the initial mode contains a resonant term, and this term can be eliminated by introducing a slow modulation of the phase in (13), yielding an approximation which is uniformly valid up to at least such times. In contrast, for two-mode initial data the resonant terms in the coefficients of the two initial modes can be eliminated by suitable phase modulations, but a resonant term in a third coefficient persists and produces a secular term.
In accord with this analysis, we observe a striking difference in the evolution of these two kinds of solutions. For (non-resonant) one-mode data the solution remains near the initial state during the entire simulation time, which indicates stability (although we cannot exclude a metastable behavior with a very long lifetime). In contrast, for (resonant) two-mode data we observe the onset of exponential instability at a time of the order of the inverse amplitude squared, as in the case of gaussian-type data (9). Notice that for N-mode initial data the number of triads satisfying the diophantine resonance condition grows rapidly with N and hence so does the number of unremovable secular terms. We believe that these secular terms are progenitors of the higher-order resonant mode mixing which shifts the energy spectrum to higher frequencies. To quantify this effect, we introduce a spectral decomposition of the total energy as follows. We define the projections of the scalar field variables onto the eigenmodes and, using their orthogonality, express the total conserved energy as a Parseval sum over the modes.
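The rapid growth of the number of resonant triads with the number of excited modes can be checked by brute force. A minimal Python sketch, assuming the eigenfrequencies ω_j = 3 + 2j quoted above (so the spectrum is fully resonant):

```python
# Count resonant triads (i, j, k) among the first N modes whose combination
# frequency omega_i + omega_j - omega_k hits another eigenfrequency.
def omega(j):
    return 3 + 2 * j

def resonant_triads(N):
    freqs = {omega(j) for j in range(10 * N)}    # ample table of eigenfrequencies
    return [(i, j, k)
            for i in range(N) for j in range(N) for k in range(N)
            if omega(i) + omega(j) - omega(k) in freqs]

for N in (1, 2, 3, 4, 5):
    print(N, len(resonant_triads(N)))            # 1, 7, 23, 54, 105: rapid growth
```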
The weakly nonlinear multiscale perturbation analysis of the system (4-6) will be elaborated in detail elsewhere br , where we will also connect the nonlinear instability of generic quasiperiodic solutions of the linearized equations to the violation of non-resonance conditions of the KAM theorem for PDEs and Arnold diffusion.
Figure 3: Evolution of the fraction of the total energy contained in the lowest modes for the two-mode initial data.
Conclusions. There is growing (numerical and theoretical) evidence that the phenomenon of weak turbulence, i.e. the tendency of solutions to shift their energy from low to high frequencies, is common for (non-integrable) nonlinear wave equations on bounded domains; notably it has recently been proven for the nonlinear Schrödinger equation on the torus c_team ; cf . To our knowledge, this paper is the first result in this direction for Einstein's equations. An important difference between wave equations defined on a fixed background and Einstein's equations should be stressed. For the former, the weakly turbulent behavior is compatible with global-in-time smooth evolution. In contrast, for Einstein's equations the transfer of energy to high frequencies cannot proceed indefinitely because concentration of energy on smaller and smaller scales inevitably leads to the formation of a black hole.
Admittedly, the results presented above raise more questions than they give answers. One of the key physical questions concerns the role of the negative cosmological constant in the observed phenomenon. Is it dynamical or kinematical? In other words, is an extra attractive force due to the negative cosmological constant essential in triggering gravitational collapse for arbitrarily small perturbations, or is its only role to confine the evolution in an effectively bounded domain?
In this paper we studied perturbations of AdS space for the Einstein-massless-scalar system in four spacetime dimensions, but we observed qualitatively the same behavior (in particular, instability of AdS space) for the five-dimensional vacuum Einstein equations within the cohomogeneity-two biaxial Bianchi IX ansatz of bcs . This result and its implications for the AdS/CFT conjecture will be discussed elsewhere. Acknowledgments: We thank Helmut Friedrich for helpful discussions and encouragement, and Piotr Chruściel for critical reading of the manuscript. This work was supported in part by the NCN grant NN202 030740.
|
36f5d75b06faf3aa | 12 Mar 2020
Here’s How You Can Make Sense of Your SPM Results (UPDATED)
Now that the SPM bombshell has been revealed to you, you’re probably riding in total jubilation, complete disappointment or somewhere between the two.
Yes, not everyone will find themselves hopping in the air for an enthusiastic newspaper photographer. Nor will everyone be privileged enough to strut down a parade while flashing blue slips of paper (#wefeelyou) (#thereismoretolife). So, before you drown yourself in celebration or post-exam envy (blame it on Facebook), know that there’s more to your SPM results than whipping out straight As.
In fact, your results could be the secret key to understanding which course you should pursue and which you should completely avoid. All you need to do is pinpoint the subjects that you did well in and narrow it down to the ones you’re most interested in. Then, look for your chosen subject on this list to discover which courses would suit you best.
#1. Languages (Malay, English, Chinese, Tamil)
If you enjoy hefty tomes of language and literature, you may officially crown yourself as a linguaphile (read: a lover of languages and words).
Language lovers are great communicators, possess a good sense of sensitivity towards cultural differences and offer diversity to a team.
While the first few career options for language fanatics are almost always translator, interpreter or foreign language teacher, you're not necessarily constrained to such narrow paths. Here are several degrees that you can pony up to: language, creative writing, literature, international relations (think diplomats), mass communication (unleash your creativity in the line of advertising, journalism and public relations), law and education.
#2. History
Are you fuelled by a passion for memorising facts, dates and random trivia?
More than just skilled in cramming, history buffs are great in analysing and interpreting information, formulating solid arguments and communicating their views.
While a great number of people perceive history as “fun stuff” or a field that will never secure you a job beyond being a high school teacher, what they may not know is that a solid foundation in history poses a strong basis for a number of careers.
Degrees that you can consider include: journalism (a major under mass communication), law, political science, anthropology, language, economics and education.
Fun Fact: History as a university discipline is not all about constructing a mental tome of sorts and merely memorising the overflowing historical facts and events of the past in meticulous detail. Rather, history majors gaze towards a greater understanding of patterns, such as the cause and effects of human behaviour and existence.
#3. Moral and Islamic Studies
Do you constantly ponder over the meaning of life, reflect upon faith, culture and reason, and strive to make informed ethical choices?
If this sounds like you AND you’ve sailed through the lengthy scrolls of moral values or holy scriptures with ease, then the study of theology, religion, sociology and philosophy might just be up your street.
These areas of study are progressively earning a substantial slot in our world where religious beliefs are the movers and shakers behind social and political events. On top of furnishing your personal interests, it lays out an arena for you to decipher the complexities of life while preparing you to etch a positive impact in the world.
#4. Mathematics and Additional Mathematics
Do you find the black and white nature of mathematics soothing in our chaotic world? Always find yourself getting a kick out of solving knotty equations?
If so, you’re probably great at logical analysis, careful deduction and finding answers based on patterns and structures.
You may think that your love for distilling complex and real-world mathematical puzzles may only lead you to mathematics and actuarial science courses, but you’re far from the truth! You can also explore areas such as computer science, engineering, finance and accounting.
Fun Fact: Despite the tainted image of mathematics as a dreary and strenuous subject to be endured rather than to be enjoyed, in actuality, it’s one of the most creative disciplines accessible to all. Saving lives, devising video games and exploring the secrets of the universe — it’s all in a day’s work for a mathematician!
#5. Physics
Many may find physics to be one of the more gruelling subjects in school. Don't get us wrong — there are heaps of Big Bang Theory doppelgangers who genuinely adore throwing out things like the Schrödinger equation and making references to quantum mechanics. But, if you happen to rejoice in the study of energy, fields and mass, your ability to reason, application of maths and problem-solving skills are probably top-notch.
Degrees where you can flex your muscles include: engineering, sciences, information technology, architecture and finance.
Fun Fact: The significance of physics to society today is most easily exhibited by our reliance on technology. Without physics, there would be no space rockets, light bulbs, digital cameras, cars, cell phones, aeroplanes or heck, computers.
#6. Chemistry
Perhaps some of us are baffled by the encyclopaedic chemical elements from the periodic table, but for many, chemistry can be a fascinating field of study.
Lovers of molecular properties and chemical reactions are generally whizzes at numeracy and problem-solving.
Chemistry enthusiasts can consider the following degrees: chemical engineering (with good grades in physics), health sciences (think medicine, pharmacy and nutrition, with good grades in Biology), biochemistry, forensic science and psychology.
Fun Fact: While the glamorous chunk of chemistry resides within a sealed lab — working with microscopic atoms and particles — the influence of chemistry is actually all around us, from the water we sip to the medicines that keep us alive!
#7. Biology
Are you a fan of understanding how the human body works? Are you enthralled by the process of evolution as well as the genes and cells that pose as the building blocks of our physical world?
The natural pathway for you may be a Degree in Science, focusing on biology. From labs and zoos to ocean liners in the Arctic and fieldwork in the Amazon jungle, biologists strive to comprehend how animals and organisms work (including us humans) to stop the spread of disease, track down natural resources and improve public health.
Beyond biology, other areas of study you can consider include: health sciences (medicine, pharmacy, dentistry and nutrition, which are subject to good grades in chemistry), bioscience, physiotherapy, environmental science and psychology.
#8. Economics
If you’ve victoriously conquered supply and demand as well as international trade and price system, congratulations! Those who excel in Economics generally have developed strong analytical and evaluative skills that are valuable in many degrees.
A natural path would be to explore a Degree in Economics, a business major that isn’t all about the numbers and embraces both the social and the political world. As a social science, it plays a vital role in decrypting the mysteries of how people and society operate.
However, other areas of study that you can consider include: accounting, business, actuarial science, political science and finance.
We hope this helps put your results into perspective. Now, go forth and discover incredible career paths based on the subjects you’ve aced.
Picked a course but unsure where to study it? We can help you choose the best college!
Leave a comment |
1a6c315fa056b2fa | I'm trying to build an intuition for what quantization really means and came up with the following two possible "visualizations":
1. The quantization of the energy levels of a harmonic oscillator is the result of a wave function that is confined in a potential well (namely of quadratic profile). It is the boundary conditions of that well that give rise to standing waves with a discrete number of nodes---hence the quantization.
2. Photons are wave packets, i.e., localized excitations of the electromagnetic field that happen to be traveling at the speed of light.
On the one hand, #1 explains quantization as the result of the boundary conditions, and on the other hand #2 explains it as the localization of an excitation. Both pictures are perfectly understandable from classical wave mechanics and yet we don't think of classical mechanics as quantized.
With the above in mind, what is intrinsically quantized about quantum mechanics? Are my "intuitions" #1 and #2 above contradictory? If not, how are they related?
PS: Regarding #2, a corollary question is: If photons are wave packets of the EM field, how does one explain the fact that a plane, monochromatic wave pervading all of space, is made up of discrete, localized excitations?
My question is somewhat distinct from this one in that I'd rather not invoke the Schrödinger equation nor resort to any postulates, but basically build on the two intuitions presented above.
7 Answers
First and second quantization
Quantization is a misleading term, since it implies discreteness (e.g., of the energy levels), which is not always the case. In practice (first) quantization refers to describing particles as waves, which in principle allows for discrete spectra, when boundary conditions are present.
The electromagnetic waves behave in a similar fashion, exhibiting discrete spectra in resonators. Thus, technically, quantization of the electromagnetic field corresponds to second quantization of particles.
Second quantization arises when dealing with many-particle systems, when the focus is no longer on the wave nature of the states, but on the number of particles in each state. The discreteness (of particles) is inherent in this approach. For the electromagnetic field this corresponds to the first quantization, and the filling particles, whose number is counted, are referred to as photons. Thus, a photon is not really a particle, but an elementary excitation of the electromagnetic field. Associating a photon with a wave packet is misleading, although it appeals to intuition. (One could argue, however, that physically observed photons are always wave packets, since to have truly well-defined energy they would have to exist for an infinite time, which is not possible.)
This logic of quantization is applied to other wave-like fields, such as wave excitations in crystals: phonons (sound), magnons, etc. One sometimes speaks even of diffusons - quantized excitations of a field described by the diffusion equation.
Uncertainty relation
An alternative way to look at quantization is from the point of view of the Heisenberg uncertainty relation. One switches from classical to quantum theory by demanding that canonically conjugate variables cannot be measured simultaneously (e.g., position and momentum, $x,p$ can be simultaneously measured in classical mechanics, but not in quantum mechanics). Mathematically this means that the corresponding operators do not commute: $$ [\hat{x}, \hat{p}_x]_- = \imath\hbar \Rightarrow \Delta x\Delta p_x \geq \frac{\hbar}{2}.$$ The discreteness of spectra then shows up as discrete eigenvalues of the operators.
This procedure can be applied to anything - particles or fields - as long as we can formulate it in terms of Hamiltonian mechanics and identify effective position and momenta, on which we then impose the non-commutativity. E.g., for electromagnetic field, one demands the non-commutativity of the electric and the magnetic fields at a given point.
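To make this concrete, here is a small numerical sketch (Python with numpy; the grid, units $\hbar = 1$ and the Gaussian test state are assumptions). It builds finite-dimensional position and momentum matrices and checks the commutator and the resulting uncertainty bound:

```python
import numpy as np

# Finite-dimensional sketch of [x, p] = i*hbar (hbar = 1): position and
# momentum matrices on a periodic Fourier grid.  The identity holds only
# approximately, on states localized away from the grid edges.
N, L = 256, 20.0
x = (np.arange(N) - N // 2) * (L / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

X = np.diag(x)
F = np.fft.fft(np.eye(N), axis=0)          # DFT matrix
P = np.linalg.inv(F) @ np.diag(k) @ F      # momentum: diagonal in k-space

psi = np.exp(-x**2 / 2.0)                  # Gaussian test state
psi = psi / np.linalg.norm(psi)

comm = X @ P - P @ X
print(psi.conj() @ comm @ psi)             # ~ 1j, i.e. i*hbar

dx = np.sqrt(psi @ (x**2 * psi))           # <x> = 0 for this state
pp = P @ psi
dp = np.sqrt(np.real(psi.conj() @ (P @ pp)))   # <p> = 0 as well
print(dx * dp)                             # ~ 0.5: the Gaussian saturates hbar/2
```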
• "when boundary conditions are present" — boundary conditions are always present. Quantization arises when the motion is confined, not when BCs are present. – Ruslan, Apr 9, 2020
• Thanks, this is indeed a more precise way to say it. – (answer author), Apr 9, 2020
• Could you explain how your last section, "Uncertainty relation", is illustrative of quantization? Uncertainties arise even classically between any two Fourier conjugates. (Hence the Heisenberg uncertainty relation is perhaps better thought of as a dispersion relation?) That said, I can't see what that has to do with quantization or discreteness of any physical quantity. – Tfovid, Apr 9, 2020
• @Tfovid In this section I speak of quantization in an even more general sense, as moving from a classical to a quantum description of physical phenomena. In classical physics position and momenta can be measured simultaneously, whereas in quantum mechanics they cannot. All the math can be built using this as the departing point: the discrete energy spectra then appear as the eigenvalues of non-commuting matrices representing the physical quantities. Indeed, formally the uncertainty principle is just what you get when analyzing Fourier conjugates, but here it has a specific physical meaning. – (answer author), Apr 9, 2020
• Always a sincere and affectionate thank you for your cooperation in Physics.SE. – Sebastiano, Feb 7, 2021
Actually in the case of your #2 there is no quantization since the energy spectrum of plane waves is continuous: there is a continuous range of $k$-vectors and thus a continuous range of energies. The wave packet is just a superposition of plane waves, with continuously varying $k$ (or $\omega$) so not quantized.
To highlight the difference I will refer to an old paper of Sir Neville Mott, "On teaching quantum phenomena." Contemporary Physics 5.6 (1964): 401-418:
The student may ask, why is the movement of electrons within the atom quantized, whereas as soon as an electron is knocked out the kinetic energy can have any value, just as the translational energy of a gas molecule can? The answer to this is that quantization applies to any movement of particles within a confined space, or any periodic motion, but not to unconfined motion such as that of an electron moving in free space or deflected by a magnetic field.
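To see concretely that a localized pulse involves a continuum of $k$'s rather than anything discrete, here is a small numpy sketch; all the numbers in it are illustrative assumptions:

```python
import numpy as np

# A localized pulse is a continuous superposition of plane waves exp(i k x):
# nothing about k is discrete.  We approximate the k-integral by quadrature
# and recover a localized Gaussian envelope.
x = np.linspace(-20.0, 20.0, 401)
k = np.linspace(-6.0, 6.0, 2001)
dk = k[1] - k[0]
k0, sigma_k = 1.0, 0.5                       # carrier and spectral width

spectrum = np.exp(-(k - k0)**2 / (2 * sigma_k**2))       # smooth in k
psi = (spectrum * np.exp(1j * np.outer(x, k))).sum(axis=1) * dk

# |psi| is a packet of spatial width ~ 1/sigma_k, even though each
# ingredient exp(i k x) pervades all of space.
print(np.abs(psi).max(), np.abs(psi)[0])     # large at the center, ~0 far away
```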
• If you shine a plane-wave laser light onto a screen but then dim it until only individual photon "impacts" appear on the screen, doesn't that mean that the photons (i.e., wave packets) arose from the plane wave? I'm still not sure where superposition comes into play. – Tfovid, Apr 8, 2020
• The wave packet is not a plane wave: it contains a superposition of plane waves, each with different $k$-vectors and thus different energies. If you want a plane wave you cannot have localization. Also @Vadim's comment on the E&M field is quite relevant here. – (answer author), Apr 8, 2020
• I understand how a wave packet is the superposition of several plane waves. The gap in my understanding is as follows: what I recall from my undergrad physics is that if you start with a plane wave and then dim it, you end up with photon impacts on the screen. I.e., you start with a continuous all-pervasive wave, but measure localized impacts. Is this what quantization means? – Tfovid, Apr 9, 2020
• Also, I'm not really sure what to make of that quote from that 1964 paper. Is the author saying that we have quantization iff we have confinement, hence my point #1? – Tfovid, Apr 9, 2020
• Yes, quantization is the result of confinement. Localized pulses are not plane waves; there's nothing else to say. – (answer author), Apr 9, 2020
I am trying to answer your title question, because your questions about the atomic absorption/emission being quantized, is covered in the other answers.
You are asking "How does quantization arise in quantum mechanics?", and "If photon are wave packets of the EM field, how does one explain the fact that a plane, monochromatic wave pervading all of space, is made up of discrete, localized excitations?".
If you accept that our universe is fundamentally quantum mechanical, then you need to describe the forces that govern it, and you need to describe how the forces act on matter by propagating mediators.
The EM force needs to be quantized to fully describe its interaction with matter. Photons, quanta of light, are the only way to describe how light interacts with matter at the level of individual absorptions/emissions.
The weak force is bound by the heavy mediators, the W and Z, and the strong force is bound by confinement, using gluons. Both are in this way fully quantized, when we describe how they act on matter.
In other words, the weak and the strong force are, in some sense, "fully quantum" in that their importance to our world comes completely from their quantized description.
Are there weak force waves?
The only exception is gravity, where we do not yet have a full quantum description of how exactly gravity acts on matter by propagating mediators, the hypothetical gravitons. But as you say, the need arises, because we are trying to describe the universe in cases where the gravitational forces are extreme, and dominate over all other forces (singularity).
So the answer to your question is: you can beautifully describe the universe by classical theories, like EM waves and GR waves, if you want to go with big scales, but as soon as you try to describe how forces act on matter (exceptions are photon-photon or gluon-gluon interactions) on the quantum scale (elementary particles), you need a quantized force.
• I don't see how this answers the question. This is more of a comment really. – ZeroTheHero, Apr 8, 2020
• @ZeroTheHero I thought he was asking how quantization arises in QM. Maybe I just answered his title. Other than that, he is basically asking why atoms can only absorb/emit quanta of light. I believe your answer covers that. I will edit. – (answer author), Apr 8, 2020
Quantisation means that the classical description of a particle having independent position and momentum at any time is replaced by a probabilistic description in which these numerical properties are not fundamental to the description of matter, but are determined in measurement processes. As summarised by Paul Dirac:
“In the general case we cannot speak of an observable having a value for a particular state, but we can … speak of the probability of its having a specified value for the state, meaning the probability of this specified value being obtained when one makes a measurement of the observable.”
The difference between this and classical probability theory is that classical probabilities are determined by unknowns, but quantum probabilities are actually indeterminate. Mathematically a probability density can be split into a function and its complex conjugate using the Born rule (this much is trivial) and quantum superposition is then the natural way to describe a logical disjunction (the result of a measurement may be one thing $\mathrm{OR}$ another). This much gives us the structure of a Hilbert space.
It is not trivial, but it can be proven, that preserving the probability interpretation under time evolution requires unitarity, and that the conditions for Stone's theorem are obeyed. The general form of the Schrödinger equation follows.
This much has been well established in mathematical foundations of quantum mechanics, but it is generally not covered in text books which are concerned with application, not foundations and interpretation. I have written a paper for the purpose of clarifying The Hilbert space of conditional clauses.
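To illustrate the unitarity point numerically, here is a minimal sketch (Python; the Hamiltonian and the state are random placeholders, chosen purely for illustration):

```python
import numpy as np
from scipy.linalg import expm

# For Hermitian H, U(t) = exp(-iHt) is unitary (Stone's theorem in finite
# dimension), so the total probability is preserved at every time.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
H = (A + A.conj().T) / 2                   # Hermitian generator

psi = rng.normal(size=6) + 1j * rng.normal(size=6)
psi = psi / np.linalg.norm(psi)

for t in (0.1, 1.0, 10.0):
    print(t, np.linalg.norm(expm(-1j * H * t) @ psi))   # stays 1.0
```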
I would suggest you give more credence to your idea #1. Photons can be explained in the framework of idea #1. After all, the reason we need photons in our theory is to explain why light energy only seems to come in discrete units.
Here's a way of understanding the energy eigenstates of quantum systems that matches pretty well with your idea #1.
1. Consider the classical mechanics of some system, like a particle in a potential well.
2. Compute the period of the system as a function of the total energy of the system, $T(E)$.
3. Energy eigenstates are those states where the wavefunction reinforces itself constructively as it propagates. Combining this idea with the Planck-Einstein equation we see that the allowed energies are the ones that satisfy $E T(E) = 2\pi \hbar n$ for some integer $n$. Different systems have different $T(E)$ and solving this equation for $E$ in terms of $n$ yields the energy spectrum.
This recipe works heuristically for one-dimensional one-particle systems. It misses things like zero-point energy and gets constant factors wrong, and it is hairy to extend to more dimensions and particles, but it does tend to give you the right asymptotic structure, so I think it's helpful conceptually (a small numerical sketch follows below). I suggest that you can use it to explain photons as well.
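Here is a minimal sketch of the recipe in Python, with $\hbar = m = 1$ as assumed units and the two textbook periods `T_oscillator` and `T_box`:

```python
import numpy as np
from scipy.optimize import brentq

hbar, m = 1.0, 1.0

# Step 2 of the recipe: the classical period as a function of energy.
def T_oscillator(E, omega=1.0):
    return 2 * np.pi / omega              # independent of E

def T_box(E, L=1.0):
    return 2 * L / np.sqrt(2 * E / m)     # round-trip time at speed v(E)

# Step 3: solve E * T(E) = 2*pi*hbar*n for E, for n = 1, 2, ...
def levels(T, nmax=4):
    return [brentq(lambda E: E * T(E) - 2 * np.pi * hbar * n, 1e-9, 1e6)
            for n in range(1, nmax + 1)]

print(levels(T_oscillator))  # 1, 2, 3, 4 = n*hbar*omega (misses the 1/2)
print(levels(T_box))         # grows like n^2; the true box spectrum is
                             # (pi*hbar*n)^2 / (2*m*L^2), so the recipe is
                             # off by a constant factor, as warned above
```

Note that the oscillator case is exactly the photon story of the sections below: a field mode of frequency $\omega$ has $T = 2\pi/\omega$, so the recipe gives $E = n\hbar\omega$.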
Explaining the energy quantization of one-particle systems
In the one-particle world the classical state of the system is determined by a single position function. Given a classical trajectory $x(t)$ with total energy $E$ you look for the period of $x(t)$ so that $x(t+T(E)) = x(t)$.
Note that I haven't mentioned boundary conditions. Boundary conditions are important in this idea insofar as they are what create periodic classical trajectories! Classical systems without attractive potential wells don't have periodic classical trajectories and so their quantum analogues don't have discrete spectra, just a continuous free spectrum. The physical idea is periodic classical trajectories, which can be caused by attractive potential wells, which manifest mathematically in boundary conditions in Schrodinger's equation.
Explaining the energy quantization of the EM field
In the electromagnetic world the classical state of the system is determined by an electromagnetic field function, $A_\mu (\vec{x},t)$. For a classical field solution $A_\mu(\vec{x},t)$ with total classical energy $E$ you look for the period $A_\mu(\vec{x},t+T(E)) = A_\mu(\vec{x},t)$ and then solve $ET(E) = 2\pi\hbar n$. If you do this you find that there are infinitely many solutions for $n =1$ corresponding to $E = \hbar c|\vec{k}|$ where $\vec{k}$ is some vector. For higher $n$ you find more solutions, $E = \hbar c n|\vec{k}|$. This suggests that the energy eigenstates of the quantum electromagnetic field come in particle like chunks where one particle has energy proportional to its momentum and you can have arbitrary numbers of these particles. These are what we call photons.
Note also that we have a quantized energy spectrum without any special boundary conditions to speak of. Again, the quantization comes from the periodic classical field solutions. In the case of the EM field the periodic field solutions arise because of the EM wave equation, rather than being caused by an external potential.
Now, there are a ton of problems with every step of this conceptual approach. For one, if you do the math you immediately find that a classical field that is periodic in time (and hence in space?) doesn't have finite total energy!
However, I am arguing that your idea #1 explains photons, and so you should take idea #1 as the fundamental idea that explains both photons and the quantization of energy levels in simpler systems.
• $\begingroup$ Thanks! Very interesting way of looking at it. Strange that it seems to make more sense in the old Bohr-Sommerfeld way of looking at quantum mechanics than in the more modern Hilbert space version. Or maybe it's just easier to connect it up with classical mechanics, since it was closer to it historically. $\endgroup$ Nov 16, 2021 at 8:14
Although Schrödinger titled his four pioneering 1926 communications 'Quantization as an eigenvalue problem', this is misleading. Discretization by boundary conditions applies to classical waves in strings and resonators: it is not the energy that takes discrete values, but the wavelength and consequently the frequency.
The quantization of the electromagnetic field to photons of energy hf has nothing to do with boundary conditions either.
The stationary Schrödinger equation for the harmonic oscillator has the following mathematical property. Any given solution with energy E is connected with all other solutions of energies
E+hf, E+2hf,... and E-hf, E-2hf,...
This holds true for all solutions, not only for Schrödinger's eigensolutions! This means that this equation has an intrinsic discrete structure, independent of any boundary condition.
Now, all solutions except Schrödinger's eigensolutions represent perpetua mobilia and hence violate the energy conservation law. This makes them unphysical.
I agree that it is easier to visualize boundary conditions than recursion formulae.
More details are in publications by Dieter Suisky, who had the core idea, and myself under the title 'quantization as selection problem'.
Have fun! Peter
Answering just the title question I would say that $E=h\nu$ is the basic equation. It implies that matter - photons, electrons etc. - is described by waves with a frequency and hence a wavenumber. It also implies that these waves are normalized to represent these energy quanta. Wave equations such as Schrödinger, Klein-Gordon and Dirac basically describe the Einstein relation between energy and momentum, $E^2=m^2+p^2$. The Noether theorem provides the expressions for energy and other conserved quantities. These observations can build the intuitive picture that you are looking for.
|
f7883031ebd25320 | Monthly Archives: December 2020
Another tiny victory for electrons being magnetic monopoles? Solar corona temperature ‘explained’.
The official version of the electron is that it is a magnetic dipole, and as such it cannot be accelerated by magnetic fields. Often people do not understand why, but if I explain it via atomic hydrogen all of a sudden everybody thinks 'hey, that is logical'.
Explanation via atomic hydrogen:
Atomic hydrogen is made from one proton and one electron and as such it cannot be accelerated by electrical fields.
End of the explanation.
Ok, ok, the electric field can be so strong that the hydrogen atoms get ripped apart, but as long as that is not the case they can't be accelerated by any electric field. Now if electrons were truly magnetic dipoles, they could not be accelerated by magnetic fields.
Tiny problem: every idiot looking at those beautiful videos from the sun can see with their own eyes that the plasma gets accelerated by magnetic fields… And there are more problems; the surface temperature of the sun is far below that of the solar atmosphere or the solar corona.
In the last five years I have given plenty of explanations of how those solar loops and stuff likely work. It is very likely that below all those sunspots the plasma is actually rotating, spitting out the electrons, which get accelerated much more than the protons, and as such we have a giant dynamo made from rotating plasma.
Well nobody talks about that because university people only talk about the stuff you find in expensive journals like Nature or Science. And of course I am not going to waste my money on journals that are too expensive anyway & read only by overpaid perfumed princes from the universities…
But let's go to the video stuff from Anton Petrov. He talks about Eugene Parker, and Mr. Parker is the guy whose name is used in the Parker solar probe. Already in the 1970s Eugene explained how the solar corona could be so hot compared to the surface of the sun. According to Anton he explained it via mini flares coming from the surface heating the atmosphere of the sun. With such an explanation Eugene Parker avoided becoming a pariah: he did not have to state that this is only possible if electrons and protons are not magnetic dipoles but are all magnetic monopoles and as such carry magnetic charge. My estimate is that both Eugene and Anton do not have any fucking clue as to why the plasma gets accelerated; if you keep on hanging on to that retarded Gauss law for magnetism it is very hard to explain stuff like that.
Anyway, here is the video and the title is:
We Finally Know Why Sun’s Corona Is So Extremely Hot
I would add: No you don’t, observation is not explanation.
Observing solar plasma acceleration by magnetic fields is not an explanation if your belief system is the standard model of physics in general and the Gauss law for magnetism in particular.
Anyway, Anton rightly remarks that those mini solar loops & flares are only observed on a tiny part of the sun, so this does not explain the overall temperature of the solar corona…
He is right about that. Let me end this post with two pictures and after that we will split and say goodbye until the next year, 2021.
Once more: if electrons are magnetic dipoles, why do they get accelerated?
And the last picture, it is not a mini-loop but one of those giant loops:
Ok, that was it for the last post of the year.
See you in the next year.
The total differential for the complex plane & the 3D and 4D complex numbers.
I am rather satisfied with the approach of doing the same stuff on the diverse complex spaces. In this case the 2D complex plane and the 3D & 4D complex number systems. By doing it this way it is right in your face: a lot of stuff from the complex plane can easily be copied to higher dimensional complex numbers. Without doubt, if you would ask a professional math professor about 3D or higher dimensional complex numbers, likely you get a giant bagatellization process to swallow: 3D complex numbers are so far-fetched and/or exotic that they fall outside the realm of standard mathematics. "Otherwise we would have used them for centuries and we don't". Or words of similar phrasing that diminish any possible importance.
But I have done the directional derivative, the factorization of the Laplacian with Wirtinger derivatives and now we are going to do the total differential precisely as you should expect from an expansion of the century old complex plane. There is nothing exotic or ‘weird’ about it, the only thing that is weird are the professional math professors. But I have given up upon those people years ago, so why talk about them?
In the day to day practice it is a common convention to use so-called straight d's to denote differentiation if you have only one variable. Like in a real valued function f(x) on the real line, you can write df/dx for the derivative of such a function. If there is more than one variable, the convention is to use those curly d's to denote that it is partial differentiation with respect to a particular variable. So for example on the complex plane the complex variable z = x + iy and as such df/dz is the accepted notation, while for differentiation with respect to x and y you are supposed to write it with the curly d notation. This practice only exists when it comes to differentiation; the opposite thing is integration and there only straight d's are used. If in the complex plane you are integrating with respect to the real component x you are supposed to use the dx notation and not the curly stuff.
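To fix ideas: in the complex plane the total differential of a function f can be grouped either by the real variables or by z and its conjugate, $$df = \frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy = \frac{\partial f}{\partial z}\,dz + \frac{\partial f}{\partial \bar z}\,d\bar z, \qquad dz = dx + i\,dy, \quad d\bar z = dx - i\,dy.$$ This is the century-old 2D version; the 3D and 4D versions in the pictures below follow the same pattern, only with more variables and more imaginary units.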
Well I thought I had all of the notation stuff perfectly figured out, oh oh how ultrasmart I was… There I was writing down the stuff for the 4D complex numbers and I came across the odd expression dd. I hope it does not confuse you; in the 4D complex number system I always write the four dimensional numbers as Z = a + bl + cl^2 + dl^3 (the fourth power of the imaginary unit l must be -1, that is l^4 = -1, because that defines the behavior of the 4D complex numbers), so inside Z there is a real variable denoted as d. I hope this lifts the possible confusion when you read dd.
More on the common convention: in the post on the factorization of the Laplacian with Wirtinger derivatives I said nothing about it. But in case you never heard about the Wirtinger stuff and looked it up in some wikis or whatever, Wirtinger derivatives are often denoted with the curly d's, so why is that? That is because Wirtinger derivatives are often used in the study of multi-variable complex analysis. And once more that is just standard common convention: only if there is one variable can you use a straight d. If there are more variables you are supposed to write it with the curly version…
At last I want to remark that the post on the factorization of the Laplacian got a bit long: in the end I needed 15 pictures to publish the text and I worried a bit that it was just too long for the attention span of the average human. In the present years there is just so much stuff to follow; for most people it is a strange thing to concentrate on a piece of math for, let's say, three hours. But learning new math is not an easy thing: in your brain all kinds of new connections need to be formed, and besides a few hours of time that also needs sleep to consolidate those newly formed connections. Learning math is not a thing of just spending half an hour; often you need days or weeks or even longer.
This post is seven pictures long; have fun reading it, and if you get too tired and need a bit of sleep please notice that is only natural: the newly formed connections in your brain need a good night's sleep.
Here we go with the seven pictures:
Yes, that's it for this post. Sleep well and think well & see you in the next post. (And oh oh oh, a professional math professor for the first time in his or her life calculating the square Z^2 of a four dimensional complex number; how many hours of sleep do they need to recover from that experience?)
See ya in the next post.
Majorana particles gone? How significant is this?
It is no secret that over the years I have made a few rather bold predictions when it comes to magnetism. I think each and every electron is in fact a magnetic monopole and not a 'tiny magnet' like the professional professors think. As such I have predicted that all nuclear fusion reactors based on the magnetic confinement design will never work, because if you turn such a machine on it will accelerate the electrons to crazy speeds, creating tons of instability and turbulence in the plasma. And if I say 'hey, where is your experimental proof that electrons are actually magnetic dipoles' of course nothing happens. You must never forget that all these physics professors are extremely important persons and they will never mingle with inferior shit like people who have been unemployed for almost 20 years now… No no no, we are not going to react to some crazy stuff like electrons not being magnetic dipoles. If we start doing that, the next day a homeless person will come along saying that if we heal the electrons from their traumas you get better beer. No no no, incompetent people aren't unemployed for no reason at all. We will neglect weirdos like that; after all we are the physics professors and we are on the edge of becoming masters of the universe once we have our quantum computers up and running!
I also predicted a few years back that the approach of the Delft university combined with Microsoft will not work, because the Majorana particles they are based upon do not exist. It is very simple: a Majorana particle is its own anti-particle. Physics professors thought that if you combine an electron with a hole (the absence of an electron) you have a structure that is its own anti-particle. But if you combine an electron with a hole, what mechanism is there to ensure the hole and the electron also carry opposite magnetic charges? Just like the electron and the positron carry opposite electric charges, the same should go for the magnetic charges, according to my lazy unemployed insights.
A couple of days back when reading the news on Google news I did an internet search for 'quantum computing' and to my surprise there was an article from last May in the local newspaper De Volkskrant stating there was 'bad luck' for the heroes at Delft university. Oopsy toopsy, the Delft heroes had to withdraw an article published in Nature where the existence of Majorana particles was claimed. The fact that Nature published this bullshit in the first place is also interesting; likely the peer review mechanism works perfectly for getting the best of science into that outlet of articles. Today I checked what the publication fees are for Nature, it is only 5000€ or so (I more or less expected it to be at least 10000€) so today I learned something too… Those fees are of course a fundamental root cause as to why you will never find my name Reinko Venema in such overpaid shit publications. Why should I lay out 5000€ for something that will never ever pass the peer review hurdle? Since I am not an overpaid person, most of the time I try to spend my money wisely.
I am sorry that the news article is in Dutch, but here is a link:
Tegenslag voor Nederlandse pionier van de quantumcomputer ("Setback for the Dutch pioneer of the quantum computer").
Anyway I consider this a very small success; of course the main success I am hunting for is the acknowledgement that plasma instability in fusion reactors is caused by the acceleration of the electrons by the applied magnetic fields, and as such we cannot halt climate change with fusion reactors. But we have to deal with overpaid university shitholes, so the waiting will be long. The combination of being overpaid while at the same time being too stupid to 'do the math' often does not give a good outcome. As an example of that, look at the last financial crisis that started in 2008: all those overpaid bankers and the troubles they created.
We close this post with a few pictures. The copyright of the content goes to Sam Rentmeester, who likely took the photo of that extremely important Delft-based physics professor:
Wow wow wow, do electrons carry magnetic charge?
Ok, that was it for this post. The next post is about math and we will dive into the total differential for 2D, 3D and 4D complex numbers.
Factorization of the Laplacian (for 2D, 3D and 4D complex numbers).
Originally I wanted to make an overview of all the ways the so-called Dirac quantization condition is represented. That is why in the beginning of this post below you can find some stuff on the Dirac equation and the four solutions that come with that equation. Anyway, Paul Dirac once managed to factorize the Laplacian operator; that was needed because the Laplacian is part of the Schrödinger equation that gives the desired wave functions in quantum mechanics. Well I had done that too once upon a time in a long long past and I remembered that the outcome was highly surprising. As a matter of fact I consider this one of the deeper secrets of the higher dimensional complex numbers. Now I use a so-called Wirtinger derivative; for example on the space of 3D complex numbers you take the partial derivatives in the x, y and z directions and from those three partial derivatives you make the derivative. And once you have that, if you feed it a function you simply get the derivative of such a function.
Now such a Wirtinger derivative also has a conjugate, and the surprising result is that if you multiply such a Wirtinger derivative against its conjugate you always get either the Laplacian or, in the case of the 3D complex numbers, the Laplacian multiplied by the famous number alpha.
That is a surprising result, because if you multiply an ordinary 3D number X against its conjugate you get the equation of a sphere and a cone-like thing. But if you do it with partial differential operators you can always rewrite it into pure Laplacians, so there the cones and spheres are the same things…
In the past I had only done it on the space of 3D numbers, so I checked it for the 4D complex numbers and in about 10 minutes of time I found out it also works on the space of 4D complex numbers. So I started writing this post, and since I wanted to build it up slowly from 2D to 4D complex numbers it grew longer than expected. All in all this post is 15 pictures long and given the fact that people at the present day do not have those long timespans of attention anymore, maybe it is too long. I too have this fault; if you hang out on the preprint archive there is just so much material that often after only five minutes of reading you already go to another article. If the article is lucky, at best it gets saved to my hard disk, and if the article has more luck at some future date I will read it again. For example in the year 2015 I saved an article that gave an overview of the Dirac quantization condition and only now in 2020 have I looked at it again…
The structure of this post is utterly simple: on every complex space (2D, 3D and 4D) I just give three examples. The examples are named example 1, 2 and, not surprisingly I hope, example 3. These examples are the same, only the underlying space of complex numbers varies. In each example number 1 I define the Wirtinger derivative, in example 2 I take the conjugate, while in the third example on each space I multiply these two operators and rewrite the stuff into Laplacians. The reason this post is 15 pictures long lies in the fact that the more dimensions you have in your complex numbers, the longer the calculations get. So it goes from rather short in the complex plane (the 2D complex numbers) to rather lengthy in the space of 4D complex numbers.
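For readers who want the pattern up front: on the 2D complex plane the three examples together fit in one line, $$\frac{\partial}{\partial z} = \frac12\left(\frac{\partial}{\partial x} - i\,\frac{\partial}{\partial y}\right), \qquad \frac{\partial}{\partial \bar z} = \frac12\left(\frac{\partial}{\partial x} + i\,\frac{\partial}{\partial y}\right), \qquad 4\,\frac{\partial}{\partial z}\frac{\partial}{\partial \bar z} = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} = \Delta,$$ because the mixed terms cancel. The 3D and 4D calculations in the pictures below run along the same lines, only longer.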
At last I would like to remark that with those four simultaneous solutions to the Dirac equation it once more shouts in your face: electrons carry magnetic charge and they are not magnetic dipoles! All that stuff like the Pauli matrices that Dirac built his stuff upon is sheer difficult nonsense: the interaction of electron spin with a magnetic field does not go that way. The only reason people in the 21st century think it has some merits is because it is so complicated that people just lose the overview and do not see that it is bogus shit from the beginning till the end. Just like the math professors that neatly keep themselves stupid by not being willing to talk about 3D complex numbers. Well, we live in a free world and there are no laws against being stupid, I just guess.
Enough of the blah blah blah, below are the 15 pictures. And in case you have never ever heard about a thing known as the Wirtinger derivative, try to understand it and maybe come back in five or ten years so you can learn a bit more…
As usual all pictures are 550×775 pixels in size.
Oh oh, the human mind and learning new things. If a human brain learns new things like the Cauchy-Riemann equations or the above factorization of the Laplacian, a lot of changes happen in the brain tissue. And it makes you tired and you need to sleep…
And when you wake up, a lot of people look at their phone and maybe it says: Wanna see those new pictures of Miley Cyrus showing her titties? And all your newly learned things fade into insignificance, because in the morning what is more important than Miley's titties?
Ok my dear reader, you are at the end of this post. See you in the next post.
Three video’s to kill the time.
Originally I wanted to include some video in the previous post, which serves as a teaser post for the impending factorization of the Laplacian for 2D, 3D and 4D complex numbers. But it was already late at night and only adding one video made the post look just as chaotic as I always am…;)
So let's get started with video number 1: Goodbye Determinism, Hello Heisenberg Uncertainty Principle from Irvin Ash. This Irvin guy is one of those professional YouTubers that apparently can make money by throwing out a lot of videos. In his case it is often physics and in my view he only repeats what he has read or seen in other videos. There is not much original thinking in it, but hey, Irvin can make a buck and it keeps him busy.
But in one of the videos he makes such a strange mistake; it is so stupid that it is unbelievable. It is like stating that 1 + 1 = 3 or like 1 – (-1) = 0. Some mistakes or faults are so trivial that no matter what, your own brain instantly recognizes something is going wrong. In this case Irvin explains the double slit experiment and his explanation for the first place where interference disappears is that the waves are out of phase by one wavelength… (Of course the first minimum is where they are out of phase by half a wavelength.) I wonder how you can make such a mistake without your own brain instantly jumping in with 'that is not right'.
Why does his brain not react?
I also made a nice cube from the above screen shot:
I think I was 16 when we had to do such calculations…
And finally the video itself:
The second video is from Sabine Hossenfelder. Unlike Irvin, Sabine has a lot of original thinking to share and as such she is a far cry from a talking book like Irvin Ash. In her video she explains how medical magnetic resonance devices work. Back in the time when I figured out that it is just not logical on all kinds of levels that electrons and other spin half particles are magnetic dipoles, for me it was important to find alternative explanations for things like MRI devices. In physics it is well known that accelerating electrons and protons give off electromagnetic radiation; if there is zero acceleration no radiation is emitted. So the explanation as given in the video cannot be right: it is about magnetic moments that start spinning round and 'therefore' give off radiation. The problem with this is: there is no real acceleration, so what explains the emitted radiation?
If protons and electrons carry magnetic charge, that is if they are magnetic monopoles, all of a sudden there is room for acceleration and as such you can observe those resonance frequencies. Compare it to a musical instrument: if you have a guitar with zero tension on the strings, it will never produce any sound, let alone some cute music. In MRI scans there is also a static magnetic field; only when the protons and electrons are magnetic monopoles does this 'bring the tension' needed for the resonance to work in the first place. Sorry Sabine, your version of physical reality has a lot of holes in it because it is based on the Gauss law for magnetism, and that law says that no magnetic monopoles exist…
Your explanation does not hold water, Sabine; why is the static magnetic field needed?
In case you never dived into the niceties of MRI scanners, please see the video. And don't forget to be a bit critical: if protons are really magnetic dipoles, then what the fuck is that static magnetic field doing? But if protons (and electrons) carry magnetic charge, all of a sudden things become logical. Not that I expect even one of the professional physics professors to say during my lifetime that I am right, but there is no use in getting emotional. All I do is repeat the question that the accepted common knowledge cannot answer: if a proton has two magnetic poles, then why do you need the static magnetic field?
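For the record, in the textbook picture the resonance frequency is simply proportional to that static field via the Larmor relation f = gamma_bar * B, with gamma_bar of about 42.58 MHz per tesla for protons; a two-line check of the numbers (my own sketch, not from Sabine's video):

```python
# Larmor frequency of protons in a typical clinical 3 T MRI magnet.
gamma_bar = 42.58e6   # proton gyromagnetic ratio / (2*pi), in Hz per tesla
B = 3.0               # static field strength in tesla
print(gamma_bar * B)  # ~1.28e8 Hz, i.e. about 128 MHz
```

So whichever mechanism you prefer, it is the static field that sets the frequency of the resonance.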
The third video is about how Paul Dirac succeeded in factorizing the Laplacian differential operator. It is far different from how I managed to do that; I used so-called Wirtinger derivatives and multiplied those against their conjugates and voilà: there is your factorization. No, Paul Dirac used 4×4 matrices that anti-commute and as such he was able to get rid of a nasty square root. Physics people go totally bonkers over that calculation; I do not. Not that I do not like it, but Paul made the mistake of basing his matrices on the Pauli matrices for electron spin. And the Pauli matrices can't be correct because they are based on the flawed idea that electrons are magnetic dipoles.
There is a funny anecdote going round about Paul Dirac. It says: there is no God and Dirac is his prophet. But seriously: if electrons were magnetic dipoles you would instantly run into dozens of weird problems. Take permanent magnets: they are explained by the spins of the electrons aligning themselves, and just as if you had a bunch of tiny magnets they will form a large permanent one. But in chemistry an electron pair with the same spin is known as an anti-bonding electron pair. How can the alignment of electrons in permanent magnets reinforce the magnetism while in chemistry that same alignment gives an anti-bonding electron pair? Once more: I only use logic. It is logical that electrons, protons and neutrons carry net magnetic charge and as such are always magnetic monopoles.
Enough of the blah blah blah, here is the last video of this post:
At last a ‘cube picture’ for the Dirac thing:
Ok, that was all I had to say. Thanks for your attention and don't forget to turn enough math professors into bio-diesel. Everybody knows that bio-diesel made from math professors is the finest quality there is on this entire earth… So good luck with the hunt for math professors…;)
Teaser for the next post on Wirtinger derivatives.
Man oh man, the previous post was from 12 Nov so time flies like crazy. Originally I wanted to write a post on a thing you can look up for yourself: the Dirac quantization condition. I have an old pdf about that and it says that it is related to the exponential circle on the complex plane. Although the pdf is from the preprint archive, it is badly written and contains a ton of typos, and on top of it: the way the Dirac quantization is formulated is nowhere to be found on the entire internet. In the exponent of the exponential circle there is iqg, where q represents an elementary electric charge and g is the magnetic monopole charge according to Paul Dirac. Needless to say I was freaked out by this because I know a lot about exponential curves, but all in all the pdf is written & composed so badly I decided not to use it.
After all, when I say that electrons carry magnetic charge and do not have bipolar magnetic spin, the majority of professional physics professors will consider this a very good joke. And if I come along with a pdf with plenty of typos, the professional professors will view that as validation that I am the one who has cognitive problems and that of course they are the fundamental wisecracks when it comes to understanding electron spin. Our Pauli and Dirac matrices are superior math; in the timespan of a hundred years nothing has come close to them, they will say.
Here is a screen shot of what freaked me out:
Furthermore I was surprised that the so-called professional physics professors have studied stuff like 'dyons'. So not only a Dirac magnetic monopole (without an electric charge but only a magnetic charge); a dyon is a theoretical particle that has both electric and magnetic charge. But hey Reinko, isn't that what you think of the electron? There are two kinds of electrons: all electrons have the same electric charge but the magnetic charge comes in two variants.
There are so many problems with the idea that electrons are magnetic dipoles, but the profs, if you give them a fat salary, will talk nonsense like a banker in the year 2007.
So I decided to skip the whole Dirac quantization stuff and instead focus a bit on factorizing the Laplacian differential operator. In the past I have written about that a little bit, so why not throw in a Google search, because after all I am so superior that without doubt my results will be found on page 1 of such a Google search! In reality it was all 'Dirac this' and 'Dirac that' when it comes to factorization of the Laplacian on page 1 of the Google search. So I understood the physics professors have a serious blockade in their brains, because this Dirac factorization is only based on some weird matrices that anti-commute. These are the Pauli and Dirac matrices and it is cute math but it has zero relation to physical reality, like the electron pairs that keep your body together.
No more of the Dirac nonsense! I sat down and wrote the factorization of the Laplacian for 4D complex numbers on a sheet of paper. Let me skip all this nonsense of Dirac and those professional physics professors and bring some clarity into the factorization of the Laplacian.
It took at most 10 minutes of time, it is just one sheet of paper with the factorization. I hope this is readable:
Anyway it factorizes the Laplacian…
So that is what I have been doing since 12 April, since the last post on this website. I have worked my way through the 2D complex plane, the 3D complex numbers, and finally I will write down what cost me only 10 minutes of time a few weeks ago…
In a few days the post will be ready, maybe this week. If not, next week & in the meantime you are invited to think about electrons and why it is not possible that they are magnetic dipoles.
See you in the next post. |
0ea9bad036a23554 | Background and Motivation
The mathematical sciences play a vital part in all aspects of modern society. Without research and training in mathematics, there would be no engineering, economics or computer science; no smart phones, MRI scanners, bank accounts or PIN numbers. Mathematics is playing a key role in tackling the modern-day challenge of cyber security and in predicting the consequences of climate change, while manufacturing sectors such as the automotive and aerospace industries benefit from superior virtual design processes. Likewise the life sciences sector, with significant potential for economic growth, would not be in such a strong position without mathematics research and training, providing the expertise integral to the development of areas such as personalised healthcare and pharmaceuticals, and underpinning the development of many medical technologies. The emergence of truly massive datasets across most fields of science and engineering increases the need for new tools from the mathematical sciences.
One of the classic ways in which mathematical science research plays a role in the economy is by turning collected data into understanding, using tools and techniques that enable the discovery of new relationships or models. Modelling of physical phenomena dates back several centuries, and well-known systems of equations bearing the names of Maxwell, Navier-Stokes, Korteweg-de Vries and, more recently, the Schrödinger equation, plus many others, are now well established. But it was not until the advent of computers in the middle of the previous century, and the development of sophisticated computational methods (like iterative solution methods for large sparse linear systems), that this could be taken to a higher level, by performing computations using these models. Software tools with advanced computational mathematical techniques for the solution of the aforementioned systems of equations have become commonplace, and are heavily used by engineers and scientists.
Mirroring this activity is an increased awareness in society and industry that mathematical simulation is indispensable for addressing the challenging problems of our times. Industrial processes, economic models and critical events like floods, power failures or epidemics have become so complicated that their realistic description requires not the simulation of a single model, but rather the co-simulation of various models. Better scientific understanding of the factors governing these will provide routes to greater innovation power and economic well-being across an increasingly complex, networked world with its competitive and strongly interacting agents. Industry, but also science, is highly dependent on the development of virtual environments that can handle the complex problems that we face today, and in the future.
For example, if the origins of life are to be explained, biologists and mathematicians need to work together, and most of the time spent will be on evaluating and simulating the mathematical models. Using the mathematics of evolutionary dynamics, the change from no life to life (referring to the self-replicating molecules dominating early Earth) can be explained. Another example is the electronics industry, which all of us rely on for new developments in virtually every aspect of our everyday life. Innovations in this branch of industry are impossible without the use of virtual design environments that enable engineers to develop and test their complex designs behind a screen, without ever having to go into the time-consuming (several months) process of prototyping.
Principles of computational science and engineering rooted in modern applied mathematics are at the core of these developments, subjects that are set to undergo a renaissance in the 21st century. Indeed, no less a figure than Stephen Hawking is on record as saying that the 21st century will be the century of complexity. Another great figure, yet young, is Fields medallist Terence Tao, who was a major contributor to the recently published document entitled "The mathematical sciences in 2025", stating: "Mathematical sciences work is becoming an increasingly integral and essential component of a growing array of areas of investigation in biology, medicine, social sciences, business, advanced design, climate, finance, advanced materials, and many more – crucial to economic growth and societal well-being".
Growing computing power, nowadays including multicore architectures and GPUs, does not by itself provide the solution to the ever growing demand for more complex and more realistic simulations. In fact, it has been demonstrated that Moore's Law, describing the advances in computing power over the last 40 years, equally holds for mathematical algorithms. Hence, it is important to develop both faster computers and faster algorithms at the same time. This is essential if we wish to keep up with the growing demands of science and technology for more complex simulations. Traditionally, algorithmic speed-ups have come from developments in the area of linear system solution, in which iterative algorithms developed since the 1970s have been prominent and very effective. Since the start of the new century, however, another powerful development has been seen in mathematics, as well as in the systems and control area. This field, which we term 'model reduction' for simplicity (we will detail this further in the next section), aims at capturing the essential features of models, thereby drastically reducing the size of the problem to be solved. As it holds many promises, this will be the basis for the challenge addressed in this COST Action. |
f3b226121a796823 |
I Difference between field and wave equation
1. Jun 25, 2017 #1
Hello! I am reading some introductory stuff on the Klein-Gordon equation and I see that the author mentions sometimes that in a certain context the K-G equation "is a classical field equation, not a quantum mechanical field equation". I am not sure I understand. What is the difference between the two? Isn't the wave function a field, as it assigns to each point in space a number? I.e. can someone give me a clear explanation of what is the difference between a classical field, the wave function of a particle and a quantum field? Thank you!
3. Jun 25, 2017 #2
Staff: Mentor
Please give the specific reference you are reading.
4. Jun 25, 2017 #3
Staff: Mentor
What context? Please be specific.
5. Jun 26, 2017 #4
Staff: Mentor
You may be referring to the following:
The Klein-Gordon equation with zero mass and no sources or sinks is really Maxwell's equations in disguise, which is very interesting and may be telling us something - maybe something important. But the KG equation can be derived from simple symmetry considerations, so maybe that's all that's going on.
Classically fields reveal themselves by what they do eg exerting forces on charged particles or in the case of gravity exerting forces on mass. They sometimes can be explained, as in the case of gravity, by something more fundamental but still classical such as space-time curvature.
In standard QM the wave-function is a field that is technically the representation of a quantum state in terms of the position basis, without detailing exactly what that is, because you really need the advanced concept of Rigged Hilbert Spaces to do it. If you are interested, some threads here have discussed it.
In Quantum Field Theory the field is a field of quantum operators. It turns out that mathematically this is the same as the second quantization formulation of QM, which is one of the formulations of QM.
That's how 'particles' come out of QFT.
6. Jun 26, 2017 #5
No, the wave function is a complex field defined on the configuration space. There is only one particular exceptional case where the configuration space is the same as the usual space, namely a single point particle without any additional properties like spin or charge. If you have, instead, two point particles, the configuration space is already 6-dimensional, because the configuration is defined by two points in usual space, the positions of the two particles.
Instead, classical wave equations are wave equations on usual three-dimensional space ##\mathbb{R}^3##. The classical example is the electromagnetic field. The Klein-Gordon equation is even simpler, given that it has only a single component, which makes it an equation for a scalar field.
There is also another difference. You cannot measure the wave function - what you can measure are probabilities, which can be computed given the wave function. Instead, classical fields can be measured: The electric and magnetic field can be easily measured. There may be some problems with this if the field is a potential, and what is measurable is not the potential itself, but only potential differences. Nonetheless, even in this case a classical field is much closer to something measurable.
7. Jun 26, 2017 #6
Staff: Mentor
Not really, no. A scalar field and a vector field are not identical. Similar, yes, but not identical.
8. Jun 26, 2017 #7
Science Advisor
Gold Member
2017 Award
I would say it's maybe a careful book, indicating right from the beginning that the Klein-Gordon equation cannot be interpreted like the non-relativistic Schrödinger equation, as describing the position probability of a single particle. The reason is that the Noether current of phase invariance for the complex field is not positive semi-definite and thus cannot be interpreted as a probability distribution.
Also, already for the free-particle case the single-particle energy is not bounded from below, unlike for the Schrödinger equation, since for the plane-wave modes of the KG equation (i.e., the would-be momentum eigenstates for a particle) you have ##E=\pm \sqrt{\vec{p}^2+m^2}## while for the Schrödinger equation it's uniquely ##E=\vec{p}^2/(2m)##.
The physical reason for all this trouble is well-known nowadays: If you have interacting particles (and only these are interesting, because they are observable), scattering with relativistic energy transfers (i.e., energies larger than the mass of the particles), you can always easily destroy and create particles, and this is what in fact happens in accelerators like the LHC, where you produce a lot of new particles when colliding two protons at very high energies (of now around 13 TeV).
That's why relativistic QT has to describe the typical situation where the particle number is not conserved, and thus a many-body description is necessary. The most convenient way is to formulate this in terms of quantum field theory, and indeed the most successful QT ever is the Standard Model of elementary particles. The use of QFT allows to solve the above mentioned problems: Quantizing the Klein-Gordon field, you can decompose it (for the free-field case) into plane-wave modes. The trick to avoid the interpretation problem for the field modes with negative frequencies is to just write a creation operator in front of them. This leads to the prediction that the Klein-Gordon field describes two kinds of particles: The one occurring in ##\hat{\phi}(x)## with an annihilation operator in front of the positive-frequency solutions (usually called "particle states") and the one with a creation operator in front of the negative-frequency solutions (usually called "anti-particle states"). It turns out that the particles and anti-particles have precisely the same mass.
Now the conserved Noether current from the symmetry under redefinition of the phase factors (a U(1) symmetry) acquires a physical meaning: It's related to positive charges for the particle modes and to negative charges for the anti-particle modes. If you include interactions and want the U(1) symmetry to stay intact you always have some function of ##\hat{\phi}^{\dagger} \hat{\phi}##, i.e., Lagrangians like ##\mathcal{L}_{\text{int}}=-\lambda (\hat{\phi}^{\dagger} \hat{\phi})^2##. Then in the interactions the net charge always stays constant in scattering processes, and thus the net-particle number (i.e., number of particles minus number of anti-particles) stays constant, but it doesn't need to be a positive number to be interpreted as a charge (the electric charge is an example of this).
If in addition you make the interactions local, i.e., make the Lagrangian polynomial in the field operators and their 1st derivatives with respect to the spacetime variables, and demand a Hamiltonian bounded from below, you necessarily get some very fundamental consequences: One is that you can work out wave equations for particles of any spin. Klein-Gordon fields describe particles with spin 0. Weyl or Dirac fields describe particles with spin 1/2, vector fields particles with spin 1, etc. etc. Then it turns out that when quantizing such kinds of QFTs you must have bosons for integer spin (i.e., canonical commutation relations for fields and canonical field momenta) and fermions for half-integer spin (i.e., canonical anticommutation relations for fields and canonical field momenta) in order to have an energy bounded from below. This holds true for all particles observed so far (be they "elementary" or "composite" particles).
Another general consequence is the PCT theorem, according to which any Poincare-invariant local QFT with an energy bounded from below is automatically also invariant under the PCT transformation, i.e., the transformation consisting of spatial reflections (P for "parity"), charge conjugation (C, which exchanges each particle by its anti-particle), and time-reversal transformation (T) is always a symmetry. Also this theorem has been tested and never failed so far (while the weak interaction breaks P, T, and CP invariances, it still obeys PCT invariance).
9. Jun 28, 2017 #8
Staff Emeritus
Science Advisor
What's a little strange to me is that on the one hand, the Klein-Gordon equation is the natural relativistic generalization of the Schrodinger equation. On the other hand, the interpretation of [itex]|\psi|^2[/itex] as a probability density, which is the heart of the interpretation of the wave function in QM, doesn't apply in any straight-forward manner to the solutions of the KG equation. The KG equation is best interpreted as a field equation for a relativistic scalar field, not as describing the evolution of a probability amplitude. (Of course, probabilities pop up again when it's reinterpreted as a quantum field, rather than a classical field.)
10. Jul 1, 2017 #9
Science Advisor
Gold Member
2017 Award
The key to the interpretation of ##\psi^* \psi## as a probability distribution in the case of the Schrödinger equation is that it is the density of a conserved Noether charge. The symmetry is symmetry under multiplication of ##\psi## with a constant phase factor. The application of Noether's theorem to the Schrödinger equation leads to the corresponding current
$$\vec{j}=-\frac{\mathrm{i}}{2m} (\psi^* \vec{\nabla} \psi-\psi \vec{\nabla} \psi^*).$$
It's easy to prove from Schrödinger's equation that this fulfills the continuity equation with ##\rho=\psi^* \psi##. Indeed we have
$$\partial_t (\psi^* \psi)=\dot{\psi}^* \psi + \psi^* \dot{\psi}=\mathrm{i} (\psi \hat{H} \psi^*-\psi^* \hat{H} \psi)=\frac{\mathrm{i}}{2m} (\psi^* \Delta \psi-\psi \Delta \psi^*)=-\vec{\nabla} \cdot \vec{j},$$
which means that
$$\int_{\mathbb{R}^3} \mathrm{d}^3 \vec{x} \psi^* \psi=\text{const}$$
under the time evolution governed by the Schrödinger equation. Since ##\psi^* \psi## is interpreted as the probability density for the position of the particle the constant is normalized to ##1##, and once normalized it stays normalized for all times, as it must be.
In the KG case the same arguments hold true too, but the conserved four-current reads
$$j^{\mu} = \mathrm{i} [\phi^* \partial^{\mu} \phi - \phi \partial^{\mu} \phi^*],$$
i.e., the Noether charge is not positive definite, and thus you cannot interpret ##j^0## as a probability distribution for the position of the particle.
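A quick symbolic check of that last statement for a single plane-wave mode (a minimal sympy sketch, in units with ##\hbar=c=1##): for ##\phi = e^{-\mathrm{i}(Et - px)}## the density comes out as ##j^0 = 2E##, which flips sign on the negative-frequency solutions.

```python
import sympy as sp

t, x, E, p = sp.symbols('t x E p', real=True)
phi = sp.exp(-sp.I * (E * t - p * x))   # plane-wave Klein-Gordon mode

# j^0 = i (phi* d_t phi - phi d_t phi*)
j0 = sp.I * (sp.conjugate(phi) * sp.diff(phi, t)
             - phi * sp.diff(sp.conjugate(phi), t))
print(sp.simplify(j0))   # 2*E: negative whenever E < 0
```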
|
91c5e804de1626d3 |
B Dirac Equation vs. Schrodinger Equation
1. Jul 18, 2016 #1
The Dirac equation is the more generalized form of the Schrodinger equation and accounts for relativistic effects of particle motion (say an electron) by using a second order derivative for the energy operator. If you have an electron that is moving slowly relative to the speed of light, then you can get away with the Schrodinger equation, which is first order in time.
This makes no sense to me. Why is it that you need to bump the energy operator up to second order once you get moving at close to the speed of light? Conversely, why does the energy shift to a first order derivative when the electron slows down? I haven't been able to find an explanation for this.
3. Jul 18, 2016 #2
Staff Emeritus
Science Advisor
Homework Helper
Gold Member
2017 Award
The second order derivative is necessary for Lorentz invariance.
When you go to the non relativistic limit from the relativistic case (I suggest looking at the Klein-Gordon equation first), you can get rid of an overall phase related to the rest energy (ie, mass). The leading correction is a first order derivative wrt time.
4. Jul 18, 2016 #3
Thanks for the response, Orodruin, but that pithy explanation doesn't advance my understanding much of the situation. Admittedly, I'm a bit of an amateur in these affairs, hence the "B" rating in the title, so perhaps you or someone else can unpack your response some so I can integrate it into my level of understanding.
What is that level? Well, here's what I at least think I understand. We have three basic or main wave equations in QM. The famous one is the Schrodinger equation which essentially mimics the classical-physics energy-mass-momentum relationships. Total energy=kinetic energy + potential energy. In the Hamiltonian formulation I think it's H=P^2/2m + V, or something like that.
So, in the classical formulation, H is first order, and P is second order. So Schrodinger thought to mimic that by introducing the energy and momentum operators and, voilà, you have the Schrodinger equation.
What about the relativistic version where we have E^2=P^2+M^2 (setting c to 1)? Well, at first glance we could just replace E and P with the quantum operators that represent both and all should be good. This is the Klein-Gordon equation but it doesn't work for fermions. Why? Who knows, but Dirac's solution was to try and find a solution to Einstein's relativistic formula, E^2=P^2+M^2, by formulating an equation that saw E as a first order term, as in the classical relation, rather than the Einsteinian relativistic second order relation. The result of his efforts was the Dirac matrices, of course, and I understand the logic behind those fairly well.
What I missed, though, was the part where "The second order derivative is necessary for Lorentz invariance." I don't know what this means. I think I understand pretty well SR and Lorentz invariance/covariance/transforms, at least as it relates to the 4-vector and d(tau)^2=dt^2-dx^2 variety, but I'm not sure what you mean by "The second order derivative is necessary for Lorentz invariance." Again, I'm an amateur at this so I need it explained in relation to the amateurish understanding of the situation as I've outlined above. Any help would be fantastic!
5. Jul 19, 2016 #4
Science Advisor
Homework Helper
If the wavefunction is a scalar with respect to the Lorentz group, then the derivative operator acting on it should be a scalar, too. Hence it can only be of 0th, 2nd, 4th, 6th, etc. order. The 0th order term is the mass term, the 2nd is the D'Alembertian; 4th and higher are non-physical.
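Putting the allowed 0th- and 2nd-order invariants together gives precisely the Klein-Gordon equation (in units with ##\hbar = c = 1##): $$\left(\Box + m^2\right)\varphi = 0, \qquad \Box = \partial_t^2 - \nabla^2.$$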
6. Jul 20, 2016 #5
Staff Emeritus
Science Advisor
Umm. The Dirac equation only involves first order derivatives. Are you thinking of the Klein-Gordon equation?
7. Jul 23, 2016 #6
Thanks for all the replies, but these one sentence responses are not really helping me answer my question. These seem to be more of dictionary definitions of the differences between the various wave functions rather than an explanation of why they are different per the specific concern I had. Which is all fine and good, and I'm glad everyone is giving each other in this thread virtual "high fives" with their "likes," but I want to appeal to someone who can demonstrate their didactic prowess and go beyond the phrase "it has to do with the Lorentz group" to try to explain this to a "B" level inquirer.
Most of the resources I've looked at to try to glean an answer to this question do not mention the "Lorentz" group or invariance when they discuss the motivation for the Dirac equation. They just say that Dirac was looking for a first order solution to Einstein's relativistic equation that the Klein-Gordon solution didn't seem to provide. So I have no idea how the "Lorentz group" solves this problem, and simply stating that it does does not help me. So someone please unpack this.
FF to 16 minutes in:
Nowhere in these discussions are the statement that Dirac developed his equations to satisfy some Lorentz relationship. So please explain.
Maybe I'm wrong, but my question seems fairly straightforward and legitimate. Why is an electron moving at a slow speed modeled accurately by a first order time derivative, while a fast moving electron can only be modeled by a second order time derivative?
8. Jul 23, 2016 #7
Ok, so we don't incur a further hiccup with this misinterpretation of my quandary, let me clarify that Dirac's equations involve a first order derivative solution to E^2=p^2+m^2 through the process outlined in the first video I posted above. The Klein-Gordon equation left all the quantities squared. But the Schrodinger equation from the outset unapologetically used a first order time derivative while simultaneously setting the space derivative to p^2/2m. Where was the justification for this? Simply because it fit with the classical energy-momentum relationship? Please, someone give me a brief history of this; it seems so basic I don't understand why someone can't explain it to me.
9. Jul 23, 2016 #8
Apologies if this is a non-answer as I am unqualified, but how Dirac came to the final solution is moot.
The thing to ask is: is it correct in known cases and can it predict stuff?
Physics is the most creative act of the human mind; the OP is asking for someone to explain the creative process, and nobody has ever been able to explain creativity.
10. Jul 23, 2016 #9
A. Neumaier
Science Advisor
And the relativistic classical energy-momentum relationship leads in the same way to the Klein-Gordon equation and its variants, such as the Dirac equation. From the outset, unapologetically.
11. Jul 24, 2016 #10
Think of it this way - the Schrödinger equation is of first order with respect to time, but second order with respect to spatial derivatives. This is a problem once you go into the realm of relativistic physics, quite simply because in relativity time and space are on equal footing, so the wave equation must reflect this. Klein-Gordon fixes this (it is fully Lorentz covariant), but it does not account for spin. Hence the need for the various other relativistic wave equations.
12. Jul 24, 2016 #11
Thank you Markus, I think you're the first person who has actually understood what my quandary is. I also agree with your statements above. However, regarding those statements, this again is my central question: why are time and space on an equal footing in relativistic physics but not in non-relativistic physics? (To use your phrasing.) Perhaps this is a question that doesn't have an answer at the moment, which is fine, but if that's the case, then I wish people would just say that instead of throwing out terms like "Lorentz group/invariance," and my favorite, D'Alembertian :redface:, and not elaborating on them.
13. Jul 24, 2016 #12
Staff: Mentor
Because it works, i.e. leads to predictions that agree with old and new experiments to date. If there were serious experimental evidence otherwise, then physicists (as a community) would eventually change their minds. And there are theorists who consider such possibilities, and experimentalists who look for them. Do a Google search for "Lorentz violations." If something like that turns out to be true, it will be a sure route to a Nobel prize.
Last edited: Jul 24, 2016
14. Jul 24, 2016 #13
Staff Emeritus
Science Advisor
The first-order version is an approximation to the second-order version, which should be valid for low energies. I don't think there is anything more complicated to it than that.
Start with the relativistic energy-momentum relation:
[itex]E = \sqrt{p^2 c^2 + m^2 c^4}[/itex]
You can expand this as a power series in [itex]p[/itex]:
[itex]E = mc^2 + \frac{1}{2} \frac{p^2}{m} - \frac{1}{8c^2} \frac{p^4}{m^3} + ...[/itex]
Now, go to quantum mechanics by letting this be an operator equation acting on a wave function [itex]\Psi[/itex]:
The full Schrodinger equation is:
[itex]H \Psi = i \hbar \frac{\partial \Psi}{\partial t}[/itex]
Then you can define a nonrelativistic Hamiltonian [itex]\mathcal{H}[/itex] to be [itex]H - mc^2[/itex], and similarly an auxiliary nonrelativistic wave function [itex]\psi[/itex] to be [itex]\Psi e^{\frac{i mc^2 t}{\hbar}}[/itex]. Then [itex]\psi[/itex] obeys the equation:
[itex]\mathcal{H} \psi = i \hbar \frac{\partial \psi}{\partial t}[/itex]
If you only keep the lowest-order terms in [itex]\frac{1}{c^2}[/itex], then you have the nonrelativistic Schrodinger equation.
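A quick symbolic check of the expansion above (a sympy sketch, with the symbols chosen for illustration):

```python
import sympy as sp

p, m, c = sp.symbols('p m c', positive=True)
E = sp.sqrt(p**2 * c**2 + m**2 * c**4)

# Expand around p = 0: rest energy + Newtonian term + first correction.
print(sp.series(E, p, 0, 6))
# -> c**2*m + p**2/(2*m) - p**4/(8*c**2*m**3) + O(p**6)
```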
15. Jul 24, 2016 #14
Science Advisor
In non-relativistic physics, the laws are invariant under Galilean transformations. For example, if a law like [itex]F = ma [/itex] is valid in a certain coordinate system and you rotate your axes in space, the law is also valid in the new coordinate system. The important thing is that no matter how you change your space coordinates, you can always choose your time coordinate independently.
In relativistic physics, the laws are invariant under Lorentz transformations. These include not only rotations in space but also something similar to rotations in spacetime. So time and space are no longer independent. An equation which involves derivatives of different orders for time and space can't be invariant under such transformations.
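To see the mixing explicitly: a boost with velocity $v$ along $x$ acts as $$x' = \gamma\,(x - v t), \qquad t' = \gamma\left(t - \frac{v x}{c^2}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},$$ so the new time coordinate is a mixture of the old time and space coordinates, unlike in a Galilean transformation, where simply $t' = t$.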
Last edited: Jul 24, 2016
16. Jul 24, 2016 #15
Science Advisor
Homework Helper
Another very simple way to rephrase kith's second paragraph is: because special relativity has an invariant speed (which is also the maximum value attainable by matter particles), both space and time become relative, hence one can say they are on equal footing.
17. Jul 24, 2016 #16
Ok, I get the concept of the "invariant interval" when it comes to world-lines in special relativity. Actually, in a bit of lost history, Einstein initially wanted to call his model the "Invariante" theory, or the theory of invariance, seeing as the minus sign between space and time always kept proper time constant and the squared norm of the 4-momentum at M^2C^2, etc:
But, still, this doesn't answer my question. You can make a statement that, at high speeds, both space and time become relative, but that doesn't address the question as to why this is so, and that, at the limit of non-relativistic speeds, a first order time derivative works just fine.
Maybe stevendaryl has the right answer, maybe it has to do with a Taylor series expansion kind of thing. I'm terrible at math so I'll have to unpack that to see if it satisfies me. It wouldn't be the first time. I was highly rewarded when I worked out the Taylor series expansion of E=MC^2 to find the second order (or was it first?) kinetic energy term and the higher-order adjustments (way cool). Perhaps this is the same deal..
18. Jul 24, 2016 #17
Because the laws of physics - including your wave equation - are the same in all inertial frames of reference. Hence, the line element must be preserved when you go from one frame into another; this is possible only if you treat time and space equally.
|
072cf5be0e77c5e8 |
Time-independent Schrödinger equation, normalizing
1. Jun 20, 2017 #1
An electron coming from the left encounters/is trapped in the following potential:
-a<x<0; V=0
0<x<a; V=V0
infinity elsewhere
the electron has energy V0
a)Write out the wave function
b) normalize the wave function
2. Relevant equations
3. The attempt at a solution
for -a<x<0
and for 0<x<a
and 0 elsewhere
I used the sine and cosine because it seemed better for the continuity condition at x=0; if you would use the exponential form, please explain why.
so this is what my teacher expects for a).
for b)
applying continuity conditions at x=0 I get:
and so: $$\int_{-a}^{0}|\Psi(x)|^2\,dx=1$$
I'm a bit confused here: is this the norm or the modulus? I think it's the norm, and if so it might have been worth it to write the wave function in exponential form, so before I transcribe this big integral please clarify this for me.
Furthermore, this should look like a particle trapped in a box, correct? I don't really understand what happens when E=V. I understand the probability part: it decays linearly further inside the step, correct?
And what about if E>V0, is it a particle trapped in a box, but in the 0-a area the amplitude decreases? And the allowed energy levels for that area start at V0? What about penetration? And when E is smaller, what happens?
Thank you!
Edit: would it be easier if I shifted the potential by a so that it is in the range [0; 2a]?
3. Jun 24, 2017 #2
##B=C## isn't quite correct. You should also apply continuity conditions for ##\psi(x)## at ##x=-a## and ##x=a##.
The normalization requirement is
$$\int_{-\infty}^\infty \lvert \psi(x) \rvert^2\,dx = 1.$$ In this problem, since the wave function vanishes for ##|x|>a##, you have
$$\int_{-a}^a \lvert \psi(x) \rvert^2\,dx = 1.$$
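For the actual numbers, here is a minimal numerical sketch (an illustration under stated assumptions, not part of the original thread): it takes ##\psi = A\sin(k(x+a))## on ##(-a,0)## and, since E = V0 makes ##\psi'' = 0## there, a linear ##\psi = C(a-x)## on ##(0,a)##; matching value and slope at x = 0 gives tan(ka) = -ka, after which A is fixed by the normalization integral above:
[code]
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

a = 1.0
# first root of tan(q) + q = 0 with q = k*a in (pi/2, pi)
q = brentq(lambda q: np.tan(q) + q, np.pi / 2 + 1e-6, np.pi - 1e-6)
k = q / a
C_over_A = -k * np.cos(k * a)          # slope continuity at x = 0

def psi(x, A=1.0):
    # A*sin(k(x+a)) on (-a,0); linear piece A*C_over_A*(a-x) on (0,a)
    return A * np.sin(k * (x + a)) if x < 0 else A * C_over_A * (a - x)

norm2, _ = quad(lambda x: psi(x) ** 2, -a, a)
A = 1.0 / np.sqrt(norm2)               # normalized amplitude
print(q, A)
[/code]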
|
34b00836785c4e71 | Tuesday, September 20, 2005
Faster than light or not
I don't know about the rest of the world but here in Germany Prof. Günter Nimtz is (in)famous for his display experiments that he claims show that quantum mechanical tunneling happens instantaneously rather than according to Einstein causality. In the past, he got a lot of publicity for that and according to Heise online he has at least put out a new press release.
All these experiments are similar: First of all, he is not doing any quantum mechanical experiments but uses the fact that the Schrödinger equation and the wave equation share similarities. And as we know, in vacuum, Maxwell's equations imply the wave equation, so he uses (classical) microwaves as they are much easier to produce than matter waves of quantum mechanics.
So what he does is to send a pulse of these microwaves through a region where "classically" the waves are forbidden, meaning that they do not oscillate but decay exponentially. Typically this is a waveguide with diameter smaller than the wavelength.
Then he measures what comes out at the other side of the wave guide. This is another pulse of microwaves which is of course much weaker and so needs to be amplified. Then he measures the time difference between the maximum of the weaker pulse and the maximum of the full pulse when the obstruction is removed. What he finds is that the weak pulse has its maximum earlier than the unobstructed pulse and he interprets that as the pulse having travelled through the obstruction at a speed greater than the speed of light.
Anybody with a decent education will of course immediately object that the microwaves propagate (even in the waveguide) according to Maxwell's equations which have special relativity built in. Thus, unless you show that Maxwell's equations do not hold anymore (which Nimtz of course does not claim) you will never be able to violate Einstein causality.
For people who are less susceptible to such formal arguments, I have written a little program that demonstrates what is going on. The result of this program is this little movie.
The program simulates the free 2+1 dimensional scalar field (of course again obeying the wave equation) with Dirichlet boundary conditions in a certain box that is similar to the waveguide: At first, the field is zero everywhere in the strip-like domain. Then the field on the upper boundary starts to oscillate with a sine wave and indeed the field propagates into the strip. The frequency is chosen such that the wave can in fact propagate in the strip.
(These are frames 10, 100, and 130 of the movie, further down are 170, 210, and 290.) At about the middle the strip narrows like in the waveguide. You can see that the blob of field in fact enters the narrower region but dies down pretty quickly. In order to see anything, in the display (like for Nimtz) I amplify the field in the lower half of the picture by a factor of 1000. After the obstruction ends, the field again propagates as in the upper bit.
What this movie definitely shows is that the front of the wave (and this is what you would use to transmit any information) everywhere travels at the same speed (that of light). All what happens is that the narrow bit acts like a high pass filter: What comes out undisturbed is in fact just the first bit of the pulse that more or less by accident has the same shape as a scaled down version of the original pulse. So if you are comparing the timing of the maxima you are comparing different things.
Rather, the proper thing to compare would be the timing when the field first gets above a certain level, one that is actually reached by the weakened pulse. Then you would find that the speed of propagation is the same independent of the obstruction being there or not.
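For readers who want to play with this themselves, here is a minimal Python sketch in the same spirit (a reconstruction of the setup described above, not the original C program; grid sizes, frequency and geometry are illustrative guesses): a leapfrog finite-difference scheme for the scalar wave equation on a strip with Dirichlet walls that narrows in the middle, driven by a sine wave on the upper boundary.

import numpy as np

ny, nx, nt = 400, 60, 1500
c, dx = 1.0, 1.0
dt = 0.5 * dx / c                      # well below the 2D stability limit

# Dirichlet mask: True inside the guide; the middle section narrows to a
# width below half the driving wavelength, so the wave is evanescent there
mask = np.zeros((ny, nx), dtype=bool)
mask[:, 1:-1] = True
mask[150:250, :] = False
mask[150:250, 25:35] = True

u_old = np.zeros((ny, nx))
u = np.zeros((ny, nx))
omega = 2 * np.pi / 30.0               # wavelength 30: propagates in the wide part

for n in range(nt):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx**2
    u_new = 2 * u - u_old + (c * dt)**2 * lap
    u_new[~mask] = 0.0                 # hard walls
    u_new[0, 1:-1] = np.sin(omega * (n + 1) * dt)   # driven upper boundary
    u_old, u = u, u_new
# To see anything beyond the narrow section, amplify that part of the field
# (e.g. by a factor of 1000) when plotting, as in the movie.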
Update: Links updated DAMTP-->IUB
Ralf Buelow said...
Hello: I am working at the museum in Mannheim where the most recent experiment of Professor Nimtz' is displayed (well, I'm actually the guy who asked him to install it in our Einstein exhibition), and I can assure you that it is not identical with the old Mozart experiment. See www.heise.de/newsticker/meldung/64004 for details.
Sincerely Ralf Buelow
Wolfgang said...
Nice exercise!
What program did you use to get from the output of your C program to the mpg file?
Robert said...
That's 'convert' from the ImageMagick suite.
Wolfgang said...
thank you for the hint. I think I puzzled it together:
i) your C program generates PGM files (portable grayscale image)
ii) the convert utility from ImageMagick reads your PGM files and generates the mpg movie.
Very nice and thanks.
B Nettles said...
So, are you saying that the "tunnelling" causes a phase shift in the propagated wave? Is this simply a phase velocity trick?
Not that I'm supporting Nimtz (far from it), but why haven't we heard guys like Neil Tyson screaming about this?
Robert said...
It's not clear to me what you mean by 'phase shift' for a wave that is not monochromatic.
What I am saying is that of a wave packet it's only the first bit (length of the order of the width of the obstruction) that gets through, the rest is reflected.
Again, phase vs. group velocity is well defined only at a single frequency/wavelength; it varies if you have dispersion (like here). The speed of information (given in terms of the support of the Green's function) is determined by the limit of d omega/dk as omega (or k) goes to infinity. And this is c here and in all of Nimtz' experiments. |
87c5bb2367888ed2 | fredag 29 juli 2016
Secret of Laser vs Secret of Piano
There is a connection between the action of a piano as presented in the sequence of posts The Secret of the Piano and a laser (Light Amplification by Stimulated Emission of Radiation), which is remarkable as an expression of a fundamental resonance phenomenon.
To see the connection we start with the following quote from Principles of Lasers by Orazio Svelto:
• There is a fundamental difference between spontaneous and stimulated emission processes.
• In the case of spontaneous emission, the atoms emit e.m. waves that have no definite phase relation with that emitted by another atom...
• In the case of stimulated emission, since the process is forced by the incident e.m. wave, the emission of any atom adds in phase to that of the incoming wave...
A laser thus emits coherent light as electromagnetic waves all in-phase, and thereby can transmit intense energy over distance.
The question is how the emission/radiation can be coordinated so that the e.m. waves from many/all atoms are kept in-phase. Without coordination the emission will become more or less out-of-phase resulting in weak radiation.
The Secret of the Piano reveals that the emission from the three strings for each note in the middle register, which may have a frequency spread of about half a Hertz, are kept in phase by interacting with a common soundboard through a common bridge in a "breathing mode" with the soundboard/bridge vibrating with half a period phase lag with respect to the strings. The breathing mode is initiated when the hammer feeds energy into the strings by a hard hit.
In the breathing mode strings and soundboard act together to generate an outgoing sound from the soundboard fed by energy from the strings, which has a long sustain/duration in time, as the miracle of the piano.
If we translate the experience from the piano to the laser, we understand that laser emission/radiation is (probably) kept in phase by interaction with a stabilising half a period out-of-phase forcing corresponding to the soundboard, while leaving part of the emission to strong in-phase action on a target.
An alternative to quick hammer initiation is in-phase forcing over time, which requires a switch from input to output by half a period shift of the forcing.
We are also led to the idea that black body radiation, which is partially coherent, is kept in phase by interaction with a receiver/soundboard. Without receiver/soundboard there will be no radiation. It is thus meaningless to speak about black body radiation into some vacuous nothingness, which is often done based on a fiction of "photon" particles being spat out from a body even without a receiver, as physically meaningless as speaking into the desert.
torsdag 28 juli 2016
New Quantum Mechanics 10: Ionisation Energy
Below are sample computations of ground states for Li1+, C1+, Ne1+ and Na1+ showing good agreement with table data of first ionisation energies of 0.2, 0.4, 0.8 and 0.2 Hartree (atomic units), respectively.
Note that computation of first ionisation energy is delicate, since it represents a small fraction of total energy.
onsdag 27 juli 2016
New Quantum Mechanics 9: Alkaline (Earth) Metals
The result presentation continues below with alkaline and alkaline earth metals Na (2-8-1), Mg (2-8-2), K (2-8-8-1), Ca (2-8-8-2), Rb (2-8-18-8-1), Sr (2-8-18-8-2), Cs (2-8-18-18-8-1) and Ba (2-8-18-18-8-2):
New Quantum Mechanics 8: Noble Gases Atoms 18, 36, 54 and 86
The presentation of computational results continues below with the noble gases Ar (2-8-8), Kr (2-8-18-8), Xe (2-8-18-18-8) and Rn (2-8-18-32-18-8) with the shell structure indicated.
Again we see good agreement of ground state energy with NIST data, and we notice nearly equal energy in fully filled shells.
Note that the NIST ionization data does not reveal true shell energies since it displays a fixed shell energy distribution independent of ionization level, and thus cannot be used for comparison of shell energies.
New Quantum Mechanics 7: Atoms 1-10
This post presents computations with the model of New Quantum Mechanics 5 for ground states of atoms with N= 2 - 10 electrons in spherical symmetry with 2 electrons in an inner spherical shell and N-2 electrons in an outer shell with the radius of the free boundary as the interface of the shells adjusted to maintain continuity of charge density. The electrons in each shell are smeared to spherical symmetry and the repulsive electron potential is reduced by the factor (n-1)/n with n the number of electrons in a shell to account for lack of self repulsion.
The ground state is computed by parabolic relaxation in the charge density formulation of New Quantum Mechanics 1 with restoration of total charge after each relaxation and shows good agreement with table data as shown in the figures below.
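To give a flavour of what "parabolic relaxation with restoration of total charge" can look like in practice, here is a minimal sketch for the simplest hydrogen-like case in atomic units (a generic reconstruction under stated assumptions, not the author's actual code): explicit gradient-flow steps on u = r·ψ followed by renormalisation after each step.

import numpy as np

r = np.linspace(0.05, 25.0, 500)
dr = r[1] - r[0]
Z = 1.0
u = r * np.exp(-0.5 * r)                 # rough initial guess for u = r*psi

dt = 0.5 * dr**2                         # stable explicit (parabolic) step
for step in range(20000):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dr**2
    Hu = -0.5 * lap - (Z / r) * u        # radial Hamiltonian acting on u
    u = u - dt * Hu                      # parabolic relaxation step
    u[0] = u[-1] = 0.0
    u /= np.sqrt(np.trapz(u**2, r))      # restore total charge

lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dr**2
Hu = -0.5 * lap - (Z / r) * u
print(np.trapz(u * Hu, r))               # approx -0.5 Hartree for Z = 1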
The graphs show as functions of radius, charge density per unit volume in color, charge density per unit radius in black, kernel potential in green and total electron potential in cadmium red. The homogeneous Neumann condition at the interface of charge density per unit volume is clearly visible.
The shell structure with 2 electrons in the inner shell and N-2 in the outer shell is imposed based on a principle of "electron size" depending on the strength of effective kernel potential, which gives the familiar pattern of 2-8-18-32 of electrons in successively filled shells as a consequence of shell volume of nearly constant thickness scaling quadratically with shell number. This replaces the ad hoc unphysical Pauli exclusion principle with a simple physical principle of size and no overlap.
The electron size principle allows the first shell to house at most 2 electrons, the second shell 8 electrons, the third 18 electrons, etc.
In the next post similar results for Atoms 11-86 will be presented and it will be noted that a characteristic of a filled shell structure 2-8-18-32- is comparable total energy in each shell, as can be seen for Neon below.
The numbers below show table data of total energy in the first line and computed in second line, while the groups show total energy, kinetic energy, kernel potential energy and electron potential energy in each shell.
måndag 25 juli 2016
New Quantum Mechanics 6: H2 Molecule
Computing with the model of the previous post in polar coordinates with origin at the center of an H2 molecule assuming rotational symmetry around the axis connecting the two kernels, gives the following results (in atomic units) for the ground state using a $50\times 40$ uniform mesh:
• total energy = -1.167 (kernel potential: -4.28, electron potential: 0.587 and kinetic: 1.147)
• kernel distance = 1.44
in close correspondence to table data (-1.1744 and 1.40). Here is a plot of output:
söndag 24 juli 2016
New Quantum Mechanics 5: Model as Schrödinger + Neumann
This sequence of posts presents an alternative Schrödinger equation for an atom with $N$ electrons starting from a wave function Ansatz of the form
• $\psi (x,t) = \sum_{j=1}^N\psi_j(x,t)$ (1)
as a sum of $N$ electronic complex-valued wave functions $\psi_j(x,t)$, depending on a common 3d space coordinate $x$ and a time coordinate $t$, with non-overlapping spatial supports $\Omega_j(t)$ filling 3d space, satisfying for $j=1,...,N$ and all time:
• $i\dot\psi_j + H\psi_j = 0$ in $\Omega_j$, (2a)
• $\frac{\partial\psi_j}{\partial n} = 0$ on $\Gamma_j(t)$, (2b)
where $\Gamma_j(t)$ is the boundary of $\Omega_j(t)$, $\dot\psi =\frac{\partial\psi}{\partial t}$ and $H=H(x,t)$ is the (normalised) Hamiltonian given by
• $H = -\frac{1}{2}\Delta - \frac{N}{\vert x\vert}+\sum_{k\neq j}V_k(x)$ for $x\in\Omega_j(t)$,
with $V_k(x)$ the repulsion potential corresponding to electron $k$ defined by
• $V_k(x)=\int\frac{\psi_k^2(y)}{2\vert x-y\vert}dy$,
and the electron wave functions are normalised to unit charge of each electron:
• $\int_{\Omega_j(t)}\psi_j^2(x,t) dx=1$ for $j=1,..,N$ and all time. (2c)
The differential equation (2a) with homogeneous Neumann boundary condition (2b) is complemented by the following global free boundary condition:
• $\psi (x,t)$ is continuous across inter-electron boundaries $\Gamma_j(t)$. (2d)
The ground state is determined as the real-valued time-independent minimiser $\psi (x)=\sum_j\psi_j(x)$ of the total energy
• $E(\psi ) = \frac{1}{2}\int\vert\nabla\psi\vert^2\, dx - \int\frac{N\psi^2(x)}{\vert x\vert}dx+\sum_{k\neq j}\int V_k(x)\psi^2(x)\, dx$,
under the normalisation (2c), the homogeneous Neumann boundary condition (2b) and the free boundary condition (2d).
In the next post I will present computational results in the form of energy of ground states for atoms with up to 54 electrons and corresponding time-periodic solutions in spherical symmetry, together with ground state and dissociation energy for H2 and CO2 molecules in rotational symmetry.
In summary, the model is formed as a system of one-electron Schrödinger equations, or electron container model, on a partition of 3d space depending of a common spatial variable and time, supplemented by a homogeneous Neumann condition for each electron on the boundary of its domain of support combined with a free boundary condition asking continuity of charge density across inter-element boundaries.
We shall see that for atoms with spherically symmetric electron partitions in the form of a sequence of shells centered at the kernel, the homogeneous Neumann condition corresponds to vanishing kinetic energy of each electron normal to the boundary of its support as a condition of separation or interface condition between different electrons meeting with continuous charge density.
Here is one example: Argon with 2-8-8 shell structure with NIST Atomic data base ground state energy in first line (526.22), the computed in second line and the total energies in the different shells in three groups with kinetic energy in second row, kernel potential energy in third and repulsive electron energy in the last row. Note that the total energy in the fully filled first (2 electrons) and second shell (8 electrons) are nearly the same, while the partially filled third shell (also 8 electrons out of 18 when fully filled) has lower energy. The color plot shows charge density per unit volume and the black curve charge density per unit radial increment as functions of radius. The green curve is the kernel potential and the cyan curve the total electron potential. Note in particular the vanishing derivative of charge density/kinetic energy at shell interfaces.
lördag 2 juli 2016
New Quantum Mechanics 4: Free Boundary Condition
This is a continuation of previous posts presenting an atom model in the form of a free boundary problem for a joint continuously differentiable electron charge density, as a sum of individual electron charge densities with disjoint supports, satisfying a classical Schrödinger wave equation in 3 space dimensions.
The ground state of minimal total energy is computed by parabolic relaxation with the free boundary separating different electrons determined by a condition of zero gradient of charge density. Computations in spherical symmetry show close correspondence with observation, as illustrated by the case of Oxygen with 2 electrons in an inner shell (blue) and 6 electrons in an outer shell (red) as illustrated below in a radial plot of charge density showing in particular the zero gradient of charge density at the boundary separating the shells at minimum total energy (with -74.81 observed and -74.91 computed energy). The green curve shows truncated kernel potential, the magenta the electron potential and the black curve charge density per radial increment.
The new aspect is the free boundary condition as zero gradient of charge density/kinetic energy. |
0e1d9e2924827fb4 | AQME Advancing Quantum Mechanics for Engineers
by Tain Lee Barzso, Dragica Vasileska, Gerhard Klimeck
Introduction to Advancing Quantum Mechanics for Engineers and Physicists
Discovery that is Possible through Quantum Mechanics
Quantum Mechanics for Engineers: Podcasts
Quantum Mechanics for Engineers: Course Assignments
Because understanding quantum mechanics is so foundational to an understanding of the operation of nanoscale devices, almost every Electrical Engineering department (in which there is a strong nanotechnology experimental or theoretical group) and all Physics departments teach the fundamental principles of quantum mechanics and their application to nanodevice research. Several conceptual sets and theories are taught within these courses. Normally, students are first introduced to the concept of particle-wave duality (the photoelectric effect and the double-slit experiment), the solutions of the time-independent Schrödinger equation for open systems (piece-wise constant potentials), tunneling, and bound states. The description of the solution of the Schrödinger equation for periodic potentials (Kronig-Penney model) naturally follows from the discussion of double well, triple well and n-well structures. This leads the students to the concept of energy bands and energy gaps, and the concept of the effective mass that can be extracted from the pre-calculated band structure by fitting the curvature of the bands. The Tsu-Esaki formula is then investigated so that, having calculated the transmission coefficient, students can calculate the tunneling current in resonant tunneling diodes and Esaki diodes. After establishing basic principles of quantum mechanics, the harmonic oscillator problem is then discussed in conjunction with understanding vibrations of a crystalline lattice, and the idea of phonons is introduced as well as the concept of creation and annihilation operators. The typical quantum mechanics class for undergraduate/first-year graduate students is then completed with the discussion of the stationary and time-dependent perturbation theory and the derivation of the Fermi Golden Rule, which is used as a starting point of a graduate level class in semiclassical transport. Coulomb Blockade is another topic a typical quantum mechanics class will include.
Particle-Wave Duality
A wave-particle dual nature was discovered and publicized in the early debate about whether light was composed of particles or waves. Evidence for the description of light-as-waves was well established at the turn of the century when the photoelectric effect introduced firm evidence of a light-as-particle nature. This dual nature was found to also be characteristic of electrons. The particle nature of the electron was well documented when the DeBroglie hypothesis, and subsequent experiments by Davisson and Germer, established the wave nature of the electron.
Particle-Wave Duality: an Animation
This movie helps students to better distinguish when nano-things behave as particles and when they behave as waves. The link below connects to an exercise on these concepts.
Introductory Concepts in Quantum Mechanics: an Exercise
Solution of the Time-Independent Schrödinger Equation
Piece-Wise Linear Barrier Tool in AQME – Open Systems
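As a flavour of what such a tool computes, here is a minimal transfer-matrix sketch for the transmission coefficient through piece-wise constant potentials (the generic textbook method with illustrative units ħ = m = 1 and toy barrier parameters, not the nanoHUB tool's actual code):

import numpy as np

hbar = m = 1.0   # illustrative units

def transmission(E, widths, potentials):
    """T(E) through piece-wise constant regions, leads at V = 0 on both sides."""
    Vs = np.array([0.0] + list(potentials) + [0.0])
    xs = np.concatenate(([0.0], np.cumsum(widths)))      # interface positions
    ks = np.sqrt(2 * m * (E - Vs) + 0j) / hbar           # imaginary if E < V
    M = np.eye(2, dtype=complex)
    for j, x in enumerate(xs):
        k1, k2 = ks[j], ks[j + 1]
        p, q = (1 + k1 / k2) / 2, (1 - k1 / k2) / 2
        I = np.array([[p * np.exp(1j * (k1 - k2) * x), q * np.exp(-1j * (k1 + k2) * x)],
                      [q * np.exp(1j * (k1 + k2) * x), p * np.exp(-1j * (k1 - k2) * x)]])
        M = I @ M                                        # match psi and psi' at x
    t = (M[0, 0] * M[1, 1] - M[0, 1] * M[1, 0]) / M[1, 1]
    return abs(t) ** 2

# Single rectangular barrier of height 0.5 and width 2: tunneling below the
# barrier top, oscillating transmission above it
for E in [0.1, 0.3, 0.49, 0.7, 1.0]:
    print(E, transmission(E, widths=[2.0], potentials=[0.5]))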
Bound States Lab in AQME
The Bound States Lab in AQME determines the bound states and the corresponding wavefunctions in a square, harmonic, and triangular potential well. The maximum number of eigenstates that can be calculated is 100. Students clearly see the nature of the separation of the states in these three prototypical confining potentials, with which students can approximate realistic quantum potentials that occur in nature.
The panel below (left) shows energy eigenstates of a harmonic oscillator. Probability density of the ground state that demonstrates purely quantum-mechanical behavior is shown in the middle panel below. Probability density of the 20th subband demonstrates the more classical behavior as the well opens (right panel below).
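For reference, the same kind of spectrum can be reproduced outside the tool with a generic finite-difference diagonalisation (a sketch under assumed units ħ = m = ω = 1 and V = x²/2, not the lab's own code):

import numpy as np

N = 1000
x = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]
V = 0.5 * x**2                                  # harmonic well

# H = -(1/2) d^2/dx^2 + V(x), discretised as a tridiagonal matrix
H = (np.diag(1.0 / dx**2 + V) +
     np.diag(np.full(N - 1, -0.5 / dx**2), 1) +
     np.diag(np.full(N - 1, -0.5 / dx**2), -1))

E, psi = np.linalg.eigh(H)
print(E[:5])   # approx 0.5, 1.5, 2.5, 3.5, 4.5: the equidistant ladder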
Energy Bands and Effective Masses
Periodic Potential Lab in AQME
The Periodic Potential Lab in AQME solves the time-independent Schrödinger Equation in a 1-D spatial potential variation. Rectangular, triangular, parabolic (harmonic), and Coulomb potential confinements can be considered. The user can determine energetic and spatial details of the potential profiles, compute the allowed and forbidden bands, plot the bands in a compact and an expanded zone, and compare the results against a simple effective mass parabolic band. Transmission is also calculated. This lab also allows the students to become familiar with the reduced zone and expanded zone representation of the dispersion relation (E-k relation for carriers).
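Underlying such a lab is the classic Kronig-Penney condition; here is a minimal sketch of it (generic textbook formula in illustrative units ħ = m = 1, not the tool's code): energies E are allowed exactly when |f(E)| ≤ 1 in cos(k(w+b)) = f(E).

import numpy as np

w, b, V0 = 4.0, 1.0, 2.0            # well width, barrier width, barrier height

def f(E):
    """RHS of cos(k*(w+b)) = f(E); |f(E)| <= 1 marks an allowed band."""
    al = np.sqrt(2.0 * E)
    be = np.sqrt(2.0 * (V0 - E) + 0j)           # imaginary above the barrier
    val = (np.cos(al * w) * np.cosh(be * b) +
           (be**2 - al**2) / (2 * al * be) * np.sin(al * w) * np.sinh(be * b))
    return val.real

E = np.linspace(0.01, 8.0, 4000)
allowed = np.abs([f(e) for e in E]) <= 1.0
edges = E[1:][np.diff(allowed.astype(int)) != 0]
print(edges)    # alternating band edges: allowed bands separated by gaps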
Available resources:
Periodic Potentials and Bandstructure: an Exercise
Band Structure Lab in AQME
Band structure of Si (left panel) and GaAs (right panel).
In solid-state physics, the electronic band structure (or simply band structure) of a solid describes ranges of energy that an electron is “forbidden” or “allowed” to have. It is due to the diffraction of the quantum mechanical electron waves in the periodic crystal lattice. The band structure of a material determines several characteristics, in particular, its electronic and optical properties. The Band Structure Lab in AQME enables the study of bulk dispersion relationships of Si, GaAs, InAs. Plotting the full dispersion relation of different materials, students first get familiar with a band structure of a direct band gap (GaAs, InAs), as well as indirect band gap semiconductors (Si). For the case of multiple conduction band valleys, students must first determine the Miller indices of one of the equivalent valleys, then, from that information they can deduce how many equivalent conduction bands are in Si and Ge, for example. In advanced applications, the users can apply tensile and compressive strain and observe the variation in the band structure, band gaps, and effective masses. Advanced users can also study band structure effects in ultra-scaled (thin body) quantum wells, and nanowires of different cross sections. Band Structure Lab uses the sp3s*d5 tight-binding method to compute E(k) for bulk, planar, and nanowire semiconductors.
Available resource:
Bulk Band Structure: a Simulation Exercise
The figure on the left illustrates the first Brillouin zone of the FCC lattice that corresponds to the first Brillouin zone for all diamond and Zinc-blende materials (C, Si, Ge, GaAs, InAs, CdTe, etc.). There are 8 hexagonal faces (normal to 111) and 6 square faces (normal to 100). The sides of each hexagon and each square are equal.
Supplemental Information: Specification of High-Symmetry Points
Symbol Description
Γ Center of the Brillouin zone
Simple Cube
M Center of an edge
R Corner point
X Center of a face
Face-Centered Cubic
K Middle of an edge joining two hexagonal faces
L Center of a hexagonal face
U Middle of an edge joining a hexagonal and a square face
W Corner point
X Center of a square face
Body-Centered Cubic
H Corner point joining four edges
N Center of a face
P Corner point joining three edges
Hexagonal
A Center of a hexagonal face
H Corner point
K Middle of an edge joining two rectangular faces
L Middle of an edge joining a hexagonal and a rectangular face
M Center of a rectangular face
Real World Applications
Schred Tool in AQME
The Schred Tool in AQME calculates the envelope wavefunctions and the corresponding bound-state energies in a typical MOS (Metal-Oxide-Semiconductor) or SOS (Semiconductor-Oxide- Semiconductor) structure and in a typical SOI structure by solving self-consistently the one-dimensional (1-D) Poisson equation and the 1D Schrödinger equation. The Schred tool is specifically designed for Si/SiO2 interface and takes into account the mass anisotropy of the conduction bands, as well as different crystallographic orientations.
1-D Heterostructure Tool AQME
The 1-D Heterostructure Tool AQME simulates confined states in 1-D heterostructures by calculating charge self-consistently in the confined states, based on a quantum mechanical description of the one dimensional device. The greater interest in HEMT devices is motivated by the limits that will be reached with scaling of conventional transistors. The 1D Heterostructure Tool in that respect is a very valuable tool for the design of HEMT devices as one can determine, for example, the position and the magnitude of the delta-doped layer, the thickness of the barrier and the spacer layer for which one maximizes the amount of free carriers in the channel which, in turn, leads to larger drive current. This is clearly illustrated in the examples below.
Resonant Tunneling Diode Lab in AQME
Put a potential barrier in the path of electrons, and it will block their flow; but, if the barrier is thin enough, electrons can tunnel right through due to quantum mechanical effects. It is even more surprising that, if two or more thin barriers are placed closely together, electrons will bounce between the barriers, and, at certain resonant energies, flow right through the barriers as if there were none. Run the Resonant Tunneling Diode Lab in AQME, which lets you control the number of barriers and their material properties, and then simulate current as a function of bias. Devices exhibit a surprising negative differential resistance, even at room temperature. This tool can be run online in your web browser as an active demo.
Quantum Dot Lab in AQME
Scattering and Fermi’s Golden Rule
Scattering is a general physical process whereby some forms of radiation, such as light, sound, or moving particles are forced to deviate from a straight trajectory by one or more localized non-uniformities in the medium through which they pass. In conventional use, scattering also includes deviation of reflected radiation from the angle predicted by the law of reflection. Reflections that undergo scattering are often called diffuse reflections, and unscattered reflections are called specular (mirror-like) reflections. The types of non-uniformities (sometimes known as scatterers or scattering centers) that can cause scattering are too numerous to list, but a small sample includes particles, bubbles, droplets, density fluctuations in fluids, defects in crystalline solids, surface roughness, cells in organisms, and textile fibers in clothing. The effects of such features on the path of almost any type of propagating wave or moving particle can be described in the framework of scattering theory. In quantum physics, Fermi's golden rule is a way to calculate the transition rate (probability of transition per unit time) from one energy eigenstate of a quantum system into a continuum of energy eigenstates, due to a perturbation. The Bulk Monte-Carlo Lab in AQME calculates the dependence of the scattering rates on electron energy for the most important scattering mechanisms and the most commonly used materials in the semiconductor industry, such as Si, Ge, GaAs, InSb, GaN, SiC. For the proper parameter set for, for example, 4H-SiC, please refer to the following article.
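For reference, the standard statement of Fermi's golden rule (the textbook formula, with H' the perturbation and ρ(E_f) the density of final states, added here for clarity) is

Γ_{i→f} = (2π/ħ) |⟨f|H'|i⟩|² ρ(E_f),

i.e. the transition rate is proportional to the squared matrix element of the perturbation times the density of available final states.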
Coulomb Blockade
Beyond the individual labs described above, the AQME collection bundles the following constituent tools.
AQME constituent tools
Piece-Wise Constant Potential Barriers Tool
Bound States Calculation Lab
Band Structure Lab
Periodic Potential Lab
1D Heterostructure Tool
Resonant Tunneling Diode Simulator
Quantum Dot Lab
Bulk Monte Carlo Lab
Coulomb Blockade Simulation
|
287f5c892a5366c0 | måndag 30 september 2013
Thomas Stocker's Defense of IPCC Climate Models at Variance with Observations
Global temperature increased according to observations in the periods 1920 - 40 and 1978 - 1996 by about 0.35 C, while the temperature was slightly decreasing in the periods 1880 - 1920, 1940 - 1978, 1997 - 2013.
The main argument used by IPCC co-chairman Thomas Stocker when defending the climate models of IPCC AR5 predicting steady warming, was that the 17 year period 1997 - 2013 with no warming was too short to allow any conclusion that CO2 forcing was too small to be observed, while the 18 year period 1978 - 1996 with warming was long enough to be able to attribute with 95% likelihood the warming to CO2 forcing. Stocker clarified the argument by adding that a period of 30 years would be required to be able to detect a trend in global cooling.
PS1 The temperature curves for the periods 1920 - 1950 and 1973 - 2013 are very similar with first warming and then slight cooling, only shifted with a steady rise of about 0.5 C/century after the Little Ice Age. The rise 1920 - 1940 is not attributed to CO2 by IPCC while the rise 1976 - 1996 is. The logic is missing.
PS2 The above graph produced by IPCC appears to present lower than actual temperatures before 1960 thus enhancing the warming thereafter.
Will Skeptics Now Be Able to Unite?
As the IPCC along with its politicized scientific apparatus now sinks into the Deep Ocean, it is natural to ask if skeptics of different brands, from IPCC refugees over lukewarmers to so-called deniers, will now be able to unite instead of beating each other with sectarian fervor?
In particular, I could ask if the ban of my writings on some skeptics blogs, because of my questioning of the reality of a Holy Sky Spirit of Back Radiation or DLR (Downwelling Longwave Radiation), can now be lifted?
In particular, can the lack of global warming since 1997 under steadily rising CO2 levels, be viewed as evidence of non-existence of radiative forcing as an effect of DLR from a Holy Sky Spirit? Is DLR fictional in the same sense as the Holy Spirit, when confronted with observed realities?
PS One of the blogs where I have been banned is Roy Spencer's because of my insistence that back radiation is non-physical and that the starting point of 1 C warming from doubled CO2 is a definition based on a simple algebraic relation (Stefan-Boltzmann's law), which does not have any real meaning for the complex system of global climate. Roy sums up the basic physics supposedly carrying climate
modeling as follows:
• It is sometimes said that climate models are built upon physical first principles, as immutable as the force of gravity or conservation of energy (which are, indeed, included in the models). But this is a half-truth, at best, spoken by people who either don’t know any better or are outright lying.
• The most physically sound portion of global warming predictions is that adding CO2 to the atmosphere causes about a 1% energy imbalance in the system (energy imbalances are what cause temperature to change), and if nothing else but the temperature changes, there would only be about a 1 deg. C warming response to a doubling of atmospheric CO2 (we aren’t even 50% of the way to doubling).
• But this is where the reasonably sound, physical first principles end (if you can even call them that since the 1 deg estimate is a theoretical calculation, anyway).
Roy thus appears to question the 1 C and so we agree on this point. The red card must then result from back radiation.
söndag 29 september 2013
Judith Curry: From Sick to Healthy Climate Science
Judith Curry has gone a long way from supporter to opponent of the CO2 global warming science of IPCC, by realizing that the science of IPCC is sick and therefore has to be eliminated to allow healthy climate science to develop:
Since 97% of institutions and people of climate science reportedly have been infected by IPCC, Judith is asking for a revolution with a small group of healthy skeptics leading climate science into the future. Interesting perspectives...
PS1 Judith started her transformation from alarmist to heretic by suddenly realizing that "back radiation" as the basis of greenhouse gas alarmism, is non-physical, which was also my door to skepticism.
PS2 Judith makes the same analysis as Pointman:
• We’ve just witnessed the embarrassing and public humiliation of climate science as a field of honest scientific endeavour. It has lost all claim to be taken seriously and is now tarred with the same pathological science brush that aberrations like Lysenkoism or Eugenics were. It’s now up to the non-activist scientists in the field, who’ve stayed silent for far too long, to save it from that fate by speaking out and reclaiming their field from fanatics posing as scientists. As Elvis said, it’s now or never.
PS3 Judy's death sentence has now been printed in Financial Post.
fredag 27 september 2013
IPCC Follows Warming into the Deep Ocean
Swedish Minister of Climate Lena Ek assisting IPCC Co-chairman Thomas Stocker when presenting the Deep Ocean explanation of the non-existence of global warming.
Here is a summary of the 2 hour IPCC webcast press conference presenting the Approved Summary for Policymakers concluding the yet unpublished IPCC 5th Assessment Report Climate Change 2013: The Physical Science Basis:
The key role is played by Thomas Stocker, Co-Chair of IPCC Working Group 1, who reports that he has only slept 6 hours the last 4 days, which is less than 2 hours per night, and thus is very tired.
What has kept him awake is coming up with a convincing explanation of why climate models predicting steady warming, while observations show no warming at all over the last 17 years, can still be used for reliable predictions over periods longer than 17 years.
No wonder that Stocker is tired, because his task has not been easy and lack of sleep is not the best precondition for good scientific work. Accordingly his explanation that the warming, which should have been observed on the Earth surface but was not observed, has been transferred into the Deep Ocean where it cannot be observed, because it is so deep, was not convincing to media allowed to pose questions at the press conference. Nor was the alternative of putting the blame on volcanic eruptions. In the Summary this was phrased as follows:
• The reduced trend in radiative forcing is primarily due to volcanic eruptions...
But Stocker did not mention during the press conference the third alternative presented in the Summary:
• There may also be a contribution from forcing inadequacies and, in some models, an overestimate of the response to increasing greenhouse gas forcing.
This was the reason he could not sleep, and why IPCC now will sink into the Deep Ocean.
PS The consensus message from IPCC to world policymakers is that any connection between the Deep Ocean of IPCC and the Deep Throat of Watergate, very likely (96%) is unprecedented, unequivocal and therefore very alarming.
tisdag 24 september 2013
The Funeral of IPCC: Too Strong Response to Greenhouse-Gas Forcing
The leaked IPCC AR5 Summary for Policymakers tells the world and its leaders that climate models tuned to the observed warming 1970 - 1998 do not fit with the observed lack of warming 1998 - 2013
• There is very high confidence that models reproduce the more rapid warming in the second half of the 20th century.
IPCC thus admits that climate models are constructed to have
• too strong a response to increasing greenhouse-gas forcing,
and are unable to capture
• unpredictable climate variability.
This must be the end of IPCC since IPCC was formed on the sole doctrine of a strong response to greenhouse-gas (CO2) forcing in climate model predictions.
Since IPCC was born in Stockholm from the mind of the Swedish meteorologist Bert Bolin, it is fully logical that the funeral of IPCC now takes place in Stockholm along with the Bert Bolin Center for Climate Research.
PS Concerning climate predictions recall the prediction I made in 2009.
torsdag 19 september 2013
Royal Swedish Academy of Sciences Platform for IPCC
IPCC announces to release its 5th Assessment Report on September 27 on a platform offered by The Royal Swedish Academy of Sciences under the title Climate Change: the state of the science:
• On 27 September 2013, IPCC’s Working Group I releases the Summary for Policymakers of the first part of the IPCC’s Fifth Assessment Report, Climate Change 2013: the Physical Science Basis. This is the first event at which the Working Group I Co-Chairs present the findings of the newly approved report to the general public.
This historic event expresses the historically strong bond between IPCC and the Royal Swedish Academy and Swedish climate science, with IPCC determining the state of science and the Academy acting as a platform for the political CO2 alarmism of IPCC. If IPCC falls so will the Academy.
PS A key finding to be reported is:
It's come to this: "The heat is still coming in, but it appears to have gone into the deep ocean and, frustratingly, we do not have the instruments to measure there"
Climate change: IPCC cites global temperature rise over last century | Environment | The Observer
"The heat is still coming in, but it appears to have gone into the deep ocean and, frustratingly, we do not have the instruments to measure there," said Professor Ted Shepherd of Reading University. "Global warming has certainly not gone away." [ViaGWPF]
tisdag 17 september 2013
Staggering Consequences of One IPCC Graph
• Models predict one thing and the data show another.
lördag 14 september 2013
Quantum Mechanics as Smoothed Particle Mechanics
Simulations using this approach are under way.
tisdag 10 september 2013
The Crisis in Modern Physics: Too Complicated
The last sequence of posts on Quantum Contradictions 1 - 20 gives examples of the crisis in modern physics recently described by the Perimeter Institute Director Neil Turok as follows:
• We have to get people to try to find the new principles that will explain the simplicity.
The crisis in modern physics resulting from the confusion of modern physicists originates from the statistical mechanics of Boltzmann used by Planck in a desperate attempt to explain blackbody radiation as statistics of quanta, which led to the quantum mechanics of Bohr and Heisenberg based on atomistic roulettes without causality and physical reality.
But blackbody radiation can be explained without statistics in a classical model subject to finite precision computation as exposed on Computational Blackbody Radiation, which is simple and therefore possibly correct in the spirit of the above.
måndag 9 september 2013
Quantum Contradictions 20: Averaged Hartree Model between Scylla and Charybdis
torsdag 5 september 2013
Quantum Contradictions 19: Summary
Here is a summary of contradictions of text book quantum mechanics:
1. The multidimensional wave function, as the solution of the multidimensional linear Schrödinger's equation as the basic mathematical model of quantum mechanics, is not a legitimate scientific concept according to Nobel Laureate Walter Kohn, because it cannot be determined analytically nor computationally.
2. It follows that the linear multidimensional Schrödinger equation, which is an ad hoc model invented by Schrödinger and canonized in its Copenhagen Interpretation by Bohr, Born, Dirac and Heisenberg (and then abandoned by Schrödinger), should be removed from text books.
3. Doing so eliminates the need of inventing the microscopic roulettes of the Copenhagen Interpretation in order to give the multidimensional wave function at least some physical meaning, roulettes which violate basic physical principles of causality and reality and therefore were never accepted by Einstein and Schrödinger despite strong pressure from the physics community to confess to statistics.
4. With the linear multidimensional Schrödinger equation and its statistics put into the wardrobe of scientific horrors, focus can instead be put on developing non-linear three-dimensional deterministic equations describing the atomistic world formed by interaction of positive kernels and negative electrons such as Hartree and density functional models. The challenge is then to e.g. explain the shell structure of the periodic table from such a model.
5. A tentative such model will be described in final post of this series.
tisdag 3 september 2013
Quantum Contradictions 18: Heisenberg
The text book canon of quantum mechanics was formed by Bohr and Heisenberg in the 1920s and was named the Copenhagen Interpretation (CI) by Heisenberg in the 1950s.
Let us seek the origin and motivation behind the Copenhagen Interpretation in Heisenberg's confessional treatise The Physicist's Concept of Nature. We find the following basic beliefs of Heisenberg:
• Even in the ancient atomic theory of Democritus and Leucippus it was assumed that large-scale processes were the results of many irregular processes on a small scale.
• Thus we always use concepts which describe behaviour on the large scale without in the least bothering about the individual processes that take place on the small scale.
• Now, if the processes which we can observe with our senses are thought to arise out of the inter-actions of many small individual processes, we must conclude that all natural laws may be considered to be only statistical laws.
• Thus it is contended that while it is possible to look upon natural processes either as determined by laws, or else as running their course without any order whatever, we cannot form any picture of processes obeying statistical laws.
• Planck, in his work on the theory of radiation, had originally encountered an element of uncertainty in radiation phenomena. He had shown that a radiating atom does not deliver up its energy continuously, but discretely in bundles. This assumption of a discontinuous and pulse-like transfer of energy, like every other notion of atomic theory, leads us once more to the idea that the emission of radiation is a statistical phenomenon.
• However, it took two and a half decades before it became clear that quantum theory actually forces us to formulate these laws precisely as statistical laws and to depart radically from determinism.
• With the mathematical formulation of quantum-theoretical laws pure determinism had to be abandoned.
We understand that Heisenberg believed that the microscopic atomic world is a roulette world of non-deterministic processes for which we cannot form any pictures but which we anyway have to believe obey statistical laws.
But atomic roulettes require microscopics upon microscopics, since a roulette is not a simple pendulum but a complex mechanical device, which leads to a reductio ad absurdum and thus a logical deadlock. This was understood and voiced by Schrödinger and Einstein but Bohr and Heisenberg could scream louder and took the game despite shaky shady arguments.
But a scientific deadlock is a deadlock and so a new direction away from the quagmire of microscopic statistics must be found. |
35c527fd400dffbc | Wave Packet Dynamics
Requires a Wolfram Notebook System
Wave packet dynamics can be studied by pump-probe femtosecond spectroscopy of vibrations of molecules in excited states (see, e.g. A. Assion et al., Z. Phys. D, 36, 1996 pp. 265–271). As a simple example, consider a superposition of the lowest three eigenstates of the harmonic oscillator. The initial state is set by choosing the relative amplitudes a, b, and c of levels 0, 1, and 2. The Demonstration shows the system's time development in a static harmonic oscillator potential.
Contributed by: Gerhard Schwaab and Chantal Lorbeer (Ruhr University, Bochum) (March 2011)
Open content licensed under CC BY-NC-SA
Mathematically, a wave packet is an integrable superposition of the wave functions of the stationary problem with time-dependent development coefficients, $\Psi(x,t)=\sum_n a_n(t)\,\psi_n(x)$, where $\psi_n$ is an eigenfunction of the stationary Schrödinger equation. The time-development coefficient in the case of a stationary potential is given by $a_n(t)=a_n(0)\,e^{-iE_n t/\hbar}$.
Based on the harmonic oscillator and a superposition of the lowest three eigenstates, this can be written as $\Psi(x,t)=a\,\psi_0(x)e^{-iE_0 t/\hbar}+b\,\psi_1(x)e^{-iE_1 t/\hbar}+c\,\psi_2(x)e^{-iE_2 t/\hbar}$, with $E_n=\hbar\omega(n+\frac{1}{2})$.
A simplified wave function is given using the appropriate Hermite polynomial $H_n$: $\psi_n(x)\propto H_n(x)\,e^{-x^2/2}$.
The recurrence times are determined by the lowest level of the harmonic oscillator that changes most slowly. Because the harmonic oscillator levels are equidistant, the example is especially simple, showing a strict periodic behavior. For the Demonstration real relative amplitudes $a$, $b$, and $c$ were chosen for didactic reasons, although in general the relative amplitudes may be complex. In the figure, the real part, the imaginary part, or the probability density of the lowest three harmonic oscillator levels ($n$ = 0, 1, 2) is shown, together with the resulting total wave function.
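A minimal Python sketch of the same superposition (standard harmonic-oscillator formulas in units ħ = m = ω = 1; the amplitudes a, b, c play the role of the Demonstration's sliders — an illustration, not the Demonstration's source):

import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi

def phi(n, x):
    """n-th harmonic-oscillator eigenfunction (hbar = m = omega = 1)."""
    coef = np.zeros(n + 1)
    coef[n] = 1.0
    norm = 1.0 / np.sqrt(2.0**n * factorial(n) * np.sqrt(pi))
    return norm * hermval(x, coef) * np.exp(-x**2 / 2)

def Psi(x, t, a=1.0, b=1.0, c=1.0):
    """Superposition of n = 0, 1, 2 with E_n = n + 1/2."""
    amps = np.array([a, b, c], dtype=complex)
    amps /= np.sqrt(np.sum(np.abs(amps)**2))        # normalise the amplitudes
    return sum(A * phi(n, x) * np.exp(-1j * (n + 0.5) * t)
               for n, A in enumerate(amps))

x = np.linspace(-5.0, 5.0, 400)
print(np.trapz(np.abs(Psi(x, 0.7))**2, x))          # stays 1 at all times
# |Psi|^2 recurs with period 2*pi because the levels are equidistant.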
|
5c22874e0a6c2de3 | Educational Research
At present, there are two approaches to quantum chemistry problems: computational quantum chemistry and non-computational quantum chemistry. Computational quantum chemistry is mostly concerned with the numerical computation of molecular electronic structures by ab initio and semi-empirical methods, while non-computational quantum chemistry deals with the formulation of analytical expressions for the properties of molecules and their reactions. Computational chemistry is a UK and Irish sales partner of Wavefunction Incorporated – a USA-based leading developer of advanced chemistry software for research, education and drug discovery. The term theoretical chemistry could be defined as a mathematical description of chemistry, whereas computational chemistry is generally used when a mathematical method is sufficiently well developed that it can be automated for implementation on a computer.
Mobility and ubiquitous connectivity, cloud solutions and security, big data and analytics, and software capabilities driving growth in the telecommunications and technology sectors could be properly leveraged to enable ubiquitous personalized healthcare, deliver considerable healthcare and social value, and realize scalable growth.
Computational chemists frequently attempt to solve the non-relativistic Schrödinger equation, with relativistic corrections added, although some progress has been made in solving the fully relativistic Dirac equation. In principle, it is possible to solve the Schrödinger equation in either its time-dependent or time-independent form, as appropriate for the problem in hand; in practice, this is not possible except for quite small systems.
Chapters organized in a tutorial structure and written in a non-mathematical style let students and researchers access computational methods outside their immediate area of expertise. The U.S. Bureau of Labor Statistics predicts a 15% increase in the number of "computer and information research scientists" (a category that includes computer scientists as well as computational scientists) between 2012 (26,700 jobs) and 2022 (30,800 jobs). Most computational chemists work full time, and many of them have flexible work schedules.
Computational chemistry methods are recognised as vital tools in the chemical sciences, where they are employed to answer questions posed by fundamental science and to tackle challenging problems faced by industry. Expert-level computational chemists may pursue a teaching and/or research career in academia, or they may work in industry or for a government agency or national laboratory.
Students will be taught by academic staff at the EaStCHEM Research School of Chemistry, which is a partnership between the Schools of Chemistry at the University of Edinburgh and the University of St Andrews. Chemists have been carrying out computations for centuries, but the field we know today as "computational chemistry" is a product of the digital age. Many continue in research via post-doctoral research fellowships, while others develop and test chemical software.
|
43361fcdcfcf71e4 | LUCIFER in the Temple of the Dog I
LUCIFER in the Temple of the Dog I
By Jack Heart, Orage & Friends
Every story has a beginning and an end, everything in between is just a story…
The oldest stories known come from the Aborigine people of Australia. Their stories go back at least thirty thousand years. They are passed on orally by the tribe's elders under a rigid tradition called "the law" which ensures the preservation of the Aborigines' ancient tribal narratives. Linguistic scholars who have studied them have noted the Aborigines' ability to sustain "the inter-generational scaffolding needed to transmit stories over vast periods." 1
Aborigine tribal lore has been academically documented to chronicle the thawing of the Ice Age and the flooding of the Australian coastline thirteen-thousand years ago.2 According to The Wisdom Keepers, an episode of Ancient Aliens, the television show purporting to document alien intervention in human history, Aborigine lore also recounts meteorite impacts, tsunamis, volcanic eruptions, earthquakes and solar eclipses…3
What is certain is that aborigine culture ignores the brutal realities of its own existence and focuses on what is now called the dreamtime. The word dreamtime itself is a mistranslation of the Aborigine word alcheringa, which means the uncreated source; a source which was always there, which perpetually yields fresh materials from which everything that is perceived is derived.
To the aborigine the dreamtime is an altered state of consciousness that lies across the uncharted chasms of the mind, a place where everything that ever was has been imprinted forever in the aether. Nothing that was, nothing that is, can be lost and it can always be accessed by going back to the beginning through ceremonies and dreams.
According to Ancient Aliens; “in many ways the concept of dreamtime mirrors the ancient Hindu idea of the Akashic records.” 4 This may not be true…
The idea of Akashic records goes back no further than Madam Blavatsky and Theosophy, a system of mysticism which she founded. Akasha simply means aether in Sanskrit.
The expansion of the microcosm into the macrocosm and contraction back of the macrocosm into the microcosm is a doctrine of just about every reputable school of mysticism. "As it is above is so it is below" to the Hermeticist. "And the living creatures rush forth and return" as it is written in verse 537 of the Zohar: Concerning the Eyes of Microprosopus…
If Blavatsky and her followers got the idea from anywhere other than a library that there was an astral hall of cosmic records it was from Tibetan lamas schooled in the all but forgotten ways of the ancient Bon religion. Bon was the mysterious religion of Tibet before Buddhism, a primal type of animism that believes all things animate and inanimate are sourced from an invisible world.
Ancient Aliens is a show that is often painful to watch yet is a necessity for any serious student of human history. The show has by far its finest moment in its decade-long existence when it proposes that the Aborigines' concept of the dreamtime matches a leading edge property of String Theory called the "Holographic paradigm."5
There are tears in the fabric of Man's reality that upon scrutiny open to abysses of darkness. Quantum entanglement has been proven over and over again in laboratories whose annual budget would bankrupt a small country. Einstein was wrong and his precious "particles" do react with each other by some mechanism that travels faster than light. Anyone who's ever had a premonition should have known that…
In the Holographic universe, quantum entanglement the enigma of superluminal interaction between particles –what a baffled Einstein called “spooky action at a distance,” petulantly denying its existence in the face of all the evidence (even then) 6 – is easily explained. What are being observed in particle physics are not particles at all, but different aspects of interference patterns generated by the collision of spherical frequency waves emanating from an Event Horizon.
The Holographic paradigm postulates, in fact takes it as a given, that at the threshold of the time-space continuum, what physicists call the cosmological horizon, lies the source of everything that is, ever was, or will be. The information that composes the universe is never lost or changed. It's immutable and is broadcast in oscillating signals, generating a chaotic sea of fluctuating frequencies that are picked up by man's senses and translated by the mind into the three dimensional world in which he finds himself.
In short; consciousness takes place inside a frequency receiver and “reality” is a television show…
The empirical evidence is overwhelming that the human brain works in the exact same manner as a hologram. This is called the Holonomic brain theory by neuroscientists. Many just cannot accept its implications. But its founder Karl Pribram, who held professorships for ten years at Yale and thirty at Stanford, was the Albert Einstein of neuroscience…
Pribram died in the beginning of 2015 at the age of ninety-five after a long and distinguished career working side by side with such giants in science as B. F. Skinner, John von Neumann and David Bohm, arguably the most brilliant physicist that the Anglo-American empire produced during the twentieth century.
Bohm collaborated closely with Pribram in the formulation of the Holonomic brain theory, but his earlier radical communist political affiliations would have barred him from the inner sanctums of the Stanford Research Institute.
There at Menlo Park, in the womb of madness, Pribram would have had access to at least some of the classified material of Harold Puthoff and Russell Targ. Throughout the seventies Puthoff and Targ were weaponizing the paranormal for America's Department of Defense. They were working in the outer limits of quantum entanglement. In fact, Pribram admits to consulting with both Puthoff and Targ about it before beginning his collaboration with Bohm…7
In the same interview, from years ago, Pribram explains that “when an input comes in through one of the senses to the brain, it has to then become encoded in some way so that there is a representation.”8 Pribram calls these representations memory traces and says they have no localized point of origin in the brain.
“If you hack away at the brain” in surgery “you would expect that whatever representational process there is and –call it a memory trace if you will– that it would really be impaired tremendously, that you would remove a memory,” like cutting off a piece of a picture. “It doesn’t work that way.”
Pribram –a highly skilled neurosurgeon– noted among other things for his experimental work at the Yerkes Primate Center, of which he became director, recounts that “when lesions occur in the brain there is never any particular memory trace that is removed.” Recalling from over a half century of experience he continues “you may remove something, like the way to retrieve, to get back out the memory. For instance; you might not be able to talk about it but you can still write a note and say what it is you mean.”9
But the overall method by which these memories are spread throughout the brain, enabling them to avoid damage from injury, has always been a mystery.
Pribram explains that it was discovered in the late fifties that the input from the retina is organized in spots, then focused into lines in the cerebral cortex suggesting that the cerebral cortex is filled with cells that act as line detectors. These cells are sensitive to lines at multiple orientations and once you have lines you can create “circles, faces, stick figures, whatever” to formulate images.10
The idea that the cerebral cortex was interpreting interference patterns can be traced back to Germany in 1906.11 Decades later, Karl Lashley, Pribram's mentor at the Yerkes Primate Center, reached the same conclusion.
Interference patterns can be seen in the water if you cast two stones into a pool. When the series of concentric waves generated by each of the stones clash, the resulting confused ripples or wavelets are interference patterns.
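Not from the essay, but the two-stone picture is easy to sketch numerically; the grid, stone positions and wavelength below are arbitrary illustrative choices:

```python
import numpy as np

# Two "stones" dropped in a square pond, each radiating a circular wave.
x, y = np.meshgrid(np.linspace(-5, 5, 200), np.linspace(-5, 5, 200))
stones = [(-1.5, 0.0), (1.5, 0.0)]      # arbitrary positions of the two stones
k = 2 * np.pi / 1.0                     # wavenumber for a wavelength of 1 unit

# The pond carries the sum of the two circular waves.
field = sum(np.cos(k * np.hypot(x - sx, y - sy)) for sx, sy in stones)

# Squaring gives the intensity; ridges and troughs trace the fringes.
intensity = field ** 2
print(intensity[::25, ::25].round(1))   # coarse sample of the fringe pattern
```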
In the interview Pribram asks “what might constitute those interference patterns in the brain” and “given interference patterns, how do you get an image out of that?”12 He then answers his own questions saying both problems were solved when people started building holograms at the University of Michigan and at Stanford (around 1962). He qualifies that by saying “because a hologram is a photographic store of ripples, of interference patterns. Instead of pebbles on a pond, what you have is light beams hitting the film.”13 The light then spreads in ripples over the surface of the film.
Pribram continues “Every light beam that hits does that and the neighboring ones do it and the neighboring ones and so you got every light beam, every part of a beam essentially spread over the entire surface. That’s why mathematically it’s called a spread function.”14
In a hologram that spread function is translated into images and with every passing year in neuroscience it becomes more and more apparent, Pribram uses the word “overwhelmingly,”15 that the brain functions in the same manner.
Pribram goes on to say that “over the last thirty years or so more and more evidence has accumulated to suggest strongly that the cerebral cortex acts as a resonator. It resonates to the frequencies of energies that are being transduced by the receptors; it’s the frequencies of energies.” He emphasizes that this is not an epiphany. German scientists were talking about it in 1906…16
Holography works by using interference patterns to encode information about a three dimensional object into what is, for all intents and purposes, a two dimensional surface. The interference patterns can then be translated back into a three dimensional image. A tremendous amount of information can be stored and transferred this way.
Another profoundly functional feature of the hologram, analogous to the non-locality of memory in the human brain, is that all information is stored throughout the entire hologram. As long as a part of the hologram is big enough to contain the interference pattern, it can recreate the entire image stored in the hologram.
Holographic technology is based on the Fourier transform, a type of integral transform computed as an improper Riemann integral. The Fourier transform itself is a mathematical tool originally used in the nineteenth century to describe the transfer of heat between two systems. Fourier transforms are the foundation of spectral analysis in the late twentieth and early twenty-first centuries.
In a Fourier transform two graphs are created: one showing the frequency domain and the other the time domain. The signal is then mapped between the two domains and, through various permutations of the equations, a spectrum is obtained of all the individual frequencies that constitute a function of time, what is defined as a signal…
Often it is easier to solve a problem in the time domain by working on it in the frequency domain. Afterwards the result can be transformed back to the time domain by reversing the equation, what is called an inverse Fourier transform. The entire signal can be filtered simply by changing the frequencies in the frequency domain…
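Not from the essay: a minimal Python sketch of that round trip, with a toy signal; numpy's FFT routines stand in for the transform and its inverse.

```python
import numpy as np

# A toy "signal": a slow 5 Hz tone buried under a fast 60 Hz hum.
rate = 1000                               # samples per second
t = np.arange(0, 1, 1 / rate)             # one second in the time domain
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)

# Forward transform: time domain -> frequency domain.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), 1 / rate)

# "Filtering simply by changing the frequencies": zero everything above 30 Hz.
spectrum[freqs > 30] = 0

# Inverse Fourier transform: frequency domain -> back to the time domain.
filtered = np.fft.irfft(spectrum)
print(np.allclose(filtered, np.sin(2 * np.pi * 5 * t), atol=0.05))  # hum removed
```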
A Fourier transform can, theoretically, be used to send a function of the three dimensional continuum into a moving four dimensional mass or vice versa…
The father of the hologram is 1971 Nobel Prize recipient Dennis Gabor, who right after WW II produced the math –called windowed Fourier transforms– necessary to make one. Gabor served in a Hungarian artillery unit during WW I and in the twenties was instrumental in the development of the electron microscope in Berlin. When the National Socialists came to power in 1933 Gabor, a Hungarian Jew who had converted to Lutheranism, fled Germany for England.
By the time Gabor worked with them, Fourier transforms had been infused with the genius of Bernhard Riemann, the nineteenth century German mathematician who broke the back of Euclidean geometry for good, making quantum physics and relativity possible; Erwin Schrödinger, the twentieth century Austrian physicist whose wave equation would become one of the two pillars of quantum physics and the foundation of wave mechanics; David Hilbert, the German mathematician who taught most of the others and after whom Hilbert space is named; and Werner Heisenberg, the discoverer of Heisenberg's Uncertainty Principle, the other pillar of quantum physics…
Gabor would at least have had access to, if not worked directly with, the legendary John von Neumann, Hilbert's best pupil. Gabor and von Neumann were both Jews, native Hungarians and born to money, although von Neumann's education under Hilbert had been paid for by the Rockefeller Foundation. Von Neumann was in fact titled nobility, besides being the man who named Hilbert space in Hilbert's honor.
Von Neumann was perhaps the most brilliant mathematician who ever lived. He would leave Berlin upon concluding his tutelage under Hilbert and be in Princeton by the end of 1929…
At Princeton, von Neumann delighted in playing Prussian marching music so loud on his gramophone that Einstein, who was in an adjoining office, would have to ask the authorities to intervene. In vain; there was nothing Einstein or anyone else could do about it. Von Neumann wrote the textbook for quantum mechanics, Mathematische Grundlagen der Quantenmechanik, or in English, Mathematical Foundations of Quantum Mechanics.
His mathematical contributions to civilization could fill a library, but his real achievements remain classified to this day. It is said that when von Neumann was dying of cancer, while under sedation he was surrounded by a Special Forces guard to ensure he didn't blurt out any of the empire's secrets.
Von Neumann would tell anyone who would listen, and delighted in it, that he had mathematically proven Einstein wrong. Most academics, although they could not understand his math, believed him and still do… although they are now fonder of the theoretical results of John Stewart Bell for their Einstein bashing…17
Einstein had always insisted that there were hidden variables that when discovered would reconcile quantum physics, which is indeterminate, and relativity, which is determinate. In Einstein's vision of the future there would be just one unified field of physical phenomena and it would be deterministic.
In physics, deterministic means events transpire as a result of mechanistic necessity and are therefore predictable. They follow laws. All physical phenomena should follow rules.
But they don't. In quantum physics, quantum entanglement is not the only enigma. There is the double slit experiment, where individual particles are fired one at a time at a screen through a barrier with two slits. What shows up on the screen is a wave interference pattern which could only have been made by waves passing through both slits…
There is the wave function collapse and quantum randomness in general. If the observer measures the position of a “sub-atomic particle” in space they cannot know its momentum, because the very act of locating it influences its trajectory. If they measure its momentum, the act of doing so prevents them from finding its position. That's the short definition of Heisenberg's Uncertainty Principle.
It’s all about predicting probabilities in a matrix, nothing is certain and the observer is part of the equation, anathema to ‘good science…’
Erwin Schrödinger, who won the Nobel Prize in 1933 for providing the equation that makes it all work, was more than just a scientist. A philosopher and poet at heart, he was a lifelong student of the Vedas and believed individual consciousness was a manifestation of the universal whole.
Back then, Schrödinger described the prevailing interpretation of quantum physics, now called the Copenhagen interpretation, as making no distinction “between the state of a natural object and what I know about it, or perhaps better, what I can know about it if I go to some trouble. Actually — so they say — there is intrinsically only awareness, observation, measurement.”18
The Copenhagen interpretation is the prevailing school of thought in quantum physics to this very day. As George Berkeley, the father of Immaterialism and therefore of the Copenhagen interpretation, said three hundred years ago: nothing can exist if there is nothing to see it, “esse est percipi,” to be is to be perceived.
After serving as an apprentice to the mysterious German scientist Max Wien, heir of Friedrich Paschen's late nineteenth century experimental research on hydrogen spectral lines in the infrared region, Schrödinger would begin publishing papers about atomic theory and the theory of spectra in the early twenties…
He would publish his famous equation in 1926. In the twenty-first century, it’s still the tool mathematicians use to describe a wave function. In the Copenhagen interpretation the wave function is the most complete description that can be given to a physical system.
In Quantum mechanics the Schrödinger equation predicts probability distributions from which results are drawn. A probability distribution is a mathematical description of a random phenomenon. There are no exact results and at the time Schrödinger is quoted as saying “I don’t like it, and I’m sorry I ever had anything to do with it.”19
Einstein was livid. Not only was special relativity no longer tenable but perhaps relativity itself. As every school child knows, he said “God does not play dice with the universe!”
Schrödinger worked closely with Einstein in the ensuing years, attempting to formulate a unified field theory and reconcile the whole mess into one deterministic science, but by the end of the forties he had abandoned those efforts. In a 1952 lecture, he made the first documentable reference to what has become known as the multiverse, prefacing it by saying that what he was about to say might “seem lunatic.” 20
Schrödinger went on to tell his perplexed audience that when his equations seem to be describing several different histories they are “not alternatives but all really happen simultaneously…”21
Famously, in 1956 Schrödinger would refuse to speak about nuclear energy at an important lecture during the World Energy Conference, giving a philosophical lecture instead because he had become skeptical about the entire subject. He would cause a great deal of controversy in the physics community after that, abandoning the idea of particles altogether and adopting the wave-only theory also put forth by Hugh Everett III in his many-worlds interpretation of the multiverse.
In the many-worlds interpretation, the wave in the quantum state is the only thing that is real and under the appropriate conditions it will exhibit particle-like behavior. In Everett’s multiverse, everything that ever could have happened in the past did and every possibility spawns its own universe where that possibility did and does occur.
After John von Neumann died prematurely of cancer in 1957, Hugh Everett III would become the Anglo-American empire's go-to guy on quantum physics…
Pilot Waves were first proposed by Einstein in an effort to explain the wave interference patterns produced by particles in cases like the double slit experiment. He had hoped that they could be explained deterministically if the particle were somehow guided by an electromagnetic field; “which would thus play the role of what he called a Führungsfeld or guiding field.”22
The idea of a pilot wave was picked up and made mathematically feasible by Louis de Broglie in 1927, but with little support from a physics community now enamored with Heisenberg and the Copenhagen interpretation it died a slow death from neglect.
De Broglie’s math was resurrected by David Bohm in 1952 and renamed Bohmian mechanics. Heisenberg, who had been “profoundly unsympathetic”23 to the idea from its inception in the twenties wrote in 1955 that it was nothing more than an “exact repetition” of the Copenhagen interpretation “in a different language…”24
Regardless of the value of “Bohmian mechanics” the rest of what David Bohm had to say about the holographic universe may be a summation of everything that was really learned by man in the twentieth century (outside of course all those in this account who had an above top secret clearance…).
Bohm said there were two worlds. The primary one he called the Implicate Order or the enfolded order. He said the enfolded order was “the ground out of which reality emerges.”25 The other world, “reality,” the world of the human senses, the world where consciousness dwells, he called the Explicate Order or the unfolded order.
“What we take for reality, Bohm argues, are surface phenomena, explicate forms that have temporarily unfolded out of an underlying implicate order. Within this deeper order forms are enfolded within each other so systems which may well be separated in the Explicate Order are contained within each other in the Implicate Order.”26
Superficially it would appear the two worlds are “dual forms related by an integral transfer” but the reality is the unfolded order cannot exist independent of the enfolded order.27
Bohm, always a pariah to the powers that be because of his politics, sometimes had his work classified before he could even finish it. During the Manhattan Project he was barred access to Los Alamos and was not allowed to write up the thesis based on his own scattering calculations.
Einstein had always been his mentor, shielding him and preventing his ostracism from academia, and Bohm had always worked closely with him in Einstein's quest to save physics as he knew it. But by the end of the war Bohm had come to the conclusion that quantum mechanics would never become a deterministic science. He stopped looking for deterministic mechanisms as the cause of quantum phenomena and set out to show that the events could be attributed to a far deeper underlying reality.
Bohm's idea of an Implicate and Explicate order mirrors the conclusions reached by Mircea Eliade, the world's foremost theological scholar of the WW II era…
Eliade said there are only the Sacred and the Profane. The Sacred is the place of mythology, where the gods and archetypes dwell together with all the things that establish the very structure of this world. The Sacred is the First Cause of the Gnostics, the alcheringa of the Aborigine and the Implicate Order of Bohmian mechanics.
The Profane is the material things of this world, the things that have nothing to do with the Sacred. They are basically just like the set of an old black and white movie… Eliade said they “acquire their reality, their identity, only to the extent of their participation in a transcendent reality.”28 In other words, it is only through its participation in the Sacred that the Profane finds validation.
Through his myths, his ceremonies and his rituals, even in his behavior and dreams, man manifests the Sacred into the Profane. It is Man himself who breathes reality into the fleeting and phantasmagorical world of the Profane…
Eliade said that in order to uphold the world of the Profane, the Sacred must be manifested into it, over and over again. He called these incarnations, these places where the Sacred intersects with the Profane, the Eternal Return (not to be confused with Nietzsche's Eternal Return, just as important but more to do with the cycle of the Yugas and the Mandala). Eliade called these manifestations of the Sacred into the Profane hierophanies.
Eliade maintained that all Shamanic practices in cultures uncluttered by the poisons of twentieth century rationalism, indeed the foundation of all Paleolithic spiritual practice, were attempts to produce these hierophanies.
No one was, nor ever will be, more influential than Mircea Eliade, not even the vaunted Joseph Campbell. But present day academia with its penchant for semantics and cutting the whole up into smaller and smaller pieces till there is nothing left to see at all (both Pribram29 and Bohm30 warned the world about this), still rails against him. They say Eliade painted all cultures with too broad a brush stroke and seem to feel that their exceptions are more important than his whole, the same mistake Einstein made…
But even Eliade's staunchest critic, Geoffrey Kirk, Regius Professor of Greek at the University of Cambridge from 1974 to 1984 and a prolific author himself, concedes that what Eliade said about the Eternal Return fit the culture of Australia's aborigines like Cinderella's slipper…
There has always been something dark and foreboding about Australia. Master of horror H. P. Lovecraft wrote about it in The Shadow out of Time. There is something menacing, something unspoken and threatening, a nameless fear of the stark and unforgiving land and an instinctual loathing of its native aborigine inhabitants that runs like an unseen current through the hard White men who dispossessed them.
In 1770 a British exploratory expedition led by James Cook would land in Botany Bay, where the great city of Sydney now stands. They began shooting the natives immediately and the fighting would continue for over a hundred and fifty years. It finally subsided after the Coniston massacre in 1928 in the Northern Territory, which left over a hundred Aborigines dead.
Overall the fighting left thousands of Whites dead and hundreds of thousands of Aborigines. There were no pitched battles; the fighting was at close quarters, often hand to hand before repeating rifles were invented, and savagely brutal, more like gang fights than military engagements. Atrocities were committed by both sides and, in the interest of political correctness, a well-documented history of cannibalism among the Aborigines has been kept suppressed by the authorities.31
The Aborigine bore no animosity towards Whites because of their skin color. Eating the dead was strictly business in a land where distances are endless and the sun relentless. As settlers claimed the rights to all Australia’s fertile land the Stone Age hunting and gathering lifestyle of the Aborigine provided less and less sustenance. Resentment, and hunger, became inevitable.
But a journal from as late as 1849 explains how the Aborigine viewed Whites as their “ancestors who have returned to them again.”32 The archived diary describes how the Aborigine, before eating each other, would “scorch off the entire outer skin or epidermis which reveals the ‘true skin’ which in all branches of the human race is quite white.”33
“Their impression being that when they die ‘The black fellow England walk and by and by jump up white fellow.’”34
Australia is rivaled for geological anomalies only by its nearest neighbor Papua New Guinea. Both have stood in isolation for what academia says is sixty thousand years. Only their indigenous tribes, more like ghosts than men, can testify as to what cataclysmic events they may have witnessed.
In the Kimberley region of Western Australia four thousand year old cave paintings depict fantastic beings from the dreamtime called Wandjina. Local Aborigines believe the actions of the Wandjina in the dreamtime manifest themselves as features in the landscape of Australia's Great Western Desert. They believe these beings control the wind, the rain and the lightning…
The Wandjina
Rising like a specter out of the center of the Australian continent and on an otherwise almost unbroken horizon is Uluru or Ayers Rock, an isolated hill that appears as if a single great stone has been embedded into the earth. Uluru, a Mecca for tourists, is famous for its glowing red appearance at dusk and dawn and is sacred to the Aborigine.
At two miles long, over a mile wide and eleven hundred feet high, Uluru is by far Australia's best known geological anomaly. But just as striking are Kata Tjuta, fifteen and a half miles to the west, and Mount Conner, slightly to the south and forty-five miles east of Uluru.
Kata Tjuta or the Olgas consists of thirty-six domes covering a little less than eight and a half square miles, the tallest being Mount Olga at over seventeen hundred feet high. Mount Conner covers eight and a half square miles and rises nine hundred and eighty-four feet at its highest point. All of them are conglomerates of granite-like stone and gravel cemented by a matrix of sandstone, about 50% feldspar, 25–35% quartz and up to 25% rock fragments.
Explanations abound for how the island mountains, called inselbergs by academics, got to be in the western desert. They range from the electric universe theory which postulates that they are the result of an immense electrical discharge, to creationism which of course believes they were scoured out by the deluge, all the way to academia’s old standby of a greased pig, erosion…
Local Aborigines believe most of the south face of Uluru is the result of a war fought in the dreamtime between the carpet-snakes (Kunyia) and the venomous-snakes (Liru). The northwestern corner of Uluru and most of its north face were formed as a result of the activities of the hare-wallabies (Mala), and the comings and goings of other dreamtime entities fill in the rest of Uluru's geological features.
To the Aborigine it is the dreamtime that generates this world and with it the landscape…
Black Mountain National Park is located at the northern end of Queensland, a little over five miles from the Coral Sea. “The park” is just a restricted three square mile area around a pile of dark colored granite boulders, some the size of houses. The pile reaches almost a thousand feet in height. Academics have explanations for this striking geological anomaly but to the untrained and perhaps the more objective eye the boulders appear to have been placed there by unknown methods for unknown reasons.
Black Mountain has a sinister reputation among Whites as well as the Aborigine. The Aborigine call it Kalkajaka or place of the spear and avoid it. People disappear around Kalkajaka and the people who go looking for them disappear too.
Some believe the missing have simply been lost forever in the labyrinthine passages between the boulders. Others claim the missing were eaten or enslaved by reptilian aliens that, among other things, have been sighted around the rocks. They believe reptilian aliens have a secret base under Black Mountain where UFO sightings are a regular occurrence.
UFOs have been receiving a lot of attention lately in Australia. An Australian himself, Duncan Roads –editor of Nexus Magazine for over a quarter century and the most respected name in the alternative media– recounts “Australia is certainly a hot spot of UFO sightings. We've had a phenomenal growth in the reporting of UFO sightings by the general public especially since the advent of the internet.”35
Roads points to the area around the Blue Mountains in Australia's New South Wales “as a hotspot of UFO sightings and other mysteries. There is certainly a lot of mystery in the Blue Mountains. Campers, bushwalkers, explorers all have got tales of mystery, disappearing people, strange tunnels, strange noises and strange creature sightings…”36
According to Aboriginal tribal elder Kevin Gavi Duncan “the Blue Mountains is a very sacred area, sacred place, especially the highest places, because we would be closer to Baiame, closer to god.”37
The human disappearances in the Blue Mountains seem to be focused around Mount Yengo. Called the Uluru of the east, the flat top of Mt. Yengo rises about a thousand feet above a plateau and is believed by academics to be all that remains of an ancient volcano. Perhaps because of its prominent flat top, Aborigine tribes believe that after he was done with the act of creating this world their creator god Baiame leapt back up into the spirit world from Mt. Yengo.
Roads continues “UFO sightings of the Blue Mountains have triggered many magazine articles, radio shows and books. A lot of people have come forward over the last few decades to document and put onto the record their own experiences.”38 Rex Gilroy, author of Mysterious Australia, has unearthed accounts of UFO sightings in the Blue Mountains by nineteenth century pioneers…39
Ancient Aliens straight man David Hatcher Childress theorizes that the Blue Mountains are a “stargate, some portal to another dimension and jumping to hyperspace perhaps…”40 Childress speculates “For some reason Australia was the place where they put this hyperspace portal used by extraterrestrials.”41
Duncan continues “there are stories that elders would say, that some people have actually travelled back to the Morning Star and have come back again.”42 Earlier, standing in front of an ancient rock carving depicting Baiame about forty miles southeast of Mt. Yengo, Duncan explained “Baiame came from a place that we call the Morning Star within the Mirrabooka. Mira means stars and booka means river. That is the Milky Way that flows across the North Star. ”43
Baiame, Bulgandry Aboriginal Engraving site, Brisbane Water National Park, New South Wales, Australia
Duncan then gives his interpretation of the petroglyph. Baiame “holds the Moon in one hand and the Morning Star in the other. Which is a bit like what we call planet earth and these are the two moons which exist around the Morning Star in the Mirrabooka.”44
What the petroglyph shows is Baiame with his arms outstretched and a giant knife horizontal across his navel. The hilt is under his left arm. He is holding a circle in his right hand and a crescent in his left. Below the crescent is another circle suspended in midair and slightly smaller than the one he holds in his right hand. To the right of the free floating circle, perfectly horizontal to it, is a much smaller, almost tiny circle. Slightly to the right of the tiny circle and above it is another tiny circle.45
If the two tiny circles are rotated about two hundred and seventy degrees clockwise, or ninety degrees counter-clockwise, so that the tiny circle that was furthest from Baiame is now in the hilt of the knife, you would have close to an image of what, left to right, is in the middle of Australia. Mount Conner would be the large circle, now furthest right.
The Three Sisters rock formation is about fifty miles to the southwest of Mt. Yengo. The three craggy pillars of sandstone tower above the lush Jamison Valley, no doubt conjuring memories in Australia's early Anglo-Saxon settlers of the three Wyrd Sisters crouched at their cauldron casting spells on both gods and men in Shakespeare's Macbeth.
Wyrd is an old Anglo-Saxon word meaning destiny, to come to pass, to become. By the fifteenth century it had come to mean having the power to control fate. In sixteenth century Scotland and northern England wyrd implied that an event was miraculous. It wasn’t till the early nineteenth century that weird came to mean something was odd. The Proto-Indo-European root is wert meaning to turn or to rotate…
In the 1965 epic science fiction novel Dune by Frank Herbert the Wyrding Way is an overwhelming close-quarters fighting technique used by the story's messianic hero and his rebel armies with devastating effectiveness. In hand to hand combat its adepts are able to maneuver around and strike their opponents at speeds that resemble teleportation to the observer, and words and sounds can be amplified to become lethal weapons.
Mastery of the Wyrding Way required the adoption of a completely different concept of what the space-time continuum is and what its cause and effect are. The essence of the Wyrding Way is summed up in both the motto and the mantra of its practitioners “my mind affects my reality.”
Wyrd is a notion taken from the pre-Christian religion of the Norseman. In Old Norse the word is Urðr. It is also the name of the mother of the Norns, female beings who rule over the destiny of gods and men. There are many Norns, good and evil, who appear at a person’s side at their birth and decide upon their future.
Urðr (fate), Verðandi (present) and Skuld (karmic debt) are the most powerful of the Norns and said to have come to intervene in a time long past when the gods ruled too haughtily over men. The three beautiful maidens pour the purifying waters of the Urðarbrunnr (Well of Urðr) over the Yggdrasil (Tree of Life) to keep it eternally rejuvenated.
The Urðarbrunnr is said to be one of three wells, one under each of the three roots of the Yggdrasil. Each root reaches to a different far off land. The other two wells are Hvergelmir (bubbling boiling spring), located beneath a root in Niflheim (Abode of Mist), and Mímisbrunnr (Mímir’s well), located beneath a root near the home of the frost jötnar (Giant). It was said that Odin gave one of his eyes to drink from the Mímisbrunnr, the well of wisdom and understanding.
Aside from Tasmania and parts of New Zealand, Australia's Blue Mountains are the last real stop in the Pacific Ocean before the Antarctic. The Blue Mts. are about as far away as you can get from the land of the Norsemen on the Baltic Sea. But as Caroline Cory, author of The Visible and Invisible Worlds of God, notes “there are several umbilical cords on the planet. This particular location is located exactly at negative thirty-three latitude.”46
Cory then recites the standard alien enthusiast dogma about the thirty-three degree latitude of planet earth aligning with the center of the galaxy and how it is “continuously being visited from different parts of the planetary system from different parts of the galaxy and even from beyond this galaxy, from way out in the universe.”47
Most amateur UFO enthusiasts have never heard of Bruce Cathie and his book Harmonic 33, published way back in 1968. But most professional researchers are well acquainted with the book and many new age authors use Cathie's math to validate their Tinkerbellian speculations.
“Even while you read this interplanetary space ships are rebuilding a world grid system from which it appears they can draw motive power and they are possibly using the grid for navigational purposes.” 48
This is the cover sentence in Harmonic 33. There are rumors that the original book was immediately pulled from bookstore shelves, edited, then rereleased with Cathie put under wraps and assigned a handler, never to produce anything again of any consequence for the general public, though he would write a few more books.
Cathie, a New Zealand airline pilot, saw his first UFO in 1952. He would be fascinated till he died in 2013. He began collecting data and collating it with sightings by other pilots over New Zealand. Using techniques borrowed from French UFO researcher Aimé Michel he was able to establish two track lines where aerial anomalies were being regularly encountered. From there he “was able to form a complete grid network over the whole of New Zealand…”49
Cathie learned that the American survey ship Eltanin had taken some of the strangest photographs of the twentieth century off the west coast of South America. There, thirteen thousand feet beneath the waves, mounted on the Pacific seabed, was an “aerial-like object” that was “two-to-three-feet high and had six main crossbars spaced evenly up its stem with a smaller one at the top. Each set of crossbars had a small ball at the end of each arm.”50
Later one of the scientists who had been on board the Eltanin told Cathie the object was thought to be metallic and an artifact of some kind. Cathie was able to align his New Zealand grid with the coordinates of the artifact fashioning what he reasoned was a world energy grid and perhaps used as a galactic navigational tool by extra-terrestrials.
Interestingly enough, in light of Erwin Schrödinger's actions at the World Energy Conference in 1956, Cathie did not believe nuclear weapons could be detonated randomly but would have to be at exactly the right coordinates at exactly the right time to work. Using his world energy grid he started publicly predicting the exact times and places of test sites before they got him muzzled…
In Cathie’s own words “It was only a matter of time before I realized that the energy network formed by the grid was already known to a powerful group of international interests and scientists. It became obvious that the system had many military applications, and that political advantage could be gained by those with secret knowledge of this nature. It would be possible for a comparatively small group, with this knowledge, to take over control of the world.” 51
Cathie concluded that the “whole of physical reality was in fact manifested by a complex pattern of interlocking wave-forms.”52
Aliens are a very grey area, as is reality itself. What the Explicate Order translates out of the Implicate Order, what the Sacred manifests in the Profane, they are like points in a wave that show up as a particle. Just as surely they are guided only by Heisenberg’s Uncertainty Principle…
Something is going on in the Blue Mountains, always has been. It’s been categorized by twenty-first century academia as paranormal but it’s something Australia’s aboriginal people are well acquainted with.
Duncan Roads is the man who introduced Bruce Cathie to the general public. He knows words like von Neumann knew numbers. He says “the Australian aborigines have a connection and a relationship with what we call extraterrestrials and UFOs which goes back tens of thousands of years. They're rather nonplussed by their existence; they have developed an awareness of individual types of visitors from what we call outer space.”53
The Three Sisters crouch at the south edge of the town of Katoomba, an Anglo-Saxon enclave of artists and artisans. They can be viewed from its golf course and are the most famous landmark in The City of Blue Mountains, a ribbon of contiguous towns, which lie on New South Wales Main Western railway line. The City of Blue Mountains has dubbed itself ‘The City within a World Heritage National Park.’ It has Sister City Relationships with Sanda City, Japan and Flagstaff, Arizona in the USA.
Located in the southwest of the Four Corners, an area famed for its paranormal activities, Flagstaff is the unofficial capital of the Navajo (Diné) Nation and the Hopi, the priestly tribe who are the keepers of the Diné's most profound secrets.
Like a penitent kneeling at the foot of the altar, Flagstaff prostrates itself at the south foot of Agassiz Peak, Fremont Peak and Doyle Peak in the Kachina Peaks Wilderness.
To the Hopi this area, part of the San Francisco Peaks, the remains of an eroded composite volcano, is the most sacred place in the Four Corners. In fact it is the most sacred place in the world…
The San Francisco Peaks are where the doorways open up for their gods, which they call Kachina, to come forth when they are called in the powerful ceremonies performed by the Hopi.
The Kachina are supernatural beings said to control the wind, the rain and the lightning…
At 11,464 feet Doyle Peak was the site of the world's highest astronomical observation point from 1927 to 1932. Built by the Lowell Observatory, the stated purpose of the cabin on the south side of the summit was to scan the heavens and make spectroscopic observations, especially in ultraviolet and infrared wavelengths…
In 2005 “a collaborative project team formed, the heart of which is still active today, including NASA scientists, Navajo Medicine Men, and both NASA and Navajo educators.”54 Flagstaff is the home of the Lowell Observatory, the U.S. Naval Observatory and the United States Geological Survey Flagstaff Station…
Rock art from Sego Canyon at the northern frontier of the Four Corners.
1 – Reid, Nick, and Patrick D. Nunn. “Ancient Aboriginal Stories Preserve History of a Rise in Sea Level.” The Conversation. 13 Jan. 2015. Web. 25 July 2016.
2 – Ibid.
3 – “Ancient Aliens S11E07 – The Wisdom Keepers.” 11:00. YouTube, 7 July 2016. Web. 26 July 2016.
4 – Ibid. 29:33.
5 – Ibid. 30:09.
6 – Markoff, John. “Sorry, Einstein. Quantum Study Suggests ‘Spooky Action’ Is Real.” Science. New York Times, 21 Oct. 2015. Web. 3 Aug. 2016.
7 – “Karl Pribram ‘Holographic Brain’ New Dimensions 1:12:52.” Youtube. Insightfreeman, 5 Dec. 2012. Web. 15 Aug. 2016.
8 –Ibid. 23:34.
9 – Ibid. 23:52 – 25:04.
10 – Ibid. 30:58.
11 – Ibid. 37:07.
12 – Ibid. 38:47.
13 – Ibid. 39:18.
14 – Ibid. 39:50.
15 – Ibid. 49:15.
16 – Ibid. 51:12.
17 – Goldstein, Sheldon. “Bohmian Mechanics.” 2. The Impossibility of Hidden Variables … or the Inevitability of Nonlocality? Stanford Encyclopedia of Philosophy, 26 Oct. 2001. Web. 24 Aug. 2016. Substantive revision Mon Mar 4, 2013
18 – Ibid. – 1. The Completeness of the Quantum Mechanical Description.
19 – “A Quantum Sampler.” Science. New York Times, 6 Dec. 2002. Web. 26 Aug. 2016.
20 – Deutsch, David. “The Beginning of Infinity,” page 310.
21 – Ibid.
22 – Goldstein, Sheldon. “Bohmian Mechanics.” 3. History.
23 – Ibid.
24 – Ibid. 15. Objections and Responses
25 – Peat, David. “Non-Locality in Nature and Cognition.” Nature, Cognition And System II. Page 304, 1992. Web. 21 Aug. 2016:
26 – Ibid.
27 – Ibid.
28 – Eliade, Mircea. “The Myth of the Eternal Return: Cosmos and History.” Page 5, Princeton: Princeton UP, 1971.
29 – “KARL PRIBRAM – A Holonomic Brain Theory”: 2:35 & 8:08. Youtube. Faustin Bray, 12 Apr. 2011. Web. 29 Aug. 2016.
30 – Bohm, David. “Wholeness and the Implicate Order,” 1980.
31 – Cooke, James, R.N. Rtd. “Anthropophagitism in the Antipodes or Cannibalism in Australia.” N.p.: n.p., 1997. Print. A privately published collection of documented accounts.
32 – Ibid. Page 3. Henry de Burgh, Diary in Battye Library quoted in The Breakaways by W. de Burgh, St George Books, Perth, 1981.
33 – Ibid.
34 – Ibid.
35 – “Ancient Aliens S11E07 – The Wisdom Keepers.” 1:35. YouTube, 7 July 2016. Web. 26 July 2016.
36 – Ibid. 23:47.
37 – Ibid. 23:05.
38 – Ibid. 24:09.
39 – Ibid. 24:33.
40 – Ibid. 26:02.
41 – Ibid.
42 – Ibid. 26:35.
43 – Ibid. 8:26.
44 – Ibid. 8:54.
45 – Ibid. 9:07.
46 – Ibid. 25:37.
47 – Ibid.
48 – White, L.G. (1969, September/October). HARMONIC 33 Reviewed by L.G. White. Retrieved September 12, 2016, from
49 – Cathie, Bruce. “The Harmonic Conquest of Space.” Nexus Magazine, Oct. 1994. Web. 13 Sept. 2016.
50 – Ibid.
51 – Ibid.
52 – Ibid.
53 – “Ancient Aliens S11E07 – The Wisdom Keepers.” 27:00. YouTube, 7 July 2016. Web. 26 July 2016.
54 – “NASA and the Navajo Nation.” NASA Astrobiology at NASA Life In The Universe. Ed. Julie Fletcher. NASA, n.d. Web. 16 Sept. 2016. |
0d978d1720829229 | What Can Quantum Machine Learning Do?
What can quantum computers do? In this post, we explore the many-body problem in quantum chemistry, which may be one of the most immediate applications of quantum computers and quantum machine learning. A main goal of quantum chemistry is to predict the structure, stability, and reactivity of molecules. Modelling how each of the electrons and nuclei in a molecule interacts with and affects the others is essentially impossible classically. In principle, this requires solving the Schrödinger equation for the many-electron problem, a task that is so computationally intensive that even our modern supercomputers fail to perform it fast enough.
The Trade-Off Every AI Company Will Face
Machines learn faster with more data, and more data is generated when machines are deployed in the wild. However, bad things can happen in the wild and harm the company brand. Putting products in the wild earlier accelerates learning but risks harming the brand; putting products in the wild later slows learning but allows for more time to improve the product in-house and protect the brand.
Commercialize quantum technologies in five years
The field of quantum computing will soon achieve a historic milestone — quantum supremacy. It is still unknown whether application-related algorithms will be able to deliver big increases in speed using the sorts of processors that will soon be available. But when quantum hardware becomes sufficiently powerful, it will become possible to test this and develop new types of algorithms. |
b15e210268621461 | Neutrons Optical Devices
September 22, 2015
Optics deals with waves and their transformation by objects, i.e., scattering. Accordingly, since quantum mechanics describes massive particles as traveling waves, neutron optics deals with scattering of the neutron's de Broglie (or matter) wave. The source of this phenomenon may be of solid nature – more precisely the Fermi pseudopotential of atoms – or represented by magnetic fields, which affect the neutron's wave function likewise.
Direct Current (DC) Spin Rotators
Next generation (3D-printed) Spin Rotators
Radio Frequency (RF) Spin Flippers
Supermirror Polarizer
Neutrons in magnetic fields: Larmor precession
The motion of a free propagating neutron interacting with a magnetic field is described by a nonrelativistic Schrödinger equation, also referred to as the Pauli equation, given by $i\hbar\,\partial_t|\Psi\rangle=\big({-\tfrac{\hbar^{2}}{2m}\Delta-\vec{\mu}\cdot\vec{B}}\big)|\Psi\rangle$, where $m$ ($1.675\times10^{-27}$ kg) and $\vec{\mu}=\mu\,\vec{\sigma}$ (with $\mu=-9.662\times10^{-27}$ J/T) are the mass and the magnetic moment of the neutron, respectively, and $\vec{\sigma}=(\sigma_x,\sigma_y,\sigma_z)$ contains the Pauli matrices. A solution is found by the two dimensional spinor wave function of the neutron, denoted as $|\Psi\rangle=\psi(\vec{x},t)\,|S\rangle$, with spatial wave function $\psi(\vec{x},t)$. The state vector for the spin, with eigenstates denoted as $|\uparrow\rangle$ and $|\downarrow\rangle$, is given by $|S\rangle=\cos(\vartheta/2)\,|\uparrow\rangle+e^{i\varphi}\sin(\vartheta/2)\,|\downarrow\rangle$, introducing polar angle $\vartheta$ and azimuthal angle $\varphi$, which can be represented on a Bloch sphere as shown below (left). In the field of general two-level systems (qubits) the term Bloch sphere is conventionally used, whereas the term Poincaré sphere is more common for representation of light polarization.
The neutron couples via its permanent magnetic dipole moment to magnetic fields, which is described by the Hamiltonian $H=-\vec{\mu}\cdot\vec{B}$, where magnetic fields of stationary and/or time dependent origin are utilized for arbitrary spinor rotations in neutron optics. When a neutron enters a stationary magnetic field region (non-adiabatically), the motion of its polarization vector, defined as the expectation value of the Pauli spin matrices $\vec{P}=\langle\vec{\sigma}\rangle$, is described by the Bloch equation, exhibiting Larmor precession: $\tfrac{d\vec{P}}{dt}=\gamma\,\vec{P}\times\vec{B}$, where $\gamma$ is the gyromagnetic ratio given by $\gamma=2\mu/\hbar$. This is the equation of motion of a classical magnetic dipole in a magnetic field, which shows the precession of the polarization vector about the magnetic field with the Larmor frequency $\omega_{\mathrm{L}}=\gamma B$, which is depicted above (right). The Larmor precession angle (rotation angle) depends solely on the strength of the applied magnetic field and the propagation time within the field, and is given by $\alpha=\omega_{\mathrm{L}}\,L/v=\tfrac{2\mu B}{\hbar}\,\tfrac{L}{v}$, where $L$ and $v$ are the length of the magnetic-field region traversed by the neutrons and the neutron velocity, respectively (see here for a detailed derivation of Larmor precession).
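A short numerical sketch of the rotation-angle formula above; the field strengths, coil length and neutron velocity are illustrative assumptions, not values from the post:

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
MU_N = 9.6623651e-27     # magnitude of the neutron magnetic moment, J/T

def larmor_angle(B, L, v):
    """Spin rotation angle (rad) after traversing a field B (T)
    over a length L (m) at neutron velocity v (m/s)."""
    omega_L = 2 * MU_N * B / HBAR     # Larmor frequency (rad/s)
    return omega_L * L / v            # angle = frequency * transit time

# Example: which field gives a pi flip over a 1 cm coil at 2000 m/s?
for B in np.linspace(1e-3, 3e-3, 5):  # a few mT, illustrative values
    alpha = np.degrees(larmor_angle(B, 0.01, 2000))
    print(f"B = {B*1e3:.2f} mT -> alpha = {alpha:.1f} deg")
```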
Larmor precession is often utilized in so called Direct Current (DC) spin rotators, or spin flippers (if the spinor rotation angle is set to 180 deg), as depicted below. The illustrated field configuration assures the highly non-adiabatic transit required for Larmor precession. In practice a second coil, wound perpendicular to the first, is necessary to compensate the component of the guide field inside the rotator.
Using the formalism of quantum mechanics, a spin rotation through an angle $\alpha$ about an axis pointing in direction $\hat{n}$ is described by the unitary transformation operator $U(\alpha)=e^{-i\alpha\,\hat{n}\cdot\vec{\sigma}/2}$, which can be written as $U(\alpha)=\cos(\alpha/2)\,\mathbb{1}-i\sin(\alpha/2)\,\hat{n}\cdot\vec{\sigma}$ (see here for a detailed derivation of the unitary transformation $U(\alpha)$). Note that for a stationary magnetic field the total energy of a neutron is indeed a conserved quantity, since $\partial H/\partial t=0$. Thus, neither the momentum nor the potential (Zeeman magnetic energy) is a conserved quantity, due to the position dependence $\vec{B}=\vec{B}(\vec{x})$ and hence $[H,\hat{p}]\neq 0$, which is depicted below (left).
In a purely time dependent magnetic field the total energy of a neutron is not a conserved quantity: $\partial H/\partial t\neq 0$. Energy can be exchanged with the magnetic field via photon interaction. However, the momentum is conserved, $[H,\hat{p}]=0$, due to the fact that $\vec{B}=\vec{B}(t)$ is purely time dependent. Therefore the change in the total energy must originate from the potential energy $-\vec{\mu}\cdot\vec{B}(t)$. A diagram of the kinetic, potential and total energy is shown above (right) 1.
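A quick numerical check of the rotation operator $U(\alpha)=e^{-i\alpha\,\hat{n}\cdot\vec{\sigma}/2}$ and its closed form from above (not from the post; the axis and angle are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

alpha, n = np.pi / 3, np.array([0.0, 1.0, 0.0])   # 60 deg about the y axis
n_sigma = n[0] * sx + n[1] * sy + n[2] * sz

U_exp = expm(-1j * alpha / 2 * n_sigma)           # matrix exponential
U_closed = (np.cos(alpha / 2) * np.eye(2)
            - 1j * np.sin(alpha / 2) * n_sigma)   # closed-form expression
print(np.allclose(U_exp, U_closed))               # True: both agree
```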
An oscillating RF field combined with a static magnetic field – a configuration used in nuclear magnetic resonance (NMR) – is also capable of spin flipping. An oscillating RF field can be viewed as two counter-rotating fields. In the frame of one of the rotating components, the other is rotating at double frequency and can be neglected (rotating-wave approximation). The static field component of magnitude $B_0$ is fully suppressed in the case of frequency resonance, i.e. for the oscillation frequency $\omega_{\mathrm{rf}}=\omega_0=2\mu B_0/\hbar$. If, in addition, the amplitude-resonance condition $\tfrac{2\mu B_1}{\hbar}\tfrac{L}{v}=\pi$ – determining the amplitude $B_1$ of the rotating field – is fulfilled, a spin flip occurs. A consequence of the rotating-wave approximation is the so-called Bloch–Siegert shift, which gives rise to a correction term for the frequency resonance, now reading $\omega_{\mathrm{rf}}=\omega_0+\omega_1^{2}/(4\omega_0)$ with $\omega_1=2\mu B_1/\hbar$. The above-explained combination of static and time-dependent magnetic fields is exploited in Radio Frequency (RF) flippers, as depicted below (see here for detailed calculations).
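Plugging illustrative numbers into both resonance conditions (the static field, coil length and neutron velocity below are assumptions, not values from the post):

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
MU_N = 9.6623651e-27     # neutron magnetic moment magnitude, J/T

B0, L, v = 2e-3, 0.02, 2000   # static field (T), coil length (m), speed (m/s)

# Frequency resonance: the oscillation frequency equals the Larmor frequency in B0.
omega_rf = 2 * MU_N * B0 / HBAR
print(f"resonance frequency ~ {omega_rf / (2 * np.pi) / 1e3:.1f} kHz")

# Amplitude resonance: the rotating component B1 must turn the spin by pi
# during the transit time L/v, i.e. (2 mu B1 / hbar) * (L / v) = pi.
B1 = np.pi * HBAR * v / (2 * MU_N * L)
print(f"rotating-field amplitude B1 ~ {B1 * 1e3:.2f} mT")
```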
Next generation (3D-printed) Spin Rotators
A recent project originates in a cooperation with the group of Dieter Süss (Physics of Functional Materials) from the Faculty of Physics, University of Vienna, where they operate a Christian Doppler lab. This group demonstrated recently that an end-user 3D printer can be used to print polymer-bonded rare-earth magnets with a complex shape. A Fused Deposition Modeling (FDM) 3D printer with a maximum temperature of 260°C and a nozzle diameter of 0.4 mm is used to print magnetic structures with layer heights between 0.05 and 0.3 mm. The polymer-bonded magnetic compound consists of PA11 and magnetically isotropic NdFeB powder MQP-S-11-9. This source material is compounded and extruded into suitable filaments with the desired ratio of 85 wt.% MQP-S-11-9 powder. As an example of an optimized permanent magnetic system, a Larmor spin rotator for a polarized neutron interferometer setup was 3D printed.
3D printed magnets
Polarizer/Analyzer: Multi-Layer Supermirrors
For understanding the mode of operation of the applied neutron polariser we have to recap some basic concepts of neutron optics, such as the refraction index. The time dependent Schrödinger equation $i\hbar\,\partial_t\Psi=H\Psi$, with $H=-\tfrac{\hbar^2}{2m}\Delta+V(\vec r)$, yields for a medium of constant potential $V$ an index of refraction $n=k'/k=\sqrt{1-V/E}$, with $E$ being the kinetic energy of the incident neutron. From the strong (nuclear) interaction of the neutron we have the Fermi pseudopotential, denoted as $V_N(\vec r)=\tfrac{2\pi\hbar^2}{m}\,b_c\sum_i\delta(\vec r-\vec r_i)$, with the coherent scattering length $b_c$ and the atom number density $N$ ($\vec r_i$ denotes the position of each scattering center). From the magnetic interaction the contribution is given by $V_M=-\vec\mu\cdot\vec B$, with $\vec\mu=\mu\,\vec\sigma$, where $\mu$ is the magnetic moment of the neutron. So for materials containing Fe, Ni, or Co we have for the index of refraction $n_\pm=\sqrt{1-\tfrac{\lambda^2 N}{\pi}(b_c\pm b_m)}$, with magnetic scattering length $b_m$. $b_c$ can become complex, which accounts for absorption or incoherent scattering. In general we have $n<1$, so the potential is repulsive. For neutrons vacuum ($n=1$) is an optically denser medium compared to most elements! So neutrons are totally reflected if the angle of incidence is smaller than a critical angle $\theta_c$. With $n=\cos\theta_c$ and $\cos\theta_c\approx 1-\theta_c^2/2$ we get $\theta_c=\lambda\sqrt{N b_c/\pi}$, which is depicted on the left side. For example Ni has a critical angle of 0.1°/Å.

The polarizer and analyzer (spin filter) consist of a multilayer structure of two media $A$ and $B$ having different coherent scattering lengths $b_A\neq b_B$. For an incident angle $\theta$, at every single boundary layer there will occur a transmitted and a reflected sub beam. If the thickness of the layers is chosen in such a way that the partial waves of the reflected sub beams have an optical path difference of an integer multiple of the wavelength $\lambda$, constructive interference will be observed. If the thickness of the layers varies only slightly from layer to layer, there will be an appropriate “lattice constant” for a diversity of wavelengths. If alternating magnetic and non-magnetic media are utilized, not only the nuclear scattering length but also the magnetic scattering length has to be considered: $b_\pm=b_c\pm b_m$. This can be used for beam polarization, since the sign of the magnetic scattering length depends on the orientation of the spin towards the magnetization of the medium. If a combination is chosen such that the sum of the nuclear and magnetic scattering lengths for one spin component (for instance $b_c+b_m$ for spin up) equals the scattering length of the non-magnetic substance, then this spin component will not be reflected, since there is no difference in the refractive index of the two layers for this spin component. However, the other spin component (spin down) will be (partly) reflected. The transmitted spin component (spin up) is absorbed after the last layer. An arrangement as discussed here is referred to as a supermirror, often used as polarizer or analyzer 2,3. A multilayer composed of layers whose thicknesses are varied gradually layer by layer reflects neutrons within a wide wavelength range. The thicknesses of the layers thereby vary between 5 and 35 nm and the number of layers can be several thousand. Supermirrors are usually characterised by their critical angle, quoted as a multiple $m$ of the critical angle of natural Ni, which can go up to about $m=7$.
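A numerical sketch of the total-reflection formula $\theta_c=\lambda\sqrt{N b_c/\pi}$, using standard tabulated values for natural Ni that I am supplying (they are not given in the post):

```python
import numpy as np

# Natural Ni (assumed standard values): coherent scattering length, number density
b_c = 10.3e-15            # m
N = 9.1e28                # atoms / m^3

def critical_angle_deg(wavelength_m):
    """Total-reflection critical angle: theta_c = lambda * sqrt(N * b_c / pi)."""
    return np.degrees(wavelength_m * np.sqrt(N * b_c / np.pi))

print(critical_angle_deg(1e-10))   # ~0.1 deg per angstrom, as quoted for Ni
print(critical_angle_deg(4e-10))   # scales linearly with wavelength: ~0.4 deg
```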
1. F. Mezei, Physica B (Amsterdam) 151, 74 (1988).
2. F. Mezei, Commun. Phys. 1, 81 (1976).
3 – F. Mezei and P. A. Dagleish, Commun. Phys. 2, 41 (1977). |
944be9725960ac0f | Download Power of one qumode for quantum computation Please share
yes no Was this document useful for you?
Thank you for your participation!
Power of one qumode for quantum computation
The MIT Faculty has made this article openly available. Please share
how this access benefits you. Your story matters.
Liu, Nana, Jayne Thompson, Christian Weedbrook, Seth Lloyd,
Vlatko Vedral, Mile Gu, and Kavan Modi. “Power of One Qumode
for Quantum Computation.” Physical Review A 93, no. 5 (May 3,
2016). © 2016 American Physical Society
As Published
American Physical Society
Final published version
Thu May 26 13:05:58 EDT 2016
Citable Link
Terms of Use
Article is made available in accordance with the publisher's policy
publisher's site for terms of use.
Detailed Terms
PHYSICAL REVIEW A 93, 052304 (2016)
Power of one qumode for quantum computation
Nana Liu,1,* Jayne Thompson,2 Christian Weedbrook,3 Seth Lloyd,4 Vlatko Vedral,1,2,5,6 Mile Gu,2,7,8,† and Kavan Modi9,‡
1 Clarendon Laboratory, Department of Physics, University of Oxford, Oxford OX1 3PU, United Kingdom
2 Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543, Singapore
3 CipherQ Corporation, Toronto, Ontario, Canada M5B 2G9
4 Department of Mechanical Engineering and Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
5 Department of Physics, National University of Singapore, 3 Science Drive 2, Singapore 117543, Singapore
6 Center for Quantum Information, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, China
7 School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore 639673, Singapore
8 Complexity Institute, Nanyang Technological University, Singapore 639673, Singapore
9 School of Physics and Astronomy, Monash University, Victoria 3800, Australia
(Received 29 October 2015; revised manuscript received 11 March 2016; published 3 May 2016)
Although quantum computers are capable of solving problems like factoring exponentially faster than the best-known classical algorithms, determining the resources responsible for their computational power remains unclear. An important class of problems where quantum computers possess an advantage is phase estimation, which includes applications like factoring. We introduce a computational model based on a single squeezed state resource that can perform phase estimation, which we call the power of one qumode. This model is inspired by an interesting computational model known as deterministic quantum computing with one quantum bit (DQC1). Using the power of one qumode, we identify that the amount of squeezing is sufficient to quantify the resource requirements of different computational problems based on phase estimation. In particular, we can use the amount of squeezing to quantitatively relate the resource requirements of DQC1 and factoring. Furthermore, we can connect the squeezing to other known resources like precision, energy, qudit dimensionality, and qubit number. We show the circumstances under which they can likewise be considered good resources.
DOI: 10.1103/PhysRevA.93.052304
Quantum computing is a rapidly growing discipline that has attracted significant attention due to the discovery of quantum algorithms that are exponentially faster than the best-known classical ones [1–4]. One of the most notable examples is Shor's factoring algorithm [2], which has been a strong driver for the quantum computing revolution. However, the essential resources that empower quantum computation remain elusive. Knowing what these resources are will have both great theoretical and great practical consequences. This knowledge will motivate designs that take optimal advantage of such resources. In addition, it may further illuminate the quantum-classical boundary.
In pure-state quantum computation, it is known that entanglement is a necessary resource to achieve a computational speed-up [5]. This is no longer true for mixed-state quantum computation and it is unclear if a single entity can quantify the computational resource in these models. Or if multiple resources appear as candidates, it has not been made explicit what the relationship is between these different resources. One notable example is the deterministic quantum computation with one quantum bit (DQC1) model [6]. This model contains little entanglement and purity [7,8]. Yet it can solve certain computational problems exponentially faster than the best-known classical algorithms by using a highly mixed target state and a single pure control qubit. However, it is unclear how to compare the resources needed for DQC1 and factoring on an equal footing since there is currently no example of both of these two problems solved using the same model. Although suggestions have been made that factoring requires more resources than DQC1 [9], a direct quantitative relation between the two is still lacking.
To address this challenge, in this paper we propose a
continuous-variable (CV) extension of DQC1 by replacing
the pure qubit with a CV mode, or qumode. We call this
model the power of one qumode. We demonstrate that our
model is capable of reproducing DQC1 and factoring in
polynomial time. This enables us to identify a CV resource
in our model, called squeezing, to compare factoring and
DQC1 on the same level. Squeezed states are also useful
resources in other contexts, like gaining a quantum advantage
in metrology [10–12] and in CV quantum computation [13,14].
The term “squeezing” could refer to either the squeezing
parameter r or the squeezing factor s0 = exp(r). For quantifying resources in the context of computational complexity, it is
important to make a distinction between these two definitions
since they are exponentially separated. We justify our use of
the squeezing factor over the squeezing parameter by showing
how it can be interpreted as inverse precision, which is a known
resource in computational complexity [15].
By inputting the squeezed state as the pure qumode, we can
solve both the hardest problem in DQC1 and the phase estimation problem. We can relate the squeezing factor to the degree
of precision in phase estimation and the total computation time.
As an application, we can show that there exists an algorithm
using our model that can factor an integer efficiently in time, while requiring a squeezing factor that grows exponentially with
the number of bits to encode this integer. Another algorithm
in our model can recover DQC1 with no squeezing.
A further way of interpreting the squeezing factor is through
the dimensionality of a qudit that can be encoded in the
squeezed state, which we later examine. In some cases, the
squeezing factor can also be considered as an energy resource,
while the squeezing parameter can be interpreted in terms of
the number of qubits. We discuss all these connections more
precisely later in this paper.
Before moving on, let us remark that our architecture is an
example of a hybrid computer: It jointly uses both discrete and
CV systems. A similar hybrid model using a pure target state
was given by Lloyd [16] to find eigenvectors and eigenvalues.
Hybrid models for computing are interesting in their own right
for providing an alternative avenue to quantum computing that
bypasses some of the key obstacles to fully CV computation
using linear optics or fully discrete-variable models [16,17].
This creates an important best-of-both-worlds approach to
quantum computing.
The most difficult DQC1 problem, called DQC1-complete,
is estimating the normalized trace of a unitary matrix [18,19].
This problem turns out to be important for a diverse set of
applications [18,20,21], such as estimating the Jones polynomial. Computing the normalized trace of a unitary begins with a pure control qubit in the state $|+\rangle = (|0\rangle + |1\rangle)/\sqrt{2}$ and a target register made up of $n$ qubits that are in a fully mixed state $\mathbb{1}/2^n$. Next the control and target registers interact via a controlled-unitary operation, represented by $\Lambda_U = |0\rangle\langle 0| \otimes \mathbb{1} + |1\rangle\langle 1| \otimes U$, where $U$ acts on the qubits in the target register. The control qubit measurement statistics yield the normalized trace of $U$, i.e., $\langle\sigma_x\rangle + i\langle\sigma_y\rangle = \mathrm{Tr}(U)/2^n$. The circuit for DQC1 is shown in Fig. 1. To estimate the normalized trace to within error $\delta$, that is, $\mathrm{Tr}(U)/2^n \pm \delta$, we need to run the computation $T_{\mathrm{DQC1}} \sim 1/[\min\{\mathrm{Re}(\delta),\mathrm{Im}(\delta)\}]^2$ times [22].
Since δ is independent of the size of U , this computation is
efficient and DQC1 has an exponential advantage over the
best-known classical algorithms [23].
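To make the trace-estimation statistics concrete, the following minimal sketch simulates the DQC1 measurement record for a Haar-random unitary; it is our own illustration (the matrix size, seed, and QR-based sampling are assumptions, not details from the paper).

```python
import numpy as np

rng = np.random.default_rng(7)
n = 6                                  # assumed number of target-register qubits
N = 2 ** n

# Haar-random N x N unitary via QR decomposition of a complex Gaussian matrix
Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
Q, R = np.linalg.qr(Z)
U = Q * (np.diag(R) / np.abs(np.diag(R)))  # rescale column phases for Haar measure

target = np.trace(U) / N               # ideal <sigma_x> + i<sigma_y>

# Each run yields a +/-1 spin outcome with P(+1) = (1 + <sigma>)/2;
# estimating to error delta takes T ~ 1/delta^2 runs.
delta = 0.05
T = int(1 / delta ** 2)
sx = 2 * rng.binomial(1, (1 + target.real) / 2, size=T) - 1
sy = 2 * rng.binomial(1, (1 + target.imag) / 2, size=T) - 1
print(f"Tr(U)/2^n = {target:.4f}, estimate after {T} runs = {sx.mean() + 1j * sy.mean():.4f}")
```

Note that $T$ depends only on $\delta$ and not on $N$, which is the content of the efficiency claim above.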
FIG. 1. DQC1 circuit. The control state is $|+\rangle$ and the target state is $n = \log_2 N$ qubits in a maximally mixed state. Here $U$ is an $N \times N$ matrix and one can measure the final average spin of the control state to recover the normalized trace of $U$.

FIG. 2. Power of one qumode circuit. We can have a squeezed state $|\psi_0\rangle$ as the control state. The target state consists of $n = \log_2 N$ qubits in a maximally mixed state as in DQC1. Here $U_x \equiv \exp(ixH\tau/x_0)$, where $x_0$ is a constant and $\tau$ is the gate running time. Its relationship to the unitary in DQC1 is $U_x = U^{x\tau/x_0}$. We make final measurements of the control state in the momentum basis. The momentum measurements in this model can be used to recover the normalized trace of an $N \times N$ matrix $U$ and also to factor the integer $N$.

In this paper we extend DQC1 by replacing the pure control qubit with a pure CV state (qumode), while keeping the target register the same. The total input state in our model is thus a hybrid state of discrete variables and a CV. See
Fig. 2 for the circuit diagram of our model. We first show
how our model can perform the quantum phase estimation
algorithm [24]. We use this to efficiently compute (in time)
a DQC1-complete problem, thus showing that this model
contains DQC1. Next, we show that our model can perform
Shor’s factoring algorithm, which is based on the phase
estimation algorithm.
The aim in the phase estimation problem is to find the eigenvalues of a Hamiltonian, $H|u_j\rangle = \phi_j |u_j\rangle$. The complete set of eigenvalues of $H$ is given by $\{\phi_j\}$. We encode the Hamiltonian $H$ into a unitary transformation, $C_U$, that acts on the hybrid input state. We call $C_U$ the hybrid control gate; it is defined as $C_U = \exp(i\hat{x} \otimes H\tau/x_0)$, where the position operator $\hat{x}$ acts on the qumode [25] and $\tau$ is the running time of the hybrid gate. Here $x_0 \equiv 1/\sqrt{m\omega}$, where $m, \omega$ are the mass and frequency of the harmonic oscillator corresponding to the qumode [26]. Like the control gate $\Lambda_U$ in DQC1, the hybrid control gate can also be decomposed into elementary operations (see Appendix A). If the qumode is in a position eigenstate $|x\rangle$ and $|u_j\rangle$ is a state of target register qubits, the action of the hybrid control gate is

$C_U\, |x\rangle \otimes |u_j\rangle = |x\rangle \otimes U_x |u_j\rangle = |x\rangle \otimes e^{i\phi_j x\tau/x_0} |u_j\rangle$,  (1)

where $x$ is the eigenvalue of $\hat{x}$ and $U_x \equiv \exp(ixH\tau/x_0)$. In our model, we apply $C_U$ to a maximally mixed state of $n$ qubits and a qumode state $|\psi_0\rangle = \int G(x)\, |x\rangle\, dx$; $G(x)$ is the wave function of the initial qumode in the position basis. After implementing this gate, the target register is discarded, and the qumode is in the state

$\rho_f = \frac{1}{2^n} \iint G(x)\, G^*(x')\, \mathrm{Tr}[e^{i(x-x')H\tau/x_0}]\, |x\rangle\langle x'|\, dx\, dx'$.  (2)
Next, we measure this state in the basis of the momentum operator $\hat{p}$ [27], i.e., $\langle p|\rho_f|p\rangle$. This measurement yields the momentum probability distribution

$P(p) = \frac{1}{2^n} \sum_m \iint G(x)\, G^*(x')\, e^{i(x-x')\phi_m \tau/x_0}\, \langle p|x\rangle \langle x'|p\rangle\, dx\, dx' = \frac{1}{2^n} \sum_m \tilde{G}(\phi_m \tau/x_0 - p)\, \tilde{G}^*(p - \phi_m \tau/x_0)$,  (3)

where we used $\langle p|x\rangle = (1/\sqrt{2\pi}) \exp(-ixp)$ and the Fourier transform of $G(x)$ is denoted by $\tilde{G}(p) = (1/\sqrt{2\pi}) \int_{-\infty}^{\infty} \exp(ixp)\, G(x)\, dx$.
If we choose our wave function G(x) carefully, we can
employ our model to recover the eigenvalues of H . Suppose
we initialize the control mode in a coherent state $|\alpha\rangle$, chosen
for its experimental accessibility [28]. If we measure the
probability distribution of pE ≡ px0 /τ , where x0 and τ are
known inputs and pE has dimensions of energy, we find (see
Appendix B for a derivation)

$P(p_E) = \frac{\tau}{\sqrt{\pi}\,2^n} \sum_m e^{-\tau^2 \{p_E - [\phi_m + \mathrm{Im}(\alpha)/\tau]\}^2}$,  (4)

where $\mathrm{Im}(\alpha)$ is the imaginary component of $\alpha$ [29]. We can see that the probability distribution is a sum of Gaussian distributions. It has individual peaks centered at each shifted eigenvalue $\phi_j + \mathrm{Im}(\alpha)/\tau$, with an individual spread given by the inverse of $\tau$. By sampling this probability distribution we can infer the position of the peaks to any finite precision. Thus, it is possible to perform phase estimation to arbitrary accuracy just by increasing $\tau$ alone. However, to estimate eigenvalues to a precision better than polynomial in $n = \log_2 N$, we require $\tau$ greater than polynomial in $n = \log_2 N$. Thus, the coherent state no longer suffices for Shor's factoring algorithm, which requires high-precision phase estimation. In such cases, we require a further resource that we identify to be squeezing.
A finitely squeezed state is defined by $G(x) = [1/(\sqrt{s}\,\pi^{1/4})]\exp[-x^2/(2s^2)]$, where $s \equiv s_0 x_0$ and $s_0$ parametrizes the amount of squeezing in the momentum direction [30]. We call $s_0$ the squeezing factor. The momentum wave function has a Gaussian profile whose standard deviation scales as $1/s_0$. By inputting a squeezed state into our model, the probability distribution in $p_E$ becomes

$P(p_E) = \frac{s_0 \tau}{\sqrt{\pi}\,2^n} \sum_m e^{-(s_0 \tau)^2 (p_E - \phi_m)^2}$.  (5)
Comparing this to Eq. (4) we see that the coherent state plays
the same role as an unsqueezed state (i.e., s0 = 1). The method
for retrieving the eigenvalues is now identical to that of the
coherent state, except now we can take advantage of a large
squeezing factor instead of nonpolynomial gate running time.
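To see how the squeezing factor sharpens the spectrum in practice, the sketch below samples $p_E$ from the Gaussian mixture of Eq. (5) for a toy eigenvalue set and reads the eigenvalues off a histogram. The eigenvalues, squeezing factor, and sample count are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
phis = np.array([0.8, 1.6, 2.9])   # assumed toy eigenvalues phi_m
s0, tau = 20.0, 1.0                # assumed squeezing factor and gate time

# Eq. (5): p_E is a uniform mixture of Gaussians centered on the phi_m,
# each with standard deviation 1/(sqrt(2) * s0 * tau).
centers = rng.choice(phis, size=20000)
p_E = rng.normal(loc=centers, scale=1 / (np.sqrt(2) * s0 * tau))

# Estimate eigenvalues from prominent local maxima of the histogram
hist, edges = np.histogram(p_E, bins=400)
mids = 0.5 * (edges[:-1] + edges[1:])
is_peak = (hist > np.roll(hist, 1)) & (hist > np.roll(hist, -1)) & (hist > 0.5 * hist.max())
print("estimated eigenvalues:", np.round(mids[is_peak], 2))
```

Rerunning with a larger $s_0$ narrows the peaks, which is exactly the trade against gate running time discussed next.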
We can see the relationship between the squeezing factor and gate running time more explicitly. Let $T_{\rm bound}$ be the upper bound to the total number of momentum measurements we are willing to make for phase estimation. If we need to recover any eigenvalue of the Hamiltonian to accuracy $\Delta E$, the following time-energy condition is satisfied (see Appendix C for a derivation),

$T_{\rm bound}\, \tau s_0\, \Delta E \gtrsim 1$,  (6)

where $\Delta E$ can be a function of the size of the Hamiltonian. In an efficient protocol the maximum total gate running time $T_{\rm bound}\,\tau$ is bounded by a polynomial in $n$. When the inverse of $\Delta E$ is also a polynomial in $n$, efficient phase estimation is still possible for a squeezing factor polynomial in $n$. For example, this is useful for the verification of problems in the quantum-Merlin-Arthur (QMA) complexity class, which includes the local Hamiltonian problem [31]. For an exponentially greater precision in phase estimation, however, an exponentially higher squeezing factor is needed. We see from Eq. (6) that the squeezing factor serves as a rescaling of the energy "uncertainty" $\Delta E$. Similarly to phase estimation, increased squeezing can also retrieve the corresponding eigenvectors to greater precision [32].

We can see the precise relationship between the squeezing factor and the inverse precision from Eq. (6) by considering when the maximum total gate running time resource is constrained. When the time resource is constant, the minimum squeezing factor required for efficient phase estimation is the inverse precision, i.e., $s_0 \sim 1/\Delta E$.

This relationship can be seen more intuitively by considering a problem whose solution is given by the central position $x_0$ of a squeezed state with squeezing factor $s_0$ (here $x_0$ denotes the center of the state, not the oscillator length scale). From the central limit theorem, it requires $t \sim 1/(s_0^2 \eta^2)$ measurements of the position $x$ to get within precision $\eta = |x - x_0|$ of the center. Thus, for a fixed number of measurements (or time), the squeezing factor scales as the inverse of precision, $s_0 \sim 1/\eta$.

Another way we can see $s_0$ as the inverse precision is to consider when we are trying to resolve the distance $\Delta\phi$ between two adjacent Gaussian peaks. We see later that factoring in our model is essentially this problem with $\Delta\phi \sim 1/N = 1/2^n$, where $N$ is the number to be factored. Each Gaussian has standard deviation $\propto 1/s_0$. If the distance between these peaks is closer than this length scale, it becomes difficult to resolve the two peaks. Thus, $1/s_0$ is the maximum resolution for $\Delta\phi$, which is another precision scale. This fact is used when we later examine the qubit and qudit encoding in our model.

We begin with an observation that the average of $\exp(ip_E)$ can reproduce the normalized trace of $U \equiv \exp(iH)$ in the following way,

$\int e^{ip_E}\, P(p_E)\, dp_E = e^{-\frac{1}{4s_0^2}}\, \frac{\mathrm{Tr}(U_\tau)}{2^n}$,  (7)

where $P(p_E)$ is given by Eq. (5) and $U_\tau \equiv \exp(iH\tau)$. For an $N \times N$ matrix $U_\tau$, we use $n = \log_2 N$. If we wish to recover the normalized trace of $U$ to within an error $\delta$ [i.e., $\mathrm{Tr}(U)/2^n \pm \delta$], we require $\tau = 1$ and $T_{\rm DQC1}$ measurements of momentum [34] in our model. This is equivalent to running our hybrid gate once per momentum measurement and then averaging the corresponding values $\{\exp(ip_E)\}$.

This computation of the normalized trace is as efficient as DQC1 if $T_{\rm DQC1}$ is independent of $N = 2^n$. By employing the central limit theorem we find (see Appendix E for a derivation)

$T_{\rm DQC1} \leq \frac{e^{\frac{1}{2s_0^2}}\, F(s_0)}{[\min\{\mathrm{Re}(\delta), \mathrm{Im}(\delta)\}]^2}$,  (8)

where $F(s_0) = \sinh[1/(2s_0^2)] + \exp[-1/(2s_0^2)]$ and $F(s_0) \to 1$ very quickly with increasing $s_0$ [35]. Equation (8) shows that $T_{\rm DQC1}$ is upper bounded by a quantity dependent only on the squeezing and not on the size of the matrix. In fact, even when $s_0 = 1$ (equivalent to a coherent state input) our qumode model is sufficient to efficiently compute (in time) the normalized trace of $U$, thus reproducing DQC1. This can also be viewed as a consequence of $\Delta E$ being independent of $N = 2^n$ in Eq. (6).
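The damping factor in Eq. (7) can be checked numerically: average $e^{ip_E}$ over samples drawn from Eq. (5) with $\tau = 1$ and undo the known attenuation $e^{-1/(4s_0^2)}$. This is a minimal sketch with an assumed eigenvalue set, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)
phis = rng.uniform(0, 2 * np.pi, size=2 ** 6)   # assumed eigenvalues of H
s0, tau = 2.0, 1.0

# Sample p_E from Eq. (5); Eq. (7) says E[exp(i p_E)] = exp(-1/(4 s0^2)) Tr(U)/2^n
centers = rng.choice(phis, size=200000)
p_E = rng.normal(centers, 1 / (np.sqrt(2) * s0 * tau))
estimate = np.exp(1 / (4 * s0 ** 2)) * np.mean(np.exp(1j * p_E))

exact = np.mean(np.exp(1j * phis))              # Tr(U)/2^n for U = exp(iH)
print(f"exact {exact:.4f}  vs  qumode estimate {estimate:.4f}")
```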
Factoring is the problem of finding a nontrivial multiplicative factor of an integer N . The classically hard part can be
reduced to a phase estimation problem, where the quantum
advantage in phase estimation can be exploited. We show how
the corresponding phase estimation problem can be solved in
our model and how much squeezing resource is required.
Factoring can be reduced to phase estimation in the
following way. There is a known classically efficient algorithm
that can find a nontrivial factor of N once it is given a random
integer q in the range 1 < q < N [2]. However, this algorithm
relies on prior knowledge of the order $r$ of $q$, where $r$ is an integer $r \leq N$ satisfying $q^r \equiv 1 \bmod N$. Thus, the main
difficulty lies in finding this order r, which is believed to
be a classically hard problem. It turns out that this order
can be encoded into the eigenvalues of a suitably chosen
Hamiltonian Hq .
Here we begin with a squeezed control state and a target
state of n = log2 N qubits in a maximally mixed state. Let
our hybrid control gate be $C_{U_q} = \exp(i\hat{x} \otimes H_q \tau/x_0)$. Next we choose a suitable Hamiltonian $H_q$ whose eigenvalues contain the order $r$. We define a unitary $\exp(iH_q)$ which acts on a qubit state $|l \bmod N\rangle$ like $\exp(iH_q)|l \bmod N\rangle = |lq \bmod N\rangle$, where $l$ is an integer $0 \leq l < N$. When $l = q^k$ for an integer $k \leq r$, $\exp(iH_q r)|q^k \bmod N\rangle = |q^k q^r \bmod N\rangle = |q^k \bmod N\rangle$. Here the eigenvalues of $H_q$ are $2\pi m/r$, where $m$ is an integer $1 \leq m < r$. However, for qubits in a mixed state we have $l \neq q^k$ in general. In these cases, we define a more general "order" $r_d$, where $\exp(iH_q r_d)|l_d \bmod N\rangle = |l_d q^{r_d} \bmod N\rangle = |l_d \bmod N\rangle$. Here $r_d$ is an integer $r_d \leq r$ that divides $r$ [9] and satisfies $l_d q^{r_d} \bmod N = l_d \bmod N$. The integer $d$ labels the set of states $\{|l_d q^h \bmod N\rangle\}$, where $h \leq r_d$ is an integer. Thus, for general $l_d$, the eigenvalues of $H_q$ can be written as $2\pi m_d/r_d$, where $m_d$ is an integer $1 \leq m_d < r_d$.
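The way the order enters the spectrum can be checked with a few lines of classical arithmetic: for small $N$ one can find the order $r$ of $q$ by brute force and list the eigenvalue grid $2\pi m/r$. The values $N = 21$ and $q = 2$ below are assumed for illustration; this is a classical check, not the quantum algorithm.

```python
from math import gcd, pi

def order(q: int, N: int) -> int:
    """Smallest r >= 1 with q**r = 1 (mod N); assumes gcd(q, N) == 1."""
    r, acc = 1, q % N
    while acc != 1:
        acc = (acc * q) % N
        r += 1
    return r

N, q = 21, 2
assert gcd(q, N) == 1
r = order(q, N)                       # here r = 6
print("eigenvalue grid 2*pi*m/r:", [round(2 * pi * m / r, 3) for m in range(1, r)])
```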
These eigenvalues do not give r directly. However, we can
always rewrite md /rd in the form m/r since rd is a factor of r.
In general, there will be a single fraction m/r corresponding
to many possible md and rd . If we call this multiplicity cm
for a given $m/r$, then following Eq. (5) we can write the $p_E$ probability distribution as measured by the final control state

$P(p_E) = \frac{s_0 \tau}{\sqrt{\pi}\,2^n} \sum_d \sum_{m_d=0}^{r_d - 1} e^{-(2\pi s_0 \tau)^2 \left(\frac{p_E}{2\pi} - \frac{m_d}{r_d}\right)^2} = \frac{s_0 \tau}{\sqrt{\pi}\,2^n} \sum_m c_m\, e^{-(2\pi s_0 \tau)^2 \left(\frac{p_E}{2\pi} - \frac{m}{r}\right)^2}$.  (9)
This probability distribution is a sum of Gaussian functions
with amplitudes cm and centered on m/r. To recover the order r
from the above probability distribution, it is sufficient to satisfy
two conditions. The first condition is to be able to recover the
fractions m/r to within the interval [m/r − 1/(2N 2 ),m/r +
1/(2N 2 )] [36]. Thus, the larger the number we wish to factor,
the more squeezing we need to improve the precision of the
phase estimation. The second requirement is for m and r to
be coprime, which enables us to find $r$. This requirement is satisfied with probability at least of order $1/\ln[\ln(N)]$.
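Condition (i) is what the classical continued-fractions step provides: given an estimate of $m/r$ accurate to within $1/(2N^2)$, the fraction with the smallest denominator below $N$ in that window is $m/r$ itself. Below is a minimal sketch using Python's standard-library Fraction, whose limit_denominator method implements the continued-fraction expansion (the toy $N$, $r$, $m$, and noise level are assumptions):

```python
from fractions import Fraction

N = 21                      # assumed number to factor
r, m = 6, 5                 # assumed true order and numerator, gcd(m, r) = 1
noise = 1 / (4 * N ** 2)    # within the 1/(2 N^2) tolerance

measured = m / r + noise    # noisy peak position read off P(p_E)/(2*pi)
recovered = Fraction(measured).limit_denominator(N - 1)
print(recovered, "-> candidate order r =", recovered.denominator)
```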
Subject to the above two conditions, we can compute the probability that a correct $r$ is found using the momentum probability distribution in our model. We derive in Appendix F the number of runs $T_{\rm factor} < O\{\ln[\ln(N)]\}/\mathrm{erf}(\pi s_0 \tau/N^2)$ needed to factor $N$, which is inversely related to the probability of finding a correct $r$. In the large-$N$ limit, to achieve the same efficiency as Shor's algorithm using qubits, which is $T_{\rm factor} \sim O\{\ln[\ln(N)]\} = O\{\ln[\ln(N)]\}\, T_{\rm bound}$ [37], it is thus sufficient to choose

$s_0 \tau \sim 2^{2n}$.  (10)

This can also be derived from Eq. (6) using $\Delta E = 2\pi/(2N^2)$, where $T_{\rm bound} \sim 1$. If we let $s_0 = 1$ for the coherent state, this requires the total computing time to scale exponentially with the size of the problem (i.e., in $\log_2 N$). Thus, to ensure polynomial total computing time, we can choose instead $\tau \sim 1$ and $s_0 \sim 2^{2n}$.
We saw that the squeezing factor can be interpreted as an inverse precision since the two quantities are polynomially related. There are also other quantities polynomially related to the squeezing factor, like energy and the dimensionality of the qudit that can be encoded in our squeezed state. We discuss their relationship to the squeezing factor and in what ways they can and cannot also be considered resources.
Energy may be considered a resource if it is required in the
initial preparation of the necessary input states. In a quantum
optical setting, for example, energy is required for preparing a
squeezed state resource. The minimum energy $E_{\rm min}$ required is that needed to create the number of particle excitations $n_p$ corresponding to a certain amount of squeezing, since $E_{\rm min} \propto n_p$. The number of particle excitations is itself regularly considered as the primary resource in the context of quantum metrology. For our squeezed state $n_p = \sinh^2[\ln(s_0)]$, where for a large squeezing factor $n_p \propto s_0^2$. Thus, energy and the squeezing factor are polynomially related.
This interpretation of the squeezing factor as an energy can
help us understand why s0 of the order O[exp(n)] is necessary
for factoring in our algorithm. We can consider performing
factoring in our model as swapping m = log2 N pure control
qubits in the qubit factoring protocol with a single qumode. A
simple example to illustrate this phenomenon is to consider a simple computation $|0\rangle^{\otimes\mu} \to |1\rangle^{\otimes\mu}$. Suppose the computation is performed using $\mu$ qubits encoded in $\mu$ two-level atoms. Let the energy gap between the ground state ($|0\rangle$) and the first excited state ($|1\rangle$) be $\Delta E$. Then a total energy of $\mu\,\Delta E$ is required for the computation. If we use a single CV mode instead, for instance, a harmonic oscillator with $2^\mu$ energy levels, the total energy required to perform this computation is $2^\mu\,\Delta E$, which has exponential scaling in $\mu$. This is very similar to the exponential scaling in $\log_2 N$ observed in our model.
However, there are also two reasons why it is not ideal to
consider energy as a resource. First, having no energy does not
guarantee that the computational power of a high squeezing
factor cannot be achieved. An example is spin-squeezing in the
case of energy-degenerate spin states. Second, having large
amounts of available energy also does not guarantee more
efficient computation. If we instead use a coherent state with
high coherence α and hence large energy (since np = |α|2 ),
we still cannot factor in polynomial time.
The GKP (Gottesman-Kitaev-Preskill) encoding [38] allows one to encode a qudit, or a discrete-variable quantum state with $D$ dimensions [39], into a CV mode. We use this encoding scheme as an illustration. It works for CV states whose probability distribution (in momentum, for example) can be described as a sum of Gaussian functions, each with standard deviation $w$ and with neighboring centers separated by a distance $\Delta\phi$. Since the precision associated with each peak is of order $w$, we can fit a total of $\Delta\phi/w$ distinguishable copies of this distribution, where each copy is displaced from its neighbor by $w$ along the momentum axis. If we represent each degree of freedom by one such distribution, then there are $D = \Delta\phi/w$ degrees of freedom available to this CV state just by displacement in momentum. These $D$ degrees of freedom can be mapped onto a qudit of dimensionality $D$.

Given an encoding like GKP, we can write $D \sim s_0\,\Delta\phi$ since in our case $w = 1/s_0$. Thus, here $s_0$ is interpreted as the inverse precision $1/w$. Since $\Delta\phi$ is the distance between adjacent Gaussian peaks in our probability distribution $P(p_E)$, to accomplish factoring we require $s_0 = 2^{2n} = N^2$ and $\Delta\phi = 1/N$, so $D = N$. For DQC1, $s_0 = 1$ and $D = 2$ (since we only need a single qubit). Thus, $D$ and $s_0$ are also polynomially related.
A qudit of dimension D is equivalent to m = log2 D
pure qubits, where D is polynomially related to s0 . Thus,
for factoring, the required number of control qubits in our
algorithm scales as m ∼ O[poly(n)] compared to m = 1 for
DQC1, where n = log2 N is the number of target register
qubits. Here we see that the number of qubits for the
two problems are not exponentially separated. There is an
important result of Shor and Jordan [18], which compares the
computational power of DQC1 with an n-qubit target register
and a model that is an m-control qubit extension of DQC1.
Their result claims that if m is logarithmically related to n,
then this model still has the same computational power as
DQC1. On the other hand, if m is polynomially related to
n, then this model is computationally harder than DQC1. If
we use $n = \log_2 N$, then the Shor and Jordan result makes clear that the numbers of pure control qubits $m$ in these two different models are not separated exponentially, even though one model has higher computational power. However, like the time resource in these two models, the values $D = 2^m$ in the two models are exponentially separated, which suggests that $D$
models are exponentially separated, which suggests that D
may be preferred over m, in the context of these particular
algorithms, as a good quantifier for a computational resource.
That the required number of control qubits scales as m ∼
O[poly(n)] is not too surprising since we observe a similarity
between our model and standard phase estimation. Our model
has more in common with standard phase estimation than
DQC1, even though it is a hybrid extension of DQC1. We can
see that by taking the average of momentum measurements
in our model, we obtain the average of the eigenvalues of
the Hamiltonian. The momentum average, however, does not
give the normalized trace of the unitary matrix U as may be
expected from DQC1. This can be understood by taking a
discretized version of our model, where one uses instead $|x\rangle$ for $x = 0,1,2,\ldots,N$. Then the circuit reduces to the standard phase estimation circuit, which requires the $m = \log_2 N$ pure control qubits that we traded for a single qumode. From this, we can also see that our model using an infinite squeezing factor is an analog of standard phase estimation using an infinite number of qubits, which in both models allows us to attain infinite precision in phase estimation.
We add that this comparison with standard phase estimation
further strengthens our claim that $s_0 \sim 2^{2n} = N^2$ is sufficient and maybe even necessary for factoring the number $N$. Suppose instead that only an exponentially smaller squeezing factor were needed for factoring by some new algorithm. This would imply that a new algorithm performed on the qubit phase estimation circuit (i.e., the qubit analog of our algorithm) exists that can solve factoring with exponentially fewer control qubits compared to the currently known qubit phase estimation algorithm.
While qumodes like squeezed states can be used as a way
of encoding qudits and qubits [38,40,41], the squeezing factor
is still a resource that should be considered in its own right. Its
emphasis over qudits is important for practical considerations.
The practical advantages of considering qumode resources,
in general, are that CVs typically use affordable off-the-shelf
components and widely leveraged quantum optics techniques.
They also have higher detection efficiencies at room temperature and can be fully integrated into current fiber-optics
networks [42,43].
A computation is a physical process and the amount of available physical resources can limit the power of a computation.
In the power of one qumode model, we demonstrate how the
squeezing factor can be viewed as a resource to quantitatively
compare the difficulty of phase estimation problems like
factoring and the hardest problem in the DQC1 computational
class. Our model thus provides a unifying framework in which
to compare the resources required for both DQC1 and factoring
as well as other problems based on phase estimation. In
addition, we also explore the trade-off relations between the
squeezing factor, the running time of the computation, and the
interaction strength in our model.
The physical resources commonly discussed as computational resources are time, space, and inverse precision. The
definitions of computational complexity classes are also based
on these [15,44,45]. We identify that squeezing can also be
interpreted in terms of one of these resources: inverse precision. Furthermore, we can relate the squeezing factor to energy
and qudit dimensionality. This highlights very explicitly the
different ways one can quantify computational power.
N.L. would like to thank G. Adesso, R. Alexander, A.
Ferraro, A. Garner, P. Humphreys, J. Ma, N. Menicucci,
M. Paternostro, F. Pollock, M. Vidrighin, and B. Yadin for
useful discussions. The authors also thank A. Furusawa. N.L.
acknowledges support by the Clarendon Fund and Merton College of the University of Oxford and the hospitality of Monash
University while part of this work was written. This work is
also supported by the National Research Foundation (NRF),
NRF-Fellowship (Reference No. NRF-NRFF2016-02); the
Ministry of Education in Singapore Grant and the Academic
Research Fund Tier 3 MOE2012-T3-1-009; the John Templeton Foundation Grant 53914 “Occam’s Quantum Mechanical
Razor: Can Quantum theory admit the Simplest Understanding
of Reality?”; the National Basic Research Program of China
Grants No. 2011CBA00300 and No. 2011CBA00302; the
National Natural Science Foundation of China Grants No.
11450110058, No. 61033001, and No. 61361136003; the
EPSRC (UK); the Leverhulme Trust and the Oxford Martin
School; the National Research Foundation, Prime Ministers
Office, Singapore under its Competitive Research Programme
(CRP Award No. NRF-CRP14-2014-02) and administered
by Centre for Quantum Technologies, National University of
Singapore. Finally, the authors are grateful to the anonymous
referees for their insightful comments and suggested changes
to this paper.
APPENDIX A

We note that in DQC1, there is a method of reducing the control gate $\Lambda_U = |0\rangle\langle 0| \otimes \mathbb{1} + |1\rangle\langle 1| \otimes U$ in terms of elementary (e.g., one- or two-qubit) circuits [23]. The analogous gate in the power of one qumode model is the hybrid control gate $C_U = \exp(i\hat{x} \otimes H\tau/x_0)$, where we now set $\tau = x_0$ for convenience. We demonstrate how this gate can also be reduced to elementary operations, to further clarify the relationship between DQC1 and the power of one qumode model.
We first write the DQC1 setup. The DQC1 setup begins with a polynomial sequence of elementary (e.g., one- or two-qubit) gates $\{u_k = \exp(ih_k)\}$. We define the product of these gates to be $\prod_k u_k \equiv U = \exp(iH)$. The next step is to implement a controlled unitary on each $u_k$, so our collection of elementary gates is transformed into the set $\{\lambda_{u_k} \equiv |0\rangle\langle 0| \otimes \mathbb{1} + |1\rangle\langle 1| \otimes u_k\}$. The product of these gates recovers the controlled-unitary operation $\Lambda_U = |0\rangle\langle 0| \otimes \mathbb{1} + |1\rangle\langle 1| \otimes U$ appearing in the description of DQC1, since

$\prod_k \lambda_{u_k} = \prod_k \left(|0\rangle\langle 0| \otimes \mathbb{1} + |1\rangle\langle 1| \otimes u_k\right) = |0\rangle\langle 0| \otimes \mathbb{1} + |1\rangle\langle 1| \otimes \prod_k u_k = |0\rangle\langle 0| \otimes \mathbb{1} + |1\rangle\langle 1| \otimes U = \Lambda_U$.  (A1)
The analogous requirement for the power of one qumode model is to begin from a polynomial sequence of elementary gates which can form the hybrid control-unitary operation $C_U = \exp(i\hat{x} \otimes H)$. We show how this can be achieved. Let us begin with the same set of elementary gates $\{u_k = \exp(ih_k)\}$. Instead of implementing the usual controlled unitary on each $u_k$, we implement a hybrid control unitary on each $u_k$. This means our set of elementary gates is modified into the new set $\{c_{u_k} \equiv \exp(i\hat{x} \otimes h_k)\}$. We can take the product of these operations and recover $C_U$ in the following way:

$\prod_k c_{u_k} = \prod_k \exp(i\hat{x} \otimes h_k) = \prod_k \int dx\, |x\rangle\langle x| \otimes e^{ixh_k} = \int dx\, |x\rangle\langle x| \otimes \prod_k e^{ixh_k} = \int dx\, |x\rangle\langle x| \otimes e^{ixH} = e^{i\hat{x} \otimes H} = C_U$,  (A2)

where $x$ is a number and we used $\prod_k e^{ixh_k} \equiv e^{ixH}$, which must be satisfied for all $x$. This condition, combined with the definition that $\prod_k u_k = \prod_k \exp(ih_k) = \exp(iH) = U$, implies that $[h_k, h_{k'}] = 0$ for all $k, k'$ in the product [46]. Equivalently, this means $\{u_k\}$ must be a commuting set of operators.
We can show that such a set $\{u_k\}$, where $U = \exp(iH) = \prod_k u_k$, exists for the factoring problem. We know that factoring the number $N$ is equivalent to finding the order $r$ of a random integer $q$, where $1 < q < N$, which requires $U|1 \bmod N\rangle = \exp(iH)|1 \bmod N\rangle = |q \bmod N\rangle$. Since $q$ is an integer, we can make a binary decomposition $q - 1 = 2^0 b_0 + 2^1 b_1 + 2^2 b_2 + \cdots + 2^f$, where $f$ is an integer and $b_j = 0,1$. Then if we choose $u_k$ to be an elementary operation defined by $u_k |1 \bmod N\rangle = |(1 + 2^k b_k) \bmod N\rangle$, we can see that all operators in $\{u_k\}$ commute and $\prod_{k=0}^{f} u_k\, |1 \bmod N\rangle = |q \bmod N\rangle = U|1 \bmod N\rangle$.
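This decomposition can be checked classically for small numbers if each $u_k$ is read as the additive shift $l \mapsto (l + 2^k b_k) \bmod N$, which is one commuting realization of the gates above (our reading; the values $N = 15$, $q = 7$ are assumed):

```python
N, q = 15, 7

# Binary decomposition q - 1 = sum_k 2^k b_k (least significant bit first)
bits = [int(b) for b in bin(q - 1)[2:][::-1]]

def u_k(l: int, k: int) -> int:
    """Elementary shift u_k: |l mod N> -> |(l + 2**k * b_k) mod N>."""
    return (l + (1 << k) * bits[k]) % N

state = 1
for k in range(len(bits)):   # the shifts commute, so any ordering works
    state = u_k(state, k)
assert state == q % N
print("product of the u_k maps |1 mod N> to |", state, "mod N>")
```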
APPENDIX B

Suppose we begin with a coherent state $|\alpha\rangle$ in our model. The coherent state can be written in the position basis as

$|\alpha\rangle = \int \langle x|\alpha\rangle\, |x\rangle\, dx$,  (B1)

whose position wave function is

$\langle x|\alpha\rangle = \left(\frac{1}{\pi x_0^2}\right)^{1/4} e^{-\frac{1}{2x_0^2}[x - x_0 \mathrm{Re}(\alpha)]^2}\, e^{i\,\mathrm{Im}(\alpha)\, x/x_0}\, e^{-\frac{i}{2}\mathrm{Re}(\alpha)\mathrm{Im}(\alpha)}$,  (B2)

where $x_0 \equiv 1/\sqrt{m\omega}$ and $m, \omega$ are the mass and frequency scales, respectively, of the corresponding quantum harmonic oscillator.

By using $G(x) \equiv \langle x|\alpha\rangle$ in Eq. (3), we find the momentum probability distribution of the final control state to be

$P(p) = \frac{x_0}{\sqrt{\pi}\,2^n} \sum_m e^{-x_0^2 \left\{p - \frac{\tau}{x_0}[\phi_m + \mathrm{Im}(\alpha)/\tau]\right\}^2}$.  (B3)

If we measure the variable $p_E \equiv p x_0/\tau$ (where the inputs $x_0$ and $\tau$ are initially known), the probability distribution for $p_E$ is

$P(p_E) = \frac{\tau}{\sqrt{\pi}\,2^n} \sum_m e^{-\tau^2 \{p_E - [\phi_m + \mathrm{Im}(\alpha)/\tau]\}^2}$.  (B4)
Thus, the coherent state can be used for phase estimation, where the accuracy of the phase estimation improves with increasing running time of the hybrid gate.

APPENDIX C

Suppose we want to recover any eigenvalue of our Hamiltonian to accuracy $\Delta E$. The total number of $p_E$ measurements required for an average of one success is

$T_{\rm measure} \sim \frac{1}{P_{\Delta E}}$,  (C1)

where $P_{\Delta E}$ is the probability of retrieving the eigenvalues to within the interval $[\phi_j - \Delta E, \phi_j + \Delta E]$. Using Eq. (5) we find

$P_{\Delta E} \equiv P(p_E;\ |p_E - \phi_j| \leq \Delta E) = \frac{s_0 \tau}{\sqrt{\pi}\,2^n} \sum_{l=1}^{2^n} \int_{\phi_l - \Delta E}^{\phi_l + \Delta E} \sum_{m=1}^{2^n} e^{-(s_0 \tau)^2 (p_E - \phi_m)^2}\, dp_E \equiv P(l = m) + P(l \neq m)$.  (C2)

The diagonal contribution is

$P(l = m) = \frac{s_0 \tau}{\sqrt{\pi}\,2^n} \sum_{m=1}^{2^n} \int_{\phi_m - \Delta E}^{\phi_m + \Delta E} e^{-(s_0 \tau)^2 (p_E - \phi_m)^2}\, dp_E = \mathrm{erf}(s_0 \tau\, \Delta E)$  (C3)

and $P(l \neq m) = (1/2^n) \sum_{l \neq m} \left(\mathrm{erf}\{s_0 \tau [(\phi_l - \phi_m) + \Delta E]\} - \mathrm{erf}\{s_0 \tau [(\phi_l - \phi_m) - \Delta E]\}\right) > 0$. These two contributions to the total probability $P_{\Delta E}$ can be interpreted in the following way. $P(l = m)$ is the probability of finding $\phi_j$ to within $\Delta E$ if the Gaussian peaks are very far apart. This occurs when the spread of each Gaussian is much smaller than the distance between neighboring Gaussian peaks, $1/(s_0 \tau) \ll \Delta\phi_{\min}$, where $\Delta\phi_{\min}$ is the minimum gap between adjacent eigenvalues. $P(l \neq m)$ captures the overlaps between the Gaussians. This overlap contribution vanishes for large $N$, so for simplicity we neglect this term. This neglect will not affect the overall validity of our result. We can now write

$P_{\Delta E} > P(l = m) = \mathrm{erf}(s_0 \tau\, \Delta E)$.  (C4)

By demanding $T_{\rm measure} < T_{\rm bound}$, then using Eqs. (C1) and (C4), we find it is sufficient to satisfy

$T_{\rm bound}\, \mathrm{erf}(\tau s_0\, \Delta E) \gtrsim 1$.  (C5)

For large $\tau s_0 \Delta E$, the above inequality is automatically satisfied. This assumes that $\tau s_0$ grows more quickly in $N$ than the inverse of the eigenvalue uncertainty $\Delta E$ that we are willing to tolerate. More generally, however, it is the time and squeezing resources we want to minimize for a given precision, so $\tau s_0 \Delta E$ is small. In this case, Eq. (C5) becomes

$T_{\rm bound}\, \tau s_0\, \Delta E \gtrsim 1$.  (C6)
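The trade-off in Eq. (C6) is easy to explore numerically: for a fixed precision $\Delta E$, the per-run success probability is $\mathrm{erf}(s_0 \tau\, \Delta E)$ and the expected number of runs is its inverse. The $\Delta E$ and $s_0\tau$ values below are assumptions for illustration.

```python
from math import erf

delta_E = 1e-3                        # assumed target eigenvalue precision
for s0_tau in (10.0, 100.0, 1000.0):  # assumed values of the product s0 * tau
    p = erf(s0_tau * delta_E)
    print(f"s0*tau = {s0_tau:7.1f}: P = {p:.4f}, T_measure ~ {1 / p:.1f} runs")
```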
APPENDIX D

Here we provide a brief argument of how eigenvectors of the Hamiltonian $\{|\phi_j\rangle\}$ can also be found using our model. The hybrid state $\rho_{\rm total}$ after application of the hybrid gate is

$\rho_{\rm total} = \frac{1}{2^n} \sum_m \iint G(x)\, G^*(x')\, e^{i(x-x')\phi_m \tau/x_0}\, |\phi_m\rangle\langle\phi_m| \otimes |x\rangle\langle x'|\, dx\, dx'$.  (D1)

After a momentum measurement we are in the following state of the target register:

$\langle p|\rho_{\rm total}|p\rangle = \frac{1}{2^n} \sum_m \tilde{G}(\phi_m \tau/x_0 - p)\, \tilde{G}^*(p - \phi_m \tau/x_0)\, |\phi_m\rangle\langle\phi_m|$.  (D2)

For a squeezed state $G(x) = [1/(\sqrt{s}\,\pi^{1/4})]\exp[-x^2/(2s^2)]$ the final state of the target register becomes

$\langle p|\rho_{\rm total}|p\rangle = \frac{s}{\sqrt{\pi}\,2^n} \sum_m e^{-s^2 (p - \phi_m \tau/x_0)^2}\, |\phi_m\rangle\langle\phi_m|$.  (D3)

Approximate eigenvectors can thus be obtained by measurement of the target state. The probability of obtaining the eigenvectors of the Hamiltonian is distributed in the same way as for the eigenvalues. Eigenvector identification therefore also improves with an increase in the squeezing factor.

APPENDIX E

Here we derive the number of momentum measurements $T_{\rm DQC1}$ in our model needed to recover the normalized trace of $U \equiv \exp(iH)$ to within error $\delta$. We show that this is upper bounded by a quantity independent of the size of $U$.

Let us begin by introducing a new random variable $y \equiv \exp(ip_E x_0)$, where the $p_E$ are the measurement outcomes from our model. The probability distribution function with respect to $y$ can be rewritten as

$P_y(y) = \int \delta(y - e^{ip_E x_0})\, P(p_E)\, dp_E$,  (E1)

where $P(p_E)$ is given by Eq. (5). We find that the average of $y$ is related to the normalized trace of the unitary matrix $U$,

$\int y\, P_y(y)\, dy = \int e^{ip_E x_0}\, P(p_E)\, dp_E = e^{-\frac{1}{4s_0^2}}\, \frac{\mathrm{Tr}(U_\tau)}{2^n}$.  (E2)

We now let $\tau = 1$ since $U_{\tau=1} = U$. To find the normalized trace of $U$ to error $\delta$ is equivalent to finding the average of $y$ to within $\varepsilon$, where

$\int y\, P_y(y)\, dy \pm \varepsilon = e^{-\frac{1}{4s_0^2}} \left[\frac{\mathrm{Tr}(U)}{2^n} \pm \delta\right]$.  (E3)

For concreteness, we first separately examine recovering the real part of the normalized trace of $U$ to within $\mathrm{Re}(\delta)$, then the imaginary part of the trace to within $\mathrm{Im}(\delta)$.

Real part of the normalized trace of U. We define a new random variable $y_R \equiv \mathrm{Re}(y) = \cos(p_E x_0)$ whose average is within $\mathrm{Re}(\varepsilon)$ of the real part of the normalized trace of $U$. The probability distribution with respect to $y_R$ is

$P_{y_R}(y_R) = \int \delta[y_R - \cos(p_E x_0)]\, P(p_E)\, dp_E$.  (E4)

We can employ the central limit theorem [47] and Eq. (E4) to find the number $t_R$ of necessary $p_E$ measurements to be

$t_R \sim \frac{e^{\frac{1}{2s_0^2}}\, \sigma_R^2}{\mathrm{Re}(\delta)^2}$,  (E5)

where $\sigma_R^2$ is the variance of the probability distribution with respect to $y_R$ and we used $\mathrm{Re}(\varepsilon) = e^{-\frac{1}{4s_0^2}}\, \mathrm{Re}(\delta)$ from Eq. (E3). Using Eqs. (E1) and (5) we can show

$\sigma_R^2 \equiv \int y_R^2\, P_{y_R}(y_R)\, dy_R - \left[\int y_R\, P_{y_R}(y_R)\, dy_R\right]^2 = \int \cos^2(p_E x_0)\, P(p_E)\, dp_E - \left[\int \cos(p_E x_0)\, P(p_E)\, dp_E\right]^2 \leq F(s_0)$,  (E6)

where $F(s_0) = \sinh\!\left[\frac{1}{2s_0^2}\right] + \exp\!\left[-\frac{1}{2s_0^2}\right]$. We can now use Eqs. (E5) and (E6) to find an upper bound to the number of measurements,

$t_R \leq \frac{e^{\frac{1}{2s_0^2}}\, F(s_0)}{\mathrm{Re}(\delta)^2}$.  (E7)

Imaginary part of the normalized trace of U. To recover the imaginary part of the normalized trace of $U$ to within an error $\mathrm{Im}(\delta)$, we average $y_I \equiv \mathrm{Im}(y) = \sin(p_E x_0)$. The probability distribution with respect to $y_I$ is

$P_{y_I}(y_I) = \int \delta[y_I - \sin(p_E x_0)]\, P(p_E)\, dp_E$.  (E8)

We can similarly use the central limit theorem in this case to find the necessary number of measurements $t_I$,

$t_I \sim \frac{e^{\frac{1}{2s_0^2}}\, \sigma_I^2}{\mathrm{Im}(\delta)^2}$,  (E9)

where $\sigma_I^2$ is the variance with respect to the probability distribution $P_{y_I}(y_I)$, and analogously $\sigma_I^2 \leq F(s_0)$.

This means the number of required measurements to recover the normalized trace of $U$ to within $\delta$ has the upper bound

$T_{\rm DQC1} = \max(t_R, t_I) \leq \frac{e^{\frac{1}{2s_0^2}}\, F(s_0)}{[\min\{\mathrm{Re}(\delta), \mathrm{Im}(\delta)\}]^2}$.  (E10)

APPENDIX F

Here we give the derivation of the number of runs $T_{\rm factor}$ needed to recover a nontrivial factor of $N$, given the momentum probability distribution [Eq. (9)]

$P(p_E) = \frac{s_0 \tau}{\sqrt{\pi}\,2^n} \sum_m c_m\, e^{-(2\pi s_0 \tau)^2 \left(\frac{p_E}{2\pi} - \frac{m}{r}\right)^2}$.  (F1)
We want to find the probability $P_r$ with which one can retrieve the correct value of the order $r$. The number of runs required on average to find a nontrivial factor of $N$ is inversely related to this probability,

$T_{\rm factor} \sim \frac{1}{P_r}$.  (F2)
Here we derive a lower bound to Pr (hence an upper bound to
the number of runs) that satisfies the following two conditions.
To recover $r$ it is sufficient (i) to know $m/r$ to an accuracy within $1/(2N^2)$ and (ii) to choose cases where $m$ and $r$ have no factors in common, so that their greatest common divisor is 1 [i.e., $\gcd(m,r) = 1$].
The first condition comes from the continued fractions algorithm [48], which can be used to exactly recover the rational number $m/r$ given some $\phi$ when $|\phi - m/r| \leq 1/(2r^2)$. Since $r \leq N$, a sufficient condition is $|\phi - m/r| \leq 1/(2N^2)$. The second condition ensures we recover $r$ instead of a nontrivial factor of $r$. We see how to satisfy the second condition later on.
To satisfy the first condition, we see that the probability of finding $m/r$ to within $1/(2N^2)$ when measuring $\tilde{p}_E \equiv p_E/(2\pi)$ is

$P_r \equiv P\!\left(\tilde{p}_E;\ \left|\tilde{p}_E - \frac{m}{r}\right| \leq \frac{1}{2N^2}\right) = \frac{s_0 \tau}{\sqrt{\pi}\,2^n} \sum_{l=0}^{r-1} \int_{l/r - 1/(2N^2)}^{l/r + 1/(2N^2)} \sum_m c_m\, e^{-(2\pi s_0 \tau)^2 (\tilde{p}_E - m/r)^2}\, 2\pi\, d\tilde{p}_E$.  (F3)

Evaluating the integrals, the probability of retrieving the correct $r$ from the probability distribution is at least

$P_r > \frac{1}{2^n} \sum_m c_m\, \mathrm{erf}\!\left(\frac{\pi s_0 \tau}{N^2}\right)$.

Note that we do not require contributions to the probability from every $m$ in the summation. In order to successfully retrieve $r$ from the fraction $m/r$, we need only consider the cases where $\gcd(m,r) = 1$. Euler's totient function $\varphi(r)$ counts the number of cases where $m$ and $r$ are coprime with $m < r$. It can be shown that $\varphi(r) > r/\{e^\gamma \ln[\ln(r)]\}$, where $\gamma$ is Euler's constant [2]. In the cases where $\gcd(m,r) = 1$, the amplitude $|c_m| \equiv M$, where $M$ is the number of cases where $r_d = r$. It is also possible to show that when $N = v_1 v_2$ (where $v_1$ and $v_2$ are prime numbers), $M > (v_1 - 1)(v_2 - 1)$ [9]. Keeping only these terms gives

$P_r > \frac{(v_1 - 1)(v_2 - 1)\, r}{e^\gamma\, 2^n \ln[\ln(r)]}\, \mathrm{erf}\!\left(\frac{\pi s_0 \tau}{2^{2n}}\right)$.  (F4)

From Eqs. (F2) and (F4) we now have an upper bound to the number of time steps required,

$T_{\rm factor} < \frac{e^\gamma\, N \ln[\ln(N)]}{(v_1 - 1)(v_2 - 1)\, \mathrm{erf}\!\left(\frac{\pi s_0 \tau}{2^{2n}}\right)}$.  (F5)

The large-$N$ limit (where $v_1, v_2 \gg 1$) gives our result

$T_{\rm factor} < \frac{e^\gamma \ln[\ln(N)]}{\mathrm{erf}\!\left(\frac{\pi s_0 \tau}{2^{2n}}\right)} = \frac{e^\gamma \ln[\ln(2^n)]}{\mathrm{erf}\!\left(\frac{\pi s_0 \tau}{2^{2n}}\right)}$.  (F6)
[1] David Deutsch and Richard Jozsa, Rapid solution of problems
by quantum computation, Proc. R. Soc. London, Ser. A 439, 553 (1992).
[2] Peter W. Shor, Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer, SIAM J.
Comput. 26, 1484 (1997).
[3] Lov K. Grover, A fast quantum mechanical algorithm for
database search, in Proceedings of the Twenty-eighth Annual
ACM Symposium on Theory of Computing (ACM, New York,
1996), p. 212.
[4] Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd, Quantum
Algorithm for Linear Systems of Equations, Phys. Rev. Lett.
103, 150502 (2009).
[5] Richard Jozsa and Noah Linden, On the role of entanglement in
quantum-computational speed-up, Proc. R. Soc. London A 459,
2011 (2003).
[6] Emanuel Knill and Raymond Laflamme, Power of One Bit of
Quantum Information, Phys. Rev. Lett. 81, 5672 (1998).
[7] B. P. Lanyon, M. Barbieri, M. P. Almeida, and A. G. White, Experimental Quantum Computing without Entanglement, Phys.
Rev. Lett. 101, 200501 (2008).
[8] Animesh Datta and Guifre Vidal, Role of entanglement and
correlations in mixed-state quantum computation, Phys. Rev. A
75, 042310 (2007).
[9] S. Parker and Martin B. Plenio, Efficient Factorization with a
Single Pure Qubit and log N Mixed Qubits, Phys. Rev. Lett. 85,
3049 (2000).
[10] Carlton M. Caves, Quantum-mechanical noise in an interferometer, Phys. Rev. D 23, 1693 (1981).
[11] Alex Monras, Optimal phase measurements with pure Gaussian
states, Phys. Rev. A 73, 033821 (2006).
[12] Olivier Pinel, Pu Jian, N. Treps, C. Fabre, and Daniel Braun,
Quantum parameter estimation using general single-mode Gaussian states, Phys. Rev. A 88, 040102 (2013).
[13] Seth Lloyd and Samuel L. Braunstein, Quantum Computation
over Continuous Variables, Phys. Rev. Lett. 82, 1784 (1999).
[14] Mile Gu, Christian Weedbrook, Nicolas C. Menicucci, Timothy
C. Ralph, and Peter van Loock, Quantum computing with
continuous-variable clusters, Phys. Rev. A 79, 062318 (2009).
[15] Mikhail J. Atallah and Marina Blanton, Algorithms and Theory
of Computation Handbook: Special Topics and Techniques
(CRC Press, Boca Raton, FL, 2009), Vol. 2.
[16] Seth Lloyd, Hybrid quantum computing, in Quantum Information with Continuous Variables (Springer, Berlin, 2003), p. 37.
[17] Akira Furusawa and Peter van Loock, Quantum Teleportation
and Entanglement: A Hybrid Approach to Optical Quantum
Information Processing (Wiley & Sons, New York, 2011).
[18] Peter W. Shor and Stephen P. Jordan, Estimating Jones polynomials is a complete problem for one clean qubit, Quantum Inf. Comput. 8, 681 (2008).
[19] Dan Shepherd, Computation with unitaries and one pure qubit.
[20] David Poulin, Robin Blume-Kohout, Raymond Laflamme, and
Harold Ollivier, Exponential Speedup with a Single Bit of
Quantum Information: Measuring the Average Fidelity Decay,
Phys. Rev. Lett. 92, 177906 (2004).
[21] Emanuel Knill and Raymond Laflamme, Quantum computing
and quadratically signed weight enumerators, Inform. Process.
Lett. 79, 173 (2001).
[22] Animesh Datta, Steven T. Flammia, and Carlton M. Caves,
Entanglement and the power of one qubit, Phys. Rev. A 72,
042316 (2005).
[23] Animesh Datta, Studies on the Role of Entanglement in Mixed-state Quantum Computation, Ph.D. thesis, The University of
New Mexico, 2008.
[24] Richard Cleve, Artur Ekert, Chiara Macchiavello, and Michele
Mosca, Quantum algorithms revisited, Proc. R. Soc. London A
454, 339 (1998).
[25] It is also possible to define a control gate controlled on the
particle number operator instead of x̂. However, analytical
solutions in this case are not straightforward and for our purposes
it suffices to look at our current hybrid control gate.
[26] Here we use natural units $\hbar = 1 = c$.
[27] Operators x̂ and p̂ satisfy the canonical commutator relation
[x̂,p̂] = i.
[28] Christopher Gerry and Peter Knight, Introductory Quantum
Optics (Cambridge University Press, Cambridge, U.K., 2005).
[29] This is equivalent to the initial expectation value of momentum
of the coherent state.
[30] Here s0 is a real number in the range s0 ∈ [1,∞).
[31] Julia Kempe, Alexei Kitaev, and Oded Regev, The complexity
of the local Hamiltonian problem, SIAM J. Comput. 35, 1070 (2006).
[32] See Appendix D. Also, see [33] for another algorithm on
eigenvector retrieval.
[33] Daniel S. Abrams and Seth Lloyd, Quantum Algorithm Providing Exponential Speed Increase for Finding Eigenvalues and
Eigenvectors, Phys. Rev. Lett. 83, 5162 (1999).
[34] Note that the number of momentum measurements and pE
measurements needed are equivalent.
[35] The F (s0 ) overhead is analogous to the case in DQC1 when
using a slightly mixed state probe state instead of the pure state
|++| [23]. The degree of mixedness does not affect the result
that the computation is efficient. The amount of squeezing in
our model thus corresponds to the degree of mixedness in the
input state of DQC1. Higher squeezing corresponds to greater purity.
[36] This ensures that m/r is recovered exactly by using the continued fractions algorithm. See [48] for an explicit demonstration.
[37] Note that Tbound in this case corresponds to the number of
momentum measurements needed to find the correct eigenvalue
of the Hamiltonian. From the eigenvalue, one still needs an extra
classically efficient step to find the factor, so Tfactor > Tbound .
[38] Daniel Gottesman, Alexei Kitaev, and John Preskill, Encoding
a qubit in an oscillator, Phys. Rev. A 64, 012310 (2001).
[39] D = 2 is equivalent to a qubit.
[40] B. M. Terhal and D Weigand, Encoding a qubit into a cavity
mode in circuit-qed using phase estimation, Phys. Rev. A 93,
012315 (2016).
[41] Brian Vlastakis, Gerhard Kirchmair, Zaki Leghtas, Simon E. Nigg, Luigi Frunzio, Steven M. Girvin, Mazyar Mirrahimi, Michel H. Devoret, and Robert J. Schoelkopf, Deterministically encoding quantum information using 100-photon Schrödinger cat states, Science 342, 607 (2013).
[42] Samuel L. Braunstein and Peter van Loock, Quantum information with continuous variables, Rev. Mod. Phys. 77, 513 (2005).
[43] Christian Weedbrook, Stefano Pirandola, Raul Garcia-Patron,
Nicolas J. Cerf, Timothy C. Ralph, Jeffrey H. Shapiro, and Seth
Lloyd, Gaussian quantum information, Rev. Mod. Phys. 84, 621 (2012).
[44] Michael R. Garey and David S. Johnson, Computers and
Intractability (W. H. Freeman, London, 2002), Vol. 29.
[45] Stephen A. Cook, The complexity of theorem-proving procedures, in Proceedings of the Third Annual ACM Symposium on
Theory of Computing (ACM, New York, 1971), p. 151.
[46] Note that this also means $H = \sum_k h_k$.
[47] Since we are selecting our random variable independently and
from the same distribution which has finite mean and variance,
it is valid to use the central limit theorem.
[48] Michael A. Nielsen and Isaac L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press,
Cambridge, U.K., 2010).
|
d91dfdab3b53cdd5 | Difference Methods for One-Dimensional PDE
• Simon Širca
• Martin Horvat
Part of the Graduate Texts in Physics book series (GTP)
Finite-difference methods for one-dimensional partial differential equations are introduced by first identifying the classes of equations upon which suitable discretizations are constructed. It is shown how parabolic equations and the corresponding boundary conditions are discretized such that a desired local order of error is achieved and that the discretization is consistent and yields a stable and convergent solution scheme. Convergence criteria are established for a variety of explicit and implicit difference schemes. Energy estimates and theorems on maxima are given as auxiliary tools that allow us to ascertain that the solutions are physically meaningful. Difference schemes for hyperbolic equations are introduced from the standpoint of the Courant–Friedrichs–Lewy criterion, dispersion, and dissipation. Various techniques for non-linear equations and equations of mixed type are given, including high-resolution schemes for equations that can be expressed in terms of conservation laws. The Problems include the (parabolic) diffusion and (hyperbolic) advection equations, the Burgers equation, the shock-tube problem, the Korteweg–de Vries equation, and the non-stationary linear and cubic Schrödinger equations.
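As a minimal illustration of the explicit schemes treated in the chapter, the sketch below advances the one-dimensional diffusion equation $u_t = D u_{xx}$ with the forward-time centered-space (FTCS) scheme under homogeneous Dirichlet conditions, respecting the stability bound $D\,\Delta t/\Delta x^2 \le 1/2$. The grid sizes and initial profile are illustrative assumptions, not taken from the book.

```python
import numpy as np

D, L, T = 1.0, 1.0, 0.05           # assumed diffusivity, domain length, final time
nx = 101
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt = 0.4 * dx ** 2 / D             # respects the FTCS stability bound D*dt/dx**2 <= 1/2
r = D * dt / dx ** 2

u = np.sin(np.pi * x)              # initial condition; u(0) = u(L) = 0 (Dirichlet)
steps = int(T / dt)
for _ in range(steps):
    # u_j^{n+1} = u_j^n + r * (u_{j+1}^n - 2 u_j^n + u_{j-1}^n)
    u[1:-1] += r * (u[2:] - 2 * u[1:-1] + u[:-2])

exact = np.sin(np.pi * x) * np.exp(-np.pi ** 2 * D * steps * dt)
print("max error vs exact decaying mode:", np.abs(u - exact).max())
```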
Copyright information
© Springer-Verlag Berlin Heidelberg 2012
Authors and Affiliations
• Simon Širca, Faculty of Mathematics and Physics, University of Ljubljana, Ljubljana, Slovenia
• Martin Horvat, Faculty of Mathematics and Physics, University of Ljubljana, Ljubljana, Slovenia |
a1ba013dbb1545b8 | Nouvelles techniques de compensation des effets nonlinéaires pour les transmissions optiques à longue distance
par Wasyhun asefa Gemechu
Projet de thèse en Réseaux, information et communications
Sous la direction de Yves Jaouen et de Mansoor Isvand yousefi.
Thèses en préparation à Paris Saclay en cotutelle avec l'University of brescia , dans le cadre de Sciences et Technologies de l'Information et de la Communication , en partenariat avec LTCI - Laboratoire de Traitement et Communication de l'Information (laboratoire) , GTO : Télécommunications Optiques (equipe de recherche) et de Télécom ParisTech (établissement de préparation de la thèse) depuis le 01-04-2016 .
• Titre traduit
Advanced nonlinearity Compensation techniques for long-haul optical Transmission systems
• Résumé
Context: In recent years, the exponentially increasing data traffic and the consequent need to increase the capacity of optical communication networks have added more pressure on network infrastructure and design engineers to come up with better signal processing and transmission techniques. A new transmission technology was introduced in 2005, namely the orthogonal frequency division multiplexing (OFDM) method for long-haul high-speed optical transmission systems in the intensity modulation-direct detection (IM/DD) environment. The revival of coherent detection transmissions led to CO-OFDM as a highly competitive technology for next-generation high-capacity optical systems. CO-OFDM has shown tremendous potential for Tb/s, polarization diversity multiplexed transmissions, based on the compensation at the receiver of linear signal impairments due to dispersion and PMD. However, compensating the ASE noise accumulated in the fiber network remains challenging unless a substantial increase of the OSNR is permitted, which in turn is prevented by the presence of nonlinear effects in the fiber channel. To date, it is clear that the Kerr fiber nonlinearity poses the major threat to optical communication systems. Hence novel and possibly breakthrough solutions for compensating the Kerr effect of the optical fiber channel are absolutely necessary for solving the upcoming capacity crunch.

Thesis Work Description: This thesis work will develop new techniques for the mitigation first, and possibly full compensation next, of nonlinear fiber impairments in wavelength and polarization multiplexed CO-OFDM transmissions. Building from already existing fiber models and compensation methods like electronic compensators and DCF modules that increase circuitry complexity and fiber length, improved versions of the Volterra Series Transfer Function (VSTF) method will be developed and numerically demonstrated as nonlinear-equalizer modules. Unlike other compensation techniques, the VSTF method has the capability to simultaneously compensate for linear attenuation and dispersion, weak nonlinear effects, and accumulated ASE noise from optical amplifiers, both in the single-channel and in the multi-channel (WDM) environment. The spotlight will be on the truncated third-order VSTF, to model and compensate for the most relevant nonlinear effects as described by the nonlinear Schrödinger equation (NLSE), namely Self-Phase Modulation (SPM), Cross-Phase Modulation (XPM), and Four-Wave Mixing (FWM), both individually and combined. Higher-order terms will also be included in the VSTF method, and simulated for a better approximation of (supposedly weak) nonlinear effects. The performance of such equalizers will be evaluated using the BER metric and computational latency for both 100-Gb and 400-Gb Nyquist-channel single-channel and WDM transmissions. However, the VSTF only represents a first step towards the ultimate goal of reaching the highest spectral efficiency, which is fundamentally limited by the nonlinear Shannon channel capacity theorem. Present design of optical communication systems is based on the hypothesis that nonlinearity is a small perturbation (nonlinear noise) to a quasi-linear system. The development of a new kind of nonlinear optical communication system, such that nonlinearity can be fully compensated for, will enable a breakthrough increase of the OSNR, and a large leap in spectral efficiency.
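For reference, the baseline channel model invoked above, the scalar NLSE with attenuation, dispersion, and Kerr nonlinearity, is commonly integrated with the symmetric split-step Fourier method, which also underlies Volterra-type equalizer design. The sketch below is a generic integrator with illustrative fiber parameters; the alpha, beta2, gamma values and the pulse shape are assumptions, not taken from this thesis project.

```python
import numpy as np

# Scalar NLSE: dA/dz = -(alpha/2) A - i (beta2/2) d^2A/dt^2 + i gamma |A|^2 A
alpha = 0.2 / 4.343 / 1e3      # 0.2 dB/km converted to 1/m (assumed)
beta2 = -21.7e-27              # group-velocity dispersion in s^2/m (assumed)
gamma = 1.3e-3                 # Kerr coefficient in 1/(W m) (assumed)
L, nz = 80e3, 2000             # fiber length (m) and number of split steps
dz = L / nz

nt, t_win = 2 ** 12, 1e-9
t = np.linspace(-t_win / 2, t_win / 2, nt, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(nt, t[1] - t[0])

A = np.sqrt(1e-3) * np.exp(-(t / 25e-12) ** 2)   # Gaussian pulse, ~1 mW peak

half_lin = np.exp((-alpha / 2 + 1j * beta2 / 2 * w ** 2) * dz / 2)
for _ in range(nz):
    A = np.fft.ifft(half_lin * np.fft.fft(A))      # half step: loss + dispersion
    A *= np.exp(1j * gamma * np.abs(A) ** 2 * dz)  # full nonlinear (Kerr) step
    A = np.fft.ifft(half_lin * np.fft.fft(A))      # second linear half step

print("output peak power (W):", np.abs(A).max() ** 2)
```

A Volterra-series equalizer of the kind described above can be benchmarked against this forward model by propagating a signal and then applying the inverse (backward) operators.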
Therefore the second part of the thesis will focus on the development of a practical nonlinear communication method, based on the theory of the inverse spectral transform. This method, originally proposed by Hasegawa in 1993 and called eigenvalue (or multi-soliton) communication, is based on the fundamental observation that the discrete nonlinear spectrum of an optical signal is invariant (except for a trivial linear phase shift) upon propagation in the fiber channel, as described by the scalar NLSE. The method was never implemented, since it was not suitable for the IM/DD systems of the time. However, the recent development of practical coherent detectors based on fast digital signal processing enables the real-time measurement of both quadrature components of the received field. This means that the direct spectral transform (also known as the nonlinear Fourier transform) of the received signal can be computed, and the eigenvalue spectrum fully recovered. Our first goal will thus be to find optical input signal constellations which lead to a signal alphabet whose members enjoy the largest distance (and are possibly even orthogonal) in the complex plane of their eigenvalues. The second goal will be to numerically evaluate and optimize the transmission performance and spectral efficiency of practical transmission systems based on the eigenvalue communication technique, both in the single- and in the multi-channel environment. The third and final goal will be to extend the eigenvalue communication technique to include the polarization dimension, so as to double the channel transmission capacity by polarization modulation or multiplexing. To that end, the nonlinear Fourier transform of the vector NLSE, or Manakov system, will be numerically developed and implemented, and optimal polarization modulation formats will be developed. |
45035596cfd668d8 |
Erwin Schrödinger
Erwin Schrödinger is perhaps the most complex figure in twentieth-century discussions of quantum mechanical uncertainty, ontological chance, indeterminism, and the statistical interpretation of quantum mechanics.
In his early career, Schrödinger was a great exponent of fundamental chance in the universe. He followed his teacher Franz S. Exner, who was himself a colleague of the great Ludwig Boltzmann at the University of Vienna. Boltzmann used intrinsic randomness in molecular collisions (molecular chaos) to derive the increasing entropy of the Second Law of Thermodynamics. The macroscopic irreversibility of entropy increase depends on Boltzmann's molecular chaos, which in turn depends on randomness in the microscopic collisions.
Before the twentieth century, most physicists, mathematicians, and philosophers believed that the chance described by the calculus of probabilities was actually completely determined. The "bell curve" or "normal distribution" of random outcomes was itself so consistent that they argued for underlying deterministic laws governing individual events. They thought that we simply lack the knowledge necessary to make exact predictions for these individual events. Pierre-Simon Laplace was first to see in his "calculus of probabilities" a universal law that determined the motions of everything from the largest astronomical objects to the smallest particles. In a Laplacian world, there is only one possible future.
On the other hand, in his inaugural lecture at Zurich in 1922, Schrödinger argued that evidence did not justify our assumptions that physical laws were deterministic and strictly causal. His inaugural lecture was modeled on that of Franz Serafin Exner in Vienna in 1908.
"Exner's assertion amounts to this: It is quite possible that Nature's laws are of thoroughly statistical character. The demand for an absolute law in the background of the statistical law — a demand which at the present day almost everybody considers imperative — goes beyond the reach of experience. Such a dual foundation for the orderly course of events in Nature is in itself improbable. The burden of proof falls on those who champion absolute causality, and not on those who question it. For a doubtful attitude in this respect is to-day by far the more natural."
Several years later, Schrödinger presented a paper on "Indeterminism in Physics" to the June 1931 Congress of the Society for Philosophical Instruction in Berlin.
"Fifty years ago it was simply a matter of taste or philosophic prejudice whether the preference was given to determinism or indeterminism. The former was favored by ancient custom, or possibly by an a priori belief. In favor of the latter it could be urged that this ancient habit demonstrably rested on the actual laws which we observe functioning in our surroundings. As soon, however, as the great majority or possibly all of these laws are seen to be of a statistical nature, they cease to provide a rational argument for the retention of determinism.
"If nature is more complicated than a game of chess, a belief to which one tends to incline, then a physical system cannot be determined by a finite number of observations. But in practice a finite number of observations is all that we can make. All that is left to determinism is to believe that an infinite accumulation of observations would in principle enable it completely to determine the system. Such was the standpoint and view of classical physics, which latter certainly had a right to see what it could make of it. But the opposite standpoint has an equal justification: we are not compelled to assume that an infinite number of observations, which cannot in any case be carried out in practice, would suffice to give us a complete determination.
Despite these strong arguments against determinism, just after he completed the wave mechanical formulation of quantum mechanics in June 1926 (the year Exner died), Schrödinger began to side with the determinists, including especially Max Planck and Albert Einstein (who had in 1916 discovered that ontological chance is involved in the emission of radiation).
Schrödinger's wave equation is a continuous function that evolves smoothly in time, in sharp contrast to the discrete, discontinuous, and indeterministic "quantum jumps" of the Born-Heisenberg matrix mechanics. His wave equation seemed to Schrödinger to restore the continuous and deterministic nature of classical mechanics and dynamics. And it allows us to visualize particles as wave packets moving in spacetime, which was very important to Schrödinger. By contrast, Bohr and Heisenberg and their Copenhagen Interpretation of quantum mechanics insisted that visualization of quantum events is simply not possible. Einstein agreed with Schrödinger that visualization (Anschaulichkeit) should be the goal of describing reality.
Max Born, Werner Heisenberg's mentor and the senior partner in the team that created matrix mechanics, shocked Schrödinger with the interpretation of the wave function as a "probability amplitude."
The motions of particles are indeterministic and probabilistic, even if the equation of motion for the probability is deterministic.
It is true, said Born, that the wave function itself evolves deterministically, but its significance is that it predicts only the probability of finding an atomic particle somewhere. When and where particles would appear - to an observer or to an observing system like a photographic plate - was completely and irreducibly random, he said. Einstein had of course proposed in 1909 that the relationship between waves and particles is that the waves give us the probability of finding a particle.
Einstein had seen clearly for many years that quantum transitions involve chance, that quantum jumps are random, but he did not want to believe it. Although the Schrödinger equation of motion is itself continuous and deterministic, it is impossible to restore continuous deterministic behavior to material particles and return physics to strict causality. Even more than Einstein, Schrödinger hated this idea and never accepted it, despite the great success of quantum mechanics, which today uses Schrödinger's wave functions to calculate Heisenberg's matrix elements for atomic transition probabilities and all atomic properties.
Discouraged, Schrödinger wrote to his friend Willy Wien in August 1926:
"[That discontinuous quantum jumps]...offer the greatest conceptual difficulty for the achievement of a classical theory is gradually becoming even more evident to me."...[yet] today I no longer like to assume with Born that an individual process of this kind is "absolutely random." i.e., completely undetermined. I no longer believe today that this conception (which I championed so enthusiastically four years ago) accomplishes much. From an offprint of Born's work in the Zeitsch f. Physik I know more or less how he thinks of things: the waves must be strictly causally determined through field laws, the wavefunctions on the other hand have only the meaning of probabilities for the actual motions of light- or material-particles."
Why did Schrödinger not simply welcome Born's absolute chance? It provides strong evidence that Boltzmann's assumption of chance in atomic collisions (molecular disorder) was completely justified. Boltzmann's idea that entropy is statistically irreversible depends on microscopic irreversibility. Exner thought chance is absolute, but did not live to see how fundamental it was to physics. And the early Epicurean idea that atoms sometimes "swerve" could be replaced by the insight that atoms are always swerving randomly - when they interact with other atoms.
Could it be that senior scientists like Max Planck and Albert Einstein were so delighted with Schrödinger's work that it turned his head? Planck, universally revered as the elder statesman of physics, invited Schrödinger to Berlin to take Planck's chair as the most important lecturer in physics at a German university. And Schrödinger shared Einstein's goal to develop a unified (continuous and deterministic) field theory. Schrödinger won the Nobel prize in 1933. But how different our thinking about absolute chance would be if perhaps the greatest theoretician of quantum mechanics had accepted random quantum jumps in 1926?
In his vigorous debates with Niels Bohr and Werner Heisenberg, Schrödinger attacked the probabilistic Copenhagen interpretation of his wave function with a famous thought experiment (actually based on an Einstein suggestion) called Schrödinger's Cat.
Schrödinger was very pleased to read the Einstein-Podolsky-Rosen paper in 1935. He immediately wrote to Einstein in support of an attack on Bohr, Born, and Heisenberg and their "dogmatic" quantum mechanics.
"I was very happy that in the paper just published in P.R. you have evidently caught dogmatic q.m. by the coat-tails...My interpretation is that we do not have a q.m. that is consistent with relativity theory, i.e., with a finite transmission speed of all influences. We have only the analogy of the old absolute mechanics . . . The separation process is not at all encompassed by the orthodox scheme.'
Einstein had said in 1927 at the Solvay conference that nonlocality (faster-than-light signaling between particles in a space-like separation) seemed to violate relativity in the case of a single-particle wave function with non-zero probabilities of finding the particle at more than one place. What instantaneous "action-at-a-distance" prevents particles from appearing at more than one place, Einstein oddly asked. [The answer: one particle becoming two particles never appears in nature; that would violate the most fundamental conservation laws.]
In his 1935 EPR paper, Einstein cleverly introduced two particles instead of one, and a two-particle wave function that describes both particles. The particles are identical, indistinguishable, and with indeterminate positions, although EPR wanted to describe them as widely separated, one "here" and measurable "now" and the other distant and to be measured "later."
Here we must explain the asymmetry that Einstein, and Schrödinger, have mistakenly introduced into a perfectly symmetric situation, making entanglement such a mystery.
Schrödinger challenged Einstein's idea that two systems that had previously interacted can be treated as separated systems, and that a two-particle wave function ψ12 can be factored into a product of separated wave functions for each system, ψ1 and ψ2.
Einstein called this his "separability principle" (Trennungsprinzip). The particles cannot separate until another quantum interaction separates them. Schrödinger published a famous paper defining his idea of "entanglement" in August of 1935. It began:
They can also be disentangled, or decohered, by interaction with the environment (other particles). An experiment by a human observer is not necessary.
To disentangle them we must gather further information by experiment, although we knew as much as anybody could possibly know about all that happened. Of either system, taken separately, all previous knowledge may be entirely lost, leaving us but one privilege: to restrict the experiments to one only of the two systems. After reestablishing one representative by observation, the other one can be inferred simultaneously. In what follows the whole of this procedure will be called the disentanglement...
Attention has recently [viz., EPR] been called to the obvious but very disconcerting fact that even though we restrict the disentangling measurements to one system, the representative obtained for the other system is by no means independent of the particular choice of observations which we select for that purpose and which by the way are entirely arbitrary. It is rather discomforting that the theory should allow a system to be steered or piloted into one or the other type of state at the experimenter's mercy in spite of his having no access to it. This paper does not aim at a solution of the paradox, it rather adds to it, if possible.
In the following year, Schrödinger looked more carefully at Einstein's assumption that the entangled system could be separated enough to be regarded as two systems with independent wave functions:
Years ago I pointed out that when two systems separate far enough to make it possible to experiment on one of them without interfering with the other, they are bound to pass, during the process of separation, through stages which were beyond the range of quantum mechanics as it stood then. For it seems hard to imagine a complete separation, whilst the systems are still so close to each other, that, from the classical point of view, their interaction could still be described as an unretarded actio in distans. And ordinary quantum mechanics, on account of its thoroughly unrelativistic character, really only deals with the actio in distans case. The whole system (comprising in our case both systems) has to be small enough to be able to neglect the time that light takes to travel across the system, compared with such periods of the system as are essentially involved in the changes that take place...
It seems worth noticing that the paradox could be avoided by a very simple assumption, namely if the situation after separating were described by the expansion [ψ (x,y) = Σ ak gk(x) fk(y), as assumed in EPR], but with the additional statement that the knowledge of the phase relations between the complex constants ak has been entirely lost in consequence of the process of separation.
When some interaction, like a measurement, causes a separation, the two-particle wave function Ψ12 collapses: the system decoheres into the product Ψ1Ψ2, the two factors losing their phase relation, so there is no more interference, and we have a mixed state rather than a pure state.
This would mean that not only the parts, but the whole system, would be in the situation of a mixture, not of a pure state. It would not preclude the possibility of determining the state of the first system by suitable measurements in the second one or vice versa. But it would utterly eliminate the experimenter's influence on the state of that system which he does not touch.
Schrödinger says that the entangled system may become disentangled long before any measurements and that perfect correlations between the measurements might remain. Note that the entangled system could simply decohere as a result of interactions with the environment, as proposed by decoherence theorists. The perfectly correlated results of Bell-inequality experiments might be preserved, depending on the interaction.
Schrödinger does not mention that there is a deeper reason for the perfect correlations between various observables. That is, as Einstein knew clearly, the conservation principles for mass, energy, momentum, angular momentum, and in recent years spin.
In 1952, Schrödinger wrote two influential articles in the British Journal for the Philosophy of Science denying quantum jumping. These papers greatly influenced generations of quantum collapse deniers, including John Bell, John Wheeler, Wojciech Zurek, and H. Dieter Zeh.
On Determinism and Free Will
Schrödinger's mystical epilogue to What Is Life? (1944), in which he "proves God and immortality at a stroke" but leaves us in the dark about free will.
As a reward for the serious trouble I have taken to expound the purely scientific aspects of our problem sine ira et studio, I beg leave to add my own, necessarily subjective, view of the philosophical implications.
According to the evidence put forward in the preceding pages the space-time events in the body of a living being which correspond to the activity of its mind, to its self-conscious or any other actions, are (considering also their complex structure and the accepted statistical explanation of physico-chemistry) if not strictly deterministic at any rate statistico-deterministic.
But there is a randomness requirement for mind to break the causal chain of determinism.
To the physicist I wish to emphasize that in my opinion, and contrary to the opinion upheld in some quarters, quantum indeterminacy plays no biologically relevant role in them, except perhaps by enhancing their purely accidental character in such events as meiosis, natural and X-ray-induced mutation and so on — and this is in any case obvious and well recognized.
For the sake of argument, let me regard this as a fact, as I believe every unbiased biologist would, if there were not the well-known, unpleasant feeling about 'declaring oneself to be a pure mechanism'. For it is deemed to contradict Free Will as warranted by direct introspection. But immediate experiences in themselves, however various and disparate they be, are logically incapable of contradicting each other. So let us see whether we cannot draw the correct, non-contradictory conclusion from the following two premises:
(i) My body functions as a pure mechanism according to the Laws of Nature; and (ii) yet I know, by incontrovertible direct experience, that I am directing its motions, of which I foresee the effects, that may be fateful and all-important, in which case I feel and take full responsibility for them.
The only possible inference from these two facts is, I think, that I — I in the widest meaning of the word, that is to say, every conscious mind that has ever said or felt 'I' — am the person, if any, who controls the 'motion of the atoms' according to the Laws of Nature. Within a cultural milieu (Kulturkreis) where certain conceptions (which once had or still have a wider meaning amongst other peoples) have been limited and specialized, it is daring to give to this conclusion the simple wording that it requires. In Christian terminology to say: 'Hence I am God Almighty' sounds both blasphemous and lunatic. But please disregard these connotations for the moment and consider whether the above inference is not the closest a biologist can get to proving God and immortality at one stroke.
In itself, the insight is not new. The earliest records to my knowledge date back some 2,500 years or more. From the early great Upanishads the recognition ATHMAN = BRAHMAN (the personal self equals the omnipresent, all-comprehending eternal self) was in Indian thought considered, far from being blasphemous, to represent the quintessence of deepest insight into the happenings of the world. The striving of all the scholars of Vedanta was, after having learnt to pronounce with their lips, really to assimilate in their minds this grandest of all thoughts. Again, the mystics of many centuries, independently, yet in perfect harmony with each other (somewhat like the particles in an ideal gas) have described, each of them, the unique experience of his or her life in terms that can be condensed in the phrase: DEUS FACTUS SUM (I have become God).
To Western ideology the thought has remained a stranger, in spite of Schopenhauer and others who stood for it and in spite of those true lovers who, as they look into each other's eyes, become aware that their thought and their joy are numerically one — not merely similar or identical; but they, as a rule, are emotionally too busy to indulge in clear thinking, in which respect they very much resemble the mystic.
Allow me a few further comments. Consciousness is never experienced in the plural, only in the singular. Even in the pathological cases of split consciousness or double personality the two persons alternate, they are never manifest simultaneously. In a dream we do perform several characters at the same time, but not indiscriminately: we are one of them; in him we act and speak directly, while we often eagerly await the answer or response of another person, unaware of the fact that it is we who control his movements and his speech just as much as our own.
How does the idea of plurality (so emphatically opposed by the Upanishad writers) arise at all? Consciousness finds itself intimately connected with, and dependent on, the physical state of a limited region of matter, the body. (Consider the changes of mind during the development of the body, as puberty, ageing, dotage, etc., or consider the effects of fever, intoxication, narcosis, lesion of the brain and so on.) Now, there is a great plurality of similar bodies. Hence the pluralization of consciousnesses or minds seems a very suggestive hypothesis. Probably all simple, ingenuous people, as well as the great majority of Western philosophers, have accepted it.
It leads almost immediately to the invention of souls, as many as there are bodies, and to the question whether they are mortal as the body is or whether they are immortal and capable of existing by themselves. The former alternative is distasteful, while the latter frankly forgets, ignores or disowns the facts upon which the plurality hypothesis rests. Much sillier questions have been asked: Do animals also have souls? It has even been questioned whether women, or only men, have souls.
Such consequences, even if only tentative, must make us suspicious of the plurality hypothesis, which is common to all official Western creeds. Are we not inclining to much greater nonsense, if in discarding their gross superstitions we retain their naive idea of plurality of souls, but 'remedy' it by declaring the souls to be perishable, to be annihilated with the respective bodies?
The only possible alternative is simply to keep to the immediate experience that consciousness is a singular of which the plural is unknown; that there is only one thing and that what seems to be a plurality is merely a series of different aspects of this one thing, produced by a deception (the Indian MAJA); the same illusion is produced in a gallery of mirrors, and in the same way Gaurisankar and Mt Everest turned out to be the same peak seen from different valleys.
There are, of course, elaborate ghost-stories fixed in our minds to hamper our acceptance of such simple recognition. E.g. it has been said that there is a tree there outside my window but I do not really see the tree. By some cunning device of which only the initial, relatively simple steps are explored, the real tree throws an image of itself into my consciousness, and that is what I perceive. If you stand by my side and look at the same tree, the latter manages to throw an image into your soul as well. I see my tree and you see yours (remarkably like mine), and what the tree in itself is we do not know. For this extravagance Kant is responsible. In the order of ideas which regards consciousness as a singulare tantum it is conveniently replaced by the statement that there is obviously only one tree and all the image business is a ghost-story.
Yet each of us has the indisputable impression that the sum total of his own experience and memory forms a unit, quite distinct from that of any other person. He refers to it as 'I'. What is this 'I'?
If you analyse it closely you will, I think, find that it is just a little bit more than a collection of single data (experiences and memories), namely the canvas upon which they are collected. And you will, on close introspection, find that what you really mean by 'I' is that ground-stuff upon which they are collected. You may come to a distant country, lose sight of all your friends, may all but forget them; you acquire new friends, you share life with them as intensely as you ever did with your old ones. Less and less important will become the fact that, while living your new life, you still recollect the old one. 'The youth that was I', you may come to speak of him in the third person, indeed the protagonist of the novel you are reading is probably nearer to your heart, certainly more intensely alive and better known to you. Yet there has been no intermediate break, no death. And even if a skilled hypnotist succeeded in blotting out entirely all your earlier reminiscences, you would not find that he had killed you. In no case is there a loss of personal existence to deplore. Nor will there ever be.
The point of view taken here levels with what Aldous Huxley has recently — and very appropriately — called The Perennial Philosophy. His beautiful book (London, Chatto and Windus, 1946) is singularly fit to explain not only the state of affairs, but also why it is so difficult to grasp and so liable to meet with opposition.
Order, Disorder, and Entropy
Chapter 6 of What Is Life?
'Nec corpus mentem ad cogitandum, nec mens corpus ad motum, neque ad quietem, nec ad aliquid (si quid est) aliud determinare potest.' (The body cannot determine the mind to think, nor the mind the body to motion or rest, or to anything else, if there be anything else.) SPINOZA, Ethics, Pt III, Prop. 2
Let me refer to the phrase on p. 62, in which I tried to explain that the molecular picture of the gene made it at least conceivable that the miniature code should be in one-to-one correspondence with a highly complicated and specified plan of development and should somehow contain the means of putting it into operation. Very well then, but how does it do this? How are we going to turn 'conceivability' into true understanding?
Delbrück's molecular model, in its complete generality, seems to contain no hint as to how the hereditary substance works. Indeed, I do not expect that any detailed information on this question is likely to come from physics in the near future. The advance is proceeding and will, I am sure, continue to do so, from biochemistry under the guidance of physiology and genetics. No detailed information about the functioning of the genetical mechanism can emerge from a description of its structure so general as has been given above. That is obvious. But, strangely enough, there is just one general conclusion to be obtained from it, and that, I confess, was my only motive for writing this book.
Information processing, not other laws of physics, is the key feature distinguishing life from physics and chemistry.
From Delbrück's general picture of the hereditary substance it emerges that living matter, while not eluding the 'laws of physics' as established up to date, is likely to involve 'other laws of physics' hitherto unknown, which, however, once they have been revealed, will form just as integral a part of this science as the former.
This is a rather subtle line of thought, open to misconception in more than one respect. All the remaining pages are concerned with making it clear. A preliminary insight, rough but not altogether erroneous, may be found in the following considerations:
It has been explained in chapter I that the laws of physics, as we know them, are statistical laws.1 They have a lot to do with the natural tendency of things to go over into disorder.
But, to reconcile the high durability of the hereditary substance with its minute size, we had to evade the tendency to disorder by 'inventing the molecule', in fact, an unusually large molecule which has to be a masterpiece of highly differentiated order, safeguarded by the conjuring rod of quantum theory. The laws of chance are not invalidated by this 'invention', but their outcome is modified. The physicist is familiar with the fact that the classical laws of physics are modified by quantum theory, especially at low temperature. There are many instances of this. Life seems to be one of them, a particularly striking one. Life seems to be orderly and lawful behaviour of matter, not based exclusively on its tendency to go over from order to disorder, but based partly on existing order that is kept up.
To the physicist — but only to him — I could hope to make my view clearer by saying: The living organism seems to be a macroscopic system which in part of its behaviour approaches to that purely mechanical (as contrasted with thermodynamical) conduct to which all systems tend, as the temperature approaches the absolute zero and the molecular disorder is removed. The non-physicist finds it hard to believe that really the ordinary laws of physics, which he regards as the prototype of inviolable precision, should be based on the statistical tendency of matter to go over into disorder. I have given examples in chapter I. The general principle involved is the famous Second Law of Thermodynamics (entropy principle) and its equally famous statistical foundation. On pp. 69-74 I will try to sketch the bearing of the entropy principle on the large-scale behaviour of a living organism — forgetting at the moment all that is known about chromosomes, inheritance, and so on.
What is the characteristic feature of life? When is a piece of matter said to be alive? When it goes on 'doing something', moving, exchanging material with its environment, and so forth, and that for a much longer period than we would expect an inanimate piece of matter to 'keep going' under similar circumstances. When a system that is not alive is isolated or placed in a uniform environment, all motion usually comes to a standstill very soon as a result of various kinds of friction; differences of electric or chemical potential are equalized, substances which tend to form a chemical compound do so, temperature becomes uniform by heat conduction. After that the whole system fades away into a dead, inert lump of matter. A permanent state is reached, in which no observable events occur. The physicist calls this the state of thermodynamical equilibrium, or of 'maximum entropy'.
Practically, a state of this kind is usually reached very rapidly. Theoretically, it is very often not yet an absolute equilibrium, not yet the true maximum of entropy. But then the final approach to equilibrium is very slow. It could take anything between hours, years, centuries. To give an example – one in which the approach is still fairly rapid: if a glass filled with pure water and a second one filled with sugared water are placed together in a hermetically closed case at constant temperature, it appears at first that nothing happens, and the impression of complete equilibrium is created. But after a day or so it is noticed that the pure water, owing to its higher vapour pressure, slowly evaporates and condenses on the solution. The latter overflows. Only after the pure water has totally evaporated has the sugar reached its aim of being equally distributed among all the liquid water available.
These ultimate slow approaches to equilibrium could never be mistaken for life, and we may disregard them here. I have referred to them in order to clear myself of a charge of inaccuracy.
It is by avoiding the rapid decay into the inert state of 'equilibrium' that an organism appears so enigmatic; so much so, that from the earliest times of human thought some special non-physical or supernatural force (vis viva, entelechy) was claimed to be operative in the organism, and in some quarters is still claimed.
How does the living organism avoid decay? The obvious answer is: By eating, drinking, breathing and (in the case of plants) assimilating. The technical term is metabolism. The Greek word (μεταβάλλειν) means change or exchange. Exchange of what? Originally the underlying idea is, no doubt, exchange of material. (E.g. the German for metabolism is Stoffwechsel.) That the exchange of material should be the essential thing is absurd. Any atom of nitrogen, oxygen, sulphur, etc., is as good as any other of its kind; what could be gained by exchanging them? For a while in the past our curiosity was silenced by being told that we feed upon energy. In some very advanced country (I don't remember whether it was Germany or the U.S.A. or both) you could find menu cards in restaurants indicating, in addition to the price, the energy content of every dish. Needless to say, taken literally, this is just as absurd. For an adult organism the energy content is as stationary as the material content. Since, surely, any calorie is worth as much as any other calorie, one cannot see how a mere exchange could help.
It is for this reason (my italics) that we decided to coin the words Ergo, for negative entropy (or roughly free energy), and Ergodic, for processes that can reduce the entropy locally (transferring away positive entropy to allow the local reduction to persist as a stable information structure).
What is entropy? Let me first emphasize that it is not a hazy concept or idea, but a measurable physical quantity just like the length of a rod, the temperature at any point of a body, the heat of fusion of a given crystal or the specific heat of any given substance. At the absolute zero point of temperature (roughly – 273°C) the entropy of any substance is zero. When you bring the substance into any other state by slow, reversible little steps (even if thereby the substance changes its physical or chemical nature or splits up into two or more parts of different physical or chemical nature) the entropy increases by an amount which is computed by dividing every little portion of heat you had to supply in that procedure by the absolute temperature at which it was supplied – and by summing up all these small contributions. To give an example, when you melt a solid, its entropy increases by the amount of the heat of fusion divided by the temperature at the melting-point. You see from this, that the unit in which entropy is measured is cal./°C (just as the calorie is the unit of heat or the centimetre the unit of length).
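To make the recipe concrete (with round textbook values, supplied here only for illustration): melting one gram of ice at its melting point, 273 K on the absolute scale, takes about 80 calories of heat of fusion, so the entropy of that gram increases by roughly 80/273 ≈ 0.3 cal./°C.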
I have mentioned this technical definition simply in order to remove entropy from the atmosphere of hazy mystery that frequently veils it. Much more important for us here is the bearing on the statistical concept of order and disorder, a connection that was revealed by the investigations of Boltzmann and Gibbs in statistical physics. This too is an exact quantitative connection, and is expressed by
entropy = k log D,
where k is the so-called Boltzmann constant (3.2983 × 10⁻²⁴ cal./°C), and D a quantitative measure of the atomistic disorder of the body in question. To give an exact explanation of this quantity D in brief non-technical terms is well-nigh impossible. The disorder it indicates is partly that of heat motion, partly that which consists in different kinds of atoms or molecules being mixed at random, instead of being neatly separated, e.g. the sugar and water molecules in the example quoted above. Boltzmann's equation is well illustrated by that example. The gradual 'spreading out' of the sugar over all the water available increases the disorder D, and hence (since the logarithm of D increases with D) the entropy. It is also pretty clear that any supply of heat increases the turmoil of heat motion, that is to say, increases D and thus increases the entropy; it is particularly clear that this should be so when you melt a crystal, since you thereby destroy the neat and permanent arrangement of the atoms or molecules and turn the crystal lattice into a continually changing random distribution.
An isolated system or a system in a uniform environment (which for the present consideration we do best to include as a part of the system we contemplate) increases its entropy and more or less rapidly approaches the inert state of maximum entropy. We now recognize this fundamental law of physics to be just the natural tendency of things to approach the chaotic state (the same tendency that the books of a library or the piles of papers and manuscripts on a writing desk display) unless we obviate it. (The analogue of irregular heat motion, in this case, is our handling those objects now and again without troubling to put them back in their proper places.)
How would we express in terms of the statistical theory the marvellous faculty of a living organism, by which it delays the decay into thermodynamical equilibrium (death)? We said before: 'It feeds upon negative entropy', attracting, as it were, a stream of negative entropy upon itself, to compensate the entropy increase it produces by living and thus to maintain itself on a stationary and fairly low entropy level.
If D is a measure of disorder, its reciprocal, 1/D, can be regarded as a direct measure of order. Since the logarithm of 1/D is just minus the logarithm of D, we can write Boltzmann's equation thus:
– (entropy) = k log (1/D).
Information is still a better word for negative entropy, as shown by Claude Shannon a few years after Schrödinger. And cosmic information creation is required to produce the low-entropy sunlight on which life feeds.
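For scale (a standard figure, added here for illustration): one bit of information corresponds to an entropy of k ln 2, about 2.3 × 10⁻²⁴ cal./°C, which is why ordinary thermodynamic entropies correspond to astronomically many bits.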
Some important papers by Schrödinger:
What Is A Law Of Nature? (1922 Inaugural Lecture at University of Zurich)
Indeterminism In Physics (1931 Lecture to Society for Philosophical Instruction, Berlin)
Discussion of Probability Relations between Separated Systems (the "Entanglement" paper), Proceedings of the Cambridge Philosophical Society, 1935, 31, issue 4, pp. 555-563
The Present Situation in Quantum Mechanics (the "Cat" paper), translation of Die Naturwissenschaften, 1935, 23, Issues 48-50: original German Part I, Part II, Part III
Indeterminism and Free Will (Nature, Vol. 138, No. 3479, July 4, 1936, pp. 13-14)
Probability Relations between Separated Systems, Proceedings of the Cambridge Philosophical Society, 1936, 32, issue 2, pp. 446-452
Excerpts from What Is Life?: Chapter 3, Mutations, Chapter 4, Quantum-Mechanical Evidence, Chapter 7, Is Life Based on the Laws of Physics? (PDFs)
Are There Quantum Jumps?, Part I (The British Journal for the Philosophy of Science, 3.10 (1952): 109-123)
Are There Quantum Jumps?, Part II (The British Journal for the Philosophy of Science, 3.11 (1952): 233-242)
|
676e98e7452a1ee6 | There is, thoughtful students agree, no entirely satisfactory interpretation of quantum mechanics. Freeman Dyson argued that the theory applies only to predictions about the future. The past is not probabilistic. What’s done is done. Quantum mechanics does not apply; the past must be described classically. I do not think many other physicists are willing to go this far.
I spent the winter and spring of 1989 at the European Center for Nuclear Research (CERN). Among other things, I was hoping to conduct a series of interviews with John Bell. I had known him for twenty years, but I knew little about his personal history. He told me that he came from a very modest Protestant family in Ireland. Under the normal evolution of things, he would have been expected to drop out of school at fourteen and begin working to support his family. There was no free high school education in Ireland. Realizing that there was something special about Bell, his mother scraped together enough money to allow him to continue his education. He worked his way through college, but soon after he got his undergraduate degree, he went to work for the British nuclear energy program.
Bell’s discontent with the standard formulations of quantum mechanics dated from his undergraduate education. During my interviews, Bell often complained about existing textbooks. None of them discussed the foundations of quantum mechanics coherently. Why didn’t he write his own? Bell said that he was stymied by the incompatibility of quantum mechanics and special relativity. I regret that I did not pursue this with him, although in essays such as “Against ‘Measurement,’” published in 1990, he spells out his dilemma very clearly.1
If it is difficult to reconcile quantum mechanics and special relativity, it is far more difficult to reconcile quantum mechanics and general relativity in a quantum theory of gravity. Léon Rosenfeld took the position that the project was impossible, and so unnecessary. Gravity is macroscopic; quantum events, microscopic. Most physicists argue today that quantum theory must include gravity, simply because quantum mechanics is universal.
Quantum mechanics was created in the mid-1920s and soon after, attempts were made to quantize the electromagnetic field. Quantum electrodynamics contained the mathematics for describing this. In calculating the self-energy of the electron, the theory demanded a parameter for its mass. What was used initially was its bare mass—the mass the electron would have if all its electromagnetic interactions were turned off. Were this to happen, the electron could no longer respond to electric and magnetic fields, and so it would remain opaque to any experiment designed to measure its mass. But its calculation inevitably seemed to involve logarithmically divergent integrals. Things remained this way until just before the Second World War, when Hendrik Kramers introduced a new idea. He suggested that the bare mass of an electron should, in calculations, be replaced by its observable mass. Infinities were swept away. This procedure came to be known as renormalization. After the war, physicists were able to calculate finite perturbation expansions in quantum electrodynamics by renormalizing both the mass and the charge.
In 1952, Dyson argued that while the terms in the series expansion in powers of
α = e²/ħc ≈ 1/137
can be rendered finite in every order, the series itself could not converge. His argument was a masterpiece of simplicity. If the series did converge, it would be an analytic function at α = 0. It could then be extended to −α by analytic continuation. In that domain, electrons attract electrons and positrons attract positrons. Because of the Coulomb potential, these states have a larger and larger negative energy as the number of particles, N, increases: the negative potential energy comes from interactions between pairs of particles, and so grows as the square of N. The kinetic energy, on the other hand, is positive and increases linearly with N itself. Negative energy outpaces positive energy. In the end, the system collapses.
Divergent series notwithstanding, quantum electrodynamics yielded results of remarkable accuracy. Consider the magnetic moment of the electron. This quantity, which has been calculated up to the fifth order in α, agrees with experiment to ten parts in a billion. If one continued the calculation to higher and higher orders, at some point the series would begin to break down. There is no sign of that as yet. Why not carry out a similar program for gravitation? One can readily write down the Feynman graphs that represent the terms in the expansion. Yet there remains an irremediable difficulty. Every order reveals new types of infinities, and no finite number of renormalizations renders all the terms in the series finite.
The theory is not renormalizable.
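To see how a divergent series can nonetheless be spectacularly accurate at small coupling, the situation Dyson described, consider a standard toy model (Python; the integral is a textbook stand-in, not QED itself): the function f(α) = ∫₀^∞ e^(−t)/(1 + αt) dt has the asymptotic expansion Σ (−1)ⁿ n! αⁿ, whose terms eventually explode, yet whose early partial sums are excellent for α near 1/137.

    import numpy as np
    from math import factorial
    from scipy.integrate import quad

    a = 1/137
    exact, _ = quad(lambda t: np.exp(-t)/(1 + a*t), 0, np.inf)
    partial = 0.0
    for n in range(8):
        partial += (-1)**n * factorial(n) * a**n
        print(n, abs(partial - exact))   # the error shrinks rapidly, for a while

Only around n ≈ 137 would the factorial growth overtake the powers of α and the partial sums blow up, mirroring the expectation that the QED series would begin to break down only at very high order.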
Bell and Bohr
Niels Bohr never published a full version of what came to be known as the Copenhagen interpretation of quantum mechanics; disciples spread the word. At the heart of the Copenhagen interpretation is the separation of the world into a measuring apparatus and the system that it measures. The apparatus must be classical. Werner Heisenberg’s uncertainty principle does not apply. The classical world cannot be derived as a limit of quantum mechanics. The distinction accepted, a problem remains. How does a classical apparatus perform measurements on a quantum mechanical system? This process cannot be described by quantum mechanics itself. This infuriated Bell because Bohr was never precise about the distinction between classical and quantum systems.
Erwin Schrödinger originally thought that solutions to his wave equation were real waves oscillating in time and space. Physicists soon realized that this interpretation was hopelessly flawed. While the radial solution representing the electron’s ground state in a hydrogen atom falls off exponentially as the distance from the proton increases, it never vanishes. This suggests that Schrödinger’s equation may well guide the electron to infinity over the course of time. This does not happen. Max Born suggested that the wave function represented probabilities, and not waves in space and time. The Born interpretation is now canonical. This resolved another issue. The wave function of a single particle is defined on a four-dimensional space-time. For N particles, the wave function is defined on a 3N-dimensional configuration space, plus time. This is a problem for real waves, but one that disappears if Schrödinger’s equation governs waves of probability.
In addition to the wave function, a quantum system is characterized by its observables, represented, in turn, by operators. Observables take a set of allowed values. The object of measurement is to discover which of these values characterizes the system. These values do not exist prior to measurement. Suppose that they did. To each of the allowed values, a, in a quantum system, Ψ, there is an associated wave function φa. Expanding the wave function yields
Ψ = Σa ca φa,
where the coefficients ca range over the complex numbers. The squared magnitudes |ca|² of these amplitudes yield the probability that a measurement will reveal a particular value of the observable. What happens to the wave function as the result of measurement? The complete wave function disappears, leaving only a component that reflects the result of measurement. Such is the collapse of the wave function. This interpretive procedure is not derived from the theory; it is tacked on. This is no very serious objection, because it is common to all physical theories. Isaac Newton’s laws do not interpret themselves.
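In finite dimensions this bookkeeping is elementary, and a small NumPy sketch may help (the 2×2 observable is chosen arbitrarily for illustration):

    import numpy as np

    obs = np.array([[0., 1.], [1., 0.]])   # observable with allowed values -1, +1
    vals, vecs = np.linalg.eigh(obs)       # allowed values and their eigenstates
    psi = np.array([1.0, 0.0])             # normalized state vector
    c = vecs.conj().T @ psi                # expansion coefficients c_a
    print(dict(zip(vals.round(6), np.abs(c)**2)))   # Born probabilities |c_a|^2

A measurement that returns +1 would then replace psi by the +1 eigenvector; that replacement is the collapse described above.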
It is considerably more worrisome that quantum mechanical measurements cannot be described by quantum mechanics. Measurement is irreversible. Given a projected wave function, it is impossible to reconstruct the wave function prior to measurement. This is in conflict with the standard interpretation of Schrödinger’s equation. The wave function is reversible in time, and preserves its norm going backwards or forwards. Bohr seems to have understood this.
John von Neumann produced a model for the collapse of the wave function, one requiring a modification of quantum mechanical rules. This had no effect on experiment and still less of an effect on theoreticians. An exception was David Bohm, who dealt with the issue by denying the distinction between measurements and other sorts of interactions. On Bohm’s theory, the wave function plays an entirely different role. There is no collapse. I tried to read Bohm’s papers when they appeared in the early 1950s, and they seemed too complicated. I remember saying this to Julian Schwinger. He said that they seemed too simple.
Hugh Everett III came to Princeton in 1953 as a graduate student in mathematics, but switched to theoretical physics. He was fortunate to find a thesis advisor in John Wheeler, who had a remarkable tolerance for unusual ideas. Together with Bryce DeWitt, Everett was responsible for the many-worlds interpretation of quantum mechanics. In 1959, Wheeler arranged for Everett to visit Copenhagen, where he presented his ideas to Bohr and Rosenfeld. Their meeting was a disaster. Bohr had no interest in his ideas and Rosenfeld called him an idiot. After this, Everett rarely spoke about quantum mechanics in public. He died suddenly at the age of fifty-one.
Bell had, of course, studied Everett’s ideas and he had a sympathetic view of them. Still, he found the many-worlds interpretation flawed. In the first place, it was solipsistic. It assigned to an experimenter the power to bring new and unobservable universes into existence. Bell also noticed a conflict between the many-worlds interpretation and special relativity. On the other hand, Everett did inspire a new generation of physicists to think about the foundations of quantum mechanics. Bell died on October 10, 1990, at the age of sixty-two. He did not live to see the work of Murray Gell-Mann and James Hartle, which extended Everett’s ideas,2 but his reservations about many-worlds would, I suspect, have remained unchanged.
Many-Worlds from the Beginning
In 1933, Paul Dirac published a paper entitled “The Lagrangian in Quantum Mechanics” in an obscure Russian journal.3 It was pretty much ignored until Richard Feynman revisited it in his PhD thesis, prepared under Wheeler’s supervision. Dirac had been impressed by the way in which the formalism of quantum mechanics seemed a generalization of classical mechanics. His first paper on the subject, published in 1925, showed how the Poisson brackets of classical mechanics became the canonical commutation relations of quantum mechanics.4 Until Dirac’s 1933 paper, quantum mechanics had been formulated as a generalization of Hamiltonian mechanics. Dirac came to the conclusion that the Lagrangian was more fundamental. No matter its formalism, quantum mechanics must determine the evolution of quantum states. It is the transition amplitude that is of the essence, and this amplitude is given by the scalar product of two state vectors. Dirac produced an expression for this transition amplitude:
⟨qt | qT⟩ corresponds to exp((i/ħ) ∫ L dt), the integral being taken from T to t.
Here q denotes some set of observables, L is a Lagrangian, and the exponent is the quantum version of the classical action. Dirac then replaced this integral with a product of factors representing possible intermediate paths, which were computed separately. Dirac never wrote down an equation and never worked out an example.
Feynman converted Dirac’s correspondence into an equation, one prefaced by a constant that he derived from examples. The integral emerged as a sum over all paths consistent with constraints on the system. In simple cases, such as the harmonic oscillator, this path integral can be calculated. In the limit as Planck’s constant goes to zero, the only remaining path is the classical one.
In this way, the classical world is rediscovered. Suppose a system is in some initial state. Quantum mechanics yields the ensuing state transition probabilities, the system’s complete history. Probabilities are summed over them all. Like waves, histories can interfere or become entangled with one another. Interactions can also cause these histories to decohere, and under the right circumstances decoherent histories behave classically. Probabilities cannot be assigned to entangled histories. This formalism was expanded by Gell-Mann and Hartle. It was Gell-Mann, in particular, who introduced the idea of strong decoherence, a state or condition of a system in which quantum entanglements become negligible. A measurement picks out one of the histories, leaving the rest to evolve separately.
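One numerically tame way to watch the sum over paths at work is to rotate to imaginary time, where the short-time kernels are Gaussian rather than oscillatory; composing them by matrix multiplication integrates over intermediate positions, that is, over piecewise paths. A sketch (Python, natural units ħ = m = ω = 1; the grid and step sizes are arbitrary illustrative choices):

    import numpy as np

    x = np.linspace(-8, 8, 400)
    dx = x[1] - x[0]
    eps = 0.05                               # imaginary-time slice
    V = 0.5*x**2                             # harmonic oscillator potential
    xi, xj = np.meshgrid(x, x, indexing='ij')
    kin = np.sqrt(1/(2*np.pi*eps))*np.exp(-(xi - xj)**2/(2*eps))*dx
    U = np.exp(-eps*V/2)[:, None]*kin*np.exp(-eps*V/2)[None, :]  # one Trotter slice
    n = 400                                  # total imaginary time beta = 20
    Z = np.trace(np.linalg.matrix_power(U, n))
    print(-np.log(Z)/(n*eps))                # ~0.5, the ground-state energy

Each matrix product sums over where the particle might have been at one intermediate instant, so the n-fold product is a discrete sum over histories.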
Some years ago I heard Eugene Wigner lecture about the measurement problem. It was a surprise to hear someone of his generation admit that there was a problem. Wigner suggested a small addition to the Schrödinger wave equation, one inducing its collapse. The addition would be non-Hermitian, so circumventing the issue of reversibility, but would have no other effect. A related but different version of this idea was proposed by Giancarlo Ghirardi, Alberto Rimini, and Tullio Weber (GRW). Like Wigner, GRW added something to Schrödinger’s equation. But in GRW, the requisite parameter is stochastic. Bell did not have anything against this idea, although it was not his favorite. It was to the de Broglie–Bohm pilot wave that he gave his allegiance.
We owe to Louis de Broglie the wave theory of matter. When Bohm began publishing, de Broglie at once claimed priority. Schrödinger’s wave equation reappears in Bohmian mechanics, together with a wave function that it determines. The observables are classical particles with real Newtonian trajectories. These trajectories are governed by a twofold force. The first represents those Newtonian forces that would act on a particle in the absence of quantum effects. The second represents a quantum potential determined by the wave function. The scheme is equivalent to ordinary quantum mechanics. There are no special operations for measurements because the wave function never collapses. It is the statistical nature of the system’s initial conditions that gives rise to the uncertainty principle.
It is instructive to take a specific example. If Ψ is the wave function that satisfies the Schrödinger equation, then the quantum potential is given by
Q = −(ħ²/2m) ∇²R/R,
where R is the amplitude of the wave function,
Ψ = R exp(iS/ħ).
The equation for a free particle reduces to Newton’s law,5 and, in a more general sense, Bohm’s theory reproduces nonrelativistic quantum mechanics. This is all to the good and it is also all to the bad. Bell showed that in the case of interacting particles, the results are hopelessly nonlocal. The influence of various particles on each other involves superluminal signals. Bell could not figure out how to make this theory relativistic. This might suggest that there is no satisfactory interpretation of quantum theory. We are waiting for experimental guidance. It is clear that the instantaneous collapse of the wave function is not relativistically covariant. This matters if the wave function represents something more than a notepad for recording probabilities.
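For intuition about how the pilot wave steers its particles, here is a minimal sketch (Python, natural units ħ = m = 1; the packet parameters and step sizes are illustrative choices) that integrates the guidance law dx/dt = Im[(∂Ψ/∂x)/Ψ] for a spreading free Gaussian packet:

    import numpy as np

    s0 = 1.0                                  # initial packet width

    def psi(x, t):
        # Exact free-particle Gaussian wave packet (hbar = m = 1).
        st = s0 + 1j*t/(2*s0)
        return (2*np.pi*st**2)**(-0.25)*np.exp(-x**2/(4*s0*st))

    def velocity(x, t, h=1e-5):
        # de Broglie-Bohm guidance law: dx/dt = Im( (dpsi/dx)/psi ).
        dpsi = (psi(x + h, t) - psi(x - h, t))/(2*h)
        return np.imag(dpsi/psi(x, t))

    xs = np.linspace(-2, 2, 9)                # a fan of starting positions
    dt = 0.01
    for step in range(500):
        xs = xs + dt*velocity(xs, step*dt)    # Euler integration to t = 5
    print(xs)                                 # the fan spreads with |psi|^2

The trajectories never cross and fan out exactly as the probability density spreads, which is how the scheme reproduces the statistics of ordinary quantum mechanics from definite particle positions.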
In the 1930s, von Neumann argued that special relativity and quantum mechanics were incompatible. In quantum mechanics, position, but not time, is designated by an operator. Much effort was expended in trying to find a temporal operator. Most physicists now believe this to have been a waste of time. Quantum mechanics can never be made compatible with special relativity. When collision energies become relativistic, pairs of particles are produced from the vacuum. Quantum field theory is needed to accommodate this.
Bell’s Theorem
When I took Schwinger’s course in quantum mechanics, there was no assigned textbook; indeed, there were very few texts on quantum mechanics at all. I looked at what was available and chose Bohm’s Quantum Theory, which had been published in 1951.6 This was standard quantum theory with a Copenhagen interpretation: no hint of Bohmian mechanics. What I liked about this text was Bohm’s discussion of the meaning of the theory. The penultimate chapter was entitled “Quantum Theory of the Measurement Process.” His analysis of the 1922 Stern–Gerlach experiment has become standard. The experiment was designed to measure the angular momentum of a silver atom. Neither Otto Stern nor Walter Gerlach realized that the ground state has an angular momentum of ½. They began with a particle in the state

$\Psi = c_{+}v_{+} + c_{-}v_{-},$
where c₊ and c₋ designate numbers, and v₊ and v₋, spin functions. The particle now impinges on a strong magnetic field that is inhomogeneous in the z direction. The particles are coupled to the field by their spin magnetic moments, with interaction energy −μσ·H, where H is the magnetic field, σ the vector of Pauli matrices, and

$\mu = -\frac{e\hbar}{2mc}.$
If this were classical physics, each particle would follow a trajectory determined by the sign of its spin. Since there are only two spins, there are only two possible trajectories. A suitably placed detector could determine whether the particle’s spin was up or down. Knowing nothing about quantum mechanics, Stern and Gerlach did just that.
Quantum mechanics shows where this is bound to lead. Consider the interaction Hamiltonian

$H_I = -\mu\,(H_0 + zH_0')\,\sigma_z,$

where H₀ and H₀′ represent the magnetic field at the beam and its gradient in the z direction. The time dependence for each spin state is exp(±iμzH₀′t/ħ), with a different sign in the exponential for each spin state. Measurement requires averaging, and these averages produce cross terms. This is quantum entanglement.
In the case at hand, the exponential oscillates very rapidly and the cross terms can be neglected. This is what decoherence means. Stern and Gerlach were able to proceed without quantum mechanics because they had recovered a classical situation with two noninterfering wave packets.7
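A toy average shows why. The sketch below (my illustration, with an assumed Gaussian packet and arbitrary frequencies) computes the packet-weighted average of the cross-term phase exp(2iwz); it falls off rapidly as w grows, which is the quantitative content of neglecting the cross terms.

```python
import numpy as np

# The cross term between the two spin branches carries a relative phase
# exp(2iwz); its measured weight is the average of that phase over the packet.
z = np.linspace(-10, 10, 4001)
dz = z[1] - z[0]
weight = np.exp(-z**2)              # |packet|^2, an assumed Gaussian
norm = weight.sum() * dz

for w in [0.5, 2.0, 8.0]:           # arbitrary oscillation frequencies
    cross = (weight * np.exp(2j * w * z)).sum() * dz / norm
    print(f"w = {w:4.1f}   |cross term| = {abs(cross):.2e}")
# For this packet the average is exp(-w**2): fast oscillation kills the
# cross terms, leaving two effectively noninterfering wave packets.
```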
With the Stern–Gerlach experiment in hand, I would like to use it to explain Bell’s theorem. Imagine first performing spin measurements along the z-axis, when particles are moving in the y direction, and rotate a magnet through an angle θ in the xz plane. The original Pauli matrix is

$\sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$

In the new system this becomes

$\sigma_\theta = \begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix},$

with eigenvectors

$\begin{pmatrix} \cos(\theta/2) \\ \sin(\theta/2) \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} -\sin(\theta/2) \\ \cos(\theta/2) \end{pmatrix}.$

Expanding the first column vector, the unrotated spin-up state, in this eigenbasis,

$\begin{pmatrix} 1 \\ 0 \end{pmatrix} = a_{+}\begin{pmatrix} \cos(\theta/2) \\ \sin(\theta/2) \end{pmatrix} + a_{-}\begin{pmatrix} -\sin(\theta/2) \\ \cos(\theta/2) \end{pmatrix}.$
This implies that a₊ = cos(θ/2) and a₋ = −sin(θ/2). The probability of finding the spin up in the rotated magnet is cos²(θ/2), while the probability of finding spin down is sin²(θ/2). For entangled singlet particles, any spin-down measurement in one magnet implies that the probability of measuring the same result in a rotated magnet is sin²(θ/2); the probability of measuring the opposite spin is cos²(θ/2). The quantum mechanical correlation is sin²(θ/2) − cos²(θ/2) = −cos θ; the two spins are aligned a sin²(θ/2) fraction of the time, in agreement with quantum mechanics. If both magnets are rotated in opposite directions by the same angle, a local hidden-variable account allows them to be aligned at most a 2 sin²(θ/2) fraction of the time, whereas quantum mechanics predicts that this agreement occurs a sin²(θ) fraction of the time.
We can summarize Bell’s result as follows. Let a and b be two unit vectors that characterize, say, the directions of the magnetic fields in the Stern–Gerlach experiment. Then we have shown that the correlation predicted by quantum mechanics is −a·b. If λ is a hidden variable that determines the result of the experiment, Bell’s theorem says that there exist no functions A(a, λ) and B(b, λ) that reproduce the quantum correlations.
This is Bell’s theorem.8
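The algebra above is easy to check numerically. The following sketch (an illustration, not from the original essay; the angle is arbitrary) diagonalizes the rotated Pauli matrix, expands the unrotated spin-up state, and confirms the probabilities and the −cos θ correlation.

```python
import numpy as np

def rotated_sigma(theta):
    """Pauli sigma_z rotated by theta in the xz plane."""
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

theta = 0.7                              # arbitrary illustrative angle
vals, vecs = np.linalg.eigh(rotated_sigma(theta))

# Expand the unrotated spin-up state (1, 0) in the rotated eigenbasis.
up = np.array([1.0, 0.0])
probs = (vecs.T @ up) ** 2
print(probs)                             # -> [sin^2(theta/2), cos^2(theta/2)]
print(np.sin(theta / 2) ** 2, np.cos(theta / 2) ** 2)

# Singlet correlation for detectors at relative angle theta:
corr = np.sin(theta / 2) ** 2 - np.cos(theta / 2) ** 2
print(corr, -np.cos(theta))              # both equal -cos(theta)
```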
1. John Bell, “Against ‘Measurement,’” Physics World 3, no. 8 (1990): 33–40.
2. Murray Gell-Mann and James Hartle, “Strong Decoherence,” in Quantum Classical Correspondence: Proceedings of the 4th Drexel Symposium on Quantum Nonintegrability, Drexel University, Philadelphia, USA, September 8–11, 1994, eds. Bei-Lok Hu and Da Hsuan Feng (Cambridge, MA: International Press, 1997).
3. Paul Dirac, “The Lagrangian in Quantum Mechanics,” Physikalische Zeitschrift der Sowjetunion 3 (1933): 64–72.
4. Paul Dirac, “The Fundamental Equations of Quantum Mechanics,” Proceedings of the Royal Society of London. Series A: Mathematical and Physical Sciences 109, no. 752 (1925): 642–53.
5. The reader is invited to explore more complicated examples such as the harmonic oscillator. Anyone who carries this out will discover that the particle equation of motion is just the classical oscillator equation. A particularly nice example is that of the two slits. When both slits are open the trajectories go through both slits, but once again the wave function-determined initial conditions reproduce the usual quantum mechanical results. For details see my A Chorus of Bells and Other Scientific Inquiries (Singapore: World Scientific, 2014).
6. David Bohm, Quantum Theory (New York: Prentice-Hall, 1951).
7. Stern and Gerlach were, in fact, surprised by their result. They had thought that the ground state had angular momentum 1, and were expecting a splitting of the atomic orbits into three. They found two.
8. I would like to thank Shelly Glashow for very helpful remarks.
|
371a293bad9d4469 | Does God play dice?
For Albert Einstein, the answer is no. But what did he mean? Did the greatest theoretical physicist of all time really miss the quantum revolution? What were the real issues in the controversy that pitted him against the Copenhagen school (Bohr, Heisenberg, …)? Let us go back to the physics of the early twentieth century, its history, philosophy and ideas.
Apples and sun
It all started with a story of apples. Well, almost, you might say: do not forget Galileo and Copernicus. You have guessed it, I am not talking about Adam and Eve’s apple but about the one that fell on Newton’s head. In the 17th century, the British scientist laid the foundations of what is now called “classical mechanics” and managed to model one of the four fundamental interactions: gravity. His fully deterministic model triumphed and seemed universal, for it predicts the motion of a falling apple as well as the motions of the bodies of the solar system. This is fabulous! Unifying the phenomena of everyday life with “what goes on very high in the sky”, a realm about which people were then completely ignorant, is a historic achievement.
The heyday of Albert
Let’s continue our brief summary of the foundations of modern physics, the journey leading us to the nineteenth century, when Maxwell unified the electric and magnetic forces within four famous equations. This quest for the unification of all physical theories, begun by Maxwell, even became a dream, an ultimate goal, for Einstein a few decades later. But can we unify electromagnetism (EM) and gravity?
It all starts from his thoughts on light, and from two key findings that held Einstein’s attention. The first, known since the late nineteenth century, is the fact that light has exactly the same speed (denoted c) whatever the point of view of the observer: if a passenger in a high-speed train sends a light beam inside the train, the velocity of light with respect to the ground is always the same; it is not increased by the train’s speed! The problem is that in Maxwell’s equations, light is indeed described as having the famous speed c. But if you look “naively” (using the conventions of … Newton!) at these equations for a light beam already under way in a high-speed train, they show a different speed of light with respect to the ground! In the nineteenth century, many physicists thought that there was in fact an invisible medium “attached” to light, called the ether, and that only someone “attached” to this medium could see light travelling at c. It is precisely Einstein’s theory of relativity, in the early twentieth century, that gave another explanation: the speed of light is the same for all observers, but when you write down Maxwell’s equations in a high-speed train, for example, you must be careful not to write them in our daily-life three dimensions, but in four dimensions, adding time. This keeps the speed of light the same whatever the speed of the train in which it is travelling!
On the other hand, and this time it is something Einstein discovered himself in his theory of four-dimensional space-time, the speed of light, this constant we have just mentioned, is the maximum speed that any phenomenon or object can reach! This calls into question the Newtonian theory of gravity, which stated that the Earth is instantly attracted by the Sun; that is to say, the Sun tells the Earth “I’m here, pay attention, I attract you!” at infinite speed! It is here that Einstein’s relativity, in which this information takes some time to reach the Earth, provides a theory of gravity consistent with Maxwell’s equations, now written in Einstein’s “space-time”.
In a few decades, Einstein challenged the ideas contained in Newtonian mechanics and invented a four-dimensional space-time model that brings gravity and electromagnetism into coherence with each other, light being the trigger of both revolutions. But this theory of relativity does not unify EM and gravity: Einstein’s dream is now to find the master equation for these two major forces that seem to rule the universe.
But what about quantum physics?
We are in 1930; two new forces have been identified: the weak nuclear interaction, responsible for radioactivity, and the strong nuclear interaction, later exploited in atomic bombs. They are poorly theorized; that will be the work of the physicists of the standard model and of string theory a few decades later. But they seem to fit together with Maxwell’s electromagnetism.
Einstein’s dream became the unification of gravitation, which is predominant on the cosmic scale, on the one hand, and, on the other, the three other forces (EM, weak, strong) prevailing at the quantum scale. What role did the German physicist’s obsession play in his skepticism regarding the young quantum physics?
While Albert was trying to find a link between gravitation and these three other interactions, quantum physics was born and developed thanks to Schrödinger, Heisenberg and Bohr among others, replacing classical mechanics at the atomic scale. But make no mistake about it: Einstein played a role in the birth of quantum physics, since he built on Planck’s quantum hypothesis and in 1905 defined a particle model for light (previously regarded as a wave) in order to explain the photoelectric effect. This is the counterpart of the wave theory of matter developed by de Broglie. However, Einstein had many criticisms, though constructive ones, of what quantum theory implied in the probabilistic, philosophical and physical domains. Meanwhile, he continued his work on unification, a work that would never come to fruition …
“God does not play dice”
This sentence of Einstein’s has given birth to multiple interpretations, most of which have resulted, perhaps as an underlying objective, in discrediting the genius’s post-relativistic work. We will not fall into this trap. It is very likely that Einstein refused to see Newton’s determinism collapse, since it remained one of the postulates of his own theory of relativity. How could he accept the Heisenberg inequality, and especially the fact that the outcome of an experiment cannot be predicted without using probabilities? One thought-experiment solution would be to say that all predictable results actually occur, but at a level such that the observer has access to only one of them. Without going into string theory, which foresees the existence of seven invisible dimensions folded in on themselves (just as an ant walking on a telephone cable can only see two of its three dimensions, length and circumference), Einstein was, after long reflection, seduced by the mathematician Kaluza’s idea of additional spatial dimensions. To overcome this lack of determinism, we can also mention the concept of “imaginary time” (time treated as a purely imaginary complex number) used by Stephen Hawking, which allows all possible trajectories of a particle to be realized in “imaginary time”, while only one is realized in “real time”.
Solvay conference (1927): Schrödinger, Bohr, Heisenberg, de Broglie, Curie, Langevin, Dirac, Lorentz, Einstein…
Realism and Positivism
However, we can also see this purely as a philosophical quarrel between the realist and positivist schools of thought. The realists think we can develop theories that give us objective knowledge of the world through the systematic comparison of theory and experiment. Scientific realists, including Einstein and Schrödinger, care strongly about determinism and about measurement being objective, independent of the observer, and not random! According to them, anything else would go against the “common sense” on which physics must still rely. But then what about Copernicus, who argued that the Earth moves around the Sun, while to all eyes it appears, of course, to stand still (collective common sense)? This positivist argument was advanced by the so-called Copenhagen school (Bohr, Heisenberg, Jordan, Born …), which considered quantum physics, and physical theories in general, as elegant models, products of the human imagination, to be verified by observations. Where is the difference? Here there is no immediate sense datum (such as “an event occurs in a unique way and not at random”), so that it is pointless to ask whether the theory corresponds to reality: reality is never independent of theory!
This positivist way of thinking, although conservative, seems unassailable to me. However, some argue that it has been overtaken by quantum decoherence, which helps to explain mathematically the transition between “weird” quantum phenomena, such as the tunnel effect, and the macroscopic world as we see it. In fact, it shows that the reduction of the wave packet due to observation is not in contradiction with the Schrödinger equation, as one might think. I would say that this theory has at least the advantage of “solving” the supposed paradox of Schrödinger’s cat, which is in my opinion a very bad example of what quantum physics can explain …
Quantum physics, special relativity and measurement
A second problem probably grieved Albert Einstein: that of quantum measurement, which appears to be a projection, onto an axis chosen at random, of the observed quantity. It thus seems that measurement is an irregularity in the Schrödinger evolution of the wave function, the so-called reduction of the wave packet. One solution to this problem is to consider that the wave function represents our knowledge of the system, so that it changes abruptly at the moment of measurement. But this assumption implies that a single measurement can reveal something about the system! According to quantum mechanics, however, only a very large number of measurements allows one to assess the probabilities of the different values being measured. Thus all the “solutions” quickly seem to fall into the field of philosophy, which we have already discussed … We have to accept that some variables are not defined before they are measured.
Because he was not convinced, Einstein tried to construct a paradox, often called the “EPR paradox” after his two collaborators Podolsky and Rosen. The goal: to prove that the states of the particles are determined at all times. To do this, the thought experiment takes two entangled photons, that is, photons characterized by the following property: the measurement of a quantity on one photon implies that the same quantity is fully determined for the other. We then suppose that we measure simultaneously (within a time that does not allow any exchange of information between the two particles) the position of one of the photons and the momentum of the other. We then obtain a contradiction with the Heisenberg uncertainty principle. Einstein therefore proposed to complete quantum physics with the assumption of “hidden variables”, not included in quantum physics, that are fully deterministic.
However, in the early 1980s Alain Aspect proved him wrong with experimental tests. The logic of the EPR paradox is flawless; it is the assumptions that are less so. What does entanglement mean? It cannot be an exchange of information between the two photons, because information cannot travel faster than light. In fact, Einstein implicitly assumed that when two photons are sufficiently far apart, we can speak of the physical characteristics of each specific photon separately (the locality assumption). This is directly contradicted by Aspect’s experiments (energy, momentum and polarization measurements), which showed that the correlation between the two photons remains high even when they are far apart: the result of a measurement on one photon depends in a non-local way on the result of the measurement on the other. The EPR paradox falls through, and the Heisenberg principle is not violated.
Note that nothing here contradicts relativity, which states that the speed of light is a universal upper limit, since there is no way to use the correlations between particles to transmit a signal faster than light: causality is not violated.
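This can be made explicit with the standard singlet probabilities. In the sketch below (an illustration added here, not taken from the article), the local marginal at one analyzer comes out as 1/2 whatever the distant analyzer’s angle: the correlations are nonlocal, yet they carry no usable signal.

```python
import numpy as np

def joint_probs(theta):
    """Singlet joint outcome probabilities for analyzers at relative angle theta."""
    same = 0.5 * np.sin(theta / 2) ** 2   # P(++) = P(--)
    diff = 0.5 * np.cos(theta / 2) ** 2   # P(+-) = P(-+)
    return {(+1, +1): same, (-1, -1): same, (+1, -1): diff, (-1, +1): diff}

for theta in [0.0, 0.9, 2.2]:             # arbitrary remote settings
    p = joint_probs(theta)
    marginal_plus = p[(+1, +1)] + p[(+1, -1)]
    print(f"theta = {theta:.1f}   P(first result = +1) = {marginal_plus:.3f}")
# Always 0.5: no choice made at the distant analyzer changes local statistics.
```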
A deeper criticism echoed by Dirac
Finally, there is another reason, which I think is by far the most interesting one, that made Einstein skeptical. It is the application of another great principle of relativity:

E² = m²c⁴ + p²c²
Indeed, space and time seem to play the same role in this fundamental equation. But the Schrödinger equation contains a first derivative in time and second derivatives in space. In fact, Schrödinger first sought a relativistic equation, now called the Klein–Gordon equation, which contains second derivatives in both time and space. The latter being not entirely satisfactory, for reasons that we do not detail here (negative-energy solutions), Dirac then derived a system of four equations that contain first derivatives in time and space. To summarize what we know today: the relativistic Dirac (1928) and Klein–Gordon (1927) equations complement each other, are physically interpretable only in the context of a system of many particles (quantum field theory), and match the Schrödinger equation in the non-relativistic limit. Einstein thus had enough to reconcile quantum physics and relativity. Maybe enough to make him change his mind after his “God does not play dice”, pronounced in the 1920s.
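A quick symbolic check makes the symmetry explicit. The sketch below (an illustration, assuming sympy is available) substitutes a plane wave into the Klein–Gordon equation and recovers E² = m²c⁴ + p²c², with time and space entering through second derivatives on an equal footing.

```python
import sympy as sp

x, t, m, c, hbar, E, p = sp.symbols('x t m c hbar E p', positive=True)

# Plane wave with energy E and momentum p.
psi = sp.exp(sp.I * (p * x - E * t) / hbar)

# Klein-Gordon equation: psi_tt / c^2 - psi_xx + (m c / hbar)^2 psi = 0.
kg = sp.diff(psi, t, 2) / c**2 - sp.diff(psi, x, 2) + (m * c / hbar)**2 * psi

# Dividing out the plane wave leaves the dispersion relation.
relation = sp.expand(sp.simplify(kg / psi) * (-hbar**2 * c**2))
print(relation)   # -> E**2 - c**2*p**2 - c**4*m**2, i.e. E^2 = m^2c^4 + p^2c^2
```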
Ultimately, though some believe that the dispute between Einstein and the Copenhagen school is pure philosophy, it appears that many constructive reasons led Einstein to be skeptical of quantum mechanics: the role of randomness, the shortcomings of the wave function … But keep in mind that unification remained an obsession for him, and that he deliberately sidelined himself from the physics of atoms. In a way, we can say that he also refused to accept that a particular philosophy should be necessary for the understanding of quantum physics, which ought to be accessible to any scientist.
Many philosophers are still working on the issue, whereas it might be time to look at what new physics is producing today, and there is much to do. I am thinking of string theory, which currently cannot rely on any direct experiment and lies on the border between philosophy and science. And what about this fierce desire to build a theory of “everything”, a full unification of the laws of nature? Is it not simply the desire to know the mind of God?
Faced with such questions, I will leave it to the reader to ponder over this quote from Newton, which I very much appreciate: “To myself I am only a child playing on the beach, while vast oceans of truth lie undiscovered before me.”
1. A very well-rounded and balanced article, hitting all the highlights. It is astounding that the concerns Schrödinger voiced more than 60 years ago with regard to the turn that QM took are just as relevant today.
To me it always appeared that linear QM must be an approximation that holds in flat space-time for sufficiently isolated systems, but this was just an intuition that seemed sensible to me, not informed by much actual theoretical analysis, although it directed me to read up on research at the outer edges, such as the work of Mendel Sachs (he made the case that his reformulation of GR unifies it with EM).
Eventually, I came across the Australian physicist Kingsley Jones in a LinkedIn forum; he has published some very interesting work on non-linear deformations of the Schrödinger equation that can yield a “classical” wave equation equivalent to classical Hamiltonian mechanics. This result gives very strong theoretical underpinnings to my hunch. (Incidentally, Steven Weinberg played with the same models but missed the connection.)
Kingsley is setting out to start a crowd-sourced effort to try to bring QM and QED more in line with Schrödinger’s original vision. I expect this to be a lot of fun and very educational (regardless of whether we actually deliver on our goal). Contributors with good math skills will be most welcome 🙂
|
736739de17210258 | Consistent histories and the Bohm approach.
B. J. Hiley and O. J. E. Maroney
Theoretical Physics Research Unit
Birkbeck College, University of London
Malet Street, London WC1E 7HX, England
13 September 2000
In a recent paper Griffiths claims that the consistent histories interpretation of quantum mechanics gives rise to results that contradict those obtained from the Bohm interpretation. This is in spite of the fact that both claim to provide a realist interpretation of the formalism without the need to add any new mathematical content and both always produce exactly the same probability predictions of the outcome of experiments. In contrasting the differences Griffiths argues that the consistent histories interpretation provides a more physically reasonable account of quantum phenomena. We examine this claim and show that the consistent histories approach is not without its difficulties.
1 Introduction
It is well known that realist interpretations of the quantum formalism are notoriously difficult to sustain, and it is only natural that the two competing approaches, the consistent histories interpretation (CH) [1] [2] and the Bohm interpretation (BI) [3] [4], should be carefully compared and contrasted. Griffiths [5] is right to explore how the two approaches apply to interferometers of the type shown in figure 1.
Although the predictions of experimental outcomes expressed in terms of probabilities are identical, Griffiths argues that the two approaches nevertheless give very different accounts of how a particle is supposed to pass through such an interferometer. After a detailed analysis of experiments based on figure 1, he concludes that the CH approach gives a behaviour that is 'physically acceptable', whereas the Bohm trajectories behave in a way that appears counter-intuitive and therefore 'unacceptable'. This behaviour has even been called 'surrealistic' by some authors1. Griffiths concludes that a particle is unlikely actually to behave in such a way, so that the CH interpretation gives a 'more acceptable' account of quantum phenomena.
Figure 1: Simple interferometer
Notice that these claims are being made in spite of the fact that no new mathematical structure whatsoever is added to the quantum formalism in either CH or BI, and in consequence all the experimental predictions of both CH and BI are identical to those obtained from standard quantum mechanics. Clearly there is a problem here, and the purpose of our paper is to explore how this difference arises. We will show that CH is not without its difficulties.
We should remark here in passing that these difficulties have already been brought out by Bassi and Ghirardi [8] [9] [10] and an answer has been given by Griffiths [11]. At this stage we will not take sides in this general debate. Instead we will examine carefully how the analysis of the particle behaviour in CH, when applied to the interferometer shown in figure 1, leads to difficulties similar to those highlighted by Bassi and Ghirardi [8].
2 Histories and trajectories
The first problem we face in comparing the two approaches is that BI uses a mathematically well defined concept of a trajectory, whereas CH does not use such a notion, defining a more general notion of a history.
Let us first deal with the Bohm trajectory, which arises in the following way. If the particle satisfies the Schrödinger equation then the trajectories are identified with the one-parameter solutions of the real part of the Schrödinger equation obtained under polar decomposition of the wave function [4] . Clearly these one-parameter curves are mathematically well defined and unambiguous.
CH does not use the notion of a trajectory. It uses instead the concept of a history, which, again, is mathematically well defined as a series of projection operators linked by Schrödinger evolution and satisfying a certain consistency condition [1]. Although in general a history is not a trajectory, in the particular example considered by Griffiths certain histories can be considered to provide approximate trajectories. For example, when particles are described by narrow wave packets, the history can be regarded as defining a kind of broad 'trajectory' or 'channel'. It is assumed that in the experiment shown in figure 1, this channel is narrow enough to allow comparison with the Bohm trajectories.
To bring out the apparent difference in the predictions of the two approaches, consider the interferometer shown in figure 1. According to CH, if we choose the correct framework, we can say that if C fires, the particle must have travelled along the path c to the detector, and any other path is regarded as 'dynamically impossible' because it violates the consistency conditions. The type of trajectories that would be acceptable from this point of view are sketched in figure 2. In contrast, a pair of typical Bohm trajectories2 are shown in figure 3. Such trajectories are clearly not what we would expect from our experience of the classical world. Furthermore there appears, at least at first sight, to be no visible structure present that would 'cause' the trajectories to be 'reflected' in the region I, although in this region interference between the two beams is taking place.
Figure 2: The CH `trajectories'.
In the Bohm approach, an additional potential, the quantum potential, appears in the region of interference, and it is this potential that has a structure which 'reflects' the trajectories as shown in figure 3. (See Hiley et al. [7] for more details.)
Figure 3: The Bohm trajectories.
In this short note we will show that the conclusions reached by Griffiths [5] cannot be sustained and that it is not possible to conclude that the Bohm 'trajectories' must be 'unreliable' or 'wrong'. We will show that CH cannot be used in this way and the conclusions drawn by Griffiths are not sound.
3 The interference experiment
Let us analyse the experimental situation shown in figure 1 from the point of view of CH. A unitary transformation U(t_{j+1}, t_j) is used to connect sets of projection operators at various times. The times of interest in this example are t0, t1, and t2: t0 is a time before the particle enters the beam splitter, t2 is the time at which a response occurs in one of the detectors C or D, and t1 is some intermediate time when the particle is in the interferometer, before the region I is reached by the wave packets.
The transformation for t0 → t1 is

$|\psi_0\rangle = |s\,C\,D\rangle_0 \;\rightarrow\; \tfrac{1}{\sqrt{2}}\left[\,|c\,C\,D\rangle_1 + |d\,C\,D\rangle_1\right] \qquad (1)$
The transformation for t1 → t2 is, according to Griffiths [13], [5],

$|c\,C\,D\rangle_1 \rightarrow |C^{*}D\rangle_2, \quad\text{and}\quad |d\,C\,D\rangle_1 \rightarrow |C\,D^{*}\rangle_2 \qquad (2)$
These lead to the histories
$\psi_0 \otimes c_1 \otimes C_2^{*}, \quad\text{and}\quad \psi_0 \otimes d_1 \otimes D_2^{*} \qquad (3)$
Here ψ₀ is shorthand for the projection operator |ψ⟩⟨ψ| at time t0, etc.
These are not the only possible consistent histories but only these two histories are used by Griffiths to make judgements about the Bohm trajectories. The two other possible histories
$\psi_0 \otimes d_1 \otimes C_2^{*}, \quad\text{and}\quad \psi_0 \otimes c_1 \otimes D_2^{*} \qquad (4)$
have zero weight and are therefore deemed to be dynamically impossible.
The significance of the histories described by equation (3) is that they give rise to new conditional probabilities that cannot be obtained from the Born probability rule [12] . These conditional probabilities are
$\Pr(c_1\,|\,\psi_0 \wedge C_2^{*}) = 1, \qquad \Pr(d_1\,|\,\psi_0 \wedge D_2^{*}) = 1. \qquad (5)$
Starting from a given initial state, ψ₀, these probabilities are interpreted as asserting that when the detector C is triggered at t2, one can be certain that, at the time t1, the particle was in the channel c and not in the channel d. In other words, when C fires we know that the triggering particle must have travelled down path c with certainty.
This is the key new result from which the difference between the predictions of CH and the Bohm approach arises. Furthermore, it must be stressed that this result cannot be obtained from the Born probability rule and is claimed by Griffiths [12] to be a new result that does not appear in standard quantum theory3.
Looking again at figure 1, we notice that there is a region I where the wave packets travelling down c and d overlap. Here interference can and does take place. In fact fringes will appear along any vertical plane in this region, as can easily be demonstrated. Indeed this interference is exactly the same as that produced in a two-slit experiment. The only change is that the two slits have been replaced by two mirrors. Once this is realised, alarm bells should ring, because the probabilities in (5) imply that we know with certainty through which slit the particle passed. Indeed equation (5) shows that the particles passing through the lower slit will arrive in the upper region of the fringe pattern, while those passing through the upper slit will arrive in the lower half4.
Recall that Griffiths claims CH provides a clear and consistent account of standard quantum mechanics, but the standard theory denies the possibility of knowing which path the particle took when interference is present. Thus the interpretation of equation (5) leads to a result that is not part of the standard quantum theory and in fact contradicts it. Nevertheless CH uses the authority of the standard approach to strengthen its case against the Bohm approach. Surely this cannot be correct.
Indeed Griffiths has already discussed the two-slit experiment in an earlier paper [14]. Here he argues that CH does not allow us to infer through which slit the particle passes. He writes:
Given this choice at t3 [whether C or D fires], it is inconsistent to specify a decomposition at time t2 [our t 1] which specifies which slit the particle has passed through, i.e., by including the projector corresponding to the particle being in the region of space just behind the A slit [our c], and in another region just behind the B slit [our d]. That is (15) [the consistency condition] will not be satisfied if projectors of this type at time t2 [our t1 ] are used along with those mentioned earlier for time t3.
The only essential difference between the two-slit experiment and the interferometer described by equation (3) above is in the position of the detectors. But according to CH measurement merely reveals what is already there, so that the position of the detector in the region I or beyond should not affect anything. Thus there appears to be a contradiction here.
To emphasise this difficulty we will spell out the contradiction again. The interferometer in figure 1 requires the amplitude of the incident beam to be split into two before the beams are brought back together again to overlap in the region I. This is exactly the same process occurring in the two-slit experiment. Yet in the two-slit experiment we are not allowed to infer through which slit the particle passed while retaining interference, whereas according to Griffiths we are allowed to talk about which mirror the particle is reflected off, presumably without also destroying the interference in the region I. We will return to this specific point again later.
One way of avoiding this contradiction is to assume the following: -
1. If we place our detectors in the arms c and d before the interference region I is reached then we have the consistent histories described in equation (3). Particles travelling down c will fire C, while those travelling down d will fire D. In this case we have an exact agreement with the Bohm trajectories.
2. If we place our detectors in the region of interference I then, according to Griffiths [14] , the histories described by equation (3) are no longer consistent. In this case CH can say nothing about trajectories.
3. If we place our detectors in the positions shown in figure 1, then, according to Griffiths [14], the consistent histories are described by equation (3) again. Here the conditional probabilities imply that all the particles travelling down c will always fire C. Bohm trajectories contradict this result and show that some of these particles will cause D to fire. These trajectories are shown in figure 3.
It could be argued that this patchwork would violate the one-framework rule. Namely that one must either use the consistent histories described by equation (3) or use a set of consistent histories that do not allow us to infer off which mirror the particle was reflected. This latter would allow us to account for the interference effects that must appear in the region I.
A typical set of consistent histories that do not allow us to infer through which slit the particle passed can be constructed in the following way.
Introduce a new set of projection operators |(c + d)⟩⟨(c + d)| at t3, where t1 < t3 < t2. Then we have the following possible histories

$\psi_0 \otimes (c + d)_3 \otimes C_2^{*}, \quad\text{and}\quad \psi_0 \otimes (c + d)_3 \otimes D_2^{*} \qquad (6)$
Clearly from this set of histories we cannot infer any generalised notion of a trajectory so that we cannot say from which mirror the particle is reflected. What this means then is that if we want to talk about trajectories we must, according to CH, use the histories described by equation (3) to cover the whole region as, in fact, Griffiths [5] actually does. But then surely the nodes in the interference pattern at I will cause a problem.
To bring out this problem let us first forget about theory and consider what actually happens experimentally as we move the detector C along a straight line towards the mirror M1. The detection rate will be constant as we move it towards the region I. Once it enters this region, we will find that its counting rate varies and will go through several zeros corresponding to the nodes in the interference pattern. Here we will assume that the detector is small enough to register these nodes.
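The counting-rate behaviour just described is easy to reproduce in a toy model (our illustration, not part of the original analysis): represent the two overlapping beams in region I as plane waves crossing at an assumed half-angle α, which gives the fringe intensity 4 cos²(kx sin α) with true zeros at the nodes.

```python
import numpy as np

# Illustrative parameters: wavenumber and half-angle between the two beams.
k = 2 * np.pi
alpha = 0.1

x = np.linspace(-10, 10, 2001)       # detector positions across region I
psi = np.exp(1j * k * np.sin(alpha) * x) + np.exp(-1j * k * np.sin(alpha) * x)
intensity = np.abs(psi) ** 2         # = 4 cos^2(k x sin(alpha))

print(intensity.min(), intensity.max())   # ~0 and 4: fringes with true nodes
nodes = x[np.isclose(intensity, 0.0, atol=1e-3)]
print(nodes[:3])                     # first few node positions on the grid
# At the nodes Pr(C* fires) = 0, which is exactly what blocks the
# continuation of a single consistent history across region I.
```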
Let us examine what happens to the conditional probabilities as the detector crosses the interference region. Initially, according to (5), the first history gives the conditional probability Pr(c₁ | ψ₀ ∧ C₃*) = 1. However, at the nodes this conditional probability cannot even be defined, since Pr(C₃*) = 0. Let us start again with the closely related conditional probability, derived from the same history, Pr(C₃* | ψ₀ ∧ c₁) = 1. Now this probability clearly cannot be continued across the interference region because Pr(C₃*) = 0 at the nodes, while Pr(ψ₀ ∧ c₁) = 0.5 regardless of where the detector is placed. In fact, there is no consistent history that includes both c₁ and C₃* when the detector is in the interference region. We are thus forced to consider different consistent histories in different regions, as we discussed above.
If we follow this prescription, then when the detector C is placed on the mirror side of path c, before the beams cross at I, we can talk about trajectories, and as stated above these trajectories agree with the corresponding Bohm trajectories. When C is moved right through and beyond the region I, we can again talk about trajectories. However, in the intermediate region CH does not allow us to talk about trajectories. This means that we have no continuity across the region of interference, and this lack of continuity means that it is not possible to conclude that any 'trajectory' defined by ψ₀ ⊗ c₁ ⊗ C* before C reaches the interference region is the same 'trajectory' defined by the same expression after C has passed through the interference region. In other words, we cannot conclude that any particle travelling down c will continue to travel in the same direction through the region of interference and emerge still travelling in the same direction to trigger detector C.
What this means is that CH cannot be used to draw any conclusions about the validity or otherwise of the Bohm trajectories. These latter trajectories are continuous throughout all regions. They are straight lines from the mirror until they reach the region I. They continue into the region of interference, but no longer travel in straight lines parallel to their initial paths. They show 'kinks' that are characteristic of the interference-type bunching that is needed to account for the interference [15]. This bunching has the effect of changing the direction of the paths in such a way that some of them eventually end up travelling in straight lines towards detector D, and not C as Griffiths would like them to do.
Indeed it is clear that the existence of the interference pattern means that any theory giving relevance to particle trajectories must give trajectories that do not move in straight lines directly through the region I. The particles must avoid the nodes in the interference pattern. CH offers us no reason why the trajectories on the mirror side of I should continue in the same general direction towards C on the other side of I. In order to match up trajectories we have to make some assumption of how the particles cross the region of interference. One cannot simply use classical intuition to help us through this region because classical intuition will not give interference fringes. Therefore we cannot conclude that the particles following the trajectories before they enter the region I are the same particles that follow the trajectories after they have emerged from that region. This requires a knowledge of how the particles cross the region I, a knowledge that is not supplied by CH.
Where the consistent histories (3) could provide a complete description is when the coherence between the two paths is destroyed. This could happen if a measurement involving some irreversible process was made in one of the beams. This would ensure that there was no interference occurring in the region I. In this case the trajectories would go straight through. This would mean that the conditional probabilities given in equation (5) would always be satisfied.
But in such a situation the Bohm trajectories would also go straight through. The particles coming from mirror M1 would trigger the detector C no matter where it was placed. The reason for this behaviour is that the wave function is no longer ψc + ψd; instead we have two incoherent beams, one described by ψc and the other by ψd. This gives rise to a different quantum potential, one that does not cause the particles to be 'reflected' in the region I. So here there is no disagreement with CH.
4 Conclusion
When coherence between the two beams is destroyed it is possible to make meaningful inferences about trajectories in CH. These trajectories imply that any particle reflected from the mirror M1 must end up in detector C. In the Bohm approach exactly the same conclusion is reached so that where the two approaches can be compared they predict exactly the same results.
When the coherence between the two beams is preserved, CH must use the consistent histories described by equation (6). These histories do not allow any inferences about trajectories to be drawn. The consistent histories described by equation (3) would enable us to make inferences about particle trajectories, but, as we have shown, they lead to disagreement with experiment. Unlike the situation in CH, the Bohm approach can define the notion of a trajectory, calculated from the real part of the Schrödinger equation under polar decomposition. These trajectories are well defined and continuous throughout the experiment, including the region of interference. Since CH cannot make any meaningful statements about trajectories in this case, it cannot be used to draw any significant conclusions concerning the validity or otherwise of the Bohm trajectories. Thus the claim by Griffiths [5], namely that CH gives a more reasonable account of the behaviour of particle trajectories in the interference experiment shown in figure 1 than that provided by the Bohm approach, cannot be sustained.
5 References
R. B. Griffiths, 1984, J. Stat. Phys., 36, 219.
R. B. Griffiths, 1996, Consistent histories and quantum reasoning, Phys. Rev. A, 54, 2759-2774.
D. Bohm and B. J. Hiley, 1987, An Ontological Basis for the Quantum Theory: I-Non-relativistic Particle Systems, Phys. Reports 144, 323-348.
D. Bohm and B. J. Hiley, 1993, The Undivided Universe: an Ontological Interpretation of Quantum Theory, Routledge, London.
R. B. Griffiths, 1999, Bohmian mechanics and consistent histories, Phys. Lett., A261, 227-234.
B. J. Hiley, R. Callaghan and O. J. E. Maroney, 1999, Quantum trajectories, real, surreal or an approximation? To be published.
A. Bassi and G. C. Ghirardi, 1999, Can the decoherent histories description of reality be considered satisfactory?, Phys. Lett. A, 247-263.
A. Bassi and G.C. Ghirardi, 1999, Decoherent histories and realism , J. Stat. Phys., (to appear); quant-ph/9912031.
A. Bassi and G.C. Ghirardi, 1999, About the notion of truth in the decoherent histories approach: a reply to Griffiths, Phys. Lett. (to appear); quant-ph/9912065.
R. B. Griffiths, 2000, Consistent quantum realism: a reply to Bassi and Ghirardi, quant-ph/0001093.
R. B. Griffiths, 1998, Choice of consistent family and quantum incompatibility, Phys. Rev. A, 57, 1604-17.
R. B. Griffiths, 1993, The Consistency of Consistent Histories: A Reply to d'Espagnat , Found. Phys., 23, 1601-10
R. B. Griffiths, 1994, A Consistent History Approach to the Logic of Quantum Mechanics, in Symposium on the Foundations of Modern Physics 1994: 70 Years of Matter Waves, eds. K. V. Laurikainen, C. Montonen and K. Sunnarborg, Editions Frontières, Gif-sur-Yvette.
C. Dewdney, C. Philippidis and B. J. Hiley, 1979, Quantum Interference and the Quantum Potential, Il Nuovo Cimento 52B, 15-28.
1 This original criticism was made by Englert et al. [6] . An extensive discussion of this position has been presented by Hiley, Callaghan and Maroney [7] .
2 Detailed examples of these trajectories will be found in Hiley, Callaghan and Maroney [7] .
3 It should be noted that the converse of (5) must also hold. Namely, if C does not fire then we can conclude that at t1 the particle was not in pathway c. In other words Pr(c₁ | ψ₀ ∧ C₂) = 0.
4 Notice that in criticising the Bohm approach, it is this consistent history interpreted as a 'particle trajectory' that is contrasted with the Bohm trajectory. The Bohm approach reaches the opposite conclusion, namely, the particle that goes through the top slit stays in the top part of the interference pattern [15].
|
8f7b49accadca01b |
Real world ocean rogue waves explained without the modulational instability
Scientific Reports volume 6, Article number: 27715 (2016)
According to the most commonly used definition, rogue waves are unusually large-amplitude waves that appear from nowhere in the open ocean. Evidence that such extremes can occur in nature is provided, among others, by the Draupner and Andrea events, which have been extensively studied over the last decade1,2,3,4,5,6. Several physical mechanisms have been proposed to explain the occurrence of such waves7, including the two competing hypotheses of nonlinear focusing due to third-order quasi-resonant wave-wave interactions8, and purely dispersive focusing of second-order non-resonant or bound harmonic waves, which do not satisfy the linear dispersion relation9,10.
In particular, recent studies propose third-order quasi-resonant interactions and associated modulational instabilities11,12 inherent to the Nonlinear Schrödinger (NLS) equation as mechanisms for rogue wave formation3,8,13,14,15. Such nonlinear effects cause the statistics of weakly nonlinear gravity waves to significantly differ from the Gaussian structure of linear seas, especially in long-crested or unidirectional (1D) seas8,10,16,17,18,19. The late-stage evolution of modulation instability leads to breathers that can cause large waves13,14,15, especially in 1D waves. Indeed, in this case energy is ‘trapped’ as in a long wave-guide. For small wave steepness and negligible dissipation, quasi-resonant interactions are effective in reshaping the wave spectrum, inducing large breathers via nonlinear focusing before wave breaking occurs16,17,20,21. Consequently, breathers can be observed experimentally in 1D wave fields only at sufficiently small values of wave steepness20,21,22. However, wave breaking is inevitable when the steepness becomes larger: ‘breathers do not breathe’23 and their amplification is smaller than that predicted by the NLS equation, in accord with theoretical studies24 of the compact Zakharov equation25,26 and numerical studies of the Euler equations27,28.
Typical oceanic wind seas are short-crested, or multidirectional wave fields. Hence, we expect that nonlinear focusing due to modulational effects is diminished since energy can spread directionally16,18,29. Thus, modulation instabilities may play an insignificant role in the wave growth especially in finite water depth where they are further attenuated30.
Tayfun31 presented an analysis of oceanic measurements from the North Sea. His results indicate that large waves (measured as a function of time at a given point) result from the constructive interference (focusing) of elementary waves with random amplitudes and phases enhanced by second-order non-resonant or bound nonlinearities. Further, the surface statistics follow the Tayfun distribution32 in agreement with observations9,10,31,33. This is confirmed by a recent data quality control and statistical analysis of single-point measurements from fixed sensors mounted on offshore platforms, the majority of which were recorded in the North Sea34. The analysis of an ensemble of 122 million individual waves revealed 3649 rogue events, concluding that rogue waves observed at a point in time are merely rare events induced by dispersive focusing. Thus, a wave whose crest height exceeds the rogue threshold2 of 1.25Hs occurs on average once every Nr ~ 10^4 waves, with Nr referred to as the return period of a rogue wave and Hs the significant wave height. Some even more recent measurements off the west coast of Ireland35 revealed similar statistics, with 13 rogue events out of an ensemble of 750873 individual waves and Nr ~ 6 · 10^4.
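The quoted return periods follow directly from the observed counts; a two-line consistency check (using only the figures given in the text) reads:

```python
# Return periods implied by the quoted counts (figures from the text).
print(122_000_000 / 3649)   # North Sea ensemble: ~3.3e4, i.e. Nr ~ 10^4
print(750_873 / 13)         # Killard ensemble:  ~5.8e4, i.e. Nr ~ 6*10^4
```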
To date, it is still under debate if in typical oceanic seas second-order nonlinearities dominate the dynamics of extreme waves as indicated by ocean measurements31,33, or if third-order nonlinear effects play also a significant, if not dominant, role in rogue-wave formation. The preceding provides our principal motivation for studying the statistical and physical properties of rogue sea states and to investigate the relative importance of second and third-order nonlinearities. We rely on WAVEWATCH III hindcasts and High Order Spectral (HOS) simulations of the Euler equations for water waves36. In our study, we consider the famous Draupner and Andrea rogue waves and the less well known Killard rogue wave35. The Andrea rogue wave was measured by Conoco on 9 November 2007 with a system of four Teledyne Optech lasers mounted in a square array on the Ekofisk platform in the North Sea in a water depth d = 74 m4,5. The metocean conditions of the Andrea wave are similar to those of the Draupner wave measured by Statoil at a nearby platform (d = 70 m) on 1 January 1995 with a down looking laser-based wave sensor37. The Killard wave was measured by ESB International on 28 January 2014 by a Waverider buoy off the west coast of Ireland in a water depth d = 39 m. In Table 1 we summarize the wave parameters of the three sea states in which the rogue wave occurred and we refer to the Methods section for definitions and details. As one can see, the actual crest-to-trough (wave) heights H and crest heights h meet the classical criteria2 H/Hs > 2 and h/Hs > 1.25 to qualify the Andrea, Draupner and Killard extreme events as rogue waves. The remainder of the paper is organized as follows. First, the probability structure of oceanic seas is presented33 together with the essential elements of Tayfun’s32 second-order theory for the wave skewness and Janssen’s8 formulation for the excess kurtosis of multidirectional seas29. Then, we present and compare second-order and third-order statistical properties of the three rogue sea states followed by an analysis of the shape of the largest waves and associated mean sea levels. In concluding, we discuss the implications of these results on rogue-wave predictions.
Table 1: Wave parameters and various statistics of the simulated sea states labelled Andrea, Draupner and Killard.
Probability structure of oceanic seas
Non-resonant and resonant wave-wave interactions cause the statistics of weakly nonlinear gravity waves to significantly differ from the Gaussian structure of linear seas8,10,16,17,18,38. The relative importance of ocean nonlinearities and the increased occurrence of large waves can be measured by integral statistics such as the wave skewness λ3 and the excess kurtosis λ40 of the zero-mean surface elevation η(t):

$\lambda_3 = \overline{\eta^3}/\sigma^3, \qquad \lambda_{40} = \overline{\eta^4}/\sigma^4 - 3. \qquad (1)$
Here, overbars imply statistical averages and σ is the standard deviation of surface wave elevations. Here and in the following we refer to the Methods section for the definitions of the wave parameters and details.
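As a minimal sketch of these estimators (an illustration; the Gaussian test record is an assumption), the sample skewness and excess kurtosis of a surface-elevation record can be computed as follows. A linear sea should give values near zero for both.

```python
import numpy as np

def skewness_and_excess_kurtosis(eta):
    """Sample skewness lambda_3 and excess kurtosis lambda_40 of eta(t)."""
    eta = np.asarray(eta) - np.mean(eta)
    sigma = np.std(eta)
    lam3 = np.mean(eta**3) / sigma**3
    lam40 = np.mean(eta**4) / sigma**4 - 3.0
    return lam3, lam40

# A linear (Gaussian) sea: both statistics should be ~0.
rng = np.random.default_rng(0)
print(skewness_and_excess_kurtosis(rng.normal(size=1_000_000)))
```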
The skewness coefficient represents the principal parameter with which we describe the effects of second-order bound nonlinearities on the geometry and statistics of the sea surface, with higher, sharper crests and shallower, more rounded troughs9,32,33. The excess kurtosis comprises a dynamic component due to third-order quasi-resonant wave-wave interactions and a bound contribution induced by both second- and third-order bound nonlinearities9,10,32,33,39,40. In order to compare the relative orders of nonlinearities, we consider the characteristic wave steepness μm = kmσ, where km is the wavenumber corresponding to the mean spectral frequency ωm32.
Return period of a wave whose crest height exceeds a given threshold
To describe the statistics of rogue waves, we consider the conditional return period Nh(ξ) of a wave whose crest height exceeds the threshold h = ξHs, namely

$N_h(\xi) = 1/P(\xi), \qquad (2)$

where P(ξ) is the probability of a wave crest height exceeding ξHs. Equation (2) implies that the threshold ξHs, with Hs = 4σ, is exceeded on average once every Nh(ξ) waves.
For weakly nonlinear random seas, the probability P can be described by the (third-order) TF, (second-order Tayfun) T or (linear Rayleigh) R distributions. In particular33,

$P_{TF}(\xi) = \exp(-8\xi_0^2)\left[1 + \Lambda\,\xi_0^2\,(4\xi_0^2 - 1)\right], \qquad (3)$

where ξ0 follows from the quadratic equation ξ = ξ0 + 2μξ0² 32. Here, the wave steepness μ = λ3/3 is of O(μm), and it is a measure of second-order bound nonlinearities as it relates to the skewness of surface elevations9. The relationship λ3 = 3μ is originally due to Tayfun31, who derived it for narrowband nonlinear waves that display a vertically asymmetric profile with sharper and higher crests and shallower and more rounded troughs. This sort of asymmetry is reflected in a quantitative sense in the skewness coefficient λ3 of surface elevations from the mean sea level. Although the relationship was thought to be appropriate only to narrowband waves, Fedele & Tayfun9 have more recently verified that it is also valid for broadband waves. In simple terms, μ = λ3/3 serves as a convenient relative measure of the characteristic crest-trough asymmetry of ocean waves. For narrowband (NB) waves in intermediate water depth, Tayfun41 derived a compact expression that reduces to the simple form λ3,NB = 3μm in deep water32 (see Methods section for details). The parameter Λ in Eq. (3) is a measure of third-order nonlinearities, expressed as a function of the fourth-order cumulants of the wave surface33. Our studies show that it is approximated by Λappr = 8λ40/3 (see Methods section). For second-order seas, hereafter referred to as Tayfun sea states42, only Λ = 0, and PTF in Eq. (3) yields the Tayfun (T) distribution32

$P_{T}(\xi) = \exp(-8\xi_0^2). \qquad (4)$
For Gaussian seas, μ = 0 and Λ = 0, and PTF reduces to the Rayleigh (R) distribution

$P_{R}(\xi) = \exp(-8\xi^2). \qquad (5)$
We point out that the Tayfun distribution represents an exact result for large second-order wave crest heights, and it depends solely on the steepness parameter μ = λ3/3 (ref. 9). In the following we will not dwell on wave heights43,44, as our main focus will be the statistics of crest heights in oceanic rogue sea states.
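To see what these distributions imply for rogue-wave return periods, the sketch below implements Eqs (2)-(5) as reconstructed above; the values of μ and Λ are purely illustrative, chosen only to be of the magnitude typical of storm seas. It reproduces the qualitative picture discussed earlier: second-order nonlinearity shortens the Rayleigh return period of the 1.25Hs crest threshold to roughly 10^4 waves, while the third-order correction changes it comparatively little.

```python
import numpy as np

def xi0(xi, mu):
    """Solve xi = xi0 + 2*mu*xi0**2 for xi0 (the Tayfun quadratic)."""
    return (np.sqrt(1 + 8 * mu * xi) - 1) / (4 * mu)

def P_R(xi):                 # linear (Rayleigh), Eq. (5)
    return np.exp(-8 * xi**2)

def P_T(xi, mu):             # second-order (Tayfun), Eq. (4)
    return np.exp(-8 * xi0(xi, mu)**2)

def P_TF(xi, mu, Lam):       # third-order (Tayfun-Fedele), Eq. (3)
    x0 = xi0(xi, mu)
    return np.exp(-8 * x0**2) * (1 + Lam * x0**2 * (4 * x0**2 - 1))

mu, Lam, xi = 0.06, 0.1, 1.25   # illustrative steepness and Lambda; rogue threshold
for name, P in [("Rayleigh", P_R(xi)),
                ("Tayfun", P_T(xi, mu)),
                ("Tayfun-Fedele", P_TF(xi, mu, Lam))]:
    print(f"{name:14s} P = {P:.2e}   return period N = 1/P = {1/P:.1e}")
```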
Excess kurtosis
For third-order nonlinear random seas the excess kurtosis

$\lambda_{40} = \lambda_{40}^{d} + \lambda_{40}^{b} \qquad (6)$

comprises a dynamic component λ40^d due to nonlinear quasi-resonant wave-wave interactions8,40 and a Stokes bound harmonic contribution λ40^b 45. Janssen45 derived a complex general formula for the bound excess kurtosis. For narrowband (NB) waves in intermediate water depth, the formula is more compact (see Eq. (A23) in45 and the Methods section), and in deep water it reduces to a simple form40,45,46 expressed through λ3,NB = 3μm9,32,33. As for the dynamic component λ40^d, Fedele29 recently revisited Janssen's8 weakly nonlinear formulation. In deep water, λ40^d is given in terms of a six-fold integral that depends on the Benjamin-Feir index BFI and on the parameter R = σθ²/(2ν²), a dimensionless measure of the multidirectionality of dominant waves, with ν the spectral bandwidth and σθ the angular spreading40,47. As waves become unidirectional (1D), R tends to zero. Note that the R-values for the three rogue sea states in Table 1 range from 0.4 to 0.6.
For deep-water narrowband waves characterized by a Gaussian-type directional spectrum, the six-fold integral can be reduced to a one-fold integral29, so that the dynamic excess kurtosis can be computed directly; here ωm is the mean spectral frequency, ν the spectral bandwidth, and Im(x) denotes the imaginary part of x. In the focusing regime (0 < R < 1) the dynamic excess kurtosis of an initially homogeneous Gaussian wave field grows, attaining a maximum at an intrinsic time scale tc. Thus, the sea state initially deviates from being Gaussian, but eventually the excess dynamic kurtosis tends monotonically to zero as energy spreads directionally, in agreement with numerical simulations48. The dynamic excess kurtosis maximum is well approximated by a compact expression, Eq. (8), with fitted constants, one of which corrects a misprint in29, and b = 2.48. In contrast, in the defocusing regime (R > 1) the dynamic excess kurtosis is always negative. It reaches a minimum at tc and then tends to zero over larger periods of time. In summary, the theoretical predictions indicate a decaying trend for the dynamic excess kurtosis over large times in multidirectional wave fields (R > 0).
In unidirectional (R = 0) seas, energy is ‘trapped’ as in a long wave-guide. An initially homogeneous Gaussian wave field evolves as the dynamic excess kurtosis monotonically increases toward the asymptotic non-zero value that follows from Eq. (8)49. Clearly, wave energy cannot spread directionally, and quasi-resonant interactions induce nonlinear focusing and large breather-type waves initiated by modulation instability16,17,20,21,22,23,50. However, realistic oceanic wind seas are typically multidirectional (short-crested) and energy can spread directionally. As a result, nonlinear focusing due to modulational instability effects diminishes16,18,29,51 or becomes essentially insignificant under realistic oceanic conditions29. Indeed, the large excess kurtosis transient observed during the initial stage of evolution is a result of the unrealistic assumption that the initial wave field is homogeneous Gaussian, whereas oceanic wave fields are usually statistically inhomogeneous both in space and time. Further, for time scales much longer than tc, starting with initially homogeneous and Gaussian conditions becomes irrelevant, as the wave field tends to a non-Gaussian state dominated by bound nonlinearities and the total kurtosis of surface elevations asymptotically approaches the value represented by the bound component52,53.
These results and conclusions hold for deep-water gravity waves. The extension to intermediate water depth d readily follows by redefining the squared Benjamin-Feir index as αS BFI²,40,54 where the depth factor αS depends on the dimensionless depth kmd, with km the wavenumber corresponding to the mean spectral frequency (see Methods section). In the deep-water limit αS tends to 1. As the dimensionless depth kmd decreases, αS decreases and becomes negative for kmd < 1.363, and so does the dynamic excess kurtosis. For the three rogue sea states under study the depth factors are less than 1 and are given in Table 1 together with the associated BFI and R coefficients. From Eq. (8), the maximum dynamic excess kurtosis is of O(10⁻³) for all three sea states and thus negligible in comparison with the associated narrowband (NB) bound component of O(10⁻²) (see Methods section). This will be confirmed below by a quantitative analysis of High Order Spectral (HOS) simulations of the Euler equations36.
At present, whether second-order or third-order nonlinearities play the dominant role in rogue-wave formation is a subject of considerable debate. Recent theoretical results clearly show that third-order quasi-resonant interactions play an insignificant role in the formation of large waves in realistic oceanic seas29. Further, the oceanic evidence available so far31,33,34 suggests that the statistics of large oceanic wind waves are not affected in any discernible way by third-order nonlinearities, including NLS-type modulational instabilities, which attenuate as the wave spectrum broadens24. Indeed, extensive analyses of storm-generated extreme waves do not display any data trend even remotely similar to the systematic breather-type patterns observed in 1D wave flumes10,31,33,34. However, third-order bound nonlinearities may affect both skewness and kurtosis, as they shape the wave surface with sharper crests and shallower troughs.
In the following we compare the second- and third-order nonlinear properties of the sea states in which the Draupner, Andrea and Killard rogue waves occurred, using HOS simulations of the Euler equations36. To do so, we first use WAVEWATCH III to hindcast the three rogue sea states. The respective directional spectra S(ω, θ) are shown in Fig. 1 and are used to define the initial wave field conditions for the HOS simulations - see the Methods section.
Figure 1: WAVEWATCH III hindcast directional wave spectra S(ω, θ) used as input for the HOS simulations.
Here, ω is the angular frequency and θ the direction in degrees. (Left) Andrea, (center) Draupner, (right) Killard. The spectra have been normalized with respect to spectral peak values.
Second-order vs third-order nonlinearities
The time evolutions of the skewness and excess kurtosis of the three simulated rogue sea states are shown in Fig. 2. Initially, the two statistics undergo a brief artificial transient of O(10) mean wave periods, during which nonlinearities are smoothly activated by way of a ramping function55 applied to the HOS equations. Following this stage, we do not observe the typical overshoot beyond the Gaussian value seen in wave tank measurements and simulations8,16,17,50. Instead, both statistics rapidly reach a steady state, an indication that quasi-resonant wave-wave interactions due to modulational instabilities are negligible, in agreement with theoretical predictions29. Indeed, the large-time kurtosis is mostly Gaussian for all three sea states, and there are insignificant differences between the second-order and third-order HOS simulations. Further, Fig. 2 shows that the narrowband predictions slightly overestimate the simulated values of the skewness and excess kurtosis. This is simply because the narrowband approximations do not account for the directionality and the finite bandwidth of the spectrum.
Figure 2: Time evolution of the skewness λ3 and excess kurtosis λ40 for the (left) Andrea, (center) Draupner and (right) Killard sea states; HOS second-order (black solid) and HOS third-order (red solid) averages, and theoretical predictions of the narrowband Tayfun skewness and Janssen bound excess kurtosis (blue solid, see Eq. (9) in Methods section).
95% confidence bands (dashed) are also shown. Time is normalized by the mean wave period Tm. The statistical parameters are estimated from an ensemble of 50 HOS simulations. The initial artificial transients are excluded from the ensemble averages as they are the result of a ramping function55 applied to the HOS equations to smoothly activate nonlinearities. See Methods section for details and definitions of wave parameters.
Our main conclusion is that second-order bound nonlinearities mainly affect the large-time skewness λ3, whereas the excess kurtosis contribution is smaller since it is of second order in the steepness, O(μm²)39,40 (see also Methods section). Clearly, second-order effects are the dominant factor in shaping the probability structure of the random sea state, with a minor contribution from excess kurtosis effects. Such dominance is seen in Fig. 3, where the HOS numerical predictions of the conditional return period Nh(ξ) of a crest exceeding the threshold ξHs are compared against the theoretical predictions based on the linear Rayleigh (R), second-order Tayfun (T) and third-order (TF) models from Eq. (3). In particular, Nh(ξ) follows from Eq. (2) as the inverse 1/P(ξ) of the empirical probability of a crest height exceeding the threshold ξHs. Excellent agreement is observed between the simulations and the third-order TF model, which is nearly the same as the second-order T model. This indicates that second-order effects are dominant, whereas the linear Rayleigh model underestimates the empirical return periods.
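As an illustration of how these return periods behave, the following Python sketch evaluates Nh(ξ) = 1/P(ξ) for the linear Rayleigh law and for the narrowband Tayfun model, in which a crest ξ = ξ0 + 2μξ0² is the second-order Stokes correction of a Rayleigh-distributed linear amplitude ξ0 (all heights scaled by Hs = 4σ). The steepness value μ = 0.06 is only an illustrative assumption, not one of the hindcast values; with it the sketch reproduces the orders of magnitude quoted below for the 1.25Hs and 1.5Hs thresholds.

```python
import numpy as np

def rayleigh_exceedance(xi):
    """Linear model: P[h > xi*Hs] for a narrowband Gaussian sea, with Hs = 4*sigma."""
    return np.exp(-8.0 * xi**2)

def tayfun_exceedance(xi, mu):
    """Second-order Tayfun model: invert xi = xi0 + 2*mu*xi0**2 for the linear
    amplitude xi0, then apply the Rayleigh law to xi0."""
    xi0 = (np.sqrt(1.0 + 8.0 * mu * xi) - 1.0) / (4.0 * mu)
    return np.exp(-8.0 * xi0**2)

mu = 0.06  # illustrative steepness, mu = lambda3/3 (assumed, not a hindcast value)
for xi in (1.25, 1.50):
    N_R = 1.0 / rayleigh_exceedance(xi)
    N_T = 1.0 / tayfun_exceedance(xi, mu)
    print(f"xi = {xi:4.2f}: Rayleigh N ~ {N_R:9.2e} waves, Tayfun N ~ {N_T:9.2e} waves")
```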
Figure 3: Crest height scaled by the significant wave height (ξ) versus conditional return period (Nh) for the (left) Andrea, (center) Draupner and (right) Killard rogue sea states: HOS numerical predictions (symbols) in comparison with theoretical models (T = second-order Tayfun (light solid lines), TF = third-order (red solid lines) and R = Rayleigh distributions (dark dashes)).
Confidence bands are also shown (light dashes). Nh(ξ) is the inverse of the exceedance probability P(ξ) = Pr[h > ξHs]. Horizontal lines denote the rogue threshold 1.25Hs (ref. 2).
For both second- and third-order nonlinearities, the return period Nr of a wave whose crest height exceeds the rogue threshold 1.25Hs is nearly 2·10⁴ for the Andrea, Draupner and Killard sea states. Oceanic rogue wave measurements34 indicate that the rogue threshold for crest heights is exceeded on average once every Nr ~ 10⁴ waves. Similarly, recent measurements off the west coast of Ireland35 yield Nr ~ 6·10⁴. In contrast, in a Gaussian sea the same threshold is exceeded more rarely, on average once every 3·10⁵ waves.
Note that all three rogue waves have crest heights exceeding the threshold 1.5Hs. This threshold is exceeded on average once every 5·10⁵ waves in second- and third-order seas, and extremely rarely in Gaussian seas, i.e. on average once every 6·10⁷ waves. This implies that the three rogue wave crest events are likely to be rare occurrences in weakly second-order random seas, or Tayfun sea states42. Our results clearly confirm that rogue wave generation is the result of the constructive interference (focusing) of elementary waves enhanced by second-order nonlinearities, in agreement with the theory of stochastic wave groups proposed by Fedele and Tayfun9, which relies on Boccotti's theory of quasi-determinism43. Our conclusions are also in agreement with observations9,10,31,33.
Comparison of the profiles of three rogue waves
For all three rogue sea states under study, the largest wave observed in the HOS simulations is now compared against the actual rogue wave measurements. Figure 4 compares the measured wave profiles with the largest second-order and third-order simulated waves. While there are small differences between the two orders, second-order nonlinearities suffice to predict the observed profiles accurately.
Figure 4: Third-order HOS simulated extreme wave profiles (red thin solid), second-order HOS profiles (blue thin solid) and mean sea levels (MSL) (thin dashed) versus the dimensionless time t/Tp for (left) Andrea, (center) Draupner and (right) Killard waves.
For comparisons, measurements (thick solid) and actual MSLs (thin solid) are also shown. Note that the Killard MSL is insignificant and the Andrea MSL is not available. Tp is the dominant wave period (see Methods section for definitions).
In the same figure the simulated mean sea level (MSL) below the crests is also shown. The MSL is estimated by low-pass filtering the measured time series of the wave surface with a frequency cutoff fc ~ fp/2, where fp is the frequency of the spectral peak56. Note that the time series must be long enough, containing at least ~200 waves, for a statistically robust estimation of wave-wave interactions. In shorter time series a set-up is observed as a manifestation of the large crest segment that extends above the adjacent lower crests. The HOS simulations give approximately the same MSL for both second- and third-order nonlinearities, predicting a set-down below the large crests as expected from theory57. However, the observed Draupner set-up (thin line) is not reproduced by our HOS simulations (see Fig. 4). We also note that the HOS MSL is close to the narrowband prediction STNB (see Table 1 and Methods section for definitions). The actual MSL for Andrea is not available, and the buoy observations for Killard give neither a set-up nor a set-down.
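A minimal sketch of such an MSL estimate, assuming a uniformly sampled elevation record and a simple FFT-based brick-wall filter (the paper does not specify the filter type), could look as follows:

```python
import numpy as np

def mean_sea_level(eta, dt, fp):
    """Low-pass filter the surface elevation eta (m), sampled every dt seconds,
    keeping only frequencies below the cutoff fc ~ fp/2."""
    n = len(eta)
    spectrum = np.fft.rfft(eta)
    freqs = np.fft.rfftfreq(n, d=dt)
    spectrum[freqs > 0.5 * fp] = 0.0  # brick-wall cutoff at fc = fp/2
    return np.fft.irfft(spectrum, n=n)

# e.g. for a record sampled at 2 Hz with a 15 s peak period:
# msl = mean_sea_level(eta, dt=0.5, fp=1.0 / 15.0)
```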
Taylor et al.58 reported that for the Draupner wave the hindcast from the European Centre for Medium-Range Weather Forecasts shows swell waves propagating at approximately 80 degrees to the wind sea. They argued that the Draupner wave may have been due to the crossing of two almost orthogonal wave groups, in accord with second-order theory. This would explain the set-up observed under the large wave56 instead of the second-order set-down normally expected57. Note that such an angle between the two dominant sea directions lies outside the range ~20-60 degrees where modulation instability is enhanced59.
Further studies and a high-resolution hindcast of the Draupner sea state are needed to clarify whether it was a crossing-seas situation, as our WAVEWATCH III hindcast spectrum does not indicate so. Concerning the disagreement on the Draupner set-up, we have conducted numerical HOS experiments in which the input spectrum consists of two identical JONSWAP-type sea states crossing at 90 degrees, and we indeed found a set-up. In fact, whether one obtains a set-up or a set-down depends on the angle between the crossing seas: as the angle increases, the set-down turns into a set-up - see Fig. 5. However, we still find that second-order effects are dominant and that third-order contributions to skewness and kurtosis, mainly due to bound nonlinearities, are negligible.
Figure 5: Upper row: crossing directional wave spectra S(ω, θ) computed using two identical JONSWAP spectra with Draupner spectral properties.
Lower row: extreme wave profiles simulated with third order HOS (red lines) and second order HOS (black lines). In addition, the corresponding mean sea levels are shown (dashed lines). The mean sea levels are scaled by three for emphasis. Crossing angles from left to right: π/2, π/4, and π/8. Note that for the final case, the relatively small crossing angle results in the spectrum appearing to contain only one dominant peak.
Our results agree with the recent numerical simulations by Trulsen et al.42 of the crossing sea state encountered during the accident of the tanker Prestige on 13 November 2002. Puzzled by the literature on crossing sea states, they checked whether the fact that the accident occurred during a bimodal sea state, with two wave systems crossing nearly at a right angle, increased the chance of encountering a rogue wave. They concluded that the wave conditions at the time of the accident were only slightly more extreme than those of a Gaussian sea state, and slightly less extreme than those of a second-order Tayfun sea state32.
Since the 1990s, the modulational instability11,12 of a class of solutions of the NLS equation has been proposed as a mechanism for rogue wave formation3,8,13,14,15. The availability of exact analytical solutions of 1D NLS breathers13 via the Inverse Scattering Transform60 greatly stimulated new research on rogue waves. In particular, it has been found that in 1D wave fields the late-stage evolution of modulation instability leads to large waves in the form of breathers13,14,15. Indeed, in such situations energy is 'trapped' as in a long wave-guide, and quasi-resonant interactions are effective in inducing large breathers via nonlinear modulation before wave breaking occurs16,17,20,21. However, rogue waves in the form of breathers can be observed experimentally in 1D waves only at sufficiently small values of wave steepness (~0.01-0.09)20,21,22. Wave breaking is inevitable for wave steepness larger than 0.1: 'breathers do not breathe'23, and their amplification is smaller than that predicted by the NLS equation, as confirmed by numerical simulations27,28.
Clearly, typical oceanic wind seas are short-crested, i.e. multidirectional, wave fields, and their dynamics is more 'free' than the 1D 'long-wave-guide' counterpart. Indeed, energy can spread directionally, and as a result nonlinear focusing due to modulational instability is diminished16,18,29. Our results suggest that in typical oceanic fields third-order nonlinearities do not play a significant role in wave growth.
Furthermore, we found that skewness effects on crest heights are dominant in comparison with bound kurtosis contributions, so that statistical predictions can be based on second-order models32,33,61. Thus, rogue waves are likely to be rare occurrences resulting from the constructive interference (dispersive and directional focusing) of elementary waves enhanced by second-order nonlinear effects, in agreement with observations9,10,31,33 and with the theory of stochastic wave groups9. This theory shows that, insofar as the dynamics of large surface displacements is concerned, wave groups can be thought of as the 'genes' of a non-Gaussian sea dominated by second-order nonlinearities. The space-time evolution of wave crests during an extreme event can be seen in the Supplementary Video S1 of the simulated Killard rogue wave sea state analyzed in this paper. We anticipate that our results may motivate similar analyses of waves over a wider distribution of heights using extensive data sets34.
Wave parameters
The significant wave height Hs is defined as the mean value H1/3 of the highest one-third of wave heights. It can be estimated either from a zero-crossing analysis or, more easily, from the omnidirectional wave spectrum as Hs ≈ 4σ, where σ = √m0 is the standard deviation of surface elevations, mj = ∫ S(ω) ωʲ dω are the spectral moments of the omnidirectional spectrum S(ω) = ∫ S(ω, θ) dθ, and S(ω, θ) is the directional wave spectrum.
The dominant wave period Tp = 2π/ωp refers to the frequency ωp of the spectral peak. The mean zero-crossing wave period T0 is equal to 2π/ω0, with ω0 = √(m2/m0). The associated wavelength L0 = 2π/k0 follows from the linear dispersion relation ω0² = g k0 tanh(k0 d), with d the water depth. The mean spectral frequency is defined as ωm = m1/m032 and the associated mean period Tm is equal to 2π/ωm. A characteristic wave steepness is defined as μm = kmσ, where km is the wavenumber corresponding to the mean spectral frequency ωm32. The following quantities are also introduced: qm = kmd, Qm = tanh qm, the phase velocity cm = ωm/km, and the group velocity cg = cm[1 + 2qm/sinh(2qm)]/2. The spectral bandwidth ν = √(m0m2/m1² − 1) gives a measure of the frequency broadening. The angular spreading σθ is estimated from the directional distribution of the spectrum62. Note that R = σθ²/(2ν²).
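As a concrete illustration, the sketch below computes these integrated parameters from a discretized omnidirectional spectrum; the Pierson-Moskowitz-like spectrum used to exercise it is only a placeholder assumption, not one of the hindcast spectra.

```python
import numpy as np

def bulk_parameters(omega, S):
    """Integrated wave parameters from an omnidirectional spectrum S(omega)."""
    m0, m1, m2 = (np.trapz(S * omega**j, omega) for j in range(3))
    sigma = np.sqrt(m0)
    Hs = 4.0 * sigma                      # significant wave height
    Tm = 2.0 * np.pi * m0 / m1            # mean period, 2*pi/omega_m
    nu = np.sqrt(m0 * m2 / m1**2 - 1.0)   # spectral bandwidth
    return Hs, Tm, nu

# placeholder spectrum for a quick check
omega = np.linspace(0.2, 3.0, 2000)
S = omega**-5 * np.exp(-1.25 * (0.6 / omega)**4)  # Pierson-Moskowitz-like shape
print(bulk_parameters(omega, S))
```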
The parameter Λ = λ40 + 2λ22 + λ04 is a measure of third-order nonlinearities and is a function of the fourth-order joint cumulants λnm of the wave surface η and its Hilbert transform η̂33. In practice, Λ is usually approximated solely in terms of the excess kurtosis as Λappr = 8λ40/3 by assuming the inter-cumulant relations49 λ22 = λ40/3 and λ04 = λ40. These, to date, have been proven to hold for linear and second-order narrowband waves only39. For third-order nonlinear seas, our numerical studies indicate that Λ ≈ Λappr within a 3% relative error, in agreement with observations19,63.
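For readers who want to estimate these statistics from a surface elevation record, the following sketch computes the skewness, the excess kurtosis and Λappr; the joint-cumulant normalizations used for λ22 and λ04 are our assumptions, chosen so that each cumulant vanishes for a Gaussian record.

```python
import numpy as np
from scipy.signal import hilbert

def third_order_stats(eta):
    """Skewness, excess kurtosis and approximate Lambda of a wave record."""
    eta = np.asarray(eta, dtype=float)
    eta = eta - eta.mean()
    sigma = eta.std()
    eta_h = np.imag(hilbert(eta))                        # Hilbert transform of eta
    lam3 = np.mean(eta**3) / sigma**3                    # skewness
    lam40 = np.mean(eta**4) / sigma**4 - 3.0             # excess kurtosis
    lam22 = np.mean(eta**2 * eta_h**2) / sigma**4 - 1.0  # assumed normalization
    lam04 = np.mean(eta_h**4) / sigma**4 - 3.0           # assumed normalization
    Lam = lam40 + 2.0 * lam22 + lam04
    Lam_appr = 8.0 * lam40 / 3.0
    return lam3, lam40, Lam, Lam_appr
```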
The wave steepness μ = λ3/3 relates to the wave skewness λ3 of surface elevations. For narrowband (NB) waves in intermediate water depth, the wave skewness41 and the bound excess kurtosis45 are given by closed-form expressions (Eq. (9)) whose coefficients α and β depend on the dimensionless depth and on the phase velocity in shallow water. The wave-induced set-down, i.e. the mean sea level variation below a crest of amplitude h, is STNB = Δh² (ref. 45), with Δ a depth-dependent coefficient. In deep water these expressions simplify further.
Note that the expressions in Eq. (9) are not valid in small water depth, as the second- and third-order terms of the associated Stokes expansion can be larger than the linear counterpart (see Eq. (A18) in ref. 45). For validity, the constraints αμm ≤ 1 and βμm/α ≤ 1 must hold, and indeed they are satisfied for the three rogue sea states under study. The depth factor αS depends on kmd through a lengthy expression, which is not reported here for the sake of simplicity - see Janssen and Onorato54.
Brief description of WAVEWATCH III and hindcast validation
WAVEWATCH III62,64 is a third-generation wave model developed at NOAA/NCEP that solves the spectral wave action balance equation with source functions representing the wind input, wave-wave interactions and the wave energy dissipation due to diverse processes. The model was configured to solve the balance equation from a minimum frequency of 0.0350 Hz up to 0.5552 Hz, with 36 directional bands and 30 frequencies. A JONSWAP spectrum was set as the initial condition at every grid point. We used wind input fields from the NOAA Climate Forecast System Reanalysis (CFSR)64.
Higher Order Spectral Method
The HOS method is a numerical pseudo-spectral method for solving the Euler equations governing the dynamics of an incompressible fluid flow at a desired level of nonlinearity. In particular, it evolves in time the free surface of the fluid, η(x, y, t), and the associated velocity potential ψ(x, y, t) evaluated on the free surface. The method was independently developed in 1987 by Dommermuth & Yue36 and West et al.65; within the present work, West et al.'s version is employed. Tanaka66 provides a thorough description of the method.
Initial conditions for the potential ψ and the surface elevation η are obtained from the directional spectrum output by WAVEWATCH III. In the wavenumber domain, the Fourier transform of η is constructed as √S(k) exp(iβ), where the random phase β is uniformly distributed over [0, 2π]. Similarly, the Fourier transform of ψ is obtained via linear wave theory, and finally an inverse Fourier transform is applied. The numerical simulations are performed using 1024 × 1024 Fourier modes over a time scale set by the characteristic wave steepness μm defined above. A low-pass filter is applied to avoid numerical blow-up.
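A minimal sketch of this random-phase initialization is given below, assuming a 2D wavenumber spectrum already interpolated onto an FFT grid; enforcing the Hermitian symmetry that a production code would need for a strictly real surface is glossed over by simply taking the real part.

```python
import numpy as np

def random_phase_surface(S_k, seed=0):
    """One realization of the free surface from a wavenumber spectrum S_k on an
    FFT-ordered (kx, ky) grid; amplitude scaling follows the FFT convention in use."""
    rng = np.random.default_rng(seed)
    beta = rng.uniform(0.0, 2.0 * np.pi, size=S_k.shape)  # uniform phases on [0, 2*pi)
    eta_hat = np.sqrt(S_k) * np.exp(1j * beta)
    # taking the real part stands in for enforcing Hermitian symmetry,
    # which would be imposed in a production code so that eta is strictly real
    return np.real(np.fft.ifft2(eta_hat))
```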
Finally, we note that the use of the WAVEWATCH III model combined with HOS simulations may prove useful in assessing recently proposed techniques for rogue wave predictability based on chaotic time series analysis67,68 and third-order probabilistic models of unexpected wave extremes69.
Additional Information
How to cite this article: Fedele, F. et al. Real world ocean rogue waves explained without the modulational instability. Sci. Rep. 6, 27715; doi: 10.1038/srep27715 (2016).
References
1. A possible freak wave event measured at the Draupner Jacket January 1 1995. Proc. of Rogue Waves 2004, 1–8 (2004).
2. Oceanic rogue waves. Annual Review of Fluid Mechanics 40, 287–310 (2008).
3. Nonlinear Ocean Waves and the Inverse Scattering Transform, vol. 97 (Elsevier, 2010).
4. The Andrea wave characteristics of a measured North Sea rogue wave. Journal of Offshore Mechanics and Arctic Engineering 135, 031108 (2013).
5. The North Sea Andrea storm and numerical simulations. Natural Hazards and Earth System Science 14, 1407–1415 (2014).
6. Local analysis of wave fields produced from hindcasted rogue wave sea states. In ASME 2015 34th International Conference on Ocean, Offshore and Arctic Engineering, OMAE2015-41458 (American Society of Mechanical Engineers, 2015).
7. Physical mechanisms of the rogue wave phenomenon. European Journal of Mechanics - B/Fluids 22, 603–634 (2003).
8. Nonlinear four-wave interactions and freak waves. Journal of Physical Oceanography 33, 863–884 (2003).
9. On nonlinear wave groups and crest statistics. J. Fluid Mech. 620, 221–239 (2009).
10. Rogue waves in oceanic turbulence. Physica D 237, 2127–2131 (2008).
11. The effects of randomness on the stability of two-dimensional surface wavetrains. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences 363, 525–546 (1978).
12. Stability of weakly nonlinear deep-water waves in two and three dimensions. Journal of Fluid Mechanics 105, 177–191 (1981).
13. Water waves, nonlinear Schrödinger equations and their solutions. Journal of the Australian Mathematical Society Series B 25, 16–43 (1983).
14. The nonlinear dynamics of rogue waves and holes in deep-water gravity wave trains. Phys. Lett. A 275, 386–393 (2000).
15. Are rogue waves robust against perturbations? Physics Letters A 373, 3997–4000 (2009).
16. Statistical properties of mechanically generated surface gravity waves: a laboratory experiment in a three-dimensional wave basin. Journal of Fluid Mechanics 627, 235–257 (2009).
17. An experimental study of spatial evolution of statistical parameters in a unidirectional narrow-banded random wavefield. Journal of Geophysical Research: Oceans 114 (2009).
18. Evolution of weakly nonlinear random directional waves: laboratory experiments and numerical simulations. Journal of Fluid Mechanics 664, 313–336 (2010).
19. Nonlinear Schrödinger invariants and wave statistics. Physics of Fluids 22, 036601 (2010).
20. Rogue wave observation in a water wave tank. Phys. Rev. Lett. 106, 204502 (2011).
21. Super rogue waves: Observation of a higher-order breather in water waves. Phys. Rev. X 2, 011015 (2012).
22. Lagrangian kinematics of steep waves up to the inception of a spilling breaker. Physics of Fluids 26, 016601 (2014).
23. Peregrine breather revisited. Physics of Fluids 25, 051701 (2013).
24. On certain properties of the compact Zakharov equation. Journal of Fluid Mechanics 748, 692–711 (2014).
25. Stability of periodic waves of finite amplitude on the surface of a deep fluid. J. Appl. Mech. Tech. Phys. 9, 190–194 (1968).
26. Compact equation for gravity waves on deep water. JETP Lett. 93, 701–705 (2011).
27. On the highest non-breaking wave in a group: fully nonlinear water wave breathers versus weakly nonlinear theory. Journal of Fluid Mechanics 735, 203–248 (2013).
28. Super-rogue waves in simulations based on weakly nonlinear and fully nonlinear hydrodynamic equations. Phys. Rev. E 88, 012909 (2013).
29. On the kurtosis of ocean waves in deep water. Journal of Fluid Mechanics 782, 25–36 (2015).
30. The effect of third-order nonlinearity on statistical properties of random directional waves in finite depth. Nonlinear Processes in Geophysics 16, 131–139 (2009).
31. Distributions of envelope and phase in wind waves. Journal of Physical Oceanography 38, 2784–2800 (2008).
32. Narrow-band nonlinear sea waves. Journal of Geophysical Research: Oceans 85, 1548–1552 (1980).
33. Wave-height distributions and nonlinear effects. Ocean Engineering 34, 1631–1649 (2007).
34. Field measurements of rogue water waves. Journal of Physical Oceanography 44, 2317–2335 (2014).
35. ADCP measurements of extreme water waves off the west coast of Ireland. In Proceedings of the 26th (2016) International Offshore and Polar Engineering Conference, Rhodes, Greece, June 26 - July 2, 2016 (International Society of Offshore and Polar Engineers, 2016).
36. A high-order spectral method for the study of nonlinear gravity waves. Journal of Fluid Mechanics 184, 267–288 (1987).
37. Evidences of the existence of freak waves. In Rogue Waves, 129–140 (2001).
38. Rogue wave occurrence and dynamics by direct simulations of nonlinear wave-field evolution. Journal of Fluid Mechanics 720, 357–392 (2013).
39. Nonlinear effects on wave envelope and phase. J. Waterway, Port, Coastal and Ocean Eng. 116, 79–100 (1990).
40. On the extension of the freak wave warning system and its verification. Tech. Memo 588, ECMWF (2009).
41. Statistics of nonlinear wave crests and groups. Ocean Engineering 33, 1589–1622 (2006).
42. Crossing sea state and rogue wave probability during the Prestige accident. Journal of Geophysical Research: Oceans 120 (2015).
43. Wave Mechanics for Ocean Engineering (Elsevier Sciences, Oxford, 2000).
44. Generalized Boccotti distribution for nonlinear wave heights. Ocean Engineering 74, 101–106 (2013).
45. On some consequences of the canonical transformation in the Hamiltonian theory of water waves. Journal of Fluid Mechanics 637, 1–44 (2009).
46. On a random time series analysis valid for arbitrary spectral shape. Journal of Fluid Mechanics 759, 236–256 (2014).
47. On the estimation of the kurtosis in directional sea states for freak wave forecasting. Journal of Physical Oceanography 41, 1484–1497 (2011).
48. Evolution of kurtosis for wind waves. Geophysical Research Letters 36 (2009).
49. On kurtosis and occurrence probability of freak waves. Journal of Physical Oceanography 36, 1471–1483 (2006).
50. Effect of the initial spectrum on the spatial evolution of statistics of unidirectional nonlinear random waves. Journal of Geophysical Research: Oceans 115 (2010).
51. Evolution of a random directional wave and freak wave occurrence. Journal of Physical Oceanography 39, 621–639 (2009).
52. Large-time evolution of statistical moments of wind-wave fields. Journal of Fluid Mechanics 726, 517–546 (2013).
53. Evaluation of skewness and kurtosis of wind waves parameterized by JONSWAP spectra. Journal of Physical Oceanography 44, 1582–1594 (2014).
54. The intermediate water depth limit of the Zakharov equation and consequences for wave prediction. Journal of Physical Oceanography 37, 2389–2400 (2007).
55. The initialization of nonlinear waves using an adjustment scheme. Wave Motion 32, 307–317 (2000).
56. The shape of large surface waves on the open sea and the Draupner new year wave. Applied Ocean Research 26, 73–83 (2004).
57. Radiation stresses in water waves: a physical discussion, with applications. Deep-Sea Research 11, 529–562 (1964).
58. Did the Draupner wave occur in a crossing sea? Proceedings of the Royal Society A: Mathematical, Physical and Engineering Science (2011), rspa.2011.0049.
59. Freak waves in crossing seas. The European Physical Journal - Special Topics 185, 45–55 (2010).
60. Exact theory of two-dimensional self-focusing and one-dimensional self-modulation of waves in nonlinear media. Soviet Physics JETP 34, 62–69 (1972).
61. Wave crest distributions: Observations and second-order theory. Journal of Physical Oceanography 30, 1931–1943 (2000).
62. User manual and system documentation of WAVEWATCH III version 4.18. Tech. Note 316, NOAA/NWS/NCEP/MMAB (2014).
63. Expected shape of extreme waves in storm seas. In ASME 2007 26th International Conference on Offshore Mechanics and Arctic Engineering, OMAE2007-29073 (American Society of Mechanical Engineers, 2007).
64. Validation of a thirty year wave hindcast using the climate forecast system reanalysis winds. Ocean Modelling 70, 189–206 (2013).
65. A new numerical method for surface hydrodynamics. Journal of Geophysical Research 92, 11803–11824 (1987).
66. A method of studying nonlinear random field of surface gravity waves by direct numerical simulation. Fluid Dynamics Research 28, 41–60 (2001).
67. Predictability of rogue events. Phys. Rev. Lett. 114, 213901 (2015).
68. Random walks across the sea: the origin of rogue waves? arXiv:1507.08102v1 (2015).
69. Are rogue waves really unexpected? Journal of Physical Oceanography 46, 1495–1508 (2016).
Acknowledgements
This work is supported by the European Research Council (ERC) under the research projects ERC-2011-AdG 290562-MULTIWAVE and ERC-2013-PoC 632198-WAVEMEASUREMENT, and Science Foundation Ireland under grant number SFI/12/ERC/E2227. F.F. is grateful to George Z. Forristall and M. Aziz Tayfun for sharing the Draupner wave measurements utilized in this study. F.F. also thanks Michael Banner, Predrag Cvitanovic and M. Aziz Tayfun for discussions on rogue waves and nonlinear wave statistics. F.F. is grateful to M. Aziz Tayfun for revising an early draft of the manuscript. F.D. is grateful to ESBI for sharing the Killard wave measurements. F.D. and J.B. are grateful to Claudio Viotti for the development of the HOS code used in this study. The Andrea wave data were collected by ConocoPhillips Skandinavia AS. The numerical simulations were performed on the Fionn cluster at the Irish Centre for High-end Computing (ICHEC).
Author information
1. School of Civil & Environmental Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332, USA
• Francesco Fedele
2. School of Electrical & Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332, USA
• Francesco Fedele
3. University College Dublin, School of Mathematics and Statistics, Belfield, Dublin 4, Ireland
• Joseph Brennan
• , Sonia Ponce de León
• & Frédéric Dias
4. Institut FEMTO-ST CNRS-Université de Franche-Comté UMR 6174 France
• John Dudley
The concept and design were provided by F.F., who coordinated the scientific effort together with F.D. S.P.D.L. and J.B. performed the numerical simulations and developed specific codes for the analysis. The wave statistical analysis was performed by F.F. together with J.B. The overall supervision was provided by F.F. and F.D. F.D. and J.D. made ongoing incisive intellectual contributions. All authors participated in the analysis and interpretation of results and in the writing of the manuscript.
Competing interests
The authors declare no competing financial interests.
Corresponding author
Correspondence to Frédéric Dias.
Sunday, August 30, 2015
Sharpening of Hawking's argument
I have already written about the latest argument of Hawking for solving the information paradox associated with blackholes (see this and this).
There is now a popular article explaining the intuitive picture behind Hawking's proposal. The blackhole horizon would involve a tangential flow of light, and particles of the infalling matter would induce supertranslations on the pattern of this light, thus encoding information about their properties into it. This light would then be radiated away as an analog of Hawking radiation and would carry the information out.
The objection would be that in GRT the horizon is in no way special - it is just a coordinate singularity. The curvature tensor does not diverge, and the Einstein tensor and Ricci scalar vanish. This argument has been used in the firewall debates to claim that nothing special should occur as the horizon is traversed. Why, then, would light rotate around it? There is no reason for this!
The answer in TGD would be obvious: for the TGD analog of a blackhole, the horizon is replaced with a light-like 3-surface at which the induced metric becomes Euclidian. The horizon becomes analogous to a light front carrying not only photons but all kinds of elementary particles. Particles do not fall inside this surface but remain at it!
The objection now is that the photons of a light front should propagate in the direction normal to it, not parallel to it. The point is, however, that this light-like 3-surface is the surface at which the induced 4-metric becomes degenerate: hence massless particles can live on it.
Wednesday, August 26, 2015
TGD view about black holes and Hawking radiation: part II
In this second part I discuss the TGD view about blackholes and Hawking radiation. There are several new elements involved, but concerning blackholes the most relevant is the assignment of Euclidian space-time regions to the lines of generalized Feynman diagrams, implying that blackhole interiors also correspond to regions of this kind. Negentropy Maximization Principle is also an important element and predicts that the number-theoretically defined blackhole negentropy can only increase. The real surprise was that the temperature of the variant of Hawking radiation at the flux tubes of the proton-Sun system is room temperature! Could the TGD variant of Hawking radiation be a key player in quantum biology?
The basic ideas of TGD relevant for blackhole concept
My basic strategy is to not assume anything that is not necessitated by experiment or implied by general theoretical assumptions - these of course represent the subjective element. The basic assumptions/predictions of TGD relevant for the present discussion are the following.
1. Space-times are 4-surfaces in H = M4 × CP2, and ordinary space-time is replaced with many-sheeted space-time. This solves what I call the energy problem of GRT by lifting gravitationally broken Poincare invariance to an exact symmetry at the level of the imbedding space H.
A GRT-type description is an approximation obtained by lumping the space-time sheets together into a single region of M4, with the various fields given by sums of the induced fields at the space-time surfaces, geometrized in terms of the geometry of H.
The space-time surface has both Minkowskian and Euclidian regions. Euclidian regions are identified in terms of what I call generalized Feynman/twistor diagrams. The 3-D boundaries between Euclidian and Minkowskian regions have a degenerate induced 4-metric, and I call them light-like orbits of partonic 2-surfaces or light-like wormhole throats; they are analogous to blackhole horizons and actually replace them. The interiors of blackholes are replaced with the Euclidian regions, and every physical system is characterized by a region of this kind.
Euclidian regions are identified as slightly deformed pieces of CP2 connecting two Minkowskian space-time regions. Partonic 2-surfaces defining their boundaries are connected to each other by magnetic flux tubes carrying monopole flux.
Wormhole contacts connect two Minkowskian space-time sheets already at elementary particle level, and appear in pairs by the conservation of the monopole flux. Flux tube can be visualized as a highly flattened square traversing along and between the space-time sheets involved. Flux tubes are accompanied by fermionic strings carrying fermion number. Fermionic strings give rise to string world sheets carrying vanishing induced em charged weak fields (otherwise em charge would not be well-defined for spinor modes). String theory in space-time surface becomes part of TGD. Fermions at the ends of strings can get entangled and entanglement can carry information.
2. Strong form of General Coordinate Invariance (GCI) states that light-like orbits of partonic 2-surfaces on one hand and space-like 3-surfaces at the ends of causal diamonds on the other hand provide equivalent descriptions of physics. The outcome is that partonic 2-surfaces and string world sheets at the ends of CD can be regarded as basic dynamical objects.
Strong form of holography states the correspondence between quantum description based on these 2-surfaces and 4-D classical space-time description, and generalizes AdS/CFT correspondence. Conformal invariance is extended to the huge super-symplectic symmetry algebra acting as isometries of WCW and having conformal structure. This explains why 10-D space-time can be replaced with ordinary space-time and 4-D Minkowski space can be replaced with partonic 2-surfaces and string world sheets. This holography looks very much like the one we are accustomed with!
3. Quantum criticality of the TGD Universe fixes the value(s) of the only coupling strength of TGD (Kähler coupling strength) as an analog of critical temperature. Quantum criticality is realized in terms of an infinite hierarchy of sub-algebras of the super-symplectic algebra acting as isometries of WCW, the "world of classical worlds" consisting of 3-surfaces or, by holography, of the preferred extremals associated with them.
A given sub-algebra is isomorphic to the entire algebra, and its conformal weights are n ≥ 1 multiples of those of the entire algebra. This algebra acts as conformal gauge transformations, whereas the generators with conformal weights m < n act as dynamical symmetries defining an infinite hierarchy of simply laced Lie groups of rank n−1 acting as dynamical symmetry groups given by the McKay correspondence, so that the number of degrees of freedom becomes finite. This relates very closely to the inclusions of hyperfinite factors - WCW spinors provide a canonical representation for them.
This hierarchy corresponds to a hierarchy of effective Planck constants heff = n×h defining an infinite number of phases identified as dark matter. For these phases the Compton length and time are scaled up by n, so that they give rise to macroscopic quantum phases. Superconductivity is one example of such a phase - the charge carriers could be dark variants of ordinary electrons. Dark matter appears at quantum criticality, and this serves as an experimental way to produce dark matter. In living matter, dark matter identified in this manner would play a central role: magnetic bodies carrying dark matter at their flux tubes would control ordinary matter and carry information.
4. I started the work with the hierarchy of Planck constants from the proposal of Nottale stating that it makes sense to talk about a gravitational Planck constant hgr = GMm/v0, v0/c ≤ 1 (the interpretation of the symbols should be obvious). Nottale found that the orbits of the inner and outer planets could be modelled reasonably well by applying Bohr quantization to planetary orbits, with the values of the velocity parameter differing by a factor 1/5 (see the numerical sketch after this list). In the TGD framework hgr would be associated with magnetic flux tubes mediating the gravitational interaction between the Sun, with mass M, and a planet or any object, say an elementary particle, with mass m. The matter at the flux tubes would be dark, as would the gravitons involved. The Compton length of a particle would be given by GM/v0 and would not depend on the mass of the particle at all.
The identification hgr = heff is an additional hypothesis motivated by quantum biology, in particular by the identification of biophotons as decay products of dark photons satisfying this condition. As a matter of fact, one can also talk about hem assignable to electromagnetic interactions: its values are much lower. The hypothesis is that when the perturbative expansion for a two-particle system no longer converges, a phase transition increasing the value of the Planck constant occurs, so that the coupling strength, proportional to 1/heff, decreases. This is one possible interpretation of quantum criticality. TGD provides a detailed geometric interpretation for the space-time correlates of quantum criticality.
Macroscopic gravitational bound states are not possible in TGD without the assumption that the effective string tension associated with fermionic strings, dictated by the strong form of holography, is proportional to 1/heff². Without this assumption the bound states would have a size scale of order the Planck length, since for longer systems the string energy would be huge. heff = hgr makes astroscopic quantum coherence unavoidable. Ordinary matter is condensed around dark matter. The counterparts of blackholes would be systems consisting of dark matter only.
5. Zero energy ontology (ZEO) is a central element of TGD. There are many motivations for it. For instance, Poincare invariance in the standard sense cannot hold, since in standard cosmology energy is not conserved. The interpretation is that the various conserved quantum numbers are length-scale dependent notions.
Physical states are zero energy states with positive and negative energy parts assigned to ends of space-time surfaces at the light-like boundaries of causal diamonds (CDs). CD is defined as Cartesian products of CP2 with the intersection of future and past directed lightcones of M4. CDs form a fractal length scale hierarchy. CD defines the region about which single conscious entity can have conscious information, kind of 4-D perceptive field. There is a hierarchy of WCWs associated with CDs. Consciously experienced physics is always in the scale of given CD.
Zero energy states, identified as formally purely classical WCW spinor fields, replace positive energy states and are analogous to pairs of initial and final states; the crossing symmetry of quantum field theories gives the mathematical motivation for their introduction.
6. Quantum measurement theory can be seen as a theory of consciousness in ZEO. Conscious observer or self as a conscious entity becomes part of physics. ZEO gives up the assumption about unique universe of classical physics and restricts it to the perceptive field defined by CD.
In each quantum jump a re-creation of the Universe occurs. Subjectively experienced time corresponds to state function reductions at the fixed, passive boundary of the CD, which leave both the boundary and the state at it invariant. The state at the opposite, active boundary changes, and its position also changes, so that the CD increases reduction by reduction while nothing happens at the passive boundary. This gives rise to the experienced flow of geometric time, since the distance between the tips of the CD increases and the size of the space-time surfaces in the quantum superposition increases. This sequence of state function reductions is the counterpart of the unitary time evolution of ordinary quantum theory.
Self "dies" as the first state function reduction to the opposite boundary of CD meaning re-incarnation of self at it and a reversal of the arrow of geometric time occurs: CD size increases now in opposite time direction as the opposite boundary of CD recedes to the geometric past reduction by reduction.
Negentropy Maximization Principle (NMP) defines the variational principle of state function reduction. Density matrix of the subsystem is the universal observable and the state function reduction leads to its eigenspaces. Eigenspaces, not only eigenstates as usually.
Number-theoretic entropy makes sense for algebraic extensions of the rationals and can be negative, unlike ordinary entanglement entropy. NMP can therefore lead to a generation of NE if the entanglement corresponds to a unitary entanglement matrix, so that the density matrix of the final state is a higher-dimensional unit matrix. Another possibility is that the entanglement matrix is algebraic but its diagonalization is not possible in the algebraic extension of the rationals used. This is expected to reduce the rate of reduction, since a phase transition increasing the size of the extension is needed.
The weak form of NMP does not demand that the negentropy gain be maximal: this allows the conscious entity responsible for the reduction to decide whether or not to maximally increase the NE resources of the Universe. It can also allow a larger NE increase than otherwise. This freedom brings in the quantum correlates of ethics, morality, and good and evil. The p-adic length scale hypothesis and the existence of preferred p-adic primes follow from the weak form of NMP, and one ends up naturally with adelic physics.
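To make point 4 above concrete, the following sketch applies Bohr quantization with hgr = GMm/v0 to circular orbits: the condition mvr = n·hgr together with v² = GM/r gives rn = n²GM/v0², independently of the mass m. Nottale's value v0 ≈ 144.7 km/s and the quantum-number assignments for the inner planets are assumptions taken from his model.

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m
v0 = 1.447e5         # Nottale's velocity parameter, m/s (assumed value)

# Bohr quantization with h_gr = G*M*m/v0: m*v*r = n*h_gr and v**2 = G*M/r
# combine to give r_n = n**2 * G * M / v0**2, independent of the mass m.
for n, planet in ((3, "Mercury"), (4, "Venus"), (5, "Earth")):
    r_n = n**2 * G * M_sun / v0**2
    print(f"n = {n} ({planet}): r = {r_n / AU:.2f} AU")
```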
The analogs of blackholes in TGD
Could blackholes have any analog in TGD? What about Hawking radiation? The following speculations are inspired by the above general vision.
1. Ordinary blackhole solutions are not appropriate in TGD. The interior space-time sheet of any physical object is replaced with an Euclidian space-time region - also that of a blackhole, by a perturbation argument based on the observation that if one requires that the radial component of the blackhole metric be finite, the horizon becomes a light-like 3-surface analogous to the light-like orbit of a partonic 2-surface, and the metric in the interior becomes Euclidian.
2. The analog of a blackhole can be seen as a limiting case of an ordinary astrophysical object, which already has blackhole-like properties due to the presence of heff = n×h dark matter particles, which cannot appear in the same vertices with visible matter. The ideal analog of a blackhole consists of dark matter only and is assumed to satisfy hgr = heff as already discussed. It corresponds to a region with a radius equal to the Compton length of an arbitrary particle, R = GM/v0 = rS/2v0, where rS is the Schwarzschild radius. A macroscopic quantum phase is in question, since the Compton radius of a particle does not depend on its mass. The blackhole limit would correspond to v0/c → 1 and dark matter dominance. This would give R = rS/2. The naive expectation would be R = rS (maybe a factor of two is missing somewhere: blame me!).
3. NMP implies that information cannot be lost in the formation of a blackhole-like state but tends to increase. Matter becomes totally dark, and the NE with the partonic surfaces of the external world is preserved or increases. The ingoing matter does not fall to a mass point but resides at the partonic 2-surface, which can have an arbitrarily large area. It can also have wormholes connecting different regions of a spherical surface and in this manner increase its genus. NMP, negentropy, and negentropic entanglement between heff = n×h dark matter systems would become the basic notions instead of the second law and entropy.
4. The objection would be that in GRT the horizon is in no way special - just a coordinate singularity: the curvature tensor does not diverge, and the Einstein tensor and Ricci scalar vanish. This argument has been used in the firewall debates to claim that nothing special should occur as the horizon is traversed. So: why would light rotate around it? No reason for this! The answer in TGD would be obvious: for the TGD analog of a blackhole the horizon is replaced with a light-like 3-surface at which the induced metric becomes Euclidian. The horizon becomes analogous to a light front carrying not only photons but all kinds of elementary particles. Particles do not fall inside this surface but remain at it!
5. The replacement of the second law with NMP leads one to ask whether a generalization of blackhole thermodynamics makes sense in the TGD Universe. Since blackhole thermodynamics characterizes Hawking radiation, the generalization could make sense at least if an analog of Hawking radiation exists. Note that the geometric variant of the second law also makes sense.
Could the analog of Hawking radiation be generated in the first state function reduction to the opposite boundary, and perhaps be assigned with the sudden increase of the radius of the partonic 2-surface defining the horizon? Could this burst of energy release the energy compensating for the generation of gravitational binding energy? This burst would however have a totally different interpretation: even gamma ray bursts from quasars could be considered as candidates for it, and the temperature would be totally different from the extremely low general relativistic Hawking temperature of order
TGR = hbar/(8π GM) ,
which corresponds to an energy assignable to a wavelength equal to 4π times the Schwarzschild radius. For the Sun, with Schwarzschild radius rS = 2GM = 3 km, one has TGR = 3.2×10⁻¹¹ eV.
One can of course have fun with formulas to see whether the generalization of blackhole thermodynamics obtained by the replacement h → hgr could make sense physically. Also the replacement rS → R, where R is the actual radius of the star, will be made.
1. The blackhole temperature can be formally identified in terms of the surface gravity:
T = (hgr/hbar) × [GM/(2π R²)] = [hgr/h] × [rS²/R²] × TGR , giving T/m = [1/(4π v0)] × [rS²/R²] .
For the Sun, with radius R = 6.96×10⁵ km, one has T/m = 3.2×10⁻¹¹, giving T = 3×10⁻² eV for the proton. This is 9 orders of magnitude higher than the ordinary Hawking temperature. Amazingly, this temperature equals room temperature! Is this a mere accident? If one takes seriously TGD inspired quantum biology, in which quantum gravity plays a key role (see this), this does not seem to be the case (see also the consistency check after this list). Note that for the electron the temperature would correspond to an energy of 1.5×10⁻⁵ eV, which corresponds to a 4.5 GHz frequency for the ordinary Planck constant.
It must however be made clear that the value of v0 for dark matter could differ from that deduced by assuming that the entire gravitational mass is dark. For M → MD = kM and v0 → k¹ᐟ²v0 the orbital radii remain unchanged, but the velocity of a dark matter object at the orbit scales to k¹ᐟ²v0. This kind of scaling is suggested by the fact that the value of hgr seems to be too large compared with the value implied by the identification of biophotons as decay products of dark photons with heff = hgr (some arguments suggest the value k ≈ 2×10⁻⁴).
Note that for the radius R = rS/(2v0π) the thermal energy exceeds the rest mass of the particle. For neutron stars this limit might be reached.
2. Blackhole entropy
SGR = A/(4 hbar G) = 4π GM²/hbar = 4π [M²/MPl²]
would be replaced with the negentropy for dark matter, which makes sense also for systems containing both dark and ordinary matter. The negentropy N(m) associated with a flux tube of a given type would be a fraction h/hgr of the total area of the horizon in units of Planck area:
N(m) = [h/hgr] × A/(4 hbar G) = [h/hgr] × [R²/rS²] × SGR = v0 × [M/m] × [R²/rS²] .
The dependence on m makes sense, since a given flux tube type, characterized by the mass m determining the corresponding value of hgr, has its own negentropy, and the total negentropy is the sum over the particle species. The negentropy of the Sun is numerically much smaller than the corresponding blackhole entropy (see the consistency check after this list).
3. The horizon area is proportional to (GM/v0)² ∝ heff² and should increase in discrete jumps, scaling as n².
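As a rough consistency check of the numbers quoted in points 1 and 2, the following sketch simply does the arithmetic on the values stated in the text; the velocity parameter v0/c ≈ 4.6×10⁻⁴ is an assumed Nottale-type value, not one derived here.

```python
import numpy as np

# point 1: temperatures implied by the quoted ratio T/m = 3.2e-11 for the Sun
ratio = 3.2e-11                      # T/m as quoted in the text
m_proton_eV, m_electron_eV = 9.38e8, 5.11e5
T_proton = ratio * m_proton_eV       # ~3.0e-2 eV, comparable to kT at room temperature
T_electron = ratio * m_electron_eV   # ~1.6e-5 eV
print(f"T(proton) ~ {T_proton:.1e} eV, T(electron) ~ {T_electron:.1e} eV")

# point 2: blackhole entropy vs the proposed dark negentropy N(m) for the Sun and proton
M_sun, M_planck, m_proton = 1.989e30, 2.176e-8, 1.673e-27   # all in kg
S_GR = 4.0 * np.pi * (M_sun / M_planck)**2                  # ~1e77
v0 = 4.6e-4                                                 # assumed v0/c
R_over_rS = 6.96e5 / 3.0                                    # solar radius / rS, both in km
N_p = v0 * (M_sun / m_proton) * R_over_rS**2                # ~1e64, indeed << S_GR
print(f"S_GR ~ {S_GR:.1e}, N(proton) ~ {N_p:.1e}")
```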
How does the analog of a blackhole evolve in time? The evolution consists of sequences of repeated state function reductions at the passive boundary of the CD, followed by the first reduction to the opposite boundary, followed by a similar sequence. These sequences are analogs of unitary time evolutions. This defines the analog of a blackhole state as a repeatedly re-incarnating conscious entity with a CD whose size increases gradually. During a given sequence of state function reductions the passive boundary has constant size. About the active boundary one cannot say this, since it corresponds to a superposition of quantum states.
The reduction sequences consist of life cycles at a fixed boundary, and the size of a blackhole-like state, as of any state, is expected to increase in discrete steps if it participates in the cosmic expansion in an average sense. This requires that the mass of the blackhole-like object gradually increases. The interpretation is that ordinary matter gradually transforms to dark matter and increases the dark mass M = R/G.
Cosmic expansion is not observed for the sizes of individual astrophysical objects, which only co-move. The solution of the paradox is that they increase their size suddenly, in state function reductions. This hypothesis allows one to realize the Expanding Earth hypothesis in the TGD framework (see this). Number-theoretically preferred scalings of the blackhole radius come as powers of 2, and this would be the scaling associated with the Expanding Earth hypothesis.
See the chapter "Criticality and dark matter" or the article "TGD view about black holes and Hawking radiation".
TGD view about blackholes and Hawking radiation: part I
Hawking's most recent revelation came at the Hawking radiation conference held at the KTH Royal Institute of Technology in Stockholm. The title of Bee's posting about what might have been revealed is "Hawking proposes new idea for how information might escape from black holes". Lubos also has - a rather aggressive - blog post about the talk. A collaboration of Hawking, Andrew Strominger and Malcolm Perry is behind the claim, and the work should be published within a few months.
The first part of the posting gives a critical discussion of the existing approach to blackholes and Hawking radiation. The intention is to demonstrate that what is at issue is a pseudo problem following from the failure of General Relativity below the blackhole horizon.
Is information lost or not in blackhole collapse?
The basic problem is that classically the collapse to a blackhole seems to destroy all information about the collapsing matter. The outcome is just an infinitely dense mass point. There is also a theorem of classical GRT stating that a blackhole has no hair: a blackhole is characterized by only a few conserved charges.
Hawking predicted that a blackhole loses its mass by generating radiation, which looks thermal. As the blackhole radiates its mass away, all information about the material that entered it seems to be lost. If one believes in standard quantum theory and unitary evolution preserving information - and also forgets standard quantum theory's prediction that state function reductions destroy information - one has a problem. Does the information really disappear? Or is the GRT description incapable of coping with the situation? Could information find a new representation?
Superstring models and the AdS/CFT correspondence have inspired the proposal that a hologram results at the horizon, and this hologram somehow catches the information by defining the hair of the blackhole. Since the radius of the horizon is proportional to the mass of the blackhole, one can however wonder what happens to this information as the radius shrinks to zero when all the mass has been Hawking radiated away.
What Hawking suggests is that a new kind of symmetry known as super-translations - a notion originally introduced by Bondi and Metzner - could somehow save the situation. Andrew Strominger has recently discussed the notion. The information would be "stored in super-translations". Unfortunately this statement says nothing to me, nor did it to Bee or the New Scientist reporter. The idea however seems to be that the information carried by the Hawking radiation emanating from the blackhole interior would be caught by the hologram defined by the blackhole horizon.
Super-translation symmetry acts at the surface of a sphere with infinite radius in asymptotically flat space-times looking like empty Minkowski space in very distant regions. The action would be translations along sphere plus Poincare transformations.
What comes to mind in the TGD framework are the conformal transformations of the boundary of the 4-D lightcone, which act as scalings of the radius of the sphere and as conformal transformations of the sphere. Translations however move the tip of the light-cone, and Lorentz transformations transform the sphere to an ellipsoid, so that one should restrict to the rotation subgroup of the Lorentz group. Besides this, TGD allows a huge group of symplectic transformations of δCD × CP2 acting as isometries of WCW and having the structure of a conformal algebra, with generators labelled by conformal weights.
Sharpening of the argument of Hawking
The objection would be that in GRT the horizon is in no way special - it is just a coordinate singularity. The curvature tensor does not diverge, and the Einstein tensor and Ricci scalar vanish. This argument has been used in the firewall debates to claim that nothing special should occur as the horizon is traversed. Why would light rotate around it? I see no reason for this! The answer in the TGD framework would be obvious: for the TGD analog of a blackhole the horizon is replaced with a light-like 3-surface at which the induced metric becomes Euclidian. The horizon becomes analogous to a light front carrying not only photons but all kinds of elementary particles. Particles do not fall inside this surface but remain at it!
What are the problems?
My fate is to be an aggressive dissident listened to by no one, and I find it natural to continue in the role of angry old man. Be cautious: I am arrogant, I can bite, and my bite is poisonous!
1. With all due respect to the Big Guys, to me the problem looks like a pseudo problem caused basically by the breakdown of classical GRT. Irrespective of whether Hawking radiation is generated, the information about the matter (apart from mass and some charges) is lost if the matter indeed collapses to a single infinitely dense point. This is of course very unrealistic, and the question should be: how should we proceed beyond GRT?
The blackhole is simply too strong an idealization, and it is no wonder that Hawking's calculation using the blackhole metric as a background gives rise to blackbody radiation. One might hope that Hawking radiation is a genuine physical phenomenon, and that it might somehow carry the information by not being genuinely thermal radiation. Here a theory of quantum gravitation might help. But we do not have one!
2. What do we know about blackholes? We know that there are objects which can be well described by the exterior Schwarzschild metric. Galactic centers are regarded as candidates for giant blackholes. Binary systems in which one member is invisible are candidates for stellar blackholes. One can however ask whether these candidates actually consist of dark matter rather than being blackholes. Unfortunately, we do not understand what dark matter is!
3. Hawking radiation is extremely weak and there is no experimental evidence pro or con. Its existence assumes the existence of a blackhole, which presumably represents the failure of classical GRT. Therefore we might be seeing a lot of trouble and inspired heated debates about something which does not exist at all! This includes blackholes, Hawking radiation, and various problems such as the firewall paradox.
There are also profound theoretical problems.
1. Contrary to the intensive media hype of the last three decades, we still do not have a generally accepted theory of quantum gravity. Superstring models and M-theory failed to predict anything at the fundamental level, and just postulate an effective quantum field theory limit, which assumes the analog of GRT at the level of 10-D or 11-D target space to define the spontaneous compactification as a solution of this GRT type theory. Not much is gained.
AdS/CFT correspondence is an attempt to do something in the absence of this kind of theory, but it involves 10- or 11-D blackholes and does not help much. Reality looks much simpler to an innocent non-academic outsider like me. Effective field theorizing allows intellectual laziness, and many problems of present-day physics will probably be seen in the future as having been caused by this lazy approach, which avoids attempts to build explicit bridges between physics at different scales. Something very similar has occurred in hadron physics and nuclear physics, and one has a kind of Augean stable to clean up before one can proceed.
2. A mathematically well-defined notion of information is lacking. We can talk about thermodynamical entropy - a single particle observable - and also about entanglement entropy - basically a 2-particle observable. We do not have a genuine notion of information, and the second law predicts that the best one can achieve is no information at all!
Could it be that our view of information as a single particle characteristic is wrong? Could information be associated with entanglement and be a 2-particle characteristic? Could information reside in the relationship of the object with the external world, in the communication line? Not inside the blackhole, not at the horizon, but in the entanglement of the blackhole with the external world.
3. We do not have a theory of quantum measurement. The deterministic unitary time evolution of the Schrödinger equation and the non-deterministic state function reduction are in blatant conflict. The Copenhagen interpretation escapes the problem by saying that no objective reality/realities exist. An easy trick once again! A closely related Pandora's box is that experienced time and geometric time are very different, but we pretend that this is not the case.
The only way out is to bring the observer into quantum physics: this requires nothing less than a quantum theory of consciousness. But the gurus of theoretical physics have shown no interest in consciousness. It is much easier and much more impressive to apply mechanical algorithms to produce complex formulas. If one takes consciousness seriously, one ends up with the question about the variational principle of consciousness. Yes, your guess was correct! Negentropy Maximization Principle! Conscious experience tends to maximize conscious information gain. But how is the information represented?
In the second part I will discuss TGD view about blackholes and Hawking radiation.
Tuesday, August 25, 2015
Saturday, August 22, 2015
Does color deconfinement really occur?
Bee had a nice blog posting related to the origin of hadron masses and the phase transition from color confinement to quark-gluon plasma, involving also the restoration of chiral symmetry in the sigma model description. In the ideal situation the outcome should be a black body spectrum with no correlations between radiated particles.
The situation is however not like this. Some kind of transition occurs and produces a phase which has much lower viscosity than expected for quark-gluon plasma. The transition also occurs in a much smoother manner than expected. And there are strong correlations between oppositely charged particles - charge separation occurs. The simplest characterization of these events would be in terms of decaying strings emitting particles of opposite charge from their ends. Conventional models do not predict anything like this.
Some background
The masses of current quarks are very small - something like 5-20 MeV for u and d. These masses explain only a minor fraction of the mass of the proton. The old fashioned quark model assumed that quark masses are much bigger: the mass scale was roughly one third of the nucleon mass. These quarks were called constituent quarks and - if they are real - one can wonder how they relate to current quarks.
The sigma model provides a phenomenological description for the massivation of hadrons in the confined phase. The model is highly analogous to the Higgs model. The fields are meson fields and baryon fields. The neutral pion and sigma meson develop vacuum expectation values, and this implies breaking of chiral symmetry so that nucleons become massive. The existence of the sigma meson is still questionable.
In a transition to quark-gluon plasma one expects that mesons and protons disappear totally. The sigma model however suggests that the pion and proton do not disappear but become massless. Hence the two descriptions might be inconsistent.
The authors of the article assume that the pion continues to exist as a massless particle in the transition to quark-gluon plasma. The presence of massless pions would yield a small effect at low energies, at which massless pions have a stronger interaction with the magnetic field than massive ones. The existence of a magnetic wave coherent over a rather large length scale is an additional assumption of the model: it corresponds to the assumption about large heff in the TGD framework, where color magnetic fields associated with M89 meson flux tubes replace the magnetic wave.
In the TGD framework the sigma model description is at best phenomenological, as is the Higgs mechanism. p-Adic thermodynamics replaces the Higgs mechanism, and the massivation of hadrons involves color magnetic flux tubes connecting valence quarks to color singlets. The flux tubes have a quark and an antiquark at their ends and are meson-like in this sense. Color magnetic energy contributes most of the mass of the hadron. A constituent quark would correspond to a valence quark identified as a current quark plus the associated flux tube, and its mass would be in good approximation the mass of the color magnetic flux tube.
There is also an analogy with the sigma model provided by twistorialization in the TGD sense. One can assign to a hadron (actually any particle) a light-like 8-momentum vector in the tangent space M8 = M4× E4 of M4× CP2 defining 8-momentum space. Masslessness implies that the ordinary mass squared corresponds to a constant E4 mass, which translates to a localization to a 3-sphere in E4. This localization is analogous to the symmetry breaking generating a constant value of the π0 field proportional to its mass in the sigma model.
An attempt to understand charge asymmetries in terms of the chiral magnetic wave and charge separation
One of the models trying to explain the charge asymmetries is in terms of what is called the chiral magnetic wave effect and the charge separation effect related to it. The experiment discussed by Bee attempts to test this model.
1. The so-called chiral magnetic wave effect and charge separation effects are proposed as an explanation for the linear dependence of the asymmetry of the so-called elliptic flow on charge asymmetry. Conventional models explain neither the charge separation nor this dependence. The chiral magnetic wave would be a coherent magnetic field generated by the colliding nuclei over a relatively long scale, even the length scale of the nuclei.
2. Charged pions interact with this magnetic field. The interaction energy is roughly h× eB/E, where E is the energy of the pion. In the phase with broken chiral symmetry the pion mass is non-vanishing and at low energy one has E = m in good approximation. In the chirally symmetric phase the pion is massless and the magnetic interaction energy becomes large at low energies. This could serve as a signature distinguishing between the chirally symmetric and asymmetric phases.
3. The experimenters try to detect this difference and report slight evidence for it. This is a change of the charge asymmetry of the so-called elliptic flow for positively and negatively charged pions, interpreted in terms of a charge separation fluctuation caused by the presence of a strong magnetic field assumed to lead to separation of chiral charges (left/right handedness). The average velocities of the pions are different, and the average velocity depends on the azimuthal angle in the collision plane: the second harmonic is in question (say sin(2φ)).
In the TGD framework the explanation of the unexpected behavior of the should-be quark-gluon plasma is in terms of M89 hadron physics.
1. A phase transition indeed occurs, but it transforms the quarks of the ordinary M107 hadron physics into those of M89 hadron physics. They are not free quarks but are confined to form M89 mesons. The M89 pion would have a mass of about 135 GeV (see the arithmetic sketch after this list). A naive scaling gives half of this mass, but it seems unfeasible that a pion-like state with this mass could have escaped attention - unless of course the unexpected behavior of quark-gluon plasma demonstrates its existence! This should be easy for a professional to check. Thus a phase transition would yield a scaled-up hadron physics with a mass scale a factor 512 higher than for ordinary hadron physics.
2. A stringy description applies to the decay of the flux tubes assignable to the M89 mesons to ordinary hadrons. This explains the charge separation effect and the deviation from the thermal spectrum.
3. In the experiments discussed in the article the cm energy for the nucleon-nucleon system associated with the colliding nuclei varied between 27-200 GeV, so that the creation of even an on-mass-shell M89 pion in a single collision of this kind is possible at the highest energies. If several nucleons participate simultaneously, even many-pion states are possible at the upper end of the interval.
4. These hadrons must have large heff = n× h since the collision time is roughly 5 femtoseconds, a factor of about 500 (not far from 512!) longer than the time scale associated with their masses if the M89 pion has the proposed mass of 135 GeV for the ordinary Planck constant and a scaling factor 2× 512 instead of 512, in principle allowed by the p-adic length scale hypothesis. There are some indications for a meson with this mass. The hierarchy of Planck constants allows at quantum criticality to zoom up the size of the much more massive M89 hadrons to nuclear size! The phase transition to dark M89 hadron physics could take place in the scale of the nucleus, producing several M89 pions decaying to ordinary hadrons.
5. The large value of heff would mean quantum coherence in the scale of the nucleus, explaining why the value of viscosity was much smaller than expected for quark-gluon plasma. The phase transition was also much smoother than expected. Since nuclei are many-nucleon systems and the Compton wavelength of the M89 pion would be of the order of nucleus size, one expects that the phase transition can take place in a wide collision energy range. At lower energies several nucleon pairs could provide the energy to generate an M89 pion. At higher energies even a single nucleon pair could provide the energy. The number of M89 pions should therefore increase with the nucleon-nucleon collision energy and induce the increase of the charge asymmetry and of the strength of the charge asymmetry of the elliptic flow.
6. Hydrodynamical behavior is essential in order to have low viscosity classically. Even more, the hydrodynamics had better be that of an ideal liquid. In the TGD framework the field equations have a hydrodynamic character as conservation laws for currents associated with various isometries of the imbedding space. The isometry currents define flow lines. Without further conditions the flow lines do not however integrate to a coherent flow: one has something analogous to a gas phase rather than a liquid, so that the mixing induced by the flow cannot be described by a smooth map.
To achieve this, a given isometry flow must make sense globally - that is, it must define the coordinate lines of a globally defined coordinate ("time" along flow lines). In this case one can assign to the flow a continuous phase factor as an order parameter varying along the flow lines. Super-conductivity is an example of this. The so-called Frobenius conditions guarantee this; at least the preferred extremals could have this complete integrability property, making TGD an integrable theory (see the appendix of the article at my homepage). In the present case, the dark flux tubes with the size scale of a nucleus would carry an ideal hydrodynamical flow with very low viscosity.
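As promised in item 1 above, here is a hypothetical arithmetic sketch of the p-adic scaling; the factor 512 and the pion masses are those quoted in the text, while the script itself is mine.

```python
# M89 and M107 mass scales differ by sqrt(2^107 / 2^89) = 2^9 = 512, so
# naively scaling the ordinary pion mass (~135 MeV) gives about 69 GeV,
# roughly half of the proposed 135 GeV M89 pion mass.
scaling = 2 ** ((107 - 89) // 2)          # p-adic mass-scale ratio = 512
m_pi_M107_MeV = 135.0                     # ordinary neutral pion mass
m_naive_GeV = scaling * m_pi_M107_MeV / 1000.0
print(f"scaling factor      : {scaling}")
print(f"naively scaled pion : {m_naive_GeV:.1f} GeV (about half of 135 GeV)")
```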
See the chapter New Particle Physics Predicted by TGD: Part I or the article Does color deconfinement really occur?.
Wednesday, August 19, 2015
Could one define dynamical homotopy groups in WCW?
For details see the chapter Recent View about Kähler Geometry and Spin Structure of "World of Classical Worlds" of "Quantum physics as infinite-dimensional geometry" or the article Could One Define Dynamical Homotopy Groups in WCW?.
Tuesday, August 18, 2015
Hydrogen sulfide superconducts at -70 degrees Celsius!
About negentropic entanglement as analog of an error correction code
In classical computation, the simplest manner to control errors is to take several copies of the bit sequences. In the quantum case the no-cloning theorem prevents this. Error correcting codes code n information qubits into the entanglement of N>n physical qubits. Additional constraints represent the subspace of n qubits as a lower-dimensional subspace of N qubits. This redundant representation is analogous to the use of parity bits. The failure of a constraint to be satisfied tells that an error is present and also the character of the error. This makes possible the automatic correction of the error if it is simple enough - such as the change of the phase of a spin state or a spin flip.
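As a contrast to the quantum case, here is a minimal sketch of the classical strategy just mentioned: protect one bit by three copies and correct a single flipped copy by majority vote (the function names are mine).

```python
# Classical repetition code: three redundant copies of one bit, with a
# majority vote correcting any single bit-flip error.
from collections import Counter

def encode(bit: int) -> list[int]:
    return [bit] * 3                                  # three redundant copies

def decode(copies: list[int]) -> int:
    return Counter(copies).most_common(1)[0][0]       # majority vote

word = encode(1)
word[0] ^= 1                                          # a single bit-flip error
assert decode(word) == 1                              # the error is corrected
print("corrected:", decode(word))
```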
Negentropic entanglement (NE) obviously gives rise to a strong reduction in the number of states of the tensor product. Consider a system consisting of two entangled subsystems of N1 and N2 spins. Without any constraints the number of states in the state basis is 2^N1× 2^N2 and one has N1+N2 qubits. The elements of the entanglement matrix can be written as E_{A,B}, with A = ⊗_{i=1..N1}(m_i,s_i) and B = ⊗_{k=1..N2}(m_k,s_k), in order to make the tensor product structure manifest. For simplicity one can consider the situation N1 = N2 = N.
The un-normalized general entanglement matrix is parametrized by 2× 2^2N independent real numbers, with each spin contributing two degrees of freedom. A unitary entanglement matrix is characterized by 2^2N real numbers. One might perhaps say that one has 2^2N real bits instead of almost 2^(2N+1) real qubits. If the time evolution according to ZEO respects the negentropic character of the entanglement, the sources of errors are reduced dramatically.
The challenge is to understand what kind of errors NE eliminates and how the information bits are coded by it. NE is respected if the errors act as unitary automorphisms E → UEU† of the unitary entanglement matrix. One can consider two interpretations.
1. The unitary automorphisms leave the information content unaffected only if they commute with E. In this case unitary automorphisms acting non-trivially would give rise to genuine errors, and an error correction mechanism would be needed and would be coded into the quantum computer program.
2. One can also consider the possibility that the unitary automorphisms do not affect the information content, so that the diagonal form of the entanglement matrix, coded by N phases, would carry the information. Clearly, the unitary automorphisms would act like gauge transformations. Nature would take care that no errors emerge. Of course, more dramatic things are in principle allowed by NMP: for instance, the unitary entanglement matrix could reduce to a tensor product of several unitary matrices. Negentropy could be transferred from the system and is indeed transferred as the computation halts.
By number theoretic universality the diagonalized entanglement matrix would be parametrized by N roots of unity, each having n possible values, so that n^N different NEs would be obtained and the information storage capacity would be I = log(n)/log(2) × N bits; for n = 2^k one would have k× N bits. Powers of two for n are favored. Clearly, the option for which only the eigenvalues of E matter looks like the more attractive realization of entanglement matrices. If the overall phase of E does not matter, as one expects, the number of full bits is k× N − 1.
In fact, Fermat polygons, for which the cosine and sine of the angle defining the polygon are expressible by iterating square roots besides basic arithmetic operations on rationals (geometrically, ruler and compass constructions), correspond to integers which are products of a power of two and distinct Fermat primes F_n = 2^(2^n)+1.
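A small sketch (my construction, only loosely following the text) of the counting above: a diagonal unitary entanglement matrix whose N phases are n-th roots of unity carries I = N log₂(n) bits.

```python
# Diagonal unitary "entanglement matrix" built from N phases, each an
# n-th root of unity: n^N distinct choices, i.e. N*log2(n) bits.
import math
import numpy as np

n, N = 4, 3                                   # n possible roots, N phases
ks = np.random.randint(0, n, size=N)          # pick one root of unity per phase
E = np.diag(np.exp(2j * np.pi * ks / n))      # diagonal unitary matrix

assert np.allclose(E @ E.conj().T, np.eye(N)) # unitarity check
capacity_bits = N * math.log2(n)              # I = N log2(n) = k*N for n = 2^k
print(f"{n}^{N} = {n**N} distinct matrices -> {capacity_bits:.0f} bits")
```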
This picture can be related to much bigger picture.
1. In the TGD framework number theoretical universality requires discretization in terms of an algebraic extension of rationals. This is not performed at the space-time level but for the parameters characterizing space-time surfaces at the level of WCW. The strong form of holography is also essential and allows one to consider partonic 2-surfaces and string world sheets as the basic objects. Number theoretical universality (adelic physics) forces a discretization of phases, and the number theoretically allowed phases are roots of unity defined by some algebraic extension of rationals. The discretization can also be interpreted in terms of finite measurement resolution. Notice that the condition that roots of unity are in question realizes finite measurement resolution in the sense that errors have a minimum size and are thus detectable.
2. The hierarchy of quantum criticalities corresponds to a fractal inclusion hierarchy of isomorphic sub-algebras of the super-symplectic algebra acting as conformal gauge symmetries. The generators in the complement of this algebra can act as dynamical symmetries affecting the physical states. An infinite hierarchy of gauge symmetry breakings is the outcome, and the weakening of measurement resolution would correspond to the reduction in the size of the broken gauge group. The hierarchy of quantum criticalities is accompanied by the hierarchy of measurement resolutions and the hierarchy of effective Planck constants heff = n× h.
3. These hierarchies are argued to correspond to the hierarchy of inclusions for hyperfinite factors of type II1 labelled by quantum phases and quantum groups. An inclusion defines finite measurement resolution since the included sub-algebra does not induce observable effects on the state. By the McKay correspondence the hierarchy of inclusions is accompanied by a hierarchy of simply laced Lie groups, which get bigger as one climbs up in the hierarchy. Their interpretation as genuine gauge groups does not make sense since their sizes should be reduced. An attractive possibility is that these groups are factor groups G/H such that the normal subgroup H (necessarily so) is the gauge group and indeed gets smaller, and G/H is the dynamical group identifiable as the simply laced group which gets bigger. This would require that both G and H are infinite-dimensional groups.
An interesting question is how they relate to the super-symplectic group assignable to the "light-cone boundary" δM4+/-× CP2. I have proposed this interpretation in the context of WCW geometry earlier.
4. Here I have spoken only about dynamical symmetries defined by discrete subgroups of simply laced groups. I have earlier considered the possibility that discrete symmetries provide a description of finite resolution, which would be equivalent with the quantum group description.
Summarizing, these arguments boil down to the conjecture that discrete subgroups of these groups act as effective symmetry groups of entanglement matrices and realize finite quantum measurement resolution. A very deep connection between quantum information theory and these hierarchies would exist.
Gauge invariance has turned out to be a fundamental symmetry principle, and one can ask whether unitary entanglement matrices, assuming that only the eigenvalues matter, could give rise to a simulation of discrete gauge theories. Could the reduction of the information to that provided by the diagonal form be interpreted as an analog of gauge invariance?
1. The hierarchy of inclusions of hyper-finite factors of type II1 strongly suggests a hierarchy of effective gauge invariances characterizing measurement resolution, realized in terms of a hierarchy of normal subgroups and dynamical symmetries realized as coset groups G/H. Could these effective gauge symmetries allow one to realize unitary entanglement matrices invariant under these symmetries?
2. A natural parametrization for single qubit errors is as rotations of the qubit. If the error acts as the same rotation on all qubits, the rotational invariance of the entanglement matrix defining the analog of the S-matrix is enough to eliminate the effect on information processing (see the toy check after this list).
Quaternionic unitary transformations act on qubits as unitary rotations. Could one assume that complex numbers as the coefficient field of QM are effectively replaced with quaternions? If so, multiplication of states by a unit quaternion would leave the physics and the information content invariant, just like multiplication by a complex phase leaves it invariant in standard quantum theory.
One could consider the possibility that quaternions act as a discretized version of local gauge invariance affecting the information qubits, thus reducing further their number and hence also the errors. This requires the introduction of the analog of a gauge potential and the coding of quantum information in terms of SU(2) gauge invariants. In the discrete situation the gauge potential would be replaced with non-integrable phase factors along the links of a lattice, as in lattice gauge theory. In the TGD framework the links would correspond to the fermionic strings connecting partonic two-surfaces carrying the fundamental fermions at the string ends as point-like particles. Fermionic entanglement is indeed between the ends of these strings.
3. Since entanglement is multilocal and quantum groups accompany the inclusion, one cannot avoid the question whether the Yangian symmetry crucial for the formulation of quantum TGD could be involved.
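Here is the toy check promised in item 2: for the rotationally invariant singlet state, an error acting as the same SU(2) rotation on both qubits changes nothing. The construction is mine and only illustrates the invariance claim.

```python
# If the same SU(2) "error" rotation U hits both qubits, a rotationally
# invariant entangled state such as the singlet is mapped exactly to itself
# (det U = 1), so the stored information is untouched.
import numpy as np

def random_su2() -> np.ndarray:
    a, b = np.random.randn(2) + 1j * np.random.randn(2)
    norm = np.sqrt(abs(a)**2 + abs(b)**2)
    a, b = a / norm, b / norm
    return np.array([[a, -np.conj(b)], [b, np.conj(a)]])   # det = |a|^2+|b|^2 = 1

singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)             # (|01> - |10>)/sqrt(2)
U = random_su2()
rotated = np.kron(U, U) @ singlet
assert np.allclose(rotated, singlet)                       # invariant under U (x) U
print("singlet survives a global rotation error")
```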
For details see the chapter Negentropy Maximization Principle or the article Quantum Measurement and Quantum Computation in TGD Universe.
Sunday, August 16, 2015
Sleeping Beauty Problem
Lubos wrote polemically about the Sleeping Beauty Problem. The procedure is as follows.
Sleeping Beauty is put to sleep and a coin is tossed. If the coin comes up heads, Beauty will be awakened and interviewed only on Monday. If the coin comes up tails, she will be awakened and interviewed on both Monday and Tuesday; on Monday she will be put back to sleep with an amnesia-inducing drug. In either case, she will be awakened on Wednesday without an interview and the experiment ends. Any time Sleeping Beauty is awakened and interviewed, she is asked: "What is your belief now for the proposition that the coin landed heads?" No other communication is allowed, so Beauty does not know whether it is Monday or Tuesday.
The question is about the belief of Sleeping Beauty on the basis of the information she has, not about the actual probability that the coin landed heads. If one wants to debate, one should imagine oneself in the position of Sleeping Beauty. There are two basic debating camps: halfers and thirders.
1. Halfers argue that the outcome of the coin toss cannot in any manner depend on future events, and one has P(Heads) = P(Tails) = 1/2 just from the fact that the coin is fair. To me this view is obvious. Lubos also takes this view. I however vaguely remember that years ago, when first encountering this problem, I was ready to take the thirder view seriously.
2. Thirders argue in the following manner using conditional probabilities. One has P(Tails|Monday) = P(Heads|Monday) (P(X|Y) denotes the probability of X given Y), and from the basic formula for conditional probabilities, P(X|Y) = P(X and Y)/P(Y), together with P(Monday) = P(Tuesday) = 1/2 (this actually follows from P(Heads) = P(Tails) = 1/2 in the experiment considered!), one obtains P(Tails and Tuesday) = P(Tails and Monday).
Furthermore, one also has P(Tails and Monday) = P(Heads and Monday) (again from P(Heads) = P(Tails) = 1/2!), giving
P(Tails and Tuesday) = P(Tails and Monday) = P(Heads and Monday). Since these events exclude each other for one trial and one of them must occur, each probability must equal 1/3. Since "Heads" implies that the day is Monday, one has P(Heads and Monday) = P(Heads) = 1/3, in conflict with the P(Heads) = 1/2 used in the argument. To me this looks like a paradox telling that some implicit assumption about probabilities in relation to time is wrong.
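A small simulation (my sketch, not from Lubos's post) makes the two sample spaces explicit: counting per experiment reproduces the halfer answer, counting per awakening the thirder answer.

```python
# Halfer vs thirder as a choice of sample space: tails produces two
# awakenings (Monday and Tuesday), heads only one (Monday).
import random

trials = 100_000
heads_experiments = 0
heads_awakenings = 0
total_awakenings = 0

for _ in range(trials):
    heads = random.random() < 0.5
    awakenings = 1 if heads else 2          # Monday only vs Monday + Tuesday
    heads_experiments += heads
    if heads:
        heads_awakenings += 1
    total_awakenings += awakenings

print("P(heads) per experiment:", heads_experiments / trials)             # ~1/2
print("P(heads) per awakening :", heads_awakenings / total_awakenings)    # ~1/3
```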
In my opinion the basic problem in the argument of the thirders is their assumption that events occurring at different times can form a set of independent events. Also the difference between experienced and geometric time is involved in an essential manner when one speaks about amnesia.
When one speaks about independent events and their probabilities in physics, they must be causally independent and occur at the same moment of time. This is crucial in the application of probability theory in quantum theory and also in classical theory. If time did not matter, one should be able to replace the time-line with a space-like line - say the x-axis. The counterparts of Monday, Tuesday, and Wednesday could be located on the x-axis with a mutual distance of, say, one meter. One cannot however realize the experimental situation since the notion of space-like amnesia does not make sense! Or crystallizing it: independent events must have space-like separation. The arrow of time is also essential. For the conditional probabilities P(X|Y) used above, X occurs before Y, and this breaks the standard arrow of time.
This clearly demonstrates that philosophy and mathematics cannot be separated from physics, and that the notion of time should be a fundamental issue in philosophy, mathematics, and physics!
Friday, August 14, 2015
About quantum measurement and quantum computation in TGD Universe
For years I have been thinking about how quantum computation could be carried out in the TGD Universe (see this). There are considerable deviations from the standard view. Zero Energy Ontology (ZEO), the weak form of NMP dictating the dynamics of state function reduction, negentropic entanglement (NE), and the hierarchy of Planck constants define the basic differences between TGD based and standard quantum measurement theory. TGD suggests also the importance of topological quantum computation (TQC) like processes, with braids represented as magnetic flux tubes/strings along them.
The natural question that popped into my mind was how NMP and Zero Energy Ontology (ZEO) could affect the existing view about TQC. The outcome was a more precise view about TQC. The basic observation is that the phase transition to the dark matter phase reduces dramatically the noise affecting quantum bits. This together with the robustness of braiding as a TQC program raises excellent hopes about TQC in the TGD Universe. The restriction to negentropic space-like entanglement (NE) defined by a unitary matrix is something new but does not seem to have any fatal consequences, as the study of Shor's algorithm shows.
NMP strongly suggests that when a pair of systems - the ends of a braid - suffers a state function reduction, the NE must somehow be transferred from the system. How? The model for quantum teleportation allows one to identify a possible mechanism for achieving this. This mechanism could be a fundamental mechanism of information transfer also in living matter, and phosphorylation could represent the transfer of NE according to this mechanism: the transfer of metabolic energy would be at a deeper level a transfer of negentropy. Quantum measurements could actually be seen as transfer of negentropy at a deeper level.
Thursday, August 13, 2015
Flux tube description seems to apply also to low Tc superconductivity
A brief summary about strengths and weakness of BCS theory
First I try to summarise what I remember about BCS theory.
2. BCS theory also has failures.
TGD based view about SC
Now come the heretic questions.
See the chapter Quantum model for bio-superconductivity: II
Color and Wavelengths
May 18, 2017 #1
I have read that the color of light perceived by us depends on its wavelength, since light is a wave. An electron also has wave-like character, which means an electron has a wavelength. Does that mean that an electron has a color associated with it? I think not, but why? Also, I'm not able to understand why the color of light depends on its wavelength.
May 18, 2017 #2
Color is not a physical but a physiological concept, and you cannot simply map color to a frequency in a one-to-one way.
Electrons have quantum character, as does any matter in the universe, but they are not waves. In non-relativistic quantum mechanics a single electron can be described by a complex valued field, obeying the Schrödinger equation (or better the Pauli equation if you include spin, as you should for the electron), which has wave-like solutions.
However it's not a classical field like, e.g., the electromagnetic field, either. The modulus squared of the wave function rather gives the probability distribution to find the electron at a given place when looking for it at a given time: ##P(t,\vec{x})=|\psi(t,\vec{x})|^2## (here I left out the spin-degree of freedom for simplicity).
May 18, 2017 #3
Sound is also a wave - an elastic wave. Elastic waves induced in solids by thermal motions may have wavelengths all the way down to a few angstroms. Some will be in the range of wavelengths of visible light. However you cannot see them, and there is no sensation of color produced by these waves.
Actually, even medical ultrasound nowadays reaches into the GHz range, so the wavelength in water (or tissue) will be in the range of hundreds of nanometers, the same as for visible light. But again, you don't see colors when they use it to scan the eye.
So the point is that the wavelength is not all that matters. Each receptor (the eye included) is sensitive to a specific type of wave.
c5c9d7ec13c1ffc1 | Multiple Quantum Interactions in Practical Material Regulated
Materials that have controllable quantum mechanical characteristics are highly significant for the quantum computers and electronics in future. Discovery or development of practical materials having these features is very difficult.
At present, an international theory and computational team headed by Cesare Franchini at the University of Vienna has discovered that some quantum interactions can exist simultaneously in a single real material and demonstrated the way these interactions are controlled by using an electric field. The outcomes of the study have been reported in the Nature Communications journal.
The application of an electric field changes the symmetry of the crystal and drives a transition from a metal (left) to an insulator (right). (Image credit: He/Franchini)
Next-generation quantum computers and electronics depend on materials that reveal quantum-mechanical phenomena and related properties which can be regulated by external stimuli, for example by a battery in a microelectronic circuit. For instance, quantum mechanics controls whether and at what speed electrons can travel through a material and hence governs whether it is a metal conducting electric current or an insulator that does not. Moreover, the interaction between the electrons and the crystal structure directs the ferroelectric properties of a material: when an external electric field is applied, switching between two electric orientations becomes possible. The possibility of activating several quantum-mechanical characteristics in a single material is of scientific interest, but it can also increase the spectrum of prospective applications.
A group of international researchers (headed by Professor Cesare Franchini and Dr. Jiangang He from the Quantum Materials Modelling Group at the University of Vienna, in collaboration with Professor Rondinelli from Northwestern University and Professor Xing-Qiu Chen from the Chinese Academy of Science) have shown that multiple quantum interactions can coexist in a single material and that it is feasible to switch between them by applying an electric field.
This is like awakening different kinds of quantum interactions that are quietly sleeping in the same house without knowing each other.
Professor Franchini
To achieve this, the researchers solved the relativistic form of the Schrödinger equation by carrying out computer simulations on the Vienna Scientific Cluster. The material selected by the researchers, Ag2BiO3, is extraordinary for two reasons. First, it contains bismuth, a heavy element which enables the spin of the electron to interact with its own motion. This process is known as spin-orbit coupling - a characteristic with no analogy in classical physics. Secondly, its crystal structure does not exhibit inversion symmetry, indicating that ferroelectricity could occur.
Harmonizing multiple quantum mechanical properties which often do not coexist together and trying to do it by design is highly complex.
Professor Rondinelli
When an electric field is applied to the Ag2BiO3 oxide, the atomic positions change, and this determines whether the spins are coupled in pairs, forming Weyl fermions, or separated (Rashba splitting), and whether the material can conduct electric current.
We have found the first real case of a topological quantum transition from a ferroelectric insulator to a non-ferroelectric semi-metal.
Professor Franchini
Spin-orbit coupling is of fundamental significance as it enables the emergence of novel quantum states of matter and represents one of the intriguing research fields of modern physics. Moreover, there are promising prospects for applications: the regulation of quantum interactions in a realistic material would allow low-power, ultrafast quantum computers and electronics, enabling qualitative advances in data collection, processing, and exchange.
ee5e81330fd6ae3f | Impact Factor 2.323
The 1st most cited journal in Multidisciplinary Psychology
This article is part of the Research Topic
Quantum Structures in Cognitive and Social Science
Original Research ARTICLE
Front. Psychol., 03 August 2015
Quantum-like model of unconscious–conscious dynamics
• Department of Mathematics, Mathematical Institute, Linnaeus University, Växjö, Sweden
We present a quantum-like model of sensation–perception dynamics (originating in Helmholtz's theory of unconscious inference) based on the theory of quantum apparatuses and instruments. We illustrate our approach with a model of bistable perception of a particular ambiguous figure, the Schröder stair. This is a concrete model of unconscious and conscious processing of information and their interaction. The starting point of our quantum-like journey was the observation that perception dynamics is essentially contextual, which implies the impossibility of (straightforward) embedding of experimental statistical data in the classical (Kolmogorov, 1933) framework of probability theory. This motivates the application of nonclassical probabilistic schemes. The quantum formalism provides a variety of well-approved and mathematically elegant probabilistic schemes to handle the results of measurements. The theory of quantum apparatuses and instruments is the most general quantum scheme describing measurements, and it is natural to explore it to model sensation–perception dynamics. In particular, this theory provides the scheme of indirect quantum measurements, which we apply to model unconscious inference leading to the transition from sensations to perceptions.
1. Introduction
In recent years the mathematical formalism of quantum mechanics was applied to a variety of problems outside of quantum physics: from molecular biology and genetics to cognition and decision making (see the monographs, Khrennikov, 2010b; Busemeyer and Bruza, 2012; Haven and Khrennikov, 2012) and the extended lists of references in them as well as in the papers (Aerts et al., 2014; Khrennikov et al., 2014).
The problem of mathematical modeling of bistable perception and, more generally, unconscious inference1 is that it can be rather complex and that its nature is not understood well enough to allow one to choose the optimal model. In spite of tremendous efforts during the last 200 years, this problem cannot be considered fully solved (cf. Newman et al., 1996; Laming, 1997). In this note we apply the theory of quantum apparatuses and instruments (Davies and Lewis, 1970; Busch et al., 1995; Ozawa, 1997) to quantum-like modeling of sensation–perception dynamics as a concrete example of unconscious and conscious processing of information and their interaction. Our model can be applied to general unconscious–conscious information processing. It generalizes the quantum-like model developed in Khrennikov (2004). We also point out that this paper is the first attempt to apply the theory of quantum apparatuses and instruments outside of physics, to cognition and psychology.
Special quantum structures were elaborated in order to mathematically represent the most general measurement schemes; they are applicable both in classical and quantum physics and, practically, in any domain of science. They generalize the pioneering quantum measurement representation by operators of the projection type, also known as von Neumann–Lüders measurements. In quantum physics, this new general framework is of vital importance since projection type measurements do not completely cover real experimental situations (Davies and Lewis, 1970; Busch et al., 1995; Ozawa, 1997; Nielsen and Chuang, 2000). It seems that the same holds true in mathematical modeling in cognition and psychology (see Asano et al., 2010a,b; Khrennikov, 2010b; Asano et al., 2011, 2012; Khrennikov and Basieva, 2014; Khrennikov et al., 2014), although here the situation is not yet absolutely clear and, obviously, the underlying reason for using quantum instruments is different.
To motivate the use of the theory of quantum apparatuses and instruments, we shall compare it first to classical probabilistic methods and then to simpler quantum-like models of processing data from cognitive science and psychology based on the von Neumann–Lüders measurements. A detailed discussion on violation of laws of classical probability theory by statistical data collected in cognitive science and psychology can be found in Khrennikov (2010b). We can, for example, point to the order effect (Khrennikov, 2010b; Wang and Busemeyer, 2013) and the disjunction effect (Khrennikov, 2010b; Busemeyer and Bruza, 2012). In probabilistic terms these are just various exhibitions of violation of the formula of total probability. In general, during recent years quantum probability and decision making were successfully applied to describe a variety of problems, paradoxes, and probability judgment fallacies, such as the Allais paradox (humans violate Von Neumann–Morgenstern expected utility axioms) and the Ellsberg paradox (humans violate Aumann–Savage subjective utility axioms) (see e.g., Haven et al., 2009; Asano et al., 2010a,b, 2011, 2012; Busemeyer et al., 2011; Pothos and Busemeyer, 2013; Wang and Busemeyer, 2013; Aerts et al., 2014; Khrennikov and Basieva, 2014). Psychologists and economists explore the new way inspired by one simple fact from physics: quantum probability can work in situations where classical probability does not. Why? Answers may differ (see Khrennikov, 2010b). We point to contextuality of data as one of the main sources of its non-classicality (Khrennikov, 2010b; Dzhafarov and Kujala, 2012a,b, 2013).
As was pointed out, at the beginning of quantum theory physicists attempted to represent the quantum measurements they were dealing with by projectors. The same attitude could be observed in applications of the quantum formalism outside of physics. Granted, some statistical psychological effects can be nicely described with the help of the von Neumann–Lüders measurements (see e.g., Haven et al., 2009; Busemeyer et al., 2011; Busemeyer and Bruza, 2012; Pothos and Busemeyer, 2013; Wang and Busemeyer, 2013; Aerts et al., 2014). However, more detailed analysis showed (Asano et al., 2010a,b; Khrennikov, 2010b; Asano et al., 2011, 2012; Khrennikov and Basieva, 2014; Khrennikov et al., 2014) that, in general, data from cognitive psychology cannot be embedded into the projection-measurement scheme. Therefore, it is natural to follow the development of quantum physics and proceed within a general theory of measurements.
In this paper we do this by illustrating the general theory of quantum instruments with one concrete example: bistable perception of the concrete ambiguous figure, the Schröder stair. Why do we use a quantum-like model? Here the argument is more complicated than in the case of the order and disjunction effects and other probability fallacies mentioned above. The deviation from classical probability theory is expressed not as a violation of the formula of total probability, but as a violation of one of the Bell-type inequalities, namely, the Garg–Leggett inequality (Asano et al., 2014). We point out that the Bell-type inequalities play an important role in modern quantum physics. If such an inequality is violated, then the data cannot fit a classical probability space. As was shown in our previous study (Asano et al., 2014), the data collected in a series of experiments performed at Tokyo University of Science (see Asano et al., 2014 for details) violate the Garg–Leggett inequality (statistically significantly)2.
The first step toward creation of a quantum-like model of bistable perception was done by Atmanspacher and Filk (2012, 2013). We studied this problem in Asano et al. (2014), where we demonstrated a violation of the Garg–Leggett inequality for experimental probabilistic data collected for a rotating image of the Schröder stair (the experiment was performed at Tokyo University of Science); in Accardi et al. (in press) we presented a quantum-like adaptive dynamical model for bistable perception. The latter is based on a more general formalism than the theory of quantum instruments—the theory of adaptive quantum systems. In the present paper, the traditional approach to quantum measurement theory is used for modeling the sensation–perception transition and unconscious inference.
Finally, we point out that violation of laws of classical probability theory is a statistical exhibition of violation of laws of classical Boolean logic. Thus, in logical terms the quantum-like modeling of cognition is modeling of nonclassical reasoning, decision making, and problem solving. In particular, in our model unconscious inference, the generation of a perception from a sensation, is not based on the rules of classical logic. We also remark that the so called quantum logic corresponding to the quantum formalism is just one special type of nonclassical logic. In principle, there are no reasons to assume that human (mental) cognition, even if it has a non-Boolean structure, can be modeled completely with the aid of quantum logic and quantum probability. Still more general models might be explored, see (Khrennikov and Basieva, 2014) for a discussion.
2. Advantageousness of Quantum Instrumental Modeling in Cognitive Psychology
We emphasize that, as well as quantum physics (Plotnitsky, 2006, 2009), cognitive and social sciences also can be treated as theories of measurements. A great deal of effort has been put into the development of measurement formalisms, cf. with, e.g., the time-honored Stimulus–Organism–Response (S–O–R) scheme for explaining cognitive behavior (Woodworth, 1921). Just like the situation in quantum physics, cognitive and social scientists cannot approach the mental world directly; they work with results of observations. Both quantum physics and cognitive and social sciences are fundamentally based on operational formalisms for observations.
The basic notions of the operational formalism for the quantum measurement theory are quantum apparatus and instrument (Davies and Lewis, 1970; Busch et al., 1995; Ozawa, 1997). Quantum apparatuses are mathematical structures representing at a high level of abstraction physical apparatuses used for measurements. They encode the probabilities of the results of observations as well as the back-actions of the measurements on the states of physical systems. Such back-actions are mathematically represented with the aid of another important mathematical structure, a quantum instrument. Our aim is to explore the theory of quantum apparatuses and instruments and especially its part devoted to indirect measurements in cognitive and social sciences.
The scheme of indirect measurements is very useful for applications, both in quantum physics and humanities. In this scheme, besides the “principle system” S, a probe system S′ is considered. A measurement on S is composed of the unitary interaction with S′ and a subsequent measurement on the latter.
In our cognitive modeling S represents unconscious information processing and S′ conscious processing. In the concrete example of Helmholtz's unconscious inference, S represents processing of sensation (its unconscious nature was emphasized already by Helmholtz) and S′ represents processing of perception - the conscious representation of sensation.
This approach provides a possibility to extend the class of quantum measurements, which originally consisted only of von Neumann–Lüders measurements of the projection type. Such an extension is not merely a natural striving for generality. Generalized quantum measurements have some new features. Here we shall concentrate only on those relevant to our project on quantum-like cognition.
For us, one of the main problems of exploring solely projective (direct) measurements is their fundamentally invasive nature: as the feedback of a measurement, the quantum state is "aggressively modified"—it is projected onto the subspace corresponding to the result of the measurement. In any event, this feature is not so natural for the dynamics of sensation and perception states. Of course, each "perception–creation" modifies the states of sensation and perception, but these modifications are not of the collapse type, as they would be in the case of projections.
Important for our applications is that a variety of different quantum instruments (describing back-reaction transformations resulting from measurements) can correspond to one and the same observable on the principle system S. That is, measurements having the same statistical results may lead to very different state transformations (due to very different types of interaction between the principle and probe systems). In quantum mechanics (as Ozawa emphasized; Ozawa, 1997), the same observable can be measured by different apparatuses having different state-transforming quantum instruments. This is a very important characteristic of the theory of generalized quantum measurements. It is also very useful for cognitive modeling, since it reflects the individuality of the measurement apparatuses/instruments which are used by cognitive systems (e.g., human beings) to generate the same perception.
We point out that the scheme of indirect measurements accounts for state dynamics in the process of measurement, which is not just a “yes”/“no” collapse as in the original von Neumann–Lüders approach. The possibility to mathematically describe the mental state dynamics in the process of perception–creation by means of the quantum formalism is very attractive. A study in this direction was already presented in the work of Pothos and Busemeyer (2013), although without appealing to the operational approach to quantum mechanics. In the series of works of Asano et al. (2010a,b, 2011, 2012), the process of decision making was described by a novel scheme of measurements generalizing the standard theory of quantum apparatuses and instruments (Asano et al., 2010a,b, 2011, 2012).
Now we list once again the main advantageous properties of the quantum instrument/apparatus modeling in cognitive psychology:
1. A possibility to model the feedback reaction of a “mental measurement” (including self-measurements such as decision making and problem judgment) without collapse-like projections of mental states (belief states).
2. The same (self-)measurement output can correspond to a variety of mental state transformations.
3. This is the only way to consistently model indirect measurements in which the output of one psychological function of the brain is (self-) measured through the output of another psychological function.
3. Quantum States
We start with a brief introduction to the quantum basics and define pure and mixed quantum states. The state space of a quantum system is a complex Hilbert space. Denote it by H. This is a complex linear space endowed with a scalar product, a positive-definite non-degenerate Hermitian form. Denote the latter by 〈·|·〉. It generates the norm on H: ∥ψ∥ = √〈ψ|ψ〉.
A reader who does not feel comfortable in the abstract framework of functional analysis can simply proceed with the Hilbert space H = C^n, where C is the set of complex numbers, and the scalar product 〈u|v〉 = Σ_i u_i v̄_i, u = (u_1,…,u_n), v = (v_1,…,v_n). Instead of linear operators, one can consider matrices.
Pure quantum states are represented by normalized vectors, ψ ∈ H: ∥ψ∥ = 1. Two collinear vectors, ψ′ = λψ, λ ∈ C, |λ| = 1, represent the same pure state. Each pure state can also be represented as the projection operator P_ψ which projects H onto the one dimensional subspace spanned by ψ. For a vector ϕ ∈ H, P_ψϕ = 〈ϕ|ψ〉 ψ. Any projector is a Hermitian and positive-definite operator3. We also remark that the trace of the one dimensional projector P_ψ equals 1: Tr P_ψ = 1. (We recall that, for a linear operator A, its trace can be defined as the sum of the diagonal elements of its matrix in any orthonormal basis: Tr A = Σ_i a_ii.) We summarize these properties of an operator (matrix) ρ = P_ψ representing a pure state. It is
1. Hermitian,
2. positive-definite,
3. trace one,
4. idempotent: ρ2 = ρ.
A linear operator is an orthogonal projector if and only if it satisfies (1) and (4); in particular, (2) is a consequence of (4). The properties (1–4) are characteristic of one dimensional orthogonal projectors—pure states [for a projector, (3) implies that it is one dimensional], i.e., any operator satisfying (1–4) represents a pure state.
The next step in the development of quantum mechanics was the extension of the class of quantum states, from pure states represented by one dimensional projectors to states represented by linear operators (matrices) having the properties (1–3). Such operators (matrices) are called density operators (density matrices). (This nontrivial step of extending the class of quantum states was based on the efforts of Landau and von Neumann.) One typically distinguishes pure states, represented by one dimensional projectors, from mixed states, those density operators which cannot be represented by one dimensional projectors. The terminology "mixed" has the following origin: any density operator can be represented as a "mixture" of pure states (ψ_i):
ρ = Σ_i p_i P_{ψ_i},  p_i ∈ [0,1],  Σ_i p_i = 1. (1)
The state is pure if and only if such a mixture is trivial: all p_i, besides one, equal zero. However, in operating with the terminology "mixed state" one has to take into account that the representation in the form of Equation (1) is not unique. The same mixed state can be interpreted as a mixture of different collections of pure states.
Any operator ρ satisfying (1–3) is diagonalizable (even in the infinite-dimensional Hilbert space), i.e., in some orthonormal basis it is represented as a diagonal matrix, ρ = diag(p_j), where p_j ∈ [0,1], Σ_j p_j = 1. Thus, it can be represented in the form of Equation (1) with mutually orthogonal one dimensional projectors. The property (4) can be used to check whether a state is pure or not. We point out that pure states are merely mathematical abstractions; in real experimental situations it is possible to prepare only mixed states; one defines the degree of purity as Tr[ρ² − ρ]. Experimenters are satisfied by getting this quantity smaller than some small ϵ.
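As a concrete illustration (my toy check, not from the paper), the properties (1)–(3) and the purity measure Tr[ρ² − ρ] can be verified numerically:

```python
# Check the density-matrix properties (1)-(3) and the purity measure
# Tr[rho^2 - rho], which vanishes exactly for pure states.
import numpy as np

def is_density_matrix(rho: np.ndarray, tol: float = 1e-9) -> bool:
    hermitian = np.allclose(rho, rho.conj().T, atol=tol)    # property (1)
    positive = np.all(np.linalg.eigvalsh(rho) >= -tol)      # property (2)
    unit_trace = abs(np.trace(rho) - 1) < tol               # property (3)
    return hermitian and positive and unit_trace

psi = np.array([1, 1j]) / np.sqrt(2)
pure = np.outer(psi, psi.conj())            # one dimensional projector
mixed = 0.5 * np.eye(2)                     # maximally mixed qubit state

for name, rho in (("pure", pure), ("mixed", mixed)):
    impurity = np.trace(rho @ rho - rho).real   # 0 iff the state is pure
    print(name, is_density_matrix(rho), f"Tr[rho^2 - rho] = {impurity:.3f}")
```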
4. Atomic Instruments/Apparatuses
The notions of instrument and apparatus are based on a very simple and natural consideration. Consider systems of any origin (physical, biological, social, financial). Suppose that the states of such systems can be represented by points of some set X. These are statistical states, i.e., by knowing the state of a system one can determine the values of observables only with some probabilities. Then, for each state x ∈ X, observable A, and concrete value a_i of A, there is defined a map
p_i = f_{A,a_i}(x), (2)
giving the probability of the result A = a_i for systems in the state x ∈ X. Here f_{A,a_i} : X → [0, 1]. It is then natural to assume that the measurement modifies the state x, i.e., there is defined another map
x_i = g_{A,a_i}(x), (3)
where g_{A,a_i} : X → X. This scheme is applicable both in classical and quantum physics as well as in psychology—the Stimulus–Organism–Response (S–O–R) scheme for explaining the behavior (Woodworth, 1921) of humans and other cognitive systems.
For a fixed observable A, the system of state transformation maps (g_{A,a_i}) corresponding to all possible values (a_i) of A is called an instrument, and the collection of maps (f_{A,a_i}; g_{A,a_i}) is called an apparatus. Of course, this scheme is too general and, to get something fruitful, one has to select a state space X with a special structure and special classes of f- and g-maps. Quantum theory is characterized by the selection of a state space starting with a complex Hilbert space. This choice leads to the theory of quantum instruments and apparatuses.
The general theory of quantum measurements is mathematically advanced; see Section 9. Therefore, it is useful to illustrate it by a simple example. We consider the simplest class of quantum instruments extending the class of von Neumann–Lüders instruments of the projection type. These are atomic instruments.
Suppose that the range of values of a measurement, the spectrum of an observable, is discrete: O = {a1, …, an}. The main point of the theory of instruments is that each measurement resulting in a concrete value a_i generates a feedback action on the original state ρ of the quantum system, i.e., ρ is transformed into a new state ρ_{a_i}, see Equation (3):
ρ → ρ_{a_i}. (4)
We start with the standard von Neumann–Lüders measurements, which give us an important class of quantum instruments/apparatuses (especially from the historical viewpoint). These measurements are mathematically represented by Hermitian operators,
A = Σ_i a_i P_{a_i}, (5)
where P_{a_i} is the projector onto the eigensubspace corresponding to the eigenvalue a_i. For pure states, the transformation (Equation 4) is based on the projection P_{a_i}:
ψ → P_{a_i}ψ; (6)
this map is linear and it is convenient to work with it. However, if P_{a_i} ≠ I, where I is the unit operator, then ∥P_{a_i}ψ∥ < 1, so the output of Equation (6) is not a state. To get a state, it has to be normalized by its norm:
ψ → P_{a_i}ψ/∥P_{a_i}ψ∥. (7)
This is a map from the space of pure states into the space of pure states, but it is nonlinear. This type of feedback reaction to the result of measurement was postulated by von Neumann. It is well-known as the projection postulate of quantum mechanics (the state reduction postulate or the state collapse postulate; see Khrennikov and Basieva (2014) for a psychologist-friendly discussion of these postulates and their role in quantum physics, cognitive psychology, and psychophysics)4.
Now, for a pure state ψ, one can consider its representation by the density operator ρ = Pψ. In such terms, the state transform (Equation 6) can be written as
ρ → P_{a_i}ρP_{a_i}.   (8)
This is the simplest example of a transformation which in quantum measurement theory is called a quantum operation. It can be extended to a linear map from the space of linear operators (matrices) to itself by the same formula (Equation 8). For a finite spectral set O, the collection of quantum operations (Equation 8), a_i ∈ O, gives the simplest example of a quantum instrument.
We are again interested in a map from the space of density operators (matrices) to itself, see Equation (4). Thus, we again have to normalize:
ρ → ρ_{a_i} = P_{a_i}ρP_{a_i}/Tr[P_{a_i}ρP_{a_i}].   (9)
This map is nonlinear, and physicists work with quantum operations (forming instruments) by performing normalization by the trace only at the final step of calculations, which can involve a chain of measurements.
However, we are primarily interested not in the measurement feedback on the initial quantum state ρ, but in the probabilities of getting the results a_i ∈ O. Denote them p(a_i|ρ). Here they are given by Born's rule. If the initial state is pure, ρ = P_ψ, then
p(a_i|ψ) = ⟨P_{a_i}ψ|ψ⟩ = ‖P_{a_i}ψ‖².   (10)
It is easy to see that
p(a_i|ψ) = Tr[P_{a_i}P_ψ].   (11)
This formula can be easily generalized, e.g., via Equation (1), to an arbitrary initial state ρ:
p(a_i|ρ) = Tr[P_{a_i}ρ].   (12)
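To make the von Neumann–Lüders instrument concrete, here is a minimal numerical sketch (not from the paper; the state ρ and the basis are illustrative choices) computing the Born probabilities of Equation (12) and the normalized state updates of Equation (9) for a qubit:

```python
import numpy as np

# Projectors onto the eigenvectors |0>, |1> of an illustrative qubit observable.
P = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]

# An illustrative mixed initial state rho (Hermitian, positive, trace one).
psi = np.array([1.0, 1.0j]) / np.sqrt(2)
rho = 0.7 * np.outer(psi, psi.conj()) + 0.3 * np.diag([1.0, 0.0])

for i, Pi in enumerate(P):
    p_i = np.trace(Pi @ rho).real            # Born's rule, Equation (12)
    rho_i = Pi @ rho @ Pi / p_i              # normalized update, Equation (9)
    print(f"p(a{i}|rho) = {p_i:.3f}, Tr(rho_a{i}) = {np.trace(rho_i).real:.1f}")
```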
A quantum apparatus is the combination of feedback state-transformations, i.e., a quantum instrument, and detection probabilities.
In the von Neumann–Lüders approach the quantum instrument is uniquely determined by an observable, the Hermitian operator A. The latter is the basis of the construction. However, even in this approach we could start directly with an instrument determined by a family of mutually orthogonal projectors (P_{a_i}), i.e.,
∑_i P_{a_i} = I,   (13)
where P_{a_i} ⊥ P_{a_j} for i ≠ j, and then define the observable A simply as this family (P_{a_i}). In quantum information the values a_i have merely the meaning of labels for the results of measurement. For future generalization, we remark that the normalization condition (Equation 13) can be written as
∑_i P_{a_i}*P_{a_i} = I,   (14)
because, for any orthogonal projector P, P* = P and P2 = P.
Now we move to general atomic instruments and apparatuses. Here quantum operations have the form:
ρ → Q_{a_i}ρQ_{a_i}*,   (15)
where, for each value a_i, Q_{a_i} is a linear operator which is a contraction (i.e., its norm is bounded by 1). These operators are constrained by the normalization condition, cf. Equation (14):
∑_i Q_{a_i}*Q_{a_i} = I.   (16)
These operations determine an atomic quantum instrument. Each quantum operation induces the corresponding state transformation:
ρ → ρ_{a_i} = Q_{a_i}ρQ_{a_i}*/Tr[Q_{a_i}ρQ_{a_i}*].   (17)
In particular, pure states are transformed into pure states (similar to the von Neumann–Lüders measurements):
ψ → Q_{a_i}ψ/‖Q_{a_i}ψ‖.   (18)
Probabilities of the results of measurements are given by the following generalization of Equation (12):
p(a_i|ρ) = Tr[M_{a_i}ρ],   (19)
M_{a_i} = Q_{a_i}*Q_{a_i}.   (20)
(We remark that if Q_{a_i} is a projector, then Q_{a_i}* = Q_{a_i} and Q_{a_i}² = Q_{a_i}; thus, in this case Equation (19) matches Equation (12).) In this way we obtain the corresponding quantum instrument.
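A parallel sketch for an atomic instrument may be helpful; the contractions Q_{a_i} and the unsharpness parameter g below are illustrative assumptions, chosen only so that the normalization condition (16) holds:

```python
import numpy as np

g = 0.3  # illustrative "unsharpness" parameter of this toy instrument
Q = [np.diag([np.sqrt(1 - g), np.sqrt(g)]).astype(complex),
     np.diag([np.sqrt(g), np.sqrt(1 - g)]).astype(complex)]

# Normalization condition (16): sum_i Q_i* Q_i = I.
assert np.allclose(sum(Qi.conj().T @ Qi for Qi in Q), np.eye(2))

rho = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)  # illustrative state
for i, Qi in enumerate(Q):
    M_i = Qi.conj().T @ Qi                        # POVM element, Equation (20)
    p_i = np.trace(M_i @ rho).real                # probability, Equation (19)
    rho_i = Qi @ rho @ Qi.conj().T / p_i          # state update, Equation (17)
    print(f"p(a{i}) = {p_i:.3f}")
```

Unlike the projective case, the update (17) here does not force the output into an eigensubspace; it only reshapes the state.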
The class of atomic instruments and apparatuses is the most direct generalization of the von Neumann–Lüders class. By contrast, general quantum instruments do not transfer pure states into pure states, see the Appendix.
5. Bistable Perception of Schröder Stair
The experiment is about perception of the ambiguous figure, the Schröder stair, see Figure 1. Here we reproduce data from the paper (Asano et al., 2014), where the reader can find a more detailed presentation.
Figure 1. The Schröder stair is an ambiguous figure which has two different interpretations, “left part (L) is front and right part (R) is back,” and its converse. Humans perceive either of them, and the tendency of the perception depends on the rotation angle θ.
A total of 151 subjects participated in the test performed at Tokyo University of Science. They were divided into three groups (nA = 55, nB = 48, nC = 48). To the subjects of all three groups, we showed 11 pictures of the Schröder stair leaning at different angles. Subjects answered L = “I can see that the left side is front” or R = “I can see that the right side is front” for each picture. Thus, we have a random variable for perception, Xθ = L, R. We denote the experimental probability that a subject answers “Left side is front” by p(Xθ = L).
For the first group (A), the order of showing pictures was randomly selected for each subject. For the second group (B), the angle θ was changed from 0 to 90 as if the picture was rotating clockwise. Inversely, for the third group (C), the angle θ was changed from 90 to 0. As a result, we obtained perception trends with respect to angles, see Figure 2. These graphs demonstrate contextuality of the data, its dependence on the experimental contexts (A)–(C) (see Asano et al., 2014 for numerical estimation of the degree of contextuality as violation of the Leggett–Garg inequality). As was discussed in the Introduction, contextual statistical data can be modeled by using the quantum formalism.
Figure 2. The optical illusion is affected by memory bias: the subject's perception is shifted in response to the rotation direction of the figure.
6. Mental Apparatuses
We shall proceed with finite-dimensional state spaces, making remarks on the corresponding modifications in the infinite-dimensional case. The symbol D(H) denotes the space of density operators in the complex Hilbert space H; L(H) is the space of all linear operators in H (bounded operators in the infinite-dimensional case).
The space L(H) can itself be endowed with the structure of a linear space. We also have to consider linear operators from L(H) into itself; such maps T : L(H) → L(H) are called superoperators. We shall use this notion only in Section 9. Thus, for the moment, the reader can proceed without it.
Moreover, on the space L(H) it is possible to introduce the structure of a Hilbert space with the scalar product ⟨A|B⟩ = Tr[A*B].
Therefore, for each superoperator T : L(H) → L(H), there is defined its adjoint (super)operator T* : L(H) → L(H), ⟨T(A)|B⟩ = ⟨A|T*(B)⟩, A, B ∈ L(H).
For the reader's convenience we recall the notion of POVM.
Definition. A positive operator valued measure (POVM) is a family of positive operators {M_j} such that ∑_{j=1}^m M_j = I, where I is the unit operator.
Consider a cognitive system; to be concrete, consider a human individual, call her Keiko. She confronts some recognition problem, i.e., in our problem of bistable perception of the Schröder stair she has to make the choice between two perceptions, A = L, R. In the quantum(-like) model the space of her mental states is represented by a complex Hilbert space (pure states are represented by normalized vectors and mixed states by density operators).
In the model under construction this state space is tensor-factorized into two components, namely, H ⊗ K, where H is the space of sensation-states and K is the space of perception-states. The states of the latter are open to conscious introspection, but the states of the former are in general not approachable consciously. We recall that we model Helmholtz's unconscious inference.
In general, suppose that Keiko confronts some concrete recognition problem A with possible perceptions labeled as a_i, i = 1, 2, …, m. We denote the set of possible values of A by the symbol O, i.e., O = {a1, …, am}. By interacting with a figure (in our concrete case the figure is ambiguous) she generates the sensation-state ρ (e.g., a pure state, i.e., ρ = |ψ⟩⟨ψ|, ψ ∈ H, ‖ψ‖ = 1). The process of generation of ρ can be mathematically represented as a unitary transformation in the space H. Denote the pre-recognition state of sensation by ρ0. Then
ρ = Uρ0U*, where the unitary operator U : H → H depends on the figure; in our concrete case U = USchr.
To come to the concrete perception, Keiko uses a “mental apparatus,” denoted as A, which produces the results (perceptions) a_i randomly with the probabilities p(a_i|ρ), the output probabilities.⁵ An apparatus represents not only perceptions and the corresponding probabilities, but also the results of the evolution of the initial sensation-state ρ as induced by the back-reaction to the concrete perception a_i. This is a sort of state reduction, a “sensation-state collapse” resulting from the creation of the concrete perception a_i. Thus, the sensation-state ρ which Keiko created from her visual image is transformed into the output state ρ_{a_i}.
However, as we shall see, in general this sensation-state update can be quite gentle, so our model differs crucially from the orthodox quantum models of cognition (Busemeyer and Bruza, 2012) based on the projection-type state update. Thus, each mental apparatus A corresponding to the recognition problem A is mathematically represented by
• probabilities for concrete perceptions p(a_i|ρ);
• transformations of the initial sensation-state corresponding to the concrete results of perception,
ρ → ρ_{a_i}.   (21)
The rigorous mathematical description of such state transformations leads to the notion of a quantum instrument, see Section 9.
6.1. Mixing Law
In the quantum operational formalism it is assumed that these probabilities, p(ai|ρ), satisfy the mixing law. We remark that, for any pair of states (density operators) ρ1, ρ2 and any pair of probability weights q1, q2 ≥ 0, q1 + q2 = 1, the convex combination ρ = q1ρ1 + q2ρ2 is again a state (density operator). In accordance with the mixing law any apparatus produces probabilities such that
p(a_i|q1ρ1 + q2ρ2) = q1p(a_i|ρ1) + q2p(a_i|ρ2).   (22)
In our model of bistable perception the mixing law can be formulated as follows:
A probabilistic mixture of sensations produces the mixture of probabilities for perception outputs.
In physics this is a very natural assumption. However, in the modeling of cognitive phenomena, in particular unconscious inference, an additional analysis of its validity has to be performed. We have no possibility to do this in this note, so we postpone such analysis to a coming publication. For now we mimic quantum physics explicitly and proceed under the assumption (Equation 22).
6.2. Composition of the Apparatuses
It is natural to assume that after resolving the recognition problem A a person is ready to look at another image B and proceed to its perception. In general, perception of B depends on the preceding perception of A. Such a sequence of perceptions is represented as a new mental apparatus, the composition of the apparatuses A and B: B ∘ A. Its outputs are ordered pairs of perceptions (a_i, b_j). It is postulated that the corresponding output probabilities and states are determined as
p((a_i, b_j)|ρ) = p(b_j|ρ_{a_i}) p(a_i|ρ);   (23)
ρ_{(a_i,b_j)} = (ρ_{a_i})_{b_j}.   (24)
The law (Equation 23) can be considered as the quantum generalization of the Bayes rule. The law (Equation 24) is the natural composition law.
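The quantum Bayes rule (23) is easy to check numerically. The following sketch uses two illustrative projective qubit observables (a z-basis observable A followed by an x-basis observable B) and verifies that the joint probabilities sum to one:

```python
import numpy as np

PA = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]
plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
minus = np.array([1.0, -1.0], dtype=complex) / np.sqrt(2)
PB = [np.outer(plus, plus.conj()), np.outer(minus, minus.conj())]

rho = np.array([[0.8, 0.3], [0.3, 0.2]], dtype=complex)  # illustrative state

total = 0.0
for i, Pi in enumerate(PA):
    p_a = np.trace(Pi @ rho).real
    rho_a = Pi @ rho @ Pi / p_a                  # feedback of the A-measurement
    for j, Pj in enumerate(PB):
        p_ab = np.trace(Pj @ rho_a).real * p_a   # Equation (23)
        total += p_ab
        print(f"p((a{i},b{j})|rho) = {p_ab:.3f}")
print("sum =", round(total, 10))                 # the joint probabilities sum to 1
```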
In our experiment with rotation of the Schröder stair, we are interested in a sequence of instruments Aθ corresponding to some sample of angles C = {θ1, …, θm}. Here C determines the context of the experiment. Our data from Section 5 can be represented via the composition of quantum apparatuses: A_C = A_{θm} ∘ ⋯ ∘ A_{θ1}. Here A_C is the quantum apparatus representing the context C. In our experimental study we considered not only the deterministic contexts corresponding to clockwise and counter-clockwise rotations, but also the random context determined by the uniform probability distribution.
7. Perception through Unitary Interaction Between the Sensation and Perception-states
The above operational description of “perception-production” was formulated solely in terms of sensation-states. However, a sensation-state is a complex informational state which is in general unapproachable for conscious introspection. The operational representation of observables in the space of sensation-states is not straightforward and in general it cannot be formulated in terms of mutually exclusive perceptions. For example, in our experiment Keiko's perceptions can be binary encoded: A = L, R. However, her sensation of the Schröder stair is a complex information state depending on a variety of parameters (in particular, we are interested in the dependence on the rotation angle). The subspaces corresponding to sensations leading to the L-perception and the R-perception are in general not orthogonal. This non-orthogonality of sensation subspaces for different perceptions is the fundamental feature of bistable perception, the recognition of ambiguous figures.
Therefore, it is more fruitful to define the perception-observable directly by using an additional state space, the space of the perception-states K. In the perception space a perception-observable can be defined as the standard von Neumann–Lüders projection observable.
Example 1. Consider the simplest case: recognition of the fixed figure A, with dichotomous output, i.e., there are two possible outcomes of “perception-measurement,” e.g., L = 0 and R = 1 for the Schröder stair. This observable can be represented by the pair of projectors (P0, P1) onto the subspaces K0 and K1 of the perception space K. Since the perceptions a0 = 0 and a1 = 1 are mutually and sharply exclusive, the subspaces K0 and K1 are orthogonal. Hence, the projectors P0 and P1 can be selected as orthogonal. The perception-observable A can be represented as the conventional von Neumann–Lüders observable Â = a0P0 + a1P1 (= P1, since a0 = 0 and a1 = 1). However, we emphasize that this representation is valid only in the perception-state space K. It is often (but not always!) possible to proceed with one-dimensional projectors, i.e., to represent possible perceptions just by the basis vectors in the two-dimensional perception-state space, (|0⟩, |1⟩). Here each perception-state can be represented as a superposition
ϕ = c0|0⟩ + c1|1⟩,  |c0|² + |c1|² = 1.   (25)
Measurement of A leads to probabilities of perceptions given by the squared coefficients: p0 = |c0|², p1 = |c1|².
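In code this example is a few lines; the coefficients below are arbitrary illustrative amplitudes:

```python
import numpy as np

c0, c1 = np.sqrt(0.3), np.sqrt(0.7) * np.exp(0.5j)   # illustrative amplitudes
phi = np.array([c0, c1])
assert np.isclose(np.linalg.norm(phi), 1.0)          # normalization in (25)
print("p0 =", round(abs(c0) ** 2, 10), " p1 =", round(abs(c1) ** 2, 10))
```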
In the case of a finite-dimensional perception-state space, a perception-observable A can be represented as
A = ∑_i a_i P_i,   (26)
where (P_i) is the family of mutually orthogonal projectors in the space of perception-states K and (a_i) are real numbers encoding possible answers (perceptions).
Now we shall explore the cognitive analog of the standard scheme of quantum indirect measurements.
In our cognitive framework “indirectness” means that the sensation-states are in general unapproachable for conscious introspection. Therefore, it is impossible to perform a direct measurement on the sensation-state ρ (in particular, on a pure state ρ = |ψ⟩⟨ψ|). Moreover, in the sensation-state the alternatives, say 0/1, encoded in a perception-observable A are not represented exclusively; they can overlap. (Mathematically the overlap is expressed as non-orthogonality of the sensation-subspaces corresponding to various perceptions.)
In the quantum measurement framework, this situation is described as follows: in the sensation space an observable A is represented as an unsharp observable of the POVM type. Roughly speaking, in the H-representation the A-zero partially contains the A-one and vice versa. The latter is simply a consequence of the interpretation of POVM observables as unsharp observables.
Remark 1. To map the quantum physics scheme (Ozawa, 1997) of indirect measurements onto the quantum(-like) cognition scheme, one has to associate the state of the principal physical system S with the sensation-state and the state of the probe physical system S′ with the perception-state. We point out that in the cognitive framework we do not consider analogs of physical systems. In principle, one can consider the sensation-system S as a part of the neuronal system representing sensations and the perception-system S′ as another part of the neuronal system representing possible perceptions. The latter can be specified: different measurements can be associated with different neuronal networks responsible for the corresponding perceptions. However, in principle we need not associate sensation and perception states with concrete physical neuronal networks. In the case of cognition the use of isolated physical systems as carriers of the corresponding information states might be ambiguous. The interconnectivity of neuronal networks is very high. Therefore, the picture of a distributed computational system is more adequate. (Of course, even in physics the notion of an isolated system is just an idealization of the real situation.) Therefore, it is useful to proceed in the purely informational approach by operating solely with states, without coupling them to bio-physical systems. This is, in fact, the quantum information approach, where systems play the secondary role and one operates with states; especially in the information interpretation of quantum mechanics (Zeilinger, 2010).
In the simplest model we can assume that at the beginning of the process of perception-creation the sensation and perception-states, ρ and σ, are not entangled.⁶ Thus, mathematically, in accordance with the quantum formalism, the integral sensation–perception-state, the complete mental state corresponding to the problem under consideration, can be represented as the tensor product R = ρ ⊗ σ.
In the process of perception-creation the sensation and perception-states (cf. Remark 1) “interact,” and the evolution of the sensation–perception-state R is mathematically represented by a unitary operator⁷ U : H ⊗ K → H ⊗ K:
R → R_out = URU*.   (27)
In the space of sensation–perception-states H ⊗ K the perception-observable A is represented by the operator I ⊗ A. Thus, the probabilities of perceptions are given by
p_{a_i}^{I⊗A} = Tr[R_out(I ⊗ P_i)] = Tr[URU*(I ⊗ P_i)],   (28)
where the projectors (P_i) form the spectral decomposition of the Hermitian observable A in K, see Equation (26).
Since only the perception-state, belonging to K, is a subject of conscious introspection, at the conscious level the perception process can be represented solely in the state space K. The post-interaction perception-state σ_out can be (mathematically) extracted from the integral state R_out with the aid of the operation of the partial trace:
σ_out = Tr_H R_out.   (29)
Then perceptions can be represented as the results of the A-measurement (a measurement of the projection type) in the perception space, performed on the output state σ_out. The probabilities of the concrete perceptions (a_i) are given by the standard Born rule:
p_{a_i}^{A} = Tr_K[σ_out P_i] = Tr_K[(Tr_H R_out)P_i] = Tr[R_out(I ⊗ P_i)] = p_{a_i}^{I⊗A}.   (30)
Thus, Equations (28) and (30) match each other.
If the concrete result A = a_i was observed, then the state of perception σ is transformed into
σ_{i;out} = Tr_H[R_out(I ⊗ P_i)]/Tr[R_out(I ⊗ P_i)].   (31)
What happens in the sensation space?
The expression (Equation 28) for the probability of the perception a_i can be represented as
p(a_i|ρ) = p_{a_i}^{I⊗A} = Tr[R_out(I ⊗ P_i)] = Tr[(ρ ⊗ σ)U*(I ⊗ P_i)U] = Tr_H[ρM_{a_i}],   (32)
M_{a_i} = Tr_K[(I ⊗ σ)U*(I ⊗ P_i)U].   (33)
The operator M_{a_i} : H → H can also be represented in the following useful form (a consequence of the cyclic property of the trace operation):
M_{a_i} = Tr_K[U*(I ⊗ P_i)U(I ⊗ σ)].   (34)
We remark that (Equation 33) implies ∑_i M_{a_i} = I.
We also remark that each operator M_{a_i} is positively defined and Hermitian.
Thus, in the sensation space the perception-observable of the projection type A (acting in K) with the spectral family (P_i) is represented by the POVM M = (M_{a_i}). We remark that in general the operators M_{a_i} are not projectors. Such a measurement cannot sharply separate sensations leading to the perceptions a_i for different i.
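The construction of Equations (27)–(34) can be reproduced numerically. In the sketch below the interaction U (a controlled rotation) and the probe state σ are toy assumptions, not the paper's model; the code checks that the POVM elements (33) sum to the identity and reproduce the probabilities (28):

```python
import numpy as np

dH = dK = 2
I2 = np.eye(2, dtype=complex)

def ptrace_K(X):
    # Partial trace over the second (perception) factor of H (x) K.
    return np.trace(X.reshape(dH, dK, dH, dK), axis1=1, axis2=3)

# Projectors |L><L|, |R><R| in K and a neutral perception state sigma.
PL = np.diag([1.0, 0.0]).astype(complex)
PR = np.diag([0.0, 1.0]).astype(complex)
plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
sigma = np.outer(plus, plus.conj())

def rot(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]], dtype=complex)

# Toy interaction: the perception qubit is rotated by +t or -t depending on
# the sensation basis state (block-diagonal in the H basis, hence unitary).
t = 0.6
U = np.zeros((4, 4), dtype=complex)
U[:2, :2], U[2:, 2:] = rot(t), rot(-t)

M = {}
for lab, Pi in (("L", PL), ("R", PR)):
    # Equation (33): M_i = Tr_K[(I (x) sigma) U* (I (x) P_i) U]
    M[lab] = ptrace_K(np.kron(I2, sigma) @ U.conj().T @ np.kron(I2, Pi) @ U)
assert np.allclose(M["L"] + M["R"], I2)      # POVM normalization

rho = np.array([[0.5, 0.3], [0.3, 0.5]], dtype=complex)   # a sensation state
R_out = U @ np.kron(rho, sigma) @ U.conj().T              # Equation (27)
for lab, Pi in (("L", PL), ("R", PR)):
    p_direct = np.trace(R_out @ np.kron(I2, Pi)).real     # Equation (28)
    p_povm = np.trace(rho @ M[lab]).real                  # Equation (32)
    print(lab, round(p_direct, 6), round(p_povm, 6))
```

The assertion ∑_i M_{a_i} = I holds because ∑_i P_i = I and Tr σ = 1; the two printed probabilities coincide, illustrating the equality (32).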
The operational formalism also gives the “post-perception sensation-state,” i.e., the state of sensation created as the feedback to the consciously recognized perception a_i:
ρ_{a_i} = Tr_K[R_out(I ⊗ P_i)]/Tr[R_out(I ⊗ P_i)].   (35)
The output sensation-state depends not only on the initial sensation-state ρ, but also on the initial perception-state σ, the interaction between beliefs and possible perceptions given by U, and the question-observable A acting in K.
8. The Indirect Measurement Scheme for Rotation Contexts for Perception of Schröder Stair
As at the very end of Section 6.2, we consider contextual measurements for the Schröder stair: a sequence of perceptions corresponding to some sample of angles C = {θ1, …, θm}. Here C determines the context of the experiment. We apply the scheme of indirect measurements. We can assume that the perception space K is two-dimensional with the orthogonal basis |L⟩, |R⟩ representing the “left-faced” and “right-faced” perceptions of the stair. Thus, the projectors P_i, i = L, R, are one-dimensional.
We start with the initial sensation state ρ0. By the visual image rotated at the angle θ1 this state is transformed to
ρ_{θ1} = U_{Sch;θ1}ρ0U_{Sch;θ1}*,   (36)
where U_{Sch;θ1} represents the unitary dynamics induced by this image. Then the perception of the image is modeled starting with
R_{θ1} = ρ_{θ1} ⊗ σ0,   (37)
where σ0 represents the state of perception preceding interaction with the state of sensation. It is natural to assume that σ0 = |ϕ0〉〈ϕ0|, where
ϕ0 = (|L⟩ + |R⟩)/√2   (38)
is the neutral superposition of the states “left-faced” and “right-faced.” It represents the deepest state of uncertainty. Suppose (for simplicity) that, independently of the angle, the interaction of the sensation and perception states is given by the same unitary operator U. Then Keiko's perception of the Schröder stair observed at the angle θ1 with the fixed result i1 = L or R leads to the new states of sensation and perception:
σ_{i1;θ1} = Tr_H[UR_{θ1}U*(I ⊗ P_{i1})]/Tr[UR_{θ1}U*(I ⊗ P_{i1})],  ρ_{i1;θ1} = Tr_K[UR_{θ1}U*(I ⊗ P_{i1})]/Tr[UR_{θ1}U*(I ⊗ P_{i1})].   (39)
The probability of creation of the perception i1 can be calculated as
p_{i1;θ1} = Tr_H[ρ_{θ1}M_{i1;θ1}].   (40)
Here the POVM component M_{i1;θ1}, i1 = L, R, has the form:
M_{i1;θ1} = Tr_K[U*(I ⊗ P_{i1})U(I ⊗ σ0)].   (41)
For the next measurement, corresponding to rotation of the Schröder stair to the angle θ2, Keiko selects ρ_{i1;θ1} and σ_{i1;θ1} as the initial states. This means that creation of the fixed perception i1 leads to disentanglement of her mental state into the product of two states, the state of sensation and the state of perception. Then
ρ_{θ2} = U_{Sch;θ2}ρ_{i1;θ1}U_{Sch;θ2}*,   (42)
where U_{Sch;θ2} represents the unitary dynamics induced by the θ2-image. Then
R_{θ2} = ρ_{θ2} ⊗ σ_{i1;θ1}.   (43)
Then Keiko's perception of the Schröder stair observed at the angle θ2 with the fixed result i2 = L or R leads to the new states of sensation and perception:
σ_{i2;θ2} = Tr_H[UR_{θ2}U*(I ⊗ P_{i2})]/Tr[UR_{θ2}U*(I ⊗ P_{i2})],  ρ_{i2;θ2} = Tr_K[UR_{θ2}U*(I ⊗ P_{i2})]/Tr[UR_{θ2}U*(I ⊗ P_{i2})].   (44)
The probability of creation of the perception i2 can be calculated as
p_{i2;θ2} = Tr_H[ρ_{θ2}M_{i2;θ2}].   (45)
Starting with ρ_{i2;θ2}, σ_{i2;θ2}, Keiko generates the perception of the θ3-rotated stair, and so on. After the last test, Keiko's states of sensation and perception ρ_{in;θn}, σ_{in;θn} depend on the sequence of angles C and the sequence of her perceptions (i1, i2, …, in). The same is valid for the probability p_{in;θn}. If the experiment is performed for two different contexts C = {θ1, …, θm} and C′ = {θ′1, …, θ′m}, then in general it is impossible to embed the probabilities of perceptions in a single Kolmogorov probability space. Therefore, the use of quantum measurement theory and “quantum probabilities” can be fruitful. Our approach provides the possibility to model probabilities of perceptions depending on a context, a sequence of angles.
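The whole chain of this section can be simulated. In the toy sketch below all ingredients are illustrative assumptions: U_{Sch;θ} is modeled as a plane rotation by θ, the interaction U is a fixed controlled rotation, and the projective reading of the perception qubit is implemented in the sandwiched Lüders form, which for one-dimensional P_i realizes Equations (39) and (44). The marginal probability of the L-perception at the same final angle then generally differs between an ascending and a descending context, mimicking the order effect of Figure 2 qualitatively, not quantitatively:

```python
import numpy as np
from itertools import product

dH = dK = 2
I2 = np.eye(2, dtype=complex)

def ptrace(X, axis):               # partial trace over H (axis=0) or K (axis=1)
    return np.trace(X.reshape(dH, dK, dH, dK), axis1=axis, axis2=axis + 2)

def rot(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]], dtype=complex)

P = {"L": np.diag([1.0, 0.0]).astype(complex), "R": np.diag([0.0, 1.0]).astype(complex)}
U = np.zeros((4, 4), dtype=complex)
U[:2, :2], U[2:, 2:] = rot(0.6), rot(-0.6)          # toy sensation-perception coupling

plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
sigma0 = np.outer(plus, plus.conj())                # Equation (38)
rho0 = np.diag([1.0, 0.0]).astype(complex)          # toy pre-recognition state

def p_L_last(angles):
    """Marginal probability of the L-perception at the last angle of the context."""
    total = 0.0
    for branch in product("LR", repeat=len(angles) - 1):   # earlier outcomes
        rho, sigma, weight = rho0, sigma0, 1.0
        for theta, outcome in zip(angles, branch + ("L",)):
            Us = rot(np.radians(theta))             # toy model of U_Sch;theta
            R_out = U @ np.kron(Us @ rho @ Us.conj().T, sigma) @ U.conj().T
            Pk = np.kron(I2, P[outcome])
            E = Pk @ R_out @ Pk                     # sandwiched Lueders reading
            p = np.trace(E).real                    # Equations (40), (45)
            rho, sigma = ptrace(E, 1) / p, ptrace(E, 0) / p   # Equations (39), (44)
            weight *= p
        total += weight
    return total

print("ascending  0 -> 30 -> 45:", round(p_L_last([0, 30, 45]), 4))
print("descending 90 -> 60 -> 45:", round(p_L_last([90, 60, 45]), 4))
```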
9. Representing Perception by Quantum Instruments
The considered model of perception as the result of unitary interaction between the sensation-state and the perception-state describes an important class of transformations of the sensation-state, see Equation (35). We now turn to the general case which was considered in Section 6, see Equation (21). Set
E(a_i)ρ = p(a_i|ρ)ρ_{a_i},   (46)
and, for a subset Γ of O, where O = {a1, …, am} is the set of all possible perceptions, we set
E(Γ)ρ = ∑_{a_i∈Γ} E(a_i)ρ = ∑_{a_i∈Γ} p(a_i|ρ)ρ_{a_i}.   (47)
We point to the basic feature of this map:
Tr[E(O)ρ] = ∑_{a_i∈O} p(a_i|ρ)Tr[ρ_{a_i}] = 1.   (48)
For each concrete perception ai, E(ai) maps density operators to linear operators (in the infinite dimensional case, these are trace-class operators, but we proceed in the finite dimensional case, where all operators have finite traces).
The mixing law implies that, for any Γ ⊂ O,
E(Γ)(q1ρ1 + q2ρ2) = q1E(Γ)ρ1 + q2E(Γ)ρ2.   (49)
As was shown by Ozawa (1997), under the assumption of the existence of compositions of apparatuses, any such map E(Γ) : D(H) → L(H) can be extended to a linear map (superoperator)
E(Γ) : L(H) → L(H)   (50)
such that:
• each E(Γ) is positive, i.e., it transfers the set of positively defined operators into itself;
• E(O) = ∑_i E(a_i) is trace preserving:
Tr[E(O)ρ] = Tr[ρ].   (51)
The latter property is a consequence of Equation (48).⁸
Thus, the two very natural and simple assumptions, the mixing law for probabilities and the existence of composite apparatuses, have the fundamental mathematical consequence, the representation of the evolution of the state by a superoperator (Equation 50).
In quantum physics such maps are known as state transformers (Busch et al., 1995) or DL (Davies–Lewis; Davies and Lewis, 1970) quantum operations.⁹
Thus, each perception induces the back-reaction which can be formally represented as a state transformer. In these terms
ρ_{a_i} = E(a_i)ρ/Tr[E(a_i)ρ].   (52)
We remark that the map Γ → E(Γ), from subsets of the set of possible perceptions O into the space of superoperators L(L(H)), is additive:
E(Γ1 ∪ Γ2) = E(Γ1) + E(Γ2),  Γ1 ∩ Γ2 = ∅.   (53)
This is a measure with values in the space L(L(H)). Such measures are called (DL) instruments (Davies and Lewis, 1970). To specify the domain of applications in our case, we shall call them perception instruments.
The class of such instruments is essentially wider than the class of instruments based on the unitary interaction between the sensation and perception components of the mental state, see Equation (35). The evident generalization of the scheme of Section 7 is to consider nonunitary interactions between the components of the mental state; another assumption which can evidently be violated in the modeling of cognition is that the initial sensation and perception states are not entangled (“independent”) (see Asano et al., 2010a,b, 2011, 2012 for generalizations of the aforementioned scheme).
We start with a discussion of the possible nonunitarity of the interaction between the sensation and perception states. In quantum physics the assumption of unitarity of the interaction between the principal system S and the probe system S′ (representing a part of the measurement apparatus interacting with S) is justified, because the compound system S + S′ can be considered (with a high degree of approximation) as an isolated quantum system, and its evolution can be described (at least approximately) by the Schrödinger equation. The latter induces the unitary evolution of a state.
In cognition the situation is totally different. The main scene of cognition is not the physical space-time, but the brain. It is characterized by huge interconnectivity and parallelism of information processing. Therefore, it is more natural to consider the sensation and perception states corresponding to different visual inputs as interacting, especially at the level of the sensation-states. Thus, the perception-creation model based on the assumption of isolation of different perception-creation processes from each other seems to be too idealized, although it can be used in many applications, where the concentration on one fixed problem may diminish the influence of other perception-creation processes.
In physics, the assumption that the initial state of the system S + S′ is factorized is also justified, since the exclusion of the influence of the state of the measurement device on the state of a system S prepared for measurement (and vice versa) is experimental routine. In cognition the situation is more complicated. One cannot exclude that in some situations the initial sensation and perception states are entangled.
The representation of probabilities with the aid of POVMs is not a feature only of the unitary-interaction representation of apparatuses, see Equation (32). In general, any DL-instrument generates such a representation. Take an instrument E, where, for each a_i ∈ O, E(a_i) : L(H) → L(H) is a superoperator. Then we can define the adjoint superoperator E*(a_i) : L(H) → L(H). Set M_{a_i} = E*(a_i)I, where I : H → H is the unit operator. Then p_{a_i} = Tr[E(a_i)ρ] = ⟨I|E(a_i)ρ⟩ = ⟨E*(a_i)I|ρ⟩ = Tr[(E*(a_i)I)ρ] = Tr[M_{a_i}ρ]. By using the properties of an instrument it is easy to show that (M_{a_i}) is a POVM. Thus, each mental apparatus can be represented by a POVM. We interpret this POVM as the mathematical representation of “unconscious” inference. Such “unconscious measurements” are not sharp; they cannot completely separate different perceptions a_i which are mutually exclusive at the conscious level. Mathematically, the subspaces H_{a_i} = M_{a_i}H need not be orthogonal. Sensation states corresponding to the perceptions a_i and a_j, say ψ_i ∈ H_{a_i} and ψ_j ∈ H_{a_j}, in general have nonzero overlap ⟨ψ_i|ψ_j⟩ ≠ 0.
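This chain of equalities can be verified numerically: build the matrix of a superoperator E(a_i) in the orthonormal basis of matrix units, take its Hilbert–Schmidt adjoint, and recover M_{a_i} = E*(a_i)I. The operation below (one atomic operation ρ → QρQ* with an illustrative contraction Q) is our assumption, not the paper's example:

```python
import numpy as np

d = 2
Q = np.array([[np.sqrt(0.8), 0.0], [0.0, np.sqrt(0.4)]], dtype=complex)
E = lambda X: Q @ X @ Q.conj().T      # one atomic operation of an instrument

# Matrix of E in the basis of matrix units, orthonormal w.r.t. <A|B> = Tr A*B.
S = np.zeros((d * d, d * d), dtype=complex)
for k in range(d * d):
    B = np.zeros((d, d), dtype=complex)
    B.flat[k] = 1.0
    S[:, k] = E(B).flatten()

Estar = lambda X: (S.conj().T @ X.flatten()).reshape(d, d)   # adjoint superoperator
M = Estar(np.eye(d, dtype=complex))                          # M_ai = E*(ai) I

rho = np.array([[0.3, 0.1], [0.1, 0.7]], dtype=complex)
print(np.trace(E(rho)).real, np.trace(M @ rho).real)         # equal, as in the text
assert np.allclose(M, Q.conj().T @ Q)                        # here M = Q*Q, cf. (20)
```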
10. Concluding Remarks
This paper is an attempt to present the theory of generalized quantum measurements, based on quantum apparatuses and instruments, in a humanities-friendly way. This is a difficult task, since this theory is based on an advanced mathematical apparatus. We hope that the reader can at least follow our introductory presentation in Sections 3 and 4. Although we applied quantum apparatuses and instruments to a concrete problem of cognition, modeling bistable perception and, more generally, Helmholtz's unconscious inference, this approach can be used to model general unconscious–conscious information processing. We hope that in the future other interesting examples will be presented with the aid of this formalism (cf. Khrennikov, 2010a, 2014).
1. Unconscious inference (also translated as unconscious conclusion) is a term of perceptual psychology invented by von Helmholtz (1866; see also Boring, 1942) to describe an involuntary, pre-rational and reflex-like mechanism which is part of the formation of visual impressions.
2. We remark that the formula of total probability and the Bell-type inequalities can be treated as just two special statistical tests of non-classicality of the data (see Conte et al., 2008; Bruza et al., 2010; Khrennikov, 2010b; Asano et al., 2014; Dzhafarov and Kujala, 2014 for discussion). This is the “minimal interpretation.” In quantum physics the standard interpretation of these inequalities is related to whether we can proceed with a realistic and local model. The Leggett–Garg inequality is a rather special type of Bell inequality, since it is about time correlations for a single system, while the original Bell inequality is about spatial correlations for pairs of systems.
3. We recall that a linear operator A in H is called Hermitian if it coincides with its adjoint operator, A = A*. If an orthonormal basis (e_i) in H is fixed and A is represented by its matrix A = (a_ij), where a_ij = ⟨Ae_i|e_j⟩, then it is Hermitian if and only if ā_ij = a_ji. A linear operator is positive-definite if, for any ϕ ∈ H, ⟨Aϕ|ϕ⟩ ≥ 0. This is equivalent to positive-definiteness of its matrix. We remark that, for a Hermitian operator, all its eigenvalues are real.
4. It is less known (in fact, practically unknown) that von Neumann sharply distinguished the case of observables with non-degenerate spectra, i.e., all (P_{a_i}) in the spectral decomposition of A, see Equation (5), are one-dimensional projectors, and observables with degenerate spectra, i.e., some of the (P_{a_i}) are projectors onto multi-dimensional subspaces. In the first case he postulated the aforementioned state collapse (Equation 7), but in the second case he pointed out that the measurement feedback can generate state transformations different from the one given by Equation (7); in particular, the output of an initial pure state can be a mixed state. Later Lüders extended the von Neumann projection postulate to observables with degenerate spectra, i.e., in fact, he reduced the class of possible state transformations (quantum operations). This simplification was convenient in theoretical studies and the projection postulate was widely treated as applicable generally, i.e., even to observables with degenerate spectra. The name of Lüders was washed out from the majority of foundational works and nowadays the projection postulate is typically known as the von Neumann projection postulate (see Khrennikov, 2008 for more details).
5. We are going toward the creation of a cognitive analog of the quantum operational model of measurements with the aid of physical apparatuses.
6. One can say that they are independent. But one has to use this terminology carefully, since the notion of quantum independence is more complicated than the classical one and is characterized by a diversity of approaches.
7. As was mentioned, in the works of Asano et al. (2010a,b, 2011, 2012) and Accardi et al. (in press) even non-unitary evolutions were considered.
8. If one wants to extend E(Γ) from the set of density operators to the set of all linear operators (in the infinite-dimensional case it has to be the set of finite-trace operators) by linearity, then one has to set E(Γ)μ = Trμ · E(Γ)(μ∕Trμ) and, in particular, Tr E(O)μ = Trμ · Tr E(O)(μ∕Trμ) = Trμ.
9. The DL-notion of quantum operation is more general than the notion used nowadays. The latter is based on complete positivity, instead of simple positivity as in the DL-notion; see the Appendix for the corresponding definition and a discussion of whether the reasons used in physics to restrict the class of state transformers can be automatically carried over to cognitive science.
Accardi, L., Khrennikov, A., Ohya, M., Tanaka, Y., and Yamato, I. (in press). Application of non-Kolmogorovian probability and quantum adaptive dynamics to unconscious inference in visual perception process. Open Syst. Inform. Dyn. doi: 10.1007/978-94-017-9819-8
Aerts, D., Sozzo, S., and Tapia, J. (2014). Identifying quantum structures in the Ellsberg paradox. Int. J. Theor. Phys. 53, 3666–3682. doi: 10.1007/s10773-014-2086-9
Aerts, D., Broekaert, J., Czachor, M., Kuna, M., Sinervo, B., and Sozzo, S. (2014). Quantum structure in competing lizard communities. Ecol. Model. 281, 38–51. doi: 10.1016/j.ecolmodel.2014.02.009
Asano, M., Ohya, M., and Khrennikov, A. (2010a). Quantum-like model for decision making process in two players game. Found. Phys. 41, 538–548. doi: 10.1007/s10701-010-9454-y
Asano, M., Ohya, M., Tanaka, Y., Khrennikov, A., and Basieva, I. (2010b). On application of Gorini-Kossakowski-Sudarshan-Lindblad equation in cognitive psychology. Open Syst. Inf. Dyn. 17, 1–15. doi: 10.1142/S1230161211000042
Asano, M., Ohya, M., Tanaka, Y., Khrennikov, A., and Basieva, I. (2011). Dynamics of entropy in quantum-like model of decision making. J. Theor. Biol. 281, 56–64. doi: 10.1016/j.jtbi.2011.04.022
Asano, M., Basieva, I., Khrennikov, A., Ohya, M., Tanaka, Y., and Yamato, I. (2012). Quantum-like model of diauxie in Escherichia coli: operational description of precultivation effect. J. Theor. Biol. 314, 130–137. doi: 10.1016/j.jtbi.2012.08.022
Asano, M., Khrennikov, A., Ohya, M., Tanaka, Y., and Yamato, I. (2014). Violation of contextual generalization of the Leggett-Garg inequality for recognition of ambiguous figures. Phys. Scr. T163, 014006. doi: 10.1088/0031-8949/2014/t163/014006
Atmanspacher, H., and Filk, T. (2012). “Temporal nonlocality in bistable perception,” in Quantum Theory: Reconsiderations of Foundations - 6, Special Section: Quantum-like Decision Making: From Biology to Behavioral Economics, AIP Conf. Proc. Vol. 1508, eds A. Khrennikov, H. Atmanspacher, A. Migdall, and S. Polyakov (Melville, NY: American Institute of Physics), 79–88.
Atmanspacher, H., and Filk, T. (2013). The Necker-Zeno model for bistable perception. Top. Cogn. Sci. 5, 800–817. doi: 10.1111/tops.12044
Boring, E. G. (1942). Sensation and Perception in the History of Experimental Psychology. New York, NY: Appleton-Century Co.
Bruza, P., Kitto, K., Ramm, B., Sitbon, L., Blomberg, S., and Song, D. (2010). Quantum-like non-separability of concept combinations, emergent associates and abduction. Log. J. IGPL 20, 455–457. doi: 10.1093/jigpal/jzq049
Busch, P., Grabowski, M., and Lahti, P. (1995). Operational Quantum Physics. Berlin: Springer Verlag.
Busemeyer, J. R., and Bruza, P. D. (2012). Quantum Models of Cognition and Decision. Cambridge: Cambridge University Press.
Busemeyer, J. R., Pothos, E. M., Franco, R., and Trueblood, J. (2011). A quantum theoretical explanation for probability judgment errors. Psychol. Rev. 118, 193–218. doi: 10.1037/a0022542
Conte, E., Khrennikov, A., Todarello, O., Federici, A., and Zbilut, J. P. (2008). A preliminary experimental verification on the possibility of Bell inequality violation in mental states. Neuroquantology 6, 214–221. doi: 10.14704/nq.2008.6.3.178
Davies, E., and Lewis, J. (1970). An operational approach to quantum probability. Comm. Math. Phys. 17, 239–260. doi: 10.1007/BF01647093
Dzhafarov, E. N., and Kujala, J. V. (2012a). Selectivity in probabilistic causality: where psychology runs into quantum physics. J. Math. Psychol. 56, 54–63. doi: 10.1016/
Dzhafarov, E. N., and Kujala, J. V. (2012b). Quantum entanglement and the issue of selective influences in psychology: an overview. Lec. Notes Comput. Sci. 7620, 184–195. doi: 10.1007/978-3-642-35659-9/17
Dzhafarov, E. N., and Kujala, J. V. (2013). All-possible-couplings approach to measuring probabilistic context. PLoS ONE 8:e61712. doi: 10.1371/journal.pone.0061712
Dzhafarov, E. N., and Kujala, J. V. (2014). On selective influences, marginal selectivity, and Bell/CHSH inequalities. Top. Cogn. Sci. 6, 121–128. doi: 10.1111/tops.12060
Haven, E., and Khrennikov, A. (2012). Quantum Social Science. Cambridge: Cambridge University Press.
Haven, E., and Khrennikov, A. (2009). Quantum mechanics and violation of the sure-thing principle: the use of probability interference and other concepts. J. Math. Psychol. 53, 378–388. doi: 10.1016/
Khrennikov, A., and Basieva, I. (2014). Quantum model for psychological measurements: from the projection postulate to interference of mental observables represented as positive operator valued measures. Neuroquantology 12, 324–336. doi: 10.14704/nq.2014.12.3.750
Khrennikov, A., Basieva, I., Dzhafarov, E. N., and Busemeyer, J. R. (2014). Quantum models for psychological measurements: an unsolved problem. PLoS ONE 9:e110909. doi: 10.1371/journal.pone.0110909
Khrennikov, A. (2004). “Information dynamics in cognitive, psychological, social, and anomalous phenomena,” in Fundamental Theories of Physics (Dordrecht: Kluwer).
Khrennikov, A. (2008). The role of von Neumann and Lüders postulates in the Einstein, Podolsky, and Rosen considerations: comparing measurements with degenerate and nondegenerate spectra. J. Math. Phys. 49, 052102. doi: 10.1063/1.2903753
Khrennikov, A. (2010b). Ubiquitous Quantum Structure: From Psychology to Finances. Berlin; Heidelberg; New York: Springer.
Khrennikov, A. Y. (2010a). Modelling of psychological behavior on the basis of ultrametric mental space: encoding of categories by balls. P-Adic Num. Ultram. Anal. Appl. 2, 1–20. doi: 10.1134/S2070046610010012
Khrennikov, A. Y. (2014). Cognitive processes of the brain: an ultrametric model of information dynamics in unconsciousness. P-Adic Num. Ultram. Anal. Appl. 6, 293–302. doi: 10.1134/S2070046614040049
Laming, D. R. J. (1997). The Measurement of Sensation. Oxford Psychology Series No. 30. Oxford: Oxford University Press.
Newman, L. S., Moskowitz, G. B., and Uleman, J. S. (1996). People as flexible interpreters: evidence and issues from spontaneous trait inference. Adv. Exp. Soc. Psychol. 28, 211–279.
Nielsen, M. A., and Chuang, I. L. (2000). Quantum Computation and Quantum Information. Cambridge: Cambridge University Press.
Ozawa, M. (1997). An operational approach to quantum state reduction. Ann. Phys. 259, 121–137. doi: 10.1006/aphy.1997.5706
Plotnitsky, A. (2006). Reading Bohr: Physics and Philosophy. Dordrecht: Springer.
Plotnitsky, A. (2009). Epistemology and Probability: Bohr, Heisenberg, Schrödinger, and the Nature of Quantum-Theoretical Thinking. Heidelberg; Berlin; New York: Springer.
Pothos, E. M., and Busemeyer, J. R. (2013). Can quantum probability provide a new direction for cognitive modeling? Behav. Brain Sci. 36, 255–274. doi: 10.1017/S0140525X12001525
Shaji, A., and Sudarshan, E. C. G. (2005). Who's afraid of not completely positive maps? Phys. Lett. A 341, 48–54. doi: 10.1016/j.physleta.2005.04.029
von Helmholtz, H. (1866). Treatise on Physiological Optics. English transl. by the Optical Society of America. New York, NY: Optical Society of America.
Wang, Z., and Busemeyer, J. R. (2013). A quantum question order model supported by empirical tests of an a priori and precise prediction. Top. Cogn. Sci. 5, 689–710. doi: 10.1111/tops.12040
Woodworth, R. S. (1921). Psychology: A Study of Mental Life. New York, NY: H. Holt.
Zeilinger, A. (2010). Dance of the Photons: From Einstein to Quantum Teleportation. New York, NY: Farrar, Straus and Giroux.
Appendix: Do We Need Complete Positivity?
Nowadays the theory of DL-instruments is considered old-fashioned; the class of such instruments is considered to be too general: it contains mathematical artifacts which have no relation to real physical measurements and to state transformations as back-reactions to these measurements. The modern theory of instruments is based on the extendability postulate (e.g., Busch et al., 1995; Ozawa, 1997; Nielsen and Chuang, 2000):
For any apparatus A_S corresponding to measurement of an observable A on a system S, and any system S̃ noninteracting with S, there exists an apparatus A_{S+S̃} representing measurement on the compound system S + S̃ such that
• p(a_i|ρ ⊗ r) = p(a_i|ρ);
• (ρ ⊗ r)_{a_i} = ρ_{a_i} ⊗ r
for any state ρ of S and any state r of S̃.
In physics this postulate is quite natural: if, besides the quantum system S which is the object of measurement, there is (somewhere in the universe) another system S̃ which is not entangled with S, i.e., their joint pre-measurement state has the form ρ ⊗ r, then the measurement on S with the result a_i can be considered as a measurement on S + S̃ as well, with the same result a_i. It is clear that the back-reaction cannot change the state of S̃. Surprisingly, this very trivial assumption has tremendous mathematical implications.
Since we proceed only in the finite-dimensional case, the corresponding mathematical considerations are simplified. Consider an instrument E_S representing the state update as the result of the back-reaction from a measurement on S. For each Γ, this is a linear map L(H) → L(H), where H is the state space of S. Let W be the state space of the system S̃. Then the state space of the compound system S + S̃ is given by the tensor product H ⊗ W. We remark that the space of linear operators in this state space can be represented as L(H ⊗ W) = L(H) ⊗ L(W). Then the superoperator E_S(Γ) : L(H) → L(H) can be trivially extended to the superoperator E_S(Γ) ⊗ I : L(H ⊗ W) → L(H ⊗ W). It is easy to prove that the state transformer corresponding to the apparatus for measurements on S + S̃ has to have the form E_{S+S̃}(a_i) = E_S(a_i) ⊗ I. Hence, this operator also has to be positively defined. We remark that if the state space W has dimension k, then the space of linear operators L(W) can be represented as the space of k × k matrices, further denoted as C^{k×k}.
Formally, a superoperator T : L(H) → L(H) is called completely positive if it is positive and each of its trivial extensions T ⊗ I : L(H) ⊗ C^{k×k} → L(H) ⊗ C^{k×k} is also positive. There are natural examples of positive maps which are not completely positive (Nielsen and Chuang, 2000).
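The standard counterexample behind this definition (mentioned in Nielsen and Chuang, 2000) is the transpose map T(ρ) = ρᵀ: it is positive, but its trivial extension T ⊗ I fails to be positive on an entangled state, as the following short check shows:

```python
import numpy as np

bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1.0 / np.sqrt(2)          # (|00> + |11>)/sqrt(2)
R = np.outer(bell, bell.conj())               # an entangled two-qubit state

# Apply T (x) I: transpose only the first tensor factor ("partial transpose").
PT = R.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)
print(np.linalg.eigvalsh(PT))                 # one eigenvalue equals -1/2
```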
A CP quantum operation is a DL quantum operation which is additionally completely positive; a CP instrument is based on CP quantum operations representing back-reactions to measurement. As was pointed out, in the modern literature only CP quantum operations and instruments are in use, so they are called simply quantum operations and instruments.
The main mathematical feature of (CP) quantum operations is that the class of such operations can be described in a simple way, namely, with the aid of the Kraus representation (Busch et al., 1995; Ozawa, 1997; Nielsen and Chuang, 2000):
Tρ = ∑_j V_jρV_j*,   (A1)
where (V_j) are some operators acting in H. Hence, for a (CP) instrument, we have: for each a_i ∈ O, there exist operators (V_{a_i j}) such that
E(a_i)ρ = ∑_j V_{a_i j}ρV_{a_i j}*,   (A2)
ρ_{a_i} = ∑_j V_{a_i j}ρV_{a_i j}* / Tr[∑_j V_{a_i j}ρV_{a_i j}*],   (A3)
where the trace one condition (Equation 48) implies that
∑_i ∑_j V_{a_i j}*V_{a_i j} = I.   (A4)
The corresponding POVM components M_{a_i} can be represented as
M_{a_i} = ∑_j V_{a_i j}*V_{a_i j}.   (A5)
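A short numerical check of (A2), (A4), and (A5) with illustrative diagonal Kraus operators (our choices, not from the paper):

```python
import numpy as np

q = 0.25
V = {  # V[a] lists the Kraus operators V_aj for outcome a (illustrative)
    "L": [np.sqrt(1 - q) * np.diag([1.0, 0.0]), np.sqrt(q) * np.diag([0.0, 1.0])],
    "R": [np.sqrt(q) * np.diag([1.0, 0.0]), np.sqrt(1 - q) * np.diag([0.0, 1.0])],
}

# Normalization (A4): the double sum of V* V over outcomes and j gives I.
total = sum(Vj.conj().T @ Vj for Vs in V.values() for Vj in Vs)
assert np.allclose(total, np.eye(2))

rho = np.array([[0.6, 0.25], [0.25, 0.4]], dtype=complex)
for a, Vs in V.items():
    Erho = sum(Vj @ rho @ Vj.conj().T for Vj in Vs)   # the operation (A2)
    Ma = sum(Vj.conj().T @ Vj for Vj in Vs)           # the POVM element (A5)
    print(a, round(np.trace(Erho).real, 4), round(np.trace(Ma @ rho).real, 4))
```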
This is a really elegant mathematical representation. However, it might be that this mathematical elegance, and not the real physical situation, has contributed to the widespread use of CP in quantum information theory (cf. Shaji and Sudarshan, 2005).
Is the use of the extendability postulate justified in the operational approach to cognition?
Seemingly not (although further analysis is required). Any concrete perception takes place at the conscious level, and it is based on interaction with the sensation of a visual image. The state of this sensation corresponds to the state of the system S in the above considerations. To be able to consider the state of another sensation, the analog of the state of the system S̃, the brain has to activate this sensation. Thus, we cannot simply consider all possible sensations as existing simultaneously in some kind of mental universe. Hence, in general, sensations generated by different visual stimuli cannot be treated as existing simultaneously.
It is more natural to develop the theory of perception instruments as a theory of DL instruments rather than CP instruments. In particular, although the Kraus representation can be used as a powerful analytic tool, we need not overestimate its applicability for the modeling of cognition.
Keywords: sensation, perception, quantum-like model, quantum apparatuses and instruments, bistable perception, unconscious inference
Citation: Khrennikov A (2015) Quantum-like model of unconscious–conscious dynamics. Front. Psychol. 6:997. doi: 10.3389/fpsyg.2015.00997
Received: 18 March 2015; Accepted: 02 July 2015;
Published: 03 August 2015.
Edited by:
Sandro Sozzo, University of Leicester, UK
Reviewed by:
George Kachergis, New York University, USA
Harald Atmanspacher, Collegium Helveticum, Switzerland
*Correspondence: Andrei Khrennikov, Department of Mathematics, Mathematical Institute, Linnaeus University, Universitetsplatsen 1, Växjö S-35195, Sweden
Thursday, May 10, 2012
A Universe from Nothing
The book A Universe from Nothing: Why There Is Something Rather than Nothing by Lawrence Krauss has stimulated a lot of aggressive debate between Krauss and some philosophers, and has of course helped in gaining media attention.
Peter Woit wrote about the debate - not so much about the contents of the book - and regarded the book as boring and dull. He sees this book as an end to multiverse mania: bad philosophy and bad physics. I tried to get an idea of what Krauss really says but failed: Woit's posting concentrates on the emotional side (the more negative the better;-)) as blog postings must do to maximize the number of readers.
Peter Woit also wrote a second posting about the same theme. It was about Jim Holt's book Why Does the World Exist?: An Existential Detective Story. Peter Woit found the book brilliant, but again it remained unclear to me what Jim Holt really said!
Sean Carroll has a posting about the book which says more about its contents. This posting was much more informative: not just anecdotes and names but an attempt to analyze what is involved.
In the following I will not consider the question "Why There Is Something Rather than Nothing" since I regard it as a pseudo-question. The very fact that the question is asked implies that something - the person who poses the question - exists. One could of course define "nothing" as the vacuum state, as physicists might do, but with this definition the meaning of the question changes completely from what it is for philosophers. Instead, I will consider the notion of existence from the physics point of view and try to show what non-trivial implications the attempt to define this notion more precisely has.
What do we mean by "existence"?
The first challenge is to give meaning to the question "Why There Is Something Rather than Nothing". This process of giving meaning is of course highly subjective and I will discuss only my own approach. In my opinion the first step is to ask "What is existence?". Is there only a single kind of existence or does existence come in several flavors? Indeed, several variants of existence seem to be possible: material objects, mathematical structures, theories, conscious experiences, etc. It is difficult to see them as members of the same category of existence.
This question was not asked by Sean Carroll, who equated all kinds of existence with material existence - irrespective of whether they become manifest as a reading on a scale, as mathematical formulas, or via emotional expressions. Carroll did not notice that already this assumption might lead astray. Carroll did the same as most mainstream physicists would do, and I am afraid that Krauss also makes the same error. I dare hope that the philosophers criticizing Krauss have avoided this mistake: at least they made clear what they thought about the depth of philosophical thinking of the physicists of this century.
Why might Carroll have done something very stupid?
1. The first point is that this vision, known as materialism in philosophy, suffers from serious difficulties. The basic implication is that consciousness is reduced to physical existence. Free will is only an illusion, all our intentions are illusions, ethics is an illusion, and moral rules rely on illusion. Everything was dictated in the Big Bang, at least in the statistical sense. Perhaps we should think twice before accepting this view.
2. The second point is that one ends up with heavy difficulties in physics itself: quantum measurement theory is the black sheep of physics, and it is not tactful to talk about quantum measurement theory at the coffee table of physicists. The problem is simply that the non-determinism of state function reduction - necessary for the interpretation of experiments in the Copenhagen interpretation - is in conflict with the determinism of the Schrödinger equation. The basic problem does not disappear for other interpretations. How is it possible that the world is both deterministic and non-deterministic at the same time? There seem to be two causalities: could they relate to two different notions of time? Could the times for the Schrödinger equation and for state function reduction be different?
I have just demonstrated that when one speaks about ontology, one sooner or later begins to talk about time. This is unavoidable. As inhabitants of the everyday world we of course know that experienced time is not the same as the geometric time of physicists. But as professional physicists we have been painfully conditioned to identify these two times. Also Carroll as a physics professor makes this identification - and does not even realize what he is doing - and starts to speak about time evolution as Hamiltonian unitary evolution without a single word about the problems of quantum measurement theory.
With this background I am ready to state what the permanent readers of this blog could work out themselves. In the TGD Universe the notion of existence becomes a much more many-faceted thing than in the usual ultra-naive approach of the materialistic physicist. There are many levels of ontology.
1. The basic division is into "physical"/"objective" existence and conscious existence. Physical states, identified with their mathematical representations ("identified" is important! I will discuss this later), correspond to "objective" existence. Physical states generalize the solutions of the Schrödinger equation: they are not counterparts of time = constant snapshots of time evolution but counterparts of entire time evolutions. Quantum jumps take place between these, so that state function reduction does not imply a failure of determinism and one avoids the basic paradox. This however implies that one must assign subjective time to the quantum jumps and geometric time to the counterparts of the evolution of the Schrödinger equation. There are two times.
In this framework the talk about the beginning of the Universe and about what was before the Big Bang becomes nonsense. One can speak about boundaries of space-time surfaces, but they have little to do with the beginning and the end, which are notions natural in the case of experienced time.
2. One can divide objective existence into two sub-categories: quantum existence (quantum states as mathematical objects) and classical existence having space-time surfaces as its mathematical representation. Classical determinism fails in its standard form but generalizes, and classical physics ceases to be an approximation and becomes an exact part of quantum theory as Bohr orbitology, implied by General Coordinate Invariance alone. We have ended up with tripartism instead of monistic materialism.
3. One can divide geometric existence into sub-existences based on ordinary physics obeying real topology and various p-adic physics obeying p-adic topology. p-Adic space-time sheets serve as space-time correlates for cognition and intentionality, whereas real space-time sheets are correlates for what we call matter.
4. Zero energy ontology (ZEO) also represents a new element. Physical states are replaced with zero energy states formed by pairs of positive and negative energy states at the boundaries of the causal diamond (CD); in the standard ontology they correspond to physical events formed by pairs of initial and final states. Conservation laws hold true only in the scale characterizing a given CD. Inside a given CD classical conservation laws are exact. This allows one to understand why the failure of classical conservation laws in cosmic scales is consistent with Poincare invariance.
In this framework the Schrödinger equation is only a starting point from which one generalizes. The notion of Hamiltonian evolution, seen by Carroll as something very deep, is not natural in the relativistic context and becomes nonsensical in the p-adic context. Only the initial and final states of the evolution defining the zero energy state are relevant, in accordance with the strong form of holography. U-matrix, M-matrix and S-matrix become the key notions in ZEO.
5. A very important point is that there is no need to distinguish between physical objects and their mathematical description (as quantum states in a Hilbert space of some sort). A physical object is its mathematical description. This allows one to circumvent the question "But what about theories: do also theories exist physically or in some other sense?". A quantum state is a theory about a physical state, and the physicist and mathematician exist in quantum jumps between them. Physical worlds define the Platonia of the mathematician, and conscious existence is hopping around in this Platonia: from a zero energy state to a new one. And ZEO allows all possible jumps! Could a physicist or mathematician wish for anything better;-)!
This list of items shows how dramatically the situation changes when one realizes that the materialistic dogma is just an assumption, and one in conflict with what we have known experimentally for almost a century.
Could physical existence be unique?
The identification of physical (or "objective") existence as mathematical existence raises the question whether physics could be unique, following from the requirement that the mathematical description with which it is identical exists. In the finite-dimensional case this is certainly not so: a given finite-D manifold allows an infinite number of different geometries. In the infinite-dimensional case the situation changes dramatically. One possible additional condition is that the physics in question is maximally rich in structure besides existing mathematically! Quantum criticality has been my own phrasing for this principle, and the motivation comes from the fact that at criticality long range fluctuations set in and the system has a fractal structure and is indeed extremely richly structured.
This does not yet say much about what the basic objects of this possibly existing infinite-dimensional space are. One can however generalize Einstein's "Classical physics as space-time geometry" program to a "Quantum physics as infinite-dimensional geometry of the world of classical worlds (WCW)" program. Classical worlds are identified as space-time surfaces, since the finite-dimensional classical version of the program must also be realized. What is new is "surface": Einstein did not consider space-time as a surface but as an abstract 4-manifold, and this led to the failure of the geometrization program. Sub-manifold geometry is however much richer than manifold geometry and gives excellent hopes for the geometrization of the electro-weak and color interactions besides gravitation.
If one assumes that the basic objects are space-time surfaces of some dimension in some higher-dimensional space, one can ask whether it is possible for WCW to have a geometry. If one requires a geometrization of quantum physics, this geometry must be Kähler. This is a highly non-trivial condition. The simplest spaces of this kind are loop spaces, relating closely to string models: their Kähler geometry is unique from the mere existence of the Riemann connection. This geometry also has the maximal possible symmetries, defined by a Kac-Moody algebra, which looks very physical. Mere mathematical existence implies maximal symmetries and a maximally beautiful world!
Loops are 1-dimensional, but for higher-dimensional objects the mathematical constraints are much more stringent, as the divergence difficulties of QFTs have painfully taught us. General Coordinate Invariance emerges as an additional powerful constraint, and symmetries related to conformal symmetry, generalizing from the 2-D case to symmetries of 3-D light-like surfaces, turn out to be the key to the construction. The requirement of maximal symmetry realized by conformal invariance leads to the correct space-time dimension and also dictates that the imbedding space has an M^4 x S decomposition; the light-cone boundaries also possess huge conformal symmetries giving rise to additional infinite-D symmetries.
There are excellent reasons to believe that WCW geometry is unique. Its existence would be guaranteed by a reduction to generalized number theory: M^4 x CP_2, forced by standard model symmetries, becomes the unique choice if one requires that the classical number fields are an essential part of the theory. "Physics as infinite-D geometry" and "Physics as Generalized Number Theory" would be the basic principles and would imply consistency with standard model symmetries.
At 6:56 AM, Anonymous Orwin O'Dowd said...
This is about as close as experiment gets to gamma-ray bursts, as we now see from the heart of the universe, and it's a ZEO scenario with magnetic dynamics:
Anything stringy would then resemble a "surface tension" in the "skin depth", which is a more than plausible prospect.
I suspect that atoms are intricately patterned in this way, which guises them as alchemical boids that go "quark" in the dead of night, but that's a surreal imagining. And I'm just passing through here.
At 12:24 PM, Blogger Ulla said...
At 11:43 PM, Anonymous ◘Fractality◘ said...
If DNA is a topological quantum computer, do all actions proceed through it?
At 11:58 PM, Anonymous said...
Quantum computing like activities are always possible when two molecules, or even larger objects, are connected by flux tubes. This information processing is universal. The DNA-lipid layer system would however be especially suitable for this purpose. The minimal function would be the realization of memory as braiding patterns updated by the flows of molecules.
At 2:56 AM, Blogger hamed said...
Dear Matti,
Thanks for the posting, which is controversial for me.
Sentences like "Physical worlds define the Platonia of the mathematician" lead thinking in a beautiful direction, because in this view, if I study geometry or algebra in a very abstract manner, I can think that I am really studying the physical world!!! But we know that mathematics is very wide, contains very abstract theorems in all its branches, and progresses year by year. Then, to understand the physical world precisely, should one learn all of mathematics!?
For example, it would be very interesting to me if some mathematical spaces like L^p spaces existed physically! (They are spaces in which one deals with the p-norm instead of the 2-norm.)
Or, in number theory, I think this view leads to some very deep understanding of physics if one asks what the physical meaning of all 10 Musean hypernumbers is: sedenions, w, p, q, m, ...; also the nu number as a unifying concept allowing transitions between all the hypernumber types, and sigma as the creator of axes, and also the antinumbers. The relations between the ten levels of hypernumbers are very interesting to me:
At 4:02 AM, Anonymous said...
Dear Hamed,
Interesting question. This idea about Platonia as the physical world looks controversial. At first it seems to be in conflict with the vision that the laws of physics are unique, that standard model symmetries are somehow very special, etc.
The point is however that standard model symmetries would be symmetries of mathematics itself! For instance, octonions have SU(3) as the subgroup of their automorphism group G_2 respecting a preferred imaginary unit, and CP_2 is the coset space SU(3)/U(2), having an interpretation as the space of quaternionic planes of octonionic space at a given point.
A second point is that physics should be like a universal Turing machine: it should be able to emulate all possible internally consistent physics. Finite measurement resolution, if representable quite generally as an effective gauge symmetry, would allow the emulation of extremely general gauge symmetric theories.
One can also worry about higher-dimensional spaces if the imbedding space is 8-D. The world of classical worlds is however infinite-D and allows as sub-manifolds finite-D spaces of arbitrary dimension. Also, unions of N disjoint n-sub-manifolds are locally effectively N*n-dimensional: the standard wave-mechanical description of an N-particle system indeed uses an N*3-D configuration space.
If the number theoretical Brahman=Atman, based on the generalization of real numbers introducing an infinite number of real units as ratios of infinite integers, is accepted, a space-time point becomes infinitely richly structured, and WCW might allow a realization as M^4xCP_2 with this more general definition of the space-time point.
There is also the proposal of a fractal hierarchy in which arithmetic with + and * is replaced with direct sum and tensor product for Hilbert spaces. Replacing points of Hilbert spaces with Hilbert spaces, one obtains a hierarchy very similar to that of infinite primes, and also now an interpretation in terms of endless second quantization might make sense.
Infinite-dimensionality poses very, very strong constraints on mathematical structures: the Kähler metric in loop spaces is unique. Infinite-dimensionality would bring in the laws of physics! One might hope that this condition poses strong enough constraints on the allowed mathematics: for instance, all finite-D structures would be such that they can be induced from infinite-D structures. Mathematicians talk about classifying spaces: probably this is the same basic idea.
It is certainly frustrating to realize how little an individual can learn of mathematics during a lifetime. I believe that the correct guideline is that the mathematics one learns, or perhaps even creates, must emerge naturally from applications to real world problems - in my case physics. When I was younger I used to visit the math library and walk between the book shelves with the idea that I might find some miraculous cure for my mathematical problems with TGD. I left the library in a rather depressed mood! ;-)
L^p spaces with p=2 are the most natural from the point of view of physics. Bilinearity means linearity, and quantum superposition would be lost for p different from 2. In the infinite-D context p=2 is natural.
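To spell out why p=2 is singled out (a standard functional-analysis fact, stated here for concreteness): the p-norm is
\[ \|f\|_p = \Big( \int |f(x)|^p \, dx \Big)^{1/p}, \]
and only for p=2 does this norm satisfy the parallelogram law
\[ \|f+g\|^2 + \|f-g\|^2 = 2\|f\|^2 + 2\|g\|^2, \]
which is equivalent to the norm arising from an inner product, \( \|f\|_2^2 = \langle f, f \rangle \) with \( \langle f, g \rangle = \int \overline{f(x)}\, g(x)\, dx \). An inner product is exactly what transition amplitudes and Born-rule probabilities require, so quantum superposition lives naturally only in the p=2 case.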
At 8:39 AM, Blogger hamed said...
Thanks. I want to summarize your argument proving that "theory is the same as physical world" and "uniqueness of mathematical structures" as follows; if I have misunderstood it, please guide me:
1- Mathematical structures fall into two subcategories:
Some of them, like infinite-dimensionality, are essential for the physical world, and it is not possible to have a world without these structures. These mathematical structures are very rich.
2- These mathematical structures pose very strong constraints on the other mathematical structures, so because these structures are unique, the other mathematical structures are unique too.
3- The existence of the other mathematical structures is deeply entangled with these structures.
4- Therefore the essentiality of these mathematical structures for the physical world leads to the essentiality of the other mathematical structures, under the constraints imposed on them.
At 5:29 PM, Anonymous Orwin said...
Hence Husserl's Essences as idealizations at infinity. And continuum mechanics tries to follow, but is now wanting a Natural Philosophy.
A fresh lead on the Kähler extremals problem:
To me, tori which parse as Peirce ternary relations allow the mind to grasp physical dynamics, and here to construct (cognitively) Einsteinian 4-realism. This is not phenomenology of consciousness.
Also: Einstein-Maxwell conformals; Calabi energies; necessary conditions!!
At 9:08 PM, Anonymous said...
To Hamed:
More or less like this. Note however that finite-D induced structures are very rich: one can probably imbed any finite-D geometry into an infinite-D symmetric space as a surface! Only infinite-D structures are highly unique. Infinite-D mathematical existence is an extremely tricky notion, as perturbative quantum field theorists have demonstrated with a huge amount of sweat and tears.
The fundamental structures are infinite-D and highly unique: the Kähler metric is the fundamental concept, and its existence relies on maximal symmetries realized as superconformal symmetries characteristic of 3-D light-like objects, the classical number fields, and the real and p-adic number fields.
They induce the remaining structures, in particular finite-D structures in the sense of "emulation". Mathematics does not construct n-dimensional spaces for us but only emulates them using formulas.
This is of course only a physicist's dream. Today physicists do not spend enough time on daydreaming ;-).
At 5:38 PM, Anonymous ◘Fractality◘ said...
Does ZEO imply that the Universe won't extinguish itself (heat death) at some point?
Living beings, civilizations, gods, are all dissipative systems - islands of negentropy in a sea of chaos.
The more complex a phenomenon, the more energy it must consume to maintain its identity, and thus it creates more disorder?
Does ZEO modify any of that?
At 8:36 PM, Anonymous said...
Dear Fractality:
Thank you for a good question.
A Universe suffering heat death is an outcome of theoretical thinking taken to the extreme, without taking into account the possibility that the basic assumptions behind the second law might not hold true at the limit of vanishing temperature. To me it is amusing that so many physicists take heat death so seriously.
The essential assumption is that quantum coherence in the scales considered does not play any role. At ultralow temperatures, however, quantum coherence becomes important even in standard physics: consider only superfluidity and superconductivity.
You mentioned metabolic energy. This is a good point. The amount of metabolic energy needed depends on the external temperature. The metabolic energy quantum in living matter is of the same order of magnitude as the physiological temperature. At very low temperatures the needed metabolic energy quantum would be very small.
TGD predicts a hierarchy of universal metabolic energy quanta identifiable as increments of zero point kinetic energies in the transfer between space-time sheets corresponding to different values of the p-adic prime p ≈ 2^k. There is evidence for this kind of quanta in the visible, UV, and IR as unidentified spectral lines usually believed to be molecular spectral lines. The ATP-ADP process would have the same mechanism as a core element.
The new physics elements are also present and bring something new into the picture.
*ZEO predicts an infinite hierarchy of CDs (serving as correlates of selves!). The larger the CD associated with a mental image, the longer the time scale of memory recall and planned action for that subself. Electron corresponds to .1 seconds, assignable to sensory mental images.
*The hierarchy of Planck constants allows macroscopic quantum phases. Even at higher temperatures macroscopic quantum phases become possible.
*Number theoretic entropy allows the generation of islands of negentropy. This modifies the view about the second law dramatically.
In standard physics there are only islands of small entropy. In the TGD Universe (according to the pessimist) living matter can actively pollute its environment in order to itself become more negentropic, as we indeed seem to do ;-)!
At 3:19 AM, Blogger Santeri Satama said...
To my limited understanding, and to give credit where credit is due, Bohm had a deep comprehension of this problem, and this basic problem indeed disappears in Bohm's interpretation, which rewrites the Schrödinger equation in terms of a quantum potential:
This implies another kind of causality, which depends only on shape and not on strength or size, and which Bohm calls 'active information', which you define as negentropic entanglement.
At 5:13 AM, Anonymous said...
To Santeri:
Here I must disagree. We really do observe state function reductions and stationary states. Reductions are inconsistent with the determinism of the Schrödinger equation in the standard ontology, and we must find an interpretation for the situation. Bohm's theory tries to keep the quantum world deterministic.
Occam's razor does not favor Bohm's theory (BT).
*BT is a hidden variable theory.
*Both classical orbits and evolutions of wave functions are assumed.
*BT brings in a hypothetical hydrodynamic flow from which some points are selected.
*BT also makes the ad hoc assumption of quantum non-equilibrium, stating that the Born rule does not hold true in quantum non-equilibrium. What is the probability density then? This remains unclear to me! Here presumably the unidentified hidden variables enter the game. It is argued that this assumption allows one to obtain wave function collapse from a theory which is deterministic. I cannot swallow this.
Bohm's theory also has serious mathematical problems.
*It is argued that the classical orbits are given by the guiding equation, so that they depend on the wave function. I do not see how this description could give rise to classical mechanics, where orbits do not depend on the wave function.
*The addition of particles does not affect the guiding wave, a very strange feature which I find very difficult to accept.
*A further problem is that the theory makes mathematical sense only in the wave-mechanics context. In QFT, in particular for fermions, the equations defining the hydrodynamical flow do not make sense. Already for bosonic QFT the analog of the Schrödinger equation makes sense only formally, due to the extreme nonlinearity.
*There are also serious problems with relativity.
Bohm's notion of active information has a counterpart in TGD as negentropic entanglement, but to me Bohm's wave mechanics looks like a very ugly attempt to do quantum theory without giving up the ontology of classical mechanics.
At 6:56 AM, Blogger Ulla said...
Comments by Sarfatti: #1 Our past and future cosmic horizons are computers. (He uses the Wheeler picture.) If A is the area of a horizon, it has A/4Lp^2 qubits, where Lp^2 = hG/c^3 ~ 10^-66 cm^2. The area of our future horizon is about 10^56 cm^2.
#2 The world of 3D matter sandwiched between our two observer-dependent 2D cosmic horizons is an EMANATION, so to speak, i.e. a hologram image. The horizons are the hardware of the anima mundi and the software is Hawking's "Mind of God" - see the last pages of "A Brief History of Time".
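Taking the quoted numbers at face value, a quick arithmetic check gives the implied qubit count:
\[ N \simeq \frac{A}{4 L_p^2} \approx \frac{10^{56}\,\mathrm{cm}^2}{4 \times 10^{-66}\,\mathrm{cm}^2} \approx 2.5 \times 10^{121}. \]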
This sounds quite TGDish?
At 2:02 PM, Blogger Santeri Satama said...
Matti, I believe everyone agrees that Bohm's interpretation is incomplete and that your theory - building also on Bohm's work and philosophical ideas, whether consciously or unconsciously - is more advanced. But calling BT deterministic does not do it justice.
"When several particles are treated by the causal interpretation then, in addition to the conventional classical potential that acts between them, there is a quantum potential which now depends on all the particles. Most important, this potential does not fall off with the distance between particles, so that even distant particles can be strongly connected." (SOC p. 99)
So in BT particles do affect the (universal) guiding wave, but holistically and non-locally, or in other words, if I understand correctly, on the level of an infinite-dimensional Hilbert space.
Bohm's main motive for preserving the ontology of classical mechanics was continuity and dialogue between theories and interpretations (especially Einstein and Bohr), in order to avoid the fragmentation and communication breakdowns which hinder scientific creativity. In that same spirit I have the following question: Bohm's notion of quantum potential and active information seems related not only to your negentropic entanglement (and negentropy maximization) but also deeply connected to the quantum mathematics of Hilbert spaces. This may be an extremely naive miscomprehension or a deep question, you decide, but isn't the whole notion and structure of quantum math, as Hilbert spaces inside points of Hilbert spaces, dependent on, or a manifestation of, negentropic entanglement?
Or in less abstract language, isn't the ultimate foundation of all abstract mathematical structures love?
At 8:45 PM, Anonymous said...
To Santeri:
TGD does not build on Bohm's work: neither consciously nor sub-consciously. TGD's starting points and philosophy are very different.
1) Bohm tries to keep physics deterministic: this is the basic idea of the whole approach, and it was more natural at the time when the theory was proposed. I admit that I simply do not understand how state function reduction is thought to result from the theory: the notions of quantum non-equilibrium and hidden variables are thought to make this possible, but these notions are hopelessly misty.
In TGD deterministic classical physics, in the sense of generalized Bohr orbits, becomes an exact part of quantum theory: this gives rise to the strong form of holography too. This does not however mean that classical time evolutions would become real as in Bohm's theory: one has quantum superpositions of classical Bohr orbits instead of a single classical orbit. Only quantum ontology, but with quantum-classical correspondence.
In Bohm's theory the feedback from classical to quantum is lacking, and this leads to nonsensical predictions. Sarfatti tried to get over this problem but did not get anywhere.
2) Bohm indeed tried to resolve the Einstein-Bohr debate by trying to keep both classical physics and quantum physics, but his attempt was a failure and led to a garden of branching paths so familiar to any working theoretician: a situation analogous to the landscape in string models.
*In TGD general coordinate invariance together with the symmetries of special relativity are the key symmetries of the theory; Bohm's theory starts from the Newtonian framework of wave mechanics. The difficulties are predictable.
3) Bohm hoped to understand state function reduction as a derived notion and tried to solve the Einstein-Bohr debate using a single time while keeping the deterministic world view. Bohm would have proposed something different if theories of consciousness had been in fashion in his time ;-).
*In TGD quantum jump, free will, and non-determinism are taken as facts, with no attempt to reduce them. Two times and two causalities: this is the solution of the Einstein-Bohr debate.
Amusingly, all this reflects the evolution of the time concept: Newtonian time, the time of special relativity, the time of general relativity, and finally the realization that there are two times and two causalities. Plus a huge number of other more or less weird proposals, such as no time at all!
4) The notion of active information is an attractive concept if one does not drown it in the mathematics of wave mechanics. In a similar manner Orch OR was drowned in ad hoc formulas. Theoreticians should avoid formulas as long as possible (;--). But we have the illusion that formulas make things more scientific.
*In TGD NMP plus negentropic entanglement realize the analog of active information. There is also an analogy with Orch OR. I have talked about conscious information, attention, the experience of understanding, a rule as a quantum superposition of its instances, the realization of sensory qualia, also love, etc... Many interpretations.
At 8:49 PM, Anonymous said...
To Santeri:
About love. I must say that as an inhabitant of the extremely cruel world of science (see some of the latest postings of Lubos, or visit the comment section of Tommaso's blog, to understand what I mean! ;-) I find it very difficult to say the word "love"; it seems to belong to some other spiritual plane ;-).
I would not reduce love to a lego piece ;-). Sounds too engineerish ;-). One cannot give a formula for love; mathematics cannot catch it: the core of K's teachings (and of all mystic teachings) is just this.
Quantum Math as such does not need negentropic entanglement, but it seems possible to realize this notion in terms of QM.
At 9:07 PM, Blogger hamed said...
This comment has been removed by the author.
At 9:08 PM, Anonymous ◘Fractality◘ said...
Are computers part of this negentropic pollution?
At 10:44 PM, Blogger hamed said...
Dear Matti,
Thanks for clarifications about Bohm theory.
some questions:
In M4*CP2 sometimes you speak about the dynamics of 3-surfaces and sometimes about the dynamics of space-time. I think that when you speak about the dynamics of 3-surfaces you are dealing with geometric time, and when you speak about the dynamics of space-time you are speaking about subjective time. Isn't that so?
NMP defines the dynamics of space-time, but the minimization of Kähler action (something like the minimal surface in string theory) defines the dynamics of the 3-surface. Isn't that so?
But you wrote "Kähler action would define the fundamental dynamics for space-time surfaces". This contradicts my understanding?
If NMP defines the dynamics of space-time, is it essential to talk about the dynamics of the space-time surface at the level of classical TGD? It leads to confusion for the listener.
The basic ideas of TGD are very controversial from the viewpoint of current physics, so I should be very attentive when I explain them. Therefore I should explain TGD step by step to others, and when I speak about one basic idea I should try, if possible, not to speak about the other basic ideas.
When I explain that space-time is a sub-manifold of M4*CP2, is it possible to continue to the geometrization of forces without explaining space-time sheets first? (I think it is not possible!) Space-time sheets seem like fiction to a physicist. To avoid this, I think I should explain why many space-time sheets are essential.
I think that I can speak about the geometrization of forces and space-time sheets without speaking about "TGD as a Poincare invariant theory of gravitation", and explain it after them. Isn't that so? Or is it essential to explain it first, alongside them?
Really, I am thinking about the best strategy for explaining TGD step by step without confusing the listener. That's hard ;)
At 11:23 PM, Blogger Ulla said...
Yes, Hamed, that's hard :) I stopped at that place myself, but now I think I can proceed with the three-body problem (of Poincare), bringing in unsolvable infinity (Life?). After that, the different spacetimes. Or...?
TGD is a knot^10 :D
Love as math seems very odd to me :) Love is about entanglement and continuum, math about discreteness, I think (except this quantum math?).
Maybe some drug would bridge the gap (bring in more continuous coherence)? Love as White Light? The tiny little space inside, like a ZEO center (wormhole?) connecting to infinity? It is not dependent on any other, like Lubos or Tommaso; it is about Self and choices.
At 4:22 AM, Blogger Santeri Satama said...
Matti, in regard to the relation of negentropic entanglement and QM, to quote your own words:
"Negentropic entanglement might serve as a correlate for emotions like love and experience of understanding. The reduction of ordinary entanglement entropy to random final state implies second law at the level of ensemble. For the generation of negentropic entanglement the outcome of the reduction is not random: the prediction is that second law is not a universal truth holding true in all scales. Since number theoretic entropies are natural in the intersection of real and p-adic worlds, this suggests that life resides in this intersection. The existence effectively bound states with no binding energy might have important implications for the understanding the stability of basic bio-polymers and the key aspects of metabolism. A natural assumption is that self experiences expansion of consciousness as it entangles in this manner. Quite generally, an infinite self hierarchy with the entire Universe at the top is predicted."
"This leads to a vision about the role of bound state entanglement and negentropic entanglement in the generation of sensory qualia. Negentropic entanglement leads to a vision about cognition. Negentropically entangled state consisting of a superposition of pairs can be interpreted as a conscious abstraction or rule: negentropically entangled Schrödinger cat knows that it is better to keep the bottle closed. A connection with fuzzy qubits and quantum groups with negentropic entanglement is highly suggestive. The implications are highly non-trivial also for quantum computation, which allows three different variants in TGD context. The negentropic variant would correspond to conscious quantum computation like process."
"Maybe it would be useful to talk about consciousness only when one has negentropic entanglement: positive information, knowledge. Otherwise awareness.
I think that emotions are very high level consciousness unlike often thought. They provide summaries about the hole and it would be natural to assign the to negentropic fusions of a large number of mental images giving rise to stereo consciousness."
No doubt the creative consciouss experience of fusion of various mathematical ideas and forms into QM involved also deep intellectual and emotional pleasure. So to say that "QM as such does not need NE" or active information sounds like removing your self and universal self-consciousness of creative gnothi seauton from the process.
With Brahman-Atman identity as basis it should be quite obvious that QM is the number theoretical realization of the very old and well known metaphor of Indra's net, and the path you took to your realization to overcome the limitations of the set theory involved the consciouss and emotional aspects of negentropic entanglement you describe in the quotes above. QM as n:th degree of order that allows also more detailed description of NE does not mean that NE - the very process of becoming conscious of QM - could and should be reduced to QM alone. Rather, there is negentropic entanglement between the pair of QM and NE itself. That is, if we are supposed to take you and your work seriously (enough) ;).
At 5:59 AM, Anonymous said...
To Santeri:
I want just to emphasize that there are several levels of existence, and most problems result from erroneously identifying these levels.
The level giving rise to conscious experience does not reduce to mathematics. The evolution of a state of consciousness does not correspond to a solution of field equations. This is the whole point, and I want to make it absolutely clear, since it provides the solution to so many paradoxes.
In hidden variable theories one can argue that there are physical variables and those related to consciousness, and that the non-determinism of volition is apparent, since the dynamics of the physical variables is that of a shadow and only looks non-deterministic. This was the dream of Sarfatti.
Probably also the vision of Bohm was that non-deterministic state function reduction could be understood as the dynamics of a shadow. The selection rules of state function reduction make the fulfillment of this dream highly implausible.
At 7:28 AM, Anonymous said...
To Fractality: If we take the pessimistic option seriously, computers could be seen as tools of this pollution.
At 7:38 AM, Blogger Santeri Satama said...
Dynamics of "Shadow" in the Jungian sense?
The word "existence" comes from the Latin word existere meaning "to appear", "to arise", "to become", or "to be", but literally, it means "to stand out" (ex- being the Latin prefix for "out" added to the Latin verb stare, meaning "to stand").
So etymologically the word refers to a process of actualizing (state function reduction) rather than to a potential or dynamis to actualize-exist; in Bohm's language, explicate and implicate orders. Western metaphysics has been plagued by the idea of 'substance'/'hypokeimenon', whether defined as particles, quantum fields or vacuums, and has considered these substance-stuffs "True Existence" because they are considered non-mutable and time-invariant. Substance that can be defined, controlled and manipulated.
Then there is the mystery of Platonia-substance, the substantive form of possible forms, and its dialectical relation with the "no-thingness/vacuum" of ZEO. And despite your denial of philosophical connections to BT, I'm still under the impression that, as in BT, the philosophical starting point of your theory is also what Whitehead calls organic realism instead of the substance metaphysics of materialism.
It's very easy to get drawn into the overly analytic and fragmenting metaphysics of the English language and scholastic philosophy, to analytically define more and more levels of existence and substance, and to get entangled in the fighting mode of the science vs. philosophy, theory vs. theory etc. debates that emanate and radiate e.g. from the Krauss controversy.
But we can both also speak Finnish and share and comprehend the unity of a non-analytic/synthetic and etymologically more sound phenomenological existence in expressions like: havainnoidutaan, ilmennytään, todellistutaan, ollaan. Tunnetaan, nähdään ja kuullaan. (Impersonal forms, roughly: "there is perceiving, manifesting, actualizing, being; feeling, seeing and hearing.")
At 8:30 AM, Anonymous said...
To Hamed:
I thought I had already answered, but noticed that I was wrong. I attach my answers between the lines.
[Hamed] In M4*CP2 sometimes you speak about the dynamics of 3-surfaces and sometimes about the dynamics of space-time. I think that when you speak about the dynamics of 3-surfaces you are dealing with geometric time, and when you speak about the dynamics of space-time you are speaking about subjective time. Isn't that so? NMP defines the dynamics of space-time, but the minimization of Kähler action (something like the minimal surface in string theory) defines the dynamics of the 3-surface. Isn't that so? But you wrote "Kähler action would define the fundamental dynamics for space-time surfaces". This contradicts my understanding?
[MP] Sorry, this is just loose language on my side. The strictly correct manner of speaking is to assign the dynamics to 3-surfaces. Space-time surfaces are "orbits" of 3-surfaces. I also often talk about space-time sheets when I should actually speak about 3-surfaces.
[MP] NMP *does not define* the dynamics of space-time! ;-) It defines the dynamics of consciousness and tells that the information gain in quantum jumps is maximized. NMP is mathematically analogous to the second law (and implies it for ensembles) in that it tells only the overall direction of the dynamics but does not fix the time evolution completely, as action principles do. Kähler action is the variational principle at the space-time level: preferred extremals.
To be continued....
At 8:32 AM, Anonymous said...
To Hamed:
[MP] Space-time sheets are just sub-manifolds which are *representable as graphs of maps from M^4 to CP_2*: the QFT-like limit of TGD! There are also other kinds of sub-manifolds: string-like objects with a 2-D M^4 projection, and CP_2 type extremals with a 1-D light-like projection. These are not called space-time sheets.
You can take a large number of space-time sheets representing asymptotic regions for various subsystems. They are small deformations of a canonically imbedded M^4, extremely near to each other. They touch each other here and there. This is just the many-sheeted space-time. The replacement of the superposition of classical fields with the superposition of their effects forces many-sheeted space-time in TGD. A particle touches several sheets and experiences the corresponding forces. Nothing ad hoc!! Sorry for repeating this idea: it is so beautiful! ;-)
*The multi-sheeted covering of the imbedding space associated with the hierarchy of Planck constants is something different from many-sheeted space-time. I have tried to make this explicit as often as possible. Here one has an effective covering of the imbedding space inducing a multi-sheeted structure for the space-time surface. There is a good argument that also this notion reduces to the basic dynamics of Kähler action. Normal derivatives of imbedding space coordinates as many-valued functions of canonical momentum densities lead to an effective covering: this is a basic implication of the extreme non-linearity of Kähler action, which in turn forced the geometrization of quantum physics in terms of WCW geometry.
*These notions are not anything new and ad hoc but follow naturally from the basic assumptions - even the (effective) hierarchy of Planck constants, if I am correct. The only really new and therefore controversial element is sub-manifold geometry as a manner of realizing Einstein's original program.
At 8:33 AM, Anonymous said...
To Hamed:
[MP] You cannot! ;-) The fundamental idea of the TGD approach is to solve the energy problem of general relativity in terms of sub-manifold gravity. This also leads to the geometrization of standard model quantum numbers. ZEO allows consistency with the fact that energy is apparently not conserved in cosmology: conservation laws become a length scale dependent notion, which is not actually anything new for the pragmatic physicists who have talked about the renormalization of coupling constants since the times of Dirac.
[MP] Good luck! You need it;-)!
At 8:54 AM, Anonymous said...
Dynamics of a shadow in the geometric sense. The shadow behaves apparently non-deterministically, since the variables in the orthogonal directions serve as hidden variables. Consider mechanics in an n-dimensional space and restrict the consideration to a k<n-dimensional sub-space: the shadow. The dynamics of the n-k hidden degrees of freedom affects also the k-dimensional dynamics, but since they are hidden variables you see this as non-determinism.
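A minimal way to write this out (my formulation of the projection idea, not part of the original comment): split the coordinates as \( (x,y) \in \mathbb{R}^k \times \mathbb{R}^{n-k} \) with deterministic dynamics
\[ \dot{x} = f(x,y), \qquad \dot{y} = g(x,y). \]
An observer who sees only x finds that the same observed state can evolve in different ways depending on the hidden y: the projected equation \( \dot{x} = f(x,y) \) is not a closed equation in x alone, so the shadow dynamics looks non-deterministic even though the full flow is deterministic.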
Existence in the sense you refer to would be subjective existence. Existence as a mathematical object is a different kind of existence, and would be existence according to the materialist.
I call my philosophy tripartistics, as opposed to the materialistic view of standard physics and the dualistic view of Bohm (non-hidden and hidden variables, or classical particles plus guiding waves). The completely new element is subjective existence: quantum jumps. About Whitehead I cannot say.
I would speak about the vacuum state, not nothingness, which to an academic philosopher means something different. In positive energy ontology the vacuum state is the ground state in which energy, momentum and other quantum numbers vanish. It would be the fermionic Fock vacuum. In ZEO all states satisfy this condition. One could define non-vacuum states as those for which the positive energy part (and thus also the negative energy part) has non-vanishing quantum numbers.
Physics needs philosophy, but physicists must build it themselves. Going to the philosophy library does not help: the statues of academic philosophy did not know anything about modern physics. Finnish language indeed expresses more naturally what mystics also talk about. The intellectual, linguistic manner of seeing the world is painting pictures using words, and the picture is never the reality. One should take it as art rather than warfare.
"Nothingness" is also a problem in set theory: one constructs the natural numbers by starting from the empty set, but it is obvious that the empty set has no operational meaning. One also ends up with the well-known problems with infinite sets, Russell's antinomy for instance. Could a more natural definition of natural numbers be as products of primes, just as elementary particles are building blocks of physical states? In this approach the notion of infinity would be number theoretical (a divisibility concept) and based on infinite primes.
At 9:10 AM, Blogger Ulla said...
I would not want to draw in Jungian Shadow (M-matrix?) into this, and I must confess I don't know Whiteheads organic realism. I read:
Whitehead firmly believed that the sharp division between nature and mind, established by Descartes, had "poisoned all subsequent philosophy", and held that in reality "we cannot determine with what molecules the brain begins and the rest of the body ends". He deemed human experience to be "an act of self-origination including the whole of nature, limited to the perspective of a focal region, located within the body, but not necessarily persisting in any fixed coordination within a definite part of the brain". Upon this concept of human experience, Whitehead founded his new metaphysical "philosophy of the organism", his cosmology, his defense of speculative reason, his ideas on the process of nature and his rational approach to God.
In his Philosophy of Organism or Organic Realism, now usually known as Process Philosophy, he posited subjective forms to complement Plato's eternal objects (or Forms). The theory identified metaphysical reality with change and dynamism, and held that change is not illusory or purely accidental to the substance, but rather the very cornerstone of reality or Being. His view of God, as the source of the universe, was therefore as growing and changing, just as the entire universe is in constant flow and change (essentially a kind of Theism, although his God differs essentially from the revealed God of Abrahamic religion). Later process philosophers, including Charles Hartshorne (1897-2000), John B. Cobb Jr. (1925-) and David Ray Griffin (1939-), developed the theory further into a full-blown Process Theology. Whitehead's rejection of mind-body Dualism was similar to elements in Buddhism, although many Christians and Jews have found Process Theology a fruitful way of understanding God and the universe.
Whitehead believed that "there are no whole truths; all truths are half-truths". His political views sometimes appear to be very close to Libertarianism, although he never used the label, and many Whitehead scholars have read his work as providing a philosophical foundation for the Social Liberalism of the New Liberals of the first half of the 20th century.
But there are also names as Kant, Popper, Hegel etc, philosophers mentioned, so...
There is no mind-matter duality in this ontology, because "mind" is simply seen as an abstraction from an occasion of experience which has also a material aspect, which is of course simply another abstraction from it; thus the mental aspect and the material aspect are abstractions from one and the same concrete occasion of experience.
An occasion of experience consists of a process of prehending other occasions of experience, reacting to them. This is the process in process philosophy.
Such process is never deterministic. Consequently, free will is essential and inherent to the universe.
At 10:57 AM, Blogger Santeri Satama said...
From a philosopher's answer to Krauss:
"I fully agree with you about the significance and educational value of methodology as well as history and philosophy of science. So many people today — and even professional scientists — seem to me like someone who has seen thousands of trees but has never seen a forest. A knowledge of the historical and philosophical background gives that kind of independence from prejudices of his generation from which most scientists are suffering. This independence created by philosophical insight is — in my opinion — the mark of distinction between a mere artisan or specialist and a real seeker after truth." (Albert Einstein)
The set of basic assumptions on which theories are built is often called metaphysics, which is a part of philosophy just like logic etc., but more importantly philosophy is about a philosophical attitude, not about names and a set of books in a library. The philosophical attitude towards scientific theory building - at least in BT and PT - is to consider it a form of playful and creative art, and they share the criticism of the anti-philosophical attitude of many materialists who make the set of materialistic metaphysics into an authoritarian dogma to be followed religiously and defended by political bullying and warfare.
When theory building runs into trouble, e.g. the search for an axiomatic mathematical foundation of QFT, a dialogue with philosophers and/or a visit to the library to pick up a copy of Gödel's proof could and would help to prevent further banging of the head against the wall. ZEO or not, the TGD process is not happening in an intellectual and social vacuum where the narratives of theoretical physics reinvent philosophy or metaphysics from nothing. Notions of intellectual property and patent rights and the tribalism of academic fields are very poor philosophy of the Ego in a world where novel ideas appear synchronistically. And again, there are archetypal ideas such as the Brahman-Atman identity and Indra's Net being constantly reinvented and reformulated in various languages, now including QM. Those are ancient philosophical ideas that TGD is based upon, but in what sense must "physicists build it themselves"?
At 8:02 PM, Anonymous said...
To Ulla:
Thank you for the Whitehead;-). I cannot invent immediate disagreement with Whitehead. Accepting subjective existence as as a genuine level of existence and giving up dualism is common to us.
At 8:24 PM, Anonymous said...
To Santeri:
Philosophical attitude includes the historical view: there is no point in reinventing the wheel. Philosophical assumptions are important, but only when the theory is relatively well-formulated. One cannot start by saying: OK, I will construct a dualistic physics. The most important aspect of the philosophical attitude is a genuine passion to answer the difficult age-old questions related to time, free will, what "mind stuff" could be, ... There are also more concrete questions: what is energy, what is mass, what is the origin of quantum numbers, what does the mysterious state function reduction mean, ... Most of these questions are taboos nowadays: standard model --> GUT --> SUSY --> string models --> M-theory, and the conclusion that those still asking are idiots, as one particular besserwisser whom you certainly know would formulate it! ;-)
The trouble with physics is that most people doing physics have become mere pragmatic appliers of methods. Taken to the extreme, this leads to the attitude that the basic goal of particle physics is to determine experimentally to what point of the SUSY parameter space physics corresponds. This is insanity, but a natural outcome of the attitude "science as a mere methodology".
Maybe physicists must rediscover the ancient ideas themselves from their own starting points. Here an open mind is enough.
At 1:57 AM, Blogger Santeri Satama said...
The Krauss debate brought up also the Aharonov-Bohm effect:
David Albert: "Professor Krauss' argument for the 'reality' of virtual particles, and for the instability of the quantum-mechanical vacuum, and for the larger and more imposing proposition that 'nothing is something', hinges on the claim that "the uncertainty in the measured energy of a system is inversely proportional to the length of time over which you observe it". And it happens that we have known, for more than half a century now, from a beautiful and seminal and widely cited and justly famous paper by Yakir Aharonov and David Bohm, that this claim is false."
What's the TGD interpretation of the Aharonov-Bohm effect and its metaphysical implications? E.g. what kind of causality is in question?
At 5:51 AM, Anonymous said...
Krauss' argument "nothing is something" is to me a game with badly chosen words. I would not call the quantum physical vacuum state "nothing". In ZEO any zero energy state can be obtained from the vacuum, so that the vacuum has an infinite potential for producing different structures.
The Aharonov-Bohm effect is a purely topological effect which does not depend on the theory. A particle going around a closed loop in a vanishing magnetic field can experience a non-trivial effect resulting from a so-called non-integrable phase factor. Any theory involving gauge fields predicts this effect. I would not call it causality. The effect represents topological physics, and topological QFTs made this a branch of the physics industry.
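For reference, the standard textbook form of the effect (general quantum mechanics, not specific to TGD): a charge q transported around a closed loop \( \gamma \) picks up the non-integrable phase
\[ \Delta\varphi = \frac{q}{\hbar} \oint_{\gamma} \mathbf{A} \cdot d\mathbf{l} = \frac{q\,\Phi_B}{\hbar}, \]
where \( \Phi_B \) is the magnetic flux enclosed by the loop. The phase depends only on the enclosed flux, not on the field along the path itself, which is exactly why the effect is topological.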
At 12:32 PM, Blogger Santeri Satama said...
Due to my mathematical handicap I'm forced to question the meanings of these words: if a "particle" is supposed to be a special case of a "field", how can the vector potential affect or inform the particle when the field vanishes?
The jargon of theoretical physics (shorthand for deep and difficult mathematical concepts) often brings to mind the times when priests preached in Latin to congregations that understood nothing of Latin. A game of badly chosen words, about which philosophers of language, politics, hermeneutics and science have various and often critical views.
At 8:27 PM, Anonymous said...
To Santeri:
Language is a real problem. I try to explain why it is a problem, knowing that here too I will encounter the language problem.
The field-particle correspondence is difficult conceptually also for physicists and involves a lot of misunderstandings. Basically one has two different abstraction levels and their correspondence.
A particle as a point of 3-space, with dynamical evolution as a particle orbit, defines the classical Newtonian ontology. This is simple. Quantum states of the particle as wave functions in the 3-D space E^3 of particle positions define the quantum ontology in the Newtonian framework.
This brings in first quantization as abstraction (statements about statements in logic): one can have only a quantum-classical correspondence as a many-to-one map. The space of wave functions is infinite-D; the configuration space is 3-D. Particular quantal particle states (say momentum eigenstates) have direct classical counterparts. The choice of this correspondence is of course not unique.
Indeed, one could call completely localized wave functions particles (well-defined position at one particular moment). One could also call momentum eigenstates, which are completely delocalized wave functions, particles. Wave-particle duality relates these two alternative quantum-classical correspondences.
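In standard wave-mechanics notation (added as a reference point), the two extreme choices are
\[ \psi_{x_0}(x) = \delta(x - x_0) \quad \text{(localized "particle")}, \qquad \psi_{p_0}(x) = \frac{1}{\sqrt{2\pi\hbar}}\, e^{i p_0 x/\hbar} \quad \text{(momentum eigenstate)}, \]
related to each other by the Fourier transform: the same state space supports both descriptions, and wave-particle duality is the freedom to use either basis.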
Second quantization brings in a further abstraction level and a layer of confusion, unless one is fully aware of the mathematics involved: basically a hierarchy of abstractions.
Consider photons. The space of wave functions for a photon in the 3-D space E^3 is replaced with the space of wave functions in the space of classical gauge potentials in E^3, which is already infinite-D. A Fock state with a single photon, the analog of a harmonic oscillator wave function in the infinite-D space of gauge potentials, would be the counterpart of the photon as a classical free particle.
If there are non-contractible loops (non-trivial first homotopy group) or if the scalar is many-valued, the Aharonov-Bohm effect is possible. For instance, if the scalar depends on the angle around the z-axis in such a way that it changes by a non-integer multiple of 2π over a full circle, one has the Aharonov-Bohm effect. In TGD this kind of 3-surfaces can be considered.
At 8:40 PM, Anonymous said...
To Santeri:
Concerning your question about vector potential.
By gauge invariance the gauge field is representable as the "curl" of the vector potential. This symmetry means that only the two polarizations orthogonal to the photon's momentum remain in the spectrum: the photon is massless. The vector potential can be non-vanishing even if the field vanishes: it is enough that it is the gradient of a scalar.
If the space has no non-contractible loops and the scalar is a single-valued function, no Aharonov-Bohm effect results.
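The standard illustration (textbook material, added for concreteness): outside an infinite solenoid of radius R carrying flux \( \Phi \), the field vanishes but the potential does not:
\[ \mathbf{A} = \frac{\Phi}{2\pi r}\, \hat{\boldsymbol{\varphi}}, \qquad \mathbf{B} = \nabla \times \mathbf{A} = 0 \ \ (r > R), \qquad \oint_{\gamma} \mathbf{A} \cdot d\mathbf{l} = \Phi \neq 0. \]
Locally \( \mathbf{A} = \nabla\chi \) with \( \chi = \Phi\varphi/2\pi \), but \( \chi \) is many-valued precisely because a loop around the solenoid is non-contractible: both conditions mentioned above are realized at once.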
Bohm, in his mixture of ontologies corresponding to two different abstraction levels (particle state as a position in E^3 and particle state as a wave function in E^3), would perhaps say that the vector potential informs or affects the particle. One cannot say this in the standard quantum ontology. It does not make sense.
After all this explaining I must confess that the role of the vector potential in particle description is a QFT-based notion, and it is now strongly challenged in the twistor approach, in which the notion is given up ;-)!
Also in TGD one describes particle states as wave functionals in the space of 3-surfaces (by holography, effectively partonic 2-surfaces at the boundaries of CD), and there is a strong connection to the twistor approach.
Things become conceptually clear once one accepts the infinite-D mathematics of the "world of classical worlds". In TGD the basic objects are 3-surfaces; in the original string model (rather than the horrible conceptual fuss of M-theory) they are 1-surfaces: strings.
At 11:23 PM, Anonymous Orwin said...
Here's a crisp view of Krishnamurti as the philosopher against all dogmatism:
Would it help to explain tripartistics? Modern philosophy has no language for pluralism, and folk culture now reclaims "body, mind and spirit."
At 12:26 PM, Blogger Ulla said...
One thing that we will have to let go of is this kind of addiction to simplistic, primitive reductive materialism because there’s really no way that I can see a reductive materialist model coming remotely in the right ballpark to explain what we really know about consciousness now.
Coming from a neurosurgeon who, before my coma, thought I was quite certain how the brain and the mind interacted and it was clear to me that there were many things I could do or see done on my patients and it would eliminate consciousness. It was very clear in that realm that the brain gives you consciousness and everything else and when the brain dies there goes consciousness, soul, mind—it’s all gone. And it was clear.
Now, having been through my coma, I can tell you that’s exactly wrong and that in fact the mind and consciousness are independent of the brain. It’s very hard to explain that, certainly if you’re limiting yourself to that reductive materialist view.
Listen or read it.
At 3:58 PM, Anonymous Orwin said...
The herb feverfew (Tanacetum parthenium) (used for migraine, headaches and fevers) was planted outside a house to clean the air, and I find that even as a modern supplement it works best in just that way - or on the drying plate to "earth" another herb. This is like external lights acting against EM pollution by instantiating boundary conditions. These are preferred (chosen, selected) extremals! But preferred for what, by what?
At 11:31 PM, Anonymous Orwin said...
Air pollution and heart disease: Evidence: hard; link: unknown.
But it doesn't seem you can translate the language of tradition into metrics: e.g. Mind = Liouville/Toda. It's more a matter of interfaces or mediations.
At 11:45 PM, Blogger Ulla said...
The point is that the brain is just a tool for decoherence (creating illusions). Waking, sleep and death diminish the use of the brain (the creation of subsystems) toward zero, but there are other ways to measure things than with the body. It seems the body is just the end of a wormhole (BEC?) and it is tripartistic? Note that emotions are also divided. (This is quite ironic: when Matti and I started our conversations I talked of the importance of emotions; now he has realized that, and I start to talk of the importance of emotions as constraints, as pain :) A human can be totally distorted by pain, incapable of sleeping normally, and when she dies all strain is gone. A psychiatric patient can be totally insensitive to pain, no feelings whatsoever. Why? Do they use the body differently?)
Herbs? Do you do herbs? That I would want to discuss, but not here, I guess? Flavonoids are interesting. They are called good, but are they really? Why not? Why do we have an antioxidant paradox? etc.
There is no theory of biology today, just guesses. Due to a wrong basis?
At 4:41 AM, Blogger Santeri Satama said...
Matti, thanks for the clarification, which again shows the importance of the philosophy of communication - and the importance of admitting confusion and emotional frustration with linguistic communication breakdowns.
It's helpful to recollect that comprehension is literally a collective enterprise ('grasping together'), to keep in mind the simile about the blind men and the elephant, and to remember that theoretical physics too is one limited point of view on the whole of being and, to be meaningful, needs to be able to communicate and share its point of view with other points of view.
At 6:08 AM, Anonymous said...
To Santeri:
Communication is difficult even when the communicators are willing to communicate. Sad to say that too often this is not the case.
To Ulla:
A neuroscientist wakes up from what is regarded as a completely unconscious state, says that she was fully conscious, and writes a book about it. She is not taken seriously at all! It does not fit the dogma! This is only one example of the kind of idiots scientists believing in dogma become.
Douglas Adams has in one of his books a hilarious piece of satire about skeptic scientists. Some communications (or attempts to communicate) with Finnish skeptics have taught me that they produce the hilarious satire themselves.
At 9:41 AM, Blogger Ulla said...
Yes, I have read some 'highs'. It always makes my stomach want to turn inside out.
I think you are barking up the wrong tree. Matti is one of the very few scientists who really tries to communicate, but unfortunately his theory is so hard to grasp because it is complex (knot^10). As Hamed has also noticed, the different parts are so entangled with each other that there is no beginning nor end. It resembles traditional Chinese medicine very much in that respect. And notice that not even experts know enough to fully understand it, because they are experts only in some area. Today there are very few humans who know so much that they could grasp it. And making it simpler is not an easy task either. The math is a hard nut.
I have many times thought that I understood, and then noticed I did not. I have checked and cross-tested with my limited knowledge of physics (which makes me grin at myself a little), and seldom have there been errors, only things not yet known, which isn't Matti's fault.
I share Matti's doubts about philosophy. It cannot guide, but it can be used as a model afterwards. Philosophy has the same fault as math: everything is possible. Sorry to say, if you are a philosopher.
This video was by a male neuroscientist, and maybe then it is more easily accepted. Jill Bolte Taylor was female.
At 7:59 PM, Blogger hamed said...
Dear Matti,
Thanks so much.
About your answers:
"The strictly correct manner of speaking is to assign the dynamics to 3-surfaces" and
I regarded dynamics and evolution as having the same meaning. TGD has two kinds of evolution: one is informational evolution, related to consciousness, and the other is geometric time evolution.
Before this I thought that the first one is the evolution of space-time and the other is the evolution of the 3-surface, but now I have learned from the answers that each of them is an evolution of the 3-surface. Is that correct? In the first one, at each quantum jump the 3-surface is replaced with another 3-surface. Does a transition from a p-adic 3-surface to a real 3-surface occur, rather than from p-adic space-time to real space-time? Although there is no practical difference!
I deduced that in the sequence of quantum jumps from one 3-surface to another, the direction of evolution is not made unique by NMP, and there are several 3-surfaces at the end satisfying the condition of NMP (something like degeneracy). Then what cause makes only one of them occur? Is it pure chance?! But it is obvious that when a person wills to do something, if all external factors are appropriate, he can do it, and do exactly that work, without any chance governing his behavior. The illusion of "I" doesn't help you with the answer ;)
At 9:58 PM, Anonymous said...
Dear Hamed,
As with any 4-D classical action principle, one can see Kähler action as defining a dynamics for some 3-D configurations: usually they are field configurations, in TGD they are geometric objects. The preferred extremal property selects preferred orbits as analogs of Bohr orbits in the TGD Universe: this is what distinguishes TGD from field theories based on path integrals (where all orbits are allowed and the "classical" ones correspond to the stationary phase and thus to extremals of the action). One can assign to a collection of 3-surfaces at the second end of CD a space-time surface as a preferred extremal - this is holography. The holography has motivated my somewhat fuzzy use of "space-time sheets" and "3-surfaces": if holography were globally true, then the 3-surface at the end of CD would dictate the space-time sheet uniquely. This is not the case, by the failure of strict determinism due to the vacuum degeneracy of Kähler action. (A good exercise would be to look through the Kähler action and see what the vacuum extremals are!) I apologize!
[Hamed] At “NMP is mathematically analogous to second law (and implies it for ensembles) in that it tells only overall direction of dynamics but does not fix time evolution completely as action principles”
a) An important point: quantum states are quantum superpositions of 3-surfaces!!!! To each of these 3-surfaces in the superposition one can assign a space-time sheet satisfying the field equations, modulo the non-uniqueness due to the failure of strict determinism. In this sense and only in this sense is classical physics an exact part of quantum theory!! Bohm believed differently: he would have said that it is possible to speak both about quantum superpositions of 3-surfaces/associated space-time surfaces and about a single space-time surface. You have got the impression that I share this belief of Bohm! I definitely do not!!
One can speak about single space-time sheet only in stationary phase approximation for vacuum functional which is exponent of Kahler function (Kahler action from Euclidian regions) and imaginary analog of Morse function (Kahler action from Minkowskian regions). In this sense TGD and quantum field theories are analogous. Path integral is however replaced by functional integral with phase factor (hybrid of path integral and functional integral) and one can hope that it is therefore mathematically well-defined.
b) What NMP says is about what happens in state function reduction cascade for given subsystem-complement pair. It is formulated solely in terms of entanglement entropy for quantum jumps. It defines dynamics for subjective existence. Kahler action defines dynamics for geometric existence.
c) Quantum classical correspondence suggests that there could nevertheless be some correlate for NMP and its outcome, the second law, at the space-time level. NMP and the second law could correspond to the non-invertibility of the dynamics of Kahler action and to the arrow of time for zero energy states, meaning that they are state-function reduced at either end of CD. The breakdown of strict determinism at some points or sub-manifolds of the space-time sheet is analogous to what happens in hydrodynamics in a flow which becomes supersonic. The hydrodynamical equations bifurcate and the second law is used to select the bifurcating branch uniquely. Something like this might occur now.
At 10:33 PM, Anonymous ◘Fractality◘ said...
You've spoken about God of the Old Testament as the manifestation of collective consciousness.
Monotheistic dualism that separates God from everything else presents an almost whimsical picture of a God who is a supreme egoist creating the universe for the express purpose of being worshipped – rewarding those who do it properly (according to his rules) and punishing those who don’t. That this is reminiscent of all authoritarian power is no accident, for authoritarian secular power uses an authoritarian religion with its sacred symbolisms and its morality based on duty and sacrifice to justify itself. The question of whether God created the authoritarian form (as fundamentalists believe), or whether the form projected a God to justify itself, is not trivial.
Is God our mistake, or are we God's mistake?
At 10:48 PM, Blogger Ulla said...
I see the answer to that as we have created a picture of God as something outside the creation. When he is inherent in everything, maybe as the omnipotent vacuum problem?
What God is in reality is very different from our limited view of matters. For instance, in NDEs people travel through a 'tunnel' (wormhole?) out of this void (GR) into another kind of existence (mirror world?), and this world we cannot describe in words. So we have created the metaphor God? Kind of a talisman. But WE as humans direct our own evolution, and we and only we have the responsibility for it. There are good and less good choices we can make. In this way we ARE God (his tools).
There is a beautiful saying: to live in the hands of God. Think of what it means in reality!
At 12:34 PM, Blogger Ulla said...
I should have been silent :)
At 1:37 AM, Blogger hamed said...
Dear Matti,
so thanks.
I think some background is needed to understand your answer about non-determinism correctly, and I must wait :(!
I listed some questions, and when I think about them I deduce that each of them relates in some manner to understanding M4 × CP2! It is a very basic building block of TGD that is not avoidable :).
I found an article on “STATUS OF SUPERSTRING AND M-THEORY” and started to read it, because I think I need to learn the basics of string theories at an introductory level. I'd like to read it from the viewpoint of TGD ;)
At your answer to Santeri, you wrote that first quantization as abstraction is like statements about statements in logic. Why? And also second quantization.
“basically hierarchy of abstractions”!!! Then what is third quantization?!
At 5:44 AM, Anonymous said...
Dear Hamed,
TGD can also be seen as a generalization of the superstring model, so getting some background in superstrings certainly helps. Also an article about the old-fashioned hadronic string model would help.
By the way, string models started from a purely geometric formulation with string sheets identified as minimal surfaces. Then the Polyakov formulation emerged, and one introduced the metric on string world sheets as an independent dynamical variable which for extremals was equal to the induced metric. This allowed the development of a calculational formalism but led astray. Eventually one made the geometry of 10-D space dynamical as well, and one had double gravity instead of a reduction of gravity to the geometrodynamics of string world sheets. Pragmatism is not always good in theoretical physics!
First quantization means replacing configuration space of particle (Euclidian 3-space) with wave functions in this space. From space to function space. In case of Boolean algebra this means transition to the Boolean algebra of Boolean statements about Boolean statements. Reflective level of Boolean consciousness.
Second quantization means that space of wave functions is replaced with space of functions in the space of wave functions. Another abstraction.
The hierarchy of infinite primes and many-sheeted space-time lead to the proposal that this hierarchy of quantizations continues. Hadrons, atoms, etc., even galaxies are in well-defined mathematical sense elementary particles at some level of this hierarchy.
At 1:21 AM, Blogger Ulla said...
The theory of time reversal and duality of Markov processes was applied to non-relativistic quantum particles in Chapter III. In this chapter we apply the stochastic theory to relativistic quantum particles. We will consider the relativistic Schrödinger equation of a spinless particle in an electromagnetic field. It will be shown that the relativistic quantum particles no longer have continuous paths but move only through pure jumps in contrast to the continuous movement of non-relativistic quantum particles.
Krauss treated only relativistic aspects?
At 2:10 AM, Blogger Ulla said...
Schrödinger Equations and Diffusion Theory
By Masao Nagasawa
At 7:08 AM, Blogger Dov Henis said...
2012: Restructure Science Plans, Policies, Budgets
Eppur Si Muove, Higgs Particle YOK
Regardless Of Whatever Whoever
Regardless Of Whatever Is Said By Whoever Says It -
Higgs Particle YOK.
S Hawking is simply wrong in accepting it. Obviously wrong.
Everyone who accepts the story of the Higgs particle is simply wrong.
Plain commonsense.
Singularity and the Big Bang MUST have happened with the smallest base universe particles, the gravitons, that MUST be both energy and mass, even if they are inert mass just one smallest fraction of a second at singularity. All mass formats evolve from gravitons that convert into energy i.e. extricate from their gravitons clusters into mass formats in motion, energy. And they all end up again as mass in a repeat singularity.
Universe expansion and re-contraction proceed simultaneously.
Dov Henis (comments from 22nd century)
Refresh Present SCIENCE Comprehensions And Restructure Science Plans, Policies And Budgets
Who Suppresses Science Creativity? Does Academia Suppress Creativity?
Again and again, ad absurdum:
Since the 1920s SCIENCE is suppressed by a Technology Culture, tightly supervised by a religious old-style trade union, the AAAS…
Liberate Your Mind From Concepts Dictated By The Religious Trade-Union AAAS:
USA Science? Re-Comprehend Origins And Essence
* Higgs Particle? Dark Energy/Matter? Epigenetics? All YOK!
* Earth-life is just another, self-replicating, mass format.
* All mass formats evolve from gravitons, the primal universe mass-energy particles.
* Since singularity gravitons are extricated from their big-bang clusters, i.e. become mobile, energy, at a constant rate.
* Evolution Is The Quantum Mechanics Of Natural Selection.
* Quantum mechanics are mechanisms, possible or probable or actual mechanisms of natural selection.
* Life’s Evolution is the quantum mechanics of biology.
Update Concepts-Comprehension…
Earth life genesis from aromaticity-H bonding
Universe-Energy-Mass-Life Compilation
Seed of human-chimp genome diversity
New Era For Science Including Genomics
Dov Henis (comments from 22nd century)
Universe Inflation And Expansion
Inflation on Trial
Astrophysicists interrogate one of their most successful theories
Inflation and expansion are per Newton.
Common sense.
Dov Henis (comments from 22nd century)
|
c96b2267c2136afa | Saturday, 21 January 2017
The Origin of Fake Physics
Peter Woit gives on Not Even Wrong a list of fake physics, most of which can be traced back to the fake-physics character of Schrödinger's linear multi-dimensional equation, as exposed in recent posts.
Woit's list of fake physics thus includes different fantasies of multiversa all originating from the multi-dimensional form of Schrödinger's equation giving each electron its own separate 3d space/universe to dwell in.
But the linear multi-d Schrödinger equation is a postulate of modern physics picked out of the blue as a ready-made, and as such it is like a religious dogma beyond human understanding and rationality.
Why modern physics has been driven into such an unscientific approach remains to be understood and exposed, and discussed...
The standard view is presented by David Gross as follows:
• Quantum mechanics emerged in 1900, when Planck first quantized the energy of radiating oscillators.
• Quantum mechanics is the most successful of all the frameworks that we have discovered to describe physical reality. It works, it makes sense, and it is hard to modify.
• Quantum mechanics does make sense, although the transition, a hundred years ago, from classical to quantum reality was not easy.
• The freedom one has to choose among different, incompatible, frameworks does not influence reality—one gets the same answers for the same questions, no matter which framework one uses.
• That is why one can simply “shut up and calculate.” Most of us do that most of the time.
• By now...we have a completely coherent and consistent formulation of quantum mechanics that corresponds to what we actually do in predicting and describing experiments and observations in the real world.
• For most of us there are no problems.
• Nonetheless, there are dissenting views.
So, the message is that quantum mechanics works if you simply shut up and calculate and don't ask if it makes sense, as physicists are being taught to do, but here are dissenting views...
Note that the standard idea ventilated by Gross is that quantum mechanics somehow emerged from Planck's desperate trick of "quantisation" of blackbody radiation in 1900, when he took on the mission of explaining the physics of radiation while avoiding the "ultraviolet catastrophe" believed to torpedo classical wave mechanics. Planck never believed that his trick had a physical meaning, and in fact the trick is not needed, because an explanation can be given within classical wave mechanics in the form of computational blackbody radiation, with the ultraviolet catastrophe not showing up.
This is what Anthony Leggett, Nobel Laureate and speaker at the 90 Years of Quantum Mechanics Conference, Jan 23-26, 2017, says (in 1987):
• If one wishes to provoke a group of normally phlegmatic physicists into a state of high animation—indeed, in some cases strong emotion—there are few tactics better guaranteed to succeed than to introduce into the conversation the topic of the foundations of quantum mechanics, and more specifically the quantum measurement problem.
• I do not myself feel that any of the so-called solutions of the quantum measurement paradox currently on offer is in any way satisfactory.
• I am personally convinced that the problem of making a consistent and philosophically acceptable 'join' between the quantum formalism which has been so spectacularly successful at the atomic and subatomic level and the 'realistic' classical concepts we employ in everyday life can have no solution within our current conceptual framework;
• We are still, after three hundred years, only at the beginning of a long journey along a path whose twists and turns promise to reveal vistas which at present are beyond our wildest imagination.
• Personally, I see this as not a pessimistic, but a highly optimistic, conclusion. In intellectual endeavour, if nowhere else, it is surely better to travel hopefully than to arrive, and I would like to think that the generation of students now embarking on a career in physics, and their children and their children's children, will grapple with questions at least as intriguing and fundamental as those which fascinate us today—questions which, in all probability, their twentieth-century predecessors did not even have the language to pose.
The need for a revision, now 30 years later, of the very foundations of quantum mechanics is even clearer, 90 years after conception. The starting point must be the wave mechanics of Schrödinger without particles, probabilities, multiversa, measurement paradox, particle-wave duality, complementarity and quantum jumps, with atom microscopics described by the same continuum mathematics as the macroscopic world.
PS Is quantum computing fake physics or possible physics? Nobody knows since no quantum computer has yet been constructed. But the hype/hope is inflated: perhaps by the end of the year...
|
ead217caaf702d80 | Recent Publications
Tchavdar Todorov
1. Title: Electron-phonon thermalization in a scalable method for real-time quantum dynamics
Author(s): Rizzi V., Todorov T.N., Kohanoff J.J., Correa A.A.,
Physical Review B, 93, No. 2 (27 January 2016)
doi: 10.1103/PhysRevB.93.024306
We present a quantum simulation method that follows the dynamics of out-of-equilibrium many-body systems of electrons and oscillators in real time. Its cost is linear in the number of oscillators and it can probe time scales from attoseconds to hundreds of picoseconds. Contrary to Ehrenfest dynamics, it can thermalize starting from a variety of initial conditions, including electronic population inversion. While an electronic temperature can be defined in terms of a nonequilibrium entropy, a Fermi-Dirac distribution in general emerges only after thermalization. These results can be used to construct a kinetic model of electron-phonon equilibration based on the explicit quantum dynamics.
2. Title: Efficient simulations with electronic open boundaries
Author(s): Horsfield A.P., Boleininger M., D'Agosta R., Iyer V., Thong A., Todorov T.N., White C.
Physical Review B, 94, pp. 075118- (10 August 2016)
doi: 10.1103/PhysRevB.94.075118
We present a reformulation of the hairy-probe method for introducing electronic open boundaries that is appropriate for steady-state calculations involving nonorthogonal atomic basis sets. As a check on the correctness of the method we investigate a perfect atomic wire of Cu atoms and a perfect nonorthogonal chain of H atoms. For both atom chains we find that the conductance has a value of exactly one quantum unit and that this is rather insensitive to the strength of coupling of the probes to the system, provided values of the coupling are of the same order as the mean interlevel spacing of the system without probes. For the Cu atom chain we find in addition that away from the regions with probes attached, the potential in the wire is uniform, while within them it follows a predicted exponential variation with position. We then apply the method to an initial investigation of the suitability of graphene as a contact material for molecular electronics. We perform calculations on a carbon nanoribbon to determine the correct coupling strength of the probes to the graphene and obtain a conductance of about two quantum units corresponding to two bands crossing the Fermi surface. We then compute the current through a benzene molecule attached to two graphene contacts and find only a very weak current because of the disruption of the π conjugation by the covalent bond between the benzene and the graphene. In all cases we find that very strong or weak probe couplings suppress the current.
3. Title: Length Matters: Keeping Atomic Wires in Check
Author(s): Cunningham B., Todorov T.N., Dundas D.
MRS Proceedings, 1753 (2015)
doi: 10.1557/opl.2015.197
4. Title: Nonconservative current-driven dynamics: beyond the nanoscale
Beilstein Journal of Nanotechnology, 6, pp. 2140-2147 (13 November 2015)
doi: 10.3762/bjnano.6.219
5. Title: Nonconservative dynamics in long atomic wires
Physical Review B, 90, pp. 115430 - (24 September 2014)
doi: 10.1103/PhysRevB.90.115430
6. Title: Current-induced forces: a simple derivation
Author(s): Todorov T.N., Dundas D., Lü J., Brandbyge M., Hedegård P.
European Journal of Physics, 35, No. 6, pp. 065004- (02 September 2014)
doi: 10.1088/0143-0807/35/6/065004
We revisit the problem of forces on atoms under current in nanoscale conductors. We derive and discuss the five principal kinds of force under steady-state conditions from a simple standpoint that—with the help of background literature—should be accessible to physics undergraduates. The discussion aims at combining methodology with an emphasis on the underlying physics through examples. We discuss and compare two forces present only under current—the non-conservative electron wind force and a Lorentz-like velocity-dependent force. It is shown that in metallic nanowires both display significant features at the wire surface, making it a candidate for the nucleation of current-driven structural transformations and failure. Finally we discuss the problem of force noise and the limitations of Ehrenfest dynamics.
7. Title: Current-induced atomic dynamics, instabilities, and Raman signals: Quasiclassical Langevin equation approach
Author(s): Lü J.T., Brandbyge M., Hedegård P., Todorov T.N., Dundas D.,
Physical Review B, 85, pp. 245444- (25 June 2012)
doi: 10.1103/PhysRevB.85.245444
8. Title: An ignition key for atomic-scale engines
doi: 10.1088/0953-8984/24/40/402203
A current-carrying resonant nanoscale device, simulated by non-adiabatic molecular dynamics, exhibits sharp activation of non-conservative current-induced forces with bias. The result, above the critical bias, is generalized rotational atomic motion with a large gain in kinetic energy. The activation exploits sharp features in the electronic structure, and constitutes, in effect, an ignition key for atomic-scale motors. A controlling factor for the effect is the non-equilibrium dynamical response matrix for small-amplitude atomic motion under current. This matrix can be found from the steady-state electronic structure by a simpler static calculation, providing a way to detect the likely appearance, or otherwise, of non-conservative dynamics, in advance of real-time modelling.
9. Title: Nonconservative current-induced forces: A physical interpretation
doi: 10.3762/bjnano.2.79
10. Title: Nonconservative generalized current-induced forces
doi: 10.1103/PhysRevB.81.075416
11. Title: Density-potential mapping in time-dependent density-functional theory
Physical Review A, 81, No. 4 (2010)
doi: 10.1103/PhysRevA.81.042525
12. Title: Modelling non-adiabatic processes using correlated electron-ion dynamics
doi: 10.1140/epjb/e2010-00280-5
13. Title: Ring currents in azulene
Author(s): Paxton A.T., Todorov T.N., Elena A.M.
Chemical Physics Letters, 483, No. 1-3, pp. 154-158 (2009)
doi: 10.1016/j.cplett.2009.10.041
We propose a self consistent polarisable ion tight binding theory for the study of push–pull processes in aromatic molecules. We find that the method quantitatively reproduces ab initio calculations of dipole moments and polarisability. We apply the scheme in a simulation which solves the time dependent Schrödinger equation to follow the relaxation of azulene from the second excited to the ground states. We observe rather spectacular oscillating ring currents which we explain in terms of interference between the HOMO and LUMO states.
14. Title: Current-driven atomic waterwheels
doi: 10.1038/nnano.2008.411
A current induces forces on atoms inside the conductor that carries it. It is now possible to compute these forces from scratch, and to perform dynamical simulations of the atomic motion under current. One reason for this interest is that current can be a destructive force—it can cause atoms to migrate, resulting in damage and in the eventual failure of the conductor. But one can also ask, can current be made to do useful work on atoms? In particular, can an atomic-scale motor be driven by electrical current, as it can be by other mechanisms? For this to be possible, the current-induced forces on a suitable rotor must be non-conservative, so that net work can be done per revolution. Here we show that current-induced forces in atomic wires are not conservative and that they can be used, in principle, to drive an atomic-scale waterwheel.
15. Title: Current-assisted cooling in atomic wires
Author(s): McEniry E.J., Todorov T.N., Dundas D.,
Journal of Physics: Condensed Matter, 21, No. 19 (2009)
doi: 10.1088/0953-8984/21/19/195304
The effects of inelastic interactions between current-carrying electrons and vibrational modes of a nanoscale junction are a major limiting factor on the stability of such devices. A method for dynamical simulation of inelastic electron-ion interactions in nanoscale conductors is applied to a model system consisting of an adatom bonded to an atomic wire. It is found that the vibrational energy of such a system may decrease under bias, and furthermore that, as the bias is increased, the rate of cooling, within certain limits, will increase. This phenomenon can be understood qualitatively through low-order perturbation theory, and is due to the presence of an anti-resonance in the transmission function of the system at the Fermi level. Such current-assisted cooling may act as a stabilization mechanism, and may form the basis for a nanoscale cooling 'fan'.
16. Title: Newtonian origin of the spin motive force in ferromagnetic atomic wires
Author(s): Stamenova M., Todorov T.N., Sanvito S.,
Physical Review B, 77, No. 5 (2008)
doi: 10.1103/PhysRevB.77.054439
We demonstrate numerically the existence of a spin-motive force acting on spin carriers when moving in a time and space dependent internal field. This is the case for electrons in a one-dimensional wire with a precessing domain wall. The effect can be explained solely by adiabatic dynamics and is shown to exist for both classical and quantum systems.
17. Title: Inelastic quantum transport in nanostructures: The self-consistent Born approximation and correlated electron-ion dynamics
Author(s): McEniry E.J., Frederiksen T., Todorov T.N., Dundas D., Horsfield A.P.,
Physical Review B, 78, No. 3 (2008)
doi: 10.1103/PhysRevB.78.035446
A dynamical method for inelastic transport simulations in nanostructures is compared to a steady-state method based on nonequilibrium Green's functions. A simplified form of the dynamical method produces, in the steady state in the weak-coupling limit, effective self-energies analogous to those in the Born approximation due to electron-phonon coupling. The two methods are then compared numerically on a resonant system consisting of a linear trimer weakly embedded between metal electrodes. This system exhibits an enhanced heating at high biases and long phonon equilibration times. Despite the differences in their formulation, the static and dynamical methods capture local current-induced heating and inelastic corrections to the current with good agreement over a wide range of conditions, except in the limit of very high vibrational excitations where differences begin to emerge.
18. Title: Correlated electron-ion dynamics in metallic systems
Author(s): Horsfield A.P., Finnis M., Foulkes M., LePage J., Mason D., Race C., Sutton A.P., Bowler D.R., Fisher A.J., Miranda R., Stella L., Stoneham A.M., Dundas D., McEniry E., Todorov T.N., Sanchez C.G.,
Computational Materials Science, 44, No. 1, pp. 16-20 (November 2008)
doi: 10.1016/j.commatsci.2008.01.055
In this paper we briefly discuss the problem of simulating non-adiabatic processes in systems that are usefully modelled using molecular dynamics. In particular we address the problems associated with metals, and describe two methods that can be applied: the Ehrenfest approximation and correlated electron-ion dynamics (CEID). The Ehrenfest approximation is used to successfully describe the friction force experienced by an energetic particle passing through a crystal, but is unable to describe the heating of a wire by an electric current. CEID restores the proper heating. (C) 2008 Elsevier B.V. All rights reserved.
19. Title: Structure-related effects on the domain wall migration in atomic point contacts
Author(s): Stamenova M., Sahoo S., Todorov T.N., Sanvito S.
Journal Of Magnetism And Magnetic Materials, 316, No. 2, pp. E934-E936 (SEP 2007)
doi: 10.1016/j.jmmm.2007.03.147
We investigate the interplay between magnetic and structural dynamics in magnetic point contacts under current-carrying conditions. In particular we address the dependence of the results upon the specific parameterization of the elastic properties of the material. Our analysis shows a strong influence of the structural relaxation on the energy barrier for domain wall migration, but a negligible dependence of the mechanical properties on the magnetic state. These results are stable against the various material specific choices of parameters describing different transition metals. (c) 2007 Elsevier B.V. All rights reserved.
20. Title: Dynamical simulation of inelastic quantum transport
Author(s): McEniry E.J., Bowler D.R., Dundas D., Horsfield A.P., Sanchez C.G., Todorov T.N.
Journal Of Physics-Condensed Matter, 19, No. 19, Art. No. 196201 (MAY 16 2007)
doi: 10.1088/0953-8984/19/19/196201
A method for correlated quantum electron-ion dynamics is combined with a method for electronic open boundaries to simulate in real time the heating, and eventual equilibration at an elevated vibrational energy, of a quantum ion under current flow in an atomic wire, together with the response of the current to the ionic heating. The method can also be used to extract inelastic current voltage corrections under steady-state conditions. However, in its present form the open-boundary method contains an approximation that limits the resolution of current-voltage features. The results of the simulations are tested against analytical results from scattering theory. Directions for the improvement of the method are summarized at the end.
21. Title: Molecular conduction: Do time-dependent simulations tell you more than the Landauer approach?
Author(s): Sanchez C.G., Stamenova M., Sanvito S., Bowler D.R., Horsfield A.P., Todorov T.N.
Journal Of Chemical Physics, 124, No. 21, Art. No. 214708 (JUN 7 2006)
doi: 10.1063/1.2202329
A dynamical method for simulating steady-state conduction in atomic and molecular wires is presented which is both computationally and conceptually simple. The method is tested by calculating the current-voltage spectrum of a simple diatomic molecular junction, for which the static Landauer approach produces multiple steady-state solutions. The dynamical method quantitatively reproduces the static results and provides information on the stability of the different solutions. (c) 2006 American Institute of Physics.
22. Title: The transfer of energy between electrons and ions in solids
Author(s): Horsfield A.P., Bowler D.R., Ness H., Sanchez C.G., Todorov T.N., Fisher A.J.
Reports On Progress In Physics, 69, No. 4, pp. 1195-1234 (APR 2006)
doi: 10.1088/0034-4885/69/4/R05
In this review we consider those processes in condensed matter that involve the irreversible flow of energy between electrons and nuclei that follows from a system being taken out of equilibrium. We survey some of the more important experimental phenomena associated with these processes, followed by a number of theoretical techniques for studying them. The techniques considered are those that can be applied to systems containing many nonequivalent atoms. They include both perturbative approaches (Fermi's Golden Rule and non-equilibrium Green's functions) and molecular dynamics based (the Ehrenfest approximation, surface hopping, semi-classical Gaussian wavefunction methods and correlated electron-ion dynamics). These methods are described and characterized, with indications of their relative merits.
23. Title: Magnetomechanical interplay in spin-polarized point contacts
Author(s): Stamenova M., Sahoo S., Sanchez C.G., Todorov T.N., Sanvito S.
Physical Review B, 73, No. 9, Art. No. 094439 (MAR 2006)
doi: 10.1103/PhysRevB.73.094439
We investigate the interplay between magnetic and structural dynamics in ferromagnetic atomic point contacts. In particular, we look at the effect of the atomic relaxation on the energy barrier for magnetic domain wall migration and, reversely, at the effect of the magnetic state on the mechanical forces and structural relaxation. We observe changes of the barrier height due to the atomic relaxation up to 200%, suggesting a very strong coupling between the structural and the magnetic degrees of freedom. The reverse interplay is weak; i.e., the magnetic state has little effect on the structural relaxation at equilibrium or under nonequilibrium, current-carrying conditions.
24. Title: Current-driven magnetic rearrangements in spin-polarized point contacts
doi: 10.1103/PhysRevB.72.134407
26. Title: Correlated electron-ion dynamics with open boundaries: formalism
27. Title: Transport in nanoscale systems: the microcanonical versus grand-canonical picture
Author(s): Di Ventra M., Todorov T.N.
Journal Of Physics-Condensed Matter, 16, No. 45, pp. 8025-8034 (NOV 17 2004)
doi: 10.1088/0953-8984/16/45/024
We analyse a picture of transport in which two large but finite charged electrodes discharge across a nanoscale junction. We identify a functional whose minimization, within the space of all bound many-body wavefunctions, defines an instantaneous steady state. We also discuss factors that favour the onset of steady-state conduction in such systems, make a connection with the notion of entropy, and suggest a novel source of steady-state noise. Finally, we prove that the true many-body total current in this closed system is given exactly by the one-electron total current, obtained from time-dependent density-functional theory.
28. Title: Beyond Ehrenfest: correlated non-adiabatic molecular dynamics
Journal Of Physics-Condensed Matter, 16, No. 46, pp. 8251-8266 (NOV 24 2004)
doi: 10.1088/0953-8984/16/46/012
A method for introducing correlations between electrons and ions that is computationally affordable is described. The central assumption is that the ionic wavefunctions are narrow, which makes possible a moment expansion for the full density matrix. To make the problem tractable we reduce the remaining many-electron problem to a single-electron problem by performing a trace over all electronic degrees of freedom except one. This introduces both one- and two-electron quantities into the equations of motion. Quantities depending on more than one electron are removed by making a Hartree-Fock approximation. Using the first-moment approximation, we perform a number of tight binding simulations of the effect of an electric current on a mobile atom. The classical contribution to the ionic kinetic energy exhibits cooling and is independent of the bias. The quantum contribution exhibits strong heating, with the heating rate proportional to the bias. However, increased scattering of electrons with increasing ionic kinetic energy is not observed. This effect requires the introduction of the second moment.
29. Title: A Maxwell relation for current-induced forces
Author(s): Sutton A.P., Todorov T.N.
Molecular Physics, 102, No. 9-10, pp. 919-925 (MAY 10 2004)
doi: 10.1080/00268970410001703354
A Maxwell relation is presented involving current-induced forces. It provides a new physical picture of the origin of current-induced forces and in the small-voltage limit it enables the identification of a simple thermodynamic potential which drives electromigration. The question of whether current-induced forces are conservative or non-conservative is discussed briefly in the light of these insights.
30. Title: Power dissipation in nanoscale conductors: classical, semi-classical and quantum dynamics
Author(s): Horsfield A.P., Bowler D.R., Fisher A.J., Todorov T.N., Montgomery M.J.
Journal Of Physics-Condensed Matter, 16, No. 21, pp. 3609-3622 (JUN 2 2004)
doi: 10.1088/0953-8984/16/21/010
Modelling Joule heating is a difficult problem because of the need to introduce correct correlations between the motions of the ions and the electrons. In this paper we analyse three different models of current induced heating (a purely classical model, a fully quantum model and a hybrid model in which the electrons are treated quantum mechanically and the atoms are treated classically). We find that all three models allow for both heating and cooling processes in the presence of a current, and furthermore the purely classical and purely quantum models show remarkable agreement in the limit of high biases. However, the hybrid model in the Ehrenfest approximation tends to suppress heating. Analysis of the equations of motion reveals that this is a consequence of two things: the electrons are being treated as a continuous fluid and the atoms cannot undergo quantum fluctuations. A means for correcting this is suggested.
31. Title: Are current-induced forces conservative?
Author(s): Di Ventra M., Chen Y.C., Todorov T.N.
Physical Review Letters, 92, No. 17, Art. No. 176803 (APR 30 2004)
doi: 10.1103/PhysRevLett.92.176803
The expression for the force on an ion in the presence of current can be derived from first principles without any assumption about its conservative character. However, energy functionals have been constructed that indicate that this force can be written as the derivative of a potential. On the other hand, there exist specific arguments that strongly suggest the contrary. We propose physical mechanisms that invalidate such arguments and demonstrate their existence with first-principles calculations. While our results do not constitute a formal resolution to the fundamental question of whether current-induced forces are conservative, they represent a substantial step forward in this direction.
Author(s): Montgomery M.J., Todorov T.N.
33. Title: Inelastic current-voltage spectroscopy of atomic wires
34. Title: Reply to comment on 'Counterbalancing forces in electromigration'
Author(s): Hoekstra J., Sutton A.P., Todorov T.N.
Journal Of Physics-Condensed Matter, 14, No. 25, pp. 6603-6604 (JUL 1 2002)
doi: 10.1088/0953-8984/14/25/327
Reply to comment by K-H W Chu.
35. Title: Power dissipation in nanoscale conductors
Author(s): Montgomery M.J., Todorov T.N., Sutton A.P.
Journal Of Physics-Condensed Matter, 14, No. 21, pp. 5377-5389 (JUN 3 2002)
doi: 10.1088/0953-8984/14/21/312
A previous tight-binding model of power dissipation in a nanoscale conductor under an applied bias is extended to take account of the local atomic topology and the local electronic structure. The method is used to calculate the power dissipated at every atom in model nanoconductor geometries: a nanoscale constriction, a one-dimensional atomic chain between two electrodes with a resonant double barrier, and an irregular nanowire with sharp corners. The local power is compared with the local current density and the local density of states. A simple relation is found between the local power and the current density in quasiballistic geometries. A large enhancement in the power at special atoms is found in cases of resonant and anti-resonant transmission. Such systems may be expected to be particularly unstable against current-induced modifications.
36. Title: Tight-binding simulation of current-carrying nanostructures
Author(s): Todorov T.N.
Journal Of Physics-Condensed Matter, 14, No. 11, pp. 3049-3084 (MAR 25 2002)
doi: 10.1088/0953-8984/14/11/314
The tight-binding (TB) approach to the modelling of electrical conduction in small structures is introduced. Different equivalent forms of the TB expression for the electrical current in a nanoscale junction are derived. The use of the formalism to calculate the current density and local potential is illustrated by model examples. A first-principles time-dependent TB formalism for calculating current-induced forces and the dynamical response of atoms is presented. An earlier expression for current-induced forces under steady-state conditions is generalized beyond local charge neutrality and beyond orthogonal TB. Future directions in the modelling of power dissipation and local heating in nanoscale conductors are discussed.
37. Title: Counterbalancing forces in electromigration
Journal Of Physics-Condensed Matter, 14, No. 6, pp. L137-L140 (FEB 18 2002)
doi: 10.1088/0953-8984/14/6/101
In electromigration (EM) experiments on metallic wires, a flux of atoms can lead to motion of the centre of mass (COM) of the wire. Hence, it may be tempting to assume that the flow of current produces a net force on the wire as a whole. We point out, on the basis of known momentum-balance arguments, that the net force on a metallic wire due to a passing steady-state current is zero. This is possible, because in addition to EM driving forces, acting on scattering centres, there are counterbalancing forces, acting on the rest of the system. Drift of the COM in EM experiments occurs inevitably because the substrate keeps the crystal lattice of the wire fixed, while allowing diffusion of defects in the bulk of the wire. This drift is not evidence for a net force on the wire.
38. Title: Time-dependent tight binding
Author(s): Todorov T.N.
Journal Of Physics-Condensed Matter, 13, No. 45, pp. 10125-10148 (NOV 12 2001)
doi: 10.1088/0953-8984/13/45/302
Starting from a Lagrangian mean-field theory, a set of time-dependent tight-binding equations is derived to describe dynamically and self-consistently an interacting system of quantum electrons and classical nuclei. These equations conserve norm, total energy and total momentum. A comparison with other tight-binding models is made. A previous tight-binding result for forces on atoms in the presence of electrical current flow is generalized to the time-dependent domain and is taken beyond the limit of local charge neutrality.
39. Title: A simple model of atomic interactions in noble metals based explicitly on electronic structure
Author(s): Sutton A.P., Todorov T.N., Cawkwell M.J., Hoekstra J.
Philosophical Magazine A-Physics Of Condensed Matter Structure Defects And Mechanical Properties, 81, No. 7, pp. 1833-1848 (JUL 2001)
doi: 10.1080/01418610108216639
A total energy tight-binding model with a basis of just one s state per atom is introduced. It is argued that this simplest of all tight-binding models provides a surprisingly good description of the structural stability and elastic constants of noble metals. By assuming inverse power scaling laws for the hopping integrals and the repulsive pair potential, it is shown that the density matrix in a perfect primitive crystal is independent of volume, and structural energy differences and equations of state are then derived analytically. The model is most likely to be of use when one wishes to consider explicitly and self-consistently the electronic and atomic structures of a generic metallic system, with the minimum of computational expense. The relationship to the free-electron jellium model is described. The applicability of the model to other metals is also considered briefly.
40. Title: Quantum electronics - Nanotubes go ballistic
Author(s): White C.T., Todorov T.N.
Nature, 411, No. 6838, pp. 649-651 (JUN 2001)
doi: 10.1038/35079720
41. Title: Current-induced embrittlement of atomic wires
Author(s): Todorov T.N., Hoekstra J., Sutton A.P.
Physical Review Letters, 86, No. 16, pp. 3606-3609 (APR 16 2001)
doi: 10.1103/PhysRevLett.86.3606
42. Title: Non-linear conductance of disordered quantum wires
Author(s): Todorov T.N.
Journal Of Physics-Condensed Matter, 12, No. 42, pp. 8995-9006 (OCT 23 2000)
doi: 10.1088/0953-8984/12/42/306
The self-consistent electron potential in a current-carrying disordered quantum wire is spatially inhomogeneous due to the formation of resistivity dipoles across scattering centres. In this paper it is argued that these inhomogeneities in the potential result in a suppression of the differential conductance of such a wire at finite applied voltage. A semi-classical argument allows this suppression, quadratic in the voltage, to be related directly to the amount of intrinsic defect scattering in the wire. This result is then tested against numerical calculations.
43. Title: Electromigration of vacancies in copper
Author(s): Hoekstra J., Sutton A.P., Todorov T.N., Horsfield A.P.
Physical Review B, 62, No. 13, pp. 8568-8571 (OCT 1 2000)
doi: 10.1103/PhysRevB.62.8568
The total current-induced force on atoms in a Cu wire containing a vacancy is calculated using the self-consistent one-electron density matrix in the presence of an electric current, without separation into electron-wind and direct forces. By integrating the total current-induced force, the change in vacancy migration energy due to the current is calculated. We use the change in migration energy with current to infer an effective electromigration driving force F_e. Finally, we calculate the proportionality constant ρ* between F_e and the current density in the wire.
44. Title: Current-induced forces in atomic-scale conductors
Philosophical Magazine B-Physics Of Condensed Matter Statistical Mechanics Electronic Optical And Magnetic Properties, 80, No. 3, pp. 421-455 (MAR 2000)
doi: 10.1080/13642810008208601
We present a self-consistent tight-binding formalism to calculate the forces on individual atoms due to the flow of electrical current in atomic-scale conductors. Simultaneously with the forces, the method yields the local current density and the local potential in the presence of current flow, allowing a direct comparison between these quantities. The method is applicable to structures of arbitrary atomic geometry and can be used to model current-induced mechanical effects in realistic nanoscale junctions and wires. The formalism is implemented within a simple 1s tight-binding model and is applied to two model structures: atomic chains and a nanoscale wire containing a vacancy. |
daa222324acbb199 |
Young Women in Harmonic Analysis and PDE
December 2-4, 2016
Lisa Onkes (University of Bonn)
Singularity formation for dispersive waves
We consider the (focusing) nonlinear Schrödinger equation \begin{align*} (\text{NLS})\qquad \left\lbrace\begin{array}{l} i u_t = - \Delta u - |u|^{p-1}u,\quad(t,x) \in \mathbb{R} \times \mathbb{R}^N \\ u(0,x)= u_0(x) \in H^1(\mathbb{R}^N, \mathbb{C}) \end{array} \right., \end{align*} which, depending on the exponent, is either subcritical, critical or supercritical. In the subcritical case all solutions are globally defined (J. Ginibre and G. Velo), while solutions in the critical case - assuming initial data with mass comparable to the soliton mass and negative Galilean energy - present singularities with almost self-similar blow-up speed (F. Merle and P. Raphaël).
In the supercritical case physical experiments (V. Zakharov, E. Kuznetsov and S. Musher) suggest the existence of self-similar blow-up solutions. However, this has only been proven in the slightly supercritical case (F. Merle, P. Raphaël, and J. Szeftel), by viewing it as a perturbation of the critical case. Our goal is to adopt an approach used by R. Donninger and B. Schörkhuber for the supercritical wave equation and obtain a proof of the existence of self-similar blow-up solutions of the mass-supercritical Schrödinger equation which does not depend on closeness to the critical case, thus (under numerically verifiable assumptions) providing a self-similar blow-up result for the whole supercritical range. |
4ea717adfb51a15b | Green's functions
Lecture notes on Green's functions, for a lecture given on St. Patrick's Day, so written in green, naturally.
Some of the most important equations in physics can be solved by constructing a beast with a curious set of properties, called a Green’s function. This post contains some interesting nuggets from a lecture I gave on St. Patrick’s day about Green’s functions to the course I assist, Mathematical Methods in the Physical Sciences II. I’ll give some historical background about the life of George Green, the functions’ namesake, introduce what a Green’s function actually is–and what exactly it’s good for–in layperson’s terms, and in the final section go through the physics and the math to develop a deeper understanding of what is going on as well as to truly convince ourselves that the function is holding up its side of the bargain.
The lecture follows closely material in Mathematical Methods in The Physical Sciences, Mathematical Methods for Physicists, and “The Green of Green Functions” and the interested reader is referred to these sources for further information.
George Green
The namesake of Green's theorem and Green's functions, George Green, led an atypical life, first blossoming as a mathematician in his 30s after a career as a miller with little formal education to speak of. Born in 1793 to a baker in Nottingham, Green managed to learn to read and write during his 18 months of private school as a child before joining the family business, so to speak, working at the nearby mill and having 7 children over the years with the miller's daughter, whom he never married. In 1823 Green's life took a turn when he joined the Nottingham subscription library at age 30, which gave him access to the leading scientific journals of the age and a peer group of like-minded individuals hailing from the surrounding countryside.
Just 5 years later, at age 35, Green self-published “An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism”1, almost apologizing for wasting mathematicians' time in the introductory paragraphs:
…it is hoped the difficulty of the subject will incline mathematicians to read this work with indulgence, more particularly when they are informed that it was written by a young man, who has been obliged to obtain the little knowledge he possesses, at such intervals and by such means, as other indispensable avocations which offer but few opportunities of mental improvement, afforded.
This 70-page essay contained both the derivation of Green's theorem and Green's functions. Mostly, Green's customers were local merchants, seemingly looking for an erudite bookshelf decoration. At some point, a successful businessman with a background in mathematics, Edward Bromhead, was sufficiently impressed by Green's work to write him a letter offering mentorship and support. Green apparently asked his local friends their opinion of Bromhead's sincerity and was told that Bromhead was just being polite and that it would be inappropriate for someone of such a different social standing to accept such an offer. 20 months later, Green reconsidered and wrote Bromhead accepting his offer.
With Bromhead’s support, Green published several papers, and eventually attended Cambridge in spite of the fact that he possessed, in his own words, “little Latin, less Greek” and had “seen too many winters”. His seminal work that took his career from miller to mathematician remained unpublished and largely unappreciated until shortly after Green’s ignoble death of the flu at age 43 when Lord Kelvin arranged for its publication.
Green’s functions
Poisson's equation $latex \nabla^2 \psi = \frac{\rho}{\epsilon}$ and Laplace's equation $latex \nabla^2 \psi = 0$ model the electrostatic potential $latex \psi$ in the presence and absence of charge, respectively. The equation in the presence of charge is clearly more complicated and can be solved by invoking the machinery of Green's functions, which were originally directed towards electrostatic problems of this sort. In this example, the Green's function $latex G(r,r')$ physically represents the potential at the point $latex r$ produced by a unit charge at $latex r'$. Before moving to more complicated examples, it's easy to see, although more complicated to prove, that $latex G$ is symmetric in its arguments. The universe does not often play directional favorites, so we would imagine that the potential produced would be the same were the points $latex r$ and $latex r'$ swapped.
Green’s functions quickly found other applications to problems in electromagnetic, thermal and mechanical phenomena. Moreover, Green’s functions can be used to formulate a theory of classical wave scattering which leads us into quantum mechanical applications as we notice that the Schrödinger equation of quantum mechanics is itself a wave equation. Using this we can extend the technique to apply to the situation of a non-relativistic scattering of a single particle by an external potential. Once we’ve used Green’s functions to treat scattering we can be very excited. In particle physics interactions scattering is how we investigate properties of the elementary particles. Particle interactions are multiple scattering processes and the transmission of forces is done via quantum fields. The propagation of fields between points was exactly what Green’s functions were invented for–indeed Green’s functions appear in modern quantum field theories. Known as Feynman propagators in this context, they are a standard tool in modern particle physics.
But going back to Poisson’s equation above, we can generalize the equation and its solution, the Green’s function, with a modicum of additional mathematical machinery. This procedure and its associated notation is well covered in Mathematical Methods in The Physical Sciences, Section 9.4. Once generalized it’s easy to spot candidate applications–physical situations with equations of a similar form–for applying the Green’s function methodology to solve them.
Once we have found an equation of the right form we’d like to solve, with the increased abstraction we can show that if we construct a function $latex G(x,t)$ which is:
1. piecewise over a defined interval
2. whose pieces satisfy boundary conditions on the interval
3. whose pieces patch together nicely (i.e. fit together perfectly at some location $latex t$)
4. but not too nicely (i.e. their derivative is discontinuous at $latex t$)
then we can use $latex G(x,t)$ to solve the original equation. More precisely, we take a weighted sum of $latex G$'s values over our defined interval. In our electrostatics terminology $latex G$ corresponds to a weighting function that enhances or reduces the effect of a charge element according to its distance from the source. Everything we need from a solution has now been built into the construction of $latex G$: it sounds like black magic, but it's not. Convincing oneself mathematically is a conceptually straightforward procedure, although a bit involved with the full generalized machinery: simply plug the constructed solution back into the original equation and see it satisfied.
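To make the four properties above concrete, here is a minimal numerical sketch (my own illustration, not from the original lecture) for the simplest textbook case: $latex y'' = f(x)$ on $latex [0,1]$ with $latex y(0)=y(1)=0$. A standard result gives its Green's function as $latex G(x,t) = x(t-1)$ for $latex x \le t$ and $latex t(x-1)$ for $latex x \ge t$: two pieces, each satisfying one boundary condition, matching at $latex x=t$, with a unit jump in the derivative there.

```python
import numpy as np

def G(x, t):
    """Green's function for y'' = f on [0, 1] with y(0) = y(1) = 0.
    Piecewise; each piece satisfies one boundary condition; the pieces
    match at x = t; and dG/dx jumps by 1 across x = t."""
    return np.where(x < t, x * (t - 1.0), t * (x - 1.0))

def solve(f, x, n=2000):
    """The weighted sum over the interval: y(x) = int_0^1 G(x,t) f(t) dt."""
    t = np.linspace(0.0, 1.0, n)
    w = G(x[:, None], t[None, :]) * f(t)   # integrand sampled on an (x, t) grid
    return w.sum(axis=1) * (t[1] - t[0])   # simple quadrature

x = np.linspace(0.0, 1.0, 11)
y = solve(lambda t: np.ones_like(t), x)            # constant forcing f = 1
print(np.allclose(y, (x**2 - x) / 2, atol=1e-4))   # True: exact solution of y'' = 1
```

The final check compares against the exact solution $latex y = (x^2-x)/2$ of $latex y'' = 1$ with these boundary conditions, confirming that the piecewise construction really does solve the equation.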
Physical Intuition
Here I’ll consider a simpler, less general example, to develop some physical intuition as to what the Green’s function is actually representing by providing the mathematical representation of what I have up to now simply referred to as the solution and then concisely going through the calculation checking that our constructed solution actually satisfies the desired equation.
Consider the differential equation
(1) $latex y'' + \omega^2 y = f(t)$,
where $latex f(t)$ is some given forcing function. Moreover, let's impose the simple initial conditions $latex y(0)=y'(0)=0$.
Next we're going to do something a bit strange, but why will become clear. Namely, we're going to rewrite $latex f(t)$ as an integral over a delta function. Recall from the definition of the delta function that $latex \int_a^b \phi(t) \delta(t-t_0)\,dt = \phi(t_0)$ if $latex t_0$ lies within the integration limits, and 0 otherwise. In this formulation we can think of the force $latex f(t)$ as the limiting case of a whole sequence of impulses:
(2) $latex f(t)=\int_a^b f(t') \delta(t'-t)\,dt'$.
Now that we have formulated this new, strange way of thinking of the forcing function, with many more delta functions than we might have initially bargained for, let's simplify things again by considering a single delta function: what if $latex f(t)=\delta(t'-t)$? We now solve our differential equation (1) for this $latex f(t)$, which corresponds to a unit impulse at $latex t'$. Solving this equation is rather easy with a sleight of hand: we simply define the solution to be a function which we call $latex G(t,t')$. That is, $latex G(t,t')$ is the solution to
(3) $latex \frac{d^2}{dt^2} G(t,t’) + \omega^2 G(t,t’) = \delta(t’-t)$.
Finally, given some forcing function $latex f(t)$, we try to find the solution of (1) by simply adding up the responses to many such impulses, guessing that the final solution is of the form
(4) $latex y(t)=\int_0^\infty G(t,t')f(t')\,dt'$.
For now, this is just a guess but we can show it’s correct by following the strategy of plugging it into the original equation (1) and showing that it indeed satisfies it:
• Substitute (4) into (1): $latex y” + \omega^2 y = (\frac{d^2}{dt^2}+\omega^2)y=(\frac{d^2}{dt^2}+\omega^2)\int_0^\infty G(t,t’)f(t’) dt’=\int_0^\infty (\frac{d^2}{dt^2}+\omega^2)G(t,t’)f(t’) dt’$
• Use (3) to simplify: $latex y” + \omega^2 y=\int_0^\infty \delta(t’-t) f(t’)dt’$
• Use (2) to finish the proof: $latex y” + \omega^2y=f(t)$
Thus (4) is the solution of (1). This finally gives us an easier way to picture the role the Green's function plays in the solution: it is the response of the system to a unit impulse at $latex t=t'$, and we have shown that the final solution is a sum of such responses.
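The post never writes $latex G(t,t')$ down explicitly; for the initial conditions $latex y(0)=y'(0)=0$ a standard result is the causal choice $latex G(t,t') = \sin(\omega(t-t'))/\omega$ for $latex t \ge t'$ and zero otherwise. Under that assumption, here is a small numerical sketch (mine, not from the lecture) that assembles the solution (4) as a sum of impulse responses:

```python
import numpy as np

OMEGA = 2.0

def G(t, tp):
    """Causal Green's function of y'' + omega^2 y = delta(t - t') with
    y(0) = y'(0) = 0: the response to a unit impulse at t = t'."""
    return np.where(t >= tp, np.sin(OMEGA * (t - tp)) / OMEGA, 0.0)

def solve(f, t, n=4000):
    """Equation (4): y(t) = int G(t,t') f(t') dt', summed numerically."""
    tp = np.linspace(0.0, t.max(), n)
    return (G(t[:, None], tp[None, :]) * f(tp)).sum(axis=1) * (tp[1] - tp[0])

t = np.linspace(0.0, 10.0, 101)
y = solve(np.sin, t)                              # forcing f(t) = sin(t)
y_exact = np.sin(t) / 3 - np.sin(2.0 * t) / 6     # closed form for omega = 2
print(np.max(np.abs(y - y_exact)))                # small discretization error
```

For $latex \omega = 2$ and $latex f(t) = \sin t$ the closed-form solution with these initial conditions is $latex y = \sin t/3 - \sin 2t/6$, which the numerical convolution reproduces up to quadrature error.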
1. George Green (1841). An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism. Crelle's Journal. arXiv: 0807.0088v1
|
0c2b102b3fdad6a0 | Atomic energy levels and chemical bonding
Atomic structure is the basis of chemistry. It is explained by Quantum Mechanics, which is part of physics. We will see that physics explains chemistry, which explains physiology, which at least starts to explain neurobiology. It's one thing leading to another.
In QM, the properties of a system, that is, a given object or set of objects, such as an atom, are given by the solution to the Schrödinger equation for the system. For atoms, there are a set of solutions, corresponding to different energy states of the atoms. What follows may smack of numerology.
Consider the hydrogen atom, composed of one negatively-charged electron in orbit around a nucleus containing one positively-charged proton. (This is an experimental result.) Note that the orbit is not a well-defined path around the nucleus like those animations you see in TV ads, but rather a cloud of probability which indicates the likelihood that the electron will be found at any given point in the cloud. This is due to the probabilistic character of QM and the Uncertainty Principle. The different solutions to the Schrödinger equation express the possible energy values of the atom. Each one is specified by a set of integer numbers called quantum numbers. In the case of the hydrogen atom, they are the following:
1. The principal quantum number, designated by the symbol n, takes on integer values from 1 on up, but in practice only to 7. It indicates the shell, or level of the cloud, in which the electron is found. The values 1-7 are often indicated by the letters K, L, M…Q.
2. The orbital quantum number, l, indicates a level within the shell which is called the subshell. It can take on values from 0 up to n-1. The values 0-3 are often referred to as s, p, d and f. (The notations s, p, d and f come from spectroscopy and are abbreviated forms of sharp, principal, diffuse and fundamental.)
3. The orbital magnetic quantum number, m, refers to the magnetic orientation of the electron. It can range from -l up through +l.
4. The electron spin, ms, can take on only two values, ½ or -½.
So the only allowed values for the quantum numbers are
n = 1, 2, 3, …
l = 0…n-1 (for a given value of n)
m = -l…+l (for a given value of l)
because those are the ones for which the Schrödinger equation has solutions. It is actually quite simple.
The QM exclusion principle forbids two electrons to occupy the same state. So each set of values (n, l, m, ms) can correspond to only one electron. The result is illustrated in the following table.
n (shell)  l (subshell)  m (orbital)              Max no. electrons
1          0             0                        2
2          0             0                        2
           1             -1, 0, 1                 6
3          0             0                        2
           1             -1, 0, 1                 6
           2             -2, -1, 0, 1, 2          10
4          0             0                        2
           1             -1, 0, 1                 6
           2             -2, -1, 0, 1, 2          10
           3             -3, -2, -1, 0, 1, 2, 3   14
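These counting rules are easy to verify mechanically; the following minimal Python sketch enumerates the allowed (n, l, m, ms) combinations and recovers the shell capacities 2, 8, 18, 32 = 2n² of the table:

```python
# Enumerate the allowed hydrogen-like quantum numbers (n, l, m, ms).
for n in range(1, 5):                      # shells K, L, M, N
    states = [(n, l, m, ms)
              for l in range(n)            # l = 0 .. n-1
              for m in range(-l, l + 1)    # m = -l .. +l
              for ms in (-0.5, 0.5)]       # the two spin values
    print(f"n={n}: {len(states)} electrons max")   # 2, 8, 18, 32
```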
The fact that the quantum numbers do not vary continuously from, say, 0 to 0.001 and then 0.002 and on, but jump from one integer value to another means that the energy of the electron in the electric field of the nucleus also takes on non-continuous values. These are called quantum states and are a feature, or if you prefer, a peculiarity, of QM.
The chemical properties of an atom depend only on the number of electrons. This is equal to the number of protons and is called the atomic number. All atoms except hydrogen have nuclei which also contain neutrons. The table summarizes the allowed values of quantum numbers for the first four shells.
In specifying which subshells are occupied by the electrons in an atom, one often uses the format nl#,
where n is the shell number, l is specified as s, p, d or f and # is the number of electrons in the subshell. In its minimum energy state, called the ground state, the carbon atom (atomic number 6; the ¹²C nucleus contains 6 protons and 6 neutrons) has the following electron configuration:
¹²C: 1s²2s²2p²
which indicates the maximum number of two electrons in shell 1, again in subshell s of shell 2 and the remaining two in subshell p of shell 2. Similarly, oxygen (atomic number 8; the ¹⁶O nucleus contains 8 protons and 8 neutrons) is
¹⁶O: 1s²2s²2p⁴
the meaning of which should now be clear.
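These configurations can also be generated automatically. Here is a minimal sketch using the Madelung (n+l) filling rule for the ground-state "bottom-up" ordering; as noted below, some elements break this rule, so it is an approximation:

```python
# Ground-state electron configuration via the Madelung (n+l) rule:
# fill subshells in order of increasing n+l, breaking ties by smaller n.
def configuration(z):
    letters = "spdf"
    subshells = sorted(((n, l) for n in range(1, 8) for l in range(min(n, 4))),
                       key=lambda nl: (nl[0] + nl[1], nl[0]))
    parts = []
    for n, l in subshells:
        if z <= 0:
            break
        k = min(z, 2 * (2 * l + 1))        # subshell capacity is 2(2l+1)
        parts.append(f"{n}{letters[l]}{k}")
        z -= k
    return "".join(parts)

print(configuration(6))    # carbon:   1s22s22p2
print(configuration(8))    # oxygen:   1s22s22p4
print(configuration(11))   # sodium:   1s22s22p63s1
print(configuration(17))   # chlorine: 1s22s22p63s23p5
```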
What is interesting is that, for energetic reasons, each atom would like to have its outside subshell filled. If a few electrons are missing, it wants more; if most are missing, it might be willing to give up the rest in order to have an empty outside shell, referred to as the valence shell. (The number of electrons in this outer shell is called the valence. Officially: the maximum number of univalent atoms (originally hydrogen or chlorine atoms) that may combine with an atom of the element under consideration, or with a fragment, or for which an atom of this element can be substituted.) For instance, hydrogen
¹H: 1s¹
wants two electrons or none in its 1s shell, so it could give up its electron or gain one. What happens is, two H atoms share their electrons to make a molecule of H₂, so each has two electrons half the time. Better than nothing. (To continue the anthropomorphisms, this is a kind of solidarity in which humans are often lacking.)
Since oxygen already has shell 2 half-filled, it would probably prefer to gain electrons to fill it. And carbon… but carbon is special and will be considered in a moment.
Look at sodium (Na, atomic number 11) and chlorine (Cl, atomic number 17):
Na: 1s²2s²2p⁶3s¹
Cl: 1s²2s²2p⁶3s²3p⁵
Sodium could happily give up that 3s electron and chlorine could use it to fill up its 3p valence shell. And this is what happens in table salt, NaCl. If you put salt in water, it separates (for reasons which will be discussed shortly) into charged ions, Na⁺ and Cl⁻, because chlorine is greedy and keeps the negative 3s electron it took away from sodium. This attraction for electrons is called electronegativity. This is very important in biochemical reactions in cells, as we shall see.
Chemistry is the study of chemical systems (atoms, molecules) and chemical bonding between such objects. In the case of NaCl, the sodium and chlorine have opposite electrical charge and the attractive electric force is what holds the molecule together. This is called ionic bonding. Sometimes, when atoms cannot decide which has more right to an electron, the electron is shared between them, as in H2, making both atoms relatively happy. Bonding based on shared electrons is called covalent bonding; it is a sort of consensus situation, if we may go on with the anthropomorphism.
Elements with the same number of electrons in their outer shells have similar chemical properties. So they are arranged in columns in that wonderful physical/chemical tool, the periodic table of the elements.
Periodic table of the elements
Periodic table from Wikimedia Commons
It is easy to see that each element in the first column is like hydrogen in having one electron in its valence shell.
H: 1s¹
Li: 1s²2s¹
Na: 1s²2s²2p⁶3s¹
K: 1s²2s²2p⁶3s²3p⁶4s¹
… and so on.
The extra elements in the middle are rule-breakers. Instead of filling one subshell before moving on to the next, they start one, add a small number (often only one) of electrons to the next, then go back to finish filling the next-to-last.
Columns in the table are called groups; rows, periods.
The subshell configurations we have been giving are for the lowest energy state of the atom, called the ground state, in which subshells are filled from the “bottom” up (with some exceptions, as just mentioned). But if that hydrogen electron is struck by a photon, enough energy may be transferred from the photon to the 1s electron to push it into a higher-energy subshell. The atom is then said to be in an excited state. The electron may then re-descend spontaneously to the lower subshell, emitting a photon of energy equivalent to the difference in energy levels of the subshells. In QM, photons behave like waves whose energy is a function of their frequency, so the frequency – equivalently, the color – of the light emitted is characteristic of the difference in energy of the two subshells. Any atom’s subshells will therefore correspond to a given set of photon frequencies emitted and these are seen as colors, although not all these colors will be visible to a human eye. The set of frequencies constitute the spectrum of the atom and may be used to analyze the identity of a light source. In this way, we can identify the chemical components of light-emitting objects like distant stars.
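As a concrete example of this energy-to-color correspondence, here is a minimal sketch assuming the textbook hydrogen level formula E_n = -13.6 eV/n² (what the Schrödinger equation gives for hydrogen):

```python
# Photon wavelength from an energy-level difference: E_photon = h*nu = h*c/lambda.
h = 4.135667696e-15     # Planck constant in eV*s
c = 2.99792458e8        # speed of light in m/s

def wavelength_nm(n_hi, n_lo):
    E = 13.6057 * (1.0 / n_lo**2 - 1.0 / n_hi**2)   # emitted photon energy in eV
    return h * c / E * 1e9                          # wavelength in nm

print(round(wavelength_nm(3, 2)))   # ~656 nm: H-alpha, the red Balmer line
```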
There are two other types of bonding. We will consider hydrogen bonds very shortly in the discussion of water. The fourth form is due to the shifting electron density distribution around an atom. At times, this may form a temporary dipole even in a neutral atom. This may in turn induce a dipole in a nearby atom in such a way that the two dipoles attract each other very weakly. This is London, or van der Waals, bonding.
The functioning of all living things depends on water and on the versatility of the carbon atom. So let’s start with carbon.
|
0112eb47417ea146 |
Schrödinger Equation, where did it come from?
1. Feb 3, 2009 #1
I was just wondering this lately: where does the Schrödinger Equation come from? How was it 'invented'?
Our teacher told us that it cannot be derived from any classical mechanics (quite obviously) so how did Schrödinger come up with it?
I can hardly believe he just postulated that a wavefunction would obey that equation, and it "happened" to obey measurements..?
I can't find this anywhere, not even in my QM textbook (by Griffiths). Actually the first chapter begins by stating the SE... No explanation whatsoever!
3. Feb 3, 2009 #2
I am not sure if my history is correct here but I believe that his equation was borrowed from classical theory and he found that it could be adapted to encompass de Broglie's idea of the wave nature of matter.
4. Feb 3, 2009 #3
5. Feb 3, 2009 #4
I don't know exactly, but I think it went something like this:
de Broglie's work suggested that matter had wave properties. So let's try writing down a function describing a wave. [itex]\exp(-iEt+i\vec p\cdot \vec x)[/itex] is a plane wave propagating in the direction of [itex]\vec p[/itex]. That suggests that [itex]\vec p[/itex] is proportional to the velocity. It could be the velocity itself, or the momentum. If this exponential is the solution of a differential equation, then this equation is also going to tell us the relationship between E and [itex]\vec p[/itex]. What equation will give us the relativistic relationship between energy and momentum, [itex]E^2=\vec p^2+m^2[/itex]? The answer is the Klein-Gordon equation. That didn't work out very well, so let's try the equation that gives us the non-relativistic relationship between energy and momentum, [itex]E=\vec p^2/2m[/itex]. The result is the Schrödinger equation.
I'm not sure about the details, but I'm pretty sure that I've read that the solutions were found before the equation, and that the Klein-Gordon equation was found before the Schrödinger equation. (By the way, I'm using units such that [itex]\hbar=1[/itex]).
Edit: My speculations can safely be ignored. Jtbell seems to have actually read the original papers. :biggrin: I consider that a form of cheating. :smile:
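A quick symbolic check of the dispersion-relation argument above: a minimal sympy sketch in one space dimension, units with [itex]\hbar=1[/itex].

```python
import sympy as sp

# Plane wave psi = exp(i(px - Et)); check i d/dt psi = -(1/2m) d^2/dx^2 psi.
t, x, E, p, m = sp.symbols("t x E p m", positive=True)
psi = sp.exp(sp.I * (p * x - E * t))

lhs = sp.I * sp.diff(psi, t)             # i dpsi/dt        -> E * psi
rhs = -sp.diff(psi, x, 2) / (2 * m)      # -(1/2m) psi_xx   -> (p^2 / 2m) * psi
print(sp.simplify(lhs / psi), sp.simplify(rhs / psi))   # E  and  p**2/(2*m)
```

The plane wave solves the free Schrödinger equation exactly when E = p²/2m, the non-relativistic relationship, just as the post above describes.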
6. Feb 4, 2009 #5
Thanks, that actually makes sense :)
Our teacher did give us a quick "semi-derivation", simply by explaining what the derivatives physically represent, for example, and arguing that the equation is at least plausible, but this is much more satisfying :) |
bbaaf3b9c9ed6420 | Tuesday, January 31, 2017
The End of CO2 Alarmism
• Climate sensitivity to CO2 emission vastly exaggerated.
• Climate industrial complex a very dangerous special interest.
And read about this historic press conference:
Radiation as Superposition or Jumping?
Which is more convincing: Superposition or jumping?
• The second paper refers to radiation only in passing.
Monday, January 30, 2017
Towards a Model of Atoms
which I hope to assemble into a model which can describe:
• electrons as clouds of charge subject to Coulomb and compression forces
• no conceptual difference between micro and macro
• no probability, no multi-d
• generalised harmonic oscillator
• small damping from Abraham-Lorentz force from oscillating electro charge
• near resonant forcing with half period phase shift
Sunday, January 29, 2017
The Radiating Atom
with the following characteristic energy balance between outgoing and incoming energy:
in terms of work of forcing.
Saturday, January 28, 2017
Physical Interpretation of Quantum Mechanics Needed
Thursday, January 26, 2017
Why Atomic Emission at Beat Frequencies Only?
and $t$ is a time variable.
with corresponding charge density
Wednesday, January 25, 2017
New Curriculum with Programming on the Government's Desk
Tuesday, January 24, 2017
Is the Quantum World Really Inexplicable in Classical Terms?
• The quantum world is inexplicable in classical terms.
The question is how to generalise Schrödinger's equation for the Hydrogen atom with one electron, which works fine and can be understood, to Helium with two electrons and so on... The question is then how the two electrons of Helium find co-existence around the nucleus. In Real Quantum Mechanics they split 3d space without overlap... like East and West of global politics or Germany...
Quantum Mechanics as Retreat to (German) Romantic Irrational Ideal
• How could this happen?
Monday, January 23, 2017
Quantum Mechanics as Classical Continuum Physics and Not Particle Mechanics
Planck (with eyes shut) presents Einstein with the Max Planck medal of the German Physical Society, 28 June 1929, in Berlin, as the highest award of the Deutsche Physikalische Gesellschaft, for Einstein's idea of light as particles, which Planck did not believe in (and did not want to see).
Modern physics in the form of quantum mechanics was born in 1900 when Planck in a desperate act introduced the idea of a smallest packet of energy or quantum to explain black-body radiation, followed up in 1905 by Einstein's equally desperate attempt to explain photo-electricity by viewing light as a stream of light particles of energy quanta $h\nu$, where $\nu$ is frequency and $h$ Planck's constant.
Yes, Einstein was desperate, because he was stuck as patent clerk in Bern and his academic career was going nowhere. Yes, Planck was also desperate because his role at the University of Berlin as the successor of the great Kirchhoff, was to explain blackbody radiation as the most urgent unsolved problem of physics and thereby demonstrate the scientific leadership of an emerging German Empire.
The "quantisation" into discrete smallest packets of energy and light was against the wisdom of the continuum physics of the 19th century crowned by Maxwell's wave equations describing all of electro-magnetics as a system of partial differential equations over 3d-space as a continuum over real numbers as the ultimate triumph of the infinitesimal Calculus of Leibniz and Newton.
The "quantisation" of energy and light thus meant a partial retreat to the view of the early Greek atomists with the world ultimately built from indivisible particles or quanta and not waves, also named particle physics.
But the wave nature was kept in Schrödinger's linear multi-d equation as the basis of quantum mechanics, but then not in physical form as in Maxwell's equations, but as probability waves supposedly describing probabilities of particle configurations. The mixture was named wave-particle duality, which has been the subject of endless discussion after its introduction by Bohr.
Schrödinger never accepted a particle description and stuck to his original idea that waves are enough to explain atom physics. The trouble with this view was the multi-d aspect of Schrödinger's equation, which could not be given a meaning/interpretation in terms of physical waves, like Maxwell's equations. This made Schrödinger's waves-are-enough idea impossible to defend, and Schrödinger's equation was hijacked by Bohr/Born/Heisenberg and twisted into a physical particle - probabilistic wave Copenhagen Interpretation as the textbook truth.
But blackbody radiation and the photoelectric effect can be explained by wave mechanics without any form of particles in the form of Computational Blackbody Radiation with the new element being finite precision computation.
The idea of a particle is contradictory, as something with physical presence without physical dimension. Atom physics can make sense as wave mechanics but not as particle mechanics. It is important to remember that this was the view of Schrödinger when he formulated his wave equation in 1925 for the Hydrogen atom. What is needed is an extension of Schrödinger's equation to atoms with several electrons which has a physical meaning, maybe as Real Quantum Mechanics, and this is not the standard linear multi-d Schrödinger equation with solutions interpreted as probability distributions of particle configurations in the spirit of Born-Bohr-Heisenberg but not Schrödinger.
Recall that particle motion is also a contradictory concept, as shown in Zeno's paradox: at each instant of time the particle (Zeno's arrow) is still at a point in space, and thus cannot move to another point. On the other hand, wave motion, such as the translatory motion of a wave across a water surface, is possible to explain as the result of (circular) transversal water oscillation without translation. Electro-magnetic waves propagate by transversal oscillation of electric-magnetic fields.
And do not believe that Zeno's paradox was ever solved. It expresses the truly contradictory nature of the concept of particle, which cannot be resolved. Ponder the following "explanation" on Stanford Encyclopedia of Philosophy:
As you understand, this is just nonsense:
Particles don't exist, and if they anyway are claimed to exist, they cannot move.
Waves do exist and can move. It is not so difficult to understand!
Saturday, January 21, 2017
Deconstruction of CO2 Alarmism Started
Directly after inauguration the White House web site changes to a new Energy Plan, where all of Obama's CO2 alarmism has been completely eliminated:
Nothing about dangerous CO2! No limits on emission! Trump has listened to science! CO2 alarmism will be defunded and why not then also other forms of fake physics...
This is the first step to the Fall of IPCC and the Paris agreement and liberation of resources for the benefit of humanity, see phys.org.
The defunding of CO2 alarmism will now start, and then why not other forms of fake science?
PS1 Skepticism to CO2 alarmism expressed by Klimatrealisterna is now getting published in media in Norway, while in Sweden it is fully censored. I have recently accepted an invitation to become a member of the scientific committee of this organisation (not yet visible on the web site).
PS2 Read Roy Spencer's analysis of the Trump Dump:
Bottomline: With plenty of energy, poverty can be eliminated. Unstopped CO2 alarmism will massively increase poverty with no gain whatsoever. Trump is the first state leader to understand that the Emperor of CO2 Alarmism is naked, and other leaders will now open their eyes to see the same thing...and skeptics may soon say mission complete...
See also The Beginning of the End of EPA.
The Origin of Fake Physics
The standard view is presented by David Gross as follows:
• For most of us there are no problems.
• Nonetheless, there are dissenting views.
Friday, January 20, 2017
Shaky Basis of Quantum Mechanics
Schrödinger's equation! Where did we get that equation from? Nowhere. It is not possible to derive it from anything you know. It came out of the mind of Schrodinger. (Richard P. Feynman)
In the final analysis, the quantum mechanical wave equation will be obtained by a postulate, whose justification is not that it has been deduced entirely from information already known experimentally (Eisberg and Resnick in Quantum Physics)
Schrödinger's equation as the basic mathematical model of quantum mechanics is obtained as follows:
Start with classical mechanics with a Hamiltonian of the following form for a system of $N$ interacting point particles of unit mass with positions $x_n(t)$ and momenta $p_n=\frac{dx_n}{dt}$ varying with time $t$ for $n=1,...N$:
• $H(x_1,...,x_N)=\frac{1}{2}\sum_{n=1}^Np_n^2+V(x_1,...,x_N)$
where $V$ is a potential depending on the particle positions $x_n$, with the corresponding equations of motion
• $\frac{dp_n}{dt}=-\frac{\partial V}{\partial x_n}$ for $n=1,...,N$. (1)
Proceed by formally replacing momentum $p_n$ by the differential operator $-i\nabla_n$ where $\nabla_n$ is the gradient operator acting with respect to $x_n$ now viewed as the coordinates of three-dimensional space (and $i$ is the imaginary unit), to get the Hamiltonian
• $H(x_1,...,x_N)=-\frac{1}{2}\sum_{n=1}^N\Delta_n +V(x_1,...,x_N)$
supposed to be acting on a wave function $\psi (x_1,...,x_N)$ depending on $N$ 3d coordinates $x_1,...,x_N$, where $\Delta_n$ is the Laplacian with respect to coordinate $x_n$. Then postulate Schrödinger's equation with a vague reference to (1) as a linear multi-d equation of the form:
• $i\frac{\partial \psi}{\partial t}=H\psi$. (2)
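For the one case where everything can be checked by hand, the Hydrogen atom with $N=1$ and the Coulomb potential $V=-1/r$, the recipe does produce a solvable equation: a minimal sympy sketch in atomic units verifying that $\psi =e^{-r}$ satisfies $H\psi =-\frac{1}{2}\psi$.

```python
import sympy as sp

# Hydrogen ground state in atomic units: H = -1/2 Laplacian - 1/r.
r = sp.symbols("r", positive=True)
psi = sp.exp(-r)

# Radial Laplacian of a spherically symmetric function: (1/r^2) d/dr (r^2 dpsi/dr)
lap = sp.diff(r**2 * sp.diff(psi, r), r) / r**2
H_psi = -lap / 2 - psi / r
print(sp.simplify(H_psi / psi))   # -1/2, the ground state energy
```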
Schrödinger's equation thus results from inflating single points to full 3d spaces in a purely formal twist of classical mechanics by brutally changing the meaning of $x_n$ from point to full 3d space and then twisting (1) as well. The inflation gives a wave function which depends on $3N$ space coordinates and as such has no physicality and is way beyond computability.
The inflation corresponds to a shift from actual position, which may be of interest, to possible position (which can be anywhere), which has no interest.
The inflation from point to full 3d space has become the trade mark of modern physics as expressed in Schrödinger's multi-d linear equation, with endless speculation without conclusion about the possible physics of the inflation and the meaning of (2).
The formality and lack of physicality of the inflation of course should have sent Schrödinger's multi-d linear equation (2) to the waste-bin from start, but it didn't happen with the argument that even if the physics of the equation was beyond rationale, predictions from the equation always (yes, always!!) agree with observation. The lack of scientific logic was thus acknowledged from start, but it was taken for granted that anyway the equation describes physics very accurately. If a prediction from computation with Schrödinger's equation does not compare well with observation, there must be something wrong with the computation or comparison, never with the equation itself...
But solutions of Schrödinger's multi-d equation cannot be computed in any generality and thus claims of general validity have no real ground. It is simply a postulate/axiom and as such true by assumption as a tautology which can only be true.
The main attempts to give the inflation of classical mechanics into Schrödinger's multi-d linear equation a meaning, are:
• Copenhagen Interpretation CI (probabilistic)
• Many World Interpretation MWI (infinitely many parallel universa in certain contact)
• Pilot-Wave (Bohm)
with no one explanation gathering clear acceptance. In particular, Schrödinger did not like these interpretations of his equation and dreamed of a different version in 3d with physical "anschaulich" meaning, but did not find it...
In the CI the possibilities become actualities by observation, while in MWI all possibilities are viewed as actualities, and in Bohmian mechanics the pilot wave represents the possibilities with a particle somehow carried by the wave representing actuality... all very strange...
Wednesday, January 18, 2017
Many Worlds Interpretation vs Double Slit Experiment
When I ask David Deutsch what his basic motivation is to believe that the Many Worlds Interpretation MWI of the multi-d linear Schrödinger equation describes real physics, I get the response that it is in particular the single electron double slit experiment, which he claims is difficult to explain otherwise.
But is this so difficult to explain assuming that electrons are always waves and never particles? I don't think so. Here is my argument:
In the single electron double slit experiment a screen displays an interference pattern created by a signal passing through a double slit, even with the input so weak that the interference pattern is created dot by dot as if being hit by a stream of single electron particles.
This is presented as a mystery, by arguing that an electron particle must choose one of the slits to pass through, and doing so cannot create an interference pattern, because that can only arise if the single electron is a wave freely passing through both slits. So the experiment cannot be explained, which gives evidence that quantum mechanics is a mystery, and since it is a mystery anything is possible, like MWI.
But there is no mystery if, following Schrödinger, we understand that electrons are always waves and never particles, and that the effect on the screen of an incoming wave may be a dot somewhere on the screen triggered by local perturbations. A dot as effect does not require the cause to be dot-like.
It is thus possible to understand the single electron double slit experiment under the assumption that electrons are always wave-like and always pass through both slits and thus can create an interference pattern, in accordance with the original objective of Schrödinger to describe electrons as waves, and then physical waves and not probability waves as in the Copenhagen Interpretation as another form of MWI.
The trouble with quantum mechanics is the multi-d linear Schrödinger equation which describes probability waves or many worlds waves, which are not physical waves. The challenge is to formulate a Schrödinger equation which describes physical waves, that is to reach the objective of Schrödinger, which may possibly be done with something like realQM...
Ironically, Schrödinger's equation for just one electron is a physical wave equation, and so if anything can be explained by that equation it is the single electron double slit experiment and its mystery then evaporates...
PS The fact that putting a detector at one of the slits destroys the interference pattern, is also understandable with the electron as wave, since a detector may affect a wave and thus may destroy the subtle interference behind the pattern.
Tuesday, January 17, 2017
David Deutsch on Quantum Reality
David Deutsch is a proponent of Everett's Many Worlds Interpretation MWI of quantum mechanics under a strong conviction that (from Many Worlds? Everett, Quantum Theory and Reality, Oxford Press 2010):
• Science can only be explanation: asserting what is there in reality.
• The only purpose of formalism, predictions, and interpretation is to express explanatory theories about what is there in reality, not merely predictions about human perceptions.
• Restricting science to the latter would be arbitrary and intolerably parochial.
These convictions forces Deutsch into claiming that the multiverse of MWI is reality, which many physicists find hard to believe, including me.
But I share the view of Deutsch that science is explanation of what is there in reality (in opposition to the Copenhagen Interpretation disregarding reality), and this is the starting point of realQM.
Concerning the development and practice of quantum mechanics Deutsch says:
• It is assumed that in order to discover the true quantum-dynamical equations of the world, you have to enact a certain ritual.
• First you have to invent a theory that you know to be false, using a traditional formalism and laws that were refuted a century ago.
• Then you subject this theory to a formal process known as quantization (which for these purposes includes renormalization).
• And that’s supposed to be your quantum theory: a classical ghost in a tacked-on quantum shell
• In other words, the true explanation of the world is supposed to be obtained by the mechanical transformation of a false theory, without any new explanation being added.
• This is almost magical thinking.
• How far could Newtonian physics have been developed if everyone had taken for granted that there had to be a ghost of Kepler in every Newtonian theory—that the only valid solutions of Newtonian equations were those based on conic sections, because Kepler’s Laws had those. And because the early successes of Newtonian theory had them too?
Yes, quantum mechanics (based on Schrödinger's linear multi-d equation) is ritual, formality and magical thinking, and that is not what science is supposed to be.
The logic about Schrödinger's linear multi-d equation then is:
1. Interpretations must be made to give the equation a meaning.
2. All interpretations are basically equivalent.
3. One interpretation is MWI.
4. MWI is absurd non-physics.
5. Linear multi-d Schrödinger equation does not describe physics.
Monday, January 16, 2017
Is Quantum Computing Possible?
• .....may or may not be mystery as to what the world view that quantum mechanics represents. At least I do, because I'm an old enough man that I haven't got to the point that this stuff is obvious to me. Okay, I still get nervous with it. And therefore, some of the younger students ... you know how it always is, every new idea, it takes a generation or two until it becomes obvious that there's no real problem. It has not yet become obvious to me that there's no real problem. I cannot define the real problem, therefore I suspect there's no real problem, but I'm not sure there's no real problem.
• So that's why I like to investigate things. So I know that quantum mechanics seem to involve probability--and I therefore want to talk about simulating probability. (Feynman asking himself about a possibility of quantum computing in 1982)
The idea of quantum computing originates from a 1982 speculation by Feynman followed up by Deutsch on the possibility of designing a quantum computer supposedly making use of the quantum states of subatomic particles to process and store information. The hope was that quantum computing would allow certain computations, such as factoring a large natural number into prime factors, which are impossible on a classical digital computer.
A quantum computer would be able to crack encryption based on prime factorisation and thus upset the banking system and the world. In the hands of terrorists it would be a dangerous weapon...and so do we have to be afraid of quantum computing?
Not yet in any case! Quantum computing is still a speculation and nothing like any real quantum computer cracking encryption has been constructed up to date, 35 years later. But the hopes are still high...although so far the top result is factorisation of 15 into 3 x 5...(...in 2012, the factorization of 21 was achieved, setting the record for the largest number factored with Shor's algorithm...)
But what is the reason behind the hopes? The origin is the special form of Schrödinger's equation as the basic mathematical model of the atomic world viewed as a quantum world fundamentally different from the macroscopic world of our lives and the classical computer, in terms of a wave function
• $\psi (x_1,...,x_N,t)$
depending on $N$ three-dimensional spatial coordinates $x_1$,...,$x_N$ (and time $t$) for a system of $N$ quantum particles such as an atom with $N$ electrons. Such a wave function thus depends on $3N$ spatial variables of $N$ different versions of $R^3$ as three-dimensional Euclidean space.
The multi-dimensional wave function $\psi (x_1,...,x_N,t)$ is to be compared with a classical field variable like density $\rho (x,t)$ depending on a single 3d spatial variable $x\in R^3$. The wave function $\psi (x_1,...,x_N,t)$ depends on $N$ different copies of $R^3$, while for $\rho (x,t)$ there is only one copy, and that is the copy we are living in.
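The scaling is easy to quantify. A minimal sketch, with an illustrative grid of $g$ points per coordinate axis, of the storage needed just to tabulate $\psi (x_1,...,x_N)$:

```python
# Tabulating psi(x_1, ..., x_N) on g points per axis over 3N axes
# takes g**(3N) complex numbers (16 bytes each as complex128).
g = 100                        # illustrative grid resolution per axis
for N in (1, 2, 3, 10):        # number of electrons
    values = g ** (3 * N)
    print(f"N={N}: {values:.3e} values, {values * 16:.3e} bytes")
```

Already $N=2$ (Helium) needs $10^{12}$ grid values; $N=10$ is beyond any conceivable memory, which is why general validity can never be tested by computation.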
In the Many Worlds Interpretation MWI of Schrödinger's equation the $N$ different copies of $R^3$ are given existence as parallel universes or multiversa, while our experience still must be restricted to just one of them, with the other as distant shadows.
The wave function $\psi (x_1,...,x_N,t)$ thus has an immense richness through its contact with multiversa, and the idea of quantum computing is to somehow use this immense richness by sending a computational task to multiversa for processing and then bringing back the result to our single universe for inspection.
It would be like sending a piece of information to an immense cloud for complex computational processing and then bringing it back for inspection. But for this to work the cloud must exist in some form and be accessible.
Quantum computing is thus closely related to MWI and the reality of a quantum computer would seem to depend on a reality of multiversa. The alternative to MWI and multiversa is the probabilistic Copenhagen Interpretation CI, but that does not make things more clear or hopeful.
But what is the reason behind MWI and multiversa? The only reason is the multi-dimensional aspect of Schrödinger's equation, but Schrödinger's equation is a man-made ad hoc variation of the equations of motion of classical mechanics obtained by a purely formal procedure of representing momentum $p$ by a multi-dimensional gradient differential operator as $p=-i\nabla$, thus formally replacing $p^2$ by the action on $\psi$ of a multi-dimensional Laplacian $-\Delta =-\sum_j\Delta_j$ with $\Delta_j$ the Laplacian with respect to $x_j$, thus acting with respect to all the $x_j$ for $j=1,...,N$.
But formally replacing $p$ by $-i\nabla$ is just a formality without physical reason, and it is from this formality that MWI and multiversa arise, and then also the hopes of quantum computing. Is there then reason to believe that the multi-dimensional $-\Delta\psi$ has a physical meaning, or does it rather represent some form of Kabbalism or scripture interpretation?
My view is that multiversa and quantum computing based on a multi-dimensional Schrödinger equation based on a formality, is far-fetched irrational dreaming, that Feynman's feeling of a real problem sensed something important, and this is my reason for exploration of realQM based on a new version of Schrödinger's equation in physical three-dimensional space.
PS1 One may argue that if MWI is absurd, which many think, then CI is also absurd, which many think, since both are interpretations of one and the same multi-dimensional Schrödinger equation, and the conclusion would then be that if all interpretations are absurd, then so is what is being interpreted, right? Even more reason for realQM and less hope for quantum computing...
PS2 MWI was formulated by Hugh Everett III in his 1956 thesis with Wheeler. Many years later, Everett laughingly recounted to Misner, in a tape-recorded conversation at a cocktail party in May 1977, that he came up with his many-worlds idea in 1954 "after a slosh or two of sherry", when he, Misner, and Aage Petersen (Bohr’s assistant) were thinking up "ridiculous things about the implications of quantum mechanics". (see Many Worlds? Everett, Quantum Theory and Reality, Oxford University Press)
PS3 To get a glimpse of the mind-boggling complexity of $3N$-dimensional space, think of the big leaps from 1d to 2d and from 2d to 3d, and then imagine the leap to the 6d of the two electrons of Helium with $N=2$ as the simplest of all atoms beyond Hydrogen with $N=1$. In this perspective a single Helium atom as quantum computer could be imagined to have the computational power of a laptop. Yes, many dimensions and many worlds are mind-boggling, and as such maybe just phantasy.
Saturday, January 14, 2017
The Quantum Manifesto Contradiction
The scientific basis of the Manifesto is:
• With quantum theory now fully established, we are required to look at the world in a fundamentally new way: objects can be in different states at the same time (superposition) and can be deeply connected without any direct physical interaction (entanglement).
The idea is that superposition and entanglement will open capabilities beyond imagination:
• Quantum computers are expected to be able to solve, in a few minutes, problems that are unsolvable by the supercomputers of today and tomorrow.
But from where comes the idea that the quantum world is a world of superposition and entanglement? Is it based on observation? No, it is not, because the quantum world is not open to such inspection.
Instead it comes from theory in the form of a mathematical model named Schrödinger's equation, which is linear and thus allows superposition, and which includes Coulombic forces of attraction and repulsion as forms of instant (spooky) action at distance thus expressing entanglement.
But Schrödinger's equation is an ad hoc man-made theoretical mathematical model resulting from a purely formal twist of classical mechanics, for which a deeper scientific rationale is lacking. Even worse, Schrödinger's equation for an atom with $N$ electrons involves $3N$ space dimensions, which makes computational solution impossible even with $N$ very small. Accordingly, the Manifesto does not allocate a single penny for solution of Schrödinger's equation, which is nowhere mentioned in the Manifesto. Note that the quantum simulators of the grand plan shown above are not digital solvers of Schrödinger's equation, but
• can be viewed as analogue versions of quantum computers, specially dedicated to reproducing the behaviour of materials at very low temperatures, where quantum phenomena arise and give rise to extraordinary properties. Their main advantage over all-purpose quantum computers is that quantum simulators do not require complete control of each individual component, and thus are simpler to build.
• Several platforms for quantum simulators are under development, including ultracold atoms in optical lattices, trapped ions, arrays of superconducting qubits or of quantum dots and photons. In fact, the first prototypes have already been able to perform simulations beyond what is possible with current supercomputers, although only for some particular problems.
The Quantum Manifesto is thus based on a mathematical model in the form of a multi-dimensional Schrödinger equation suggesting superposition and entanglement, from which the inventive physicist is able to imagine yet unimagined capabilities, while the model itself is considered to be useless for real exploration of possibilities, because not even a quantum computer can be imagined to solve the equation. This is yet another expression of quantum contradiction.
Recall that the objective of RealQM is to find a new version of Schrödinger's equation which is computable and can be used for endless digital exploration of the analog quantum world.
See also Quantum Europe May 2017.
Wednesday, January 4, 2017
Update of realQM and The Trouble of Quantum Mechanics
I have made an update of realQM as a start for the New Year! More updates will follow...
The update contains more computational results (and citations) and includes corrections of some misprints.
The recent book by Bricmont Making Sense of Quantum Mechanics reviews the confusion concerning the meaning of quantum mechanics, which is still after 100 years deeply troubling the prime achievement of modern physics. As only salvation Bricmont brings out the pilot-wave of Bohm from the wardrobe of dismissed theories, seemingly forgetting that it once was put there for good reasons. The net result of the book is thus that quantum mechanics in its present shape does not make sense...which gives me motivation to pursue realQM...and maybe someone else sharing the understanding that science must make sense...see earlier post on Bricmont's book ...
Yes, the trouble of making sense of quantum mechanics is of concern to physicists today, as expressed in the article The Trouble with Quantum Mechanics in the January 2017 issue of The New York Review of Books by Steven Weinberg, sending the following message to the world of science ultimately based on quantum mechanics:
• The development of quantum mechanics in the first decades of the twentieth century came as a shock to many physicists. Today, despite the great successes of quantum mechanics, arguments continue about its meaning, and its future.
• I’m not as sure as I once was about the future of quantum mechanics. It is a bad sign that those physicists today who are most comfortable with quantum mechanics do not agree with one another about what it all means.
• What then must be done about the shortcomings of quantum mechanics? One reasonable response is contained in the legendary advice to inquiring students: “Shut up and calculate!” There is no argument about how to use quantum mechanics, only how to describe what it means, so perhaps the problem is merely one of words.
• The goal in inventing a new theory is to make this happen not by giving measurement any special status in the laws of physics, but as part of what in the post-quantum theory would be the ordinary processes of physics.
• Unfortunately, these ideas about modifications of quantum mechanics are not only speculative but also vague, and we have no idea how big we should expect the corrections to quantum mechanics to be. Regarding not only this issue, but more generally the future of quantum mechanics, I have to echo Viola in Twelfth Night: “O time, thou must untangle this, not I.”
Weinberg thus gives little hope that fixing the trouble with quantum mechanics will be possible by human intervention, and so the very origin of the trouble, the multi-dimensional linear Schrödinger equation invented by Schrödinger, must be questioned and then questioned seriously (as was done by Schrödinger propelling him away from the paradigm of quantum mechanics), and not as now simply be accepted as a God-given fact beyond question. This is the starting point of realQM.
Of course Lubos Motl, as an ardent believer in the Copenhagen Interpretation, whatever it may be, does not understand the crackpot troubles/worries of Weinberg.
As an expression of the interest in quantum mechanics still today, you may want to browse the upcoming Conference on 90 Years of Quantum Mechanics presented as:
• This conference celebrates this magnificent journey that started 90 years ago. Quantum mechanics has during this period developed in leaps and bounds and this conference will be devoted to the progress of quantum mechanics since then. It aims to show how universal quantum mechanics is penetrating all of basic physics. Another aim of the conference is to highlight how quantum mechanics is at the heart of most modern science applications and technology.
Note the "leaps and bounds" which may be the troubles Weinberg is referring to... |
fba3eaef39d04c1a | Multiverse - Living in parallel - February 2017
Evil Triumphs in These Multiverses, and God Is Powerless
By Dean Zimmerman
It may turn out that our world is fairly middling, one among the many universes that were good enough for God to create.
A religious Everettian might hope that God would just prune the tree, and leave only those branches where good triumphs over evil. But as the philosopher Jason Turner of the University of Arizona has pointed out, such pruning undermines the Schrödinger equation. If God prevents the worst universes from emerging on the world-tree, then the deterministic law would not truly describe the evolution of the multiverse. Not all the superposed states that it predicts would actually occur, but only those that God judges to be “good enough.”
Even if the pruning argument doesn’t work, there is another reason to think that the many-worlds interpretation doesn’t pose a serious threat to belief in God. Everett’s multiverse is just a much expanded physical world like this one, and finding we were in it would be like finding we were in a world with many more inhabited planets, some the amplified versions of the worst parts of our planet and others the amplified versions of the best parts. And so, even the worst parts of an Everettian multiverse are just particularly ugly versions of planet Earth. If an afterlife helps to explain our seemingly pointless suffering, then it would help explain the seemingly pointless suffering in even the worst of these Everett worlds, if we suppose that everyone in every branch shows up in an afterlife.
A theist may also take comfort in the fact that the many-worlds interpretation is still far from scientific orthodoxy. Although beloved by Oxford philosophers and accepted by a growing number of theoretical physicists, the theory remains highly controversial, and there are fundamental problems still being hashed out by the experts.
The Everettian multiverse contains worlds that are hard to reconcile with a good God, but Tegmark’s multiverse might contain the worst. His theory, from his 2014 book Our Mathematical Universe, isn’t anchored in quantum mechanics but in modal realism, the doctrine proposed by philosopher David Lewis that every possible way that things could have gone—every consistent, total history of a universe—is as real as our own universe.
Most philosophers talk about possible worlds as abstract things, like numbers, located outside of space and time, and as if they are very different from the actual world, which is substantial and made out of good old-fashioned matter. Tegmark agrees that other merely possible universes are abstract like numbers. But he denies that this makes them less real than the physical world. He thinks our universe is itself fundamentally a mathematical structure. Every physicist agrees that there is a set of mathematical entities standing in relations that perfectly models the distribution of fields and particles which a perfect physics would ascribe to our world. But Tegmark argues that our universe is identical to those mathematical things.
If the world we inhabit is a purely mathematical structure, then all the other possible worlds we can imagine are equally real, their existence a necessary result of slightly different mathematical structures. For every possible way in which mathematical models dictate that matter can be consistently arranged to fill a spacetime universe, there exists such a universe.
These possible arrangements of matter are bound to include ones corresponding to miserable universes full of pointless suffering—universes like all of the worst branches in the Everettian world-tree, and infinitely many more just as bad. But there would also be worlds that are worse. Unlike Everett’s worlds that are generated by a physical theory, Tegmark’s worlds are generated by mere possibility, which he identifies with mathematical consistency.
Budding Universes: Everett's many-worlds interpretation holds that there are multiple overlapping universes, all branching off from some initial state in a great world-tree.
Jacopo Werther
According to Tegmark, every possible story about living creatures that can be told by means of a mathematical model of the underlying physical facts is a true story. This means that even if some of Tegmark’s universes last long enough to include episodes in which their inhabitants have an afterlife, the existence of mathematical structures with every possible shape and size requires shorter worlds, too. And, infinitely many of these worlds will not last long enough for their inhabitants to enjoy an afterlife.
There is one way, then, in which Everett’s multiverse poses less of a challenge to the theist than Tegmark’s. Everett’s theory doesn’t predict that God won’t do anything for people with short, miserable lives, and it doesn’t predict that God won’t somehow compensate them in an afterlife. Rather, it only predicts that there will be many more short, miserable lives than just the ones in our universe; whereas Tegmark’s theory implies that there have to be worlds in which there are short miserable lives and no afterlife.
Adding insult to injury, since the horrifying worlds are consequences of pure mathematics, they exist as a matter of absolute necessity—so there is nothing God can do about it! The resulting picture will remain offensive to pious ears: A God who loved all creatures, but was forced to watch infinitely many of them endure lives of inconsolable suffering, would be a God embroiled in a tragedy.
But there is still hope for the theist.
Unlike the Everettian many worlds, which issue from experimental theories in physics and so are harder to dismiss, Tegmark’s theory is based on frail philosophical arguments. Take, for example, his claim that the physical universe is a purely mathematical structure: Why should we accept this? Ordinarily, physicists use mathematical structures as models for how the physical world might work, but they do not identify the mathematical model with the world itself. Tegmark’s reason for taking the latter approach is his conviction that physics must be purged of anything but mathematical terms. Non-mathematical concepts, he says, are “anthropocentric baggage,” and must be eliminated for objectivity’s sake. But why think that the only objective descriptions that can truly apply to things as they are in themselves are mathematical descriptions? So far as I can see, he never justifies this assumption. And such a counterintuitive starting point isn’t enough to threaten one’s belief in a benevolent God.
Apart from the threats posed by the awful worlds within the multiverses of Everett and Tegmark, the idea that we inhabit a multiverse doesn’t have to undermine a belief in God. Every theist should take seriously the possibility that there might exist more universes, simply on the grounds that God would have reason to create more good stuff. Indeed, an infinitely ingenious, resourceful, and creative Being might be expected to work on canvases the size of worlds—some filled with frenetic activity, others more like vast minimalist paintings, many maybe even featuring intelligent beings like ourselves. And the theories of physicists such as Alan Guth and Andrei Linde—whose multiverse is an eternally inflating field that spins off baby universes—or Paul Steinhardt and Neil Turok—whose multiverse amounts to an endless cyclical universe punctuated by big bangs and big crunches—are arguably compatible with this theological vision.
It may turn out that our world is fairly middling, one among the many universes that were good enough for God to create. And the idea of a multiverse consisting of disconnected spacetime universes may make it easier to believe that our world—our universe—is a part of a larger one that is on balance very good and created by a perfectly benevolent deity.
Lead Image: "Christ in Limbo," Hieronymus Bosch |
97d69fd8efa004c8 | Friday, September 26, 2008
Dark black holes, dark flow, and how to avoid heat death?
Lubos made interesting comments about the calculation of black hole entropy in his blog. I have absolutely nothing to say about this branch of science as far as technicalities are concerned. The formulas for black hole entropy however inspire new visions about black holes if one accepts the hierarchy of Planck constants and the notion of relative darkness, in the sense that particles at the different pages of the book-like structure, whose pages are labelled by the values of Planck constant, are dark relative to each other. I glue below a slightly edited comment in Kea's blog.
1. Black hole entropy and dark black holes
Lubos made in his posting explicit the 1/hbar proportionality of the formulas for black hole entropy. This proportionality reflects the basic thermodynamical implication of quantization: the phase space of an N-dimensional system decomposes into cells of volume hbar^N and entropy is proportional to the phase space volume using this volume as unit. If hbar becomes gigantic, as it would in the case of dark gravitation (hbar_gr = GM₁M₂/v₀, v₀/c ∼ 2⁻¹¹ for inner planetary Bohr orbits), this means that black hole entropy is extremely small. Black is dark;-) as I realized a few years ago, and it would be interesting to consider the consequences.
2. Hierarchy of Planck lengths
It deserves to be noticed that the rough order of magnitude estimate for the gravitational Planck constant of the Sun can be written as hbar_gr = x·4GM². This gives for the Planck length the expression
L_P = (G·hbar_gr)^(1/2) = x^(1/2)·2GM .
For x=1 the Planck length would be just the Schwarzschild radius. This makes sense since these two lengths play rather similar roles. Quite generally, one would have a hierarchy of Planck lengths.
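A quick numerical check of these formulas, in geometric units with c = k_B = 1 and with x and M as illustrative values; the entropy expression S = 4πGM²/hbar is the standard Bekenstein-Hawking formula rewritten to make the 1/hbar proportionality explicit.

```python
import math

# hbar_gr = x * 4 * G * M**2 gives L_P = (G * hbar_gr)**0.5 = x**0.5 * 2 * G * M,
# and the entropy S = 4*pi*G*M**2 / hbar shrinks as hbar grows.
G, M, x = 1.0, 1.0, 1.0              # geometric units, illustrative values

hbar_gr = x * 4 * G * M**2
print(math.sqrt(G * hbar_gr), math.sqrt(x) * 2 * G * M)   # equal: the Schwarzschild radius for x=1

for hbar in (1.0, hbar_gr, 1e6 * hbar_gr):
    print(hbar, 4 * math.pi * G * M**2 / hbar)            # entropy falls off as 1/hbar
```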
3. Dark flow
Second comment is related to the earlier posting of Lubos about the observed dark flow in length scales larger than the horizon size towards an attractor outside the horizon. The presence of the attractor outside the visible universe conforms with the notion of many-sheeted space-time, predicting also a many-sheeted cosmology.
Many-sheeted cosmology means a hierarchy of space-time sheets obeying their own Robertson-Walker type cosmologies: those with varying p-adic length scale and those labelled by various values of Planck constants at pages of book like structure obtained by gluing together singular coverings and factor spaces of 8-D imbedding space (roughly). Particles at different pages are dark relative to each other in the sense that there are no local interaction vertices: classical interactions and those by exchanges of say photons are possible. Each sheet in many-sheeted cosmology has different horizon size.
The attractor would correspond to a different value of Planck constant and have larger horizon size than our sheet. Dark energy would be dark matter and the phase transitions increasing Planck constant would induce phases of accelerated expansion. In average sense these periods would give ordinary cosmology without accelerated expansion.
4. How to avoid heat death?
Third comment relates to the dark flow and implications of the hierarchy of Planck constants to future prospects of intelligent life. Heat death is believed by standard physicists to be waiting for all forms of life. We would live in the silliest possible Universe. I cannot believe this. I am ready to admit that some of our theories about the Universe are really silly, but entire Universe?--No!
The hierarchy of Planck constants would allow to avoid heat death. For instance, if the rate for the reduction of temperature is proportional to 1/hbar -as looks natural- then there is always an infinite number of hierarchy levels for which temperature is above a given temperature since the temperature at these pages is reduced so slowly.
Life can escape to the pages of the Big Book labelled by larger values of Planck constant without breaking the second law, since the scaling of the size of the system by hbar increases phase space volume and keeps entropy constant. Evolution by quantum leaps increasing hbar, which increases the time scale of planned action and long term memory, is another manner to say this.
The observed dark flow might be seen as a direct support for this more optimistic view about Life and the Universe and Everything;-).
Tuesday, September 23, 2008
Flyby anomaly as a relativistic transverse Doppler effect?
Half a year ago I discussed a model for the flyby anomaly based on the hypothesis that a dark matter ring around the orbit of Earth causes the effect. The model reproduced the formula deduced for the change of the velocity of the spacecraft at a qualitative level, and contained a single free parameter: essentially the linear density of the dark matter at the flux tube.
From Lubos I learned about a new twist in the story of flyby anomaly. September twelfth 2007 Jean-Paul Mbelek proposed an explanation of the flyby anomaly as a relativistic transverse Doppler effect. The model predicts also the functional dependence of the magnitude of the effect on the kinematic parameters and the prediction is consistent with the empirical findings in the example considered. Therefore the story of flyby anomaly might be finished and dark matter at the orbit of Earth could bring in only an additional effect. It is probably too much to hope for this kind of effect to be large enough if present.
For background see the chapter TGD and Astrophysics.
Monday, September 22, 2008
Tritium beta decay anomaly and variations in the rates of radioactive processes
The determination of the neutrino mass from the beta decay of tritium leads to a tachyonic mass squared [2,3,4,5]. I have considered several alternative explanations for this long standing anomaly. The first class of models relies on the presence of a dark neutrino or antineutrino belt around the orbit of Earth. The second class of models relies on the prediction of the nuclear string model that the neutral color bonds connecting nucleons to a nuclear string can also be charged. This predicts a large number of fake nuclei having only apparently the proton and neutron numbers deduced from the mass number.
1. The ³He nucleus resulting from the decay could be fake (a tritium nucleus with one positively charged color bond making it look like ³He). The idea was that the slightly smaller mass of the fake ³He might explain the anomaly: it however turned out that the model cannot explain the variation of the anomaly from experiment to experiment.
2. Later (yesterday evening!) I realized that also the initial ³H nucleus could be fake (a ³He nucleus with one negatively charged color bond). It turned out that the fake tritium option has the potential to explain all aspects of the anomaly and also other anomalies related to radioactive and alpha decays of nuclei.
3. Just one day ago I still believed in the alternative based on the assumption of a dark neutrino or antineutrino belt surrounding Earth's orbit. This model has the potential to explain satisfactorily several aspects of the anomaly but fails in its simplest form to explain the dependence of the anomaly on experiment. Since the fake tritium scenario is based only on the basic assumptions of the nuclear string model and brings in only new values of kinematical parameters, it is definitely favored.
In the following I shall describe only the models based on the decay of tritium to fake Helium and the decay of fake tritium to Helium.
1. Fake 3He option
Consider first the fake 3He option. Tritium (pnn) would decay with some rate to a fake 3He, call it 3Hef, which is actually tritium nucleus containing one positively charged color bond and possessing mass slightly different than that of 3He (ppn).
1. In this kind of situation the expression for the function K(E) differs from the standard one K_stand since the upper bound E0 for the maximal electron energy is modified:

E0 → E1 = M(3H) − M(3Hef) − mν = M(3H) − M(3He) + ΔM − mν ,
ΔM = M(3He) − M(3Hef) .

Depending on whether 3Hef is heavier or lighter than 3He, E0 decreases or increases. From Vb ∈ [5, 100] eV and from the TGD based prediction m(ν̄) ~ .27 eV one can conclude that ΔM should be in the range 5-100 eV.
2. In the lowest approximation K(E) can be written as

K(E) = K0(E,E1)θ(E1−E) ≈ (E1−E)θ(E1−E) .

Here θ(x) denotes the step function and K0(E,E1) corresponds to the massless antineutrino.
3. If a fraction p of the final state nuclei corresponds to the fake 3He, the function K(E) deduced from data is a linear combination of the functions K(E,3He) and K(E,3Hef) and is given by

K(E) = (1−p)K(E,3He) + pK(E,3Hef)
≈ (1−p)(E0−E)θ(E0−E) + p(E1−E)θ(E1−E)

in the approximation mν = 0. (A numerical sketch of this two-component endpoint spectrum is given after this list.) For m(3Hef) < m(3He) one has E1 > E0, giving

K(E) = (E0−E)θ(E0−E) + p(E1−E0)θ(E1−E)θ(E−E0) .
K(E,E0) is shifted upwards by a constant term (1−p)ΔM in the region E0 > E. At E = E0 the derivative of K(E) is infinite, which corresponds to the divergence of the derivative of the square root function in the simpler parametrization using a tachyonic mass. The prediction of the model is the presence of a tail corresponding to the region E0 < E < E1.
4. The model does not as such explain the bump near the end point of the spectrum. The decay 3H → 3Hef can be interpreted in terms of an exotic weak decay d → u + W− of the exotic d quark at the end of a color bond connecting nucleons inside 3H. The rate for these interactions cannot differ too much from that for ordinary weak interactions, and the W boson must transform to its ordinary variant before the decay W → e + ν̄. Either the weak decay at the quark level or the phase transition could take place with a considerable rate only for low enough virtual W boson energies, say for energies for which the Compton length of the massless W boson corresponds to the size scale of the color flux tubes, predicted to be much longer than the nuclear size. If so, the anomaly would be absent for higher energies and a bump would result.
5. The value of K(E) at E = E0 is Vb ≡ p(E1−E0). The variation of the fraction p could explain the observed dependence of Vb on the experiment as well as its time variation. It is however difficult to understand how p could vary.
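The shape of this two-component spectrum is easy to visualize numerically. The following minimal Python sketch evaluates the mν = 0 form of K(E) given above; the values of E0, ΔM and p are purely illustrative assumptions (E0 is set to the tritium endpoint scale of about 18574 eV), not fits to data.

```python
import numpy as np

def theta(x):
    """Step function theta(x): 1 for x >= 0, 0 otherwise."""
    return (x >= 0).astype(float)

# Illustrative parameters (assumptions, not fitted values):
E0 = 18574.0        # standard endpoint energy in eV (tritium scale)
dM = 50.0           # Delta M = M(3He) - M(3He_f) in eV
E1 = E0 + dM        # shifted endpoint of the fake branch (m_nu = 0)
p = 0.2             # assumed fraction of fake final state nuclei

E = np.linspace(E0 - 200.0, E1 + 50.0, 2000)
K = (1 - p) * (E0 - E) * theta(E0 - E) + p * (E1 - E) * theta(E1 - E)

# Endpoint shift V_b = p*(E1 - E0) and the tail E0 < E < E1 discussed above:
print("V_b =", p * (E1 - E0), "eV")
print("tail from", E0, "eV to", E1, "eV")
```

Varying p changes the size of the endpoint quantity Vb without moving E1, which is the mechanism invoked above for the experiment-to-experiment variation.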
2. Fake 3H option
Assume that a fraction p of the tritium nuclei are fake and correspond to 3He nuclei with one negatively charged color bond.
1. By repeating the previous calculation one obtains exactly the same expression for K(E) in the approximation mν = 0 but with the replacement

ΔM = M(3He) − M(3Hef) → M(3Hf) − M(3H) .
2. In this case it is possible to understand the variations in the shape of K(E) if the fraction of 3Hf varies in time and from experiment to experiment. A possible mechanism inducing this variation is a transition inducing the transformation 3Hf → 3H by an exotic weak decay d + p → u + n, where u and d correspond to the quarks at the ends of the color flux tubes. This kind of transition could be induced by the absorption of X-rays, say artificial X-rays or X-rays from the Sun. The inverse of this process in the Sun could generate X-rays which induce this process in a resonant manner at the surface of Earth.
3. The well-known but poorly understood X-ray bursts from the Sun during solar flares in the wavelength range 1-8 Å correspond to energies in the range 1.6-12.4 keV, 3 octaves in good approximation (a worked conversion is sketched after this list). This radiation could be partly due to transitions between ordinary and exotic states of nuclei rather than bremsstrahlung from charged particles accelerated to relativistic energies. The energy range suggests the presence of three p-adic length scales: the nuclear string model indeed predicts several p-adic length scales for color bonds corresponding to different mass scales for the quarks at the ends of the bonds. This energy range is considerably above the energy range 5-100 eV and suggests the range [4×10⁻⁴, 6×10⁻²] for the values of p. The existence of these excitations would mean a new branch of low energy nuclear physics, which might be dubbed X-ray nuclear physics.
4. The approximately half-year period of the temporal variation would naturally correspond to the 1/R² dependence of the intensity of the X-ray radiation from the Sun. There is evidence that the period is a few hours longer than half a year, which supports the view that the origin of the periodicity is not purely geometric but relates to the dynamics of the X-ray radiation from the Sun. Note that for 2 hours one would have ΔT/T ≈ 2⁻¹¹, which defines a fundamental constant in the TGD Universe and is also near to the electron proton mass ratio.
5. All nuclei could appear as similar anomalous variants. Since both weak and strong decay rates are sensitive to the binding energy, it is possible to test this prediction by finding whether nuclear decay rates show anomalous time variation.
6. The model could explain also other anomalies of radioactive reaction rates, including the findings of Shnoll [1] and the unexplained fluctuations in the decay rates of 32Si and 226Ra reported quite recently and correlating with 1/R², where R is the distance between the Earth and the Sun. 226Ra decays by alpha emission, but the sensitive dependence of the alpha decay rate on the binding energy means that the temporal variation of the fraction of fake 226Ra isotopes could explain the variation of the decay rates. The intensity of the X-ray radiation from the Sun is proportional to 1/R², so that the correlation of the fluctuation with distance would emerge naturally.
7. Also a dip in the decay rates of 54Mn coincident with a peak in proton and X-ray fluxes during a solar flare has been observed: the proposal is that the neutrino flux from the Sun is also enhanced during the solar flare and induces the effect. A peak in the X-ray flux is a more natural explanation in the TGD framework.
8. The model predicts an interaction between atomic physics and nuclear physics, which might be of relevance in biology. For instance, the transitions between exotic and ordinary variants of nuclei could yield X-rays inducing atomic transitions or ionization. The wavelength range 1-8 Å for the anomalous X-rays corresponds to the range Z ∈ [11,30] for ionization energies. The biologically important ions Na+, Mg++, P-, Cl-, K+, Ca++ have Z = (11, 12, 15, 17, 19, 20). I have proposed that Na+, Cl-, K+ (fermions) are actually bosonic exotic ions forming Bose-Einstein condensates at magnetic flux tubes (see this). The exchange of W bosons between neutral Ne and A(rgon) atoms (bosons) could yield exotic bosonic variants of Na+ (perhaps even Mg++, which is a boson also as an ordinary ion) and Cl- ions. A similar exchange between A atoms could yield exotic bosonic variants of Cl- and K+ (and even Ca++, which is a boson also as an ordinary variant). This transformation might relate to the paradoxical finding that noble gases can act as narcotics. This hypothesis is testable by measuring the nuclear weights of these ions. X-rays from the Sun are not present during night time and this could relate to the night-day cycle of living organisms. Note that the magnetic bodies are of the size scale of Earth and even larger, so that the exotic ions inside them could be subject to intense X-ray radiation. The X-rays could also be dark X-rays with a large Planck constant and thus with a much lower frequency than ordinary X-rays, so that control could be possible.
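As a quick check of the wavelength-energy correspondence used in item 3, the conversion E = hc/λ can be evaluated directly; the snippet below is a minimal sketch using the standard value hc ≈ 12.398 keV·Å.

```python
# Photon energy from wavelength: E [keV] = hc / lambda [Angstrom].
HC_KEV_ANGSTROM = 12.398  # h*c in keV*Angstrom

for wavelength in (1.0, 8.0):
    print(f"{wavelength:.0f} A -> {HC_KEV_ANGSTROM / wavelength:.2f} keV")
# Output: 1 A -> 12.40 keV, 8 A -> 1.55 keV, i.e. the 1.6-12.4 keV band
# quoted above, spanning 3 octaves (a factor of 8) in good approximation.
```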
[3] Ch. Weinheimer et al. (1993), Phys. Lett. B 300, 210.
[4] J. I. Collar (1996), Endpoint Structure in Beta Decay from Coherent Weak-Interaction of the Neutrino, hep-ph/9611420.
[5] G. J. Stephenson Jr. (1993), in Perspectives in Neutrinos, Atomic Physics and Gravitation, ed. J. T. Thanh Van, T. Damour, E. Hinds and J. Wilkerson (Editions Frontieres, Gif-sur-Yvette), p. 31.
For more details see the chapters TGD and Nuclear Physics and Nuclear String Hypothesis of "p-Adic length scale Hypothesis and Dark Matter Hierarchy".
Monday, September 15, 2008
Zero energy ontology, self hierarchy, and the notion of time
In the previous posting I discussed the most recent view about zero energy ontology and p-adicization program. One manner to test the internal consistency of this framework is by formulating the basic notions and problems of TGD inspired quantum theory of consciousness and quantum biology in terms of zero energy ontology. I have discussed these topics already earlier but the more detailed understanding of the role of causal diamonds (CDs) brings many new aspects to the discussion.
In consciousness theory the basic challenges are to understand the asymmetry between positive and negative energies and between the two directions of geometric time at the level of conscious experience, the correspondence between experienced and geometric time, and the emergence of the arrow of time. One should also explain why human sensory experience is about a rather narrow time interval of about .1 seconds and why memories are about the interior of a much larger CD with a time scale of the order of a life time. One should also have a vision about how the evolution of consciousness takes place: how quantum leaps leading to an expansion of consciousness occur.
Negative energy signals to geometric past - about which phase conjugate laser light represents an example - provide an attractive tool to realize intentional action as a signal inducing neural activities in the geometric past (this would explain Libet's classical findings), a mechanism of remote metabolism, and the mechanism of declarative memory as communications with the geometric past. One should understand how these signals are realized in zero energy ontology and why their occurrence is so rare.
In the following my intention is to demonstrate that TGD inspired theory of consciousness and quantum TGD proper indeed seem to be in tune and that this process of comparison helps considerably in the attempt to develop the TGD based ontology at the level of details.
1. Causal diamonds as correlates for selves
Quantum jump as a moment of consciousness, self as a sequence of quantum jumps integrating into a whole, and the self hierarchy with sub-selves experienced as mental images are the basic notions of the TGD inspired quantum theory of consciousness. In the most ambitious program the self hierarchy reduces to a fractal hierarchy of quantum jumps within quantum jumps.
It is natural to interpret CD:s as correlates of selves. CDs can be interpreted in two manners: as subsets of the generalized imbedding space or as sectors of the world of classical worlds (WCW). Accordingly, selves correspond to CD:s of the generalized imbedding space or to sectors of WCW, literally separate interacting quantum Universes. The spiritually oriented reader might speak of Gods. Sub-selves correspond geometrically to sub-CD:s. The contents of consciousness of a self is about the interior of the corresponding CD at the level of the imbedding space. For sub-selves the wave function for the position of the tip of the CD brings in the delocalization of the sub-WCW.
The fractal hierarchy of CDs within CDs defines the counterpart for the hierarchy of selves: the quantization of the time scale of planned action and memory as T(k) = 2^k T_0 suggests an interpretation for the fact that we experience octaves as equivalent in music experience.
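A minimal sketch of this doubling hierarchy follows, taking as an illustrative base scale T0 = .1 s (the secondary p-adic time scale assigned to the electron below); the choice of T0 and the range of k are assumptions made only for the printout.

```python
T0 = 0.1  # base time scale in seconds (illustrative assumption)

# T(k) = 2^k * T0: each level doubles the time scale of planned
# action and memory, the temporal analog of a musical octave.
for k in range(8):
    print(f"k = {k}: T = {2**k * T0:.1f} s")
```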
2. Why is sensory experience about so short a time interval?
The CD picture automatically implies the 4-D character of conscious experience, and memories form a part of conscious experience even at the elementary particle level: in fact, the secondary p-adic time scale of the electron is T = .1 seconds, defining a fundamental time scale in living matter. The problem is to understand why the sensory experience is about a short interval of geometric time rather than about the entire personal CD with a temporal size of the order of a life-time. The obvious explanation would be that sensory input corresponds to sub-selves (mental images) which correspond to CD:s with T(127) ≈ .1 s (electrons or their Cooper pairs) at the upper light-like boundary of the CD assignable to the self. This requires a strong asymmetry between the upper and lower light-like boundaries of CD:s.
1. The only reasonable manner to explain the situation seems to be that the addition of CD:s within CD:s in the state construction must always glue them to the upper light-like boundary of CD along light-like radial ray from the tip of the past directed light-cone. This conforms with the classical picture according to which classical sensory data arrives from the geometric past with velocity which is at most light velocity.
2. One must also explain the rare but real occurrence of phase conjugate signals understandable as negative energy signals propagating towards geometric past. The conditions making possible negative energy signals are achieved when the sub-CD is glued to both the past and future directed light-cones at the space-like edge of CD along light-like rays emerging from the edge. This exceptional case gives negative energy signals traveling to the geometric past. The above mentioned basic control mechanism of biology would represent a particular instance of this situation. Negative energy signals as a basic mechanism of intentional action would explain why living matter seems to be so special.
3. Geometric memories would correspond to the lower boundaries of CD:s and would not in general be sharp because only the sub-CD:s glued to both the upper and lower light-cone boundary would be present. A temporal sequence of mental images, say the sequence of digits of a phone number, could correspond to a sequence of sub-CD:s glued to the upper light-cone boundary.
4. Sharing of mental images corresponds to a fusion of sub-selves/mental images to a single sub-self by quantum entanglement: the space-time correlate for this could be flux tubes connecting space-time sheets associated with sub-selves, represented also by space-time sheets inside their CD:s. It could be that these "episodal" memories correspond to CD:s at the upper light-cone boundary of the CD.
On basis of these arguments it seems that the basic conceptual framework of TGD inspired theory of consciousness can be realized in zero energy ontology. Interesting questions relate to how dynamical selves are.
1. Is self doomed to live inside the same sub-WCW eternally as a lonely god? This question has already been answered: there are interactions between the sub-CD:s of a given CD, and one can think of selves as quantum superpositions of states in CD:s, with the wave function having as its argument the tips of the CD, or rather only the second one, since T is assumed to be quantized.
2. Is there a largest CD in the personal CD hierarchy of self in an absolute sense? Or is the largest CD present only in the sense that the contribution to the contents of consciousness coming from very large CD:s is negligible? Long time scales T correspond to low frequencies, and thermal noise might indeed mask these contributions very effectively. Here however the hierarchy of Planck constants and the generalization of the imbedding space would come to the rescue by allowing dark EEG photons to have energies above the thermal energy.
3. Can selves evolve in the sense that the size of the CD increases in quantum leaps so that the corresponding time scale T = 2^k T_0 of memory and planned action increases? Geometrically this kind of leap would mean that the CD becomes a sub-CD of a larger CD either at the level of conscious experience or in an absolute sense. This leap can occur in two senses: as an increase of the largest p-adic time scale in the personal hierarchy of space-time sheets or as an increase of the largest value of the Planck constant in the personal dark matter hierarchy. At the level of the individual this would mean the emergence of increasingly lower frequencies of the generalization of EEG and of levels of the dark matter hierarchy with a large value of the Planck constant.
4. In the 2-D illustration the leap leading to a higher level of the self hierarchy would simply mean the continuation of the CD to the right or left. Since the preferred M2 is contained in the tangent space of space-time surfaces, and since the preferred M2 plays a key role in the dark matter hierarchy too, one must ask whether the 2-D illustration might have some deeper truth in it.
3. New view about arrow of time
Perhaps the most fundamental problem related to the notion of time concerns the relationship between experienced time and geometric time. The two notions are definitely different: think only of the irreversibility of experienced time, the reversibility of geometric time, and the absence of the future in experienced time. Also the deterministic character of the dynamics in geometric time is in conflict with the notion of free will supported by direct experience.
In the standard materialistic ontology experienced time and geometric time are identified. In the naivest picture the flow of time is interpreted in terms of the motion of a 3-D time=constant surface of space-time towards the geometric future, without any explanation for why this kind of motion would occur. This identification is plagued by several difficulties. In special relativity the difficulties relate to the impossibility of defining the notion of simultaneity in a unique manner, and the only possible manner to save this notion seems to be the replacement of the time=constant 3-surface with the past directed light-cone assignable to the world-line of the observer. In general relativity additional difficulties are caused by general coordinate invariance unless one generalizes the picture of special relativity: problems are however caused by the fact that past light-cones make sense only locally. In quantum physics quantum measurement theory leads to a paradoxical situation since the observed localization of the state function reduction to a finite space-time volume is in conflict with the determinism of the Schrödinger equation.
TGD forces a new view about the relationship between experienced and geometric time. Although the basic paradox of quantum measurement theory disappears the question about the arrow of geometric time remains.
1. Selves correspond to CD:s and their own sub-WCW:s. These sub-WCW:s and their projections to the imbedding space do not move anywhere. Therefore the standard explanation for the arrow of geometric time cannot work. Neither can the experience about the flow of time correspond to quantum leaps increasing the size of the largest CD contributing to the conscious experience of self.
2. The only plausible interpretation is based on quantum classical correspondence and the fact that space-times are 4-surfaces of the imbedding space. If quantum jump corresponds to a shift of quantum superposition of space-time sheets towards geometric past in the first approximation (as quantum classical correspondence suggests), one can indeed understand the arrow of time. Space-time surfaces simply shift backwards with respect to the geometric time of the imbedding space and therefore to the 8-D perceptive field defined by the CD. This creates in the materialistic mind a kind of temporal variant of train illusion. Space-time as 4-surface and macroscopic and macro-temporal quantum coherence are absolutely essential for this interpretation to make sense.
Why should this shifting always take place in the direction of the geometric past of the imbedding space? What seems clear is that the asymmetric construction of zero energy states should correlate with the preferred direction. If the question is about probabilities, the basic question would be why the probabilities for shifts in the direction of the geometric past are higher. Here some alternative attempts to answer this question are discussed.
1. Cognition and time relate to each other very closely and the required fusion of real physics with various p-adic physics of cognition and intentionality could also have something to do with the asymmetry. Indeed, in the p-adic sectors the transcendental values of p-adic light-cone proper time coordinate correspond to literally infinite values of the real valued light-cone proper time, and one can say that most points of p-adic space-time sheets serving as correlates of thoughts and intentions reside always in the infinite geometric future in the real sense. Therefore cognition and intentionality would break the symmetry between positive and negative energies and geometric past and future, and the breaking of arrow of geometric time could be seen as being induced by intentional action and also due to the basic aspects of cognitive experience.
2. Zero energy ontology suggests also a possible reason for the asymmetry. Standard quantum mechanics encourages the identification of the space of negative energy states as the dual for the space of positive energy states. There are two kinds of duals. Hilbert space dual is identified as the space of continuous linear functionals from Hilbert space to the coefficient field and is isometrically anti-isomorphic with the Hilbert space. This justifies the bra-ket notation. In the case of vector space the relevant notion is algebraic dual. Algebraic dual can be identified as an infinite direct product of the coefficient field identified as a 1-dimensional vector space. Direct product is defined as the set of functions from an infinite index set I to the disjoint union of infinite number of copies of the coefficient field indexed by I. Infinite-dimensional vector space corresponds to infinite direct sum consisting of functions which are non-vanishing for a finite number of indices only. Hence vector space dual in infinite-dimensional case contains much more states than the vector space and does not have enumerable basis.
If negative energy states correspond to a subspace of the vector space dual containing the Hilbert space dual, the number of negative energy states is larger than the number of positive energy states. This asymmetry could correspond to a better measurement resolution at the upper light-cone boundary, so that the state space at the lower light-cone boundary would be included via an inclusion of HFFs into that associated with the upper light-cone boundary. Geometrically this would mean the possibility to glue to the upper light-cone boundary CD:s which can be smaller than those associated with the lower one.
3. The most convincing candidate for an answer comes from consciousness theory. One must understand also why the contents of sensory experience is concentrated around a narrow time interval whereas the time scale of memories and anticipation are much longer. The proposed mechanism is that the resolution of conscious experience is higher at the upper boundary of CD. Since zero energy states correspond to light-like 3-surfaces, this could be a result of self-organization rather than a fundamental physical law.
1. The key assumption is that CDs have CDs inside CDs and that the vertices of generalized Feynman diagrams are contained within sub-CDs. It is not assumed that the sub-CDs are glued to the upper boundary of the CD, since the arrow of time results from self organization when the distribution of sub-CDs concentrates around the upper boundary of the CD. A category theoretical formulation for generalized Feynman diagrammatics based on this picture has been developed.
2. CDs define the perceptive field for the self. Selves are curious about the space-time sheets outside their perceptive field in the geometric future (a relative notion) of the imbedding space and perform quantum jumps tending to shift the superposition of the space-time sheets in the direction of the geometric past (past defined as the direction of shift!). This creates the illusion that there is a time=constant front of consciousness moving towards the geometric future in a fixed background space-time, as an analog of the train illusion.
3. The fact that news come from the upper boundary of CD implies that self concentrates its attention to this region and improves the resolutions of sensory experience and quantum measurement here. The sub-CD:s generated in this manner correspond to mental images with contents about this region. As a consequence, the contents of conscious experience, in particular sensory experience, tend to be about the region near the upper boundary.
4. This mechanism in principle allows the arrow of geometric time to vary and to depend on the p-adic length scale and the level of the dark matter hierarchy. The occurrence of phase transitions forcing the arrow of geometric time to be the same everywhere is however plausible, for the reason that the lower and upper boundaries of a given CD must possess the same arrow of geometric time.
For details see the chapter TGD as a Generalized Number Theory I: p-Adicization Program.
Sunday, September 14, 2008
The most recent vision about zero energy ontology and p-adicization
The generalization of the number concept obtained by fusing reals and p-adics along rationals and common algebraics is the basic philosophy behind p-adicization. This however requires that it is possible to speak about rational points of the imbedding space. The basic objection against the notion of rational points common to the real and the various p-adic variants of the imbedding space is the necessity to fix some special coordinates, in turn implying the loss of manifest general coordinate invariance. The isometries of the imbedding space could save the situation provided one can identify some special coordinate system in which the isometry group reduces to its discrete subgroup. The loss of the full isometry group could be compensated by assuming that WCW is a union over sub-WCW:s obtained by applying isometries on a basic sub-WCW with a discrete subgroup of isometries.
The combination of zero energy ontology, realized in terms of a hierarchy of causal diamonds, with the hierarchy of Planck constants providing a description of dark matter and leading to a generalization of the notion of imbedding space, suggests that it is possible to realize this dream. The article TGD: What Might be the First Principles? provides a brief summary of the recent state of quantum TGD, helping to understand the big picture behind the following considerations.
1. Zero energy ontology briefly
1. The basic construct in the zero energy ontology is the space CD×CP2, where the causal diamond CD is defined as an intersection of future and past directed light-cones with a time-like separation between their tips, regarded as points of the underlying universal Minkowski space M4. In zero energy ontology physical states correspond to pairs of positive and negative energy states located at the boundaries of the future and past directed light-cones of a particular CD. CD:s form a fractal hierarchy and one can glue smaller CD:s within a larger CD along the upper light-cone boundary along a radial light-like ray: this construction recipe allows one to understand the asymmetry between positive and negative energies, why the arrow of experienced time corresponds to the arrow of geometric time, and also why the contents of sensory experience is located in so narrow an interval of geometric time. One can imagine evolution to occur as quantum leaps in which the size of the largest CD in the hierarchy of personal CD:s increases in such a manner that it becomes a sub-CD of a larger CD. The p-adic length scale hypothesis follows if the values of the temporal distance T between the tips of the CD come in powers of two, T = 2^n T_0. All conserved quantum numbers for zero energy states have vanishing net values. The interpretation of zero energy states in the framework of positive energy ontology is as physical events, say scattering events with the positive and negative energy parts of the state interpreted as the initial and final states of the event.
2. In the realization of the hierarchy of Planck constants CD×CP2 is replaced with a Cartesian product of book-like structures formed by almost-copies of CD:s and CP2:s defined by singular coverings and factor spaces of CD and CP2, with singularities corresponding to the intersection M2 ∩ CD and to the homologically trivial geodesic sphere S2 of CP2 for which the induced Kähler form vanishes. The coverings and factor spaces of CD:s are glued together along the common M2 ∩ CD. The coverings and factor spaces of CP2 are glued together along the common homologically non-trivial geodesic sphere S2. The choice of a preferred M2, as a subspace of the tangent space of X4 at all its points and having an interpretation as the space of non-physical polarizations, brings M2 into the theory also in a different manner. S2 in turn defines a subspace of the much larger space of vacuum extremals as surfaces inside M4×S2.
3. Configuration space (the world of classical worlds, WCW) decomposes into a union of sub-WCW:s corresponding to different choices of M2 and S2 and also to different choices of the quantization axes of spin, energy, and color isospin and hyper-charge for each choice of this kind. This means a breaking down of the isometries to a subgroup. This can be compensated by the fact that the union can be taken over the different choices of this subgroup.
4. p-Adicization requires a further breakdown to discrete subgroups of the resulting sub-groups of the isometry groups but again a union over sub-WCW:s corresponding to different choices of the discrete subgroup can be assumed. Discretization relates also naturally to the notion of number theoretic braid.
Consider now the critical questions.
1. Very naively one could think that center of mass wave functions in the union of sectors could give rise to representations of Poincare group. This does not conform with zero energy ontology, where energy-momentum should be assignable to say positive energy part of the state and where these degrees of freedom are expected to be pure gauge degrees of freedom. If zero energy ontology makes sense, then the states in the union over the various copies corresponding to different choices of M2 and S2 would give rise to wave functions having no dynamical meaning. This would bring in nothing new so that one could fix the gauge by choosing preferred M2 and S2 without losing anything. This picture is favored by the interpretation of M2 as the space of longitudinal polarizations.
2. The crucial question is whether it is really possible to speak about zero energy states for a given sector defined by the generalized imbedding space with fixed M2 and S2. Classically this is possible and the conserved quantities are well defined. In the quantal situation the presence of the light-cone boundaries breaks full Poincare invariance although the infinitesimal version of this invariance is preserved. Note that the basic dynamical objects are the 3-D light-like "legs" of the generalized Feynman diagrams.
2. Definition of energy in zero energy ontology
Can one then define the notion of energy for positive and negative energy parts of the state? There are two alternative approaches depending on whether one allows or does not allow wave-functions for the positions of tips of light-cones.
Consider first the naive option, for which four-momenta are assigned to the wave functions associated with the tips of the CD:s.
1. The condition that the tips are at a time-like distance does not allow a separation into a product, but only the following kind of wave function:

Ψ = exp(ip·m) Θ(m²) Θ(m⁰) × Φ(p) , m = m+ − m− .

Here m+ and m− denote the positions of the tips of the light-cones and Θ denotes the step function. Φ denotes the configuration space spinor field in the internal degrees of freedom of the 3-surface. One can introduce also the decomposition into particles by introducing sub-CD:s glued to the upper light-cone boundary of the CD.
2. The first criticism is that only a local eigenstate of the 4-momentum operators p± = ℏ∇/i is in question everywhere except at the boundaries and at the tips of the CD, with exact translational invariance broken by the two step functions having a natural classical interpretation. The second criticism is that the quantization of the temporal distance between the tips to T = 2^k T_0 is in conflict with translational invariance and reduces it to a discrete scaling invariance.
The less naive approach relies on the superconformal structures of quantum TGD, assumes a fixed value of T, and therefore allows the crucial quantization condition T = 2^k T_0.
1. Since the light-like 3-surfaces assignable to the incoming and outgoing legs of the generalized Feynman diagrams are the basic objects, one can hope to have enough translational invariance to define the notion of energy. If translations are restricted to time-like translations acting in the direction of the future (past), then one has a local translation invariance of the dynamics for the classical field equations inside ∂M4± as a kind of semigroup. Also the M4 translations leading to the interior of X4 from the light-like 2-surfaces act as translations. Classically these restrictions correspond to non-tachyonic momenta defining the allowed directions of translations realizable as particle motions. These two kinds of translations have been assigned to super-canonical conformal symmetries at ∂M4±×CP2 and super Kac-Moody type conformal symmetries at light-like 3-surfaces. Equivalence Principle in the TGD framework states that these two conformal symmetries define a structure completely analogous to a coset representation of conformal algebras, so that the four-momenta associated with the two representations are identical.
2. The condition selecting preferred extremals of Kähler action is induced by a global selection of M2 as a plane belonging to the tangent space of X4 at all its points. The M4 translations of X4 as a whole in general respect the form of this condition in the interior. Furthermore, if the M4 translations are restricted to M2, also the condition itself - rather than only its general form - is respected. This observation, the earlier experience with the p-adic mass calculations, and also the treatment of quarks and gluons in QCD encourage one to consider the possibility that translational invariance should be restricted to M2 translations, so that mass squared, longitudinal momentum and transversal mass squared would be well defined quantum numbers. This would be enough to realize zero energy ontology. Encouragingly, M2 appears also in the generalization of the causal diamond to a book-like structure forced by the realization of the hierarchy of Planck constants at the level of the imbedding space.
3. That the cm degrees of freedom for the CD would be gauge-like degrees of freedom sounds strange. The paradoxical feeling disappears as one realizes that this is not the case for sub-CDs, which indeed can have non-trivial correlation functions, with either the upper or the lower tip of the CD playing a role analogous to that of an argument of an n-point function in the QFT description. One can also say that the largest CD in the hierarchy defines the infrared cutoff.
3. p-Adic variants of the imbedding space
Consider now the construction of p-adic variants of the imbedding space.
1. Rational values of p-adic coordinates are non-negative, so that the light-cone proper time a4,+ = √(t² − z² − x² − y²) is the unique Lorentz invariant choice for the p-adic time coordinate near the lower tip of the CD. For the upper tip the identification of a4 would be a4,− = √((t−T)² − z² − x² − y²). In the p-adic context the simultaneous existence of both square roots would pose additional conditions on T. For 2-adic numbers T = 2^n T_0, n ≥ 0 (or more generally T = Σ_{k ≥ n0} b_k 2^k), would allow one to satisfy these conditions, and this would be one additional reason for T = 2^n T_0, implying the p-adic length scale hypothesis. The remaining coordinates of the CD are naturally hyperbolic cosines and sines of the hyperbolic angle η±,4 and cosines and sines of the spherical coordinates θ and φ.
2. The existence of the preferred plane M2 of un-physical polarizations would suggest that the 2-D light-cone proper times a2,+ = √(t² − z²) and a2,− = √((t−T)² − z²) can also be considered. The remaining coordinates would naturally be η±,2 and the cylindrical coordinates (ρ, φ).
3. The transcendental values of a4 and a2 are literally infinite as real numbers and could be visualized as points in the infinitely distant geometric future, so that the arrow of time might be said to emerge number theoretically. For the M2 option p-adic transcendental values of ρ are infinite as real numbers, so that also spatial infinity could be said to emerge p-adically.
4. The selection of the preferred quantization axes of energy and angular momentum unique apart from a Lorentz transformation of M2 would have purely number theoretic meaning in both cases. One must allow a union over sub-WCWs labeled by points of SO(1,1). This suggests a deep connection between number theory, quantum theory, quantum measurement theory, and even quantum theory of mathematical consciousness.
5. In the case of CP2 there are three real coordinate patches involved. The compactness of CP2 allows one to use cosines and sines of the preferred angle variable for a given coordinate patch:
ξ1 = tan(u) × cos(Θ/2) × exp(i(Ψ+Φ)/2) ,
ξ2 = tan(u) × sin(Θ/2) × exp(i(Ψ−Φ)/2) .

The ranges of the variables u, Θ, Φ, Ψ are [0,π/2], [0,π], [0,4π], [0,2π] respectively. Note that u naturally has only positive values in the allowed range. S2 corresponds to the values Φ = Ψ = 0 of the angle coordinates.
6. The rational values of the (hyperbolic) cosine and sine correspond to Pythagorean triangles having sides of integer length and thus satisfying m² = n² + r² (m² = n² − r²). These conditions are equivalent and allow the well-known explicit solution (see the sketch after this list). One can construct a p-adic completion for the set of Pythagorean triangles by allowing p-adic integers which are infinite as real integers as solutions of the conditions m² = r² ± s². These angles correspond to genuinely p-adic directions having no real counterpart. Hence one obtains a p-adic continuum also in the angle degrees of freedom. Algebraic extensions of the p-adic numbers bringing in cosines and sines of the angles π/n lead to a hierarchy of increasingly refined algebraic extensions of the generalized imbedding space. Since the different sectors of WCW directly correspond to correlates of selves, this means a direct correlation with the evolution of mathematical consciousness. Trigonometric identities allow one to construct points which in the real context correspond to sums and differences of angles.
7. Negative rational values of the cosines and sines correspond as p-adic integers to infinite real numbers, and it seems that one must use several coordinate patches obtained as copies of the octant (x ≥ 0, y ≥ 0, z ≥ 0). An analogous picture applies in CP2 degrees of freedom.
8. The expression of the metric tensor and spinor connection of the imbedding space in the proposed coordinates makes sense as p-adic numbers in the algebraic extension considered. The induction of the metric and spinor connection and curvature makes sense provided that the gradients of the coordinates with respect to the internal coordinates of the space-time surface belong to the extension. The most natural choice of the space-time coordinates is as a subset of imbedding space coordinates in a given coordinate patch. If the remaining imbedding space coordinates can be chosen to be rational functions of these preferred coordinates, with coefficients in the algebraic extension of p-adic numbers considered, for the preferred extremals of Kähler action, then also the gradients satisfy this condition. This is a highly non-trivial condition on the extremals and, if it works, might fix completely the space of exact solutions of the field equations. Since space-time surfaces are also conjectured to be hyper-quaternionic, this condition might relate to the simultaneous hyper-quaternionicity and Kähler extremal property. Note also that this picture would provide a partial explanation for the decomposition of the imbedding space into sectors dictated also by quantum measurement theory and the hierarchy of Planck constants.
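For item 6, Euclid's classical parametrization makes the "well-known explicit solution" concrete: every primitive Pythagorean triple arises as (m² − n², 2mn, m² + n²), giving angles with rational cosine and sine. The following sketch just enumerates a few of them; it illustrates the standard number-theoretic fact, not any TGD-specific construction.

```python
from math import gcd

def primitive_triples(max_m=6):
    """Primitive Pythagorean triples (a, b, c) with a^2 + b^2 = c^2,
    generated by Euclid's formula a = m^2 - n^2, b = 2mn, c = m^2 + n^2
    for coprime m > n of opposite parity."""
    for m in range(2, max_m + 1):
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:
                yield m*m - n*n, 2*m*n, m*m + n*n

# Each triple defines an angle whose cosine and sine are both rational,
# i.e. a point common to the real and p-adic angle degrees of freedom.
for a, b, c in primitive_triples():
    print(f"({a}, {b}, {c}):  cos = {a}/{c}, sin = {b}/{c}")
```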
4. p-Adic variants for the sectors of WCW
One can also wonder about the most general definition of the p-adic variants of the sectors of the world of classical worlds.
1. The restriction to surfaces expressible in terms of rational functions with coefficients which are rational numbers or belong to an algebraic extension of rationals means that the world of classical worlds can be regarded as a discrete set, and there would be no difference between the real and p-adic worlds of classical worlds: a rather unexpected conclusion.
2. One can of course ask whether one should perform a completion also for the WCWs. In the real context this would mean a completion of the rational number valued coefficients of a rational function to arbitrary real coefficients, and perhaps also the allowance of Taylor and Laurent series as limits of rational functions. In the p-adic case the integers defining the rationals could be allowed to become p-adic transcendentals, infinite as real numbers. Now also Laurent series could be considered.
3. In this picture there would be a close analogy between the structure of the generalized imbedding space and WCW. Different WCW:s could be said to intersect in the space formed by rational functions with coefficients in an algebraic extension of rationals, just as the real and p-adic variants of the imbedding space intersect along rational points. In the spirit of algebraic completion one might hope that the expressions for the various physical quantities, say the value of Kähler action, the Kähler function, or at least the exponent of the Kähler function (at least for the maxima of the Kähler function), could be defined by analytic continuation of their values from these sub-WCWs to the various number fields. The matrix elements for p-adic-to-real phase transitions of zero energy states interpreted as intentional actions could be calculated in the intersection of the real and p-adic WCW:s by interpreting everything as real.
Wednesday, September 03, 2008
Quantum dot
A quantum dot is a nanocrystal made of semiconductor material that is small enough to exhibit quantum mechanical properties. Specifically, its excitons are confined in all three spatial dimensions. The electronic properties of these materials are intermediate between those of bulk semiconductors and of discrete molecules.[1][2][3] Quantum dots were discovered in a glass matrix by Alexei Ekimov and in colloidal solutions by Louis E. Brus. The term "quantum dot" was coined by Mark Reed.[4]
Researchers have studied applications for quantum dots in transistors, solar cells, LEDs, and diode lasers. They have also investigated quantum dots as agents for medical imaging and as possible qubits in quantum computing. The first commercial release of a product utilizing quantum dots was the Sony XBR X900A series of flat panel televisions released in 2013.[5]
Electronic characteristics of a quantum dot are closely related to its size and shape. For example, the band gap in a quantum dot, which determines the frequency range of emitted light, is inversely related to its size. In fluorescent dye applications the frequency of emitted light increases as the size of the quantum dot decreases. Consequently, the color of emitted light shifts from red to blue when the size of the quantum dot is made smaller.[6] This allows the excitation and emission of quantum dots to be highly tunable. Since the size of a quantum dot may be set when it is made, its conductive properties may be carefully controlled. Quantum dot assemblies consisting of many different sizes, such as gradient multi-layer nanofilms, can be made to exhibit a range of desirable emission properties.
Quantum confinement in semiconductors
3D confined electron wave functions in a quantum dot. Here, rectangular and triangular-shaped quantum dots are shown. Energy states in rectangular dots are more s-type and p-type. However, in a triangular dot the wave functions are mixed due to confinement symmetry.
In a semiconductor crystallite whose diameter is smaller than the size of its exciton Bohr radius, the excitons are squeezed, leading to quantum confinement. The energy levels can then be modeled using the particle in a box model in which the energy of different states is dependent on the length of the box. Quantum dots are said to be in the 'weak confinement regime' if their radii are on the order of the exciton Bohr radius; quantum dots are said to be in the 'strong confinement regime' if their radii are smaller than the exciton Bohr radius. If the size of the quantum dot is small enough that the quantum confinement effects dominate (typically less than 10 nm), the electronic and optical properties are highly tunable.
Splitting of energy levels for small quantum dots due to the quantum confinement effect. The horizontal axis is the radius, or the size, of the quantum dots and ab* is the exciton Bohr radius.
Fluorescence occurs when an excited electron relaxes to the ground state and combines with the hole. In a simplified model, the energy of the emitted photon can be understood as the sum of the band gap energy between the occupied level and the unoccupied energy level, the confinement energies of the hole and the excited electron, and the bound energy of the exciton (the electron-hole pair):
Figure: a simplified representation showing the excited electron and the hole in an exciton entity and the corresponding energy levels. The total energy involved can be seen as the sum of the band gap energy, the energy involved in the Coulomb attraction in the exciton, and the confinement energies of the excited electron and the hole.
Band gap energy
The band gap can become larger in the strong confinement regime where the size of the quantum dot is smaller than the Exciton Bohr radius ab* as the energy levels split up.
ab* = εr (m/μ) ab

where ab = 0.053 nm is the Bohr radius, m is the mass, μ is the reduced mass, and εr is the size-dependent dielectric constant.
This results in an increase of the total emission energy (the sum of the band gap and confinement energies is larger in the strong confinement regime than in the weak confinement regime) and in emission at a wavelength set by the dot size, so that an ensemble of dots with a distribution of sizes emits over a correspondingly wide range of wavelengths.
Confinement energy
The exciton can be modeled using the particle in a box. The electron and the hole can be seen as a hydrogen-like system in the Bohr model, with the hydrogen nucleus replaced by the positively charged hole. The energy levels of the exciton can then be represented as the solution of the particle in a box at the ground level (n = 1) with the mass replaced by the reduced mass. Thus by varying the size of the quantum dot, the confinement energy of the exciton can be controlled.
Bound exciton energy
There is Coulomb attraction between the negatively charged electron and the positively charged hole. The negative energy involved in the attraction is proportional to Rydberg's energy and inversely proportional to the square of the size-dependent dielectric constant[7] of the semiconductor. When the size of the semiconductor crystal is smaller than the exciton Bohr radius, the Coulomb interaction must be modified to fit the situation.
Therefore, the sum of these energies can be represented as:
Econfinement = ℏ²π²/(2a²) (1/me + 1/mh) = ℏ²π²/(2μa²)
Eexciton = −(1/εr²)(μ/me) Ry = −Ry*
E = Eband gap + Econfinement + Eexciton = Eband gap + ℏ²π²/(2μa²) − Ry*

where μ is the reduced mass, a is the radius, me is the free electron mass, mh is the hole mass, and εr is the size-dependent dielectric constant.
Although the above equations were derived using simplifying assumptions, the implications are clear: the energy of quantum dots is dependent on their size due to the quantum confinement effects, which dominate below the critical size, leading to changes in the optical properties. This effect of quantum confinement on the quantum dots has been experimentally verified[8] and is a key feature of many emerging electronic structures.[9][10]
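As a numerical illustration of the equations above, the following sketch evaluates the emission energy as a function of dot radius. The CdSe-like material parameters (band gap, effective masses, dielectric constant) are approximate literature values assumed here for illustration; they are not taken from this article.

```python
import math

HBAR = 1.0545718e-34   # J*s
M0 = 9.1093837e-31     # free electron mass, kg
EV = 1.6021766e-19     # J per eV

# Approximate CdSe-like parameters (assumptions for this sketch):
E_gap = 1.74                      # bulk band gap, eV
m_e, m_h = 0.13 * M0, 0.45 * M0   # electron and hole effective masses
mu = m_e * m_h / (m_e + m_h)      # reduced mass
eps_r = 10.6                      # dielectric constant
R_y_star = 13.606 * (mu / M0) / eps_r**2   # exciton Rydberg R_y*, eV

# Exciton Bohr radius a_b* = eps_r * (m0/mu) * a_b, with a_b = 0.053 nm:
print(f"a_b* ~ {eps_r * (M0 / mu) * 0.053:.1f} nm")

# E = E_gap + hbar^2 pi^2 / (2 mu a^2) - R_y* for a few radii a:
for a_nm in (2.0, 3.0, 5.0):
    a = a_nm * 1e-9
    E_conf = (HBAR * math.pi)**2 / (2 * mu * a**2) / EV
    E = E_gap + E_conf - R_y_star
    print(f"a = {a_nm} nm: E ~ {E:.2f} eV (~{1239.8 / E:.0f} nm)")
# Smaller dots give larger confinement energy and bluer emission,
# matching the size-color trend described in the text.
```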
Besides confinement in all three dimensions (i.e., a quantum dot), other quantum confined semiconductors include:
• Quantum wires, which confine electrons or holes in two spatial dimensions and allow free propagation in the third.
• Quantum wells, which confine electrons or holes in one dimension and allow free propagation in two dimensions.
Quantum dots with gradually stepping emission from violet to deep red are being produced at kg scale at PlasmaChem GmbH.
There are several ways to confine excitons in semiconductors, resulting in different methods to produce quantum dots. In general, quantum wires, wells, and dots are grown by advanced epitaxial techniques, in nanocrystals produced by chemical methods or by ion implantation, or in nanodevices made by state-of-the-art lithographic techniques.[11]
Colloidal synthesis
Colloidal semiconductor nanocrystals are synthesized from precursor compounds dissolved in solutions, much like traditional chemical processes. The synthesis of colloidal quantum dots is done by using precursors,[3] organic surfactants,[12] and solvents. When the solution is heated to a high temperature, the precursors decompose, forming monomers which then nucleate and generate nanocrystals. The temperature during the synthetic process is a critical factor in determining optimal conditions for the nanocrystal growth. It must be high enough to allow for rearrangement and annealing of atoms during the synthesis process while being low enough to promote crystal growth. The concentration of monomers is another critical factor that has to be stringently controlled during nanocrystal growth. The growth process of nanocrystals can occur in two different regimes, "focusing" and "defocusing". At high monomer concentrations, the critical size (the size at which nanocrystals neither grow nor shrink) is relatively small, resulting in growth of nearly all particles. In this regime, smaller particles grow faster than large ones (since larger crystals need more atoms to grow than small crystals), resulting in "focusing" of the size distribution to yield nearly monodisperse particles. The size focusing is optimal when the monomer concentration is kept such that the average nanocrystal size present is always slightly larger than the critical size. Over time, the monomer concentration diminishes, the critical size becomes larger than the average size present, and the distribution "defocuses".
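The focusing mechanism can be illustrated with a toy simulation. The size-dependent rate law dr/dt = (k/r)(1/r_crit − 1/r) used below is a standard textbook form for diffusion-limited growth, adopted here purely as a modeling assumption; the parameter values are arbitrary.

```python
import random

def step(radii, r_crit=1.0, k=0.05):
    """One growth step with rate dr/dt = (k/r)*(1/r_crit - 1/r).
    For radii above 2*r_crit the rate decreases with size, so smaller
    particles grow faster and the distribution narrows ("focusing")."""
    return [r + (k / r) * (1.0 / r_crit - 1.0 / r) for r in radii]

def mean_std(radii):
    mean = sum(radii) / len(radii)
    var = sum((r - mean) ** 2 for r in radii) / len(radii)
    return mean, var ** 0.5

rng = random.Random(0)
radii = [rng.uniform(2.2, 3.4) for _ in range(1000)]  # all above 2*r_crit

print("initial mean = %.2f, std = %.3f" % mean_std(radii))
for _ in range(400):
    radii = step(radii)
print("final   mean = %.2f, std = %.3f" % mean_std(radii))
# The standard deviation shrinks while the mean grows: size focusing.
# Raising r_crit above the mean radius would instead make the particles
# shrink and the distribution broaden, the "defocusing" regime above.
```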
There are colloidal methods to produce many different semiconductors. Typical dots are made of binary compounds such as lead sulfide, lead selenide, cadmium selenide, cadmium sulfide, indium arsenide, and indium phosphide. Dots may also be made from ternary compounds such as cadmium selenide sulfide. These quantum dots can contain as few as 100 to 100,000 atoms within the quantum dot volume, with a diameter of 10 to 50 atoms. This corresponds to about 2 to 10 nanometers, and at 10 nm in diameter, nearly 3 million quantum dots could be lined up end to end and fit within the width of a human thumb.
Colloidal nanoparticle of lead sulfide (selenide) with complete passivation by oleic acid, oleyl and hydroxyl (size ~5nm)
Large batches of quantum dots may be synthesized via colloidal synthesis. Due to this scalability and the convenience of benchtop conditions, colloidal synthetic methods are promising for commercial applications. It is acknowledged[citation needed] to be the least toxic of all the different forms of synthesis.
• Self-assembled quantum dots are typically between 5 and 50 nm in size. Quantum dots defined by lithographically patterned gate electrodes, or by etching on two-dimensional electron gases in semiconductor heterostructures can have lateral dimensions exceeding 100 nm.
• Some quantum dots are small regions of one material buried in another with a larger band gap. These can be so-called core–shell structures, e.g., with CdSe in the core and ZnS in the shell or from special forms of silica called ormosil.
• Quantum dots sometimes occur spontaneously in quantum well structures due to monolayer fluctuations in the well's thickness.
• Self-assembled quantum dots nucleate spontaneously under certain conditions during molecular beam epitaxy (MBE) and metallorganic vapor phase epitaxy (MOVPE), when a material is grown on a substrate to which it is not lattice matched. The resulting strain produces coherently strained islands on top of a two-dimensional wetting layer. This growth mode is known as Stranski–Krastanov growth. The islands can be subsequently buried to form the quantum dot. This fabrication method has potential for applications in quantum cryptography (i.e. single photon sources) and quantum computation. The main limitations of this method are the cost of fabrication and the lack of control over positioning of individual dots.
The quantum dot absorption features correspond to transitions between discrete, three-dimensional particle in a box states of the electron and the hole, both confined to the same nanometer-size box. These discrete transitions are reminiscent of atomic spectra and have resulted in quantum dots also being called artificial atoms.[13]
• CMOS technology can be employed to fabricate silicon quantum dots. Ultra small (L=20 nm, W=20 nm) CMOS transistors behave as single electron quantum dots when operated at cryogenic temperature over a range of −269 °C (4 K) to about −258 °C (15 K). The transistor displays Coulomb blockade due to progressive charging of electrons one by one. The number of electrons confined in the channel is driven by the gate voltage, starting from an occupation of zero electrons, and it can be set to 1 or many.[14]
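An order-of-magnitude estimate shows why the Coulomb blockade in such a ~20 nm dot appears only at cryogenic temperatures. Modeling the dot as a sphere with self-capacitance C = 4πε0εr r is an assumption of this sketch, made only to set the scale; the numbers are not from the cited work.

```python
import math

E_CHARGE = 1.602e-19   # elementary charge, C
EPS0 = 8.854e-12       # vacuum permittivity, F/m
K_B = 1.381e-23        # Boltzmann constant, J/K

eps_r = 11.7           # silicon dielectric constant (approximate)
r = 10e-9              # dot radius, roughly half the 20 nm channel

C = 4 * math.pi * EPS0 * eps_r * r      # self-capacitance of a sphere
E_c = E_CHARGE**2 / (2 * C)             # single-electron charging energy
print(f"C ~ {C:.2e} F, E_c ~ {1000 * E_c / E_CHARGE:.1f} meV")
print(f"E_c / k_B ~ {E_c / K_B:.0f} K")
# A charging energy of a few meV (tens of kelvin) must exceed k_B*T for
# electrons to be added one by one, consistent with the 4-15 K operating
# range quoted above.
```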
Viral assembly
Lee et al. (2002) reported using genetically engineered M13 bacteriophage viruses to create quantum dot biocomposite structures.[15] As a background to this work, it has previously been shown that genetically engineered viruses can recognize specific semiconductor surfaces through the method of selection by combinatorial phage display.[16] Additionally, it is known that liquid crystalline structures of wild-type viruses (Fd, M13, and TMV) are adjustable by controlling the solution concentrations, solution ionic strength, and the external magnetic field applied to the solutions. Consequently, the specific recognition properties of the virus can be used to organize inorganic nanocrystals, forming ordered arrays over the length scale defined by liquid crystal formation. Using this information, Lee et al. (2000) were able to create self-assembled, highly oriented, self-supporting films from a phage and ZnS precursor solution. This system allowed them to vary both the length of bacteriophage and the type of inorganic material through genetic modification and selection.
Electrochemical assembly
Highly ordered arrays of quantum dots may also be self-assembled by electrochemical techniques. A template is created by causing an ionic reaction at an electrolyte-metal interface which results in the spontaneous assembly of nanostructures, including quantum dots, onto the metal which is then used as a mask for mesa-etching these nanostructures on a chosen substrate.
Quantum dot manufacturing relies on a process called "high temperature dual injection" which has been scaled by multiple companies for commercial applications that require large quantities (hundreds of kilograms to tonnes) of quantum dots. This is a reproducible production method that can be applied to a wide range of quantum dot sizes and compositions.
The bonding in certain cadmium-free quantum dots, such as III-V-based quantum dots, is more covalent than that in II-VI materials, therefore it is more difficult to separate nanoparticle nucleation and growth via a high temperature dual injection synthesis. An alternative method of quantum dot synthesis, the “molecular seeding” process, provides a reproducible route to the production of high quality quantum dots in large volumes. The process utilises identical molecules of a molecular cluster compound as the nucleation sites for nanoparticle growth, thus avoiding the need for a high temperature injection step. Particle growth is maintained by the periodic addition of precursors at moderate temperatures until the desired particle size is reached.[17] The molecular seeding process is not limited to the production of cadmium-free quantum dots; for example, the process can be used to synthesise kilogram batches of high quality II-VI quantum dots in just a few hours.
Another approach for the mass production of colloidal quantum dots can be seen in the transfer of the well-known hot-injection methodology for the synthesis to a technical continuous flow system. The batch-to-batch variations arising from the needs of the batch methodology can be overcome by utilizing technical components for mixing and growth as well as for transport and temperature adjustment. For the production of CdSe based semiconductor nanoparticles this method has been investigated and tuned to production amounts of kg per month. Since the use of technical components allows for easy interchange with regard to maximum throughput and size, it can be further enhanced to tens or even hundreds of kilograms.[18]
Recently a consortium of U.S. and Dutch companies reported a "milestone" in high-volume quantum dot manufacturing by applying the traditional high temperature dual injection method to a flow system.[19] However, as of 2011, applications using bulk-manufactured quantum dots were scarcely available.[20]
Heavy metal-free quantum dots
In many regions of the world there is now a restriction or ban on the use of heavy metals in many household goods, which means that most cadmium-based quantum dots are unusable for consumer-goods applications.
For commercial viability, a range of quantum dots free of restricted heavy metals has been developed, showing bright emission in the visible and near-infrared region of the spectrum and optical properties similar to those of CdSe quantum dots. Among these systems are InP/ZnS and CuInS/ZnS, for example.
Cadmium and other restricted heavy metals used in conventional quantum dots are of major concern in commercial applications. For quantum dots to be commercially viable in many applications, they must not contain cadmium or other restricted heavy metal elements.[21]
Peptides are being researched as potential quantum dot material.[22] Since peptides occur naturally in all organisms, such dots would likely be nontoxic and easily biodegraded.
Environmental impact
The environmental impact of bulk manufacturing and consumption of quantum dots is currently being studied in both private and public labs.[citation needed]
Optical properties
Fluorescence spectra of CdTe quantum dots of various sizes. Different sized quantum dots emit different color light due to quantum confinement.
An immediate optical feature of colloidal quantum dots is their color. While the material which makes up a quantum dot defines its intrinsic energy signature, the nanocrystal's quantum confined size is more significant at energies near the band gap. Thus quantum dots of the same material, but with different sizes, can emit light of different colors. The physical reason is the quantum confinement effect.
The larger the dot, the redder (lower energy) its fluorescence spectrum. Conversely, smaller dots emit bluer (higher energy) light. The coloration is directly related to the energy levels of the quantum dot. Quantitatively speaking, the bandgap energy that determines the energy (and hence color) of the fluorescent light increases as the size of the quantum dot decreases; in the simplest particle-in-a-box picture, the confinement contribution to the gap scales approximately as the inverse square of the dot diameter. Larger quantum dots have more energy levels which are also more closely spaced. This allows the quantum dot to absorb photons containing less energy, i.e., those closer to the red end of the spectrum. Recent articles in Nanotechnology and in other journals have begun to suggest that the shape of the quantum dot may be a factor in the coloration as well, but as yet not enough information is available. Furthermore, it was shown[23] that the lifetime of fluorescence is determined by the size of the quantum dot. Larger dots have more closely spaced energy levels in which the electron-hole pair can be trapped. Therefore, electron-hole pairs in larger dots live longer, causing larger dots to show a longer lifetime.
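This size dependence can be made concrete with the Brus effective-mass approximation, which adds an inverse-square confinement term and a smaller Coulomb correction to the bulk band gap. The sketch below is a rough estimate only; the CdSe parameter values are typical literature figures and should be treated as assumptions rather than a fitted model.

```python
import math

# Physical constants (SI)
HBAR = 1.054571817e-34      # J*s
M_E = 9.1093837015e-31      # electron mass, kg
E_CHARGE = 1.602176634e-19  # C
EPS0 = 8.8541878128e-12     # F/m
C_LIGHT = 2.998e8           # m/s

# Assumed CdSe parameters (typical literature values, not fitted)
E_GAP_BULK = 1.74 * E_CHARGE   # bulk band gap, J
M_EFF_ELECTRON = 0.13 * M_E    # effective electron mass
M_EFF_HOLE = 0.45 * M_E        # effective hole mass
EPS_R = 9.5                    # relative dielectric constant

def brus_emission_energy(radius_m):
    """Brus-equation estimate of the lowest exciton energy for a dot of given radius."""
    confinement = (HBAR * math.pi) ** 2 / (2 * radius_m ** 2) * (
        1 / M_EFF_ELECTRON + 1 / M_EFF_HOLE)
    coulomb = 1.8 * E_CHARGE ** 2 / (4 * math.pi * EPS0 * EPS_R * radius_m)
    return E_GAP_BULK + confinement - coulomb

for r_nm in (1.5, 2.0, 2.5, 3.0, 4.0):
    e = brus_emission_energy(r_nm * 1e-9)
    wavelength_nm = 1e9 * (2 * math.pi * HBAR * C_LIGHT) / e  # lambda = hc/E
    print(f"R = {r_nm} nm -> E = {e / E_CHARGE:.2f} eV, lambda = {wavelength_nm:.0f} nm")
```

Smaller radii give larger emission energies (bluer light), reproducing the trend described above.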
As with any crystalline semiconductor, a quantum dot's electronic wave functions extend over the crystal lattice. Similar to a molecule, a quantum dot has both a quantized energy spectrum and a quantized density of electronic states near the edge of the band gap.
Quantum dots can be synthesized with larger (thicker) shells (for example, CdSe quantum dots with CdS shells). The shell thickness has been shown to correlate directly with spectroscopic properties of the particles, such as lifetime and emission intensity, as well as with their stability.
Quantum dots are particularly significant for optical applications due to their high extinction coefficient.[24] In electronic applications they have been proven to operate like a single electron transistor and show the Coulomb blockade effect. Quantum dots have also been suggested as implementations of qubits for quantum information processing.
The ability to tune the size of quantum dots is advantageous for many applications. For instance, larger quantum dots have a greater spectrum-shift towards red compared to smaller dots, and exhibit less pronounced quantum properties. Conversely, the smaller particles allow one to take advantage of more subtle quantum effects.
Researchers at Los Alamos National Laboratory have developed a device that efficiently produces visible light, through energy transfer from thin layers of quantum wells to crystals above the layers.[25]
Being zero-dimensional, quantum dots have a sharper density of states than higher-dimensional structures. As a result, they have superior transport and optical properties, and are being researched for use in diode lasers, amplifiers, and biological sensors. Quantum dots may be excited within a locally enhanced electromagnetic field produced by gold nanoparticles, which can then be observed from the surface plasmon resonance in the photoluminescent excitation spectrum of (CdSe)ZnS nanocrystals. High-quality quantum dots are well suited for optical encoding and multiplexing applications due to their broad excitation profiles and narrow/symmetric emission spectra. The new generations of quantum dots have far-reaching potential for the study of intracellular processes at the single-molecule level, high-resolution cellular imaging, long-term in vivo observation of cell trafficking, tumor targeting, and diagnostics.
Quantum dot technology is one of the most promising candidates for use in solid-state quantum computation. By applying small voltages to the leads, the flow of electrons through the quantum dot can be controlled and thereby precise measurements of the spin and other properties therein can be made. With several entangled quantum dots, or qubits, plus a way of performing operations, quantum calculations and the computers that would perform them might be possible.
In modern biological analysis, various kinds of organic dyes are used. However, with each passing year, more flexibility is required of these dyes, and the traditional dyes are often unable to meet the expectations.[26] To this end, quantum dots have quickly stepped into the role, being found to be superior to traditional organic dyes on several counts, one of the most immediately obvious being brightness (owing to the high extinction coefficient combined with a quantum yield comparable to that of fluorescent dyes[27]), as well as their stability (allowing much less photobleaching). It has been estimated that quantum dots are 20 times brighter and 100 times more stable than traditional fluorescent reporters.[26] For single-particle tracking, the irregular blinking of quantum dots is a minor drawback.
The usage of quantum dots for highly sensitive cellular imaging has seen major advances over the past decade.[28] The improved photostability of quantum dots, for example, allows the acquisition of many consecutive focal-plane images that can be reconstructed into a high-resolution three-dimensional image.[29] Another application that takes advantage of the extraordinary photostability of quantum dot probes is the real-time tracking of molecules and cells over extended periods of time.[30] Antibodies, streptavidin,[31] peptides,[32] DNA,[33] nucleic acid aptamers,[34] or small-molecule ligands[12] can be used to target quantum dots to specific proteins on cells. Researchers were able to observe quantum dots in lymph nodes of mice for more than 4 months.[35]
Semiconductor quantum dots have also been employed for in vitro imaging of pre-labeled cells. The ability to image single-cell migration in real time is expected to be important to several research areas such as embryogenesis, cancer metastasis, stem cell therapeutics, and lymphocyte immunology.
Scientists have proven that quantum dots are dramatically better than existing methods for delivering a gene-silencing tool, known as siRNA, into cells.[36]
First attempts have been made to use quantum dots for tumor targeting under in vivo conditions. There exist two basic targeting schemes: active targeting and passive targeting. In the case of active targeting, quantum dots are functionalized with tumor-specific binding sites to selectively bind to tumor cells. Passive targeting uses the enhanced permeability and retention of tumor tissue for the delivery of quantum dot probes. Fast-growing tumor cells typically have more permeable membranes than healthy cells, allowing the leakage of small nanoparticles into the cell body. Moreover, tumor cells lack an effective lymphatic drainage system, which leads to subsequent nanoparticle accumulation.
One of the remaining issues with quantum dot probes is their potential in vivo toxicity. For example, CdSe nanocrystals are highly toxic to cultured cells under UV illumination. The energy of UV irradiation is close to that of the covalent chemical bond energy of CdSe nanocrystals. As a result, semiconductor particles can be dissolved, in a process known as photolysis, releasing toxic cadmium ions into the culture medium. In the absence of UV irradiation, however, quantum dots with a stable polymer coating have been found to be essentially nontoxic.[35][37] Hydrogel encapsulation of quantum dots allows quantum dots to be introduced into a stable aqueous solution, reducing the possibility of cadmium leakage. However, little is known about the excretion process of quantum dots from living organisms.[38] These and other questions must be carefully examined before quantum dot applications in tumor or vascular imaging can be approved for human clinical use.
Another potential cutting-edge application of quantum dots is being researched, with quantum dots acting as the inorganic fluorophore for intra-operative detection of tumors using fluorescence spectroscopy.
Delivery of undamaged quantum dots to the cell cytoplasm has been a challenge with existing techniques. Vector-based methods have resulted in aggregation and endosomal sequestration of quantum dots, while electroporation can damage the semiconducting particles and aggregate delivered dots in the cytosol. Cell squeezing, a method invented in 2013 by Armon Sharei, Robert Langer and Klavs Jensen at MIT, has demonstrated efficient cytosolic delivery of quantum dots without inducing aggregation, trapping material in endosomes, or significant loss of cell viability. Moreover, it has been shown that individual quantum dots delivered by this approach are detectable in the cell cytosol, illustrating the potential of this technique for single-molecule tracking studies. These results indicate that cell squeezing could potentially be implemented as a robust platform for quantum dot based imaging in a variety of applications.[39]
Photovoltaic devices
Quantum dots may be able to increase the efficiency and reduce the cost of today's typical silicon photovoltaic cells. According to an experimental report from 2004,[40] quantum dots of lead selenide can produce more than one exciton from one high-energy photon via the process of carrier multiplication or multiple exciton generation (MEG). This compares favorably to today's photovoltaic cells, which can only manage one exciton per high-energy photon, with high kinetic energy carriers losing their energy as heat. Quantum dot photovoltaics would theoretically be cheaper to manufacture, as they can be made "using simple chemical reactions."
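To see why MEG matters, consider the idealized energy-conservation limit in which every whole multiple of the band gap carried by a photon becomes an additional exciton. Real MEG yields are well below this staircase limit, so the sketch below is an upper bound for illustration only; the 0.3 eV gap is an assumed PbSe-like value, not a measured dot gap.

```python
def ideal_exciton_yield(photon_ev: float, gap_ev: float) -> int:
    """Upper-bound exciton count in the idealized staircase picture of MEG."""
    if photon_ev < gap_ev:
        return 0
    # small epsilon guards against floating-point issues at exact multiples
    return int(photon_ev / gap_ev + 1e-9)

# Assumed PbSe-like effective gap of 0.3 eV (illustrative only)
for e_photon in (0.2, 0.5, 0.95, 1.55):
    n = ideal_exciton_yield(e_photon, 0.3)
    print(f"{e_photon:.2f} eV photon -> at most {n} exciton(s)")
```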
Light emitting devices
There are several inquiries into using quantum dots as light-emitting diodes to make displays and other light sources, such as "QD-LED" displays and "QD-WLED" (White LED) devices. In June 2006, QD Vision announced technical success in making a proof-of-concept quantum dot display that showed bright emission in the visible and near-infrared region of the spectrum. Quantum dots are valued for displays because they emit light in very specific Gaussian distributions. This can result in a display that more accurately renders the colors that the human eye can perceive. Quantum dots also require very little power since they are not color filtered. Additionally, since the discovery of "white-light emitting" QD, general solid-state lighting applications appear closer than ever.[41] A color liquid crystal display (LCD), for example, is usually backlit by fluorescent lamps (CCFLs) or conventional white LEDs that are color filtered to produce red, green, and blue pixels. A better solution is to use a conventional blue-emitting LED as the light source and convert part of the emitted light into pure green and red light with appropriate quantum dots placed in front of the blue LED. This type of white light as the backlight of an LCD panel allows for the best color gamut at lower cost than an RGB LED combination using three LEDs.
Quantum dot displays that intrinsically produce monochromatic light can be more efficient, since more of the light produced reaches the eye. QD-LEDs can be fabricated on a silicon substrate, which allows integration of light sources onto silicon-based integrated circuits or microelectromechanical systems.[42] A QD-LED integrated at a scanning microscopy tip was used to demonstrate fluorescence near-field scanning optical microscopy (NSOM) imaging.[43]
Photodetector devices
Quantum dot photodetectors (QDPs) can be fabricated either via solution-processing,[44] or from conventional single-crystalline semiconductors.[45] Conventional single-crystalline semiconductor QDPs are precluded from integration with flexible organic electronics due to the incompatibility of their growth conditions with the process windows required by organic semiconductors. On the other hand, solution-processed QDPs can be readily integrated with an almost infinite variety of substrates, and also postprocessed atop other integrated circuits. Such colloidal QDPs have potential applications in surveillance, machine vision, industrial inspection, spectroscopy, and fluorescent biomedical imaging.
Theoretical models
A variety of theoretical frameworks exist to model optical, electronic, and structural properties of quantum dots. These may be broadly divided into quantum mechanical, semiclassical, and classical.
Quantum mechanics
Quantum mechanical models and simulations of quantum dots often involve the interaction of electrons with a pseudopotential.
Semiclassical
Semiclassical models of quantum dots frequently incorporate a chemical potential. For example, the thermodynamic chemical potential of an N-particle system is given by
$\mu(N) = E(N) - E(N-1)$
whose energy terms may be obtained as solutions of the Schrödinger equation. The definition of capacitance,
$\frac{1}{C} \equiv \frac{\Delta V}{\Delta Q},$
with the potential difference
$\Delta V = \frac{\Delta\mu}{e} = \frac{\mu(N+\Delta N) - \mu(N)}{e}$
may be applied to a quantum dot with the addition or removal of individual electrons,
$\Delta N = 1$ and $\Delta Q = e$. Then
$C(N) = \frac{e^2}{\mu(N+1)-\mu(N)} = \frac{e^2}{E(N+1) - 2E(N) + E(N-1)}$
is the "quantum capacitance" of a quantum dot.[46]
Classical mechanics
Classical models of electrostatic properties of electrons in quantum dots are similar in nature to the Thomson problem of optimally distributing electrons on a unit sphere.
The classical electrostatic treatment of electrons confined to spherical quantum dots is similar to their treatment in the Thomson,[47] or plum pudding, model of the atom.[48]
The classical treatment of both two-dimensional and three-dimensional quantum dots exhibits electron shell-filling behavior. A "periodic table of classical artificial atoms" has been described for two-dimensional quantum dots.[49] Several connections have also been reported between the three-dimensional Thomson problem and the electron shell-filling patterns found in naturally occurring atoms throughout the periodic table.[50] This latter work originated in classical electrostatic modeling of electrons in a spherical quantum dot represented by an ideal dielectric sphere.[51]
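As a sketch of the Thomson-problem connection, the snippet below numerically minimizes the Coulomb energy of N unit charges constrained to a unit sphere using a simple projected gradient step. This toy optimizer is for illustration only and is not the method used in the cited works.

```python
import numpy as np

def thomson_energy(x):
    """Total Coulomb energy (in units of ke^2) of unit charges at positions x, shape (N, 3)."""
    diff = x[:, None, :] - x[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(x), k=1)
    return np.sum(1.0 / dist[iu])

def minimize_thomson(n, steps=5000, lr=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n, 3))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    for _ in range(steps):
        diff = x[:, None, :] - x[None, :, :]
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, np.inf)           # no self-interaction
        force = np.sum(diff / dist[..., None] ** 3, axis=1)  # Coulomb repulsion on each charge
        x += lr * force                          # move along the net repulsion
        x /= np.linalg.norm(x, axis=1, keepdims=True)        # project back onto the sphere
    return x

pts = minimize_thomson(12)
print(f"E(12) = {thomson_energy(pts):.4f}")  # known optimum for N=12 (icosahedron) is about 49.165
```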
References
1. ^ Brus, L.E. (2007). "Chemistry and Physics of Semiconductor Nanocrystals". Retrieved 7 July 2009.
2. ^ Norris, D.J. (1995). "Measurement and Assignment of the Size-Dependent Optical Spectrum in Cadmium Selenide (CdSe) Quantum Dots, PhD thesis, MIT". hdl:1721.1/11129.
4. ^ Reed MA, Randall JN, Aggarwal RJ, Matyi RJ, Moore TM, Wetsel AE (1988). "Observation of discrete electronic states in a zero-dimensional semiconductor nanostructure". Phys Rev Lett 60 (6): 535–537. Bibcode:1988PhRvL..60..535R. doi:10.1103/PhysRevLett.60.535. PMID 10038575.
5. ^ http://www.technologyreview.com/news/509801/quantum-dots-get-commercial-debut-in-more-colorful-sony-tvs/
7. ^ Brandrup, J.; Immergut, E.H. (1966). Polymer Handbook (2 ed.). New York: Wiley. pp. 240–246.
8. ^ Khare, Ankur; Wills, Andrew W.; Ammerman, Lauren M.; Norris, David J.; Aydil, Eray S. (2011). "Size control and quantum confinement in Cu2ZnSnS4 nanocrystals". Chem. Commun. 47 (42): 47. doi:10.1039/C1CC14687D.
9. ^ Greenemeier, L. (5 February 2008). "New Electronics Promise Wireless at Warp Speed". Scientific American.
10. ^ "SCIENCE WATCH; Tiny Lasers Break Speed Record". The New York Times. 31 December 1991.
11. ^ C. Delerue, M. Lannoo (2004). Nanostructures: Theory and Modelling. Springer. p. 47. ISBN 3-540-20694-9.
12. ^ a b Zherebetskyy, D.; Scheele, M.; Zhang, Y.; Bronstein, N.; Thompson, C.; Britt, D.; Salmeron, M.; Alivisatos, P.; Wang, L.W. (2014). "Hydroxylation of the surface of PbS nanocrystals passivated with oleic acid". Science 344 (6190): 1380–1384. doi:10.1126/science.1252727.
13. ^ Silbey, Robert J.; Alberty, Robert A.; Bawendi, Moungi G. (2005). Physical Chemistry, 4th ed. John Wiley & Sons. p. 835.
14. ^ Prati, Enrico; De Michielis, Marco; Belli, Matteo; Cocco, Simone; Fanciulli, Marco; Kotekar-Patil, Dharmraj; Ruoff, Matthias; Kern, Dieter P et al. (2012). "Few electron limit of n-type metal oxide semiconductor single electron transistors". Nanotechnology 23 (21): 215204. arXiv:1203.4811. Bibcode:2012Nanot..23u5204P. doi:10.1088/0957-4484/23/21/215204. PMID 22552118.
15. ^ Lee SW, Mao C, Flynn CE, Belcher AM (2002). "Ordering of quantum dots using genetically engineered viruses". Science 296 (5569): 892–5. Bibcode:2002Sci...296..892L. doi:10.1126/science.1068054. PMID 11988570.
16. ^ Whaley SR, English DS, Hu EL, Barbara PF, Belcher AM (2000). "Selection of peptides with semiconductor binding specificity for directed nanocrystal assembly". Nature 405 (6787): 665–8. doi:10.1038/35015043. PMID 10864319.
17. ^ A.M. Jawaid, S. Chattopadhyay, D.J. Wink, L.E. Page and P.T. Snee, ACS Nano, 2013, 7, 3190
18. ^ http://www.azonano.com/article.aspx?ArticleID=3473
19. ^ Quantum Materials Corporation and the Access2Flow Consortium (2011). "Quantum materials corp achieves milestone in High Volume Production of Quantum Dots". Retrieved 7 July 2011.
20. ^ The Economist (16 June 2011). "Quantum-dot displays-Dotting the eyes". Retrieved 7 July 2011.
21. ^ "Cadmium-free quantum dots". Retrieved 7 July 2009.
22. ^ Hauser, Charlotte A. E.; Zhang, Shuguang (25 Nov 2010). "Peptides as biological semiconductors". Nature 468 (7323): 516–517. Bibcode:2010Natur.468..516H. doi:10.1038/468516a. Retrieved 10 Apr 2010.
23. ^ Van Driel, A. F. (2005). "Frequency-Dependent Spontaneous Emission Rate from CdSe and CdTe Nanocrystals: Influence of Dark States". Physical Review Letters 95 (23): 236804. arXiv:cond-mat/0509565. Bibcode:2005PhRvL..95w6804V. doi:10.1103/PhysRevLett.95.236804. PMID 16384329.
24. ^ Leatherdale, C. A.; Woo, W.-K.; Mikulec, F. V.; Bawendi, M. G. (2002). "On the Absorption Cross Section of CdSe Nanocrystal Quantum Dots". The Journal of Physical Chemistry B 106 (31): 7619. doi:10.1021/jp025698c.
25. ^ Achermann, M.; Petruska, M. A.; Smith, D. L.; Koleske, D. D.; Klimov, V. I. (2004). "Energy-transfer pumping of semiconductor nanocrystals using an epitaxial quantum well". Nature 429 (6992): 642–646. Bibcode:2004Natur.429..642A. doi:10.1038/nature02571.
26. ^ a b Walling, M. A.; Novak, Shepard (February 2009). "Quantum Dots for Live Cell and In Vivo Imaging". Int. J. Mol. Sci. 10 (2): 441–491. doi:10.3390/ijms10020441. PMC 2660663. PMID 19333416.
27. ^ Michalet X, Pinaud FF, Bentolila LA, et al. (2005). "Quantum dots for live cells, in vivo imaging, and diagnostics". Science 307 (5709): 538–44. Bibcode:2005Sci...307..538M. doi:10.1126/science.1104274. PMC 1201471. PMID 15681376.
28. ^ Spie (2014). "Paul Selvin Hot Topics presentation: New Small Quantum Dots for Neuroscience". SPIE Newsroom. doi:10.1117/2.3201403.17.
29. ^ Tokumasu, F; Fairhurst, Rm; Ostera, Gr; Brittain, Nj; Hwang, J; Wellems, Te; Dvorak, Ja (Mar 2005). "Band 3 modifications in Plasmodium falciparum-infected AA and CC erythrocytes assayed by autocorrelation analysis using quantum dots". Journal of Cell Science (Free full text) 118 (Pt 5): 1091–8. doi:10.1242/jcs.01662. PMID 15731014.
30. ^ Dahan, M; Lévi, S; Luccardini, C; Rostaing, P; Riveau, B; Triller, A (Oct 2003). "Diffusion dynamics of glycine receptors revealed by single-quantum dot tracking". Science 302 (5644): 442–5. Bibcode:2003Sci...302..442D. doi:10.1126/science.1088525. PMID 14564008.
31. ^ Howarth, M.; Liu, W.; Puthenveetil, S.; Zheng, Y.; Marshall, L.F.; Schmidt, M.M.; Wittrup, K.D.; Bawendi, M.G.; Ting, A.Y. (2008). "Monovalent, reduced-size quantum dots for imaging receptors on living cells". Nature Methods 5 (5): 397–9. doi:10.1038/nmeth.1206. PMC 2637151. PMID 18425138.
32. ^ Akerman, M.E.; Chan, W.C.; Laakkonen, P.; Bhatia, S.N.; Ruoslahti, E. (2002). "Nanocrystal targeting in vivo". Proceedings of the National Academy of Sciences of the United States of America 99 (20): 12617–21. Bibcode:2002PNAS...9912617A. doi:10.1073/pnas.152463399. PMC 130509. PMID 12235356.
33. ^ Farlow, J.; Seo, D.; Broaders, K.E.; Taylor, M.J.; Gartner, Z.J.; Jun, Y.W. (2013). "Formation of targeted monovalent quantum dots by steric exclusion". Nature Methods. doi:10.1038/nmeth.2682.
34. ^ Dwarakanath, S.; Bruno, J.G.; Shastry, A.; Phillips, T.; John, A.A.; Kumar, A.; Stephenson, L.D. (2004). "Quantum dot-antibody and aptamer conjugates shift fluorescence upon binding bacteria". Biochemical and Biophysical Research Communications 325 (3): 739–43. doi:10.1016/j.bbrc.2004.10.099. PMID 15541352.
35. ^ a b Ballou, B; Lagerholm, Bc; Ernst, La; Bruchez, Mp; Waggoner, As (2004). "Noninvasive imaging of quantum dots in mice". Bioconjugate chemistry (Free full text) 15 (1): 79–86. doi:10.1021/bc034153y. PMID 14733586.
36. ^ "Gene Silencer and Quantum Dots Reduce Protein Production to a Whisper". Newswise. Retrieved 24 June 2008.
37. ^ Pelley, J.L.; Daar, A.S.; Saner, M.A. (2009). "State of academic knowledge on toxicity and biological fate of quantum dots". Toxicological Sciences 112 (2): 276–96. doi:10.1093/toxsci/kfp188. PMC 2777075. PMID 19684286.
38. ^ Choi, H.S.; Liu, W.; Misra, P.; Tanaka, E.; Zimmer, J.P.; Itty Ipe, B.; Bawendi, M.G.; Frangioni, J.V. (2007). "Renal clearance of quantum dots". Nature Biotechnology 25 (10): 1165–70. doi:10.1038/nbt1340. PMC 2702539. PMID 17891134.
39. ^ Armon Sharei, Janet Zoldan, Andrea Adamo, Woo Young Sim, Nahyun Cho, Emily Jackson, Shirley Mao, Sabine Schneider, Min-Joon Han, Abigail Lytton-Jean, Pamela A. Basto, Siddharth Jhunjhunwala, Jungmin Lee, Daniel A. Heller, Jeon Woong Kang, George C. Hartoularos, Kwang-Soo Kim, Daniel G. Anderson, Robert Langer, and Klavs F. Jensen (2013). "A vector-free microfluidic platform for intracellular delivery". PNAS. Bibcode:2013PNAS..110.2082S. doi:10.1073/pnas.1218705110.
40. ^ Schaller, R.; Klimov, V. (2004). "High Efficiency Carrier Multiplication in PbSe Nanocrystals: Implications for Solar Energy Conversion". Physical Review Letters 92. arXiv:cond-mat/0404368. Bibcode:2004PhRvL..92r6601S. doi:10.1103/PhysRevLett.92.186601. PMID 15169518.
41. ^ Shrinking quantum dots to produce white light. Vanderbilt's Online Research Magazine. Vanderbilt.edu. Retrieved on 24 July 2013.
42. ^ "Nano LEDs printed on silicon". 3 July 2009.
43. ^ Hoshino, Kazunori; Gopal, Ashwini; Glaz, Micah S.; Vanden Bout, David A.; Zhang, Xiaojing (2012). "Nanoscale fluorescence imaging with quantum dot near-field electroluminescence". Applied Physics Letters 101 (4): 043118. Bibcode:2012ApPhL.101d3118H. doi:10.1063/1.4739235.
44. ^ Konstantatos, G.; Sargent, E. H. (2009). "Solution-Processed Quantum Dot Photodetectors". Proceedings of the IEEE 97 (10): 1666–1683. doi:10.1109/JPROC.2009.2025612.
45. ^ Vaillancourt, J.; Lu, X.-J.; Lu, Xuejun (2011). "A High Operating Temperature (HOT) Middle Wave Infrared (MWIR) Quantum-Dot Photodetector". Optics and Photonics Letters 4 (2): 1–5. doi:10.1142/S1793528811000196.
46. ^ G. J. Iafrate, K. Hess, J. B. Krieger, and M. Macucci (1995). "Capacitive nature of atomic-sized structures". Phys. Rev. B 52: 15.
49. ^ V. M. Bedanov and F. M. Peeters (1994). "Ordering and phase transitions of charged particles in a classical finite two-dimensional system". Physical Review B 49: 2667–2676. Bibcode:1994PhRvB..49.2667B. doi:10.1103/PhysRevB.49.2667.
50. ^ T. LaFave Jr. (2013). "Correspondences between the classical electrostatic Thomson Problem and atomic electronic structure". Journal of Electrostatics 71 (6): 1029–1035. doi:10.1016/j.elstat.2013.10.001.
51. ^ T. LaFave Jr. (2011). "The discrete charge dielectric model of electrostatic energy". Journal of Electrostatics 69 (5): 414–418. doi:10.1016/j.elstat.2013.10.001.
I've read that QM operates in a Hilbert space (where the state functions live). I don't know if it's meaningful to ask such a question, but what are the answers to the analogous questions for GR and Newtonian gravity?
2 Answers
I interpreted your question differently, more like a mathematics question.
In Quantum Mechanics, we basically have an equation, the Schrödinger equation, which is a differential equation on the space of square-integrable complex-valued functions. This space is a Hilbert space, which means that it is a vector space that also has a nice topological structure: basically, all Cauchy sequences of vectors converge in that space.
In Newtonian mechanics, the equations are defined on phase space, which is basically a $6N$-dimensional space, where $N$ is the total number of particles, on which the coordinates of a point consist of the positions and momenta of each particle you want to describe. The solution of the equations induces a flow on this phase space. The structure of phase space is usually that of a symplectic manifold.
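To make the idea of a flow on phase space concrete, here is a minimal sketch for a single 1-D harmonic oscillator (so the phase space is 2-dimensional rather than $6N$-dimensional), using the symplectic Euler method, whose update respects the symplectic structure mentioned above:

```python
def symplectic_euler(q, p, dt, steps, m=1.0, k=1.0):
    """Evolve (q, p) under H = p^2/(2m) + k q^2/2; the update preserves the symplectic form."""
    traj = [(q, p)]
    for _ in range(steps):
        p -= dt * k * q      # kick: dp/dt = -dH/dq
        q += dt * p / m      # drift: dq/dt = +dH/dp
        traj.append((q, p))
    return traj

traj = symplectic_euler(q=1.0, p=0.0, dt=0.05, steps=200)
q, p = traj[-1]
print(f"energy drift: {0.5 * p * p + 0.5 * q * q - 0.5:.2e}")  # stays bounded, unlike plain Euler
```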
In General Relativity, the equations are Einstein's field equations. They link the Riemann tensor to the energy-momentum tensor. They are difficult to solve in the sense that they are nonlinear and you have to specify an energy-momentum tensor, but this tensor will also depend on the geometry of space-time, thus the Riemann tensor. So you have to solve in one go for the geometry and energy-matter distribution. In practice, many simplifying assumptions will be made. But the "space" of solutions is the space of geometries and energy-matter distributions compatible with the field equations.
@Raskolnikov : Your interpretation is what I was intending when I asked the question. Is '6N' a typo? I didn't get what it is; is it 'infinite'? What are the mathematical properties of a phase space in Newtonian mechanics? – Rajesh D Dec 1 '10 at 15:16
No, it's not a typo, but I admit I have not been clear enough. $N$ is the amount of particles you want to describe. You multiply by 6 because each particle has 3 spatial coordinates and 3 momenta along each spatial direction. The structure of phase space is that of a symplectic manifold. – Raskolnikov Dec 1 '10 at 15:22
@Raskolnikov : In newtonian mechanics, is the trajectory of the state of the system always smooth ? – Rajesh D Dec 1 '10 at 15:34
More precisely, the dimension of phase space is equal to twice the number of free generalized coordinates of the system. If you have constraints, they generally reduce the dimension of phase space, which is a very important justification for using it. – Sklivvz Dec 1 '10 at 15:34
@Rajesh: for the next time I suggest you try to formulate your questions clearer so that one doesn't have to waste time on answers that happen to not be what you were intending. Well, I probably shouldn't have answered such a vaguely formulated question in the first place... – Marek Dec 1 '10 at 15:40
First, I'll assume that you're talking about quantization. To understand how to quantize GR it is absolutely necessary to give an account (however sketchy) of the approach used to quantize simpler systems.
Classical mechanics
This is a procedure whereby one transfers from the classical point of view (Newtonian mechanics or, equivalently, Lagrangian or Hamiltonian mechanics) to the quantum point of view. Now, there are some general prescriptions for how one can quantize classical mechanical systems. The most common one is to replace the phase space by a Hilbert space, functions on phase space by operators on the Hilbert space, and the Poisson bracket of functions by the commutator of the operators.
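A minimal sketch of that last prescription in practice: represent $x$ and $p$ as finite matrices in the harmonic-oscillator basis and check that the Poisson bracket $\{x, p\} = 1$ has become the commutator $[x, p] = i\hbar$ (with the basis truncation spoiling only the last diagonal entry):

```python
import numpy as np

n = 8  # basis size; hbar set to 1
a = np.diag(np.sqrt(np.arange(1, n)), k=1)  # annihilation operator in the number basis
x = (a + a.T) / np.sqrt(2)                  # position operator
p = 1j * (a.T - a) / np.sqrt(2)             # momentum operator

comm = x @ p - p @ x
print(np.round(comm, 10))  # approximately i * identity, except the last diagonal entry
```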
Field theory
The previous paragraph was only dealing with mechanics, i.e. the case where there are only a few degrees of freedom. But GR is a field theory (of the gravitational field) and is actually a kind of gauge theory (but a little special at that). One has to first learn how to quantize classical fields and then gauge fields. To do that you can replace the (infinite-dimensional) phase space of the field by a (very large) Hilbert space and produce an analogue of Poisson brackets called Dirac brackets, which you then replace by commutators.
(The second very common approach to quantization is via path integral for which you don't need any operators but I won't elaborate on that here because it is a huge area that would take us far away off the topic of your question)
Then to quantize a gauge theory with its own huge gauge symmetry one has to carry out a very nontrivial discussion about the structure of these Dirac brackets.
(There also exist other approaches to this but none of them is particularly easy for a beginner. If you're interested see Faddeev-Popov ghosts in path integral gauge quantization and BRST quantization)
Now, the thing is that GR (as a field theory) is hard to quantize. I.e. if you repeat the above approach for GR, you'll find out that your quantum theory doesn't make sense (because it is not renormalizable).
This suggests that something more than naive approach is needed. And there are actually lots of them. For one thing, one can quantize gravity in certain special dimensions (like 2+1) if one generalizes GR a little (this was done by Witten in '80s). There are also various reformulations that relate quantum gravity and QFT (like AdS/CFT correspondence). There is also matrix string theory that shows duality between matrix quantum mechanics and GR (as pointed out to me by Matt in this question of mine).
In short, quantization of GR is very hard. There are many theories and as of yet there is no experimental evidence that would let us know which one is the correct one.
Thank you for pointing that out. I still felt your answer was very useful, in a totally different way than I was expecting... I will try to formulate my questions more clearly in future. I think that your answer is very helpful for someone googling or browsing through this forum. – Rajesh D Dec 1 '10 at 15:49
@Rajesh: all right then. I also think my answer could be good if only someone asked the question it addresses :-) – Marek Dec 1 '10 at 16:03
Physics Friday 58
In a previous Friday post, I demonstrated one method of determining the energy eigenvalues for the one-dimensional quantum harmonic oscillator. In particular, we took the (time-independent) Schrödinger equation $-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + \frac{1}{2}m\omega^2x^2\psi = E\psi$, and by defining the dimensionless parameters $\xi = x\sqrt{m\omega/\hbar}$ and $\lambda = \frac{2E}{\hbar\omega}$, and then attempting a solution of the form $\psi(\xi) = u(\xi)e^{-\xi^2/2}$, we derived the differential equation $u'' - 2\xi u' + (\lambda-1)u = 0$, where the series solution $u(\xi) = \sum_n a_n\xi^n$ has recursion relation $a_{n+2} = \frac{2n+1-\lambda}{(n+1)(n+2)}a_n$. Then, the requirement that the wavefunction be normalizable requires that the series solution terminate after finitely many terms, requiring that $\lambda = 2n+1$, for n a non-negative integer.
Now, let us consider the ground state case: n=0. Then $\lambda = 1$, and the series recursion relation becomes $a_{n+2} = \frac{2n}{(n+1)(n+2)}a_n$, so that $a_2 = a_4 = \cdots = 0$ (and normalizability forces the odd coefficients to vanish as well), and the terminating solution is that u(ξ) is a constant (a0). Then $\psi(\xi) = a_0e^{-\xi^2/2}$, and so $\psi(x) = a_0e^{-m\omega x^2/(2\hbar)}$, with energy $E_0 = \frac{1}{2}\hbar\omega$.
Normalizing this wavefunction, $\int_{-\infty}^{\infty}|\psi|^2\,dx = 1$ gives $a_0 = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4}$, so that $\psi_0(x) = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4}e^{-m\omega x^2/(2\hbar)}$.
So the ground state wavefunction of the 1-dimensional quantum harmonic oscillator is a Gaussian.
Now, let us examine the uncertainties of position and momentum. The Heisenberg Uncertainty Principle tells us that $\Delta x\,\Delta p \ge \frac{\hbar}{2}$ (see here). For our wavefunction, we see $\langle x\rangle = 0$ and $\langle p\rangle = 0$,
as in both cases the integrand is odd (see here).
Using $(\Delta x)^2 = \langle x^2\rangle - \langle x\rangle^2$ and $(\Delta p)^2 = \langle p^2\rangle - \langle p\rangle^2$, we see that
$\langle x^2\rangle = \frac{\hbar}{2m\omega}$ and $\langle p^2\rangle = \frac{m\omega\hbar}{2}$, and so $\Delta x = \sqrt{\frac{\hbar}{2m\omega}}$ and $\Delta p = \sqrt{\frac{m\omega\hbar}{2}}$.
Taking the product, $\Delta x\,\Delta p = \frac{\hbar}{2}$,
which is exactly the lower limit allowed by the uncertainty principle. Thus, the non-zero value of the ground-state energy (zero-point energy) can be seen as being a result of the Heisenberg Uncertainty Principle.
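A quick numerical check of these ground-state results (a sketch with $m = \omega = \hbar = 1$, using simple grid quadrature):

```python
import numpy as np

# Grid representation of the ground state, with m = omega = hbar = 1
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
psi = np.pi ** -0.25 * np.exp(-x ** 2 / 2)

print("norm  :", np.trapz(psi ** 2, x))      # -> 1
x2 = np.trapz(x ** 2 * psi ** 2, x)          # <x^2> = 1/2
dpsi = np.gradient(psi, dx)
p2 = np.trapz(dpsi ** 2, x)                  # <p^2> = integral of |psi'|^2 (psi real)
print("dx*dp :", np.sqrt(x2 * p2))           # -> 0.5, i.e. hbar/2
```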
Why is a proton assumed to be always at the center while applying the Schrödinger equation? Isn't it a quantum particle?
Self interactions are not considered in a non-relativistic quantum mechanical treatment and the Hydrogen atom is usually treated that way, in a first course. – Torsten Hĕrculĕ Cärlemän Dec 31 '13 at 8:51
@TorstenHĕrculĕCärlemän : What about proton being at the center? – Rajesh D Dec 31 '13 at 8:53
I don't get the fact about it being at the center of a coordinate frame, and it being a quantum particle. You can infact take any point as the origin, only to complicate the expressions further. It is most natural to hence take the nucleus at the center. – Torsten Hĕrculĕ Cärlemän Dec 31 '13 at 8:56
@RajeshD The assumption that the proton is stationary is just an approximation used since protons are about 2000 times as massive as the electrons and 2000 is approximately infinity. – David H Dec 31 '13 at 8:57
@DavidH : Thanks David. That seems very reasonable. – Rajesh D Dec 31 '13 at 9:00
3 Answers
There is a rigorous formal analysis which lets you do this. The true problem, of course allows both the proton and the electron to move. The corresponding Schrödinger equation thus has the coordinates of both as variables. To simplify things, one usually transforms those variables to the relative separation and the centre-of-mass position. It turns out that the problem then separates (for a central force) into a "stationary proton" equation and a free particle equation for the COM.
There is a small price to pay for this: the mass for the centre of mass motion is the total mass - as you'd expect - but the radial equation has a mass given by the reduced mass $$\mu=\frac {Mm}{M+m}=\frac{m}{1+m/M} ,$$ which is close to the electron mass $m$ since the proton mass $M$ is much greater.
It's important to note that an exactly analogous separation holds for the classical treatment of the Kepler problem.
Regarding self-interactions, these are very hard to deal with without invoking the full machinery of quantum electrodynamics. Fortunately, in the low-energy limits where hydrogen atoms can form, it turns out you can completely neglect them.
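As a quick numerical illustration of how small this reduced-mass correction is for hydrogen (a sketch; the constants are rounded CODATA-style values):

```python
M_PROTON = 1.67262192e-27    # kg
M_ELECTRON = 9.10938370e-31  # kg

mu = M_PROTON * M_ELECTRON / (M_PROTON + M_ELECTRON)
print(f"mu / m_e = {mu / M_ELECTRON:.6f}")  # about 0.999456

# Energy levels scale linearly with the mass in the radial equation,
# so the ground-state energy shifts from -13.606 eV by the same factor:
print(f"E_1 with reduced mass = {-13.606 * mu / M_ELECTRON:.3f} eV")
```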
I assume you're talking of the hydrogen atom; the Hamiltonian of the nucleus + electron system is $$ H = \frac{p_e^2}{2 m _e} + \frac{p_n^2}{2 m _n} - \frac{e^2}{|r_e - r_n|}. $$ You can do a change of coordinates (center of mass coordinates) $$ \vec{R} = \frac{m_e \vec{r}_e + m_n \vec{r}_n}{m_e+m_n} \\ \vec{r} = \vec{r}_e - \vec{r}_n $$ and find the conjugate momenta to these coordinates: $$ \vec{P} = \vec{p}_e + \vec{p}_n \\ \vec{p} = \frac{m_n \vec{p}_e - m_e \vec{p}_n}{m_e+m_n}. $$ Defining also the reduced mass $\mu$ such that $$ \frac{1}{\mu} = \frac{1}{m_e} + \frac{1}{m_n} $$ and the total mass $M = m_e + m_n$, you can write the hydrogen atom Hamiltonian as $$ H = \frac{P^2}{2 M} + \frac{p^2}{2 \mu} - \frac{e^2}{r} = H_{CM} + H_{rel}. $$ In these calculations I always treated the nucleus as a quantum particle; but if you look at $H_{rel} = p^2/2\mu - e^2/r$ and let the mass of the nucleus tend to infinity, you obtain the hydrogen atom Hamiltonian usually taught in basic QM courses. Also, you don't have other terms like spin-orbit or j-j couplings here, because they are relativistic effects that come out of the Dirac equation.
thanks for the explanation @AlexA – Rajesh D Dec 31 '13 at 9:36
With regards your first question:
A similar (the same?) question you might reasonably ask is: how can we assume that the proton is stationary, at the centre of the problem, since it is surely going to be attracted by the electron and jiggle about a little? This is a question that would be just as valid directed at a classical system --- say, a planet orbiting a star --- as a quantum mechanical one.
The solution to this is as described above, by others: the fact that the star/proton is so much more massive than the planet/electron means that it is going to move very little (the acceleration of an object is inversely proportional to its mass, and hence with a large mass we have a very small acceleration i.e. very little motion), and so the stationary nature of the star/proton is a great approximation. And in fact, we can make the analysis completely rigorous by dealing with relative separations and reduced masses. But the finite mass of the proton means that indeed, the proton won't actually be stationary.
However, I'm not sure this is the question you're asking. Your concern was not "isn't the proton a particle of finite mass" but rather "isn't it a quantum particle". The suggestion is that you think the proton should jiggle due to its quantum mechanical nature --- that is, due to the uncertainty principle etc. --- irrespective of the mass of the proton (perhaps I am mistaken about this).
In the limit of the proton having infinitely more mass than the electron, the quantum mechanical nature of the proton won't force it to jiggle. In other words, the uncertainty in its position, $\Delta x$, can be made arbitrarily close to zero. This is consistent with the uncertainty principle since its momentum $p$ (mass x velocity) can tend to infinity in the limit of an infinitely massive proton. Hence we can still achieve
$$ \Delta p \Delta x \geq \frac{\hbar}{2} $$
with an arbitrarily small velocity and positional uncertainty, if we make the mass arbitrarily large.
In other words, in the assumption that we're using to neglect motion of the proton due to it being attracted to the electron, we are also able to neglect the motion of the proton due to quantum mechanical effects.
The reality of course is that the proton will jiggle --- it will jiggle a bit due to its intrinsic quantum mechanical nature, and it will jiggle a bit more due to the attractive force on it of the electron. However, this can be dealt with rigorously just as before, using relative separations and reduced masses.
Chemisorption
Chemisorption is a kind of adsorption which involves a chemical reaction between the surface and the adsorbate. New chemical bonds are generated at the adsorbent surface. Examples include macroscopic phenomena that can be very obvious, like corrosion, and subtler effects associated with heterogeneous catalysis. The strong interaction between the adsorbate and the substrate surface creates new types of electronic bonds.[1]
In contrast with chemisorption is physisorption, which leaves the chemical species of the adsorbate and surface intact. It is conventionally accepted that the energetic threshold separating the binding energy of "physisorption" from that of "chemisorption" is about 0.5 eV per adsorbed species.
Because of this specificity, the nature of chemisorption can differ greatly, depending on the chemical identity and the surface structure.
An important example of chemisorption is in heterogeneous catalysis which involves molecules reacting with each other via the formation of chemisorbed intermediates. After the chemisorbed species combine (by forming bonds with each other) the product desorbs from the surface.
Hydrogenation of an alkene on a solid catalyst entails chemisorption of the molecules of hydrogen and alkene, which form bonds to the surface atoms.
Self-assembled monolayers
Self-assembled monolayers (SAMs) are formed by chemisorbing reactive reagents onto metal surfaces. A famous example involves thiols (RS-H) adsorbing onto the surface of gold. This process forms strong Au-SR bonds and releases H2. The densely packed SR groups protect the surface.
Gas-surface chemisorption
Adsorption kinetics
As a form of adsorption, chemisorption follows the general adsorption process. The first stage is for the adsorbate particle to come into contact with the surface. The particle must then be trapped on the surface, lacking the energy to escape the gas–surface potential well. If it elastically collides with the surface, it returns to the bulk gas. If it loses enough momentum through an inelastic collision, it "sticks" onto the surface, forming a precursor state bonded to the surface by weak forces, similar to physisorption. The particle diffuses on the surface until it finds a deep chemisorption potential well. There it reacts with the surface, or simply desorbs after enough energy and time.[2]
The reaction with the surface is dependent on the chemical species involved. Applying the Gibbs free energy equation for reactions:
$\Delta G = \Delta H - T\Delta S$
General thermodynamics states that for spontaneous reactions at constant temperature and pressure, the change in free energy should be negative. Since a free particle becomes restrained to the surface, its entropy is lowered (unless the adsorbed species is highly mobile on the surface). This means that the enthalpy term must be negative for adsorption to be spontaneous, implying an exothermic reaction.[3]
Figure 1 is a graph of the physisorption and chemisorption energy curves for oxygen on tungsten. The physisorption curve is given as a Lennard-Jones potential and the chemisorption curve as a Morse potential. There exists a point of crossover between the physisorption and chemisorption curves, meaning a point of transfer between the two states. The crossover can occur above or below the zero-energy line (depending on the parameters of the Morse potential, such as its width a), representing an activation energy requirement or the lack of one. Most simple gases on clean metal surfaces lack the activation energy requirement.
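A minimal sketch of such a crossover, with made-up parameter values (not fitted to the W/O2 system), locating where the physisorption and chemisorption curves intersect:

```python
import numpy as np

def lennard_jones(z, eps=0.05, sigma=3.0):
    """Shallow physisorption well (eV, angstroms); parameters are illustrative only."""
    return 4 * eps * ((sigma / z) ** 12 - (sigma / z) ** 6)

def morse(z, D=4.0, a=1.5, z0=1.8, offset=1.0):
    """Deep chemisorption well; 'offset' lifts the dissociated-state asymptote."""
    return D * (1 - np.exp(-a * (z - z0))) ** 2 - D + offset

z = np.linspace(1.2, 6.0, 2000)
diff = lennard_jones(z) - morse(z)
i = np.argmin(np.abs(diff))  # closest approach of the two curves
print(f"crossover near z = {z[i]:.2f} A, E = {lennard_jones(z[i]):+.3f} eV")
# E > 0 at the crossover would indicate an activated chemisorption pathway.
```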
For experimental setups of chemisorption, the amount of adsorption of a particular system is quantified by a sticking probability value.[3]
However, chemisorption is very difficult to model theoretically. A multidimensional potential energy surface (PES) derived from effective medium theory is used to describe the effect of the surface on adsorption, but only certain parts of it are used depending on what is to be studied. A simple example of a PES, which gives the total energy as a function of location:
$E(\{R_i\}) = E_{el}(\{R_i\}) + V_{\text{ion-ion}}(\{R_i\})$
where $E_{el}$ is the energy eigenvalue of the Schrödinger equation for the electronic degrees of freedom and $V_{\text{ion-ion}}$ is the ion-ion interaction. This expression omits translational energy, rotational energy, vibrational excitations, and other such considerations.[4]
There exist several models to describe surface reactions: the Langmuir–Hinshelwood mechanism, in which both reacting species are adsorbed, and the Eley–Rideal mechanism, in which one is adsorbed and the other reacts with it.[3]
Real systems have many irregularities, making theoretical calculations more difficult:[5]
• Solid surfaces are not necessarily at equilibrium.
• They may be perturbed and irregular, with defects and the like.
• Distribution of adsorption energies and odd adsorption sites.
• Bonds formed between the adsorbates.
In contrast with physisorption, where adsorbates simply sit on the surface, chemisorbed adsorbates can change the surface, along with its structure. The structure can go through relaxation, where the first few layers change interplanar distances without changing the surface structure, or reconstruction, where the surface structure is changed.[5]
For example, oxygen can form very strong bonds (~4 eV) with metals, such as Cu(110). This comes with the breaking apart of surface bonds in forming surface-adsorbate bonds. A large restructuring occurs via a missing-row reconstruction, as seen in Figure 2.
Dissociative chemisorption
An example is the hydrogen and copper system, one that has been studied many times over. It has a large activation energy of 0.35–0.85 eV. The vibrational excitation of the hydrogen molecule promotes dissociation on low-index surfaces of copper.[2]
References
1. ^ Oura, K.; V. G. Lifshits; A. A. Saranin; A. V. Zotov; M. Katayama (2003). Surface Science, An Introduction. Berlin: Springer. ISBN 3-540-00545-5.
2. ^ a b c Rettner, C.T; Auerbach, D.J. (1996). "Chemical Dynamics at the Gas-Surface Interface". Journal of Physical Chemistry 100 (31): 13021–13033. doi:10.1021/jp9536007.
3. ^ a b c Gasser, R.P.H. (1985). An Introduction to Chemisorption and Catalysis by Metals. Oxford: Clarendon Press.
4. ^ Norskov, J.K. (1990). "Chemisorption on metal surfaces". Reports on Progress in Physics 53 (10): 1253–1295. doi:10.1088/0034-4885/53/10/001.
5. ^ a b Clark, A. (1974). The Chemisorptive Bond: Basic Concepts. New York and London: Academic Press.
I saw this video of the double slit experiment by Dr. Quantum on YouTube. Later in the video he says that the behavior of the electrons changes to produce the two-band pattern, as if an electron knows that it is being watched or observed.
What does that mean? How is that even possible? An atom knows if it is being watched? Seriously? Or is it more likely that I didn't understand the video?
That video has prompted questions here before. The first half of it is pretty standard explanation of quantum mechanics for laypeople, but at some point in veers off into new age woo and silly quantum mysticism. The basic answer is that QM describes the way the universe works very accurately. It is futile to assign wacky philosophical explanations to it. The universe will do what the universe will do, and QM is simply a description of it's behavior. – Colin K Nov 8 '11 at 22:11
Don't let Dr Quantum touch you there... He's not a real doctor. – Mikhail Nov 9 '11 at 3:51
remember what Dr. Feynman said about QM...If you think you understand QM, then you didn't understand it! – Vineet Menon Nov 9 '11 at 4:47
2 Answers
Before I attempt to answer your question it is necessary to cover some basic background; you must also forgive the length, but you raise some very interesting questions:
There are two things that govern the evolution of a Quantum Mechanical (QM) system (For All Practical Purposes (FAPP), I will take the electron and the double-slit/Young's apparatus you mention to be a purely QM system): the time evolution of the system (governed by the Schrödinger equation), which we will denote as $\mathbf{U}$, and the State Vector Reduction or Collapse of the Wave Function, which we will denote as $\mathbf{R}$. The Schrödinger equation describes the unitary/time evolution of the wave function or quantum state of a particle, which is what $\mathbf{U}$ stands for here. This evolution is well defined and provides information on the evolution of the quantum state of a system. The quantum state itself expresses the entire weighted sum of all the possible alternatives (with complex-number weighting factors) that are open to the system. Due to the nature of the complex probability amplitudes, it is possible for a QM system, like your electron traveling through Young's apparatus, to be in a complex superposition of multiple states (or, to put it another way, to be in a mixture of the possible states/outcomes that the given system will allow).
For your system let's assume for simplicity that there are two states: $|T\rangle$, the state associated with the electron going through the [T]op 'slit', and $|B\rangle$, the state associated with the electron passing through the [B]ottom 'slit' (for simplicity we will ignore the phase factors associated with the QM states; see here for more information about the phase factor associated with quantum states). So, just before the electron strikes the wall it is in a superposition of states $\alpha|T\rangle + \beta|B\rangle$, where $\alpha$ and $\beta$ are complex probability amplitudes that represent the likelihood of the particle being in the respective states. Now, in order to determine which path/'slit' the electron actually took (either $|T\rangle$ or $|B\rangle$) we have to make some kind of 'observation'/measurement (as was pointed out above). This measurement is what causes process $\mathbf{R}$ to occur and, subsequently, the collapse of the wave function, which forces the superposition of states $\alpha|T\rangle + \beta|B\rangle$ to become either state $|T\rangle$ OR $|B\rangle$. It is this QM state reduction or wave function collapse caused by process $\mathbf{R}$ that invokes all the mystery and the very strange nature of QM. There are numerous paradoxes (the EPR paradox, Schrödinger's cat, etc.; see here for an overview and some background) that stem from this measurement procedure/problem. At this point I can now address your questions: "What does that mean? How is that even possible? An atom knows if it is being watched? Seriously? Or is it more likely that I didn't understand the video?"
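A small numerical illustration of the difference between the two situations just described (a sketch; equal amplitudes $\alpha = \beta = 1/\sqrt{2}$, with a simple relative phase standing in for the path-length difference across the screen):

```python
import numpy as np

alpha = beta = 1 / np.sqrt(2)  # amplitudes for |T> and |B>

phases = np.linspace(0, 2 * np.pi, 9)  # relative phase across the screen

# No which-path measurement: add complex amplitudes, then square (interference).
coherent = np.abs(alpha + beta * np.exp(1j * phases)) ** 2

# Which-path measurement (process R has occurred): add probabilities instead.
measured = np.abs(alpha) ** 2 + np.abs(beta) ** 2 + 0 * phases

for ph, c, m in zip(phases, coherent, measured):
    print(f"phase {ph:4.2f}: with interference {c:.2f} vs without {m:.2f}")
```

The coherent intensity oscillates between 0 and 2 (fringes), while the measured case is flat at 1: the two-band pattern.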
So it is the process $\mathbf{R}$ that causes this issue, and you are right to ask what it means when someone says "it knows that it is being observed". To answer the above I will ask a question of my own: "Is $\mathbf{R}$ a real process?". I ask this because there are two ways of viewing $\mathbf{R}$. Some physicists view the collapse of the wave function and the quantum superpositions of complex probability amplitudes (the use of state vectors) as real physical properties; others do not (even Dirac, Einstein and Schrödinger himself did not take the probabilistic view of QM as a serious account of what was actually happening in reality; rather, they took it as a mathematical formalism that allowed these physical processes to be predicted). If you deem the state vector a real entity then you must accept the consequential blur between what happens at the quantum level and what happens at the macroscopic/large-scale level. This leads to Feynman's multiple-histories view of QM, where all of the possible outcomes of a QM system occur, and this itself leads to the "many-worlds" interpretations of QM. I for one (along with the likes of Penrose, Einstein etc.) believe the current picture of QM is not complete and that there is some physical process causing the collapse of the wave function.
The wave function collapse is what causes the electron to choose a QM state, and the act of observation/measurement does seem to cause this collapse. However, this gives rise to the question: "Is it the act of human observation/consciousness that causes this collapse?". It is impossible to argue that this is the case. To go into more depth I will have to bring in the idea of quantum entanglements, which is essentially what was described above as a superposition of two QM states. These entanglements are what "collapse" when observations/measurements are made and are what constitute $\mathbf{R}$. So the real question is what causes the disentanglement of two superposed states. There are some very interesting theories that postulate that the state vector reduction is gravitationally induced and not the act of any observation. These ideas also have a bearing on the question of human consciousness! Details and in-depth discussion of this subject can be found in the very accessible book "Shadows of the Mind" by Roger Penrose.
I hope this was of some help.
The video shows that the interference pattern goes away when one tries to measure which slit the electron went through. The point is that in order to measure which slit the electron went through, one must disturb the electron (shoot some light at it, for example). And amazingly, this interaction is enough to destroy the interference pattern. In some sense, though, there is still some mystery about this. One says that the measurement (which implies an interaction) collapses the wave function (which describes the electron's motion). The double slit experiment is a good place to start to get into the strange world of quantum mechanics!
simple and clear....I guess the narrator wanted to convey the simple fact about uncertainty principle! – Vineet Menon Nov 9 '11 at 4:48
March 15, 2012
More Designer Electrons: Artificial Molecular Graphene Used to Mimic Higgs Field and Relativity
Researchers arranged carbon monoxide molecules to form the same hexagonal pattern found in graphene, except that they could adjust molecular spacing slightly. They placed individual molecules of carbon monoxide onto a copper sheet. The material's electrons behave remarkably like relativistic particles, with a "speed of light" that they can adjust. Additionally, the researchers could change the spacing between molecules in a way that the masses of the quasiparticles changed, or cause them to behave as though they are interacting with electric and magnetic fields—without actually applying those fields to the material.
Manoharan has indicated that his team will be working on using the new material as a test bed for future exploitation as well as creating new nanoscale materials with new properties.
This is a follow-up to the designer electrons article from yesterday.
The Manoharan lab covers their own work here.
The work could lead to new materials and devices.
Graphical summary of this work. Artificial “molecular” graphene is fabricated via atom manipulation, and then imaged and locally probed via scanning tunneling microscopy (STM). Guided by theory, we fabricate successively more exotic variants of graphene. From left to right: pristine graphene exhibiting emergent massless Dirac fermions; graphene with a Kekulé distortion dresses the Dirac fermions with a scalar gauge field that creates mass; graphene with a triaxial strain distortion embeds a vector gauge field which condenses a time-reversal-invariant relativistic quantum Hall phase. In the theory panel, images are color representations of the strength of the carbon-carbon bonds (corresponding to tight-binding hopping parameters t), and the curves shown are calculated electronic density of states (DOS) from tight-binding (TB) theory. In the experiment panel, images are STM topographs acquired after molecular assembly, and the curves shown are normalized conductance spectra obtained from the associated nanomaterial.
In this work we combine a central tenet of condensed matter physics—how electronic band structure emerges from a periodic potential in a crystal—with the most advanced imaging and atomic manipulation techniques afforded by the scanning tunnelling microscope. We synthesize a completely artificial form of graphene (“molecular graphene”) in which Dirac fermions can be materialized, probed, and tailored in ways unprecedented in any other known materials. We do this by using single molecules, bound to a two-dimensional surface, to craft designer potentials that transmute normal electrons into exotic charge carriers. With honeycomb symmetry, electrons behave as massless relativistic particles as they do in natural graphene. With altered symmetry and texturing, these Dirac particles can be given a tunable mass, or even be married with a fictitious electric or magnetic field (a so-called gauge field) such that the carriers believe they are in real fields and condense into the corresponding ground state. We show an array of new phenomena emerging from: patterning Dirac carrier densities with atomic precision, without need for conventional gates (corresponding to locally uniform electric fields which adjust chemical potential); spatially texturing the electron bonds such that the Dirac point is split by an energy gap (corresponding to a nonuniform scalar gauge field); straining the bonds in such a way that a quantum Hall effect emerges even without breaking time-reversal symmetry (corresponding to a vector gauge field). Along the way, we make use of several theoretical predictions for real graphene which have never been realized in experiment.
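For readers who want to see where the "massless Dirac fermion" claim comes from, here is a minimal nearest-neighbour tight-binding sketch of the honeycomb band structure (hopping t and bond length set to 1; these are not the parameters of the actual CO/Cu system):

```python
import numpy as np

t = 1.0  # nearest-neighbour hopping (arbitrary units)
# Nearest-neighbour bond vectors of a honeycomb lattice (bond length 1)
deltas = np.array([[0.0, 1.0], [np.sqrt(3) / 2, -0.5], [-np.sqrt(3) / 2, -0.5]])

def bands(kx, ky):
    """Two tight-binding bands E(k) = +/- t|f(k)| of the honeycomb lattice."""
    f = np.exp(1j * (kx * deltas[:, 0] + ky * deltas[:, 1])).sum()
    return t * abs(f), -t * abs(f)

# Dirac point K of this geometry: the gap closes and the dispersion is linear around it
K = np.array([4 * np.pi / (3 * np.sqrt(3)), 0.0])
for dq in (0.0, 0.01, 0.02, 0.04):
    e_plus, _ = bands(K[0] + dq, K[1])
    print(f"|k - K| = {dq:.2f}: E = {e_plus:.4f}")  # E grows linearly: E = v_F * |k - K|
```

The printed energies scale linearly with the distance from K, which is exactly the "massless relativistic" dispersion; opening a gap (giving the carriers mass) corresponds to texturing the hoppings, as described above.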
Nature - Designer Dirac fermions and topological phases in molecular graphene
Phantom Fields
A version of molecular graphene in which the electrons respond as if they're experiencing a very high magnetic field (red areas) when none is actually present. Scientists from Stanford and SLAC National Accelerator Laboratory calculated the positions where carbon atoms in graphene should be to make its electrons believe they were being exposed to a magnetic field of 60 Tesla, more than 30 percent higher than the strongest continuous magnetic field ever achieved on Earth. (A 1 Tesla magnetic field is about 20,000 times stronger than the Earth's.) The researchers then used a scanning tunneling microscope to place carbon monoxide molecules (black circles) at precisely those positions. The electrons responded by behaving exactly as expected — as if they were exposed to a real field, but no magnetic field was turned on in the laboratory. Image credit: Hari Manoharan / Stanford University.
Schrödinger Meets Dirac
Visualization depicting the transformation of an electron moving under the influence of the non-relativistic Schrödinger equation (upper planar quantum waves) into an electron moving under the prescription of the relativistic Dirac equation (lower honeycomb quantum waves). The light blue line shows a quasiclassical path of one such electron as it enters the molecular graphene lattice made of carbon monoxide molecules (black/red atoms) positioned individually by an STM tip (comprised of iridium atoms, dark blue). The path shows that the electron becomes trapped in synthetic chemical bonds that bind it to a honeycomb lattice and allow it to quantum mechanically tunnel between neighboring honeycomb sites, just like graphene. The underlying electron density in a honeycomb pattern (lower part of image, yellow-orange) is the quantum superposition formed from all such electron paths as they transmute into a new tunable species of massless Dirac fermions. Image credit: Hari Manoharan / Stanford University.
Designer Electrons
This graphic shows the effect that a specific pattern of carbon monoxide molecules (black/red) has on free-flowing electrons (orange/yellow) atop a copper surface. Ordinarily the electrons behave as simple plane waves (background). But the electrons are repelled by the carbon monoxide molecules, placed here in a hexagonal pattern. This forces the electrons into a honeycomb shape (foreground) mimicking the electronic structure of graphene, a pure form of carbon that has been widely heralded for its potential in future electronics. The molecules are precisely positioned with the tip of a scanning tunneling microscope (dark blue). Image credit: Hari Manoharan / Stanford University.
Molecular Graphene PNP Junction Device
Stretching or shrinking the bond lengths in molecular graphene corresponds to changing the concentrations of Dirac electrons present. This image shows three regions of alternating lattice spacing sandwiched together. The two regions on the ends contain Dirac "hole" particles (p-type regions), while the region in the center contains Dirac "electron" particles (n-type region). A p-n-p structure like this is of interest in graphene transistor applications. Image credit: Hari Manoharan / Stanford University.
The observation of massless Dirac fermions in monolayer graphene has generated a new area of science and technology seeking to harness charge carriers that behave relativistically within solid-state materials. Both massless and massive Dirac fermions have been studied and proposed in a growing class of Dirac materials that includes bilayer graphene, surface states of topological insulators and iron-based high-temperature superconductors. Because the accessibility of this physics is predicated on the synthesis of new materials, the quest for Dirac quasi-particles has expanded to artificial systems such as lattices comprising ultracold atoms. Here we report the emergence of Dirac fermions in a fully tunable condensed-matter system—molecular graphene—assembled by atomic manipulation of carbon monoxide molecules over a conventional two-dimensional electron system at a copper surface. Using low-temperature scanning tunnelling microscopy and spectroscopy, we embed the symmetries underlying the two-dimensional Dirac equation into electron lattices, and then visualize and shape the resulting ground states. These experiments show the existence within the system of linearly dispersing, massless quasi-particles accompanied by a density of states characteristic of graphene. We then tune the quantum tunnelling between lattice sites locally to adjust the phase accrual of propagating electrons. Spatial texturing of lattice distortions produces atomically sharp p–n and p–n–p junction devices with two-dimensional control of Dirac fermion density and the power to endow Dirac particles with mass. Moreover, we apply scalar and vector potentials locally and globally to engender topologically distinct ground states and, ultimately, embedded gauge fields wherein Dirac electrons react to ‘pseudo’ electric and magnetic fields present in their reference frame but absent from the laboratory frame. We demonstrate that Landau levels created by these gauge fields can be taken to the relativistic magnetic quantum limit, which has so far been inaccessible in natural graphene. Molecular graphene provides a versatile means of synthesizing exotic topological electronic phases in condensed matter using tailored nanostructures.
14 pages of supplemental material
Molecular graphene assembly
Molecular graphene assembly. A movie shows the nanoscale assembly sequence of an electronic honeycomb lattice built by manipulating individual CO molecules on the Cu(111) two-dimensional electron surface state with the STM tip. The video comprises 52 topographs (30 × 30 nm², bias voltage V = 10 mV, tunnel current I = 1 nA) acquired during the construction phase and between manipulation steps.
Tunable Pseudomagnetic Field
Molecular Manipulation
|
6c12391fd4b3eb20 | Guide to Vector Version (sspropv, sspropvc)
The vector version of SSPROP solves the coupled nonlinear Schrödinger equations for propagation in a birefringent fiber. The code can model birefringence, differential group delay (PMD), polarization-dependent dispersion, and polarization-dependent loss, all in the context of nonlinear propagation.
The user may choose from two different algorithms, depending on whether the birefringent beat length is shorter or longer than the nonlinear length.
In general, the birefringent axes of an optical fiber may not be oriented in the x- and y- directions, but in some other arbitrary direction ψ. Moreover, the two orthogonal eigenstates of the fiber may not even be linearly polarized — they could be circularly or even elliptically polarized. This would be the case, for example, in fiber that is twisted or spun during or after fabrication. To handle the most general case, SSPROP allows the user to separately specify not only the dispersion β(ω) and loss (α) for each of the two eigenstates, but also the exact polarization states to which these coefficients apply.
The most general elliptical polarization state can be described by two angular parameters, ψ and χ. As depicted in the figure below, ψ describes the angle that the polarization ellipse makes with the x-axis, and χ is an angular quantity that describes the degree of ellipticity.
Positive values of χ correspond to right-handed polarization states, while negative values of χ correspond to left-handed polarization states. χ = 0 corresponds to linear polarization, while χ = π/4 corresponds to circular polarization. On the Poincaré sphere, 2ψ and 2χ describe the longitude and latitude of the principal eigenstate, respectively.
When specifying the eigenstates of the fiber, it is sufficient to give ψ and χ for one eigenstate because the second eigenstate is known to be orthogonal to the first.
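The eigenstate geometry can be made concrete with a few lines of MATLAB. The following is a sketch, not part of SSPROP itself: the component formulas are the standard parameterization of an elliptical polarization state, and the sign and ordering conventions are assumptions rather than something taken from the SSPROP source.

psi = pi/6;  chi = pi/12;        % example eigenstate angles (illustrative)
% Jones vector of the principal eigenstate "a": R(psi)*[cos(chi); 1i*sin(chi)]
ea = [cos(psi)*cos(chi) - 1i*sin(psi)*sin(chi);
      sin(psi)*cos(chi) + 1i*cos(psi)*sin(chi)];
% Orthogonal eigenstate "b"
eb = [-sin(psi)*cos(chi) + 1i*cos(psi)*sin(chi);
       cos(psi)*cos(chi) + 1i*sin(psi)*sin(chi)];
disp(abs(ea'*eb))                % orthogonality check: prints (near) zero
u_xy = [1; 0];                   % example field: x-polarized input
u_ab = [ea'; eb'] * u_xy;        % components in the (a,b) eigenstate basis

The last line is one way of realizing the unitary transformation described in the next section: projecting the Jones vector onto each eigenstate.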
Elliptical Basis Method
Any polarization state may be decomposed into a linear combination of the two orthogonal eigenstates of the fiber, which we label “a” and “b”. If ux and uy represent the two components of the electric field vector in the x-y basis (i.e., the Jones vector), then the corresponding components ua and ub can be calculated using the unitary transformation:
where ψ and χ describe the principal eigenstate (“a”) of the fiber. In this new basis, the linear portions of the wave equations for ua and ub are decoupled. The linear portion of the propagation can therefore be performed separately on ua and ub in the spectral domain, using a technique analogous to that used in the scalar case.
where ha and hb are given by:
When performing the nonlinear part of the propagation, the appropriate coupled nonlinear equations (with linear terms omitted) are [Menyuk, JQE 1989]:
where χ quantifies the degree of ellipticity of the eigenstates as described above. The (…) terms in the above expression denote additional nonlinear terms that average to zero when the birefringent beat length is much shorter than the nonlinear length. These additional terms are also identically zero when the eigenstates are circularly polarized.
After propagating through the desired number of steps, the final solution can be rotated from the elliptical basis (ua, ub) back into the Jones basis (ux, uy) by using the inverse transformation:
Circular Basis Method
When the fiber birefringence is small, i.e., when the beat length is comparable to or larger than the nonlinear length, the additional terms in the nonlinear equations cannot be neglected. In this case, it is necessary to decompose the field into left- and right-hand circular polarization components before computing the nonlinear propagation. The circular components u+ and u− can be computed from ux and uy using the following unitary transformation:
With this transformation, the coupled nonlinear equations for u+ and u− become (again omitting linear terms):
where in this case no additional nonlinear terms have been neglected. Because the eigenstates of the fiber are not in general circularly polarized, the linear portion of the propagation is not as simple in the circular basis. After some algebra, one finds that the linear propagation can be computed in the spectral domain, using the following matrix multiplication:
where the matrix elements hnm are given by:
and ha and hb are the same quantities given earlier in the context of the elliptical basis method.
After propagating through the desired number of steps, the final solution can be rotated from the circular basis (u+, u−) back into the Jones basis (ux, uy) by using the inverse transformation:
The circular basis method is more accurate than the elliptical basis method because it does not neglect any nonlinear terms. The disadvantage of the circular method is that the stepsize dz must always be much smaller than the beat length in order to produce meaningful results. If the beat length is smaller than the nonlinear length, this requirement forces one to use a stepsize that is much smaller than the nonlinearity would otherwise dictate.
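As a rule of thumb, the step size can be checked against the beat length before committing to the circular method. The sketch below assumes betapa, betapb, and dz are already defined as described in the argument list further down; it uses the standard relation L_B = 2π/Δβ0, and the safety factor of ten is an arbitrary choice, not an SSPROP requirement.

% Sketch: step-size sanity check for the circular-basis method.
dbeta0 = abs(betapa(1) - betapb(1));   % birefringence (rad/m); see notes below
LB = 2*pi/dbeta0;                      % beat length (m)
if dz > LB/10                          % safety factor of 10 is arbitrary
    warning('dz = %g m is coarse relative to the beat length %g m', dz, LB);
end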
A summary of the syntax and usage can be obtained from Matlab by typing “help sspropv” or “help sspropvc”.
The compiled mex file (sspropvc) can be invoked from Matlab using one of the following forms:
[u1x,u1y] = sspropvc(u0x,u0y,dt,dz,nz,alphaa,alphab,betapa,betapb,gamma);
[u1x,u1y] = sspropvc(u0x,u0y,dt,dz,nz,alphaa,alphab,betapa,betapb,gamma,psp,method);
[u1x,u1y] = sspropvc(u0x,u0y,dt,dz,nz,alphaa,alphab,betapa,betapb,gamma,psp,method,maxiter);
[u1x,u1y] = sspropvc(u0x,u0y,dt,dz,nz,alphaa,alphab,betapa,betapb,gamma,psp,method,maxiter,tol);
The last four arguments assume a default value if they are left unspecified. The corresponding Matlab m-file can be invoked using a similar syntax by replacing sspropvc with sspropv.
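A minimal end-to-end call might look like the sketch below. All parameter values are illustrative rather than taken from the SSPROP documentation, and the two-output form matches the Output Arguments section. The loss coefficients could equally be length-nt vectors built on the frequency grid from tools/wspace.m, as described in the argument list that follows.

% Sketch: propagate a Gaussian pulse launched at 45 degrees through 1 km
% of linearly birefringent fiber (illustrative numbers throughout).
nt = 2^12;  dt = 0.05;                  % ps; total window T = nt*dt
t  = ((0:nt-1).' - nt/2) * dt;          % time grid (ps)
u0 = exp(-t.^2/(2*5^2));                % ~1 W peak Gaussian, 5-ps width
u0x = u0/sqrt(2);  u0y = u0/sqrt(2);    % 45-degree linear launch
dz = 10;  nz = 100;                     % 10-m steps, 1 km total
alphaa = 0;  alphab = 0;                % lossless
betapa = [ 0.5,  0.1e-3, -0.02];        % [rad/m, ps/m, ps^2/m] for state "a"
betapb = [-0.5, -0.1e-3, -0.02];        % equal-and-opposite beta0, beta1
gamma  = 2e-3;                          % W^-1 m^-1
[u1x,u1y] = sspropvc(u0x,u0y,dt,dz,nz,alphaa,alphab,betapa,betapb,gamma, ...
                     [0,0], 'elliptical');

With these numbers the beat length (about 6 m) is far shorter than the nonlinear length (about 1 km), so the elliptical method is the appropriate choice.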
sspropvc may also be invoked with a single input argument, to specify options specific to the FFTW routines (discussed below):
sspropvc -option
Input Arguments
u0x, u0y
vector (N) Input optical field, specified by two length-N vector time sequences. u0x represents the x-component of the complex, slowly-varying envelope of the optical field, and u0y represents the corresponding y-component. The fields should be normalized so that |u0x|^2 + |u0y|^2 is the optical power.
dt scalar The time increment between adjacent points in the vectors u0x and u0y.
dz scalar The step-size to use for propagation
nz scalar (int) The number of steps to take. The total distance propagated is therefore L = nz*dz
alphaa, alphab scalar or vector (N) The linear power attenuation coefficients for the two eigenstates of the fiber. Here we use the labels “a” and “b” to denote the two eigenstates, which need not coincide with the x-y axes. Polarization-dependent loss is modeled by using different numbers for alphaa and alphab. The loss coefficient may optionally be specified as a vector of the same length as u0x, in which case it will be treated as a vector that describes a wavelength-dependent loss coefficient α(ω) in the frequency domain. (The function wspace.m in the tools subdirectory can be used to construct a vector with the corresponding frequencies.)
betapa, betapb vector Real-valued vectors that specify the dispersion for each eigenstate (a, b) of the fiber. The dispersion can be specified to any polynomial order by using a betap vector of the appropriate length. Birefringence is accommodated by making the first elements betapa(1) and betapb(1) unequal. Differential group delay, or polarization mode dispersion, is likewise treated by making the second elements betapa(2) and betapb(2) different. (See note below for a more complete discussion.) The propagation constant can also be specified directly by replacing the polynomial argument betap with a vector of the same length as u0x. In this case, the argument betap is treated as a vector describing the propagation constant β(ω) in the frequency domain. (The function wspace.m in the tools subdirectory can be used to construct a vector with the corresponding frequencies.)
gamma scalar A real number that describes the nonlinear coefficient of the fiber, which is related to the mode effective area and the nonlinear refractive index n2.
psp scalar or vector (2) Principal eigenstate of the fiber, specified as a 2-vector containing the angles ψ and χ (see discussion above), psp = [ψ, χ]. If psp is a scalar, it is interpreted to be ψ, and χ is then taken to be zero. This corresponds to a linearly-birefringent fiber whose axes are oriented at an angle ψ with respect to the x-y axes. If psp is left completely unspecified, it assumes a default value of [0,0], which means that the fiber eigenstates are linearly polarized along the x- and y- directions.
method string String that specifies which method to use when performing the split-step calculations. The following methods are recognized: “elliptical” and “circular”. When method = “elliptical”, sspropv will solve the equations by decomposing the input field into the (in general) elliptical eigenstates of the fiber. This method is appropriate only in fibers where the birefringent beat length is much shorter than the nonlinear length. When method = “circular”, sspropv will instead solve the equations by decomposing the input field into a right- and left-circular basis. This method is more accurate, but requires that the step size be small compared to the beat length.
maxiter scalar (int) The maximum number of iterations to make per step. If the solution does not converge to the desired tolerance within this number of iterations, a warning message will be generated. Usually this means that the chosen stepsize was too large. (default = 4)
tol scalar Convergence tolerance: controls to what level the solution must converge when performing the symmetrized split-step iterations in each step. (default = 10^-5)
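The interplay of maxiter and tol can be pictured with the conceptual scalar sketch below. This is not the SSPROP source: H_half stands for an assumed frequency-domain linear half-step operator, u0 is the field entering the step, and the relative-norm convergence test is one plausible choice of stopping criterion.

% Conceptual sketch of one symmetrized split step with trapezoidal iteration.
N = @(u) gamma * abs(u).^2;                 % nonlinear phase rate (1/m)
u_lin = ifft(H_half .* fft(u0));            % first linear half step
u1 = u_lin .* exp(1i * N(u0) * dz);         % initial guess (explicit step)
for it = 1:maxiter
    u_new = u_lin .* exp(1i * (N(u0) + N(u1))/2 * dz);  % trapezoid rule
    if norm(u_new - u1) / norm(u1) < tol
        u1 = u_new;  break                  % converged to tolerance
    end
    u1 = u_new;                             % otherwise iterate again
end
u1 = ifft(H_half .* fft(u1));               % second linear half step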
Output Arguments
u1x, u1y
vector (N) Output optical field, specified as two length-N vectors.
Several internal options of the routine can be controlled by separately invoking sspropvc with a single argument:
sspropvc -savewisdom
sspropvc -forgetwisdom
sspropvc -loadwisdom
The first command will save the accumulated FFTW wisdom to a file that can be used later. The second command causes sspropvc to forget all of the accumulated wisdom. The last command forces FFTW to load the wisdom file from the current directory. The wisdom file (if it exists) is automatically loaded the first time sspropvc is executed. The name of the wisdom file is “fftw-wisdom.dat” for the double-precision version of the program and “fftwf-wisdom.dat” for the single-precision version. This can be changed by recompiling the code. The wisdom files can be and are shared between the vector and scalar versions of SSPROP. Note that the wisdom files are platform- and machine-specific. You should not expect optimal performance if you use wisdom files that were generated on a different computer.
The following four commands can be used to designate the planner method used by the FFTW routines in subsequent calls to sspropvc.
sspropvc -estimate
sspropvc -measure
sspropvc -patient
sspropvc -exhaustive
The default method is patient. These settings are reset when the function is cleared or when Matlab is restarted.
These options are only available in the compiled version of the routine.
Slowly-varying Envelope: In the scalar version of SSPROP, it is customary to factor out the rapidly oscillating terms exp(i(β0z – ωt)) from the field in order to obtain an equation for the slowly-varying envelope. In SSPROP, this is achieved by setting the first element of the dispersion polynomial, betap(1), equal to 0. In a fiber that has birefringence, it is no longer clear how to factor out these rapid oscillations: should we use β0x or β0y? One approach is to factor out exp(iβ0xz) from the x-component of the field and exp(iβ0yz) from the y-component. However, with this definition we can no longer regard u0x and u0y as a Jones vector that describes the polarization state. Therefore, we instead choose to factor out a common phase variation exp(iβ0avgz) from both components of the field. Provided we choose β0avg to be the average of β0x and β0y, the resulting fields ux and uy will still be slowly-varying envelopes that describe the instantaneous Jones vector of the optical signal. In SSPROP, this is accomplished numerically by choosing betapa(1) and betapb(1) to be equal and opposite, such that betapa(1) – betapb(1) = Δβ0.
Moving Reference Frames: A similar consideration applies to the difference in group velocity. In a birefringent fiber, the group velocities can be different for the x- and y- polarizations. Therefore we solve the nonlinear Schrödinger equations in a reference frame that is moving at a velocity in between vx and vy. This amounts to making a change of variables T = t – β1avgz, where β1avg is the average value of β1. In SSPROP, this is accomplished numerically by choosing betapa(2) and betapb(2) to be equal and opposite, such that betapa(2) – betapb(2) = Δβ1.
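Putting the two notes together, residual birefringence and DGD are handed to SSPROP as equal-and-opposite entries. The numbers below are illustrative assumptions, not values from the documentation.

% Sketch: encode Delta-beta0 and Delta-beta1 symmetrically (see notes above).
db0   = 1.2;        % beta0x - beta0y (rad/m), assumed known for the fiber
db1   = 0.3e-3;     % beta1x - beta1y (ps/m), i.e. DGD per unit length
beta2 = -0.02;      % ps^2/m, taken common to both eigenstates here
betapa = [ db0/2,  db1/2, beta2];
betapb = [-db0/2, -db1/2, beta2];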
Units and Dimensions: The dimensions of the input and output quantities are arbitrary, as long as they are self-consistent. For example, if |u0|^2 has dimensions of Watts and dz has dimensions of meters, then the nonlinearity parameter gamma should be specified in W^-1 m^-1. Similarly, if dt is given in picoseconds and dz is given in meters, then the dispersion polynomial coefficients betap(n) should have dimensions of ps^(n-1)/m. It is of course possible to solve the normalized, dimensionless nonlinear Schrödinger equation by setting some of the input terms to 1 or –1 as appropriate.
Periodicity: SSPROP uses the FFT (DFT) to calculate the spectrum, which implies that the input and output signals are periodic in time. The periodicity is determined by the time increment and the length of the input vector, T = dt*length(u0x). Because of the periodic boundary conditions used by the DFT, care must be taken: if the optical field at the edges of the window is not negligible, it must be continuous in both magnitude and phase across the boundary.
Iterations and Tolerance: The last two optional parameters, maxiter and tol, are related to the symmetrized split-step iteration algorithm. The algorithm uses a trapezoid integration formula to approximate the effect of the nonlinearity over a distance dz, but this approximation requires knowledge of the field at the subsequent distance step. This problem is solved by using an iterative approach. maxiter represents the maximum number of iterations performed per step, and tol is a positive dimensionless number that tells the algorithm what level of convergence is required before the iteration stops. |
9ca2f6f6ab3332ba | So a positive and a positive wave function create a bonding orbital where the probability of finding an electron is summed while a positive and a negative create an anti-bonding orbital with a lower electron probability in the region between them leading to a repulsion. My confusion stems from not having any idea as to what a negative wave function is representing - can anyone give me some physical intuition on how this negative wave function could be correlated to something in reality?
(I tried asking this question on physics.SE but it would seem that in physics you just don't talk about or use the wave function until after you square it)
And imagine, the wavefunction can easily be even imaginary! (Or, better, complex-valued function). – ssavec Aug 19 '15 at 8:29
Don't think about "negative" as being different than a "positive" wave function. What's really important is that the signs are either the same or opposite. It's the difference between signs in wavefunctions that gives rise to the interesting stuff. – user19026 Sep 21 '15 at 5:05
@ssavec, and it is not even that it can, rather that it is a complex-valued function. – Wildcat Oct 4 '15 at 20:23
The wavefunction of a particle actually has no physical interpretation until an operator is applied to it, such as the Hamiltonian operator, or until you square it, which gives the probability of the particle being at a certain place. So having a negative wavefunction doesn't mean anything physically. However, for a particle in a box, if you solve the momentum operator for the $n^\text{th}$ stationary state, you get two solutions: $+\hbar k$ and $-\hbar k$, where $k = n\pi/L$. The plus and minus signs mean that there is a $50$% chance the particle is moving from left to right and a $50$% chance it is moving from right to left.
In the case of summing two wavefunctions to get the wavefunction of a molecular orbital, a negative wavefunction doesn't mean anything at all. The reason an antibonding orbital is formed when a positive and a negative wavefunction are added is simply that when you add a positive and a negative number, you get $0$ or a really small number. This is known as destructive interference. The new wavefunction will therefore contain a region where its values equal $0$. So when you square this new wavefunction of the molecular orbital, you find there is essentially zero probability of finding the electron in that position (known as a nodal plane), and hence it is called an antibonding orbital. However, note that just because there is a negative region in the wavefunction doesn't mean it will always add to form an anti-bonding MO. For example, if you add 2 negative wavefunction values (known as constructive interference), you get a large negative value, which when squared becomes a very large positive value. This means there is a large probability of the electron being in that region, and hence it will be a bonding MO.
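To see the sign bookkeeping at work, here is a one-dimensional caricature in MATLAB; Gaussians stand in for 1s orbitals, and every number is illustrative.

% Sketch: interference of two "1s" orbitals (Gaussians) centered at x = -1, +1.
x    = linspace(-5, 5, 1001);
phi1 = exp(-(x+1).^2);                 % orbital on the left atom
phi2 = exp(-(x-1).^2);                 % orbital on the right atom
psi_bond = phi1 + phi2;                % in-phase: constructive interference
psi_anti = phi1 - phi2;                % out-of-phase: node at x = 0
plot(x, psi_bond.^2, x, psi_anti.^2);
legend('|\psi_{bond}|^2', '|\psi_{anti}|^2');
% Note that (-phi1 - phi2) squares to the same density as (phi1 + phi2):
% only the relative sign of the two orbitals is observable.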
• but how can it make sense to say "a negative wavefunction doesn't mean anything at all" when it is the distinguishing feature between the combination of two wavefunctions being attractive or repulsive? – norlesh Aug 19 '15 at 11:23
@norlesh The phase of the wavefunction (i.e. the sign) doesn't mean anything. Let me ask you this about the $2p_z$ orbital (can be any p orbital, but I choose this one to avoid potential nitpicking). In this orbital, there is a positive lobe and a negative lobe. What is the difference between them? Do you think there is a "correct" way to label which lobe is positive and which lobe is negative? – orthocresol Aug 19 '15 at 12:38
@norlesh I'm going off soon so I'm just going to write, maybe a somewhat suitable analogy would be the two mirror images of a molecule, (R) and (S) isomers. They're different, but they react exactly the same way (let's assume their environment is achiral; our universe does not distinguish between positive and negative wavefunctions at all, just like an achiral environment does not distinguish between (R) and (S) isomers). – orthocresol Aug 19 '15 at 12:44
@norlesh Let's say you react these two isomers with a different, enantiomerically pure, molecule to form (R,S) and (S,S) diastereomers. These are "different" molecules, they have different physical and chemical properties. These are analogous to the antibonding and bonding molecular orbitals. Only the relative phase matters; the absolute phase doesn't. As Nanoputian already said, the wavefunction $\psi$ itself actually has no physical interpretation; Max Born actually won a Nobel Prize for finally interpreting $|\psi|^2$ as the probability density (amongst other things, of course). – orthocresol Aug 19 '15 at 12:49
In addition to @Nanoputian's excellent description of constructive and destructive interference in the formation of MOs, I want to provide a more mathematical explanation for why the phase of the wavefunction does not matter.
Finding the wavefunction
The time-independent Schrödinger equation, in one dimension, reads:
$$\hat{H}\psi(x) = E\psi(x)$$
It can be shown that, if a wavefunction $\psi = \psi(x)$ satisfies the above equation, the wavefunction $k\psi$ (with $k \in \mathbb{C}$) also satisfies the above equation with the same energy eigenvalue $E$. This is because of the linearity of the Hamiltonian:
$$\begin{align} \hat{H}(k\psi) &= k(\hat{H}\psi) \\ &= k(E\psi) \\ &= E(k\psi) \end{align}$$
There are several conditions that a wavefunction must satisfy for it to be physically realisable, i.e. for it to represent a "real" physical particle. In this discussion, the relevant condition is that the wavefunction must be square-integrable (or normalisable). In mathematical terms:
$$\langle\psi\lvert\psi\rangle = \int_{-\infty}^{\infty}\!\lvert\psi\rvert^2\,\mathrm{d}x < \infty$$
This means that there has to exist a constant $N \in \mathbb{C}$ such that $N\psi$ is normalised:
$$\int_{-\infty}^{\infty}\!\lvert N\psi\rvert^2\,\mathrm{d}x = \lvert N \rvert^2 \!\!\int_{-\infty}^{\infty}\!\lvert\psi\rvert^2\,\mathrm{d}x = 1$$
From this point onwards, we will assume that we will already have found the suitable normalisation constant such that the wavefunction $\psi$ is already normalised. In other words, let's assume $\langle\psi\lvert\psi\rangle = 1$, because we can. Now let's consider the wavefunction $-\psi$, which is equivalent to $N\psi$ with $N = -1$. Is this new wavefunction normalised?
$$\begin{align} \int_{-\infty}^{\infty}\!\lvert -\psi\rvert^2\,\mathrm{d}x &= \lvert -1 \rvert^2 \!\!\int_{-\infty}^{\infty}\!\lvert\psi\rvert^2\,\mathrm{d}x \\ &= \int_{-\infty}^{\infty}\!\lvert\psi\rvert^2\,\mathrm{d}x \\ &= 1 \end{align}$$
Of course it is. So, what I've written so far basically says: if $\psi$ is a normalised solution to the Schrödinger equation, so is $-\psi$.
In fact, you could go one step further. Using exactly the same working as above, you could show that if $\psi$ is a normalised solution to the Schrödinger equation, the wavefunction $(a + ib)\psi$ would also be one, as long as $a^2 + b^2 = 1$. (If you like exponentials, that's equivalent to saying $a + ib = e^{i\theta}$.) I've illustrated this idea on this diagram:
If $\psi$ is a real-valued, one-dimensional wavefunction, you could plot it on a graph against $x$. The wavefunction $i\psi$ would then be exactly the same shape, just coming out of the plane of the paper ($\theta = 90^\circ$). You could have the wavefunction $(1+i)\psi/\sqrt{2}$. It would be pointing outwards of the plane of the paper by $\theta = 45^\circ$, exactly halfway in between $\psi$ and $i\psi$, but exactly the same shape. However, physics doesn't know where the plane of your paper is, so all these wavefunctions are equally admissible. From the point of view of the system, they are all the same thing.
Using the wavefunction
"But wait! If the wavefunction is negative, what about the values of momentum, position, and energy that you calculate? Will they become negative?"
"Good question, myself!"
Well, for starters, one thing that you use the wavefunction for is to find the probability density, $P(x)$. According to Max Born's interpretation of the wavefunction, this is given by $P(x) = \lvert \psi \rvert ^2$. Let's say that the probability density described by the negative wavefunction $-\psi$ is a different function of $x$, called $Q(x)$:
$$\begin{align} Q(x) = \lvert -\psi \rvert ^2 &= \lvert -1 \rvert^2 \lvert \psi \rvert ^2 \\ &= \lvert \psi \rvert ^2 \\ &= P(x) \end{align}$$
So, the probability density described by the negative wavefunction is exactly the same. In fact, the probability density described by $i\psi$ is exactly the same as well.
Now let's talk about observables, such as position $x$, momentum $p$, and energy $E$. Every observable has a corresponding operator: $\hat{x}$, $\hat{p}$, and $\hat{H}$ respectively (the Hamiltonian has a special letter because it's named after William Hamilton). You use these operators to calculate the mean value of the observable. I'll give an example regarding the momentum. If you want to find the mean momentum, denoted $\langle p \rangle$, you would do the following:
$$\begin{align} \langle p \rangle &= \langle\psi\lvert\hat{p}\rvert\psi\rangle \\ &= \int_{-\infty}^\infty\!\psi^*\hat{p}\psi\,\mathrm{d}x \end{align}$$
I'm going to call the value of that integral $p_1$. Now, let's do the same thing. Let's assume that the mean momentum for the negative wavefunction is not necessarily the same value. Let's call the new mean momentum something else, like $p_2$.
Before we go on, I'm going to establish that the momentum operator $\hat{p} = -i\hbar\frac{\mathrm{d}}{\mathrm{d}x}$ is also linear. If you doubt it, you can test it out using the definition of linearity in the very first link I posted. In fact, all quantum mechanical operators corresponding to observables are linear. Therefore $\hat{p}(-\psi) = -\hat{p}\psi$ and so:
$$\begin{align} p_2 &= \langle -\psi\lvert\hat{p}\lvert-\psi\rangle \\ &= \int_{-\infty}^\infty\! (-\psi)^*\hat{p} (-\psi)\,\mathrm{d}x \\ &= (-1)^2\!\!\int_{-\infty}^\infty\! \psi^*\hat{p}\psi\,\mathrm{d}x \\ &= \int_{-\infty}^\infty\! \psi^*\hat{p}\psi\,\mathrm{d}x \\ &= p_1 \end{align}$$
So, if we talk about the ground state of the particle in a box of length $L$, no matter whether you use the positive wavefunction
$$\psi_1 = \sqrt{\frac{2}{L}}\sin{\left(\frac{\pi x}{L}\right)}$$
or the negative wavefunction
$$-\psi_1 = -\sqrt{\frac{2}{L}}\sin{\left(\frac{\pi x}{L}\right)}$$
or the complex wavefunction
$$i\psi_1 = i\sqrt{\frac{2}{L}}\sin{\left(\frac{\pi x}{L}\right)}$$
you'll get exactly the same values for average position $(= L/2)$, average momentum $(= 0)$, and average energy $(= h^2/8mL^2)$ (the word average is redundant here, since this is a stationary state, but whatever).
Everything that I have said so far can be easily generalised to three dimensions. It can also be generalised to linear combinations of stationary states, i.e. solutions of the time-dependent Schrödinger equation.
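If you want to convince yourself numerically, the sketch below checks that a global phase leaves the expectation values of the particle-in-a-box ground state unchanged. It uses units with $\hbar = m = 1$, and the grid size is an arbitrary choice.

% Sketch: <x> and <p> are identical for psi, i*psi, and -psi.
L = 1;  n = 2000;  x = linspace(0, L, n).';  dx = x(2) - x(1);
psi = sqrt(2/L) * sin(pi * x / L);            % ground state, real-valued
for theta = [0, pi/2, pi]                     % global phases: 1, i, -1
    phi  = exp(1i*theta) * psi;
    xbar = real(trapz(x, conj(phi) .* x .* phi));               % expect 0.5
    pbar = real(trapz(x, conj(phi) .* (-1i*gradient(phi, dx)))); % expect ~0
    fprintf('theta = %4.2f:  <x> = %.4f,  <p> = %+.1e\n', theta, xbar, pbar);
end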
A note about molecular orbitals
"Okay, but what happens when you combine atomic orbitals to make molecular orbitals? You have constructive interference from the positive + positive, and destructive interference from the positive + negative, but what about the negative + negative combination?"
"Good question, myself!"
Let's talk about the $\ce{H2}$ molecule. The proper way to find the molecular orbitals is to solve the Schrödinger equation for the entire system, which is really difficult to do. One way to find approximate forms of the MOs is to make linear combinations of atomic orbitals; this method is called the LCAO approximation. Let's call the 1s orbital of the hydrogen on the left $\phi_1$ and the 1s orbital of the hydrogen on the right $\phi_2$. From the previous sections, we have already established that as far as the hydrogen atom is concerned, the individual phases of $\phi_1$ and $\phi_2$ do not matter. So, let's assume for simplicity's sake that their phases are both positive.
Now, from what you already know, you can get two molecular orbitals $\psi_1$ and $\psi_2$:
$$\begin{align} \psi_1 &= \phi_1 + \phi_2 \\ \psi_2 &= \phi_1 - \phi_2 \end{align}$$
These are the bonding and antibonding orbitals respectively (at least, to within a normalisation constant, which I'm not going to care about here because the details are irrelevant). Now let's talk about those combinations that we missed out.
$$\begin{align} -\phi_1 - \phi_2 &= -\psi_1 \\ -\phi_1 + \phi_2 &= -\psi_2 \end{align}$$
We already said that $\psi_1$ and $\psi_2$ are (approximations of) solutions to the Schrödinger equation. That means that, from what we've talked about earlier, $-\psi_1$ and $-\psi_2$ must also be (approximations of) solutions to the Schrödinger equation. They must have the same energies as $\psi_1$ and $\psi_2$. In fact, as far as the molecule knows (and cares), they are the same thing as $\psi_1$ and $\psi_2$.
Now, since the individual phases of the atomic orbitals do not matter, if you really wished to, you could declare to the whole world that you define:
$$\phi_3 = \phi_1 \text{ and } \phi_4 = -\phi_2$$
i.e. left hydrogen 1s orbital, $\phi_3$, is positive and right hydrogen 1s orbital, $\phi_4$, is negative. In that case, you can construct the molecular orbitals:
$$\begin{align} \psi_1 &= \phi_3 - \phi_4 \\ \psi_2 &= \phi_3 + \phi_4 \end{align}$$
The coefficients of the atomic orbitals would have to be different, since you insisted on having them in different phases - however, the outcome is the same! You get one bonding MO and one antibonding MO.
Upvoting simply for "Good question, myself!" (well, and it's a good answer, too!) – hBy2Py Oct 4 '15 at 16:05
Your confusion for its most part stems from two things:
1. You are using the wrong terms such as "positive" or "negative" wave function which shows that you don't quite understand what is going on here mathematically. This is especially important taking into account the second point.
2. You are looking for some physical intuition behind a purely mathematical model which has very little to do with physical reality.
Let us for the sake of argument define a positive function of a single real variable $x$ as a real-valued function that takes only positive values, i.e. such that $f(x) > 0$ for all $x$, and a negative function as a function that takes only negative values, i.e. such that $f(x) < 0$ for all $x$. I think this is what OP has in mind when talking about "positive" and "negative" wave functions. But then the way OP uses these adjectives to describe what he has trouble with is surely wrong, since, given a wave function $\psi(x)$, how do you know whether it is a positive or a negative one? Obviously, you don't know, and consequently, you can't tell whether $-\psi(x)$ is positive or negative either.
This whole LCAO-MO business has simply nothing to do with wave functions being positive or negative; rather, it is all about forming two orthogonal linear combinations of two atomic orbitals (AO) to form two essentially different molecular orbitals (MO). Nanoputian described this simple LCAO-MO formalism for the particular case of the $\ce{H2}$ molecule in detail in his answer, and here I just want to warn OP (and others) about the danger of taking this picture literally.
MOs are formed from AOs in a mathematical and not a physical sense. It is just a game of numbers: take AOs and write MOs as linear combinations of them. One should not think that this primitive LCAO-MO model with bonding and anti-bonding MOs formed as a result of constructive and destructive interference of the corresponding AOs describes real physical processes. As I said, this model has very little to do with reality.
In addition to other answers and very briefly, a bond is formed when there is a lot of electron density between the nuclei, this lowers the overall potential energy. Conversely when there is not much electron density between atoms the potential energy is higher and this is anti-bonding. We identify anti-bonding by looking for nodes between atoms. As a rule of thumb, the more nodes the higher the energy hence more anti-bonding.
When adding or subtracting or multiplying wavefunctions together the normal rules of maths apply because wavefunctions are represented by normal mathematical equations. We have to remember that often they are presented in polar coordinates instead of the usual x,y,z so may not seem so familiar, e.g. shapes of s, p, d orbitals.
The sign of a wavefunction $\psi$ is just part of its mathematical description; some wavefunctions have positive and negative parts, e.g. spatial part of p orbitals, some are represented by complex numbers e.g. some d orbitals. We can let $-\psi$ be the same as $\psi$ as the probability of finding the particle in a small region of space $\tau $ to $\tau+d\tau$ is $\psi^2d\tau$.
The measured value of some property, X, e.g. position, is given by the expectation (or average) value $\langle X\rangle=\int \psi^* X \psi\, d\tau$, so again the sign of $\psi$ does not matter. ($\psi^*$ is the complex conjugate, which is only important if $\psi$ is a complex number.) The same holds when adding wavefunctions to make a linear combination: e.g. $\psi = s_1-s_2+s_3$ is the same as $-\psi=-s_1+s_2-s_3$.
Chemists often use diagrams with the sign of the wavefunction labelled as $\pm$ or coloured in a particular way to help understand bonding. It's a very useful shortcut compared to calculating things out. The figure below shows some examples.
[Figure: examples of orbital sign combinations]
The p orbitals add 'in phase' as $p_z+p_z$ which is bonding and out of phase as $p_z-p_z$ which is anti-bonding. The shading on the orbitals shows the 'sign' of the wavefunction. In $p_z-p_z$ the difference is not zero everywhere because the orbitals have different origins since they are displaced along the bond axis. The difference is zero at the midpoint, however, and this is called a node.
The rules are that the same shading adds, different shading subtracts.
The $s+p_z$ is non-bonding as the s orbital has overlap with both parts of the p orbital and the two overlaps cancel.
The three s orbitals can add, all three with the same shade, to make a bonding combination (grey or white, it's the same: $s_1+s_2+s_3$ is equivalent to $-s_1-s_2-s_3$), or subtract, say as $s_1-s_2+s_3$, which is anti-bonding.
You have this misconception because you forget that a wave function is a wave! Waves can be either in phase or out of phase. All this terminology of negative and positive waves drove your mind to mix unrelated concepts.
Imagine an isolated H atom. You can ask: what is the phase of its wave function? And the answer will be: who cares? What phase are you talking about? The phase only has physical meaning if it can be observed, as in the superposition of waves. So asking for the meaning of a "negative" wave function is senseless.
Think about the superposition of EM waves or string waves. You don't ask for the meaning of a "negative" string wave; you speak about a string wave that is out of phase with respect to another. The above is exactly the same, but with quantum mechanical waves.
The use of "+" and "-" signs are a handy mathematical tool to describe the phase difference, but you can use another thing like colors.
I expect that the above clarification will help you to really understand how LCAO-MO theory works.
|
6c10159962c4cc78 | Fundamental Reality XIV: Conclusion
Putting everything together, I have argued—using plausibility arguments, not strictly deductive proofs—that it is reasonable to believe in a metaphysically ultimate being, and that given the reality of Ethics or Consciousness, it is probable that it is more like a mind than like a set of equations. More specifically, my arguments pointed to just one eternal God, existing necessarily, who is all-powerful, all-knowing, and good, who is the source of all other things, yet is distinct from them, and who appreciates mathematical beauty, conscious life, and ethical behavior.
Of course there are a lot of mysteries left in this view. Even though God is supposed to be the explanation of all other things, we cannot predict, from this information alone, exactly which laws of physics God would select, nor whether he would intervene in the Universe thus created in other ways. Not sharing the divine knowledge about what is best, we have to make additional stipulations about the world he has created, adding to the complexity of any specific Theistic worldview.
But then again, Naturalism by itself cannot tell us either (apart from experiment) which specific laws of nature to expect. All views contain a certain amount of irreducible mystery. The difference is that Naturalism hides or denies the mysteries, and pretends to solve problems that it cannot possibly really solve, while Theism puts them up-front and center and does the best it can to fit them into a consistent picture of the world.
It does not matter so much whether you are convinced that my conclusions have to be right. Maybe there were several places in the argument where I selected one of two paths, but you think it was a toss-up, or that the other way was somewhat more plausible. That's part of the hazards of armchair reasoning. Personally I am primarily concerned with the arguments for Theism as a prelude to Christianity, which is founded on the Resurrection of Christ and the testimony of God's Spirit, not philosophical discourse. But plausibility arguments still have their place. If you are thirsting after goodness and beauty and meaning, and if you learn that there could well be a fountain capable of slaking that thirst, shouldn't this increase your incentive to search for it?
A purely intellectual philosophy can only get you so far. Actual religion involves opening yourself up to the divine being, over a continued period of time, allowing God to get hold of you. Any approach must be by his initiative rather than yours, but your attitude can determine whether or not you are receptive to his advances. Without this, philosophy is sterile. If it advances only to savoir, conceptual knowledge, it might as well have remained atheistic. All of these philosophical arguments are only there to help you make further steps, to connaître or knowledge by acquaintance. Arguing for the existence of the Good is one thing; tasting the reality of the Holy is another.
When that happens, the purely intellectual arguments—and the doubts which are a necessary corollary of any honest attempt to evaluate them—can be kicked aside like a ladder that has served its purpose, and replaced with something far better.
4 Responses to Fundamental Reality XIV: Conclusion
1. TY says:
Dr Wall,
I think if I were an atheist or a naturalist, these 14 well-reasoned posts on Fundamental Reality would, at least, give me pause to question my belief or its premises. If I were an agnostic, I might be persuaded that my agnosticism was baseless.
I like the deductive proofs because even if one does not accept the conclusion, one might still accept the rules of logic, and if the argument is good (the premises are plausible, the argument is valid or strong, and the premises are more plausible than the conclusion), so much the better. Deductive proofs are by nature economical in the use of words, but they assume the person one is trying to convince is familiar with the premises. Usually, that doesn’t happen and the exchange becomes fruitless.
The great advantage of your approach is that it recognises there are many premises for the argument that God exists. Then you work out each premise using all knowledge from science, metaphysics, ethics, philosophy, etc. And from this collection of arguments, a plausible conclusion emerges or grows out; not forced or contrived.
I think I am better equipped to defend theism this “bottom-up” way, as St Polkinghorne would say, and if and when I do use a deductive argument (still a useful form of argument) I know what I’m saying.
Thanks again for this series.
2. Aron Wall says:
Thanks TY. That's pretty much what I was aiming for. I also hope that people who don't believe in God (for whom I took great care not to overstate my case) will be shaken by my arguments. But I can't help but notice that most of the people commenting have been theists.
If any atheists or agnostics have read through the entire series, I would be curious to know which of my arguments they found plausible.
3. John Michael Salinas says:
I'm sorry for going a little off topic but Dr. Wall will you ever be participating in a Veritas forum or formal debate? It would be a great experience to hear you explain your views with a moderator.
4. Parag Dixit says:
Dear Aron,
Finally found what I think is an appropriate place for this question (consciousness, fundamental reality being some of the keystones of this series) but feel free to relocate the comment to a better (or newer) section. I had emailed you with this but looking back that was a real embarrassing word-soup, hopefully this is a tad better.
One of the biggest conceptual leaps with QM is the one of giving up realism (in the sense that things have objective properties prior to measurement). Now, if these objects are but appearances, they have to appear to something that is not an appearance and that is where consciousness - (not necessarily a soul, mind - I want to avoid those terms because even if true they veer the discussion off) - as a "truly" real entity that "truly" perceives comes in - one that registers a definite value rather than a superposition. And with that there is a suite of well-discussed issues (Schrodinger's cat, Wigner's friend, inter-subjective agreement etc.)
My question is on the last of these (inter-subjective agreement) : More specifically Herve Zwirn's thesis on how this might work. It is a dualistic theory in that it makes a distinction between the observer's physical state (i.e. his brain) and the perceiving part.
Page 35 is the meat of his thesis and his application of this model to EPR correlations on Pg 42 is interesting.
Two salient points of the thesis:
A) The physical state of the universe (including the observer's brain) evolves unitarily through the Schrödinger equation and remains in a superposed state. However the consciousness of each observer can be aware of only one branch of the superposed state and cannot perceive the superposition
B) Any state vector is relative to a given observer and cannot be considered as absolute. Each observer gets her own state vector for all she is able to observe. This state vector evolves deterministically through the Schrödinger equation and remains always a superposition of states of entangled systems. The physical brain of each observer is part of this universal but relative superposed state and the consciousness of the observer is hung-up to one branch so that the perceptions of the observer are confined to this branch and her daughters when a new measurement is done.
The interesting part - as described in the mathematical development of the theory - is how it guarantees inter-subjective agreement - even if there isn't any (i.e. it won't disagree with the predictions from QM formalism).
Obviously the last part is disconcerting. So is how the theory does not seem parsimonious (while there is no many-worlds splitting, it does have each observer split off with his own branch). I was wondering if you've read this and, if so, what your thoughts are (or those of any of the guests on here).
|
c633d248e0c18fe8 | Lecture 01. General Course Information and Introduction to Quantum Mechanics
Alternative Title: Lecture 01. Quantum Principles: General Course Information and Introduction to Quantum Mechanics
Shaka, Athan J.
CC Attribution - ShareAlike 4.0 International
University of California Irvine (UCI)
UCI Chem 131A Quantum Principles (Winter 2014) Instructor: A.J. Shaka, Ph.D. Description: This course provides an introduction to quantum mechanics and principles of quantum chemistry, with applications to nuclear motions and the electronic structure of the hydrogen atom. It examines the Schrödinger equation and studies how it describes the behavior of very light particles; the quantum description of rotating and vibrating molecules is compared to the classical description, and the quantum description of the electronic structure of atoms is studied. Index of Topics: 0:05:31 Light 0:12:05 Quantization 0:19:10 The Photoelectric Effect 0:28:59 Photon Momentum
hi and welcome to can 131 a physical chemistry I'm Dr. shocker and I'll be leading you through the series of lectures on physical chemistry starting with an Adams up approach of quantum mechanics is our 1st topic which is kind of a rude introduction to the subject but here we go again some general cost information and then an introduction to quantum mechanics as seen through the eyes of a chemist rather than a physicist we have slightly different viewpoints on some the here's a preliminary problem practice problem 1 here is a jumbled word and the question is what word can you make indeed another 5 letters you might find it pretty hard but the answer is you can make to but you have to have that word in your vocabulary if you're going to make it and if you're in the fashion business are you working with fabrics you may happen to know that is a type of fabric but if not you could play around with that word for a long time mathematical equations that we're going to be dealing with the quite similar we have to rearrange symbols in ways that are legal and we have to somehow seek where to go that means 2 things you have to have the basic vocabulary to understand what it means and you have to have enough background material that you know what a legal moving and I will assume that you've Fred in the book the background material the fundamentals and if you haven't done that your 1st task right we hear some general information the lecture in attendance is optional except on exam days will have 1 problem do each Friday and we only have 1 problem because they're quite hard actually and we have a website and on it will have what's new but avoid sending me e-mail just ask person after class and warnings do not start the problem on Thursday evening please realize that 1 problem and its multiple parts and it's really quite difficult actually completed and so have a go clarify your understanding and then try again the TAC and I'll go over some of the problems at the end of the chapter and clarify any ambiguous wording of the problem all were counts for 20 per cent of the great in this class I'm not a big fan of making exams count a lot Doby to midterms and a final exam and reading I can tell you is not the best way to learn physical chemistry reading in chemistry is like tying your shoes to run a race you have to tie your shoes but that doesn't count it's not training what's training in chemistry is practicing solving problems visualizing what things look like and trying to work quickly and accurately the textbook we're going to use quantum matter and change by Atkins DePaul and Friedman and will cover the 1st 5 chapters but I warn you that textbooks are getting a little bit like Amazon . 
com they're trying to be all things to all people it's not necessary that you memorize lots of facts or small details of who did what was important is to just try to understand the ideas and develop an intuition it can be done even in a field like quantum mechanics so Chapter 1 quantum mechanics is the study of the small specifically things that we can't see mobutu closely the world appears to be digital in other words the small packets of everything and Adams are the smallest unit of a particular element of course even the ancient Greeks realized that it might be true that things had 1 the smallest indivisible amount now we know that that is in fact true but they're very very small and very light 1 neutral carbon carbon 12 Adam has a massive only about 2 times minus 23 grams electron is much much smaller has a massive 9 times 10 to the minus 28 grams and these very small things are very unfamiliar to us because we can't see anything that small so much human intuition is guided by things that around the same size as us things that are much much bigger than us or much much smaller than we are are hard to understand and we have to use careful experiments to try to figure out what's going on and likewise protons and neutrons occur in integer units you can have 1 proton or not but you can't have passed and you can't have Hassan electron this is pretty much similar to currency there's a minimal amount of currency it's different for every currency but there's still a minimum amount in the U.S. system that's 1 cent so you can have 1 cent but you can't have anything smaller and have been legitimate currency light heat Newton actually believed that light also had a currency that there was a minimum amount of light was corpus because light seem to travel in straight lines just like a bullet fired from a gun but later investigations in which light went through narrow slits or pinholes showed interference phenomena very much like water waves and so the work of Hoy against cousin really to abandon his cough muscular theory of light because it didn't seem like that theory could explain this kind of wave phenomena waves have positive or negative phase and so they can subtract as well as at a particle can either be there be positive or 0 but it can't be negative in the usual sense and so we wouldn't expect small numbers of marbles to somehow give interference phenomena free-fire them through so for example here's a picture of 2 slightly different wavelengths of 0 4 0 away and if we add them up you can see that their positions where In space here because this is wavelength there is a very small response and then there is another place where there's a very large response and then there's a small response and and this is very much similar to each but sound waves for example where the tune of violin and you hear the difference when you aren't in tune you hear this wall while while sort of beating and you're seeing in this picture that kind of beating occurring and it's a universal phenomenon with weight white light it turns out has a mixture of different wavelengths and that was sort of 1st surmised when you passed white light through a prism and you resolve it into a rainbow but that doesn't necessarily mean that white light is composed of different colors because it could be that the colors some have come from the president and so is only really when you took another prism and then took the same rainbow and really
turned it into a white light that people became convinced that white light was really a mixture of colors and there was nothing coming necessarily from the prison but when you have a convex surface near of flat surface there is a difference in refractive index and the condition for these deeds to add up or subtract depends on the wavelength and this is a phenomenon called Newton's rings a pattern of constructive interference that for example you can see if you spilled oil which tends to beat up on water on a wet surface and we used to always look about as a kid because cars in Salt Lake City always have a lot of oil leaking out of them in those days and it rained and we would spend a lot of time looking at these beautiful patterns so we can see here is an example we can see how the colors very systematically with wavelength and much like there were to be eating patterns In the end that graphite showed you this will repeat more than once depending on the total thickness of of by contrast the media there's another experiment that unfortunately made it difficult to understand why this waves and that was a black body radiation this is a classic thing where you think you understand everything and you have a very simple calculations and there are some very very smart people like Lord Rayleigh and you do the calculation and many compare with the experiment this is the essence of science if you think you understand something you ought to be able to predicted or explain why you can't predicted but at this time there was Maxwell's equations pretty much people believe that these wave equations really described life in totality and that light was an electromagnetic wave and that theory was wildly so we could explain the unification of electricity and magnetism why the speed of light hazards Speed diffraction reflection refraction how lenses and so on and so forth but the 2 crucial experiments show that this description of light was complete incomplete and the first one was black body radiation so what is black body radiation well if I take a black body by which he could just mean a lump of coal a lamp black and heated up in a vacuum so that there's no error doesn't burst into flames it'll and this global gives a characteristic spectrum of colors just like white light has a characteristic spectrum of colors and it was known that the color depended only on the temperature hence we speak of red-hot and things like that and greatly with the correction by genes calculated the spectrum of waif-like life and what they found is that it would be much much more likely that you would get a lot of high-frequency radiation because the chance of getting each frequency was equally likely and there are a lot more high frequencies than there are low frequencies and this was called the ultraviolet catastrophe because it basically predicted that if we opened something like a kitchen other than outcome a town of X-rays and very high frequency light and kill us basically and that's obviously not what happened so we thought there was a big big enough soul-searching and there was a lot of thought about what might be going on in some theories were put forward that were later proven not to be quite right but it was Max plant that found the correct solution but what what he found that 50 observations although he didn't really quite believe it himself but he could follow through physics was the light was quantized that is there is a minimum amount of light and the smallest amount was related to the frequency Of the light and 
Once the frequency is chosen, the energy of the light quantum is E = hν; h has subsequently been called Planck's constant in honor of its discoverer. So the frequency could apparently be continuous, anything you want, but once it's chosen there is a minimum amount, and if you don't have the minimum amount, there is no light.
And since the minimum amount depends on the frequency, higher frequencies, if there's not enough energy around, can't reach the minimum amount needed to make a single particle, or quantum, of light, and therefore those frequencies get cut off. That gets around this problem of the X-rays killing us when we open the oven. So here is an equation that just gives the relative amount of light per unit frequency according to the Rayleigh–Jeans law, and you can see that it's basically a parabola in the frequency (it goes as the frequency squared), so as the frequency goes up, the amount of light that's predicted goes up faster and faster and faster, and that's not what's observed. What Planck did, instead of using this continuous equation, was to put in the quantized photon energy, and he obtained a formula which at first blush looks completely different: it has hν in the numerator and this funny exponential of hν/kT in the denominator. We can compare these two formulas on a graph and see what happens. In this formula, let me remark, k is Boltzmann's constant, h is Planck's constant, and T is the temperature in Kelvin; and of course in physical chemistry, or chemistry of any kind, you never quote temperature in anything other than Kelvin, because if you do, you're very likely to be wrong when you plug it into a formula. If you quote the temperature in Kelvin and say it's a balmy 298 Kelvin, you may be eccentric, but you're never wrong; in chemistry, if you use any other units, you're very likely going to be wrong, so you have to be careful. Here is a graph: Planck is in green and Rayleigh–Jeans is in the sort of pink color, and you can see that for a temperature of 300 Kelvin they agree at very, very low frequency, where the quantization matters very little; as the frequency increases, Planck actually follows the observed distribution almost exactly, while Rayleigh–Jeans diverges more and more from it and would continue to go up. Now, we can make a connection between the two theories, because we know they agree when the frequency is small and the energy is low.
So we can take the parameter hν/kT, assume it's much, much less than 1, and use the fact from calculus that we can expand eˣ in a power series, 1 + x + …; if x is small, then x² is very small, so we can throw it away. That means we can write e^(hν/kT) as 1 + hν/kT, and if we make that substitution, we find that as long as hν is not high compared with the thermal energy kT, we get exactly the same formula that Rayleigh and Jeans got; but at high frequency, where there was a problem, it starts to deviate. That's very nice, because it shows you get the same result as the other guys got where their theory seemed to work, and that's what scientists often look for. Now, the physical meaning of kT is that it's a measure of the random thermal energy that's available at temperature T. When something's hot, things are moving, they're colliding, they're banging around, and there's lots of energy available to excite things and to create photons. But if the temperature is very cold, then there's hardly any energy around, and you just don't have enough energy to make the minimum amount of a photon; that's why cold things don't glow, while things that are heated up finally do start to emit a glow, like an electric element on a stove. So at high frequency, the problem is we just don't have enough energy to make even a single photon; we run out before we get there, and so the distribution has to fall off sharply. The analogy I can give you is this: suppose the smallest coin were a thousand dollars; then a lot of people would have no money at all, because they don't have that much. The reason we don't notice the digital nature of light in day-to-day observation, the way we might notice grains of sand, is the relative size of these two constants k and h: Boltzmann's constant is about 1 × 10⁻²³ joules per Kelvin, and Planck's constant is about 6 × 10⁻³⁴ joule-seconds, so there is a factor of about 10¹¹, or 100 billion, between them. That means that at low frequency there are usually plenty of photons around, while at high frequency we start to notice that hν, the quantum of light, has a minimum value.
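To make the low-frequency agreement concrete, here is a minimal Python sketch (an added illustration, not part of the lecture). It assumes the standard spectral energy densities, 8πν²kT/c³ for Rayleigh–Jeans and (8πhν³/c³)/(e^(hν/kT) − 1) for Planck, and evaluates both at 300 K:

```python
import numpy as np

h = 6.626e-34  # Planck's constant, J*s
k = 1.381e-23  # Boltzmann's constant, J/K
c = 2.998e8    # speed of light, m/s

def rayleigh_jeans(nu, T):
    """Spectral energy density that grows as nu**2 forever: the UV catastrophe."""
    return 8 * np.pi * nu**2 * k * T / c**3

def planck(nu, T):
    """Planck's law: the exponential cuts off frequencies with h*nu >> k*T."""
    return (8 * np.pi * h * nu**3 / c**3) / np.expm1(h * nu / (k * T))

T = 300.0  # temperature, K
for nu in (1e10, 1e12, 1e14):  # low, medium, high frequency in Hz
    x = h * nu / (k * T)
    print(f"nu = {nu:.0e} Hz (h*nu/kT = {x:.3g}): "
          f"RJ = {rayleigh_jeans(nu, T):.3e}, Planck = {planck(nu, T):.3e}")
```

At 10 GHz, where hν/kT ≈ 10⁻³, the two values agree to better than 0.1 percent; at 100 THz, where hν/kT ≈ 16, the Planck value is exponentially suppressed, which is exactly the cutoff described above.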
The other experiment that really sealed the deal with respect to the corpuscular, or quantized, nature of light was the photoelectric effect. It's not quite such a simple experiment as black-body radiation, but it's still a pretty simple experiment, and it's this: you evacuate a chamber, you have a clean metal surface, and you shine light on the surface. What was observed is that electrons would come off the surface; this is how you make a cathode-ray tube, in fact. Electrons would come off the surface of the metal, come into the vacuum, and be ejected. Now, if light were a wave, what should happen is that the wave comes in and sort of excites the electrons, more and more and more, and then boom, finally, just like pushing a swing: if you push it enough times, you can get the person moving. That means that when you turn on the light, there should be a delay before the electrons are ejected (you can measure that by chopping the light, turning it on and off as quickly as you need to), and the energy of the electrons that come out should depend on the intensity of the light. But what was observed were these three things. First, the photoelectrons, when they came out, were ejected essentially instantaneously; there was no delay. Second, below a threshold frequency there were no photoelectrons at all. And third, turning up the intensity of low-frequency light made no difference; you still didn't get any photoelectrons. Einstein interpreted this experiment in terms of photons: particles of light come in, and each particle of light can hit an electron; one particle hits one electron, and if the one particle that hits the electron doesn't have enough oomph to kick the electron out, then the electron doesn't come out. Having a lot of particles, none of which can individually knock the electron out, doesn't help you; you need one particle that has enough energy to do it in one go. And he found that the particles had an energy exactly in accordance with Planck's formula. So now we have another experiment indicating that light, depending on its frequency, has a quantized energy; we call that quantum a photon, and we think of it as a particle. As I said, if one photon hits an electron and the energy is good enough, it kicks it out; otherwise the electron stays in the metal. Here's an idealized view of the experiment; I apologize that the gray of the potassium is hard to see. Potassium metal is used because it's very easy to kick electrons out of it. There are three wavelengths, and the energies are quoted in electron volts; an electron volt is the energy that one electron gains by being dropped through one volt of potential difference, and it's 1.6 × 10⁻¹⁹ joules. If we shine in 700 nm light, which is red, we get no electrons. If we shine in green light at 550 nm, we get electrons coming out, and they come out with a maximum speed, which we can measure by timing when they reach a detector, of about 3 × 10⁵ meters per second. And if we use more energetic light, toward the violet end of the visible spectrum, we also get photoelectrons out instantaneously, but now the speed of the electrons is higher, so the electrons have more energy. So here's a diagram that shows the energy balance: it takes a certain amount of energy, in this case two electron volts for potassium, to get the electron to part ways from the potassium atoms.
Whatever energy is left over (and we believe energy is conserved, even in this crazy realm of quantum mechanics) must become the kinetic energy of the electron, and that takes it up to the top of the diagram. You can see that if the photon itself doesn't have enough energy to get up to the red bar, then there is no way the electron is going to be ejected. The energy φ is called the work function of the metal; it's two electron volts for potassium, but different for other metals. The kinetic energy, as I showed, is just the difference, so we can write the equation: the kinetic energy of the electron is the difference between the photon energy and the energy needed to pry it out of the material. That means ½mv² = hν − φ, and solving for v gives v = √(2(hν − φ)/m), where m is the mass of the electron. The mathematics here gives us a clue that we might have a problem: if hν is not up to the red bar, if it's less than φ, then we get the square root of a negative number, which would give us an imaginary velocity, and that is kind of hard to interpret. Just because you get an imaginary number from an equation doesn't mean it's wrong (we're going to see plenty of imaginary numbers), but in this case, when we interpret it as a velocity, we'd have to figure out what an imaginary velocity actually meant in terms of what we would see.
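Here is a short Python sketch of that energy balance (an added illustration; the 2.0 eV work function for potassium and the photon energies are the values quoted in the lecture). Rather than returning an imaginary speed below threshold, it reports that no photoelectrons come out:

```python
import math

EV = 1.602e-19   # joules per electron volt
M_E = 9.109e-31  # electron mass, kg

def photoelectron_speed(photon_ev, work_function_ev):
    """Maximum photoelectron speed in m/s, or None below the threshold."""
    kinetic = (photon_ev - work_function_ev) * EV  # leftover energy, joules
    if kinetic < 0:
        return None  # h*nu < phi: no photoelectrons, not an imaginary speed
    return math.sqrt(2 * kinetic / M_E)  # from (1/2) m v^2 = h*nu - phi

phi = 2.0  # work function of potassium, eV
for e_photon in (1.77, 2.25, 3.1):  # roughly red, green, and violet photons
    v = photoelectron_speed(e_photon, phi)
    print(f"{e_photon} eV ->", "no electrons" if v is None else f"{v:.2e} m/s")
```

The green and violet cases reproduce the roughly 3 × 10⁵ and 6.2 × 10⁵ meters per second quoted on the diagram, and the red case correctly gives nothing.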
Now, it turns out we can only be sure of the energy of the electrons from the potassium atoms that are very near the surface, where the light hits and ejects the electron. Potassium has a silvery surface, almost like a mirror, so the light doesn't really penetrate through like a window and come out the other side. If we eject an electron from an atom further down, the electron might bump into another atom on the way out, slow down, and heat up the material. So we just look for the maximum-energy electrons that come out, and that tells us the value for the surface. Incidentally, that tells us that this kind of photoelectron spectroscopy is a very good technique for interrogating a solid surface, because you only extract electrons from very near the surface of the specimen, which means you won't see the stuff underneath. In some cases, if you make an alloy or some material, what you find is that the surface tends to be enriched in one kind of element and the bulk tends to be enriched in another kind, and that can be very important if you're designing parts that are going to fit together: you think the surface has a certain composition, and in fact, because some atoms prefer to be on the surface because of the way they bond, the composition there is much, much different. In that case you can use photoelectron spectroscopy, you can do this Einstein experiment, and since the work functions are now known for all the elements, you can easily figure out who's there. Usually we know the wavelength of the light rather than the frequency, and since the wavelength times the frequency is the speed of the wave, which for light is given the special symbol c, we can also write the energy in terms of the wavelength: the energy of the photon is E = hc/λ. Finally, if you use very energetic photons, like gamma rays, the speed of the electron can approach c, the speed of light, and in that case we have to use a different formula, which I won't derive: the total relativistic energy, given by E² = p²c² + m²c⁴. I think you can see that if p vanishes, if you have no kinetic energy, then E² = m²c⁴, and that's where E = mc² came from; the m here is just the rest mass, the mass of the particle when it's not moving. The relativistic momentum p is still just mass times velocity, but the mass is not the rest mass; it gets corrected by a formula that involves the ratio of the speed of the particle to the speed of light. Interestingly, this formula also lets us figure out the momentum of a photon. We start with the formula and note that a photon has zero rest mass, so the energy is just E² = p²c²; taking the square root, the momentum is E/c, and since light is quantized that's hν/c, and since c is the frequency times the wavelength, we end up with p = h/λ. That means that photons with short wavelength have high momentum. And there's a proposed application of this phenomenon: a spaceship that set sail a couple of years ago with a giant solar sail. You don't have to worry about air resistance if you're out in the middle of space, and with a reflective coating you can actually use the momentum of the photons from the sun to steer your spaceship around; you don't need any fuel or anything else, you can just use the sun itself to push you around and turn things this way and that. It's an interesting application of photon momentum, and I'll let you speculate about how fast you think such a spaceship could go in open space.
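As a quick added numerical sketch with standard constants: the momentum of a single green photon from p = h/λ, and the radiation-pressure force on a perfectly reflecting sail, which for a reflected power P is F = 2P/c, since each reflected photon has its momentum reversed:

```python
H = 6.626e-34  # Planck's constant, J*s
C = 2.998e8    # speed of light, m/s

wavelength = 550e-9        # green light, m
p_photon = H / wavelength  # p = h / lambda
print(f"momentum of one photon: {p_photon:.2e} kg m/s")  # about 1.2e-27

# A sail reflecting P watts of light feels a force F = 2 P / c.
P = 1000.0  # 1 kW of reflected sunlight
print(f"sail force: {2 * P / C:.2e} N")  # about 6.7e-6 N, tiny but free
```

The force is minuscule, but in open space it acts continuously with no fuel spent, which is the whole appeal of the solar sail.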
Now, there is a rule of thumb to take note of: even if you're going pretty fast, you don't have to worry about relativity unless you're at about 10 percent of the speed of light; at 10 percent of the speed of light or so, you may have to start worrying a little bit about relativity, but normally in chemistry we don't worry about relativistic corrections. The only point of introducing this formula was to show that the photon, although it has no mass, has momentum. Let's do a practice problem: let's confirm that the speeds we quoted on the potassium photoelectric-effect diagram are in fact the right speeds. We could use either the wavelength of the light or the energy in electron volts; as I told you, one electron volt is 1.6 × 10⁻¹⁹ joules, but since we have the work function for potassium in electron volts, I think that's probably going to be easier. So for the 2.25 eV photons, the green light, we can set up our energy-balance equation, that the kinetic energy is the difference between the photon energy and the work function, and then we can solve for the velocity, and the speed of the electron comes out. Notice that when I solved it, I put in the units. In chemistry it is very, very important to put in the units and make sure all the units cancel except the one you want. So, taking 2 × 0.25 eV, dividing it by the mass of the electron in kilograms, and converting the eV to joules, and then remembering that a joule is a kilogram meter squared per second squared (I can remember that because force is mass times acceleration, acceleration is meters per second squared, so force is kilogram meters per second squared, and a joule is a newton meter), I see the kilograms go away and the joules go away, and I have the square root of meters squared per second squared, which is meters per second. Whenever you do a problem in chemistry, you want to analyze it in exactly this way. If you just write down numbers with no units, you're very likely going to have some funny units left over, like the square root of eV over joules or something else, and without the units there to let you know that things did not work out, you'll just get the wrong numerical answer. And if you write the wrong numerical answer on an exam, or submit it in a report, or build a bridge with it and it falls down, nobody is interested in why that happened; they're only interested in the mistake.
So we went through this, and you can see the way I quote it: I keep a lot of digits, I put the ones that I don't think are significant into parentheses, and then I can round, to get 3 × 10⁵ meters per second. The same thing with the violet light: you just put in slightly different numbers, and again I get 6.2 × 10⁵ meters per second. Always retain the insignificant digits in case you want to continue the calculation further on, and never, ever round in the middle of a calculation. If your calculator will hold 12 digits, keep 12 digits; if it'll hold 15 digits, keep 15 digits; never round. You can see in my examples I take the time to write 1.602 and so on; I look up the exact value, because if I do a lot of calculation and I start rounding things here and there and everywhere, by the time I get to the end my accuracy is poor, and sometimes, if I'm unlucky, it can be very poor. So at least keep all the digits: people killed themselves trying to get those digits, those numbers are years of work, and to just say, well, I can't even be bothered to punch them into my calculator, is really almost a crime. So: in many experiments light behaves like a particle. The question is, if it behaves like a particle in these experiments, but it behaves like a wave with the two slits and other things like that, which is it, a particle or a wave? The answer is that it apparently depends on the nature of the experiment. Light itself seems to have both qualities at once, and even though we think of them as completely different kinds of things, apparently these two qualities are not mutually exclusive. But if light is a particle, then the thought is that if I shoot one particle at a time at the two slits, the interference phenomena would have to go away, because the reason we're getting interference is that all these waves are going through together and then adding and subtracting. If they aren't there simultaneously, if there's only one particle going through at a time, tick, tick, tick, then we should see just two piles: go through this slit, you get a pile here; go through that slit, a pile there. But interestingly enough, this fails. In the two-slit experiment you get exactly the same interference pattern even if you can verify you're issuing photons one at a time: you shoot one photon, then another, then another; you detect where they end up; and at the end of the day, you get an interference pattern. The only way you can really try to explain that is that the photon, which we're claiming is a particle when it suits us, is somehow slipping through both slits; in other words, the particle can interfere with itself. This is a very, very foreign idea in terms of what we understand in the physical world, where if we have a particle and the particle goes through one slit, we know it went through that slit, and it doesn't somehow break up and then reconstitute itself on the other side. This was one of the most discomfiting things about this new theory of quantum mechanics, because it seemed to suggest something very foreign to our physical intuition, and even foreign to our common-sense notion of what a particle is. So if we fire one photon at a time and do photon counting, we find we have to fire a lot of photons to get good statistics, but when we do: if we have one slit, we get the pattern on the left, and if we have two slits, we get the pattern on the right; and we get the pattern on the right with two slits whether we fire the photons one at a time and take forever to do it, or whether we fire a bunch of them at once and they all go through.
That means that whatever the wave nature of light is like, it doesn't seem to have much to do with water waves. Even stranger: an electron has a certain amount of charge, it has a certain mass, and I've never seen half an electron. But if we fire electrons from an electron gun, like in an electron microscope, one at a time, and then look to see what happens, we would expect to get two piles of shot: if the electron went through this slit it would land here, at a certain angle we'd get a big pile, and we'd get a pile over there for the ones that apparently went through the other slit, because we can't control exactly, like a marksman, where the electrons go. Whichever slit the electrons go through, we should get two piles. But if you do this and fire them one at a time, what you find is that you do not get two lumps. What you find is shown on the next slide. This is a brilliant experiment that was done at Hitachi using an electron microscope, and you can see here just how interesting it is. When you have 10 electrons, you have just 10 spots (it looks to me like they may have slightly miscounted; there may be 11 if you look closely), but anyway, you get 10 spots, and you notice that when an electron hits the screen you get a single spot, as if it were a very tiny particle, and the slits are much farther apart than the size of the spots. When you do 200 electrons, you get a shotgun pattern; when you do a lot more, you start to see ridges like a wave; and when you finally do hundreds of thousands of electrons, you see this clear corrugated tin-roof appearance in the pattern of intensity, which, even though we shot the electrons through one at a time, seems to indicate that each electron goes through both slits.
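For reference, here is a small added sketch of the ideal two-slit pattern, I(x) ∝ cos²(πdx/(λL)) for slit separation d, wavelength λ, and screen distance L; this is the corrugated tin-roof intensity that builds up one particle at a time. The numbers below are illustrative placeholders, not values from the Hitachi experiment:

```python
import math

wavelength = 50e-12  # electron de Broglie wavelength, m (placeholder value)
d = 1e-6             # slit separation, m (placeholder value)
L = 1.0              # distance from slits to screen, m

def fringe_intensity(x):
    """Ideal two-slit interference (single-slit envelope ignored)."""
    return math.cos(math.pi * d * x / (wavelength * L)) ** 2

# Crude text plot of the ridges across 0.2 mm of screen
for i in range(21):
    x = (i - 10) * 1e-5  # screen position, m
    print(f"{x * 1e3:+.2f} mm |" + "#" * int(40 * fringe_intensity(x)))
```

The fringe spacing λL/d sets how far apart the ridges are; make d larger or λ smaller and the ridges crowd together.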
This is even worse than the photon, because I don't have any particular picture of a photon; it was a mathematical thing that came up, so maybe that doesn't bother me too much. But I certainly do have a picture of an electron, and if the electron is interfering with itself, the question you might ask is: which slit did the charge go through? Which slit did the mass go through? Why is it that whenever I look at an electron I see exactly the same mass and exactly the same charge, and then when I fire them through these two slits I apparently can't say? The short answer is that if you look at which slit the electron goes through, you get two lumps of shot. If you try to intercept the electron, if you try to watch it going through, you get two lumps of shot; the electron says, in effect, OK, you're going to look at what I'm doing? Then I'm going to go through either the left slit or the right slit, that's it. But if you don't look at which slit it takes, if you don't set up to watch the electrons, you get the interference pattern. Normally we think: I want to see the monitor, I want to see whatever, I simply look at it. But in fact, here's what's happening when I look at something.
We've got the lights on, and because I'm so heavy and light is so light, the light isn't moving me around; it's not doing anything to me. But if I've got an electron going through a slit, I can't just see it; it's too tiny. I need to shine some light on it, and when I shine light on it (and I know a photon can even kick an electron right out of a metal), the photon interacts with the electron and changes it. So by trying to observe where the electron is, I actually change the nature of the experiment. That's very frustrating, because when I don't look, it does something incredible that I can hardly believe; and when I do look, it behaves exactly the way I would have thought it would behave. So we'll close out this lecture there, and in the next lecture I want to talk about the connection that a man by the name of de Broglie made between the wavelength of these particles and the wavelength of light, which was a major advance. It's kind of interesting that that's the one big thing he did; he did that, and it was great, and he never did much else after that. OK, thanks very much.
|
e262f38f3435b958 | 2018/07/04 Wed 16:30 - 17:30
The University of Edinburgh
In recent years, there has been significant progress on theoretical understanding of singular stochastic partial differential equations (SPDEs) with rough random forcing. The main difficulty in studying singular SPDEs lies in making sense of products of distributions, thus giving a precise meaning to an equation after appropriately modifying the equation (via renormalization).
In the field of stochastic parabolic PDEs, M. Hairer introduced the theory of regularity structures and gave a precise meaning to the so-called "subcritical" singular SPDEs such as the KPZ equation and the three-dimensional stochastic quantization equation, for which he was awarded a Fields Medal in 2014. Around the same time, M. Gubinelli introduced the theory of paracontrolled distributions and solved a similar class of singular SPDEs. In this talk, I will first go over the basic difficulty in the subject and explain the main idea in these theories. Then, I will discuss recent developments in stochastic dispersive PDEs such as stochastic nonlinear wave and Schrödinger equations along with open problems in the field. |
17a1a99a1df8eb48 |
Theoretical chemistry
From Citizendium, the Citizens' Compendium
Theoretical chemistry is the use of reasoning to explain or predict chemical phenomena. In recent years, it has consisted primarily of quantum chemistry, i.e., the application of quantum mechanics to problems in chemistry. Theoretical chemistry may be broadly divided into electronic structure, dynamics, and statistical mechanics. In the process of solving the problem of predicting chemical reactivities, these may all be invoked to various degrees. Other "miscellaneous" research areas in theoretical chemistry include the mathematical characterization of bulk chemistry in various phases (e.g., the study of chemical kinetics) and the study of the applicability of more recent mathematical developments to the basic areas of study (e.g., the possible application of principles of topology to the study of electronic structure). The latter area of theoretical chemistry is sometimes referred to as mathematical chemistry.
Much of this may be categorized as computational chemistry, although computational chemistry usually refers to the application of theoretical chemistry in an applied setting, usually with some approximation scheme such as certain types of post-Hartree-Fock methods, density functional theory, semiempirical methods (such as PM3), or force field methods. Some chemical theorists apply statistical mechanics to provide a bridge between the microscopic phenomena of the quantum world and the macroscopic bulk properties of systems.
Theoretical attacks on chemical problems go back to the earliest days, but until the formulation of the Schrödinger equation by the Austrian physicist Erwin Schrödinger, the techniques available were rather crude and speculative. Currently, much more sophisticated theoretical attacks based on quantum mechanics are in vogue.
Branches of theoretical chemistry
Quantum chemistry
The application of quantum mechanics to chemistry
Computational chemistry
The application of computer codes to chemistry
Molecular modelling
Molecular dynamics
Molecular mechanics
Mathematical chemistry
Theoretical chemical kinetics
Theoretical study of the dynamical systems associated to reactive chemicals and their corresponding differential equations.
Closely related disciplines
• Atomic physics: The science of the electrons surrounding the atomic nuclei
• Molecular physics: The science of the electrons surrounding the molecular nuclei and of the movement of the nuclei. This term usually refers to the study of molecules made of a few atoms in the gas phase. But some consider that molecular physics is also the study of bulk properties of chemicals in terms of molecules.
|
47166cefc7241492 | I understand the derivation to get
$$\frac{d}{dt}⟨\psi|Q|\psi⟩ = -\frac{1}{i\hbar}⟨\psi|[H,Q]\psi⟩ + ⟨\psi|\frac{d}{dt}Q|\psi⟩$$
and that setting $[H,Q] = 0$ results in
$$\frac{d}{dt}⟨\psi|Q|\psi⟩ = ⟨\psi|\frac{d}{dt}Q|\psi⟩$$
However I don't understand the next part. It just says "if $Q$ is not explicitly dependent on time" , i.e.
$$\frac{dQ}{dt} = 0,$$ then
$$\frac{d}{dt}⟨Q⟩ = 0.$$
I don't see how or why we would just assume $\frac{dQ}{dt} = 0$ after all that effort. I thought that was what we were trying to prove in the first place! It seems circular/tautological to me.
• $\begingroup$ Consider an operator $\cos(\omega t)\hat{x} + \sin(\omega t)\hat{p}$. That thingy has explicit time dependence aside from whatever time dependence there is in $\langle x \rangle$ coming from the evolution of the system. Is that what you're puzzled about? $\endgroup$ – DanielSank Jan 9 '17 at 21:17
• $\begingroup$ Sorry I don't understand what you mean .. $\endgroup$ – dain Jan 9 '17 at 21:23
• $\begingroup$ "if Q is not explicitly dependent on time" means that $\frac{\partial Q}{\partial t}=0$. It does not mean that $\frac{\mathrm dQ}{\mathrm dt}=0$. But recall that HEO implies that $\frac{\mathrm dQ}{\mathrm dt}=\frac{\partial Q}{\partial t}+i[H,Q]$. $\endgroup$ – AccidentalFourierTransform Jan 9 '17 at 21:25
$\begingroup$ I think someone needs to give two mutually exclusive examples: one where $\langle O \rangle$ depends on time explicitly and one where it doesn't, and show how those cases differ. $\endgroup$ – DanielSank Jan 9 '17 at 21:32
• $\begingroup$ Possible duplicate: physics.stackexchange.com/q/9122/2451 $\endgroup$ – Qmechanic Jan 9 '17 at 21:35
So there are a bunch of "pictures" (that's the technical term!) of quantum mechanics, agreeing in the broad perspective that:
1. There is some vector space $\{|\phi\rangle\}$ over the field $\mathbb C$ and its canonical dual space $\{\langle\phi|\},$ such that the dual operation $\mathcal D$ maps $$\mathcal D\Big(a |\alpha\rangle + b |\beta\rangle\Big) = \langle \alpha|a^* + \langle\beta|b^*$$ and there is an inverse mapping the other way and so on; we usually write this dualizing operation with a superscript $\dagger$ so that $\big(c~|\alpha\rangle\langle\beta|\big)^\dagger = c^* |\beta\rangle\langle\alpha|.$
2. Observable quantities are represented by Hermitian operators $\hat O^\dagger = \hat O,$ or in other words you have expressions like $$\mathcal D\Big(\hat O |\phi\rangle\Big)~|\psi\rangle = \langle\phi| \hat O|\psi\rangle,$$or what mathematicians will sometimes write $\langle \hat O\phi,~\psi\rangle = \langle \phi,~\hat O\psi\rangle.$ The point is that they are their own conjugate transpose, in the sense that they play nice with this dualizing operation.
3. The central prediction of QM is: "you observe the eigenvalues of the Hermitian operators, but we only predict the averages of these eigenvalues over many measurements. The average always takes the form $\langle O \rangle = \langle \psi|\hat O|\psi\rangle,$ where $|\psi\rangle$ is a vector we regard as the state of the system."
In one of these pictures in particular, the Schrödinger picture, all of the operators $\hat p$ and $\hat x$ and so on are generally formally independent of time, and the state $|\psi\rangle$ changes explicitly with time according to the Schrödinger equation, $$i \hbar |\partial_t \Psi(t)\rangle = \hat H |\Psi(t)\rangle,$$ where $\hat H$ is an observable for the total energy in the system. Of course we could still define time-dependent observables like $\hat O = \hat x ~ \cos(\omega t) + \hat p/\hbar ~ \sin(\omega t)$ if we wanted, and then we would have something that we'd call maybe $d \hat O\over d t,$ but the basic point is that the theory is made out of basic things which are not fundamentally time-dependent, and you can do the time dependence if you want to. So $\hat p = -i\hbar \partial_x$ as an operator, it does not change over time.
One nevertheless gets that the actual observable change in the average value is given by the formula you gave, which includes the possibility of explicit time dependence. Explicit time dependence is unusual in the Schrödinger picture, but we can handle it by saying $$\frac{d\langle A\rangle}{dt} = \frac{i}{\hbar} \langle [H, A]\rangle + \langle \frac{dA}{dt} \rangle,$$ where all of the stuff inside brackets is fundamentally some operator expression first and foremost, so $\langle dA/dt\rangle$ means, "first figure out what operator $d\hat A/dt$ is, then its average appears above."
And then, you have all of the other pictures. It turns out that we can think about solving the equation $i\hbar |\partial_t \psi(t)\rangle = \hat h |\psi(t)\rangle$ for an arbitrary Hermitian $\hat h$, and we get that $\psi(t) = \hat u(t) |\psi(0)\rangle$ for some "unitary operator" $\hat u(t)$, meaning that $\hat u \hat u^\dagger = \hat u^\dagger \hat u = 1.$ One particular one of these, $\hat U(t)$, corresponds to the case where $\hat h = \hat H.$
We can insert these into the expectation value given by the Schrödinger picture to do a sort of quantum coordinate transform,$$\langle A \rangle = \langle \psi_0|\hat U^\dagger \hat u ~ \hat u^\dagger \hat A \hat u ~ \hat u^\dagger \hat U |\psi_0\rangle.$$
The point is that now instead of $|\psi\rangle =\hat U |\psi_0\rangle$ we think about $|\psi'\rangle = \hat u^\dagger \hat U |\psi_0\rangle$ which we can derive evolves according to $$i\hbar |\partial_t \psi'\rangle = (\hat H' - \hat h') |\psi'\rangle.$$ Above you can also see primes on the operators; see now it is also more typical for operators to have explicit time dependence, since we are also replacing $\hat A$ with $\hat A' = \hat u^\dagger \hat A \hat u$ and finding $$i \hbar \frac{d\hat A'}{dt} = -\hat u^\dagger \hat h \hat A \hat u + \hat u^\dagger \hat A \hat h \hat u = [\hat A',~\hat h'].$$ In the most extreme form of this, the Heisenberg picture, we choose $\hat h = \hat H$ so that the state does not evolve at all and remains at $|\psi_0\rangle$ in perpetuity. Instead all of the operators evolve in time. This was the basis for the original "matrix mechanics" form of quantum mechanics before Schrödinger discovered his wave equation.
It is also very common to have "interaction pictures" where we divide $\hat H$ into a nice easy noninteracting part $\hat H_0$ plus whatever complications exist in the interactions $\hat H_I.$ Then we choose $\hat h = \hat H_0$ which usually just throws some $e^{i \omega t}$ terms on all of the operators we're analyzing, and then we can make various approximations for the remaining dynamics now that the easy part is "out of the way."
$\begingroup$ I think I get it now. I was getting confused between the operator $\hat{Q}$ and the quantity $Q$. Obviously $\hat{Q}$ shouldn't change with time, but $Q$ might or might not, depending on what $\hat{Q}$ is. $\endgroup$ – dain Jan 9 '17 at 22:00
• $\begingroup$ @dain yeah, that's about right. $\endgroup$ – CR Drost Jan 9 '17 at 22:01
• $\begingroup$ @CRDrost I don't think calling $Q$ a time dependent quantity but $\hat{Q}$ time independent is standard/sensible at all. Usually, $Q$ and $\hat{Q}$ mean exactly the same thing, i.e. an operator, be it time dependent or not. $\endgroup$ – DanielSank Jan 9 '17 at 22:17
• $\begingroup$ In fact, between @dain's comment here and the ones on the OP, I'm pretty sure he/she doesn't understand what's going on yet. $\endgroup$ – DanielSank Jan 9 '17 at 22:18
• $\begingroup$ @DanielSank: I suppose. I am assuming that if dain has found a distinction to be made, then it is the only distinction that can be made, namely $Q$ as the nonlinear operator from the Hilbert space $\mathcal H$ to $\mathbb R$, a shorthand for $Q = Q[\psi] = \langle \psi|\hat Q|\psi\rangle,$ as distinct from $\hat Q$, the linear operator from $\mathcal H \to \mathcal H.$ Possibly I should be more cynical about such things but I am hoping that the above answer gets enough "in the weeds" that it reminds people that there's a difference between those two. $\endgroup$ – CR Drost Jan 9 '17 at 22:25
You could have set $dQ/dt=0$ at the beginning, but at the end you would not have known if $$ \frac{d}{dt}\langle \psi |Q |\psi\rangle=0 $$ depends on that assumption or not.
If, on the other hand, you are asking about the difference between $$ \frac{d}{dt}\langle \psi |Q |\psi\rangle \qquad \text{and} \qquad \langle \psi |\frac{d}{dt}Q |\psi\rangle, $$ note that they are not the same, they differ by the quantity $$ i \langle \psi | [H,Q]| \psi \rangle $$ as your calculation shows. The state $|\psi\rangle$ depends on time, that is where the derivative acts in evaluating $$\frac{d}{dt}\langle \psi |Q |\psi\rangle$$
This was all in Schroedinger's picture, where states evolve with time.
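As a concrete illustration of the distinction (an added example, not from the original thread), take the harmonic oscillator in the Schrödinger picture:

$$H = \frac{\hat p^2}{2m} + \frac{1}{2}m\omega^2\hat x^2, \qquad \frac{\partial \hat x}{\partial t} = 0 \quad\text{yet}\quad \frac{d\langle \hat x\rangle}{dt} = \frac{i}{\hbar}\langle[H,\hat x]\rangle = \frac{\langle \hat p\rangle}{m} \neq 0,$$

so an operator with no explicit time dependence can still have a moving expectation value. By contrast, $\hat H$ itself satisfies both conditions, $[\hat H,\hat H]=0$ and $\partial\hat H/\partial t=0$, so $\frac{d}{dt}\langle\hat H\rangle=0$ and energy is conserved. Nothing circular is being assumed: $\partial Q/\partial t=0$ alone does not force $d\langle Q\rangle/dt=0$; the vanishing commutator is also needed.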
|
18db7f3d4d7a5a9a |
Quantum gravity
Strictly speaking, the aim of quantum gravity is only to describe the quantum behavior of the gravitational field and should not be confused with the objective of unifying all fundamental interactions into a single mathematical framework. Although some quantum gravity theories such as string theory try to unify gravity with the other fundamental forces, others such as loop quantum gravity make no such attempt; instead, they make an effort to quantize the gravitational field while it is kept separate from the other forces. A theory of quantum gravity which is also a grand unification of all known interactions, is sometimes referred to as a theory of everything (TOE).
Diagram showing where quantum gravity sits in the hierarchy of physics theories
Effective field theories
Quantum gravity theory for the highest energy scales
Quantum mechanics and general relativity
The graviton
The dilaton
The dilaton made its first appearance in Kaluza–Klein theory, a five-dimensional theory that combined gravitation and electromagnetism. Generally, it appears in string theory. More recently, it has appeared in the lower-dimensional many-bodied gravity problem[12] based on the field theoretic approach of Roman Jackiw. The impetus arose from the fact that complete analytical solutions for the metric of a covariant N-body system have proven elusive in General Relativity. To simplify the problem, the number of dimensions was lowered to (1+1) namely one spatial dimension and one temporal dimension. This model problem, known as R=T theory[13] (as opposed to the general G=T theory) was amenable to exact solutions in terms of a generalization of the Lambert W function. It was also found that the field equation governing the dilaton (derived from differential geometry) was the Schrödinger equation and consequently amenable to quantization.[14]
Thus, one had a theory which combined gravity, quantization and even the electromagnetic interaction, promising ingredients of a fundamental physical theory. It is worth noting that the outcome revealed a previously unknown and already existing natural link between general relativity and quantum mechanics. However, this theory needs to be generalized in (2+1) or (3+1) dimensions although, in principle, the field equations are amenable to such generalization as shown with the inclusion of a one-graviton process[15] and yielding the correct Newtonian limit in d dimensions if a dilaton is included. However, it is not yet clear what the fully generalized field equation governing the dilaton in (3+1) dimensions should be. This is further complicated by the fact that gravitons can propagate in (3+1) dimensions and consequently that would imply gravitons and dilatons exist in the real world. Moreover, detection of the dilaton is expected to be even more elusive than the graviton. However, since this approach allows for the combination of gravitational, electromagnetic and quantum effects, their coupling could potentially lead to a means of vindicating the theory, through cosmology and perhaps even experimentally.
Nonrenormalizability of gravity
General relativity, like electromagnetism, is a classical field theory. One might expect that, as with electromagnetism, there should be a corresponding quantum field theory.
As explained below, there is a way around this problem by treating QG as an effective field theory.
Any theory of quantum gravity that is predictive at all energy scales must have some deep principle that reduces the infinitely many unknown parameters to a finite number that can then be measured.
• Another possibility is that there are new symmetry principles that constrain the parameters and reduce them to a finite set. This is the route taken by string theory, where all of the excitations of the string essentially manifest themselves as new symmetries.
QG as an effective field theory
Spacetime background dependence
String theory
Background independent theories
Semi-classical quantum gravity
Points of tension
• Third, there is the Problem of time in quantum gravity. Time has a different meaning in quantum mechanics and general relativity and hence there are subtle issues to resolve when trying to formulate a theory which combines the two.[20]
Candidate theories
There are a number of proposed quantum gravity theories.[21] Currently, there is still no complete and consistent quantum theory of gravity, and the candidate models still need to overcome major formal and conceptual problems. They also face the common problem that, as yet, there is no way to put quantum gravity predictions to experimental tests, although there is hope for this to change as future data from cosmological observations and particle physics experiments becomes available.[22][23]
String theory
One suggested starting point is ordinary quantum field theories which, after all, are successful in describing the other three basic fundamental forces in the context of the standard model of elementary particle physics. However, while this leads to an acceptable effective (quantum) field theory of gravity at low energies,[24] gravity turns out to be much more problematic at higher energies. Where, for ordinary field theories such as quantum electrodynamics, a technique known as renormalization is an integral part of deriving predictions which take into account higher-energy contributions,[25] gravity turns out to be nonrenormalizable: at high energies, applying the recipes of ordinary quantum field theory yields models that are devoid of all predictive power.[26]
One attempt to overcome these limitations is to replace ordinary quantum field theory, which is based on the classical concept of a point particle, with a quantum theory of one-dimensional extended objects: string theory.[27] At the energies reached in current experiments, these strings are indistinguishable from point-like particles, but, crucially, different modes of oscillation of one and the same type of fundamental string appear as particles with different (electric and other) charges. In this way, string theory promises to be a unified description of all particles and interactions.[28] The theory is successful in that one mode will always correspond to a graviton, the messenger particle of gravity; however, the price to pay are unusual features such as six extra dimensions of space in addition to the usual three for space and one for time.[29]
In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity[30] form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity.[31][32] As presently understood, however, string theory admits a very large number (10⁵⁰⁰ by some estimates) of consistent vacua, comprising the so-called "string landscape". Sorting through this large family of solutions remains a major challenge.
Loop quantum gravity
Simple spin network of the type used in loop quantum gravity
Loop quantum gravity is based first of all on the idea to take seriously the insight of general relativity that spacetime is a dynamical field and therefore is a quantum object. The second idea is that the quantum discreteness that determines the particle-like behavior of other field theories (for instance, the photons of the electromagnetic field) affects also the structure of space.
The main result of loop quantum gravity is the derivation of a granular structure of space at the Planck length. This is derived as follows. In the case of electromagnetism, the quantum operator representing the energy of each frequency of the field has a discrete spectrum. Therefore the energy of each frequency is quantized, and the quanta are the photons. In the case of gravity, the operators representing the area and the volume of each surface or space region likewise have discrete spectra. Therefore the area and volume of any portion of space are quantized, and the quanta are elementary quanta of space. It follows that spacetime has an elementary quantum granular structure at the Planck scale, which cuts off the ultraviolet infinities of quantum field theory.
The theory is based on the reformulation of general relativity known as Ashtekar variables, which represent geometric gravity using mathematical analogues of electric and magnetic fields.[33][34] In the quantum theory space is represented by a network structure called a spin network, evolving over time in discrete steps.[35][36][37][38]
The dynamics of the theory is today constructed in several versions. One version starts with the canonical quantization of general relativity. The analogue of the Schrödinger equation is a Wheeler–DeWitt equation, which can be defined in the theory.[39] In the covariant, or spinfoam formulation of the theory, the quantum dynamics is obtained via a sum over discrete versions of spacetime, called spinfoams. These represent histories of spin networks.
Other approaches
There are a number of other approaches to quantum gravity. The approaches differ depending on which features of general relativity and quantum theory are accepted unchanged, and which features are modified.[40][41] Examples include:
Weinberg–Witten theorem
Experimental tests
The BICEP2 experiment detected primordial B-mode polarization caused by gravitational waves in the early universe. The waves were born as quantum fluctuations in gravity itself. Cosmologist Ken Olum (Tufts University) stated: "I think this is the only observational evidence that we have that actually shows that gravity is quantized....It's probably the only evidence of this that we will ever have."[55]
See also
6. ^ a b c Donoghue (1995). "Introduction to the Effective Field Theory Description of Gravity". arXiv:gr-qc/9512024. (verify against ISBN 9789810229085)
12. ^ Ohta, Tadayuki; Mann, Robert (1996). "Canonical reduction of two-dimensional gravity for particle dynamics".
14. ^ Farrugia; Mann; Scott (2007). "N-body Gravity and the Schroedinger Equation".
15. ^ Mann, R B; Ohta, T (1997). "Exact solution for the metric and the motion of two bodies in (1+1)-dimensional gravity". Physical Review D 55 (8): 4723–4747.
18. ^ Pages 220–226 are annotated references and guide for further reading.
19. ^ Hunter Monroe (2005). "Singularity-Free Collapse through Local Inflation". arXiv:astro-ph/0506506.
20. ^ Edward Anderson (2010). "The Problem of Time in Quantum Gravity". arXiv:1009.2157 [gr-qc]. (also published as chapter 4 of ISBN 9781611229578)
21. ^ A timeline and overview can be found in Rovelli, Carlo (2000). "Notes for a brief history of quantum gravity". arXiv:gr-qc/0006061. (verify against ISBN 9789812777386)
24. ^ Donoghue, John F. (1995). "Introduction to the Effective Field Theory Description of Gravity". In Cornet, Fernando (ed.). Effective Theories: Proceedings of the Advanced School, Almunecar, Spain, 26 June–1 July 1995. Singapore:
27. ^ An accessible introduction at the undergraduate level can be found in
31. ^ Townsend, Paul K. (1996). "Four Lectures on M-Theory". 1996 Summer School in High Energy Physics and Cosmology. ICTP Series in Theoretical Physics. p. 385.
39. ^ Rovelli, Carlo (2004). Quantum Gravity. Cambridge University Press.
49. ^ See Daniele Oriti and references therein.
51. ^ Wen 2006
52. ^ See ch. 33 in Penrose 2004 and references therein.
54. ^ Hossenfelder, Sabine (2011). "Experimental Search for Quantum Gravity". In V. R. Frignanni (ed.). Classical and Quantum Gravity: Theory, Analysis and Applications. Nova Publishers.
55. ^ Camille Carlisle. "First Direct Evidence of Big Bang Inflation". Retrieved March 18, 2014.
Further reading
• Ahluwalia, D. V. (2002). "Interface of Gravitational and Quantum Realms".
• Herbert W. Hamber (2009). Quantum Gravitation. Springer Publishing.
• Kiefer, Claus (2007). Quantum Gravity. Oxford University Press.
• Kiefer, Claus (2005). "Quantum Gravity: General Introduction and Recent Developments".
• Lämmerzahl, Claus, ed. (2003). Quantum Gravity: From Theory to Experimental Search.
• Trifonov, Vladimir (2008). "GR-friendly description of quantum systems".
|
385adf8acd006e2b |
Quantum Mechanics: A Not-Too-Technical Introduction
Updated on January 11, 2010
Do House Rules Allow Doubling-Down on the Universe?
Marty Thurston
We often refer to Albert Einstein and Niels Bohr as the respective opponent and proponent of quantum theory, but when we make this comparison we are really examining the dichotomy between two respective camps of physicists; a venerable all-star lineup at the turn of the last century. This essay is a non-technical floor plan for the game of chance that has been described as our universe. I will begin by reviewing some of the philosophical implications of quantum theory, and then I will attempt to lay a foundation for what processes had been considered "standard" by the Copenhagen interpretation. Following that, I will discuss some of the more controversial features of quantum theory, including uncertainty and entanglement, and I will also discuss the measurement problem and interpretations for solutions to it. I will conclude the essay by briefly describing some of the possible technological innovations that quantum processing could provide. These leaps forward will illustrate, in the form of a reply to the measurement problem, that not all hope is lost to causal physics. Understanding is a relative term, and by harvesting this technology we are that much closer to obtaining the holy grail of science, a unified field theory of quantum mechanics.
“If an atom was in a state of higher energy, it was possible to calculate the probability that it would emit a photon at any specific moment. But it was not possible to determine the momentum of emission precisely. Nor was it possible to determine the direction. No matter how much information you had. It was all a matter of chance, like the roll of dice.” – Albert Einstein (Isaacson, 323)
Where Einstein would tell Bohr that God can not play dice with the universe, Bohr would amiably reply, "Einstein, [maybe you should] stop telling God what to do." (Isaacson, 326) These discussions of uncertainty would leave much of the classical scientific community feeling, in a sense, deceived by quantum games of chance. To some scientists and philosophers, it was a very cruel practical joke that classical "truths" essential to the discovery of quantum processes were now insufficient. In this respect, I think Einstein was always very nervous about quantum processes because they were further demonstrations of "weird" sciences that either required an advanced degree in physics or blind faith to completely subscribe to. Even in general relativity, which "preserved rigid cause-and-effect rules," just imagining a picture of four-dimensional space-time is about as easy as fitting a square peg into a triangular slot for most if not all people. (Isaacson, 323) With the controversies that arose from the general relativity theory, Einstein was all too familiar with how the scientific community would respond to eerie quantum behaviour; he did not want to be part of the scientific generation that would turn a large portion of Euclidean physics directly on its head.
For Einstein, the free-will emission and absorption of quanta, or pieces of light, were not merely scientific inconsistencies, but rather amounted to an existential nightmare, mocking man's futile attempt to understand. In one sense, Einstein believed that prior to the quantum debates he had been a physicist, but he had now been reduced to an "employee of a gaming house." (Isaacson, 324) In this time period there were many opponents to these theories, anti-relativists who believed that Einstein's theory of relativity was the beginning of the end for the scientific community and that the world could no longer be considered measurable. Einstein however, (especially in his younger years) believed that this extended description of reality would open up endless doors in the fields of science, discovery, and technology. Einstein saw uncertainties arising in general relativity, but was optimistic that through further study, they could be accurately measured and described. It should be noted that although apprehensive about quantum theory, Einstein was an advocate of the truth; he would play devil's advocate as often as he was forced into the role.
Quantum theory, on the other hand, subscribes to the notion that we should accept uncertainties as a description of reality. The spooky part of this interpretation is that we might come to a truer description of immediate reality by accepting the fact that there are undeniably unclear rules. Einstein could never fully accept these theories. He likened his opinion to a thought experiment in which he “imagined two boxes, one of which we know contains a ball. As we prepare to look in one of the boxes, there is a 50% chance of the ball being there. After we look there is either a 100% or 0% chance it is in there. But all along, in reality, the ball was in one of the boxes.” (Isaacson, 454) Yes, probability will help us make our decision but in reality there are no probabilities, the ball was really in one box all along.
The following is an example of a conversation between Einstein and friend Philipp Frank that illustrates Einstein’s personal feelings towards emerging quantum theories.
Einstein: “A new fashion has arisen in physics, which declares that certain things cannot be observed and therefore should not be ascribed reality.”
Frank: “But the fashion you speak of was invented by you in 1905!”
(Isaacson, 332)
The Copenhagen Consensus
At this point it will be useful to describe what has come to be known as the Copenhagen interpretation. This is a non-local interpretation of quantum mechanics that claims all particles are described by their wavefunction, which dictates the probability for them to be found in any location following an observation. When a measurement is made, it causes a change to the particle and results in an apparent collapse of the wavefunction. The Copenhagen Consensus can be thought of as a collection of various theories from multiple authors that tends to depict the foundations of quantum mechanics. The following are considered the pillars of quantum theory:
1. Schrodinger’s wavefunction would represent the complete system being measured.
2. Max Born would provide the mathematical applications to describe probability as a relation to the square of the amplitude of the wavefunction.
3. The uncertainty principle would acknowledge the fact that we can not aim to achieve a complete description of the entire system.
4. Principle of complementarity would describe that all matter exhibits wave/particle duality, the effects of which, can never be observed simultaneously.
5. The correspondence principle would try to keep things from becoming too “spooky”: roughly, quantum predictions should correspond in logical ways to the macro-realm.
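To make pillar 2 concrete, here is a minimal sketch of the Born rule in action (Python; the three-site wavefunction is invented purely for illustration):

    import numpy as np

    # A made-up wavefunction on three discrete positions (illustrative only).
    psi = np.array([1 + 1j, 2 - 1j, 0.5 + 0j])

    # Normalize so the total probability is 1.
    psi = psi / np.linalg.norm(psi)

    # Born rule: probability = squared magnitude of the amplitude.
    probabilities = np.abs(psi) ** 2
    print(probabilities, probabilities.sum())  # the probabilities sum to 1

The amplitudes themselves are complex and unobservable; only their squared magnitudes have the probability interpretation.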
The uncertainty principle was credited to Heisenberg in 1927. He argued that we cannot know all the values in a system at the same time. Things we do not know can be described in terms of probabilities. Planck’s constant sets the scale of the trade-off: the more precisely one variable (position or momentum) is measured, the less precisely its conjugate partner can be known. Isaacson explains that the uncertainty principle is a little more complex than this. “An electron does not have a definite position or path until we observe it.” (331) Niels Bohr believed that the uncertainty principle was a feature of nature; his opinion was that science was the pursuit of describing what is taking place in our world, not necessarily how it works. In correspondence with Max Born, Einstein would reply, “I am very, very reluctant to give up complete causality” (Einstein to Max Born, Jan. 27, 1920).
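This trade-off can be checked numerically. The sketch below (Python; the Gaussian packet, the grid, and units with hbar = 1 are assumptions made for illustration) computes the spreads in position and momentum for a Gaussian wave packet and confirms that their product sits at the minimum hbar/2 allowed by the uncertainty principle:

    import numpy as np

    hbar = 1.0
    x = np.linspace(-40, 40, 8192)
    dx = x[1] - x[0]
    sigma = 1.5  # assumed packet width

    # Gaussian wave packet, normalized on the grid.
    psi = np.exp(-x**2 / (4 * sigma**2))
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

    # Position spread from the probability density |psi|^2.
    prob = np.abs(psi)**2
    mean_x = np.sum(x * prob) * dx
    delta_x = np.sqrt(np.sum((x - mean_x)**2 * prob) * dx)

    # Momentum spread from the Fourier transform of psi.
    p = 2 * np.pi * hbar * np.fft.fftfreq(x.size, d=dx)
    phi = np.fft.fft(psi)
    prob_p = np.abs(phi)**2
    prob_p /= np.sum(prob_p)          # normalize in momentum space
    mean_p = np.sum(p * prob_p)
    delta_p = np.sqrt(np.sum((p - mean_p)**2 * prob_p))

    print(delta_x * delta_p, hbar / 2)  # ~0.5 versus 0.5

A Gaussian is the special case that saturates the bound; any other packet shape gives a strictly larger product.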
The complementarity principle was conceived by Bohr, who was a proponent of positivism. The principle would mature shortly after the EPR paper struck its powerful blows at non-locality in 1935. Bohr had earlier established that an electron could in fact drop from a higher energy orbit to a lower one, resulting in the creation of a photon; this photon, or light quantum, would be comprised of a discrete amount of energy. The uncertainty principle would show that matter has features of wave-particle duality, but that these effects cannot be observed simultaneously. The principle of complementarity would show that the two seemingly separate aspects were in fact representations of one singular phenomenon, which is the basis for the concept of quantum entanglement. In rebuttal, Einstein would cling to the fact that QM violates the fundamental principle of separability, which holds that “two systems that are spatially separated have an independent existence.” (Isaacson, 453)
The Quantum Process
The following is an elementary introduction to a few aspects of linear quantum mechanics. I am definitely not qualified, but here are my two cents.
The particles we study will be subject to motion along the x-axis only. With any given particle we choose to study, we can expect to find a matter-wave that we refer to as the wavefunction. It should be noted that our definition of wavefunction is an “expression for the amplitude of the particle wave” (wave function, Encyclopædia Britannica, 2008). Since the wavefunction has no direct physical reality, it is represented mathematically as a normalized vector in a Hilbert space: a complex-valued function rather than anything directly measurable. Normalization means simply that the particle must in fact be found somewhere along the x-axis, so the total probability is one. The two lessons for understanding basic linear quantum mechanics are how a wavefunction can be calculated and what kinds of information can be extracted from it.
Max Born was a German theoretical physicist who was credited in 1926 with the probability interpretation of the wavefunction in the Schrödinger equation. This interpretation recognizes that we are unable to specify the exact location of the particle along the x-axis. Instead, Born’s interpretation attaches probabilities to the wavefunction, describing where we might expect to find the particle. Graphically, the wavefunction looks like a curve, and the probability of finding the particle between two points A and B is the area under the squared wavefunction between them. One of the biggest problems for quantum physics is that even when we are given the wavefunction at some time t, we cannot predict with certainty where the particle will actually be found at that time.
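As a sketch of that area-under-the-curve statement (Python; the Gaussian wavefunction and the endpoints A and B are illustrative choices):

    import numpy as np

    x = np.linspace(-10, 10, 2001)
    dx = x[1] - x[0]

    # Illustrative normalized wavefunction (a Gaussian).
    psi = (2 * np.pi) ** (-0.25) * np.exp(-x**2 / 4)

    # Probability of finding the particle between A and B:
    # the area under |psi|^2 on that interval.
    A, B = -1.0, 1.0
    mask = (x >= A) & (x <= B)
    prob_AB = np.sum(np.abs(psi[mask])**2) * dx
    print(prob_AB)  # ~0.683: |psi|^2 here is a standard normal density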
As mentioned above, the wavefunction contains all the information required to describe a given particle, but once the particle is actually measured, the wave is subject to outside forces of nature acting on it and collapses into a single state. This represents the dynamic nature of quantum mechanics: the study of an evolving system. “In Newton’s mechanics, position and velocity that are relative to a given time are calculated from Newton’s second law; in quantum mechanics, a wavefunction relative to position and time must be calculated from a different law – Schrödinger’s equation.” (Modern Physics, 194)
Schrödinger’s equation, relativistic quantum theory, and quantum gravity are all attempts to explain the evolutionary process of normal quantum systems when they are not being measured or observed. These are all time-dependent theories that are deterministic by nature. Schrödinger’s equation can be solved for free particles in stationary states known as plane waves. Localized free-particle solutions, known as wave packets, represent a superposition of multiple plane waves (a sketch of this construction follows below). “The momentum of such a particle is not known precisely, but only to some accuracy of change in (p) value that is related to the change in (x) value through the uncertainty principle.” (Modern Physics, 225)
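That a localized packet really can be assembled out of plane waves is easy to verify directly (Python; the Gaussian weighting in momentum is an assumed choice, and normalization is omitted for brevity):

    import numpy as np

    x = np.linspace(-30, 30, 2000)
    k_values = np.linspace(-2, 6, 400)   # range of plane-wave momenta (assumed)
    dk = k_values[1] - k_values[0]
    k0, sigma_k = 2.0, 0.5               # central momentum and spread (assumed)

    # Superpose plane waves exp(ikx), weighted by a Gaussian in k.
    packet = np.zeros_like(x, dtype=complex)
    for k in k_values:
        weight = np.exp(-(k - k0)**2 / (2 * sigma_k**2))
        packet += weight * np.exp(1j * k * x) * dk

    # The result is localized in x: a wave packet, not a plane wave.
    prob = np.abs(packet)**2
    print(x[np.argmax(prob)])  # peaks near x = 0

The narrower the spread in momentum (sigma_k), the wider the packet in position, which is the uncertainty principle again in another guise.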
The squared magnitude of the wave packet solution can be considered the probability density. A wave packet is itself an example of what is referred to as a superposition, which will be relevant in the coming discussion of superselection. In quantum realms, a nucleus can exist in a superposition, meaning “it exists simultaneously as being decayed and un-decayed until it is observed, at which point its wavefunction collapses and it becomes one or the other” (Isaacson, 456). This would become a point of contention for Einstein and his colleagues, and a rebuttal would come in the form of a thought experiment eventually named Schrödinger’s Cat. “According to the Copenhagen interpretation developed by Bohr and his fellow pioneers of quantum mechanics, until such an observation is made, the reality of the particle’s position or state consists only of these probabilities.” (Isaacson, 457) Schrödinger wanted to find out when a superposition stops representing two or more probabilities, collapses, and snaps into one state. To question this he related the micro-realm to the macro-world, where a cat is exposed to a radioactive substance that may or may not decay. If the cat is hidden within a box, then the Copenhagen interpretation would have us believe the cat’s immediate reality consists of it being both alive and dead within its superposition. This was too large a pill to swallow for Einstein, and by 1935 the attacks on QM had shifted to include these kinds of thought experiments. From a historical perspective, one can imagine the calculated and serious manner of Bohr, pacing his room, trying not to smile while contemplating the idea of a half-dead cat.
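A crude classical caricature of this collapse-on-observation picture (Python; the equal 50/50 amplitudes are an assumption, and a random draw stands in for the act of measurement):

    import numpy as np

    rng = np.random.default_rng(0)

    # Amplitudes for |decayed> and |undecayed> in an equal superposition.
    amplitudes = np.array([1, 1]) / np.sqrt(2)
    born_probs = np.abs(amplitudes)**2

    # Each "observation" collapses the superposition to one outcome.
    outcomes = rng.choice(["decayed", "undecayed"], size=10000, p=born_probs)
    print((outcomes == "decayed").mean())  # ~0.5, as the Born rule predicts

Of course, this sketch only reproduces the statistics of outcomes; the whole interpretive debate is about what, if anything, the collapse itself physically is.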
Interpretations and Problems
Quantum mechanics has a fundamental problem known as the measurement problem. When measurements are made in the quantum realm, the measurements themselves can change the expected outcome. This is basically the problem of explaining how measurement turns a superposition into a single definite outcome. The following is an argument for the Everett / Many Worlds interpretation, as in my opinion it might have been the most acceptable to Einstein.
The Everett / Many Worlds (EMW) description was first discussed by Dr. Hugh Everett in 1957 and fundamentally deals with the process of observation (or measurement) within wave mechanics. In this interpretation, the wavefunction is treated as physically real. This can be considered “as-relativistic-as-it-gets”, because after observing one of the relative particles, the wavefunction is not said to collapse; rather, it is said to carry on deterministically. Every possible outcome is realized, each in its own branch. These outcomes are given relativistically; that is to say, they are worlds that are as unobservable to us as our world is to them. I say this description is as “relativistic-as-it-gets” because we can no longer take the measurements of only one variable to study. In QM, in order to study anything it is essential to study the complete subsystem before we notice any variance within that system.
The animal rights activist in Einstein would probably have found comfort in the fact that the EMW interpretation seems to have eased the fate of Schrödinger’s Cat. There is no “ta-da!” moment in the EMW interpretation; instead, there is a dominant outcome that, for all intents and purposes, is the only outcome that matters. This comforting notion of reality is what I think Einstein was digging his nails into with the EPR paper.
Arguably, Einstein might have supported this interpretation because of how it ranks compared to other alternative solutions. First of all, the Copenhagen Consensus could be viewed as being damned if there was a wavefunction and damned if there was not. This makes sense because if the interpretation is based on non-locality, then it will never move beyond spooky action at a distance. Likewise, if the wavefunction is not real, then we are almost advocating solipsism because there is no objective reality. I will not diverge further by going into Bohm’s interpretation of hidden variables, but I will say that it is probably one of the more competitive theories available. The main difference between Bohm’s approach and Everett’s is that once the wavefunction splits into separate realities, Bohm claims it ceases to exist as a reality in all unobservable worlds. Since the wavefunction contains all the knowable information, the outcome would be the only reality that has an existence. However, the EMW interpretation sees no need to make such a distinction.
There are also undeveloped techniques of quantum logic. For the most part, these are trial-and-error experiments, and there has been virtually no agreement on any of these proposals. The fundamentals of logic are tedious enough, and highly complex logic requires highly elaborate reasoning, something for which there is no common or practical method. This is in the same realm as Mückenheim’s concept of extended probabilities, which, for the same reasons, is deeply contested. Further, there are a number of half-hearted theories like the Many Minds interpretation and other non-linear proposals, but virtually none of these have received any considerable academic or experimental support.
Our understanding of the interpretations of QM is very much limited by our understanding of these theories in relation to classical physics. First, there is the abstract nature of describing quantum processes. For example, the Hilbert spaces used with the Schrödinger equation are abstract complex vector spaces, far removed from anything we can directly picture. There are also problems with the process of measurement, as illustrated by the different interpretations described above.
One of the most interesting philosophical implications of quantum theory is that there would be no accurate way of ever describing all the properties of a system at the same time. Basically, this says that we cannot pin down a definite reality, even though we are living through something that feels an awful lot like reality. Furthermore, QM supports the existence of non-deterministic, irreversible, and entangled processes, all of which are more than arm’s length away from regular Newtonian capacities.
Solution to the Measurement Problem?
In the measurement problem we observe a spinning particle that is in a superposition of two eigenstates of spin along some direction; this is basically a concrete representation of Schrödinger’s equation. When this equation is worked out with a statistical operator, two interference terms result. Superpositions will not usually give the same statistical results as mixtures do, and therefore if our expectation values were to be equal, the interference terms would have to vanish. This process is known as a wash-out solution, and this particular example is based on superselection rules. In superselection, these rules are the result of quantum processes themselves and are not brought in from some outside theory, so the approach has a particularly natural feeling.
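The difference between a superposition and a mixture, and the interference terms a wash-out solution must eliminate, can be seen directly in the density matrices (a minimal Python sketch; the equal-weight spin state is an assumed example):

    import numpy as np

    up = np.array([1, 0], dtype=complex)
    down = np.array([0, 1], dtype=complex)

    # Superposition of the two spin eigenstates.
    psi = (up + down) / np.sqrt(2)
    rho_superposition = np.outer(psi, psi.conj())

    # Statistical mixture with the same 50/50 weights.
    rho_mixture = 0.5 * np.outer(up, up.conj()) + 0.5 * np.outer(down, down.conj())

    # The off-diagonal entries are the interference terms; the mixture has none.
    print(rho_superposition)  # off-diagonals = 0.5
    print(rho_mixture)        # off-diagonals = 0
    # A wash-out (superselection) solution must force those off-diagonal
    # terms to zero so the two become statistically indistinguishable.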
What superselection rules set out to do is express certain restrictions on which kinds of operators represent genuine observables. Robinson explains that the simplest way to explain this solution is not to think in terms of restrictions on the operator-observable link but in terms of restrictions on the formation of superpositions of states of the measuring apparatus. “If superpositions of states of the apparatus could be formed without any interference terms arising, then we would have, arguably, a wash-out solution to the measurement problem.” (Robinson, 81)
It should be noted, however, that these scenarios are a source of controversy. Maximilian Schlosshauer from the University of Washington has said that before we can take theories of superselection or induced decoherence seriously, we need to “clarify key features of the decoherence program, including its more recent results, and to investigate their application and consequences in the context of the main interpretive approaches to quantum mechanics.” (Schlosshauer, 2005) The main consequence for superselection is that even if we can remove these terms in practice, it does not follow that this can be done in principle until there are reliable, non-ad-hoc methods of doing so.
Where to From Here?
Before quantum mechanics, we knew very little about the individual atom. However, we did have incredible statistical techniques for estimating. “We now are in a very different situation, for we have much the same statistical techniques, but a highly confirmed and sophisticated theory of the individual atoms and their constituents.” (Robinson, 84-85) This type of physical description had simply never been available prior to quantum mechanics.
Quantum technology is a developing field that attempts to build on the knowledge of quantum processes and tries to mimic these systems in technological development. “We can create entangled matter and energy that are not likely to exist anywhere else in the universe. These new man-made quantum states have novel properties of sensitivity and non-local correlation that have wide applications to the development of computers, communication systems, sensors, and compact metrological devices.” (Dowling and Milburn, 1656) In this respect the technological possibilities for quantum processors are so boundless, they could make the invention of the internet seem about as important as the Y2K bug.
Much of the quantum information technology emerges out of the second order effects that come out of these processes. These effects were first seriously discussed in the Einstein, Podolsky, and Rosen paper. These are “not just manifestations of the usual wave-particle duality, but rather they are a new type of higher-level quantum effect that manifests itself only in precisely engineered, man-made quantum architectures.” (Dowling and Milburn, 1658) While I will not deviate much further into quantum technology, Dowling and Milburn list a number of specific topics that may be of further interest to the reader: namely, the protocols for communication, including distributed quantum computing and quantum wavepacket switching, as well as physical systems suggested for constructing a quantum computer, including “NMR, ion traps, cavity QED, quantum dots, superconducting circuits, and optical systems.” (1658) With such quantum computing, there are interesting philosophical questions to ask as well. For instance, in the Everett / Many Worlds interpretation, we know that we do not directly interact with parallel Everett worlds. However, this is not to say we do not interfere with them. With respect to Feynman diagrams, interaction would be an event occurring at the vertices, while interference could be any event lying on the line between two distinct world lines. Therefore, if we had some kind of reversible quantum computer, we might have a different way of studying the measurement problem.
The quantum theory has come a long way since the 1900s. More than one hundred years after its initial discussions, we still lack a unified field theory. Many of the questions Einstein raised are not yet answered and may never be. I think he would have been an advocate of Dr. Everett’s approach to the measurement problem. At the very least, the EMW interpretation of quantum mechanics is a good sequel to the Schrödinger’s Cat experiment. For the time being, I think it suffices to say that Einstein would have been relieved of some of the doubts that were cast on quantum mechanics, especially in light of more recent developments on the measurement problem. Einstein was by no means a card-dealer, and as uncomfortable as it made him, it seems as if we are still forced to play by the “house” rules.
Marty Thurston
Dowling, J. P., & Milburn, G. J. (2003). Quantum Technology: The Second Quantum Revolution. Philosophical Transactions: Mathematical, Physical and Engineering Sciences, 361, 1655-1674.
Isaacson, Walter. (2007). Einstein: His Life and Universe. New York: Simon and Schuster.
Robinson, Don. (1994). Can Superselection Rules Solve the Measurement Problem? The British Journal for the Philosophy of Science, 45, 79-93.
Schlosshauer, Maximilian. (2005). Decoherence, the Measurement Problem, and Interpretations of Quantum Mechanics. Department of Physics, University of Washington, Seattle.
Serway, R. A., Moses, C. J., & Moyer, C. A. (2005). Modern Physics (3rd ed.).
Wave Function. (2008). In Encyclopædia Britannica. Retrieved March 8, 2008, from Encyclopædia Britannica Online.
Bohm, David J. (1952). A Suggested Interpretation of the Quantum Theory in Terms of “Hidden Variables”. Physical Review, 85, 166-193.
Mückenheim, W. (1986). A Review of Extended Probabilities. Physics Reports, 133, 339-.
Youssef, Saul. Quantum Mechanics as Complex Probability Theory.
Saturday, January 31, 2009
A comment about thermodynamics of dark black holes
Lubos Motl had an excellent posting about the thermodynamics of black holes. Unfortunately I am too busy with the updates for a detailed response. Just a hasty comment about the thermodynamics of dark black holes, inspired by the vision of dark matter as a hierarchy of phases with non-standard values of Planck constant, realized in terms of a book-like structure of the generalized imbedding space (a generalization of H = M4×CP2) with pages labeled by the values of Planck constant; phase transitions changing Planck constant are interpreted as a leakage between different pages of the Big Book.
Suppose we accept the identification of dark matter in astrophysical length scales as matter with a gigantic gravitational Planck constant, as suggested by the Bohr orbitology of planetary orbits. For instance, hbar = GM^2/v0, v0 = 1/4, would hold true for an ideal black hole with Planck length (hbar G)^(1/2) equal to the Schwarzschild radius 2GM. Since black hole entropy is inversely proportional to hbar, this would predict a black hole entropy of order a single bit. This of course looks totally nonsensical if one believes in standard thermodynamics. For the star with mass equal to 10^40 Planck masses discussed in the example of Lubos, the entropy associated with the initial state of the star would be roughly the number of atoms in the star, about 10^60. Black hole entropy proportional to GM^2/hbar would be of order 10^80 provided the standard value of hbar is used as unit.
This stimulates some questions.
1. Does the second law pose an upper bound on the value of hbar of a dark black hole, from the requirement that the black hole has at least the entropy of the initial state? The maximum value of hbar would be given by the ratio of the black hole entropy to the entropy of the initial state, about 10^20 in the example of Lubos, to be compared with GM^2/v0 ≈ 10^80. (A small arithmetic check follows this list.)
2. Or should one generalize thermodynamics in the manner suggested by zero energy ontology, by making an explicit distinction between subjective time (the sequence of quantum jumps) and geometric time? The arrow of geometric time would correlate with that of subjective time. One can argue that geometric time has opposite directions for the positive and negative energy parts of the zero energy state, interpreted in standard ontology as the initial and final states of a quantum event. If the second law held true with respect to subjective time, the formation of an ideal dark black hole would destroy entropy only from the point of view of an observer with the standard arrow of geometric time. The behavior of phase conjugate laser light would be a more mundane example. Do self-assembly processes serve as examples of a non-standard arrow of geometric time in biological systems? In fact, a zero energy state is geometrically analogous to a big bang followed by a big crunch. One can however criticize the basic assumption as an ad hoc guess. One should really understand the arrow of geometric time. This is discussed in detail in the article About the Nature of Time.
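In Planck units the orders of magnitude quoted above can be checked in a few lines (Python; the round numbers simply restate the example of Lubos, and the per-atom entropy estimate is the same order-of-magnitude assumption used in the post):

    # Order-of-magnitude check in Planck units (G = c = 1, standard hbar = 1).
    M = 1e40                 # stellar mass in Planck masses (from the example)
    S_initial = 1e60         # entropy of the initial star, ~ number of atoms

    S_bh_standard = M**2     # black hole entropy ~ G*M^2/hbar with hbar = 1
    print(f"{S_bh_standard:.0e}")   # ~1e80

    # Entropy scales as 1/hbar, so the second law demands S_bh >= S_initial:
    hbar_max = S_bh_standard / S_initial
    print(f"{hbar_max:.0e}")        # ~1e20, the bound suggested in question 1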
Monday, January 19, 2009
Important notice about links which do not work anymore!!
Readers have perhaps noticed that the discovery of a new long-lived particle in CDF, predicted by TGD already around 1990, turned out to be one of the most fantastic breakthroughs of TGD. The reported findings could be explained and even predicted at a quantitative level, and a lot of testable predictions follow from the model, as one might expect since essentially a leptonic variant of QCD is predicted.
My influential colleagues in Helsinki University, who have said on every possible occasion for the last 31 years that I am totally mad, became understandably very angry. Since they could not punish Nature for behaving according to the predictions of TGD, they decided to punish me. I am not allowed to use the university computer for my homepage anymore. This would not be a problem as such, but they also refused to redirect visitors to my new homepage. These idiots have reached their holy goal: TGD has more or less disappeared from the web.
I have not yet had time and energy to update links from the blog to my homepage. When the link fails to work the recipe is however very simple: replace
in the link with
and everything should work. Thank you very much for your kind attention.
Matti Pitkänen
The recent view about the construction of configuration space spinor structure
During the last five years both the mathematical and the physical understanding of quantum TGD has developed dramatically. Some ideas have died, and a large number of conjectures have turned out to be unnecessarily strong, unnecessary, or simply wrong. The outcome is that the books about basic TGD do not correspond to the actual situation of the theory. Therefore I decided to perform a major cleaning operation, throwing away the obsolete stuff and making good arguments more precise. Good household is not my only motivation: this kind of process, although it challenges the ego, is always extremely fruitful. The basic goal has been to replace the perspective of five years ago with the one which is the outcome of the development of visions and concepts such as: the fundamental description of quantum TGD as almost topological QFT in terms of the modified Dirac action for fermions at light-like 3-surfaces identified as the basic objects of the theory; zero energy ontology; finite measurement resolution as a fundamental physical principle realized in terms of Jones inclusions and having number theoretic braids as space-time correlate; the generalization of S-matrix to M-matrix; number theoretical universality and number theoretical compactification reducing standard model symmetries to number theory and allowing to solve some basic problems of quantum TGD; the realization of the hierarchy of Planck constants in terms of the generalization of the imbedding space concept; the discovery of a hierarchy of symplectic fusion algebras providing concrete understanding of the super-symplectic conformal invariance; and so on.
I started the cleaning up process from the chapter Configuration Space Spinor Structure and I glue below the abstract.
Quantum TGD should be reducible to the classical spinor geometry of the configuration space. In particular, physical states should correspond to the modes of the configuration space spinor fields. The immediate consequence is that configuration space spinor fields cannot, as one might naively expect, be carriers of a definite spin and unit fermion number. Concerning the construction of the configuration space spinor structure there are some important clues.
1. Geometrization of fermionic statistics in terms of configuration space spinor structure
The great vision has been that the second quantization of the induced spinor fields can be understood geometrically in terms of the configuration space spinor structure in the sense that the anti-commutation relations for configuration space gamma matrices require anti-commutation relations for the oscillator operators for free second quantized induced spinor fields.
1. One must identify the counterparts of second quantized fermion fields as objects closely related to the configuration space spinor structure. The Ramond model has as its basic field the anti-commuting field Γk(x), whose Fourier components are analogous to the gamma matrices of the configuration space and which behaves like a spin 3/2 fermionic field rather than a vector field. This suggests that the complexified gamma matrices of the configuration space are analogous to spin 3/2 fields and therefore expressible in terms of the fermionic oscillator operators, so that their anti-commutativity naturally derives from the anti-commutativity of the fermionic oscillator operators.
As a consequence, configuration space spinor fields can have arbitrary fermion number and there would be hopes of describing the whole physics in terms of configuration space spinor field. Clearly, fermionic oscillator operators would act in degrees of freedom analogous to the spin degrees of freedom of the ordinary spinor and bosonic oscillator operators would act in degrees of freedom analogous to the 'orbital' degrees of freedom of the ordinary spinor field.
2. The classical theory for the bosonic fields is an essential part of the configuration space geometry. It would be very nice if the classical theory for the spinor fields were somehow contained in the definition of the configuration space spinor structure. The properties of the modified massless Dirac operator associated with the induced spinor structure are indeed very physical. The modified massless Dirac equation for the induced spinors predicts a separate conservation of baryon and lepton numbers. The differences between quarks and leptons result from the different couplings to the CP2 Kähler potential. In fact, these properties are shared by the solutions of the massless Dirac equation of the imbedding space.
3. Since TGD should have a close relationship to the ordinary quantum field theories, it would be highly desirable that the second quantized free induced spinor field would somehow appear in the definition of the configuration space geometry. This is indeed true if the complexified configuration space gamma matrices are linearly related to the oscillator operators associated with the second quantized induced spinor field on the space-time surface and/or its boundaries. There is actually no deep reason forbidding the gamma matrices of the configuration space from being half-odd-integer spin objects, whereas in the finite-dimensional case this is not possible in general. In fact, in the finite-dimensional case the equivalence of the spinorial and vectorial vielbeins forces the spinor and vector representations of the vielbein group SO(D) to have the same dimension, and this is possible only for D=8-dimensional Euclidean space. This coincidence might explain the success of 10-dimensional superstring models, for which the physical degrees of freedom effectively correspond to an 8-dimensional Euclidean space.
4. It took a long time to realize that the ordinary definition of the gamma matrix algebra in terms of the anti-commutators {γA, γB} = 2gAB must in the TGD context be replaced with {γA†, γB} = iJAB, where JAB denotes the matrix elements of the Kähler form of the configuration space. The presence of the Hermitian conjugation is necessary because configuration space gamma matrices carry fermion number. This definition is numerically equivalent with the standard one in the complex coordinates. The realization of this delicacy is necessary in order to understand how the square of the configuration space Dirac operator comes out correctly.
5. The only possible option is that second quantized induced spinor fields are defined at 3-D light-like causal determinants associated with the 4-D space-time sheet. The unique partonic dynamics is an almost topological QFT defined by the Chern-Simons action for the induced Kähler gauge potential and by the modified Dirac action constructed from it by requiring super-conformal symmetry. The resulting theory has all the desired super-conformal symmetries and is exactly solvable at the parton level. It is light-like 3-surfaces, rather than generic 3-surfaces, which are the fundamental dynamical objects in this approach.
The classical dynamics of the interior of space-time surface defines a classical correlate for the partonic quantum dynamics and provides a realization of quantum measurement theory. It is determined by the vacuum functional identified as the Dirac determinant. There are good arguments suggesting that it reduces to an exponent of absolute extremum of Kähler action in each region of the space-time sheet where the Kähler action density has a definite sign.
2. Modified Dirac equation for induced classical spinor fields
The identification of the light-like partonic 3-surfaces as carriers of elementary particle quantum numbers, inspired by the TGD based quantum measurement theory, forces the identification of the modified Dirac action as that associated with the Chern-Simons action for the induced Kähler gauge potential. At the fundamental level TGD would be an almost-topological super-conformal QFT in the sense that only the light-likeness condition for the partonic 3-surfaces would involve the induced metric. Chern-Simons dynamics would thus involve the induced metric only via the generalized eigenvalue equation for the modified Dirac operator involving the light-like normal of X3l ⊂ X4. N=4 super-conformal symmetry emerges as a maximal Super-Kac-Moody symmetry for this option. The application of D to any generalized eigenmode gives a zero mode, and zero modes together with generalized eigenmodes define a cohomology.
The basic idea is that the Dirac determinant defined by the eigenvalues of DC-S can be identified as the exponent of Kähler action for a preferred extremal. There are however two problems. Without further conditions the eigenvalues of DC-S are functions of the transversal coordinates of X3l, and the standard definition of the Dirac determinant fails. A second problem is how to feed the information about the preferred extremal into the eigenvalue spectrum. The solution of these problems is discussed below.
The eigenmodes of the modified Dirac equation are interpreted as generators of exact N=4 super-conformal symmetries in both quark and lepton sectors. These super-symmetries correspond to pure super gauge transformations and no superpartners of ordinary particles are predicted: in particular, the N=2 space-time super-symmetry generated by the right-handed neutrino is absent, contrary to the earliest beliefs. There is no need to emphasize the experimental implications of this finding.
An essential difference with respect to standard super-conformal symmetries is that Majorana condition is not satisfied, the super generators carry quark or lepton number, and the usual super-space formalism does not apply. The situation is saved by the fact that super generators of super-conformal algebras anticommute to Hamiltonians of symplectic transformations rather than vector fields representing the transformations.
Configuration space gamma matrices identified as super generators of super-symplectic or super Kac-Moody algebras (depending on the CH coordinates used) are expressible in terms of the oscillator operators associated with the eigenmodes of the modified Dirac operator. The number of generalized eigenmodes turns out to be finite, so that standard canonical quantization does not work unless one restricts the set of points involved, defined as the intersection of the number theoretic braid with the partonic 2-surface. The interpretation is in terms of finite measurement resolution, and the surprising thing is that this notion is implied by the vacuum degeneracy of Kähler action.
3. The exponent of Kähler function as Dirac determinant for the modified Dirac action
Although quantum criticality in principle predicts the possible values of Kähler coupling strength, one might hope that there exists an even more fundamental approach involving no coupling constants and predicting even quantum criticality and realizing quantum gravitational holography.
1. The Dirac determinant defined by the product of Dirac determinants associated with the light-like partonic 3-surfaces X3l associated with a given space-time sheet X4 is the simplest candidate for vacuum functional identifiable as the exponent of the Kähler function. One can of course worry about the finiteness of the Dirac determinant. p-Adicization requires that the eigenvalues belong to a given algebraic extension of rationals. This restriction would imply a hierarchy of physics corresponding to different extensions and could automatically imply the finiteness and algebraic number property of the Dirac determinants if only finite number of eigenvalues would contribute. The regularization would be performed by physics itself if this were the case.
2. The basic problem has been how to feed the information about the preferred extremal of Kähler action into the eigenvalue spectrum of the C-S Dirac operator DC-S at the light-like 3-surface X3l. The identification of the preferred extremal became possible via boundary conditions at X3l dictated by number theoretical compactification. The basic observation is that the Dirac equation associated with the 4-D Dirac operator DK defined by Kähler action can be seen as a conservation law for a super current. By restricting the super current to flow along X3l, by requiring that its normal component vanishes, one obtains a singular solution of the 4-D modified Dirac equation restricted to X3l. The "energy" spectrum for the solutions of DK corresponds to the spectrum of eigenvalues for DC-S, and the product of the eigenvalues defines the Dirac determinant in the standard manner. Since the eigenmodes are restricted to those localized to regions of non-vanishing induced Kähler form, the number of eigenmodes is finite and therefore also the Dirac determinant is finite. The eigenvalues can also be algebraic numbers.
3. It remains to be proven that the product of eigenvalues gives rise to the exponent of Kähler action for the preferred extremal of Kähler action. At this moment the only justification for the conjecture is that this is the only thing one can imagine (a toy numeric illustration of the bookkeeping follows this list). The identification of super-symplectic conformal weights as zeros of the zeta function defined by the eigenvalues of the modified Dirac operator would couple them with the dynamics defined by the Kähler action.
4. A long-standing conjecture has been that the zeros of Riemann Zeta are somehow relevant for quantum TGD. Riemann zeta is however naturally replaced by the Dirac zeta defined by the eigenvalues of DC-S and closely related to Riemann Zeta, since the spectrum consists essentially of the cyclotron energy spectra for solutions localized to regions of non-vanishing induced Kähler magnetic field and hence is in good approximation integer valued up to some cutoff integer. In zero energy ontology the Dirac zeta function associated with these eigenvalues defines a "square root" of thermodynamics, assuming that the energy levels of the system in question are expressible as logarithms of the eigenvalues of the modified Dirac operator defining a kind of fundamental constants. Critical points correspond to approximate zeros of Dirac zeta, and if the Kähler function vanishes at criticality, as it indeed should, the thermal energies at critical points are in first order approximation proportional to the zeros themselves, so that a connection between quantum criticality and approximate zeros of Dirac zeta emerges.
5. The discretization induced by the number theoretic braids reduces the world of classical worlds to an effectively finite-dimensional space, and the configuration space Clifford algebra reduces to a finite-dimensional algebra. The interpretation is in terms of finite measurement resolution represented in terms of a Jones inclusion M ⊂ N of HFFs, with M taking the role of complex numbers. The finite-D quantum Clifford algebra spanned by fermionic oscillator operators is identified as a representation for the coset space N/M describing physical states modulo measurement resolution. In the sectors of generalized imbedding space corresponding to non-standard values of Planck constant, a quantum version of the Clifford algebra is in question.
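As promised above, a toy numeric illustration of the bookkeeping in point 3 (Python; the eigenvalue list is invented purely for illustration and is not derived from any Dirac operator): with a finite spectrum the determinant is a plain finite product, and the candidate Kähler function is its logarithm.

    import numpy as np

    # Invented finite spectrum standing in for the generalized eigenvalues
    # of the Chern-Simons Dirac operator (illustrative only).
    eigenvalues = np.array([1.0, 2.0, 3.0, 5.0])

    # With a finite spectrum no regularization is needed:
    dirac_determinant = np.prod(eigenvalues)

    # The conjecture identifies exp(Kahler function) with this determinant,
    # i.e. the Kahler function with its logarithm.
    kahler_function = np.log(dirac_determinant)
    print(dirac_determinant, kahler_function)  # 30.0, ~3.401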
4. Super-conformal symmetries
The almost topological QFT property of the partonic formulation based on Chern-Simons action and the corresponding modified Dirac action allows a rich structure of N=4 super-conformal symmetries. In particular, the generalized Kac-Moody symmetries correspond to X3-local isometries respecting the light-likeness condition. A rather detailed view about various aspects of super-conformal symmetries emerges, leading to the identification of fermionic anti-commutation relations and explicit expressions for configuration space gamma matrices and Kähler metric. This picture is consistent with the conditions posed by p-adic mass calculations.
Number theoretical considerations play a key role and lead to the picture in which effective discretization occurs so that partonic two-surface is effectively replaced by a discrete set of algebraic points belonging to the intersection of the real partonic 2-surface and its p-adic counterpart obeying the same algebraic equations. This implies effective discretization of super-conformal field theory giving N-point functions defining vertices via discrete versions of stringy formulas.
For the updated version of the chapter see Configuration Space Spinor Structure of "Physics as Infinite-Dimensional Geometry".
Vision about quantization of Planck constant
The quantization of Planck constant has been the basic theme of TGD since 2005, and the perspective in the earlier version of this chapter reflected the situation about a year and a half after the basic idea, stimulated by the finding of Nottale that planetary orbits could be seen as Bohr orbits with an enormous value of Planck constant given by hbar_gr = G M1 M2/v0, v0 ≈ 2^(-11) for the inner planets. The general form of hbar_gr is dictated by Equivalence Principle. This inspired the ideas that quantization is due to a condensation of ordinary matter around dark matter concentrated near Bohr orbits and that dark matter is in a macroscopic quantum phase in astrophysical scales.
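For orientation, one can evaluate this formula for the Sun-Earth pair in SI units (a minimal Python sketch; the constants are standard values, and the choice of this particular pair of masses is my illustrative assumption):

    # Evaluate hbar_gr = G*M1*M2/v0 for the Sun-Earth pair (SI units).
    G = 6.674e-11        # m^3 kg^-1 s^-2
    c = 2.998e8          # m/s
    M_sun = 1.989e30     # kg
    M_earth = 5.972e24   # kg
    v0 = c / 2**11       # the velocity parameter, ~1.46e5 m/s

    hbar_gr = G * M_sun * M_earth / v0
    hbar = 1.055e-34     # ordinary Planck constant, J s

    print(hbar_gr)            # ~5.4e39 J s
    print(hbar_gr / hbar)     # ~5e73: a truly gigantic ratio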
The second crucial empirical input was the anomalies associated with living matter. Mention only the effects of ELF radiation at EEG frequencies on the vertebrate brain and the anomalous behavior of ionic currents through the cell membrane. If the value of Planck constant is large, the energy of EEG photons is above thermal energy and one can understand the effects on both physiology and behavior. If ionic currents through the cell membrane have large Planck constant, the scale of quantum coherence is large and one can understand the observed low dissipation in terms of quantum coherence.
1. The evolution of mathematical ideas
From the beginning the basic challenge, besides the need to deduce a general formula for the quantized Planck constant, was to understand how the quantization of Planck constant is mathematically possible. From the beginning it was clear that since particles with different values of Planck constant cannot appear in the same vertex, a generalization of the space-time concept is needed to achieve this.
During last five years or so many deep ideas -both physical and mathematical- related to the construction of quantum TGD have emerged and this has led to a profound change of perspective in this and also other chapters. The overall view about TGD is described briefly here.
1. More than five years ago I realized that the von Neumann algebras known as hyperfinite factors of type II1 (HFFs) are highly relevant for quantum TGD, since the Clifford algebra of configuration space ("world of classical worlds", WCW) is a direct sum over HFFs. Jones inclusions are a particular class of inclusions of HFFs, and quantum groups are closely related to them. This led to a conviction that Jones inclusions can provide a detailed understanding of what is involved and predict a very simple spectrum for the Planck constants associated with M4 and CP2 degrees of freedom (later I replaced M4 by its light cone M4± and finally with the causal diamond CD defined as the intersection of future and past light-cones of M4).
2. The notion of zero energy ontology replaces physical states with zero energy states consisting of pairs of positive and negative energy states at the light-like boundaries δM4±×CP2 of CDs forming a fractal hierarchy containing CDs within CDs. In standard ontology a zero energy state corresponds to a physical event, say a particle reaction. This led to the generalization of S-matrix to M-matrix, identified as a Connes tensor product characterizing time-like entanglement between positive and negative energy states. M-matrix is a product of the square root of a density matrix and a unitary S-matrix, just as a Schrödinger amplitude is a product of modulus and phase, which means that thermodynamics becomes part of quantum theory and thermodynamical ensembles are realized as single particle quantum states. This also led to a solution of the long-standing problem of understanding how the geometric time of the physicist is related to experienced time, identified as a sequence of quantum jumps interpreted as moments of consciousness in the TGD inspired theory of consciousness, which can also be seen as a generalization of quantum measurement theory (see this).
3. Another closely related idea was the emergence of measurement resolution as a basic element of quantum theory. Measurement resolution is characterized by an inclusion M ⊂ N of HFFs, with M characterizing the measurement resolution in the sense that the action of M creates states which cannot be distinguished from each other within the measurement resolution used. Hence complex rays of state space are replaced with M rays. One of the basic challenges is to define the nebulous factor space N/M having finite fractional dimension N:M given by the index of the inclusion. It was clear that this space should correspond to a quantum counterpart of the Clifford algebra of the world of classical worlds, reduced to a finite-quantum-dimensional algebra by the finite measurement resolution (see this).
4. The realization that light-like 3-surfaces at which the signature of the induced metric of the space-time surface changes from Minkowskian to Euclidean are ideal candidates for basic dynamical objects, besides light-like boundaries of the space-time surface, was a further decisive step of progress. This led to the vision that quantum TGD is almost topological quantum field theory ("almost" because light-likeness brings in the induced metric) characterized by Chern-Simons action for the induced Kähler gauge potential of CP2. Together with zero energy ontology this led to the generalization of the notion of Feynman diagram: lines correspond to light-like 3-surfaces and vertices to 2-D partonic surfaces at which these 3-D surfaces meet. This means a strong departure from the string model picture. The interaction vertices should be given by N-point functions of a conformal field theory with second quantized induced spinor fields defining the basic fields, in terms of which also the gamma matrices of the world of classical worlds could be constructed as super generators of super-conformal symmetries (see this).
5. By quantum classical correspondence, finite measurement resolution should have a space-time correlate. The obvious guess was that this correlate is discretization at the level of the construction of M-matrix. In the almost-TQFT context the effective replacement of light-like 3-surfaces with braids, the basic objects of TQFTs, is the obvious guess. Also number theoretic universality, necessary for the p-adicization of quantum TGD by a process analogous to the completion of rationals to reals and various p-adic number fields, requires discretization, since only rational and possibly some algebraic points of the imbedding space (in suitable preferred coordinates) allow interpretation both as real and p-adic points. It was clear that the construction of M-matrix boils down to a precise understanding of number theoretic braids (see this).
6. The interaction with M-theory dualities (see this) led to a handful of speculations about dualities possible in the TGD framework, and one of these dualities, the M8-M4×CP2 duality, eventually led to a unique identification of number theoretic braids. The dimensions of the partonic 2-surface, space-time, and imbedding space strongly suggest that classical number fields, or more precisely their complexifications, might help to understand quantum TGD. If the choice of imbedding space is unique because of the uniqueness of the infinite-dimensional Kähler geometric existence of the world of classical worlds, then the standard model symmetries coded by M4×CP2 should have some deeper meaning, and the most obvious guess is that M4×CP2 can be understood geometrically. SU(3) belongs to the automorphism group of octonions as well as of hyper-octonions M8, identified as the subspace of complexified octonions with Minkowskian signature of induced metric. This led to the discovery that hyper-quaternionic 4-surfaces in M8 can be mapped to M4×CP2 provided their tangent space contains a preferred M2 ⊂ M4 ⊂ M4×E4. Years later I realized that the map generalizes so that M2 can depend on the point of X4. The interpretation of M2(x) is both as a preferred hyper-complex (commutative) sub-space of M8 and as a local plane of non-physical polarizations, so that a purely number theoretic interpretation of gauge conditions emerges in the TGD framework. This led to rapid progress in the construction of quantum TGD. In particular, the challenge of identifying the preferred extremal of Kähler action associated with a given light-like 3-surface X3l could be solved, and the precise relation between the M8 and M4×CP2 descriptions was understood (see this).
7. Also the challenge of reducing quantum TGD to the physics of second quantized induced spinor fields found a resolution recently (see this). Years ago it became clear that the vacuum functional of the theory must be the Dirac determinant associated with the induced spinor fields, so that the theory would predict all coupling parameters from quantum criticality. Even more, the vacuum functional should correspond to the exponent of Kähler action for a preferred extremal. The problem was that the generalized eigenmodes of the Chern-Simons Dirac operator allow generalized eigenvalues which are arbitrary functions of the two coordinates in the directions transversal to the light-like direction of X3l. The progress in the understanding of number theoretical compactification however allowed to understand how the information about the preferred extremal of Kähler action is coded into the spectrum of eigenmodes.
The basic idea is simple, and I actually discovered it more than half a decade ago but forgot! The generalized eigenmodes of the 3-D Chern-Simons Dirac operator DC-S correspond to the zero modes of a 4-D modified Dirac operator defined by Kähler action, localized to X3l, so that induced spinor fields can be seen as 4-D spinorial shock waves. This led to a concrete interpretation of the eigenvalues as analogous to cyclotron energies of a fermion in the classical electro-weak magnetic fields defined by the induced spinor connection, and a connection with anyon physics emerges by the 2-dimensionality of the evolving system. It also became possible to identify the boundary conditions for the preferred extremal of Kähler action (the analog of a Bohr orbit) at X3l, and to see how general coordinate invariance allows to use any light-like 3-surface X3 ⊂ X4(X3l), instead of only the wormhole throat, to second quantize the induced spinor field.
8. It came as a total surprise that due to the huge vacuum degeneracy of induced spinor fields the number of generalized eigenmodes identified in this manner is finite. The good news was that the theory is manifestly finite and zeta function regularization is not needed to define the Dirac determinant. The manifest finiteness had actually been a must-be-true property from the beginning. The apparently bad news was that the Clifford algebra of WCW constructed from the oscillator operators is bound to be finite-dimensional. The resolution of the paradox comes from the realization that this algebra represents the somewhat mysterious coset space N/M, so that finite measurement resolution and the notion of inclusion are coded by the vacuum degeneracy of Kähler action, and the maximally economical description in terms of inclusions emerges automatically.
9. A unique identification of number theoretic braids became also possible and relates to the construction of the generalized imbedding space by gluing together singular coverings and factor spaces of CD\M2 and CP2\S2I to form a book-like structure. Here M2 is a preferred plane of M4 defining the quantization axes of energy and angular momentum, and S2I is one of the two geodesic spheres of CP2. The interpretation of the selection of these sub-manifolds is as a geometric correlate for the selection of quantization axes, and the CD defining a basic sector of the world of classical worlds is replaced by a union corresponding to these choices. Number theoretic braids come in two variants dual to each other, and correspond to the intersection of M2 and the M4 projection of X3l on one hand, and of S2I and the CP2 projection of X3l on the other hand. This is the simplest option and would mean that the points of the number theoretic braid belong to M2 (S2I) and are thus quantum critical, although the entire X2 at the boundaries of CD belongs to a fixed page of the Big Book. This means a solution of the long-standing problem of understanding in what sense the TGD Universe is quantum critical. The phase transitions changing Planck constant correspond to tunneling, represented geometrically by a leakage of the partonic 2-surface from one page of the Big Book to another.
10. Many other steps of progress have occurred during the last years. Much earlier it had become clear that the basic difference between TGD and string models is that in the TGD framework the super algebra generators are non-hermitian and carry quark or lepton number (see this). The super-space concept is unnecessary because super generators anticommute to Hamiltonians of bosonic symmetries rather than to the corresponding vector fields. This makes it possible to avoid the Majorana condition of superstring models fixing the space-time dimension to 10 or 11. During the last years a much more precise understanding of super-symplectic and super Kac-Moody symmetries has emerged. The generalized coset representation for these two Super Virasoro algebras generalizes Equivalence Principle and predicts as a special case the equivalence of gravitational and inertial masses. The coset construction also provides a justification for p-adic thermodynamics, in apparent conflict with super-conformal invariance. The construction of the fusion rules of symplectic QFT as an analog of conformal QFT led to the notion of number theoretic braid and to an explicit construction of a hierarchy of algebras realizing the symplectic fusion rules and the notion of finite measurement resolution (see this). This approach led to the formulation of generalized Feynman diagrams and coupling constant evolution in terms of operads, tailor-made for a mathematical realization of the notion of coupling constant evolution. One of the future challenges is to combine the symplectic fusion algebras with the realization of the hierarchy of Planck constants.
2. The evolution of physical ideas
The evolution of the physical ideas related to the hierarchy of Planck constants and to dark matter as a hierarchy of phases of matter with non-standard values of Planck constant was much faster than the evolution of the mathematical ideas, and quite a number of applications have been developed during the last five years.
1. The basic idea was that ordinary matter condenses around dark matter, which is a phase of matter characterized by a non-standard value of Planck constant.
2. The realization that non-standard values of Planck constant give rise to charge and spin fractionization and anyonization led to a precise identification of the prerequisites of the anyonic phase (see this). If the partonic 2-surface, which can have even astrophysical size, surrounds the tip of CD, the matter at the surface is anyonic and particles are confined at this surface. Dark matter could be confined inside this kind of light-like 3-surfaces, around which ordinary matter condenses. If the radii of the basic pieces of these nearly spherical anyonic surfaces, glued to a connected structure by flux tubes mediating gravitational interaction, are given by Bohr rules, the findings of Nottale can be understood. Dark matter would resemble to a high degree the matter in black holes, replaced in the TGD framework by light-like partonic 2-surfaces with a minimum size of order the Schwarzschild radius rS, of order the scaled-up Planck length: rS ~ (hbar G)^(1/2). Black hole entropy, being inversely proportional to hbar, is predicted to be of order unity, so that a dramatic modification of the picture about black holes is implied.
3. Darkness is a relative concept, due to the fact that particles at different pages of the book cannot appear in the same vertex of a generalized Feynman diagram. The phase transitions in which the partonic 2-surface X2, during its travel along X3l, leaks to a different page of the book are however possible and change Planck constant, so that particle exchanges of this kind allow particles at different pages to interact. The interactions are strongly constrained by charge fractionization and are essentially phase transitions involving many particles. Classical interactions are also possible. This allows to conclude that we are actually observing dark matter via classical fields all the time and have perhaps even photographed it (see this).
4. Perhaps the most fascinating applications are in biology. The anomalous behavior of ionic currents through the cell membrane (low dissipation, quantal character, no change when the membrane is replaced with an artificial one) has a natural explanation in terms of dark supra currents. This leads to a vision about how dark matter and phase transitions changing the value of Planck constant could relate to the basic functions of the cell, the functioning of DNA and amino acids, and to the mysteries of bio-catalysis. This also leads to a model for EEG, interpreted as a communication and control tool of the magnetic body containing dark matter and using the biological body as motor instrument and sensory receptor. One especially shocking outcome is the emergence of the genetic code of vertebrates from the model of dark nuclei as nuclear strings (see this).
3. Brief summary about the generalization of the imbedding space concept
A brief summary of the basic vision is in order and might help the reader to assimilate the more detailed representation of the generalization of the imbedding space.
1. The hierarchy of Planck constants cannot be realized without generalizing the notions of imbedding space and space-time, since particles with different values of Planck constant cannot appear in the same interaction vertex. This suggests some kind of book-like structure for both the M4 and CP2 factors of the generalized imbedding space.
2. Schrödinger equation suggests that Planck constant corresponds to a scaling factor of the M4 metric whose value labels the different pages of the book. The M4 coordinates can be scaled so that the original metric results in the M4 factor, so that the scaling of hbar corresponds to the scaling of the size of the causal diamond CD defined as the intersection of future and past directed light-cones. The light-like 3-surfaces having their 2-D ends at the light-like boundaries of CD are in a key role in the realization of zero energy states. The infinite-D spaces formed by these 3-surfaces define the fundamental sectors of the configuration space (world of classical worlds). Since the scaling of CD does not simply scale space-time surfaces, the coding of radiative corrections into the geometry of space-time sheets becomes possible and Kähler action can be seen as an expansion in powers of hbar/hbar0.
3. Quantum criticality of the TGD Universe is one of the key postulates of quantum TGD. The most important implication is that Kähler coupling strength is analogous to critical temperature. The exact realization of quantum criticality would be in terms of critical sub-manifolds of M4 and CP2 common to all sectors of the generalized imbedding space. Quantum criticality would mean that the two kinds of number theoretic braids assignable to the M4 and CP2 projections of the partonic 2-surface belong, by the definition of number theoretic braids, to these critical sub-manifolds. At the boundaries of CD associated with the positive and negative energy parts of a zero energy state in a given time scale, partonic 2-surfaces belong to a fixed page of the Big Book, whereas the light-like 3-surface decomposes into regions corresponding to different values of Planck constant, much like matter decomposes into several phases at thermodynamical criticality.
4. The connection with Jones inclusions was originally a purely heuristic guess based on the observation that the finite groups characterizing Jones inclusions also characterize the pages of the Big Book. The key observation is that Jones inclusions are characterized by a finite subgroup G ⊂ SU(2) and that this group also characterizes the singular covering or factor spaces associated with CD or CP2, so that the pages of the generalized imbedding space could indeed serve as correlates for Jones inclusions. The elements of the included algebra M are invariant under the action of G, and M takes the role of complex numbers in the resulting non-commutative quantum theory.
5. The understanding of quantum TGD at parton level led to the realization that the dynamics of Kähler action realizes finite measurement resolution in terms of a finite number of modes of the induced spinor field. This automatically implies cutoffs to the representations of various super-conformal algebras typical for the representations of quantum groups closely associated with Jones inclusions. The Clifford algebra spanned by the fermionic oscillator operators would provide a realization for the factor space N/M of hyper-finite factors of type II_1, with N identified as the infinite-dimensional Clifford algebra of the configuration space and the included algebra M determining the finite measurement resolution. The resulting quantum Clifford algebra has anti-commutation relations dictated by the fractionization of fermion number, so that its unit becomes r = hbar/hbar_0. The SU(2) Lie algebra transforms to its quantum variant corresponding to the quantum phase q = exp(i2π/r).
6. Jones inclusions appear in two variants corresponding to N:M < 4 and N:M = 4. The tentative interpretation is in terms of singular G-factor spaces and G-coverings of M4 or CP2 in some sense. The alternative interpretation in terms of two geodesic spheres of CP2 would mean an asymmetry between M4 and CP2 degrees of freedom.
7. Number theoretic Universality suggests an answer to why the hierarchy of Planck constants is necessary. One must be able to define the notion of angle - or at least the notions of phase and of trigonometric functions - also in the p-adic context. All that one can achieve naturally is the notion of phase defined as a root of unity, introduced by allowing an algebraic extension of the p-adic number field if needed. In the framework of TGD inspired theory of consciousness this inspires a vision about cognitive evolution as the gradual emergence of increasingly complex algebraic extensions of p-adic numbers, involving also the emergence of improved angle resolution expressible in terms of phases exp(i2π/n) up to some maximum value of n. The coverings and factor spaces would realize these phases geometrically, and the quantum phases q naturally assignable to Jones inclusions would realize them algebraically. Besides the p-adic coupling constant evolution based on the hierarchy of p-adic length scales there would be a coupling constant evolution with respect to hbar associated with angular resolution.
For the updated version of the chapter see Does TGD Predict a Spectrum of Planck Constants? of "Towards S-matrix".
Saturday, January 17, 2009
Unidentified spectral noise as a new experimental support for quantum TGD?
The news of yesterday morning came in email from Jack Sarfatti. The news was that the gravitational wave detectors in the GEO600 experiment have been plagued by unidentified noise in the frequency range 300-1500 Hz. Craig J. Hogan has proposed an explanation in terms of a holographic Universe. By reading the paper I learned that the assumptions needed are essentially those of quantum TGD. Light-like 3-surfaces as basic objects, holography, effective 2-dimensionality - these are some of the terms appearing repeatedly in the article.
Maybe this means a new discovery giving support for TGD. I hope that it does not make my life even more difficult in Finland. Readers have perhaps noticed that the discovery of a new long-lived particle in CDF, predicted by TGD already around 1990, turned out to be one of the most fantastic breakthroughs of TGD, since the reported findings could be explained at a quantitative level. The side effect was that Helsinki University did not allow me to use the computer for my homepage anymore, and they also refused to redirect visitors to my new homepage. The goal was achieved: I have more or less disappeared from the web. It seems that TGD is becoming really dangerous and the power holders of science are getting nervous.
In any case, I could not resist the temptation to spend the day with this problem, although I had firmly decided to use all my available time for updating the basic chapters of quantum TGD.
1. The experiment
Consider first the gravitational wave detector used in the GEO600 experiment. The detector consists of two long arms (each 600 meters in length) - essentially rulers of equal length. An incoming gravitational wave causes a periodic stretching of the arms: the lengths of the rulers vary. Detection means that a laser beam is used to keep a record of the varying length difference. This is achieved by splitting the laser beam into two pieces using a beam splitter. After this the beams travel through the arms and bounce back to interfere in the detector. The interference pattern tells whether the beams spent slightly different times in the arms due to the stretching caused by the incoming gravitational radiation. The problem of the experimenters has been the presence of an unidentified noise in the range 100-1500 Hz.
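To make the readout mechanism concrete, here is a minimal Python sketch of how an arm-length difference maps to the detected interference intensity. The wavelength, arm length and strain value are my illustrative assumptions, not the actual GEO600 optics parameters.

import numpy as np

# Minimal sketch: output intensity of a Michelson-type interferometer as
# a function of the arm-length difference delta_L (illustrative numbers).
wavelength = 1064e-9                       # m, assumed laser wavelength
delta_L = np.linspace(0, wavelength, 200)  # arm-length difference sweep

phase = 4 * np.pi * delta_L / wavelength   # round-trip phase difference
intensity = np.cos(phase / 2) ** 2         # normalized output intensity
print(intensity[:5])

# A gravitational wave with strain h stretches one arm by about h*L, so
# the interference pattern tracks the strain directly.
L_arm = 600.0      # m, GEO600 arm length
h = 1e-21          # illustrative strain amplitude
print("arm stretch for h = 1e-21:", h * L_arm, "m")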
The prediction of Measurement of quantum fluctuations in geometry by Craig Hogan, published in Phys. Rev. D 77, 104031 (2008), is that the holographic geometry of space-time should induce fluctuations of classical geometry with a spectrum which is completely fixed. Hogan's prediction is very general and - if I have understood correctly - the fluctuations depend only on the duration (or length) of the laser beam, with Planck length as the unit. Note that there is no dependence on the length of the arms: the fluctuations characterize only the laser beam. Although Planck length appears in the formula, the fluctuations need not have anything to do with gravitons but could be due to the failure of the classical description of laser beams. The great surprise was that Hogan's prediction for the noise is of the same order of magnitude as the unidentified noise bothering the experimenters in the range 100-700 Hz.
2. Hogan's theory
Let us try to understand Hogan's theory in more detail.
1. The basic quantitative prediction of the theory is very simple. The spectral density of the noise for high frequencies is given by h_H = t_P^(1/2), where t_P = (hbar G)^(1/2) is the Planck time. For low frequencies h_H is proportional to 1/f, just like 1/f noise. The power density of the noise is given by t_P, and a connection with the poorly understood 1/f noise appearing in electronic and other systems is suggestive. The prediction depends only on the Planck scale, so that it should be very easy to kill the model if one is able to reduce the noise from other sources below the critical level t_P^(1/2). The model also predicts the distribution characterizing the uncertainty in the direction of arrival for a photon in terms of the ratio l_P/L. Here L is the length of the beam or equivalently its duration. A further prediction is that the minimal uncertainty in the arrival time of photons is given by Δt = (t_P t)^(1/2) and increases with the duration of the beam (see the numeric sketch after this list).
2. Both quantum and classical mechanisms are discussed as an explanation of the noise. Gravitational holography is the key assumption behind both models. Gravitational holography states that space-time geometry has two space dimensions instead of three at the fundamental level and that the third dimension emerges via holography. A further assumption is that light-like (null) 3-surfaces are the fundamental objects. Sounds familiar!
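To get a feel for the numbers, here is a minimal Python sketch evaluating these formulas; treat it as an order-of-magnitude reading of the predictions quoted above, with the SI Planck time standing in for (hbar G)^(1/2) in c = 1 units.

import numpy as np

# Order-of-magnitude reading of Hogan's formulas as quoted above.
t_P = 5.391e-44    # s, Planck time (SI value; the text uses c = 1 units)
f_res = 700.0      # Hz, residence frequency quoted for GEO600

# High-frequency spectral density h_H = t_P^(1/2), in 1/sqrt(Hz).
h_high = np.sqrt(t_P)
print("h_H for f >> f_res:", h_high)       # ~ 2.3e-22 per sqrt(Hz)

# Low-frequency 1/f regime: h(f) = (f_res/f) * t_P^(1/2).
for f in (100.0, 300.0, 700.0):
    print(f, "Hz:", (f_res / f) * h_high)

# Minimal arrival-time uncertainty Delta t = (t_P * t)^(1/2) for a beam
# of duration t; it grows with the duration of the beam.
t_beam = 1e-3      # s, illustrative beam duration
print("Delta t:", np.sqrt(t_P * t_beam), "s")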
2.1 Heuristic argument
The model starts from an optics inspired heuristic argument.
1. Consider a light ray with length L which ends at an aperture of size D. This gives rise to a diffraction spot of size λL/D. The resulting uncertainty of the transverse position of the source is minimized when the size of the diffraction spot equals the aperture size. This gives for the transverse uncertainty of the position of the source Δx = (λL)^(1/2). The orientation of the ray can be determined with a precision Δθ = (λ/L)^(1/2). The shorter the wavelength, the better the precision. Planck length is believed to pose a fundamental limit to the precision. The conjecture is that the transverse indeterminacy of Planck wave length quantum paths corresponds to the quantum indeterminacy of the metric itself. What this means is not quite clear to me.
2. The basic outcome of the model is that the uncertainty for the arrival times of the photons after reflection is proportional to
Δt = t_P^(1/2) × (sin θ)^(1/2) × sin(2θ),
where θ denotes the angle of incidence on the beam splitter. In the normal direction Δt vanishes. The proposed interpretation is in terms of Brownian motion for the distance between the beam splitter and the detector, the idea being that each reflection from the beam splitter adds uncertainty. This is essentially due to the replacement of the light-like surface with a new one orthogonal to it, inducing a measurement of the distance between the detector and the beam splitter. A numeric evaluation of these formulas follows below.
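The following Python sketch evaluates the diffraction formulas and the reflection formula above, with SI Planck units standing in for the c = 1 expressions and an arbitrary illustrative path length; note that t_P^(1/2) carries the units of a spectral density, so the last printed number should be read as such rather than as a time in seconds.

import numpy as np

# Transverse position and angular uncertainty from the diffraction
# argument, evaluated for an illustrative path length L.
l_P = 1.616e-35    # m, Planck length (plays the role of lambda)
L = 600.0          # m, illustrative path length
print("Delta x:", np.sqrt(l_P * L), "m")   # ~ 1e-16 m
print("Delta theta:", np.sqrt(l_P / L))

# Angle dependence of the reflection formula quoted above; it vanishes
# at normal incidence and is evaluated here at 45 degrees.
t_P = 5.391e-44    # s, Planck time
theta = np.pi / 4
dt = np.sqrt(t_P) * np.sqrt(np.sin(theta)) * np.sin(2 * theta)
print("spectral Delta t at 45 deg:", dt)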
This argument has some aspects which I find questionable.
1. The assumption of Planck wave length waves is certainly questionable. The underlying idea is that it leads to the classical formula involving the aperture size, which is then eliminated from the basic formula by requiring optimal angular resolution. One might argue that a special status of waves with Planck wave length breaks Lorentz invariance, but since the experimental apparatus defines a preferred coordinate system this need not be a problem.
2. Unless one is ready to forget the argument leading to the formula for Δθ, one can argue that the description of the holographic interaction between distant points induced by these Planck wave length waves in terms of an aperture with size D = (l_P L)^(1/2) should have some more abstract physical counterpart. Could elementary particles as extended 2-D objects (as in TGD) play the role of ideal apertures at which radiation with Planck wave length arrives? If one gives up the assumption about Planck wave length radiation the uncertainty increases as λ. In my opinion one should be able to deduce the basic formula without this kind of argument.
2.2 Argument based on uncertainty principle for waves with Planck wave length
The second argument can do without diffraction but still uses Planck wave length waves.
1. The interactions of Planck wave length radiation at a null surface at two different times, corresponding to normal coordinates z_1 and z_2 at these times, are considered. From the standard uncertainty relation between momentum and position of the incoming particle one deduces an uncertainty relation for the transverse position operators x(z_i), i = 1,2. The uncertainty comes from the uncertainty of x(z_2) induced by the uncertainty of the transverse momentum p_x(z_1). The uncertainty relation is deduced by assuming that (x(z_2) - x(z_1))/(z_2 - z_1) is the ratio of the transverse and longitudinal wave vectors. This relates x(z_2) to p_x(z_1), and the uncertainty relation can be deduced. The uncertainty increases linearly with z_2 - z_1. Geometric optics is used to describe the propagation between the two points, and this should certainly work for a situation in which the wavelength is the Planck wavelength, if the notion of a Planck wave length wave makes sense. From this formula the basic predictions follow.
2. Hogan emphasizes that the basic result is obtained also classically by assuming that the light-like surfaces describing the propagation of light between the end points of the arm describe a Brownian-like random walk in directions transverse to the direction of propagation. I understand this to mean that a Planck wave length wave is not absolutely necessary for this approach.
2.3 Description in terms of equivalent gravitonic wave packet
Hogan also discusses an effective description of the holographic noise in terms of a gravitational wave packet passing through the system.
1. The holographic noise at frequency f has an equivalent description in terms of a gravitational wave packet of frequency f and duration T = 1/f passing through the system. In this description the variance for the length difference of the arms is given by the standard formula for a gravitational wave packet
Δl^2/l^2 = h^2 f,
where h characterizes the spectral density of the gravitational wave.
2. For high frequencies one obtains
h = h_P = (t_P)^(1/2).
3. For low frequencies the model predicts
h = (f_res/f) (t_P)^(1/2).
Here f_res characterizes the inverse residence time in the detector and is estimated to be about 700 Hz in the GEO600 experiment.
4. The predictions of the theory are compared to the unidentified noise in the frequency range 100-600 Hz, which introduces an amplifying factor varying from 7 to 1. The orders of magnitude are the same. A numeric reading of this equivalent description is sketched below.
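Here is a minimal Python sketch of the equivalent-wave-packet description, combining the high- and low-frequency formulas above into a single spectrum and converting it to an rms arm-length fluctuation; the values are my order-of-magnitude reading, not fitted GEO600 data.

import numpy as np

t_P = 5.391e-44    # s, Planck time (SI; the text uses c = 1 units)
f_res = 700.0      # Hz, quoted residence frequency for GEO600
L_arm = 600.0      # m, GEO600 arm length

def h_spectral(f):
    # Flat t_P^(1/2) above f_res, amplified by f_res/f below it.
    return np.sqrt(t_P) * max(f_res / f, 1.0)

for f in (100.0, 300.0, 700.0, 1500.0):
    var_rel = h_spectral(f) ** 2 * f    # (Delta l / l)^2 = h^2 f
    print(f, "Hz: dl_rms ~", L_arm * np.sqrt(var_rel), "m")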
3. TGD based model
In the TGD based model for the claimed noise one can avoid the assumption about waves with Planck wave length. Rather, Planck length corresponds to the transverse cross section of the so-called massless extremals (MEs) assignable to photons, orthogonal to the direction of propagation. Further elements are the so-called number theoretic braids leading to the discretization of quantum TGD at the fundamental level. The mechanism inducing the distribution for the travel times of the reflected photon is due to the transverse extension of MEs and the discretization in terms of number theoretic braids. Note that also in Hogan's model it is essential that one can speak about the position of a particle in the beam.
3.1 Some background
Consider first the general picture behind the TGD inspired model.
1. What the authors emphasize can be condensed to the following statement: the transverse indeterminacy of Planck wave length seems likely to be a feature of 3+1-D space-time emerging as a dual of quantum theory on a 2+1-D null surface. In TGD light-like 3-surfaces indeed are the fundamental objects, and the 4-D space-time surface is in a holographic relation to these light-like 3-surfaces. The analog of conformal invariance in the light-like radial direction implies that partonic 2-surfaces are actually the basic objects in short scales, in the sense that one has 3-dimensionality only in a discretized sense.
2. The interpretation as an almost topological quantum field theory, the notion of finite measurement resolution, number theoretical universality making possible the p-adicization of quantum TGD, and the notion of quantum criticality all lead to a fundamental description in terms of discrete point sets. These are defined as intersections of what I call number theoretic braids with partonic 2-surfaces X2 at the boundaries of causal diamonds, identified as intersections of future and past directed light-cones forming a fractal hierarchy. These 2-surfaces X2 correspond to the ends of light-like 3-surfaces. Only the data from this discrete point set is used in the definition of the M-matrix. There is however a continuum of selections of this data set corresponding to different directions of the light-like ray at the boundary of the light-cone; in detection one of these directions is selected and corresponds to the direction of the beam in the recent case.
3. A fermion corresponds to a CP2 vacuum extremal with Euclidian signature of the induced metric condensed on a space-time sheet with Minkowskian signature; the light-like wormhole throat, for which the 4-metric is degenerate, carries the quantum numbers. Bosons correspond to wormhole contacts consisting of a piece of CP2 vacuum extremal connecting two space-time sheets with Minkowskian signature of the induced metric. The strands of number theoretic braids carry fermionic quantum numbers, and the discretization is interpreted as a space-time correlate for the finite measurement resolution implying the effective grainy nature of the 2-surfaces.
3.2 The model
Consider now the TGD inspired model for a laser beam of fixed duration T.
1. In the TGD framework the beams of photons, and perhaps also photons themselves, would have so-called massless extremals as space-time correlates. The identification of gauge bosons as wormhole contacts means that there is a pair of MEs connected by a piece of CP2 type vacuum extremal carrying fermion and antifermion at the wormhole throats defining light-like 3-surfaces. The intersection of the ME with the light-cone boundary would represent a partonic 2-surface, and any transverse cross section of the M4 projection of the ME is possible.
2. The reflection of an ME has a description in terms of generalized Feynman diagrams for which the incoming lines correspond to the light-like 3-surfaces and the vertices to partonic 2-surfaces at which the MEs are glued together. In this simplest model this surface defines a transverse cross section of both the incoming and the outgoing ME. The incoming and outgoing braid strands end at different points of the cross section, because if two points coincide the N-point correlation function vanishes. This means that in the reflection the distribution for the positions of the braid points representing the exact positions of the photon changes in a non-deterministic manner. This induces a quantum distribution of the transverse coordinates associated with the braid strands, and in the detection a state function reduction occurs, fixing the positions of the braid strands.
3. The transverse cross section has maximum area when it is parallel to the ME. In this case the area is, apart from a numerical constant, equal to d×L, where L is the length defined by the duration of the laser beam defining the length of the ME and d is the diameter of the orthogonal cross section of the ME. This makes natural the assumption that the distribution for the positions of points in the cross section is Gaussian with variance equal to d×L (see the Monte Carlo sketch after this list). The distribution proposed by Hogan is obtained if d is given by Planck length. This would mean that the minimum area for a cross section of an ME is very small, about S = hbar×G. This might make sense if the ME represents a laser beam.
4. The assumption susceptible to criticism is that for the primordial ME representing the photon the area of the cross section orthogonal to the direction of propagation is always given by Planck length. This assumption of course replaces Hogan's Planck wave. Note that the classical four-momentum of an ME is massless. One could however argue that in the quantum situation the transverse momentum squared is a well defined quantum number and of order Planck mass squared.
5. In the TGD Universe a single photon would differ from an infinitely narrow ray by having a thickness defined by Planck length. There would be just a single braid strand and its position would change in the reflection. The most natural interpretation indeed is that the pair of space-time sheets associated with the photon consists of MEs with different transverse size scales: the larger ME could represent the laser beam. The noise would come from the lowest level in the hierarchy. One could argue that the natural size for the M4 projection of the wormhole throat is of order CP2 size R and therefore roughly 10^4 Planck lengths. If the cross section has an area of order R^2, where R is the CP2 size, the spectral density would be roughly a factor 100 larger than for Planck length, and this might predict too large a holographic noise in the GEO600 experiment if the value of f_res is correct. The assumption that the Gaussian characterizing the position distribution of the wormhole throat is very strongly concentrated near the center of an ME with transverse size given by R looks unnatural.
6. It is important to notice that a single reflection of the primordial ME corresponds to a minimum spectral noise. Repeated reflections of the ME in different directions gradually increase the transverse size of the ME, so that the outcome is a cylindrical ME with radius of order L = cT, where T is the duration of the ME. At this limit the spectral density of the noise would be T^(1/2), meaning that the uncertainty in the frequency assignable to the arrival time of photons would be of the same order as the oscillation frequency f = 1/T assignable to the original ME. The interpretation is that the repeated reflections gradually generate noise and destroy the coherence of the laser beam. This would however happen at the single particle level rather than for a member of a fictive ensemble. Quite literally, the photon would get old! This interpretation conforms with the fact that in the TGD framework thermodynamics becomes part of quantum theory and a thermodynamical ensemble is represented at the single particle level, in the sense that time-like entanglement coefficients between the positive and negative energy parts of a zero energy state define the M-matrix as a product of the square root of a diagonal density matrix and of the S-matrix.
7. The notion of number theoretic braid is essential for the interpretation of what happens in detection. In detection the positions of the ends of the number theoretic braid are measured, and this measurement fixes the exact time spent by the photons during the travel. A similar position measurement appears also in Hogan's argument. Thus the overall picture is more or less the same as in the popular representation, where also the grainy nature of space-time is emphasized.
8. I already mentioned the possible connection with the poorly understood 1/f noise appearing in very many systems. The natural interpretation would be in terms of MEs.
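A small Monte Carlo sketch of the Gaussian assumption in point 3 above: braid points in the ME cross section are drawn with variance d×L, with d taken as Planck length and L fixed by the beam duration. The duration is an arbitrary illustrative choice; the point is only that the implied timing spread reproduces the (t_P T)^(1/2) scaling.

import numpy as np

rng = np.random.default_rng(0)
l_P = 1.616e-35    # m, Planck length, assumed transverse ME size d
c = 3.0e8          # m/s
T = 1e-3           # s, illustrative beam duration
L = c * T          # m, beam length defined by the duration

sigma = np.sqrt(l_P * L)                # std dev from variance d*L
x = rng.normal(0.0, sigma, 100_000)     # transverse braid-point positions
print("rms transverse spread:", x.std(), "m")
print("implied timing spread:", x.std() / c, "s")  # ~ (t_P * T)^(1/2)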
3.3 The relationship with hierarchy of Planck constants
It is interesting to combine this picture with the vision about the hierarchy of Planck constants (I am just now developing in detail the representation of the ideas involved from the perspective given by the intense work during the last five years).
1. If one accepts that dark matter corresponds to a hierarchy of phases of matter labeled by a hierarchy of Planck constants with arbitrarily large values, one must conclude that Planck length l_P, proportional to hbar^(1/2), also has a spectrum. Primordial photons would have a transverse size scaling as hbar^(1/2). One can consider the possibility that for large values of hbar the transverse size saturates to the CP2 length R ≈ 10^4 × l_P. The spectral density of the noise would scale as hbar^(1/4), at least up to the critical value hbar_cr = R^2/G, which in units of hbar_0 is in the range [2.3683, 2.5262] × 10^7. The preferred values of hbar are number-theoretically simple integers expressible as a product of distinct Fermat primes and a power of 2. hbar_cr/hbar_0 = 3 × 2^23 is an integer of this kind and belongs to the allowed range of critical values (see the numeric check after this list).
2. The order of magnitude for the gravitational Planck constant assignable to the space-time sheets mediating gravitational interaction is gigantic - of order hbar_gr ≈ GM^2 - so that the noise assignable to gravitons would be gigantic in astrophysical scales unless R serves as the upper bound for the transverse size of both primordial gauge bosons and gravitons.
3. If ordinary photonic space-time sheets are in question, hbar has its standard value. For dark photons, which I have proposed to play a key role in living matter, the situation changes and Δl^2/l^2 would scale like hbar^(1/2), at least up to the critical value of Planck constant. Above this value of Planck constant the spectral density would be given by R, Δl^2/l^2 would scale like R/l, and Δθ like (R/l)^(1/2).
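The integer arithmetic in point 1 is easy to check directly; a few lines of Python verify that 3 × 2^23 (3 being the Fermat prime 2^(2^0) + 1) indeed falls inside the quoted range of critical values.

candidate = 3 * 2**23      # product of a Fermat prime and a power of 2
print(candidate)           # 25165824, i.e. about 2.5166e7
lo, hi = 2.3683e7, 2.5262e7
print(lo <= candidate <= hi)   # True: inside the allowed range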
For details and background see the updated chapter Quantum Astrophysics of "Physics in Many-Sheeted Space-time".
Thursday, January 01, 2015
Dreamers and Doers
Sean Carroll has a guest post by Chip Sebens on the Many-Interacting-Worlds Approach to Quantum Mechanics. Here's the first part of it.
"In Newtonian physics objects always have definite locations. They are never in two places at once. To determine how an object will move one simply needs to add up the various forces acting on it and from these calculate the object’s acceleration. This framework is generally taken to be inadequate for explaining the quantum behavior of subatomic particles like electrons and protons. We are told that quantum theory requires us to revise this classical picture of the world, but what picture of reality is supposed to take its place is unclear. There is little consensus on many foundational questions: Is quantum randomness fundamental or a result of our ignorance? Do electrons have well-defined properties before measurement? Is the Schrödinger equation always obeyed? Are there parallel universes?
"Some of us feel that the theory is understood well enough to be getting on with. Even though we might not know what electrons are up to when no one is looking, we know how to apply the theory to make predictions for the results of experiments. Much progress has been made―observe the wonder of the standard model―without answering these foundational questions. Perhaps one day with insight gained from new physics we can return to these basic questions. I will call those with such a mindset the doers. Richard Feynman was a doer:
“It will be difficult. But the difficulty really is psychological and exists in the perpetual torment that results from your saying to yourself, ‘But how can it be like that?’ which is a reflection of uncontrolled but utterly vain desire to see it in terms of something familiar. I will not describe it in terms of an analogy with something familiar; I will simply describe it. … I think I can safely say that nobody understands quantum mechanics. … Do not keep saying to yourself, if you can possibly avoid it, ‘But how can it be like that?’ because you will get ‘down the drain’, into a blind alley from which nobody has yet escaped. Nobody knows how it can be like that.”
-Feynman, The Character of Physical Law (chapter 6, pg. 129)
"In contrast to the doers, there are the dreamers. Dreamers, although they may often use the theory without worrying about its foundations, are unsatisfied with standard presentations of quantum mechanics. They want to know “how it can be like that” and have offered a variety of alternative ways of filling in the details. Doers denigrate the dreamers for being unproductive, getting lost “down the drain.” Dreamers criticize the doers for giving up on one of the central goals of physics, understanding nature, to focus exclusively on another, controlling it. But even by the lights of the doer’s primary mission―being able to make accurate predictions for a wide variety of experiments―there are reasons to dream:
“Suppose you have two theories, A and B, which look completely different psychologically, with different ideas in them and so on, but that all consequences that are computed from each are exactly the same, and both agree with experiment. … how are we going to decide which one is right? There is no way by science, because they both agree with experiment to the same extent. … However, for psychological reasons, in order to guess new theories, these two things may be very far from equivalent, because one gives a man different ideas from the other. By putting the theory in a certain kind of framework you get an idea of what to change. … Therefore psychologically we must keep all the theories in our heads, and every theoretical physicist who is any good knows six or seven different theoretical representations for exactly the same physics.”
-Feynman, The Character of Physical Law (chapter 7, pg. 168)
"In the spirit of finding alternative versions of quantum mechanics―whether they agree exactly or only approximately on experimental consequences―let me describe an exciting new option which has recently been proposed by Hall, Deckert, and Wiseman (in Physical Review X) and myself (forthcoming in Philosophy of Science), receiving media attention in: Nature, New Scientist, Cosmos, Huffington Post, Huffington Post Blog, FQXi podcast… Somewhat similar ideas have been put forward by Böstrom, Schiff and Poirier, and Tipler.
"The new approach seeks to take seriously quantum theory’s hydrodynamic formulation which was developed by Erwin Madelung in the 1920s. Although the proposal is distinct from the many-worlds interpretation, it also involves the postulation of parallel universes. The proposed multiverse picture is not the quantum mechanics of college textbooks, but just because the theory looks so “completely different psychologically” it might aid the development of new physics or new calculational techniques (even if this radical picture of reality ultimately turns out to be incorrect)."
Click here for the rest of it.
The essential mystery of quantum mechanics is that the theory is built around the dynamics of a thing called the wave function (hence wave mechanics), conventionally labelled ψ. The value of the wave function at each point in space and time is given by the solution to the Schrödinger equation (with appropriate boundary conditions): you imagine the ψ wave flowing around obstacles, through slits, and interfering with itself. The trouble is, the wave function is (apparently) not a 'real entity'. For one thing its values are complex, not real (all observables are real numbers); for another, in its multi-particle mode, the wave function lives in an arbitrarily high-dimensional space called configuration space, not our conventional 3 + 1 dimensional space-time.
The wave function, as mentioned, is not itself observable. But if you take the squared magnitude of the wave function (e.g. in a region of space at a point in time) you get the probability of observing the attribute-value of your interest (e.g. the probability of finding the particle in that region at that time).
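As a concrete illustration of this recipe, here is a minimal Python sketch for a particle in a one-dimensional box (the box width and quantum number are arbitrary illustrative choices): the squared magnitude of ψ integrates to one over the box, and integrating it over a sub-region gives the probability of finding the particle there.

import numpy as np

a, n = 1.0, 2      # box width and quantum number (illustrative)
x = np.linspace(0.0, a, 10_001)
psi = np.sqrt(2.0 / a) * np.sin(n * np.pi * x / a)   # energy eigenstate

prob_density = np.abs(psi) ** 2
print("normalization:", np.trapz(prob_density, x))   # ~ 1.0

# Probability of finding the particle in the left quarter of the box.
mask = x <= a / 4
print("P(0 <= x <= a/4):", np.trapz(prob_density[mask], x[mask]))  # 0.25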
The theory is incredibly accurate in giving you the correct probabilities; but it does not tell you what reality is actually doing. About that, quantum mechanics is not just silent - it informs you that your prior beliefs about the world consisting of well-defined particles with defined positions and momenta cannot be true (Bell's theorem).
The doers get on and calculate ... and design the modern technological world; the dreamers wonder whether there is a completely non-obvious way to reconstruct the world of appearances ('reality') such that (relativistic) quantum mechanics turns out to be true in that structure of reality.
To date, no one has ever quite succeeded. Maybe Chip Sebens is onto something; maybe the Everett many-worlds formulation of quantum mechanics (still a work-in-progress) can be made to work.
It is my birthday tomorrow (I've reached binary one million) and I expect a present which will shed further light on these perplexing issues. |